Compare commits

310 Commits

Author SHA1 Message Date
JingleManSweep
31d591723d Add acknowledgments section to README 2026-03-05 11:32:09 +00:00
JingleManSweep
3eff7f03db Merge pull request #130 from shiqual/main
Add Dutch localization file nl.json
2026-03-02 23:38:37 +00:00
JingleManSweep
905ea0190b Merge branch 'main' into main 2026-03-02 23:35:45 +00:00
JingleManSweep
86cc7edca3 Merge pull request #129 from ipnet-mesh/renovate/major-github-artifact-actions
Update actions/upload-artifact action to v7
2026-03-02 23:30:39 +00:00
shiqual
eb3f8508b7 Add Dutch localization file nl.json
Dutch translation
2026-03-02 00:13:46 +01:00
renovate[bot]
74a34fdcba Update actions/upload-artifact action to v7 2026-02-26 20:53:07 +00:00
JingleManSweep
175fc8c524 Merge pull request #127 from ipnet-mesh/chore/fix-metrics-labels
Add role label to node last seen metric and filter alerts by role
2026-02-19 00:05:08 +00:00
Louis King
2a153a5239 Add role label to node last seen metric and filter alerts by role
Joins NodeTag (key='role') to the node last seen Prometheus metric so
alert rules can target infrastructure nodes only (role="infra").

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-19 00:01:20 +00:00
JingleManSweep
de85e0cd7a Merge pull request #126 from ipnet-mesh/feat/prometheus
Add Prometheus metrics endpoint, Alertmanager, and 1h stats window
2026-02-18 23:09:22 +00:00
Louis King
5a20da3afa Add Prometheus metrics endpoint, Alertmanager, and 1h stats window
Add /metrics endpoint with Prometheus gauges for nodes, messages,
advertisements, telemetry, trace paths, events, and members. Include
per-node last_seen timestamps for alerting. Add Alertmanager service
to Docker Compose metrics profile with default blackhole receiver.
Add NodeNotSeen alert rule (48h threshold). Add 1h time window to
all windowed metrics alongside existing 24h/7d/30d windows.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-18 23:06:07 +00:00
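The per-node gauge with labels described in this commit can be sketched without any client library by emitting Prometheus text exposition format directly. This is an illustrative sketch only: the metric name, label names, and helper are hypothetical, and the project may well use `prometheus_client` instead.

```python
def render_gauge(name, help_text, samples):
    """Render one gauge in Prometheus text exposition format.

    samples: list of (labels_dict, value) pairs.
    """
    lines = [f"# HELP {name} {help_text}", f"# TYPE {name} gauge"]
    for labels, value in samples:
        if labels:
            label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
            lines.append(f"{name}{{{label_str}}} {value}")
        else:
            lines.append(f"{name} {value}")
    return "\n".join(lines)


# Example: a per-node last_seen gauge carrying a role label, so alert rules
# can target infrastructure nodes only (role="infra"), as the commits describe.
text = render_gauge(
    "meshcore_node_last_seen_timestamp",
    "Unix timestamp a node was last seen",
    [({"public_key": "ab12", "role": "infra"}, 1760000000)],
)
```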
JingleManSweep
dcd33711db Merge pull request #125 from ipnet-mesh/feat/auto-update-lists
Add configurable auto-refresh for list pages
2026-02-18 16:07:25 +00:00
Louis King
a8cb20fea5 Add configurable auto-refresh for list pages
Nodes, advertisements, and messages pages now auto-refresh on a
configurable interval (WEB_AUTO_REFRESH_SECONDS, default 30s). A
pause/play toggle in the page header lets users control it. Setting
the interval to 0 disables auto-refresh entirely.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-18 14:37:33 +00:00
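The interval semantics in this commit (`WEB_AUTO_REFRESH_SECONDS`, default 30, 0 disables) could be parsed server-side along these lines; the function name and the malformed-input fallback are assumptions, not the project's actual code.

```python
import os


def auto_refresh_seconds(env=os.environ):
    """Read WEB_AUTO_REFRESH_SECONDS; default 30 seconds, 0 disables.

    Returns the interval in seconds, or None when auto-refresh is off.
    """
    raw = env.get("WEB_AUTO_REFRESH_SECONDS", "30")
    try:
        seconds = int(raw)
    except ValueError:
        seconds = 30  # assumed fallback on malformed input
    return seconds if seconds > 0 else None
```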
JingleManSweep
3ac5667d7a Merge pull request #118 from ipnet-mesh/feat/node-list-tag-improvements
Fix clipboard copy error with null target
2026-02-14 01:49:09 +00:00
JingleManSweep
c8c53b25bd Merge branch 'main' into feat/node-list-tag-improvements 2026-02-14 01:46:45 +00:00
Louis King
e4a1b005dc Fix clipboard copy error with null target
Capture e.currentTarget synchronously before async operations
to prevent it from becoming null in async promise handlers.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-14 01:44:28 +00:00
JingleManSweep
27adc6e2de Merge pull request #117 from ipnet-mesh/feat/node-list-tag-improvements
Improve node list tag display with name, description, members, and emoji extraction
2026-02-14 01:37:11 +00:00
Louis King
835fb1c094 Respect FEATURE_MEMBERS flag in advertisements page
- Only fetch members data when feature is enabled
- Hide member filter when feature is disabled

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-14 01:32:18 +00:00
Louis King
d7a351a803 Respect FEATURE_MEMBERS flag in nodes list
- Only fetch members data when feature is enabled
- Hide member filter when feature is disabled
- Hide member column when feature is disabled
- Adjust table colspan dynamically based on feature

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-14 01:30:38 +00:00
JingleManSweep
317627833c Merge pull request #116 from ipnet-mesh/feat/node-list-tag-improvements
Improve node display with descriptions, members, and emoji extraction
2026-02-14 01:24:05 +00:00
Louis King
f4514d1150 Improve node display with descriptions, members, and emoji extraction
Enhances the web dashboard's node presentation to match official MeshCore
app behavior and provide better user experience:

- Extract emoji from node names (e.g., "🏠 Home Gateway" uses 🏠 icon)
- Display description tags under node names across all list pages
- Add Member column to show network member associations
- Add copyable public key columns on Nodes and Advertisements pages
- Create reusable renderNodeDisplay() component for consistency
- Improve node detail page layout with larger emoji and inline description
- Document standard node tags (name, description, member_id, etc.)
- Fix documentation: correct Python version requirement and tag examples

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-14 01:20:52 +00:00
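The emoji extraction mentioned above (e.g. "🏠 Home Gateway" uses the 🏠 icon) can be approximated with a codepoint-range check. This is a heuristic sketch with an assumed function name; real-world emoji matching needs a fuller table than the two blocks used here.

```python
def split_leading_emoji(name):
    """Split a leading emoji off a node name: "🏠 Home Gateway" -> ("🏠", "Home Gateway").

    Heuristic: treat the first character as an emoji when it falls in common
    pictograph/symbol blocks (an assumption, not an exhaustive emoji test).
    """
    if not name:
        return None, name
    cp = ord(name[0])
    is_emoji = (
        0x1F300 <= cp <= 0x1FAFF   # pictographs, emoticons, extended symbols
        or 0x2600 <= cp <= 0x27BF  # misc symbols and dingbats
    )
    if is_emoji:
        return name[0], name[1:].strip()
    return None, name
```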
JingleManSweep
7be5f6afdf Merge pull request #115 from ipnet-mesh/chore/http-caching
Add HTTP caching for web dashboard resources
2026-02-14 00:05:44 +00:00
Louis King
54695ab9e2 Add beautifulsoup4 to dev dependencies
Required for HTML parsing in web caching tests.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-14 00:03:28 +00:00
Louis King
189eb3a139 Add HTTP caching for web dashboard resources
Implement cache-control middleware to optimize browser caching and reduce
bandwidth usage. Static files are cached for 1 year when accessed with
version parameters, while dynamic content is never cached.

Changes:
- Add CacheControlMiddleware with path-based caching logic
- Register middleware in web app after ProxyHeadersMiddleware
- Add version query parameters to CSS, JS, and app.js references
- Create comprehensive test suite (20 tests) for all cache behaviors

Cache strategy:
- Static files with ?v=X.Y.Z: 1 year (immutable)
- Static files without version: 1 hour (fallback)
- SPA shell HTML: no-cache (dynamic config)
- Health endpoints: no-cache, no-store (always fresh)
- Map data: 5 minutes (location updates)
- Custom pages: 1 hour (stable markdown)
- API proxy: pass-through (backend controls)

All 458 tests passing, 95% middleware coverage.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-14 00:01:08 +00:00
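The cache strategy listed in this commit maps naturally onto a small path-matching function. The sketch below uses illustrative paths and header values derived from the commit message; the actual `CacheControlMiddleware` rules may differ in detail.

```python
def cache_control_for(path, query=""):
    """Pick a Cache-Control header per the strategy above (illustrative only).

    Returns None where the backend controls caching (API proxy pass-through).
    """
    if path.startswith("/health"):
        return "no-cache, no-store"                      # always fresh
    if path.startswith("/static/"):
        if "v=" in query:
            return "public, max-age=31536000, immutable"  # 1 year, versioned
        return "public, max-age=3600"                     # 1 hour fallback
    if path.startswith("/map/data"):
        return "public, max-age=300"                      # 5 min: location updates
    if path.startswith("/pages/"):
        return "public, max-age=3600"                     # stable markdown
    if path.startswith("/api/"):
        return None                                       # backend controls
    return "no-cache"                                     # SPA shell HTML
```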
JingleManSweep
96ca6190db Merge pull request #113 from ipnet-mesh/claude/add-i18n-support-1duUx
Fix translation key in node detail page: nodes.tags → entities.tags
2026-02-13 23:10:35 +00:00
Louis King
baf08a9545 Shorten translation call-to-action with GitHub alert
Replaced verbose translation section with concise GitHub alert notification.

- Uses [!IMPORTANT] alert style for better visibility
- Reduced from 16 lines to 4 lines
- Keeps essential information and link to Translation Guide
- More scannable and professional appearance

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-13 23:07:53 +00:00
JingleManSweep
1d3e649ce0 Merge branch 'main' into claude/add-i18n-support-1duUx 2026-02-13 23:03:43 +00:00
Louis King
45abc66816 Remove Claude Code review GitHub action
Removed the code-review.yml workflow that automatically runs Claude Code review on pull requests.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-13 23:03:22 +00:00
Louis King
9c8eb27455 Fix translation key in node detail page: nodes.tags → entities.tags
The Tags panel title was showing 'nodes.tags' as literal text instead of the translation.

Fixed: node-detail.js line 174 now uses entities.tags

Comprehensive review completed:
- Verified all 115 unique translation keys across all pages
- All keys properly resolve to valid translations in en.json
- All i18n tests passing

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-13 23:01:49 +00:00
JingleManSweep
e6c6d4aecc Merge pull request #112 from ipnet-mesh/claude/add-i18n-support-1duUx
Add i18n support for web dashboard
2026-02-13 22:38:49 +00:00
Louis King
19bb06953e Fix remaining translation key: common.all_nodes
Replaced non-existent common.all_nodes key with common.all_entity pattern.

- advertisements.js: Use common.all_entity with entities.nodes
- map.js: Use common.all_entity with entities.nodes

All translation keys now properly resolve across the entire dashboard.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-13 22:36:17 +00:00
Louis King
1f55d912ea Fix translation key references across all pages
Fixes critical issue where translation keys were displaying as literal text instead of translations.

Changes:
- home.js: Fix stat headers (home.* → entities.*)
- dashboard.js: Fix stat headers, chart labels, table columns
- nodes.js: Fix table columns and filter labels (common.* → entities.*)
- advertisements.js: Fix filter widgets and table headers
- messages.js: Fix table column header
- map.js: Fix filter label and dropdown
- admin/node-tags.js: Fix node label reference

All translation keys now correctly reference entities.* section.
Used common.all_entity pattern instead of non-existent common.all_members.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-13 22:31:59 +00:00
Louis King
5272a72647 Refactor i18n, add translation guide, and audit documentation
## i18n Refactoring

- Refactor admin translations to use common composable patterns
- Add common patterns: delete_entity_confirm, entity_added_success, move_entity_to_another_node, etc.
- Remove 18 duplicate keys from admin_members and admin_node_tags sections
- Update all admin JavaScript files to use new common patterns with dynamic entity composition
- Fix label consistency: rename first_seen to first_seen_label to match naming convention

## Translation Documentation

- Create comprehensive translation reference guide (languages.md) with 200+ documented keys
- Add translation architecture documentation to AGENTS.md with examples and best practices
- Add "Help Translate" call-to-action section in README with link to translation guide
- Add i18n feature to README features list

## Documentation Audit

- Add undocumented config options: API_KEY, WEB_LOCALE, WEB_DOMAIN to README and .env.example
- Fix outdated CLI syntax: interface --mode receiver → interface receiver
- Update database migration commands to use CLI wrapper (meshcore-hub db) instead of direct alembic
- Add static/locales/ directory to project structure section
- Add i18n configuration (WEB_LOCALE, WEB_THEME) to docker-compose.yml

## Testing

- All 438 tests passing
- All pre-commit checks passing (black, flake8, mypy)
- Added tests for new common translation patterns

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-13 22:19:37 +00:00
Louis King
b2f8e18f13 Fix admin translations to use entity references
- Update admin index page to use entities.members and entities.tags
- Rename admin.node_tags_description to admin.tags_description
- Remove redundant admin.*_title keys in favor of entities

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-13 21:33:35 +00:00
Louis King
a15e91c754 Further refine i18n structure
- Remove "nav" section, use "entities" references instead
- Remove composite strings like "Total Nodes", "Recent Advertisements"
  - Use composed patterns: t('common.total_entity', { entity: t('entities.nodes') })
  - Use common.recent_entity, common.edit_entity, common.add_entity patterns
- Hardcode MeshCore tagline (official trademark, not configurable)
- Update all page components and templates to use entity-based translations
- Update tests to reflect new structure
- Remove redundant page-specific composite keys

This maximizes reusability and reduces duplication across translations.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-13 21:32:36 +00:00
Louis King
85129e528e Refactor i18n translations for better maintainability
- Remove page_title section, compose titles dynamically as "{{entity}} - {{network_name}}"
- Add entities section for centralized entity names (nodes, members, tags, etc.)
- Replace specific action translations with composed patterns (add_entity, edit_entity, etc.)
- Create links section for common platform names (github, discord, youtube)
- Remove redundant page-specific title fields, use entity names instead
- Update all page components to use new translation structure
- Keep user-defined strings (network_name) separate from translatable content

This follows i18n best practices by using composition over duplication,
centralizing reusable terms, and making it easier to add new languages.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-13 21:19:02 +00:00
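The composition-over-duplication pattern these i18n commits describe — `t('common.total_entity', { entity: t('entities.nodes') })` — can be sketched in a few lines. The catalogue entries here are an excerpt-style assumption, not the project's full `en.json`.

```python
TRANSLATIONS = {  # illustrative excerpt of a shared JSON translation file
    "entities.nodes": "Nodes",
    "common.total_entity": "Total {{entity}}",
    "common.recent_entity": "Recent {{entity}}",
}


def t(key, params=None):
    """Resolve a key and substitute {{placeholder}} params, composing entity
    names into reusable patterns instead of duplicating composite strings."""
    text = TRANSLATIONS.get(key, key)  # fall back to the key itself
    for name, value in (params or {}).items():
        text = text.replace("{{" + name + "}}", value)
    return text


# "Total Nodes" is composed rather than stored as its own key.
total_nodes = t("common.total_entity", {"entity": t("entities.nodes")})
```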
Claude
127cd7adf6 Add i18n support for web dashboard
Implement lightweight i18n infrastructure with shared JSON translation
files used by both server-side Jinja2 templates and client-side SPA.

- Add custom i18n module (Python + JS, ~80 lines total, zero deps)
- Create en.json with ~200 translation keys covering all web strings
- Add WEB_LOCALE config setting (default: 'en', with localStorage override)
- Translate all navigation labels, page titles, and footer in spa.html
- Translate all 13 SPA page modules (home, dashboard, nodes, etc.)
- Translate shared components (pagination, relative time, charts)
- Translate all 3 admin pages (index, members, node-tags)
- Fix Adverts/Advertisements inconsistency (standardize to Advertisements)
- Add i18n unit tests with 100% coverage

https://claude.ai/code/session_01FbnUnwYAwPrsQmAh5EuSkF
2026-02-13 18:49:06 +00:00
JingleManSweep
91b3f1926f Merge pull request #110 from ipnet-mesh/chore/testing-claude-github-actions
Testing Claude GitHub Actions integrations
2026-02-11 12:53:17 +00:00
Louis King
3ef94a21df Testing Claude GitHub Actions integrations 2026-02-11 12:49:17 +00:00
JingleManSweep
19e724fcc8 Merge pull request #109 from ipnet-mesh/chore/test-claude
Updates
2026-02-11 12:40:56 +00:00
Louis King
7b7910b42e Updates 2026-02-11 12:35:45 +00:00
JingleManSweep
c711a0eb9b Merge pull request #108 from ipnet-mesh/renovate/actions-checkout-6.x
Update actions/checkout action to v6
2026-02-11 12:25:52 +00:00
renovate[bot]
dcd7ed248d Update actions/checkout action to v6 2026-02-11 12:24:09 +00:00
JingleManSweep
b0ea6bcc0e Merge pull request #107 from ipnet-mesh/add-claude-github-actions-1770812503821
Add Claude Code GitHub Workflow
2026-02-11 12:23:36 +00:00
JingleManSweep
7ef41a3671 "Claude Code Review workflow" 2026-02-11 12:21:46 +00:00
JingleManSweep
a7611dd8d4 "Claude PR Assistant workflow" 2026-02-11 12:21:44 +00:00
JingleManSweep
8f907edce6 Merge pull request #106 from ipnet-mesh/chore/screenshot
Updated Screenshot
2026-02-11 12:09:05 +00:00
JingleManSweep
95d1b260ab Merge pull request #105 from ipnet-mesh/renovate/docker-build-push-action-6.x
Update docker/build-push-action action to v6
2026-02-11 12:08:35 +00:00
renovate[bot]
fba2656268 Update docker/build-push-action action to v6 2026-02-11 12:08:24 +00:00
JingleManSweep
69adca09e3 Merge pull request #102 from ipnet-mesh/renovate/major-github-artifact-actions
Update actions/upload-artifact action to v6
2026-02-11 12:06:28 +00:00
JingleManSweep
9c2a0527ff Merge pull request #101 from ipnet-mesh/renovate/actions-setup-python-6.x
Update actions/setup-python action to v6
2026-02-11 12:04:56 +00:00
JingleManSweep
c0db5b1da5 Merge pull request #103 from ipnet-mesh/renovate/codecov-codecov-action-5.x
Update codecov/codecov-action action to v5
2026-02-11 12:04:31 +00:00
Louis King
77dcbb77ba Push 2026-02-11 12:02:40 +00:00
renovate[bot]
5bf0265fd9 Update codecov/codecov-action action to v5 2026-02-11 12:01:49 +00:00
renovate[bot]
1adef40fdc Update actions/upload-artifact action to v6 2026-02-11 12:01:21 +00:00
renovate[bot]
c9beb7e801 Update actions/setup-python action to v6 2026-02-11 12:01:18 +00:00
JingleManSweep
cd14c23cf2 Merge pull request #104 from ipnet-mesh/chore/ci-fixes
CI Fixes
2026-02-11 11:54:10 +00:00
Louis King
708bfd1811 CI Fixes 2026-02-11 11:53:21 +00:00
JingleManSweep
afdc76e546 Merge pull request #97 from ipnet-mesh/renovate/python-3.x
Update python Docker tag to v3.14
2026-02-11 11:34:18 +00:00
renovate[bot]
e07b9ee2ab Update python Docker tag to v3.14 2026-02-11 11:33:31 +00:00
JingleManSweep
00851bfcaa Merge pull request #100 from ipnet-mesh/chore/fix-ci
Push
2026-02-11 11:30:44 +00:00
Louis King
6a035e41c0 Push 2026-02-11 11:30:25 +00:00
JingleManSweep
2ffc78fda2 Merge pull request #98 from ipnet-mesh/renovate/actions-checkout-6.x
Update actions/checkout action to v6
2026-02-11 11:26:25 +00:00
renovate[bot]
3f341a4031 Update actions/checkout action to v6 2026-02-11 11:24:17 +00:00
JingleManSweep
1ea729bd51 Merge pull request #96 from ipnet-mesh/renovate/configure
Configure Renovate
2026-02-11 11:23:03 +00:00
renovate[bot]
d329f67ba8 Add renovate.json 2026-02-11 11:22:03 +00:00
JingleManSweep
c42a2deffb Merge pull request #95 from ipnet-mesh/chore/add-sponsorship-badge
Add README badges and workflow path filters
2026-02-11 00:40:57 +00:00
Louis King
dfa4157c9c Fixed funding 2026-02-11 00:36:13 +00:00
Louis King
b52fd32106 Add path filters to CI and Docker workflows
Skip unnecessary workflow runs when only non-code files change (README,
docs, etc). Docker workflow always runs on version tags.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-11 00:32:46 +00:00
Louis King
4bbf43a078 Add CI, Docker, and sponsorship badges to README 2026-02-11 00:29:06 +00:00
JingleManSweep
deae9c67fe Add Buy Me a Coffee funding option
Added Buy Me a Coffee funding option.
2026-02-11 00:25:26 +00:00
JingleManSweep
ceee27a3af Merge pull request #94 from ipnet-mesh/chore/docs-update
Update docs and add Claude Code skills
2026-02-11 00:24:24 +00:00
Louis King
f478096bc2 Add Claude Code skills for git branching, PRs, and releases 2026-02-11 00:01:51 +00:00
Louis King
8ae94a7763 Add Claude Code skills for documentation and quality checks 2026-02-10 23:49:58 +00:00
Louis King
fb6cc6f5a9 Update docs to reflect recent features and config options
- Add contact cleanup, admin UI, content home, and webhook secret
  settings to .env.example and README
- Update AGENTS.md project structure with pages.py, example content
  dirs, and corrected receiver init steps
- Document new API endpoints (prefix lookup, members, dashboard
  activity, send-advertisement) in README
- Fix Docker Compose core profile to include db-migrate service
2026-02-10 23:49:31 +00:00
JingleManSweep
a98b295618 Merge pull request #93 from ipnet-mesh/feat/theme-improvements
Add radial glow and solid tint backgrounds to panels and filter bars
2026-02-10 20:26:50 +00:00
Louis King
da512c0d9f Add radial glow and solid tint backgrounds to panels and filter bars
- Add panel-glow CSS class with radial gradient using section colors
- Add panel-solid CSS class for neutral solid-tinted filter bars
- Apply colored glow to stat cards on home and dashboard pages
- Apply neutral grey glow to dashboard chart and data panels
- Apply neutral solid background to filter panels on list pages
- Add shadow-xl drop shadows to dashboard panels and home hero
- Limit dashboard recent adverts to 5 rows

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-10 20:23:19 +00:00
JingleManSweep
652486aa15 Merge pull request #92 from ipnet-mesh/fix/network-name-colours
Fix hero title to use black/white per theme
2026-02-10 18:24:16 +00:00
Louis King
947c12bfe1 Fix hero title to use black/white per theme 2026-02-10 18:23:46 +00:00
JingleManSweep
e80cd3a83c Merge pull request #91 from ipnet-mesh/feature/light-mode
Add light mode theme with dark/light toggle
2026-02-10 18:16:07 +00:00
Louis King
70ecb5e4da Add light mode theme with dark/light toggle
- Add sun/moon toggle in navbar (top-right) using DaisyUI swap component
- Store user theme preference in localStorage, default to server config
- Add WEB_THEME env var to configure default theme (dark/light)
- Add light mode color palette with adjusted section colors for contrast
- Use CSS filter to invert white SVG logos in light mode
- Add section-colored hover/active backgrounds for navbar items
- Style hero buttons with thicker outlines and white text on hover
- Soften hero heading color in light mode
- Change member callsign badges from green to neutral
- Update AGENTS.md, .env.example with WEB_THEME documentation

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-10 18:11:11 +00:00
JingleManSweep
565e0ffc7b Merge pull request #90 from ipnet-mesh/feat/feature-flags
Add feature flags to control web dashboard page visibility
2026-02-10 16:52:31 +00:00
Louis King
bdc3b867ea Fix missing receiver tooltips on advertisements and messages pages
The multi-receiver table view used data-* attributes that were never
read instead of native title attributes. Replace with title= so the
browser shows the receiver node name on hover.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-10 16:23:40 +00:00
Louis King
48786a18f9 Fix missing profile and tx_power in radio config JSON
The radio_config_dict passed to the frontend was missing the profile
and tx_power fields, causing the Network Info panel to omit them.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-10 15:56:45 +00:00
Louis King
706c32ae01 Add feature flags to control web dashboard page visibility
Operators can now disable specific pages (Dashboard, Nodes, Advertisements,
Messages, Map, Members, Pages) via FEATURE_* environment variables. Disabled
features are fully hidden: removed from navigation, return 404 on routes,
and excluded from sitemap/robots.txt. Dashboard auto-disables when all of
Nodes/Advertisements/Messages are off. Map auto-disables when Nodes is off.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-10 15:43:23 +00:00
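The auto-disable rules stated above (Dashboard off when Nodes, Advertisements, and Messages are all off; Map off when Nodes is off) can be expressed as a small resolver. Flag names and the function are illustrative stand-ins for the FEATURE_* settings.

```python
def resolve_features(flags):
    """Apply the commit's auto-disable rules to a feature-flag mapping.

    flags: dict of feature name -> bool (illustrative keys, not the exact
    FEATURE_* names). Returns a new dict with dependent features disabled.
    """
    resolved = dict(flags)
    data_pages = (
        resolved.get("nodes")
        or resolved.get("advertisements")
        or resolved.get("messages")
    )
    if not data_pages:
        resolved["dashboard"] = False  # nothing left to chart
    if not resolved.get("nodes"):
        resolved["map"] = False        # map plots nodes
    return resolved
```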
JingleManSweep
bafc16d746 Merge pull request #89 from ipnet-mesh/claude/fix-admin-auth-bypass-atTWJ
Enforce authentication for admin API proxy mutations
2026-02-10 08:51:40 +00:00
Claude
9b09e32d41 Fix admin authentication bypass in web dashboard
The admin pages only checked config.admin_enabled but not
config.is_authenticated, allowing unauthenticated users to access
admin functionality when WEB_ADMIN_ENABLED=true. Additionally, the
API proxy forwarded the service-level Bearer token on all requests
regardless of user authentication, granting full admin API access
to unauthenticated browsers.

Server-side: block POST/PUT/DELETE/PATCH through the API proxy when
admin is enabled and no X-Forwarded-User header is present.

Client-side: add is_authenticated check to all three admin pages,
showing a sign-in prompt instead of admin content.

https://claude.ai/code/session_01HYuz5XLjYZ6JaowWqz643A
2026-02-10 01:20:04 +00:00
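The server-side rule in this fix — block mutating methods through the API proxy when admin is enabled and no `X-Forwarded-User` header is present — reduces to a small predicate. The function name is hypothetical; only the decision logic comes from the commit message.

```python
MUTATING_METHODS = {"POST", "PUT", "DELETE", "PATCH"}


def proxy_allows(method, admin_enabled, forwarded_user):
    """Return True when the API proxy should forward the request.

    Mutations are blocked when admin is enabled but the reverse proxy has not
    asserted an authenticated user via X-Forwarded-User.
    """
    if method.upper() in MUTATING_METHODS and admin_enabled and not forwarded_user:
        return False
    return True
```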
JingleManSweep
2b9f83e55e Merge pull request #88 from ipnet-mesh/feat/spa
Initial SPA (Single Page App) Conversion
2026-02-10 00:43:53 +00:00
Louis King
75c1966385 Fix Map nav icon color to exact DaisyUI warning yellow
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-10 00:39:06 +00:00
Louis King
3089ff46a8 Clean up legacy templates, fix nav colors and QR code timing
Remove all old Jinja2 templates (only spa.html is used now). Fix Map
nav icon color to yellow (matching btn-warning) and Members to orange.
Fix QR code intermittently not rendering on node detail pages with GPS
coords by deferring init to requestAnimationFrame.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-10 00:36:24 +00:00
Louis King
f1bceb5780 Rewrite web dashboard as Single Page Application
Replace server-side rendered Jinja2 page routes with a client-side SPA
using ES modules, lit-html templating, and a custom History API router.
All page rendering now happens in the browser with efficient DOM diffing.

Key changes:
- Add SPA router, API client, shared components, and 14 page modules
- Serve single spa.html shell template with catch-all route
- Remove server-side page routes (web/routes/) and legacy JS files
- Add centralized OKLCH color palette in CSS custom properties
- Add colored nav icons, navbar spacing, and loading spinner
- Add canonical URL and SEO path exclusions to SPA router
- Update charts.js to read from shared color palette
- Update tests for SPA architecture (template-agnostic assertions)
- Update AGENTS.md and README.md with SPA documentation

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-10 00:23:45 +00:00
JingleManSweep
caf88bdba1 Merge pull request #87 from ipnet-mesh/feat/timezones
Move timezone display to page headers instead of each timestamp
2026-02-09 00:52:28 +00:00
Louis King
9eb1acfc02 Add TZ variable to .env.example
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-09 00:47:58 +00:00
Louis King
62e0568646 Use timezone abbreviation (GMT, EST) instead of full name in headers
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-09 00:46:39 +00:00
Louis King
b4da93e4f0 Move timezone display to page headers instead of each timestamp
- Remove timezone abbreviation from datetime format strings
- Add timezone label to page headers (Nodes, Messages, Advertisements, Map)
- Only show timezone when not UTC to reduce clutter

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-09 00:43:51 +00:00
JingleManSweep
981402f7aa Merge pull request #86 from ipnet-mesh/feat/timezones
Add timezone support for web dashboard date/time display
2026-02-09 00:38:43 +00:00
Louis King
76717179c2 Add timezone support for web dashboard date/time display
- Add TZ environment variable support (standard Linux timezone)
- Create Jinja2 filters for timezone-aware formatting (localtime, localdate, etc.)
- Update all templates to use timezone filters with abbreviation suffix
- Pass TZ through docker-compose for web service
- Document TZ setting in README and AGENTS.md

Timestamps remain stored as UTC; only display is converted.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-09 00:34:57 +00:00
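The display conversion these timezone commits describe (store UTC, convert only for display, suffix the zone abbreviation) might look like this as a filter. The function name and format string are assumptions; the project's actual Jinja2 filters may format differently.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # stdlib since Python 3.9; needs system tzdata


def localtime(dt_utc, tz_name="UTC"):
    """Convert a stored-UTC timestamp for display in the TZ zone, appending
    the zone abbreviation (GMT, EST, ...) as the later commit standardises."""
    local = dt_utc.astimezone(ZoneInfo(tz_name))
    return local.strftime("%Y-%m-%d %H:%M:%S ") + local.tzname()


stamp = datetime(2026, 1, 15, 12, 0, tzinfo=timezone.utc)
```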
JingleManSweep
f42987347e Merge pull request #85 from ipnet-mesh/chore/tidy-map
Use colored dots for map markers instead of logo
2026-02-08 23:53:48 +00:00
Louis King
25831f14e6 Use colored dots for map markers instead of logo
Replace logo icons with colored circle markers:
- Red dots for infrastructure nodes
- Blue dots for public nodes

Update popup overlay to show type emoji (📡, 💬, etc.) on the left
and infra/public indicator dot on the right of the node name.
Update legend to match new marker style.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-08 23:50:44 +00:00
Louis King
0e6cbc8094 Optimised GitHub CI workflow triggers 2026-02-08 23:40:35 +00:00
JingleManSweep
76630f0bb0 Merge pull request #84 from ipnet-mesh/chore/youtube-link
Add NETWORK_CONTACT_YOUTUBE config for footer link
2026-02-08 23:39:35 +00:00
Louis King
8fbac2cbd6 Add NETWORK_CONTACT_YOUTUBE config for footer link
Add YouTube channel URL configuration option alongside existing
GitHub/Discord/Email contact links. Also crop logo SVG to content
bounds and pass YouTube env var through docker-compose.
2026-02-08 23:36:40 +00:00
Louis King
fcac5e01dc Use network welcome text for SEO meta description
Meta description now uses NETWORK_WELCOME_TEXT prefixed with network
name for better SEO, falling back to generic message if not set.
2026-02-08 23:21:17 +00:00
Louis King
b6f3b2d864 Redesign node detail page with hero map header
- Add hero panel with non-interactive map background when GPS coords exist
- Fix coordinate detection: check node model fields before falling back to tags
- Move node name to standard page header above hero panel
- QR code displayed in hero panel (right side, 140px)
- Map pans to show node at 1/3 horizontal position (avoiding QR overlap)
- Replace Telemetry section with Tags card in grid layout
- Consolidate First Seen, Last Seen, Location into single row
- Add configurable offset support to map-node.js (offsetX, offsetY)
- Add configurable size support to qrcode-init.js
2026-02-08 23:16:13 +00:00
JingleManSweep
7de6520ae7 Merge pull request #83 from ipnet-mesh/feat/js-filter-submit
Add auto-submit for filter controls on list pages
2026-02-08 22:11:10 +00:00
Louis King
5b8b2eda10 Fix mixed content blocking for static assets behind reverse proxy
Add ProxyHeadersMiddleware to trust X-Forwarded-Proto headers from
reverse proxies. This ensures url_for() generates HTTPS URLs when
the app is accessed via HTTPS through nginx or similar proxies.

Without this, static assets (CSS, JS) were blocked by browsers as
mixed content when the site was served over HTTPS.
2026-02-08 22:08:04 +00:00
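The scheme-detection part of this fix — trusting `X-Forwarded-Proto` from the reverse proxy so generated URLs use HTTPS — can be sketched as below. This models what a proxy-headers middleware does to the request scope; the helper itself is illustrative.

```python
def effective_scheme(headers, default="http"):
    """Derive the request scheme behind a reverse proxy.

    Trust the first value of X-Forwarded-Proto when present and valid;
    otherwise keep the direct-connection scheme.
    """
    proto = headers.get("x-forwarded-proto", "").split(",")[0].strip().lower()
    return proto if proto in ("http", "https") else default
```

With this in effect, URL generation sees `https` and static asset links no longer trip the browser's mixed-content blocking.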
Louis King
042a1b04fa Add auto-submit for filter controls on list pages
Filter forms now auto-submit when select dropdowns change or when
Enter is pressed in text inputs. Uses a data-auto-submit attribute
pattern for consistency with existing data attribute conventions.
2026-02-08 21:53:35 +00:00
JingleManSweep
5832cbf53a Merge pull request #82 from ipnet-mesh/chore/tidy-html-output
Refactored inline styles/SVG/scripts, improved SEO
2026-02-08 21:45:30 +00:00
Louis King
c540e15432 Improve HTML output and SEO title tags
- Add Jinja2 whitespace control (trim_blocks, lstrip_blocks) to
  eliminate excessive newlines in rendered HTML output
- Reverse title tag order to "Page - Brand" for better SEO (specific
  content first, brand name second to avoid truncation)
- Add dynamic titles for node detail pages using node name
- Standardize UI text: Dashboard, Advertisements, Map, Members
- Remove refresh button from dashboard page

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-08 21:40:19 +00:00
Louis King
6b1b277c6c Refactor HTML output: extract inline CSS, JS, and SVGs
Extract inline styles, JavaScript, and SVG icons from templates into
reusable external resources for improved maintainability and caching.

New static files:
- static/css/app.css: Custom CSS (scrollbar, prose, animations, Leaflet)
- static/js/charts.js: Chart.js helpers with shared colors/options
- static/js/map-main.js: Full map page functionality
- static/js/map-node.js: Node detail page map
- static/js/qrcode-init.js: QR code generation

New icon macros in macros/icons.html:
- icon_info, icon_alert, icon_chart, icon_refresh, icon_menu
- icon_github, icon_globe, icon_error, icon_channel
- icon_success, icon_lock, icon_user, icon_email, icon_tag, icon_users

Updated templates to use external resources and icon macros:
- base.html, home.html, dashboard.html, map.html, node_detail.html
- nodes.html, messages.html, advertisements.html, members.html
- errors/404.html, admin/*.html

Net reduction: ~700 lines of inline code removed from templates.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-08 21:23:06 +00:00
Louis King
470c374f11 Remove redundant Show Chat Nodes checkbox from map
The Node Type dropdown already provides chat node filtering,
making the separate checkbox unnecessary.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-07 21:06:25 +00:00
Louis King
71859b2168 Adjust map zoom levels for mobile devices
- Mobile portrait (< 480px): padding [50, 50] for wider view
- Mobile landscape (< 768px): padding [75, 75]
- Desktop: padding [100, 100]

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-07 21:01:29 +00:00
Louis King
3d7ed53df3 Improve map UI and add QR code to node detail page
Map improvements:
- Change non-infra nodes from emojis to subtle blue circles
- Add "Show Chat Nodes" checkbox (hidden by default)
- Fix z-index for hovered marker labels
- Increase zoom on mobile devices
- Simplify legend to show Infrastructure and Node icons

Node detail page:
- Add QR code for meshcore:// contact protocol
- Move activity (first/last seen) to title row
- QR code positioned under public key with white background
- Protocol: meshcore://contact/add?name=<name>&public_key=<key>&type=<n>
- Type mapping: chat=1, repeater=2, room=3, sensor=4

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-07 20:51:25 +00:00
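The QR payload format and type mapping given in this commit (`meshcore://contact/add?name=<name>&public_key=<key>&type=<n>`; chat=1, repeater=2, room=3, sensor=4) can be built with a one-liner; the helper name is an assumption.

```python
from urllib.parse import urlencode

# Type mapping taken from the commit message.
NODE_TYPE = {"chat": 1, "repeater": 2, "room": 3, "sensor": 4}


def contact_url(name, public_key, node_type):
    """Build the meshcore:// contact URL encoded into the node detail QR code."""
    query = urlencode({
        "name": name,
        "public_key": public_key,
        "type": NODE_TYPE[node_type],
    })
    return "meshcore://contact/add?" + query
```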
Louis King
ceaef9178a Fixed map z-order 2026-02-07 20:13:24 +00:00
JingleManSweep
5ccb077188 Merge pull request #81 from ipnet-mesh/feat/public-node-map
Enhance map page with GPS fallback, infrastructure filter, and UI improvements
2026-02-07 20:09:13 +00:00
Louis King
8f660d6b94 Enhance map page with GPS fallback, infrastructure filter, and UI improvements
- Add GPS coordinate fallback: use tag coords, fall back to model coords
- Filter out nodes at (0, 0) coordinates (likely unset defaults)
- Add "Show" filter to toggle between All Nodes and Infrastructure Only
- Add "Show Labels" checkbox (labels hidden by default, appear on hover)
- Infrastructure nodes display network logo instead of emoji
- Add radius-based bounds filtering (20km) to prevent outlier zoom issues
- Position labels underneath pins, centered with transparent background
- Calculate and return infra_center for infrastructure node focus
- Initial map view focuses on infrastructure nodes when available
- Update popup button to outline style
- Add comprehensive tests for new functionality

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-07 20:05:56 +00:00
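The radius-based bounds filtering and (0, 0) exclusion above could look roughly like this sketch (the real implementation is client-side; the 20 km radius and field names here are taken from or assumed per the commit message):

```python
import math

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def bounds_nodes(nodes: list[dict], center: tuple[float, float],
                 radius_km: float = 20.0) -> list[dict]:
    """Drop (0, 0) placeholder coordinates and outliers beyond radius_km,
    so a single distant node cannot wreck the initial zoom."""
    return [
        n for n in nodes
        if (n["lat"], n["lon"]) != (0.0, 0.0)
        and haversine_km(n["lat"], n["lon"], center[0], center[1]) <= radius_km
    ]
```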
Louis King
6e40be6487 Updated home page buttons 2026-02-07 14:51:48 +00:00
Louis King
d79e29bc0a Updates 2026-02-07 14:47:12 +00:00
Louis King
2758cf4dd5 Fixed mobile menu 2026-02-07 14:40:17 +00:00
Louis King
f37e993ede Updates 2026-02-07 14:32:44 +00:00
Louis King
b18b3c9aa4 Refactor PAGES_HOME to CONTENT_HOME and add custom logo support
- Replace PAGES_HOME with CONTENT_HOME configuration (default: ./content)
- Content directory now contains pages/ and media/ subdirectories
- Add support for custom logo at $CONTENT_HOME/media/images/logo.svg
- Custom logo replaces favicon and navbar/home logos when present
- Mount media directory as /media for serving custom assets
- Simplify default logo to generic WiFi-style radiating arcs
- Update documentation and example directory structure
- Update tests for new CONTENT_HOME structure

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-07 13:45:42 +00:00
Louis King
9d99262401 Updates 2026-02-06 23:48:43 +00:00
Louis King
adfe5bc503 Updates 2026-02-06 23:38:08 +00:00
Louis King
deaab9b9de Rename /network to /dashboard and add reusable icon macros
- Renamed network route, template, and tests to dashboard
- Added logo.svg for favicon and navbar branding
- Created reusable Jinja2 icon macros for navigation and UI elements
- Updated home page hero layout with centered content and larger logo
- Added Map button alongside Dashboard button in hero section
- Navigation menu items now display icons before labels

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-06 22:53:36 +00:00
Louis King
95636ef580 Removed Claude Code workflow 2026-02-06 19:19:10 +00:00
JingleManSweep
5831592f88 Merge pull request #79 from ipnet-mesh/feat/custom-pages
Feat/custom pages
2026-02-06 19:14:53 +00:00
Louis King
bc7bff8b82 Updates 2026-02-06 19:14:19 +00:00
Louis King
9445d2150c Fix links and update join guide
- Fix T114 manufacturer (Heltec, not LilyGO) and link
- Fix T1000-E product link
- Fix Google Play and App Store links
- Add Amazon to where to buy options
- Add radio configuration step

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-06 19:10:00 +00:00
Louis King
3e9f478a65 Replace example about page with join guide
Add getting started guide covering:
- Node types (Companion, Repeater, Room Server)
- Frequency regulations (868MHz EU/UK, 915MHz US/AU)
- Recommended hardware (Heltec V3, T114, T1000-E, T-Deck Plus)
- Mobile apps (Android/iOS)
- Links to MeshCore docs and web flasher

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-06 19:04:56 +00:00
JingleManSweep
6656bd8214 Merge pull request #78 from ipnet-mesh/feat/custom-pages
Add custom markdown pages feature to web dashboard
2026-02-06 18:40:42 +00:00
Louis King
0f50bf4a41 Add custom markdown pages feature to web dashboard
Allows adding static content pages (About, FAQ, etc.) as markdown files
with YAML frontmatter. Pages are stored in PAGES_HOME directory (default:
./pages), automatically appear in navigation menu, and are included in
the sitemap.

- Add PageLoader class to parse markdown with frontmatter
- Add /pages/{slug} route for rendering custom pages
- Add PAGES_HOME config setting to WebSettings
- Add prose CSS styles for markdown content
- Add pages to navigation and sitemap
- Update docker-compose.yml with pages volume mount
- Add comprehensive tests for PageLoader and routes

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-06 18:36:23 +00:00
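The frontmatter-parsing step of the PageLoader can be sketched as follows; this stdlib-only version handles only flat `key: value` lines, whereas the real loader would presumably use a YAML parser:

```python
def split_frontmatter(text: str) -> tuple[dict, str]:
    """Split a '---'-delimited frontmatter header from the markdown body.

    Minimal sketch: parses simple 'key: value' lines only.
    """
    meta: dict[str, str] = {}
    if text.startswith("---\n"):
        header, _, body = text[4:].partition("\n---\n")
        for line in header.splitlines():
            if ":" in line:
                key, _, value = line.partition(":")
                meta[key.strip()] = value.strip()
        return meta, body.lstrip("\n")
    return meta, text  # no frontmatter: whole file is the body
```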
Louis King
99206f7467 Updated README 2026-02-06 17:53:02 +00:00
Louis King
3a89daa9c0 Use empty Disallow in robots.txt for broader compatibility
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-06 15:58:52 +00:00
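An empty `Disallow` directive permits crawling of every path and is understood by older crawlers that predate the `Allow` directive, so the served file reduces to:

```text
User-agent: *
Disallow:
```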
Louis King
86c5ff8f1c SEO fixes 2026-02-06 14:38:26 +00:00
JingleManSweep
59d0edc96f Merge pull request #76 from ipnet-mesh/chore/add-dynamic-sitemap-xml
Added dynamic XML sitemap for SEO
2026-02-06 12:53:20 +00:00
Louis King
b01611e0e8 Added dynamic XML sitemap for SEO 2026-02-06 12:50:40 +00:00
JingleManSweep
1e077f50f7 Merge pull request #75 from ipnet-mesh/chore/add-meshcore-text-seo
Updated SEO descriptions
2026-02-06 12:34:25 +00:00
Louis King
09146a2e94 Updated SEO descriptions 2026-02-06 12:31:40 +00:00
JingleManSweep
56487597b7 Merge pull request #73 from ipnet-mesh/chore/improve-seo
Added SEO optimisations
2026-02-06 12:21:30 +00:00
Louis King
de968f397d Added SEO optimisations 2026-02-06 12:17:27 +00:00
JingleManSweep
3ca5284c11 Merge pull request #72 from ipnet-mesh/chore/add-permissive-robots-txt
Added permissive robots.txt route
2026-02-06 12:12:20 +00:00
Louis King
75d7e5bdfa Added permissive robots.txt route 2026-02-06 12:09:36 +00:00
Louis King
927fcd6efb Fixed README and Docker Compose 2026-02-03 22:58:58 +00:00
JingleManSweep
3132d296bb Merge pull request #71 from ipnet-mesh/chore/fix-compose-profile
Fixed Compose dependencies and switched to Docker managed volume
2026-01-28 21:56:32 +00:00
Louis King
96e4215c29 Fixed Compose dependencies and switched to Docker managed volume 2026-01-28 21:53:36 +00:00
Louis King
fd3c3171ce Fix FastAPI response model for union return type 2026-01-26 22:29:13 +00:00
Louis King
345ffd219b Separate API prefix search from exact match endpoint
- Add /api/v1/nodes/prefix/{prefix} for prefix-based node lookup
- Change /api/v1/nodes/{public_key} to exact match only
- /n/{prefix} now simply redirects to /nodes/{prefix}
- /nodes/{key} resolves prefixes via API and redirects to full key
2026-01-26 22:27:15 +00:00
Louis King
9661b22390 Fix node detail 404 to use custom error page 2026-01-26 22:11:48 +00:00
Louis King
31aa48c9a0 Return 404 page when node not found in detail view 2026-01-26 22:08:01 +00:00
Louis King
1a3649b3be Revert "Simplify 404 page design"
This reverts commit 33649a065b.
2026-01-26 22:07:29 +00:00
Louis King
33649a065b Simplify 404 page design 2026-01-26 22:05:31 +00:00
Louis King
fd582bda35 Add custom 404 error page 2026-01-26 22:01:00 +00:00
Louis King
c42b26c8f3 Make /n/ short link resolve prefix to full public key 2026-01-26 21:57:04 +00:00
Louis King
d52163949a Change /n/ short link to redirect to /nodes/ 2026-01-26 21:48:55 +00:00
Louis King
ca101583f0 Add /n/ short link alias and simplify CI lint job
- Add /n/{public_key} route as alias for /nodes/{public_key} for shorter URLs
- Replace individual lint tools in CI with pre-commit/action for consistency
2026-01-26 21:41:33 +00:00
JingleManSweep
4af0f2ea80 Merge pull request #70 from ipnet-mesh/chore/node-page-prefix-support
Add prefix matching support to node API endpoint
2026-01-26 21:28:43 +00:00
Louis King
0b3ac64845 Add prefix matching support to node API endpoint
Allow users to navigate to a node using any prefix of its public key
instead of requiring the full 64-character key. If multiple nodes match
the prefix, the first one alphabetically is returned.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 21:27:36 +00:00
Louis King
3c7a8981ee Increased dedup bucket window to 120s 2026-01-17 20:15:46 +00:00
JingleManSweep
238e28ae41 Merge pull request #67 from ipnet-mesh/chore/tidyup-message-filters-columns
Message Filter and Table Tidying
2026-01-15 18:19:30 +00:00
Louis King
68d5049963 Removed pointless channel number filter and tidied column headings/values 2026-01-15 18:16:31 +00:00
JingleManSweep
624fa458ac Merge pull request #66 from ipnet-mesh/chore/fix-sqlite-path-exists
Ensure SQLite database path/subdirectories exist before initialising …
2026-01-15 17:36:58 +00:00
Louis King
309d575fc0 Ensure SQLite database path/subdirectories exist before initialising database 2026-01-15 17:32:56 +00:00
Louis King
f7b4df13a7 Added more test coverage 2026-01-12 21:00:02 +00:00
Louis King
13bae5c8d7 Added more test coverage 2026-01-12 20:34:53 +00:00
Louis King
8a6b4d8e88 Tidying 2026-01-12 20:02:45 +00:00
JingleManSweep
b67e1b5b2b Merge pull request #65 from ipnet-mesh/claude/plan-member-editor-BwkcS
Plan Member Editor for Organization Management
2026-01-12 19:59:32 +00:00
Louis King
d4e3dc0399 Local tweaks 2026-01-12 19:59:14 +00:00
Claude
7f0adfa6a7 Implement Member Editor admin interface
Add a complete CRUD interface for managing network members at /a/members,
following the proven pattern established by the Tag Editor.

Changes:
- Add member routes to admin.py (GET, POST create/update/delete)
- Create admin/members.html template with member table, forms, and modals
- Add Members navigation card to admin index page
- Include proper authentication checks and flash message handling
- Fix mypy type hints for optional form fields

The Member Editor allows admins to:
- View all network members in a sortable table
- Create new members with all fields (member_id, name, callsign, role, contact, description)
- Edit existing members via modal dialog
- Delete members with confirmation
- Client-side validation for member_id format (alphanumeric + underscore)

All backend API infrastructure (models, schemas, routes) was already implemented.
This is purely a web UI layer built on top of the existing /api/v1/members endpoints.
2026-01-12 19:41:56 +00:00
Claude
94b03b49d9 Add comprehensive Member Editor implementation plan
Create detailed plan for building a Member Editor admin interface at /a/members.
The plan follows the proven Tag Editor pattern and includes:

- Complete route structure for CRUD operations
- Full HTML template layout with modals and forms
- JavaScript event handlers for edit/delete actions
- Integration with existing Member API endpoints
- Testing checklist and acceptance criteria

All backend infrastructure (API, models, schemas) already exists.
This is purely a web UI implementation task estimated at 2-3 hours.
2026-01-12 19:33:13 +00:00
Louis King
20d75fe041 Add bulk copy and delete all tags for node replacement workflow
When replacing a node device, users can now:
- Copy All: Copy all tags to a new node (skips existing tags)
- Delete All: Remove all tags from a node after migration

New API endpoints:
- POST /api/v1/nodes/{pk}/tags/copy-to/{dest_pk}
- DELETE /api/v1/nodes/{pk}/tags

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-11 14:46:51 +00:00
Louis King
307f3935e0 Add access denied page for unauthenticated admin access
When users try to access /a/ without valid OAuth2Proxy headers (e.g.,
GitHub account not in org), they now see a friendly 403 page instead
of a 500 error. Added authentication checks to all admin routes.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-11 13:34:03 +00:00
Louis King
6901bafb02 Tidying Tag Editor layout 2026-01-11 13:13:22 +00:00
JingleManSweep
e595dc2b27 Merge pull request #63 from ipnet-mesh/claude/admin-node-tags-interface-pHbKm
Add admin interface for managing node tags
2026-01-11 12:51:56 +00:00
Louis King
ed2cf09ff3 Improve admin UI and remove unused coordinate tag type
- Replace node type badge with icon in admin tag editor
- Add Edit/Add Tags button on node detail page (when admin enabled and authenticated)
- Remove automatic seed container startup to prevent overwriting user changes
- Remove unused 'coordinate' value type from node tags (only string, number, boolean remain)
2026-01-11 12:49:34 +00:00
Claude
bec736a894 Sort node dropdown alphabetically in admin interface
Nodes in the dropdown are now sorted alphabetically by name,
with unnamed nodes appearing at the end.
2026-01-11 12:01:11 +00:00
Claude
1457360703 Use API_ADMIN_KEY for web service to enable admin operations
The web admin interface needs write permissions to create, update,
move, and delete node tags. Changed to use API_ADMIN_KEY with
fallback to API_READ_KEY if admin key is not configured.
2026-01-11 11:55:15 +00:00
Claude
d8a0f2abb8 Fix security vulnerabilities and add validation
- Fix XSS vulnerability by using data attributes instead of inline
  onclick handlers in node_tags.html template
- Fix URL injection by using urlencode for all redirect URL parameters
- Add validation to reject moves where source and destination nodes
  are the same (returns 400 Bad Request)
- Add error handling for response.json() calls that may fail
- Add missing test coverage for update endpoint error scenarios
2026-01-11 11:51:57 +00:00
Claude
367f838371 Add admin interface for managing node tags
Implement CRUD operations for NodeTags in the admin interface:

- Add NodeTagMove schema for moving tags between nodes
- Add PUT /nodes/{public_key}/tags/{key}/move API endpoint
- Add web routes at /a/node-tags for tag management
- Create admin templates with node selector and tag management UI
- Support editing, adding, moving, and deleting tags via API calls
- Add comprehensive tests for new functionality

The interface allows selecting a node from a dropdown, viewing its
tags, and performing all CRUD operations including moving a tag
to a different node without having to delete and recreate it.
2026-01-11 01:34:07 +00:00
Louis King
741dd3ce84 Initial admin commit 2026-01-11 00:42:57 +00:00
JingleManSweep
0a12f389df Merge pull request #62 from ipnet-mesh/feature/contact-gps
Store Node GPS Coordinates
2026-01-09 20:17:40 +00:00
Louis King
8240c2fd57 Initial commit 2026-01-09 20:07:36 +00:00
Louis King
38f7fe291e Add member filtering to map page using member_id tag
Change the map filter from matching nodes by public_key to using the
member_id tag system. Now populates the member dropdown with all members
from the database and filters nodes based on their member_id tag value.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 19:16:15 +00:00
JingleManSweep
e4087efbf0 Merge pull request #61 from ipnet-mesh/feature/ui-improvements
Remove SNR column from messages and add last seen to members
2026-01-08 21:25:03 +00:00
Louis King
3051984fb9 Remove SNR column from messages and add last seen to members
- Remove SNR column from messages list (no longer provided by meshcore library)
- Add relative "last seen" time to nodes on members page with tooltip
- Add populateRelativeTimeElements() utility for time elements
2026-01-08 21:23:14 +00:00
JingleManSweep
eea2c90ea4 Merge pull request #58 from ipnet-mesh/feature/ui-improvements
Add member/node filters, mobile card views, and pagination macro
2026-01-08 20:15:54 +00:00
Louis King
d52c23fc29 Add member/node filters, mobile card views, and pagination macro
- Add member_id filter to nodes and advertisements API endpoints
- Add member and node dropdowns to web list pages
- Implement responsive mobile card view for nodes and advertisements
- Extract pagination into reusable Jinja2 macro (_macros.html)
- Fix Python version in README (3.11+ -> 3.13+)
2026-01-08 20:13:49 +00:00
Louis King
a1fb71ce65 Add responsive mobile card view for messages page 2026-01-08 16:50:29 +00:00
JingleManSweep
6a5549081f Merge pull request #56 from ipnet-mesh/fix/receiver-contact-cleanup
Add contact cleanup to interface RECEIVER mode
2026-01-08 10:28:26 +00:00
Louis King
68e24ee886 Fix 2026-01-08 10:26:31 +00:00
Louis King
61d6b6287e Add contact cleanup to interface RECEIVER mode
- Add CONTACT_CLEANUP_ENABLED and CONTACT_CLEANUP_DAYS settings
- Implement remove_contact and schedule_remove_contact on device classes
- During contact sync, remove stale contacts from companion node
- Stale contacts (not advertised for > N days) are not published to MQTT
- Update Python version to 3.13 across project config
- Remove brittle config tests that assumed default env values
2026-01-08 10:22:27 +00:00
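The staleness test behind CONTACT_CLEANUP_DAYS can be sketched as below; the default of 30 days and the data shape are assumptions for illustration:

```python
from datetime import datetime, timedelta, timezone

CONTACT_CLEANUP_DAYS = 30  # illustrative default; the real value is configurable

def stale_contacts(contacts: dict[str, datetime], now: datetime,
                   max_age_days: int = CONTACT_CLEANUP_DAYS) -> list[str]:
    """Return keys of contacts not advertised within max_age_days,
    i.e. candidates for removal from the companion node."""
    cutoff = now - timedelta(days=max_age_days)
    return [key for key, last_adv in contacts.items() if last_adv < cutoff]
```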
Louis King
7007c84577 Updated screenshot 2025-12-08 23:45:22 +00:00
Louis King
fd928d9fea Updated diagrams 2025-12-08 23:40:52 +00:00
Louis King
68b6aa85cd Updated diagrams 2025-12-08 23:39:25 +00:00
Louis King
abbc07edb3 Updated diagrams 2025-12-08 23:37:13 +00:00
Louis King
b42add310e Updated diagrams 2025-12-08 23:36:13 +00:00
Louis King
98a5526e80 Updated diagrams 2025-12-08 23:34:28 +00:00
Louis King
db86b3198e Some minor UI improvements, updated env.example, and docs 2025-12-08 23:06:04 +00:00
Louis King
cd4f0b91dc Various UI improvements 2025-12-08 22:07:46 +00:00
Louis King
a290db0491 Updated chart stats 2025-12-08 19:37:45 +00:00
Louis King
92b0b883e6 More website improvements 2025-12-08 17:07:39 +00:00
Louis King
9e621c0029 Fixed test 2025-12-08 16:42:13 +00:00
Louis King
a251f3a09f Added map to node detail page, made title consistent with emoji 2025-12-08 16:37:53 +00:00
Louis King
0fdedfe5ba Tidied Advert/Node search 2025-12-08 16:22:08 +00:00
Louis King
243a3e8521 Added truncate CLI command 2025-12-08 15:54:32 +00:00
JingleManSweep
b24a6f0894 Merge pull request #54 from ipnet-mesh/feature/more-filters
Fixed Member model
2025-12-08 15:15:04 +00:00
Louis King
57f51c741c Fixed Member model 2025-12-08 15:13:24 +00:00
Louis King
65b8418af4 Fixed last seen issue 2025-12-08 00:15:25 +00:00
JingleManSweep
89ceee8741 Merge pull request #51 from ipnet-mesh/feat/sync-receiver-contacts-on-advert
Receiver nodes now sync contacts to MQTT on every advert received
2025-12-07 23:36:11 +00:00
Louis King
64ec1a7135 Receiver nodes now sync contacts to MQTT on every advert received 2025-12-07 23:34:33 +00:00
JingleManSweep
3d632a94b1 Merge pull request #50 from ipnet-mesh/feat/remove-friendly-name
Removed friendly name support and tidied tags
2025-12-07 23:03:39 +00:00
Louis King
fbd29ff78e Removed friendly name support and tidied tags 2025-12-07 23:02:19 +00:00
Louis King
86bff07f7d Removed contrib 2025-12-07 22:22:32 +00:00
Louis King
3abd5ce3ea Updates 2025-12-07 22:18:16 +00:00
Louis King
0bf2086f16 Added screenshot 2025-12-07 22:05:34 +00:00
Louis King
40dc6647e9 Updates 2025-12-07 22:02:42 +00:00
Louis King
f4e95a254e Fixes 2025-12-07 22:00:46 +00:00
Louis King
ba43be9e62 Fixes 2025-12-07 21:58:42 +00:00
JingleManSweep
5b22ab29cf Merge pull request #49 from ipnet-mesh/fix/version-display
Fixed version display
2025-12-07 21:56:26 +00:00
Louis King
278d102064 Fixed version display 2025-12-07 21:55:10 +00:00
JingleManSweep
f0cee14bd8 Merge pull request #48 from ipnet-mesh/feature/mqtt-tls
Added support for MQTT TLS
2025-12-07 21:16:13 +00:00
Louis King
5ff8d16bcb Added support for MQTT TLS 2025-12-07 21:15:05 +00:00
JingleManSweep
e8a60d4869 Merge pull request #47 from ipnet-mesh/feature/node-cleanup
Added Node/Data cleanup
2025-12-07 20:50:09 +00:00
Louis King
84b8614e29 Updates 2025-12-06 21:42:33 +00:00
Louis King
3bc47a33bc Added data retention and node cleanup 2025-12-06 21:27:19 +00:00
Louis King
3ae8ecbd70 Updates 2025-12-06 20:46:31 +00:00
JingleManSweep
38164380af Merge pull request #41 from ipnet-mesh/claude/issue-37-20251206-1854
feat: Add MESHCORE_DEVICE_NAME config to set node name on startup
2025-12-06 19:33:32 +00:00
claude[bot]
dc3c771c76 docs: Document MESHCORE_DEVICE_NAME configuration option
Add documentation for the new MESHCORE_DEVICE_NAME environment variable
that was introduced in this PR. Updates include:

- Added to .env.example with description
- Added to Interface Settings table in README.md
- Added to CLI Reference examples in README.md
- Added to Interface configuration table in PLAN.md

🤖 Generated with [Claude Code](https://claude.ai/claude-code)

Co-authored-by: JingleManSweep <jinglemansweep@users.noreply.github.com>
Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-06 19:07:57 +00:00
claude[bot]
deb307c6ae feat: Add MESHCORE_DEVICE_NAME config to set node name on startup
- Add meshcore_device_name field to InterfaceSettings
- Implement set_name() method in device interface (real and mock)
- Update receiver to set device name during initialization if configured
- Add --device-name CLI option with MESHCORE_DEVICE_NAME env var support
- Device name is set after time sync and before advertisement broadcast

Fixes #37

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: JingleManSweep <jinglemansweep@users.noreply.github.com>
2025-12-06 19:00:56 +00:00
JingleManSweep
b8c8284643 Merge pull request #39 from ipnet-mesh/claude/issue-38-20251206-1840
Send flood advertisement on receiver startup
2025-12-06 18:48:50 +00:00
JingleManSweep
d310a119ed Update src/meshcore_hub/interface/receiver.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-12-06 18:48:03 +00:00
claude[bot]
2b307679c9 Send flood advertisement on receiver startup
Changed the startup advertisement from flood=False to flood=True
so that the device name is broadcast to the mesh network.

Fixes #38

Co-authored-by: JingleManSweep <jinglemansweep@users.noreply.github.com>
2025-12-06 18:42:53 +00:00
Louis King
6f7521951f Updates 2025-12-06 18:29:12 +00:00
Louis King
ab498292b2 Updated README with upgrading instructions 2025-12-06 17:31:10 +00:00
JingleManSweep
df2b9ea432 Merge pull request #35 from ipnet-mesh/claude/issue-33-20251206-1703
Add last seen time to map node labels
2025-12-06 17:18:49 +00:00
claude[bot]
55443376be Use full display name in map node labels
Update map node labels to show the full display name (friendly_name tag
→ advertised node name → public key prefix) instead of just the 2-char
public key prefix. This makes node labels consistent with the rest of
the site and easier to identify at a glance.

The backend already computes the correct display name in map.py:96-101,
so this change just uses that computed name in the label.

Co-authored-by: JingleManSweep <jinglemansweep@users.noreply.github.com>
2025-12-06 17:12:50 +00:00
claude[bot]
ed7a46b1a7 Add last seen time to map node labels
Display relative time since last seen (e.g., '2m', '1h', '2d') in node
labels on the map page. This makes it easier to quickly identify how
recently nodes were active without opening the popup.

- Add formatRelativeTime() function to calculate time difference
- Update createNodeIcon() to include relative time in label
- Adjust icon size to accommodate additional text
- Format: keyPrefix (timeAgo) e.g., 'ab (5m)'

Fixes #33

Co-authored-by: JingleManSweep <jinglemansweep@users.noreply.github.com>
2025-12-06 17:05:30 +00:00
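The compression rule used by formatRelativeTime() — seconds collapse to 'Ns', 'Nm', 'Nh', or 'Nd' — can be sketched in Python (the site implements it client-side in JavaScript; thresholds here follow the examples in the commit message):

```python
def format_relative(seconds: float) -> str:
    """Compress an age in seconds to a short label like '5m', '1h', '2d'."""
    if seconds < 60:
        return f"{int(seconds)}s"
    if seconds < 3600:
        return f"{int(seconds // 60)}m"
    if seconds < 86400:
        return f"{int(seconds // 3600)}h"
    return f"{int(seconds // 86400)}d"
```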
JingleManSweep
c2eef3db50 Merge pull request #34 from ipnet-mesh/claude/issue-25-20251206-1646
fix: Handle empty channel_idx parameter in messages filter
2025-12-06 16:53:34 +00:00
claude[bot]
4916ea0cea fix: Handle empty channel_idx parameter in messages filter
Fixed parse error when clicking the filter button on messages screen
with "All Channels" selected. The form was sending an empty string
for channel_idx, but FastAPI expected either a valid integer or None.

Changes:
- Accept channel_idx as string in query parameter
- Parse and validate channel_idx before passing to API
- Treat empty strings as None to prevent validation errors
- Add error handling for invalid integer values

Fixes #25

Co-authored-by: JingleManSweep <jinglemansweep@users.noreply.github.com>
2025-12-06 16:49:23 +00:00
JingleManSweep
e3fc7e4f07 Merge pull request #32 from ipnet-mesh/add-claude-github-actions-1765039315831
Add Claude Code GitHub Workflow
2025-12-06 16:42:13 +00:00
JingleManSweep
b656bfda21 "Claude PR Assistant workflow" 2025-12-06 16:41:56 +00:00
Louis King
fb7201dc2d Updates 2025-12-06 16:37:25 +00:00
Louis King
74346d9c82 Hopefully use Git tag as version on website 2025-12-06 16:32:31 +00:00
Louis King
beb471fcd8 Switched to Git versioning 2025-12-06 16:28:13 +00:00
Louis King
8597052bf7 Fixed table overflows 2025-12-06 16:07:13 +00:00
Louis King
f85768f661 Fixed DB migration 2025-12-06 15:46:41 +00:00
JingleManSweep
79bb4a4250 Merge pull request #23 from ipnet-mesh/claude/plan-event-deduplication-01NWhrJaWzQiGi1xLzmk1udg
Deduplication for multiple receiver nodes
2025-12-06 15:36:37 +00:00
Louis King
714c3cbbd2 Set sensible Docker tag label 2025-12-06 15:32:15 +00:00
Louis King
f0531c9e40 Updated env example 2025-12-06 15:16:26 +00:00
Louis King
dd0b4c73c5 More fixes 2025-12-06 15:10:03 +00:00
Louis King
78a086e1ea Updates 2025-12-06 14:51:47 +00:00
Louis King
9cd1d50bf6 Updates 2025-12-06 14:38:53 +00:00
Louis King
733342a9ec Fixed README and Compose 2025-12-06 14:21:17 +00:00
Louis King
d715e4e4f0 Updates 2025-12-06 13:33:02 +00:00
Louis King
2ea04deb7e Updates 2025-12-06 12:53:29 +00:00
Claude
6e3b86a1ad Add collector-level event deduplication using content hashes
Replace presentation-layer deduplication with collector-level approach:
- Add event_hash column to messages, advertisements, trace_paths, telemetry tables
- Handlers compute content hashes and skip duplicate events at insertion time
- Use 5-minute time buckets for advertisements and telemetry
- Include Alembic migration for schema changes
2025-12-06 12:23:14 +00:00
Louis King
b807932ca3 Added arm64 Docker image support 2025-12-06 12:16:05 +00:00
Claude
c80986fe67 Add event deduplication at presentation layer
When multiple receiver nodes are running, the same mesh events (messages,
advertisements) are reported multiple times. This causes duplicate entries
in the Web UI.

Changes:
- Add hash_utils.py with deterministic hash functions for each event type
- Add `dedupe` parameter to messages and advertisements API endpoints (default: True)
- Update dashboard stats to use distinct counts for messages/advertisements
- Deduplicate recent advertisements and channel messages in dashboard
- Add comprehensive tests for hash utilities

Hash strategy:
- Messages: hash of text + pubkey_prefix + channel_idx + sender_timestamp + txt_type
- Advertisements: hash of public_key + name + adv_type + flags + 5-minute time bucket
2025-12-06 12:08:07 +00:00
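The advertisement hash strategy above can be sketched as follows; the hash algorithm and field separator are assumptions for illustration, but the 5-minute bucket mirrors the described behaviour — identical adverts reported by multiple receivers within one bucket collapse to a single hash:

```python
import hashlib

def advertisement_hash(public_key: str, name: str, adv_type: int,
                       flags: int, timestamp: float,
                       bucket_secs: int = 300) -> str:
    """Content hash for dedup: equal inputs in the same 5-minute
    time bucket yield the same digest."""
    bucket = int(timestamp // bucket_secs)
    payload = f"{public_key}|{name}|{adv_type}|{flags}|{bucket}"
    return hashlib.sha256(payload.encode()).hexdigest()
```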
Louis King
3a060f77cc Updates 2025-12-06 11:46:36 +00:00
JingleManSweep
7461cf6dc1 Merge pull request #22 from ipnet-mesh/claude/add-member-node-association-01JezMA7XzvwsX37rMNoBmvo
Updates
2025-12-05 21:18:31 +00:00
Louis King
23f6c290c9 Updates 2025-12-05 21:17:34 +00:00
JingleManSweep
1ae1736391 Merge pull request #21 from ipnet-mesh/claude/add-member-node-association-01JezMA7XzvwsX37rMNoBmvo
Associate nodes with members in database
2025-12-05 21:17:20 +00:00
Claude
a4b13d3456 Add member-node association support
Members can now have multiple associated nodes, each with a public_key
and node_role (e.g., 'chat', 'repeater'). This replaces the single
public_key field on members with a one-to-many relationship.

Changes:
- Add MemberNode model for member-node associations
- Update Member model to remove public_key, add nodes relationship
- Update Pydantic schemas with MemberNodeCreate/MemberNodeRead
- Update member_import.py to handle nodes list in seed files
- Update API routes to handle nodes in create/update/read operations
- Add Alembic migration to create member_nodes table and migrate data
- Update example seed file with new format
2025-12-05 20:34:09 +00:00
Louis King
0016edbdac Updates 2025-12-05 20:03:14 +00:00
Louis King
0b8fc6e707 Charts 2025-12-05 19:50:22 +00:00
Louis King
d1181ae4f9 Updates 2025-12-05 19:27:06 +00:00
JingleManSweep
6b41e64b26 Merge pull request #20 from ipnet-mesh/claude/ui-improvements-ads-page-01GQvLau46crtrqftWzFie5d
UI improvements and advertisements page
2025-12-05 18:26:07 +00:00
Claude
995b066b0d Remove sender name from channel messages summary, keep only timestamp 2025-12-05 18:23:33 +00:00
Claude
0d14ed0ccc Add latest channel messages to Network dashboard
Replace the channel counts table with actual recent messages per channel:
- Added ChannelMessage schema for channel message summaries
- Dashboard API now fetches latest 5 messages for each channel with sender name lookups
- Network page displays messages grouped by channel with sender names and timestamps
- Only shows channels that have messages
2025-12-05 18:20:49 +00:00
Claude
e3ce1258a8 Fix advertisements Type column by falling back to source node's adv_type
The adv_type from the Advertisement record is often null, but the linked
Node has the correct adv_type. Now falls back to source_node.adv_type
when adv.adv_type is null.
2025-12-05 18:14:16 +00:00
Claude
087a3c4c43 Messages list: swap Time/Type columns, add receiver node links
- Swap Time and Type columns (Type now first)
- Add receiver_name and receiver_friendly_name to MessageRead schema
- Update messages API to fetch receiver node names and tags
- Make Receiver column a link showing name with public key prefix
2025-12-05 18:09:29 +00:00
Claude
5077178a6d Add Type column with emoji to Advertisements list 2025-12-05 18:04:04 +00:00
Claude
a44d38dad6 Update Nodes list to match Advertisements style
- Rename Name column to Node
- Remove separate Public Key column
- Show name with public key prefix below (like Advertisements list)
- Add whitespace-nowrap to Last Seen column
2025-12-05 18:03:14 +00:00
Claude
3469278fba Add node name lookups to advertisements list
- Join with Node table to get node names and tags for both source
  and receiver nodes
- Display friendly_name (from tags), node_name, or advertised name
  with priority in that order
- Show name with public key preview for both Node and Received By columns
2025-12-05 17:58:52 +00:00
Claude
89c81630c9 Link 'Powered by MeshCore Hub' to GitHub repository 2025-12-05 17:55:59 +00:00
Claude
ab4a5886db Simplify advertisements table: Node, Received By, Time columns
- Remove Name and Type columns (usually null)
- Reorder columns: Node first, then Received By, then Time
- Link both Node and Received By to their node detail pages
- Show node name with public key preview when available
2025-12-05 17:53:54 +00:00
Claude
ec7082e01a Fix message text indentation in messages list
Put message content inline with td tag to prevent whitespace-pre-wrap
from preserving template indentation.
2025-12-05 17:43:09 +00:00
Claude
b4e7d45cf6 UI improvements: smaller hero, stats bar, advertisements page, messages fixes
- Reduce hero section size and add stats bar with node/message counts
- Add new Advertisements page with public key filtering
- Update hero navigation buttons: Dashboard, Nodes, Advertisements, Messages
- Add Advertisements to main navigation menu
- Remove Hops column from messages list (always empty)
- Display full message text with proper multi-line wrapping
2025-12-05 17:14:26 +00:00
JingleManSweep
864494c3a8 Merge pull request #19 from ipnet-mesh/claude/research-node-id-usage-019DURYQHkvodx9sV39hTmNM
Research node ID and public key usage
2025-12-05 16:59:30 +00:00
Claude
84e83a3384 Rename receiver_public_key to received_by
Shorter, cleaner field name for the receiving interface node's
public key in API responses.
2025-12-05 16:57:24 +00:00
Claude
796e303665 Remove internal UUID fields from API responses
Internal database UUIDs (id, node_id, receiver_node_id) were being
exposed in API responses. These are implementation details that should
not be visible to API consumers. The canonical identifier for nodes
is the 64-char hex public_key.

Changes:
- Remove id, node_id from NodeTagRead, NodeRead schemas
- Remove id from MemberRead schema
- Remove id, receiver_node_id, node_id from MessageRead, AdvertisementRead,
  TracePathRead, TelemetryRead schemas
- Update web map component to use public_key instead of member.id
  for owner filtering
- Update tests to not assert on removed fields
2025-12-05 16:50:21 +00:00
Louis King
a5d8d586e1 Updated README 2025-12-05 12:56:12 +00:00
JingleManSweep
26239fe11f Merge pull request #18 from ipnet-mesh/claude/prepare-public-release-01AFsizAneHmWjHZD6of5MWy
Prepare repository for public release
2025-12-05 12:40:56 +00:00
Claude
0e50a9d3b0 Prepare repository for public release
- Update license from MIT to GPL-3.0-or-later in pyproject.toml
- Update project URLs from meshcore-dev to ipnet-mesh organization
- Add explicit GPL-3.0 license statement to README
- Fix AGENTS.md venv directory reference (.venv vs venv)
- Remove undocumented NETWORK_LOCATION from README
- Fix stats endpoint path in README (/api/v1/dashboard/stats)
- Clarify seed and data directory descriptions in project structure
2025-12-05 12:12:55 +00:00
JingleManSweep
9b7d8cc31b Merge pull request #17 from ipnet-mesh/claude/implement-map-page-01F5nbj6KfXDv4Dsp1KE6jJZ
Updates
2025-12-04 19:35:15 +00:00
Louis King
d7152a5359 Updates 2025-12-04 19:34:18 +00:00
JingleManSweep
a1cc4388ae Merge pull request #16 from ipnet-mesh/claude/implement-map-page-01F5nbj6KfXDv4Dsp1KE6jJZ
Implement Map page with node filtering
2025-12-04 19:32:24 +00:00
Claude
bb0b9f05ec Add debug info to map data endpoint for troubleshooting
- Return total_nodes, nodes_with_coords, and error in response
- Display meaningful messages when no nodes or no coordinates found
- Log API errors and node counts for debugging
2025-12-04 18:37:50 +00:00
Claude
fe744c7c0c Fix map markers to use inline styles and center on nodes
- Use inline styles for marker colors instead of CSS classes for reliable rendering
- Center map on node locations when data is first loaded
- Refactor filter logic to separate recentering behavior
- Update legend to use inline styles
2025-12-04 18:34:24 +00:00
Claude
cf4e82503a Add filters to map page for node type, infrastructure, and owner
- Enhanced /map/data endpoint to include node role tag and member ownership
- Added client-side filtering for node type (chat, repeater, room)
- Added toggle to filter for infrastructure nodes only (role: infra)
- Added dropdown filter for member owner (nodes linked via public_key)
- Color-coded markers by node type with gold border for infrastructure
- Added legend showing marker types
- Dynamic count display showing total vs filtered nodes
2025-12-04 18:29:43 +00:00
Louis King
d6346fdfde Updates 2025-12-04 18:18:36 +00:00
Louis King
cf2c3350cc Updates 2025-12-04 18:10:29 +00:00
Louis King
110c701787 Updates 2025-12-04 16:37:59 +00:00
Louis King
fc0dc1a448 Updates 2025-12-04 16:12:51 +00:00
Louis King
d283a8c79b Updates 2025-12-04 16:00:15 +00:00
JingleManSweep
f129a4e0f3 Merge pull request #15 from ipnet-mesh/feature/originator-address
Updates
2025-12-04 15:52:49 +00:00
JingleManSweep
90f6a68b9b Merge pull request #14 from ipnet-mesh/claude/update-docs-docker-agents-01B9FYrem1tEwNRkx7rCH1QU
Update docs: Docker profiles, seed data, and Members model
2025-12-04 15:46:00 +00:00
Louis King
6cf3152ef9 Updates 2025-12-04 15:45:35 +00:00
Claude
058a6e2c95 Update docs: Docker profiles, seed data, and Members model
- Update Docker Compose section: core services run by default (mqtt,
  collector, api, web), optional profiles for interfaces and utilities
- Document automatic seeding on collector startup
- Add SEED_HOME environment variable documentation
- Document new Members model and YAML seed file format
- Update node_tags format to YAML with public_key-keyed structure
- Update project structure to reflect seed/ and data/ directories
- Add CLI reference for collector seed commands
2025-12-04 15:31:04 +00:00
JingleManSweep
acccdfedba Merge pull request #13 from ipnet-mesh/claude/fix-black-ci-linting-01UGvpVAnxCMthohsvzFBiQF
Fix Black linting issues in CI pipeline
2025-12-04 15:12:36 +00:00
Claude
83f3157e8b Fix Black formatting: add trailing comma in set_dispatch_callback
Black requires a trailing comma after the callback parameter when the
function signature spans multiple lines.
2025-12-04 15:06:13 +00:00
JingleManSweep
c564a61cf7 Merge pull request #12 from ipnet-mesh/claude/trigger-contact-database-01Qp5vbzzE1c77Kv21wYfQbQ
Trigger Contact database before CONTACT events
2025-12-04 14:59:21 +00:00
Claude
bbe8491ff1 Add info-level logging to contact handler for debugging
Change debug logging to info level so contact processing is visible
in default log output. This helps verify that contact events are
being received and processed correctly.
2025-12-04 14:39:14 +00:00
Claude
1d6a9638a1 Refactor contacts to emit individual MQTT events per contact
Instead of sending all contacts in one MQTT message, the interface
now splits the device's contacts response into individual 'contact'
events. This is more consistent with other event patterns and makes
the collector simpler.

Interface changes:
- Add _publish_contacts() to split contacts dict into individual events
- Publish each contact as 'contact' event (not 'contacts')

Collector changes:
- Rename handle_contacts to handle_contact for single contact
- Simplify handler to process one contact per message
- Register handler for 'contact' events
2025-12-04 14:34:05 +00:00
Claude
cf633f9f44 Update contacts handler to match actual device payload format
The device sends contact entries with different field names than
originally expected:
- adv_name (not name) for the advertised node name
- type (numeric: 0=none, 1=chat, 2=repeater, 3=room) instead of node_type

Changes:
- Update handle_contacts to extract adv_name and convert numeric type
- Add NODE_TYPE_MAP for type conversion
- Always update node name if different (not just if empty)
- Add debug logging for node updates
- Update ContactInfo schema with actual device fields
2025-12-04 14:25:11 +00:00
Claude
102e40a395 Fix event subscriptions setup timing for contact database sync
Move _setup_event_subscriptions() from run() to connect() so that
event subscriptions are active before get_contacts() is called during
device initialization. Previously, CONTACTS events were lost because
subscriptions weren't set up until run() was called.
2025-12-04 14:18:27 +00:00
Claude
241902685d Add contact database sync to interface startup
Trigger get_contacts() during receiver initialization to fetch the
device's contact database and broadcast CONTACTS events over MQTT.
This enables the collector to associate broadcast node names with
node records in the database.

Changes:
- Add get_contacts() abstract method to BaseMeshCoreDevice
- Implement get_contacts() in MeshCoreDevice using meshcore library
- Implement get_contacts() in MockMeshCoreDevice for testing
- Call get_contacts() in Receiver._initialize_device() after startup
2025-12-04 14:10:58 +00:00
JingleManSweep
2f2ae30c89 Merge pull request #11 from ipnet-mesh/claude/collector-seed-yaml-conversion-01WgpDYuzrzP5nkeC2EG9o2L
Convert collector seed mechanism from JSON to YAML
2025-12-04 01:34:47 +00:00
Louis King
fff04e4b99 Updates 2025-12-04 01:33:25 +00:00
Claude
df05c3a462 Convert collector seed mechanism from JSON to YAML
- Replace JSON seed files with YAML format for better readability
- Auto-detect YAML primitive types (number, boolean, string) from values
- Add automatic seed import on collector startup
- Split lat/lon into separate tags instead of combined coordinate string
- Add PyYAML dependency and types-PyYAML for type checking
- Update example/seed and contrib/seed/ipnet with clean YAML format
- Update tests to verify YAML primitive type detection
2025-12-04 01:27:03 +00:00
Louis King
e2d865f200 Fix nodes page test to match template output
The test was checking for adv_type values (REPEATER, CLIENT) but the
nodes.html template doesn't display that column. Updated to check for
public key prefixes instead.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-04 01:03:05 +00:00
Louis King
fa335bdb14 Updates 2025-12-04 00:59:49 +00:00
174 changed files with 18296 additions and 3352 deletions

View File

@@ -0,0 +1,60 @@
---
allowed-tools: Bash(gh label list:*),Bash(gh issue view:*),Bash(gh issue edit:*),Bash(gh search:*)
description: Apply labels to GitHub issues
---
You're an issue triage assistant for GitHub issues. Your task is to analyze the issue and select appropriate labels from the provided list.
IMPORTANT: Don't post any comments or messages to the issue. Your only action should be to apply labels.
Issue Information:
- REPO: ${{ github.repository }}
- ISSUE_NUMBER: ${{ github.event.issue.number }}
TASK OVERVIEW:
1. First, fetch the list of labels available in this repository by running: `gh label list`. Run exactly this command with nothing else.
2. Next, use gh commands to get context about the issue:
- Use `gh issue view ${{ github.event.issue.number }}` to retrieve the current issue's details
- Use `gh search issues` to find similar issues that might provide context for proper categorization
- You have access to these Bash commands:
- Bash(gh label list:\*) - to get available labels
- Bash(gh issue view:\*) - to view issue details
- Bash(gh issue edit:\*) - to apply labels to the issue
- Bash(gh search:\*) - to search for similar issues
3. Analyze the issue content, considering:
- The issue title and description
- The type of issue (bug report, feature request, question, etc.)
- Technical areas mentioned
- Severity or priority indicators
- User impact
- Components affected
4. Select appropriate labels from the available labels list provided above:
- Choose labels that accurately reflect the issue's nature
- Be specific but comprehensive
- IMPORTANT: Add a priority label (P1, P2, or P3) based on the label descriptions from gh label list
- Consider platform labels (android, ios) if applicable
- If you find similar issues using gh search, consider using a "duplicate" label if appropriate. Only do so if the issue is a duplicate of another OPEN issue.
5. Apply the selected labels:
- Use `gh issue edit` to apply your selected labels
- DO NOT post any comments explaining your decision
- DO NOT communicate directly with users
- If no labels are clearly applicable, do not apply any labels
IMPORTANT GUIDELINES:
- Be thorough in your analysis
- Only select labels from the provided list above
- DO NOT post any comments to the issue
- Your ONLY action should be to apply labels using gh issue edit
- It's okay to not add any labels if none are clearly applicable
---

View File

@@ -0,0 +1,44 @@
---
name: documentation
description: Audit and update project documentation to accurately reflect the current codebase. Use when documentation may be outdated, after significant code changes, or when the user asks to review or update docs.
---
# Documentation Audit
Audit and update all project documentation so it accurately reflects the current state of the codebase. Documentation must only describe features, options, configurations, and functionality that actually exist in the code.
## Files to Review
- **README.md** - Project overview, setup instructions, usage examples
- **AGENTS.md** - AI coding assistant guidelines, project structure, conventions
- **.env.example** - Example environment variables
Also check for substantial comments or inline instructions within the codebase that may be outdated.
## Process
1. **Read all documentation files** listed above in full.
2. **Cross-reference against the codebase.** For every documented item (features, env vars, CLI commands, routes, models, directory paths, conventions), search the code to verify:
- It actually exists.
- Its described behavior matches the implementation.
- File paths and directory structures are accurate.
3. **Identify and fix discrepancies:**
- **Version updates** — ensure documentation reflects any new/updated/removed versions. Check .python-version, pyproject.toml, etc.
- **Stale/legacy content** — documented but no longer in the code. Remove it.
- **Missing content** — exists in the code but not documented. Add it.
- **Inaccurate descriptions** — documented behavior doesn't match implementation. Correct it.
4. **Apply updates** to each file. Preserve existing style and structure.
5. **Verify consistency** across all documentation files — they must not contradict each other.
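The cross-referencing in step 2 can be partially automated. A minimal sketch, assuming env vars are documented in `.env.example` and the code lives under a `src/` tree (the helper name `stale_vars` is illustrative, not part of the project):

```bash
# Sketch: flag documented env vars that never appear in the code.
# Arguments: path to an env example file, path to the source directory.
stale_vars() {
  env_file="$1"; code_dir="$2"
  # Extract UPPER_CASE names defined at the start of a line, then
  # report any that no source file mentions.
  grep -oE '^[A-Z][A-Z0-9_]*=' "$env_file" | tr -d '=' | while read -r var; do
    grep -rq "$var" "$code_dir" || echo "$var"
  done
}
```

Anything it prints is only a candidate for removal; verify against the code before deleting documentation.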
## Rules
- Do NOT invent features or options that don't exist in the code.
- Do NOT remove documentation for features that DO exist.
- Do NOT change the fundamental structure or style of the docs.
- Do NOT modify CLAUDE.md.
- Focus on accuracy, not cosmetic changes.
- When in doubt, check the source code.

View File

@@ -0,0 +1,49 @@
---
name: git-branch
description: Create a new branch from latest main with the project's naming convention (feat/fix/chore). Use when starting new work on a feature, bug fix, or chore.
---
# Git Branch
Create a new branch from the latest `main` branch using the project's naming convention.
## Arguments
The user may provide arguments in the format: `<type>/<description>`
- `type` — one of `feat`, `fix`, or `chore`
- `description` — short kebab-case description (e.g., `add-map-clustering`)
If not provided, ask the user for the branch type and description.
## Process
1. **Fetch latest main:**
```bash
git fetch origin main
```
2. **Determine branch name:**
- If the user provided arguments (e.g., `/git-branch feat/add-map-clustering`), use them directly.
- Otherwise, ask the user for:
- **Branch type**: `feat`, `fix`, or `chore`
- **Short description**: a brief kebab-case slug describing the work
- Construct the branch name as `{type}/{slug}` (e.g., `feat/add-map-clustering`).
3. **Create and switch to the new branch:**
```bash
git checkout -b {branch_name} origin/main
```
4. **Confirm** by reporting the new branch name to the user.
## Rules
- Branch names MUST follow the `{type}/{slug}` convention.
- Valid types are `feat`, `fix`, and `chore` only.
- The slug MUST be kebab-case (lowercase, hyphens, no spaces or underscores).
- Always branch from `origin/main`, never from the current branch.
- Do NOT push the branch — just create it locally.
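The naming rules above can be checked mechanically before creating the branch. A minimal sketch (the `valid_branch` helper is illustrative, not part of the project):

```bash
# Sketch: validate a proposed branch name against the {type}/{slug} convention.
valid_branch() {
  case "$1" in
    feat/?*|fix/?*|chore/?*) ;;   # valid type with a non-empty slug
    *) return 1 ;;
  esac
  slug="${1#*/}"
  # kebab-case: lowercase letters/digits separated by single hyphens
  printf '%s\n' "$slug" | grep -Eq '^[a-z0-9]+(-[a-z0-9]+)*$'
}

valid_branch "feat/add-map-clustering" && echo "ok"   # prints "ok"
```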

View File

@@ -0,0 +1,94 @@
---
name: git-pr
description: Create a pull request to main from the current branch. Runs quality checks, commits changes, pushes, and opens a PR via gh CLI. Use when ready to submit work for review.
---
# Git PR
Create a pull request to `main` from the current feature branch.
## Process
### Phase 1: Pre-flight Checks
1. **Verify branch:**
```bash
git branch --show-current
```
- The current branch must NOT be `main`. If on `main`, tell the user to create a feature branch first (e.g., `/git-branch`).
2. **Check for uncommitted changes:**
```bash
git status
```
- If there are uncommitted changes, ask the user for a commit message and commit them using the `/git-commit` skill conventions (no Claude authoring details).
### Phase 2: Quality Checks
1. **Determine changed components** by comparing against `main`:
```bash
git diff --name-only main...HEAD
```
2. **Run targeted tests** based on changed files:
- `tests/test_web/` for web-only changes (templates, static JS, web routes)
- `tests/test_api/` for API changes
- `tests/test_collector/` for collector changes
- `tests/test_interface/` for interface/sender/receiver changes
- `tests/test_common/` for common models/schemas/config changes
- Run the full `pytest` if changes span multiple components
3. **Run pre-commit checks:**
```bash
pre-commit run --all-files
```
- If checks fail because hooks auto-fixed files, commit the fixes and re-run until clean.
4. If tests or checks fail and cannot be auto-fixed, report the issues to the user and stop.
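The file-to-test mapping above can be sketched as a small helper. The source path prefixes here are assumptions about the repository layout, not confirmed by this diff:

```bash
# Sketch: map a changed file path to the test directory to run.
# Path prefixes under src/ are illustrative assumptions.
test_target() {
  case "$1" in
    tests/test_web/*|src/meshcore_hub/web/*)   echo "tests/test_web" ;;
    src/meshcore_hub/api/*)                    echo "tests/test_api" ;;
    src/meshcore_hub/collector/*)              echo "tests/test_collector" ;;
    src/meshcore_hub/interface/*)              echo "tests/test_interface" ;;
    src/meshcore_hub/common/*)                 echo "tests/test_common" ;;
    *)                                         echo "tests" ;;  # full suite
  esac
}
```

Changes spanning several prefixes should fall back to the full `pytest` run, as the list above says.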
### Phase 3: Push and Create PR
1. **Push the branch to origin:**
```bash
git push -u origin HEAD
```
2. **Generate PR content:**
- **Title**: Derive from the branch name. Convert `feat/add-map-clustering` to `Add map clustering`, `fix/login-error` to `Fix login error`, etc. Keep under 70 characters.
- **Body**: Generate a summary from the commit history:
```bash
git log main..HEAD --oneline
```
3. **Create the PR:**
```bash
gh pr create --title "{title}" --body "$(cat <<'EOF'
## Summary
{bullet points summarizing the changes}
## Test plan
{checklist of testing steps}
EOF
)"
```
4. **Return the PR URL** to the user.
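The title derivation in step 2 can be sketched as a shell helper. This is one possible reading of the examples given; the skill text leaves the exact transformation to the assistant:

```bash
# Sketch: derive a PR title from a {type}/{slug} branch name.
# feat/add-map-clustering -> "Add map clustering"
# fix/login-error         -> "Fix login error"
branch_to_title() {
  type="${1%%/*}"
  slug="${1#*/}"
  words="$(printf '%s' "$slug" | tr '-' ' ')"
  # fix branches read better with "Fix" prepended; feat slugs already start with a verb
  [ "$type" = "fix" ] && words="fix $words"
  first="$(printf '%s' "$words" | cut -c1 | tr '[:lower:]' '[:upper:]')"
  printf '%s%s\n' "$first" "${words#?}"
}
```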
## Rules
- Do NOT create a PR from `main`.
- Do NOT skip quality checks — tests and pre-commit must pass.
- Do NOT force-push.
- Always target `main` as the base branch.
- Keep the PR title concise (under 70 characters).
- If quality checks fail, fix issues or report to the user — do NOT create the PR with failing checks.

View File

@@ -0,0 +1,66 @@
---
name: quality
description: Run the full test suite, pre-commit checks, and re-run tests to ensure code quality. Fixes any issues found. Use after code changes, before commits, or when the user asks to check quality.
---
# Quality Check
Run the full quality pipeline: tests, pre-commit checks, and a verification test run. Fix any issues discovered at each stage.
## Prerequisites
Before running checks, ensure the environment is ready:
1. Check for `.venv` directory — create with `python -m venv .venv` if missing.
2. Activate the virtual environment: `source .venv/bin/activate`
3. Install dependencies: `pip install -e ".[dev]"`
## Process
### Phase 1: Initial Test Run
Run the full test suite to establish a baseline:
```bash
pytest
```
- If tests **pass**, proceed to Phase 2.
- If tests **fail**, investigate and fix the failures before continuing. Re-run the failing tests to confirm fixes. Then proceed to Phase 2.
### Phase 2: Pre-commit Checks
Run all pre-commit hooks against the entire codebase:
```bash
pre-commit run --all-files
```
- If all checks **pass**, proceed to Phase 3.
- If checks **fail**:
- Many hooks (black, trailing whitespace, end-of-file) auto-fix issues. Re-run `pre-commit run --all-files` to confirm auto-fixes resolved the issues.
- For remaining failures (flake8, mypy, etc.), investigate and fix manually.
- Re-run `pre-commit run --all-files` until all checks pass.
- Then proceed to Phase 3.
### Phase 3: Verification Test Run
Run the full test suite again to ensure pre-commit fixes (formatting, import sorting, etc.) haven't broken any functionality:
```bash
pytest
```
- If tests **pass**, the quality check is complete.
- If tests **fail**, the pre-commit fixes introduced a regression. Investigate and fix, then re-run both `pre-commit run --all-files` and `pytest` until both pass cleanly.
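The "re-run until clean" step in Phase 2 is worth bounding so a hook that never converges cannot loop forever. A minimal sketch, where the command argument stands in for `pre-commit run --all-files`:

```bash
# Sketch: re-run a check command until it passes, giving up after 5 attempts.
run_until_clean() {
  attempts=0
  until "$@"; do
    attempts=$((attempts + 1))
    if [ "$attempts" -ge 5 ]; then
      return 1   # still failing after repeated auto-fix passes
    fi
  done
  return 0
}
```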
## Rules
- Always run the FULL test suite (`pytest`), not targeted tests.
- Always run pre-commit against ALL files (`--all-files`).
- Do NOT skip or ignore failing tests — investigate and fix them.
- Do NOT skip or ignore pre-commit failures — investigate and fix them.
- Do NOT modify test assertions to make tests pass unless the test is genuinely wrong.
- Do NOT disable pre-commit hooks or add noqa/type:ignore unless truly justified.
- Fix root causes, not symptoms.
- If a fix requires changes outside the scope of a simple quality fix (e.g., a design change), report it to the user rather than making the change silently.

View File

@@ -0,0 +1,114 @@
---
name: release
description: Full release workflow — quality gate, semantic version tag, push, and GitHub release. Use when ready to cut a new release from main.
---
# Release
Run the full release workflow: quality checks, version tagging, push, and GitHub release creation.
## Arguments
The user may optionally provide a version number (e.g., `/release 1.2.0`). If not provided, one will be suggested based on commit history.
## Process
### Phase 1: Pre-flight Checks
1. **Verify on `main` branch:**
```bash
git branch --show-current
```
- Must be on `main`. If not, tell the user to switch to `main` first.
2. **Verify working tree is clean:**
```bash
git status --porcelain
```
- If there are uncommitted changes, tell the user to commit or stash them first.
3. **Pull latest:**
```bash
git pull origin main
```
### Phase 2: Quality Gate
1. **Run full test suite:**
```bash
pytest
```
2. **Run pre-commit checks:**
```bash
pre-commit run --all-files
```
3. If either fails, report the issues and stop. Do NOT proceed with a release that has failing checks.
### Phase 3: Determine Version
1. **Get the latest tag:**
```bash
git describe --tags --abbrev=0 2>/dev/null || echo "none"
```
2. **List commits since last tag:**
```bash
git log {last_tag}..HEAD --oneline
```
If no previous tag exists, list the last 20 commits:
```bash
git log --oneline -20
```
3. **Determine next version:**
- If the user provided a version, use it.
- Otherwise, suggest a version based on commit prefixes:
- Any commit starting with `feat` or `Add` → **minor** bump
- Only `fix` or `Fix` commits → **patch** bump
- If no previous tag, suggest `0.1.0`
- Present the suggestion and ask the user to confirm or provide a different version.
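The bump suggestion above can be sketched as a helper that takes the last tag's version and the commit subjects since it. This is illustrative only; note that `git log --oneline` output includes a hash prefix that would need stripping first:

```bash
# Sketch: suggest the next semantic version from bare commit subjects.
# Usage: suggest_bump <last-version> <subject>...
suggest_bump() {
  last="$1"; shift
  bump=patch
  for subject in "$@"; do
    case "$subject" in
      feat*|Add*) bump=minor ;;   # any feature commit wins a minor bump
    esac
  done
  major="${last%%.*}"; rest="${last#*.}"
  minor="${rest%%.*}"; patch="${rest#*.}"
  if [ "$bump" = minor ]; then
    printf '%s.%s.0\n' "$major" "$((minor + 1))"
  else
    printf '%s.%s.%s\n' "$major" "$minor" "$((patch + 1))"
  fi
}

suggest_bump 1.2.3 "Fix login error" "feat: add map clustering"   # prints 1.3.0
```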
### Phase 4: Tag and Release
1. **Create annotated tag:**
```bash
git tag -a v{version} -m "Release v{version}"
```
2. **Push tag to origin:**
```bash
git push origin v{version}
```
3. **Create GitHub release:**
```bash
gh release create v{version} --title "v{version}" --generate-notes
```
4. **Report** the release URL to the user.
## Rules
- MUST be on `main` branch with a clean working tree.
- MUST pass all quality checks before tagging.
- Tags MUST follow the `v{major}.{minor}.{patch}` format (e.g., `v1.2.0`).
- Always create an annotated tag, not a lightweight tag.
- Always confirm the version with the user before tagging.
- Do NOT skip quality checks under any circumstances.
- Do NOT force-push tags.

View File

@@ -1,110 +1,138 @@
# MeshCore Hub - Docker Compose Environment Configuration
# MeshCore Hub - Environment Configuration
# Copy this file to .env and customize values
#
# Configuration is grouped by service. Most deployments only need:
# - Common Settings (always required)
# - MQTT Settings (always required)
# - Interface Settings (for receiver/sender services)
#
# The Collector, API, and Web services typically run as a combined "core"
# profile and share the same data directory.
#
# -----------------------------------------------------------------------------
# QUICK START: Receiver/Sender Only
# -----------------------------------------------------------------------------
# For a minimal receiver or sender setup, you only need these settings:
#
# MQTT_HOST=your-mqtt-broker.example.com
# MQTT_PORT=1883
# MQTT_USERNAME=your_username
# MQTT_PASSWORD=your_password
# MQTT_TLS=false
# SERIAL_PORT=/dev/ttyUSB0
#
# Serial ports are typically /dev/ttyUSB[0-9] or /dev/ttyACM[0-9] on Linux.
# -----------------------------------------------------------------------------
# ===================
# Docker Image
# ===================
# =============================================================================
# COMMON SETTINGS
# =============================================================================
# These settings apply to all services
# Leave empty to build from local Dockerfile, or set to use a pre-built image:
# MESHCORE_IMAGE=ghcr.io/ipnet-mesh/meshcore-hub:latest
# MESHCORE_IMAGE=ghcr.io/ipnet-mesh/meshcore-hub:main
# MESHCORE_IMAGE=ghcr.io/ipnet-mesh/meshcore-hub:v1.0.0
MESHCORE_IMAGE=
# Docker image version tag to use
# Options: latest, main, v1.0.0, etc.
IMAGE_VERSION=latest
# ===================
# Data Directory
# ===================
# Logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL)
LOG_LEVEL=INFO
# Base directory for all service data (collector DB, tags, members, etc.)
# Base directory for runtime data (database, etc.)
# Default: ./data (relative to docker-compose.yml location)
# Inside containers this is mapped to /data
#
# Structure:
# ${DATA_HOME}/
# ├── collector/
# │   ├── meshcore.db      # SQLite database
# │   └── tags.json        # Node tags for import
# └── web/
#     └── members.json     # Network members list
# └── collector/
#     └── meshcore.db      # SQLite database
DATA_HOME=./data
# ===================
# Common Settings
# ===================
# Directory containing seed data files for import
# Default: ./seed (relative to docker-compose.yml location)
# Inside containers this is mapped to /seed
#
# Structure:
# ${SEED_HOME}/
# ├── node_tags.yaml # Node tags for import
# └── members.yaml # Network members for import
SEED_HOME=./seed
# Logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL)
LOG_LEVEL=INFO
# =============================================================================
# MQTT SETTINGS
# =============================================================================
# MQTT broker connection settings for interface, collector, and API services
# MQTT Broker Settings (internal use)
# MQTT Broker host
# When using the local MQTT broker (--profile mqtt), use "mqtt"
# When using an external broker, set the hostname/IP
MQTT_HOST=mqtt
# MQTT Broker port (default: 1883, or 8883 for TLS)
MQTT_PORT=1883
# MQTT authentication (optional)
MQTT_USERNAME=
MQTT_PASSWORD=
# MQTT topic prefix for all MeshCore messages
MQTT_PREFIX=meshcore
# External MQTT port mapping
# Enable TLS/SSL for MQTT connection
# When enabled, uses TLS with system CA certificates (e.g., for Let's Encrypt)
MQTT_TLS=false
# External port mappings for local MQTT broker (--profile mqtt only)
MQTT_EXTERNAL_PORT=1883
MQTT_WS_PORT=9001
# ===================
# Interface Settings
# ===================
# =============================================================================
# INTERFACE SETTINGS (Receiver/Sender)
# =============================================================================
# Settings for the MeshCore device interface services
# Serial port for receiver device
SERIAL_PORT=/dev/ttyUSB0
# Serial port for sender device (if separate)
# Serial port for sender device (if using separate device)
SERIAL_PORT_SENDER=/dev/ttyUSB1
# Baud rate for serial communication
SERIAL_BAUD=115200
# Optional device/node name to set on startup
# This name is broadcast to the mesh network in advertisements
MESHCORE_DEVICE_NAME=
# Optional node address override (64-char hex string)
# Only set if you need to override the device's public key
NODE_ADDRESS=
NODE_ADDRESS_SENDER=
# ===================
# API Settings
# ===================
# -------------------
# Contact Cleanup Settings (RECEIVER mode only)
# -------------------
# Automatic removal of stale contacts from the MeshCore companion node
# External API port
API_PORT=8000
# Enable automatic removal of stale contacts from companion node
CONTACT_CLEANUP_ENABLED=true
# API Keys for authentication (generate secure keys for production!)
# Example: openssl rand -hex 32
API_READ_KEY=
API_ADMIN_KEY=
# Remove contacts not advertised for this many days
CONTACT_CLEANUP_DAYS=7
# ===================
# Web Dashboard Settings
# ===================
# =============================================================================
# COLLECTOR SETTINGS
# =============================================================================
# The collector subscribes to MQTT events and stores them in the database
# External web port
WEB_PORT=8080
# Network Information (displayed on web dashboard)
NETWORK_NAME=MeshCore Network
NETWORK_CITY=
NETWORK_COUNTRY=
NETWORK_LOCATION=
NETWORK_RADIO_CONFIG=
NETWORK_CONTACT_EMAIL=
NETWORK_CONTACT_DISCORD=
# Members file location (optional override)
# Default: ${DATA_HOME}/web/members.json
# Only set this if you want to use a different location
# MEMBERS_FILE=/custom/path/to/members.json
# ===================
# -------------------
# Webhook Settings
# ===================
# -------------------
# Webhooks forward mesh events to external HTTP endpoints as POST requests
# Webhook for advertisement events (node discovery)
# Events are sent as POST requests with JSON payload
WEBHOOK_ADVERTISEMENT_URL=
WEBHOOK_ADVERTISEMENT_SECRET=
# Webhook for all message events (channel and direct messages)
# Use this for a single endpoint handling all messages
WEBHOOK_MESSAGE_URL=
WEBHOOK_MESSAGE_SECRET=
@@ -119,3 +147,156 @@ WEBHOOK_MESSAGE_SECRET=
WEBHOOK_TIMEOUT=10.0
WEBHOOK_MAX_RETRIES=3
WEBHOOK_RETRY_BACKOFF=2.0
# -------------------
# Data Retention Settings
# -------------------
# Automatic cleanup of old event data (advertisements, messages, telemetry, etc.)
# Enable automatic cleanup of old event data
DATA_RETENTION_ENABLED=true
# Number of days to retain event data
# Events older than this are deleted during cleanup
DATA_RETENTION_DAYS=30
# Hours between automatic cleanup runs
# Applies to both event data and node cleanup
DATA_RETENTION_INTERVAL_HOURS=24
# -------------------
# Node Cleanup Settings
# -------------------
# Automatic removal of inactive nodes
# Enable automatic cleanup of inactive nodes
# Nodes with last_seen=NULL (never seen on network) are NOT removed
NODE_CLEANUP_ENABLED=true
# Remove nodes not seen for this many days (based on last_seen field)
NODE_CLEANUP_DAYS=7
# =============================================================================
# API SETTINGS
# =============================================================================
# REST API for querying data and sending commands
# External API port
API_PORT=8000
# API Keys for authentication
# Generate secure keys for production: openssl rand -hex 32
# Leave empty to disable authentication (not recommended for production)
API_READ_KEY=
API_ADMIN_KEY=
# -------------------
# Prometheus Metrics
# -------------------
# Prometheus metrics endpoint exposed at /metrics on the API service
# Enable Prometheus metrics endpoint
# Default: true
METRICS_ENABLED=true
# Seconds to cache metrics output (reduces database load)
# Default: 60
METRICS_CACHE_TTL=60
# External Prometheus port (when using --profile metrics)
PROMETHEUS_PORT=9090
# External Alertmanager port (when using --profile metrics)
ALERTMANAGER_PORT=9093
# =============================================================================
# WEB DASHBOARD SETTINGS
# =============================================================================
# Web interface for visualizing network status
# External web port
WEB_PORT=8080
# API endpoint URL for the web dashboard
# Default: http://localhost:8000
# API_BASE_URL=http://localhost:8000
# API key for web dashboard queries (optional)
# If API_READ_KEY is set on the API, provide it here
# API_KEY=
# Default theme for the web dashboard (dark or light)
# Users can override via the theme toggle; their preference is saved in localStorage
# Default: dark
# WEB_THEME=dark
# Locale/language for the web dashboard
# Default: en
# Supported: en (see src/meshcore_hub/web/static/locales/ for available translations)
# WEB_LOCALE=en
# Auto-refresh interval in seconds for list pages (nodes, advertisements, messages)
# Set to 0 to disable auto-refresh
# Default: 30
# WEB_AUTO_REFRESH_SECONDS=30
# Enable admin interface at /a/ (requires auth proxy in front)
# Default: false
# WEB_ADMIN_ENABLED=false
# Timezone for displaying dates/times on the web dashboard
# Uses standard IANA timezone names (e.g., America/New_York, Europe/London)
# Default: UTC
TZ=UTC
# Directory containing custom content (pages/, media/)
# Default: ./content
# CONTENT_HOME=./content
# -------------------
# Network Information
# -------------------
# Displayed on the web dashboard homepage
# Network domain name (optional)
# NETWORK_DOMAIN=
# Network display name
NETWORK_NAME=MeshCore Network
# Network location
NETWORK_CITY=
NETWORK_COUNTRY=
# Radio configuration (comma-delimited)
# Format: <profile>,<frequency>,<bandwidth>,<spreading_factor>,<coding_rate>,<tx_power>
# Example: EU/UK Narrow,869.618MHz,62.5kHz,SF8,CR8,22dBm
NETWORK_RADIO_CONFIG=
# Welcome text displayed on the homepage (optional, plain text)
# If not set, a default welcome message is shown
NETWORK_WELCOME_TEXT=
# -------------------
# Feature Flags
# -------------------
# Control which pages are visible in the web dashboard
# Set to false to completely hide a page (nav, routes, sitemap, robots.txt)
# FEATURE_DASHBOARD=true
# FEATURE_NODES=true
# FEATURE_ADVERTISEMENTS=true
# FEATURE_MESSAGES=true
# FEATURE_MAP=true
# FEATURE_MEMBERS=true
# FEATURE_PAGES=true
# -------------------
# Contact Information
# -------------------
# Contact links displayed in the footer
NETWORK_CONTACT_EMAIL=
NETWORK_CONTACT_DISCORD=
NETWORK_CONTACT_GITHUB=
NETWORK_CONTACT_YOUTUBE=

.github/FUNDING.yml vendored Normal file
View File

@@ -0,0 +1 @@
buy_me_a_coffee: jinglemansweep

View File

@@ -1,53 +1,42 @@
name: CI
on:
push:
branches: [main, master]
pull_request:
branches: [main, master]
branches: [main]
paths:
- "src/**"
- "tests/**"
- "alembic/**"
- ".python-version"
- "pyproject.toml"
- ".pre-commit-config.yaml"
- ".github/workflows/ci.yml"
jobs:
lint:
name: Lint
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v6
- name: Set up Python
uses: actions/setup-python@v5
uses: actions/setup-python@v6
with:
python-version: "3.11"
python-version-file: ".python-version"
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install black flake8 mypy
pip install -e ".[dev]"
- name: Check formatting with black
run: black --check src/ tests/
- name: Lint with flake8
run: flake8 src/ tests/
- name: Type check with mypy
run: mypy src/
- name: Run pre-commit
uses: pre-commit/action@v3.0.1
test:
name: Test (Python ${{ matrix.python-version }})
name: Test
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
python-version: ["3.11"]
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v6
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
- name: Set up Python
uses: actions/setup-python@v6
with:
python-version: ${{ matrix.python-version }}
python-version-file: ".python-version"
- name: Install dependencies
run: |
@@ -59,8 +48,8 @@ jobs:
pytest --cov=meshcore_hub --cov-report=xml --cov-report=term-missing
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v4
if: matrix.python-version == '3.11'
uses: codecov/codecov-action@v5
if: always()
with:
files: ./coverage.xml
fail_ci_if_error: false
@@ -71,12 +60,12 @@ jobs:
runs-on: ubuntu-latest
needs: [lint, test]
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v6
- name: Set up Python
uses: actions/setup-python@v5
uses: actions/setup-python@v6
with:
python-version: "3.11"
python-version-file: ".python-version"
- name: Install build tools
run: |
@@ -87,7 +76,7 @@ jobs:
run: python -m build
- name: Upload artifacts
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v7
with:
name: dist
path: dist/

.github/workflows/claude.yml

@@ -0,0 +1,43 @@
name: Claude Code
on:
issue_comment:
types: [created]
pull_request_review_comment:
types: [created]
issues:
types: [opened, assigned]
pull_request_review:
types: [submitted]
jobs:
claude:
if: |
(github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude')) ||
(github.event_name == 'pull_request_review_comment' && contains(github.event.comment.body, '@claude')) ||
(github.event_name == 'pull_request_review' && contains(github.event.review.body, '@claude')) ||
(github.event_name == 'issues' && (contains(github.event.issue.body, '@claude') || contains(github.event.issue.title, '@claude')))
runs-on: ubuntu-latest
permissions:
contents: read
pull-requests: read
issues: read
id-token: write
actions: read # Required for Claude to read CI results on PRs
steps:
- name: Checkout repository
uses: actions/checkout@v6
with:
fetch-depth: 1
- name: Run Claude Code
id: claude
uses: anthropics/claude-code-action@v1
with:
claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
# This is an optional setting that allows Claude to read CI results on PRs
additional_permissions: |
actions: read
# Optional: Give a custom prompt to Claude. If this is not specified, Claude will perform the instructions specified in the comment that tagged it.
# prompt: 'Update the pull request description to include a summary of changes.'
# claude_args: '--allowed-tools Bash(gh pr:*)'

.github/workflows/docker.yml

@@ -2,11 +2,18 @@ name: Docker
on:
push:
branches: [main, master]
branches: [main]
paths:
- "src/**"
- "alembic/**"
- "alembic.ini"
- ".python-version"
- "pyproject.toml"
- "Dockerfile"
- "docker-compose.yml"
- ".github/workflows/docker.yml"
tags:
- "v*"
pull_request:
branches: [main, master]
env:
REGISTRY: ghcr.io
@@ -21,7 +28,10 @@ jobs:
packages: write
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v6
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
@@ -47,19 +57,22 @@ jobs:
type=sha
- name: Build and push Docker image
uses: docker/build-push-action@v5
uses: docker/build-push-action@v6
with:
context: .
file: Dockerfile
platforms: linux/amd64,linux/arm64
push: ${{ github.event_name != 'pull_request' }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
build-args: |
BUILD_VERSION=${{ github.ref_name }}
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Test Docker image
if: github.event_name == 'pull_request'
run: |
docker build -t meshcore-hub-test -f Dockerfile .
docker build -t meshcore-hub-test --build-arg BUILD_VERSION=${{ github.ref_name }} -f Dockerfile .
docker run --rm meshcore-hub-test --version
docker run --rm meshcore-hub-test --help

.github/workflows/issue-triage.yml

@@ -0,0 +1,27 @@
name: Claude Issue Triage
description: Run Claude Code for issue triage in GitHub Actions
on:
issues:
types: [opened]
jobs:
triage-issue:
runs-on: ubuntu-latest
timeout-minutes: 10
permissions:
contents: read
issues: write
steps:
- name: Checkout repository
uses: actions/checkout@v6
with:
fetch-depth: 0
- name: Run Claude Code for Issue Triage
uses: anthropics/claude-code-action@v1
with:
claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
prompt: "/label-issue REPO: ${{ github.repository }} ISSUE_NUMBER: ${{ github.event.issue.number }}"
allowed_non_write_users: "*"
github_token: ${{ secrets.GITHUB_TOKEN }}

.gitignore

@@ -33,6 +33,7 @@ share/python-wheels/
MANIFEST
uv.lock
docker-compose.override.yml
# PyInstaller
# Usually these files are written by a python script from a template

.pre-commit-config.yaml

@@ -1,3 +1,6 @@
default_language_version:
python: python3
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.5.0
@@ -14,7 +17,6 @@ repos:
rev: 24.3.0
hooks:
- id: black
language_version: python3.11
args: ["--line-length=88"]
- repo: https://github.com/pycqa/flake8
@@ -38,3 +40,4 @@ repos:
- fastapi>=0.100.0
- alembic>=1.7.0
- types-paho-mqtt>=1.6.0
- types-PyYAML>=6.0.0

.python-version

@@ -1 +1 @@
3.11
3.14

AGENTS.md

@@ -5,20 +5,26 @@ This document provides context and guidelines for AI coding assistants working o
## Agent Rules
* You MUST use Python (version in `.python-version` file)
* You MUST activate a Python virtual environment in the `venv` directory or create one if it does not exist:
- `ls ./venv` to check if it exists
* You MUST activate a Python virtual environment in the `.venv` directory or create one if it does not exist:
- `ls ./.venv` to check if it exists
- `python -m venv .venv` to create it
* You MUST always activate the virtual environment before running any commands
- `source .venv/bin/activate`
* You MUST install all project dependencies using the `pip install -e ".[dev]"` command
* You MUST install `pre-commit` for quality checks
* Before committing:
- Run tests with `pytest` to ensure recent changes haven't broken anything
- Run **targeted tests** for the components you changed, not the full suite:
- `pytest tests/test_web/` for web-only changes (templates, static JS, web routes)
- `pytest tests/test_api/` for API changes
- `pytest tests/test_collector/` for collector changes
- `pytest tests/test_interface/` for interface/sender/receiver changes
- `pytest tests/test_common/` for common models/schemas/config changes
- Only run the full `pytest` if changes span multiple components
- Run `pre-commit run --all-files` to perform all quality checks
## Project Overview
MeshCore Hub is a Python 3.11+ monorepo for managing and orchestrating MeshCore mesh networks. It consists of five main components:
MeshCore Hub is a Python 3.13+ monorepo for managing and orchestrating MeshCore mesh networks. It consists of five main components:
- **meshcore_interface**: Serial/USB interface to MeshCore companion nodes, publishes/subscribes to MQTT
- **meshcore_collector**: Collects MeshCore events from MQTT and stores them in a database
@@ -37,7 +43,7 @@ MeshCore Hub is a Python 3.11+ monorepo for managing and orchestrating MeshCore
| Category | Technology |
|----------|------------|
| Language | Python 3.11+ |
| Language | Python 3.13+ |
| Package Management | pip with pyproject.toml |
| CLI Framework | Click |
| Configuration | Pydantic Settings |
@@ -46,7 +52,8 @@ MeshCore Hub is a Python 3.11+ monorepo for managing and orchestrating MeshCore
| REST API | FastAPI |
| MQTT Client | paho-mqtt |
| MeshCore Interface | meshcore |
| Templates | Jinja2 |
| Templates | Jinja2 (server), lit-html (SPA) |
| Frontend | ES Modules SPA with client-side routing |
| CSS Framework | Tailwind CSS + DaisyUI |
| Testing | pytest, pytest-asyncio |
| Formatting | black |
@@ -114,17 +121,15 @@ class NodeRead(BaseModel):
### SQLAlchemy Models
```python
from sqlalchemy import String, DateTime, ForeignKey
from sqlalchemy import String, DateTime, Text
from sqlalchemy.orm import Mapped, mapped_column, relationship
from datetime import datetime
from uuid import uuid4
from typing import Optional
from meshcore_hub.common.models.base import Base
from meshcore_hub.common.models.base import Base, TimestampMixin, UUIDMixin
class Node(Base):
class Node(Base, UUIDMixin, TimestampMixin):
__tablename__ = "nodes"
id: Mapped[str] = mapped_column(String(36), primary_key=True, default=lambda: str(uuid4()))
public_key: Mapped[str] = mapped_column(String(64), unique=True, index=True)
name: Mapped[str | None] = mapped_column(String(255), nullable=True)
adv_type: Mapped[str | None] = mapped_column(String(20), nullable=True)
@@ -132,6 +137,18 @@ class Node(Base):
# Relationships
tags: Mapped[list["NodeTag"]] = relationship(back_populates="node", cascade="all, delete-orphan")
class Member(Base, UUIDMixin, TimestampMixin):
"""Network member model - stores info about network operators."""
__tablename__ = "members"
name: Mapped[str] = mapped_column(String(255), nullable=False)
callsign: Mapped[Optional[str]] = mapped_column(String(20), nullable=True)
role: Mapped[Optional[str]] = mapped_column(String(100), nullable=True)
description: Mapped[Optional[str]] = mapped_column(Text, nullable=True)
contact: Mapped[Optional[str]] = mapped_column(String(255), nullable=True)
public_key: Mapped[Optional[str]] = mapped_column(String(64), nullable=True, index=True)
```
### FastAPI Routes
@@ -239,7 +256,12 @@ meshcore-hub/
│ │ ├── mqtt.py # MQTT utilities
│ │ ├── logging.py # Logging config
│ │ ├── models/ # SQLAlchemy models
│ │ │ ├── node.py # Node model
│ │ │ ├── member.py # Network member model
│ │ │ └── ...
│ │ └── schemas/ # Pydantic schemas
│ │ ├── members.py # Member API schemas
│ │ └── ...
│ ├── interface/
│ │ ├── cli.py
│ │ ├── device.py # MeshCore device wrapper
@@ -247,9 +269,11 @@ meshcore-hub/
│ │ ├── receiver.py # RECEIVER mode
│ │ └── sender.py # SENDER mode
│ ├── collector/
│ │ ├── cli.py
│ │ ├── cli.py # Collector CLI with seed commands
│ │ ├── subscriber.py # MQTT subscriber
│ │ ├── tag_import.py # Tag import from JSON
│ │ ├── cleanup.py # Data retention/cleanup service
│ │ ├── tag_import.py # Tag import from YAML
│ │ ├── member_import.py # Member import from YAML
│ │ ├── handlers/ # Event handlers
│ │ └── webhook.py # Webhook dispatcher
│ ├── api/
@@ -257,14 +281,26 @@ meshcore-hub/
│ │ ├── app.py # FastAPI app
│ │ ├── auth.py # Authentication
│ │ ├── dependencies.py
│ │ ├── routes/ # API routes
│ │ └── templates/ # Dashboard HTML
│ │ ├── metrics.py # Prometheus metrics endpoint
│ │ └── routes/ # API routes
│ │ ├── members.py # Member CRUD endpoints
│ │ └── ...
│ └── web/
│ ├── cli.py
│ ├── app.py # FastAPI app
│ ├── routes/ # Page routes
│ ├── templates/ # Jinja2 templates
│ └── static/ # CSS, JS
│ ├── pages.py # Custom markdown page loader
│ ├── templates/ # Jinja2 templates (spa.html shell)
│ └── static/
│ ├── css/app.css # Custom styles
│ └── js/spa/ # SPA frontend (ES modules)
│ ├── app.js # Entry point, route registration
│ ├── router.js # Client-side History API router
│ ├── api.js # API fetch helper
│ ├── components.js # Shared UI components (lit-html)
│ ├── icons.js # SVG icon functions (lit-html)
│ └── pages/ # Page modules (lazy-loaded)
│ ├── home.js, dashboard.js, nodes.js, ...
│ └── admin/ # Admin page modules
├── tests/
│ ├── conftest.py
│ ├── test_common/
@@ -276,22 +312,27 @@ meshcore-hub/
│ ├── env.py
│ └── versions/
├── etc/
│   └── mosquitto.conf # MQTT broker configuration
│   ├── mosquitto.conf # MQTT broker configuration
│ ├── prometheus/ # Prometheus configuration
│ │ ├── prometheus.yml # Scrape and alerting config
│ │ └── alerts.yml # Alert rules
│ └── alertmanager/ # Alertmanager configuration
│ └── alertmanager.yml # Routing and receiver config
├── example/
│   └── data/
│       ├── collector/
│       │   └── tags.json # Example node tags data
│       └── web/
│           └── members.json # Example network members data
│   ├── seed/ # Example seed data files
│   │   ├── node_tags.yaml # Example node tags
│   │   └── members.yaml # Example network members
│   └── content/ # Example custom content
│       ├── pages/ # Example custom pages
│       └── media/ # Example media files
├── seed/ # Seed data directory (SEED_HOME)
│ ├── node_tags.yaml # Node tags for import
│ └── members.yaml # Network members for import
├── data/ # Runtime data (gitignored, DATA_HOME default)
│   ├── collector/ # Collector data
│   │   ├── meshcore.db # SQLite database
│   │   └── tags.json # Node tags for import
│   └── web/ # Web data
│       └── members.json # Network members list
│   └── collector/ # Collector data
│       └── meshcore.db # SQLite database
├── Dockerfile # Docker build configuration
├── docker-compose.yml # Docker Compose services (gitignored)
└── docker-compose.yml.example # Docker Compose template
└── docker-compose.yml # Docker Compose services
```
## MQTT Topic Structure
@@ -324,6 +365,25 @@ Examples:
- JSON columns for flexible data (path_hashes, parsed_data, etc.)
- Foreign keys reference nodes by UUID, not public_key
## Standard Node Tags
Node tags are flexible key-value pairs that allow custom metadata to be attached to nodes. While tags are completely optional and freeform, the following standard tag keys are recommended for consistent use across the web dashboard:
| Tag Key | Description | Usage |
|---------|-------------|-------|
| `name` | Node display name | Used as the primary display name throughout the UI (overrides the advertised name) |
| `description` | Short description | Displayed as supplementary text under the node name |
| `member_id` | Member identifier reference | Links the node to a network member (matches `member_id` in Members table) |
| `lat` | GPS latitude override | Overrides node-reported latitude for map display |
| `lon` | GPS longitude override | Overrides node-reported longitude for map display |
| `elevation` | GPS elevation override | Overrides node-reported elevation |
| `role` | Node role/purpose | Used for website presentation and filtering (e.g., "gateway", "repeater", "sensor") |
**Important Notes:**
- All tags are optional - nodes can function without any tags
- Tag keys are case-sensitive
- The `member_id` tag should reference a valid `member_id` from the Members table
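As an illustration of the standard keys above, a `node_tags.yaml` seed entry might look like this (the exact layout and values here are assumptions; the example file shipped at `example/seed/node_tags.yaml` is the authoritative format):

```yaml
# node_tags.yaml - tags keyed by node public key (illustrative values)
"a1b2c3d4...":        # 64-char hex public key of the node
  name: "Hilltop Repeater"
  description: "Solar-powered repeater on the water tower"
  role: "repeater"
  member_id: "alice"  # must match a member_id in the Members table
  lat: "51.5074"      # overrides node-reported latitude
  lon: "-0.1278"      # overrides node-reported longitude
```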
## Testing Guidelines
### Unit Tests
@@ -400,13 +460,121 @@ async def client(db_session):
5. Add Alembic migration if schema changed
6. Add tests in `tests/test_collector/`
### Adding a New SPA Page
The web dashboard is a Single Page Application. Pages are ES modules loaded by the client-side router.
1. Create a page module in `web/static/js/spa/pages/` (e.g., `my-page.js`)
2. Export an `async function render(container, params, router)` that renders into `container` using `litRender(html\`...\`, container)`
3. Register the route in `web/static/js/spa/app.js` with `router.addRoute('/my-page', pageHandler(pages.myPage))`
4. Add the page title to `updatePageTitle()` in `app.js`
5. Add a nav link in `web/templates/spa.html` (both mobile and desktop menus)
**Key patterns:**
- Import `html`, `litRender`, `nothing` from `../components.js` (re-exports lit-html)
- Use `apiGet()` from `../api.js` for API calls
- For list pages with filters, use the `renderPage()` pattern: render the page header immediately, then re-render with the filter form + results after fetch (keeps the form out of the shell to avoid layout shift from data-dependent filter selects)
- Old page content stays visible until data is ready (navbar spinner indicates loading)
- Use `pageColors` from `components.js` for section-specific colors (reads CSS custom properties from `app.css`)
- Return a cleanup function if the page creates resources (e.g., Leaflet maps, Chart.js instances)
### Internationalization (i18n)
The web dashboard supports internationalization via JSON translation files. The default language is English.
**Translation files location:** `src/meshcore_hub/web/static/locales/`
**Key files:**
- `en.json` - English translations (reference implementation)
- `languages.md` - Comprehensive translation reference guide for translators
**Using translations in JavaScript:**
Import the `t()` function from `components.js`:
```javascript
import { t } from '../components.js';
// Simple translation
const label = t('common.save'); // "Save"
// Translation with variable interpolation
const title = t('common.add_entity', { entity: t('entities.node') }); // "Add Node"
// Composed patterns for consistency
const emptyMsg = t('common.no_entity_found', { entity: t('entities.nodes').toLowerCase() }); // "No nodes found"
```
**Translation architecture:**
1. **Entity-based composition:** Core entity names (`entities.*`) are referenced by composite patterns for consistency
2. **Reusable patterns:** Common UI patterns (`common.*`) use `{{variable}}` interpolation for dynamic content
3. **Separation of concerns:**
- Keys without `_label` suffix = table headers (title case, no colon)
- Keys with `_label` suffix = inline labels (sentence case, with colon)
**When adding/modifying translations:**
1. **Add new keys** to `en.json` following existing patterns:
- Use composition when possible (reference `entities.*` in `common.*` patterns)
- Group related keys by section (e.g., `admin_members.*`, `admin_node_tags.*`)
- Use `{{variable}}` syntax for dynamic content
2. **Update `languages.md`** with:
- Key name, English value, and usage context
- Variable descriptions if using interpolation
- Notes about HTML content or special formatting
3. **Add tests** in `tests/test_common/test_i18n.py`:
- Test new interpolation patterns
- Test required sections if adding new top-level sections
- Test composed patterns with entity references
4. **Run i18n tests:**
```bash
pytest tests/test_common/test_i18n.py -v
```
**Best practices:**
- **Avoid duplication:** Use `common.*` patterns instead of duplicating similar strings
- **Compose with entities:** Reference `entities.*` keys in patterns rather than hardcoding entity names
- **Preserve variables:** Keep `{{variable}}` placeholders unchanged when translating
- **Test composition:** Verify patterns work with all entity types (singular/plural, lowercase/uppercase)
- **Document context:** Always update `languages.md` so translators understand usage
**Example - adding a new entity and patterns:**
```javascript
// 1. Add entity to en.json
"entities": {
"sensor": "Sensor"
}
// 2. Use with existing common patterns
t('common.add_entity', { entity: t('entities.sensor') }) // "Add Sensor"
t('common.no_entity_found', { entity: t('entities.sensors').toLowerCase() }) // "No sensors found"
// 3. Update languages.md with context
// 4. Add test to test_i18n.py
```
**Translation loading:**
The i18n system (`src/meshcore_hub/common/i18n.py`) loads translations on startup:
- Defaults to English (`en`)
- Falls back to English for missing keys
- Returns the key itself if translation not found
For full translation guidelines, see `src/meshcore_hub/web/static/locales/languages.md`.
### Adding a New Database Model
1. Create model in `common/models/`
2. Export in `common/models/__init__.py`
3. Create Alembic migration: `alembic revision --autogenerate -m "description"`
3. Create Alembic migration: `meshcore-hub db revision --autogenerate -m "description"`
4. Review and adjust migration file
5. Test migration: `alembic upgrade head`
5. Test migration: `meshcore-hub db upgrade`
### Running the Development Environment
@@ -428,7 +596,7 @@ pytest
# Run specific component
meshcore-hub api --reload
meshcore-hub collector
meshcore-hub interface --mode receiver --mock
meshcore-hub interface receiver --mock
```
## Environment Variables
@@ -436,26 +604,84 @@ meshcore-hub interface --mode receiver --mock
See [PLAN.md](PLAN.md#configuration-environment-variables) for complete list.
Key variables:
- `DATA_HOME` - Base directory for all service data (default: `./data`)
- `DATA_HOME` - Base directory for runtime data (default: `./data`)
- `SEED_HOME` - Directory containing seed data files (default: `./seed`)
- `CONTENT_HOME` - Directory containing custom content (pages, media) (default: `./content`)
- `MQTT_HOST`, `MQTT_PORT`, `MQTT_PREFIX` - MQTT broker connection
- `DATABASE_URL` - SQLAlchemy database URL (default: `sqlite:///{DATA_HOME}/collector/meshcore.db`)
- `MQTT_TLS` - Enable TLS/SSL for MQTT (default: `false`)
- `API_READ_KEY`, `API_ADMIN_KEY` - API authentication keys
- `WEB_ADMIN_ENABLED` - Enable admin interface at /a/ (default: `false`, requires auth proxy)
- `WEB_THEME` - Default theme for the web dashboard (default: `dark`, options: `dark`, `light`). Users can override via the theme toggle in the navbar, which persists their preference in browser localStorage.
- `WEB_AUTO_REFRESH_SECONDS` - Auto-refresh interval in seconds for list pages (default: `30`, `0` to disable)
- `TZ` - Timezone for web dashboard date/time display (default: `UTC`, e.g., `America/New_York`, `Europe/London`)
- `FEATURE_DASHBOARD`, `FEATURE_NODES`, `FEATURE_ADVERTISEMENTS`, `FEATURE_MESSAGES`, `FEATURE_MAP`, `FEATURE_MEMBERS`, `FEATURE_PAGES` - Feature flags to enable/disable specific web dashboard pages (default: all `true`). Dependencies: Dashboard auto-disables when all of Nodes/Advertisements/Messages are disabled. Map auto-disables when Nodes is disabled.
- `METRICS_ENABLED` - Enable Prometheus metrics endpoint at /metrics (default: `true`)
- `METRICS_CACHE_TTL` - Seconds to cache metrics output (default: `60`)
- `LOG_LEVEL` - Logging verbosity
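The `METRICS_CACHE_TTL` setting implies the rendered metrics text is reused for a fixed number of seconds rather than regenerated (and the database re-queried) on every scrape. A minimal sketch of that behaviour, with names that are assumptions rather than the project's actual API:

```python
import time

class CachedMetrics:
    """Cache an expensive metrics-rendering callable for a fixed TTL."""

    def __init__(self, generate, ttl_seconds: float = 60.0):
        self._generate = generate       # callable returning the metrics text
        self._ttl = ttl_seconds
        self._cached: str | None = None
        self._expires_at = 0.0

    def render(self) -> str:
        now = time.monotonic()
        if self._cached is None or now >= self._expires_at:
            # Regenerate only when the cached copy has expired.
            self._cached = self._generate()
            self._expires_at = now + self._ttl
        return self._cached
```

Within the TTL window, repeated scrapes return the same cached string without invoking the generator again.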
### Data Directory Structure
The database defaults to `sqlite:///{DATA_HOME}/collector/meshcore.db` and does not typically need to be configured.
The `DATA_HOME` environment variable controls where all service data is stored:
### Directory Structure
**Seed Data (`SEED_HOME`)** - Contains initial data files for database seeding:
```
${SEED_HOME}/
├── node_tags.yaml # Node tags (keyed by public_key)
└── members.yaml # Network members list
```
**Custom Content (`CONTENT_HOME`)** - Contains custom pages and media for the web dashboard:
```
${CONTENT_HOME}/
├── pages/ # Custom markdown pages
│ ├── about.md # Example: About page (/pages/about)
│ ├── faq.md # Example: FAQ page (/pages/faq)
│ └── getting-started.md # Example: Getting Started (/pages/getting-started)
└── media/ # Custom media files
└── images/
└── logo.svg # Custom logo (replaces default favicon and navbar/home logo)
```
Pages use YAML frontmatter for metadata:
```markdown
---
title: About Us # Browser tab title and nav link (not rendered on page)
slug: about # URL path (default: filename without .md)
menu_order: 10 # Nav sort order (default: 100, lower = earlier)
---
# About Our Network
Markdown content here (include your own heading)...
```
**Runtime Data (`DATA_HOME`)** - Contains runtime data (gitignored):
```
${DATA_HOME}/
├── collector/
│   ├── meshcore.db # SQLite database
│   └── tags.json # Node tags for import
└── web/
    └── members.json # Network members list
└── collector/
    └── meshcore.db # SQLite database
```
Services automatically create their subdirectories if they don't exist.
### Seeding
The database can be seeded with node tags and network members from YAML files in `SEED_HOME`:
- `node_tags.yaml` - Node tag definitions (keyed by public_key)
- `members.yaml` - Network member definitions
**Important:** Seeding is NOT automatic and must be run explicitly. This prevents seed files from overwriting user changes made via the admin UI.
```bash
# Native CLI
meshcore-hub collector seed
# With Docker Compose
docker compose --profile seed up
```
**Note:** Once the admin UI is enabled (`WEB_ADMIN_ENABLED=true`), tags should be managed through the web interface rather than seed files.
### Webhook Configuration
The collector supports forwarding events to external HTTP endpoints:
@@ -472,6 +698,64 @@ The collector supports forwarding events to external HTTP endpoints:
| `WEBHOOK_MAX_RETRIES` | Max retries on failure (default: 3) |
| `WEBHOOK_RETRY_BACKOFF` | Exponential backoff multiplier (default: 2.0) |
### Data Retention / Cleanup Configuration
The collector supports automatic cleanup of old event data and inactive nodes:
**Event Data Cleanup:**
| Variable | Description |
|----------|-------------|
| `DATA_RETENTION_ENABLED` | Enable automatic event data cleanup (default: true) |
| `DATA_RETENTION_DAYS` | Days to retain event data (default: 30) |
| `DATA_RETENTION_INTERVAL_HOURS` | Hours between cleanup runs (default: 24) |
When enabled, the collector automatically deletes event data older than the retention period:
- Advertisements
- Messages (channel and direct)
- Telemetry
- Trace paths
- Event logs
**Node Cleanup:**
| Variable | Description |
|----------|-------------|
| `NODE_CLEANUP_ENABLED` | Enable automatic cleanup of inactive nodes (default: true) |
| `NODE_CLEANUP_DAYS` | Remove nodes not seen for this many days (default: 7) |
When enabled, the collector automatically removes nodes where:
- `last_seen` is older than the configured number of days
- Nodes with `last_seen=NULL` (never seen on network) are **NOT** removed
- Nodes created via tag import that have never been seen on the mesh are preserved
**Note:** Both event data and node cleanup run on the same schedule (DATA_RETENTION_INTERVAL_HOURS).
**Contact Cleanup (Interface RECEIVER):**
The interface RECEIVER mode can automatically remove stale contacts from the MeshCore companion node's contact database. This prevents the companion node from resyncing old/dead contacts back to the collector, freeing up memory on the device (typically limited to ~100 contacts).
| Variable | Description |
|----------|-------------|
| `CONTACT_CLEANUP_ENABLED` | Enable automatic removal of stale contacts (default: true) |
| `CONTACT_CLEANUP_DAYS` | Remove contacts not advertised for this many days (default: 7) |
When enabled, during each contact sync the receiver checks each contact's `last_advert` timestamp:
- Contacts with `last_advert` older than `CONTACT_CLEANUP_DAYS` are removed from the device
- Stale contacts are not published to MQTT (preventing collector database pollution)
- Contacts without a `last_advert` timestamp are preserved (no removal without data)
This cleanup runs automatically whenever the receiver syncs contacts (on startup and after each advertisement event).
Manual cleanup can be triggered at any time with:
```bash
# Dry run to see what would be deleted
meshcore-hub collector cleanup --retention-days 30 --dry-run
# Live cleanup
meshcore-hub collector cleanup --retention-days 30
```
Webhook payload structure:
```json
{
@@ -486,9 +770,13 @@ Webhook payload structure:
### Common Issues
1. **MQTT Connection Failed**: Check broker is running and `MQTT_HOST`/`MQTT_PORT` are correct
2. **Database Migration Errors**: Ensure `DATABASE_URL` is correct, run `alembic upgrade head`
2. **Database Migration Errors**: Ensure `DATA_HOME` is writable, run `meshcore-hub db upgrade`
3. **Import Errors**: Ensure package is installed with `pip install -e .`
4. **Type Errors**: Run `mypy src/` to check type annotations
4. **Type Errors**: Run `pre-commit run --all-files` to check type annotations and other issues
5. **NixOS greenlet errors**: On NixOS, the pre-built greenlet wheel may fail with `libstdc++.so.6` errors. Rebuild from source:
```bash
pip install --no-binary greenlet greenlet
```
### Debugging
@@ -549,8 +837,23 @@ await mc.start_auto_message_fetching()
On startup, the receiver performs these initialization steps:
1. Set device clock to current Unix timestamp
2. Send a local (non-flood) advertisement
3. Start automatic message fetching
2. Optionally set the device name (if `MESHCORE_DEVICE_NAME` is configured)
3. Send a flood advertisement (broadcasts device name to the mesh)
4. Start automatic message fetching
5. Sync the device's contact database
### Contact Sync Behavior
The receiver syncs the device's contact database in two scenarios:
1. **Startup**: Initial sync when receiver starts
2. **Advertisement Events**: Automatic sync triggered whenever an advertisement is received from the mesh
Since advertisements are typically received every ~20 minutes, contact sync happens automatically without manual intervention. Each contact from the device is published individually to MQTT:
- Topic: `{prefix}/{device_public_key}/event/contact`
- Payload: `{public_key, adv_name, type}`
This ensures the collector's database stays current with all nodes discovered on the mesh network.
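Assembling the per-contact MQTT message from the topic and payload shape described above might look like the following sketch (the helper name is hypothetical; only the topic pattern and payload fields come from the description):

```python
import json

def contact_message(prefix: str, device_public_key: str,
                    contact: dict) -> tuple[str, str]:
    """Build the MQTT topic and JSON payload for one synced contact."""
    topic = f"{prefix}/{device_public_key}/event/contact"
    payload = json.dumps({
        "public_key": contact["public_key"],
        "adv_name": contact.get("adv_name"),
        "type": contact.get("type"),
    })
    return topic, payload
```

Each contact yields one publish, so a full sync is simply a loop over the device's contact list calling this helper and handing the result to the MQTT client.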
## References

Dockerfile

@@ -4,7 +4,7 @@
# =============================================================================
# Stage 1: Builder - Install dependencies and build package
# =============================================================================
FROM python:3.11-slim AS builder
FROM python:3.14-slim AS builder
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1 \
@@ -28,14 +28,18 @@ COPY src/ ./src/
COPY alembic/ ./alembic/
COPY alembic.ini ./
# Install the package
RUN pip install --upgrade pip && \
# Build argument for version (set via CI or manually)
ARG BUILD_VERSION=dev
# Set version in _version.py and install the package
RUN sed -i "s|__version__ = \"dev\"|__version__ = \"${BUILD_VERSION}\"|" src/meshcore_hub/_version.py && \
pip install --upgrade pip && \
pip install .
# =============================================================================
# Stage 2: Runtime - Final production image
# =============================================================================
FROM python:3.11-slim AS runtime
FROM python:3.14-slim AS runtime
# Labels
LABEL org.opencontainers.image.title="MeshCore Hub" \


@@ -481,6 +481,7 @@ ${DATA_HOME}/
| INTERFACE_MODE | RECEIVER | RECEIVER or SENDER |
| SERIAL_PORT | /dev/ttyUSB0 | Serial port path |
| SERIAL_BAUD | 115200 | Baud rate |
| MESHCORE_DEVICE_NAME | *(none)* | Device/node name set on startup |
| MOCK_DEVICE | false | Use mock device |
### Collector

README.md

@@ -1,6 +1,17 @@
# MeshCore Hub
Python 3.11+ platform for managing and orchestrating MeshCore mesh networks.
[![CI](https://github.com/ipnet-mesh/meshcore-hub/actions/workflows/ci.yml/badge.svg)](https://github.com/ipnet-mesh/meshcore-hub/actions/workflows/ci.yml)
[![Docker](https://github.com/ipnet-mesh/meshcore-hub/actions/workflows/docker.yml/badge.svg)](https://github.com/ipnet-mesh/meshcore-hub/actions/workflows/docker.yml)
[![BuyMeACoffee](https://raw.githubusercontent.com/pachadotdev/buymeacoffee-badges/main/bmc-donate-yellow.svg)](https://www.buymeacoffee.com/jinglemansweep)
Python 3.13+ platform for managing and orchestrating MeshCore mesh networks.
![MeshCore Hub Web Dashboard](docs/images/web.png)
> [!IMPORTANT]
> **Help Translate MeshCore Hub** 🌍
>
> We need volunteers to translate the web dashboard! Currently only English is available. Check out the [Translation Guide](src/meshcore_hub/web/static/locales/languages.md) to contribute a language pack. Partial translations welcome!
## Overview
@@ -11,45 +22,49 @@ MeshCore Hub provides a complete solution for monitoring, collecting, and intera
| **Interface** | Connects to MeshCore companion nodes via Serial/USB, bridges events to/from MQTT |
| **Collector** | Subscribes to MQTT events and persists them to a database |
| **API** | REST API for querying data and sending commands to the network |
| **Web Dashboard** | Single Page Application (SPA) for visualizing network status |
## Architecture
```mermaid
flowchart LR
subgraph Devices["MeshCore Devices"]
D1["Device 1"]
D2["Device 2"]
D3["Device 3"]
end
subgraph Interfaces["Interface Layer"]
I1["RECEIVER"]
I2["RECEIVER"]
I3["SENDER"]
end
D1 -->|Serial| I1
D2 -->|Serial| I2
D3 -->|Serial| I3
I1 -->|Publish| MQTT
I2 -->|Publish| MQTT
MQTT -->|Subscribe| I3
MQTT["MQTT Broker"]
subgraph Backend["Backend Services"]
Collector --> Database --> API
end
MQTT --> Collector
API --> Web["Web Dashboard"]
style Devices fill:none,stroke:#0288d1,stroke-width:2px
style Interfaces fill:none,stroke:#f57c00,stroke-width:2px
style Backend fill:none,stroke:#388e3c,stroke-width:2px
style MQTT fill:none,stroke:#7b1fa2,stroke-width:3px
style Collector fill:none,stroke:#388e3c,stroke-width:2px
style Database fill:none,stroke:#c2185b,stroke-width:2px
style API fill:none,stroke:#1976d2,stroke-width:2px
style Web fill:none,stroke:#ffa000,stroke-width:2px
```
## Features
@@ -60,54 +75,144 @@ MeshCore Hub provides a complete solution for monitoring, collecting, and intera
- **Command Dispatch**: Send messages and advertisements via the API
- **Node Tagging**: Add custom metadata to nodes for organization
- **Web Dashboard**: Visualize network status, node locations, and message history
- **Internationalization**: Full i18n support with composable translation patterns
- **Docker Ready**: Single image with all components, easy deployment
## Getting Started
### Simple Self-Hosted Setup
The quickest way to get started is running the entire stack on a single machine with a connected MeshCore device.
**Prerequisites:**
1. Flash the [USB Companion firmware](https://meshcore.dev/) onto a compatible device (e.g., Heltec V3, T-Beam)
2. Connect the device via USB to a machine that supports Docker or Python
**Steps:**
```bash
# Create a directory, download the Docker Compose file and
# example environment configuration file
mkdir meshcore-hub
cd meshcore-hub
wget https://raw.githubusercontent.com/ipnet-mesh/meshcore-hub/refs/heads/main/docker-compose.yml
wget https://raw.githubusercontent.com/ipnet-mesh/meshcore-hub/refs/heads/main/.env.example
# Copy and configure environment
cp .env.example .env
# Edit .env: set SERIAL_PORT to your device (e.g., /dev/ttyUSB0 or /dev/ttyACM0)
# Start the entire stack with local MQTT broker
docker compose --profile mqtt --profile core --profile receiver up -d
# View the web dashboard
open http://localhost:8080
```
This starts all services: MQTT broker, collector, API, web dashboard, and the interface receiver that bridges your MeshCore device to the system.
### Distributed Community Setup
For larger deployments, you can separate receiver nodes from the central infrastructure. This allows multiple community members to contribute receiver coverage while hosting the backend centrally.
```mermaid
flowchart TB
subgraph Community["Community Members"]
R1["Raspberry Pi + MeshCore"]
R2["Raspberry Pi + MeshCore"]
R3["Any Linux + MeshCore"]
end
subgraph Server["Community VPS / Server"]
MQTT["MQTT Broker"]
Collector
API
Web["Web Dashboard (public)"]
MQTT --> Collector --> API
API --> Web
end
R1 -->|MQTT port 1883| MQTT
R2 -->|MQTT port 1883| MQTT
R3 -->|MQTT port 1883| MQTT
style Community fill:none,stroke:#0288d1,stroke-width:2px
style Server fill:none,stroke:#388e3c,stroke-width:2px
style MQTT fill:none,stroke:#7b1fa2,stroke-width:3px
style Collector fill:none,stroke:#388e3c,stroke-width:2px
style API fill:none,stroke:#1976d2,stroke-width:2px
style Web fill:none,stroke:#ffa000,stroke-width:2px
```
**On each receiver node (Raspberry Pi, etc.):**
```bash
# Only run the receiver component
# Configure .env with MQTT_HOST pointing to your central server
MQTT_HOST=your-community-server.com
SERIAL_PORT=/dev/ttyUSB0
docker compose --profile receiver up -d
```
**On the central server (VPS/cloud):**
```bash
# Run the core infrastructure with local MQTT broker
docker compose --profile mqtt --profile core up -d
# Or connect to an existing MQTT broker (set MQTT_HOST in .env)
docker compose --profile core up -d
```
This architecture allows:
- Multiple receivers for better RF coverage across a geographic area
- Centralized data storage and web interface
- Community members to contribute coverage with minimal setup
- The central server to be hosted anywhere with internet access
## Deployment
### Docker Compose Profiles
Docker Compose uses **profiles** to select which services to run:
| Profile | Services | Use Case |
|---------|----------|----------|
| `core` | db-migrate, collector, api, web | Central server infrastructure |
| `receiver` | interface-receiver | Receiver node (events to MQTT) |
| `sender` | interface-sender | Sender node (MQTT to device) |
| `mqtt` | mosquitto broker | Local MQTT broker (optional) |
| `mock` | interface-mock-receiver | Testing without hardware |
| `migrate` | db-migrate | One-time database migration |
| `seed` | seed | One-time seed data import |
| `metrics` | prometheus, alertmanager | Prometheus metrics and alerting |
**Note:** Most deployments connect to an external MQTT broker. Add `--profile mqtt` only if you need a local broker.
```bash
# Create database schema
docker compose --profile migrate run --rm db-migrate
# Seed the database
docker compose --profile seed run --rm seed
# Start core services with local MQTT broker
docker compose --profile mqtt --profile core up -d
# Or connect to external MQTT (configure MQTT_HOST in .env)
docker compose --profile core up -d
# Start just the receiver (connects to MQTT_HOST from .env)
docker compose --profile receiver up -d
# View logs
docker compose logs -f
# Stop services
docker compose down
```
### Serial Device Access
For production with real MeshCore devices, ensure the serial port is accessible:
@@ -123,13 +228,25 @@ SERIAL_PORT=/dev/ttyUSB0
SERIAL_PORT_SENDER=/dev/ttyUSB1 # If using separate sender device
```
**Tip:** If USB devices reconnect as different numeric IDs (e.g., `/dev/ttyUSB0` becomes `/dev/ttyUSB1`), use the stable `/dev/serial/by-id/` path instead:
```bash
# List available devices by ID
ls -la /dev/serial/by-id/
# Example output:
# usb-Silicon_Labs_CP2102N_USB_to_UART_Bridge_abc123-if00-port0 -> ../../ttyUSB0
# Configure using the stable ID
SERIAL_PORT=/dev/serial/by-id/usb-Silicon_Labs_CP2102N_USB_to_UART_Bridge_abc123-if00-port0
```
### Manual Installation
```bash
# Create virtual environment
python -m venv .venv
source .venv/bin/activate # Linux/macOS
# .venv\Scripts\activate # Windows
# Install the package
pip install -e ".[dev]"
@@ -138,7 +255,7 @@ pip install -e ".[dev]"
meshcore-hub db upgrade
# Start components (in separate terminals)
meshcore-hub interface receiver --port /dev/ttyUSB0
meshcore-hub collector
meshcore-hub api
meshcore-hub web
@@ -153,28 +270,30 @@ All components are configured via environment variables. Create a `.env` file or
| Variable | Default | Description |
|----------|---------|-------------|
| `LOG_LEVEL` | `INFO` | Logging level (DEBUG, INFO, WARNING, ERROR) |
| `DATA_HOME` | `./data` | Base directory for runtime data |
| `SEED_HOME` | `./seed` | Directory containing seed data files |
| `MQTT_HOST` | `localhost` | MQTT broker hostname |
| `MQTT_PORT` | `1883` | MQTT broker port |
| `MQTT_USERNAME` | *(none)* | MQTT username (optional) |
| `MQTT_PASSWORD` | *(none)* | MQTT password (optional) |
| `MQTT_PREFIX` | `meshcore` | Topic prefix for all MQTT messages |
| `MQTT_TLS` | `false` | Enable TLS/SSL for MQTT connection |
### Interface Settings
| Variable | Default | Description |
|----------|---------|-------------|
| `INTERFACE_MODE` | `RECEIVER` | Operating mode (RECEIVER or SENDER) |
| `SERIAL_PORT` | `/dev/ttyUSB0` | Serial port for MeshCore device |
| `SERIAL_BAUD` | `115200` | Serial baud rate |
| `MOCK_DEVICE` | `false` | Use mock device for testing |
| `MESHCORE_DEVICE_NAME` | *(none)* | Device/node name set on startup (broadcast in advertisements) |
| `NODE_ADDRESS` | *(none)* | Override for device public key (64-char hex string) |
| `NODE_ADDRESS_SENDER` | *(none)* | Override for sender device public key |
| `CONTACT_CLEANUP_ENABLED` | `true` | Enable automatic removal of stale contacts from companion node |
| `CONTACT_CLEANUP_DAYS` | `7` | Remove contacts not advertised for this many days |
### Collector Settings

| Variable | Default | Description |
|----------|---------|-------------|
| `DATABASE_URL` | `sqlite:///./meshcore.db` | SQLAlchemy database URL |

### Webhooks

The collector can forward certain events to external HTTP endpoints:
| Variable | Default | Description |
|----------|---------|-------------|
@@ -183,7 +302,9 @@ The collector can forward events to external HTTP endpoints:
| `WEBHOOK_MESSAGE_URL` | *(none)* | Webhook URL for all message events |
| `WEBHOOK_MESSAGE_SECRET` | *(none)* | Secret for message webhook |
| `WEBHOOK_CHANNEL_MESSAGE_URL` | *(none)* | Override URL for channel messages only |
| `WEBHOOK_CHANNEL_MESSAGE_SECRET` | *(none)* | Secret for channel message webhook |
| `WEBHOOK_DIRECT_MESSAGE_URL` | *(none)* | Override URL for direct messages only |
| `WEBHOOK_DIRECT_MESSAGE_SECRET` | *(none)* | Secret for direct message webhook |
| `WEBHOOK_TIMEOUT` | `10.0` | Request timeout in seconds |
| `WEBHOOK_MAX_RETRIES` | `3` | Max retry attempts on failure |
| `WEBHOOK_RETRY_BACKOFF` | `2.0` | Exponential backoff multiplier |
@@ -197,6 +318,18 @@ Webhook payload format:
}
```
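The `WEBHOOK_*_SECRET` variables suggest signed deliveries. As an illustration only (the signature header name and scheme are assumptions, not documented here), a receiving endpoint could verify an HMAC-SHA256 digest of the raw request body:

```python
import hashlib
import hmac


def verify_webhook(secret: str, body: bytes, signature: str) -> bool:
    """Compare an HMAC-SHA256 hex digest of the raw body to the received signature."""
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels on the comparison
    return hmac.compare_digest(expected, signature)


# Sender side: sign the payload with the shared secret
body = b'{"event_type": "channel_message", "payload": {}}'
sig = hmac.new(b"my-secret", body, hashlib.sha256).hexdigest()
assert verify_webhook("my-secret", body, sig)
```

Always compare against the raw bytes as received; re-serializing the JSON can change key order or whitespace and break the digest.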
### Data Retention
The collector automatically cleans up old event data and inactive nodes:
| Variable | Default | Description |
|----------|---------|-------------|
| `DATA_RETENTION_ENABLED` | `true` | Enable automatic cleanup of old events |
| `DATA_RETENTION_DAYS` | `30` | Days to retain event data |
| `DATA_RETENTION_INTERVAL_HOURS` | `24` | Hours between cleanup runs |
| `NODE_CLEANUP_ENABLED` | `true` | Enable removal of inactive nodes |
| `NODE_CLEANUP_DAYS` | `7` | Remove nodes not seen for this many days |
### API Settings
| Variable | Default | Description |
@@ -205,6 +338,8 @@ Webhook payload format:
| `API_PORT` | `8000` | API port |
| `API_READ_KEY` | *(none)* | Read-only API key |
| `API_ADMIN_KEY` | *(none)* | Admin API key (required for commands) |
| `METRICS_ENABLED` | `true` | Enable Prometheus metrics endpoint at `/metrics` |
| `METRICS_CACHE_TTL` | `60` | Seconds to cache metrics output (reduces database load) |
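The `/metrics` endpoint serves the standard Prometheus text exposition format, so it can be consumed without a Prometheus server. A minimal sketch (the metric names shown are invented for illustration, not the project's actual gauge names):

```python
def parse_gauges(metrics_text: str) -> dict[str, float]:
    """Parse simple `name value` lines from Prometheus text exposition format,
    skipping comments (# HELP / # TYPE) and labelled series."""
    gauges: dict[str, float] = {}
    for line in metrics_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "{" in line:
            continue
        name, _, value = line.partition(" ")
        try:
            gauges[name] = float(value)
        except ValueError:
            continue
    return gauges


sample = """\
# HELP meshcore_nodes_total Total known nodes
# TYPE meshcore_nodes_total gauge
meshcore_nodes_total 42
meshcore_node_last_seen_timestamp{public_key="abc",role="infra"} 1.7e9
"""
print(parse_gauges(sample))  # {'meshcore_nodes_total': 42.0}
```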
### Web Dashboard Settings
@@ -213,134 +348,189 @@ Webhook payload format:
| `WEB_HOST` | `0.0.0.0` | Web server bind address |
| `WEB_PORT` | `8080` | Web server port |
| `API_BASE_URL` | `http://localhost:8000` | API endpoint URL |
| `API_KEY` | *(none)* | API key for web dashboard queries (optional) |
| `WEB_THEME` | `dark` | Default theme (`dark` or `light`). Users can override via theme toggle in navbar. |
| `WEB_LOCALE` | `en` | Locale/language for the web dashboard (e.g., `en`, `es`, `fr`) |
| `WEB_AUTO_REFRESH_SECONDS` | `30` | Auto-refresh interval in seconds for list pages (0 to disable) |
| `WEB_ADMIN_ENABLED` | `false` | Enable admin interface at /a/ (requires auth proxy) |
| `TZ` | `UTC` | Timezone for displaying dates/times (e.g., `America/New_York`, `Europe/London`) |
| `NETWORK_DOMAIN` | *(none)* | Network domain name (optional) |
| `NETWORK_NAME` | `MeshCore Network` | Display name for the network |
| `NETWORK_CITY` | *(none)* | City where network is located |
| `NETWORK_COUNTRY` | *(none)* | Country code (ISO 3166-1 alpha-2) |
| `NETWORK_LOCATION` | *(none)* | Center coordinates (lat,lon) |
| `NETWORK_RADIO_CONFIG` | *(none)* | Radio config (comma-delimited: profile,freq,bw,sf,cr,power) |
| `NETWORK_WELCOME_TEXT` | *(none)* | Custom welcome text for homepage |
| `NETWORK_CONTACT_EMAIL` | *(none)* | Contact email address |
| `NETWORK_CONTACT_DISCORD` | *(none)* | Discord server link |
| `NETWORK_CONTACT_GITHUB` | *(none)* | GitHub repository URL |
| `NETWORK_CONTACT_YOUTUBE` | *(none)* | YouTube channel URL |
| `CONTENT_HOME` | `./content` | Directory containing custom content (pages/, media/) |
#### Feature Flags

Control which pages are visible in the web dashboard. Disabled features are fully hidden: removed from navigation, return 404 on their routes, and excluded from sitemap/robots.txt.

| Variable | Default | Description |
|----------|---------|-------------|
| `FEATURE_DASHBOARD` | `true` | Enable the `/dashboard` page |
| `FEATURE_NODES` | `true` | Enable the `/nodes` pages (list, detail, short links) |
| `FEATURE_ADVERTISEMENTS` | `true` | Enable the `/advertisements` page |
| `FEATURE_MESSAGES` | `true` | Enable the `/messages` page |
| `FEATURE_MAP` | `true` | Enable the `/map` page and `/map/data` endpoint |
| `FEATURE_MEMBERS` | `true` | Enable the `/members` page |
| `FEATURE_PAGES` | `true` | Enable custom markdown pages |

**Dependencies:** Dashboard auto-disables when all of Nodes/Advertisements/Messages are disabled. Map auto-disables when Nodes is disabled.

### Custom Content

The web dashboard supports custom content including markdown pages and media files. Content is organized in subdirectories:

```
content/
├── pages/ # Custom markdown pages
│ └── about.md
└── media/ # Custom media files
└── images/
└── logo.svg # Custom logo (replaces favicon and navbar/home logo)
```
**Setup:**
```bash
# Create content directory structure
mkdir -p content/pages content/media
# Create a custom page
cat > content/pages/about.md << 'EOF'
---
title: About Us
slug: about
menu_order: 10
---
# About Our Network
Welcome to our MeshCore mesh network!
## Getting Started
1. Get a compatible LoRa device
2. Flash MeshCore firmware
3. Configure your radio settings
EOF
```
**Frontmatter fields:**
| Field | Default | Description |
|-------|---------|-------------|
| `title` | Filename titlecased | Browser tab title and navigation link text (not rendered on page) |
| `slug` | Filename without `.md` | URL path (e.g., `about` → `/pages/about`) |
| `menu_order` | `100` | Sort order in navigation (lower = earlier) |
The markdown content is rendered as-is, so include your own `# Heading` if desired.
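The frontmatter defaults above can be sketched as a small helper (hypothetical, not the project's actual loader; the hyphen-to-space titlecasing is an assumption):

```python
from pathlib import Path


def page_defaults(filename: str, frontmatter: dict) -> dict:
    """Fill in the documented defaults: title from the filename (titlecased),
    slug from the filename without `.md`, menu_order 100."""
    stem = Path(filename).stem
    return {
        "title": frontmatter.get("title", stem.replace("-", " ").title()),
        "slug": frontmatter.get("slug", stem),
        "menu_order": frontmatter.get("menu_order", 100),
    }


print(page_defaults("getting-started.md", {}))
# {'title': 'Getting Started', 'slug': 'getting-started', 'menu_order': 100}
```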
Pages automatically appear in the navigation menu and sitemap. With Docker, mount the content directory:
```yaml
# docker-compose.yml (already configured)
volumes:
- ${CONTENT_HOME:-./content}:/content:ro
environment:
- CONTENT_HOME=/content
```
## Seed Data
The database can be seeded with node tags and network members from YAML files in the `SEED_HOME` directory (default: `./seed`).
### Running the Seed Process
Seeding is a separate process and must be run explicitly:
```bash
docker compose --profile seed up
```
This imports data from the following files (if they exist):
- `{SEED_HOME}/node_tags.yaml` - Node tag definitions
- `{SEED_HOME}/members.yaml` - Network member definitions
### Directory Structure
```
seed/ # SEED_HOME (seed data files)
├── node_tags.yaml # Node tags for import
└── members.yaml # Network members for import
data/ # DATA_HOME (runtime data)
└── collector/
└── meshcore.db # SQLite database
```
Example seed files are provided in `example/seed/`.
### Node Tags
Node tags allow you to attach custom metadata to nodes (e.g., location, role, owner). Tags are stored in the database and returned with node data via the API.
#### Node Tags YAML Format

Tags are keyed by public key in YAML format:
```yaml
# Each key is a 64-character hex public key
0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef:
  name: Gateway Node
  description: Main network gateway
  role: gateway
  lat: 37.7749
  lon: -122.4194
  member_id: alice

fedcba9876543210fedcba9876543210fedcba9876543210fedcba9876543210:
  name: Oakland Repeater
  elevation: 150
```

Tag values can be:

- **YAML primitives** (auto-detected type): strings, numbers, booleans
- **Explicit type** (when you need to force a specific type):

```yaml
altitude:
  value: "150"
  type: number
```

Supported types: `string`, `number`, `boolean`
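The explicit-type coercion described above could look like the following sketch (a hypothetical helper for illustration, not the project's actual seed importer):

```python
def coerce_tag(value, type_hint=None):
    """Coerce a seed tag value: YAML primitives pass through with their parsed
    type; an explicit {value, type} mapping forces string/number/boolean."""
    if isinstance(value, dict) and "value" in value:
        raw, hint = value["value"], value.get("type", "string")
    else:
        raw, hint = value, type_hint
    if hint == "number":
        return float(raw)
    if hint == "boolean":
        return str(raw).lower() in ("true", "1", "yes")
    if hint == "string":
        return str(raw)
    return raw  # auto-detected YAML type, unchanged


print(coerce_tag({"value": "150", "type": "number"}))  # 150.0
```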
### Network Members
Network members represent the people operating nodes in your network. Members can optionally be linked to nodes via their public key.
#### Members YAML Format
```yaml
- member_id: walshie86
name: Walshie
callsign: Walshie86
role: member
description: IPNet Member
- member_id: craig
name: Craig
callsign: M7XCN
role: member
description: IPNet Member
```
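Validating member entries before seeding catches typos early. A minimal sketch operating on already-parsed YAML records (hypothetical helper; field names taken from the table below):

```python
REQUIRED = ("member_id", "name")
OPTIONAL = ("callsign", "role", "description", "contact", "public_key")


def validate_member(record: dict) -> list[str]:
    """Return a list of problems with a members.yaml entry (empty = valid)."""
    problems = [f"missing required field: {f}" for f in REQUIRED if f not in record]
    pk = record.get("public_key")
    if pk is not None and (
        len(pk) != 64 or any(c not in "0123456789abcdef" for c in pk.lower())
    ):
        problems.append("public_key must be a 64-character hex string")
    unknown = set(record) - set(REQUIRED) - set(OPTIONAL)
    problems += [f"unknown field: {f}" for f in sorted(unknown)]
    return problems


print(validate_member({"member_id": "craig", "name": "Craig", "callsign": "M7XCN"}))  # []
```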
| Field | Required | Description |
|-------|----------|-------------|
| `member_id` | Yes | Unique identifier for the member |
| `name` | Yes | Member's display name |
| `callsign` | No | Amateur radio callsign |
| `role` | No | Member's role in the network |
| `description` | No | Additional description |
| `contact` | No | Contact information |
| `public_key` | No | Associated node public key (64-char hex) |
## API Documentation
@@ -354,6 +544,7 @@ Health check endpoints are also available:
- **Health**: http://localhost:8000/health
- **Ready**: http://localhost:8000/health/ready (includes database check)
- **Metrics**: http://localhost:8000/metrics (Prometheus format)
### Authentication
@@ -377,15 +568,21 @@ curl -X POST \
|--------|----------|-------------|
| GET | `/api/v1/nodes` | List all known nodes |
| GET | `/api/v1/nodes/{public_key}` | Get node details |
| GET | `/api/v1/nodes/prefix/{prefix}` | Get node by public key prefix |
| GET | `/api/v1/nodes/{public_key}/tags` | Get node tags |
| POST | `/api/v1/nodes/{public_key}/tags` | Create node tag |
| GET | `/api/v1/messages` | List messages with filters |
| GET | `/api/v1/advertisements` | List advertisements |
| GET | `/api/v1/telemetry` | List telemetry data |
| GET | `/api/v1/trace-paths` | List trace paths |
| GET | `/api/v1/members` | List network members |
| POST | `/api/v1/commands/send-message` | Send direct message |
| POST | `/api/v1/commands/send-channel-message` | Send channel message |
| POST | `/api/v1/commands/send-advertisement` | Send advertisement |
| GET | `/api/v1/dashboard/stats` | Get network statistics |
| GET | `/api/v1/dashboard/activity` | Get daily advertisement activity |
| GET | `/api/v1/dashboard/message-activity` | Get daily message activity |
| GET | `/api/v1/dashboard/node-count` | Get cumulative node count history |
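Command endpoints require the admin key as a bearer token. A stdlib-only sketch of building such a request (the request body fields shown are illustrative assumptions, not the documented schema):

```python
import json
import urllib.request


def build_command_request(
    base_url: str, admin_key: str, path: str, body: dict
) -> urllib.request.Request:
    """Build an authenticated POST for a command endpoint (admin key required)."""
    return urllib.request.Request(
        url=f"{base_url}{path}",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {admin_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_command_request(
    "http://localhost:8000",
    "my-admin-key",
    "/api/v1/commands/send-channel-message",
    {"channel": 0, "text": "hello mesh"},  # illustrative fields
)
# urllib.request.urlopen(req) would dispatch it against a running API
```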
## Development
@@ -393,7 +590,7 @@ curl -X POST \
```bash
# Clone and setup
git clone https://github.com/ipnet-mesh/meshcore-hub.git
cd meshcore-hub
python -m venv .venv
source .venv/bin/activate
@@ -422,14 +619,8 @@ pytest -k "test_list"
### Code Quality
```bash
# Run all code quality checks (formatting, linting, type checking)
pre-commit run --all-files
```
### Creating Database Migrations
@@ -455,15 +646,30 @@ meshcore-hub/
│ ├── collector/ # MQTT event collector
│ ├── api/ # REST API
│ └── web/ # Web dashboard
│ ├── templates/ # Jinja2 templates (SPA shell)
│ └── static/
│ ├── js/spa/ # SPA frontend (ES modules, lit-html)
│ └── locales/ # Translation files (en.json, languages.md)
├── tests/ # Test suite
├── alembic/ # Database migrations
├── etc/                     # Configuration files (MQTT, Prometheus, Alertmanager)
├── example/                 # Example files for reference
│   ├── seed/                # Example seed data files
│   │   ├── node_tags.yaml   # Example node tags
│   │   └── members.yaml     # Example network members
│   └── content/             # Example custom content
│       ├── pages/           # Example custom pages
│       │   └── join.md      # Example join page
│       └── media/           # Example media files
│           └── images/      # Custom images
├── seed/                    # Seed data directory (SEED_HOME, copy from example/seed/)
├── content/                 # Custom content directory (CONTENT_HOME, optional)
│   ├── pages/               # Custom markdown pages
│   └── media/               # Custom media files
│       └── images/          # Custom images (logo.svg replaces default logo)
├── data/                    # Runtime data directory (DATA_HOME, created at runtime)
├── Dockerfile               # Docker build configuration
├── docker-compose.yml       # Docker Compose services
├── PROMPT.md # Project specification
├── SCHEMAS.md # Event schema documentation
├── PLAN.md # Implementation plan
@@ -484,14 +690,16 @@ meshcore-hub/
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Make your changes
4. Run tests and quality checks (`pytest && pre-commit run --all-files`)
5. Commit your changes (`git commit -m 'Add amazing feature'`)
6. Push to the branch (`git push origin feature/amazing-feature`)
7. Open a Pull Request
## License
This project is licensed under the GNU General Public License v3.0 or later (GPL-3.0-or-later). See [LICENSE](LICENSE) for details.
## Acknowledgments


@@ -0,0 +1,114 @@
"""Add member_nodes association table
Revision ID: 002
Revises: 001
Create Date: 2024-12-05
"""
from typing import Sequence, Union
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision: str = "002"
down_revision: Union[str, None] = "001"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# Create member_nodes table
op.create_table(
"member_nodes",
sa.Column("id", sa.String(), nullable=False),
sa.Column("member_id", sa.String(36), nullable=False),
sa.Column("public_key", sa.String(64), nullable=False),
sa.Column("node_role", sa.String(50), nullable=True),
sa.Column(
"created_at",
sa.DateTime(timezone=True),
server_default=sa.func.now(),
nullable=False,
),
sa.Column(
"updated_at",
sa.DateTime(timezone=True),
server_default=sa.func.now(),
nullable=False,
),
sa.ForeignKeyConstraint(["member_id"], ["members.id"], ondelete="CASCADE"),
sa.PrimaryKeyConstraint("id"),
)
op.create_index("ix_member_nodes_member_id", "member_nodes", ["member_id"])
op.create_index("ix_member_nodes_public_key", "member_nodes", ["public_key"])
op.create_index(
"ix_member_nodes_member_public_key",
"member_nodes",
["member_id", "public_key"],
)
# Migrate existing public_key data from members to member_nodes
# Get all members with a public_key
connection = op.get_bind()
members_with_keys = connection.execute(
sa.text("SELECT id, public_key FROM members WHERE public_key IS NOT NULL")
).fetchall()
# Insert into member_nodes, generating a UUID primary key for each row
import uuid

for member_id, public_key in members_with_keys:
node_id = str(uuid.uuid4())
connection.execute(
sa.text(
"""
INSERT INTO member_nodes (id, member_id, public_key, created_at, updated_at)
VALUES (:id, :member_id, :public_key, CURRENT_TIMESTAMP, CURRENT_TIMESTAMP)
"""
),
{"id": node_id, "member_id": member_id, "public_key": public_key},
)
# Drop the public_key column from members
op.drop_index("ix_members_public_key", table_name="members")
op.drop_column("members", "public_key")
def downgrade() -> None:
# Add public_key column back to members
op.add_column(
"members",
sa.Column("public_key", sa.String(64), nullable=True),
)
op.create_index("ix_members_public_key", "members", ["public_key"])
# Migrate data back - take the first node for each member
connection = op.get_bind()
member_nodes = connection.execute(
sa.text(
"""
SELECT DISTINCT member_id, public_key
FROM member_nodes
WHERE (member_id, created_at) IN (
SELECT member_id, MIN(created_at)
FROM member_nodes
GROUP BY member_id
)
"""
)
).fetchall()
for member_id, public_key in member_nodes:
connection.execute(
sa.text(
"UPDATE members SET public_key = :public_key WHERE id = :member_id"
),
{"public_key": public_key, "member_id": member_id},
)
# Drop member_nodes table
op.drop_table("member_nodes")


@@ -0,0 +1,67 @@
"""Add event_hash column to event tables for deduplication
Revision ID: 003
Revises: 002
Create Date: 2024-12-06
"""
from typing import Sequence, Union
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision: str = "003"
down_revision: Union[str, None] = "002"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# Add event_hash column to messages table
op.add_column(
"messages",
sa.Column("event_hash", sa.String(32), nullable=True),
)
op.create_index("ix_messages_event_hash", "messages", ["event_hash"])
# Add event_hash column to advertisements table
op.add_column(
"advertisements",
sa.Column("event_hash", sa.String(32), nullable=True),
)
op.create_index("ix_advertisements_event_hash", "advertisements", ["event_hash"])
# Add event_hash column to trace_paths table
op.add_column(
"trace_paths",
sa.Column("event_hash", sa.String(32), nullable=True),
)
op.create_index("ix_trace_paths_event_hash", "trace_paths", ["event_hash"])
# Add event_hash column to telemetry table
op.add_column(
"telemetry",
sa.Column("event_hash", sa.String(32), nullable=True),
)
op.create_index("ix_telemetry_event_hash", "telemetry", ["event_hash"])
def downgrade() -> None:
# Remove event_hash from telemetry
op.drop_index("ix_telemetry_event_hash", table_name="telemetry")
op.drop_column("telemetry", "event_hash")
# Remove event_hash from trace_paths
op.drop_index("ix_trace_paths_event_hash", table_name="trace_paths")
op.drop_column("trace_paths", "event_hash")
# Remove event_hash from advertisements
op.drop_index("ix_advertisements_event_hash", table_name="advertisements")
op.drop_column("advertisements", "event_hash")
# Remove event_hash from messages
op.drop_index("ix_messages_event_hash", table_name="messages")
op.drop_column("messages", "event_hash")
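The `String(32)` width of `event_hash` suggests a 32-character hex digest. How the key is actually derived is not shown in this migration, so the following is only a plausible sketch of deduplicating by hashing an event's canonical JSON:

```python
import hashlib
import json


def event_hash(event: dict) -> str:
    """Derive a 32-character dedup key from an event's canonical JSON.
    (Illustrative: the project's real key derivation is not shown here.)"""
    # sort_keys + compact separators make the serialization order-independent
    canonical = json.dumps(event, sort_keys=True, separators=(",", ":"))
    return hashlib.md5(canonical.encode()).hexdigest()


a = event_hash({"type": "advertisement", "public_key": "abc", "ts": 1700000000})
b = event_hash({"ts": 1700000000, "public_key": "abc", "type": "advertisement"})
assert a == b and len(a) == 32  # key order doesn't matter; fits String(32)
```

Two receivers reporting the same event then produce the same hash, which is what the `event_receivers` junction table in the next migration joins on.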


@@ -0,0 +1,63 @@
"""Add event_receivers junction table for multi-receiver tracking
Revision ID: 004
Revises: 003
Create Date: 2024-12-06
"""
from typing import Sequence, Union
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision: str = "004"
down_revision: Union[str, None] = "003"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
op.create_table(
"event_receivers",
sa.Column("id", sa.String(36), primary_key=True),
sa.Column("event_type", sa.String(20), nullable=False),
sa.Column("event_hash", sa.String(32), nullable=False),
sa.Column(
"receiver_node_id",
sa.String(36),
sa.ForeignKey("nodes.id", ondelete="CASCADE"),
nullable=False,
),
sa.Column("snr", sa.Float, nullable=True),
sa.Column("received_at", sa.DateTime(timezone=True), nullable=False),
sa.Column("created_at", sa.DateTime(timezone=True), nullable=False),
sa.Column("updated_at", sa.DateTime(timezone=True), nullable=False),
sa.UniqueConstraint(
"event_hash", "receiver_node_id", name="uq_event_receivers_hash_node"
),
)
op.create_index(
"ix_event_receivers_event_hash",
"event_receivers",
["event_hash"],
)
op.create_index(
"ix_event_receivers_receiver_node_id",
"event_receivers",
["receiver_node_id"],
)
op.create_index(
"ix_event_receivers_type_hash",
"event_receivers",
["event_type", "event_hash"],
)
def downgrade() -> None:
op.drop_index("ix_event_receivers_type_hash", table_name="event_receivers")
op.drop_index("ix_event_receivers_receiver_node_id", table_name="event_receivers")
op.drop_index("ix_event_receivers_event_hash", table_name="event_receivers")
op.drop_table("event_receivers")

View File
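The `uq_event_receivers_hash_node` constraint above is what makes multi-receiver tracking safe to replay. A minimal sketch (plain `sqlite3` with simplified columns — not the project's actual ingest code) showing the effect:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """
    CREATE TABLE event_receivers (
        id TEXT PRIMARY KEY,
        event_hash TEXT NOT NULL,
        receiver_node_id TEXT NOT NULL,
        UNIQUE (event_hash, receiver_node_id)
    )
    """
)

def record_receiver(row_id: str, event_hash: str, node_id: str) -> None:
    # INSERT OR IGNORE leans on the unique constraint: a replayed
    # event for the same (event_hash, receiver) pair becomes a no-op.
    conn.execute(
        "INSERT OR IGNORE INTO event_receivers"
        " (id, event_hash, receiver_node_id) VALUES (?, ?, ?)",
        (row_id, event_hash, node_id),
    )

record_receiver("a", "hash1", "node1")
record_receiver("b", "hash1", "node1")  # duplicate pair, ignored
record_receiver("c", "hash1", "node2")  # new receiver, kept

count = conn.execute("SELECT COUNT(*) FROM event_receivers").fetchone()[0]
print(count)  # prints 2
```

The same idea applies at the ORM layer, but the constraint is what guarantees correctness under concurrent inserts.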

@@ -0,0 +1,126 @@
"""Make event_hash columns unique for race condition prevention
Revision ID: 005
Revises: 004
Create Date: 2024-12-06
"""
from typing import Sequence, Union
from alembic import op
from sqlalchemy import inspect
# revision identifiers, used by Alembic.
revision: str = "005"
down_revision: Union[str, None] = "004"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def _index_exists(table_name: str, index_name: str) -> bool:
"""Check if an index exists on a table."""
bind = op.get_bind()
inspector = inspect(bind)
indexes = inspector.get_indexes(table_name)
return any(idx["name"] == index_name for idx in indexes)
def _has_unique_on_column(table_name: str, column_name: str) -> bool:
"""Check if a unique constraint or unique index exists on a column."""
bind = op.get_bind()
inspector = inspect(bind)
# Check unique constraints
uniques = inspector.get_unique_constraints(table_name)
for uq in uniques:
if column_name in uq.get("column_names", []):
return True
# Also check indexes (SQLite may create unique index instead of constraint)
indexes = inspector.get_indexes(table_name)
for idx in indexes:
if idx.get("unique") and column_name in idx.get("column_names", []):
return True
return False
def upgrade() -> None:
# Convert non-unique indexes to unique indexes for race condition prevention
# Note: in SQLite unique indexes, NULLs are treated as distinct,
# so multiple rows with a NULL event_hash remain allowed
# SQLite doesn't support ALTER TABLE ADD CONSTRAINT, so we use unique indexes
# Messages
if _index_exists("messages", "ix_messages_event_hash"):
op.drop_index("ix_messages_event_hash", table_name="messages")
if not _has_unique_on_column("messages", "event_hash"):
op.create_index(
"ix_messages_event_hash_unique",
"messages",
["event_hash"],
unique=True,
)
# Advertisements
if _index_exists("advertisements", "ix_advertisements_event_hash"):
op.drop_index("ix_advertisements_event_hash", table_name="advertisements")
if not _has_unique_on_column("advertisements", "event_hash"):
op.create_index(
"ix_advertisements_event_hash_unique",
"advertisements",
["event_hash"],
unique=True,
)
# Trace paths
if _index_exists("trace_paths", "ix_trace_paths_event_hash"):
op.drop_index("ix_trace_paths_event_hash", table_name="trace_paths")
if not _has_unique_on_column("trace_paths", "event_hash"):
op.create_index(
"ix_trace_paths_event_hash_unique",
"trace_paths",
["event_hash"],
unique=True,
)
# Telemetry
if _index_exists("telemetry", "ix_telemetry_event_hash"):
op.drop_index("ix_telemetry_event_hash", table_name="telemetry")
if not _has_unique_on_column("telemetry", "event_hash"):
op.create_index(
"ix_telemetry_event_hash_unique",
"telemetry",
["event_hash"],
unique=True,
)
def downgrade() -> None:
# Restore non-unique indexes
# Telemetry
if _index_exists("telemetry", "ix_telemetry_event_hash_unique"):
op.drop_index("ix_telemetry_event_hash_unique", table_name="telemetry")
if not _index_exists("telemetry", "ix_telemetry_event_hash"):
op.create_index("ix_telemetry_event_hash", "telemetry", ["event_hash"])
# Trace paths
if _index_exists("trace_paths", "ix_trace_paths_event_hash_unique"):
op.drop_index("ix_trace_paths_event_hash_unique", table_name="trace_paths")
if not _index_exists("trace_paths", "ix_trace_paths_event_hash"):
op.create_index("ix_trace_paths_event_hash", "trace_paths", ["event_hash"])
# Advertisements
if _index_exists("advertisements", "ix_advertisements_event_hash_unique"):
op.drop_index(
"ix_advertisements_event_hash_unique", table_name="advertisements"
)
if not _index_exists("advertisements", "ix_advertisements_event_hash"):
op.create_index(
"ix_advertisements_event_hash", "advertisements", ["event_hash"]
)
# Messages
if _index_exists("messages", "ix_messages_event_hash_unique"):
op.drop_index("ix_messages_event_hash_unique", table_name="messages")
if not _index_exists("messages", "ix_messages_event_hash"):
op.create_index("ix_messages_event_hash", "messages", ["event_hash"])

View File
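Migration 005's `_has_unique_on_column` guard checks indexes as well as constraints because SQLite backs a `UNIQUE` constraint with an auto-created unique index, which some introspection paths report only on the index side. A hedged, stdlib-only sketch of that behaviour (the migration itself uses SQLAlchemy's inspector rather than raw PRAGMAs):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (event_hash TEXT UNIQUE)")
conn.execute("CREATE INDEX ix_extra ON messages (event_hash)")

# PRAGMA index_list reports both the explicit index and the
# auto-created unique index backing the UNIQUE constraint.
# Row shape: (seq, name, unique, origin, partial)
indexes = conn.execute("PRAGMA index_list(messages)").fetchall()
unique_flags = {row[1]: bool(row[2]) for row in indexes}
print(unique_flags)
```

Checking only declared constraints would miss the `sqlite_autoindex_*` entry, which is exactly the situation the idempotent guards in the migration are written to handle.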

@@ -0,0 +1,39 @@
"""Make Node.last_seen nullable
Revision ID: 0b944542ccd8
Revises: 005
Create Date: 2025-12-08 00:07:49.891245+00:00
"""
from typing import Sequence, Union
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision: str = "0b944542ccd8"
down_revision: Union[str, None] = "005"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
# Make Node.last_seen nullable since nodes from contact sync
# haven't actually been "seen" on the mesh yet
with op.batch_alter_table("nodes", schema=None) as batch_op:
batch_op.alter_column("last_seen", existing_type=sa.DATETIME(), nullable=True)
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
# Revert Node.last_seen to non-nullable
# Note: This will fail if there are NULL values in last_seen
with op.batch_alter_table("nodes", schema=None) as batch_op:
batch_op.alter_column("last_seen", existing_type=sa.DATETIME(), nullable=False)
# ### end Alembic commands ###

View File

@@ -0,0 +1,111 @@
"""Add member_id field to members table
Revision ID: 03b9b2451bd9
Revises: 0b944542ccd8
Create Date: 2025-12-08 14:34:30.337799+00:00
"""
from typing import Sequence, Union
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision: str = "03b9b2451bd9"
down_revision: Union[str, None] = "0b944542ccd8"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table("advertisements", schema=None) as batch_op:
batch_op.drop_index(batch_op.f("ix_advertisements_event_hash_unique"))
batch_op.create_unique_constraint(
"uq_advertisements_event_hash", ["event_hash"]
)
with op.batch_alter_table("members", schema=None) as batch_op:
# Add member_id as nullable first to handle existing data
batch_op.add_column(
sa.Column("member_id", sa.String(length=100), nullable=True)
)
# Generate member_id for existing members based on their name
# Convert name to lowercase and replace spaces with underscores
connection = op.get_bind()
connection.execute(
sa.text(
"UPDATE members SET member_id = LOWER(REPLACE(name, ' ', '_')) WHERE member_id IS NULL"
)
)
with op.batch_alter_table("members", schema=None) as batch_op:
# Now make it non-nullable and add unique index
batch_op.alter_column("member_id", existing_type=sa.String(length=100), nullable=False)
batch_op.drop_index(batch_op.f("ix_members_name"))
batch_op.create_index(
batch_op.f("ix_members_member_id"), ["member_id"], unique=True
)
with op.batch_alter_table("messages", schema=None) as batch_op:
batch_op.drop_index(batch_op.f("ix_messages_event_hash_unique"))
batch_op.create_unique_constraint("uq_messages_event_hash", ["event_hash"])
with op.batch_alter_table("nodes", schema=None) as batch_op:
batch_op.drop_index(batch_op.f("ix_nodes_public_key"))
batch_op.create_index(
batch_op.f("ix_nodes_public_key"), ["public_key"], unique=True
)
with op.batch_alter_table("telemetry", schema=None) as batch_op:
batch_op.drop_index(batch_op.f("ix_telemetry_event_hash_unique"))
batch_op.create_unique_constraint("uq_telemetry_event_hash", ["event_hash"])
with op.batch_alter_table("trace_paths", schema=None) as batch_op:
batch_op.drop_index(batch_op.f("ix_trace_paths_event_hash_unique"))
batch_op.create_unique_constraint("uq_trace_paths_event_hash", ["event_hash"])
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table("trace_paths", schema=None) as batch_op:
batch_op.drop_constraint("uq_trace_paths_event_hash", type_="unique")
batch_op.create_index(
batch_op.f("ix_trace_paths_event_hash_unique"), ["event_hash"], unique=True
)
with op.batch_alter_table("telemetry", schema=None) as batch_op:
batch_op.drop_constraint("uq_telemetry_event_hash", type_="unique")
batch_op.create_index(
batch_op.f("ix_telemetry_event_hash_unique"), ["event_hash"], unique=True
)
with op.batch_alter_table("nodes", schema=None) as batch_op:
batch_op.drop_index(batch_op.f("ix_nodes_public_key"))
batch_op.create_index(
batch_op.f("ix_nodes_public_key"), ["public_key"], unique=False
)
with op.batch_alter_table("messages", schema=None) as batch_op:
batch_op.drop_constraint("uq_messages_event_hash", type_="unique")
batch_op.create_index(
batch_op.f("ix_messages_event_hash_unique"), ["event_hash"], unique=True
)
with op.batch_alter_table("members", schema=None) as batch_op:
batch_op.drop_index(batch_op.f("ix_members_member_id"))
batch_op.create_index(batch_op.f("ix_members_name"), ["name"], unique=False)
batch_op.drop_column("member_id")
with op.batch_alter_table("advertisements", schema=None) as batch_op:
batch_op.drop_constraint("uq_advertisements_event_hash", type_="unique")
batch_op.create_index(
batch_op.f("ix_advertisements_event_hash_unique"), ["event_hash"], unique=True
)
# ### end Alembic commands ###

View File
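The backfill in the migration above derives `member_id` with `LOWER(REPLACE(name, ' ', '_'))`. A Python equivalent, useful when generating IDs outside SQL (note one hedge: SQLite's `LOWER()` is ASCII-only by default, while `str.lower()` is Unicode-aware, so the two can diverge for non-ASCII names):

```python
def member_id_from_name(name: str) -> str:
    # Mirrors the SQL backfill in migration 03b9b2451bd9:
    # LOWER(REPLACE(name, ' ', '_'))
    return name.lower().replace(" ", "_")

print(member_id_from_name("Louis King"))  # prints louis_king
```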

@@ -0,0 +1,57 @@
"""Remove member_nodes table
Revision ID: aa1162502616
Revises: 03b9b2451bd9
Create Date: 2025-12-08 15:04:37.260923+00:00
"""
from typing import Sequence, Union
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision: str = "aa1162502616"
down_revision: Union[str, None] = "03b9b2451bd9"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# Drop the member_nodes table
# Nodes are now associated with members via a 'member_id' tag on the node
op.drop_table("member_nodes")
def downgrade() -> None:
# Recreate the member_nodes table if needed for rollback
op.create_table(
"member_nodes",
sa.Column("id", sa.String(length=36), nullable=False),
sa.Column("member_id", sa.String(length=36), nullable=False),
sa.Column("public_key", sa.String(length=64), nullable=False),
sa.Column("node_role", sa.String(length=50), nullable=True),
sa.Column("created_at", sa.DateTime(timezone=True), nullable=False),
sa.Column("updated_at", sa.DateTime(timezone=True), nullable=False),
sa.ForeignKeyConstraint(
["member_id"],
["members.id"],
name=op.f("fk_member_nodes_member_id_members"),
ondelete="CASCADE",
),
sa.PrimaryKeyConstraint("id", name=op.f("pk_member_nodes")),
)
op.create_index(
op.f("ix_member_nodes_member_id"), "member_nodes", ["member_id"], unique=False
)
op.create_index(
op.f("ix_member_nodes_public_key"), "member_nodes", ["public_key"], unique=False
)
op.create_index(
"ix_member_nodes_member_public_key",
"member_nodes",
["member_id", "public_key"],
unique=False,
)

View File

@@ -0,0 +1,37 @@
"""add lat lon columns to nodes
Revision ID: 4e2e787a1660
Revises: aa1162502616
Create Date: 2026-01-09 20:04:04.273741+00:00
"""
from typing import Sequence, Union
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision: str = "4e2e787a1660"
down_revision: Union[str, None] = "aa1162502616"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table("nodes", schema=None) as batch_op:
batch_op.add_column(sa.Column("lat", sa.Float(), nullable=True))
batch_op.add_column(sa.Column("lon", sa.Float(), nullable=True))
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table("nodes", schema=None) as batch_op:
batch_op.drop_column("lon")
batch_op.drop_column("lat")
# ### end Alembic commands ###

View File

@@ -1,10 +0,0 @@
{
"members": [
{
"name": "Louis",
"callsign": "Louis",
"role": "admin",
"description": "IPNet Founder"
}
]
}

View File

@@ -1,613 +0,0 @@
{
"2337484665ced7e210007e9fd9db98ced0a24a6eab8b4cbe3a06b3a1cea33ca1": {
"friendly_name": "IP2 Repeater 1",
"node_id": "ip2-rep01.ipnt.uk",
"member_id": "louis",
"area": "IP2",
"location": {
"value": "52.0357627,1.132079",
"type": "coordinate"
},
"location_description": "Fountains Road",
"hardware": "Heltec V3",
"antenna": "Paradar 8.5dBi Omni",
"elevation": {
"value": "31",
"type": "number"
},
"show_on_map": {
"value": "true",
"type": "boolean"
},
"is_public": {
"value": "true",
"type": "boolean"
},
"is_online": {
"value": "true",
"type": "boolean"
},
"is_testing": {
"value": "false",
"type": "boolean"
},
"mesh_role": "repeater"
},
"8cb01fff1afc099055af418ce5fc5e60384df9ff763c25dd7e6a5e0922e8df90": {
"friendly_name": "IP2 Repeater 2",
"node_id": "ip2-rep02.ipnt.uk",
"member_id": "louis",
"area": "IP2",
"location": {
"value": "52.0390682,1.1304141",
"type": "coordinate"
},
"location_description": "Belstead Road",
"hardware": "Heltec V3",
"antenna": "McGill 6dBi Omni",
"elevation": {
"value": "44",
"type": "number"
},
"show_on_map": {
"value": "true",
"type": "boolean"
},
"is_public": {
"value": "true",
"type": "boolean"
},
"is_online": {
"value": "true",
"type": "boolean"
},
"is_testing": {
"value": "false",
"type": "boolean"
},
"mesh_role": "repeater"
},
"5b565df747913358e24d890b2227de9c35d09763746b6ec326c15ebbf9b8be3b": {
"friendly_name": "IP2 Repeater 3",
"node_id": "ip2-rep03.ipnt.uk",
"member_id": "louis",
"area": "IP2",
"location": {
"value": "52.046356,1.134661",
"type": "coordinate"
},
"location_description": "Birkfield Drive",
"hardware": "Heltec V3",
"antenna": "Paradar 8.5dBi Omni",
"elevation": {
"value": "52",
"type": "number"
},
"show_on_map": {
"value": "true",
"type": "boolean"
},
"is_public": {
"value": "true",
"type": "boolean"
},
"is_online": {
"value": "true",
"type": "boolean"
},
"is_testing": {
"value": "false",
"type": "boolean"
},
"mesh_role": "repeater"
},
"780d0939f90b22d3bd7cbedcaf4e8d468a12c01886ab24b8cfa11eab2f5516c5": {
"friendly_name": "IP2 Integration 1",
"node_id": "ip2-int01.ipnt.uk",
"member_id": "louis",
"area": "IP2",
"location": {
"value": "52.0354539,1.1295338",
"type": "coordinate"
},
"location_description": "Fountains Road",
"hardware": "Heltec V3",
"antenna": "Generic 5dBi Whip",
"elevation": {
"value": "25",
"type": "number"
},
"show_on_map": {
"value": "false",
"type": "boolean"
},
"is_public": {
"value": "true",
"type": "boolean"
},
"is_online": {
"value": "true",
"type": "boolean"
},
"is_testing": {
"value": "false",
"type": "boolean"
},
"mesh_role": "integration"
},
"30121dc60362c633c457ffa18f49b3e1d6823402c33709f32d7df70612250b96": {
"friendly_name": "MeshBot",
"node_id": "bot.ipnt.uk",
"member_id": "louis",
"area": "IP2",
"location": {
"value": "52.0354539,1.1295338",
"type": "coordinate"
},
"location_description": "Fountains Road",
"hardware": "Heltec V3",
"antenna": "Generic 5dBi Whip",
"elevation": {
"value": "25",
"type": "number"
},
"show_on_map": {
"value": "false",
"type": "boolean"
},
"is_public": {
"value": "true",
"type": "boolean"
},
"is_online": {
"value": "true",
"type": "boolean"
},
"is_testing": {
"value": "false",
"type": "boolean"
},
"mesh_role": "integration"
},
"9135986b83815ada92883358435cc6528c7db60cb647f9b6547739a1ce5eb1c8": {
"friendly_name": "IP3 Repeater 1",
"node_id": "ip3-rep01.ipnt.uk",
"member_id": "markab",
"area": "IP3",
"location": {
"value": "52.045803,1.204416",
"type": "coordinate"
},
"location_description": "Brokehall",
"hardware": "Heltec V3",
"antenna": "Paradar 8.5dBi Omni",
"elevation": {
"value": "42",
"type": "number"
},
"show_on_map": {
"value": "true",
"type": "boolean"
},
"is_public": {
"value": "true",
"type": "boolean"
},
"is_online": {
"value": "true",
"type": "boolean"
},
"is_testing": {
"value": "false",
"type": "boolean"
},
"mesh_role": "repeater"
},
"e334ec5475789d542ed9e692fbeef7444a371fcc05adcbda1f47ba6a3191b459": {
"friendly_name": "IP3 Repeater 2",
"node_id": "ip3-rep02.ipnt.uk",
"member_id": "ccz",
"area": "IP3",
"location": {
"value": "52.03297,1.17543",
"type": "coordinate"
},
"location_description": "Morland Road Allotments",
"hardware": "Heltec T114",
"antenna": "Unknown",
"elevation": {
"value": "39",
"type": "number"
},
"show_on_map": {
"value": "true",
"type": "boolean"
},
"is_public": {
"value": "true",
"type": "boolean"
},
"is_online": {
"value": "true",
"type": "boolean"
},
"is_testing": {
"value": "false",
"type": "boolean"
},
"mesh_role": "repeater"
},
"cc15fb33e98f2e098a543f516f770dc3061a1a6b30f79b84780663bf68ae6b53": {
"friendly_name": "IP3 Repeater 3",
"node_id": "ip3-rep03.ipnt.uk",
"member_id": "ccz",
"area": "IP3",
"location": {
"value": "52.04499,1.18149",
"type": "coordinate"
},
"location_description": "Hatfield Road",
"hardware": "Heltec V3",
"antenna": "Unknown",
"elevation": {
"value": "39",
"type": "number"
},
"show_on_map": {
"value": "true",
"type": "boolean"
},
"is_public": {
"value": "true",
"type": "boolean"
},
"is_online": {
"value": "true",
"type": "boolean"
},
"is_testing": {
"value": "false",
"type": "boolean"
},
"mesh_role": "repeater"
},
"22309435fbd9dd1f14870a1895dc854779f6b2af72b08542f6105d264a493ebe": {
"friendly_name": "IP3 Integration 1",
"node_id": "ip3-int01.ipnt.uk",
"member_id": "markab",
"area": "IP3",
"location": {
"value": "52.045773,1.212808",
"type": "coordinate"
},
"location_description": "Brokehall",
"hardware": "Heltec V3",
"antenna": "Generic 3dBi Whip",
"elevation": {
"value": "37",
"type": "number"
},
"show_on_map": {
"value": "false",
"type": "boolean"
},
"is_public": {
"value": "true",
"type": "boolean"
},
"is_online": {
"value": "false",
"type": "boolean"
},
"is_testing": {
"value": "false",
"type": "boolean"
},
"mesh_role": "integration"
},
"2a4f89e766dfa1758e35a69962c1f6d352b206a5e3562a589155a3ebfe7fc2bb": {
"friendly_name": "IP3 Repeater 4",
"node_id": "ip3-rep04.ipnt.uk",
"member_id": "markab",
"area": "IP3",
"location": {
"value": "52.046383,1.174542",
"type": "coordinate"
},
"location_description": "Holywells",
"hardware": "Sensecap Solar",
"antenna": "Paradar 6.5dbi Omni",
"elevation": {
"value": "21",
"type": "number"
},
"show_on_map": {
"value": "true",
"type": "boolean"
},
"is_public": {
"value": "true",
"type": "boolean"
},
"is_online": {
"value": "true",
"type": "boolean"
},
"is_testing": {
"value": "false",
"type": "boolean"
},
"mesh_role": "repeater"
},
"e790b73b2d6e377dd0f575c847f3ef42232f610eb9a19af57083fc4f647309ac": {
"friendly_name": "IP3 Repeater 5",
"node_id": "ip3-rep05.ipnt.uk",
"member_id": "markab",
"area": "IP3",
"location": {
"value": "52.05252,1.17034",
"type": "coordinate"
},
"location_description": "Back Hamlet",
"hardware": "Heltec T114",
"antenna": "Paradar 6.5dBi Omni",
"elevation": {
"value": "38",
"type": "number"
},
"show_on_map": {
"value": "true",
"type": "boolean"
},
"is_public": {
"value": "true",
"type": "boolean"
},
"is_online": {
"value": "true",
"type": "boolean"
},
"is_testing": {
"value": "false",
"type": "boolean"
},
"mesh_role": "repeater"
},
"20ed75ffc0f9777951716bb3d308d7f041fd2ad32fe2e998e600d0361e1fe2ac": {
"friendly_name": "IP3 Repeater 6",
"node_id": "ip3-rep06.ipnt.uk",
"member_id": "ccz",
"area": "IP3",
"location": {
"value": "52.04893,1.18965",
"type": "coordinate"
},
"location_description": "Dover Road",
"hardware": "Unknown",
"antenna": "Generic 5dBi Whip",
"elevation": {
"value": "38",
"type": "number"
},
"show_on_map": {
"value": "true",
"type": "boolean"
},
"is_public": {
"value": "true",
"type": "boolean"
},
"is_online": {
"value": "true",
"type": "boolean"
},
"is_testing": {
"value": "false",
"type": "boolean"
},
"mesh_role": "repeater"
},
"bd7b5ac75f660675b39f368e1dbb6d1dbcefd8bd7a170e21a942954f67c8bf52": {
"friendly_name": "IP8 Repeater 1",
"node_id": "rep01.ip8.ipnt.uk",
"member_id": "walshie86",
"area": "IP8",
"location": {
"value": "52.033684,1.118384",
"type": "coordinate"
},
"location_description": "Grove Hill",
"hardware": "Heltec V3",
"antenna": "McGill 3dBi Omni",
"elevation": {
"value": "13",
"type": "number"
},
"show_on_map": {
"value": "true",
"type": "boolean"
},
"is_public": {
"value": "true",
"type": "boolean"
},
"is_online": {
"value": "true",
"type": "boolean"
},
"is_testing": {
"value": "false",
"type": "boolean"
},
"mesh_role": "repeater"
},
"9cf300c40112ea34d0a59858270948b27ab6cd87e840de338f3ca782c17537b2": {
"friendly_name": "IP8 Repeater 2",
"node_id": "rep02.ip8.ipnt.uk",
"member_id": "walshie86",
"area": "IP8",
"location": {
"value": "52.035648,1.073271",
"type": "coordinate"
},
"location_description": "Washbrook",
"hardware": "Sensecap Solar",
"elevation": {
"value": "13",
"type": "number"
},
"show_on_map": {
"value": "true",
"type": "boolean"
},
"is_public": {
"value": "true",
"type": "boolean"
},
"is_online": {
"value": "true",
"type": "boolean"
},
"is_testing": {
"value": "false",
"type": "boolean"
},
"mesh_role": "repeater"
},
"d3c20d962f7384c111fbafad6fbc1c1dc0e5c3ce802fb3ee11020e8d8207ed3a": {
"friendly_name": "IP4 Repeater 1",
"node_id": "ip4-rep01.ipnt.uk",
"member_id": "markab",
"area": "IP4",
"location": {
"value": "52.052445,1.156882",
"type": "coordinate"
},
"location_description": "Wine Rack",
"hardware": "Heltec T114",
"antenna": "Generic 5dbi Whip",
"elevation": {
"value": "50",
"type": "number"
},
"show_on_map": {
"value": "true",
"type": "boolean"
},
"is_public": {
"value": "true",
"type": "boolean"
},
"is_online": {
"value": "true",
"type": "boolean"
},
"is_testing": {
"value": "false",
"type": "boolean"
},
"mesh_role": "repeater"
},
"b00ce9d218203e96d8557a4d59e06f5de59bbc4dcc4df9c870079d2cb8b5bd80": {
"friendly_name": "IP4 Repeater 2",
"node_id": "ip4-rep02.ipnt.uk",
"member_id": "markab",
"area": "IP4",
"location": {
"value": "52.06217,1.18332",
"type": "coordinate"
},
"location_description": "Rushmere Road",
"hardware": "Heltec V3",
"antenna": "Paradar 5dbi Whip",
"elevation": {
"value": "35",
"type": "number"
},
"show_on_map": {
"value": "true",
"type": "boolean"
},
"is_public": {
"value": "true",
"type": "boolean"
},
"is_online": {
"value": "false",
"type": "boolean"
},
"is_testing": {
"value": "false",
"type": "boolean"
},
"mesh_role": "repeater"
},
"8accb6d0189ccaffb745ba54793e7fe3edd515edb45554325d957e48c1b9f3b3": {
"friendly_name": "IP4 Repeater 3",
"node_id": "ip4-rep03.ipnt.uk",
"member_id": "craig",
"area": "IP4",
"location": {
"value": "52.058,1.165",
"type": "coordinate"
},
"location_description": "IP4 Area",
"hardware": "Heltec v3",
"antenna": "Generic Whip",
"elevation": {
"value": "30",
"type": "number"
},
"show_on_map": {
"value": "true",
"type": "boolean"
},
"is_public": {
"value": "true",
"type": "boolean"
},
"is_online": {
"value": "true",
"type": "boolean"
},
"is_testing": {
"value": "false",
"type": "boolean"
},
"mesh_role": "repeater"
},
"69fb8431e7ab307513797544fab99ce53ce24c46ec2d3a11767fe70f2ca37b23": {
"friendly_name": "IP3 Test Repeater 1",
"node_id": "ip3-tst01.ipnt.uk",
"member_id": "markab",
"area": "IP3",
"location": {
"value": "52.041869,1.204789",
"type": "coordinate"
},
"location_description": "Brokehall",
"hardware": "Station G2",
"antenna": "McGill 10dBi Panel",
"elevation": {
"value": "37",
"type": "number"
},
"show_on_map": {
"value": "true",
"type": "boolean"
},
"is_public": {
"value": "true",
"type": "boolean"
},
"is_online": {
"value": "false",
"type": "boolean"
},
"is_testing": {
"value": "true",
"type": "boolean"
},
"mesh_role": "repeater"
}
}

View File

@@ -1,14 +0,0 @@
services:
api:
networks:
- default
- pangolin
web:
networks:
- default
- pangolin
networks:
pangolin:
external: true

View File

@@ -1,16 +1,20 @@
services:
# ==========================================================================
# MQTT Broker - Eclipse Mosquitto
# MQTT Broker - Eclipse Mosquitto (optional, use --profile mqtt)
# Most users will connect to an external MQTT broker instead
# ==========================================================================
mqtt:
image: eclipse-mosquitto:2
container_name: meshcore-mqtt
profiles:
- all
- mqtt
restart: unless-stopped
ports:
- "${MQTT_EXTERNAL_PORT:-1883}:1883"
- "${MQTT_WS_PORT:-9001}:9001"
volumes:
- ./etc/mosquitto.conf:/mosquitto/config/mosquitto.conf:ro
# - ./etc/mosquitto.conf:/mosquitto/config/mosquitto.conf:ro
- mosquitto_data:/mosquitto/data
- mosquitto_log:/mosquitto/log
healthcheck:
@@ -24,17 +28,15 @@ services:
# Interface Receiver - MeshCore device to MQTT bridge (events)
# ==========================================================================
interface-receiver:
image: ghcr.io/ipnet-mesh/meshcore-hub:main
image: ghcr.io/ipnet-mesh/meshcore-hub:${IMAGE_VERSION:-latest}
build:
context: .
dockerfile: Dockerfile
container_name: meshcore-interface-receiver
profiles:
- interface-receiver
- all
- receiver
restart: unless-stopped
depends_on:
mqtt:
condition: service_healthy
devices:
- "${SERIAL_PORT:-/dev/ttyUSB0}:${SERIAL_PORT:-/dev/ttyUSB0}"
user: root # Required for device access
@@ -45,6 +47,7 @@ services:
- MQTT_USERNAME=${MQTT_USERNAME:-}
- MQTT_PASSWORD=${MQTT_PASSWORD:-}
- MQTT_PREFIX=${MQTT_PREFIX:-meshcore}
- MQTT_TLS=${MQTT_TLS:-false}
- SERIAL_PORT=${SERIAL_PORT:-/dev/ttyUSB0}
- SERIAL_BAUD=${SERIAL_BAUD:-115200}
- NODE_ADDRESS=${NODE_ADDRESS:-}
@@ -60,17 +63,15 @@ services:
# Interface Sender - MQTT to MeshCore device bridge (commands)
# ==========================================================================
interface-sender:
image: ghcr.io/ipnet-mesh/meshcore-hub:main
image: ghcr.io/ipnet-mesh/meshcore-hub:${IMAGE_VERSION:-latest}
build:
context: .
dockerfile: Dockerfile
container_name: meshcore-interface-sender
profiles:
- interface-sender
- all
- sender
restart: unless-stopped
depends_on:
mqtt:
condition: service_healthy
devices:
- "${SERIAL_PORT_SENDER:-/dev/ttyUSB1}:${SERIAL_PORT_SENDER:-/dev/ttyUSB1}"
user: root # Required for device access
@@ -81,6 +82,7 @@ services:
- MQTT_USERNAME=${MQTT_USERNAME:-}
- MQTT_PASSWORD=${MQTT_PASSWORD:-}
- MQTT_PREFIX=${MQTT_PREFIX:-meshcore}
- MQTT_TLS=${MQTT_TLS:-false}
- SERIAL_PORT=${SERIAL_PORT_SENDER:-/dev/ttyUSB1}
- SERIAL_BAUD=${SERIAL_BAUD:-115200}
- NODE_ADDRESS=${NODE_ADDRESS_SENDER:-}
@@ -96,17 +98,15 @@ services:
# Interface Mock Receiver - For testing without real devices
# ==========================================================================
interface-mock-receiver:
image: ghcr.io/ipnet-mesh/meshcore-hub:main
image: ghcr.io/ipnet-mesh/meshcore-hub:${IMAGE_VERSION:-latest}
build:
context: .
dockerfile: Dockerfile
container_name: meshcore-interface-mock-receiver
profiles:
- all
- mock
restart: unless-stopped
depends_on:
mqtt:
condition: service_healthy
environment:
- LOG_LEVEL=${LOG_LEVEL:-INFO}
- MQTT_HOST=${MQTT_HOST:-mqtt}
@@ -114,6 +114,7 @@ services:
- MQTT_USERNAME=${MQTT_USERNAME:-}
- MQTT_PASSWORD=${MQTT_PASSWORD:-}
- MQTT_PREFIX=${MQTT_PREFIX:-meshcore}
- MQTT_TLS=${MQTT_TLS:-false}
- MOCK_DEVICE=true
- NODE_ADDRESS=${NODE_ADDRESS:-0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef}
command: ["interface", "receiver", "--mock"]
@@ -128,18 +129,21 @@ services:
# Collector - MQTT subscriber and database storage
# ==========================================================================
collector:
image: ghcr.io/ipnet-mesh/meshcore-hub:main
image: ghcr.io/ipnet-mesh/meshcore-hub:${IMAGE_VERSION:-latest}
build:
context: .
dockerfile: Dockerfile
container_name: meshcore-collector
profiles:
- all
- core
restart: unless-stopped
depends_on:
mqtt:
condition: service_healthy
db-migrate:
condition: service_completed_successfully
volumes:
# Mount data directory (contains collector/meshcore.db)
- ${DATA_HOME:-./data}:/data
- hub_data:/data
- ${SEED_HOME:-./seed}:/seed
environment:
- LOG_LEVEL=${LOG_LEVEL:-INFO}
- MQTT_HOST=${MQTT_HOST:-mqtt}
@@ -147,9 +151,9 @@ services:
- MQTT_USERNAME=${MQTT_USERNAME:-}
- MQTT_PASSWORD=${MQTT_PASSWORD:-}
- MQTT_PREFIX=${MQTT_PREFIX:-meshcore}
- MQTT_TLS=${MQTT_TLS:-false}
- DATA_HOME=/data
# Explicitly unset to use DATA_HOME-based default path
- DATABASE_URL=
- SEED_HOME=/seed
# Webhook configuration
- WEBHOOK_ADVERTISEMENT_URL=${WEBHOOK_ADVERTISEMENT_URL:-}
- WEBHOOK_ADVERTISEMENT_SECRET=${WEBHOOK_ADVERTISEMENT_SECRET:-}
@@ -162,6 +166,12 @@ services:
- WEBHOOK_TIMEOUT=${WEBHOOK_TIMEOUT:-10.0}
- WEBHOOK_MAX_RETRIES=${WEBHOOK_MAX_RETRIES:-3}
- WEBHOOK_RETRY_BACKOFF=${WEBHOOK_RETRY_BACKOFF:-2.0}
# Data retention and cleanup configuration
- DATA_RETENTION_ENABLED=${DATA_RETENTION_ENABLED:-true}
- DATA_RETENTION_DAYS=${DATA_RETENTION_DAYS:-30}
- DATA_RETENTION_INTERVAL_HOURS=${DATA_RETENTION_INTERVAL_HOURS:-24}
- NODE_CLEANUP_ENABLED=${NODE_CLEANUP_ENABLED:-true}
- NODE_CLEANUP_DAYS=${NODE_CLEANUP_DAYS:-7}
command: ["collector"]
healthcheck:
test: ["CMD", "meshcore-hub", "health", "collector"]
@@ -174,22 +184,24 @@ services:
# API Server - REST API for querying data and sending commands
# ==========================================================================
api:
image: ghcr.io/ipnet-mesh/meshcore-hub:main
image: ghcr.io/ipnet-mesh/meshcore-hub:${IMAGE_VERSION:-latest}
build:
context: .
dockerfile: Dockerfile
container_name: meshcore-api
profiles:
- all
- core
restart: unless-stopped
depends_on:
mqtt:
condition: service_healthy
db-migrate:
condition: service_completed_successfully
collector:
condition: service_started
ports:
- "${API_PORT:-8000}:8000"
volumes:
# Mount data directory (uses collector/meshcore.db)
- ${DATA_HOME:-./data}:/data
- hub_data:/data
environment:
- LOG_LEVEL=${LOG_LEVEL:-INFO}
- MQTT_HOST=${MQTT_HOST:-mqtt}
@@ -197,13 +209,14 @@ services:
- MQTT_USERNAME=${MQTT_USERNAME:-}
- MQTT_PASSWORD=${MQTT_PASSWORD:-}
- MQTT_PREFIX=${MQTT_PREFIX:-meshcore}
- MQTT_TLS=${MQTT_TLS:-false}
- DATA_HOME=/data
# Explicitly unset to use DATA_HOME-based default path
- DATABASE_URL=
- API_HOST=0.0.0.0
- API_PORT=8000
- API_READ_KEY=${API_READ_KEY:-}
- API_ADMIN_KEY=${API_ADMIN_KEY:-}
- METRICS_ENABLED=${METRICS_ENABLED:-true}
- METRICS_CACHE_TTL=${METRICS_CACHE_TTL:-60}
command: ["api"]
healthcheck:
test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:8000/health')"]
@@ -216,30 +229,52 @@ services:
# Web Dashboard - Web interface for network visualization
# ==========================================================================
web:
image: ghcr.io/ipnet-mesh/meshcore-hub:main
image: ghcr.io/ipnet-mesh/meshcore-hub:${IMAGE_VERSION:-latest}
build:
context: .
dockerfile: Dockerfile
container_name: meshcore-web
profiles:
- all
- core
restart: unless-stopped
depends_on:
api:
condition: service_healthy
ports:
- "${WEB_PORT:-8080}:8080"
volumes:
- ${CONTENT_HOME:-./content}:/content:ro
environment:
- LOG_LEVEL=${LOG_LEVEL:-INFO}
- API_BASE_URL=http://api:8000
- API_KEY=${API_READ_KEY:-}
# Use ADMIN key to allow write operations from admin interface
# Falls back to READ key if ADMIN key is not set
- API_KEY=${API_ADMIN_KEY:-${API_READ_KEY:-}}
- WEB_HOST=0.0.0.0
- WEB_PORT=8080
- WEB_THEME=${WEB_THEME:-dark}
- WEB_LOCALE=${WEB_LOCALE:-en}
- WEB_ADMIN_ENABLED=${WEB_ADMIN_ENABLED:-false}
- NETWORK_NAME=${NETWORK_NAME:-MeshCore Network}
- NETWORK_CITY=${NETWORK_CITY:-}
- NETWORK_COUNTRY=${NETWORK_COUNTRY:-}
- NETWORK_LOCATION=${NETWORK_LOCATION:-}
- NETWORK_RADIO_CONFIG=${NETWORK_RADIO_CONFIG:-}
- NETWORK_CONTACT_EMAIL=${NETWORK_CONTACT_EMAIL:-}
- NETWORK_CONTACT_DISCORD=${NETWORK_CONTACT_DISCORD:-}
- NETWORK_CONTACT_GITHUB=${NETWORK_CONTACT_GITHUB:-}
- NETWORK_CONTACT_YOUTUBE=${NETWORK_CONTACT_YOUTUBE:-}
- NETWORK_WELCOME_TEXT=${NETWORK_WELCOME_TEXT:-}
- CONTENT_HOME=/content
- TZ=${TZ:-UTC}
# Feature flags (set to false to disable specific pages)
- FEATURE_DASHBOARD=${FEATURE_DASHBOARD:-true}
- FEATURE_NODES=${FEATURE_NODES:-true}
- FEATURE_ADVERTISEMENTS=${FEATURE_ADVERTISEMENTS:-true}
- FEATURE_MESSAGES=${FEATURE_MESSAGES:-true}
- FEATURE_MAP=${FEATURE_MAP:-true}
- FEATURE_MEMBERS=${FEATURE_MEMBERS:-true}
- FEATURE_PAGES=${FEATURE_PAGES:-true}
command: ["web"]
healthcheck:
test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:8080/health')"]
@@ -252,52 +287,100 @@ services:
# Database Migrations - Run Alembic migrations
# ==========================================================================
db-migrate:
image: ghcr.io/ipnet-mesh/meshcore-hub:main
image: ghcr.io/ipnet-mesh/meshcore-hub:${IMAGE_VERSION:-latest}
build:
context: .
dockerfile: Dockerfile
container_name: meshcore-db-migrate
profiles:
- all
- core
- migrate
restart: "no"
volumes:
# Mount data directory (uses collector/meshcore.db)
- ${DATA_HOME:-./data}:/data
- hub_data:/data
environment:
- DATA_HOME=/data
# Explicitly unset to use DATA_HOME-based default path
- DATABASE_URL=
command: ["db", "upgrade"]
# ==========================================================================
# Seed Data - Import node_tags.json and members.json from SEED_HOME
# Seed Data - Import node_tags.yaml and members.yaml from SEED_HOME
# NOTE: This is NOT run automatically. Use --profile seed to run explicitly.
# Since tags are now managed via the admin UI, automatic seeding would
# overwrite user changes.
# ==========================================================================
seed:
image: ghcr.io/ipnet-mesh/meshcore-hub:main
image: ghcr.io/ipnet-mesh/meshcore-hub:${IMAGE_VERSION:-latest}
build:
context: .
dockerfile: Dockerfile
container_name: meshcore-seed
profiles:
- seed
restart: "no"
volumes:
# Mount data directory for database (read-write)
- ${DATA_HOME:-./data}:/data
# Mount seed directory for seed files (read-only)
- hub_data:/data
- ${SEED_HOME:-./seed}:/seed:ro
environment:
- DATA_HOME=/data
- SEED_HOME=/seed
- LOG_LEVEL=${LOG_LEVEL:-INFO}
# Explicitly unset to use DATA_HOME-based default path
- DATABASE_URL=
# Imports both node_tags.json and members.json if they exist
# Imports both node_tags.yaml and members.yaml if they exist
command: ["collector", "seed"]
# ==========================================================================
# Prometheus - Metrics collection and monitoring (optional, use --profile metrics)
# ==========================================================================
prometheus:
image: prom/prometheus:latest
container_name: meshcore-prometheus
profiles:
- all
- metrics
restart: unless-stopped
depends_on:
api:
condition: service_healthy
ports:
- "${PROMETHEUS_PORT:-9090}:9090"
command:
- '--config.file=/etc/prometheus/prometheus.yml'
- '--storage.tsdb.retention.time=30d'
volumes:
- ./etc/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml:ro
- ./etc/prometheus/alerts.yml:/etc/prometheus/alerts.yml:ro
- prometheus_data:/prometheus
# ==========================================================================
# Alertmanager - Alert routing and notifications (optional, use --profile metrics)
# ==========================================================================
alertmanager:
image: prom/alertmanager:latest
container_name: meshcore-alertmanager
profiles:
- all
- metrics
restart: unless-stopped
ports:
- "${ALERTMANAGER_PORT:-9093}:9093"
volumes:
- ./etc/alertmanager/alertmanager.yml:/etc/alertmanager/alertmanager.yml:ro
- alertmanager_data:/alertmanager
command:
- '--config.file=/etc/alertmanager/alertmanager.yml'
- '--storage.path=/alertmanager'
# ==========================================================================
# Volumes
# ==========================================================================
volumes:
hub_data:
name: meshcore_hub_data
mosquitto_data:
name: meshcore_mosquitto_data
mosquitto_log:
name: meshcore_mosquitto_log
prometheus_data:
name: meshcore_prometheus_data
alertmanager_data:
name: meshcore_alertmanager_data

docs/images/web.png (new binary image, 238 KiB)

@@ -0,0 +1,35 @@
# Alertmanager configuration for MeshCore Hub
#
# Default configuration routes all alerts to a "blackhole" receiver
# (logs only, no external notifications).
#
# To receive notifications, configure a receiver below.
# See: https://prometheus.io/docs/alerting/latest/configuration/
#
# Examples:
#
# Email:
# receivers:
# - name: 'email'
# email_configs:
# - to: 'admin@example.com'
# from: 'alertmanager@example.com'
# smarthost: 'smtp.example.com:587'
# auth_username: 'alertmanager@example.com'
# auth_password: 'password'
#
# Webhook (e.g. Slack incoming webhook, ntfy, Gotify):
# receivers:
# - name: 'webhook'
# webhook_configs:
# - url: 'https://example.com/webhook'
route:
receiver: 'default'
group_by: ['alertname']
group_wait: 30s
group_interval: 5m
repeat_interval: 4h
receivers:
- name: 'default'

etc/prometheus/alerts.yml Normal file

@@ -0,0 +1,16 @@
# Prometheus alert rules for MeshCore Hub
#
# These rules are evaluated by Prometheus and fired alerts are sent
# to Alertmanager for routing and notification.
groups:
- name: meshcore
rules:
- alert: NodeNotSeen
expr: time() - meshcore_node_last_seen_timestamp_seconds{role="infra"} > 48 * 3600
for: 5m
labels:
severity: warning
annotations:
summary: "Node {{ $labels.node_name }} ({{ $labels.role }}) not seen for 48+ hours"
description: "Node {{ $labels.public_key }} ({{ $labels.adv_type }}, role={{ $labels.role }}) last seen {{ $value | humanizeDuration }} ago."


@@ -0,0 +1,29 @@
# Prometheus scrape configuration for MeshCore Hub
#
# This file is used when running Prometheus via Docker Compose:
# docker compose --profile core --profile metrics up -d
#
# The scrape interval matches the default metrics cache TTL (60s)
# to avoid unnecessary database queries.
global:
scrape_interval: 60s
evaluation_interval: 60s
alerting:
alertmanagers:
- static_configs:
- targets: ['alertmanager:9093']
rule_files:
- 'alerts.yml'
scrape_configs:
- job_name: 'meshcore-hub'
metrics_path: '/metrics'
# Uncomment basic_auth if API_READ_KEY is configured
# basic_auth:
# username: 'metrics'
# password: '<API_READ_KEY>'
static_configs:
- targets: ['api:8000']
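The commented basic_auth block pairs with the API's /metrics authentication, which expects username 'metrics' and the configured API_READ_KEY as the password. A minimal sketch of the header a scraper would send (the key value below is a placeholder, not a real key):

```python
import base64


def basic_auth_header(read_key: str) -> str:
    # The /metrics endpoint expects username 'metrics' and the
    # API read key as password, joined with ':' and base64-encoded.
    token = base64.b64encode(f"metrics:{read_key}".encode()).decode()
    return f"Basic {token}"


header = basic_auth_header("example-read-key")
# Decoding recovers the credential pair the endpoint splits on ':'
user, password = base64.b64decode(header[6:]).decode().split(":", 1)
print(user, password)  # metrics example-read-key
```

If no read key is configured, the endpoint is public and the header can be omitted entirely.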


@@ -0,0 +1,61 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<svg
viewBox="0 0 115 100"
width="115"
height="100"
version="1.1"
id="svg4"
sodipodi:docname="logo-dark.svg"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg">
<defs
id="defs4" />
<sodipodi:namedview
id="namedview4"
pagecolor="#ffffff"
bordercolor="#000000"
borderopacity="0.25"
inkscape:showpageshadow="2"
inkscape:pageopacity="0.0"
inkscape:pagecheckerboard="0"
inkscape:deskcolor="#d1d1d1" />
<!-- I letter - muted -->
<rect
x="0"
y="0"
width="25"
height="100"
rx="2"
fill="#ffffff"
opacity="0.5"
id="rect1" />
<!-- P vertical stem -->
<rect
x="35"
y="0"
width="25"
height="100"
rx="2"
fill="#ffffff"
id="rect2" />
<!-- WiFi arcs: center at mid-stem (90, 60), sweeping from right up to top -->
<g
fill="none"
stroke="#ffffff"
stroke-width="10"
stroke-linecap="round"
id="g4"
transform="translate(-30,-10)">
<path
d="M 110,65 A 20,20 0 0 0 90,45"
id="path2" />
<path
d="M 125,65 A 35,35 0 0 0 90,30"
id="path3" />
<path
d="M 140,65 A 50,50 0 0 0 90,15"
id="path4" />
</g>
</svg>


@@ -0,0 +1,87 @@
---
title: Join
slug: join
menu_order: 10
---
# Getting Started with MeshCore
MeshCore is an open-source off-grid LoRa mesh networking platform. This guide will help you get connected to the network.
For detailed documentation, see the [MeshCore FAQ](https://github.com/meshcore-dev/MeshCore/blob/main/docs/faq.md).
## Node Types
MeshCore devices operate in different modes:
| Mode | Description |
|------|-------------|
| **Companion** | Connects to your phone via Bluetooth. Use this for messaging and interacting with the network. |
| **Repeater** | Standalone node that extends network coverage. Place these in elevated locations for best results. |
| **Room Server** | Hosts chat rooms that persist messages for offline users. |
Most users start with a **Companion** node paired to their phone.
## Frequency Regulations
MeshCore uses LoRa radio, which operates on unlicensed ISM bands. You **must** use the correct frequency for your region:
| Region | Frequency | Notes |
|--------|-----------|-------|
| Europe (EU) | 868 MHz | EU868 band |
| United Kingdom | 868 MHz | Same as EU |
| North America | 915 MHz | US915 band |
| Australia | 915 MHz | AU915 band |
Using the wrong frequency is illegal and may cause interference. Check your local regulations.
## Compatible Hardware
MeshCore runs on inexpensive low-power LoRa devices. Popular options include:
### Recommended Devices
| Device | Manufacturer | Features |
|--------|--------------|----------|
| [Heltec V3](https://heltec.org/project/wifi-lora-32-v3/) | Heltec | Budget-friendly, OLED display |
| [T114](https://heltec.org/project/mesh-node-t114/) | Heltec | Compact, GPS, colour display |
| [T1000-E](https://www.seeedstudio.com/SenseCAP-Card-Tracker-T1000-E-for-Meshtastic-p-5913.html) | Seeed Studio | Credit-card sized, GPS, weatherproof |
| [T-Deck Plus](https://www.lilygo.cc/products/t-deck-plus) | LilyGO | Built-in keyboard, touchscreen, GPS |
Ensure you purchase the correct frequency variant (868 MHz for EU/UK, 915 MHz for US/AU).
### Where to Buy
- **Heltec**: [Official Store](https://heltec.org/) or AliExpress
- **LilyGO**: [Official Store](https://lilygo.cc/) or AliExpress
- **Seeed Studio**: [Official Store](https://www.seeedstudio.com/)
- **Amazon**: Search for device name + "LoRa 868" (or 915 for US)
## Mobile Apps
Connect to your Companion node using the official MeshCore apps:
| Platform | App | Link |
|----------|-----|------|
| Android | MeshCore | [Google Play](https://play.google.com/store/apps/details?id=com.liamcottle.meshcore.android) |
| iOS | MeshCore | [App Store](https://apps.apple.com/us/app/meshcore/id6742354151) |
The app connects via Bluetooth to your Companion node, allowing you to send messages, view the network, and configure your device.
## Flashing Firmware
1. Use the [MeshCore Web Flasher](https://flasher.meshcore.co.uk/) for easy browser-based flashing
2. Select your device type and region (frequency)
3. Connect via USB and flash
## Next Steps
Once your device is flashed and paired:
1. Open the MeshCore app on your phone
2. Enable Bluetooth and pair with your device
3. Set your node name in the app settings
4. Configure your radio settings/profile for your region
5. You should start seeing other nodes on the network
Welcome to the mesh!


@@ -1,10 +0,0 @@
{
"members": [
{
"name": "Example Member",
"callsign": "N0CALL",
"role": "Network Operator",
"description": "Example member entry"
}
]
}

example/seed/members.yaml Normal file

@@ -0,0 +1,14 @@
# Example members seed file
# Note: Nodes are associated with members via a 'member_id' tag on the node.
# Use node_tags.yaml to set member_id tags on nodes.
members:
- member_id: example_member
name: Example Member
callsign: N0CALL
role: Network Operator
description: Example network operator member
- member_id: simple_member
name: Simple Member
callsign: N0CALL2
role: Observer
description: Example observer member
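As the note at the top of this file says, the association is made from the node side: a node carries a member_id tag whose value matches one of the member_id entries above. A hypothetical node_tags.yaml entry (reusing the example public key from the node tags seed file) would look like:

```
# node_tags.yaml: link this node to the 'example_member' entry above
0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef:
  friendly_name: Gateway Node
  member_id: example_member
```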


@@ -1,16 +0,0 @@
{
"0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef": {
"friendly_name": "Gateway Node",
"location": {"value": "37.7749,-122.4194", "type": "coordinate"},
"lat": {"value": "37.7749", "type": "number"},
"lon": {"value": "-122.4194", "type": "number"},
"role": "gateway"
},
"fedcba9876543210fedcba9876543210fedcba9876543210fedcba9876543210": {
"friendly_name": "Oakland Repeater",
"location": {"value": "37.8044,-122.2712", "type": "coordinate"},
"lat": {"value": "37.8044", "type": "number"},
"lon": {"value": "-122.2712", "type": "number"},
"altitude": {"value": "150", "type": "number"}
}
}


@@ -0,0 +1,29 @@
# Example node tags seed file
# Each key is a 64-character hex public key
#
# Tag values can be:
# - YAML primitives (auto-detected type):
# friendly_name: Gateway Node # string
# elevation: 150 # number
# is_online: true # boolean
#
# - Explicit type (when you need to force a specific type):
# altitude:
# value: "150"
# type: number
#
# Supported types: string, number, boolean
0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef:
friendly_name: Gateway Node
role: gateway
lat: 37.7749
lon: -122.4194
is_online: true
fedcba9876543210fedcba9876543210fedcba9876543210fedcba9876543210:
friendly_name: Oakland Repeater
lat: 37.8044
lon: -122.2712
altitude: 150
is_online: false
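The auto-detection described in the comments is standard YAML scalar typing; a quick check with PyYAML (already a project dependency) illustrates how unquoted values resolve:

```python
import yaml

doc = """
friendly_name: Gateway Node
elevation: 150
is_online: true
"""

tags = yaml.safe_load(doc)
print(type(tags["friendly_name"]).__name__)  # str
print(type(tags["elevation"]).__name__)      # int
print(type(tags["is_online"]).__name__)      # bool
```

This is why the explicit `{value: "150", type: number}` form exists: quoting a scalar forces it to a string, so the type field lets you override what YAML would otherwise infer.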


@@ -4,22 +4,21 @@ build-backend = "setuptools.build_meta"
[project]
name = "meshcore-hub"
version = "0.1.0"
version = "0.0.0"
description = "Python monorepo for managing and orchestrating MeshCore mesh networks"
readme = "README.md"
license = {text = "MIT"}
requires-python = ">=3.11"
license = {text = "GPL-3.0-or-later"}
requires-python = ">=3.13"
authors = [
{name = "MeshCore Hub Contributors"}
]
classifiers = [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Communications",
"Topic :: System :: Networking",
]
@@ -39,6 +38,10 @@ dependencies = [
"httpx>=0.25.0",
"aiosqlite>=0.19.0",
"meshcore>=2.2.0",
"pyyaml>=6.0.0",
"python-frontmatter>=1.0.0",
"markdown>=3.5.0",
"prometheus-client>=0.20.0",
]
[project.optional-dependencies]
@@ -50,7 +53,9 @@ dev = [
"flake8>=6.1.0",
"mypy>=1.5.0",
"pre-commit>=3.4.0",
"beautifulsoup4>=4.12.0",
"types-paho-mqtt>=1.6.0",
"types-PyYAML>=6.0.0",
]
postgres = [
"asyncpg>=0.28.0",
@@ -61,10 +66,10 @@ postgres = [
meshcore-hub = "meshcore_hub.__main__:main"
[project.urls]
Homepage = "https://github.com/meshcore-dev/meshcore-hub"
Documentation = "https://github.com/meshcore-dev/meshcore-hub#readme"
Repository = "https://github.com/meshcore-dev/meshcore-hub"
Issues = "https://github.com/meshcore-dev/meshcore-hub/issues"
Homepage = "https://github.com/ipnet-mesh/meshcore-hub"
Documentation = "https://github.com/ipnet-mesh/meshcore-hub#readme"
Repository = "https://github.com/ipnet-mesh/meshcore-hub"
Issues = "https://github.com/ipnet-mesh/meshcore-hub/issues"
[tool.setuptools.packages.find]
where = ["src"]
@@ -76,7 +81,7 @@ meshcore_hub = ["py.typed"]
[tool.black]
line-length = 88
target-version = ["py311"]
target-version = ["py312"]
include = '\.pyi?$'
extend-exclude = '''
/(
@@ -95,7 +100,7 @@ extend-exclude = '''
'''
[tool.mypy]
python_version = "3.11"
python_version = "3.13"
warn_return_any = true
warn_unused_ignores = true
disallow_untyped_defs = true
@@ -110,6 +115,9 @@ module = [
"uvicorn.*",
"alembic.*",
"meshcore.*",
"frontmatter.*",
"markdown.*",
"prometheus_client.*",
]
ignore_missing_imports = true

renovate.json Normal file

@@ -0,0 +1,6 @@
{
"$schema": "https://docs.renovatebot.com/renovate-schema.json",
"extends": [
"config:recommended"
]
}


@@ -1,3 +1,5 @@
"""MeshCore Hub - Python monorepo for managing MeshCore mesh networks."""
__version__ = "0.1.0"
from meshcore_hub._version import __version__
__all__ = ["__version__"]


@@ -174,6 +174,40 @@ def db_history() -> None:
command.history(alembic_cfg)
@db.command("stamp")
@click.option(
"--revision",
type=str,
default="head",
help="Target revision to stamp (default: head)",
)
@click.option(
"--database-url",
type=str,
default=None,
envvar="DATABASE_URL",
help="Database connection URL",
)
def db_stamp(revision: str, database_url: str | None) -> None:
"""Stamp database with revision without running migrations.
Use this to mark an existing database as up-to-date when the schema
was created before Alembic migrations were introduced.
"""
import os
from alembic import command
from alembic.config import Config
click.echo(f"Stamping database with revision: {revision}")
alembic_cfg = Config("alembic.ini")
if database_url:
os.environ["DATABASE_URL"] = database_url
command.stamp(alembic_cfg, revision)
click.echo("Database stamped successfully.")
# Health check commands for Docker HEALTHCHECK
@cli.group()
def health() -> None:


@@ -0,0 +1,8 @@
"""MeshCore Hub version information.
This file contains the version string for the package.
It can be overridden at build time by setting BUILD_VERSION environment variable.
"""
__version__ = "dev"
__all__ = ["__version__"]


@@ -32,10 +32,9 @@ async def lifespan(app: FastAPI) -> AsyncGenerator[None, None]:
# Get database URL from app state
database_url = getattr(app.state, "database_url", "sqlite:///./meshcore.db")
# Initialize database
# Initialize database (schema managed by Alembic migrations)
logger.info(f"Initializing database: {database_url}")
_db_manager = DatabaseManager(database_url)
_db_manager.create_tables()
yield
@@ -53,7 +52,10 @@ def create_app(
mqtt_host: str = "localhost",
mqtt_port: int = 1883,
mqtt_prefix: str = "meshcore",
mqtt_tls: bool = False,
cors_origins: list[str] | None = None,
metrics_enabled: bool = True,
metrics_cache_ttl: int = 60,
) -> FastAPI:
"""Create and configure the FastAPI application.
@@ -64,7 +66,10 @@ def create_app(
mqtt_host: MQTT broker host
mqtt_port: MQTT broker port
mqtt_prefix: MQTT topic prefix
mqtt_tls: Enable TLS/SSL for MQTT connection
cors_origins: Allowed CORS origins
metrics_enabled: Enable Prometheus metrics endpoint at /metrics
metrics_cache_ttl: Seconds to cache metrics output
Returns:
Configured FastAPI application
@@ -86,6 +91,8 @@ def create_app(
app.state.mqtt_host = mqtt_host
app.state.mqtt_port = mqtt_port
app.state.mqtt_prefix = mqtt_prefix
app.state.mqtt_tls = mqtt_tls
app.state.metrics_cache_ttl = metrics_cache_ttl
# Configure CORS
if cors_origins is None:
@@ -104,6 +111,12 @@ def create_app(
app.include_router(api_router, prefix="/api/v1")
# Include Prometheus metrics endpoint
if metrics_enabled:
from meshcore_hub.api.metrics import router as metrics_router
app.include_router(metrics_router)
# Health check endpoints
@app.get("/health", tags=["Health"])
async def health() -> dict:


@@ -67,6 +67,13 @@ import click
envvar="MQTT_TOPIC_PREFIX",
help="MQTT topic prefix",
)
@click.option(
"--mqtt-tls",
is_flag=True,
default=False,
envvar="MQTT_TLS",
help="Enable TLS/SSL for MQTT connection",
)
@click.option(
"--cors-origins",
type=str,
@@ -74,6 +81,19 @@ import click
envvar="CORS_ORIGINS",
help="Comma-separated list of allowed CORS origins",
)
@click.option(
"--metrics-enabled/--no-metrics",
default=True,
envvar="METRICS_ENABLED",
help="Enable Prometheus metrics endpoint at /metrics",
)
@click.option(
"--metrics-cache-ttl",
type=int,
default=60,
envvar="METRICS_CACHE_TTL",
help="Seconds to cache metrics output (reduces database load)",
)
@click.option(
"--reload",
is_flag=True,
@@ -92,7 +112,10 @@ def api(
mqtt_host: str,
mqtt_port: int,
mqtt_prefix: str,
mqtt_tls: bool,
cors_origins: str | None,
metrics_enabled: bool,
metrics_cache_ttl: int,
reload: bool,
) -> None:
"""Run the REST API server.
@@ -141,6 +164,8 @@ def api(
click.echo(f"Read key configured: {read_key is not None}")
click.echo(f"Admin key configured: {admin_key is not None}")
click.echo(f"CORS origins: {cors_origins or 'none'}")
click.echo(f"Metrics enabled: {metrics_enabled}")
click.echo(f"Metrics cache TTL: {metrics_cache_ttl}s")
click.echo(f"Reload mode: {reload}")
click.echo("=" * 50)
@@ -171,7 +196,10 @@ def api(
mqtt_host=mqtt_host,
mqtt_port=mqtt_port,
mqtt_prefix=mqtt_prefix,
mqtt_tls=mqtt_tls,
cors_origins=origins_list,
metrics_enabled=metrics_enabled,
metrics_cache_ttl=metrics_cache_ttl,
)
click.echo("\nStarting API server...")


@@ -57,6 +57,7 @@ def get_mqtt_client(request: Request) -> MQTTClient:
mqtt_host = getattr(request.app.state, "mqtt_host", "localhost")
mqtt_port = getattr(request.app.state, "mqtt_port", 1883)
mqtt_prefix = getattr(request.app.state, "mqtt_prefix", "meshcore")
mqtt_tls = getattr(request.app.state, "mqtt_tls", False)
# Use unique client ID to allow multiple API instances
unique_id = uuid.uuid4().hex[:8]
@@ -65,6 +66,7 @@ def get_mqtt_client(request: Request) -> MQTTClient:
port=mqtt_port,
prefix=mqtt_prefix,
client_id=f"meshcore-api-{unique_id}",
tls=mqtt_tls,
)
client = MQTTClient(config)


@@ -0,0 +1,331 @@
"""Prometheus metrics endpoint for MeshCore Hub API."""
import base64
import logging
import time
from typing import Any
from fastapi import APIRouter, Request, Response
from fastapi.responses import PlainTextResponse
from prometheus_client import CollectorRegistry, Gauge, generate_latest
from sqlalchemy import func, select
from meshcore_hub.common.models import (
Advertisement,
EventLog,
Member,
Message,
Node,
NodeTag,
Telemetry,
TracePath,
)
logger = logging.getLogger(__name__)
router = APIRouter()
# Module-level cache
_cache: dict[str, Any] = {"output": b"", "expires_at": 0.0}
def verify_basic_auth(request: Request) -> bool:
"""Verify HTTP Basic Auth credentials for metrics endpoint.
Uses username 'metrics' and the API read key as password.
Returns True if no read key is configured (public access).
Args:
request: FastAPI request
Returns:
True if authentication passes
"""
read_key = getattr(request.app.state, "read_key", None)
# No read key configured = public access
if not read_key:
return True
auth_header = request.headers.get("Authorization", "")
if not auth_header.startswith("Basic "):
return False
try:
decoded = base64.b64decode(auth_header[6:]).decode("utf-8")
username, password = decoded.split(":", 1)
return username == "metrics" and password == read_key
except Exception:
return False
def collect_metrics(session: Any) -> bytes:
"""Collect all metrics from the database and generate Prometheus output.
Creates a fresh CollectorRegistry per call to avoid global state issues.
Args:
session: SQLAlchemy database session
Returns:
Prometheus text exposition format as bytes
"""
from meshcore_hub import __version__
registry = CollectorRegistry()
# -- Info gauge --
info_gauge = Gauge(
"meshcore_info",
"MeshCore Hub application info",
["version"],
registry=registry,
)
info_gauge.labels(version=__version__).set(1)
# -- Nodes total --
nodes_total = Gauge(
"meshcore_nodes_total",
"Total number of nodes",
registry=registry,
)
count = session.execute(select(func.count(Node.id))).scalar() or 0
nodes_total.set(count)
# -- Nodes active by time window --
nodes_active = Gauge(
"meshcore_nodes_active",
"Number of active nodes in time window",
["window"],
registry=registry,
)
from datetime import datetime, timezone
for window, hours in [("1h", 1), ("24h", 24), ("7d", 168), ("30d", 720)]:
cutoff = time.time() - (hours * 3600)
cutoff_dt = datetime.fromtimestamp(cutoff, tz=timezone.utc)
count = (
session.execute(
select(func.count(Node.id)).where(Node.last_seen >= cutoff_dt)
).scalar()
or 0
)
nodes_active.labels(window=window).set(count)
# -- Nodes by type --
nodes_by_type = Gauge(
"meshcore_nodes_by_type",
"Number of nodes by advertisement type",
["adv_type"],
registry=registry,
)
type_counts = session.execute(
select(Node.adv_type, func.count(Node.id)).group_by(Node.adv_type)
).all()
for adv_type, count in type_counts:
nodes_by_type.labels(adv_type=adv_type or "unknown").set(count)
# -- Nodes with location --
nodes_with_location = Gauge(
"meshcore_nodes_with_location",
"Number of nodes with GPS coordinates",
registry=registry,
)
count = (
session.execute(
select(func.count(Node.id)).where(
Node.lat.isnot(None), Node.lon.isnot(None)
)
).scalar()
or 0
)
nodes_with_location.set(count)
# -- Node last seen timestamp --
node_last_seen = Gauge(
"meshcore_node_last_seen_timestamp_seconds",
"Unix timestamp of when the node was last seen",
["public_key", "node_name", "adv_type", "role"],
registry=registry,
)
role_subq = (
select(NodeTag.node_id, NodeTag.value.label("role"))
.where(NodeTag.key == "role")
.subquery()
)
nodes_with_last_seen = session.execute(
select(
Node.public_key,
Node.name,
Node.adv_type,
Node.last_seen,
role_subq.c.role,
)
.outerjoin(role_subq, Node.id == role_subq.c.node_id)
.where(Node.last_seen.isnot(None))
).all()
for public_key, name, adv_type, last_seen, role in nodes_with_last_seen:
node_last_seen.labels(
public_key=public_key,
node_name=name or "",
adv_type=adv_type or "unknown",
role=role or "",
).set(last_seen.timestamp())
# -- Messages total by type --
messages_total = Gauge(
"meshcore_messages_total",
"Total number of messages by type",
["type"],
registry=registry,
)
msg_type_counts = session.execute(
select(Message.message_type, func.count(Message.id)).group_by(
Message.message_type
)
).all()
for msg_type, count in msg_type_counts:
messages_total.labels(type=msg_type).set(count)
# -- Messages received by type and window --
messages_received = Gauge(
"meshcore_messages_received",
"Messages received in time window by type",
["type", "window"],
registry=registry,
)
for window, hours in [("1h", 1), ("24h", 24), ("7d", 168), ("30d", 720)]:
cutoff = time.time() - (hours * 3600)
cutoff_dt = datetime.fromtimestamp(cutoff, tz=timezone.utc)
window_counts = session.execute(
select(Message.message_type, func.count(Message.id))
.where(Message.received_at >= cutoff_dt)
.group_by(Message.message_type)
).all()
for msg_type, count in window_counts:
messages_received.labels(type=msg_type, window=window).set(count)
# -- Advertisements total --
advertisements_total = Gauge(
"meshcore_advertisements_total",
"Total number of advertisements",
registry=registry,
)
count = session.execute(select(func.count(Advertisement.id))).scalar() or 0
advertisements_total.set(count)
# -- Advertisements received by window --
advertisements_received = Gauge(
"meshcore_advertisements_received",
"Advertisements received in time window",
["window"],
registry=registry,
)
for window, hours in [("1h", 1), ("24h", 24), ("7d", 168), ("30d", 720)]:
cutoff = time.time() - (hours * 3600)
cutoff_dt = datetime.fromtimestamp(cutoff, tz=timezone.utc)
count = (
session.execute(
select(func.count(Advertisement.id)).where(
Advertisement.received_at >= cutoff_dt
)
).scalar()
or 0
)
advertisements_received.labels(window=window).set(count)
# -- Telemetry total --
telemetry_total = Gauge(
"meshcore_telemetry_total",
"Total number of telemetry records",
registry=registry,
)
count = session.execute(select(func.count(Telemetry.id))).scalar() or 0
telemetry_total.set(count)
# -- Trace paths total --
trace_paths_total = Gauge(
"meshcore_trace_paths_total",
"Total number of trace path records",
registry=registry,
)
count = session.execute(select(func.count(TracePath.id))).scalar() or 0
trace_paths_total.set(count)
# -- Events by type --
events_total = Gauge(
"meshcore_events_total",
"Total events by type from event log",
["event_type"],
registry=registry,
)
event_counts = session.execute(
select(EventLog.event_type, func.count(EventLog.id)).group_by(
EventLog.event_type
)
).all()
for event_type, count in event_counts:
events_total.labels(event_type=event_type).set(count)
# -- Members total --
members_total = Gauge(
"meshcore_members_total",
"Total number of network members",
registry=registry,
)
count = session.execute(select(func.count(Member.id))).scalar() or 0
members_total.set(count)
output: bytes = generate_latest(registry)
return output
@router.get("/metrics")
async def metrics(request: Request) -> Response:
"""Prometheus metrics endpoint.
Returns metrics in Prometheus text exposition format.
Supports HTTP Basic Auth with username 'metrics' and API read key as password.
Results are cached with a configurable TTL to reduce database load.
"""
# Check authentication
if not verify_basic_auth(request):
return PlainTextResponse(
"Unauthorized",
status_code=401,
headers={"WWW-Authenticate": 'Basic realm="metrics"'},
)
# Check cache
cache_ttl = getattr(request.app.state, "metrics_cache_ttl", 60)
now = time.time()
if _cache["output"] and now < _cache["expires_at"]:
return Response(
content=_cache["output"],
media_type="text/plain; version=0.0.4; charset=utf-8",
)
# Collect fresh metrics
try:
from meshcore_hub.api.app import get_db_manager
db_manager = get_db_manager()
with db_manager.session_scope() as session:
output = collect_metrics(session)
# Update cache
_cache["output"] = output
_cache["expires_at"] = now + cache_ttl
return Response(
content=output,
media_type="text/plain; version=0.0.4; charset=utf-8",
)
except Exception as e:
logger.exception("Failed to collect metrics: %s", e)
return PlainTextResponse(
f"# Error collecting metrics: {e}\n",
status_code=500,
media_type="text/plain; version=0.0.4; charset=utf-8",
)


@@ -4,33 +4,168 @@ from datetime import datetime
from typing import Optional
from fastapi import APIRouter, HTTPException, Query
from sqlalchemy import func, select
from sqlalchemy import func, or_, select
from sqlalchemy.orm import aliased, selectinload
from meshcore_hub.api.auth import RequireRead
from meshcore_hub.api.dependencies import DbSession
from meshcore_hub.common.models import Advertisement
from meshcore_hub.common.schemas.messages import AdvertisementList, AdvertisementRead
from meshcore_hub.common.models import Advertisement, EventReceiver, Node, NodeTag
from meshcore_hub.common.schemas.messages import (
AdvertisementList,
AdvertisementRead,
ReceiverInfo,
)
router = APIRouter()
def _get_tag_name(node: Optional[Node]) -> Optional[str]:
"""Extract name tag from a node's tags."""
if not node or not node.tags:
return None
for tag in node.tags:
if tag.key == "name":
return tag.value
return None
def _get_tag_description(node: Optional[Node]) -> Optional[str]:
"""Extract description tag from a node's tags."""
if not node or not node.tags:
return None
for tag in node.tags:
if tag.key == "description":
return tag.value
return None
def _fetch_receivers_for_events(
session: DbSession,
event_type: str,
event_hashes: list[str],
) -> dict[str, list[ReceiverInfo]]:
"""Fetch receiver info for a list of events by their hashes."""
if not event_hashes:
return {}
query = (
select(
EventReceiver.event_hash,
EventReceiver.snr,
EventReceiver.received_at,
Node.id.label("node_id"),
Node.public_key,
Node.name,
)
.join(Node, EventReceiver.receiver_node_id == Node.id)
.where(EventReceiver.event_type == event_type)
.where(EventReceiver.event_hash.in_(event_hashes))
.order_by(EventReceiver.received_at)
)
results = session.execute(query).all()
receivers_by_hash: dict[str, list[ReceiverInfo]] = {}
node_ids = [r.node_id for r in results]
tag_names: dict[str, str] = {}
if node_ids:
tag_query = (
select(NodeTag.node_id, NodeTag.value)
.where(NodeTag.node_id.in_(node_ids))
.where(NodeTag.key == "name")
)
for node_id, value in session.execute(tag_query).all():
tag_names[node_id] = value
for row in results:
if row.event_hash not in receivers_by_hash:
receivers_by_hash[row.event_hash] = []
receivers_by_hash[row.event_hash].append(
ReceiverInfo(
node_id=row.node_id,
public_key=row.public_key,
name=row.name,
tag_name=tag_names.get(row.node_id),
snr=row.snr,
received_at=row.received_at,
)
)
return receivers_by_hash
@router.get("", response_model=AdvertisementList)
async def list_advertisements(
_: RequireRead,
session: DbSession,
search: Optional[str] = Query(
None, description="Search in name tag, node name, or public key"
),
public_key: Optional[str] = Query(None, description="Filter by public key"),
received_by: Optional[str] = Query(
None, description="Filter by receiver node public key"
),
member_id: Optional[str] = Query(
None, description="Filter by member_id tag value of source node"
),
since: Optional[datetime] = Query(None, description="Start timestamp"),
until: Optional[datetime] = Query(None, description="End timestamp"),
limit: int = Query(50, ge=1, le=100, description="Page size"),
offset: int = Query(0, ge=0, description="Page offset"),
) -> AdvertisementList:
"""List advertisements with filtering and pagination."""
# Build query
query = select(Advertisement)
# Aliases for node joins
ReceiverNode = aliased(Node)
SourceNode = aliased(Node)
# Build query with both receiver and source node joins
query = (
select(
Advertisement,
ReceiverNode.public_key.label("receiver_pk"),
ReceiverNode.name.label("receiver_name"),
ReceiverNode.id.label("receiver_id"),
SourceNode.name.label("source_name"),
SourceNode.id.label("source_id"),
SourceNode.adv_type.label("source_adv_type"),
)
.outerjoin(ReceiverNode, Advertisement.receiver_node_id == ReceiverNode.id)
.outerjoin(SourceNode, Advertisement.node_id == SourceNode.id)
)
if search:
# Search in public key, advertisement name, node name, or name tag
search_pattern = f"%{search}%"
query = query.where(
or_(
Advertisement.public_key.ilike(search_pattern),
Advertisement.name.ilike(search_pattern),
SourceNode.name.ilike(search_pattern),
SourceNode.id.in_(
select(NodeTag.node_id).where(
NodeTag.key == "name", NodeTag.value.ilike(search_pattern)
)
),
)
)
if public_key:
query = query.where(Advertisement.public_key == public_key)
if received_by:
query = query.where(ReceiverNode.public_key == received_by)
if member_id:
# Filter advertisements from nodes that have a member_id tag with the specified value
query = query.where(
SourceNode.id.in_(
select(NodeTag.node_id).where(
NodeTag.key == "member_id", NodeTag.value == member_id
)
)
)
if since:
query = query.where(Advertisement.received_at >= since)
@@ -45,10 +180,59 @@ async def list_advertisements(
query = query.order_by(Advertisement.received_at.desc()).offset(offset).limit(limit)
# Execute
results = session.execute(query).all()
# Collect node IDs to fetch tags
node_ids = set()
for row in results:
if row.receiver_id:
node_ids.add(row.receiver_id)
if row.source_id:
node_ids.add(row.source_id)
# Fetch nodes with tags
nodes_by_id: dict[str, Node] = {}
if node_ids:
nodes_query = (
select(Node).where(Node.id.in_(node_ids)).options(selectinload(Node.tags))
)
nodes = session.execute(nodes_query).scalars().all()
nodes_by_id = {n.id: n for n in nodes}
# Fetch all receivers for these advertisements
event_hashes = [r[0].event_hash for r in results if r[0].event_hash]
receivers_by_hash = _fetch_receivers_for_events(
session, "advertisement", event_hashes
)
# Build response with node details
items = []
for row in results:
adv = row[0]
receiver_node = nodes_by_id.get(row.receiver_id) if row.receiver_id else None
source_node = nodes_by_id.get(row.source_id) if row.source_id else None
data = {
"received_by": row.receiver_pk,
"receiver_name": row.receiver_name,
"receiver_tag_name": _get_tag_name(receiver_node),
"public_key": adv.public_key,
"name": adv.name,
"node_name": row.source_name,
"node_tag_name": _get_tag_name(source_node),
"node_tag_description": _get_tag_description(source_node),
"adv_type": adv.adv_type or row.source_adv_type,
"flags": adv.flags,
"received_at": adv.received_at,
"created_at": adv.created_at,
"receivers": (
receivers_by_hash.get(adv.event_hash, []) if adv.event_hash else []
),
}
items.append(AdvertisementRead(**data))
return AdvertisementList(
items=items,
total=total,
limit=limit,
offset=offset,
@@ -62,10 +246,68 @@ async def get_advertisement(
advertisement_id: str,
) -> AdvertisementRead:
"""Get a single advertisement by ID."""
ReceiverNode = aliased(Node)
SourceNode = aliased(Node)
query = (
select(
Advertisement,
ReceiverNode.public_key.label("receiver_pk"),
ReceiverNode.name.label("receiver_name"),
ReceiverNode.id.label("receiver_id"),
SourceNode.name.label("source_name"),
SourceNode.id.label("source_id"),
SourceNode.adv_type.label("source_adv_type"),
)
.outerjoin(ReceiverNode, Advertisement.receiver_node_id == ReceiverNode.id)
.outerjoin(SourceNode, Advertisement.node_id == SourceNode.id)
.where(Advertisement.id == advertisement_id)
)
result = session.execute(query).one_or_none()
if not result:
raise HTTPException(status_code=404, detail="Advertisement not found")
adv = result[0]
# Fetch nodes with tags for friendly names
node_ids = []
if result.receiver_id:
node_ids.append(result.receiver_id)
if result.source_id:
node_ids.append(result.source_id)
nodes_by_id: dict[str, Node] = {}
if node_ids:
nodes_query = (
select(Node).where(Node.id.in_(node_ids)).options(selectinload(Node.tags))
)
nodes = session.execute(nodes_query).scalars().all()
nodes_by_id = {n.id: n for n in nodes}
receiver_node = nodes_by_id.get(result.receiver_id) if result.receiver_id else None
source_node = nodes_by_id.get(result.source_id) if result.source_id else None
# Fetch receivers for this advertisement
receivers = []
if adv.event_hash:
receivers_by_hash = _fetch_receivers_for_events(
session, "advertisement", [adv.event_hash]
)
receivers = receivers_by_hash.get(adv.event_hash, [])
data = {
"received_by": result.receiver_pk,
"receiver_name": result.receiver_name,
"receiver_tag_name": _get_tag_name(receiver_node),
"public_key": adv.public_key,
"name": adv.name,
"node_name": result.source_name,
"node_tag_name": _get_tag_name(source_node),
"node_tag_description": _get_tag_description(source_node),
"adv_type": adv.adv_type or result.source_adv_type,
"flags": adv.flags,
"received_at": adv.received_at,
"created_at": adv.created_at,
"receivers": receivers,
}
return AdvertisementRead(**data)


@@ -9,7 +9,15 @@ from sqlalchemy import func, select
from meshcore_hub.api.auth import RequireRead
from meshcore_hub.api.dependencies import DbSession
from meshcore_hub.common.models import Advertisement, Message, Node, NodeTag
from meshcore_hub.common.schemas.messages import (
ChannelMessage,
DailyActivity,
DailyActivityPoint,
DashboardStats,
MessageActivity,
NodeCountHistory,
RecentAdvertisement,
)
router = APIRouter()
@@ -23,6 +31,7 @@ async def get_stats(
now = datetime.now(timezone.utc)
today_start = now.replace(hour=0, minute=0, second=0, microsecond=0)
yesterday = now - timedelta(days=1)
seven_days_ago = now - timedelta(days=7)
# Total nodes
total_nodes = session.execute(select(func.count()).select_from(Node)).scalar() or 0
@@ -65,6 +74,26 @@ async def get_stats(
or 0
)
# Advertisements in last 7 days
advertisements_7d = (
session.execute(
select(func.count())
.select_from(Advertisement)
.where(Advertisement.received_at >= seven_days_ago)
).scalar()
or 0
)
# Messages in last 7 days
messages_7d = (
session.execute(
select(func.count())
.select_from(Message)
.where(Message.received_at >= seven_days_ago)
).scalar()
or 0
)
# Recent advertisements (last 10)
recent_ads = (
session.execute(
@@ -74,25 +103,38 @@ async def get_stats(
.all()
)
# Get node names, adv_types, and name tags for the advertised nodes
ad_public_keys = [ad.public_key for ad in recent_ads]
node_names: dict[str, str] = {}
node_adv_types: dict[str, str] = {}
tag_names: dict[str, str] = {}
if ad_public_keys:
# Get node names and adv_types from Node table
node_query = select(Node.public_key, Node.name, Node.adv_type).where(
Node.public_key.in_(ad_public_keys)
)
for public_key, name, adv_type in session.execute(node_query).all():
if name:
node_names[public_key] = name
if adv_type:
node_adv_types[public_key] = adv_type
# Get name tags
tag_name_query = (
select(Node.public_key, NodeTag.value)
.join(NodeTag, Node.id == NodeTag.node_id)
.where(Node.public_key.in_(ad_public_keys))
.where(NodeTag.key == "name")
)
for public_key, value in session.execute(tag_name_query).all():
tag_names[public_key] = value
recent_advertisements = [
RecentAdvertisement(
public_key=ad.public_key,
name=ad.name or node_names.get(ad.public_key),
tag_name=tag_names.get(ad.public_key),
adv_type=ad.adv_type or node_adv_types.get(ad.public_key),
received_at=ad.received_at,
)
for ad in recent_ads
@@ -110,18 +152,218 @@ async def get_stats(
int(channel): int(count) for channel, count in channel_results
}
# Get latest 5 messages for each channel that has messages
channel_messages: dict[int, list[ChannelMessage]] = {}
for channel_idx, _ in channel_results:
messages_query = (
select(Message)
.where(Message.message_type == "channel")
.where(Message.channel_idx == channel_idx)
.order_by(Message.received_at.desc())
.limit(5)
)
channel_msgs = session.execute(messages_query).scalars().all()
# Look up sender names for these messages
msg_prefixes = [m.pubkey_prefix for m in channel_msgs if m.pubkey_prefix]
msg_sender_names: dict[str, str] = {}
msg_tag_names: dict[str, str] = {}
if msg_prefixes:
for prefix in set(msg_prefixes):
sender_node_query = select(Node.public_key, Node.name).where(
Node.public_key.startswith(prefix)
)
for public_key, name in session.execute(sender_node_query).all():
if name:
msg_sender_names[public_key[:12]] = name
sender_tag_query = (
select(Node.public_key, NodeTag.value)
.join(NodeTag, Node.id == NodeTag.node_id)
.where(Node.public_key.startswith(prefix))
.where(NodeTag.key == "name")
)
for public_key, value in session.execute(sender_tag_query).all():
msg_tag_names[public_key[:12]] = value
channel_messages[int(channel_idx)] = [
ChannelMessage(
text=m.text,
sender_name=(
msg_sender_names.get(m.pubkey_prefix) if m.pubkey_prefix else None
),
sender_tag_name=(
msg_tag_names.get(m.pubkey_prefix) if m.pubkey_prefix else None
),
pubkey_prefix=m.pubkey_prefix,
received_at=m.received_at,
)
for m in channel_msgs
]
return DashboardStats(
total_nodes=total_nodes,
active_nodes=active_nodes,
total_messages=total_messages,
messages_today=messages_today,
messages_7d=messages_7d,
total_advertisements=total_advertisements,
advertisements_24h=advertisements_24h,
advertisements_7d=advertisements_7d,
recent_advertisements=recent_advertisements,
channel_message_counts=channel_message_counts,
channel_messages=channel_messages,
)
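The sender lookups above key their name maps by `public_key[:12]`, matching the stored `pubkey_prefix` width. A minimal sketch of that mapping, assuming the 12-hex-character prefix length used in the handlers (`build_prefix_name_map` is an illustrative helper, not part of the codebase):

```python
def build_prefix_name_map(
    nodes: list[tuple[str, str]], prefix_len: int = 12
) -> dict[str, str]:
    """Map the first `prefix_len` chars of each public key to the node name.

    Mirrors how the stats handler keys sender names by public_key[:12], so a
    message's pubkey_prefix can be resolved without a full-key match.
    """
    names: dict[str, str] = {}
    for public_key, name in nodes:
        if name:  # unnamed nodes contribute nothing to the map
            names[public_key[:prefix_len]] = name
    return names


nodes = [
    ("a1b2c3d4e5f60718293a4b5c6d7e8f90", "Repeater West"),
    ("ffee00112233445566778899aabbccdd", ""),  # skipped: no name
]
lookup = build_prefix_name_map(nodes)
```

A message with `pubkey_prefix == "a1b2c3d4e5f6"` then resolves via a plain dict lookup.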
@router.get("/activity", response_model=DailyActivity)
async def get_activity(
_: RequireRead,
session: DbSession,
days: int = 30,
) -> DailyActivity:
"""Get daily advertisement activity for the specified period.
Args:
days: Number of days to include (default 30, max 90)
Returns:
Daily advertisement counts for each day in the period (excluding today)
"""
# Limit to max 90 days
days = min(days, 90)
now = datetime.now(timezone.utc)
# End at start of today (exclude today's incomplete data)
end_date = now.replace(hour=0, minute=0, second=0, microsecond=0)
start_date = end_date - timedelta(days=days)
# Query advertisement counts grouped by date
# Use SQLite's date() function for grouping (returns string 'YYYY-MM-DD')
date_expr = func.date(Advertisement.received_at)
query = (
select(
date_expr.label("date"),
func.count().label("count"),
)
.where(Advertisement.received_at >= start_date)
.where(Advertisement.received_at < end_date)
.group_by(date_expr)
.order_by(date_expr)
)
results = session.execute(query).all()
# Build a dict of date -> count from results (date is already a string)
counts_by_date = {row.date: row.count for row in results}
# Generate all dates in the range, filling in zeros for missing days
data = []
for i in range(days):
date = start_date + timedelta(days=i)
date_str = date.strftime("%Y-%m-%d")
count = counts_by_date.get(date_str, 0)
data.append(DailyActivityPoint(date=date_str, count=count))
return DailyActivity(days=days, data=data)
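The zero-filling loop above can be isolated as a small pure function; `fill_daily_series` is a hypothetical name for the pattern the activity endpoints share (sparse SQL counts expanded to one point per day):

```python
from datetime import datetime, timedelta, timezone


def fill_daily_series(
    start: datetime, days: int, counts_by_date: dict[str, int]
) -> list[tuple[str, int]]:
    """Expand sparse per-day counts into a dense series, zero-filling gaps.

    One ("YYYY-MM-DD", count) point per day; days with no rows report 0,
    matching the shape the /activity endpoints return.
    """
    series: list[tuple[str, int]] = []
    for i in range(days):
        date_str = (start + timedelta(days=i)).strftime("%Y-%m-%d")
        series.append((date_str, counts_by_date.get(date_str, 0)))
    return series


start = datetime(2026, 3, 1, tzinfo=timezone.utc)
series = fill_daily_series(start, 3, {"2026-03-02": 7})
```

This keeps chart rendering simple: the client never has to infer missing days.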
@router.get("/message-activity", response_model=MessageActivity)
async def get_message_activity(
_: RequireRead,
session: DbSession,
days: int = 30,
) -> MessageActivity:
"""Get daily message activity for the specified period.
Args:
days: Number of days to include (default 30, max 90)
Returns:
Daily message counts for each day in the period (excluding today)
"""
days = min(days, 90)
now = datetime.now(timezone.utc)
# End at start of today (exclude today's incomplete data)
end_date = now.replace(hour=0, minute=0, second=0, microsecond=0)
start_date = end_date - timedelta(days=days)
# Query message counts grouped by date
date_expr = func.date(Message.received_at)
query = (
select(
date_expr.label("date"),
func.count().label("count"),
)
.where(Message.received_at >= start_date)
.where(Message.received_at < end_date)
.group_by(date_expr)
.order_by(date_expr)
)
results = session.execute(query).all()
counts_by_date = {row.date: row.count for row in results}
# Generate all dates in the range, filling in zeros for missing days
data = []
for i in range(days):
date = start_date + timedelta(days=i)
date_str = date.strftime("%Y-%m-%d")
count = counts_by_date.get(date_str, 0)
data.append(DailyActivityPoint(date=date_str, count=count))
return MessageActivity(days=days, data=data)
@router.get("/node-count", response_model=NodeCountHistory)
async def get_node_count_history(
_: RequireRead,
session: DbSession,
days: int = 30,
) -> NodeCountHistory:
"""Get cumulative node count over time.
For each day, shows the total number of nodes that existed by that date
(based on their created_at timestamp).
Args:
days: Number of days to include (default 30, max 90)
Returns:
Cumulative node count for each day in the period (excluding today)
"""
days = min(days, 90)
now = datetime.now(timezone.utc)
# End at start of today (exclude today's incomplete data)
end_date = now.replace(hour=0, minute=0, second=0, microsecond=0)
start_date = end_date - timedelta(days=days)
# Count nodes created on or before the end of each day in the range
data = []
for i in range(days):
date = start_date + timedelta(days=i)
end_of_day = date.replace(hour=23, minute=59, second=59, microsecond=999999)
date_str = date.strftime("%Y-%m-%d")
# Count nodes created on or before this date
count = (
session.execute(
select(func.count())
.select_from(Node)
.where(Node.created_at <= end_of_day)
).scalar()
or 0
)
data.append(DailyActivityPoint(date=date_str, count=count))
return NodeCountHistory(days=days, data=data)
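The endpoint above issues one COUNT query per day. What that loop computes can be sketched in a single pass: fetch the `created_at` timestamps once, sort, then binary-search the end-of-day cutoff. This is an illustrative alternative under the same end-of-day semantics, not code from the repo:

```python
from bisect import bisect_right
from datetime import datetime, timedelta


def cumulative_node_counts(
    created_ats: list[datetime], start: datetime, days: int
) -> list[int]:
    """Nodes created by the end of each day, via one sort + binary search.

    bisect_right on the sorted timestamps counts every node whose created_at
    is <= the 23:59:59.999999 cutoff, same as the per-day COUNT(*) loop.
    """
    stamps = sorted(created_ats)
    counts: list[int] = []
    for i in range(days):
        end_of_day = (start + timedelta(days=i)).replace(
            hour=23, minute=59, second=59, microsecond=999999
        )
        counts.append(bisect_right(stamps, end_of_day))
    return counts


created = [
    datetime(2026, 2, 27, 10, 0),
    datetime(2026, 3, 1, 9, 30),
    datetime(2026, 3, 2, 23, 59),
]
counts = cumulative_node_counts(created, datetime(2026, 3, 1), 3)
```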
@router.get("/", response_class=HTMLResponse)
async def dashboard(
request: Request,


@@ -20,7 +20,7 @@ router = APIRouter()
async def list_members(
_: RequireRead,
session: DbSession,
limit: int = Query(default=50, ge=1, le=500),
offset: int = Query(default=0, ge=0),
) -> MemberList:
"""List all members with pagination."""
@@ -28,9 +28,9 @@ async def list_members(
count_query = select(func.count()).select_from(Member)
total = session.execute(count_query).scalar() or 0
# Get members ordered by name
query = select(Member).order_by(Member.name).limit(limit).offset(offset)
members = list(session.execute(query).scalars().all())
return MemberList(
items=[MemberRead.model_validate(m) for m in members],
@@ -63,17 +63,23 @@ async def create_member(
member: MemberCreate,
) -> MemberRead:
"""Create a new member."""
# Normalize public_key to lowercase if provided
public_key = member.public_key.lower() if member.public_key else None
# Check if member_id already exists
query = select(Member).where(Member.member_id == member.member_id)
existing = session.execute(query).scalar_one_or_none()
if existing:
raise HTTPException(
status_code=400,
detail=f"Member with member_id '{member.member_id}' already exists",
)
# Create member
new_member = Member(
member_id=member.member_id,
name=member.name,
callsign=member.callsign,
role=member.role,
description=member.description,
contact=member.contact,
public_key=public_key,
)
session.add(new_member)
session.commit()
@@ -97,6 +103,18 @@ async def update_member(
raise HTTPException(status_code=404, detail="Member not found")
# Update fields
if member.member_id is not None:
# Check if new member_id is already taken by another member
check_query = select(Member).where(
Member.member_id == member.member_id, Member.id != member_id
)
collision = session.execute(check_query).scalar_one_or_none()
if collision:
raise HTTPException(
status_code=400,
detail=f"Member with member_id '{member.member_id}' already exists",
)
existing.member_id = member.member_id
if member.name is not None:
existing.name = member.name
if member.callsign is not None:
@@ -107,8 +125,6 @@ async def update_member(
existing.description = member.description
if member.contact is not None:
existing.contact = member.contact
session.commit()
session.refresh(existing)


@@ -5,15 +5,95 @@ from typing import Optional
from fastapi import APIRouter, HTTPException, Query
from sqlalchemy import func, select
from sqlalchemy.orm import aliased, selectinload
from meshcore_hub.api.auth import RequireRead
from meshcore_hub.api.dependencies import DbSession
from meshcore_hub.common.models import EventReceiver, Message, Node, NodeTag
from meshcore_hub.common.schemas.messages import MessageList, MessageRead, ReceiverInfo
router = APIRouter()
def _get_tag_name(node: Optional[Node]) -> Optional[str]:
"""Extract name tag from a node's tags."""
if not node or not node.tags:
return None
for tag in node.tags:
if tag.key == "name":
return tag.value
return None
def _fetch_receivers_for_events(
session: DbSession,
event_type: str,
event_hashes: list[str],
) -> dict[str, list[ReceiverInfo]]:
"""Fetch receiver info for a list of events by their hashes.
Args:
session: Database session
event_type: Type of event ('message', 'advertisement', etc.)
event_hashes: List of event hashes to fetch receivers for
Returns:
Dict mapping event_hash to list of ReceiverInfo objects
"""
if not event_hashes:
return {}
# Query event_receivers with receiver node info
query = (
select(
EventReceiver.event_hash,
EventReceiver.snr,
EventReceiver.received_at,
Node.id.label("node_id"),
Node.public_key,
Node.name,
)
.join(Node, EventReceiver.receiver_node_id == Node.id)
.where(EventReceiver.event_type == event_type)
.where(EventReceiver.event_hash.in_(event_hashes))
.order_by(EventReceiver.received_at)
)
results = session.execute(query).all()
# Group by event_hash
receivers_by_hash: dict[str, list[ReceiverInfo]] = {}
# Get tag names for receiver nodes
node_ids = [r.node_id for r in results]
tag_names: dict[str, str] = {}
if node_ids:
tag_query = (
select(NodeTag.node_id, NodeTag.value)
.where(NodeTag.node_id.in_(node_ids))
.where(NodeTag.key == "name")
)
for node_id, value in session.execute(tag_query).all():
tag_names[node_id] = value
for row in results:
if row.event_hash not in receivers_by_hash:
receivers_by_hash[row.event_hash] = []
receivers_by_hash[row.event_hash].append(
ReceiverInfo(
node_id=row.node_id,
public_key=row.public_key,
name=row.name,
tag_name=tag_names.get(row.node_id),
snr=row.snr,
received_at=row.received_at,
)
)
return receivers_by_hash
@router.get("", response_model=MessageList)
async def list_messages(
_: RequireRead,
@@ -21,6 +101,9 @@ async def list_messages(
message_type: Optional[str] = Query(None, description="Filter by message type"),
pubkey_prefix: Optional[str] = Query(None, description="Filter by sender prefix"),
channel_idx: Optional[int] = Query(None, description="Filter by channel"),
received_by: Optional[str] = Query(
None, description="Filter by receiver node public key"
),
since: Optional[datetime] = Query(None, description="Start timestamp"),
until: Optional[datetime] = Query(None, description="End timestamp"),
search: Optional[str] = Query(None, description="Search in message text"),
@@ -28,8 +111,16 @@ async def list_messages(
offset: int = Query(0, ge=0, description="Page offset"),
) -> MessageList:
"""List messages with filtering and pagination."""
# Alias for receiver node join
ReceiverNode = aliased(Node)
# Build query with receiver node join
query = select(
Message,
ReceiverNode.public_key.label("receiver_pk"),
ReceiverNode.name.label("receiver_name"),
ReceiverNode.id.label("receiver_id"),
).outerjoin(ReceiverNode, Message.receiver_node_id == ReceiverNode.id)
if message_type:
query = query.where(Message.message_type == message_type)
@@ -40,6 +131,9 @@ async def list_messages(
if channel_idx is not None:
query = query.where(Message.channel_idx == channel_idx)
if received_by:
query = query.where(ReceiverNode.public_key == received_by)
if since:
query = query.where(Message.received_at >= since)
@@ -57,34 +151,77 @@ async def list_messages(
query = query.order_by(Message.received_at.desc()).offset(offset).limit(limit)
# Execute
results = session.execute(query).all()
# Look up sender names and tag names for senders with pubkey_prefix
pubkey_prefixes = [r[0].pubkey_prefix for r in results if r[0].pubkey_prefix]
sender_names: dict[str, str] = {}
sender_tag_names: dict[str, str] = {}
if pubkey_prefixes:
# Find nodes whose public_key starts with any of these prefixes
for prefix in set(pubkey_prefixes):
# Get node name
node_query = select(Node.public_key, Node.name).where(
Node.public_key.startswith(prefix)
)
for public_key, name in session.execute(node_query).all():
if name:
sender_names[public_key[:12]] = name
# Get name tag
tag_name_query = (
select(Node.public_key, NodeTag.value)
.join(NodeTag, Node.id == NodeTag.node_id)
.where(Node.public_key.startswith(prefix))
.where(NodeTag.key == "name")
)
for public_key, value in session.execute(tag_name_query).all():
sender_tag_names[public_key[:12]] = value
# Collect receiver node IDs to fetch tags
receiver_ids = set()
for row in results:
if row.receiver_id:
receiver_ids.add(row.receiver_id)
# Fetch receiver nodes with tags
receivers_by_id: dict[str, Node] = {}
if receiver_ids:
receivers_query = (
select(Node)
.where(Node.id.in_(receiver_ids))
.options(selectinload(Node.tags))
)
receivers = session.execute(receivers_query).scalars().all()
receivers_by_id = {n.id: n for n in receivers}
# Fetch all receivers for these messages
event_hashes = [r[0].event_hash for r in results if r[0].event_hash]
receivers_by_hash = _fetch_receivers_for_events(session, "message", event_hashes)
# Build response with sender info and received_by
items = []
for row in results:
m = row[0]
receiver_pk = row.receiver_pk
receiver_name = row.receiver_name
receiver_node = (
receivers_by_id.get(row.receiver_id) if row.receiver_id else None
)
msg_dict = {
"id": m.id,
"receiver_node_id": m.receiver_node_id,
"received_by": receiver_pk,
"receiver_name": receiver_name,
"receiver_tag_name": _get_tag_name(receiver_node),
"message_type": m.message_type,
"pubkey_prefix": m.pubkey_prefix,
"sender_name": (
sender_names.get(m.pubkey_prefix) if m.pubkey_prefix else None
),
"sender_tag_name": (
sender_tag_names.get(m.pubkey_prefix) if m.pubkey_prefix else None
),
"channel_idx": m.channel_idx,
"text": m.text,
@@ -95,6 +232,9 @@ async def list_messages(
"sender_timestamp": m.sender_timestamp,
"received_at": m.received_at,
"created_at": m.created_at,
"receivers": (
receivers_by_hash.get(m.event_hash, []) if m.event_hash else []
),
}
items.append(MessageRead(**msg_dict))
@@ -113,10 +253,42 @@ async def get_message(
message_id: str,
) -> MessageRead:
"""Get a single message by ID."""
ReceiverNode = aliased(Node)
query = (
select(Message, ReceiverNode.public_key.label("receiver_pk"))
.outerjoin(ReceiverNode, Message.receiver_node_id == ReceiverNode.id)
.where(Message.id == message_id)
)
result = session.execute(query).one_or_none()
if not result:
raise HTTPException(status_code=404, detail="Message not found")
message, receiver_pk = result
# Fetch receivers for this message
receivers = []
if message.event_hash:
receivers_by_hash = _fetch_receivers_for_events(
session, "message", [message.event_hash]
)
receivers = receivers_by_hash.get(message.event_hash, [])
data = {
"id": message.id,
"receiver_node_id": message.receiver_node_id,
"received_by": receiver_pk,
"message_type": message.message_type,
"pubkey_prefix": message.pubkey_prefix,
"channel_idx": message.channel_idx,
"text": message.text,
"path_len": message.path_len,
"txt_type": message.txt_type,
"signature": message.signature,
"snr": message.snr,
"sender_timestamp": message.sender_timestamp,
"received_at": message.received_at,
"created_at": message.created_at,
"receivers": receivers,
}
return MessageRead(**data)


@@ -6,7 +6,13 @@ from sqlalchemy import select
from meshcore_hub.api.auth import RequireAdmin, RequireRead
from meshcore_hub.api.dependencies import DbSession
from meshcore_hub.common.models import Node, NodeTag
from meshcore_hub.common.schemas.nodes import (
NodeTagCreate,
NodeTagMove,
NodeTagRead,
NodeTagsCopyResult,
NodeTagUpdate,
)
router = APIRouter()
@@ -130,6 +136,131 @@ async def update_node_tag(
return NodeTagRead.model_validate(node_tag)
@router.put("/nodes/{public_key}/tags/{key}/move", response_model=NodeTagRead)
async def move_node_tag(
_: RequireAdmin,
session: DbSession,
public_key: str,
key: str,
data: NodeTagMove,
) -> NodeTagRead:
"""Move a node tag to a different node."""
# Check if source and destination are the same
if public_key == data.new_public_key:
raise HTTPException(
status_code=400,
detail="Source and destination nodes are the same",
)
# Find source node
source_query = select(Node).where(Node.public_key == public_key)
source_node = session.execute(source_query).scalar_one_or_none()
if not source_node:
raise HTTPException(status_code=404, detail="Source node not found")
# Find tag
tag_query = select(NodeTag).where(
(NodeTag.node_id == source_node.id) & (NodeTag.key == key)
)
node_tag = session.execute(tag_query).scalar_one_or_none()
if not node_tag:
raise HTTPException(status_code=404, detail="Tag not found")
# Find destination node
dest_query = select(Node).where(Node.public_key == data.new_public_key)
dest_node = session.execute(dest_query).scalar_one_or_none()
if not dest_node:
raise HTTPException(status_code=404, detail="Destination node not found")
# Check if tag already exists on destination node
conflict_query = select(NodeTag).where(
(NodeTag.node_id == dest_node.id) & (NodeTag.key == key)
)
conflict = session.execute(conflict_query).scalar_one_or_none()
if conflict:
raise HTTPException(
status_code=409,
detail=f"Tag '{key}' already exists on destination node",
)
# Move tag to destination node
node_tag.node_id = dest_node.id
session.commit()
session.refresh(node_tag)
return NodeTagRead.model_validate(node_tag)
@router.post(
"/nodes/{public_key}/tags/copy-to/{dest_public_key}",
response_model=NodeTagsCopyResult,
)
async def copy_all_tags(
_: RequireAdmin,
session: DbSession,
public_key: str,
dest_public_key: str,
) -> NodeTagsCopyResult:
"""Copy all tags from one node to another.
Tags that already exist on the destination node are skipped.
"""
# Check if source and destination are the same
if public_key == dest_public_key:
raise HTTPException(
status_code=400,
detail="Source and destination nodes are the same",
)
# Find source node
source_query = select(Node).where(Node.public_key == public_key)
source_node = session.execute(source_query).scalar_one_or_none()
if not source_node:
raise HTTPException(status_code=404, detail="Source node not found")
# Find destination node
dest_query = select(Node).where(Node.public_key == dest_public_key)
dest_node = session.execute(dest_query).scalar_one_or_none()
if not dest_node:
raise HTTPException(status_code=404, detail="Destination node not found")
# Get existing tags on destination node
existing_query = select(NodeTag.key).where(NodeTag.node_id == dest_node.id)
existing_keys = set(session.execute(existing_query).scalars().all())
# Copy tags
copied = 0
skipped_keys = []
for tag in source_node.tags:
if tag.key in existing_keys:
skipped_keys.append(tag.key)
continue
new_tag = NodeTag(
node_id=dest_node.id,
key=tag.key,
value=tag.value,
value_type=tag.value_type,
)
session.add(new_tag)
copied += 1
session.commit()
return NodeTagsCopyResult(
copied=copied,
skipped=len(skipped_keys),
skipped_keys=skipped_keys,
)
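The copy endpoint's skip-existing policy is easy to factor out and test. `plan_tag_copy` below is an illustrative helper under that same policy (destination keys are never overwritten), not a function from the codebase:

```python
def plan_tag_copy(
    source_tags: dict[str, str], existing_keys: set[str]
) -> tuple[dict[str, str], list[str]]:
    """Split a source node's tags into ones to copy and ones to skip.

    Keys already present on the destination are only reported as skipped,
    mirroring the copy-to endpoint's non-destructive behavior.
    """
    to_copy = {k: v for k, v in source_tags.items() if k not in existing_keys}
    skipped = [k for k in source_tags if k in existing_keys]
    return to_copy, skipped


to_copy, skipped = plan_tag_copy(
    {"name": "Hilltop", "role": "infra", "member_id": "M-12"},
    existing_keys={"name"},
)
```

The endpoint's `NodeTagsCopyResult` then just reports `len(to_copy)`, `len(skipped)`, and the skipped keys.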
@router.delete("/nodes/{public_key}/tags/{key}", status_code=204)
async def delete_node_tag(
_: RequireAdmin,
@@ -156,3 +287,27 @@ async def delete_node_tag(
session.delete(node_tag)
session.commit()
@router.delete("/nodes/{public_key}/tags")
async def delete_all_node_tags(
_: RequireAdmin,
session: DbSession,
public_key: str,
) -> dict:
"""Delete all tags for a node."""
# Find node
node_query = select(Node).where(Node.public_key == public_key)
node = session.execute(node_query).scalar_one_or_none()
if not node:
raise HTTPException(status_code=404, detail="Node not found")
# Count and delete all tags
count = len(node.tags)
for tag in node.tags:
session.delete(tag)
session.commit()
return {"deleted": count}


@@ -2,12 +2,13 @@
from typing import Optional
from fastapi import APIRouter, HTTPException, Path, Query
from sqlalchemy import func, or_, select
from sqlalchemy.orm import selectinload
from meshcore_hub.api.auth import RequireRead
from meshcore_hub.api.dependencies import DbSession
from meshcore_hub.common.models import Node, NodeTag
from meshcore_hub.common.schemas.nodes import NodeList, NodeRead
router = APIRouter()
@@ -17,28 +18,63 @@ router = APIRouter()
async def list_nodes(
_: RequireRead,
session: DbSession,
search: Optional[str] = Query(
None, description="Search in name tag, node name, or public key"
),
adv_type: Optional[str] = Query(None, description="Filter by advertisement type"),
member_id: Optional[str] = Query(None, description="Filter by member_id tag value"),
role: Optional[str] = Query(None, description="Filter by role tag value"),
limit: int = Query(50, ge=1, le=500, description="Page size"),
offset: int = Query(0, ge=0, description="Page offset"),
) -> NodeList:
"""List all nodes with pagination and filtering."""
# Build base query with tags loaded
query = select(Node).options(selectinload(Node.tags))
if search:
# Search in public key, node name, or name tag
# For name tag search, we need to join with NodeTag
search_pattern = f"%{search}%"
query = query.where(
or_(
Node.public_key.ilike(search_pattern),
Node.name.ilike(search_pattern),
Node.id.in_(
select(NodeTag.node_id).where(
NodeTag.key == "name", NodeTag.value.ilike(search_pattern)
)
),
)
)
if adv_type:
query = query.where(Node.adv_type == adv_type)
if member_id:
# Filter nodes that have a member_id tag with the specified value
query = query.where(
Node.id.in_(
select(NodeTag.node_id).where(
NodeTag.key == "member_id", NodeTag.value == member_id
)
)
)
if role:
# Filter nodes that have a role tag with the specified value
query = query.where(
Node.id.in_(
select(NodeTag.node_id).where(
NodeTag.key == "role", NodeTag.value == role
)
)
)
# Get total count
count_query = select(func.count()).select_from(query.subquery())
total = session.execute(count_query).scalar() or 0
# Apply pagination
# Apply pagination and ordering
query = query.order_by(Node.last_seen.desc()).offset(offset).limit(limit)
# Execute
@@ -52,14 +88,43 @@ async def list_nodes(
)
@router.get("/{public_key}", response_model=NodeRead)
async def get_node(
@router.get("/prefix/{prefix}", response_model=NodeRead)
async def get_node_by_prefix(
_: RequireRead,
session: DbSession,
public_key: str,
prefix: str = Path(description="Public key prefix to search for"),
) -> NodeRead:
"""Get a single node by public key."""
query = select(Node).where(Node.public_key == public_key)
"""Get a single node by public key prefix.
Returns the first node (alphabetically by public_key) that matches the prefix.
"""
query = (
select(Node)
.options(selectinload(Node.tags))
.where(Node.public_key.startswith(prefix))
.order_by(Node.public_key)
.limit(1)
)
node = session.execute(query).scalar_one_or_none()
if not node:
raise HTTPException(status_code=404, detail="Node not found")
return NodeRead.model_validate(node)
@router.get("/{public_key}", response_model=NodeRead)
async def get_node(
_: RequireRead,
session: DbSession,
public_key: str = Path(description="Full 64-character public key"),
) -> NodeRead:
"""Get a single node by exact public key match."""
query = (
select(Node)
.options(selectinload(Node.tags))
.where(Node.public_key == public_key)
)
node = session.execute(query).scalar_one_or_none()
if not node:

View File

@@ -5,10 +5,11 @@ from typing import Optional
from fastapi import APIRouter, HTTPException, Query
from sqlalchemy import func, select
from sqlalchemy.orm import aliased
from meshcore_hub.api.auth import RequireRead
from meshcore_hub.api.dependencies import DbSession
from meshcore_hub.common.models import Telemetry
from meshcore_hub.common.models import Node, Telemetry
from meshcore_hub.common.schemas.messages import TelemetryList, TelemetryRead
router = APIRouter()
@@ -19,18 +20,29 @@ async def list_telemetry(
_: RequireRead,
session: DbSession,
node_public_key: Optional[str] = Query(None, description="Filter by node"),
received_by: Optional[str] = Query(
None, description="Filter by receiver node public key"
),
since: Optional[datetime] = Query(None, description="Start timestamp"),
until: Optional[datetime] = Query(None, description="End timestamp"),
limit: int = Query(50, ge=1, le=100, description="Page size"),
offset: int = Query(0, ge=0, description="Page offset"),
) -> TelemetryList:
"""List telemetry records with filtering and pagination."""
# Build query
query = select(Telemetry)
# Alias for receiver node join
ReceiverNode = aliased(Node)
# Build query with receiver node join
query = select(Telemetry, ReceiverNode.public_key.label("receiver_pk")).outerjoin(
ReceiverNode, Telemetry.receiver_node_id == ReceiverNode.id
)
if node_public_key:
query = query.where(Telemetry.node_public_key == node_public_key)
if received_by:
query = query.where(ReceiverNode.public_key == received_by)
if since:
query = query.where(Telemetry.received_at >= since)
@@ -45,10 +57,25 @@ async def list_telemetry(
query = query.order_by(Telemetry.received_at.desc()).offset(offset).limit(limit)
# Execute
records = session.execute(query).scalars().all()
results = session.execute(query).all()
# Build response with received_by
items = []
for tel, receiver_pk in results:
data = {
"id": tel.id,
"receiver_node_id": tel.receiver_node_id,
"received_by": receiver_pk,
"node_id": tel.node_id,
"node_public_key": tel.node_public_key,
"parsed_data": tel.parsed_data,
"received_at": tel.received_at,
"created_at": tel.created_at,
}
items.append(TelemetryRead(**data))
return TelemetryList(
items=[TelemetryRead.model_validate(t) for t in records],
items=items,
total=total,
limit=limit,
offset=offset,
@@ -62,10 +89,26 @@ async def get_telemetry(
telemetry_id: str,
) -> TelemetryRead:
"""Get a single telemetry record by ID."""
query = select(Telemetry).where(Telemetry.id == telemetry_id)
telemetry = session.execute(query).scalar_one_or_none()
ReceiverNode = aliased(Node)
query = (
select(Telemetry, ReceiverNode.public_key.label("receiver_pk"))
.outerjoin(ReceiverNode, Telemetry.receiver_node_id == ReceiverNode.id)
.where(Telemetry.id == telemetry_id)
)
result = session.execute(query).one_or_none()
if not telemetry:
if not result:
raise HTTPException(status_code=404, detail="Telemetry record not found")
return TelemetryRead.model_validate(telemetry)
tel, receiver_pk = result
data = {
"id": tel.id,
"receiver_node_id": tel.receiver_node_id,
"received_by": receiver_pk,
"node_id": tel.node_id,
"node_public_key": tel.node_public_key,
"parsed_data": tel.parsed_data,
"received_at": tel.received_at,
"created_at": tel.created_at,
}
return TelemetryRead(**data)
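The aliased outer join above pairs each telemetry row with its receiver's public key, yielding None where no receiver is linked. A Core-level sketch of the same join shape, assuming SQLAlchemy 2.0; the two tables are stand-ins for the real models:

```python
# Core-level sketch of the receiver join: LEFT OUTER JOIN keeps telemetry
# rows even when receiver_node_id is NULL, yielding None for the key.
from sqlalchemy import (
    Column, ForeignKey, Integer, MetaData, String, Table,
    create_engine, insert, select,
)

md = MetaData()
nodes = Table(
    "nodes", md,
    Column("id", Integer, primary_key=True),
    Column("public_key", String),
)
telemetry = Table(
    "telemetry", md,
    Column("id", Integer, primary_key=True),
    Column("receiver_node_id", Integer, ForeignKey("nodes.id"), nullable=True),
)

engine = create_engine("sqlite://")
md.create_all(engine)
receiver = nodes.alias("receiver")  # Core counterpart of aliased(Node)

with engine.begin() as conn:
    conn.execute(insert(nodes), [{"id": 1, "public_key": "aa" * 32}])
    conn.execute(insert(telemetry), [
        {"id": 10, "receiver_node_id": 1},
        {"id": 11, "receiver_node_id": None},  # no receiver linked
    ])
    rows = [
        tuple(r)
        for r in conn.execute(
            select(telemetry.c.id, receiver.c.public_key)
            .outerjoin(receiver, telemetry.c.receiver_node_id == receiver.c.id)
            .order_by(telemetry.c.id)
        )
    ]

print(rows)  # second row carries None for the receiver key
```

An inner join would silently drop row 11; the outer join is what lets the endpoint still serve telemetry whose receiver node record is missing.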

View File

@@ -5,10 +5,11 @@ from typing import Optional
from fastapi import APIRouter, HTTPException, Query
from sqlalchemy import func, select
from sqlalchemy.orm import aliased
from meshcore_hub.api.auth import RequireRead
from meshcore_hub.api.dependencies import DbSession
from meshcore_hub.common.models import TracePath
from meshcore_hub.common.models import Node, TracePath
from meshcore_hub.common.schemas.messages import TracePathList, TracePathRead
router = APIRouter()
@@ -18,14 +19,25 @@ router = APIRouter()
async def list_trace_paths(
_: RequireRead,
session: DbSession,
received_by: Optional[str] = Query(
None, description="Filter by receiver node public key"
),
since: Optional[datetime] = Query(None, description="Start timestamp"),
until: Optional[datetime] = Query(None, description="End timestamp"),
limit: int = Query(50, ge=1, le=100, description="Page size"),
offset: int = Query(0, ge=0, description="Page offset"),
) -> TracePathList:
"""List trace paths with filtering and pagination."""
# Build query
query = select(TracePath)
# Alias for receiver node join
ReceiverNode = aliased(Node)
# Build query with receiver node join
query = select(TracePath, ReceiverNode.public_key.label("receiver_pk")).outerjoin(
ReceiverNode, TracePath.receiver_node_id == ReceiverNode.id
)
if received_by:
query = query.where(ReceiverNode.public_key == received_by)
if since:
query = query.where(TracePath.received_at >= since)
@@ -41,10 +53,29 @@ async def list_trace_paths(
query = query.order_by(TracePath.received_at.desc()).offset(offset).limit(limit)
# Execute
trace_paths = session.execute(query).scalars().all()
results = session.execute(query).all()
# Build response with received_by
items = []
for tp, receiver_pk in results:
data = {
"id": tp.id,
"receiver_node_id": tp.receiver_node_id,
"received_by": receiver_pk,
"initiator_tag": tp.initiator_tag,
"path_len": tp.path_len,
"flags": tp.flags,
"auth": tp.auth,
"path_hashes": tp.path_hashes,
"snr_values": tp.snr_values,
"hop_count": tp.hop_count,
"received_at": tp.received_at,
"created_at": tp.created_at,
}
items.append(TracePathRead(**data))
return TracePathList(
items=[TracePathRead.model_validate(t) for t in trace_paths],
items=items,
total=total,
limit=limit,
offset=offset,
@@ -58,10 +89,30 @@ async def get_trace_path(
trace_path_id: str,
) -> TracePathRead:
"""Get a single trace path by ID."""
query = select(TracePath).where(TracePath.id == trace_path_id)
trace_path = session.execute(query).scalar_one_or_none()
ReceiverNode = aliased(Node)
query = (
select(TracePath, ReceiverNode.public_key.label("receiver_pk"))
.outerjoin(ReceiverNode, TracePath.receiver_node_id == ReceiverNode.id)
.where(TracePath.id == trace_path_id)
)
result = session.execute(query).one_or_none()
if not trace_path:
if not result:
raise HTTPException(status_code=404, detail="Trace path not found")
return TracePathRead.model_validate(trace_path)
tp, receiver_pk = result
data = {
"id": tp.id,
"receiver_node_id": tp.receiver_node_id,
"received_by": receiver_pk,
"initiator_tag": tp.initiator_tag,
"path_len": tp.path_len,
"flags": tp.flags,
"auth": tp.auth,
"path_hashes": tp.path_hashes,
"snr_values": tp.snr_values,
"hop_count": tp.hop_count,
"received_at": tp.received_at,
"created_at": tp.created_at,
}
return TracePathRead(**data)

View File

@@ -0,0 +1,225 @@
"""Data retention and cleanup service for MeshCore Hub.
This module provides functionality to delete old event data and inactive nodes
based on configured retention policies.
"""
import logging
from datetime import datetime, timedelta, timezone
from sqlalchemy import delete, func, select
from sqlalchemy.ext.asyncio import AsyncSession
from meshcore_hub.common.models import (
Advertisement,
EventLog,
Message,
Node,
Telemetry,
TracePath,
)
logger = logging.getLogger(__name__)
class CleanupStats:
"""Statistics from a cleanup operation."""
def __init__(self) -> None:
self.advertisements_deleted: int = 0
self.messages_deleted: int = 0
self.telemetry_deleted: int = 0
self.trace_paths_deleted: int = 0
self.event_logs_deleted: int = 0
self.nodes_deleted: int = 0
self.total_deleted: int = 0
def __repr__(self) -> str:
return (
f"CleanupStats(total={self.total_deleted}, "
f"advertisements={self.advertisements_deleted}, "
f"messages={self.messages_deleted}, "
f"telemetry={self.telemetry_deleted}, "
f"trace_paths={self.trace_paths_deleted}, "
f"event_logs={self.event_logs_deleted}, "
f"nodes={self.nodes_deleted})"
)
async def cleanup_old_data(
db: AsyncSession,
retention_days: int,
dry_run: bool = False,
) -> CleanupStats:
"""Delete event data older than the retention period.
Args:
db: Database session
retention_days: Number of days to retain data
dry_run: If True, only count records without deleting
Returns:
CleanupStats object with deletion counts
"""
stats = CleanupStats()
cutoff_date = datetime.now(timezone.utc) - timedelta(days=retention_days)
logger.info(
"Starting data cleanup (dry_run=%s, retention_days=%d, cutoff=%s)",
dry_run,
retention_days,
cutoff_date.isoformat(),
)
# Clean up advertisements
stats.advertisements_deleted = await _cleanup_table(
db, Advertisement, cutoff_date, "advertisements", dry_run
)
# Clean up messages
stats.messages_deleted = await _cleanup_table(
db, Message, cutoff_date, "messages", dry_run
)
# Clean up telemetry
stats.telemetry_deleted = await _cleanup_table(
db, Telemetry, cutoff_date, "telemetry", dry_run
)
# Clean up trace paths
stats.trace_paths_deleted = await _cleanup_table(
db, TracePath, cutoff_date, "trace_paths", dry_run
)
# Clean up event logs
stats.event_logs_deleted = await _cleanup_table(
db, EventLog, cutoff_date, "event_logs", dry_run
)
stats.total_deleted = (
stats.advertisements_deleted
+ stats.messages_deleted
+ stats.telemetry_deleted
+ stats.trace_paths_deleted
+ stats.event_logs_deleted
)
if not dry_run:
await db.commit()
logger.info("Cleanup completed: %s", stats)
else:
logger.info("Cleanup dry run completed: %s", stats)
return stats
async def _cleanup_table(
db: AsyncSession,
model: type,
cutoff_date: datetime,
table_name: str,
dry_run: bool,
) -> int:
"""Delete old records from a specific table.
Args:
db: Database session
model: SQLAlchemy model class
cutoff_date: Delete records older than this date
table_name: Name of table for logging
dry_run: If True, only count without deleting
Returns:
Number of records deleted (or would be deleted in dry_run)
"""
if dry_run:
# Count records that would be deleted
stmt = (
select(func.count())
.select_from(model)
.where(model.created_at < cutoff_date) # type: ignore[attr-defined]
)
result = await db.execute(stmt)
count = result.scalar() or 0
logger.debug(
"[DRY RUN] Would delete %d records from %s older than %s",
count,
table_name,
cutoff_date.isoformat(),
)
return count
else:
# Delete old records
result = await db.execute(
delete(model).where(model.created_at < cutoff_date)  # type: ignore[attr-defined]
)
count = result.rowcount or 0 # type: ignore[attr-defined]
logger.debug(
"Deleted %d records from %s older than %s",
count,
table_name,
cutoff_date.isoformat(),
)
return count
async def cleanup_inactive_nodes(
db: AsyncSession,
inactivity_days: int,
dry_run: bool = False,
) -> int:
"""Delete nodes that haven't been seen for the specified number of days.
Only deletes nodes where last_seen is older than the cutoff date.
Nodes with last_seen=NULL are NOT deleted (they have never been seen on the network).
Args:
db: Database session
inactivity_days: Delete nodes not seen for this many days
dry_run: If True, only count without deleting
Returns:
Number of nodes deleted (or would be deleted in dry_run)
"""
cutoff_date = datetime.now(timezone.utc) - timedelta(days=inactivity_days)
logger.info(
"Starting node cleanup (dry_run=%s, inactivity_days=%d, cutoff=%s)",
dry_run,
inactivity_days,
cutoff_date.isoformat(),
)
if dry_run:
# Count nodes that would be deleted
# Only count nodes with last_seen < cutoff (excludes NULL last_seen)
stmt = (
select(func.count())
.select_from(Node)
.where(Node.last_seen < cutoff_date)
.where(Node.last_seen.isnot(None))
)
result = await db.execute(stmt)
count = result.scalar() or 0
logger.info(
"[DRY RUN] Would delete %d nodes not seen since %s",
count,
cutoff_date.isoformat(),
)
return count
else:
# Delete inactive nodes
# Only delete nodes with last_seen < cutoff (excludes NULL last_seen)
result = await db.execute(
delete(Node)
.where(Node.last_seen < cutoff_date)
.where(Node.last_seen.isnot(None))
)
await db.commit()
count = result.rowcount or 0 # type: ignore[attr-defined]
logger.info(
"Deleted %d nodes not seen since %s",
count,
cutoff_date.isoformat(),
)
return count
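Both cleanup paths reduce to one comparison: a timezone-aware cutoff computed as now minus the retention window, with NULL timestamps exempt. A pure-Python sketch of that rule; `select_expired` is a hypothetical helper, not part of the service:

```python
# Sketch of the retention cutoff used by cleanup_old_data and
# cleanup_inactive_nodes: timezone-aware "now" minus the retention window.
# Records with a None timestamp are kept, mirroring the NULL last_seen rule.
from datetime import datetime, timedelta, timezone

def select_expired(timestamps, retention_days, *, now=None):
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=retention_days)
    return [ts for ts in timestamps if ts is not None and ts < cutoff]

now = datetime(2026, 3, 1, tzinfo=timezone.utc)
seen = [
    datetime(2026, 1, 1, tzinfo=timezone.utc),   # 59 days old: expired
    datetime(2026, 2, 20, tzinfo=timezone.utc),  # 9 days old: kept
    None,                                        # never seen: kept
]
expired = select_expired(seen, 30, now=now)
print(len(expired))  # 1
```

Using aware datetimes throughout avoids the naive-vs-aware comparison TypeError that mixing `datetime.now()` with UTC timestamps would raise.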

View File

@@ -1,9 +1,14 @@
"""CLI for the Collector component."""
from typing import TYPE_CHECKING
import click
from meshcore_hub.common.logging import configure_logging
if TYPE_CHECKING:
from meshcore_hub.common.database import DatabaseManager
@click.group(invoke_without_command=True)
@click.pass_context
@@ -42,6 +47,13 @@ from meshcore_hub.common.logging import configure_logging
envvar="MQTT_PREFIX",
help="MQTT topic prefix",
)
@click.option(
"--mqtt-tls",
is_flag=True,
default=False,
envvar="MQTT_TLS",
help="Enable TLS/SSL for MQTT connection",
)
@click.option(
"--data-home",
type=str,
@@ -77,6 +89,7 @@ def collector(
mqtt_username: str | None,
mqtt_password: str | None,
prefix: str,
mqtt_tls: bool,
data_home: str | None,
seed_home: str | None,
database_url: str | None,
@@ -120,6 +133,7 @@ def collector(
ctx.obj["mqtt_username"] = mqtt_username
ctx.obj["mqtt_password"] = mqtt_password
ctx.obj["prefix"] = prefix
ctx.obj["mqtt_tls"] = mqtt_tls
ctx.obj["data_home"] = data_home or settings.data_home
ctx.obj["seed_home"] = settings.effective_seed_home
ctx.obj["database_url"] = effective_db_url
@@ -134,9 +148,11 @@ def collector(
mqtt_username=mqtt_username,
mqtt_password=mqtt_password,
prefix=prefix,
mqtt_tls=mqtt_tls,
database_url=effective_db_url,
log_level=log_level,
data_home=data_home or settings.data_home,
seed_home=settings.effective_seed_home,
)
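The new --mqtt-tls option combines `is_flag` with `envvar`, so the flag can be switched on from the environment as well as the command line. A minimal sketch assuming Click is installed; the command body here is hypothetical:

```python
# Sketch of the --mqtt-tls flag pattern: a boolean Click flag that can
# also be enabled via the MQTT_TLS environment variable. Click parses the
# env value as a boolean ("1", "true", "yes" all enable the flag).
import click
from click.testing import CliRunner

@click.command()
@click.option(
    "--mqtt-tls",
    is_flag=True,
    default=False,
    envvar="MQTT_TLS",
    help="Enable TLS/SSL for MQTT connection",
)
def collector(mqtt_tls: bool) -> None:
    click.echo(f"tls={mqtt_tls}")

runner = CliRunner()
print(runner.invoke(collector, []).output.strip())                         # tls=False
print(runner.invoke(collector, ["--mqtt-tls"]).output.strip())             # tls=True
print(runner.invoke(collector, [], env={"MQTT_TLS": "1"}).output.strip())  # tls=True
```

This matches how the other MQTT options in the command already fall back to environment variables, keeping Docker Compose configuration flag-free.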
@@ -146,12 +162,17 @@ def _run_collector_service(
mqtt_username: str | None,
mqtt_password: str | None,
prefix: str,
mqtt_tls: bool,
database_url: str,
log_level: str,
data_home: str,
seed_home: str,
) -> None:
"""Run the collector service.
Note: Seed data import should be done using the 'meshcore-hub collector seed'
command or the dedicated seed container before starting the collector service.
Webhooks can be configured via environment variables:
- WEBHOOK_ADVERTISEMENT_URL: Webhook for advertisement events
- WEBHOOK_MESSAGE_URL: Webhook for all message events
@@ -168,20 +189,22 @@ def _run_collector_service(
click.echo("Starting MeshCore Collector")
click.echo(f"Data home: {data_home}")
click.echo(f"Seed home: {seed_home}")
click.echo(f"MQTT: {mqtt_host}:{mqtt_port} (prefix: {prefix})")
click.echo(f"Database: {database_url}")
# Load webhook configuration from settings
from meshcore_hub.common.config import get_collector_settings
from meshcore_hub.collector.webhook import (
WebhookDispatcher,
create_webhooks_from_settings,
)
from meshcore_hub.common.config import get_collector_settings
settings = get_collector_settings()
webhooks = create_webhooks_from_settings(settings)
webhook_dispatcher = WebhookDispatcher(webhooks) if webhooks else None
click.echo("")
if webhook_dispatcher and webhook_dispatcher.webhooks:
click.echo(f"Webhooks configured: {len(webhooks)}")
for wh in webhooks:
@@ -191,14 +214,42 @@ def _run_collector_service(
from meshcore_hub.collector.subscriber import run_collector
# Show cleanup configuration
click.echo("")
click.echo("Cleanup configuration:")
if settings.data_retention_enabled:
click.echo(
f" Event data: Enabled (retention: {settings.data_retention_days} days)"
)
else:
click.echo(" Event data: Disabled")
if settings.node_cleanup_enabled:
click.echo(
f" Inactive nodes: Enabled (inactivity: {settings.node_cleanup_days} days)"
)
else:
click.echo(" Inactive nodes: Disabled")
if settings.data_retention_enabled or settings.node_cleanup_enabled:
click.echo(f" Interval: {settings.data_retention_interval_hours} hours")
click.echo("")
click.echo("Starting MQTT subscriber...")
run_collector(
mqtt_host=mqtt_host,
mqtt_port=mqtt_port,
mqtt_username=mqtt_username,
mqtt_password=mqtt_password,
mqtt_prefix=prefix,
mqtt_tls=mqtt_tls,
database_url=database_url,
webhook_dispatcher=webhook_dispatcher,
cleanup_enabled=settings.data_retention_enabled,
cleanup_retention_days=settings.data_retention_days,
cleanup_interval_hours=settings.data_retention_interval_hours,
node_cleanup_enabled=settings.node_cleanup_enabled,
node_cleanup_days=settings.node_cleanup_days,
)
@@ -215,9 +266,11 @@ def run_cmd(ctx: click.Context) -> None:
mqtt_username=ctx.obj["mqtt_username"],
mqtt_password=ctx.obj["mqtt_password"],
prefix=ctx.obj["prefix"],
mqtt_tls=ctx.obj["mqtt_tls"],
database_url=ctx.obj["database_url"],
log_level=ctx.obj["log_level"],
data_home=ctx.obj["data_home"],
seed_home=ctx.obj["seed_home"],
)
@@ -236,17 +289,15 @@ def seed_cmd(
"""Import seed data from SEED_HOME directory.
Looks for the following files in SEED_HOME:
- node_tags.json: Node tag definitions (keyed by public_key)
- members.json: Network member definitions
- node_tags.yaml: Node tag definitions (keyed by public_key)
- members.yaml: Network member definitions
Files that don't exist are skipped. This command is idempotent -
existing records are updated, new records are created.
SEED_HOME defaults to {DATA_HOME}/collector but can be overridden
SEED_HOME defaults to ./seed but can be overridden
with the --seed-home option or SEED_HOME environment variable.
"""
from pathlib import Path
configure_logging(level=ctx.obj["log_level"])
seed_home = ctx.obj["seed_home"]
@@ -254,50 +305,17 @@ def seed_cmd(
click.echo(f"Database: {ctx.obj['database_url']}")
from meshcore_hub.common.database import DatabaseManager
from meshcore_hub.collector.tag_import import import_tags
from meshcore_hub.collector.member_import import import_members
# Initialize database
# Initialize database (schema managed by Alembic migrations)
db = DatabaseManager(ctx.obj["database_url"])
db.create_tables()
# Track what was imported
imported_any = False
# Import node tags if file exists
node_tags_file = Path(seed_home) / "node_tags.json"
if node_tags_file.exists():
click.echo(f"\nImporting node tags from: {node_tags_file}")
stats = import_tags(
file_path=str(node_tags_file),
db=db,
create_nodes=not no_create_nodes,
)
click.echo(f" Tags: {stats['created']} created, {stats['updated']} updated")
if stats["nodes_created"]:
click.echo(f" Nodes created: {stats['nodes_created']}")
if stats["errors"]:
for error in stats["errors"]:
click.echo(f" Error: {error}", err=True)
imported_any = True
else:
click.echo(f"\nNo node_tags.json found in {seed_home}")
# Import members if file exists
members_file = Path(seed_home) / "members.json"
if members_file.exists():
click.echo(f"\nImporting members from: {members_file}")
stats = import_members(
file_path=str(members_file),
db=db,
)
click.echo(f" Members: {stats['created']} created, {stats['updated']} updated")
if stats["errors"]:
for error in stats["errors"]:
click.echo(f" Error: {error}", err=True)
imported_any = True
else:
click.echo(f"\nNo members.json found in {seed_home}")
# Run seed import
imported_any = _run_seed_import(
seed_home=seed_home,
db=db,
create_nodes=not no_create_nodes,
verbose=True,
)
if not imported_any:
click.echo("\nNo seed files found. Nothing to import.")
@@ -307,6 +325,79 @@ def seed_cmd(
db.dispose()
def _run_seed_import(
seed_home: str,
db: "DatabaseManager",
create_nodes: bool = True,
verbose: bool = False,
) -> bool:
"""Run seed import from SEED_HOME directory.
Args:
seed_home: Path to seed home directory
db: Database manager instance
create_nodes: If True, create nodes that don't exist
verbose: If True, output progress messages
Returns:
True if any files were imported, False otherwise
"""
from pathlib import Path
from meshcore_hub.collector.member_import import import_members
from meshcore_hub.collector.tag_import import import_tags
imported_any = False
# Import node tags if file exists
node_tags_file = Path(seed_home) / "node_tags.yaml"
if node_tags_file.exists():
if verbose:
click.echo(f"\nImporting node tags from: {node_tags_file}")
stats = import_tags(
file_path=str(node_tags_file),
db=db,
create_nodes=create_nodes,
clear_existing=True,
)
if verbose:
if stats["deleted"]:
click.echo(f" Deleted {stats['deleted']} existing tags")
click.echo(
f" Tags: {stats['created']} created, {stats['updated']} updated"
)
if stats["nodes_created"]:
click.echo(f" Nodes created: {stats['nodes_created']}")
if stats["errors"]:
for error in stats["errors"]:
click.echo(f" Error: {error}", err=True)
imported_any = True
elif verbose:
click.echo(f"\nNo node_tags.yaml found in {seed_home}")
# Import members if file exists
members_file = Path(seed_home) / "members.yaml"
if members_file.exists():
if verbose:
click.echo(f"\nImporting members from: {members_file}")
stats = import_members(
file_path=str(members_file),
db=db,
)
if verbose:
click.echo(
f" Members: {stats['created']} created, {stats['updated']} updated"
)
if stats["errors"]:
for error in stats["errors"]:
click.echo(f" Error: {error}", err=True)
imported_any = True
elif verbose:
click.echo(f"\nNo members.yaml found in {seed_home}")
return imported_any
@collector.command("import-tags")
@click.argument("file", type=click.Path(), required=False, default=None)
@click.option(
@@ -315,40 +406,48 @@ def seed_cmd(
default=False,
help="Skip tags for nodes that don't exist (default: create nodes)",
)
@click.option(
"--clear-existing",
is_flag=True,
default=False,
help="Delete all existing tags before importing",
)
@click.pass_context
def import_tags_cmd(
ctx: click.Context,
file: str | None,
no_create_nodes: bool,
clear_existing: bool,
) -> None:
"""Import node tags from a JSON file.
"""Import node tags from a YAML file.
Reads a JSON file containing tag definitions and upserts them
into the database. Existing tags are updated, new tags are created.
Reads a YAML file containing tag definitions and upserts them
into the database. By default, existing tags are updated and new tags are created.
Use --clear-existing to delete all tags before importing.
FILE is the path to the JSON file containing tags.
If not provided, defaults to {SEED_HOME}/node_tags.json.
FILE is the path to the YAML file containing tags.
If not provided, defaults to {SEED_HOME}/node_tags.yaml.
Expected YAML format (keyed by public_key):
Expected JSON format (keyed by public_key):
\b
{
"0123456789abcdef...": {
"friendly_name": "My Node",
"location": {"value": "52.0,1.0", "type": "coordinate"},
"altitude": {"value": "150", "type": "number"}
}
}
0123456789abcdef...:
friendly_name: My Node
altitude:
value: "150"
type: number
active:
value: "true"
type: boolean
Shorthand is also supported (string values with default type):
\b
{
"0123456789abcdef...": {
"friendly_name": "My Node",
"role": "gateway"
}
}
Supported types: string, number, boolean, coordinate
\b
0123456789abcdef...:
friendly_name: My Node
role: gateway
Supported types: string, number, boolean
"""
from pathlib import Path
@@ -362,7 +461,7 @@ def import_tags_cmd(
if not Path(tags_file).exists():
click.echo(f"Tags file not found: {tags_file}")
if not file:
click.echo("Specify a file path or create the default node_tags.json.")
click.echo("Specify a file path or create the default node_tags.yaml.")
return
click.echo(f"Importing tags from: {tags_file}")
@@ -371,20 +470,22 @@ def import_tags_cmd(
from meshcore_hub.common.database import DatabaseManager
from meshcore_hub.collector.tag_import import import_tags
# Initialize database
# Initialize database (schema managed by Alembic migrations)
db = DatabaseManager(ctx.obj["database_url"])
db.create_tables()
# Import tags
stats = import_tags(
file_path=tags_file,
db=db,
create_nodes=not no_create_nodes,
clear_existing=clear_existing,
)
# Report results
click.echo("")
click.echo("Import complete:")
if stats["deleted"]:
click.echo(f" Tags deleted: {stats['deleted']}")
click.echo(f" Total tags in file: {stats['total']}")
click.echo(f" Tags created: {stats['created']}")
click.echo(f" Tags updated: {stats['updated']}")
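The shorthand format described in the docstring can be normalized in one pass: bare string values get the default type, mapping values carry their own. A hypothetical `normalize_tags` helper (not the real `import_tags`) operating on an already-parsed document:

```python
# Sketch of the tag-file shorthand: a value may be a bare string
# (default type "string") or a {"value": ..., "type": ...} mapping.
# normalize_tags is a hypothetical helper, not the real import_tags.
def normalize_tags(doc: dict) -> list[tuple[str, str, str, str]]:
    rows = []
    for public_key, tags in doc.items():
        for key, spec in tags.items():
            if isinstance(spec, dict):
                # Full form: explicit value, optional type.
                rows.append(
                    (public_key, key, str(spec["value"]), spec.get("type", "string"))
                )
            else:
                # Shorthand: bare value, default type.
                rows.append((public_key, key, str(spec), "string"))
    return rows

doc = {
    "0123456789abcdef": {
        "friendly_name": "My Node",                      # shorthand
        "altitude": {"value": "150", "type": "number"},  # full form
    }
}
rows = normalize_tags(doc)
print(rows)
```

The same normalization works regardless of whether the document came from YAML or JSON, since both parse to plain dicts and strings.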
@@ -407,33 +508,29 @@ def import_members_cmd(
ctx: click.Context,
file: str | None,
) -> None:
"""Import network members from a JSON file.
"""Import network members from a YAML file.
Reads a JSON file containing member definitions and upserts them
Reads a YAML file containing member definitions and upserts them
into the database. Existing members (matched by name) are updated,
new members are created.
FILE is the path to the JSON file containing members.
If not provided, defaults to {SEED_HOME}/members.json.
FILE is the path to the YAML file containing members.
If not provided, defaults to {SEED_HOME}/members.yaml.
Expected YAML format (list):
Expected JSON format (list):
\b
[
{
"name": "John Doe",
"callsign": "N0CALL",
"role": "Network Operator",
"description": "Example member"
}
]
- name: John Doe
callsign: N0CALL
role: Network Operator
description: Example member
Or with "members" key:
\b
{
"members": [
{"name": "John Doe", "callsign": "N0CALL", ...}
]
}
members:
- name: John Doe
callsign: N0CALL
"""
from pathlib import Path
@@ -447,7 +544,7 @@ def import_members_cmd(
if not Path(members_file).exists():
click.echo(f"Members file not found: {members_file}")
if not file:
click.echo("Specify a file path or create the default members.json.")
click.echo("Specify a file path or create the default members.yaml.")
return
click.echo(f"Importing members from: {members_file}")
@@ -456,9 +553,8 @@ def import_members_cmd(
from meshcore_hub.common.database import DatabaseManager
from meshcore_hub.collector.member_import import import_members
# Initialize database
# Initialize database (schema managed by Alembic migrations)
db = DatabaseManager(ctx.obj["database_url"])
db.create_tables()
# Import members
stats = import_members(
@@ -480,3 +576,299 @@ def import_members_cmd(
click.echo(f" - {error}", err=True)
db.dispose()
@collector.command("cleanup")
@click.option(
"--retention-days",
type=int,
default=30,
envvar="DATA_RETENTION_DAYS",
help="Number of days to retain data (default: 30)",
)
@click.option(
"--dry-run",
is_flag=True,
default=False,
help="Show what would be deleted without deleting",
)
@click.pass_context
def cleanup_cmd(
ctx: click.Context,
retention_days: int,
dry_run: bool,
) -> None:
"""Manually run data cleanup to delete old events.
Deletes event data older than the retention period:
- Advertisements
- Messages (channel and direct)
- Telemetry
- Trace paths
- Event logs
Node records are never deleted; only event data is removed.
Use --dry-run to preview what would be deleted without
actually deleting anything.
"""
import asyncio
configure_logging(level=ctx.obj["log_level"])
click.echo(f"Database: {ctx.obj['database_url']}")
click.echo(f"Retention: {retention_days} days")
click.echo(f"Mode: {'DRY RUN' if dry_run else 'LIVE'}")
click.echo("")
if dry_run:
click.echo("Running in dry-run mode - no data will be deleted.")
else:
click.echo("WARNING: This will permanently delete old event data!")
if not click.confirm("Continue?"):
click.echo("Aborted.")
return
click.echo("")
from meshcore_hub.common.database import DatabaseManager
from meshcore_hub.collector.cleanup import cleanup_old_data
# Initialize database
db = DatabaseManager(ctx.obj["database_url"])
# Run cleanup
async def run_cleanup() -> None:
async with db.async_session() as session:
stats = await cleanup_old_data(
session,
retention_days,
dry_run=dry_run,
)
click.echo("")
click.echo("Cleanup results:")
click.echo(f" Advertisements: {stats.advertisements_deleted}")
click.echo(f" Messages: {stats.messages_deleted}")
click.echo(f" Telemetry: {stats.telemetry_deleted}")
click.echo(f" Trace paths: {stats.trace_paths_deleted}")
click.echo(f" Event logs: {stats.event_logs_deleted}")
click.echo(f" Total: {stats.total_deleted}")
if dry_run:
click.echo("")
click.echo("(Dry run - no data was actually deleted)")
asyncio.run(run_cleanup())
db.dispose()
click.echo("")
click.echo("Cleanup complete." if not dry_run else "Dry run complete.")
@collector.command("truncate")
@click.option(
"--members",
is_flag=True,
default=False,
help="Truncate members table",
)
@click.option(
"--nodes",
is_flag=True,
default=False,
help="Truncate nodes table (also clears tags, advertisements, messages, telemetry, trace paths)",
)
@click.option(
"--messages",
is_flag=True,
default=False,
help="Truncate messages table",
)
@click.option(
"--advertisements",
is_flag=True,
default=False,
help="Truncate advertisements table",
)
@click.option(
"--telemetry",
is_flag=True,
default=False,
help="Truncate telemetry table",
)
@click.option(
"--trace-paths",
is_flag=True,
default=False,
help="Truncate trace_paths table",
)
@click.option(
"--event-logs",
is_flag=True,
default=False,
help="Truncate event_logs table",
)
@click.option(
"--all",
"truncate_all",
is_flag=True,
default=False,
help="Truncate ALL tables (use with caution!)",
)
@click.option(
"--yes",
is_flag=True,
default=False,
help="Skip confirmation prompt",
)
@click.pass_context
def truncate_cmd(
ctx: click.Context,
members: bool,
nodes: bool,
messages: bool,
advertisements: bool,
telemetry: bool,
trace_paths: bool,
event_logs: bool,
truncate_all: bool,
yes: bool,
) -> None:
"""Truncate (clear) data tables.
WARNING: This permanently deletes data! Use with caution.
Examples:
# Clear members table
meshcore-hub collector truncate --members
# Clear messages and advertisements
meshcore-hub collector truncate --messages --advertisements
# Clear everything (requires confirmation)
meshcore-hub collector truncate --all
Note: Clearing nodes also clears all related data (tags, advertisements,
messages, telemetry, trace paths) due to foreign key constraints.
"""
configure_logging(level=ctx.obj["log_level"])
# Determine what to truncate
if truncate_all:
tables_to_clear = {
"members": True,
"nodes": True,
"messages": True,
"advertisements": True,
"telemetry": True,
"trace_paths": True,
"event_logs": True,
}
else:
tables_to_clear = {
"members": members,
"nodes": nodes,
"messages": messages,
"advertisements": advertisements,
"telemetry": telemetry,
"trace_paths": trace_paths,
"event_logs": event_logs,
}
# Check if any tables selected
if not any(tables_to_clear.values()):
click.echo("No tables specified. Use --help to see available options.")
return
# Show what will be cleared
click.echo("Database: " + ctx.obj["database_url"])
click.echo("")
click.echo("The following tables will be PERMANENTLY CLEARED:")
for table, should_clear in tables_to_clear.items():
if should_clear:
click.echo(f" - {table}")
if tables_to_clear.get("nodes"):
click.echo("")
click.echo(
"WARNING: Clearing nodes will also clear all related data due to foreign keys:"
)
click.echo(" - node_tags")
click.echo(" - advertisements")
click.echo(" - messages")
click.echo(" - telemetry")
click.echo(" - trace_paths")
click.echo("")
# Confirm
if not yes:
if not click.confirm(
"Are you sure you want to permanently delete this data?", default=False
):
click.echo("Aborted.")
return
from meshcore_hub.common.database import DatabaseManager
from meshcore_hub.common.models import (
Advertisement,
EventLog,
Member,
Message,
Node,
NodeTag,
Telemetry,
TracePath,
)
from sqlalchemy import delete
from sqlalchemy.engine import CursorResult
db = DatabaseManager(ctx.obj["database_url"])
with db.session_scope() as session:
# Truncate in correct order to respect foreign keys
cleared: list[str] = []
# Clear members (no dependencies)
if tables_to_clear.get("members"):
result: CursorResult = session.execute(delete(Member)) # type: ignore
cleared.append(f"members: {result.rowcount} rows")
# Clear event-specific tables first (they depend on nodes)
if tables_to_clear.get("messages"):
result = session.execute(delete(Message)) # type: ignore
cleared.append(f"messages: {result.rowcount} rows")
if tables_to_clear.get("advertisements"):
result = session.execute(delete(Advertisement)) # type: ignore
cleared.append(f"advertisements: {result.rowcount} rows")
if tables_to_clear.get("telemetry"):
result = session.execute(delete(Telemetry)) # type: ignore
cleared.append(f"telemetry: {result.rowcount} rows")
if tables_to_clear.get("trace_paths"):
result = session.execute(delete(TracePath)) # type: ignore
cleared.append(f"trace_paths: {result.rowcount} rows")
if tables_to_clear.get("event_logs"):
result = session.execute(delete(EventLog)) # type: ignore
cleared.append(f"event_logs: {result.rowcount} rows")
# Clear nodes last (this will cascade delete tags and any remaining events)
if tables_to_clear.get("nodes"):
# Delete tags first (they depend on nodes)
tag_result: CursorResult = session.execute(delete(NodeTag)) # type: ignore
cleared.append(f"node_tags: {tag_result.rowcount} rows (cascade)")
# Delete nodes (will cascade to remaining related tables)
node_result: CursorResult = session.execute(delete(Node)) # type: ignore
cleared.append(f"nodes: {node_result.rowcount} rows")
db.dispose()
click.echo("")
click.echo("Truncate complete. Cleared:")
for item in cleared:
click.echo(f" - {item}")
click.echo("")

View File

@@ -19,7 +19,7 @@ def register_all_handlers(subscriber: "Subscriber") -> None:
)
from meshcore_hub.collector.handlers.trace import handle_trace_data
from meshcore_hub.collector.handlers.telemetry import handle_telemetry
from meshcore_hub.collector.handlers.contacts import handle_contacts
from meshcore_hub.collector.handlers.contacts import handle_contact
from meshcore_hub.collector.handlers.event_log import handle_event_log
# Persisted events with specific handlers
@@ -28,7 +28,7 @@ def register_all_handlers(subscriber: "Subscriber") -> None:
subscriber.register_handler("channel_msg_recv", handle_channel_message)
subscriber.register_handler("trace_data", handle_trace_data)
subscriber.register_handler("telemetry_response", handle_telemetry)
subscriber.register_handler("contacts", handle_contacts)
subscriber.register_handler("contact", handle_contact) # Individual contact events
# Informational events (logged only)
subscriber.register_handler("send_confirmed", handle_event_log)

View File

@@ -5,9 +5,11 @@ from datetime import datetime, timezone
from typing import Any
from sqlalchemy import select
from sqlalchemy.exc import IntegrityError
from meshcore_hub.common.database import DatabaseManager
from meshcore_hub.common.models import Advertisement, Node
from meshcore_hub.common.hash_utils import compute_advertisement_hash
from meshcore_hub.common.models import Advertisement, Node, add_event_receiver
logger = logging.getLogger(__name__)
@@ -40,8 +42,17 @@ def handle_advertisement(
flags = payload.get("flags")
now = datetime.now(timezone.utc)
# Compute event hash for deduplication (30-second time bucket)
event_hash = compute_advertisement_hash(
public_key=adv_public_key,
name=name,
adv_type=adv_type,
flags=flags,
received_at=now,
)
with db.session_scope() as session:
# Find or create receiver node
# Find or create receiver node first (needed for both new and duplicate events)
receiver_node = None
if public_key:
receiver_query = select(Node).where(Node.public_key == public_key)
@@ -55,6 +66,37 @@ def handle_advertisement(
)
session.add(receiver_node)
session.flush()
else:
receiver_node.last_seen = now
# Check if advertisement with same hash already exists
existing = session.execute(
select(Advertisement.id).where(Advertisement.event_hash == event_hash)
).scalar_one_or_none()
if existing:
# Still update advertised node's last_seen even for duplicate advertisements
node_query = select(Node).where(Node.public_key == adv_public_key)
node = session.execute(node_query).scalar_one_or_none()
if node:
node.last_seen = now
# Add this receiver to the junction table
if receiver_node:
added = add_event_receiver(
session=session,
event_type="advertisement",
event_hash=event_hash,
receiver_node_id=receiver_node.id,
snr=None, # Advertisements don't have SNR
received_at=now,
)
if added:
logger.debug(
f"Added receiver {public_key[:12]}... to advertisement "
f"(hash={event_hash[:8]}...)"
)
return
# Find or create advertised node
node_query = select(Node).where(Node.public_key == adv_public_key)
@@ -91,9 +133,43 @@ def handle_advertisement(
adv_type=adv_type,
flags=flags,
received_at=now,
event_hash=event_hash,
)
session.add(advertisement)
# Add first receiver to junction table
if receiver_node:
add_event_receiver(
session=session,
event_type="advertisement",
event_hash=event_hash,
receiver_node_id=receiver_node.id,
snr=None,
received_at=now,
)
# Flush to check for duplicate constraint violation (race condition)
try:
session.flush()
except IntegrityError:
# Race condition: another request inserted the same event_hash
session.rollback()
logger.debug(
f"Duplicate advertisement skipped (race condition, "
f"hash={event_hash[:8]}...)"
)
# Re-add receiver to existing event in a new transaction
if receiver_node:
add_event_receiver(
session=session,
event_type="advertisement",
event_hash=event_hash,
receiver_node_id=receiver_node.id,
snr=None,
received_at=now,
)
return
logger.info(
f"Stored advertisement from {name or adv_public_key[:12]!r} "
f"(type={adv_type})"

View File

@@ -1,4 +1,4 @@
"""Handler for contacts sync events."""
"""Handler for contact sync events."""
import logging
from datetime import datetime, timezone
@@ -11,65 +11,90 @@ from meshcore_hub.common.models import Node
logger = logging.getLogger(__name__)
# Map numeric node type to string representation
NODE_TYPE_MAP = {
0: "none",
1: "chat",
2: "repeater",
3: "room",
}
def handle_contacts(
def handle_contact(
public_key: str,
event_type: str,
payload: dict[str, Any],
db: DatabaseManager,
) -> None:
"""Handle a contacts sync event.
"""Handle a single contact event.
Upserts all contacts in the contacts list.
Upserts a contact into the nodes table.
Args:
public_key: Receiver node's public key (from MQTT topic)
event_type: Event type name
payload: Contacts payload
payload: Single contact object with fields:
- public_key: Contact's public key
- adv_name: Advertised name
- type: Numeric node type (0=none, 1=chat, 2=repeater, 3=room)
db: Database manager
"""
contacts = payload.get("contacts", [])
if not contacts:
logger.debug("Empty contacts list received")
contact_key = payload.get("public_key")
if not contact_key:
logger.warning("Contact event missing public_key field")
return
# Device uses 'adv_name' for the advertised name
name = payload.get("adv_name") or payload.get("name")
# GPS coordinates (optional)
lat = payload.get("adv_lat")
lon = payload.get("adv_lon")
logger.info(f"Processing contact: {contact_key[:12]}... adv_name={name}")
# Device uses numeric 'type' field, convert to string
raw_type = payload.get("type")
if raw_type is not None:
node_type: str | None = NODE_TYPE_MAP.get(raw_type, str(raw_type))
else:
node_type = payload.get("node_type")
now = datetime.now(timezone.utc)
created_count = 0
updated_count = 0
with db.session_scope() as session:
for contact in contacts:
contact_key = contact.get("public_key")
if not contact_key:
continue
# Find or create node
node_query = select(Node).where(Node.public_key == contact_key)
node = session.execute(node_query).scalar_one_or_none()
name = contact.get("name")
node_type = contact.get("node_type")
# Find or create node
node_query = select(Node).where(Node.public_key == contact_key)
node = session.execute(node_query).scalar_one_or_none()
if node:
# Update existing node
if name and not node.name:
node.name = name
if node_type and not node.adv_type:
node.adv_type = node_type
node.last_seen = now
updated_count += 1
else:
# Create new node
node = Node(
public_key=contact_key,
name=name,
adv_type=node_type,
first_seen=now,
last_seen=now,
if node:
# Update existing node - always update name if we have one
if name and name != node.name:
logger.info(
f"Updating node {contact_key[:12]}... "
f"name: {node.name!r} -> {name!r}"
)
session.add(node)
created_count += 1
logger.info(
f"Processed contacts sync: {created_count} new, {updated_count} updated"
)
node.name = name
if node_type and not node.adv_type:
node.adv_type = node_type
# Update GPS coordinates if provided
if lat is not None:
node.lat = lat
if lon is not None:
node.lon = lon
# Do NOT update last_seen for contact sync - only advertisement events
# should update last_seen since that's when the node was actually seen
else:
# Create new node from contact database
# Set last_seen=None since we haven't actually seen this node advertise yet
node = Node(
public_key=contact_key,
name=name,
adv_type=node_type,
first_seen=now,
last_seen=None, # Will be set when we receive an advertisement
lat=lat,
lon=lon,
)
session.add(node)
logger.info(f"Created node from contact: {contact_key[:12]}... ({name})")

View File

@@ -5,9 +5,11 @@ from datetime import datetime, timezone
from typing import Any
from sqlalchemy import select
from sqlalchemy.exc import IntegrityError
from meshcore_hub.common.database import DatabaseManager
from meshcore_hub.common.models import Message, Node
from meshcore_hub.common.hash_utils import compute_message_hash
from meshcore_hub.common.models import Message, Node, add_event_receiver
logger = logging.getLogger(__name__)
@@ -84,8 +86,17 @@ def _handle_message(
except (ValueError, OSError):
pass
# Compute event hash for deduplication
event_hash = compute_message_hash(
text=text,
pubkey_prefix=pubkey_prefix,
channel_idx=channel_idx,
sender_timestamp=sender_timestamp,
txt_type=txt_type,
)
with db.session_scope() as session:
# Find receiver node
# Find or create receiver node first (needed for both new and duplicate events)
receiver_node = None
if public_key:
receiver_query = select(Node).where(Node.public_key == public_key)
@@ -102,6 +113,29 @@ def _handle_message(
else:
receiver_node.last_seen = now
# Check if message with same hash already exists
existing = session.execute(
select(Message.id).where(Message.event_hash == event_hash)
).scalar_one_or_none()
if existing:
# Event already exists - just add this receiver to the junction table
if receiver_node:
added = add_event_receiver(
session=session,
event_type="message",
event_hash=event_hash,
receiver_node_id=receiver_node.id,
snr=snr,
received_at=now,
)
if added:
logger.debug(
f"Added receiver {public_key[:12]}... to message "
f"(hash={event_hash[:8]}...)"
)
return
# Create message record
message = Message(
receiver_node_id=receiver_node.id if receiver_node else None,
@@ -115,9 +149,42 @@ def _handle_message(
snr=snr,
sender_timestamp=sender_timestamp,
received_at=now,
event_hash=event_hash,
)
session.add(message)
# Add first receiver to junction table
if receiver_node:
add_event_receiver(
session=session,
event_type="message",
event_hash=event_hash,
receiver_node_id=receiver_node.id,
snr=snr,
received_at=now,
)
# Flush to check for duplicate constraint violation (race condition)
try:
session.flush()
except IntegrityError:
# Race condition: another request inserted the same event_hash
session.rollback()
logger.debug(
f"Duplicate message skipped (race condition, hash={event_hash[:8]}...)"
)
# Re-add receiver to existing event in a new transaction
if receiver_node:
add_event_receiver(
session=session,
event_type="message",
event_hash=event_hash,
receiver_node_id=receiver_node.id,
snr=snr,
received_at=now,
)
return
if message_type == "contact":
logger.info(
f"Stored contact message from {pubkey_prefix!r}: "

View File
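The flush-then-catch-IntegrityError pattern repeated across these handlers can be demonstrated with plain `sqlite3` and a unique event hash. Table and column names below are illustrative, not the project's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (event_hash TEXT PRIMARY KEY)")
conn.execute(
    "CREATE TABLE event_receivers ("
    "event_hash TEXT, receiver_id INTEGER, "
    "PRIMARY KEY (event_hash, receiver_id))"
)


def store_event(conn: sqlite3.Connection, event_hash: str, receiver_id: int) -> None:
    # Try to insert the event row; when another gateway already stored the
    # same hash (the race the handlers guard against), fall through and
    # only record this receiver in the junction table.
    try:
        conn.execute("INSERT INTO events (event_hash) VALUES (?)", (event_hash,))
    except sqlite3.IntegrityError:
        pass
    conn.execute(
        "INSERT OR IGNORE INTO event_receivers (event_hash, receiver_id) "
        "VALUES (?, ?)",
        (event_hash, receiver_id),
    )


store_event(conn, "abc123", 1)  # first gateway creates the event
store_event(conn, "abc123", 2)  # second gateway only adds itself as receiver
```

The event row is written once; every receiver that hears the same broadcast gets its own junction row.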

@@ -5,9 +5,11 @@ from datetime import datetime, timezone
from typing import Any
from sqlalchemy import select
from sqlalchemy.exc import IntegrityError
from meshcore_hub.common.database import DatabaseManager
from meshcore_hub.common.models import Node, Telemetry
from meshcore_hub.common.hash_utils import compute_telemetry_hash
from meshcore_hub.common.models import Node, Telemetry, add_event_receiver
logger = logging.getLogger(__name__)
@@ -49,8 +51,15 @@ def handle_telemetry(
except ValueError:
lpp_bytes = lpp_data.encode()
# Compute event hash for deduplication (30-second time bucket)
event_hash = compute_telemetry_hash(
node_public_key=node_public_key,
parsed_data=parsed_data,
received_at=now,
)
with db.session_scope() as session:
# Find receiver node
# Find or create receiver node first (needed for both new and duplicate events)
receiver_node = None
if public_key:
receiver_query = select(Node).where(Node.public_key == public_key)
@@ -67,6 +76,29 @@ def handle_telemetry(
else:
receiver_node.last_seen = now
# Check if telemetry with same hash already exists
existing = session.execute(
select(Telemetry.id).where(Telemetry.event_hash == event_hash)
).scalar_one_or_none()
if existing:
# Event already exists - just add this receiver to the junction table
if receiver_node:
added = add_event_receiver(
session=session,
event_type="telemetry",
event_hash=event_hash,
receiver_node_id=receiver_node.id,
snr=None,
received_at=now,
)
if added:
logger.debug(
f"Added receiver {public_key[:12]}... to telemetry "
f"(node={node_public_key[:12]}...)"
)
return
# Find or create reporting node
reporting_node = None
if node_public_key:
@@ -92,9 +124,43 @@ def handle_telemetry(
lpp_data=lpp_bytes,
parsed_data=parsed_data,
received_at=now,
event_hash=event_hash,
)
session.add(telemetry)
# Add first receiver to junction table
if receiver_node:
add_event_receiver(
session=session,
event_type="telemetry",
event_hash=event_hash,
receiver_node_id=receiver_node.id,
snr=None,
received_at=now,
)
# Flush to check for duplicate constraint violation (race condition)
try:
session.flush()
except IntegrityError:
# Race condition: another request inserted the same event_hash
session.rollback()
logger.debug(
f"Duplicate telemetry skipped (race condition, "
f"node={node_public_key[:12]}...)"
)
# Re-add receiver to existing event in a new transaction
if receiver_node:
add_event_receiver(
session=session,
event_type="telemetry",
event_hash=event_hash,
receiver_node_id=receiver_node.id,
snr=None,
received_at=now,
)
return
# Log telemetry values
if parsed_data:
values = ", ".join(f"{k}={v}" for k, v in parsed_data.items())

View File
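The hex-with-fallback decoding of `lpp_data` near the top of this handler can be sketched as below, assuming the swallowed `ValueError` comes from `bytes.fromhex`:

```python
def decode_lpp(lpp_data: str) -> bytes:
    # Telemetry payloads normally arrive hex-encoded; fall back to raw
    # UTF-8 bytes when the string is not valid hex, mirroring the
    # ValueError fallback in the handler above.
    try:
        return bytes.fromhex(lpp_data)
    except ValueError:
        return lpp_data.encode()
```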

@@ -5,9 +5,11 @@ from datetime import datetime, timezone
from typing import Any
from sqlalchemy import select
from sqlalchemy.exc import IntegrityError
from meshcore_hub.common.database import DatabaseManager
from meshcore_hub.common.models import Node, TracePath
from meshcore_hub.common.hash_utils import compute_trace_hash
from meshcore_hub.common.models import Node, TracePath, add_event_receiver
logger = logging.getLogger(__name__)
@@ -40,8 +42,11 @@ def handle_trace_data(
snr_values = payload.get("snr_values")
hop_count = payload.get("hop_count")
# Compute event hash for deduplication (initiator_tag is unique per trace)
event_hash = compute_trace_hash(initiator_tag=initiator_tag)
with db.session_scope() as session:
# Find receiver node
# Find or create receiver node first (needed for both new and duplicate events)
receiver_node = None
if public_key:
receiver_query = select(Node).where(Node.public_key == public_key)
@@ -58,6 +63,29 @@ def handle_trace_data(
else:
receiver_node.last_seen = now
# Check if trace with same hash already exists
existing = session.execute(
select(TracePath.id).where(TracePath.event_hash == event_hash)
).scalar_one_or_none()
if existing:
# Event already exists - just add this receiver to the junction table
if receiver_node:
added = add_event_receiver(
session=session,
event_type="trace",
event_hash=event_hash,
receiver_node_id=receiver_node.id,
snr=None, # Trace events don't have a single SNR value
received_at=now,
)
if added:
logger.debug(
f"Added receiver {public_key[:12]}... to trace "
f"(tag={initiator_tag})"
)
return
# Create trace path record
trace_path = TracePath(
receiver_node_id=receiver_node.id if receiver_node else None,
@@ -69,7 +97,40 @@ def handle_trace_data(
snr_values=snr_values,
hop_count=hop_count,
received_at=now,
event_hash=event_hash,
)
session.add(trace_path)
# Add first receiver to junction table
if receiver_node:
add_event_receiver(
session=session,
event_type="trace",
event_hash=event_hash,
receiver_node_id=receiver_node.id,
snr=None,
received_at=now,
)
# Flush to check for duplicate constraint violation (race condition)
try:
session.flush()
except IntegrityError:
# Race condition: another request inserted the same event_hash
session.rollback()
logger.debug(
f"Duplicate trace skipped (race condition, tag={initiator_tag})"
)
# Re-add receiver to existing event in a new transaction
if receiver_node:
add_event_receiver(
session=session,
event_type="trace",
event_hash=event_hash,
receiver_node_id=receiver_node.id,
snr=None,
received_at=now,
)
return
logger.info(f"Stored trace data: tag={initiator_tag}, hops={hop_count}")

View File

@@ -1,11 +1,11 @@
"""Import members from JSON file."""
"""Import members from YAML file."""
import json
import logging
from pathlib import Path
from typing import Any, Optional
from pydantic import BaseModel, Field, field_validator
import yaml
from pydantic import BaseModel, Field
from sqlalchemy import select
from meshcore_hub.common.database import DatabaseManager
@@ -15,47 +15,46 @@ logger = logging.getLogger(__name__)
class MemberData(BaseModel):
"""Schema for a member entry in the import file."""
"""Schema for a member entry in the import file.
Note: Nodes are associated with members via a 'member_id' tag on the node,
not through this schema.
"""
member_id: str = Field(..., min_length=1, max_length=100)
name: str = Field(..., min_length=1, max_length=255)
callsign: Optional[str] = Field(default=None, max_length=20)
role: Optional[str] = Field(default=None, max_length=100)
description: Optional[str] = Field(default=None)
contact: Optional[str] = Field(default=None, max_length=255)
public_key: Optional[str] = Field(default=None)
@field_validator("public_key")
@classmethod
def validate_public_key(cls, v: Optional[str]) -> Optional[str]:
"""Validate and normalize public key if provided."""
if v is None:
return None
if len(v) != 64:
raise ValueError(f"public_key must be 64 characters, got {len(v)}")
if not all(c in "0123456789abcdefABCDEF" for c in v):
raise ValueError("public_key must be a valid hex string")
return v.lower()
def load_members_file(file_path: str | Path) -> list[dict[str, Any]]:
"""Load and validate members from a JSON file.
"""Load and validate members from a YAML file.
Supports two formats:
1. List of member objects:
[{"name": "Member 1", ...}, {"name": "Member 2", ...}]
- member_id: member1
name: Member 1
callsign: M1
2. Object with "members" key:
{"members": [{"name": "Member 1", ...}, ...]}
members:
- member_id: member1
name: Member 1
callsign: M1
Args:
file_path: Path to the members JSON file
file_path: Path to the members YAML file
Returns:
List of validated member dictionaries
Raises:
FileNotFoundError: If file does not exist
json.JSONDecodeError: If file is not valid JSON
yaml.YAMLError: If file is not valid YAML
ValueError: If file content is invalid
"""
path = Path(file_path)
@@ -63,7 +62,7 @@ def load_members_file(file_path: str | Path) -> list[dict[str, Any]]:
raise FileNotFoundError(f"Members file not found: {file_path}")
with open(path, "r") as f:
data = json.load(f)
data = yaml.safe_load(f)
# Handle both formats
if isinstance(data, list):
@@ -73,15 +72,15 @@ def load_members_file(file_path: str | Path) -> list[dict[str, Any]]:
if not isinstance(members_list, list):
raise ValueError("'members' key must contain a list")
else:
raise ValueError(
"Members file must be a list or an object with 'members' key"
)
raise ValueError("Members file must be a list or a mapping with 'members' key")
# Validate each member
validated: list[dict[str, Any]] = []
for i, member in enumerate(members_list):
if not isinstance(member, dict):
raise ValueError(f"Member at index {i} must be an object")
if "member_id" not in member:
raise ValueError(f"Member at index {i} must have a 'member_id' field")
if "name" not in member:
raise ValueError(f"Member at index {i} must have a 'name' field")
@@ -99,13 +98,16 @@ def import_members(
file_path: str | Path,
db: DatabaseManager,
) -> dict[str, Any]:
"""Import members from a JSON file into the database.
"""Import members from a YAML file into the database.
Performs upsert operations based on name - existing members are updated,
Performs upsert operations based on member_id - existing members are updated,
new members are created.
Note: Nodes are associated with members via a 'member_id' tag on the node.
This import does not manage node associations.
Args:
file_path: Path to the members JSON file
file_path: Path to the members YAML file
db: Database manager instance
Returns:
@@ -134,14 +136,17 @@ def import_members(
with db.session_scope() as session:
for member_data in members_data:
try:
member_id = member_data["member_id"]
name = member_data["name"]
# Find existing member by name
query = select(Member).where(Member.name == name)
# Find existing member by member_id
query = select(Member).where(Member.member_id == member_id)
existing = session.execute(query).scalar_one_or_none()
if existing:
# Update existing member
if member_data.get("name") is not None:
existing.name = member_data["name"]
if member_data.get("callsign") is not None:
existing.callsign = member_data["callsign"]
if member_data.get("role") is not None:
@@ -150,27 +155,26 @@ def import_members(
existing.description = member_data["description"]
if member_data.get("contact") is not None:
existing.contact = member_data["contact"]
if member_data.get("public_key") is not None:
existing.public_key = member_data["public_key"]
stats["updated"] += 1
logger.debug(f"Updated member: {name}")
logger.debug(f"Updated member: {member_id} ({name})")
else:
# Create new member
new_member = Member(
member_id=member_id,
name=name,
callsign=member_data.get("callsign"),
role=member_data.get("role"),
description=member_data.get("description"),
contact=member_data.get("contact"),
public_key=member_data.get("public_key"),
)
session.add(new_member)
stats["created"] += 1
logger.debug(f"Created member: {name}")
logger.debug(f"Created member: {member_id} ({name})")
except Exception as e:
error_msg = f"Error processing member '{member_data.get('name', 'unknown')}': {e}"
error_msg = f"Error processing member '{member_data.get('member_id', 'unknown')}' ({member_data.get('name', 'unknown')}): {e}"
stats["errors"].append(error_msg)
logger.error(error_msg)

View File
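Independent of the YAML parsing, the list-or-mapping normalization the loader performs can be sketched as follows; `extract_members` is an illustrative name for the logic inside `load_members_file`:

```python
from typing import Any


def extract_members(data: Any) -> list[dict[str, Any]]:
    # Accept either a bare list of member mappings or a mapping with a
    # "members" key, as described in the loader docstring above.
    if isinstance(data, list):
        members = data
    elif isinstance(data, dict) and "members" in data:
        members = data["members"]
        if not isinstance(members, list):
            raise ValueError("'members' key must contain a list")
    else:
        raise ValueError("Members file must be a list or a mapping with a 'members' key")
    for i, member in enumerate(members):
        if not isinstance(member, dict) or "member_id" not in member:
            raise ValueError(f"Member at index {i} must be a mapping with a 'member_id' field")
    return members
```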

@@ -6,6 +6,7 @@ The subscriber:
3. Routes events to appropriate handlers
4. Persists data to database
5. Dispatches events to configured webhooks
6. Performs scheduled data cleanup if enabled
"""
import asyncio
@@ -14,6 +15,7 @@ import signal
import threading
import time
import uuid
from datetime import datetime, timezone
from typing import Any, Callable, Optional, TYPE_CHECKING
from meshcore_hub.common.database import DatabaseManager
@@ -38,6 +40,11 @@ class Subscriber:
mqtt_client: MQTTClient,
db_manager: DatabaseManager,
webhook_dispatcher: Optional["WebhookDispatcher"] = None,
cleanup_enabled: bool = False,
cleanup_retention_days: int = 30,
cleanup_interval_hours: int = 24,
node_cleanup_enabled: bool = False,
node_cleanup_days: int = 90,
):
"""Initialize subscriber.
@@ -45,6 +52,11 @@ class Subscriber:
mqtt_client: MQTT client instance
db_manager: Database manager instance
webhook_dispatcher: Optional webhook dispatcher for event forwarding
cleanup_enabled: Enable automatic event data cleanup
cleanup_retention_days: Number of days to retain event data
cleanup_interval_hours: Hours between cleanup runs
node_cleanup_enabled: Enable automatic cleanup of inactive nodes
node_cleanup_days: Remove nodes not seen for this many days
"""
self.mqtt = mqtt_client
self.db = db_manager
@@ -59,6 +71,14 @@ class Subscriber:
self._webhook_queue: list[tuple[str, dict[str, Any], str]] = []
self._webhook_lock = threading.Lock()
self._webhook_thread: Optional[threading.Thread] = None
# Data cleanup
self._cleanup_enabled = cleanup_enabled
self._cleanup_retention_days = cleanup_retention_days
self._cleanup_interval_hours = cleanup_interval_hours
self._node_cleanup_enabled = node_cleanup_enabled
self._node_cleanup_days = node_cleanup_days
self._cleanup_thread: Optional[threading.Thread] = None
self._last_cleanup: Optional[datetime] = None
@property
def is_healthy(self) -> bool:
@@ -202,18 +222,129 @@ class Subscriber:
if self._webhook_thread.is_alive():
logger.warning("Webhook processor thread did not stop cleanly")
def _start_cleanup_scheduler(self) -> None:
"""Start background thread for periodic data cleanup."""
if not self._cleanup_enabled and not self._node_cleanup_enabled:
logger.info("Data cleanup and node cleanup are both disabled")
return
logger.info(
"Starting cleanup scheduler (interval_hours=%d)",
self._cleanup_interval_hours,
)
if self._cleanup_enabled:
logger.info(
" Event data cleanup: ENABLED (retention_days=%d)",
self._cleanup_retention_days,
)
else:
logger.info(" Event data cleanup: DISABLED")
if self._node_cleanup_enabled:
logger.info(
" Node cleanup: ENABLED (inactivity_days=%d)", self._node_cleanup_days
)
else:
logger.info(" Node cleanup: DISABLED")
def run_cleanup_loop() -> None:
"""Run async cleanup tasks in background thread."""
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
try:
while self._running:
# Check if cleanup is due
now = datetime.now(timezone.utc)
should_run = False
if self._last_cleanup is None:
# First run
should_run = True
else:
# Check if interval has passed
hours_since_last = (
now - self._last_cleanup
).total_seconds() / 3600
should_run = hours_since_last >= self._cleanup_interval_hours
if should_run:
try:
logger.info("Starting scheduled cleanup")
from meshcore_hub.collector.cleanup import (
cleanup_old_data,
cleanup_inactive_nodes,
)
# Get async session and run cleanup
async def run_cleanup() -> None:
async with self.db.async_session() as session:
# Run event data cleanup if enabled
if self._cleanup_enabled:
stats = await cleanup_old_data(
session,
self._cleanup_retention_days,
dry_run=False,
)
logger.info(
"Event cleanup completed: %s", stats
)
# Run node cleanup if enabled
if self._node_cleanup_enabled:
nodes_deleted = await cleanup_inactive_nodes(
session,
self._node_cleanup_days,
dry_run=False,
)
logger.info(
"Node cleanup completed: %d nodes deleted",
nodes_deleted,
)
loop.run_until_complete(run_cleanup())
self._last_cleanup = now
except Exception as e:
logger.error(f"Cleanup error: {e}", exc_info=True)
# Sleep for 1 hour before next check
for _ in range(3600):
if not self._running:
break
time.sleep(1)
finally:
loop.close()
logger.info("Cleanup scheduler stopped")
self._cleanup_thread = threading.Thread(
target=run_cleanup_loop, daemon=True, name="cleanup-scheduler"
)
self._cleanup_thread.start()
def _stop_cleanup_scheduler(self) -> None:
"""Stop the cleanup scheduler thread."""
if self._cleanup_thread and self._cleanup_thread.is_alive():
# Thread will exit when self._running becomes False
self._cleanup_thread.join(timeout=5.0)
if self._cleanup_thread.is_alive():
logger.warning("Cleanup scheduler thread did not stop cleanly")
def start(self) -> None:
"""Start the subscriber."""
logger.info("Starting collector subscriber")
# Create database tables if needed
# Verify database connection (schema managed by Alembic migrations)
try:
self.db.create_tables()
# Test connection by getting a session
session = self.db.get_session()
session.close()
self._db_connected = True
logger.info("Database initialized")
logger.info("Database connection verified")
except Exception as e:
self._db_connected = False
logger.error(f"Failed to initialize database: {e}")
logger.error(f"Failed to connect to database: {e}")
raise
# Connect to MQTT broker
@@ -237,6 +368,9 @@ class Subscriber:
# Start webhook processor if configured
self._start_webhook_processor()
# Start cleanup scheduler if configured
self._start_cleanup_scheduler()
# Start health reporter for Docker health checks
self._health_reporter = HealthReporter(
component="collector",
@@ -269,6 +403,9 @@ class Subscriber:
self._running = False
self._shutdown_event.set()
# Stop cleanup scheduler
self._stop_cleanup_scheduler()
# Stop webhook processor
self._stop_webhook_processor()
@@ -291,8 +428,14 @@ def create_subscriber(
mqtt_username: Optional[str] = None,
mqtt_password: Optional[str] = None,
mqtt_prefix: str = "meshcore",
mqtt_tls: bool = False,
database_url: str = "sqlite:///./meshcore.db",
webhook_dispatcher: Optional["WebhookDispatcher"] = None,
cleanup_enabled: bool = False,
cleanup_retention_days: int = 30,
cleanup_interval_hours: int = 24,
node_cleanup_enabled: bool = False,
node_cleanup_days: int = 90,
) -> Subscriber:
"""Create a configured subscriber instance.
@@ -302,8 +445,14 @@ def create_subscriber(
mqtt_username: MQTT username
mqtt_password: MQTT password
mqtt_prefix: MQTT topic prefix
mqtt_tls: Enable TLS/SSL for MQTT connection
database_url: Database connection URL
webhook_dispatcher: Optional webhook dispatcher for event forwarding
cleanup_enabled: Enable automatic event data cleanup
cleanup_retention_days: Number of days to retain event data
cleanup_interval_hours: Hours between cleanup runs
node_cleanup_enabled: Enable automatic cleanup of inactive nodes
node_cleanup_days: Remove nodes not seen for this many days
Returns:
Configured Subscriber instance
@@ -317,6 +466,7 @@ def create_subscriber(
password=mqtt_password,
prefix=mqtt_prefix,
client_id=f"meshcore-collector-{unique_id}",
tls=mqtt_tls,
)
mqtt_client = MQTTClient(mqtt_config)
@@ -324,7 +474,16 @@ def create_subscriber(
db_manager = DatabaseManager(database_url)
# Create subscriber
subscriber = Subscriber(mqtt_client, db_manager, webhook_dispatcher)
subscriber = Subscriber(
mqtt_client,
db_manager,
webhook_dispatcher,
cleanup_enabled=cleanup_enabled,
cleanup_retention_days=cleanup_retention_days,
cleanup_interval_hours=cleanup_interval_hours,
node_cleanup_enabled=node_cleanup_enabled,
node_cleanup_days=node_cleanup_days,
)
# Register handlers
from meshcore_hub.collector.handlers import register_all_handlers
@@ -340,8 +499,14 @@ def run_collector(
mqtt_username: Optional[str] = None,
mqtt_password: Optional[str] = None,
mqtt_prefix: str = "meshcore",
mqtt_tls: bool = False,
database_url: str = "sqlite:///./meshcore.db",
webhook_dispatcher: Optional["WebhookDispatcher"] = None,
cleanup_enabled: bool = False,
cleanup_retention_days: int = 30,
cleanup_interval_hours: int = 24,
node_cleanup_enabled: bool = False,
node_cleanup_days: int = 90,
) -> None:
"""Run the collector (blocking).
@@ -351,8 +516,14 @@ def run_collector(
mqtt_username: MQTT username
mqtt_password: MQTT password
mqtt_prefix: MQTT topic prefix
mqtt_tls: Enable TLS/SSL for MQTT connection
database_url: Database connection URL
webhook_dispatcher: Optional webhook dispatcher for event forwarding
cleanup_enabled: Enable automatic event data cleanup
cleanup_retention_days: Number of days to retain event data
cleanup_interval_hours: Hours between cleanup runs
node_cleanup_enabled: Enable automatic cleanup of inactive nodes
node_cleanup_days: Remove nodes not seen for this many days
"""
subscriber = create_subscriber(
mqtt_host=mqtt_host,
@@ -360,8 +531,14 @@ def run_collector(
mqtt_username=mqtt_username,
mqtt_password=mqtt_password,
mqtt_prefix=mqtt_prefix,
mqtt_tls=mqtt_tls,
database_url=database_url,
webhook_dispatcher=webhook_dispatcher,
cleanup_enabled=cleanup_enabled,
cleanup_retention_days=cleanup_retention_days,
cleanup_interval_hours=cleanup_interval_hours,
node_cleanup_enabled=node_cleanup_enabled,
node_cleanup_days=node_cleanup_days,
)
# Set up signal handlers

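For reference, the retention parameters above reduce to a simple cutoff computation: rows older than "now minus `cleanup_retention_days`" are purged, and the check re-runs every `cleanup_interval_hours`. A stdlib-only sketch (the helper name `retention_cutoff` is illustrative, not part of the collector):

```python
# Hypothetical sketch: compute the timestamp before which event rows
# would be eligible for deletion by the periodic cleanup task.
from datetime import datetime, timedelta, timezone

def retention_cutoff(now: datetime, cleanup_retention_days: int = 30) -> datetime:
    """Return the cutoff; events received before it would be purged."""
    return now - timedelta(days=cleanup_retention_days)

now = datetime(2026, 2, 19, tzinfo=timezone.utc)
cutoff = retention_cutoff(now)
```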
View File

@@ -1,13 +1,13 @@
"""Import node tags from JSON file."""
"""Import node tags from YAML file."""
import json
import logging
from datetime import datetime, timezone
from pathlib import Path
from typing import Any
import yaml
from pydantic import BaseModel, Field, model_validator
from sqlalchemy import select
from sqlalchemy import delete, func, select
from meshcore_hub.common.database import DatabaseManager
from meshcore_hub.common.models import Node, NodeTag
@@ -19,7 +19,7 @@ class TagValue(BaseModel):
"""Schema for a tag value with type."""
value: str | None = None
type: str = Field(default="string", pattern=r"^(string|number|boolean|coordinate)$")
type: str = Field(default="string", pattern=r"^(string|number|boolean)$")
class NodeTags(BaseModel):
@@ -64,33 +64,33 @@ def validate_public_key(public_key: str) -> str:
def load_tags_file(file_path: str | Path) -> dict[str, dict[str, Any]]:
"""Load and validate tags from a JSON file.
"""Load and validate tags from a YAML file.
New format - dictionary keyed by public_key:
{
"0123456789abcdef...": {
"friendly_name": "My Node",
"location": {"value": "52.0,1.0", "type": "coordinate"},
"altitude": {"value": "150", "type": "number"}
}
}
YAML format - dictionary keyed by public_key:
0123456789abcdef...:
friendly_name: My Node
location:
value: "52.0,1.0"
type: string
altitude:
value: "150"
type: number
Shorthand is allowed - string values are auto-converted:
{
"0123456789abcdef...": {
"friendly_name": "My Node"
}
}
0123456789abcdef...:
friendly_name: My Node
Args:
file_path: Path to the tags JSON file
file_path: Path to the tags YAML file
Returns:
Dictionary mapping public_key to tag dictionary
Raises:
FileNotFoundError: If file does not exist
json.JSONDecodeError: If file is not valid JSON
yaml.YAMLError: If file is not valid YAML
ValueError: If file content is invalid
"""
path = Path(file_path)
@@ -98,10 +98,10 @@ def load_tags_file(file_path: str | Path) -> dict[str, dict[str, Any]]:
raise FileNotFoundError(f"Tags file not found: {file_path}")
with open(path, "r") as f:
data = json.load(f)
data = yaml.safe_load(f)
if not isinstance(data, dict):
raise ValueError("Tags file must contain a JSON object")
raise ValueError("Tags file must contain a YAML mapping")
# Validate each entry
validated: dict[str, dict[str, Any]] = {}
@@ -117,12 +117,24 @@ def load_tags_file(file_path: str | Path) -> dict[str, dict[str, Any]]:
for tag_key, tag_value in tags.items():
if isinstance(tag_value, dict):
# Full format with value and type
raw_value = tag_value.get("value")
# Convert value to string if it's not None
str_value = str(raw_value) if raw_value is not None else None
validated_tags[tag_key] = {
"value": tag_value.get("value"),
"value": str_value,
"type": tag_value.get("type", "string"),
}
elif isinstance(tag_value, bool):
# YAML boolean - must check before int since bool is subclass of int
validated_tags[tag_key] = {
"value": str(tag_value).lower(),
"type": "boolean",
}
elif isinstance(tag_value, (int, float)):
# YAML number (int or float)
validated_tags[tag_key] = {"value": str(tag_value), "type": "number"}
elif isinstance(tag_value, str):
# Shorthand: just a string value
# String value
validated_tags[tag_key] = {"value": tag_value, "type": "string"}
elif tag_value is None:
validated_tags[tag_key] = {"value": None, "type": "string"}
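The bool-before-number ordering above matters because `bool` is a subclass of `int` in Python, so `isinstance(True, (int, float))` is also true. A standalone sketch of the same shorthand conversion (function name illustrative):

```python
# Mirrors the shorthand conversion order: bool must be checked before
# int/float, otherwise True/False would be stored as numbers "1"/"0".
def convert_shorthand(tag_value):
    if isinstance(tag_value, bool):
        return {"value": str(tag_value).lower(), "type": "boolean"}
    if isinstance(tag_value, (int, float)):
        return {"value": str(tag_value), "type": "number"}
    if isinstance(tag_value, str):
        return {"value": tag_value, "type": "string"}
    return {"value": None, "type": "string"}
```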
@@ -139,16 +151,19 @@ def import_tags(
file_path: str | Path,
db: DatabaseManager,
create_nodes: bool = True,
clear_existing: bool = False,
) -> dict[str, Any]:
"""Import tags from a JSON file into the database.
"""Import tags from a YAML file into the database.
Performs upsert operations - existing tags are updated, new tags are created.
Optionally clears all existing tags before import.
Args:
file_path: Path to the tags JSON file
file_path: Path to the tags YAML file
db: Database manager instance
create_nodes: If True, create nodes that don't exist. If False, skip tags
for non-existent nodes.
clear_existing: If True, delete all existing tags before importing.
Returns:
Dictionary with import statistics:
@@ -157,6 +172,7 @@ def import_tags(
- updated: Number of existing tags updated
- skipped: Number of tags skipped (node not found and create_nodes=False)
- nodes_created: Number of new nodes created
- deleted: Number of existing tags deleted (if clear_existing=True)
- errors: List of error messages
"""
stats: dict[str, Any] = {
@@ -165,6 +181,7 @@ def import_tags(
"updated": 0,
"skipped": 0,
"nodes_created": 0,
"deleted": 0,
"errors": [],
}
@@ -182,6 +199,15 @@ def import_tags(
now = datetime.now(timezone.utc)
with db.session_scope() as session:
# Clear all existing tags if requested
if clear_existing:
delete_count = (
session.execute(select(func.count()).select_from(NodeTag)).scalar() or 0
)
session.execute(delete(NodeTag))
stats["deleted"] = delete_count
logger.info(f"Deleted {delete_count} existing tags")
# Cache nodes by public_key to reduce queries
node_cache: dict[str, Node] = {}
@@ -198,7 +224,8 @@ def import_tags(
node = Node(
public_key=public_key,
first_seen=now,
last_seen=now,
# last_seen is intentionally left unset (None)
# It will be set when the node is actually seen via events
)
session.add(node)
session.flush()
@@ -219,24 +246,8 @@ def import_tags(
tag_value = tag_data.get("value")
tag_type = tag_data.get("type", "string")
# Find or create tag
tag_query = select(NodeTag).where(
NodeTag.node_id == node.id,
NodeTag.key == tag_key,
)
existing_tag = session.execute(tag_query).scalar_one_or_none()
if existing_tag:
# Update existing tag
existing_tag.value = tag_value
existing_tag.value_type = tag_type
stats["updated"] += 1
logger.debug(
f"Updated tag {tag_key}={tag_value} "
f"for {public_key[:12]}..."
)
else:
# Create new tag
if clear_existing:
# When clearing, always create new tags
new_tag = NodeTag(
node_id=node.id,
key=tag_key,
@@ -249,6 +260,39 @@ def import_tags(
f"Created tag {tag_key}={tag_value} "
f"for {public_key[:12]}..."
)
else:
# Find or create tag
tag_query = select(NodeTag).where(
NodeTag.node_id == node.id,
NodeTag.key == tag_key,
)
existing_tag = session.execute(
tag_query
).scalar_one_or_none()
if existing_tag:
# Update existing tag
existing_tag.value = tag_value
existing_tag.value_type = tag_type
stats["updated"] += 1
logger.debug(
f"Updated tag {tag_key}={tag_value} "
f"for {public_key[:12]}..."
)
else:
# Create new tag
new_tag = NodeTag(
node_id=node.id,
key=tag_key,
value=tag_value,
value_type=tag_type,
)
session.add(new_tag)
stats["created"] += 1
logger.debug(
f"Created tag {tag_key}={tag_value} "
f"for {public_key[:12]}..."
)
except Exception as e:
error_msg = f"Error processing tag {tag_key} for {public_key[:12]}...: {e}"

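Putting the format together, a seed `node_tags.yaml` might look like this (the public key and tag names are illustrative; keys must be 64 hex characters):

```yaml
# Illustrative node_tags.yaml seed file.
# Shorthand values are auto-typed: strings, numbers, and YAML booleans.
0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef:
  friendly_name: My Node      # shorthand string
  altitude:                   # full form with explicit type
    value: "150"
    type: number
  infra: true                 # YAML boolean, stored as "true"
```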
View File

@@ -404,7 +404,7 @@ _dispatch_callback: Optional[Callable[[str, dict[str, Any], Optional[str]], None
def set_dispatch_callback(
callback: Optional[Callable[[str, dict[str, Any], Optional[str]], None]]
callback: Optional[Callable[[str, dict[str, Any], Optional[str]], None]],
) -> None:
"""Set a callback for synchronous webhook dispatch.

View File

@@ -52,6 +52,9 @@ class CommonSettings(BaseSettings):
default=None, description="MQTT password (optional)"
)
mqtt_prefix: str = Field(default="meshcore", description="MQTT topic prefix")
mqtt_tls: bool = Field(
default=False, description="Enable TLS/SSL for MQTT connection"
)
class InterfaceSettings(CommonSettings):
@@ -70,6 +73,22 @@ class InterfaceSettings(CommonSettings):
# Mock device
mock_device: bool = Field(default=False, description="Use mock device for testing")
# Device name
meshcore_device_name: Optional[str] = Field(
default=None, description="Device/node name (optional)"
)
# Contact cleanup settings
contact_cleanup_enabled: bool = Field(
default=True,
description="Enable automatic removal of stale contacts from companion node",
)
contact_cleanup_days: int = Field(
default=7,
description="Remove contacts not advertised for this many days",
ge=1,
)
class CollectorSettings(CommonSettings):
"""Settings for the Collector component."""
@@ -80,7 +99,7 @@ class CollectorSettings(CommonSettings):
description="SQLAlchemy database URL (default: sqlite:///{data_home}/collector/meshcore.db)",
)
# Seed home directory - contains initial data files (node_tags.json, members.json)
# Seed home directory - contains initial data files (node_tags.yaml, members.yaml)
seed_home: str = Field(
default="./seed",
description="Directory containing seed data files (default: ./seed)",
@@ -121,6 +140,29 @@ class CollectorSettings(CommonSettings):
default=2.0, description="Retry backoff multiplier"
)
# Data retention / cleanup settings
data_retention_enabled: bool = Field(
default=True, description="Enable automatic event data cleanup"
)
data_retention_days: int = Field(
default=30, description="Number of days to retain event data", ge=1
)
data_retention_interval_hours: int = Field(
default=24,
description="Hours between automatic cleanup runs (applies to both events and nodes)",
ge=1,
)
# Node cleanup settings
node_cleanup_enabled: bool = Field(
default=True, description="Enable automatic cleanup of inactive nodes"
)
node_cleanup_days: int = Field(
default=7,
description="Remove nodes not seen for this many days (last_seen)",
ge=1,
)
@property
def collector_data_dir(self) -> str:
"""Get the collector data directory path."""
@@ -147,17 +189,17 @@ class CollectorSettings(CommonSettings):
@property
def node_tags_file(self) -> str:
"""Get the path to node_tags.json in seed_home."""
"""Get the path to node_tags.yaml in seed_home."""
from pathlib import Path
return str(Path(self.effective_seed_home) / "node_tags.json")
return str(Path(self.effective_seed_home) / "node_tags.yaml")
@property
def members_file(self) -> str:
"""Get the path to members.json in seed_home."""
"""Get the path to members.yaml in seed_home."""
from pathlib import Path
return str(Path(self.effective_seed_home) / "members.json")
return str(Path(self.effective_seed_home) / "members.yaml")
@field_validator("database_url")
@classmethod
@@ -211,6 +253,34 @@ class WebSettings(CommonSettings):
web_host: str = Field(default="0.0.0.0", description="Web server host")
web_port: int = Field(default=8080, description="Web server port")
# Timezone for date/time display (uses standard TZ environment variable)
tz: str = Field(default="UTC", description="Timezone for displaying dates/times")
# Theme (dark or light, default dark)
web_theme: str = Field(
default="dark",
description="Default theme for the web dashboard (dark or light)",
)
# Locale / language (default: English)
web_locale: str = Field(
default="en",
description="Locale/language for the web dashboard (e.g. 'en')",
)
# Auto-refresh interval for list pages
web_auto_refresh_seconds: int = Field(
default=30,
description="Auto-refresh interval in seconds for list pages (0 to disable)",
ge=0,
)
# Admin interface (disabled by default for security)
web_admin_enabled: bool = Field(
default=False,
description="Enable admin interface at /a/ (requires OAuth2Proxy in front)",
)
# API connection
api_base_url: str = Field(
default="http://localhost:8000",
@@ -231,9 +301,6 @@ class WebSettings(CommonSettings):
network_country: Optional[str] = Field(
default=None, description="Network country (ISO 3166-1 alpha-2)"
)
network_location: Optional[str] = Field(
default=None, description="Network location (lat,lon)"
)
network_radio_config: Optional[str] = Field(
default=None, description="Radio configuration details"
)
@@ -243,6 +310,82 @@ class WebSettings(CommonSettings):
network_contact_discord: Optional[str] = Field(
default=None, description="Discord server link"
)
network_contact_github: Optional[str] = Field(
default=None, description="GitHub repository URL"
)
network_contact_youtube: Optional[str] = Field(
default=None, description="YouTube channel URL"
)
network_welcome_text: Optional[str] = Field(
default=None, description="Welcome text for homepage"
)
# Feature flags (control which pages are visible in the web dashboard)
feature_dashboard: bool = Field(
default=True, description="Enable the /dashboard page"
)
feature_nodes: bool = Field(default=True, description="Enable the /nodes pages")
feature_advertisements: bool = Field(
default=True, description="Enable the /advertisements page"
)
feature_messages: bool = Field(
default=True, description="Enable the /messages page"
)
feature_map: bool = Field(
default=True, description="Enable the /map page and /map/data endpoint"
)
feature_members: bool = Field(default=True, description="Enable the /members page")
feature_pages: bool = Field(
default=True, description="Enable custom markdown pages"
)
# Content directory (contains pages/ and media/ subdirectories)
content_home: Optional[str] = Field(
default=None,
description="Directory containing custom content (pages/, media/) (default: ./content)",
)
@property
def features(self) -> dict[str, bool]:
"""Get feature flags as a dictionary.
Automatic dependencies:
- Dashboard requires at least one of nodes/advertisements/messages.
- Map requires nodes (map displays node locations).
"""
has_dashboard_content = (
self.feature_nodes or self.feature_advertisements or self.feature_messages
)
return {
"dashboard": self.feature_dashboard and has_dashboard_content,
"nodes": self.feature_nodes,
"advertisements": self.feature_advertisements,
"messages": self.feature_messages,
"map": self.feature_map and self.feature_nodes,
"members": self.feature_members,
"pages": self.feature_pages,
}
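The dependency rules in `features` can be sketched standalone (argument names illustrative, not the actual settings fields):

```python
# Sketch of the automatic feature dependencies described above: the map
# page needs nodes, and the dashboard needs at least one of
# nodes/advertisements/messages to have anything to show.
def resolve_features(dashboard=True, nodes=True, advertisements=True,
                     messages=True, map_enabled=True):
    has_dashboard_content = nodes or advertisements or messages
    return {
        "dashboard": dashboard and has_dashboard_content,
        "nodes": nodes,
        "map": map_enabled and nodes,
    }
```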
@property
def effective_content_home(self) -> str:
"""Get the effective content home directory."""
from pathlib import Path
return str(Path(self.content_home or "./content"))
@property
def effective_pages_home(self) -> str:
"""Get the effective pages directory (content_home/pages)."""
from pathlib import Path
return str(Path(self.effective_content_home) / "pages")
@property
def effective_media_home(self) -> str:
"""Get the effective media directory (content_home/media)."""
from pathlib import Path
return str(Path(self.effective_content_home) / "media")
@property
def web_data_dir(self) -> str:

View File

@@ -1,10 +1,11 @@
"""Database connection and session management."""
from contextlib import contextmanager
from typing import Generator
from contextlib import asynccontextmanager, contextmanager
from typing import AsyncGenerator, Generator
from sqlalchemy import create_engine, event
from sqlalchemy.engine import Engine
from sqlalchemy.ext.asyncio import AsyncSession, create_async_engine
from sqlalchemy.orm import Session, sessionmaker
from meshcore_hub.common.models.base import Base
@@ -97,9 +98,29 @@ class DatabaseManager:
echo: Enable SQL query logging
"""
self.database_url = database_url
# Ensure parent directory exists for SQLite databases
if database_url.startswith("sqlite:///"):
from pathlib import Path
# Extract path from sqlite:///path/to/db.sqlite
db_path = Path(database_url.replace("sqlite:///", ""))
db_path.parent.mkdir(parents=True, exist_ok=True)
self.engine = create_database_engine(database_url, echo=echo)
self.session_factory = create_session_factory(self.engine)
# Create async engine for async operations
async_url = database_url.replace("sqlite://", "sqlite+aiosqlite://")
self.async_engine = create_async_engine(async_url, echo=echo)
from sqlalchemy.ext.asyncio import async_sessionmaker
self.async_session_factory = async_sessionmaker(
self.async_engine,
class_=AsyncSession,
expire_on_commit=False,
)
def create_tables(self) -> None:
"""Create all database tables."""
create_tables(self.engine)
@@ -138,6 +159,21 @@ class DatabaseManager:
finally:
session.close()
@asynccontextmanager
async def async_session(self) -> AsyncGenerator[AsyncSession, None]:
"""Provide an async session context manager.
Yields:
AsyncSession instance
Example:
async with db.async_session() as session:
result = await session.execute(select(Node))
await session.commit()
"""
async with self.async_session_factory() as session:
yield session
def dispose(self) -> None:
"""Dispose of the database engine and connection pool."""
self.engine.dispose()

View File

@@ -0,0 +1,142 @@
"""Event hash utilities for deduplication.
This module provides functions to compute deterministic hashes for events,
allowing deduplication when multiple receiver nodes report the same event.
"""
import hashlib
from datetime import datetime
from typing import Optional
def compute_message_hash(
text: str,
pubkey_prefix: Optional[str] = None,
channel_idx: Optional[int] = None,
sender_timestamp: Optional[datetime] = None,
txt_type: Optional[int] = None,
) -> str:
"""Compute a deterministic hash for a message.
The hash is computed from fields that uniquely identify a message's content
and sender, excluding receiver-specific data.
Args:
text: Message content
pubkey_prefix: Sender's public key prefix (12 chars)
channel_idx: Channel index for channel messages
sender_timestamp: Sender's timestamp
txt_type: Message type indicator
Returns:
32-character hex hash string
"""
# Build a canonical string from the relevant fields
parts = [
text or "",
pubkey_prefix or "",
str(channel_idx) if channel_idx is not None else "",
sender_timestamp.isoformat() if sender_timestamp else "",
str(txt_type) if txt_type is not None else "",
]
canonical = "|".join(parts)
return hashlib.md5(canonical.encode("utf-8")).hexdigest()
def compute_advertisement_hash(
public_key: str,
name: Optional[str] = None,
adv_type: Optional[str] = None,
flags: Optional[int] = None,
received_at: Optional[datetime] = None,
bucket_seconds: int = 120,
) -> str:
"""Compute a deterministic hash for an advertisement.
Advertisements are bucketed by time since the same node may advertise
periodically and we want to deduplicate within a time window.
Args:
public_key: Advertised node's public key
name: Advertised name
adv_type: Node type
flags: Capability flags
received_at: When received (used for time bucketing)
bucket_seconds: Time bucket size in seconds (default 120)
Returns:
32-character hex hash string
"""
# Bucket the time to allow deduplication within a window
time_bucket = ""
if received_at:
# Round down to nearest bucket
epoch = int(received_at.timestamp())
bucket_epoch = (epoch // bucket_seconds) * bucket_seconds
time_bucket = str(bucket_epoch)
parts = [
public_key,
name or "",
adv_type or "",
str(flags) if flags is not None else "",
time_bucket,
]
canonical = "|".join(parts)
return hashlib.md5(canonical.encode("utf-8")).hexdigest()
def compute_trace_hash(initiator_tag: int) -> str:
"""Compute a deterministic hash for a trace path.
Trace paths have a unique initiator_tag that serves as the identifier.
Args:
initiator_tag: Unique trace identifier
Returns:
32-character hex hash string
"""
return hashlib.md5(str(initiator_tag).encode("utf-8")).hexdigest()
def compute_telemetry_hash(
node_public_key: str,
parsed_data: Optional[dict] = None,
received_at: Optional[datetime] = None,
bucket_seconds: int = 120,
) -> str:
"""Compute a deterministic hash for a telemetry record.
Telemetry is bucketed by time since nodes report periodically.
Args:
node_public_key: Reporting node's public key
parsed_data: Decoded sensor readings
received_at: When received (used for time bucketing)
bucket_seconds: Time bucket size in seconds (default 120)
Returns:
32-character hex hash string
"""
# Bucket the time
time_bucket = ""
if received_at:
epoch = int(received_at.timestamp())
bucket_epoch = (epoch // bucket_seconds) * bucket_seconds
time_bucket = str(bucket_epoch)
# Serialize parsed_data deterministically
data_str = ""
if parsed_data:
# Sort keys for deterministic serialization
sorted_items = sorted(parsed_data.items())
data_str = str(sorted_items)
parts = [
node_public_key,
data_str,
time_bucket,
]
canonical = "|".join(parts)
return hashlib.md5(canonical.encode("utf-8")).hexdigest()

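The time bucketing used by the advertisement and telemetry hashes can be checked in isolation: two receive times inside the same 120-second bucket produce the same bucket value, so their hashes collide deliberately and the event deduplicates.

```python
# Sketch of the bucketing step from compute_advertisement_hash /
# compute_telemetry_hash: round the epoch down to the bucket boundary.
from datetime import datetime, timezone

def bucket_epoch(received_at: datetime, bucket_seconds: int = 120) -> int:
    epoch = int(received_at.timestamp())
    return (epoch // bucket_seconds) * bucket_seconds

t1 = datetime(2026, 2, 18, 23, 0, 5, tzinfo=timezone.utc)
t2 = datetime(2026, 2, 18, 23, 1, 55, tzinfo=timezone.utc)  # same bucket
t3 = datetime(2026, 2, 18, 23, 2, 5, tzinfo=timezone.utc)   # next bucket
```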
View File

@@ -0,0 +1,81 @@
"""Lightweight i18n support for MeshCore Hub.
Loads JSON translation files and provides a ``t()`` lookup function
that is shared between the Python (Jinja2) and JavaScript (SPA) sides.
The same ``en.json`` file is served as a static asset for the client and
read from disk for server-side template rendering.
"""
import json
import logging
from pathlib import Path
from typing import Any
logger = logging.getLogger(__name__)
_translations: dict[str, Any] = {}
_locale: str = "en"
# Directory where locale JSON files live (web/static/locales/)
LOCALES_DIR = Path(__file__).parent.parent / "web" / "static" / "locales"
def load_locale(locale: str = "en", locales_dir: Path | None = None) -> None:
"""Load a locale's translation file into memory.
Args:
locale: Language code (e.g. ``"en"``).
locales_dir: Override directory containing ``<locale>.json`` files.
"""
global _translations, _locale
directory = locales_dir or LOCALES_DIR
path = directory / f"{locale}.json"
if not path.exists():
logger.warning("Locale file not found: %s, falling back to 'en'", path)
path = directory / "en.json"
if path.exists():
_translations = json.loads(path.read_text(encoding="utf-8"))
_locale = locale
logger.info("Loaded locale '%s' from %s", locale, path)
else:
logger.error("No locale files found in %s", directory)
def _resolve(key: str) -> Any:
"""Walk a dot-separated key through the nested translation dict."""
value: Any = _translations
for part in key.split("."):
if isinstance(value, dict):
value = value.get(part)
else:
return None
return value
def t(key: str, **kwargs: Any) -> str:
"""Translate a key with optional interpolation.
Supports ``{{var}}`` placeholders in translation strings.
Args:
key: Dot-separated translation key (e.g. ``"nav.home"``).
**kwargs: Interpolation values.
Returns:
Translated string, or the key itself as fallback.
"""
val = _resolve(key)
if not isinstance(val, str):
return key
# Interpolation: replace {{var}} placeholders
for k, v in kwargs.items():
val = val.replace("{{" + k + "}}", str(v))
return val
def get_locale() -> str:
"""Return the currently loaded locale code."""
return _locale

View File

@@ -9,6 +9,7 @@ from meshcore_hub.common.models.trace_path import TracePath
from meshcore_hub.common.models.telemetry import Telemetry
from meshcore_hub.common.models.event_log import EventLog
from meshcore_hub.common.models.member import Member
from meshcore_hub.common.models.event_receiver import EventReceiver, add_event_receiver
__all__ = [
"Base",
@@ -21,4 +22,6 @@ __all__ = [
"Telemetry",
"EventLog",
"Member",
"EventReceiver",
"add_event_receiver",
]

View File

@@ -58,6 +58,11 @@ class Advertisement(Base, UUIDMixin, TimestampMixin):
default=utc_now,
nullable=False,
)
event_hash: Mapped[Optional[str]] = mapped_column(
String(32),
nullable=True,
unique=True,
)
__table_args__ = (Index("ix_advertisements_received_at", "received_at"),)

View File

@@ -0,0 +1,127 @@
"""EventReceiver model for tracking which nodes received each event."""
from datetime import datetime
from typing import TYPE_CHECKING, Optional
from uuid import uuid4
from sqlalchemy import DateTime, Float, ForeignKey, Index, String, UniqueConstraint
from sqlalchemy.dialects.sqlite import insert as sqlite_insert
from sqlalchemy.orm import Mapped, Session, mapped_column, relationship
from meshcore_hub.common.models.base import Base, TimestampMixin, UUIDMixin, utc_now
if TYPE_CHECKING:
from meshcore_hub.common.models.node import Node
class EventReceiver(Base, UUIDMixin, TimestampMixin):
"""Junction model tracking which receivers observed each event.
This table enables multi-receiver tracking for deduplicated events.
When multiple receiver nodes observe the same mesh event, each receiver
gets an entry in this table linked by the event_hash.
Attributes:
id: UUID primary key
event_type: Type of event ('message', 'advertisement', 'trace', 'telemetry')
event_hash: Hash identifying the unique event (links to event tables)
receiver_node_id: FK to the node that received this event
snr: Signal-to-noise ratio at this receiver (if available)
received_at: When this specific receiver saw the event
created_at: Record creation timestamp
updated_at: Record update timestamp
"""
__tablename__ = "event_receivers"
event_type: Mapped[str] = mapped_column(
String(20),
nullable=False,
)
event_hash: Mapped[str] = mapped_column(
String(32),
nullable=False,
index=True,
)
receiver_node_id: Mapped[str] = mapped_column(
String(36),
ForeignKey("nodes.id", ondelete="CASCADE"),
nullable=False,
index=True,
)
snr: Mapped[Optional[float]] = mapped_column(
Float,
nullable=True,
)
received_at: Mapped[datetime] = mapped_column(
DateTime(timezone=True),
default=utc_now,
nullable=False,
)
# Relationship to receiver node
receiver_node: Mapped["Node"] = relationship(
"Node",
foreign_keys=[receiver_node_id],
)
__table_args__ = (
UniqueConstraint(
"event_hash", "receiver_node_id", name="uq_event_receivers_hash_node"
),
Index("ix_event_receivers_type_hash", "event_type", "event_hash"),
)
def __repr__(self) -> str:
return (
f"<EventReceiver(type={self.event_type}, "
f"hash={self.event_hash[:8]}..., "
f"node={self.receiver_node_id[:8]}...)>"
)
def add_event_receiver(
session: Session,
event_type: str,
event_hash: str,
receiver_node_id: str,
snr: Optional[float] = None,
received_at: Optional[datetime] = None,
) -> bool:
"""Add a receiver to an event, handling duplicates gracefully.
Uses INSERT OR IGNORE to handle the unique constraint on (event_hash, receiver_node_id).
Args:
session: SQLAlchemy session
event_type: Type of event ('message', 'advertisement', 'trace', 'telemetry')
event_hash: Hash identifying the unique event
receiver_node_id: UUID of the receiver node
snr: Signal-to-noise ratio at this receiver (optional)
received_at: When this receiver saw the event (defaults to now)
Returns:
True if a new receiver entry was added, False if it already existed.
"""
from datetime import timezone
now = received_at or datetime.now(timezone.utc)
stmt = (
sqlite_insert(EventReceiver)
.values(
id=str(uuid4()),
event_type=event_type,
event_hash=event_hash,
receiver_node_id=receiver_node_id,
snr=snr,
received_at=now,
created_at=now,
updated_at=now,
)
.on_conflict_do_nothing(index_elements=["event_hash", "receiver_node_id"])
)
result = session.execute(stmt)
# CursorResult has rowcount attribute
rowcount = getattr(result, "rowcount", 0)
return bool(rowcount and rowcount > 0)

View File

@@ -12,21 +12,28 @@ class Member(Base, UUIDMixin, TimestampMixin):
"""Member model for network member information.
Stores information about network members/operators.
Nodes are associated with members via a 'member_id' tag on the node.
Attributes:
id: UUID primary key
member_id: Unique member identifier (e.g., 'walshie86')
name: Member's display name
callsign: Amateur radio callsign (optional)
role: Member's role in the network (optional)
description: Additional description (optional)
contact: Contact information (optional)
public_key: Associated node public key (optional, 64-char hex)
created_at: Record creation timestamp
updated_at: Record update timestamp
"""
__tablename__ = "members"
member_id: Mapped[str] = mapped_column(
String(100),
nullable=False,
unique=True,
index=True,
)
name: Mapped[str] = mapped_column(
String(255),
nullable=False,
@@ -47,11 +54,6 @@ class Member(Base, UUIDMixin, TimestampMixin):
String(255),
nullable=True,
)
public_key: Mapped[Optional[str]] = mapped_column(
String(64),
nullable=True,
index=True,
)
def __repr__(self) -> str:
return f"<Member(id={self.id}, name={self.name}, callsign={self.callsign})>"
return f"<Member(id={self.id}, member_id={self.member_id}, name={self.name}, callsign={self.callsign})>"

View File

@@ -76,6 +76,11 @@ class Message(Base, UUIDMixin, TimestampMixin):
default=utc_now,
nullable=False,
)
event_hash: Mapped[Optional[str]] = mapped_column(
String(32),
nullable=True,
unique=True,
)
__table_args__ = (
Index("ix_messages_message_type", "message_type"),

View File

@@ -3,7 +3,7 @@
from datetime import datetime
from typing import TYPE_CHECKING, Optional
from sqlalchemy import DateTime, Index, Integer, String
from sqlalchemy import DateTime, Float, Index, Integer, String
from sqlalchemy.orm import Mapped, mapped_column, relationship
from meshcore_hub.common.models.base import Base, TimestampMixin, UUIDMixin, utc_now
@@ -23,6 +23,8 @@ class Node(Base, UUIDMixin, TimestampMixin):
flags: Capability/status flags bitmask
first_seen: Timestamp of first advertisement
last_seen: Timestamp of most recent activity
lat: GPS latitude coordinate (if available)
lon: GPS longitude coordinate (if available)
created_at: Record creation timestamp
updated_at: Record update timestamp
"""
@@ -52,10 +54,18 @@ class Node(Base, UUIDMixin, TimestampMixin):
default=utc_now,
nullable=False,
)
last_seen: Mapped[datetime] = mapped_column(
last_seen: Mapped[Optional[datetime]] = mapped_column(
DateTime(timezone=True),
default=utc_now,
nullable=False,
default=None,
nullable=True,
)
lat: Mapped[Optional[float]] = mapped_column(
Float,
nullable=True,
)
lon: Mapped[Optional[float]] = mapped_column(
Float,
nullable=True,
)
# Relationships

View File

@@ -21,7 +21,7 @@ class NodeTag(Base, UUIDMixin, TimestampMixin):
node_id: Foreign key to nodes table
key: Tag name/key
value: Tag value (stored as text, can be JSON for typed values)
value_type: Type hint (string, number, boolean, coordinate)
value_type: Type hint (string, number, boolean)
created_at: Record creation timestamp
updated_at: Record update timestamp
"""

View File

@@ -54,6 +54,11 @@ class Telemetry(Base, UUIDMixin, TimestampMixin):
default=utc_now,
nullable=False,
)
event_hash: Mapped[Optional[str]] = mapped_column(
String(32),
nullable=True,
unique=True,
)
__table_args__ = (Index("ix_telemetry_received_at", "received_at"),)

View File

@@ -3,7 +3,7 @@
from datetime import datetime
from typing import Optional
from sqlalchemy import BigInteger, DateTime, ForeignKey, Index, Integer
from sqlalchemy import BigInteger, DateTime, ForeignKey, Index, Integer, String
from sqlalchemy.dialects.sqlite import JSON
from sqlalchemy.orm import Mapped, mapped_column
@@ -67,6 +67,11 @@ class TracePath(Base, UUIDMixin, TimestampMixin):
default=utc_now,
nullable=False,
)
event_hash: Mapped[Optional[str]] = mapped_column(
String(32),
nullable=True,
unique=True,
)
__table_args__ = (
Index("ix_trace_paths_initiator_tag", "initiator_tag"),

View File

@@ -23,6 +23,7 @@ class MQTTConfig:
client_id: Optional[str] = None
keepalive: int = 60
clean_session: bool = True
tls: bool = False
class TopicBuilder:
@@ -131,6 +132,11 @@ class MQTTClient:
self._connected = False
self._message_handlers: dict[str, list[MessageHandler]] = {}
# Set up TLS if enabled
if config.tls:
self._client.tls_set()
logger.debug("TLS/SSL enabled for MQTT connection")
# Set up authentication if provided
if config.username:
self._client.username_pw_set(config.username, config.password)
@@ -344,6 +350,7 @@ def create_mqtt_client(
password: Optional[str] = None,
prefix: str = "meshcore",
client_id: Optional[str] = None,
tls: bool = False,
) -> MQTTClient:
"""Create and configure an MQTT client.
@@ -354,6 +361,7 @@ def create_mqtt_client(
password: MQTT password (optional)
prefix: Topic prefix
client_id: Client identifier (optional)
tls: Enable TLS/SSL connection (optional)
Returns:
Configured MQTTClient instance
@@ -365,5 +373,6 @@ def create_mqtt_client(
password=password,
prefix=prefix,
client_id=client_id,
tls=tls,
)
return MQTTClient(config)

View File

@@ -20,6 +20,7 @@ from meshcore_hub.common.schemas.nodes import (
NodeTagRead,
)
from meshcore_hub.common.schemas.messages import (
ReceiverInfo,
MessageRead,
MessageList,
MessageFilters,
@@ -35,6 +36,9 @@ from meshcore_hub.common.schemas.members import (
MemberRead,
MemberList,
)
from meshcore_hub.common.schemas.network import (
RadioConfig,
)
__all__ = [
# Events
@@ -54,7 +58,8 @@ __all__ = [
"NodeTagCreate",
"NodeTagUpdate",
"NodeTagRead",
# Messages
# Messages & Events
"ReceiverInfo",
"MessageRead",
"MessageList",
"MessageFilters",
@@ -67,4 +72,6 @@ __all__ = [
"MemberUpdate",
"MemberRead",
"MemberList",
# Network
"RadioConfig",
]

View File

@@ -157,7 +157,16 @@ class TelemetryResponseEvent(BaseModel):
class ContactInfo(BaseModel):
"""Schema for a single contact in CONTACTS event."""
"""Schema for a single contact in CONTACTS event.
Device payload fields:
- public_key: Node's 64-char hex public key
- adv_name: Node's advertised name (device field)
- type: Numeric node type (0=none, 1=chat, 2=repeater, 3=room)
- flags: Capability flags
- last_advert: Unix timestamp of last advertisement
- adv_lat, adv_lon: GPS coordinates (if available)
"""
public_key: str = Field(
...,
@@ -165,14 +174,40 @@ class ContactInfo(BaseModel):
max_length=64,
description="Node's full public key",
)
adv_name: Optional[str] = Field(
default=None,
max_length=255,
description="Node's advertised name (from device)",
)
type: Optional[int] = Field(
default=None,
description="Numeric node type: 0=none, 1=chat, 2=repeater, 3=room",
)
flags: Optional[int] = Field(
default=None,
description="Capability/status flags bitmask",
)
last_advert: Optional[int] = Field(
default=None,
description="Unix timestamp of last advertisement",
)
adv_lat: Optional[float] = Field(
default=None,
description="GPS latitude (if available)",
)
adv_lon: Optional[float] = Field(
default=None,
description="GPS longitude (if available)",
)
# Legacy field names for backwards compatibility
name: Optional[str] = Field(
default=None,
max_length=255,
description="Node name/alias",
description="Node name/alias (legacy, prefer adv_name)",
)
node_type: Optional[str] = Field(
default=None,
description="Node type: chat, repeater, room, none",
description="Node type string (legacy, prefer type)",
)
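The schema above keeps both the raw device fields and legacy aliases. A minimal sketch of normalizing a device contact payload to a single shape — the numeric type mapping follows the ContactInfo docstring, and the helper name is illustrative, not part of the codebase:

```python
from typing import Optional

# Numeric node types as documented on ContactInfo.
NODE_TYPES = {0: "none", 1: "chat", 2: "repeater", 3: "room"}

def normalize_contact(raw: dict) -> dict:
    """Prefer the new device fields, falling back to the legacy aliases."""
    type_num: Optional[int] = raw.get("type")
    return {
        "public_key": raw["public_key"],
        # adv_name is the device field; name is the legacy alias.
        "name": raw.get("adv_name") or raw.get("name"),
        # Map numeric type to a string; fall back to legacy node_type.
        "node_type": NODE_TYPES.get(type_num, raw.get("node_type")),
    }

contact = normalize_contact(
    {"public_key": "ab" * 32, "adv_name": "Repeater-1", "type": 2}
)
print(contact["node_type"])  # repeater
```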

View File

@@ -7,8 +7,18 @@ from pydantic import BaseModel, Field
class MemberCreate(BaseModel):
"""Schema for creating a member."""
"""Schema for creating a member.
Note: Nodes are associated with members via a 'member_id' tag on the node,
not through this schema.
"""
member_id: str = Field(
...,
min_length=1,
max_length=100,
description="Unique member identifier (e.g., 'walshie86')",
)
name: str = Field(
...,
min_length=1,
@@ -34,18 +44,21 @@ class MemberCreate(BaseModel):
max_length=255,
description="Contact information",
)
-public_key: Optional[str] = Field(
-default=None,
-min_length=64,
-max_length=64,
-pattern=r"^[0-9a-fA-F]{64}$",
-description="Associated node public key (64-char hex)",
-)
class MemberUpdate(BaseModel):
"""Schema for updating a member."""
"""Schema for updating a member.
Note: Nodes are associated with members via a 'member_id' tag on the node,
not through this schema.
"""
member_id: Optional[str] = Field(
default=None,
min_length=1,
max_length=100,
description="Unique member identifier (e.g., 'walshie86')",
)
name: Optional[str] = Field(
default=None,
min_length=1,
@@ -71,27 +84,22 @@ class MemberUpdate(BaseModel):
max_length=255,
description="Contact information",
)
-public_key: Optional[str] = Field(
-default=None,
-min_length=64,
-max_length=64,
-pattern=r"^[0-9a-fA-F]{64}$",
-description="Associated node public key (64-char hex)",
-)
class MemberRead(BaseModel):
"""Schema for reading a member."""
"""Schema for reading a member.
Note: Nodes are associated with members via a 'member_id' tag on the node.
To find nodes for a member, query nodes with a 'member_id' tag matching this member.
"""
id: str = Field(..., description="Member UUID")
member_id: str = Field(..., description="Unique member identifier")
name: str = Field(..., description="Member's display name")
callsign: Optional[str] = Field(default=None, description="Amateur radio callsign")
role: Optional[str] = Field(default=None, description="Member's role")
description: Optional[str] = Field(default=None, description="Description")
contact: Optional[str] = Field(default=None, description="Contact information")
-public_key: Optional[str] = Field(
-default=None, description="Associated node public key"
-)
created_at: datetime = Field(..., description="Creation timestamp")
updated_at: datetime = Field(..., description="Last update timestamp")

View File

@@ -6,19 +6,41 @@ from typing import Literal, Optional
from pydantic import BaseModel, Field
class ReceiverInfo(BaseModel):
"""Information about a receiver that observed an event."""
node_id: str = Field(..., description="Receiver node UUID")
public_key: str = Field(..., description="Receiver node public key")
name: Optional[str] = Field(default=None, description="Receiver node name")
tag_name: Optional[str] = Field(default=None, description="Receiver name from tags")
snr: Optional[float] = Field(
default=None, description="Signal-to-noise ratio at this receiver"
)
received_at: datetime = Field(..., description="When this receiver saw the event")
class Config:
from_attributes = True
class MessageRead(BaseModel):
"""Schema for reading a message."""
id: str = Field(..., description="Message UUID")
-receiver_node_id: Optional[str] = Field(
-default=None, description="Receiving interface node UUID"
+received_by: Optional[str] = Field(
+default=None, description="Receiving interface node public key"
)
receiver_name: Optional[str] = Field(default=None, description="Receiver node name")
receiver_tag_name: Optional[str] = Field(
default=None, description="Receiver name from tags"
)
message_type: str = Field(..., description="Message type (contact, channel)")
pubkey_prefix: Optional[str] = Field(
default=None, description="Sender's public key prefix (12 chars)"
)
-sender_friendly_name: Optional[str] = Field(
-default=None, description="Sender's friendly name from node tags"
+sender_name: Optional[str] = Field(
+default=None, description="Sender's advertised node name"
)
sender_tag_name: Optional[str] = Field(
default=None, description="Sender's name from node tags"
)
channel_idx: Optional[int] = Field(default=None, description="Channel index")
text: str = Field(..., description="Message content")
@@ -31,6 +53,9 @@ class MessageRead(BaseModel):
)
received_at: datetime = Field(..., description="When received by interface")
created_at: datetime = Field(..., description="Record creation timestamp")
receivers: list[ReceiverInfo] = Field(
default_factory=list, description="All receivers that observed this message"
)
class Config:
from_attributes = True
@@ -79,17 +104,32 @@ class MessageFilters(BaseModel):
class AdvertisementRead(BaseModel):
"""Schema for reading an advertisement."""
id: str = Field(..., description="Advertisement UUID")
-receiver_node_id: Optional[str] = Field(
-default=None, description="Receiving interface node UUID"
+received_by: Optional[str] = Field(
+default=None, description="Receiving interface node public key"
)
receiver_name: Optional[str] = Field(default=None, description="Receiver node name")
receiver_tag_name: Optional[str] = Field(
default=None, description="Receiver name from tags"
)
node_id: Optional[str] = Field(default=None, description="Advertised node UUID")
public_key: str = Field(..., description="Advertised public key")
name: Optional[str] = Field(default=None, description="Advertised name")
node_name: Optional[str] = Field(
default=None, description="Node name from nodes table"
)
node_tag_name: Optional[str] = Field(
default=None, description="Node name from tags"
)
node_tag_description: Optional[str] = Field(
default=None, description="Node description from tags"
)
adv_type: Optional[str] = Field(default=None, description="Node type")
flags: Optional[int] = Field(default=None, description="Capability flags")
received_at: datetime = Field(..., description="When received")
created_at: datetime = Field(..., description="Record creation timestamp")
receivers: list[ReceiverInfo] = Field(
default_factory=list,
description="All receivers that observed this advertisement",
)
class Config:
from_attributes = True
@@ -107,9 +147,8 @@ class AdvertisementList(BaseModel):
class TracePathRead(BaseModel):
"""Schema for reading a trace path."""
id: str = Field(..., description="Trace path UUID")
-receiver_node_id: Optional[str] = Field(
-default=None, description="Receiving interface node UUID"
+received_by: Optional[str] = Field(
+default=None, description="Receiving interface node public key"
)
initiator_tag: int = Field(..., description="Trace identifier")
path_len: Optional[int] = Field(default=None, description="Path length")
@@ -124,6 +163,10 @@ class TracePathRead(BaseModel):
hop_count: Optional[int] = Field(default=None, description="Total hops")
received_at: datetime = Field(..., description="When received")
created_at: datetime = Field(..., description="Record creation timestamp")
receivers: list[ReceiverInfo] = Field(
default_factory=list,
description="All receivers that observed this trace",
)
class Config:
from_attributes = True
@@ -141,17 +184,19 @@ class TracePathList(BaseModel):
class TelemetryRead(BaseModel):
"""Schema for reading a telemetry record."""
id: str = Field(..., description="Telemetry UUID")
-receiver_node_id: Optional[str] = Field(
-default=None, description="Receiving interface node UUID"
+received_by: Optional[str] = Field(
+default=None, description="Receiving interface node public key"
)
node_id: Optional[str] = Field(default=None, description="Reporting node UUID")
node_public_key: str = Field(..., description="Reporting node public key")
parsed_data: Optional[dict] = Field(
default=None, description="Decoded sensor readings"
)
received_at: datetime = Field(..., description="When received")
created_at: datetime = Field(..., description="Record creation timestamp")
receivers: list[ReceiverInfo] = Field(
default_factory=list,
description="All receivers that observed this telemetry",
)
class Config:
from_attributes = True
@@ -171,11 +216,25 @@ class RecentAdvertisement(BaseModel):
public_key: str = Field(..., description="Node public key")
name: Optional[str] = Field(default=None, description="Node name")
-friendly_name: Optional[str] = Field(default=None, description="Friendly name tag")
+tag_name: Optional[str] = Field(default=None, description="Name tag")
adv_type: Optional[str] = Field(default=None, description="Node type")
received_at: datetime = Field(..., description="When received")
class ChannelMessage(BaseModel):
"""Schema for a channel message summary."""
text: str = Field(..., description="Message text")
sender_name: Optional[str] = Field(default=None, description="Sender name")
sender_tag_name: Optional[str] = Field(
default=None, description="Sender name from tags"
)
pubkey_prefix: Optional[str] = Field(
default=None, description="Sender public key prefix"
)
received_at: datetime = Field(..., description="When received")
class DashboardStats(BaseModel):
"""Schema for dashboard statistics."""
@@ -183,10 +242,14 @@ class DashboardStats(BaseModel):
active_nodes: int = Field(..., description="Nodes active in last 24h")
total_messages: int = Field(..., description="Total number of messages")
messages_today: int = Field(..., description="Messages received today")
messages_7d: int = Field(default=0, description="Messages received in last 7 days")
total_advertisements: int = Field(..., description="Total advertisements")
advertisements_24h: int = Field(
default=0, description="Advertisements received in last 24h"
)
advertisements_7d: int = Field(
default=0, description="Advertisements received in last 7 days"
)
recent_advertisements: list[RecentAdvertisement] = Field(
default_factory=list, description="Last 10 advertisements"
)
@@ -194,3 +257,39 @@ class DashboardStats(BaseModel):
default_factory=dict,
description="Message count per channel",
)
channel_messages: dict[int, list[ChannelMessage]] = Field(
default_factory=dict,
description="Recent messages per channel (up to 5 each)",
)
class DailyActivityPoint(BaseModel):
"""Schema for a single day's activity count."""
date: str = Field(..., description="Date in YYYY-MM-DD format")
count: int = Field(..., description="Count for this day")
class DailyActivity(BaseModel):
"""Schema for daily advertisement activity over a period."""
days: int = Field(..., description="Number of days in the period")
data: list[DailyActivityPoint] = Field(
..., description="Daily advertisement counts"
)
class MessageActivity(BaseModel):
"""Schema for daily message activity over a period."""
days: int = Field(..., description="Number of days in the period")
data: list[DailyActivityPoint] = Field(..., description="Daily message counts")
class NodeCountHistory(BaseModel):
"""Schema for node count over time."""
days: int = Field(..., description="Number of days in the period")
data: list[DailyActivityPoint] = Field(
..., description="Cumulative node count per day"
)
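The DailyActivityPoint series above are just per-day counts over a rolling window. A sketch of bucketing `received_at` timestamps into that shape, assuming UTC dates (the helper name is illustrative):

```python
from collections import Counter
from datetime import date, datetime, timedelta, timezone
from typing import Optional

def daily_activity(
    timestamps: list[datetime], days: int, today: Optional[date] = None
) -> dict:
    """Bucket timestamps into the {days, data: [{date, count}]} DailyActivity shape."""
    today = today or datetime.now(timezone.utc).date()
    start = today - timedelta(days=days - 1)
    # Count events per ISO date inside the window.
    counts = Counter(
        ts.date().isoformat() for ts in timestamps if start <= ts.date() <= today
    )
    data = []
    for i in range(days):
        day = (start + timedelta(days=i)).isoformat()
        data.append({"date": day, "count": counts.get(day, 0)})
    return {"days": days, "data": data}

ref = date(2026, 2, 18)
stamps = [
    datetime(2026, 2, 18, 12, 0, tzinfo=timezone.utc),
    datetime(2026, 2, 18, 13, 0, tzinfo=timezone.utc),
    datetime(2026, 2, 17, 9, 0, tzinfo=timezone.utc),
]
activity = daily_activity(stamps, days=7, today=ref)
print(activity["data"][-1])  # {'date': '2026-02-18', 'count': 2}
```

Zero-filling every day in the window (rather than emitting only days with activity) keeps chart rendering on the dashboard simple.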

View File

@@ -0,0 +1,65 @@
"""Pydantic schemas for network configuration."""
from typing import Optional
from pydantic import BaseModel
class RadioConfig(BaseModel):
"""Parsed radio configuration from comma-delimited string.
Format: "<profile>,<frequency>,<bandwidth>,<spreading_factor>,<coding_rate>,<tx_power>"
Example: "EU/UK Narrow,869.618MHz,62.5kHz,8,8,22dBm"
"""
profile: Optional[str] = None
frequency: Optional[str] = None
bandwidth: Optional[str] = None
spreading_factor: Optional[int] = None
coding_rate: Optional[int] = None
tx_power: Optional[str] = None
@classmethod
def from_config_string(cls, config_str: Optional[str]) -> Optional["RadioConfig"]:
"""Parse a comma-delimited radio config string.
Args:
config_str: Comma-delimited string in format:
"<profile>,<frequency>,<bandwidth>,<spreading_factor>,<coding_rate>,<tx_power>"
Returns:
RadioConfig instance if parsing succeeds, None if input is None or empty
"""
if not config_str:
return None
parts = [p.strip() for p in config_str.split(",")]
# Pad partial configs with empty strings (mapped to None below)
while len(parts) < 6:
parts.append("")
# Parse spreading factor and coding rate as integers
spreading_factor = None
coding_rate = None
try:
if parts[3]:
spreading_factor = int(parts[3])
except ValueError:
pass
try:
if parts[4]:
coding_rate = int(parts[4])
except ValueError:
pass
return cls(
profile=parts[0] or None,
frequency=parts[1] or None,
bandwidth=parts[2] or None,
spreading_factor=spreading_factor,
coding_rate=coding_rate,
tx_power=parts[5] or None,
)
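The parser above can be exercised like this. A standalone re-implementation using a dataclass stand-in for the pydantic model (field names mirror RadioConfig; this sketch just avoids the pydantic dependency):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RadioConfigSketch:
    profile: Optional[str] = None
    frequency: Optional[str] = None
    bandwidth: Optional[str] = None
    spreading_factor: Optional[int] = None
    coding_rate: Optional[int] = None
    tx_power: Optional[str] = None

def parse_radio_config(config_str: Optional[str]) -> Optional[RadioConfigSketch]:
    """Mirror RadioConfig.from_config_string: pad short configs, coerce SF/CR."""
    if not config_str:
        return None
    parts = [p.strip() for p in config_str.split(",")]
    while len(parts) < 6:
        parts.append("")
    def to_int(s: str) -> Optional[int]:
        try:
            return int(s) if s else None
        except ValueError:
            return None
    return RadioConfigSketch(
        profile=parts[0] or None,
        frequency=parts[1] or None,
        bandwidth=parts[2] or None,
        spreading_factor=to_int(parts[3]),
        coding_rate=to_int(parts[4]),
        tx_power=parts[5] or None,
    )

cfg = parse_radio_config("EU/UK Narrow,869.618MHz,62.5kHz,8,8,22dBm")
print(cfg.spreading_factor)  # 8
print(parse_radio_config(""))  # None
```

Note the lenient failure mode: a malformed or truncated field becomes None rather than raising, so a device reporting a partial config string still produces a usable object.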

View File

@@ -19,7 +19,7 @@ class NodeTagCreate(BaseModel):
default=None,
description="Tag value",
)
value_type: Literal["string", "number", "boolean", "coordinate"] = Field(
value_type: Literal["string", "number", "boolean"] = Field(
default="string",
description="Value type hint",
)
@@ -32,17 +32,36 @@ class NodeTagUpdate(BaseModel):
default=None,
description="Tag value",
)
value_type: Optional[Literal["string", "number", "boolean", "coordinate"]] = Field(
value_type: Optional[Literal["string", "number", "boolean"]] = Field(
default=None,
description="Value type hint",
)
class NodeTagMove(BaseModel):
"""Schema for moving a node tag to a different node."""
new_public_key: str = Field(
...,
min_length=64,
max_length=64,
description="Public key of the destination node",
)
class NodeTagsCopyResult(BaseModel):
"""Schema for bulk copy tags result."""
copied: int = Field(..., description="Number of tags copied")
skipped: int = Field(..., description="Number of tags skipped (already exist)")
skipped_keys: list[str] = Field(
default_factory=list, description="Keys of skipped tags"
)
class NodeTagRead(BaseModel):
"""Schema for reading a node tag."""
id: str = Field(..., description="Tag UUID")
node_id: str = Field(..., description="Parent node UUID")
key: str = Field(..., description="Tag name/key")
value: Optional[str] = Field(default=None, description="Tag value")
value_type: str = Field(..., description="Value type hint")
@@ -56,13 +75,16 @@ class NodeTagRead(BaseModel):
class NodeRead(BaseModel):
"""Schema for reading a node."""
id: str = Field(..., description="Node UUID")
public_key: str = Field(..., description="Node's 64-character hex public key")
name: Optional[str] = Field(default=None, description="Node display name")
adv_type: Optional[str] = Field(default=None, description="Advertisement type")
flags: Optional[int] = Field(default=None, description="Capability flags")
first_seen: datetime = Field(..., description="First advertisement timestamp")
-last_seen: datetime = Field(..., description="Last activity timestamp")
+last_seen: Optional[datetime] = Field(
+default=None, description="Last activity timestamp"
+)
lat: Optional[float] = Field(default=None, description="GPS latitude coordinate")
lon: Optional[float] = Field(default=None, description="GPS longitude coordinate")
created_at: datetime = Field(..., description="Record creation timestamp")
updated_at: datetime = Field(..., description="Record update timestamp")
tags: list[NodeTagRead] = Field(default_factory=list, description="Node tags")
@@ -85,7 +107,7 @@ class NodeFilters(BaseModel):
search: Optional[str] = Field(
default=None,
description="Search in name or public key",
description="Search in name tag, node name, or public key",
)
adv_type: Optional[str] = Field(
default=None,

View File

@@ -51,6 +51,13 @@ def interface() -> None:
envvar="NODE_ADDRESS",
help="Override for device public key/address (hex string)",
)
@click.option(
"--device-name",
type=str,
default=None,
envvar="MESHCORE_DEVICE_NAME",
help="Device/node name (optional)",
)
@click.option(
"--mqtt-host",
type=str,
@@ -86,6 +93,26 @@ def interface() -> None:
envvar="MQTT_PREFIX",
help="MQTT topic prefix",
)
@click.option(
"--mqtt-tls",
is_flag=True,
default=False,
envvar="MQTT_TLS",
help="Enable TLS/SSL for MQTT connection",
)
@click.option(
"--contact-cleanup/--no-contact-cleanup",
default=True,
envvar="CONTACT_CLEANUP_ENABLED",
help="Enable/disable automatic removal of stale contacts (RECEIVER mode only)",
)
@click.option(
"--contact-cleanup-days",
type=int,
default=7,
envvar="CONTACT_CLEANUP_DAYS",
help="Remove contacts not advertised for this many days (RECEIVER mode only)",
)
@click.option(
"--log-level",
type=click.Choice(["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"]),
@@ -99,11 +126,15 @@ def run(
baud: int,
mock: bool,
node_address: str | None,
device_name: str | None,
mqtt_host: str,
mqtt_port: int,
mqtt_username: str | None,
mqtt_password: str | None,
prefix: str,
mqtt_tls: bool,
contact_cleanup: bool,
contact_cleanup_days: int,
log_level: str,
) -> None:
"""Run the interface component.
@@ -139,11 +170,15 @@ def run(
baud=baud,
mock=mock,
node_address=node_address,
device_name=device_name,
mqtt_host=mqtt_host,
mqtt_port=mqtt_port,
mqtt_username=mqtt_username,
mqtt_password=mqtt_password,
mqtt_prefix=prefix,
mqtt_tls=mqtt_tls,
contact_cleanup_enabled=contact_cleanup,
contact_cleanup_days=contact_cleanup_days,
)
elif mode_upper == "SENDER":
from meshcore_hub.interface.sender import run_sender
@@ -153,11 +188,13 @@ def run(
baud=baud,
mock=mock,
node_address=node_address,
device_name=device_name,
mqtt_host=mqtt_host,
mqtt_port=mqtt_port,
mqtt_username=mqtt_username,
mqtt_password=mqtt_password,
mqtt_prefix=prefix,
mqtt_tls=mqtt_tls,
)
else:
click.echo(f"Unknown mode: {mode}", err=True)
@@ -193,6 +230,13 @@ def run(
envvar="NODE_ADDRESS",
help="Override for device public key/address (hex string)",
)
@click.option(
"--device-name",
type=str,
default=None,
envvar="MESHCORE_DEVICE_NAME",
help="Device/node name (optional)",
)
@click.option(
"--mqtt-host",
type=str,
@@ -228,16 +272,40 @@ def run(
envvar="MQTT_PREFIX",
help="MQTT topic prefix",
)
@click.option(
"--mqtt-tls",
is_flag=True,
default=False,
envvar="MQTT_TLS",
help="Enable TLS/SSL for MQTT connection",
)
@click.option(
"--contact-cleanup/--no-contact-cleanup",
default=True,
envvar="CONTACT_CLEANUP_ENABLED",
help="Enable/disable automatic removal of stale contacts",
)
@click.option(
"--contact-cleanup-days",
type=int,
default=7,
envvar="CONTACT_CLEANUP_DAYS",
help="Remove contacts not advertised for this many days",
)
def receiver(
port: str,
baud: int,
mock: bool,
node_address: str | None,
device_name: str | None,
mqtt_host: str,
mqtt_port: int,
mqtt_username: str | None,
mqtt_password: str | None,
prefix: str,
mqtt_tls: bool,
contact_cleanup: bool,
contact_cleanup_days: int,
) -> None:
"""Run interface in RECEIVER mode.
@@ -257,11 +325,15 @@ def receiver(
baud=baud,
mock=mock,
node_address=node_address,
device_name=device_name,
mqtt_host=mqtt_host,
mqtt_port=mqtt_port,
mqtt_username=mqtt_username,
mqtt_password=mqtt_password,
mqtt_prefix=prefix,
mqtt_tls=mqtt_tls,
contact_cleanup_enabled=contact_cleanup,
contact_cleanup_days=contact_cleanup_days,
)
@@ -294,6 +366,13 @@ def receiver(
envvar="NODE_ADDRESS",
help="Override for device public key/address (hex string)",
)
@click.option(
"--device-name",
type=str,
default=None,
envvar="MESHCORE_DEVICE_NAME",
help="Device/node name (optional)",
)
@click.option(
"--mqtt-host",
type=str,
@@ -329,16 +408,25 @@ def receiver(
envvar="MQTT_PREFIX",
help="MQTT topic prefix",
)
@click.option(
"--mqtt-tls",
is_flag=True,
default=False,
envvar="MQTT_TLS",
help="Enable TLS/SSL for MQTT connection",
)
def sender(
port: str,
baud: int,
mock: bool,
node_address: str | None,
device_name: str | None,
mqtt_host: str,
mqtt_port: int,
mqtt_username: str | None,
mqtt_password: str | None,
prefix: str,
mqtt_tls: bool,
) -> None:
"""Run interface in SENDER mode.
@@ -363,4 +451,5 @@ def sender(
mqtt_username=mqtt_username,
mqtt_password=mqtt_password,
mqtt_prefix=prefix,
mqtt_tls=mqtt_tls,
)

View File

@@ -164,6 +164,18 @@ class BaseMeshCoreDevice(ABC):
"""
pass
@abstractmethod
def set_name(self, name: str) -> bool:
"""Set the device's node name.
Args:
name: Node name to set
Returns:
True if name was set successfully
"""
pass
@abstractmethod
def start_message_fetching(self) -> bool:
"""Start automatic message fetching.
@@ -175,6 +187,56 @@ class BaseMeshCoreDevice(ABC):
"""
pass
@abstractmethod
def get_contacts(self) -> bool:
"""Fetch contacts from device contact database.
Triggers a CONTACTS event with all stored contacts from the device.
Note: This should only be called before the event loop is running.
Returns:
True if request was sent successfully
"""
pass
@abstractmethod
def schedule_get_contacts(self) -> bool:
"""Schedule a get_contacts request on the event loop.
This is safe to call from event handlers while the event loop is running.
Returns:
True if request was scheduled successfully
"""
pass
@abstractmethod
def remove_contact(self, public_key: str) -> bool:
"""Remove a contact from the device's contact database.
Args:
public_key: The 64-character hex public key of the contact to remove
Returns:
True if contact was removed successfully
"""
pass
@abstractmethod
def schedule_remove_contact(self, public_key: str) -> bool:
"""Schedule a remove_contact request on the event loop.
This is safe to call from event handlers while the event loop is running.
Args:
public_key: The 64-character hex public key of the contact to remove
Returns:
True if request was scheduled successfully
"""
pass
@abstractmethod
def run(self) -> None:
"""Run the device event loop (blocking)."""
@@ -322,6 +384,10 @@ class MeshCoreDevice(BaseMeshCoreDevice):
self._connected = True
logger.info(f"Connected to MeshCore device, public_key: {self._public_key}")
# Set up event subscriptions so events can be received immediately
self._setup_event_subscriptions()
return True
except Exception as e:
@@ -503,6 +569,24 @@ class MeshCoreDevice(BaseMeshCoreDevice):
logger.error(f"Failed to set device time: {e}")
return False
def set_name(self, name: str) -> bool:
"""Set the device's node name."""
if not self._connected or not self._mc:
logger.error("Cannot set name: not connected")
return False
try:
async def _set_name() -> None:
await self._mc.commands.set_name(name)
self._loop.run_until_complete(_set_name())
logger.info(f"Set device name to '{name}'")
return True
except Exception as e:
logger.error(f"Failed to set device name: {e}")
return False
def start_message_fetching(self) -> bool:
"""Start automatic message fetching."""
if not self._connected or not self._mc:
@@ -521,14 +605,107 @@ class MeshCoreDevice(BaseMeshCoreDevice):
logger.error(f"Failed to start message fetching: {e}")
return False
def get_contacts(self) -> bool:
"""Fetch contacts from device contact database.
Note: This method should only be called before the event loop is running
(e.g., during initialization). For calling during event processing,
use schedule_get_contacts() instead.
"""
if not self._connected or not self._mc:
logger.error("Cannot get contacts: not connected")
return False
try:
async def _get_contacts() -> None:
await self._mc.commands.get_contacts()
self._loop.run_until_complete(_get_contacts())
logger.info("Requested contacts from device")
return True
except Exception as e:
logger.error(f"Failed to get contacts: {e}")
return False
def schedule_get_contacts(self) -> bool:
"""Schedule a get_contacts request on the event loop.
This is safe to call from event handlers while the event loop is running.
The request is scheduled as a task on the event loop.
Returns:
True if request was scheduled, False if device not connected
"""
if not self._connected or not self._mc:
logger.error("Cannot get contacts: not connected")
return False
try:
async def _get_contacts() -> None:
await self._mc.commands.get_contacts()
asyncio.run_coroutine_threadsafe(_get_contacts(), self._loop)
logger.info("Scheduled contact sync request")
return True
except Exception as e:
logger.error(f"Failed to schedule get contacts: {e}")
return False
def remove_contact(self, public_key: str) -> bool:
"""Remove a contact from the device's contact database.
Note: This method should only be called before the event loop is running
(e.g., during initialization). For calling during event processing,
use schedule_remove_contact() instead.
"""
if not self._connected or not self._mc:
logger.error("Cannot remove contact: not connected")
return False
try:
async def _remove_contact() -> None:
await self._mc.commands.remove_contact(public_key)
self._loop.run_until_complete(_remove_contact())
logger.info(f"Removed contact {public_key[:12]}...")
return True
except Exception as e:
logger.error(f"Failed to remove contact: {e}")
return False
def schedule_remove_contact(self, public_key: str) -> bool:
"""Schedule a remove_contact request on the event loop.
This is safe to call from event handlers while the event loop is running.
The request is scheduled as a task on the event loop.
Returns:
True if request was scheduled, False if device not connected
"""
if not self._connected or not self._mc:
logger.error("Cannot remove contact: not connected")
return False
try:
async def _remove_contact() -> None:
await self._mc.commands.remove_contact(public_key)
asyncio.run_coroutine_threadsafe(_remove_contact(), self._loop)
logger.debug(f"Scheduled removal of contact {public_key[:12]}...")
return True
except Exception as e:
logger.error(f"Failed to schedule remove contact: {e}")
return False
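The `schedule_*` methods hand coroutines to an event loop running in another thread via `asyncio.run_coroutine_threadsafe`, which is the only safe way to do so from a non-loop thread. A minimal self-contained demo of the pattern, with a stand-in coroutine in place of the real device command:

```python
import asyncio
import threading

# Run a loop in a background thread, as the device event loop does.
loop = asyncio.new_event_loop()
t = threading.Thread(target=loop.run_forever, daemon=True)
t.start()

async def remove_contact(public_key: str) -> str:
    # Stand-in for the real device command.
    return f"removed {public_key[:12]}..."

# Safe to call from any thread while the loop runs; returns a
# concurrent.futures.Future that can be awaited synchronously.
future = asyncio.run_coroutine_threadsafe(remove_contact("ab" * 32), loop)
print(future.result(timeout=5))  # removed abababababab...

loop.call_soon_threadsafe(loop.stop)
t.join(timeout=5)
```

By contrast, the non-`schedule_` variants use `loop.run_until_complete()`, which raises if the loop is already running — hence the "only before the event loop is running" caveat in the docstrings above.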
def run(self) -> None:
"""Run the device event loop."""
self._running = True
logger.info("Starting device event loop")
# Set up event subscriptions
self._setup_event_subscriptions()
# Run the async event loop
async def _run_loop() -> None:
while self._running and self._connected:

View File

@@ -271,6 +271,17 @@ class MockMeshCoreDevice(BaseMeshCoreDevice):
logger.info(f"Mock: Set device time to {timestamp}")
return True
def set_name(self, name: str) -> bool:
"""Set the mock device's node name."""
if not self._connected:
logger.error("Cannot set name: not connected")
return False
logger.info(f"Mock: Set device name to '{name}'")
# Update the mock config name
self.mock_config.name = name
return True
def start_message_fetching(self) -> bool:
"""Start automatic message fetching (mock)."""
if not self._connected:
@@ -280,6 +291,68 @@ class MockMeshCoreDevice(BaseMeshCoreDevice):
logger.info("Mock: Started automatic message fetching")
return True
def get_contacts(self) -> bool:
"""Fetch contacts from mock device contact database.
Note: This should only be called before the event loop is running.
"""
if not self._connected:
logger.error("Cannot get contacts: not connected")
return False
logger.info("Mock: Requesting contacts from device")
# Generate CONTACTS event with all configured mock nodes
def send_contacts() -> None:
time.sleep(0.2)
contacts = [
{
"public_key": node.public_key,
"name": node.name,
"node_type": node.adv_type,
}
for node in self.mock_config.nodes
]
self._dispatch_event(
EventType.CONTACTS,
{"contacts": contacts},
)
threading.Thread(target=send_contacts, daemon=True).start()
return True
def schedule_get_contacts(self) -> bool:
"""Schedule a get_contacts request.
For the mock device, this is the same as get_contacts() since we
don't have a real async event loop. The contacts are sent via a thread.
"""
return self.get_contacts()
def remove_contact(self, public_key: str) -> bool:
"""Remove a contact from the mock device's contact database."""
if not self._connected:
logger.error("Cannot remove contact: not connected")
return False
# Find and remove the contact from mock_config.nodes
for i, node in enumerate(self.mock_config.nodes):
if node.public_key == public_key:
del self.mock_config.nodes[i]
logger.info(f"Mock: Removed contact {public_key[:12]}...")
return True
logger.warning(f"Mock: Contact {public_key[:12]}... not found")
return True # Return True even if not found (idempotent)
def schedule_remove_contact(self, public_key: str) -> bool:
"""Schedule a remove_contact request.
For the mock device, this is the same as remove_contact() since we
don't have a real async event loop.
"""
return self.remove_contact(public_key)
def run(self) -> None:
"""Run the mock device event loop."""
self._running = True

View File

@@ -20,6 +20,9 @@ from meshcore_hub.interface.device import (
create_device,
)
# Default contact cleanup settings
DEFAULT_CONTACT_CLEANUP_DAYS = 7
logger = logging.getLogger(__name__)
@@ -33,15 +36,24 @@ class Receiver:
self,
device: BaseMeshCoreDevice,
mqtt_client: MQTTClient,
device_name: Optional[str] = None,
contact_cleanup_enabled: bool = True,
contact_cleanup_days: int = DEFAULT_CONTACT_CLEANUP_DAYS,
):
"""Initialize receiver.
Args:
device: MeshCore device instance
mqtt_client: MQTT client instance
device_name: Optional device/node name to set on startup
contact_cleanup_enabled: Whether to remove stale contacts from device
contact_cleanup_days: Remove contacts not advertised for this many days
"""
self.device = device
self.mqtt = mqtt_client
self.device_name = device_name
self.contact_cleanup_enabled = contact_cleanup_enabled
self.contact_cleanup_days = contact_cleanup_days
self._running = False
self._shutdown_event = threading.Event()
self._device_connected = False
@@ -71,10 +83,14 @@ class Receiver:
"device_public_key": self.device.public_key,
}
-def _initialize_device(self) -> None:
+def _initialize_device(self, device_name: Optional[str] = None) -> None:
"""Initialize device after connection.
-Sets the hardware clock, sends a local advertisement, and starts message fetching.
+Sets the hardware clock, optionally sets device name, sends a local advertisement,
+starts message fetching, and syncs the contact database.
Args:
device_name: Optional device/node name to set
"""
# Set device time to current Unix timestamp
current_time = int(time.time())
@@ -83,11 +99,18 @@ class Receiver:
else:
logger.warning("Failed to synchronize device clock")
-# Send a local (non-flood) advertisement to announce presence
-if self.device.send_advertisement(flood=False):
-logger.info("Sent local advertisement")
+# Set device name if provided
+if device_name:
+if self.device.set_name(device_name):
+logger.info(f"Set device name to '{device_name}'")
+else:
+logger.warning(f"Failed to set device name to '{device_name}'")
+# Send a flood advertisement to broadcast device name
+if self.device.send_advertisement(flood=True):
+logger.info("Sent flood advertisement")
else:
-logger.warning("Failed to send local advertisement")
+logger.warning("Failed to send flood advertisement")
# Start automatic message fetching
if self.device.start_message_fetching():
@@ -95,6 +118,12 @@ class Receiver:
else:
logger.warning("Failed to start automatic message fetching")
# Fetch contact database to sync known nodes
if self.device.get_contacts():
logger.info("Requested contact database sync")
else:
logger.warning("Failed to request contact database")
def _handle_event(self, event_type: EventType, payload: dict[str, Any]) -> None:
"""Handle device event and publish to MQTT.
@@ -110,6 +139,11 @@ class Receiver:
# Convert event type to MQTT topic name
event_name = event_type.value
# Special handling for CONTACTS: split into individual messages
if event_type == EventType.CONTACTS:
self._publish_contacts(payload)
return
# Publish to MQTT
self.mqtt.publish_event(
self.device.public_key,
@@ -119,9 +153,101 @@ class Receiver:
logger.debug(f"Published {event_name} event to MQTT")
# Trigger contact sync on advertisements
if event_type == EventType.ADVERTISEMENT:
self._sync_contacts()
except Exception as e:
logger.error(f"Failed to publish event to MQTT: {e}")
def _sync_contacts(self) -> None:
"""Request contact sync from device.
Called when advertisements are received to ensure contact database
stays current with all nodes on the mesh.
"""
logger.info("Advertisement received, triggering contact sync")
success = self.device.schedule_get_contacts()
if not success:
logger.warning("Contact sync request failed")
def _publish_contacts(self, payload: dict[str, Any]) -> None:
"""Publish each contact as a separate MQTT message.
The device returns contacts as a dict keyed by public_key.
We split this into individual 'contact' events for cleaner processing.
Stale contacts (not advertised for > contact_cleanup_days) are removed
from the device and not published.
Args:
payload: Dict of contacts keyed by public_key
"""
if not self.device.public_key:
logger.warning("Cannot publish contacts: device public key not available")
return
# Handle both formats:
# - Dict keyed by public_key (real device)
# - Dict with "contacts" array (mock device)
if "contacts" in payload:
contacts = payload["contacts"]
else:
contacts = list(payload.values())
if not contacts:
logger.debug("Empty contacts list received")
return
device_key = self.device.public_key # Capture for type narrowing
current_time = int(time.time())
stale_threshold = current_time - (self.contact_cleanup_days * 24 * 60 * 60)
published_count = 0
removed_count = 0
for contact in contacts:
if not isinstance(contact, dict):
continue
public_key = contact.get("public_key")
if not public_key:
continue
# Check if contact is stale based on last_advert timestamp
# Only check if cleanup is enabled and last_advert exists
if self.contact_cleanup_enabled:
last_advert = contact.get("last_advert")
if last_advert is not None and last_advert > 0:
if last_advert < stale_threshold:
# Contact is stale - remove from device
adv_name = contact.get("adv_name", contact.get("name", ""))
logger.info(
f"Removing stale contact {public_key[:12]}... "
f"({adv_name}) - last advertised "
f"{(current_time - last_advert) // 86400} days ago"
)
self.device.schedule_remove_contact(public_key)
removed_count += 1
continue # Don't publish stale contacts
try:
self.mqtt.publish_event(
device_key,
"contact", # Use singular 'contact' for individual events
contact,
)
published_count += 1
except Exception as e:
logger.error(f"Failed to publish contact event: {e}")
if removed_count > 0:
logger.info(
f"Contact sync: published {published_count}, "
f"removed {removed_count} stale contacts"
)
else:
logger.info(f"Published {published_count} contact events to MQTT")
def start(self) -> None:
"""Start the receiver."""
logger.info("Starting RECEIVER mode")
@@ -142,20 +268,22 @@ class Receiver:
logger.error(f"Failed to connect to MQTT broker: {e}")
raise
# Connect to device
if not self.device.connect():
self._device_connected = False
logger.error("Failed to connect to MeshCore device")
self.mqtt.stop()
self.mqtt.disconnect()
self._mqtt_connected = False
raise RuntimeError("Failed to connect to MeshCore device")
# Device should already be connected (from create_receiver)
# but handle case where start() is called directly
if not self.device.is_connected:
if not self.device.connect():
self._device_connected = False
logger.error("Failed to connect to MeshCore device")
self.mqtt.stop()
self.mqtt.disconnect()
self._mqtt_connected = False
raise RuntimeError("Failed to connect to MeshCore device")
logger.info(f"Connected to MeshCore device: {self.device.public_key}")
self._device_connected = True
logger.info(f"Connected to MeshCore device: {self.device.public_key}")
# Initialize device: set time and send local advertisement
self._initialize_device()
# Initialize device: set time, optionally set name, and send local advertisement
self._initialize_device(device_name=self.device_name)
self._running = True
@@ -214,11 +342,15 @@ def create_receiver(
baud: int = 115200,
mock: bool = False,
node_address: Optional[str] = None,
device_name: Optional[str] = None,
mqtt_host: str = "localhost",
mqtt_port: int = 1883,
mqtt_username: Optional[str] = None,
mqtt_password: Optional[str] = None,
mqtt_prefix: str = "meshcore",
mqtt_tls: bool = False,
contact_cleanup_enabled: bool = True,
contact_cleanup_days: int = DEFAULT_CONTACT_CLEANUP_DAYS,
) -> Receiver:
"""Create a configured receiver instance.
@@ -227,30 +359,46 @@ def create_receiver(
baud: Baud rate
mock: Use mock device
node_address: Optional override for device public key/address
device_name: Optional device/node name to set on startup
mqtt_host: MQTT broker host
mqtt_port: MQTT broker port
mqtt_username: MQTT username
mqtt_password: MQTT password
mqtt_prefix: MQTT topic prefix
mqtt_tls: Enable TLS/SSL for MQTT connection
contact_cleanup_enabled: Whether to remove stale contacts from device
contact_cleanup_days: Remove contacts not advertised for this many days
Returns:
Configured Receiver instance
"""
# Create device
# Create and connect device first to get public key
device = create_device(port=port, baud=baud, mock=mock, node_address=node_address)
# Create MQTT client
if not device.connect():
raise RuntimeError("Failed to connect to MeshCore device")
logger.info(f"Connected to MeshCore device: {device.public_key}")
# Create MQTT client with device's public key for unique client ID
mqtt_config = MQTTConfig(
host=mqtt_host,
port=mqtt_port,
username=mqtt_username,
password=mqtt_password,
prefix=mqtt_prefix,
client_id=f"meshcore-receiver-{device.public_key[:8] if device.public_key else 'unknown'}",
client_id=f"meshcore-receiver-{device.public_key[:12] if device.public_key else 'unknown'}",
tls=mqtt_tls,
)
mqtt_client = MQTTClient(mqtt_config)
return Receiver(device, mqtt_client)
return Receiver(
device,
mqtt_client,
device_name=device_name,
contact_cleanup_enabled=contact_cleanup_enabled,
contact_cleanup_days=contact_cleanup_days,
)
def run_receiver(
@@ -258,11 +406,15 @@ def run_receiver(
baud: int = 115200,
mock: bool = False,
node_address: Optional[str] = None,
device_name: Optional[str] = None,
mqtt_host: str = "localhost",
mqtt_port: int = 1883,
mqtt_username: Optional[str] = None,
mqtt_password: Optional[str] = None,
mqtt_prefix: str = "meshcore",
mqtt_tls: bool = False,
contact_cleanup_enabled: bool = True,
contact_cleanup_days: int = DEFAULT_CONTACT_CLEANUP_DAYS,
) -> None:
"""Run the receiver (blocking).
@@ -273,22 +425,30 @@ def run_receiver(
baud: Baud rate
mock: Use mock device
node_address: Optional override for device public key/address
device_name: Optional device/node name to set on startup
mqtt_host: MQTT broker host
mqtt_port: MQTT broker port
mqtt_username: MQTT username
mqtt_password: MQTT password
mqtt_prefix: MQTT topic prefix
mqtt_tls: Enable TLS/SSL for MQTT connection
contact_cleanup_enabled: Whether to remove stale contacts from device
contact_cleanup_days: Remove contacts not advertised for this many days
"""
receiver = create_receiver(
port=port,
baud=baud,
mock=mock,
node_address=node_address,
device_name=device_name,
mqtt_host=mqtt_host,
mqtt_port=mqtt_port,
mqtt_username=mqtt_username,
mqtt_password=mqtt_password,
mqtt_prefix=mqtt_prefix,
mqtt_tls=mqtt_tls,
contact_cleanup_enabled=contact_cleanup_enabled,
contact_cleanup_days=contact_cleanup_days,
)
# Set up signal handlers


@@ -200,14 +200,16 @@ class Sender:
"""Start the sender."""
logger.info("Starting SENDER mode")
# Connect to device first
if not self.device.connect():
self._device_connected = False
logger.error("Failed to connect to MeshCore device")
raise RuntimeError("Failed to connect to MeshCore device")
# Device should already be connected (from create_sender)
# but handle case where start() is called directly
if not self.device.is_connected:
if not self.device.connect():
self._device_connected = False
logger.error("Failed to connect to MeshCore device")
raise RuntimeError("Failed to connect to MeshCore device")
logger.info(f"Connected to MeshCore device: {self.device.public_key}")
self._device_connected = True
logger.info(f"Connected to MeshCore device: {self.device.public_key}")
# Connect to MQTT broker
try:
@@ -285,11 +287,13 @@ def create_sender(
baud: int = 115200,
mock: bool = False,
node_address: Optional[str] = None,
device_name: Optional[str] = None,
mqtt_host: str = "localhost",
mqtt_port: int = 1883,
mqtt_username: Optional[str] = None,
mqtt_password: Optional[str] = None,
mqtt_prefix: str = "meshcore",
mqtt_tls: bool = False,
) -> Sender:
"""Create a configured sender instance.
@@ -298,26 +302,34 @@ def create_sender(
baud: Baud rate
mock: Use mock device
node_address: Optional override for device public key/address
device_name: Optional device/node name (not used in SENDER mode)
mqtt_host: MQTT broker host
mqtt_port: MQTT broker port
mqtt_username: MQTT username
mqtt_password: MQTT password
mqtt_prefix: MQTT topic prefix
mqtt_tls: Enable TLS/SSL for MQTT connection
Returns:
Configured Sender instance
"""
# Create device
# Create and connect device first to get public key
device = create_device(port=port, baud=baud, mock=mock, node_address=node_address)
# Create MQTT client
if not device.connect():
raise RuntimeError("Failed to connect to MeshCore device")
logger.info(f"Connected to MeshCore device: {device.public_key}")
# Create MQTT client with device's public key for unique client ID
mqtt_config = MQTTConfig(
host=mqtt_host,
port=mqtt_port,
username=mqtt_username,
password=mqtt_password,
prefix=mqtt_prefix,
client_id=f"meshcore-sender-{device.public_key[:8] if device.public_key else 'unknown'}",
client_id=f"meshcore-sender-{device.public_key[:12] if device.public_key else 'unknown'}",
tls=mqtt_tls,
)
mqtt_client = MQTTClient(mqtt_config)
@@ -329,11 +341,13 @@ def run_sender(
baud: int = 115200,
mock: bool = False,
node_address: Optional[str] = None,
device_name: Optional[str] = None,
mqtt_host: str = "localhost",
mqtt_port: int = 1883,
mqtt_username: Optional[str] = None,
mqtt_password: Optional[str] = None,
mqtt_prefix: str = "meshcore",
mqtt_tls: bool = False,
) -> None:
"""Run the sender (blocking).
@@ -344,22 +358,26 @@ def run_sender(
baud: Baud rate
mock: Use mock device
node_address: Optional override for device public key/address
device_name: Optional device/node name (not used in SENDER mode)
mqtt_host: MQTT broker host
mqtt_port: MQTT broker port
mqtt_username: MQTT username
mqtt_password: MQTT password
mqtt_prefix: MQTT topic prefix
mqtt_tls: Enable TLS/SSL for MQTT connection
"""
sender = create_sender(
port=port,
baud=baud,
mock=mock,
node_address=node_address,
device_name=device_name,
mqtt_host=mqtt_host,
mqtt_port=mqtt_port,
mqtt_username=mqtt_username,
mqtt_password=mqtt_password,
mqtt_prefix=mqtt_prefix,
mqtt_tls=mqtt_tls,
)
# Set up signal handlers


@@ -1,16 +1,25 @@
"""FastAPI application for MeshCore Hub Web Dashboard."""
"""FastAPI application for MeshCore Hub Web Dashboard (SPA)."""
import json
import logging
from contextlib import asynccontextmanager
from datetime import datetime
from pathlib import Path
from typing import AsyncGenerator
from typing import Any, AsyncGenerator
from zoneinfo import ZoneInfo
import httpx
from fastapi import FastAPI, Request
from fastapi import FastAPI, Request, Response
from fastapi.responses import HTMLResponse, JSONResponse, PlainTextResponse
from fastapi.staticfiles import StaticFiles
from fastapi.templating import Jinja2Templates
from uvicorn.middleware.proxy_headers import ProxyHeadersMiddleware
from meshcore_hub import __version__
from meshcore_hub.common.i18n import load_locale, t
from meshcore_hub.common.schemas import RadioConfig
from meshcore_hub.web.middleware import CacheControlMiddleware
from meshcore_hub.web.pages import PageLoader
logger = logging.getLogger(__name__)
@@ -46,33 +55,117 @@ async def lifespan(app: FastAPI) -> AsyncGenerator[None, None]:
logger.info("Web dashboard stopped")
def _build_config_json(app: FastAPI, request: Request) -> str:
"""Build the JSON config object to embed in the SPA shell.
Args:
app: The FastAPI application instance.
request: The current HTTP request.
Returns:
JSON string with app configuration.
"""
# Parse radio config
radio_config = RadioConfig.from_config_string(app.state.network_radio_config)
radio_config_dict = None
if radio_config:
radio_config_dict = {
"profile": radio_config.profile,
"frequency": radio_config.frequency,
"bandwidth": radio_config.bandwidth,
"spreading_factor": radio_config.spreading_factor,
"coding_rate": radio_config.coding_rate,
"tx_power": radio_config.tx_power,
}
# Get feature flags
features = app.state.features
# Get custom pages for navigation (empty when pages feature is disabled)
page_loader = app.state.page_loader
custom_pages = (
[
{
"slug": p.slug,
"title": p.title,
"url": p.url,
"menu_order": p.menu_order,
}
for p in page_loader.get_menu_pages()
]
if features.get("pages", True)
else []
)
config = {
"network_name": app.state.network_name,
"network_city": app.state.network_city,
"network_country": app.state.network_country,
"network_radio_config": radio_config_dict,
"network_contact_email": app.state.network_contact_email,
"network_contact_discord": app.state.network_contact_discord,
"network_contact_github": app.state.network_contact_github,
"network_contact_youtube": app.state.network_contact_youtube,
"network_welcome_text": app.state.network_welcome_text,
"admin_enabled": app.state.admin_enabled,
"features": features,
"custom_pages": custom_pages,
"logo_url": app.state.logo_url,
"version": __version__,
"timezone": app.state.timezone_abbr,
"timezone_iana": app.state.timezone,
"is_authenticated": bool(request.headers.get("X-Forwarded-User")),
"default_theme": app.state.web_theme,
"locale": app.state.web_locale,
"auto_refresh_seconds": app.state.auto_refresh_seconds,
}
return json.dumps(config)
def create_app(
api_url: str = "http://localhost:8000",
api_url: str | None = None,
api_key: str | None = None,
network_name: str = "MeshCore Network",
admin_enabled: bool | None = None,
network_name: str | None = None,
network_city: str | None = None,
network_country: str | None = None,
network_location: tuple[float, float] | None = None,
network_radio_config: str | None = None,
network_contact_email: str | None = None,
network_contact_discord: str | None = None,
network_contact_github: str | None = None,
network_contact_youtube: str | None = None,
network_welcome_text: str | None = None,
features: dict[str, bool] | None = None,
) -> FastAPI:
"""Create and configure the web dashboard application.
When called without arguments (e.g., in reload mode), settings are loaded
from environment variables via the WebSettings class.
Args:
api_url: Base URL of the MeshCore Hub API
api_key: API key for authentication
admin_enabled: Enable admin interface at /a/
network_name: Display name for the network
network_city: City where the network is located
network_country: Country where the network is located
network_location: (lat, lon) tuple for map centering
network_radio_config: Radio configuration description
network_contact_email: Contact email address
network_contact_discord: Discord invite/server info
network_contact_github: GitHub repository URL
network_contact_youtube: YouTube channel URL
network_welcome_text: Welcome text for homepage
features: Feature flags dict (default: all enabled from settings)
Returns:
Configured FastAPI application
"""
# Load settings from environment if not provided
from meshcore_hub.common.config import get_web_settings
settings = get_web_settings()
app = FastAPI(
title="MeshCore Hub Dashboard",
description="Web dashboard for MeshCore network visualization",
@@ -82,31 +175,347 @@ def create_app(
redoc_url=None,
)
# Store configuration in app state
app.state.api_url = api_url
app.state.api_key = api_key
app.state.network_name = network_name
app.state.network_city = network_city
app.state.network_country = network_country
app.state.network_location = network_location or (0.0, 0.0)
app.state.network_radio_config = network_radio_config
app.state.network_contact_email = network_contact_email
app.state.network_contact_discord = network_contact_discord
# Trust proxy headers (X-Forwarded-Proto, X-Forwarded-For) for HTTPS detection
app.add_middleware(ProxyHeadersMiddleware, trusted_hosts="*")
# Set up templates
# Add cache control headers based on resource type
app.add_middleware(CacheControlMiddleware)
# Load i18n translations
app.state.web_locale = settings.web_locale or "en"
load_locale(app.state.web_locale)
# Auto-refresh interval
app.state.auto_refresh_seconds = settings.web_auto_refresh_seconds
# Store configuration in app state (use args if provided, else settings)
app.state.web_theme = (
settings.web_theme if settings.web_theme in ("dark", "light") else "dark"
)
app.state.api_url = api_url or settings.api_base_url
app.state.api_key = api_key or settings.api_key
app.state.admin_enabled = (
admin_enabled if admin_enabled is not None else settings.web_admin_enabled
)
app.state.network_name = network_name or settings.network_name
app.state.network_city = network_city or settings.network_city
app.state.network_country = network_country or settings.network_country
app.state.network_radio_config = (
network_radio_config or settings.network_radio_config
)
app.state.network_contact_email = (
network_contact_email or settings.network_contact_email
)
app.state.network_contact_discord = (
network_contact_discord or settings.network_contact_discord
)
app.state.network_contact_github = (
network_contact_github or settings.network_contact_github
)
app.state.network_contact_youtube = (
network_contact_youtube or settings.network_contact_youtube
)
app.state.network_welcome_text = (
network_welcome_text or settings.network_welcome_text
)
# Store feature flags with automatic dependencies:
# - Dashboard requires at least one of nodes/advertisements/messages
# - Map requires nodes (map displays node locations)
effective_features = features if features is not None else settings.features
overrides: dict[str, bool] = {}
has_dashboard_content = (
effective_features.get("nodes", True)
or effective_features.get("advertisements", True)
or effective_features.get("messages", True)
)
if not has_dashboard_content:
overrides["dashboard"] = False
if not effective_features.get("nodes", True):
overrides["map"] = False
if overrides:
effective_features = {**effective_features, **overrides}
app.state.features = effective_features
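The feature-flag dependency rules above (dashboard needs at least one of nodes/advertisements/messages; the map needs nodes) can be factored into a small pure function. A sketch, with the hypothetical name `apply_feature_dependencies`:

```python
def apply_feature_dependencies(features: dict[str, bool]) -> dict[str, bool]:
    """Disable features whose prerequisites are turned off.

    Missing keys default to True, matching the app's .get(key, True) calls.
    """

    def enabled(key: str) -> bool:
        return features.get(key, True)

    overrides: dict[str, bool] = {}
    # Dashboard requires at least one content feature
    if not (enabled("nodes") or enabled("advertisements") or enabled("messages")):
        overrides["dashboard"] = False
    # Map plots node locations, so it requires nodes
    if not enabled("nodes"):
        overrides["map"] = False
    return {**features, **overrides} if overrides else features
```

Keeping this as a pure function makes the override rules unit-testable without constructing a FastAPI app.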
# Set up templates (for SPA shell only)
templates = Jinja2Templates(directory=str(TEMPLATES_DIR))
templates.env.trim_blocks = True
templates.env.lstrip_blocks = True
templates.env.globals["t"] = t
app.state.templates = templates
# Compute timezone
app.state.timezone = settings.tz
try:
tz = ZoneInfo(settings.tz)
app.state.timezone_abbr = datetime.now(tz).strftime("%Z")
except Exception:
app.state.timezone_abbr = "UTC"
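The timezone-abbreviation fallback above resolves an IANA zone name to its current abbreviation via `zoneinfo`, defaulting to "UTC" on any lookup failure. A self-contained sketch (the helper name `tz_abbreviation` is an assumption):

```python
from datetime import datetime
from zoneinfo import ZoneInfo


def tz_abbreviation(tz_name: str) -> str:
    """Resolve an IANA timezone name to its current abbreviation.

    Falls back to "UTC" if the zone cannot be loaded (bad name or
    missing tzdata), matching the broad except in create_app.
    """
    try:
        return datetime.now(ZoneInfo(tz_name)).strftime("%Z")
    except Exception:
        return "UTC"
```

Note the abbreviation is computed at startup, so DST transitions after launch are not reflected until restart.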
# Initialize page loader for custom markdown pages
page_loader = PageLoader(settings.effective_pages_home)
page_loader.load_pages()
app.state.page_loader = page_loader
# Check for custom logo and store media path
media_home = Path(settings.effective_media_home)
custom_logo_path = media_home / "images" / "logo.svg"
if custom_logo_path.exists():
app.state.logo_url = "/media/images/logo.svg"
logger.info(f"Using custom logo from {custom_logo_path}")
else:
app.state.logo_url = "/static/img/logo.svg"
# Mount static files
if STATIC_DIR.exists():
app.mount("/static", StaticFiles(directory=str(STATIC_DIR)), name="static")
# Include routers
from meshcore_hub.web.routes import web_router
# Mount custom media files if directory exists
if media_home.exists() and media_home.is_dir():
app.mount("/media", StaticFiles(directory=str(media_home)), name="media")
app.include_router(web_router)
# --- API Proxy ---
@app.api_route(
"/api/{path:path}",
methods=["GET", "POST", "PUT", "DELETE", "PATCH"],
tags=["API Proxy"],
)
async def api_proxy(request: Request, path: str) -> Response:
"""Proxy API requests to the backend API server."""
client: httpx.AsyncClient = request.app.state.http_client
url = f"/api/{path}"
# Health check endpoint
# Forward query parameters
params = dict(request.query_params)
# Forward body for write methods
body = None
if request.method in ("POST", "PUT", "PATCH"):
body = await request.body()
# Forward content-type header
headers: dict[str, str] = {}
if "content-type" in request.headers:
headers["content-type"] = request.headers["content-type"]
# Forward auth proxy headers for admin operations
for h in ("x-forwarded-user", "x-forwarded-email", "x-forwarded-groups"):
if h in request.headers:
headers[h] = request.headers[h]
# Block mutating requests from unauthenticated users when admin is
# enabled. OAuth2Proxy is expected to set X-Forwarded-User for
# authenticated sessions; without it, write operations must be
# rejected server-side to prevent auth bypass.
if (
request.method in ("POST", "PUT", "DELETE", "PATCH")
and request.app.state.admin_enabled
and not request.headers.get("x-forwarded-user")
):
return JSONResponse(
{"detail": "Authentication required"},
status_code=401,
)
try:
response = await client.request(
method=request.method,
url=url,
params=params,
content=body,
headers=headers,
)
# Filter response headers (remove hop-by-hop headers)
resp_headers: dict[str, str] = {}
for k, v in response.headers.items():
if k.lower() not in (
"transfer-encoding",
"connection",
"keep-alive",
"content-encoding",
):
resp_headers[k] = v
return Response(
content=response.content,
status_code=response.status_code,
headers=resp_headers,
)
except httpx.ConnectError:
return JSONResponse(
{"detail": "API server unavailable"},
status_code=502,
)
except Exception as e:
logger.error(f"API proxy error: {e}")
return JSONResponse(
{"detail": "API proxy error"},
status_code=502,
)
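Before re-emitting a proxied response, the handler above strips headers that only apply to a single connection hop (plus `content-encoding`, since httpx has already decompressed the body). The filter can be sketched as a standalone function (`filter_response_headers` is a hypothetical name):

```python
# Headers dropped by the proxy before re-emitting the upstream response
HOP_BY_HOP = {"transfer-encoding", "connection", "keep-alive", "content-encoding"}


def filter_response_headers(headers: dict[str, str]) -> dict[str, str]:
    """Remove hop-by-hop headers from a proxied response, case-insensitively."""
    return {k: v for k, v in headers.items() if k.lower() not in HOP_BY_HOP}
```

Forwarding `Transfer-Encoding` or `Content-Encoding` verbatim would describe framing the proxy no longer uses, confusing downstream clients.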
# --- Map Data Endpoint (server-side aggregation) ---
@app.get("/map/data", tags=["Map"])
async def map_data(request: Request) -> JSONResponse:
"""Return node location data as JSON for the map."""
if not request.app.state.features.get("map", True):
return JSONResponse({"detail": "Map feature is disabled"}, status_code=404)
nodes_with_location: list[dict[str, Any]] = []
members_list: list[dict[str, Any]] = []
members_by_id: dict[str, dict[str, Any]] = {}
error: str | None = None
total_nodes = 0
nodes_with_coords = 0
try:
# Fetch all members to build lookup by member_id
members_response = await request.app.state.http_client.get(
"/api/v1/members", params={"limit": 500}
)
if members_response.status_code == 200:
members_data = members_response.json()
for member in members_data.get("items", []):
member_info = {
"member_id": member.get("member_id"),
"name": member.get("name"),
"callsign": member.get("callsign"),
}
members_list.append(member_info)
if member.get("member_id"):
members_by_id[member["member_id"]] = member_info
# Fetch all nodes from API
response = await request.app.state.http_client.get(
"/api/v1/nodes", params={"limit": 500}
)
if response.status_code == 200:
data = response.json()
nodes = data.get("items", [])
total_nodes = len(nodes)
for node in nodes:
tags = node.get("tags", [])
tag_lat = None
tag_lon = None
friendly_name = None
role = None
node_member_id = None
for tag in tags:
key = tag.get("key")
if key == "lat":
try:
tag_lat = float(tag.get("value"))
except (ValueError, TypeError):
pass
elif key == "lon":
try:
tag_lon = float(tag.get("value"))
except (ValueError, TypeError):
pass
elif key == "friendly_name":
friendly_name = tag.get("value")
elif key == "role":
role = tag.get("value")
elif key == "member_id":
node_member_id = tag.get("value")
lat = tag_lat if tag_lat is not None else node.get("lat")
lon = tag_lon if tag_lon is not None else node.get("lon")
if lat is None or lon is None:
continue
if lat == 0.0 and lon == 0.0:
continue
nodes_with_coords += 1
display_name = (
friendly_name
or node.get("name")
or node.get("public_key", "")[:12]
)
public_key = node.get("public_key")
owner = (
members_by_id.get(node_member_id) if node_member_id else None
)
nodes_with_location.append(
{
"public_key": public_key,
"name": display_name,
"adv_type": node.get("adv_type"),
"lat": lat,
"lon": lon,
"last_seen": node.get("last_seen"),
"role": role,
"is_infra": role == "infra",
"member_id": node_member_id,
"owner": owner,
}
)
else:
error = f"API returned status {response.status_code}"
except Exception as e:
error = str(e)
logger.warning(f"Failed to fetch nodes for map: {e}")
infra_nodes = [n for n in nodes_with_location if n.get("is_infra")]
infra_count = len(infra_nodes)
center_lat = 0.0
center_lon = 0.0
if nodes_with_location:
center_lat = sum(n["lat"] for n in nodes_with_location) / len(
nodes_with_location
)
center_lon = sum(n["lon"] for n in nodes_with_location) / len(
nodes_with_location
)
infra_center: dict[str, float] | None = None
if infra_nodes:
infra_center = {
"lat": sum(n["lat"] for n in infra_nodes) / len(infra_nodes),
"lon": sum(n["lon"] for n in infra_nodes) / len(infra_nodes),
}
return JSONResponse(
{
"nodes": nodes_with_location,
"members": members_list,
"center": {"lat": center_lat, "lon": center_lon},
"infra_center": infra_center,
"debug": {
"total_nodes": total_nodes,
"nodes_with_coords": nodes_with_coords,
"infra_nodes": infra_count,
"error": error,
},
}
)
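The map-centering logic above averages the coordinates of all plottable nodes, after nodes with missing coordinates or the (0, 0) "null island" placeholder have been skipped. A minimal sketch of that aggregation (the helper name `centroid` is an assumption):

```python
def centroid(nodes: list[dict]) -> dict[str, float]:
    """Average lat/lon of nodes, ignoring missing and (0, 0) placeholder coords."""
    pts = [
        (n["lat"], n["lon"])
        for n in nodes
        if n.get("lat") is not None
        and n.get("lon") is not None
        and not (n["lat"] == 0.0 and n["lon"] == 0.0)
    ]
    if not pts:
        return {"lat": 0.0, "lon": 0.0}
    return {
        "lat": sum(p[0] for p in pts) / len(pts),
        "lon": sum(p[1] for p in pts) / len(pts),
    }
```

The same averaging is applied twice in the endpoint: once over all located nodes for the default center, and once over infra-role nodes for `infra_center`.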
# --- Custom Pages API ---
@app.get("/spa/pages/{slug}", tags=["SPA"])
async def get_custom_page(request: Request, slug: str) -> JSONResponse:
"""Get a custom page by slug."""
if not request.app.state.features.get("pages", True):
return JSONResponse(
{"detail": "Pages feature is disabled"}, status_code=404
)
page_loader = request.app.state.page_loader
page = page_loader.get_page(slug)
if not page:
return JSONResponse({"detail": "Page not found"}, status_code=404)
return JSONResponse(
{
"slug": page.slug,
"title": page.title,
"content_html": page.content_html,
}
)
# --- Health Endpoints ---
@app.get("/health", tags=["Health"])
async def health() -> dict:
"""Basic health check."""
@@ -123,24 +532,134 @@ def create_app(
except Exception as e:
return {"status": "not_ready", "api": str(e)}
# --- SEO Endpoints ---
def _get_https_base_url(request: Request) -> str:
"""Get base URL, ensuring HTTPS is used for public-facing URLs."""
base_url = str(request.base_url).rstrip("/")
if base_url.startswith("http://"):
base_url = "https://" + base_url[7:]
return base_url
@app.get("/robots.txt", response_class=PlainTextResponse)
async def robots_txt(request: Request) -> str:
"""Serve robots.txt."""
base_url = _get_https_base_url(request)
features = request.app.state.features
# Always disallow message and node detail pages
disallow_lines = [
"Disallow: /messages",
"Disallow: /nodes/",
]
# Add disallow for disabled features
feature_paths = {
"dashboard": "/dashboard",
"nodes": "/nodes",
"advertisements": "/advertisements",
"map": "/map",
"members": "/members",
"pages": "/pages",
}
for feature, path in feature_paths.items():
if not features.get(feature, True):
line = f"Disallow: {path}"
if line not in disallow_lines:
disallow_lines.append(line)
disallow_block = "\n".join(disallow_lines)
return (
f"User-agent: *\n"
f"{disallow_block}\n"
f"\n"
f"Sitemap: {base_url}/sitemap.xml\n"
)
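The robots.txt builder above combines two always-on disallows with one disallow per disabled feature. Extracted as a pure function it is easy to test; `build_robots_txt` is a hypothetical name for this sketch:

```python
def build_robots_txt(features: dict[str, bool], base_url: str) -> str:
    """Assemble robots.txt: fixed disallows plus one per disabled feature."""
    disallow = ["Disallow: /messages", "Disallow: /nodes/"]
    feature_paths = {
        "dashboard": "/dashboard",
        "nodes": "/nodes",
        "advertisements": "/advertisements",
        "map": "/map",
        "members": "/members",
        "pages": "/pages",
    }
    for feature, path in feature_paths.items():
        line = f"Disallow: {path}"
        # Only disallow paths for disabled features; default is enabled
        if not features.get(feature, True) and line not in disallow:
            disallow.append(line)
    return (
        "User-agent: *\n"
        + "\n".join(disallow)
        + f"\n\nSitemap: {base_url}/sitemap.xml\n"
    )
```

Note that `/nodes/` (detail pages) is always disallowed, while `/nodes` (the listing) is only disallowed when the nodes feature is off.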
@app.get("/sitemap.xml")
async def sitemap_xml(request: Request) -> Response:
"""Generate dynamic sitemap."""
base_url = _get_https_base_url(request)
features = request.app.state.features
# Home is always included; other pages depend on feature flags
all_static_pages = [
("", "daily", "1.0", None),
("/dashboard", "hourly", "0.9", "dashboard"),
("/nodes", "hourly", "0.9", "nodes"),
("/advertisements", "hourly", "0.8", "advertisements"),
("/map", "daily", "0.7", "map"),
("/members", "weekly", "0.6", "members"),
]
static_pages = [
(path, freq, prio)
for path, freq, prio, feature in all_static_pages
if feature is None or features.get(feature, True)
]
urls = []
for path, changefreq, priority in static_pages:
urls.append(
f" <url>\n"
f" <loc>{base_url}{path}</loc>\n"
f" <changefreq>{changefreq}</changefreq>\n"
f" <priority>{priority}</priority>\n"
f" </url>"
)
if features.get("pages", True):
page_loader = request.app.state.page_loader
for page in page_loader.get_menu_pages():
urls.append(
f" <url>\n"
f" <loc>{base_url}{page.url}</loc>\n"
f" <changefreq>weekly</changefreq>\n"
f" <priority>0.6</priority>\n"
f" </url>"
)
xml = (
'<?xml version="1.0" encoding="UTF-8"?>\n'
'<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
+ "\n".join(urls)
+ "\n</urlset>"
)
return Response(content=xml, media_type="application/xml")
# --- SPA Catch-All (MUST be last) ---
@app.api_route("/{path:path}", methods=["GET"], tags=["SPA"])
async def spa_catchall(request: Request, path: str = "") -> HTMLResponse:
"""Serve the SPA shell for all non-API routes."""
templates_inst: Jinja2Templates = request.app.state.templates
features = request.app.state.features
page_loader = request.app.state.page_loader
custom_pages = (
page_loader.get_menu_pages() if features.get("pages", True) else []
)
config_json = _build_config_json(request.app, request)
return templates_inst.TemplateResponse(
"spa.html",
{
"request": request,
"network_name": request.app.state.network_name,
"network_city": request.app.state.network_city,
"network_country": request.app.state.network_country,
"network_contact_email": request.app.state.network_contact_email,
"network_contact_discord": request.app.state.network_contact_discord,
"network_contact_github": request.app.state.network_contact_github,
"network_contact_youtube": request.app.state.network_contact_youtube,
"network_welcome_text": request.app.state.network_welcome_text,
"admin_enabled": request.app.state.admin_enabled,
"features": features,
"custom_pages": custom_pages,
"logo_url": request.app.state.logo_url,
"version": __version__,
"default_theme": request.app.state.web_theme,
"config_json": config_json,
},
)
return app
def get_templates(request: Request) -> Jinja2Templates:
"""Get templates from app state."""
templates: Jinja2Templates = request.app.state.templates
return templates
def get_network_context(request: Request) -> dict:
"""Get network configuration context for templates."""
return {
"network_name": request.app.state.network_name,
"network_city": request.app.state.network_city,
"network_country": request.app.state.network_country,
"network_location": request.app.state.network_location,
"network_radio_config": request.app.state.network_radio_config,
"network_contact_email": request.app.state.network_contact_email,
"network_contact_discord": request.app.state.network_contact_discord,
"version": __version__,
}


@@ -7,21 +7,21 @@ import click
@click.option(
"--host",
type=str,
default="0.0.0.0",
default=None,
envvar="WEB_HOST",
help="Web server host",
help="Web server host (default: 0.0.0.0)",
)
@click.option(
"--port",
type=int,
default=8080,
default=None,
envvar="WEB_PORT",
help="Web server port",
help="Web server port (default: 8080)",
)
@click.option(
"--api-url",
type=str,
default="http://localhost:8000",
default=None,
envvar="API_BASE_URL",
help="API server base URL",
)
@@ -42,7 +42,7 @@ import click
@click.option(
"--network-name",
type=str,
default="MeshCore Network",
default=None,
envvar="NETWORK_NAME",
help="Network display name",
)
@@ -60,20 +60,6 @@ import click
envvar="NETWORK_COUNTRY",
help="Network country",
)
@click.option(
"--network-lat",
type=float,
default=0.0,
envvar="NETWORK_LAT",
help="Network center latitude",
)
@click.option(
"--network-lon",
type=float,
default=0.0,
envvar="NETWORK_LON",
help="Network center longitude",
)
@click.option(
"--network-radio-config",
type=str,
@@ -95,6 +81,27 @@ import click
envvar="NETWORK_CONTACT_DISCORD",
help="Discord server info",
)
@click.option(
"--network-contact-github",
type=str,
default=None,
envvar="NETWORK_CONTACT_GITHUB",
help="GitHub repository URL",
)
@click.option(
"--network-contact-youtube",
type=str,
default=None,
envvar="NETWORK_CONTACT_YOUTUBE",
help="YouTube channel URL",
)
@click.option(
"--network-welcome-text",
type=str,
default=None,
envvar="NETWORK_WELCOME_TEXT",
help="Welcome text for homepage",
)
@click.option(
"--reload",
is_flag=True,
@@ -104,19 +111,20 @@ import click
@click.pass_context
def web(
ctx: click.Context,
host: str,
port: int,
api_url: str,
host: str | None,
port: int | None,
api_url: str | None,
api_key: str | None,
data_home: str | None,
network_name: str,
network_name: str | None,
network_city: str | None,
network_country: str | None,
network_lat: float,
network_lon: float,
network_radio_config: str | None,
network_contact_email: str | None,
network_contact_discord: str | None,
network_contact_github: str | None,
network_contact_youtube: str | None,
network_welcome_text: str | None,
reload: bool,
) -> None:
"""Run the web dashboard.
@@ -146,46 +154,51 @@ def web(
from meshcore_hub.common.config import get_web_settings
from meshcore_hub.web.app import create_app
# Get settings to compute effective values
# Get settings for defaults and display
settings = get_web_settings()
# Override data_home if provided
if data_home:
settings = settings.model_copy(update={"data_home": data_home})
# Use CLI args or fall back to settings
effective_host = host or settings.web_host
effective_port = port or settings.web_port
effective_data_home = data_home or settings.data_home
# Ensure web data directory exists
web_data_dir = Path(effective_data_home) / "web"
web_data_dir.mkdir(parents=True, exist_ok=True)
# Display effective settings
effective_network_name = network_name or settings.network_name
click.echo("=" * 50)
click.echo("MeshCore Hub Web Dashboard")
click.echo("=" * 50)
click.echo(f"Host: {host}")
click.echo(f"Port: {port}")
click.echo(f"Host: {effective_host}")
click.echo(f"Port: {effective_port}")
click.echo(f"Data home: {effective_data_home}")
click.echo(f"API URL: {api_url}")
click.echo(f"API key configured: {api_key is not None}")
click.echo(f"Network: {network_name}")
if network_city and network_country:
click.echo(f"Location: {network_city}, {network_country}")
if network_lat != 0.0 or network_lon != 0.0:
click.echo(f"Map center: {network_lat}, {network_lon}")
click.echo(f"API URL: {api_url or settings.api_base_url}")
click.echo(f"API key configured: {(api_key or settings.api_key) is not None}")
click.echo(f"Network: {effective_network_name}")
effective_city = network_city or settings.network_city
effective_country = network_country or settings.network_country
if effective_city and effective_country:
click.echo(f"Location: {effective_city}, {effective_country}")
click.echo(f"Reload mode: {reload}")
disabled_features = [
name for name, enabled in settings.features.items() if not enabled
]
if disabled_features:
click.echo(f"Disabled features: {', '.join(disabled_features)}")
click.echo("=" * 50)
network_location = (network_lat, network_lon)
if reload:
# For development, use uvicorn's reload feature
click.echo("\nStarting in development mode with auto-reload...")
click.echo("Note: Using default settings for reload mode.")
click.echo("Note: Settings loaded from environment/config.")
uvicorn.run(
"meshcore_hub.web.app:create_app",
host=host,
port=port,
host=effective_host,
port=effective_port,
reload=True,
factory=True,
)
@@ -197,11 +210,13 @@ def web(
network_name=network_name,
network_city=network_city,
network_country=network_country,
network_location=network_location,
network_radio_config=network_radio_config,
network_contact_email=network_contact_email,
network_contact_discord=network_contact_discord,
network_contact_github=network_contact_github,
network_contact_youtube=network_contact_youtube,
network_welcome_text=network_welcome_text,
)
click.echo("\nStarting web dashboard...")
uvicorn.run(app, host=host, port=port)
uvicorn.run(app, host=effective_host, port=effective_port)
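The CLI refactor above moves every option default into settings and resolves each value with a "CLI arg or settings fallback" rule. A minimal standalone sketch of that precedence pattern (the `WebSettings` dataclass and `effective()` helper here are illustrative stand-ins, not the project's actual `get_web_settings()` model):

```python
from dataclasses import dataclass


@dataclass
class WebSettings:
    # Hypothetical stand-in for the settings object; defaults mirror
    # the old hard-coded CLI defaults shown in the help text.
    web_host: str = "0.0.0.0"
    web_port: int = 8080


def effective(cli_value, settings_value):
    # A click option with default=None is None when the flag (and its
    # envvar) are absent, so the settings value wins only in that case.
    return cli_value if cli_value is not None else settings_value


settings = WebSettings()
host = effective(None, settings.web_host)   # no --host given: settings default
port = effective(9090, settings.web_port)   # --port 9090 given: CLI wins
```

Note the diff itself uses `host or settings.web_host`, which is equivalent here because an empty host string or port 0 is never a meaningful override.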

View File

@@ -0,0 +1,85 @@
"""HTTP caching middleware for the web component."""
from collections.abc import Awaitable, Callable
from starlette.middleware.base import BaseHTTPMiddleware
from starlette.requests import Request
from starlette.responses import Response
from starlette.types import ASGIApp
class CacheControlMiddleware(BaseHTTPMiddleware):
"""Middleware to set appropriate Cache-Control headers based on resource type."""
def __init__(self, app: ASGIApp) -> None:
"""Initialize the middleware.
Args:
app: The ASGI application to wrap.
"""
super().__init__(app)
async def dispatch(
self,
request: Request,
call_next: Callable[[Request], Awaitable[Response]],
) -> Response:
"""Process the request and add appropriate caching headers.
Args:
request: The incoming HTTP request.
call_next: The next middleware or route handler.
Returns:
The response with cache headers added.
"""
response: Response = await call_next(request)
# Skip if Cache-Control already set (explicit override)
if "cache-control" in response.headers:
return response
path = request.url.path
query_params = request.url.query
# Health endpoints - never cache
if path.startswith("/health"):
response.headers["cache-control"] = "no-cache, no-store, must-revalidate"
# Static files with version parameter - long-term cache
elif path.startswith("/static/") and "v=" in query_params:
response.headers["cache-control"] = "public, max-age=31536000, immutable"
# Static files without version - short cache as fallback
elif path.startswith("/static/"):
response.headers["cache-control"] = "public, max-age=3600"
# Media files with version parameter - long-term cache
elif path.startswith("/media/") and "v=" in query_params:
response.headers["cache-control"] = "public, max-age=31536000, immutable"
# Media files without version - short cache (user may update)
elif path.startswith("/media/"):
response.headers["cache-control"] = "public, max-age=3600"
# Map data - short cache (5 minutes)
elif path == "/map/data":
response.headers["cache-control"] = "public, max-age=300"
# Custom pages - moderate cache (1 hour)
elif path.startswith("/spa/pages/"):
response.headers["cache-control"] = "public, max-age=3600"
# SEO files - moderate cache (1 hour)
elif path in ("/robots.txt", "/sitemap.xml"):
response.headers["cache-control"] = "public, max-age=3600"
# API proxy - don't add headers (pass through backend)
elif path.startswith("/api/"):
pass
# SPA shell HTML (catch-all for client-side routes) - no cache
elif response.headers.get("content-type", "").startswith("text/html"):
response.headers["cache-control"] = "no-cache, public"
return response


@@ -0,0 +1,119 @@
"""Custom markdown pages loader for MeshCore Hub Web Dashboard."""
import logging
from dataclasses import dataclass
from pathlib import Path
from typing import Optional
import frontmatter
import markdown
logger = logging.getLogger(__name__)
@dataclass
class CustomPage:
"""Represents a custom markdown page."""
slug: str
title: str
menu_order: int
content_html: str
file_path: str
@property
def url(self) -> str:
"""Get the URL path for this page."""
return f"/pages/{self.slug}"
class PageLoader:
"""Loads and manages custom markdown pages from a directory."""
def __init__(self, pages_dir: str) -> None:
"""Initialize the page loader.
Args:
pages_dir: Path to the directory containing markdown pages.
"""
self.pages_dir = Path(pages_dir)
self._pages: dict[str, CustomPage] = {}
self._md = markdown.Markdown(
extensions=["tables", "fenced_code", "toc"],
output_format="html",
)
def load_pages(self) -> None:
"""Load all markdown pages from the pages directory."""
self._pages.clear()
if not self.pages_dir.exists():
logger.debug(f"Pages directory does not exist: {self.pages_dir}")
return
if not self.pages_dir.is_dir():
logger.warning(f"Pages path is not a directory: {self.pages_dir}")
return
for md_file in self.pages_dir.glob("*.md"):
try:
page = self._load_page(md_file)
if page:
self._pages[page.slug] = page
logger.info(f"Loaded custom page: {page.slug} ({md_file.name})")
except Exception as e:
logger.error(f"Failed to load page {md_file}: {e}")
logger.info(f"Loaded {len(self._pages)} custom page(s)")
def _load_page(self, file_path: Path) -> Optional[CustomPage]:
"""Load a single markdown page.
Args:
file_path: Path to the markdown file.
Returns:
CustomPage instance or None if loading failed.
"""
content = file_path.read_text(encoding="utf-8")
post = frontmatter.loads(content)
# Extract frontmatter fields
slug = post.get("slug", file_path.stem)
title = post.get("title", slug.replace("-", " ").replace("_", " ").title())
menu_order = post.get("menu_order", 100)
# Convert markdown to HTML
self._md.reset()
content_html = self._md.convert(post.content)
return CustomPage(
slug=slug,
title=title,
menu_order=menu_order,
content_html=content_html,
file_path=str(file_path),
)
def get_page(self, slug: str) -> Optional[CustomPage]:
"""Get a page by its slug.
Args:
slug: The page slug.
Returns:
CustomPage instance or None if not found.
"""
return self._pages.get(slug)
def get_menu_pages(self) -> list[CustomPage]:
"""Get all pages sorted by menu_order for navigation.
Returns:
List of CustomPage instances sorted by menu_order.
"""
return sorted(self._pages.values(), key=lambda p: (p.menu_order, p.title))
def reload(self) -> None:
"""Reload all pages from disk."""
self.load_pages()


@@ -1,23 +0,0 @@
"""Web routes for MeshCore Hub Dashboard."""
from fastapi import APIRouter
from meshcore_hub.web.routes.home import router as home_router
from meshcore_hub.web.routes.network import router as network_router
from meshcore_hub.web.routes.nodes import router as nodes_router
from meshcore_hub.web.routes.messages import router as messages_router
from meshcore_hub.web.routes.map import router as map_router
from meshcore_hub.web.routes.members import router as members_router
# Create main web router
web_router = APIRouter()
# Include all sub-routers
web_router.include_router(home_router)
web_router.include_router(network_router)
web_router.include_router(nodes_router)
web_router.include_router(messages_router)
web_router.include_router(map_router)
web_router.include_router(members_router)
__all__ = ["web_router"]