Compare commits

..

101 Commits

Author SHA1 Message Date
l5y
4548f750d3 Add connection recovery for TCP interface (#186)
* Add connection recovery for TCP interface

* run black
2025-09-27 18:52:56 +02:00
l5y
31f02010d3 bump version to 0.3 (#191)
* bump version to 0.3

* update readme
2025-09-27 18:52:41 +02:00
l5y
ec1ea5cbba Upgrade styles and fix interface issues (#190) 2025-09-27 18:46:56 +02:00
l5y
8500c59755 some updates in the front (#188)
* add a correct image loader

* and some CSS

* add zebra striping to the table, plus a background and some small app tweaks

* for example, you can see how it works at https://vrs.kdd2105.ru

* fix AI review comments

---------

Co-authored-by: dkorotkih2014-hub <d.korotkih2014@gmail.com>
2025-09-27 18:18:02 +02:00
l5y
556dd6b51c Update last heard on node entry change (#185) 2025-09-26 20:43:53 +02:00
l5y
3863e2d63d Populate chat metadata for unknown nodes (#182)
* Populate chat metadata for unknown nodes

* run rufo

* fix comments

* run rufo
2025-09-26 16:45:42 +02:00
l5y
9e62621819 Update role colors to new palette (#183) 2025-09-26 16:08:14 +02:00
l5y
c8c7c8cc05 Add placeholder nodes for unknown senders (#181)
* Add placeholder nodes for unknown senders

* run rufo
2025-09-26 14:24:30 +02:00
l5y
5116313ab0 fix: update role colors and ordering for firmware 2.7.10 (#180) 2025-09-26 13:30:34 +02:00
l5y
66389dd27c Handle plain IP addresses in mesh TCP detection (#154)
* Fix TCP target detection for plain IPs

* run black
2025-09-26 13:25:42 +02:00
l5y
ee6501243f Handle encrypted messages (#173)
* Handle encrypted messages

* Remove redundant message node columns

* Preserve original numeric message senders

* Normalize message sender IDs in API responses

* Exclude encrypted messages from API responses

* run rufo
2025-09-24 07:34:28 +02:00
l5y
8dd912175d Add fallback display names for unnamed nodes (#171) 2025-09-23 19:06:28 +02:00
l5y
02f9fb45e2 Ensure routers render above other node types (#169) 2025-09-23 18:59:34 +02:00
l5y
4254dbda91 Reorder lint steps after tests in CI (#168) 2025-09-23 18:31:38 +02:00
l5y
a46bed1c33 Handle proto values in nodeinfo payloads (#167) 2025-09-23 18:31:22 +02:00
l5y
d711300442 Remove raw payload storage from database schema (#166) 2025-09-23 17:29:08 +02:00
l5y
98a8203591 Add POSITION_APP ingestion and API support (#160)
* Add POSITION_APP ingestion and API support

* Adjust mesh receive subscriptions and priorities

* run linters
2025-09-23 16:42:51 +02:00
l5y
084c5ae158 Add support for NODEINFO_APP packets (#159)
* Add support for NODEINFO_APP packets

* run black
2025-09-23 14:40:35 +02:00
l5y
17018aeb19 Derive SEO metadata from existing config (#153) 2025-09-23 08:20:42 +02:00
l5y
74b3da6f00 tests: create helper script to dump all mesh data from serial (#152)
* tests: create helper script to dump all mesh data from serial

* tests: use public callbacks for dump script
2025-09-23 08:09:31 +02:00
l5y
ab1217a8bf Limit chat log to recent entries (#151) 2025-09-22 18:54:09 +02:00
l5y
62de1480f7 Require time library before formatting ISO timestamps (#149)
* Require time library for ISO timestamp formatting

* Default to host networking in Compose
2025-09-22 09:21:04 +02:00
l5y
ab2e9b06e1 Define potatomesh network (#148) 2025-09-22 08:58:39 +02:00
l5y
e91ad24cf9 Fix sqlite3 native extension on Alpine (#146) 2025-09-22 08:12:48 +02:00
l5y
2e543b7cd4 Allow binding to all interfaces in app.sh (#147) 2025-09-22 08:11:36 +02:00
l5y
db4353ccdc Force building sqlite3 gem on Alpine (#145) 2025-09-22 08:10:00 +02:00
l5y
5a610cf08a Support mock serial interface in CI (#143) 2025-09-21 10:00:30 +02:00
l5y
71b854998c Fix Docker workflow to build linux images (#142) 2025-09-21 09:39:09 +02:00
l5y
0a70ae4b3e Add clickable role filters to the map legend (#140)
* Make map legend role entries filter nodes

* Adjust map legend spacing and toggle text
2025-09-21 09:33:48 +02:00
l5y
6e709b0b67 Rebuild chat log on each refresh (#139) 2025-09-21 09:19:07 +02:00
l5y
a4256cee83 fix: retain runtime libs for alpine production (#138) 2025-09-21 09:18:55 +02:00
l5y
89f0b1bcfe fix: support windows ingestor build (#136)
* fix: support windows ingestor build

* fix: restore alpine build deps for ingestor (#137)
2025-09-20 22:00:45 +02:00
l5y
e8af3b2397 fix: use supported ruby image (#135) 2025-09-20 19:10:36 +00:00
Taylor Rose
812d3c851f feat: Add comprehensive Docker support (#122)
* feat: Add comprehensive Docker support

- Add multi-container Docker setup with web app and data ingestor
- Create production-ready Dockerfiles with multi-stage builds
- Add Docker Compose configurations for dev, prod, and custom environments
- Implement CI/CD pipeline with GitHub Actions for automated builds
- Add comprehensive Docker documentation and setup guides
- Include security scanning and multi-platform builds
- Support for Meshtastic device integration via serial access
- Persistent data storage with named volumes
- Health checks and monitoring capabilities

Addresses GitHub issue #120: Dockerize the project for easier community adoption

Files added:
- web/Dockerfile: Ruby web application container
- data/Dockerfile: Python data ingestor container
- data/requirements.txt: Python dependencies
- docker-compose.yml: Base Docker Compose configuration
- docker-compose.dev.yml: Development environment overrides
- docker-compose.prod.yml: Production environment overrides
- .env.example: Environment configuration template
- .dockerignore: Docker build context optimization
- .github/workflows/docker.yml: CI/CD pipeline
- DOCKER.md: Comprehensive Docker documentation

This implementation transforms PotatoMesh from a complex manual setup
to a single-command deployment: docker-compose up -d

* feat: Add Docker support with multi-architecture builds

- Add web/Dockerfile with Ruby 3.4 Alpine base
- Add data/Dockerfile with Python 3.13 Alpine base
- Use Alpine's SQLite3 packages for cross-platform compatibility
- Support AMD64, ARM64, ARMv7, and Windows architectures
- Multi-stage builds for optimized production images
- Non-root user security and proper file permissions

* feat: Add Docker Compose configurations for different environments

- docker-compose.yml: Production setup with GHCR images
- docker-compose.dev.yml: Development setup with local builds
- docker-compose.raspberry-pi.yml: Pi-optimized with resource limits
- Support for all architectures (AMD64, ARM64, ARMv7)
- Proper volume mounts and network configuration
- Environment variable configuration for different deployments

* feat: Add GitHub Actions workflows for Docker CI/CD

- docker.yml: Multi-architecture build and push to GHCR
- test-raspberry-pi-hardware.yml: ARM64 testing with QEMU
- Support for manual workflow dispatch with version input
- Build and test all Docker variants (AMD64, ARM64, ARMv7, Windows)
- Automated publishing to GitHub Container Registry
- Comprehensive testing for Raspberry Pi deployments

* feat: Add Docker documentation and configuration tools

- docs/DOCKER.md: Comprehensive Docker setup and usage guide
- configure.sh: Interactive configuration script for deployment
- Platform-specific setup instructions (macOS, Linux, Windows)
- Raspberry Pi optimization guidelines
- Environment variable configuration
- Troubleshooting and best practices

* docs: Update README with comprehensive Docker support

- Add Docker Quick Start section with published images
- Add comprehensive table of all available GHCR images
- Include architecture-specific pull commands
- Update manual installation instructions
- Add platform-specific deployment examples
- Document all supported architectures and use cases

* chore: Update dependencies and project configuration

- Update data/requirements.txt for Python 3.13 compatibility
- Add v0.3.0 changelog entry documenting Docker support
- Update .gitignore for Docker-related files
- Prepare project for Docker deployment

* feat: Update web interface for Denver Mesh Network

- Update default configuration to center on Denver, Colorado
- Set SITE_NAME to 'Denver Mesh Network'
- Configure 915MHz frequency for US region
- Update map center coordinates (39.7392, -104.9903)
- Set appropriate node distance and Matrix room settings

* Update Docker configuration and documentation

- Remove Raspberry Pi specific Docker files and workflows
- Update Docker workflow configuration
- Consolidate Docker documentation
- Add AGENTS.md for opencode integration
- Update README with current project status

* cleanup: workflow/readme

* Update README.md

Co-authored-by: l5y <220195275+l5yth@users.noreply.github.com>

* Add .env.example and simplify documentation

- Add comprehensive .env.example with all environment variables
- Update web Dockerfile to use Berlin coordinates instead of Denver
- Simplify README Docker quick start with helpful comments
- Greatly simplify DOCKER.md with only essential information

* cleanup: readme

* Remove Stadia API key references

- Remove STADIA_API_KEY from docker-compose.yml environment variables
- Remove Stadia Maps configuration section from configure.sh
- Remove Stadia API key references from .env.example
- Simplify configuration to use basic OpenStreetMap tiles only

* quickfix

* cleanup: remove example usage from docker gh action output

---------

Co-authored-by: l5y <220195275+l5yth@users.noreply.github.com>
2025-09-20 21:04:19 +02:00
l5y
608d1e0396 bump version to 0.2.1 (#134) 2025-09-20 20:59:21 +02:00
l5y
63787454ca Fix dark mode tile styling on new map tiles (#132)
* Ensure dark mode styling applied to new map tiles

* Ensure dark mode filters apply to new map tiles

* Improve map tile filter handling
2025-09-20 18:13:18 +02:00
l5y
55c1384f80 Switch map tiles to OSM HOT and add theme filters (#130)
* Switch map tiles to OSM HOT and add theme filters

* Ensure OSM tiles are filtered for theme modes

* Ensure tile filters update when toggling dark mode

* run rufo
2025-09-19 23:02:55 +02:00
l5y
6750d7bc12 Add footer version display (#128)
* Add footer version display

* Ensure footer version text matches spec
2025-09-19 11:22:28 +02:00
l5y
d33fcaf5db Add responsive controls for map legend (#129) 2025-09-19 11:21:00 +02:00
l5y
7974fd9597 update changelog (#119) 2025-09-17 16:57:32 +02:00
l5y
dcb512636c update readme for 0.2 (#118)
* update readme for 0.2

* update readme for 0.2

* update readme for 0.2

* update readme for 0.2
2025-09-17 10:23:36 +02:00
l5y
7c6bf801e9 Add PotatoMesh logo to header and favicon (#117)
* Add PotatoMesh logo to header and favicon

* Ensure header logo remains visible

* update svg
2025-09-17 10:12:35 +02:00
l5y
71e9f89aae Harden API auth and request limits (#116)
* Harden API auth and request limits

* run rufo
2025-09-17 08:00:25 +02:00
l5y
0936c6087b Add sortable node table columns (#114) 2025-09-17 07:06:13 +02:00
l5y
95e3e8723a Add short name overlay for node details (#111)
* Add node details overlay for short names

* Simplify short info overlay layout
2025-09-16 23:22:41 +02:00
l5y
671a910936 Adjust python ingestor interval to 60 seconds (#112) 2025-09-16 21:07:53 +02:00
l5y
3b64e829a8 Hide location columns on medium screens (#109) 2025-09-16 19:43:31 +02:00
l5y
84ed739a61 Handle message updates based on sender info (#108)
* Handle message updates based on sender info

* run rufo
2025-09-16 19:41:56 +02:00
l5y
cffdb7dca6 Prioritize node posts in queued API updates (#107)
* Prioritize node posts in queued API updates

* run black
2025-09-16 19:30:38 +02:00
l5y
4182a9f83c Add auto-refresh toggle (#105) 2025-09-16 19:21:54 +02:00
l5y
9873f6105d Adjust Leaflet popup styling for dark mode (#104)
* Adjust Leaflet popup styling for dark mode

* some css fixing
2025-09-16 17:14:36 +00:00
l5y
8d3829cc4e feat: add site info overlay (#103) 2025-09-16 19:00:31 +02:00
l5y
e424485761 Add long name tooltip to short name badge (#102) 2025-09-16 18:58:29 +02:00
l5y
baf7f5d137 Ensure node numeric aliases are derived from canonical IDs (#101)
* Derive node numeric aliases when missing

* Preserve raw message senders when storing payloads

* Normalize packet message sender ids when available

* run rufo
2025-09-16 18:41:49 +02:00
l5y
3edf60c625 chore: clean up repository (#96)
* chore: clean up repository

* Fix message spec node lookup for numeric IDs (#98)

* Fix message spec node lookup for numeric IDs

* run rufo

* Fix message node fallback lookup (#99)
2025-09-16 15:25:12 +02:00
l5y
1beb343501 Handle SQLite busy errors when upserting nodes (#100) 2025-09-16 15:24:01 +02:00
l5y
0c0f877b13 Configure Sinatra logging level from DEBUG flag (#97)
* Configure Sinatra logging level

* Fix logger level helper invocation

* Fix Sinatra logger helper definition syntax
2025-09-16 14:46:50 +02:00
l5y
f7a1b5c5ad Add penetration tests for authentication and SQL injection (#95) 2025-09-16 13:13:57 +02:00
l5y
051d09dcaf Document Python and Ruby source modules (#94) 2025-09-16 13:13:12 +02:00
l5y
eb900aecb6 Add tests covering mesh helper edge cases (#93)
* test: expand coverage for mesh helpers

* run black
2025-09-16 12:48:01 +02:00
l5y
f16393eafd fix py code cov (#92) 2025-09-16 12:10:17 +02:00
l5y
49dcfebfb3 Add Codecov coverage and test analytics for Python CI (#91) 2025-09-16 12:04:46 +02:00
l5y
1c13b99f3b Skip null fields when choosing packet identifiers (#88) 2025-09-16 11:56:02 +02:00
l5y
54a1eb5b42 create python yml ga (#90)
* Create python.yml

* ci: add black

* run an actual formatter

* also add rufo

* fix pytest

* run black
2025-09-16 11:50:33 +02:00
l5y
2818c6d2b8 Add unit tests for mesh ingestor script (#89) 2025-09-16 11:44:28 +02:00
l5y
f4aa5d3873 Add coverage for debug logging on messages without sender (#86)
* Add debug logging spec for messages without sender

* Route debug logging through Kernel.warn

* Relax debug log matchers
2025-09-16 11:33:03 +02:00
l5y
542f4dd0e2 Handle concurrent node snapshot updates (#85) 2025-09-16 11:10:11 +02:00
l5y
4a72cdda75 Fix extraction of packet sender ids (#84) 2025-09-16 10:35:11 +02:00
l5y
4b9d581448 Add coverage for API authentication and payload edge cases (#83) 2025-09-16 10:18:10 +02:00
l5y
1d3b3f11e9 Add Codecov test analytics to Ruby workflow (#82) 2025-09-16 10:12:25 +02:00
l5y
e97824fd0b Configure SimpleCov for Codecov coverage (#81) 2025-09-16 09:58:44 +02:00
l5y
1cd9058685 update codecov job (#80)
* update codecov job

* add codecov config
2025-09-16 09:55:53 +02:00
l5y
47e23ea14c fix readme badges (#79)
* fix readme badges

* fix readme badges
2025-09-16 09:46:44 +02:00
l5y
afd18794c7 Add Codecov upload step to Ruby workflow (#78) 2025-09-16 09:43:09 +02:00
l5y
203bd623bd Add Apache license headers to source files (#77)
* Add Apache license headers to source files

* fix formatting
2025-09-16 09:39:28 +02:00
l5y
2b6b44a31d Add integration specs for node and message APIs (#76) 2025-09-16 09:29:31 +02:00
l5y
0059a6aab3 docs: update for 0.2.0 release (#75)
* docs: update for 0.2.0 release

* docs: add scrot 0.2
2025-09-16 09:23:11 +02:00
l5y
fc30a080ff create ruby workflow (#74)
* create ruby workflow

* add step for dependencies

* bump ruby version

* Set up Ruby action in web directory
2025-09-16 08:52:33 +02:00
l5y
7399c02be9 Add RSpec tests for app boot and database setup (#73) 2025-09-16 08:25:13 +02:00
l5y
02e985d2a8 Align refresh controls with status text (#72)
* Align refresh controls with status text

* Improve mobile alignment for refresh controls
2025-09-16 08:21:15 +02:00
l5y
954352809f spec: update testdata 2025-09-16 08:11:11 +02:00
l5y
7eb36a5a3d remove duplication 2025-09-15 21:35:59 +02:00
l5y
0768b4d91a Improve mobile layout (#68)
* Improve mobile layout

* styling tweaks
2025-09-15 21:32:56 +02:00
l5y
be1306c9c0 Normalize message sender IDs using node numbers (#67) 2025-09-15 21:04:29 +02:00
l5y
7904717597 style: simplify node table (#65) 2025-09-15 18:16:36 +02:00
l5y
e2c19e1611 Add debug logging for missing from_id (#64) 2025-09-15 18:15:46 +02:00
l5y
b230e79ab0 Handle nested dataclasses in node snapshots (#63) 2025-09-15 14:59:23 +02:00
l5y
31727e35bb add placeholder for default frequency 2025-09-15 14:48:12 +02:00
l5y
22127bbfb4 ignore log files 2025-09-15 14:44:32 +02:00
l5y
413278544a Log node object on snapshot update failure (#62) 2025-09-15 14:34:56 +02:00
l5y
580a588df7 Run schema initialization only when database or tables are missing (#61) 2025-09-15 14:05:01 +02:00
l5y
b39b83fb51 Send mesh data to Potatomesh API (#60)
* feat: post mesh data to API

* Serialize node objects before posting

* don't put raw json in api/db
2025-09-15 14:00:48 +02:00
l5y
6d948603c9 Convert boolean flags to integers for SQLite (#59) 2025-09-15 13:37:30 +02:00
l5y
648bcc9b92 Use packet id as message primary key (#58)
* Use packet id as message primary key

* fix query

* fix query
2025-09-15 13:34:59 +02:00
l5y
4dc1227be7 Add POST /api/messages and enforce API token (#56) 2025-09-15 13:13:47 +02:00
l5y
3b097feaae Update README.md 2025-09-15 12:17:45 +02:00
l5y
da2e5fbde1 feat: parameterize community info (#55)
* feat: parameterize community info

* chore: restore test data and document env defaults

* also make default channel configurable
2025-09-15 12:15:51 +02:00
l5y
003db7c36a feat: add dark mode toggle (#54)
* feat: add dark mode toggle

* fix chat colors in dark mode
2025-09-15 11:53:49 +02:00
l5y
9aa640338d Update README.md 2025-09-15 11:44:44 +02:00
l5y
3c24b71f16 ignore copies 2025-09-15 11:42:27 +02:00
l5y
eee6738a9c add changelog 2025-09-15 08:49:12 +02:00
51 changed files with 15805 additions and 4297 deletions

.codecov.yml (new file, +6)

@@ -0,0 +1,6 @@
coverage:
  status:
    project:
      default:
        target: 99%
        threshold: 1%

.dockerignore (new file, +76)

@@ -0,0 +1,76 @@
# Git
.git
.gitignore
# Documentation
README.md
CHANGELOG.md
*.md
# Docker files
docker-compose*.yml
.dockerignore
# Environment files
.env*
!.env.example
# Logs
*.log
logs/
# Runtime data
*.pid
*.seed
*.pid.lock
# Coverage directory used by tools like istanbul
coverage/
# nyc test coverage
.nyc_output
# Dependency directories
node_modules/
vendor/
# Optional npm cache directory
.npm
# Optional REPL history
.node_repl_history
# Output of 'npm pack'
*.tgz
# Yarn Integrity file
.yarn-integrity
# dotenv environment variables file
.env
# IDE files
.vscode/
.idea/
*.swp
*.swo
*~
# OS generated files
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
ehthumbs.db
Thumbs.db
# Test files
tests/
spec/
test_*
*_test.py
*_spec.rb
# Development files
ai_docs/

.env.example (new file, +77)

@@ -0,0 +1,77 @@
# PotatoMesh Environment Configuration
# Copy this file to .env and customize for your setup
# =============================================================================
# REQUIRED SETTINGS
# =============================================================================
# API authentication token (required for ingestor communication)
# Generate a secure token: openssl rand -hex 32
API_TOKEN=your-secure-api-token-here
# Meshtastic device path (required for ingestor)
# Common paths:
# - Linux: /dev/ttyACM0, /dev/ttyUSB0
# - macOS: /dev/cu.usbserial-*
# - Windows (WSL): /dev/ttyS*
MESH_SERIAL=/dev/ttyACM0
# =============================================================================
# SITE CUSTOMIZATION
# =============================================================================
# Your mesh network name
SITE_NAME=My Meshtastic Network
# Default Meshtastic channel
DEFAULT_CHANNEL=#MediumFast
# Default frequency for your region
# Common frequencies: 868MHz (Europe), 915MHz (US), 433MHz (Worldwide)
DEFAULT_FREQUENCY=868MHz
# Map center coordinates (latitude, longitude)
# Berlin, Germany: 52.502889, 13.404194
# Denver, Colorado: 39.7392, -104.9903
# London, UK: 51.5074, -0.1278
MAP_CENTER_LAT=52.502889
MAP_CENTER_LON=13.404194
# Maximum distance to show nodes (kilometers)
MAX_NODE_DISTANCE_KM=50
# =============================================================================
# OPTIONAL INTEGRATIONS
# =============================================================================
# Matrix chat room for your community (optional)
# Format: !roomid:matrix.org
MATRIX_ROOM='#meshtastic-berlin:matrix.org'
# =============================================================================
# ADVANCED SETTINGS
# =============================================================================
# Debug mode (0=off, 1=on)
DEBUG=0
# Docker Compose networking profile
# Leave unset for Linux hosts (default host networking).
# Set to "bridge" on Docker Desktop (macOS/Windows) if host networking
# is unavailable.
# COMPOSE_PROFILES=bridge
# Meshtastic snapshot interval (seconds)
MESH_SNAPSHOT_SECS=60
# Meshtastic channel index (0=primary, 1=secondary, etc.)
MESH_CHANNEL_INDEX=0
# Database settings
DB_BUSY_TIMEOUT_MS=5000
DB_BUSY_MAX_RETRIES=5
DB_BUSY_RETRY_DELAY=0.05
# Application settings
MAX_JSON_BODY_BYTES=1048576
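
The `API_TOKEN` comment above suggests generating the token with `openssl rand -hex 32`; a quick shell sanity check of that hint (assuming `openssl` is installed):

```shell
# Generate a token the way .env.example suggests, then verify its shape:
# 32 random bytes encode to exactly 64 lowercase hex characters.
TOKEN=$(openssl rand -hex 32)
echo "length: ${#TOKEN}"

case "$TOKEN" in
  *[!0-9a-f]*) echo "unexpected characters" ;;
  *)           echo "hex ok" ;;
esac
```

The resulting value goes into `API_TOKEN=` in `.env`; the ingestor must be configured with the same token for its API calls to be accepted.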

.github/dependabot.yml

@@ -1,15 +1,10 @@
-# To get started with Dependabot version updates, you'll need to specify which
-# package ecosystems to update and where the package manifests are located.
-# Please see the documentation for all configuration options:
-# https://docs.github.com/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file
 version: 2
 updates:
-  - package-ecosystem: "ruby" # See documentation for possible values
-    directory: "/web" # Location of package manifests
+  - package-ecosystem: "ruby"
+    directory: "/web"
     schedule:
       interval: "weekly"
-  - package-ecosystem: "python" # See documentation for possible values
-    directory: "/data" # Location of package manifests
+  - package-ecosystem: "python"
+    directory: "/"
     schedule:
       interval: "weekly"

.github/workflows/README.md (new file, +18)

@@ -0,0 +1,18 @@
# GitHub Actions Workflows
## Workflows
- **`docker.yml`** - Build and push Docker images to GHCR
- **`codeql.yml`** - Security scanning
- **`python.yml`** - Python testing
- **`ruby.yml`** - Ruby testing
## Usage
```bash
# Build locally
docker-compose build
# Deploy
docker-compose up -d
```

.github/workflows/codeql.yml

@@ -1,14 +1,3 @@
-# For most projects, this workflow file will not need changing; you simply need
-# to commit it to your repository.
-#
-# You may wish to alter this file to override the set of languages analyzed,
-# or to provide custom queries or build logic.
-#
-# ******** NOTE ********
-# We have attempted to detect the languages in your repository. Please check
-# the `language` matrix defined below to confirm you have the correct set of
-# supported CodeQL languages.
-#
 name: "CodeQL Advanced"
 on:
@@ -20,20 +9,10 @@ on:
 jobs:
   analyze:
     name: Analyze (${{ matrix.language }})
-    # Runner size impacts CodeQL analysis time. To learn more, please see:
-    #   - https://gh.io/recommended-hardware-resources-for-running-codeql
-    #   - https://gh.io/supported-runners-and-hardware-resources
-    #   - https://gh.io/using-larger-runners (GitHub.com only)
-    # Consider using larger runners or machines with greater resources for possible analysis time improvements.
     runs-on: ${{ (matrix.language == 'swift' && 'macos-latest') || 'ubuntu-latest' }}
     permissions:
-      # required for all workflows
       security-events: write
-      # required to fetch internal or private CodeQL packs
       packages: read
-      # only required for workflows in private repositories
       actions: read
       contents: read
@@ -47,53 +26,14 @@ jobs:
         build-mode: none
       - language: javascript-typescript
         build-mode: none
-    # CodeQL supports the following values keywords for 'language': 'actions', 'c-cpp', 'csharp', 'go', 'java-kotlin', 'javascript-typescript', 'python', 'ruby', 'rust', 'swift'
-    # Use `c-cpp` to analyze code written in C, C++ or both
-    # Use 'java-kotlin' to analyze code written in Java, Kotlin or both
-    # Use 'javascript-typescript' to analyze code written in JavaScript, TypeScript or both
-    # To learn more about changing the languages that are analyzed or customizing the build mode for your analysis,
-    # see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/customizing-your-advanced-setup-for-code-scanning.
-    # If you are analyzing a compiled language, you can modify the 'build-mode' for that language to customize how
-    # your codebase is analyzed, see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/codeql-code-scanning-for-compiled-languages
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v4
-      # Add any setup steps before running the `github/codeql-action/init` action.
-      # This includes steps like installing compilers or runtimes (`actions/setup-node`
-      # or others). This is typically only required for manual builds.
-      # - name: Setup runtime (example)
-      #   uses: actions/setup-example@v1
-      # Initializes the CodeQL tools for scanning.
+        uses: actions/checkout@v5
       - name: Initialize CodeQL
         uses: github/codeql-action/init@v3
         with:
           languages: ${{ matrix.language }}
           build-mode: ${{ matrix.build-mode }}
-          # If you wish to specify custom queries, you can do so here or in a config file.
-          # By default, queries listed here will override any specified in a config file.
-          # Prefix the list here with "+" to use these queries and those in the config file.
-          # For more details on CodeQL's query packs, refer to: https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
-          # queries: security-extended,security-and-quality
-      # If the analyze step fails for one of the languages you are analyzing with
-      # "We were unable to automatically build your code", modify the matrix above
-      # to set the build mode to "manual" for that language. Then modify this step
-      # to build your code.
-      # Command-line programs to run using the OS shell.
-      # 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
-      - if: matrix.build-mode == 'manual'
-        shell: bash
-        run: |
-          echo 'If you are using a "manual" build mode for one or more of the' \
-            'languages you are analyzing, replace this with the commands to build' \
-            'your code, for example:'
-          echo '  make bootstrap'
-          echo '  make release'
-          exit 1
       - name: Perform CodeQL Analysis
         uses: github/codeql-action/analyze@v3
         with:

.github/workflows/docker.yml (new file, +171)

@@ -0,0 +1,171 @@
name: Build and Push Docker Images
on:
push:
tags: [ 'v*' ]
workflow_dispatch:
inputs:
version:
description: 'Version to publish (e.g., 1.0.0)'
required: true
default: '1.0.0'
publish_all_variants:
description: 'Publish all Docker image variants (latest tag)'
type: boolean
default: false
env:
REGISTRY: ghcr.io
IMAGE_PREFIX: l5yth/potato-mesh
jobs:
build-and-push:
runs-on: ubuntu-latest
if: (startsWith(github.ref, 'refs/tags/v') && github.event_name == 'push') || github.event_name == 'workflow_dispatch'
environment: production
permissions:
contents: read
packages: write
strategy:
matrix:
service: [web, ingestor]
architecture:
- { name: linux-amd64, platform: linux/amd64, label: "Linux x86_64" }
- { name: linux-arm64, platform: linux/arm64, label: "Linux ARM64" }
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Set up QEMU emulation
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Log in to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract version from tag or input
id: version
run: |
if [ "${{ github.event_name }}" = "workflow_dispatch" ]; then
VERSION="${{ github.event.inputs.version }}"
else
VERSION=${GITHUB_REF#refs/tags/v}
fi
echo "version=$VERSION" >> $GITHUB_OUTPUT
echo "Published version: $VERSION"
- name: Build and push ${{ matrix.service }} for ${{ matrix.architecture.name }}
uses: docker/build-push-action@v5
with:
context: .
file: ./${{ matrix.service == 'web' && 'web/Dockerfile' || 'data/Dockerfile' }}
target: production
platforms: ${{ matrix.architecture.platform }}
push: true
tags: |
${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-${{ matrix.service }}-${{ matrix.architecture.name }}:latest
${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-${{ matrix.service }}-${{ matrix.architecture.name }}:${{ steps.version.outputs.version }}
labels: |
org.opencontainers.image.source=https://github.com/${{ github.repository }}
org.opencontainers.image.description=PotatoMesh ${{ matrix.service == 'web' && 'Web Application' || 'Python Ingestor' }} for ${{ matrix.architecture.label }}
org.opencontainers.image.licenses=Apache-2.0
org.opencontainers.image.version=${{ steps.version.outputs.version }}
org.opencontainers.image.created=${{ github.event.head_commit.timestamp }}
org.opencontainers.image.revision=${{ github.sha }}
org.opencontainers.image.title=PotatoMesh ${{ matrix.service == 'web' && 'Web' || 'Ingestor' }} (${{ matrix.architecture.label }})
org.opencontainers.image.vendor=PotatoMesh
org.opencontainers.image.architecture=${{ matrix.architecture.name }}
org.opencontainers.image.os=linux
org.opencontainers.image.arch=${{ matrix.architecture.name }}
cache-from: type=gha,scope=${{ matrix.service }}-${{ matrix.architecture.name }}
cache-to: type=gha,mode=max,scope=${{ matrix.service }}-${{ matrix.architecture.name }}
test-images:
runs-on: ubuntu-latest
needs: build-and-push
if: startsWith(github.ref, 'refs/tags/v') && github.event_name == 'push'
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Log in to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract version from tag
id: version
run: |
VERSION=${GITHUB_REF#refs/tags/v}
echo "version=$VERSION" >> $GITHUB_OUTPUT
- name: Test web application (Linux AMD64)
run: |
docker pull ${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-web-linux-amd64:${{ steps.version.outputs.version }}
docker run --rm -d --name web-test -p 41447:41447 \
-e API_TOKEN=test-token \
-e DEBUG=1 \
${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-web-linux-amd64:${{ steps.version.outputs.version }}
sleep 10
curl -f http://localhost:41447/ || exit 1
docker stop web-test
- name: Test ingestor (Linux AMD64)
run: |
docker pull ${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-ingestor-linux-amd64:${{ steps.version.outputs.version }}
docker run --rm --name ingestor-test \
-e POTATOMESH_INSTANCE=http://localhost:41447 \
-e API_TOKEN=test-token \
-e MESH_SERIAL=mock \
-e DEBUG=1 \
${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-ingestor-linux-amd64:${{ steps.version.outputs.version }} &
sleep 5
docker stop ingestor-test || true
publish-summary:
runs-on: ubuntu-latest
needs: [build-and-push, test-images]
if: always() && startsWith(github.ref, 'refs/tags/v') && github.event_name == 'push'
steps:
- name: Extract version from tag
id: version
run: |
VERSION=${GITHUB_REF#refs/tags/v}
echo "version=$VERSION" >> $GITHUB_OUTPUT
- name: Publish release summary
run: |
echo "## 🚀 PotatoMesh Images Published to GHCR" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Version:** ${{ steps.version.outputs.version }}" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Published Images:**" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
# Web images
echo "### 🌐 Web Application" >> $GITHUB_STEP_SUMMARY
echo "- \`${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-web-linux-amd64:latest\` - Linux x86_64" >> $GITHUB_STEP_SUMMARY
echo "- \`${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-web-linux-arm64:latest\` - Linux ARM64" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
# Ingestor images
echo "### 📡 Ingestor Service" >> $GITHUB_STEP_SUMMARY
echo "- \`${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-ingestor-linux-amd64:latest\` - Linux x86_64" >> $GITHUB_STEP_SUMMARY
echo "- \`${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-ingestor-linux-arm64:latest\` - Linux ARM64" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
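
The "Extract version from tag" steps in the workflow above rely on POSIX shortest-prefix removal to turn a tag ref into a plain version string; a minimal sketch using a hypothetical ref value:

```shell
# GITHUB_REF is a hypothetical example value here, not a live CI variable.
GITHUB_REF="refs/tags/v0.3.0"

# ${var#pattern} strips the shortest matching prefix from the value.
VERSION=${GITHUB_REF#refs/tags/v}
echo "version=$VERSION"   # → version=0.3.0
```

The same expansion works in any POSIX shell, which is why the workflow can use it directly in a `run:` block without extra tooling.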

.github/workflows/python.yml (new file, +47)

@@ -0,0 +1,47 @@
name: Python
on:
push:
branches: [ "main" ]
pull_request:
branches: [ "main" ]
permissions:
contents: read
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- name: Set up Python 3.13
uses: actions/setup-python@v3
with:
python-version: "3.13"
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install black pytest pytest-cov meshtastic
- name: Test with pytest and coverage
run: |
mkdir -p reports
pytest --cov=data --cov-report=term --cov-report=xml:reports/python-coverage.xml --junitxml=reports/python-junit.xml
- name: Upload coverage to Codecov
if: always()
uses: codecov/codecov-action@v5
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: reports/python-coverage.xml
flags: python-ingestor
name: python-ingestor
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
- name: Upload test results to Codecov
uses: codecov/test-results-action@v1
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: reports/python-junit.xml
flags: python-ingestor
- name: Lint with black
run: |
black --check ./

55
.github/workflows/ruby.yml vendored Normal file
View File

@@ -0,0 +1,55 @@
name: Ruby
on:
push:
branches: [ "main" ]
pull_request:
branches: [ "main" ]
permissions:
contents: read
jobs:
test:
defaults:
run:
working-directory: ./web
runs-on: ubuntu-latest
strategy:
matrix:
ruby-version: ['3.3', '3.4']
steps:
- uses: actions/checkout@v5
- name: Set up Ruby
uses: ruby/setup-ruby@v1
with:
ruby-version: ${{ matrix.ruby-version }}
bundler-cache: true
working-directory: ./web
- name: Set up dependencies
run: bundle install
- name: Run tests
run: |
mkdir -p tmp/test-results
bundle exec rspec \
--require rspec_junit_formatter \
--format progress \
--format RspecJunitFormatter \
--out tmp/test-results/rspec.xml
- name: Upload test results to Codecov
uses: codecov/test-results-action@v1
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: ./web/tmp/test-results/rspec.xml
flags: ruby-${{ matrix.ruby-version }}
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v5
with:
token: ${{ secrets.CODECOV_TOKEN }}
fail_ci_if_error: false
flags: ruby-${{ matrix.ruby-version }}
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
- name: Run rufo
run: bundle exec rufo --check .

11
.gitignore vendored
View File

@@ -11,7 +11,7 @@
/tmp/
# Used by dotenv library to load environment variables.
# .env
.env
# Ignore Byebug command history file.
.byebug_history
@@ -57,3 +57,12 @@ Gemfile.lock
# Python cache directories
__pycache__/
.coverage
coverage/
coverage.xml
htmlcov/
reports/
# AI planning and documentation
ai_docs/
*.log

98
CHANGELOG.md Normal file
View File

@@ -0,0 +1,98 @@
# CHANGELOG
## v0.3.0
* Add comprehensive Docker support with multi-architecture builds and automated CI/CD by @trose in <https://github.com/l5yth/potato-mesh/pull/122>
## v0.2.0
* Update readme for 0.2 by @l5yth in <https://github.com/l5yth/potato-mesh/pull/118>
* Add PotatoMesh logo to header and favicon by @l5yth in <https://github.com/l5yth/potato-mesh/pull/117>
* Harden API auth and request limits by @l5yth in <https://github.com/l5yth/potato-mesh/pull/116>
* Add client-side sorting to node table by @l5yth in <https://github.com/l5yth/potato-mesh/pull/114>
* Add short name overlay for node details by @l5yth in <https://github.com/l5yth/potato-mesh/pull/111>
* Adjust python ingestor interval to 60 seconds by @l5yth in <https://github.com/l5yth/potato-mesh/pull/112>
* Hide location columns on medium screens by @l5yth in <https://github.com/l5yth/potato-mesh/pull/109>
* Handle message updates based on sender info by @l5yth in <https://github.com/l5yth/potato-mesh/pull/108>
* Prioritize node posts in queued API updates by @l5yth in <https://github.com/l5yth/potato-mesh/pull/107>
* Add auto-refresh toggle to UI by @l5yth in <https://github.com/l5yth/potato-mesh/pull/105>
* Adjust Leaflet popup styling for dark mode by @l5yth in <https://github.com/l5yth/potato-mesh/pull/104>
* Add site info overlay by @l5yth in <https://github.com/l5yth/potato-mesh/pull/103>
* Add long name tooltip to short name badge by @l5yth in <https://github.com/l5yth/potato-mesh/pull/102>
* Ensure node numeric aliases are derived from canonical IDs by @l5yth in <https://github.com/l5yth/potato-mesh/pull/101>
* Chore: clean up repository by @l5yth in <https://github.com/l5yth/potato-mesh/pull/96>
* Handle SQLite busy errors when upserting nodes by @l5yth in <https://github.com/l5yth/potato-mesh/pull/100>
* Configure Sinatra logging level from DEBUG flag by @l5yth in <https://github.com/l5yth/potato-mesh/pull/97>
* Add penetration tests for authentication and SQL injection by @l5yth in <https://github.com/l5yth/potato-mesh/pull/95>
* Document Python and Ruby source modules by @l5yth in <https://github.com/l5yth/potato-mesh/pull/94>
* Add tests covering mesh helper edge cases by @l5yth in <https://github.com/l5yth/potato-mesh/pull/93>
* Fix py code cov by @l5yth in <https://github.com/l5yth/potato-mesh/pull/92>
* Add Codecov reporting to Python CI by @l5yth in <https://github.com/l5yth/potato-mesh/pull/91>
* Skip null identifiers when selecting packet fields by @l5yth in <https://github.com/l5yth/potato-mesh/pull/88>
* Create python yml ga by @l5yth in <https://github.com/l5yth/potato-mesh/pull/90>
* Add unit tests for mesh ingestor script by @l5yth in <https://github.com/l5yth/potato-mesh/pull/89>
* Add coverage for debug logging on messages without sender by @l5yth in <https://github.com/l5yth/potato-mesh/pull/86>
* Handle concurrent node snapshot updates by @l5yth in <https://github.com/l5yth/potato-mesh/pull/85>
* Fix ingestion mapping for message sender IDs by @l5yth in <https://github.com/l5yth/potato-mesh/pull/84>
* Add coverage for API authentication and payload edge cases by @l5yth in <https://github.com/l5yth/potato-mesh/pull/83>
* Add JUnit test reporting to Ruby CI by @l5yth in <https://github.com/l5yth/potato-mesh/pull/82>
* Configure SimpleCov reporting for Codecov by @l5yth in <https://github.com/l5yth/potato-mesh/pull/81>
* Update codecov job by @l5yth in <https://github.com/l5yth/potato-mesh/pull/80>
* Fix readme badges by @l5yth in <https://github.com/l5yth/potato-mesh/pull/79>
* Add Codecov upload step to Ruby workflow by @l5yth in <https://github.com/l5yth/potato-mesh/pull/78>
* Add Apache license headers to source files by @l5yth in <https://github.com/l5yth/potato-mesh/pull/77>
* Add integration specs for node and message APIs by @l5yth in <https://github.com/l5yth/potato-mesh/pull/76>
* Docs: update for 0.2.0 release by @l5yth in <https://github.com/l5yth/potato-mesh/pull/75>
* Create ruby workflow by @l5yth in <https://github.com/l5yth/potato-mesh/pull/74>
* Add RSpec smoke tests for app boot and database init by @l5yth in <https://github.com/l5yth/potato-mesh/pull/73>
* Align refresh controls with status text by @l5yth in <https://github.com/l5yth/potato-mesh/pull/72>
* Improve mobile layout by @l5yth in <https://github.com/l5yth/potato-mesh/pull/68>
* Normalize message sender IDs using node numbers by @l5yth in <https://github.com/l5yth/potato-mesh/pull/67>
* Style: condense node table by @l5yth in <https://github.com/l5yth/potato-mesh/pull/65>
* Log debug details for messages without sender by @l5yth in <https://github.com/l5yth/potato-mesh/pull/64>
* Fix nested dataclass serialization for node snapshots by @l5yth in <https://github.com/l5yth/potato-mesh/pull/63>
* Log node object on snapshot update failure by @l5yth in <https://github.com/l5yth/potato-mesh/pull/62>
* Initialize database on startup by @l5yth in <https://github.com/l5yth/potato-mesh/pull/61>
* Send mesh data to Potatomesh API by @l5yth in <https://github.com/l5yth/potato-mesh/pull/60>
* Convert boolean flags for SQLite binding by @l5yth in <https://github.com/l5yth/potato-mesh/pull/59>
* Use packet id as message primary key by @l5yth in <https://github.com/l5yth/potato-mesh/pull/58>
* Add message ingestion API and stricter auth by @l5yth in <https://github.com/l5yth/potato-mesh/pull/56>
* Feat: parameterize community info by @l5yth in <https://github.com/l5yth/potato-mesh/pull/55>
* Feat: add dark mode toggle by @l5yth in <https://github.com/l5yth/potato-mesh/pull/54>
## v0.1.0
* Show daily node count in title and header by @l5yth in <https://github.com/l5yth/potato-mesh/pull/49>
* Add daily date separators to chat log by @l5yth in <https://github.com/l5yth/potato-mesh/pull/47>
* Feat: make frontend responsive for mobile by @l5yth in <https://github.com/l5yth/potato-mesh/pull/46>
* Harden mesh utilities by @l5yth in <https://github.com/l5yth/potato-mesh/pull/45>
* Filter out distant nodes from Berlin map view by @l5yth in <https://github.com/l5yth/potato-mesh/pull/43>
* Display filtered active node counts in #MediumFast subheading by @l5yth in <https://github.com/l5yth/potato-mesh/pull/44>
* Limit chat log and highlight short names by role by @l5yth in <https://github.com/l5yth/potato-mesh/pull/42>
* Fix string/integer comparison in node query by @l5yth in <https://github.com/l5yth/potato-mesh/pull/40>
* Escape chat message and node entries by @l5yth in <https://github.com/l5yth/potato-mesh/pull/39>
* Sort chat entries by timestamp by @l5yth in <https://github.com/l5yth/potato-mesh/pull/38>
* Feat: append messages to chat log by @l5yth in <https://github.com/l5yth/potato-mesh/pull/36>
* Normalize future timestamps for nodes by @l5yth in <https://github.com/l5yth/potato-mesh/pull/35>
* Optimize web frontend and Ruby app by @l5yth in <https://github.com/l5yth/potato-mesh/pull/32>
* Add messages API endpoint with node details by @l5yth in <https://github.com/l5yth/potato-mesh/pull/33>
* Clamp node timestamps and sync last_heard with position time by @l5yth in <https://github.com/l5yth/potato-mesh/pull/31>
* Refactor: replace deprecated utcfromtimestamp by @l5yth in <https://github.com/l5yth/potato-mesh/pull/30>
* Add optional debug logging for node and message operations by @l5yth in <https://github.com/l5yth/potato-mesh/pull/29>
* Data: enable serial collection of messages on channel 0 by @l5yth in <https://github.com/l5yth/potato-mesh/pull/25>
* Add first_heard timestamp by @l5yth in <https://github.com/l5yth/potato-mesh/pull/23>
* Add persistent footer with contact information by @l5yth in <https://github.com/l5yth/potato-mesh/pull/22>
* Sort initial chat entries by last-heard by @l5yth in <https://github.com/l5yth/potato-mesh/pull/20>
* Display position time in relative 'time ago' format by @l5yth in <https://github.com/l5yth/potato-mesh/pull/19>
* Adjust marker size and map tile opacity by @l5yth in <https://github.com/l5yth/potato-mesh/pull/18>
* Add chat box for node notifications by @l5yth in <https://github.com/l5yth/potato-mesh/pull/17>
* Color markers by role with grayscale map by @l5yth in <https://github.com/l5yth/potato-mesh/pull/16>
* Default missing node role to client by @l5yth in <https://github.com/l5yth/potato-mesh/pull/15>
* Show live node count in nodes page titles by @l5yth in <https://github.com/l5yth/potato-mesh/pull/14>
* Filter stale nodes and add live search by @l5yth in <https://github.com/l5yth/potato-mesh/pull/13>
* Remove raw node JSON column by @l5yth in <https://github.com/l5yth/potato-mesh/pull/12>
* Add JSON ingest API for node updates by @l5yth in <https://github.com/l5yth/potato-mesh/pull/11>
* Ignore Python __pycache__ directories by @l5yth in <https://github.com/l5yth/potato-mesh/pull/10>
* Feat: load nodes from json for tests by @l5yth in <https://github.com/l5yth/potato-mesh/pull/8>
* Handle dataclass fields in node snapshots by @l5yth in <https://github.com/l5yth/potato-mesh/pull/6>
* Add index page and /nodes route for node map by @l5yth in <https://github.com/l5yth/potato-mesh/pull/4>

103
DOCKER.md Normal file
View File

@@ -0,0 +1,103 @@
# PotatoMesh Docker Setup
## Quick Start
```bash
./configure.sh
docker-compose up -d
docker-compose logs -f
```
The default configuration attaches both services to the host network. This
avoids creating Docker bridge interfaces on platforms where that operation is
blocked. Access the dashboard at `http://127.0.0.1:41447` as soon as the
containers are running. On Docker Desktop (macOS/Windows) or when you prefer
traditional bridged networking, start Compose with the `bridge` profile:
```bash
COMPOSE_PROFILES=bridge docker-compose up -d
```
Access at `http://localhost:41447`
## Configuration
Edit `.env` file or run `./configure.sh` to set:
- `API_TOKEN` - Required for ingestor authentication
- `MESH_SERIAL` - Your Meshtastic device path (e.g., `/dev/ttyACM0`)
- `SITE_NAME` - Your mesh network name
- `MAP_CENTER_LAT/LON` - Map center coordinates
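A minimal `.env` illustrating these keys (all values below are placeholders, not real credentials or required defaults):

```bash
# Example .env -- placeholder values, adjust for your deployment
API_TOKEN=change-me-to-a-long-random-token
MESH_SERIAL=/dev/ttyACM0
SITE_NAME="My Meshtastic Network"
MAP_CENTER_LAT=52.502889
MAP_CENTER_LON=13.404194
```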
## Device Setup
**Find your device:**
```bash
# Linux
ls /dev/ttyACM* /dev/ttyUSB*
# macOS
ls /dev/cu.usbserial-*
# Windows (WSL; COM ports appear as /dev/ttyS*)
ls /dev/ttyS*
```
**Set permissions (Linux/macOS):**
```bash
sudo chmod 666 /dev/ttyACM0
# Or add user to dialout group
sudo usermod -a -G dialout $USER
```
## Common Commands
```bash
# Start services
docker-compose up -d
# View logs
docker-compose logs -f
# Stop services
docker-compose down
# Stop and remove data
docker-compose down -v
# Update images
docker-compose pull && docker-compose up -d
```
## Troubleshooting
**Device access issues:**
```bash
# Check device exists and permissions
ls -la /dev/ttyACM0
# Fix permissions
sudo chmod 666 /dev/ttyACM0
```
**Port conflicts:**
```bash
# Find what's using port 41447
sudo lsof -i :41447
```
**Container issues:**
```bash
# Check logs
docker-compose logs
# Restart services
docker-compose restart
```
For more Docker help, see [Docker Compose documentation](https://docs.docker.com/compose/).

View File

@@ -33,7 +33,7 @@
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
@@ -186,7 +186,7 @@
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Copyright (C) 2025 l5yth
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.

172
README.md
View File

@@ -1,70 +1,55 @@
# potato-mesh
# 🥔 PotatoMesh
a simple meshtastic node dashboard for your local community. here: berlin mediumfast.
[![GitHub Workflow Status](https://img.shields.io/github/actions/workflow/status/l5yth/potato-mesh/ruby.yml?branch=main)](https://github.com/l5yth/potato-mesh/actions)
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/l5yth/potato-mesh)](https://github.com/l5yth/potato-mesh/releases)
[![codecov](https://codecov.io/gh/l5yth/potato-mesh/branch/main/graph/badge.svg?token=FS7252JVZT)](https://codecov.io/gh/l5yth/potato-mesh)
[![Open-Source License](https://img.shields.io/github/license/l5yth/potato-mesh)](LICENSE)
[![Contributions Welcome](https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=flat)](https://github.com/l5yth/potato-mesh/issues)
![screenshot of the first version](./scrot-0.1.png)
A simple Meshtastic-powered node dashboard for your local community. _No MQTT clutter, just local LoRa aether._
## status
* Web app with chat window and map view showing nodes and messages.
* API to POST (authenticated) and to GET nodes and messages.
* Supplemental Python ingestor to feed the POST APIs of the Web app with data remotely.
* Shows new node notifications (first seen) in chat.
* Allows searching and filtering for nodes in map and table view.
_in active development._
Live demo for Berlin #MediumFast: [potatomesh.net](https://potatomesh.net)
what works:
![screenshot of the third version](./scrot-0.3.png)
* updating nodes from a locally connected meshtastic device (via serial)
* awaiting messages on default channel (0) from a local meshtastic device
* storing nodes and messages in a local database (sqlite3)
* displaying nodes ordered by last seen in a web app table view
* displaying nodes by geographic coordinates on a map layer, coloured by device role
* displaying new node notifications and chat messages in default channel in chat box
* displaying active node count and filtering nodes by name
* exposing nodes and messages to api endpoints
what does not work _(yet):_
* posting nodes and messages to the api endpoints _(wip)_
## requirements
requires a meshtastic node connected (via serial) to gather mesh data and the meshtastic cli.
requires the meshtastic python api for the database.
## Quick Start with Docker
```bash
python -m venv .venv
source .venv/bin/activate
pip install -U meshtastic
./configure.sh # Configure your setup
docker-compose up -d # Start services
docker-compose logs -f # View logs
```
requires latest ruby and ruby gems for the sinatra web app.
PotatoMesh uses host networking by default so it can run on restricted
systems where Docker cannot create bridged interfaces. The web UI listens on
`http://127.0.0.1:41447` immediately without explicit port mappings. If you
are using Docker Desktop (macOS/Windows) or otherwise require bridged
networking, enable the Compose profile with:
```bash
gem install bundler
COMPOSE_PROFILES=bridge docker-compose up -d
```
## Web App
Requires Ruby for the Sinatra web app and SQLite3 for the app's database.
```bash
pacman -S ruby sqlite3
gem install sinatra sqlite3 rackup puma rspec rack-test rufo
cd ./web
bundle install
```
### database
### Run
uses python meshtastic library to ingest mesh data into an sqlite3 database locally
run `mesh.sh` in `data/` to keep updating node records and parsing new incoming messages.
```bash
MESH_SERIAL=/dev/ttyACM0 DEBUG=1 ./mesh.sh
[...]
[debug] upserted node !849b7154 shortName='7154'
[debug] upserted node !ba653ae8 shortName='3ae8'
[debug] upserted node !16ced364 shortName='Pat'
[debug] stored message from '!9ee71c38' to '^all' ch=0 text='Guten Morgen!'
```
enable debug output with `DEBUG=1`, specify the serial port with `MESH_SERIAL` (default `/dev/ttyACM0`).
### web app
uses a ruby sinatra webapp to display data from the sqlite database
run `app.sh` in `web/` to run the sinatra webserver and check
[127.0.0.1:41447](http://127.0.0.1:41447/) for the correct node map.
Check out the `app.sh` run script in `./web` directory.
```bash
API_TOKEN="1eb140fd-cab4-40be-b862-41c607762246" ./app.sh
@@ -76,17 +61,86 @@ Puma starting in single mode...
* Listening on http://127.0.0.1:41447
```
set `API_TOKEN` required for authorizations on the api post-endpoints (wip).
Check [127.0.0.1:41447](http://127.0.0.1:41447/) for the development preview
of the node map. Set `API_TOKEN`; it is required to authorize requests to the API's POST endpoints.
## api
The web app can be configured with environment variables (defaults shown):
the web app contains an api:
* `SITE_NAME` - title and header shown in the ui (default: "Meshtastic Berlin")
* `DEFAULT_CHANNEL` - default channel shown in the ui (default: "#MediumFast")
* `DEFAULT_FREQUENCY` - default frequency shown in the ui (default: "868MHz")
* `MAP_CENTER_LAT` / `MAP_CENTER_LON` - default map center coordinates (default: `52.502889` / `13.404194`)
* `MAX_NODE_DISTANCE_KM` - hide nodes farther than this distance from the center (default: `137`)
* `MATRIX_ROOM` - matrix room id for a footer link (default: `#meshtastic-berlin:matrix.org`)
* GET `/api/nodes?limit=1000` - returns the latest 1000 nodes reported to the app
* GET `/api/messages?limit=1000` - returns the latest 1000 messages
The application derives SEO-friendly document titles, descriptions, and social
preview tags from these existing configuration values and reuses the bundled
logo for Open Graph and Twitter cards.
the `POST` apis are _currently being worked on (tm)._
Example:
## license
```bash
SITE_NAME="Meshtastic Berlin" MAP_CENTER_LAT=52.502889 MAP_CENTER_LON=13.404194 MAX_NODE_DISTANCE_KM=137 MATRIX_ROOM="#meshtastic-berlin:matrix.org" ./app.sh
```
apache v2.0
### API
The web app contains an API:
* GET `/api/nodes?limit=100` - returns the latest 100 nodes reported to the app
* GET `/api/positions?limit=100` - returns the latest 100 position reports
* GET `/api/messages?limit=100` - returns the latest 100 messages
* POST `/api/nodes` - upserts nodes provided as JSON object mapping node ids to node data (requires `Authorization: Bearer <API_TOKEN>`)
* POST `/api/positions` - appends positions provided as a JSON object or array (requires `Authorization: Bearer <API_TOKEN>`)
* POST `/api/messages` - appends messages provided as a JSON object or array (requires `Authorization: Bearer <API_TOKEN>`)
The `API_TOKEN` environment variable must be set to a non-empty value and match the token supplied in the `Authorization` header for `POST` requests.
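As a sketch of how a client might prepare a call to the authenticated POST endpoint (the payload shape — a JSON object mapping node ids to node data — is taken from the description above; the helper name is hypothetical and nothing is sent over the network here):

```python
import json
import urllib.request


def build_node_post(instance, api_token, nodes):
    """Prepare an authenticated POST /api/nodes request without sending it."""
    body = json.dumps(nodes).encode("utf-8")
    return urllib.request.Request(
        f"{instance}/api/nodes",
        data=body,
        headers={
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_node_post(
    "http://127.0.0.1:41447",
    "1eb140fd-cab4-40be-b862-41c607762246",
    {"!849b7154": {"shortName": "7154"}},
)
print(req.get_header("Authorization"))
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) would only succeed against a running instance whose `API_TOKEN` matches the bearer token.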
## Python Ingestor
The web app is not meant to run with a locally connected Meshtastic node; rather, it
runs on a remote host without access to a physical Meshtastic device. Therefore, it only
accepts data through the API POST endpoints. The benefit is that multiple nodes across the
community can feed the dashboard with data; the web app keys messages and nodes
by ID, so there is no duplication.
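The id-keyed deduplication described above can be sketched as follows (the merge semantics are an assumption for illustration, not the app's actual code):

```python
# Repeated updates for the same node id merge into one record instead of
# appending duplicates.
nodes = {}
updates = [
    {"id": "!849b7154", "shortName": "7154"},
    {"id": "!849b7154", "shortName": "7154", "lastHeard": 1758900000},
    {"id": "!ba653ae8", "shortName": "3ae8"},
]
for update in updates:
    nodes[update["id"]] = {**nodes.get(update["id"], {}), **update}
print(len(nodes))  # one entry per node id
```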
For convenience, the directory `./data` contains a Python ingestor. It connects to a
Meshtastic node via serial port or to a remote device that exposes the Meshtastic TCP
interface to gather nodes and messages seen by the node.
```bash
pacman -S python
cd ./data
python -m venv .venv
source .venv/bin/activate
pip install -U meshtastic
```
It uses the Meshtastic Python library to ingest mesh data and post nodes and messages
to the configured potato-mesh instance.
Check out `mesh.sh` ingestor script in the `./data` directory.
```bash
POTATOMESH_INSTANCE=http://127.0.0.1:41447 API_TOKEN=1eb140fd-cab4-40be-b862-41c607762246 MESH_SERIAL=/dev/ttyACM0 DEBUG=1 ./mesh.sh
Mesh daemon: nodes+messages → http://127.0.0.1 | port=41447 | channel=0
[...]
[debug] upserted node !849b7154 shortName='7154'
[debug] upserted node !ba653ae8 shortName='3ae8'
[debug] upserted node !16ced364 shortName='Pat'
[debug] stored message from '!9ee71c38' to '^all' ch=0 text='Guten Morgen!'
```
Run the script with `POTATOMESH_INSTANCE` and `API_TOKEN` to keep updating
node records and parsing new incoming messages. Enable debug output with `DEBUG=1`,
specify the serial port with `MESH_SERIAL` (default `/dev/ttyACM0`) or set it to an IP
address (for example `192.168.1.20:4403`) to use the Meshtastic TCP interface.
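A minimal sketch of the kind of detection this implies — treating `MESH_SERIAL` as a TCP target when it looks like `host:port` or a plain IPv4 address, and as a serial device path otherwise (this is an illustration, not the ingestor's actual logic):

```python
import re


def is_tcp_target(mesh_serial):
    """Heuristic: 'host:port' or a bare IPv4 address means TCP, else serial."""
    if ":" in mesh_serial:
        host, _, port = mesh_serial.rpartition(":")
        return bool(host) and port.isdigit()
    return re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", mesh_serial) is not None


print(is_tcp_target("192.168.1.20:4403"))  # True
print(is_tcp_target("/dev/ttyACM0"))       # False
```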
## Demos
* <https://potatomesh.net/>
* <https://vrs.kdd2105.ru/>
## License
Apache v2.0, Contact <COM0@l5y.tech>

155
configure.sh Executable file
View File

@@ -0,0 +1,155 @@
#!/bin/bash
# PotatoMesh Configuration Script
# This script helps you configure your PotatoMesh instance with your local settings
set -e
echo "🥔 PotatoMesh Configuration"
echo "=========================="
echo ""
# Check if .env exists, if not create from .env.example
if [ ! -f .env ]; then
if [ -f .env.example ]; then
echo "📋 Creating .env file from .env.example..."
cp .env.example .env
else
echo "📋 Creating new .env file..."
touch .env
fi
fi
echo "🔧 Let's configure your PotatoMesh instance!"
echo ""
# Function to read input with default
read_with_default() {
local prompt="$1"
local default="$2"
local var_name="$3"
if [ -n "$default" ]; then
read -p "$prompt [$default]: " input
input=${input:-$default}
else
read -p "$prompt: " input
fi
eval "$var_name='$input'"
}
# Function to update .env file
update_env() {
local key="$1"
local value="$2"
if grep -q "^$key=" .env; then
# Update existing value
sed -i.bak "s/^$key=.*/$key=$value/" .env
else
# Add new value
echo "$key=$value" >> .env
fi
}
# Get current values from .env if they exist
SITE_NAME=$(grep "^SITE_NAME=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "My Meshtastic Network")
DEFAULT_CHANNEL=$(grep "^DEFAULT_CHANNEL=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "#MediumFast")
DEFAULT_FREQUENCY=$(grep "^DEFAULT_FREQUENCY=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "868MHz")
MAP_CENTER_LAT=$(grep "^MAP_CENTER_LAT=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "52.502889")
MAP_CENTER_LON=$(grep "^MAP_CENTER_LON=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "13.404194")
MAX_NODE_DISTANCE_KM=$(grep "^MAX_NODE_DISTANCE_KM=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "50")
MATRIX_ROOM=$(grep "^MATRIX_ROOM=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "")
API_TOKEN=$(grep "^API_TOKEN=" .env 2>/dev/null | cut -d'=' -f2- | tr -d '"' || echo "")
echo "📍 Location Settings"
echo "-------------------"
read_with_default "Site Name (your mesh network name)" "$SITE_NAME" SITE_NAME
read_with_default "Map Center Latitude" "$MAP_CENTER_LAT" MAP_CENTER_LAT
read_with_default "Map Center Longitude" "$MAP_CENTER_LON" MAP_CENTER_LON
read_with_default "Max Node Distance (km)" "$MAX_NODE_DISTANCE_KM" MAX_NODE_DISTANCE_KM
echo ""
echo "📡 Meshtastic Settings"
echo "---------------------"
read_with_default "Default Channel" "$DEFAULT_CHANNEL" DEFAULT_CHANNEL
read_with_default "Default Frequency (868MHz, 915MHz, etc.)" "$DEFAULT_FREQUENCY" DEFAULT_FREQUENCY
echo ""
echo "💬 Optional Settings"
echo "-------------------"
read_with_default "Matrix Room (optional, e.g., #meshtastic-berlin:matrix.org)" "$MATRIX_ROOM" MATRIX_ROOM
echo ""
echo "🔐 Security Settings"
echo "-------------------"
echo "The API token is used for secure communication between the web app and ingestor."
echo "You can provide your own custom token or let us generate a secure one for you."
echo ""
if [ -z "$API_TOKEN" ]; then
echo "No existing API token found. Generating a secure token..."
API_TOKEN=$(openssl rand -hex 32 2>/dev/null || python3 -c "import secrets; print(secrets.token_hex(32))" 2>/dev/null || echo "your-secure-api-token-here")
echo "✅ Generated secure API token: ${API_TOKEN:0:8}..."
echo ""
read -p "Use this generated token? (Y/n): " use_generated
if [[ "$use_generated" =~ ^[Nn]$ ]]; then
read -p "Enter your custom API token: " API_TOKEN
fi
else
echo "Existing API token found: ${API_TOKEN:0:8}..."
read -p "Keep existing token? (Y/n): " keep_existing
if [[ "$keep_existing" =~ ^[Nn]$ ]]; then
read -p "Enter new API token (or press Enter to generate): " new_token
if [ -n "$new_token" ]; then
API_TOKEN="$new_token"
else
echo "Generating new secure token..."
API_TOKEN=$(openssl rand -hex 32 2>/dev/null || python3 -c "import secrets; print(secrets.token_hex(32))" 2>/dev/null || echo "your-secure-api-token-here")
echo "✅ Generated new API token: ${API_TOKEN:0:8}..."
fi
fi
fi
echo ""
echo "📝 Updating .env file..."
# Update .env file
update_env "SITE_NAME" "\"$SITE_NAME\""
update_env "DEFAULT_CHANNEL" "\"$DEFAULT_CHANNEL\""
update_env "DEFAULT_FREQUENCY" "\"$DEFAULT_FREQUENCY\""
update_env "MAP_CENTER_LAT" "$MAP_CENTER_LAT"
update_env "MAP_CENTER_LON" "$MAP_CENTER_LON"
update_env "MAX_NODE_DISTANCE_KM" "$MAX_NODE_DISTANCE_KM"
update_env "MATRIX_ROOM" "\"$MATRIX_ROOM\""
update_env "API_TOKEN" "$API_TOKEN"
# Add other common settings if they don't exist
if ! grep -q "^MESH_SERIAL=" .env; then
echo "MESH_SERIAL=/dev/ttyACM0" >> .env
fi
if ! grep -q "^DEBUG=" .env; then
echo "DEBUG=0" >> .env
fi
# Clean up backup file
rm -f .env.bak
echo ""
echo "✅ Configuration complete!"
echo ""
echo "📋 Your settings:"
echo " Site Name: $SITE_NAME"
echo " Map Center: $MAP_CENTER_LAT, $MAP_CENTER_LON"
echo " Max Distance: ${MAX_NODE_DISTANCE_KM}km"
echo " Channel: $DEFAULT_CHANNEL"
echo " Frequency: $DEFAULT_FREQUENCY"
echo " Matrix Room: ${MATRIX_ROOM:-'Not set'}"
echo " API Token: ${API_TOKEN:0:8}..."
echo ""
echo "🚀 You can now start PotatoMesh with:"
echo " docker-compose up -d"
echo ""
echo "📖 For more configuration options, see the README.md"

2
data/.gitignore vendored
View File

@@ -2,3 +2,5 @@
*.db-wal
*.db-shm
*.backup
*.copy
*.log

72
data/Dockerfile Normal file
View File

@@ -0,0 +1,72 @@
# syntax=docker/dockerfile:1.6
ARG TARGETOS=linux
ARG PYTHON_VERSION=3.12.6
# Linux production image
FROM python:${PYTHON_VERSION}-alpine AS production-linux
ENV PYTHONDONTWRITEBYTECODE=1 \
PYTHONUNBUFFERED=1
WORKDIR /app
COPY data/requirements.txt ./
RUN set -eux; \
apk add --no-cache \
tzdata \
curl \
libstdc++ \
libgcc; \
apk add --no-cache --virtual .build-deps \
gcc \
musl-dev \
linux-headers \
build-base; \
python -m pip install --no-cache-dir -r requirements.txt; \
apk del .build-deps
COPY data/ .
RUN addgroup -S potatomesh && \
adduser -S potatomesh -G potatomesh && \
adduser potatomesh dialout && \
chown -R potatomesh:potatomesh /app
USER potatomesh
ENV MESH_SERIAL=/dev/ttyACM0 \
MESH_SNAPSHOT_SECS=60 \
MESH_CHANNEL_INDEX=0 \
DEBUG=0 \
POTATOMESH_INSTANCE="" \
API_TOKEN=""
CMD ["python", "mesh.py"]
# Windows production image
FROM python:${PYTHON_VERSION}-windowsservercore-ltsc2022 AS production-windows
SHELL ["cmd", "/S", "/C"]
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
WORKDIR /app
COPY data/requirements.txt ./
RUN python -m pip install --no-cache-dir -r requirements.txt
COPY data/ .
USER ContainerUser
ENV MESH_SERIAL=/dev/ttyACM0 \
MESH_SNAPSHOT_SECS=60 \
MESH_CHANNEL_INDEX=0 \
DEBUG=0 \
POTATOMESH_INSTANCE="" \
API_TOKEN=""
CMD ["python", "mesh.py"]
FROM production-${TARGETOS} AS production

View File

@@ -0,0 +1,19 @@
# Copyright (C) 2025 l5yth
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Data utilities for the Potato Mesh synchronisation daemon.
The ``data.mesh`` module exposes helpers for reading Meshtastic node and
message information before forwarding it to the accompanying web application.
"""

File diff suppressed because it is too large Load Diff

View File

@@ -1,7 +1,22 @@
#!/usr/bin/env bash
# Copyright (C) 2025 l5yth
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -euo pipefail
python -m venv .venv
source .venv/bin/activate
pip install -U meshtastic
pip install -U meshtastic black pytest
exec python mesh.py

View File

@@ -1,16 +1,30 @@
-- Copyright (C) 2025 l5yth
--
-- Licensed under the Apache License, Version 2.0 (the "License");
-- you may not use this file except in compliance with the License.
-- You may obtain a copy of the License at
--
-- http://www.apache.org/licenses/LICENSE-2.0
--
-- Unless required by applicable law or agreed to in writing, software
-- distributed under the License is distributed on an "AS IS" BASIS,
-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-- See the License for the specific language governing permissions and
-- limitations under the License.
CREATE TABLE IF NOT EXISTS messages (
id INTEGER PRIMARY KEY AUTOINCREMENT,
rx_time INTEGER NOT NULL, -- unix seconds when received
rx_iso TEXT NOT NULL, -- ISO8601 UTC timestamp
from_id TEXT, -- sender node id (string form)
to_id TEXT, -- recipient node id
channel INTEGER, -- channel index
portnum TEXT, -- application portnum (e.g. TEXT_MESSAGE_APP)
text TEXT, -- decoded text payload if present
snr REAL, -- signal-to-noise ratio
rssi INTEGER, -- received signal strength
hop_limit INTEGER, -- hops left when received
raw_json TEXT -- entire packet JSON dump
id INTEGER PRIMARY KEY,
rx_time INTEGER NOT NULL,
rx_iso TEXT NOT NULL,
from_id TEXT,
to_id TEXT,
channel INTEGER,
portnum TEXT,
text TEXT,
encrypted TEXT,
snr REAL,
rssi INTEGER,
hop_limit INTEGER
);
CREATE INDEX IF NOT EXISTS idx_messages_rx_time ON messages(rx_time);
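The new `encrypted` column lets undecryptable payloads be stored without a `text` value; per the "Exclude encrypted messages from API responses" commit, reads then skip rows where `encrypted` is set. A minimal sketch of that pattern against the schema above (sample IDs and timestamps are illustrative, not from the repo):

```python
import sqlite3
import time

# In-memory DB with the new messages schema (subset identical to above).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE IF NOT EXISTS messages (
    id INTEGER PRIMARY KEY,
    rx_time INTEGER NOT NULL,
    rx_iso TEXT NOT NULL,
    from_id TEXT,
    to_id TEXT,
    channel INTEGER,
    portnum TEXT,
    text TEXT,
    encrypted TEXT,
    snr REAL,
    rssi INTEGER,
    hop_limit INTEGER
);
CREATE INDEX IF NOT EXISTS idx_messages_rx_time ON messages(rx_time);
""")

now = int(time.time())
# One plain-text message and one encrypted payload (ciphertext stays opaque).
conn.execute(
    "INSERT INTO messages (rx_time, rx_iso, from_id, text) VALUES (?, ?, ?, ?)",
    (now, "2025-09-24T07:00:00Z", "!a1b2c3d4", "hello mesh"),
)
conn.execute(
    "INSERT INTO messages (rx_time, rx_iso, from_id, encrypted) VALUES (?, ?, ?, ?)",
    (now, "2025-09-24T07:00:01Z", "!deadbeef", "b64-ciphertext"),
)

# API-style read: plain-text messages only, newest first
# (the ORDER BY can use idx_messages_rx_time).
rows = conn.execute(
    "SELECT from_id, text FROM messages WHERE encrypted IS NULL ORDER BY rx_time DESC"
).fetchall()
print(rows)  # [('!a1b2c3d4', 'hello mesh')]
```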

View File

@@ -0,0 +1,4 @@
-- Add support for encrypted messages to the existing schema.
BEGIN;
ALTER TABLE messages ADD COLUMN encrypted TEXT;
COMMIT;
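`ALTER TABLE ... ADD COLUMN` fails if the column already exists, so the migration above is not safe to re-run as-is. One common way to make it idempotent — a sketch, not the repo's actual migration runner — is to check `PRAGMA table_info` first:

```python
import sqlite3

# Pre-migration table (trimmed to a few columns for brevity).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (id INTEGER PRIMARY KEY, rx_time INTEGER, text TEXT)")

def ensure_column(conn, table, column, decl):
    """Add `column` to `table` only if it is not already present."""
    cols = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
    if column not in cols:
        conn.execute(f"ALTER TABLE {table} ADD COLUMN {column} {decl}")

ensure_column(conn, "messages", "encrypted", "TEXT")
ensure_column(conn, "messages", "encrypted", "TEXT")  # second call is a no-op

cols = [row[1] for row in conn.execute("PRAGMA table_info(messages)")]
print(cols)  # ['id', 'rx_time', 'text', 'encrypted']
```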

View File

@@ -1,4 +1,17 @@
-- nodes.sql
-- Copyright (C) 2025 l5yth
--
-- Licensed under the Apache License, Version 2.0 (the "License");
-- you may not use this file except in compliance with the License.
-- You may obtain a copy of the License at
--
-- http://www.apache.org/licenses/LICENSE-2.0
--
-- Unless required by applicable law or agreed to in writing, software
-- distributed under the License is distributed on an "AS IS" BASIS,
-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-- See the License for the specific language governing permissions and
-- limitations under the License.
PRAGMA journal_mode=WAL;
CREATE TABLE IF NOT EXISTS nodes (

40
data/positions.sql Normal file
View File

@@ -0,0 +1,40 @@
-- Copyright (C) 2025 l5yth
--
-- Licensed under the Apache License, Version 2.0 (the "License");
-- you may not use this file except in compliance with the License.
-- You may obtain a copy of the License at
--
-- http://www.apache.org/licenses/LICENSE-2.0
--
-- Unless required by applicable law or agreed to in writing, software
-- distributed under the License is distributed on an "AS IS" BASIS,
-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-- See the License for the specific language governing permissions and
-- limitations under the License.
CREATE TABLE IF NOT EXISTS positions (
id INTEGER PRIMARY KEY,
node_id TEXT,
node_num INTEGER,
rx_time INTEGER NOT NULL,
rx_iso TEXT NOT NULL,
position_time INTEGER,
to_id TEXT,
latitude REAL,
longitude REAL,
altitude REAL,
location_source TEXT,
precision_bits INTEGER,
sats_in_view INTEGER,
pdop REAL,
ground_speed REAL,
ground_track REAL,
snr REAL,
rssi INTEGER,
hop_limit INTEGER,
bitfield INTEGER,
payload_b64 TEXT
);
CREATE INDEX IF NOT EXISTS idx_positions_rx_time ON positions(rx_time);
CREATE INDEX IF NOT EXISTS idx_positions_node_id ON positions(node_id);

7
data/requirements.txt Normal file
View File

@@ -0,0 +1,7 @@
# Production dependencies
meshtastic>=2.0.0
protobuf>=4.21.12
# Development dependencies (optional)
black>=23.0.0
pytest>=7.0.0

34
docker-compose.dev.yml Normal file
View File

@@ -0,0 +1,34 @@
# Development overrides for docker-compose.yml
services:
web:
environment:
DEBUG: 1
volumes:
- ./web:/app
- ./data:/data
- /app/vendor/bundle
web-bridge:
environment:
DEBUG: 1
volumes:
- ./web:/app
- ./data:/data
- /app/vendor/bundle
ports:
- "41447:41447"
- "9292:9292"
ingestor:
environment:
DEBUG: 1
volumes:
- ./data:/app
- /app/.local
ingestor-bridge:
environment:
DEBUG: 1
volumes:
- ./data:/app
- /app/.local

29
docker-compose.prod.yml Normal file
View File

@@ -0,0 +1,29 @@
# Production overrides for docker-compose.yml
services:
web:
build:
target: production
environment:
DEBUG: 0
restart: always
web-bridge:
build:
target: production
environment:
DEBUG: 0
restart: always
ingestor:
build:
target: production
environment:
DEBUG: 0
restart: always
ingestor-bridge:
build:
target: production
environment:
DEBUG: 0
restart: always

92
docker-compose.yml Normal file
View File

@@ -0,0 +1,92 @@
x-web-base: &web-base
image: ghcr.io/l5yth/potato-mesh-web-linux-amd64:latest
environment:
SITE_NAME: ${SITE_NAME:-My Meshtastic Network}
DEFAULT_CHANNEL: ${DEFAULT_CHANNEL:-#MediumFast}
DEFAULT_FREQUENCY: ${DEFAULT_FREQUENCY:-868MHz}
MAP_CENTER_LAT: ${MAP_CENTER_LAT:-52.502889}
MAP_CENTER_LON: ${MAP_CENTER_LON:-13.404194}
MAX_NODE_DISTANCE_KM: ${MAX_NODE_DISTANCE_KM:-50}
MATRIX_ROOM: ${MATRIX_ROOM:-}
API_TOKEN: ${API_TOKEN}
DEBUG: ${DEBUG:-0}
volumes:
- potatomesh_data:/app/data
- potatomesh_logs:/app/logs
restart: unless-stopped
deploy:
resources:
limits:
memory: 512M
cpus: '0.5'
reservations:
memory: 256M
cpus: '0.25'
x-ingestor-base: &ingestor-base
image: ghcr.io/l5yth/potato-mesh-ingestor-linux-amd64:latest
environment:
MESH_SERIAL: ${MESH_SERIAL:-/dev/ttyACM0}
MESH_SNAPSHOT_SECS: ${MESH_SNAPSHOT_SECS:-60}
MESH_CHANNEL_INDEX: ${MESH_CHANNEL_INDEX:-0}
POTATOMESH_INSTANCE: ${POTATOMESH_INSTANCE:-http://web:41447}
API_TOKEN: ${API_TOKEN}
DEBUG: ${DEBUG:-0}
volumes:
- potatomesh_data:/app/data
- potatomesh_logs:/app/logs
devices:
- ${MESH_SERIAL:-/dev/ttyACM0}:${MESH_SERIAL:-/dev/ttyACM0}
privileged: false
restart: unless-stopped
deploy:
resources:
limits:
memory: 256M
cpus: '0.25'
reservations:
memory: 128M
cpus: '0.1'
services:
web:
<<: *web-base
network_mode: host
ingestor:
<<: *ingestor-base
network_mode: host
depends_on:
- web
extra_hosts:
- "web:127.0.0.1"
web-bridge:
<<: *web-base
container_name: potatomesh-web-bridge
networks:
- potatomesh-network
ports:
- "41447:41447"
profiles:
- bridge
ingestor-bridge:
<<: *ingestor-base
container_name: potatomesh-ingestor-bridge
networks:
- potatomesh-network
depends_on:
- web-bridge
profiles:
- bridge
volumes:
potatomesh_data:
driver: local
potatomesh_logs:
driver: local
networks:
potatomesh-network:
driver: bridge

BIN
scrot-0.2.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 748 KiB

BIN
scrot-0.3.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 952 KiB

Binary file not shown.

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

View File

@@ -1,4 +1,19 @@
#!/usr/bin/env python3
# Copyright (C) 2025 l5yth
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import time, json, base64, threading
from pubsub import pub # comes with meshtastic
from meshtastic.serial_interface import SerialInterface

77
tests/dump.py Normal file
View File

@@ -0,0 +1,77 @@
#!/usr/bin/env python3
import json, os, signal, sys, time, threading
from datetime import datetime, timezone
from meshtastic.serial_interface import SerialInterface
from meshtastic.mesh_interface import MeshInterface
from pubsub import pub
PORT = os.environ.get("MESH_SERIAL", "/dev/ttyACM0")
OUT = os.environ.get("MESH_DUMP_FILE", "meshtastic-dump.ndjson")
# line-buffered append so you can tail -f safely
f = open(OUT, "a", buffering=1, encoding="utf-8")
def now():
return datetime.now(timezone.utc).isoformat()
def write(kind, payload):
rec = {"ts": now(), "kind": kind, **payload}
f.write(json.dumps(rec, ensure_ascii=False, default=str) + "\n")
# Connect to the node
iface: MeshInterface = SerialInterface(PORT)
# Packet callback: every RF/Mesh packet the node receives/decodes lands here
def on_packet(packet, iface):
# 'packet' already includes decoded fields when available (portnum, payload, position, telemetry, etc.)
write("packet", {"packet": packet})
# Node callback: topology/metadata updates (nodeinfo, hops, lastHeard, etc.)
def on_node(node, iface):
write("node", {"node": node})
iface.onReceive = on_packet
pub.subscribe(on_node, "meshtastic.node")
# Write a little header so you know what you captured
try:
my = getattr(iface, "myInfo", None)
write(
"meta",
{
"event": "started",
"port": PORT,
"my_node_num": getattr(my, "my_node_num", None) if my else None,
},
)
except Exception as e:
write("meta", {"event": "started", "port": PORT, "error": str(e)})
# Keep the process alive until Ctrl-C
def _stop(signum, frame):
write("meta", {"event": "stopping"})
try:
try:
pub.unsubscribe(on_node, "meshtastic.node")
except Exception:
pass
iface.close()
finally:
f.close()
sys.exit(0)
signal.signal(signal.SIGINT, _stop)
signal.signal(signal.SIGTERM, _stop)
# Simple sleep loop; avoids busy-wait
while True:
time.sleep(1)
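The dump file written above is NDJSON: one JSON object per line, each tagged with a `kind` of `meta`, `packet`, or `node`. Consuming it is a line-by-line `json.loads`; a sketch with fabricated sample records (real dumps carry full packet dicts):

```python
import io
import json

# Illustrative stand-in for a meshtastic-dump.ndjson file.
sample = io.StringIO(
    '{"ts": "2025-09-27T00:00:00+00:00", "kind": "meta", "event": "started"}\n'
    '{"ts": "2025-09-27T00:00:01+00:00", "kind": "packet", "packet": {"id": 1}}\n'
    '{"ts": "2025-09-27T00:00:02+00:00", "kind": "node", "node": {"num": 7}}\n'
)

# One JSON object per non-empty line; filter by the "kind" tag.
records = [json.loads(line) for line in sample if line.strip()]
packets = [r for r in records if r["kind"] == "packet"]
print(len(records), len(packets))  # 3 1
```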

BIN
tests/mesh.db Normal file

Binary file not shown.

3902
tests/messages.json Normal file

File diff suppressed because it is too large Load Diff

3653
tests/nodes.json Normal file

File diff suppressed because it is too large Load Diff

1050
tests/test_mesh.py Normal file

File diff suppressed because it is too large Load Diff

21
tests/update.sh Executable file
View File

@@ -0,0 +1,21 @@
#!/usr/bin/env bash
# Copyright (C) 2025 l5yth
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -euo pipefail
sqlite3 ../data/mesh.db ".backup './mesh.db'"
curl http://127.0.0.1:41447/api/nodes | jq > ./nodes.json
curl http://127.0.0.1:41447/api/messages | jq > ./messages.json
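The `.backup` command used above is SQLite's online backup, which is safe while the ingestor keeps writing. The same mechanism is exposed in Python as `sqlite3.Connection.backup`; a sketch with in-memory databases standing in for `../data/mesh.db` and the snapshot file:

```python
import sqlite3

# Source DB standing in for the live ../data/mesh.db.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE nodes (node_id TEXT)")
src.execute("INSERT INTO nodes VALUES ('!a1b2c3d4')")
src.commit()

# Destination standing in for ./mesh.db; on disk you would pass a path.
dst = sqlite3.connect(":memory:")
src.backup(dst)  # online backup, equivalent to the CLI's ".backup"

count = dst.execute("SELECT COUNT(*) FROM nodes").fetchone()[0]
print(count)  # 1
```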

79
web/Dockerfile Normal file
View File

@@ -0,0 +1,79 @@
# Main application builder stage
FROM ruby:3.3-alpine AS builder
# Ensure native extensions are built against musl libc rather than
# using glibc precompiled binaries (which fail on Alpine).
ENV BUNDLE_FORCE_RUBY_PLATFORM=true
# Install build dependencies and SQLite3
RUN apk add --no-cache \
build-base \
sqlite-dev \
linux-headers \
pkgconfig
# Set working directory
WORKDIR /app
# Copy Gemfile and install dependencies
COPY web/Gemfile web/Gemfile.lock* ./
# Install gems with SQLite3 support
RUN bundle config set --local force_ruby_platform true && \
bundle config set --local without 'development test' && \
bundle install --jobs=4 --retry=3
# Production stage
FROM ruby:3.3-alpine AS production
# Install runtime dependencies
RUN apk add --no-cache \
sqlite \
tzdata \
curl
# Create non-root user
RUN addgroup -g 1000 -S potatomesh && \
adduser -u 1000 -S potatomesh -G potatomesh
# Set working directory
WORKDIR /app
# Copy installed gems from builder stage
COPY --from=builder /usr/local/bundle /usr/local/bundle
# Copy application code (exclude Dockerfile from web directory)
COPY --chown=potatomesh:potatomesh web/app.rb web/app.sh web/Gemfile web/Gemfile.lock* web/public/ web/spec/ ./
COPY --chown=potatomesh:potatomesh web/views/ ./views/
# Copy SQL schema files from data directory
COPY --chown=potatomesh:potatomesh data/*.sql /data/
# Create data directory for SQLite database
RUN mkdir -p /app/data && \
chown -R potatomesh:potatomesh /app/data
# Switch to non-root user
USER potatomesh
# Expose port
EXPOSE 41447
# Default environment variables (can be overridden by host)
ENV APP_ENV=production \
MESH_DB=/app/data/mesh.db \
DB_BUSY_TIMEOUT_MS=5000 \
DB_BUSY_MAX_RETRIES=5 \
DB_BUSY_RETRY_DELAY=0.05 \
MAX_JSON_BODY_BYTES=1048576 \
SITE_NAME="Berlin Mesh Network" \
DEFAULT_CHANNEL="#MediumFast" \
DEFAULT_FREQUENCY="868MHz" \
MAP_CENTER_LAT=52.502889 \
MAP_CENTER_LON=13.404194 \
MAX_NODE_DISTANCE_KM=50 \
MATRIX_ROOM="" \
DEBUG=0
# Start the application
CMD ["ruby", "app.rb", "-p", "41447", "-o", "0.0.0.0"]

View File

@@ -1,6 +1,29 @@
# Copyright (C) 2025 l5yth
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
source "https://rubygems.org"
gem "sinatra", "~> 4.0"
gem "sqlite3", "~> 1.7"
gem "rackup", "~> 2.2"
gem "puma", "~> 7.0"
group :test do
gem "rspec", "~> 3.12"
gem "rack-test", "~> 2.1"
gem "rufo", "~> 0.18.1"
gem "simplecov", "~> 0.22", require: false
gem "simplecov_json_formatter", "~> 0.1", require: false
gem "rspec_junit_formatter", "~> 0.6", require: false
end

1165
web/app.rb

File diff suppressed because it is too large Load Diff

View File

@@ -1,5 +1,24 @@
#!/usr/bin/env bash
# Copyright (C) 2025 l5yth
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -euo pipefail
bundle install
exec ruby app.rb -p 41447 -o 127.0.0.1
PORT=${PORT:-41447}
BIND_ADDRESS=${BIND_ADDRESS:-0.0.0.0}
exec ruby app.rb -p "${PORT}" -o "${BIND_ADDRESS}"

0
web/public/.keep Normal file
View File

BIN
web/public/favicon.ico Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 4.2 KiB

View File

@@ -1,455 +0,0 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width,initial-scale=1" />
<title>Meshtastic Berlin</title>
<!-- Leaflet CSS/JS (CDN) -->
<link
rel="stylesheet"
href="https://unpkg.com/leaflet@1.9.4/dist/leaflet.css"
integrity="sha256-p4NxAoJBhIIN+hmNHrzRCf9tD/miZyoHS5obTRR9BMY="
crossorigin=""
/>
<script
src="https://unpkg.com/leaflet@1.9.4/dist/leaflet.js"
integrity="sha256-20nQCchB9co0qIjJZRGuk2/Z9VM+kNiyxNV1lvTlZBo="
crossorigin=""
></script>
<style>
:root { --pad: 16px; }
body { font-family: system-ui, Segoe UI, Roboto, Ubuntu, Arial, sans-serif; margin: var(--pad); padding-bottom: 32px; }
h1 { margin: 0 0 8px }
.meta { color:#555; margin-bottom:12px }
.pill{ display:inline-block; padding:2px 8px; border-radius:999px; background:#eee; font-size:12px }
#map { flex: 1; height: 60vh; border: 1px solid #ddd; border-radius: 8px; }
table { border-collapse: collapse; width: 100%; margin-top: var(--pad); }
th, td { border-bottom: 1px solid #ddd; padding: 6px; text-align: left; }
th { position: sticky; top: 0; background: #fafafa; }
.mono { font-family: ui-monospace, Menlo, Consolas, monospace; }
.row { display: flex; gap: var(--pad); align-items: center; justify-content: space-between; }
.map-row { display: flex; gap: var(--pad); align-items: stretch; }
#chat { flex: 0 0 33%; max-width: 33%; height: 60vh; border: 1px solid #ddd; border-radius: 8px; overflow-y: auto; padding: 6px; font-size: 12px; }
.chat-entry-node { font-family: ui-monospace, Menlo, Consolas, monospace; color: #555 }
.chat-entry-msg { font-family: ui-monospace, Menlo, Consolas, monospace; }
.chat-entry-date { font-family: ui-monospace, Menlo, Consolas, monospace; font-weight: bold; }
.short-name { display:inline-block; border-radius:4px; padding:0 2px; }
.controls { display: flex; gap: 8px; align-items: center; }
button { padding: 6px 10px; border: 1px solid #ccc; background: #fff; border-radius: 6px; cursor: pointer; }
button:hover { background: #f6f6f6; }
label { font-size: 14px; color: #333; }
input[type="text"] { padding: 6px 10px; border: 1px solid #ccc; border-radius: 6px; }
.legend { background: #fff; padding: 6px 8px; border: 1px solid #ccc; border-radius: 4px; font-size: 12px; line-height: 18px; }
.legend span { display: inline-block; width: 12px; height: 12px; margin-right: 6px; vertical-align: middle; }
#map .leaflet-tile { filter: opacity(70%); }
footer { position: fixed; bottom: 0; left: var(--pad); width: calc(100% - 2 * var(--pad)); background: #fafafa; border-top: 1px solid #ddd; text-align: center; font-size: 12px; padding: 4px 0; }
@media (max-width: 768px) {
.map-row { flex-direction: column; }
#map { order: 1; flex: none; max-width: 100%; height: 50vh; }
#chat { order: 2; flex: none; max-width: 100%; height: 30vh; }
}
</style>
</head>
<body>
<h1>Meshtastic Berlin</h1>
<div class="row meta">
<div>
<span id="refreshInfo"></span>
<button id="refreshBtn" type="button">Refresh now</button>
<span id="status" class="pill">loading…</span>
</div>
<div class="controls">
<label><input type="checkbox" id="fitBounds" checked /> Auto-fit map</label>
<input type="text" id="filterInput" placeholder="Filter nodes" />
</div>
</div>
<div class="map-row">
<div id="chat" aria-label="Chat log"></div>
<div id="map" role="region" aria-label="Nodes map"></div>
</div>
<table id="nodes">
<thead>
<tr>
<th>Node ID</th>
<th>Short</th>
<th>Long Name</th>
<th>Last Seen</th>
<th>Role</th>
<th>HW Model</th>
<th>Battery</th>
<th>Voltage</th>
<th>Uptime</th>
<th>Channel Util</th>
<th>Air Util Tx</th>
<th>Latitude</th>
<th>Longitude</th>
<th>Altitude</th>
<th>Last Position</th>
</tr>
</thead>
<tbody></tbody>
</table>
<footer>
PotatoMesh GitHub: <a href="https://github.com/l5yth/potato-mesh" target="_blank">l5yth/potato-mesh</a>
Meshtastic Berlin Matrix:
<a href="https://matrix.to/#/#meshtastic-berlin:matrix.org" target="_blank">#meshtastic-berlin:matrix.org</a>
</footer>
<script>
const statusEl = document.getElementById('status');
const fitBoundsEl = document.getElementById('fitBounds');
const refreshBtn = document.getElementById('refreshBtn');
const filterInput = document.getElementById('filterInput');
const titleEl = document.querySelector('title');
const headerEl = document.querySelector('h1');
const chatEl = document.getElementById('chat');
const refreshInfo = document.getElementById('refreshInfo');
const baseTitle = document.title;
let allNodes = [];
const seenNodeIds = new Set();
const seenMessageIds = new Set();
let lastChatDate;
const NODE_LIMIT = 1000;
const CHAT_LIMIT = 1000;
const REFRESH_MS = 60000;
refreshInfo.textContent = `#MediumFast — auto-refresh every ${REFRESH_MS / 1000} seconds.`;
const MAP_CENTER = L.latLng(52.502889, 13.404194);
const MAX_NODE_DISTANCE_KM = 137;
const roleColors = Object.freeze({
CLIENT: '#A8D5BA',
CLIENT_HIDDEN: '#B8DCA9',
CLIENT_MUTE: '#D2E3A2',
TRACKER: '#E8E6A1',
SENSOR: '#F4E3A3',
LOST_AND_FOUND: '#F9D4A6',
REPEATER: '#F7B7A3',
ROUTER_LATE: '#F29AA3',
ROUTER: '#E88B94'
});
// --- Map setup ---
const map = L.map('map', { worldCopyJump: true });
const tiles = L.tileLayer('https://tiles.stadiamaps.com/tiles/stamen_toner_lite/{z}/{x}/{y}.png', {
maxZoom: 18,
attribution: '&copy; OpenStreetMap contributors &amp; WMF Labs'
}).addTo(map);
// Default view (Berlin center) until first data arrives
map.setView(MAP_CENTER, 10);
const markersLayer = L.layerGroup().addTo(map);
const legend = L.control({ position: 'bottomright' });
legend.onAdd = function () {
const div = L.DomUtil.create('div', 'legend');
for (const [role, color] of Object.entries(roleColors)) {
div.innerHTML += `<div><span style="background:${color}"></span>${role}</div>`;
}
return div;
};
legend.addTo(map);
// --- Helpers ---
function escapeHtml(str) {
return String(str)
.replace(/&/g, '&amp;')
.replace(/</g, '&lt;')
.replace(/>/g, '&gt;')
.replace(/"/g, '&quot;')
.replace(/'/g, '&#39;');
}
function renderShortHtml(short, role){
if (!short) {
return `<span class="short-name" style="background:#ccc">?&nbsp;&nbsp;&nbsp;</span>`;
}
const padded = escapeHtml(String(short).padStart(4, ' ')).replace(/ /g, '&nbsp;');
const color = roleColors[role] || roleColors.CLIENT;
return `<span class="short-name" style="background:${color}">${padded}</span>`;
}
function appendChatEntry(div) {
chatEl.appendChild(div);
while (chatEl.childElementCount > CHAT_LIMIT) {
chatEl.removeChild(chatEl.firstChild);
}
chatEl.scrollTop = chatEl.scrollHeight;
}
function maybeAddDateDivider(ts) {
if (!ts) return;
const d = new Date(ts * 1000);
const key = `${d.getFullYear()}-${pad(d.getMonth() + 1)}-${pad(d.getDate())}`;
if (lastChatDate !== key) {
lastChatDate = key;
const midnight = new Date(d);
midnight.setHours(0, 0, 0, 0);
const div = document.createElement('div');
div.className = 'chat-entry-date';
div.textContent = `-- ${formatDate(midnight)} --`;
appendChatEntry(div);
}
}
function addNewNodeChatEntry(n) {
maybeAddDateDivider(n.first_heard);
const div = document.createElement('div');
const ts = formatTime(new Date(n.first_heard * 1000));
div.className = 'chat-entry-node';
const short = renderShortHtml(n.short_name, n.role);
const longName = escapeHtml(n.long_name || '');
div.innerHTML = `[${ts}] ${short} <em>New node: ${longName}</em>`;
appendChatEntry(div);
}
function addNewMessageChatEntry(m) {
maybeAddDateDivider(m.rx_time);
const div = document.createElement('div');
const ts = formatTime(new Date(m.rx_time * 1000));
const short = renderShortHtml(m.node?.short_name, m.node?.role);
const text = escapeHtml(m.text || '');
div.className = 'chat-entry-msg';
div.innerHTML = `[${ts}] ${short} ${text}`;
appendChatEntry(div);
}
function pad(n) { return String(n).padStart(2, "0"); }
function formatTime(d) {
return pad(d.getHours()) + ":" +
pad(d.getMinutes()) + ":" +
pad(d.getSeconds());
}
function formatDate(d) {
return d.getFullYear() + "-" +
pad(d.getMonth() + 1) + "-" +
pad(d.getDate());
}
function fmtHw(v) {
return v && v !== "UNSET" ? String(v) : "";
}
function fmtCoords(v, d = 5) {
if (v == null || v === '') return "";
const n = Number(v);
return Number.isFinite(n) ? n.toFixed(d) : "";
}
function fmtAlt(v, s) {
return (v == null || v === '') ? "" : `${v}${s}`;
}
function fmtTx(v, d = 3) {
if (v == null || v === '') return "";
const n = Number(v);
return Number.isFinite(n) ? `${n.toFixed(d)}%` : "";
}
function timeHum(unixSec) {
if (!unixSec) return "";
if (unixSec < 0) return "0s";
if (unixSec < 60) return `${unixSec}s`;
if (unixSec < 3600) return `${Math.floor(unixSec/60)}m ${Math.floor((unixSec%60))}s`;
if (unixSec < 86400) return `${Math.floor(unixSec/3600)}h ${Math.floor((unixSec%3600)/60)}m`;
return `${Math.floor(unixSec/86400)}d ${Math.floor((unixSec%86400)/3600)}h`;
}
function timeAgo(unixSec, nowSec = Date.now()/1000) {
if (!unixSec) return "";
const diff = Math.floor(nowSec - Number(unixSec));
if (diff < 0) return "0s";
if (diff < 60) return `${diff}s`;
if (diff < 3600) return `${Math.floor(diff/60)}m ${Math.floor((diff%60))}s`;
if (diff < 86400) return `${Math.floor(diff/3600)}h ${Math.floor((diff%3600)/60)}m`;
return `${Math.floor(diff/86400)}d ${Math.floor((diff%86400)/3600)}h`;
}
async function fetchNodes(limit = NODE_LIMIT) {
const r = await fetch(`/api/nodes?limit=${limit}`, { cache: 'no-store' });
if (!r.ok) throw new Error('HTTP ' + r.status);
return r.json();
}
async function fetchMessages(limit = NODE_LIMIT) {
const r = await fetch(`/api/messages?limit=${limit}`, { cache: 'no-store' });
if (!r.ok) throw new Error('HTTP ' + r.status);
return r.json();
}
function computeDistances(nodes) {
for (const n of nodes) {
const latRaw = n.latitude;
const lonRaw = n.longitude;
if (latRaw == null || latRaw === '' || lonRaw == null || lonRaw === '') {
n.distance_km = null;
continue;
}
const lat = Number(latRaw);
const lon = Number(lonRaw);
if (!Number.isFinite(lat) || !Number.isFinite(lon)) {
n.distance_km = null;
continue;
}
n.distance_km = L.latLng(lat, lon).distanceTo(MAP_CENTER) / 1000;
}
}
function renderTable(nodes, nowSec) {
const tb = document.querySelector('#nodes tbody');
const frag = document.createDocumentFragment();
for (const n of nodes) {
const tr = document.createElement('tr');
tr.innerHTML = `
<td class="mono">${n.node_id || ""}</td>
<td>${renderShortHtml(n.short_name, n.role)}</td>
<td>${n.long_name || ""}</td>
<td>${timeAgo(n.last_heard, nowSec)}</td>
<td>${n.role || "CLIENT"}</td>
<td>${fmtHw(n.hw_model)}</td>
<td>${fmtAlt(n.battery_level, "%")}</td>
<td>${fmtAlt(n.voltage, "V")}</td>
<td>${timeHum(n.uptime_seconds)}</td>
<td>${fmtTx(n.channel_utilization)}</td>
<td>${fmtTx(n.air_util_tx)}</td>
<td>${fmtCoords(n.latitude)}</td>
<td>${fmtCoords(n.longitude)}</td>
<td>${fmtAlt(n.altitude, "m")}</td>
<td class="mono">${n.pos_time_iso ? `${timeAgo(n.position_time, nowSec)}` : ""}</td>`;
frag.appendChild(tr);
}
tb.replaceChildren(frag);
}
function renderMap(nodes, nowSec) {
markersLayer.clearLayers();
const pts = [];
for (const n of nodes) {
const latRaw = n.latitude, lonRaw = n.longitude;
if (latRaw == null || latRaw === '' || lonRaw == null || lonRaw === '') continue;
const lat = Number(latRaw), lon = Number(lonRaw);
if (!Number.isFinite(lat) || !Number.isFinite(lon)) continue;
if (n.distance_km != null && n.distance_km > MAX_NODE_DISTANCE_KM) continue;
const color = roleColors[n.role] || '#3388ff';
const marker = L.circleMarker([lat, lon], {
radius: 9,
color: '#000',
weight: 1,
fillColor: color,
fillOpacity: 0.7,
opacity: 0.7
});
const lines = [
`<b>${n.long_name || ''}</b>`,
`${renderShortHtml(n.short_name, n.role)} <span class="mono">${n.node_id || ''}</span>`,
n.hw_model ? `Model: ${fmtHw(n.hw_model)}` : null,
`Role: ${n.role || 'CLIENT'}`,
(n.battery_level != null ? `Battery: ${fmtAlt(n.battery_level, "%")}, ${fmtAlt(n.voltage, "V")}` : null),
(n.last_heard ? `Last seen: ${timeAgo(n.last_heard, nowSec)}` : null),
(n.pos_time_iso ? `Last Position: ${timeAgo(n.position_time, nowSec)}` : null),
(n.uptime_seconds ? `Uptime: ${timeHum(n.uptime_seconds)}` : null),
].filter(Boolean);
marker.bindPopup(lines.join('<br/>'));
marker.addTo(markersLayer);
pts.push([lat, lon]);
}
if (pts.length && fitBoundsEl.checked) {
const b = L.latLngBounds(pts);
map.fitBounds(b.pad(0.2), { animate: false });
}
}
function applyFilter() {
const q = filterInput.value.trim().toLowerCase();
const nodes = !q ? allNodes : allNodes.filter(n => {
return [n.node_id, n.short_name, n.long_name]
.filter(Boolean)
.some(v => v.toLowerCase().includes(q));
});
const nowSec = Date.now()/1000;
renderTable(nodes, nowSec);
renderMap(nodes, nowSec);
updateCount(nodes, nowSec);
updateRefreshInfo(nodes, nowSec);
}
filterInput.addEventListener('input', applyFilter);
async function refresh() {
try {
statusEl.textContent = 'refreshing…';
const nodes = await fetchNodes();
computeDistances(nodes);
const newNodes = [];
for (const n of nodes) {
if (n.node_id && !seenNodeIds.has(n.node_id)) {
newNodes.push(n);
}
}
const messages = await fetchMessages();
const newMessages = [];
for (const m of messages) {
if (m.id && !seenMessageIds.has(m.id)) {
newMessages.push(m);
}
}
const entries = [];
for (const n of newNodes) entries.push({ type: 'node', ts: n.first_heard ?? 0, item: n });
for (const m of newMessages) entries.push({ type: 'msg', ts: m.rx_time ?? 0, item: m });
entries.sort((a, b) => {
if (a.ts !== b.ts) return a.ts - b.ts;
return a.type === 'node' && b.type === 'msg' ? -1 : a.type === 'msg' && b.type === 'node' ? 1 : 0;
});
for (const e of entries) {
if (e.type === 'node') {
addNewNodeChatEntry(e.item);
if (e.item.node_id) seenNodeIds.add(e.item.node_id);
} else {
addNewMessageChatEntry(e.item);
if (e.item.id) seenMessageIds.add(e.item.id);
}
}
allNodes = nodes;
applyFilter();
statusEl.textContent = 'updated ' + new Date().toLocaleTimeString();
} catch (e) {
statusEl.textContent = 'error: ' + e.message;
console.error(e);
}
}
refresh();
setInterval(refresh, REFRESH_MS);
refreshBtn.addEventListener('click', refresh);
function updateCount(nodes, nowSec) {
const dayAgoSec = nowSec - 86400;
const count = nodes.filter(n => n.last_heard && Number(n.last_heard) >= dayAgoSec).length;
const text = `${baseTitle} (${count})`;
titleEl.textContent = text;
headerEl.textContent = text;
}
function updateRefreshInfo(nodes, nowSec) {
const windows = [
{ label: 'hour', secs: 3600 },
{ label: 'day', secs: 86400 },
{ label: 'week', secs: 7 * 86400 },
];
const counts = windows.map(w => {
const c = nodes.filter(n => n.last_heard && nowSec - Number(n.last_heard) <= w.secs).length;
return `${c}/${w.label}`;
}).join(', ');
refreshInfo.textContent = `#MediumFast — active nodes: ${counts} — auto-refresh every ${REFRESH_MS / 1000} seconds.`;
}
</script>
</body>
</html>
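The removed page formats "last heard" ages with the `timeAgo` helper above, bucketing into seconds, minutes, hours, and days. A Python port of the same bucketing logic (a faithful sketch of the JS, not code from the repo's server side):

```python
def time_ago(unix_sec, now_sec):
    """Human-readable age of a unix timestamp, mirroring the JS timeAgo."""
    if not unix_sec:
        return ""          # missing/zero timestamp renders as empty
    diff = int(now_sec - unix_sec)
    if diff < 0:
        return "0s"
    if diff < 60:
        return f"{diff}s"
    if diff < 3600:
        return f"{diff // 60}m {diff % 60}s"
    if diff < 86400:
        return f"{diff // 3600}h {(diff % 3600) // 60}m"
    return f"{diff // 86400}d {(diff % 86400) // 3600}h"

print(time_ago(10, 100))    # 1m 30s
print(time_ago(100, 4000))  # 1h 5m
```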

File diff suppressed because one or more lines are too long

After

Width:  |  Height:  |  Size: 65 KiB

1350
web/spec/app_spec.rb Normal file

File diff suppressed because it is too large Load Diff

62
web/spec/spec_helper.rb Normal file
View File

@@ -0,0 +1,62 @@
# Copyright (C) 2025 l5yth
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# frozen_string_literal: true
require "simplecov"
require "simplecov_json_formatter"
SimpleCov.formatters = SimpleCov::Formatter::MultiFormatter.new(
[
SimpleCov::Formatter::SimpleFormatter,
SimpleCov::Formatter::HTMLFormatter,
SimpleCov::Formatter::JSONFormatter,
],
)
SimpleCov.start do
enable_coverage :branch
add_filter "/spec/"
end
require "tmpdir"
require "fileutils"
ENV["RACK_ENV"] = "test"
SPEC_TMPDIR = Dir.mktmpdir("potato-mesh-spec-")
ENV["MESH_DB"] = File.join(SPEC_TMPDIR, "mesh.db")
require_relative "../app"
require "rack/test"
require "rspec"
RSpec.configure do |config|
config.expect_with :rspec do |expectations|
expectations.include_chain_clauses_in_custom_matcher_descriptions = true
end
config.mock_with :rspec do |mocks|
mocks.verify_partial_doubles = true
end
config.shared_context_metadata_behavior = :apply_to_host_groups
config.include Rack::Test::Methods
config.after(:suite) do
FileUtils.remove_entry(SPEC_TMPDIR) if File.directory?(SPEC_TMPDIR)
end
end

1597
web/views/index.erb Normal file

File diff suppressed because it is too large Load Diff