🥔 PotatoMesh
A simple Meshtastic-powered node dashboard for your local community. No MQTT clutter, just local LoRa aether.
- Web app with chat window and map view showing nodes and messages.
- API to POST (authenticated) and to GET nodes and messages.
- Supplemental Python ingestor to feed the web app's POST APIs with data from a remote node.
- Shows new node notifications (first seen) in chat.
- Allows searching and filtering for nodes in map and table view.
Live demo for Berlin #MediumFast: potatomesh.net
🐳 Quick Start with Docker
./configure.sh # Configure your setup
docker-compose up -d # Start services
docker-compose logs -f # View logs
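configure.sh writes an .env file that Docker Compose reads. A minimal fragment might look like this (values are illustrative; the variables match the ones documented under "Web App" below):

```shell
# .env - illustrative values, adjust for your deployment
SITE_NAME="Meshtastic Berlin"
DEFAULT_CHANNEL="#MediumFast"
DEFAULT_FREQUENCY="868MHz"
MAP_CENTER_LAT=52.502889
MAP_CENTER_LON=13.404194
MAX_NODE_DISTANCE_KM=137
API_TOKEN="change-me"
```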
Web App
Requires Ruby for the Sinatra web app and SQLite3 for the app's database.
pacman -S ruby sqlite3
gem install sinatra sqlite3 rackup puma rspec rack-test rufo
cd ./web
bundle install
Run
Check out the app.sh run script in the ./web directory.
API_TOKEN="1eb140fd-cab4-40be-b862-41c607762246" ./app.sh
== Sinatra (v4.1.1) has taken the stage on 41447 for development with backup from Puma
Puma starting in single mode...
[...]
* Environment: development
* PID: 188487
* Listening on http://127.0.0.1:41447
Check 127.0.0.1:41447 for a development preview of the node map. API_TOKEN must be
set; it is required to authorize requests to the API's POST endpoints.
The web app can be configured with environment variables (defaults shown):
- SITE_NAME - title and header shown in the UI (default: "Meshtastic Berlin")
- DEFAULT_CHANNEL - default channel shown in the UI (default: "#MediumFast")
- DEFAULT_FREQUENCY - default frequency shown in the UI (default: "868MHz")
- MAP_CENTER_LAT / MAP_CENTER_LON - default map center coordinates (default: 52.502889 / 13.404194)
- MAX_NODE_DISTANCE_KM - hide nodes farther than this distance from the center (default: 137)
- MATRIX_ROOM - Matrix room ID for a footer link (default: #meshtastic-berlin:matrix.org)
Example:
SITE_NAME="Meshtastic Berlin" MAP_CENTER_LAT=52.502889 MAP_CENTER_LON=13.404194 MAX_NODE_DISTANCE_KM=137 MATRIX_ROOM="#meshtastic-berlin:matrix.org" ./app.sh
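The MAX_NODE_DISTANCE_KM cutoff amounts to a great-circle distance check against the configured map center. A minimal sketch of such a filter (not the app's actual implementation) using the haversine formula:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Defaults from the environment variables above.
CENTER = (52.502889, 13.404194)  # MAP_CENTER_LAT / MAP_CENTER_LON
MAX_KM = 137                     # MAX_NODE_DISTANCE_KM

def visible(node_lat, node_lon):
    """True if a node is within MAX_KM of the map center."""
    return haversine_km(CENTER[0], CENTER[1], node_lat, node_lon) <= MAX_KM
```

A node in central Berlin passes the check; a node in, say, Munich (roughly 500 km away) is hidden.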
API
The web app contains an API:
- GET /api/nodes?limit=100 - returns the latest 100 nodes reported to the app
- GET /api/messages?limit=100 - returns the latest 100 messages
- POST /api/nodes - upserts nodes provided as a JSON object mapping node IDs to node data (requires Authorization: Bearer <API_TOKEN>)
- POST /api/messages - appends messages provided as a JSON object or array (requires Authorization: Bearer <API_TOKEN>)
The API_TOKEN environment variable must be set to a non-empty value and match the token supplied in the Authorization header for POST requests.
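As a sketch, a node upsert could be posted from Python's standard library like this (the node fields are illustrative; the exact node schema is whatever the ingestor reports):

```python
import json
import urllib.request

API_TOKEN = "1eb140fd-cab4-40be-b862-41c607762246"  # must match the server's API_TOKEN
BASE_URL = "http://127.0.0.1:41447"

# POST /api/nodes takes a JSON object mapping node IDs to node data.
payload = {
    "!849b7154": {"shortName": "7154"},  # illustrative node data
}
req = urllib.request.Request(
    f"{BASE_URL}/api/nodes",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_TOKEN}",
    },
    method="POST",
)
# urllib.request.urlopen(req) would send it; the GET endpoints need no token:
#   urllib.request.urlopen(f"{BASE_URL}/api/nodes?limit=100")
```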
Python Ingestor
The web app is not meant to run locally attached to a Meshtastic node, but rather on a remote host without access to a physical Meshtastic device. It therefore only accepts data through the API POST endpoints. The benefit is that multiple nodes across the community can feed the dashboard with data; the web app keys messages and nodes by ID, so there is no duplication.
For convenience, the directory ./data contains a Python ingestor. It connects to a local
Meshtastic node via serial port to gather nodes and messages seen by the node.
pacman -S python
cd ./data
python -m venv .venv
source .venv/bin/activate
pip install -U meshtastic
It uses the Meshtastic Python library to ingest mesh data and post nodes and messages to the configured potato-mesh instance.
Check out the mesh.sh ingestor script in the ./data directory.
POTATOMESH_INSTANCE=http://127.0.0.1:41447 API_TOKEN=1eb140fd-cab4-40be-b862-41c607762246 MESH_SERIAL=/dev/ttyACM0 DEBUG=1 ./mesh.sh
Mesh daemon: nodes+messages → http://127.0.0.1 | port=41447 | channel=0
[...]
[debug] upserted node !849b7154 shortName='7154'
[debug] upserted node !ba653ae8 shortName='3ae8'
[debug] upserted node !16ced364 shortName='Pat'
[debug] stored message from '!9ee71c38' to '^all' ch=0 text='Guten Morgen!'
Run the script with POTATOMESH_INSTANCE and API_TOKEN to keep updating
node records and parsing new incoming messages. Enable debug output with DEBUG=1,
specify the serial port with MESH_SERIAL (default /dev/ttyACM0), etc.
License
Apache v2.0. Contact: COM0@l5y.tech
