73 Commits

Author SHA1 Message Date
Joel Krauska
8bea2bd744 ruff 2025-11-25 18:12:04 -08:00
Pablo Revilla
7edb7b5c38 Fix the net page as it was not showing the date information 2025-11-25 09:31:56 -08:00
Pablo Revilla
17a1265842 Fix the net page as it was not showing the date information 2025-11-25 09:05:37 -08:00
Pablo Revilla
cc03d237bb Fix the net page as it was not showing the date information 2025-11-24 19:38:46 -08:00
Pablo Revilla
06cc15a03c Fix the net page as it was not showing the date information 2025-11-24 18:46:28 -08:00
Pablo Revilla
99cb5654e4 More changes... almost ready for release.
Ranamed 2 pages for easy or reading.
2025-11-24 16:11:11 -08:00
Pablo Revilla
3c8fa0185e Remamed new_node to node. shorter and descriptive. 2025-11-24 10:20:06 -08:00
Pablo Revilla
1e85aa01c6 Remamed new_node to node. shorter and descriptive. 2025-11-24 10:16:30 -08:00
Pablo Revilla
535c5c8ada Remamed new_node to node. shorter and descriptive. 2025-11-24 10:06:43 -08:00
Pablo Revilla
4a5b982e6f Remamed new_node to node. shorter and descriptive. 2025-11-24 09:25:09 -08:00
Pablo Revilla
a71f371c85 Remamed new_node to node. shorter and descriptive. 2025-11-24 09:12:47 -08:00
Pablo Revilla
4150953b96 Remamed new_node to node. shorter and descriptive. 2025-11-24 09:09:51 -08:00
Pablo Revilla
0eed8f8001 Finishing up all the pages for the 3.0 release.
Now all pages are functional.
2025-11-21 15:41:25 -08:00
Pablo Revilla
14aabc3b10 Merge remote-tracking branch 'origin/dev-v3' into dev-v3 2025-11-21 13:47:37 -08:00
Pablo Revilla
fc01cb6a85 Finishing up all the pages for the 3.0 release.
Now all pages are functional.
2025-11-21 13:47:22 -08:00
Joel Krauska
5214b80816 another compatibility fix when _us is empty and we need to sort by BOTH old and new 2025-11-21 12:24:11 -08:00
Joel Krauska
73cd325b35 Make the robots do our bidding 2025-11-21 12:07:14 -08:00
Joel Krauska
f89686fb88 fix 0 epoch dates in /chat 2025-11-21 12:06:50 -08:00
Joel Krauska
0c89b3ec22 use sys.executable 2025-11-21 12:06:50 -08:00
Joel Krauska
9cd1975278 pyproject.toml requirements 2025-11-21 12:06:50 -08:00
Pablo Revilla
052a9460ca Finishing up all the pages for the 3.0 release.
Now all pages are functional.
2025-11-21 11:49:11 -08:00
Pablo Revilla
af6bb0fa64 Merge remote-tracking branch 'origin/dev-v3' into dev-v3 2025-11-21 11:43:07 -08:00
Pablo Revilla
8fae62e51a Finishing up all the pages for the 3.0 release.
Now all pages are functional.
2025-11-21 11:41:50 -08:00
Joel Krauska
4af1aac6ec more ruff 2025-11-21 11:37:24 -08:00
Joel Krauska
ed695684d9 fix ruff format 2025-11-21 11:36:48 -08:00
Pablo Revilla
5e0852e558 Finishing up all the pages for the 3.0 release.
Now all pages are functional.
2025-11-21 11:10:16 -08:00
Pablo Revilla
e135630f8d Finishing up all the pages for the 3.0 release.
Now all pages are functional.
2025-11-21 10:57:18 -08:00
Pablo Revilla
5f5ae75d84 Worked on /api/packet. Needed to modify
- Added new api endpoint /api/packets_seen
- Modified web.py and store.py to support changes to APIs.
- Started to work on new_node.html and new_packet.html for presentation of data.
2025-11-19 11:18:34 -08:00
Pablo Revilla
39c0dd589d Worked on /api/packet. Needed to modify
- Added new api endpoint /api/packets_seen
- Modified web.py and store.py to support changes to APIs.
- Started to work on new_node.html and new_packet.html for presentation of data.
2025-11-13 15:28:45 -08:00
Pablo Revilla
7411c7e8ee Worked on /api/packet. Needed to modify
- Store.py to read the new time data
- api.py to present the new time data
- firehose.html chat.html and map.html now use the new apis and the time is the browser local time
2025-11-10 21:51:48 -08:00
Pablo Revilla
27daa92694 Worked on /api/packet. Needed to modify
- Store.py to read the new time data
- api.py to present the new time data
- firehose.html chat.html and map.html now use the new apis and the time is the browser local time
2025-11-10 21:33:50 -08:00
Óscar García Amor
d4f251f1b6 Improves container build (#94) 2025-11-10 10:32:06 -08:00
Pablo Revilla
ac4ac9264f Worked on /api/packet. Needed to modify
- Store.py to read the new time data
- api.py to present the new time data
- firehose.html chat.html and map.html now use the new apis and the time is the browser local time
2025-11-05 20:46:12 -08:00
Pablo Revilla
4a3f205d26 Worked on /api/packet. Needed to modify
- Store.py to read the new time data
- api.py to present the new time data
- firehose.html chat.html and map.html now use the new apis and the time is the browser local time
2025-11-05 19:07:23 -08:00
Joel Krauska
9fa874762e Add us first/last timestamps to node table too 2025-11-05 16:48:29 -08:00
Joel Krauska
e0d8ceecac Alembic was blocking mqtt logs 2025-11-05 14:36:50 -08:00
Joel Krauska
67738105c8 Summary of 3.0.0 stuff 2025-11-04 21:33:08 -08:00
Joel Krauska
04e76ebd28 graphviz for dot in Container 2025-11-04 21:12:22 -08:00
Joel Krauska
04051bc00a setup-dev 2025-11-04 21:11:38 -08:00
Joel Krauska
f903c82c79 Docker Docs 2025-11-04 21:04:20 -08:00
Joel Krauska
4b9dfba03d ruff 2025-11-04 20:57:58 -08:00
Joel Krauska
70f727a6dd backups and cleanups are different 2025-11-04 20:55:55 -08:00
Joel Krauska
bc70b5c39d DB Backups 2025-11-04 20:51:50 -08:00
Jim Schrempp
a65de73b3a Traceroute Return Path logged and displayed (#97)
* traceroute returns are now logged and /packetlist now graphs the correct data for a return route
* now using alembic to update schema
* HOWTO - Alembic

---------

Co-authored-by: Joel Krauska <jkrauska@gmail.com>
2025-11-04 20:36:24 -08:00
Joel Krauska
cc053951b1 make /app owned by ap0p 2025-11-04 20:16:41 -08:00
Joel Krauska
fcff4f5849 checkout and containerfile 2025-11-04 20:04:05 -08:00
Joel Krauska
4e9f121514 fix symlink 2025-11-04 20:00:58 -08:00
Joel Krauska
c0ed5031e6 symlink 2025-11-04 20:00:01 -08:00
Joel Krauska
a6b1e30d29 auto build containers 2025-11-04 19:57:49 -08:00
Joel Krauska
24de8e73fb Container using slim/uv 2025-11-04 19:51:19 -08:00
Joel Krauska
b86af326af improve migrations and fix logging problem with mqtt 2025-11-04 16:27:18 -08:00
Joel Krauska
0a0ec5c45f remove unused loop 2025-11-03 20:23:21 -08:00
Joel Krauska
9ac045a1c5 fallback if missing config 2025-11-03 20:18:17 -08:00
Joel Krauska
e343d6aa15 mvrun work 2025-11-03 20:13:36 -08:00
Óscar García Amor
4de92da1ae Set dbcleanup.log location configurable 2025-11-03 20:07:34 -08:00
Óscar García Amor
74369deaea Improves arguments in mvrun.py 2025-11-03 20:07:34 -08:00
Joel Krauska
44671a1358 vuln 2025-11-03 20:05:48 -08:00
Joel Krauska
64261d3bc4 ruff and docs 2025-11-03 20:05:04 -08:00
Joel Krauska
fe59b42a53 break out api calls in to their own file to reduce footprint 2025-11-03 19:07:39 -08:00
Joel Krauska
f8ed76b41e alembic log format 2025-11-03 15:07:17 -08:00
Joel Krauska
7d0d704412 health endpoint 2025-11-03 15:05:28 -08:00
Joel Krauska
991794ed3d ignore other database files 2025-11-03 15:00:05 -08:00
Joel Krauska
87ade281ba update api docs 2025-11-03 14:52:34 -08:00
Joel Krauska
4c3858958b rm 2025-11-03 14:50:55 -08:00
Joel Krauska
0139169c7d more doc tidy 2025-11-03 14:50:43 -08:00
Joel Krauska
0b438366f1 add readme in docs: 2025-11-03 14:48:09 -08:00
Joel Krauska
9e38a3a394 remove old migrate script 2025-11-03 14:47:45 -08:00
Joel Krauska
cf55334165 move technical docs 2025-11-03 14:47:29 -08:00
Joel Krauska
dda94aa2cb add /version json endpoint 2025-11-03 14:26:49 -08:00
Joel Krauska
64169787b3 modify alembic to support cleaner migrations 2025-11-03 14:11:42 -08:00
Joel Krauska
fa28f6b63f Remove old index notes script -- no longer needed 2025-11-03 13:29:11 -08:00
Joel Krauska
5ca3b472a6 Store UTC int time in DB (#81)
* use UTC int time
2025-11-03 13:26:44 -08:00
Joel Krauska
8ec44ad552 Add alembic DB schema management (#86)
* Use alembic
* add creation helper
* example migration tool
2025-11-03 12:53:34 -08:00
70 changed files with 7028 additions and 4069 deletions

.dockerignore (new file, +10 lines)

@@ -0,0 +1,10 @@
# This keeps Docker from including hostOS virtual environment folders
env/
.venv/
# Database files and backups
*.db
*.db-shm
*.db-wal
backups/
*.db.gz

.github/workflows/container.yml (vendored, new file, +52 lines)

@@ -0,0 +1,52 @@
name: Build container
on:
push:
jobs:
docker:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Docker meta
id: meta
uses: docker/metadata-action@v5
with:
# list of Docker images to use as base name for tags
images: |
ghcr.io/${{ github.repository }}
# generate Docker tags based on the following events/attributes
tags: |
type=ref,event=branch
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
type=match,pattern=v\d.\d.\d,value=latest
- name: Login to GitHub Container Registry
if: github.event_name != 'pull_request'
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Build and push
uses: docker/build-push-action@v6
with:
context: .
file: ./Containerfile
push: ${{ github.event_name != 'pull_request' }}
labels: ${{ steps.meta.outputs.labels }}
tags: ${{ steps.meta.outputs.tags }}
platforms: linux/amd64,linux/arm64
# optional cache (speeds up rebuilds)
cache-from: type=gha
cache-to: type=gha,mode=max

.gitignore (vendored, +36 lines)

@@ -1,11 +1,47 @@
env/*
__pycache__/*
meshview/__pycache__/*
alembic/__pycache__/*
meshtastic/protobuf/*
# Database files
packets.db
packets*.db
*.db
*.db-shm
*.db-wal
# Database backups
backups/
*.db.gz
# Process files
meshview-db.pid
meshview-web.pid
# Config and logs
/table_details.py
config.ini
*.log
# Screenshots
screenshots/*
# Python
python/nanopb
__pycache__/
*.pyc
*.pyo
*.pyd
.Python
# IDE
.vscode/
.idea/
*.swp
*.swo
*~
# OS
.DS_Store
Thumbs.db

AGENTS.md (new file, +204 lines)

@@ -0,0 +1,204 @@
# AI Agent Guidelines for Meshview
This document provides context and guidelines for AI coding assistants working on the Meshview project.
## Project Overview
Meshview is a real-time monitoring and diagnostic tool for Meshtastic mesh networks. It provides web-based visualization and analysis of network activity, including:
- Real-time packet monitoring from MQTT streams
- Interactive map visualization of node locations
- Network topology graphs showing connectivity
- Message traffic analysis and conversation tracking
- Node statistics and telemetry data
- Packet inspection and traceroute analysis
## Architecture
### Core Components
1. **MQTT Reader** (`meshview/mqtt_reader.py`) - Subscribes to MQTT topics and receives mesh packets
2. **Database Manager** (`meshview/database.py`, `startdb.py`) - Handles database initialization and migrations
3. **MQTT Store** (`meshview/mqtt_store.py`) - Processes and stores packets in the database
4. **Web Server** (`meshview/web.py`, `main.py`) - Serves the web interface and API endpoints
5. **API Layer** (`meshview/web_api/api.py`) - REST API endpoints for data access
6. **Models** (`meshview/models.py`) - SQLAlchemy database models
7. **Decode Payload** (`meshview/decode_payload.py`) - Protobuf message decoding
### Technology Stack
- **Python 3.13+** - Main language
- **aiohttp** - Async web framework
- **aiomqtt** - Async MQTT client
- **SQLAlchemy (async)** - ORM with async support
- **Alembic** - Database migrations
- **Jinja2** - Template engine
- **Protobuf** - Message serialization (Meshtastic protocol)
- **SQLite/PostgreSQL** - Database backends (SQLite default, PostgreSQL via asyncpg)
### Key Patterns
- **Async/Await** - All I/O operations are asynchronous
- **Database Migrations** - Use Alembic for schema changes (see `docs/Database-Changes-With-Alembic.md`)
- **Configuration** - INI file-based config (`config.ini`, see `sample.config.ini`)
- **Modular API** - API routes separated into `meshview/web_api/` module
## Project Structure
```
meshview/
├── alembic/ # Database migration scripts
├── docs/ # Technical documentation
├── meshview/ # Main application package
│ ├── static/ # Static web assets (HTML, JS, CSS)
│ ├── templates/ # Jinja2 HTML templates
│ ├── web_api/ # API route handlers
│ └── *.py # Core modules
├── main.py # Web server entry point
├── startdb.py # Database manager entry point
├── mvrun.py # Combined runner (starts both services)
├── config.ini # Runtime configuration
└── requirements.txt # Python dependencies
```
## Development Workflow
### Setup
1. Use Python 3.13+ virtual environment
### Running
- **Database**: `./env/bin/python startdb.py`
- **Web Server**: `./env/bin/python main.py`
- **Both**: `./env/bin/python mvrun.py`
## Code Style
- **Line length**: 100 characters (see `pyproject.toml`)
- **Linting**: Ruff (configured in `pyproject.toml`)
- **Formatting**: Ruff formatter
- **Type hints**: Preferred but not strictly required
- **Async**: Use `async def` and `await` for I/O operations
## Important Files
### Configuration
- `config.ini` - Runtime configuration (server, MQTT, database, cleanup)
- `sample.config.ini` - Template configuration file
- `alembic.ini` - Alembic migration configuration
### Database
- `meshview/models.py` - SQLAlchemy models (Packet, Node, Traceroute, etc.)
- `meshview/database.py` - Database initialization and session management
- `alembic/versions/` - Migration scripts
### Core Logic
- `meshview/mqtt_reader.py` - MQTT subscription and message reception
- `meshview/mqtt_store.py` - Packet processing and storage
- `meshview/decode_payload.py` - Protobuf decoding
- `meshview/web.py` - Web server routes and handlers
- `meshview/web_api/api.py` - REST API endpoints
### Templates
- `meshview/templates/` - Jinja2 HTML templates
- `meshview/static/` - Static files (HTML pages, JS, CSS)
## Common Tasks
### Adding a New API Endpoint
1. Add route handler in `meshview/web_api/api.py`
2. Register route in `meshview/web.py` (if needed)
3. Update `docs/API_Documentation.md` if public API
### Database Schema Changes
1. Modify models in `meshview/models.py`
2. Create migration: `alembic revision --autogenerate -m "description"`
3. Review generated migration in `alembic/versions/`
4. Test migration: `alembic upgrade head`
5. **Never** modify existing migration files after they've been applied
### Adding a New Web Page
1. Create template in `meshview/templates/`
2. Add route in `meshview/web.py`
3. Add navigation link if needed (check existing templates for pattern)
4. Add static assets if needed in `meshview/static/`
### Processing New Packet Types
1. Check `meshview/decode_payload.py` for existing decoders
2. Add decoder function if new type
3. Update `meshview/mqtt_store.py` to handle new packet type
4. Update database models if new data needs storage
## Key Concepts
### Meshtastic Protocol
- Uses Protobuf for message serialization
- Packets contain various message types (text, position, telemetry, etc.)
- MQTT topics follow pattern: `msh/{region}/{subregion}/#`
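The topic pattern above uses MQTT subscription wildcards: `#` matches any number of remaining levels and `+` matches exactly one level. As an illustrative sketch only (real clients such as aiomqtt/paho implement this internally), filter matching works like this:

```python
def topic_matches(topic_filter: str, topic: str) -> bool:
    """Check an MQTT topic against a subscription filter.

    '#' matches all remaining levels; '+' matches exactly one level.
    Illustrative sketch only -- not part of the Meshview codebase.
    """
    f_parts = topic_filter.split("/")
    t_parts = topic.split("/")
    for i, part in enumerate(f_parts):
        if part == "#":            # multi-level wildcard: matches the rest
            return True
        if i >= len(t_parts):      # topic ran out of levels
            return False
        if part != "+" and part != t_parts[i]:
            return False
    return len(f_parts) == len(t_parts)
```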
### Database Schema
- **packet** - Raw packet data
- **node** - Mesh node information
- **traceroute** - Network path information
- **packet_seen** - Packet observation records
### Real-time Updates
- Web pages use Server-Sent Events (SSE) for live updates
- Map and firehose pages auto-refresh based on config intervals
- API endpoints return JSON for programmatic access
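Server-Sent Events arrive as plain-text `field: value` blocks separated by blank lines. A minimal parser for one event block might look like this (a sketch of the wire format, not the project's actual client code):

```python
def parse_sse_event(block: str) -> dict:
    """Parse one SSE event block into a field -> value dict.

    Per the SSE format, repeated fields (usually 'data') are joined with
    newlines and comment lines (starting with ':') are ignored.
    """
    fields: dict[str, list[str]] = {}
    for line in block.splitlines():
        if not line or line.startswith(":"):
            continue  # blank line or keep-alive comment
        name, _, value = line.partition(":")
        fields.setdefault(name, []).append(value.removeprefix(" "))
    return {name: "\n".join(values) for name, values in fields.items()}
```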
## Best Practices
1. **Always use async/await** for database and network operations
2. **Use Alembic** for all database schema changes
3. **Follow existing patterns** - check similar code before adding new features
4. **Update documentation** - keep `docs/` and README current
5. **Test migrations** - verify migrations work both up and down
6. **Handle errors gracefully** - log errors, don't crash on bad packets
7. **Respect configuration** - use `config.ini` values, don't hardcode
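Practice 6 above (log errors, don't crash on bad packets) amounts to wrapping each decode in a catch-log-skip guard. A sketch of the pattern, where `raw.decode` stands in for the project's real protobuf decoding:

```python
import logging

logger = logging.getLogger(__name__)

def safe_handle(raw: bytes):
    """Decode a packet, logging failures instead of crashing the reader loop.

    The decode call is a placeholder for real protobuf parsing; the
    wrapper pattern (catch, log, return None) is the point of the sketch.
    """
    try:
        return raw.decode("utf-8")
    except (UnicodeDecodeError, ValueError):
        logger.exception("Dropping malformed packet (%d bytes)", len(raw))
        return None
```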
## Common Pitfalls
- **Don't modify applied migrations** - create new ones instead
- **Don't block the event loop** - use async I/O, not sync
- **Don't forget timezone handling** - timestamps are stored in UTC
- **Don't hardcode paths** - use configuration values
- **Don't ignore MQTT reconnection** - handle connection failures gracefully
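For the timezone pitfall: since timestamps are stored as UTC, the safe pattern is to attach UTC explicitly before converting, rather than calling `datetime.fromtimestamp()` bare (which silently assumes the host's local zone). A minimal sketch:

```python
from datetime import datetime, timezone

def utc_epoch_to_local(epoch_s: int, tz=None) -> datetime:
    """Convert a stored UTC epoch integer to an aware datetime.

    Attach UTC explicitly, then convert with astimezone();
    tz=None converts to the host's local timezone.
    """
    return datetime.fromtimestamp(epoch_s, tz=timezone.utc).astimezone(tz)
```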
## Resources
- **Main README**: `README.md` - Installation and basic usage
- **Docker Guide**: `README-Docker.md` - Container deployment
- **API Docs**: `docs/API_Documentation.md` - API endpoint reference
- **Migration Guide**: `docs/Database-Changes-With-Alembic.md` - Database workflow
- **Contributing**: `CONTRIBUTING.md` - Contribution guidelines
## Version Information
- **Current Version**: 3.0.0 (November 2025)
- **Python Requirement**: 3.13+
- **Key Features**: Alembic migrations, automated backups, Docker support, traceroute return paths
## Rules for robots
- Always run `ruff check` and `ruff format` after making changes (Python changes only)
---
When working on this project, prioritize:
1. Maintaining async patterns
2. Following existing code structure
3. Using proper database migrations
4. Keeping documentation updated
5. Testing changes thoroughly

Containerfile (new file, +80 lines)

@@ -0,0 +1,80 @@
# Build Image
# Uses python:3.13-slim because no native dependencies are needed for meshview itself
# (everything is available as a wheel)
FROM docker.io/python:3.13-slim AS meshview-build
RUN apt-get update && \
apt-get install -y --no-install-recommends curl patch && \
rm -rf /var/lib/apt/lists/*
# Add a non-root user/group
ARG APP_USER=app
RUN useradd -m -u 10001 -s /bin/bash ${APP_USER}
# Install uv and put it on PATH system-wide
RUN curl -LsSf https://astral.sh/uv/install.sh | sh \
&& install -m 0755 /root/.local/bin/uv /usr/local/bin/uv
WORKDIR /app
RUN chown -R ${APP_USER}:${APP_USER} /app
# Copy deps first for caching
COPY --chown=${APP_USER}:${APP_USER} pyproject.toml uv.lock* requirements*.txt ./
# Optional: wheels-only to avoid slow source builds
ENV UV_NO_BUILD=1
RUN uv venv /opt/venv
# RUN uv sync --frozen
ENV VIRTUAL_ENV=/opt/venv
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
RUN uv pip install --no-cache-dir --upgrade pip \
&& if [ -f requirements.txt ]; then uv pip install --only-binary=:all: -r requirements.txt; fi
# Copy app code
COPY --chown=${APP_USER}:${APP_USER} . .
# Patch config
RUN patch sample.config.ini < container/config.patch
# Clean
RUN rm -rf /app/.git* && \
rm -rf /app/.pre-commit-config.yaml && \
rm -rf /app/*.md && \
rm -rf /app/COPYING && \
rm -rf /app/Containerfile && \
rm -rf /app/Dockerfile && \
rm -rf /app/container && \
rm -rf /app/docker && \
rm -rf /app/docs && \
rm -rf /app/pyproject.toml && \
rm -rf /app/requirements.txt && \
rm -rf /app/screenshots
# Prepare /app and /opt to copy
RUN mkdir -p /meshview && \
mv /app /opt /meshview
# Use a clean container for install
FROM docker.io/python:3.13-slim
ARG APP_USER=app
COPY --from=meshview-build /meshview /
RUN apt-get update && \
apt-get install -y --no-install-recommends graphviz && \
rm -rf /var/lib/apt/lists/* && \
useradd -m -u 10001 -s /bin/bash ${APP_USER} && \
mkdir -p /etc/meshview /var/lib/meshview /var/log/meshview && \
mv /app/sample.config.ini /etc/meshview/config.ini && \
chown -R ${APP_USER}:${APP_USER} /var/lib/meshview /var/log/meshview
# Drop privileges
USER ${APP_USER}
WORKDIR /app
ENTRYPOINT [ "/opt/venv/bin/python", "mvrun.py"]
CMD ["--pid_dir", "/tmp", "--py_exec", "/opt/venv/bin/python", "--config", "/etc/meshview/config.ini" ]
EXPOSE 8081
VOLUME [ "/etc/meshview", "/var/lib/meshview", "/var/log/meshview" ]

Dockerfile (new symbolic link, +1 line)

@@ -0,0 +1 @@
Containerfile

PERFORMANCE_OPTIMIZATION.md (deleted, -203 lines)

@@ -1,203 +0,0 @@
# /top Endpoint Performance Optimization
## Problem
The `/top` endpoint was taking over 1 second to execute due to inefficient database queries. The query joins three tables (node, packet, packet_seen) and performs COUNT aggregations on large result sets without proper indexes.
## Root Cause Analysis
The `get_top_traffic_nodes()` query in `meshview/store.py` executes:
```sql
SELECT
n.node_id,
n.long_name,
n.short_name,
n.channel,
COUNT(DISTINCT p.id) AS total_packets_sent,
COUNT(ps.packet_id) AS total_times_seen
FROM node n
LEFT JOIN packet p ON n.node_id = p.from_node_id
AND p.import_time >= DATETIME('now', 'localtime', '-24 hours')
LEFT JOIN packet_seen ps ON p.id = ps.packet_id
GROUP BY n.node_id, n.long_name, n.short_name
HAVING total_packets_sent > 0
ORDER BY total_times_seen DESC;
```
### Performance Bottlenecks Identified:
1. **Missing composite index on packet(from_node_id, import_time)**
- The query filters packets by BOTH `from_node_id` AND `import_time >= -24 hours`
- Without a composite index, SQLite must:
- Scan using `idx_packet_from_node_id` index
- Then filter each result by `import_time` (expensive!)
2. **Missing index on packet_seen(packet_id)**
- The LEFT JOIN to packet_seen uses `packet_id` as the join key
- Without an index, SQLite performs a table scan for each packet
- With potentially millions of packet_seen records, this is very slow
## Solution
### 1. Added Database Indexes
Modified `meshview/models.py` to include two new indexes:
```python
# In Packet class
Index("idx_packet_from_node_time", "from_node_id", desc("import_time"))
# In PacketSeen class
Index("idx_packet_seen_packet_id", "packet_id")
```
### 2. Added Performance Profiling
Modified `meshview/web.py` `/top` endpoint to include:
- Timing instrumentation for database queries
- Timing for data processing
- Detailed logging with `[PROFILE /top]` prefix
- On-page performance metrics display
### 3. Created Migration Script
Created `add_db_indexes.py` to add indexes to existing databases.
## Implementation Steps
### Step 1: Stop the Database Writer
```bash
# Stop startdb.py if it's running
pkill -f startdb.py
```
### Step 2: Run Migration Script
```bash
python add_db_indexes.py
```
Expected output:
```
======================================================================
Database Index Migration for /top Endpoint Performance
======================================================================
Connecting to database: sqlite+aiosqlite:///path/to/packets.db
======================================================================
Checking for index: idx_packet_from_node_time
======================================================================
Creating index idx_packet_from_node_time...
Table: packet
Columns: from_node_id, import_time DESC
Purpose: Speeds up filtering packets by sender and time range
✓ Index created successfully in 2.34 seconds
======================================================================
Checking for index: idx_packet_seen_packet_id
======================================================================
Creating index idx_packet_seen_packet_id...
Table: packet_seen
Columns: packet_id
Purpose: Speeds up joining packet_seen with packet table
✓ Index created successfully in 3.12 seconds
... (index listings)
======================================================================
Migration completed successfully!
======================================================================
```
### Step 3: Restart Services
```bash
# Restart server
python mvrun.py &
```
### Step 4: Verify Performance Improvement
1. Visit the `/top` endpoint, e.g. http://127.0.0.1:8081/top?perf=true
2. Scroll to bottom of page
3. Check the Performance Metrics panel
4. Compare DB query time before and after
**Expected Results:**
- **Before:** 1000-2000ms query time
- **After:** 50-200ms query time
- **Improvement:** 80-95% reduction
## Performance Metrics
The `/top` page now displays at the bottom:
```
⚡ Performance Metrics
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Database Query: 45.23ms
Data Processing: 2.15ms
Total Time: 47.89ms
Nodes Processed: 156
Total Packets: 45,678
Times Seen: 123,456
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
## Technical Details
### Why Composite Index Works
SQLite can use a composite index `(from_node_id, import_time DESC)` to:
1. Quickly find all packets for a specific `from_node_id`
2. Filter by `import_time` without additional I/O (data is already sorted)
3. Both operations use a single index lookup
### Why packet_id Index Works
The `packet_seen` table can have millions of rows. Without an index:
- Each packet requires a full table scan of packet_seen
- O(n * m) complexity where n=packets, m=packet_seen rows
With the index:
- Each packet uses an index lookup
- O(n * log m) complexity - dramatically faster
### Index Size Impact
- `idx_packet_from_node_time`: ~10-20% of packet table size
- `idx_packet_seen_packet_id`: ~5-10% of packet_seen table size
- Total additional disk space: typically 50-200MB depending on data volume
- Performance gain: 80-95% query time reduction
## Future Optimizations
If query is still slow after indexes:
1. **Add ANALYZE**: Run `ANALYZE;` to update SQLite query planner statistics
2. **Consider materialized view**: Pre-compute traffic stats in a background job
3. **Add caching**: Cache results for 5-10 minutes using Redis/memcached
4. **Partition data**: Archive old packet_seen records
## Rollback
If needed, indexes can be removed:
```sql
DROP INDEX IF EXISTS idx_packet_from_node_time;
DROP INDEX IF EXISTS idx_packet_seen_packet_id;
```
## Files Modified
- `meshview/models.py` - Added index definitions
- `meshview/web.py` - Added performance profiling
- `meshview/templates/top.html` - Added metrics display
- `add_db_indexes.py` - Migration script (NEW)
- `PERFORMANCE_OPTIMIZATION.md` - This documentation (NEW)
## Support
For questions or issues:
1. Verify indexes exist: `python add_db_indexes.py` (safe to re-run)
2. Review SQLite EXPLAIN QUERY PLAN for the query

README-Docker.md (new file, +243 lines)

@@ -0,0 +1,243 @@
# Running MeshView with Docker
MeshView container images are built automatically and published to GitHub Container Registry.
## Quick Start
Pull and run the latest image:
```bash
docker pull ghcr.io/pablorevilla-meshtastic/meshview:latest
docker run -d \
--name meshview \
-p 8081:8081 \
-v ./config:/etc/meshview \
-v ./data:/var/lib/meshview \
-v ./logs:/var/log/meshview \
ghcr.io/pablorevilla-meshtastic/meshview:latest
```
Access the web interface at: http://localhost:8081
## Volume Mounts
The container uses three volumes for persistent data:
| Volume | Purpose | Required |
|--------|---------|----------|
| `/etc/meshview` | Configuration files | Yes |
| `/var/lib/meshview` | Database storage | Recommended |
| `/var/log/meshview` | Log files | Optional |
### Configuration Volume
Mount a directory containing your `config.ini` file:
```bash
-v /path/to/your/config:/etc/meshview
```
If no config is provided, the container will use the default `sample.config.ini`.
### Database Volume
Mount a directory to persist the SQLite database:
```bash
-v /path/to/your/data:/var/lib/meshview
```
**Important:** Without this mount, your database will be lost when the container stops.
### Logs Volume
Mount a directory to access logs from the host:
```bash
-v /path/to/your/logs:/var/log/meshview
```
## Complete Example
Create a directory structure and run:
```bash
# Create directories
mkdir -p meshview/{config,data,logs,backups}
# Copy sample config (first time only)
docker run --rm ghcr.io/pablorevilla-meshtastic/meshview:latest \
cat /etc/meshview/config.ini > meshview/config/config.ini
# Edit config.ini with your MQTT settings
nano meshview/config/config.ini
# Run the container
docker run -d \
--name meshview \
--restart unless-stopped \
-p 8081:8081 \
-v $(pwd)/meshview/config:/etc/meshview \
-v $(pwd)/meshview/data:/var/lib/meshview \
-v $(pwd)/meshview/logs:/var/log/meshview \
ghcr.io/pablorevilla-meshtastic/meshview:latest
```
## Docker Compose
Create a `docker-compose.yml`:
```yaml
version: '3.8'
services:
meshview:
image: ghcr.io/pablorevilla-meshtastic/meshview:latest
container_name: meshview
restart: unless-stopped
ports:
- "8081:8081"
volumes:
- ./config:/etc/meshview
- ./data:/var/lib/meshview
- ./logs:/var/log/meshview
- ./backups:/var/lib/meshview/backups # For database backups
environment:
- TZ=America/Los_Angeles # Set your timezone
```
Run with:
```bash
docker-compose up -d
```
## Configuration
### Minimum Configuration
Edit your `config.ini` to configure MQTT connection:
```ini
[mqtt]
server = mqtt.meshtastic.org
topics = ["msh/US/#"]
port = 1883
username =
password =
[database]
connection_string = sqlite+aiosqlite:///var/lib/meshview/packets.db
```
### Database Backups
To enable automatic daily backups inside the container:
```ini
[cleanup]
backup_enabled = True
backup_dir = /var/lib/meshview/backups
backup_hour = 2
backup_minute = 00
```
Then mount the backups directory:
```bash
-v $(pwd)/meshview/backups:/var/lib/meshview/backups
```
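The `[cleanup]` options above are plain INI values; a sketch of how an application could consume them with Python's stdlib `configparser` (the key names are taken from the snippet above, and the actual loading code in Meshview may differ):

```python
import configparser

# Inline sample standing in for config.read("/etc/meshview/config.ini")
SAMPLE = """
[cleanup]
backup_enabled = True
backup_dir = /var/lib/meshview/backups
backup_hour = 2
backup_minute = 00
"""

config = configparser.ConfigParser()
config.read_string(SAMPLE)

cleanup = config["cleanup"]
backup_enabled = cleanup.getboolean("backup_enabled")   # True
backup_dir = cleanup.get("backup_dir")
backup_time = (cleanup.getint("backup_hour"), cleanup.getint("backup_minute"))
```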
## Available Tags
| Tag | Description |
|-----|-------------|
| `latest` | Latest build from the main branch |
| `dev-v3` | Development branch |
| `v1.2.3` | Specific version tags |
## Updating
Pull the latest image and restart:
```bash
docker pull ghcr.io/pablorevilla-meshtastic/meshview:latest
docker restart meshview
```
Or with docker-compose:
```bash
docker-compose pull
docker-compose up -d
```
## Logs
View container logs:
```bash
docker logs meshview
# Follow logs
docker logs -f meshview
# Last 100 lines
docker logs --tail 100 meshview
```
## Troubleshooting
### Container won't start
Check logs:
```bash
docker logs meshview
```
### Database permission issues
Ensure the data directory is writable:
```bash
chmod -R 755 meshview/data
```
### Can't connect to MQTT
1. Check your MQTT configuration in `config.ini`
2. Verify network connectivity from the container:
```bash
docker exec meshview ping mqtt.meshtastic.org
```
### Port already in use
Change the host port (left side):
```bash
-p 8082:8081
```
Then access at: http://localhost:8082
## Building Your Own Image
If you want to build from source:
```bash
git clone https://github.com/pablorevilla-meshtastic/meshview.git
cd meshview
docker build -f Containerfile -t meshview:local .
```
## Security Notes
- The container runs as a non-root user (`app`, UID 10001)
- No privileged access required
- Only port 8081 is exposed
- All data stored in mounted volumes
## Support
- GitHub Issues: https://github.com/pablorevilla-meshtastic/meshview/issues
- Documentation: https://github.com/pablorevilla-meshtastic/meshview

README.md (105 lines changed)

@@ -4,6 +4,31 @@
The project serves as a real-time monitoring and diagnostic tool for the Meshtastic mesh network. It provides detailed insights into network activity, including message traffic, node positions, and telemetry data.
### Version 3.0.0 update - November 2025
**Major Infrastructure Improvements:**
* **Database Migrations**: Alembic integration for safe schema upgrades and database versioning
* **Automated Backups**: Independent database backup system with gzip compression (separate from cleanup)
* **Development Tools**: Quick setup script (`setup-dev.sh`) with pre-commit hooks for code quality
* **Docker Support**: Pre-built containers now available on GitHub Container Registry with automatic builds - ogarcia
**New Features:**
* **Traceroute Return Path**: Log and display return path data for traceroute packets - jschrempp
* **Microsecond Timestamps**: Added `import_time_us` columns for higher precision time tracking
**Technical Improvements:**
* Migration from manual SQL to Alembic-managed schema
* Container images use `uv` for faster dependency installation
* Python 3.13 support with slim Debian-based images
* Documentation collection in `docs/` directory
* API routes moved to separate modules for better organization
* /version and /health endpoints added for monitoring
See [README-Docker.md](README-Docker.md) for container deployment and [docs/](docs/) for technical documentation.
### Version 2.0.7 update - September 2025
* New database maintenance capability to automatically keep a specific number of days of data.
* Added configuration for update intervals for both the Live Map and the Firehose pages.
@@ -14,6 +39,7 @@ The project serves as a real-time monitoring and diagnostic tool for the Meshtas
* New API /api/edges (See API documentation)
* Adds edges to the map (click to see traceroute and neighbours)
### Version 2.0.4 update - August 2025
* New statistic page with more data.
* New API /api/stats (See API documentation).
@@ -60,20 +86,42 @@ Samples of currently running instances:
## Installing
### Using Docker (Recommended)
The easiest way to run MeshView is using Docker. Pre-built images are available from GitHub Container Registry.
See **[README-Docker.md](README-Docker.md)** for complete Docker installation and usage instructions.
### Manual Installation
Requires **`python3.13`** or above.
Clone the repo from GitHub:
```bash
git clone https://github.com/pablorevilla-meshtastic/meshview.git
```
```bash
cd meshview
```
#### Quick Setup (Recommended)
Run the development setup script:
```bash
./setup-dev.sh
```
This will:
- Create Python virtual environment
- Install all requirements
- Install development tools (pre-commit, pytest)
- Set up pre-commit hooks for code formatting
- Create config.ini from sample
#### Manual Setup
Create a Python virtual environment:
From the `meshview` directory:
```bash
python3 -m venv env
```
@@ -221,6 +269,8 @@ vacuum = False
# Application logs (errors, startup messages, etc.) are unaffected
# Set to True to enable, False to disable (default: False)
access_log = False
# Database cleanup logfile location
db_cleanup_logfile = dbcleanup.log
```
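The `[logging]` options shown above can be read with Python's standard `configparser`; this sketch parses an inline sample rather than using MeshView's actual loader (`meshview/config.py` may handle things differently):

```python
import configparser

sample = """
[logging]
access_log = False
db_cleanup_logfile = dbcleanup.log
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)

# getboolean() accepts "True"/"False" (case-insensitive), "1"/"0", "yes"/"no"
access_log = cfg["logging"].getboolean("access_log")
logfile = cfg["logging"]["db_cleanup_logfile"]
print(access_log, logfile)  # False dbcleanup.log
```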
---
@@ -254,12 +304,29 @@ Open in your browser: http://localhost:8081/
## Running Meshview with `mvrun.py`
- `mvrun.py` starts both `startdb.py` and `main.py` in separate threads and merges the output.
- It accepts the `--config` argument like the other scripts, plus several command-line options for flexible deployment.
```bash
./env/bin/python mvrun.py
```
**Command-line options:**
- `--config CONFIG` - Path to the configuration file (default: `config.ini`)
- `--pid_dir PID_DIR` - Directory for PID files (default: `.`)
- `--py_exec PY_EXEC` - Path to the Python executable (default: `./env/bin/python`)
**Examples:**
```bash
# Use a specific config file
./env/bin/python mvrun.py --config /etc/meshview/config.ini
# Store PID files in a specific directory
./env/bin/python mvrun.py --pid_dir /var/run/meshview
# Use a different Python executable
./env/bin/python mvrun.py --py_exec /usr/bin/python3
```
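The options above suggest a launch pattern like the following hedged sketch (a hypothetical helper, not `mvrun.py`'s actual code): start each child script with a configurable interpreter and record its PID in the PID directory.

```python
import subprocess
import sys
from pathlib import Path

def launch(script: str, py_exec: str = sys.executable,
           pid_dir: str = ".") -> subprocess.Popen:
    # Spawn the child and persist its PID as <pid_dir>/<script stem>.pid
    proc = subprocess.Popen([py_exec, script])
    Path(pid_dir, f"{Path(script).stem}.pid").write_text(str(proc.pid))
    return proc

# Demo with an inline script instead of startdb.py / main.py
result = subprocess.run([sys.executable, "-c", "print('ok')"],
                        capture_output=True, text=True)
print(result.stdout.strip())  # ok
```

Defaulting `py_exec` to `sys.executable` (as one of the recent commits does) keeps the children running under the same interpreter as the parent unless overridden.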
---
## Setting Up Systemd Services (Ubuntu)
@@ -365,6 +432,15 @@ hour = 2
minute = 00
# Run VACUUM after cleanup
vacuum = False
# -------------------------
# Logging Configuration
# -------------------------
[logging]
# Enable or disable HTTP access logs from the web server
access_log = False
# Database cleanup logfile location
db_cleanup_logfile = dbcleanup.log
```
Once changes are done you need to restart the script for changes to load.
@@ -413,3 +489,20 @@ Add schedule to the bottom of the file (modify /path/to/file/ to the correct path)
```
Check the log file to confirm the script ran at the scheduled time.
---
## Testing
MeshView includes a test suite using pytest. For detailed testing documentation, see [README-testing.md](README-testing.md).
Quick start:
```bash
./env/bin/pytest tests/test_api_simple.py -v
```
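As a self-contained illustration of the pytest style (a hypothetical test, not one from MeshView's suite):

```python
# test_node_hex.py -- run with: ./env/bin/pytest test_node_hex.py -v

def to_node_hex(node_id: int) -> str:
    # Meshtastic node IDs are conventionally displayed as !xxxxxxxx
    # (8 lowercase hex digits); mask keeps the value within 32 bits.
    return f"!{node_id & 0xFFFFFFFF:08x}"

def test_to_node_hex():
    assert to_node_hex(0x12345678) == "!12345678"
    assert to_node_hex(0) == "!00000000"
```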
---
## Technical Documentation
For more detailed technical documentation including database migrations, architecture details, and advanced topics, see the [docs/](docs/) directory.


@@ -1,154 +0,0 @@
#!/usr/bin/env python3
"""
Migration script to add performance indexes
This script adds two critical indexes:
1. idx_packet_from_node_time: Composite index on packet(from_node_id, import_time DESC)
2. idx_packet_seen_packet_id: Index on packet_seen(packet_id)
These indexes significantly improve the performance of the get_top_traffic_nodes() query.
Usage:
python add_db_indexes.py
The script will:
- Connect to your database in WRITE mode
- Check if indexes already exist
- Create missing indexes
- Report timing for each operation
"""
import asyncio
import time
from sqlalchemy import text
from sqlalchemy.ext.asyncio import create_async_engine
from meshview.config import CONFIG
async def add_indexes():
# Get database connection string and remove read-only flag
db_string = CONFIG["database"]["connection_string"]
if "?mode=ro" in db_string:
db_string = db_string.replace("?mode=ro", "")
print(f"Connecting to database: {db_string}")
# Create engine with write access
engine = create_async_engine(db_string, echo=False, connect_args={"uri": True})
try:
async with engine.begin() as conn:
# Check and create idx_packet_from_node_time
print("\n" + "=" * 70)
print("Checking for index: idx_packet_from_node_time")
print("=" * 70)
result = await conn.execute(
text("""
SELECT name FROM sqlite_master
WHERE type='index' AND name='idx_packet_from_node_time'
""")
)
if result.fetchone():
print("✓ Index idx_packet_from_node_time already exists")
else:
print("Creating index idx_packet_from_node_time...")
print(" Table: packet")
print(" Columns: from_node_id, import_time DESC")
print(" Purpose: Speeds up filtering packets by sender and time range")
start_time = time.perf_counter()
await conn.execute(
text("""
CREATE INDEX idx_packet_from_node_time
ON packet(from_node_id, import_time DESC)
""")
)
elapsed = time.perf_counter() - start_time
print(f"✓ Index created successfully in {elapsed:.2f} seconds")
# Check and create idx_packet_seen_packet_id
print("\n" + "=" * 70)
print("Checking for index: idx_packet_seen_packet_id")
print("=" * 70)
result = await conn.execute(
text("""
SELECT name FROM sqlite_master
WHERE type='index' AND name='idx_packet_seen_packet_id'
""")
)
if result.fetchone():
print("✓ Index idx_packet_seen_packet_id already exists")
else:
print("Creating index idx_packet_seen_packet_id...")
print(" Table: packet_seen")
print(" Columns: packet_id")
print(" Purpose: Speeds up joining packet_seen with packet table")
start_time = time.perf_counter()
await conn.execute(
text("""
CREATE INDEX idx_packet_seen_packet_id
ON packet_seen(packet_id)
""")
)
elapsed = time.perf_counter() - start_time
print(f"✓ Index created successfully in {elapsed:.2f} seconds")
# Show index info
print("\n" + "=" * 70)
print("Current indexes on packet table:")
print("=" * 70)
result = await conn.execute(
text("""
SELECT name, sql FROM sqlite_master
WHERE type='index' AND tbl_name='packet'
ORDER BY name
""")
)
for row in result:
if row[1]: # Skip auto-indexes (they have NULL sql)
print(f"{row[0]}")
print("\n" + "=" * 70)
print("Current indexes on packet_seen table:")
print("=" * 70)
result = await conn.execute(
text("""
SELECT name, sql FROM sqlite_master
WHERE type='index' AND tbl_name='packet_seen'
ORDER BY name
""")
)
for row in result:
if row[1]: # Skip auto-indexes
print(f"{row[0]}")
print("\n" + "=" * 70)
print("Migration completed successfully!")
print("=" * 70)
print("\nNext steps:")
print("1. Restart your web server (mvrun.py)")
print("2. Visit /top endpoint and check the performance metrics")
print("3. Compare DB query time with previous measurements")
print("\nExpected improvement: 50-90% reduction in query time")
except Exception as e:
print(f"\n❌ Error during migration: {e}")
raise
finally:
await engine.dispose()
if __name__ == "__main__":
print("=" * 70)
print("Database Index Migration for Endpoint Performance")
print("=" * 70)
asyncio.run(add_indexes())

alembic.ini Normal file

@@ -0,0 +1,120 @@
# A generic, single database configuration.
[alembic]
# path to migration scripts
# Use forward slashes (/) also on windows to provide an os agnostic path
script_location = alembic
# template used to generate migration file names; The default value is %%(rev)s_%%(slug)s
# Uncomment the line below if you want the files to be prepended with date and time
# see https://alembic.sqlalchemy.org/en/latest/tutorial.html#editing-the-ini-file
# for all available tokens
# file_template = %%(year)d_%%(month).2d_%%(day).2d_%%(hour).2d%%(minute).2d-%%(rev)s_%%(slug)s
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_path = .
# timezone to use when rendering the date within the migration file
# as well as the filename.
# If specified, requires the python>=3.9 or backports.zoneinfo library and tzdata library.
# Any required deps can be installed by adding `alembic[tz]` to the pip requirements
# string value is passed to ZoneInfo()
# leave blank for localtime
# timezone =
# max length of characters to apply to the "slug" field
# truncate_slug_length = 40
# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false
# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false
# version location specification; This defaults
# to alembic/versions. When using multiple version
# directories, initial revisions must be specified with --version-path.
# The path separator used here should be the separator specified by "version_path_separator" below.
# version_locations = %(here)s/bar:%(here)s/bat:alembic/versions
# version path separator; As mentioned above, this is the character used to split
# version_locations. The default within new alembic.ini files is "os", which uses os.pathsep.
# If this key is omitted entirely, it falls back to the legacy behavior of splitting on spaces and/or commas.
# Valid values for version_path_separator are:
#
# version_path_separator = :
# version_path_separator = ;
# version_path_separator = space
# version_path_separator = newline
#
# Use os.pathsep. Default configuration used for new projects.
version_path_separator = os
# set to 'true' to search source files recursively
# in each "version_locations" directory
# new in Alembic version 1.10
# recursive_version_locations = false
# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8
# sqlalchemy.url will be set programmatically from meshview config
# sqlalchemy.url = driver://user:pass@localhost/dbname
[post_write_hooks]
# post_write_hooks defines scripts or Python functions that are run
# on newly generated revision scripts. See the documentation for further
# detail and examples
# format using "black" - use the console_scripts runner, against the "black" entrypoint
# hooks = black
# black.type = console_scripts
# black.entrypoint = black
# black.options = -l 79 REVISION_SCRIPT_FILENAME
# lint with attempts to fix using "ruff" - use the exec runner, execute a binary
# hooks = ruff
# ruff.type = exec
# ruff.executable = %(here)s/.venv/bin/ruff
# ruff.options = --fix REVISION_SCRIPT_FILENAME
# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = INFO
handlers = console
qualname =
[logger_sqlalchemy]
level = WARNING
handlers =
qualname = sqlalchemy.engine
[logger_alembic]
level = INFO
handlers =
qualname = alembic
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(asctime)s %(filename)s:%(lineno)d [pid:%(process)d] %(levelname)s - %(message)s
datefmt = %Y-%m-%d %H:%M:%S

alembic/README Normal file

@@ -0,0 +1 @@
Generic single-database configuration.

alembic/env.py Normal file

@@ -0,0 +1,102 @@
import asyncio
from logging.config import fileConfig
from sqlalchemy import pool
from sqlalchemy.engine import Connection
from sqlalchemy.ext.asyncio import async_engine_from_config
from alembic import context
# Import models metadata for autogenerate support
from meshview.models import Base
# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config
# Interpret the config file for Python logging.
# This line sets up loggers basically.
# Use disable_existing_loggers=False to preserve app logging configuration
if config.config_file_name is not None:
fileConfig(config.config_file_name, disable_existing_loggers=False)
# Add your model's MetaData object here for 'autogenerate' support
target_metadata = Base.metadata
def run_migrations_offline() -> None:
"""Run migrations in 'offline' mode.
This configures the context with just a URL
and not an Engine, though an Engine is acceptable
here as well. By skipping the Engine creation
we don't even need a DBAPI to be available.
Calls to context.execute() here emit the given string to the
script output.
"""
url = config.get_main_option("sqlalchemy.url")
context.configure(
url=url,
target_metadata=target_metadata,
literal_binds=True,
dialect_opts={"paramstyle": "named"},
)
with context.begin_transaction():
context.run_migrations()
def do_run_migrations(connection: Connection) -> None:
"""Run migrations with the given connection."""
context.configure(connection=connection, target_metadata=target_metadata)
with context.begin_transaction():
context.run_migrations()
async def run_async_migrations() -> None:
"""Run migrations in async mode."""
# Get configuration section
configuration = config.get_section(config.config_ini_section, {})
# If sqlalchemy.url is not set in alembic.ini, try to get it from meshview config
if "sqlalchemy.url" not in configuration:
try:
from meshview.config import CONFIG
configuration["sqlalchemy.url"] = CONFIG["database"]["connection_string"]
except Exception:
# Fallback to a default for initial migration creation
configuration["sqlalchemy.url"] = "sqlite+aiosqlite:///packets.db"
connectable = async_engine_from_config(
configuration,
prefix="sqlalchemy.",
poolclass=pool.NullPool,
)
async with connectable.connect() as connection:
await connection.run_sync(do_run_migrations)
await connectable.dispose()
def run_migrations_online() -> None:
"""Run migrations in 'online' mode with async support."""
try:
# Event loop is already running, schedule and run the coroutine
import concurrent.futures
with concurrent.futures.ThreadPoolExecutor() as pool:
pool.submit(lambda: asyncio.run(run_async_migrations())).result()
except RuntimeError:
# No event loop running, create one
asyncio.run(run_async_migrations())
if context.is_offline_mode():
run_migrations_offline()
else:
run_migrations_online()

alembic/script.py.mako Normal file

@@ -0,0 +1,26 @@
"""${message}
Revision ID: ${up_revision}
Revises: ${down_revision | comma,n}
Create Date: ${create_date}
"""
from typing import Sequence, Union
from alembic import op
import sqlalchemy as sa
${imports if imports else ""}
# revision identifiers, used by Alembic.
revision: str = ${repr(up_revision)}
down_revision: Union[str, None] = ${repr(down_revision)}
branch_labels: Union[str, Sequence[str], None] = ${repr(branch_labels)}
depends_on: Union[str, Sequence[str], None] = ${repr(depends_on)}
def upgrade() -> None:
${upgrades if upgrades else "pass"}
def downgrade() -> None:
${downgrades if downgrades else "pass"}


@@ -0,0 +1,45 @@
"""Add example table
Revision ID: 1717fa5c6545
Revises: c88468b7ab0b
Create Date: 2025-10-26 20:59:04.347066
"""
from collections.abc import Sequence
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision: str = '1717fa5c6545'
down_revision: str | None = 'add_time_us_cols'
branch_labels: str | Sequence[str] | None = None
depends_on: str | Sequence[str] | None = None
def upgrade() -> None:
"""Create example table with sample columns."""
op.create_table(
'example',
sa.Column('id', sa.Integer(), nullable=False, primary_key=True, autoincrement=True),
sa.Column('name', sa.String(length=100), nullable=False),
sa.Column('description', sa.Text(), nullable=True),
sa.Column('value', sa.Float(), nullable=True),
sa.Column('is_active', sa.Boolean(), nullable=False, server_default='1'),
sa.Column(
'created_at', sa.DateTime(), nullable=False, server_default=sa.text('CURRENT_TIMESTAMP')
),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.PrimaryKeyConstraint('id'),
)
# Create an index on the name column for faster lookups
op.create_index('idx_example_name', 'example', ['name'])
def downgrade() -> None:
"""Remove example table."""
op.drop_index('idx_example_name', table_name='example')
op.drop_table('example')


@@ -0,0 +1,35 @@
"""Add first_seen_us and last_seen_us to node table
Revision ID: 2b5a61bb2b75
Revises: ac311b3782a1
Create Date: 2025-11-05 15:19:13.446724
"""
from collections.abc import Sequence
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision: str = '2b5a61bb2b75'
down_revision: str | None = 'ac311b3782a1'
branch_labels: str | Sequence[str] | None = None
depends_on: str | Sequence[str] | None = None
def upgrade() -> None:
# Add microsecond epoch timestamp columns for first and last seen times
op.add_column('node', sa.Column('first_seen_us', sa.BigInteger(), nullable=True))
op.add_column('node', sa.Column('last_seen_us', sa.BigInteger(), nullable=True))
op.create_index('idx_node_first_seen_us', 'node', ['first_seen_us'], unique=False)
op.create_index('idx_node_last_seen_us', 'node', ['last_seen_us'], unique=False)
def downgrade() -> None:
# Remove the microsecond epoch timestamp columns and their indexes
op.drop_index('idx_node_last_seen_us', table_name='node')
op.drop_index('idx_node_first_seen_us', table_name='node')
op.drop_column('node', 'last_seen_us')
op.drop_column('node', 'first_seen_us')


@@ -0,0 +1,31 @@
"""add route_return to traceroute
Revision ID: ac311b3782a1
Revises: 1717fa5c6545
Create Date: 2025-11-04 20:28:33.174137
"""
from collections.abc import Sequence
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision: str = 'ac311b3782a1'
down_revision: str | None = '1717fa5c6545'
branch_labels: str | Sequence[str] | None = None
depends_on: str | Sequence[str] | None = None
def upgrade() -> None:
# Add route_return column to traceroute table
with op.batch_alter_table('traceroute', schema=None) as batch_op:
batch_op.add_column(sa.Column('route_return', sa.LargeBinary(), nullable=True))
def downgrade() -> None:
# Remove route_return column from traceroute table
with op.batch_alter_table('traceroute', schema=None) as batch_op:
batch_op.drop_column('route_return')


@@ -0,0 +1,74 @@
"""add import_time_us columns
Revision ID: add_time_us_cols
Revises: c88468b7ab0b
Create Date: 2025-11-03 14:10:00.000000
"""
from collections.abc import Sequence
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision: str = 'add_time_us_cols'
down_revision: str | None = 'c88468b7ab0b'
branch_labels: str | Sequence[str] | None = None
depends_on: str | Sequence[str] | None = None
def upgrade() -> None:
# Check if columns already exist, add them if they don't
conn = op.get_bind()
inspector = sa.inspect(conn)
# Add import_time_us to packet table
packet_columns = [col['name'] for col in inspector.get_columns('packet')]
if 'import_time_us' not in packet_columns:
with op.batch_alter_table('packet', schema=None) as batch_op:
batch_op.add_column(sa.Column('import_time_us', sa.BigInteger(), nullable=True))
op.create_index(
'idx_packet_import_time_us', 'packet', [sa.text('import_time_us DESC')], unique=False
)
op.create_index(
'idx_packet_from_node_time_us',
'packet',
['from_node_id', sa.text('import_time_us DESC')],
unique=False,
)
# Add import_time_us to packet_seen table
packet_seen_columns = [col['name'] for col in inspector.get_columns('packet_seen')]
if 'import_time_us' not in packet_seen_columns:
with op.batch_alter_table('packet_seen', schema=None) as batch_op:
batch_op.add_column(sa.Column('import_time_us', sa.BigInteger(), nullable=True))
op.create_index(
'idx_packet_seen_import_time_us', 'packet_seen', ['import_time_us'], unique=False
)
# Add import_time_us to traceroute table
traceroute_columns = [col['name'] for col in inspector.get_columns('traceroute')]
if 'import_time_us' not in traceroute_columns:
with op.batch_alter_table('traceroute', schema=None) as batch_op:
batch_op.add_column(sa.Column('import_time_us', sa.BigInteger(), nullable=True))
op.create_index(
'idx_traceroute_import_time_us', 'traceroute', ['import_time_us'], unique=False
)
def downgrade() -> None:
# Drop indexes and columns
op.drop_index('idx_traceroute_import_time_us', table_name='traceroute')
with op.batch_alter_table('traceroute', schema=None) as batch_op:
batch_op.drop_column('import_time_us')
op.drop_index('idx_packet_seen_import_time_us', table_name='packet_seen')
with op.batch_alter_table('packet_seen', schema=None) as batch_op:
batch_op.drop_column('import_time_us')
op.drop_index('idx_packet_from_node_time_us', table_name='packet')
op.drop_index('idx_packet_import_time_us', table_name='packet')
with op.batch_alter_table('packet', schema=None) as batch_op:
batch_op.drop_column('import_time_us')


@@ -0,0 +1,160 @@
"""Initial migration
Revision ID: c88468b7ab0b
Revises:
Create Date: 2025-10-26 20:56:50.285200
"""
from collections.abc import Sequence
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision: str = 'c88468b7ab0b'
down_revision: str | None = None
branch_labels: str | Sequence[str] | None = None
depends_on: str | Sequence[str] | None = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
# Get connection and inspector to check what exists
conn = op.get_bind()
inspector = sa.inspect(conn)
existing_tables = inspector.get_table_names()
# Create node table if it doesn't exist
if 'node' not in existing_tables:
op.create_table(
'node',
sa.Column('id', sa.String(), nullable=False),
sa.Column('node_id', sa.BigInteger(), nullable=True),
sa.Column('long_name', sa.String(), nullable=True),
sa.Column('short_name', sa.String(), nullable=True),
sa.Column('hw_model', sa.String(), nullable=True),
sa.Column('firmware', sa.String(), nullable=True),
sa.Column('role', sa.String(), nullable=True),
sa.Column('last_lat', sa.BigInteger(), nullable=True),
sa.Column('last_long', sa.BigInteger(), nullable=True),
sa.Column('channel', sa.String(), nullable=True),
sa.Column('last_update', sa.DateTime(), nullable=True),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('node_id'),
)
op.create_index('idx_node_node_id', 'node', ['node_id'], unique=False)
# Create packet table if it doesn't exist
if 'packet' not in existing_tables:
op.create_table(
'packet',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('portnum', sa.Integer(), nullable=True),
sa.Column('from_node_id', sa.BigInteger(), nullable=True),
sa.Column('to_node_id', sa.BigInteger(), nullable=True),
sa.Column('payload', sa.LargeBinary(), nullable=True),
sa.Column('import_time', sa.DateTime(), nullable=True),
sa.Column('import_time_us', sa.BigInteger(), nullable=True),
sa.Column('channel', sa.String(), nullable=True),
sa.PrimaryKeyConstraint('id'),
)
op.create_index('idx_packet_from_node_id', 'packet', ['from_node_id'], unique=False)
op.create_index('idx_packet_to_node_id', 'packet', ['to_node_id'], unique=False)
op.create_index(
'idx_packet_import_time', 'packet', [sa.text('import_time DESC')], unique=False
)
op.create_index(
'idx_packet_import_time_us', 'packet', [sa.text('import_time_us DESC')], unique=False
)
op.create_index(
'idx_packet_from_node_time',
'packet',
['from_node_id', sa.text('import_time DESC')],
unique=False,
)
op.create_index(
'idx_packet_from_node_time_us',
'packet',
['from_node_id', sa.text('import_time_us DESC')],
unique=False,
)
# Create packet_seen table if it doesn't exist
if 'packet_seen' not in existing_tables:
op.create_table(
'packet_seen',
sa.Column('packet_id', sa.BigInteger(), nullable=False),
sa.Column('node_id', sa.BigInteger(), nullable=False),
sa.Column('rx_time', sa.BigInteger(), nullable=False),
sa.Column('hop_limit', sa.Integer(), nullable=True),
sa.Column('hop_start', sa.Integer(), nullable=True),
sa.Column('channel', sa.String(), nullable=True),
sa.Column('rx_snr', sa.Float(), nullable=True),
sa.Column('rx_rssi', sa.Integer(), nullable=True),
sa.Column('topic', sa.String(), nullable=True),
sa.Column('import_time', sa.DateTime(), nullable=True),
sa.Column('import_time_us', sa.BigInteger(), nullable=True),
sa.ForeignKeyConstraint(
['packet_id'],
['packet.id'],
),
sa.PrimaryKeyConstraint('packet_id', 'node_id', 'rx_time'),
)
op.create_index('idx_packet_seen_node_id', 'packet_seen', ['node_id'], unique=False)
op.create_index('idx_packet_seen_packet_id', 'packet_seen', ['packet_id'], unique=False)
op.create_index(
'idx_packet_seen_import_time_us', 'packet_seen', ['import_time_us'], unique=False
)
# Create traceroute table if it doesn't exist
if 'traceroute' not in existing_tables:
op.create_table(
'traceroute',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('packet_id', sa.BigInteger(), nullable=True),
sa.Column('gateway_node_id', sa.BigInteger(), nullable=True),
sa.Column('done', sa.Boolean(), nullable=True),
sa.Column('route', sa.LargeBinary(), nullable=True),
sa.Column('import_time', sa.DateTime(), nullable=True),
sa.Column('import_time_us', sa.BigInteger(), nullable=True),
sa.ForeignKeyConstraint(
['packet_id'],
['packet.id'],
),
sa.PrimaryKeyConstraint('id'),
)
op.create_index('idx_traceroute_import_time', 'traceroute', ['import_time'], unique=False)
op.create_index(
'idx_traceroute_import_time_us', 'traceroute', ['import_time_us'], unique=False
)
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
# Drop traceroute table and indexes
op.drop_index('idx_traceroute_import_time_us', table_name='traceroute')
op.drop_index('idx_traceroute_import_time', table_name='traceroute')
op.drop_table('traceroute')
# Drop packet_seen table and indexes
op.drop_index('idx_packet_seen_import_time_us', table_name='packet_seen')
op.drop_index('idx_packet_seen_packet_id', table_name='packet_seen')
op.drop_index('idx_packet_seen_node_id', table_name='packet_seen')
op.drop_table('packet_seen')
# Drop packet table and indexes
op.drop_index('idx_packet_from_node_time_us', table_name='packet')
op.drop_index('idx_packet_from_node_time', table_name='packet')
op.drop_index('idx_packet_import_time_us', table_name='packet')
op.drop_index('idx_packet_import_time', table_name='packet')
op.drop_index('idx_packet_to_node_id', table_name='packet')
op.drop_index('idx_packet_from_node_id', table_name='packet')
op.drop_table('packet')
# Drop node table and indexes
op.drop_index('idx_node_node_id', table_name='node')
op.drop_table('node')
# ### end Alembic commands ###

container/build-container.sh Executable file

@@ -0,0 +1,57 @@
#!/bin/sh
#
# build-container.sh
#
# Script to build MeshView container images
set -e
# Default values
IMAGE_NAME="meshview"
TAG="latest"
CONTAINERFILE="Containerfile"
# Parse arguments
while [ $# -gt 0 ]; do
case "$1" in
--tag|-t)
TAG="$2"
shift 2
;;
--name|-n)
IMAGE_NAME="$2"
shift 2
;;
--file|-f)
CONTAINERFILE="$2"
shift 2
;;
--help|-h)
echo "Usage: $0 [OPTIONS]"
echo ""
echo "Options:"
echo " -t, --tag TAG Tag for the image (default: latest)"
echo " -n, --name NAME Image name (default: meshview)"
echo " -f, --file FILE Containerfile path (default: Containerfile)"
echo " -h, --help Show this help"
exit 0
;;
*)
echo "Unknown option: $1"
echo "Use --help for usage information"
exit 1
;;
esac
done
echo "Building MeshView container image..."
echo " Image: ${IMAGE_NAME}:${TAG}"
echo " Containerfile: ${CONTAINERFILE}"
echo ""
# Build the container
docker build -f "${CONTAINERFILE}" -t "${IMAGE_NAME}:${TAG}" .
echo ""
echo "Build complete!"
echo "Run with: docker run --rm -p 8081:8081 ${IMAGE_NAME}:${TAG}"

container/config.patch Normal file

@@ -0,0 +1,37 @@
diff --git a/sample.config.ini b/sample.config.ini
index 0e64980..494685c 100644
--- a/sample.config.ini
+++ b/sample.config.ini
@@ -3,7 +3,7 @@
# -------------------------
[server]
# The address to bind the server to. Use * to listen on all interfaces.
-bind = *
+bind = 0.0.0.0
# Port to run the web server on.
port = 8081
@@ -64,7 +64,7 @@ net_tag = #BayMeshNet
# -------------------------
[mqtt]
# MQTT server hostname or IP.
-server = mqtt.bayme.sh
+server = mqtt.meshtastic.org
# Topics to subscribe to (as JSON-like list, but still a string).
topics = ["msh/US/bayarea/#", "msh/US/CA/mrymesh/#", "msh/US/CA/sacvalley"]
@@ -82,7 +82,7 @@ password = large4cats
# -------------------------
[database]
# SQLAlchemy connection string. This one uses SQLite with asyncio support.
-connection_string = sqlite+aiosqlite:///packets.db
+connection_string = sqlite+aiosqlite:////var/lib/meshview/packets.db
# -------------------------
@@ -110,4 +110,4 @@ vacuum = False
# Set to True to enable, False to disable (default: False)
access_log = False
# Database cleanup logfile
-db_cleanup_logfile = dbcleanup.log
+db_cleanup_logfile = /var/log/meshview/dbcleanup.log

create_example_migration.py Executable file

@@ -0,0 +1,52 @@
#!/usr/bin/env python3
"""
Script to create a blank migration for manual editing.
Usage:
./env/bin/python create_example_migration.py
This creates an empty migration file that you can manually edit to add
custom migration logic (data migrations, complex schema changes, etc.)
Unlike create_migration.py which auto-generates from model changes,
this creates a blank template for you to fill in.
"""
import os
import sys
# Add current directory to path
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
from alembic.config import Config
from alembic import command
# Create Alembic config
alembic_cfg = Config("alembic.ini")
# Set database URL from meshview config
try:
from meshview.config import CONFIG
database_url = CONFIG["database"]["connection_string"]
alembic_cfg.set_main_option("sqlalchemy.url", database_url)
print(f"Using database URL from config: {database_url}")
except Exception as e:
print(f"Warning: Could not load meshview config: {e}")
print("Using default database URL")
alembic_cfg.set_main_option("sqlalchemy.url", "sqlite+aiosqlite:///packets.db")
# Generate blank migration
try:
print("Creating blank migration for manual editing...")
command.revision(alembic_cfg, autogenerate=False, message="Manual migration")
print("✓ Successfully created blank migration!")
print("\nNow edit the generated file in alembic/versions/")
print("Add your custom upgrade() and downgrade() logic")
except Exception as e:
print(f"✗ Error creating migration: {e}")
import traceback
traceback.print_exc()
sys.exit(1)

create_migration.py Executable file

@@ -0,0 +1,58 @@
#!/usr/bin/env python3
"""
Helper script to create Alembic migrations from SQLAlchemy model changes.

Usage:
    ./env/bin/python create_migration.py

This will:
1. Load your current models from meshview/models.py
2. Compare them to the current database schema
3. Auto-generate a migration with the detected changes
4. Save the migration to alembic/versions/

After running this, review the generated migration file before committing!
"""
import os
import sys

# Add current directory to path
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

from alembic.config import Config
from alembic import command

# Create Alembic config
alembic_cfg = Config("alembic.ini")

# Set database URL from meshview config
try:
    from meshview.config import CONFIG

    database_url = CONFIG["database"]["connection_string"]
    alembic_cfg.set_main_option("sqlalchemy.url", database_url)
    print(f"Using database URL from config: {database_url}")
except Exception as e:
    print(f"Warning: Could not load meshview config: {e}")
    print("Using default database URL")
    alembic_cfg.set_main_option("sqlalchemy.url", "sqlite+aiosqlite:///packets.db")

# Generate migration
try:
    print("\nComparing models to current database schema...")
    print("Generating migration...\n")
    command.revision(alembic_cfg, autogenerate=True, message="Auto-generated migration")
    print("\n✓ Successfully created migration!")
    print("\nNext steps:")
    print("1. Review the generated file in alembic/versions/")
    print("2. Edit the migration message/logic if needed")
    print("3. Test the migration: ./env/bin/alembic upgrade head")
    print("4. Commit the migration file to version control")
except Exception as e:
    print(f"\n✗ Error creating migration: {e}")
    import traceback
    traceback.print_exc()
    sys.exit(1)


@@ -1,28 +0,0 @@
FROM python:3.12-slim
# Set work directory
WORKDIR /app
# Install system dependencies (graphviz required, git for cloning)
RUN apt-get update && \
apt-get install -y --no-install-recommends git graphviz && \
rm -rf /var/lib/apt/lists/*
# Clone the repo with submodules
RUN git clone --recurse-submodules https://github.com/pablorevilla-meshtastic/meshview.git /app
# Create virtual environment
RUN python -m venv /app/env
# Upgrade pip and install requirements in venv
RUN /app/env/bin/pip install --no-cache-dir --upgrade pip && \
/app/env/bin/pip install --no-cache-dir -r /app/requirements.txt
# Copy sample config
RUN cp /app/sample.config.ini /app/config.ini
# Expose port
EXPOSE 8081
# Run the app via venv
CMD ["/app/env/bin/python", "/app/mvrun.py"]


@@ -1,44 +1,36 @@
# MeshView Docker Container
This Dockerfile builds a containerized version of the [MeshView](https://github.com/pablorevilla-meshtastic/meshview) application. It uses a lightweight Python environment and sets up the required virtual environment as expected by the application.
> **Note:** This directory contains legacy Docker build files.
>
> **For current Docker usage instructions, please see [README-Docker.md](../README-Docker.md) in the project root.**
## Current Approach
Pre-built container images are automatically built and published to GitHub Container Registry:
```bash
docker pull ghcr.io/pablorevilla-meshtastic/meshview:latest
```
See **[README-Docker.md](../README-Docker.md)** for:
- Quick start instructions
- Volume mount configuration
- Docker Compose examples
- Backup configuration
- Troubleshooting

The current Containerfile uses:
- **Base Image**: `python:3.13-slim` (Debian-based)
- **Build tool**: `uv` for fast dependency installation
- **User**: Non-root user `app` (UID 10001)
- **Exposed Port**: `8081`
- **Volumes**: `/etc/meshview`, `/var/lib/meshview`, `/var/log/meshview`
## Legacy Build (Not Recommended)
If you need to build your own image for development:
```bash
# From project root
docker build -f Containerfile -t meshview:local .
```
## Legacy Image Details
- **Base Image**: `python:3.12-slim`
- **Working Directory**: `/app`
- **Python Virtual Environment**: `/app/env`
### Build Instructions
Build the Docker image:
```bash
docker build -t meshview-docker .
```
### Run Instructions
Run the container:
```bash
docker run -d --name meshview-docker -p 8081:8081 meshview-docker
```
This maps container port `8081` to your host. The application runs via:
```bash
/app/env/bin/python /app/mvrun.py
```
### Web Interface
Once the container is running, you can access the MeshView web interface by visiting:
http://localhost:8081
If running on a remote server, replace `localhost` with the host's IP or domain name:
http://<host>:8081
Ensure that port `8081` is open and not blocked by a firewall or security group.

docs/ALEMBIC_SETUP.md Normal file

@@ -0,0 +1,361 @@
# Alembic Database Migration Setup
This document describes the automatic database migration system implemented for MeshView using Alembic.
## Overview
The system provides automatic database schema migrations with coordination between the writer app (startdb.py) and reader app (web.py):
- **Writer App**: Automatically runs pending migrations on startup
- **Reader App**: Waits for migrations to complete before starting
## Architecture
### Key Components
1. **`meshview/migrations.py`** - Migration management utilities
- `run_migrations()` - Runs pending migrations (writer app)
- `wait_for_migrations()` - Waits for schema to be current (reader app)
- `is_database_up_to_date()` - Checks schema version
- Migration status tracking table
2. **`alembic/`** - Alembic migration directory
- `env.py` - Configured for async SQLAlchemy support
- `versions/` - Migration scripts directory
- `alembic.ini` - Alembic configuration (in the project root)
3. **Modified Apps**:
- `startdb.py` - Writer app that runs migrations before MQTT ingestion
- `meshview/web.py` - Reader app that waits for schema updates
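As a rough illustration of the version check these utilities perform, the current schema revision can be read straight from Alembic's version table. This is a sketch assuming the default SQLite backend; `current_db_revision` is a hypothetical name, and the real helpers in `meshview/migrations.py` use Alembic's own API instead:

```python
import sqlite3

def current_db_revision(db_path="packets.db"):
    """Return the revision stored in Alembic's version table, or None for a fresh DB."""
    with sqlite3.connect(db_path) as conn:
        try:
            row = conn.execute("SELECT version_num FROM alembic_version").fetchone()
        except sqlite3.OperationalError:
            # Table missing: the database has never been migrated
            return None
    return row[0] if row else None
```

A `None` result is what distinguishes a brand-new database from one that is merely behind the latest migration.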
## How It Works - Automatic In-Place Updates
### ✨ Fully Automatic Operation
**No manual migration commands needed!** The database schema updates automatically when you:
1. Deploy new code with migration files
2. Restart the applications
### Writer App (startdb.py) Startup Sequence
1. Initialize database connection
2. Create migration status tracking table
3. Set "migration in progress" flag
4. **🔄 Automatically run any pending Alembic migrations** (synchronously)
- Detects current schema version
- Compares to latest available migration
- Runs all pending migrations in sequence
- Updates database schema in place
5. Clear "migration in progress" flag
6. Start MQTT ingestion and other tasks
### Reader App (web.py) Startup Sequence
1. Initialize database connection
2. **Check database schema version**
3. If not up to date:
- Wait up to 60 seconds (30 retries × 2 seconds)
- Check every 2 seconds for schema updates
- Automatically proceeds once writer completes migrations
4. Once schema is current, start web server
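The reader-side wait above can be pictured as a small polling loop. This is a sketch, not the actual `meshview/migrations.py` code; the `is_up_to_date` callable stands in for the real `is_database_up_to_date()` check:

```python
import time

def wait_for_schema(is_up_to_date, retries=30, delay=2.0, sleep=time.sleep):
    """Poll until the schema is current: up to 30 retries x 2 s = 60 s, as above."""
    for _ in range(retries):
        if is_up_to_date():
            return True
        sleep(delay)  # injectable for testing
    return False
```

Returning `False` corresponds to the reader timing out because the writer never finished its migrations.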
### 🎯 Key Point: Zero Manual Steps
When you deploy new code with migrations:
```bash
# Just start the apps - migrations happen automatically!
./env/bin/python startdb.py # Migrations run here automatically
./env/bin/python main.py # Waits for migrations, then starts
```
**The database updates itself!** No need to run `alembic upgrade` manually.
### Coordination
The apps coordinate using:
- **Alembic version table** (`alembic_version`) - Tracks current schema version
- **Migration status table** (`migration_status`) - Optional flag for "in progress" state
## Creating New Migrations
### Using the helper script:
```bash
./env/bin/python create_migration.py
```
### Manual creation:
```bash
./env/bin/alembic revision --autogenerate -m "Description of changes"
```
This will:
1. Compare current database schema with SQLAlchemy models
2. Generate a migration script in `alembic/versions/`
3. Automatically detect most schema changes
### Manual migration (advanced):
```bash
./env/bin/alembic revision -m "Manual migration"
```
Then edit the generated file to add custom migration logic.
## Running Migrations
### Automatic (Recommended)
Migrations run automatically when the writer app starts:
```bash
./env/bin/python startdb.py
```
### Manual
To run migrations manually:
```bash
./env/bin/alembic upgrade head
```
To downgrade:
```bash
./env/bin/alembic downgrade -1 # Go back one version
./env/bin/alembic downgrade base # Go back to beginning
```
## Checking Migration Status
Check current database version:
```bash
./env/bin/alembic current
```
View migration history:
```bash
./env/bin/alembic history
```
## Benefits
1. **Zero Manual Intervention**: Migrations run automatically on startup
2. **Safe Coordination**: Reader won't connect to incompatible schema
3. **Version Control**: All schema changes tracked in git
4. **Rollback Capability**: Can downgrade if needed
5. **Auto-generation**: Most migrations created automatically from model changes
## Migration Workflow
### Development Process
1. **Modify SQLAlchemy models** in `meshview/models.py`
2. **Create migration**:
```bash
./env/bin/python create_migration.py
```
3. **Review generated migration** in `alembic/versions/`
4. **Test migration**:
- Stop all apps
- Start writer app (migrations run automatically)
- Start reader app (waits for schema to be current)
5. **Commit migration** to version control
### Production Deployment
1. **Deploy new code** with migration scripts
2. **Start writer app** - Migrations run automatically
3. **Start reader app** - Waits for migrations, then starts
4. **Monitor logs** for migration success
## Troubleshooting
### Migration fails
Check logs in writer app for error details. To manually fix:
```bash
./env/bin/alembic current # Check current version
./env/bin/alembic history # View available versions
./env/bin/alembic upgrade head # Try manual upgrade
```
### Reader app won't start (timeout)
Check if writer app is running and has completed migrations:
```bash
./env/bin/alembic current
```
### Reset to clean state
⚠️ **Warning: This will lose all data**
```bash
rm packets.db # Or your database file
./env/bin/alembic upgrade head # Create fresh schema
```
## File Structure
```
meshview/
├── alembic.ini              # Alembic configuration
├── alembic/
│   ├── env.py               # Async-enabled migration runner
│   ├── script.py.mako       # Migration template
│   └── versions/            # Migration scripts
│       └── c88468b7ab0b_initial_migration.py
├── meshview/
│   ├── models.py            # SQLAlchemy models (source of truth)
│   ├── migrations.py        # Migration utilities
│   ├── mqtt_database.py     # Writer database connection
│   └── database.py          # Reader database connection
├── startdb.py               # Writer app (runs migrations)
├── main.py                  # Entry point for reader app
└── create_migration.py      # Helper script for creating migrations
```
## Configuration
Database URL is read from `config.ini`:
```ini
[database]
connection_string = sqlite+aiosqlite:///packets.db
```
Alembic automatically uses this configuration through `meshview/migrations.py`.
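The lookup can be sketched with the standard library alone. `database_url_from_config` is a hypothetical helper name; the real logic lives in `meshview/migrations.py`:

```python
import configparser

def database_url_from_config(path="config.ini",
                             default="sqlite+aiosqlite:///packets.db"):
    """Read [database] connection_string, falling back to the default database."""
    parser = configparser.ConfigParser()
    parser.read(path)  # silently skips missing files
    return parser.get("database", "connection_string", fallback=default)
```

The `fallback` keyword keeps the helper safe when either the file or the `[database]` section is missing.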
## Important Notes
1. **Always test migrations** in development before deploying to production
2. **Backup database** before running migrations in production
3. **Check for data loss** - Some migrations may require data migration logic
4. **Coordinate deployments** - Start writer before readers in multi-instance setups
5. **Monitor logs** during first startup after deployment
## Example Migrations
### Example 1: Generated Initial Migration
Here's what an auto-generated migration looks like (from comparing models to database):
```python
"""Initial migration
Revision ID: c88468b7ab0b
Revises:
Create Date: 2025-01-26 20:56:50.123456
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers
revision = 'c88468b7ab0b'
down_revision = None
branch_labels = None
depends_on = None
def upgrade() -> None:
# Upgrade operations
op.create_table('node',
sa.Column('id', sa.String(), nullable=False),
sa.Column('node_id', sa.BigInteger(), nullable=True),
# ... more columns
sa.PrimaryKeyConstraint('id')
)
def downgrade() -> None:
# Downgrade operations
op.drop_table('node')
```
### Example 2: Manual Migration Adding a New Table
We've included an example migration (`1717fa5c6545_add_example_table.py`) that demonstrates how to manually create a new table:
```python
"""Add example table
Revision ID: 1717fa5c6545
Revises: c88468b7ab0b
Create Date: 2025-10-26 20:59:04.347066
"""
from typing import Sequence, Union
from alembic import op
import sqlalchemy as sa
def upgrade() -> None:
"""Create example table with sample columns."""
op.create_table(
'example',
sa.Column('id', sa.Integer(), nullable=False, primary_key=True, autoincrement=True),
sa.Column('name', sa.String(length=100), nullable=False),
sa.Column('description', sa.Text(), nullable=True),
sa.Column('value', sa.Float(), nullable=True),
sa.Column('is_active', sa.Boolean(), nullable=False, server_default='1'),
sa.Column('created_at', sa.DateTime(), nullable=False,
server_default=sa.text('CURRENT_TIMESTAMP')),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.PrimaryKeyConstraint('id')
)
# Create an index on the name column for faster lookups
op.create_index('idx_example_name', 'example', ['name'])
def downgrade() -> None:
"""Remove example table."""
op.drop_index('idx_example_name', table_name='example')
op.drop_table('example')
```
**Key features demonstrated:**
- Various column types (Integer, String, Text, Float, Boolean, DateTime)
- Primary key with autoincrement
- Nullable and non-nullable columns
- Server defaults (for timestamps and booleans)
- Creating indexes
- Proper downgrade that reverses all changes
**To test this migration:**
```bash
# Apply the migration
./env/bin/alembic upgrade head
# Check it was applied
./env/bin/alembic current
# Verify table was created
sqlite3 packets.db "SELECT sql FROM sqlite_master WHERE type='table' AND name='example';"
# Roll back the migration
./env/bin/alembic downgrade -1
# Verify table was removed
sqlite3 packets.db "SELECT name FROM sqlite_master WHERE type='table' AND name='example';"
```
**To remove this example migration** (after testing):
```bash
# First make sure you're not on this revision
./env/bin/alembic downgrade c88468b7ab0b
# Then delete the migration file
rm alembic/versions/1717fa5c6545_add_example_table.py
```
## References
- [Alembic Documentation](https://alembic.sqlalchemy.org/)
- [SQLAlchemy Documentation](https://docs.sqlalchemy.org/)
- [Async SQLAlchemy](https://docs.sqlalchemy.org/en/20/orm/extensions/asyncio.html)


@@ -111,12 +111,29 @@ Returns a list of packets with optional filters.
---
### Notes
- All timestamps (`import_time`, `last_seen`) are returned in ISO 8601 format.
- `portnum` is an integer representing the packet type.
- `payload` is always a UTF-8 decoded string.
---
## 4. Channels API
### GET `/api/channels`
Returns a list of channels seen in a given time period.
**Query Parameters**
- `period_type` (optional, string): Time granularity (`hour` or `day`). Default: `hour`.
- `length` (optional, int): Number of periods to look back. Default: `24`.
**Response Example**
```json
{
"channels": ["LongFast", "MediumFast", "ShortFast"]
}
```
---
## 5. Statistics API
### GET `/api/stats`
Retrieve packet statistics aggregated by time periods, with optional filtering.
@@ -157,3 +174,171 @@ Retrieve packet statistics aggregated by time periods, with optional filtering.
// more entries...
]
}
```
---
## 6. Edges API
### GET `/api/edges`
Returns network edges (connections between nodes) based on traceroutes and neighbor info.
**Query Parameters**
- `type` (optional, string): Filter by edge type (`traceroute` or `neighbor`). If omitted, returns both types.
**Response Example**
```json
{
"edges": [
{
"from": 12345678,
"to": 87654321,
"type": "traceroute"
},
{
"from": 11111111,
"to": 22222222,
"type": "neighbor"
}
]
}
```
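For illustration, a client can fold this response into an adjacency map. A sketch only; the helper name is ours, not part of the API:

```python
from collections import defaultdict

def adjacency(edges):
    """Group /api/edges entries into {from_node: set(to_nodes)}."""
    adj = defaultdict(set)
    for edge in edges:
        adj[edge["from"]].add(edge["to"])
    return dict(adj)
```

Using a set per node deduplicates repeated links when both traceroute and neighbor edges connect the same pair.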
---
## 7. Configuration API
### GET `/api/config`
Returns the current site configuration (safe subset exposed to clients).
**Response Example**
```json
{
"site": {
"domain": "meshview.example.com",
"language": "en",
"title": "Bay Area Mesh",
"message": "Real time data from around the bay area",
"starting": "/chat",
"nodes": "true",
"conversations": "true",
"everything": "true",
"graphs": "true",
"stats": "true",
"net": "true",
"map": "true",
"top": "true",
"map_top_left_lat": 39.0,
"map_top_left_lon": -123.0,
"map_bottom_right_lat": 36.0,
"map_bottom_right_lon": -121.0,
"map_interval": 3,
"firehose_interval": 3,
"weekly_net_message": "Weekly Mesh check-in message.",
"net_tag": "#BayMeshNet",
"version": "2.0.8 ~ 10-22-25"
},
"mqtt": {
"server": "mqtt.bayme.sh",
"topics": ["msh/US/bayarea/#"]
},
"cleanup": {
"enabled": "false",
"days_to_keep": "14",
"hour": "2",
"minute": "0",
"vacuum": "false"
}
}
```
---
## 8. Language/Translations API
### GET `/api/lang`
Returns translation strings for the UI.
**Query Parameters**
- `lang` (optional, string): Language code (e.g., `en`, `es`). Defaults to site language setting.
- `section` (optional, string): Specific section to retrieve translations for.
**Response Example (full)**
```json
{
"chat": {
"title": "Chat",
"send": "Send"
},
"map": {
"title": "Map",
"zoom_in": "Zoom In"
}
}
```
**Response Example (section-specific)**
Request: `/api/lang?section=chat`
```json
{
"title": "Chat",
"send": "Send"
}
```
---
## 9. Health Check API
### GET `/health`
Health check endpoint for monitoring, load balancers, and orchestration systems.
**Response Example (Healthy)**
```json
{
"status": "healthy",
"timestamp": "2025-11-03T14:30:00.123456Z",
"version": "3.0.0",
"git_revision": "6416978",
"database": "connected",
"database_size": "853.03 MB",
"database_size_bytes": 894468096
}
```
**Response Example (Unhealthy)**
Status Code: `503 Service Unavailable`
```json
{
"status": "unhealthy",
"timestamp": "2025-11-03T14:30:00.123456Z",
"version": "2.0.8",
"git_revision": "6416978",
"database": "disconnected"
}
```
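A monitoring probe only needs the status code plus the `status` and `database` fields. A client-side sketch (the helper name is hypothetical):

```python
import json

def is_healthy(status_code, body):
    """True only for a 200 response reporting a connected database."""
    try:
        payload = json.loads(body)
    except ValueError:
        return False
    return (status_code == 200
            and payload.get("status") == "healthy"
            and payload.get("database") == "connected")
```

Treating unparseable bodies as unhealthy keeps the probe safe against half-started servers returning error pages.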
---
## 10. Version API
### GET `/version`
Returns detailed version information including semver, release date, and git revision.
**Response Example**
```json
{
"version": "2.0.8",
"release_date": "2025-10-22",
"git_revision": "6416978a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6q",
"git_revision_short": "6416978"
}
```
---
## Notes
- All timestamps (`import_time`, `last_seen`) are returned in ISO 8601 format.
- `portnum` is an integer representing the packet type.
- `payload` is always a UTF-8 decoded string.
- Node IDs are integers (e.g., `12345678`).
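The endpoints above share the same query-string conventions, so scripting against them needs little more than a URL builder. A sketch; the host and helper name are illustrative:

```python
from urllib.parse import urlencode

def api_url(base, endpoint, **params):
    """Build e.g. http://localhost:8081/api/stats?period_type=hour&length=24."""
    query = urlencode({k: v for k, v in params.items() if v is not None})
    return f"{base}{endpoint}" + (f"?{query}" if query else "")
```

Dropping `None` values lets callers pass optional parameters such as `type` for `/api/edges` without special-casing.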


@@ -0,0 +1,146 @@
# Database Changes With Alembic
This guide explains how to make database schema changes in MeshView using Alembic migrations.
## Overview
When you need to add, modify, or remove columns from database tables, you must:
1. Update the SQLAlchemy model
2. Create an Alembic migration
3. Let the system automatically apply the migration
## Step-by-Step Process
### 1. Update the Model
Edit `meshview/models.py` to add/modify the column in the appropriate model class:
```python
class Traceroute(Base):
    __tablename__ = "traceroute"
    id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True)
    # ... existing columns ...
    route_return: Mapped[bytes] = mapped_column(nullable=True)  # New column
```
### 2. Create an Alembic Migration
Generate a new migration file with a descriptive message:
```bash
./env/bin/alembic revision -m "add route_return to traceroute"
```
This creates a new file in `alembic/versions/` with a unique revision ID.
### 3. Fill in the Migration
Edit the generated migration file to implement the actual database changes:
```python
def upgrade() -> None:
    # Add route_return column to traceroute table
    with op.batch_alter_table('traceroute', schema=None) as batch_op:
        batch_op.add_column(sa.Column('route_return', sa.LargeBinary(), nullable=True))


def downgrade() -> None:
    # Remove route_return column from traceroute table
    with op.batch_alter_table('traceroute', schema=None) as batch_op:
        batch_op.drop_column('route_return')
```
### 4. Migration Runs Automatically
When you restart the application with `mvrun.py`:
1. The writer process (`startdb.py`) starts up
2. It checks if the database schema is up to date
3. If new migrations are pending, it runs them automatically
4. The reader process (web server) waits for migrations to complete before starting
**No manual migration command is needed** - the application handles this automatically on startup.
### 5. Commit Both Files
Add both files to git:
```bash
git add meshview/models.py
git add alembic/versions/ac311b3782a1_add_route_return_to_traceroute.py
git commit -m "Add route_return column to traceroute table"
```
## Important Notes
### SQLite Compatibility
Always use `batch_alter_table` for SQLite compatibility:
```python
with op.batch_alter_table('table_name', schema=None) as batch_op:
    batch_op.add_column(...)
```
SQLite has limited ALTER TABLE support, and `batch_alter_table` works around these limitations.
### Migration Process
- **Writer process** (`startdb.py`): Runs migrations on startup
- **Reader process** (web server in `main.py`): Waits for migrations to complete
- Migrations are checked and applied every time the application starts
- The system uses a migration status table to coordinate between processes
### Common Column Types
```python
# Integer
column: Mapped[int] = mapped_column(BigInteger, nullable=True)
# String
column: Mapped[str] = mapped_column(nullable=True)
# Bytes/Binary
column: Mapped[bytes] = mapped_column(nullable=True)
# DateTime
column: Mapped[datetime] = mapped_column(nullable=True)
# Boolean
column: Mapped[bool] = mapped_column(nullable=True)
# Float
column: Mapped[float] = mapped_column(nullable=True)
```
### Migration File Location
Migrations are stored in: `alembic/versions/`
Each migration file includes:
- Revision ID (unique identifier)
- Down revision (previous migration in chain)
- Create date
- `upgrade()` function (applies changes)
- `downgrade()` function (reverts changes)
## Troubleshooting
### Migration Not Running
If migrations don't run automatically:
1. Check that the database is writable
2. Look for errors in the startup logs
3. Verify the migration chain is correct (each migration references the previous one)
### Manual Migration (Not Recommended)
If you need to manually run migrations for debugging:
```bash
./env/bin/alembic upgrade head
```
However, the application normally handles this automatically.

docs/README.md Normal file

@@ -0,0 +1,14 @@
# Technical Documentation
This directory contains technical documentation for MeshView that goes beyond initial setup and basic usage.
These documents are intended for developers, contributors, and advanced users who need deeper insight into the system's architecture, database migrations, API endpoints, and internal workings.
## Contents
- [ALEMBIC_SETUP.md](ALEMBIC_SETUP.md) - Database migration setup and management
- [TIMESTAMP_MIGRATION.md](TIMESTAMP_MIGRATION.md) - Details on timestamp schema changes
- [API_Documentation.md](API_Documentation.md) - REST API endpoints and usage
- [CODE_IMPROVEMENTS.md](CODE_IMPROVEMENTS.md) - Suggested code improvements and refactoring ideas
For initial setup and basic usage instructions, please see the main [README.md](../README.md) in the root directory.

docs/TIMESTAMP_MIGRATION.md Normal file

@@ -0,0 +1,193 @@
# High-Resolution Timestamp Migration
This document describes the implementation of GitHub issue #55: storing high-resolution timestamps as integers in the database for improved performance and query efficiency.
## Overview
The meshview database now stores timestamps in two formats:
1. **TEXT format** (`import_time`): Human-readable ISO8601 format with microseconds (e.g., `2025-03-12 04:15:56.058038`)
2. **INTEGER format** (`import_time_us`): Microseconds since Unix epoch (1970-01-01 00:00:00 UTC)
The dual format approach provides:
- **Backward compatibility**: Existing `import_time` TEXT columns remain unchanged
- **Performance**: Fast integer comparisons and math operations
- **Precision**: Microsecond resolution for accurate timing
- **Efficiency**: Compact storage and fast indexed lookups
## Database Changes
### New Columns Added
Three tables have new `import_time_us` columns:
1. **packet.import_time_us** (INTEGER)
- Stores when the packet was imported into the database
- Indexed for fast queries
2. **packet_seen.import_time_us** (INTEGER)
- Stores when the packet_seen record was imported
- Indexed for performance
3. **traceroute.import_time_us** (INTEGER)
- Stores when the traceroute was imported
- Indexed for fast lookups
### New Indexes
The following indexes were created for optimal query performance:
```sql
CREATE INDEX idx_packet_import_time_us ON packet(import_time_us DESC);
CREATE INDEX idx_packet_from_node_time_us ON packet(from_node_id, import_time_us DESC);
CREATE INDEX idx_packet_seen_import_time_us ON packet_seen(import_time_us);
CREATE INDEX idx_traceroute_import_time_us ON traceroute(import_time_us);
```
## Migration Process
### For Existing Databases
Run the migration script to add the new columns and populate them from existing data:
```bash
python migrate_add_timestamp_us.py [database_path]
```
If no path is provided, it defaults to `packets.db` in the current directory.
The migration script:
1. Checks if migration is needed (idempotent)
2. Adds `import_time_us` columns to the three tables
3. Populates the new columns from existing `import_time` values
4. Creates indexes for optimal performance
5. Verifies the migration completed successfully
### For New Databases
New databases created with the updated schema will automatically include the `import_time_us` columns. The MQTT store module populates both columns when inserting new records.
## Code Changes
### Models (meshview/models.py)
The ORM models now include the new `import_time_us` fields:
```python
class Packet(Base):
    import_time: Mapped[datetime] = mapped_column(nullable=True)
    import_time_us: Mapped[int] = mapped_column(BigInteger, nullable=True)
```
### MQTT Store (meshview/mqtt_store.py)
The data ingestion logic now populates both timestamp columns using UTC time:
```python
now = datetime.datetime.now(datetime.timezone.utc)
now_us = int(now.timestamp() * 1_000_000)
# Both columns are populated
import_time=now,
import_time_us=now_us,
```
**Important**: All new timestamps use UTC (Coordinated Universal Time) for consistency across time zones.
## Using the New Timestamps
### Example Queries
**Query packets from the last 7 days:**
```sql
-- Old way (slower)
SELECT * FROM packet
WHERE import_time >= datetime('now', '-7 days');
-- New way (faster)
SELECT * FROM packet
WHERE import_time_us >= (strftime('%s', 'now', '-7 days') * 1000000);
```
**Query packets in a specific time range:**
```sql
SELECT * FROM packet
WHERE import_time_us BETWEEN 1759254380000000 AND 1759254390000000;
```
**Calculate time differences (in microseconds):**
```sql
SELECT
id,
(import_time_us - LAG(import_time_us) OVER (ORDER BY import_time_us)) / 1000000.0 as seconds_since_last
FROM packet
LIMIT 10;
```
### Converting Timestamps
**From datetime to microseconds (UTC):**
```python
import datetime
now = datetime.datetime.now(datetime.timezone.utc)
now_us = int(now.timestamp() * 1_000_000)
```
**From microseconds to datetime:**
```python
import datetime
timestamp_us = 1759254380813451
dt = datetime.datetime.fromtimestamp(timestamp_us / 1_000_000, tz=datetime.timezone.utc)
```
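The two conversions above should round-trip to within a microsecond. A quick sanity check, stdlib only and UTC throughout as the migration requires (`to_us`/`from_us` are just local names for the snippets above):

```python
import datetime

def to_us(dt):
    """datetime -> microseconds since the Unix epoch."""
    return int(dt.timestamp() * 1_000_000)

def from_us(us):
    """Microseconds since the epoch -> timezone-aware UTC datetime."""
    return datetime.datetime.fromtimestamp(us / 1_000_000, tz=datetime.timezone.utc)

now = datetime.datetime.now(datetime.timezone.utc)
# Float division can cost a fraction of a microsecond at current epoch values
assert abs(from_us(to_us(now)) - now) <= datetime.timedelta(microseconds=1)
```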
**In SQL queries:**
```sql
-- Datetime to microseconds
SELECT CAST((strftime('%s', import_time) || substr(import_time, 21, 6)) AS INTEGER);
-- Microseconds to datetime (approximate)
SELECT datetime(import_time_us / 1000000, 'unixepoch');
```
## Performance Benefits
The integer timestamp columns provide significant performance improvements:
1. **Faster comparisons**: Integer comparisons are much faster than string/datetime comparisons
2. **Smaller index size**: Integer indexes are more compact than datetime indexes
3. **Range queries**: BETWEEN operations on integers are highly optimized
4. **Math operations**: Easy to calculate time differences, averages, etc.
5. **Sorting**: Integer sorting is faster than datetime sorting
## Backward Compatibility
The original `import_time` TEXT columns remain unchanged:
- Existing code continues to work
- Human-readable timestamps still available
- Gradual migration to new columns possible
- No breaking changes for existing queries
## Future Work
Future improvements could include:
- Migrating queries to use `import_time_us` columns
- Deprecating the TEXT `import_time` columns (after transition period)
- Adding helper functions for timestamp conversion
- Creating views that expose both formats
## Testing
The migration was tested on a production database with:
- 132,466 packet records
- 1,385,659 packet_seen records
- 28,414 traceroute records
All records were successfully migrated with microsecond precision preserved.
## References
- GitHub Issue: #55 - Storing High-Resolution Timestamps in SQLite
- SQLite datetime functions: https://www.sqlite.org/lang_datefunc.html
- Python datetime module: https://docs.python.org/3/library/datetime.html

meshview/__version__.py Normal file

@@ -0,0 +1,57 @@
"""Version information for MeshView."""
import subprocess
from pathlib import Path
__version__ = "3.0.0"
__release_date__ = "2025-11-05"
def get_git_revision():
"""Get the current git revision hash."""
try:
repo_dir = Path(__file__).parent.parent
result = subprocess.run(
["git", "rev-parse", "HEAD"],
capture_output=True,
text=True,
check=True,
cwd=repo_dir,
)
return result.stdout.strip()
except (subprocess.CalledProcessError, FileNotFoundError):
return "unknown"
def get_git_revision_short():
"""Get the short git revision hash."""
try:
repo_dir = Path(__file__).parent.parent
result = subprocess.run(
["git", "rev-parse", "--short", "HEAD"],
capture_output=True,
text=True,
check=True,
cwd=repo_dir,
)
return result.stdout.strip()
except (subprocess.CalledProcessError, FileNotFoundError):
return "unknown"
def get_version_info():
"""Get complete version information."""
return {
"version": __version__,
"release_date": __release_date__,
"git_revision": get_git_revision(),
"git_revision_short": get_git_revision_short(),
}
# Cache git info at import time for performance
_git_revision = get_git_revision()
_git_revision_short = get_git_revision_short()
# Full version string for display
__version_string__ = f"{__version__} ~ {__release_date__}"


@@ -1,110 +1,141 @@
{
"base": {
"conversations": "Conversations",
"nodes": "Nodes",
"everything": "See Everything",
"graph": "Mesh Graphs",
"net": "Weekly Net",
"map": "Live Map",
"stats": "Stats",
"top": "Top Traffic Nodes",
"footer": "Visit <strong><a href=\"https://github.com/pablorevilla-meshtastic/meshview\">Meshview</a></strong> on GitHub",
"node id": "Node id",
"go to node": "Go to Node",
"all": "All",
"portnum_options": {
"1": "Text Message",
"3": "Position",
"4": "Node Info",
"67": "Telemetry",
"70": "Traceroute",
"71": "Neighbor Info"
}
{
"base": {
"chat": "Chat",
"nodes": "Nodes",
"everything": "See Everything",
"graphs": "Mesh Graphs",
"net": "Weekly Net",
"map": "Live Map",
"stats": "Stats",
"top": "Top Traffic Nodes",
"footer": "Visit <strong><a href=\"https://github.com/pablorevilla-meshtastic/meshview\">Meshview</a></strong> on GitHub",
"node id": "Node id",
"go to node": "Go to Node",
"all": "All",
"portnum_options": {
"1": "Text Message",
"3": "Position",
"4": "Node Info",
"67": "Telemetry",
"70": "Traceroute",
"71": "Neighbor Info"
}
},
"chat": {
"replying_to": "Replying to:",
"view_packet_details": "View packet details"
},
"nodelist": {
"search_placeholder": "Search by name or ID...",
"all_roles": "All Roles",
"all_channels": "All Channels",
"all_hw_models": "All HW Models",
"all_firmware": "All Firmware",
"export_csv": "Export CSV",
"clear_filters": "Clear Filters",
"showing": "Showing",
"nodes": "nodes",
"short": "Short",
"long_name": "Long Name",
"hw_model": "HW Model",
"firmware": "Firmware",
"role": "Role",
"last_lat": "Last Latitude",
"last_long": "Last Longitude",
"channel": "Channel",
"last_update": "Last Update",
"loading_nodes": "Loading nodes...",
"no_nodes": "No nodes found",
"error_nodes": "Error loading nodes"
},
"chat": {
"replying_to": "Replying to:",
"view_packet_details": "View packet details"
},
"nodelist": {
"search_placeholder": "Search by name or ID...",
"all_roles": "All Roles",
"all_channels": "All Channels",
"all_hw_models": "All HW Models",
"all_firmware": "All Firmware",
"export_csv": "Export CSV",
"clear_filters": "Clear Filters",
"showing": "Showing",
"nodes": "nodes",
"short": "Short",
"long_name": "Long Name",
"hw_model": "HW Model",
"firmware": "Firmware",
"role": "Role",
"last_lat": "Last Latitude",
"last_long": "Last Longitude",
"channel": "Channel",
"last_update": "Last Update",
"loading_nodes": "Loading nodes...",
"no_nodes": "No nodes found",
"error_nodes": "Error loading nodes"
},
"net": {
"number_of_checkins": "Number of Check-ins:",
"view_packet_details": "View packet details",
"view_all_packets_from_node": "View all packets from this node",
"no_packets_found": "No packets found."
},
"map": {
"channel": "Channel:",
"model": "Model:",
"role": "Role:",
"last_seen": "Last seen:",
"firmware": "Firmware:",
"show_routers_only": "Show Routers Only",
"share_view": "Share This View"
},
"stats":
{
"mesh_stats_summary": "Mesh Statistics - Summary (all available in Database)",
"total_nodes": "Total Nodes",
"total_packets": "Total Packets",
"total_packets_seen": "Total Packets Seen",
"packets_per_day_all": "Packets per Day - All Ports (Last 14 Days)",
"packets_per_day_text": "Packets per Day - Text Messages (Port 1, Last 14 Days)",
"packets_per_hour_all": "Packets per Hour - All Ports",
"packets_per_hour_text": "Packets per Hour - Text Messages (Port 1)",
"packet_types_last_24h": "Packet Types - Last 24 Hours",
"hardware_breakdown": "Hardware Breakdown",
"role_breakdown": "Role Breakdown",
"channel_breakdown": "Channel Breakdown",
"expand_chart": "Expand Chart",
"export_csv": "Export CSV",
"all_channels": "All Channels",
"node_id": "Node ID"
},
"top":
{
"top_traffic_nodes": "Top Traffic Nodes (last 24 hours)",
"chart_description_1": "This chart shows a bell curve (normal distribution) based on the total \"Times Seen\" values for all nodes. It helps visualize how frequently nodes are heard, relative to the average.",
"chart_description_2": "This \"Times Seen\" value is the closest that we can get to Mesh utilization by node.",
"mean_label": "Mean:",
"stddev_label": "Standard Deviation:",
"long_name": "Long Name",
"short_name": "Short Name",
"channel": "Channel",
"packets_sent": "Packets Sent",
"times_seen": "Times Seen",
"seen_percent": "Seen % of Mean",
"no_nodes": "No top traffic nodes available."
},
"nodegraph":
{
"channel_label": "Channel:",
"search_node_placeholder": "Search node...",
"search_button": "Search",
"long_name_label": "Long Name:",
"short_name_label": "Short Name:",
"role_label": "Role:",
"hw_model_label": "Hardware Model:",
"node_not_found": "Node not found in current channel!"
},
"firehose":
{
"live_feed": "📡 Live Feed",
"pause": "Pause",
"resume": "Resume",
"time": "Time",
"packet_id": "Packet ID",
"from": "From",
"to": "To",
"port": "Port",
"links": "Links",
"unknown_app": "UNKNOWN APP",
"text_message": "Text Message",
"position": "Position",
"node_info": "Node Info",
"routing": "Routing",
"administration": "Administration",
"waypoint": "Waypoint",
"store_forward": "Store Forward",
"telemetry": "Telemetry",
"trace_route": "Trace Route",
"neighbor_info": "Neighbor Info",
"direct_to_mqtt": "direct to MQTT",
"all": "All",
"map": "Map",
"graph": "Graph"
}
}


@@ -108,5 +108,35 @@
"other": "Otro",
"unknown": "Desconocido",
"node_not_found": "¡Nodo no encontrado en el canal actual!"
}
},
"firehose":
{
"live_feed": "📡 Flujo en Vivo",
"pause": "Pausar",
"resume": "Continuar",
"time": "Hora",
"packet_id": "ID del Paquete",
"from": "De",
"to": "Para",
"port": "Puerto",
"links": "Enlaces",
"unknown_app": "APLICACIÓN DESCONOCIDA",
"text_message": "Mensaje de Texto",
"position": "Posición",
"node_info": "Información del Nodo",
"routing": "Enrutamiento",
"administration": "Administración",
"waypoint": "Punto de Ruta",
"store_forward": "Almacenar y Reenviar",
"telemetry": "Telemetría",
"trace_route": "Rastreo de Ruta",
"neighbor_info": "Información de Vecinos",
"direct_to_mqtt": "Directo a MQTT",
"all": "Todos",
"map": "Mapa",
"graph": "Gráfico"
}
}

meshview/migrations.py (new file, 243 lines)

@@ -0,0 +1,243 @@
"""
Database migration management for MeshView.
This module provides utilities for:
- Running Alembic migrations programmatically
- Checking database schema versions
- Coordinating migrations between writer and reader apps
"""
import asyncio
import logging
from pathlib import Path
from alembic.config import Config
from alembic.runtime.migration import MigrationContext
from alembic.script import ScriptDirectory
from sqlalchemy import text
from sqlalchemy.ext.asyncio import AsyncEngine
from alembic import command
logger = logging.getLogger(__name__)
def get_alembic_config(database_url: str) -> Config:
"""
Get Alembic configuration with the database URL set.
Args:
database_url: SQLAlchemy database connection string
Returns:
Configured Alembic Config object
"""
# Get the alembic.ini path (in project root)
alembic_ini = Path(__file__).parent.parent / "alembic.ini"
config = Config(str(alembic_ini))
config.set_main_option("sqlalchemy.url", database_url)
return config
async def get_current_revision(engine: AsyncEngine) -> str | None:
"""
Get the current database schema revision.
Args:
engine: Async SQLAlchemy engine
Returns:
Current revision string, or None if no migrations applied
"""
async with engine.connect() as connection:
def _get_revision(conn):
context = MigrationContext.configure(conn)
return context.get_current_revision()
revision = await connection.run_sync(_get_revision)
return revision
async def get_head_revision(database_url: str) -> str | None:
"""
Get the head (latest) revision from migration scripts.
Args:
database_url: Database connection string
Returns:
Head revision string, or None if no migrations exist
"""
config = get_alembic_config(database_url)
script_dir = ScriptDirectory.from_config(config)
head = script_dir.get_current_head()
return head
async def is_database_up_to_date(engine: AsyncEngine, database_url: str) -> bool:
"""
Check if database is at the latest schema version.
Args:
engine: Async SQLAlchemy engine
database_url: Database connection string
Returns:
True if database is up to date, False otherwise
"""
current = await get_current_revision(engine)
head = await get_head_revision(database_url)
# If there are no migrations yet, consider it up to date
if head is None:
return True
return current == head
def run_migrations(database_url: str) -> None:
"""
Run all pending migrations to bring database up to date.
This is a synchronous operation that runs Alembic migrations.
Should be called by the writer app on startup.
Args:
database_url: Database connection string
"""
logger.info("Running database migrations...")
import sys
sys.stdout.flush()
config = get_alembic_config(database_url)
try:
# Run migrations to head
logger.info("Calling alembic upgrade command...")
sys.stdout.flush()
command.upgrade(config, "head")
logger.info("Database migrations completed successfully")
sys.stdout.flush()
except Exception as e:
logger.error(f"Error running migrations: {e}")
raise
async def wait_for_migrations(
engine: AsyncEngine, database_url: str, max_retries: int = 30, retry_delay: int = 2
) -> bool:
"""
Wait for database migrations to complete.
This should be called by the reader app to wait until
the database schema is up to date before proceeding.
Args:
engine: Async SQLAlchemy engine
database_url: Database connection string
max_retries: Maximum number of retry attempts
retry_delay: Seconds to wait between retries
Returns:
True if database is up to date, False if max retries exceeded
"""
for attempt in range(max_retries):
try:
if await is_database_up_to_date(engine, database_url):
logger.info("Database schema is up to date")
return True
current = await get_current_revision(engine)
head = await get_head_revision(database_url)
logger.info(
f"Database schema not up to date (current: {current}, head: {head}). "
f"Waiting... (attempt {attempt + 1}/{max_retries})"
)
await asyncio.sleep(retry_delay)
except Exception as e:
logger.warning(
f"Error checking database version (attempt {attempt + 1}/{max_retries}): {e}"
)
await asyncio.sleep(retry_delay)
logger.error(f"Database schema not up to date after {max_retries} attempts")
return False
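
The retry loop in `wait_for_migrations` is a generic poll-until-true pattern: try a check, treat failures as "not ready yet", sleep, and give up after a bounded number of attempts. A self-contained sketch of that pattern (the `poll_until` name is illustrative, not part of MeshView):

```python
import asyncio

async def poll_until(check, max_retries=30, retry_delay=0.01):
    """Retry an async predicate until it returns True or retries are exhausted."""
    for _ in range(max_retries):
        try:
            if await check():
                return True
        except Exception:
            pass  # treat errors as "not ready yet", as wait_for_migrations does
        await asyncio.sleep(retry_delay)
    return False

async def demo():
    state = {"polls": 0}

    async def ready():
        state["polls"] += 1
        return state["polls"] >= 3  # succeeds on the third attempt

    return await poll_until(ready)
```

Bounding the retries matters here: if the writer app never runs its migrations, the reader app fails fast with a clear log line instead of hanging forever.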
async def create_migration_status_table(engine: AsyncEngine) -> None:
"""
Create a simple status table for migration coordination.
This table can be used to signal when migrations are in progress.
Args:
engine: Async SQLAlchemy engine
"""
async with engine.begin() as conn:
await conn.execute(
text("""
CREATE TABLE IF NOT EXISTS migration_status (
id INTEGER PRIMARY KEY CHECK (id = 1),
in_progress BOOLEAN NOT NULL DEFAULT 0,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
)
""")
)
# Insert initial row if not exists
await conn.execute(
text("""
INSERT OR IGNORE INTO migration_status (id, in_progress)
VALUES (1, 0)
""")
)
async def set_migration_in_progress(engine: AsyncEngine, in_progress: bool) -> None:
"""
Set the migration in-progress flag.
Args:
engine: Async SQLAlchemy engine
in_progress: True if migration is in progress, False otherwise
"""
async with engine.begin() as conn:
await conn.execute(
text("""
UPDATE migration_status
SET in_progress = :in_progress,
updated_at = CURRENT_TIMESTAMP
WHERE id = 1
"""),
{"in_progress": in_progress},
)
async def is_migration_in_progress(engine: AsyncEngine) -> bool:
"""
Check if a migration is currently in progress.
Args:
engine: Async SQLAlchemy engine
Returns:
True if migration is in progress, False otherwise
"""
try:
async with engine.connect() as conn:
result = await conn.execute(
text("SELECT in_progress FROM migration_status WHERE id = 1")
)
row = result.fetchone()
return bool(row[0]) if row else False
except Exception:
# If table doesn't exist or query fails, assume no migration in progress
return False
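
The status-table helpers above rely on SQLite's `INSERT OR IGNORE` to create exactly one coordination row. A minimal stdlib sketch of that singleton-row pattern (table name reused purely for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE IF NOT EXISTS migration_status ("
    " id INTEGER PRIMARY KEY CHECK (id = 1),"
    " in_progress BOOLEAN NOT NULL DEFAULT 0)"
)
# The first insert creates the singleton row; the second conflicts on the
# primary key and is silently skipped, so the flag keeps its original value.
conn.execute("INSERT OR IGNORE INTO migration_status (id, in_progress) VALUES (1, 0)")
conn.execute("INSERT OR IGNORE INTO migration_status (id, in_progress) VALUES (1, 1)")
flag = conn.execute("SELECT in_progress FROM migration_status WHERE id = 1").fetchone()[0]
```

The `CHECK (id = 1)` constraint guarantees the table can never hold more than one row, which keeps the reader's `WHERE id = 1` lookup unambiguous.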


@@ -23,8 +23,14 @@ class Node(Base):
last_long: Mapped[int] = mapped_column(BigInteger, nullable=True)
channel: Mapped[str] = mapped_column(nullable=True)
last_update: Mapped[datetime] = mapped_column(nullable=True)
first_seen_us: Mapped[int] = mapped_column(BigInteger, nullable=True)
last_seen_us: Mapped[int] = mapped_column(BigInteger, nullable=True)
__table_args__ = (Index("idx_node_node_id", "node_id"),)
__table_args__ = (
Index("idx_node_node_id", "node_id"),
Index("idx_node_first_seen_us", "first_seen_us"),
Index("idx_node_last_seen_us", "last_seen_us"),
)
def to_dict(self):
return {
@@ -50,14 +56,17 @@ class Packet(Base):
)
payload: Mapped[bytes] = mapped_column(nullable=True)
import_time: Mapped[datetime] = mapped_column(nullable=True)
import_time_us: Mapped[int] = mapped_column(BigInteger, nullable=True)
channel: Mapped[str] = mapped_column(nullable=True)
__table_args__ = (
Index("idx_packet_from_node_id", "from_node_id"),
Index("idx_packet_to_node_id", "to_node_id"),
Index("idx_packet_import_time", desc("import_time")),
Index("idx_packet_import_time_us", desc("import_time_us")),
# Composite index for /top endpoint performance - filters by from_node_id AND import_time
Index("idx_packet_from_node_time", "from_node_id", desc("import_time")),
Index("idx_packet_from_node_time_us", "from_node_id", desc("import_time_us")),
)
@@ -78,11 +87,13 @@ class PacketSeen(Base):
rx_rssi: Mapped[int] = mapped_column(nullable=True)
topic: Mapped[str] = mapped_column(nullable=True)
import_time: Mapped[datetime] = mapped_column(nullable=True)
import_time_us: Mapped[int] = mapped_column(BigInteger, nullable=True)
__table_args__ = (
Index("idx_packet_seen_node_id", "node_id"),
# Index for /top endpoint performance - JOIN on packet_id
Index("idx_packet_seen_packet_id", "packet_id"),
Index("idx_packet_seen_import_time_us", "import_time_us"),
)
@@ -98,5 +109,10 @@ class Traceroute(Base):
done: Mapped[bool] = mapped_column(nullable=True)
route: Mapped[bytes] = mapped_column(nullable=True)
import_time: Mapped[datetime] = mapped_column(nullable=True)
route_return: Mapped[bytes] = mapped_column(nullable=True)
import_time_us: Mapped[int] = mapped_column(BigInteger, nullable=True)
__table_args__ = (Index("idx_traceroute_import_time", "import_time"),)
__table_args__ = (
Index("idx_traceroute_import_time", "import_time"),
Index("idx_traceroute_import_time_us", "import_time_us"),
)
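
The new `*_us` columns store integer microseconds since the Unix epoch, the same quantity the ingest code computes as `int(now.timestamp() * 1_000_000)`. A self-contained sketch of the round trip (helper names are illustrative, not part of the models):

```python
from datetime import datetime, timezone

def to_epoch_us(dt: datetime) -> int:
    """Aware datetime -> integer microseconds since the Unix epoch."""
    return int(dt.timestamp() * 1_000_000)

def from_epoch_us(us: int) -> datetime:
    """Integer microseconds -> aware UTC datetime."""
    return datetime.fromtimestamp(us / 1_000_000, tz=timezone.utc)

dt = datetime(2024, 1, 2, 3, 4, 5, tzinfo=timezone.utc)
us = to_epoch_us(dt)
```

Keeping a plain `BigInteger` alongside the old `import_time` column means range filters and the new indexes compare integers rather than parsed datetimes, which is presumably why both columns coexist during the migration.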


@@ -37,6 +37,9 @@ async def process_envelope(topic, env):
await session.execute(select(Node).where(Node.node_id == node_id))
).scalar_one_or_none()
now = datetime.datetime.now(datetime.UTC)
now_us = int(now.timestamp() * 1_000_000)
if node:
node.node_id = node_id
node.long_name = map_report.long_name
@@ -47,7 +50,10 @@ async def process_envelope(topic, env):
node.last_lat = map_report.latitude_i
node.last_long = map_report.longitude_i
node.firmware = map_report.firmware_version
node.last_update = datetime.datetime.now()
node.last_update = now
node.last_seen_us = now_us
if node.first_seen_us is None:
node.first_seen_us = now_us
else:
node = Node(
id=user_id,
@@ -60,7 +66,9 @@ async def process_envelope(topic, env):
firmware=map_report.firmware_version,
last_lat=map_report.latitude_i,
last_long=map_report.longitude_i,
last_update=datetime.datetime.now(),
last_update=now,
first_seen_us=now_us,
last_seen_us=now_us,
)
session.add(node)
except Exception as e:
@@ -80,6 +88,8 @@ async def process_envelope(topic, env):
if not packet:
# FIXME: Not Used
# new_packet = True
now = datetime.datetime.now(datetime.UTC)
now_us = int(now.timestamp() * 1_000_000)
stmt = (
sqlite_insert(Packet)
.values(
@@ -88,7 +98,8 @@ async def process_envelope(topic, env):
from_node_id=getattr(env.packet, "from"),
to_node_id=env.packet.to,
payload=env.packet.SerializeToString(),
import_time=datetime.datetime.now(),
import_time=now,
import_time_us=now_us,
channel=env.channel_id,
)
.on_conflict_do_nothing(index_elements=["id"])
@@ -112,6 +123,8 @@ async def process_envelope(topic, env):
)
)
if not result.scalar_one_or_none():
now = datetime.datetime.now(datetime.UTC)
now_us = int(now.timestamp() * 1_000_000)
seen = PacketSeen(
packet_id=env.packet.id,
node_id=int(env.gateway_id[1:], 16),
@@ -122,7 +135,8 @@ async def process_envelope(topic, env):
hop_limit=env.packet.hop_limit,
hop_start=env.packet.hop_start,
topic=topic,
import_time=datetime.datetime.now(),
import_time=now,
import_time_us=now_us,
)
session.add(seen)
@@ -153,6 +167,9 @@ async def process_envelope(topic, env):
await session.execute(select(Node).where(Node.id == user.id))
).scalar_one_or_none()
now = datetime.datetime.now(datetime.UTC)
now_us = int(now.timestamp() * 1_000_000)
if node:
node.node_id = node_id
node.long_name = user.long_name
@@ -160,7 +177,10 @@ async def process_envelope(topic, env):
node.hw_model = hw_model
node.role = role
node.channel = env.channel_id
node.last_update = datetime.datetime.now()
node.last_update = now
node.last_seen_us = now_us
if node.first_seen_us is None:
node.first_seen_us = now_us
else:
node = Node(
id=user.id,
@@ -170,7 +190,9 @@ async def process_envelope(topic, env):
hw_model=hw_model,
role=role,
channel=env.channel_id,
last_update=datetime.datetime.now(),
last_update=now,
first_seen_us=now_us,
last_seen_us=now_us,
)
session.add(node)
except Exception as e:
@@ -187,29 +209,30 @@ async def process_envelope(topic, env):
await session.execute(select(Node).where(Node.node_id == from_node_id))
).scalar_one_or_none()
if node:
now = datetime.datetime.now(datetime.UTC)
now_us = int(now.timestamp() * 1_000_000)
node.last_lat = position.latitude_i
node.last_long = position.longitude_i
node.last_update = now
node.last_seen_us = now_us
if node.first_seen_us is None:
node.first_seen_us = now_us
session.add(node)
# --- TRACEROUTE_APP (no conflict handling, normal insert)
if env.packet.decoded.portnum == PortNum.TRACEROUTE_APP:
packet_id = None
if env.packet.decoded.want_response:
packet_id = env.packet.id
else:
result = await session.execute(
select(Packet).where(Packet.id == env.packet.decoded.request_id)
)
if result.scalar_one_or_none():
packet_id = env.packet.decoded.request_id
packet_id = env.packet.id
if packet_id is not None:
now = datetime.datetime.now(datetime.UTC)
now_us = int(now.timestamp() * 1_000_000)
session.add(
Traceroute(
packet_id=packet_id,
route=env.packet.decoded.payload,
done=not env.packet.decoded.want_response,
gateway_node_id=int(env.gateway_id[1:], 16),
import_time=datetime.datetime.now(),
import_time=now,
import_time_us=now_us,
)
)


@@ -1,9 +1,9 @@
from datetime import datetime, timedelta
from sqlalchemy import func, select, text
from sqlalchemy import and_, func, or_, select, text
from sqlalchemy.orm import lazyload
from meshview import database
from meshview import database, models
from meshview.models import Node, Packet, PacketSeen, Traceroute
@@ -24,27 +24,65 @@ async def get_fuzzy_nodes(query):
return result.scalars()
async def get_packets(node_id=None, portnum=None, after=None, before=None, limit=None):
async def get_packets(
from_node_id=None,
to_node_id=None,
node_id=None, # legacy: match either from OR to
portnum=None,
after=None,
contains=None, # NEW: SQL-level substring match
limit=50,
):
"""
SQLAlchemy 2.0 async ORM version.
Supports strict from/to/node filtering, substring payload search,
portnum, after, and limit.
"""
async with database.async_session() as session:
q = select(Packet)
stmt = select(models.Packet)
conditions = []
if node_id:
q = q.where((Packet.from_node_id == node_id) | (Packet.to_node_id == node_id))
if portnum:
q = q.where(Packet.portnum == portnum)
if after:
q = q.where(Packet.import_time > after)
if before:
q = q.where(Packet.import_time < before)
# Strict FROM filter
if from_node_id is not None:
conditions.append(models.Packet.from_node_id == from_node_id)
q = q.order_by(Packet.import_time.desc())
# Strict TO filter
if to_node_id is not None:
conditions.append(models.Packet.to_node_id == to_node_id)
if limit is not None:
q = q.limit(limit)
# Legacy node ID filter: match either direction
if node_id is not None:
conditions.append(
or_(models.Packet.from_node_id == node_id, models.Packet.to_node_id == node_id)
)
result = await session.execute(q)
packets = list(result.scalars())
return packets
# Port filter
if portnum is not None:
conditions.append(models.Packet.portnum == portnum)
# Timestamp filter
if after is not None:
conditions.append(models.Packet.import_time_us > after)
# Case-insensitive substring search on UTF-8 payload (stored as BLOB)
if contains:
contains_lower = contains.lower()
conditions.append(func.lower(models.Packet.payload).like(f"%{contains_lower}%"))
# Apply all conditions
if conditions:
stmt = stmt.where(and_(*conditions))
# Order newest → oldest
stmt = stmt.order_by(models.Packet.import_time_us.desc())
# Apply limit
stmt = stmt.limit(limit)
# Execute query
result = await session.execute(stmt)
return result.scalars().all()
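
The filter semantics above (strict `from_node_id`/`to_node_id` vs. the legacy either-direction `node_id`) can be illustrated with a plain-Python predicate, independent of SQLAlchemy (the `matches` helper is illustrative only):

```python
def matches(packet, from_node_id=None, to_node_id=None, node_id=None):
    """Plain-Python mirror of the WHERE clause built in get_packets."""
    if from_node_id is not None and packet["from"] != from_node_id:
        return False
    if to_node_id is not None and packet["to"] != to_node_id:
        return False
    # Legacy filter: match either direction
    if node_id is not None and node_id not in (packet["from"], packet["to"]):
        return False
    return True

pkt = {"from": 1, "to": 2}
```

As in the query, omitted filters impose no constraint, and all supplied filters are AND-ed together.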
async def get_packets_from(node_id=None, portnum=None, since=None, limit=500):
@@ -68,21 +106,6 @@ async def get_packet(packet_id):
return result.scalar_one_or_none()
async def get_uplinked_packets(node_id, portnum=None):
async with database.async_session() as session:
q = (
select(Packet)
.join(PacketSeen)
.where(PacketSeen.node_id == node_id)
.order_by(Packet.import_time.desc())
.limit(500)
)
if portnum:
q = q.where(Packet.portnum == portnum)
result = await session.execute(q)
return result.scalars()
async def get_packets_seen(packet_id):
async with database.async_session() as session:
result = await session.execute(
@@ -145,23 +168,6 @@ async def get_mqtt_neighbors(since):
return result
# We count the total number of packets
# This is to be used by /stats in web.py
async def get_total_packet_count():
async with database.async_session() as session:
q = select(func.count(Packet.id)) # Use SQLAlchemy's func to count packets
result = await session.execute(q)
return result.scalar() # Return the total count of packets
# We count the total amount of seen packets
async def get_total_packet_seen_count():
async with database.async_session() as session:
q = select(func.count(PacketSeen.node_id))  # Use SQLAlchemy's func to count seen packets
result = await session.execute(q)
return result.scalar()  # Return the total count of seen packets
async def get_total_node_count(channel: str = None) -> int:
try:
async with database.async_session() as session:
@@ -356,27 +362,155 @@ async def get_packet_stats(
async def get_channels_in_period(period_type: str = "hour", length: int = 24):
"""
Returns a list of distinct channels used in packets over a given period.
Returns a sorted list of distinct channels used in packets over a given period.
period_type: "hour" or "day"
length: number of hours or days to look back
"""
now = datetime.now()
now_us = int(datetime.now().timestamp() * 1_000_000)
if period_type == "hour":
start_time = now - timedelta(hours=length)
delta_us = length * 3600 * 1_000_000
elif period_type == "day":
start_time = now - timedelta(days=length)
delta_us = length * 86400 * 1_000_000
else:
raise ValueError("period_type must be 'hour' or 'day'")
start_us = now_us - delta_us
async with database.async_session() as session:
q = (
stmt = (
select(Packet.channel)
.where(Packet.import_time >= start_time)
.where(Packet.import_time_us >= start_us)
.distinct()
.order_by(Packet.channel)
)
result = await session.execute(q)
channels = [row[0] for row in result if row[0] is not None]
result = await session.execute(stmt)
channels = [ch for ch in result.scalars().all() if ch is not None]
return channels
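
The hour/day look-back arithmetic above is repeated in the counting functions that follow; factored out as a pure helper it is easy to sanity-check (the `window_start_us` name is illustrative, not part of the module):

```python
def window_start_us(now_us: int, period_type: str, length: int) -> int:
    """Start of the look-back window, in microseconds since the Unix epoch."""
    if period_type == "hour":
        return now_us - length * 3600 * 1_000_000
    if period_type == "day":
        return now_us - length * 86400 * 1_000_000
    raise ValueError("period_type must be 'hour' or 'day'")
```

In particular, a 24-hour window and a 1-day window must yield the same start timestamp, which makes the two period types interchangeable at their boundary.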
async def get_total_packet_count(
period_type: str | None = None,
length: int | None = None,
channel: str | None = None,
from_node: int | None = None,
to_node: int | None = None,
):
"""
Count total packets, with ALL filters optional.
If no filters -> return ALL packets ever.
Uses import_time_us (microseconds).
"""
# CASE 1: no filters -> count everything
if (
period_type is None
and length is None
and channel is None
and from_node is None
and to_node is None
):
async with database.async_session() as session:
q = select(func.count(Packet.id))
res = await session.execute(q)
return res.scalar() or 0
# CASE 2: filtered mode -> compute time window using import_time_us
now_us = int(datetime.now().timestamp() * 1_000_000)
if period_type is None:
period_type = "day"
if length is None:
length = 1
if period_type == "hour":
start_time_us = now_us - (length * 3600 * 1_000_000)
elif period_type == "day":
start_time_us = now_us - (length * 86400 * 1_000_000)
else:
raise ValueError("period_type must be 'hour' or 'day'")
async with database.async_session() as session:
q = select(func.count(Packet.id)).where(Packet.import_time_us >= start_time_us)
if channel:
q = q.where(func.lower(Packet.channel) == channel.lower())
if from_node:
q = q.where(Packet.from_node_id == from_node)
if to_node:
q = q.where(Packet.to_node_id == to_node)
res = await session.execute(q)
return res.scalar() or 0
async def get_total_packet_seen_count(
packet_id: int | None = None,
period_type: str | None = None,
length: int | None = None,
channel: str | None = None,
from_node: int | None = None,
to_node: int | None = None,
):
"""
Count total PacketSeen rows.
- If packet_id is provided -> count only that packet's seen entries.
- Otherwise match EXACT SAME FILTERS as get_total_packet_count.
Uses import_time_us for time window.
"""
# SPECIAL CASE: direct packet_id lookup
if packet_id is not None:
async with database.async_session() as session:
q = select(func.count(PacketSeen.packet_id)).where(PacketSeen.packet_id == packet_id)
res = await session.execute(q)
return res.scalar() or 0
# No filters -> return ALL seen entries
if (
period_type is None
and length is None
and channel is None
and from_node is None
and to_node is None
):
async with database.async_session() as session:
q = select(func.count(PacketSeen.packet_id))
res = await session.execute(q)
return res.scalar() or 0
# Compute time window
now_us = int(datetime.now().timestamp() * 1_000_000)
if period_type is None:
period_type = "day"
if length is None:
length = 1
if period_type == "hour":
start_time_us = now_us - (length * 3600 * 1_000_000)
elif period_type == "day":
start_time_us = now_us - (length * 86400 * 1_000_000)
else:
raise ValueError("period_type must be 'hour' or 'day'")
# JOIN Packet so we can apply identical filters
async with database.async_session() as session:
q = (
select(func.count(PacketSeen.packet_id))
.join(Packet, Packet.id == PacketSeen.packet_id)
.where(Packet.import_time_us >= start_time_us)
)
if channel:
q = q.where(func.lower(Packet.channel) == channel.lower())
if from_node:
q = q.where(Packet.from_node_id == from_node)
if to_node:
q = q.where(Packet.to_node_id == to_node)
res = await session.execute(q)
return res.scalar() or 0


@@ -6,11 +6,7 @@
<meta name="viewport" content="width=device-width, initial-scale=1">
<!-- Scripts -->
<script src="https://unpkg.com/htmx.org@1.9.11" crossorigin="anonymous"></script>
<script src="https://unpkg.com/htmx.org@1.9.11/dist/ext/sse.js" crossorigin="anonymous"></script>
<script src="https://cdn.jsdelivr.net/npm/bootstrap@5.3.3/dist/js/bootstrap.bundle.min.js" crossorigin="anonymous"></script>
<!-- Stylesheets -->
<link href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.3/dist/css/bootstrap.min.css" rel="stylesheet" crossorigin="anonymous">
<link rel="stylesheet" href="https://unpkg.com/leaflet@1.9.4/dist/leaflet.css" crossorigin=""/>
<script src="https://unpkg.com/leaflet@1.9.4/dist/leaflet.js" crossorigin=""></script>
@@ -25,181 +21,182 @@
body.ready {
opacity: 1;
}
.htmx-indicator { opacity: 0; transition: opacity 500ms ease-in; }
.htmx-request .htmx-indicator { opacity: 1; }
#search_form { z-index: 4000; }
#details_map { width: 100%; height: 500px; }
.htmx-indicator {
opacity: 0;
transition: opacity 500ms ease-in;
}
.htmx-request .htmx-indicator {
opacity: 1;
}
#search_form {
z-index: 4000;
}
#details_map {
width: 100%;
height: 500px;
}
{% block css %}{% endblock %}
</style>
</head>
<body>
<br>
<div style="text-align:center" id="site-header"></div>
<div style="text-align:center" id="site-message"></div>
<div style="text-align:center" id="site-menu"></div>
<br>
<div style="text-align:center" id="site-header"></div>
<div style="text-align:center" id="site-message"></div>
<div style="text-align:center" id="site-menu"></div>
<!-- Search Form -->
<form class="container p-2 sticky-top mx-auto" id="search_form" action="/node_search">
<div class="row">
<input
class="col m-2"
id="q"
type="text"
name="q"
data-translate-lang="node id"
placeholder="Node id"
autocomplete="off"
list="node_options"
value="{{raw_node_id}}"
hx-trigger="input delay:100ms"
hx-get="/node_match"
hx-target="#node_options"
/>
<datalist id="node_options">
{% for option in node_options %}
<option value="{{option.id}}">{{option.id}} -- {{option.long_name}} ({{option.short_name}})</option>
{% endfor %}
</datalist>
{% set options = {
1: "Text Message",
3: "Position",
4: "Node Info",
67: "Telemetry",
70: "Traceroute",
71: "Neighbor Info",
}
%}
<select name="portnum" class="col-2 m-2">
<option
value = ""
{% if portnum not in options %}selected{% endif %}
>All</option>
{% for value, name in options.items() %}
<option
value="{{value}}"
{% if value == portnum %}selected{% endif %}
>{{ name }}</option>
{% endfor %}
</select>
<input type="submit" value="Go to Node" class="col-2 m-2" data-translate-lang="go to node" />
</div>
</form>
<br>
{% block body %}{% endblock %}
{% block body %}{% endblock %}
<br>
<div style="text-align:center" id="footer" data-translate-lang="footer"></div>
<div style="text-align:center"><small id="site-version">ver. unknown</small></div>
<br>
<br>
<div style="text-align:center" id="footer" data-translate="footer"></div>
<div style="text-align:center"><small id="site-version">ver. unknown</small></div>
<br>
<!-- Shared Site Config & Language Loader -->
<script>
// --- Global Promises ---
if (!window._langPromise) {
window._langPromise = (async () => {
try {
const res = await fetch("/api/lang");
const lang = await res.json();
window._lang = lang;
console.log("Loaded language from /api/lang:", lang);
return lang;
} catch (err) {
console.error("Failed to load /api/lang:", err);
return {};
}
})();
}
if (!window._siteConfigPromise) {
window._siteConfigPromise = (async () => {
try {
const res = await fetch("/api/config");
const config = await res.json();
window._siteConfig = config;
console.log("Loaded config from /api/config:", config);
return config;
} catch (err) {
console.error("Failed to load /api/config:", err);
return {};
}
})();
}
// --- Apply Translations ---
function applyTranslations(lang) {
if (!lang || !lang.base) return;
const base = lang.base;
document.querySelectorAll("[data-translate-lang]").forEach(el => {
const key = el.dataset.translateLang;
const translation = base[key];
if (!translation) return;
if (el.placeholder) {
el.placeholder = translation;
} else if (key === "footer") {
el.innerHTML = translation; // allow HTML links in footer
} else if (el.value && el.tagName === "INPUT") {
el.value = translation;
} else {
el.textContent = translation;
}
});
}
async function initializePage() {
<script>
// --- Shared Promises ---
if (!window._siteConfigPromise) {
window._siteConfigPromise = (async () => {
try {
const [lang, cfg] = await Promise.all([
window._langPromise,
window._siteConfigPromise
]);
const res = await fetch("/api/config");
const cfg = await res.json();
window._siteConfig = cfg;
console.log("Loaded config:", cfg);
return cfg;
} catch (err) {
console.error("Failed to load /api/config:", err);
return {};
}
})();
}
// --- Load language AFTER config ---
if (!window._langPromise) {
  window._langPromise = (async () => {
    try {
      const cfg = await window._siteConfigPromise;
      const site = cfg.site || {};
      const userLang = site.language || "en";
      const section = "base";
      const url = `/api/lang?lang=${userLang}&section=${section}`;
      const res = await fetch(url);
      const lang = await res.json();
      window._lang = lang;
      console.log(`Loaded language (${userLang}):`, lang);
      return lang;
    } catch (err) {
      console.error("Failed to load language:", err);
      return {};
    }
  })();
}
// --- Translation Helper ---
function applyTranslations(dict) {
  document.querySelectorAll("[data-translate]").forEach(el => {
    const key = el.dataset.translate;
    const value = dict[key];
    if (!value) return;
    if (el.placeholder) {
      el.placeholder = value;
    } else if (el.tagName === "INPUT" && el.value) {
      el.value = value;
    } else if (key === "footer") {
      el.innerHTML = value;
    } else {
      el.textContent = value;
    }
  });
}
// --- Fill portnum select dynamically ---
function fillPortnumSelect(dict, selectedValue) {
  const sel = document.getElementById("portnum_select");
  if (!sel) return;
  const portOptions = dict.portnum_options || {};
  sel.innerHTML = "";
  const allOption = document.createElement("option");
  allOption.value = "";
  allOption.textContent = dict.all || "All";
  if (!selectedValue) allOption.selected = true;
  sel.appendChild(allOption);
  for (const [val, label] of Object.entries(portOptions)) {
    const opt = document.createElement("option");
    opt.value = val;
    opt.textContent = label;
    if (parseInt(val) === parseInt(selectedValue)) {
      opt.selected = true;
    }
    sel.appendChild(opt);
  }
}
// --- Main Init ---
async function initializePage() {
  try {
    const [cfg, lang] = await Promise.all([
      window._siteConfigPromise,
      window._langPromise
    ]);
    const dict = lang || {};
    const site = cfg.site || {};
    // --- Title ---
    document.title = "Meshview - " + (site.title || "");
    // --- Header & Message ---
    const header = document.getElementById("site-header");
    if (header)
      header.innerHTML = `<strong>${site.title || ""} ${site.domain ? "(" + site.domain + ")" : ""}</strong>`;
    const msg = document.getElementById("site-message");
    if (msg) msg.textContent = site.message || "";
    // --- Menu ---
    const menu = document.getElementById("site-menu");
    if (menu) {
      const items = [];
      const keys = ["nodes", "chat", "everything", "graphs", "net", "map", "stats", "top"];
      const urls = ["/nodelist", "/chat", "/firehose", "/nodegraph", "/net", "/map", "/stats", "/top"];
      for (let i = 0; i < keys.length; i++) {
        const key = keys[i];
        if (site[key] === "true") {
          items.push(`<a href="${urls[i]}">${dict[key] || key}</a>`);
        }
      }
      menu.innerHTML = items.join("&nbsp;-&nbsp;");
    }
    // --- Version ---
    const verEl = document.getElementById("site-version");
    if (verEl) verEl.textContent = "ver. " + (site.version || "unknown");
    // --- Apply translations to placeholders and footer ---
    applyTranslations(dict);
    fillPortnumSelect(dict, "{{ portnum or '' }}");
    // --- Fade in ---
    document.body.classList.add("ready");
  } catch (err) {
    console.error("Failed to initialize page:", err);
    document.body.classList.add("ready");
  }
}
document.addEventListener("DOMContentLoaded", initializePage);
</script>
</body>
</html>
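The base template stores the config and language fetches in window-level promises (`window._siteConfigPromise`, `window._langPromise`) so every page script awaits the same single request. A minimal sketch of that memoization pattern, under the assumption that the only requirements are share-one-request and resolve-to-`{}`-on-failure (the `loadOnce` name and `cache` parameter are invented here):

```javascript
// Cache an async loader's result so concurrent callers share one request,
// mirroring how the page stores promises on the window object.
function loadOnce(cache, key, loader) {
  if (!cache[key]) {
    cache[key] = loader().catch(err => {
      console.error(`Failed to load ${key}:`, err);
      return {}; // the page scripts also resolve to {} on failure
    });
  }
  return cache[key];
}
```

Because the promise itself is cached (not its value), callers that arrive while the fetch is still in flight still share it.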


@@ -1,16 +0,0 @@
<div id="buttons" class="btn-group" role="group">
<a
role="button"
class="btn {{ 'btn-primary' if packet_event == 'packet' else 'btn-secondary'}}"
href="/packet_list/{{node_id}}?{{query_string}}"
>
TX/RX
</a>
<a
role="button"
class="btn {{ 'btn-primary' if packet_event == 'uplinked' else 'btn-secondary'}}"
href="/uplinked_list/{{node_id}}?{{query_string}}"
>
Uplinked
</a>
</div>


@@ -1,20 +1,63 @@
{% extends "base.html" %}
{% block css %}
.timestamp { min-width: 10em; }
.timestamp {
min-width: 10em;
color: #ccc;
}
.chat-packet:nth-of-type(odd) { background-color: #3a3a3a; }
.chat-packet { border-bottom: 1px solid #555; padding: 8px; border-radius: 8px; }
.chat-packet {
border-bottom: 1px solid #555;
padding: 3px 6px;
border-radius: 6px;
margin: 0;
}
/* Same column spacing as before */
.chat-packet > [class^="col-"] {
padding-left: 10px !important;
padding-right: 10px !important;
padding-top: 1px !important;
padding-bottom: 1px !important;
}
.chat-packet:nth-of-type(even) { background-color: #333333; }
@keyframes flash { 0% { background-color: #ffe066; } 100% { background-color: inherit; } }
.channel {
font-style: italic;
color: #bbb;
}
.channel a {
font-style: normal;
color: #999;
}
@keyframes flash {
0% { background-color: #ffe066; }
100% { background-color: inherit; }
}
.chat-packet.flash { animation: flash 3.5s ease-out; }
.replying-to { font-size: 0.85em; color: #aaa; margin-top: 4px; padding-left: 20px; }
.replying-to {
font-size: 0.8em;
color: #aaa;
margin-top: 2px;
padding-left: 10px;
}
.replying-to .reply-preview { color: #aaa; }
{% endblock %}
{% block body %}
<div id="chat-container">
<div id="chat-container" class="mt-3">
<!-- ⭐ CHAT TITLE WITH ICON, aligned to container ⭐ -->
<div class="container px-2">
<h2 data-translate="chat_title" style="color:white; margin:0 0 10px 0;">
💬 Chat
</h2>
</div>
<div class="container" id="chat-log"></div>
</div>
@@ -26,7 +69,19 @@ document.addEventListener("DOMContentLoaded", async () => {
let lastTime = null;
const renderedPacketIds = new Set();
const packetMap = new Map();
let chatTranslations = {};
let chatLang = {};
function applyTranslations(dict, root = document) {
root.querySelectorAll("[data-translate]").forEach(el => {
const key = el.dataset.translate;
const val = dict[key];
if (!val) return;
if (el.placeholder) el.placeholder = val;
else if (el.tagName === "INPUT" && el.value) el.value = val;
else if (key === "footer") el.innerHTML = val;
else el.textContent = val;
});
}
function escapeHtml(text) {
const div = document.createElement("div");
@@ -34,43 +89,50 @@ document.addEventListener("DOMContentLoaded", async () => {
return div.innerHTML;
}
function applyTranslations(translations, root=document) {
root.querySelectorAll("[data-translate-lang]").forEach(el => {
const key = el.dataset.translateLang;
if (translations[key]) el.textContent = translations[key];
});
root.querySelectorAll("[data-translate-lang-title]").forEach(el => {
const key = el.dataset.translateLangTitle;
if (translations[key]) el.title = translations[key];
});
}
function renderPacket(packet, highlight = false) {
if (renderedPacketIds.has(packet.id)) return;
renderedPacketIds.add(packet.id);
packetMap.set(packet.id, packet);
const date = new Date(packet.import_time);
const formattedTime = date.toLocaleTimeString([], { hour:"numeric", minute:"2-digit", second:"2-digit", hour12:true });
const formattedDate = `${(date.getMonth()+1).toString().padStart(2,"0")}/${date.getDate().toString().padStart(2,"0")}/${date.getFullYear()}`;
let date;
if (packet.import_time_us && packet.import_time_us > 0) {
date = new Date(packet.import_time_us / 1000);
} else if (packet.import_time) {
date = new Date(packet.import_time);
} else {
date = new Date();
}
const formattedTime = date.toLocaleTimeString([], {
hour:"numeric",
minute:"2-digit",
second:"2-digit",
hour12:true
});
const formattedDate =
`${(date.getMonth()+1).toString().padStart(2,"0")}/` +
`${date.getDate().toString().padStart(2,"0")}/` +
`${date.getFullYear()}`;
const formattedTimestamp = `${formattedTime} - ${formattedDate}`;
let replyHtml = "";
if (packet.reply_id) {
const parent = packetMap.get(packet.reply_id);
const replyPrefix = `<i data-translate="replying_to"></i>`;
if (parent) {
replyHtml = `<div class="replying-to">
<div class="reply-preview">
<i data-translate-lang="replying_to"></i>
replyHtml = `
<div class="replying-to">
${replyPrefix}
<strong>${escapeHtml((parent.long_name || "").trim() || `Node ${parent.from_node_id}`)}</strong>:
${escapeHtml(parent.payload || "")}
</div>
</div>`;
</div>`;
} else {
replyHtml = `<div class="replying-to">
<i data-translate-lang="replying_to"></i>
<a href="/packet/${packet.reply_id}">${packet.reply_id}</a>
</div>`;
replyHtml = `
<div class="replying-to">
${replyPrefix}
<a href="/packet/${packet.reply_id}">${packet.reply_id}</a>
</div>`;
}
}
@@ -78,33 +140,43 @@ document.addEventListener("DOMContentLoaded", async () => {
div.className = "row chat-packet" + (highlight ? " flash" : "");
div.dataset.packetId = packet.id;
div.innerHTML = `
<span class="col-2 timestamp" title="${packet.import_time}">${formattedTimestamp}</span>
<span class="col-2 timestamp" title="${packet.import_time_us}">${formattedTimestamp}</span>
<span class="col-2 channel">
<a href="/packet/${packet.id}" data-translate-lang-title="view_packet_details">✉️</a>
${escapeHtml(packet.channel || "")}
<a href="/packet/${packet.id}" title="${chatLang.view_packet_details || 'View details'}">🔎</a>
${escapeHtml(packet.channel || "")}
</span>
<span class="col-3 nodename">
<a href="/packet_list/${packet.from_node_id}">
<a href="/node/${packet.from_node_id}">
${escapeHtml((packet.long_name || "").trim() || `Node ${packet.from_node_id}`)}
</a>
</span>
<span class="col-5 message">${escapeHtml(packet.payload)}${replyHtml}</span>
`;
chatContainer.prepend(div);
applyTranslations(chatTranslations, div);
applyTranslations(chatLang, div);
if (highlight) setTimeout(() => div.classList.remove("flash"), 2500);
}
function renderPacketsEnsureDescending(packets, highlight=false) {
if (!Array.isArray(packets) || packets.length===0) return;
const sortedDesc = packets.slice().sort((a,b)=>new Date(b.import_time)-new Date(a.import_time));
const sortedDesc = packets.slice().sort((a,b)=>{
const aTime =
(a.import_time_us && a.import_time_us > 0)
? a.import_time_us
: (a.import_time ? new Date(a.import_time).getTime() * 1000 : 0);
const bTime =
(b.import_time_us && b.import_time_us > 0)
? b.import_time_us
: (b.import_time ? new Date(b.import_time).getTime() * 1000 : 0);
return bTime - aTime;
});
for (let i=sortedDesc.length-1; i>=0; i--) renderPacket(sortedDesc[i], highlight);
}
async function fetchInitial() {
try {
const resp = await fetch("/api/chat?limit=100");
const resp = await fetch("/api/packets?portnum=1&limit=100");
const data = await resp.json();
if (data?.packets?.length) renderPacketsEnsureDescending(data.packets);
lastTime = data?.latest_import_time || lastTime;
@@ -113,7 +185,7 @@ document.addEventListener("DOMContentLoaded", async () => {
async function fetchUpdates() {
try {
const url = new URL("/api/chat", window.location.origin);
const url = new URL("/api/packets?portnum=1", window.location.origin);
url.searchParams.set("limit","100");
if (lastTime) url.searchParams.set("since", lastTime);
const resp = await fetch(url);
@@ -123,18 +195,17 @@ document.addEventListener("DOMContentLoaded", async () => {
} catch(err){ console.error("Fetch updates error:", err); }
}
async function loadTranslations() {
async function loadChatLang() {
try {
const cfg = await window._siteConfigPromise;
const langCode = cfg?.site?.language || "en";
const res = await fetch(`/api/lang?lang=${langCode}&section=chat`);
chatTranslations = await res.json();
applyTranslations(chatTranslations, document);
chatLang = await res.json();
applyTranslations(chatLang);
} catch(err){ console.error("Chat translation load failed:", err); }
}
await loadTranslations();
await fetchInitial();
await Promise.all([loadChatLang(), fetchInitial()]);
setInterval(fetchUpdates, 5000);
});
</script>
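The sort above prefers the new microsecond field `import_time_us` and falls back to parsing the legacy ISO `import_time` string, which is what the "sort by BOTH old and new" compatibility commit addresses. That fallback can be factored into one helper (a sketch; `packetTimeUs` and `sortPacketsDesc` are names invented here):

```javascript
// Return a packet's timestamp in microseconds, preferring the new
// import_time_us field and falling back to the legacy ISO string.
function packetTimeUs(pkt) {
  if (pkt.import_time_us && pkt.import_time_us > 0) return pkt.import_time_us;
  if (pkt.import_time) return new Date(pkt.import_time).getTime() * 1000;
  return 0; // unknown time sorts last in a descending list
}

// Newest-first ordering, as used when prepending into the chat log.
function sortPacketsDesc(packets) {
  return packets.slice().sort((a, b) => packetTimeUs(b) - packetTimeUs(a));
}
```

Normalizing both fields to microseconds keeps mixed batches (old rows without `_us`, new rows with it) in one consistent order.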


@@ -1,7 +0,0 @@
<datalist
id="node_options"
>
{% for option in node_options %}
<option value="{{option.id}}">{{option.id}} -- {{option.long_name}} ({{option.short_name}})</option>
{% endfor %}
</datalist>


@@ -2,7 +2,6 @@
{% block css %}
.container {
max-width: 900px;
margin: 0 auto;
padding: 10px;
}
@@ -19,6 +18,7 @@
font-size: 0.85rem;
color: #e4e9ee;
}
.packet-table th, .packet-table td {
border: 1px solid #3a3f44;
padding: 6px 10px;
@@ -30,6 +30,7 @@
}
.packet-table tr:nth-of-type(odd) { background-color: #272b2f; }
.packet-table tr:nth-of-type(even) { background-color: #212529; }
.port-tag {
display: inline-block;
padding: 1px 6px;
@@ -38,8 +39,6 @@
font-weight: 500;
color: #fff;
}
/* --- Color-coded port labels --- */
.port-0 { background-color: #6c757d; }
.port-1 { background-color: #007bff; }
.port-3 { background-color: #28a745; }
@@ -51,16 +50,9 @@
.port-70 { background-color: #6f42c1; }
.port-71 { background-color: #fd7e14; }
.to-mqtt {
font-style: italic;
color: #aaa;
}
.to-mqtt { font-style: italic; color: #aaa; }
/* --- Payload rows --- */
.payload-row {
display: none;
background-color: #1b1e22;
}
.payload-row { display: none; background-color: #1b1e22; }
.payload-cell {
padding: 8px 12px;
font-family: monospace;
@@ -68,35 +60,43 @@
color: #b0bec5;
border-top: none;
}
.packet-table tr.expanded + .payload-row {
display: table-row;
}
.packet-table tr.expanded + .payload-row { display: table-row; }
.toggle-btn {
cursor: pointer;
color: #aaa;
margin-right: 6px;
font-weight: bold;
}
.toggle-btn:hover {
color: #fff;
.toggle-btn:hover { color: #fff; }
/* Link next to port tag */
.inline-link {
margin-left: 6px;
font-weight: bold;
text-decoration: none;
color: #9fd4ff;
}
.inline-link:hover {
color: #c7e6ff;
}
{% endblock %}
{% block body %}
<div class="container">
<form class="d-flex align-items-center justify-content-between mb-3">
<h5 class="mb-0">📡 Live Feed</h5>
<button type="button" id="pause-button" class="btn btn-sm btn-outline-secondary">Pause</button>
<h5 class="mb-0" data-translate-lang="live_feed">📡 Live Feed</h5>
<button type="button" id="pause-button" class="btn btn-sm btn-outline-secondary" data-translate-lang="pause">Pause</button>
</form>
<table class="packet-table">
<thead>
<tr>
<th>Packet ID</th>
<th>From</th>
<th>To</th>
<th>Port</th>
<th>Links</th>
<th data-translate-lang="time">Time</th>
<th data-translate-lang="packet_id">Packet ID</th>
<th data-translate-lang="from">From</th>
<th data-translate-lang="to">To</th>
<th data-translate-lang="port">Port</th>
</tr>
</thead>
<tbody id="packet_list"></tbody>
@@ -104,10 +104,30 @@
</div>
<script>
let lastImportTime = null;
let lastImportTimeUs = null;
let updatesPaused = false;
let nodeMap = {};
let updateInterval = 3000;
let firehoseTranslations = {};
function applyTranslations(translations, root=document) {
root.querySelectorAll("[data-translate-lang]").forEach(el => {
const key = el.dataset.translateLang;
if (translations[key]) el.textContent = translations[key];
});
}
async function loadTranslations() {
try {
const cfg = await window._siteConfigPromise;
const langCode = cfg?.site?.language || "en";
const res = await fetch(`/api/lang?lang=${langCode}&section=firehose`);
firehoseTranslations = await res.json();
applyTranslations(firehoseTranslations, document);
} catch (err) {
console.error("Firehose translation load failed:", err);
}
}
const PORT_MAP = {
0: "UNKNOWN APP",
@@ -145,14 +165,13 @@ const PORT_COLORS = {
78: "#795548"
};
// --- Load node names ---
// Load node names
async function loadNodes() {
const res = await fetch("/api/nodes");
if (!res.ok) return;
const data = await res.json();
for (const n of data.nodes || []) {
const name = n.long_name || n.short_name || n.id || n.node_id;
nodeMap[n.node_id] = name;
nodeMap[n.node_id] = n.long_name || n.short_name || n.id || n.node_id;
}
nodeMap[4294967295] = "All";
}
@@ -162,33 +181,40 @@ function nodeName(id) {
return nodeMap[id] || id;
}
function portLabel(portnum, payload) {
function portLabel(portnum, payload, linksHtml) {
const name = PORT_MAP[portnum] || "Unknown";
const color = PORT_COLORS[portnum] || "#6c757d";
const safePayload = payload ? payload.replace(/"/g, "&quot;") : "";
return `<span class="port-tag" style="background-color:${color}" title="${safePayload}">${name}</span>
<span class="text-secondary">(${portnum})</span>`;
return `
<span class="port-tag" style="background-color:${color}" title="${safePayload}">
${name}
</span>
<span class="text-secondary">(${portnum})</span>
${linksHtml || ""}
`;
}
function formatLocalTime(importTimeUs) {
const ms = importTimeUs / 1000;
const date = new Date(ms);
return date.toLocaleTimeString([], { hour: "2-digit", minute: "2-digit", second: "2-digit" });
}
// --- Fetch firehose interval from shared site config ---
async function configureFirehose() {
try {
const cfg = await window._siteConfigPromise;
const intervalSec = cfg?.site?.firehose_interval;
if (intervalSec && !isNaN(intervalSec)) {
updateInterval = parseInt(intervalSec) * 1000;
}
console.log("Firehose update interval:", updateInterval, "ms");
if (intervalSec && !isNaN(intervalSec)) updateInterval = parseInt(intervalSec) * 1000;
} catch (err) {
console.warn("Failed to read firehose interval:", err);
}
}
// --- Fetch and render packets ---
async function fetchUpdates() {
if (updatesPaused) return;
const url = new URL("/api/packets", window.location.origin);
if (lastImportTime) url.searchParams.set("since", lastImportTime);
if (lastImportTimeUs) url.searchParams.set("since", lastImportTimeUs);
url.searchParams.set("limit", 50);
try {
@@ -201,43 +227,48 @@ async function fetchUpdates() {
const list = document.getElementById("packet_list");
for (const pkt of packets.reverse()) {
const fromNodeId = pkt.from_node_id;
const toNodeId = pkt.to_node_id;
const from = pkt.from_node_id === 4294967295
? `<span class="to-mqtt">All</span>`
: `<a href="/node/${pkt.from_node_id}" style="text-decoration:underline; color:inherit;">${nodeMap[pkt.from_node_id] || pkt.from_node_id}</a>`;
let from = fromNodeId === 4294967295 ? `<span class="to-mqtt">All</span>` :
`<a href="/packet_list/${fromNodeId}" style="text-decoration:underline; color:inherit;">${nodeMap[fromNodeId] || fromNodeId}</a>`;
const to = pkt.to_node_id === 1
? `<span class="to-mqtt">direct to MQTT</span>`
: pkt.to_node_id === 4294967295
? `<span class="to-mqtt">All</span>`
: `<a href="/node/${pkt.to_node_id}" style="text-decoration:underline; color:inherit;">${nodeMap[pkt.to_node_id] || pkt.to_node_id}</a>`;
let to = toNodeId === 1 ? `<span class="to-mqtt">direct to MQTT</span>` :
toNodeId === 4294967295 ? `<span class="to-mqtt">All</span>` :
`<a href="/packet_list/${toNodeId}" style="text-decoration:underline; color:inherit;">${nodeMap[toNodeId] || toNodeId}</a>`;
// Inline link next to port tag
let inlineLinks = "";
let links = "";
if (pkt.portnum === 3 && pkt.payload) {
const latMatch = pkt.payload.match(/latitude_i:\s*(-?\d+)/);
const lonMatch = pkt.payload.match(/longitude_i:\s*(-?\d+)/);
if (latMatch && lonMatch) {
const lat = parseInt(latMatch[1]) / 1e7;
const lon = parseInt(lonMatch[1]) / 1e7;
links += `<a href="https://www.google.com/maps?q=${lat},${lon}" target="_blank" rel="noopener noreferrer" style="font-weight:bold; text-decoration:none;">Map</a>`;
inlineLinks += ` <a class="inline-link" href="https://www.google.com/maps?q=${lat},${lon}" target="_blank">📍</a>`;
}
}
if (pkt.portnum === 70) {
let traceId = pkt.id;
const idMatch = pkt.payload.match(/ID:\s*(\d+)/i);
if (idMatch) traceId = idMatch[1];
if (links) links += "&nbsp;|&nbsp;";
links += `<a href="/graph/traceroute/${traceId}" target="_blank" rel="noopener noreferrer" style="font-weight:bold; text-decoration:none;">Graph</a>`;
const match = pkt.payload.match(/ID:\s*(\d+)/i);
if (match) traceId = match[1];
inlineLinks += ` <a class="inline-link" href="/graph/traceroute/${traceId}" target="_blank">⮕</a>`;
}
const safePayload = (pkt.payload || "").replace(/</g, "&lt;").replace(/>/g, "&gt;");
const localTime = formatLocalTime(pkt.import_time_us);
const html = `
<tr class="packet-row" data-id="${pkt.id}">
<td>${localTime}</td>
<td><span class="toggle-btn">▶</span> <a href="/packet/${pkt.id}" style="text-decoration:underline; color:inherit;">${pkt.id}</a></td>
<td>${from}</td>
<td>${to}</td>
<td>${portLabel(pkt.portnum, pkt.payload)}</td>
<td>${links}</td>
<td>${portLabel(pkt.portnum, pkt.payload, inlineLinks)}</td>
</tr>
<tr class="payload-row">
<td colspan="5" class="payload-cell">${safePayload}</td>
@@ -246,7 +277,7 @@ async function fetchUpdates() {
}
while (list.rows.length > 400) list.deleteRow(-1);
lastImportTime = packets[packets.length - 1].import_time;
lastImportTimeUs = packets[packets.length - 1].import_time_us;
} catch (err) {
console.error("Packet fetch failed:", err);
@@ -258,7 +289,9 @@ document.addEventListener("DOMContentLoaded", async () => {
const pauseBtn = document.getElementById("pause-button");
pauseBtn.addEventListener("click", () => {
updatesPaused = !updatesPaused;
pauseBtn.textContent = updatesPaused ? "Resume" : "Pause";
pauseBtn.textContent = updatesPaused
? (firehoseTranslations.resume || "Resume")
: (firehoseTranslations.pause || "Pause");
});
document.addEventListener("click", (e) => {
@@ -269,11 +302,11 @@ document.addEventListener("DOMContentLoaded", async () => {
btn.textContent = row.classList.contains("expanded") ? "▼" : "▶";
});
await loadTranslations();
await configureFirehose();
await loadNodes();
fetchUpdates();
setInterval(fetchUpdates, updateInterval);
});
</script>
{% endblock %}
{% endblock %}
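The firehose builds its inline 📍 map link by pulling `latitude_i`/`longitude_i` out of the decoded POSITION_APP payload text; Meshtastic stores coordinates as integers scaled by 1e7. A self-contained sketch of that extraction (the `mapLinkFromPayload` name is invented here):

```javascript
// Extract a Google Maps link from a decoded POSITION_APP payload, which
// carries coordinates as integers scaled by 1e7 (latitude_i / longitude_i).
function mapLinkFromPayload(payload) {
  const latMatch = payload.match(/latitude_i:\s*(-?\d+)/);
  const lonMatch = payload.match(/longitude_i:\s*(-?\d+)/);
  if (!latMatch || !lonMatch) return null; // not a position payload
  const lat = parseInt(latMatch[1], 10) / 1e7;
  const lon = parseInt(lonMatch[1], 10) / 1e7;
  return `https://www.google.com/maps?q=${lat},${lon}`;
}
```

Returning `null` when either coordinate is absent lets the caller simply skip rendering the link, as the row-building loop does.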


@@ -60,21 +60,36 @@ function hashToColor(str){ if(colorMap.has(str)) return colorMap.get(str); const
function isInvalidCoord(n){ return !n||!n.lat||!n.long||n.lat===0||n.long===0||Number.isNaN(n.lat)||Number.isNaN(n.long); }
// ---------------------- Packet Fetching ----------------------
function fetchLatestPacket(){ fetch(`/api/packets?limit=1`).then(r=>r.json()).then(data=>{ lastImportTime=data.packets?.[0]?.import_time||new Date().toISOString(); }).catch(console.error); }
function fetchLatestPacket(){
fetch(`/api/packets?limit=1`)
.then(r=>r.json())
.then(data=>{
lastImportTime=data.packets?.[0]?.import_time_us||0;
})
.catch(console.error);
}
function fetchNewPackets(){
if(mapInterval <= 0) return;
if(!lastImportTime) return;
fetch(`/api/packets?since=${encodeURIComponent(lastImportTime)}`).then(r=>r.json()).then(data=>{
if(!data.packets||data.packets.length===0) return;
let latest = lastImportTime;
data.packets.forEach(pkt=>{
if(pkt.import_time>latest) latest=pkt.import_time;
const marker = markerById[pkt.from_node_id];
const nodeData = nodeMap.get(pkt.from_node_id);
if(marker && nodeData) blinkNode(marker,nodeData.long_name,pkt.portnum);
});
lastImportTime=latest;
}).catch(console.error);
if(lastImportTime===null) return;
const url = new URL(`/api/packets`, window.location.origin);
url.searchParams.set("since", lastImportTime);
url.searchParams.set("limit", 50);
fetch(url)
.then(r=>r.json())
.then(data=>{
if(!data.packets || data.packets.length===0) return;
let latest = lastImportTime;
data.packets.forEach(pkt=>{
if(pkt.import_time_us > latest) latest = pkt.import_time_us;
const marker = markerById[pkt.from_node_id];
const nodeData = nodeMap.get(pkt.from_node_id);
if(marker && nodeData) blinkNode(marker,nodeData.long_name,pkt.portnum);
});
lastImportTime = latest;
})
.catch(console.error);
}
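The map's `fetchNewPackets` keeps a polling cursor: each batch advances `lastImportTime` to the newest `import_time_us` seen, so the next `/api/packets?since=...` request returns only newer rows. That cursor advance in isolation (a sketch; `advanceCursor` is a name invented here):

```javascript
// Advance a polling cursor: given the current cursor (microseconds) and a
// batch of packets, return the newest import_time_us seen, so the next
// /api/packets?since=... call fetches only newer data.
function advanceCursor(cursorUs, packets) {
  let latest = cursorUs;
  for (const pkt of packets) {
    if (pkt.import_time_us > latest) latest = pkt.import_time_us;
  }
  return latest; // unchanged when the batch is empty or all-older
}
```

Taking the maximum rather than the last element makes the cursor robust to batches that arrive out of order.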
// ---------------------- Polling ----------------------
@@ -190,7 +205,7 @@ function renderNodesOnMap(){
marker.nodeId = node.key;
marker.originalColor = color;
markerById[node.key] = marker;
const popup = `<b><a href="/packet_list/${node.node_id}">${node.long_name}</a> (${node.short_name})</b><br>
const popup = `<b><a href="/node/${node.node_id}">${node.long_name}</a> (${node.short_name})</b><br>
<b>Channel:</b> ${node.channel}<br>
<b>Model:</b> ${node.hw_model}<br>
<b>Role:</b> ${node.role}<br>


@@ -1,75 +1,184 @@
{% extends "base.html" %}
{% block css %}
.timestamp {
min-width:10em;
}
.chat-packet:nth-of-type(odd){
background-color: #3a3a3a; /* Lighter than #2a2a2a */
}
.timestamp { min-width: 10em; color: #ccc; }
.chat-packet:nth-of-type(odd) { background-color: #3a3a3a; }
.chat-packet {
border-bottom: 1px solid #555;
padding: 8px;
border-radius: 8px; /* Adjust the value to make the corners more or less rounded */
padding: 3px 6px;
border-radius: 6px;
margin: 0;
}
.chat-packet:nth-of-type(even){
background-color: #333333; /* Slightly lighter than the previous #181818 */
.chat-packet > [class^="col-"] {
padding-left: 10px !important;
padding-right: 10px !important;
padding-top: 1px !important;
padding-bottom: 1px !important;
}
.chat-packet:nth-of-type(even) { background-color: #333333; }
.channel { font-style: italic; color: #bbb; }
.channel a { font-style: normal; color: #999; }
@keyframes flash { 0% { background-color: #ffe066; } 100% { background-color: inherit; } }
.chat-packet.flash { animation: flash 3.5s ease-out; }
.replying-to { font-size: 0.8em; color: #aaa; margin-top: 2px; padding-left: 10px; }
.replying-to .reply-preview { color: #aaa; }
#weekly-message { margin: 15px 0; font-weight: bold; color: #ffeb3b; }
#total-count { margin-bottom: 10px; font-style: italic; color: #ccc; }
{% endblock %}
{% block body %}
<div class="container">
<span>{{ site_config["site"]["weekly_net_message"] }}</span> <br><br>
<div id="weekly-message">Loading weekly message...</div>
<div id="total-count">Total messages: 0</div>
<h5>
<span data-translate-lang="number_of_checkins">Number of Check-ins:</span> {{ packets|length }}
</h5>
</div>
<div class="container">
{% for packet in packets %}
<div
class="row chat-packet"
data-packet-id="{{ packet.id }}"
role="article"
aria-label="Chat message from {{ packet.from_node.long_name or (packet.from_node_id | node_id_to_hex) }}"
>
<span class="col-2 timestamp">
{{ packet.import_time.strftime('%-I:%M:%S %p - %m-%d-%Y') }}
</span>
<span class="col-1 timestamp">
<a href="/packet/{{ packet.id }}" title="View packet details">✉️</a> {{ packet.from_node.channel }}
</span>
<span class="col-2 username">
<a href="/packet_list/{{ packet.from_node_id }}" title="View all packets from this node">
{{ packet.from_node.long_name or (packet.from_node_id | node_id_to_hex) }}
</a>
</span>
<span class="col-5 message">
{{ packet.payload }}
</span>
</div>
{% else %}
<span data-translate-lang="no_packets_found">No packets found.</span>
{% endfor %}
<div id="chat-container">
<div class="container" id="chat-log"></div>
</div>
</div>
<script>
async function loadTranslations() {
try {
const langCode = "{{ site_config.get('site', {}).get('language','en') }}";
const res = await fetch(`/api/lang?lang=${langCode}&section=net`);
const translations = await res.json();
document.querySelectorAll("[data-translate-lang]").forEach(el => {
const key = el.dataset.translateLang;
if(el.placeholder !== undefined && el.placeholder !== "") el.placeholder = translations[key] || el.placeholder;
else el.textContent = translations[key] || el.textContent;
});
} catch(err) {
console.error("Net translations load failed:", err);
document.addEventListener("DOMContentLoaded", async () => {
const chatContainer = document.querySelector("#chat-log");
const totalCountEl = document.querySelector("#total-count");
const weeklyMessageEl = document.querySelector("#weekly-message");
if (!chatContainer || !totalCountEl || !weeklyMessageEl) {
console.error("Required elements not found");
return;
}
}
document.addEventListener("DOMContentLoaded", loadTranslations);
const renderedPacketIds = new Set();
const packetMap = new Map();
let chatTranslations = {};
let netTag = "";
function updateTotalCount() {
totalCountEl.textContent = `Total messages: ${renderedPacketIds.size}`;
}
function escapeHtml(text) {
const div = document.createElement("div");
div.textContent = text ?? "";
return div.innerHTML;
}
function applyTranslations(translations, root = document) {
root.querySelectorAll("[data-translate-lang]").forEach(el => {
const key = el.dataset.translateLang;
if (translations[key]) el.textContent = translations[key];
});
root.querySelectorAll("[data-translate-lang-title]").forEach(el => {
const key = el.dataset.translateLangTitle;
if (translations[key]) el.title = translations[key];
});
}
function renderPacket(packet) {
if (renderedPacketIds.has(packet.id)) return;
renderedPacketIds.add(packet.id);
packetMap.set(packet.id, packet);
const date = new Date(packet.import_time_us / 1000);
const formattedTime = date.toLocaleTimeString([], { hour: "numeric", minute: "2-digit", second: "2-digit", hour12: true });
const formattedDate = `${(date.getMonth() + 1).toString().padStart(2, "0")}/${date.getDate().toString().padStart(2, "0")}/${date.getFullYear()}`;
const formattedTimestamp = `${formattedTime} - ${formattedDate}`;
let replyHtml = "";
if (packet.reply_id) {
const parent = packetMap.get(packet.reply_id);
if (parent) {
replyHtml = `<div class="replying-to">
<div class="reply-preview">
<i data-translate-lang="replying_to"></i>
<strong>${escapeHtml((parent.long_name || "").trim() || `Node ${parent.from_node_id}`)}</strong>:
${escapeHtml(parent.payload || "")}
</div>
</div>`;
} else {
replyHtml = `<div class="replying-to">
<i data-translate-lang="replying_to"></i>
<a href="/packet/${packet.reply_id}">${packet.reply_id}</a>
</div>`;
}
}
const div = document.createElement("div");
div.className = "row chat-packet";
div.dataset.packetId = packet.id;
div.innerHTML = `
<span class="col-2 timestamp" title="${packet.import_time_us}">${formattedTimestamp}</span>
<span class="col-2 channel">
<a href="/packet/${packet.id}" data-translate-lang-title="view_packet_details">✉️</a>
${escapeHtml(packet.channel || "")}
</span>
<span class="col-3 nodename">
<a href="/packet_list/${packet.from_node_id}">
${escapeHtml((packet.long_name || "").trim() || `Node ${packet.from_node_id}`)}
</a>
</span>
<span class="col-5 message">${escapeHtml(packet.payload)}${replyHtml}</span>
`;
chatContainer.prepend(div);
applyTranslations(chatTranslations, div);
updateTotalCount();
}
function renderPacketsEnsureDescending(packets) {
if (!Array.isArray(packets) || packets.length === 0) return;
const sortedDesc = packets.slice().sort((a, b) => b.import_time_us - a.import_time_us);
for (let i = sortedDesc.length - 1; i >= 0; i--) renderPacket(sortedDesc[i]);
}
async function fetchInitialPackets(tag) {
if (!tag) {
console.warn("No net_tag defined, skipping packet fetch.");
return;
}
try {
console.log("Fetching packets for netTag:", tag);
const sixDaysAgoMs = Date.now() - (6 * 24 * 60 * 60 * 1000);
const sinceUs = Math.floor(sixDaysAgoMs * 1000);
const resp = await fetch(`/api/packets?portnum=1&contains=${encodeURIComponent(tag)}&since=${sinceUs}`);
const data = await resp.json();
console.log("Packets received:", data?.packets?.length);
if (data?.packets?.length) renderPacketsEnsureDescending(data.packets);
} catch (err) {
console.error("Initial fetch error:", err);
}
}
async function loadTranslations(cfg) {
try {
const langCode = cfg?.site?.language || "en";
const res = await fetch(`/api/lang?lang=${langCode}&section=chat`);
chatTranslations = await res.json();
applyTranslations(chatTranslations, document);
} catch (err) {
console.error("Chat translation load failed:", err);
}
}
// --- MAIN LOGIC ---
try {
const cfg = await window._siteConfigPromise; // ✅ Already fetched by base.html
const site = cfg?.site || {};
// Populate from config
netTag = site.net_tag || "";
weeklyMessageEl.textContent = site.weekly_net_message || "Weekly message not set.";
await loadTranslations(cfg);
await fetchInitialPackets(netTag);
} catch (err) {
console.error("Initialization failed:", err);
weeklyMessageEl.textContent = "Failed to load site config.";
}
});
</script>
{% endblock %}
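The net page's `fetchInitialPackets` limits the query to the last six days, and since the API's `since` parameter is in microseconds, the millisecond clock value has to be scaled up by 1000. A standalone sketch of that window computation (the `sinceMicroseconds` name and `days` parameter are invented here):

```javascript
// Compute the `since` value (microseconds) for a trailing window of N days,
// matching the net page's six-day check-in window.
function sinceMicroseconds(days, nowMs = Date.now()) {
  const windowMs = days * 24 * 60 * 60 * 1000; // days -> milliseconds
  return Math.floor((nowMs - windowMs) * 1000); // milliseconds -> microseconds
}
```

Calling `sinceMicroseconds(6)` reproduces the page's `sinceUs`; parameterizing the day count would let the window be driven from site config instead of a literal.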

File diff suppressed because it is too large.


@@ -1,58 +0,0 @@
{% extends "base.html" %}
{% block css %}
#node_info {
height:100%;
}
#map{
height:100%;
min-height: 400px;
}
#packet_details{
height: 95vh;
overflow: scroll;
top: 3em;
}
div.tab-pane > dl {
display: inline-block;
}
{% endblock %}
{% block body %}
{% include "search_form.html" %}
<div class="row">
<div class="col mb-3">
<div class="card" id="node_info">
{% if node %}
<div class="card-header">
{{node.long_name}}
</div>
<div class="card-body">
<dl >
<dt>ShortName</dt>
<dd>{{node.short_name}}</dd>
<dt>HW Model</dt>
<dd>{{node.hw_model}}</dd>
<dt>Role</dt>
<dd>{{node.role}}</dd>
</dl>
</div>
{% else %}
<div class="card-body">
A NodeInfo has not been seen.
</div>
{% endif %}
</div>
</div>
<div class="row">
<div class="col">
{% include 'packet_list.html' %}
</div>
</div>
<div class="col mb-3">
<div id="map"></div>
</div>
</div>
{% endblock %}


@@ -1,263 +0,0 @@
{% macro graph(name) %}
<div id="{{name}}Chart" style="width: 100%; height: 100%;"></div>
{% endmacro %}
<!-- Download and Expand buttons -->
<div class="d-flex justify-content-end mb-2">
<button class="btn btn-sm btn-outline-light me-2" id="downloadCsvBtn">Download CSV</button>
<button class="btn btn-sm btn-outline-light" data-bs-toggle="modal" data-bs-target="#fullChartModal">Expand</button>
</div>
<!-- Tab Navigation -->
<ul class="nav nav-tabs" role="tablist">
{% for name in [
"power", "utilization", "temperature", "humidity", "pressure",
"iaq", "wind_speed", "wind_direction", "power_metrics", "neighbors"
] %}
<li class="nav-item" role="presentation">
<button class="nav-link {% if loop.first %}active{% endif %}" data-bs-toggle="tab" data-bs-target="#{{name}}Tab" type="button" role="tab">{{ name | capitalize }}</button>
</li>
{% endfor %}
</ul>
<!-- Tab Content -->
<div class="tab-content mt-3" style="height: 40vh;">
{% for name in [
"power", "utilization", "temperature", "humidity", "pressure",
"iaq", "wind_speed", "wind_direction", "power_metrics", "neighbors"
] %}
<div class="tab-pane fade {% if loop.first %}show active{% endif %}" id="{{name}}Tab" role="tabpanel" style="height: 100%;">
{{ graph(name) | safe }}
</div>
{% endfor %}
</div>
<!-- Fullscreen Modal -->
<div class="modal fade" id="fullChartModal" tabindex="-1" aria-labelledby="fullChartModalLabel" aria-hidden="true">
<div class="modal-dialog modal-fullscreen">
<div class="modal-content bg-dark text-white">
<div class="modal-header">
<h5 class="modal-title" id="fullChartModalLabel">Full Graph</h5>
<button type="button" class="btn-close btn-close-white" data-bs-dismiss="modal" aria-label="Close"></button>
</div>
<div class="modal-body" style="height: 100vh;">
<div id="fullChartContainer" style="width: 100%; height: 100%;"></div>
</div>
</div>
</div>
</div>
<!-- ECharts Library -->
<script src="https://cdn.jsdelivr.net/npm/echarts@5/dist/echarts.min.js"></script>
<script>
document.addEventListener("DOMContentLoaded", function () {
let currentChart = null;
let currentChartName = null;
let currentChartData = null;
let fullChart = null;
async function loadChart(name, targetDiv) {
currentChartName = name;
const chartDiv = document.getElementById(targetDiv);
if (!chartDiv) return;
try {
const resp = await fetch(`/graph/${name}_json/{{ node_id }}`);
if (!resp.ok) throw new Error(`Failed to load data for ${name}`);
const data = await resp.json();
// Reverse for chronological order
data.timestamps.reverse();
data.series.forEach(s => s.data.reverse());
const formattedDates = data.timestamps.map(t => {
const d = new Date(t);
return `${(d.getMonth() + 1).toString().padStart(2, '0')}-${d.getDate().toString().padStart(2, '0')}-${d.getFullYear().toString().slice(-2)}`;
});
currentChartData = {
...data,
timestamps: formattedDates
};
const chart = echarts.init(chartDiv);
const isDualAxis = name === 'power';
chart.setOption({
tooltip: {
trigger: 'axis',
formatter: function (params) {
return params.map(p => {
const label = p.seriesName.toLowerCase();
const unit = label.includes('volt') ? 'V' : label.includes('battery') ? '%' : '';
return `${p.marker} ${p.seriesName}: ${p.data}${unit}`;
}).join('<br>');
}
},
xAxis: {
type: 'category',
data: formattedDates,
axisLabel: { color: '#fff', rotate: 45 },
},
yAxis: isDualAxis ? [
{
type: 'value',
name: 'Battery (%)',
min: 0,
max: 120,
position: 'left',
axisLabel: { color: '#fff' },
nameTextStyle: { color: '#fff' }
},
{
type: 'value',
name: 'Voltage (V)',
min: 0,
max: 6,
position: 'right',
axisLabel: { color: '#fff' },
nameTextStyle: { color: '#fff' }
}
] : {
type: 'value',
axisLabel: { color: '#fff' },
},
series: data.series.map(s => ({
name: s.name,
type: 'line',
data: s.data,
smooth: true,
connectNulls: true,
showSymbol: false,
yAxisIndex: isDualAxis && s.name.toLowerCase().includes('volt') ? 1 : 0,
})),
legend: { textStyle: { color: '#fff' } }
});
return chart;
} catch (err) {
console.error(err);
currentChartData = null;
currentChartName = null;
chartDiv.innerHTML = `<div class="text-white text-center mt-5">Error loading ${name} data.</div>`;
return null;
}
}
// Load first chart
const firstTabBtn = document.querySelector('.nav-tabs button.nav-link.active');
if (firstTabBtn) {
const name = firstTabBtn.textContent.toLowerCase();
const chartId = `${name}Chart`;
loadChart(name, chartId).then(chart => currentChart = chart);
}
// On tab switch
document.querySelectorAll('.nav-tabs button.nav-link').forEach(button => {
button.addEventListener('shown.bs.tab', event => {
const name = event.target.textContent.toLowerCase();
const chartId = `${name}Chart`;
loadChart(name, chartId).then(chart => currentChart = chart);
});
});
// CSV Download
document.getElementById('downloadCsvBtn').addEventListener('click', () => {
if (!currentChartData || !currentChartName) {
alert("Chart data not loaded yet.");
return;
}
const { timestamps, series } = currentChartData;
let csv = 'Date,' + series.map(s => s.name).join(',') + '\n';
for (let i = 0; i < timestamps.length; i++) {
const row = [timestamps[i]];
for (const s of series) {
row.push(s.data[i] != null ? s.data[i] : '');
}
csv += row.join(',') + '\n';
}
const blob = new Blob([csv], { type: 'text/csv' });
const url = URL.createObjectURL(blob);
const a = document.createElement('a');
a.href = url;
a.download = `${currentChartName}_{{ node_id }}.csv`;
a.click();
URL.revokeObjectURL(url);
});
// Fullscreen modal chart
document.getElementById('fullChartModal').addEventListener('shown.bs.modal', () => {
if (!currentChartData || !currentChartName) return;
if (!fullChart) {
fullChart = echarts.init(document.getElementById('fullChartContainer'));
}
const isDualAxis = currentChartName === 'power';
fullChart.setOption({
title: { text: currentChartName.charAt(0).toUpperCase() + currentChartName.slice(1), textStyle: { color: '#fff' } },
tooltip: {
trigger: 'axis',
formatter: function (params) {
return params.map(p => {
const label = p.seriesName.toLowerCase();
const unit = label.includes('volt') ? 'V' : label.includes('battery') ? '%' : '';
return `${p.marker} ${p.seriesName}: ${p.data}${unit}`;
}).join('<br>');
}
},
xAxis: {
type: 'category',
data: currentChartData.timestamps,
axisLabel: { color: '#fff', rotate: 45 },
},
yAxis: isDualAxis ? [
{
type: 'value',
name: 'Battery (%)',
min: 0,
max: 120,
position: 'left',
axisLabel: { color: '#fff' },
nameTextStyle: { color: '#fff' }
},
{
type: 'value',
name: 'Voltage (V)',
min: 0,
max: 6,
position: 'right',
axisLabel: { color: '#fff' },
nameTextStyle: { color: '#fff' }
}
] : {
type: 'value',
axisLabel: { color: '#fff' },
},
series: currentChartData.series.map(s => ({
name: s.name,
type: 'line',
data: s.data,
smooth: true,
connectNulls: true,
showSymbol: false,
yAxisIndex: isDualAxis && s.name.toLowerCase().includes('volt') ? 1 : 0,
})),
legend: { textStyle: { color: '#fff' } }
});
fullChart.resize();
});
window.addEventListener('resize', () => {
if (fullChart) fullChart.resize();
if (currentChart) currentChart.resize();
});
});
</script>


@@ -1,109 +0,0 @@
{% extends "base.html" %}
{% block css %}
.table-title {
font-size: 2rem;
text-align: center;
margin-bottom: 20px;
}
.traffic-table {
width: 50%;
border-collapse: collapse;
margin: 0 auto;
font-family: Arial, sans-serif;
}
.traffic-table th,
.traffic-table td {
padding: 10px 15px;
text-align: left;
border: 1px solid #474b4e;
}
.traffic-table th {
background-color: #272b2f;
color: white;
}
.traffic:nth-of-type(odd) {
background-color: #272b2f; /* Lighter than #2a2a2a */
}
.traffic {
border: 1px solid #474b4e;
padding: 8px;
margin-bottom: 4px;
border-radius: 8px;
}
.traffic:nth-of-type(even) {
background-color: #212529; /* Slightly lighter than the previous #181818 */
}
.footer {
text-align: center;
margin-top: 20px;
}
{% endblock %}
{% block body %}
<section>
<h2 class="table-title">
{% if traffic %}
{{ traffic[0].long_name }} (last 24 hours)
{% else %}
No Traffic Data Available
{% endif %}
</h2>
<table class="traffic-table">
<thead>
<tr>
<th>Port Number</th>
<th>Packet Count</th>
</tr>
</thead>
<tbody>
{% for port in traffic %}
<tr class="traffic">
<td>
{% if port.portnum == 1 %}
TEXT_MESSAGE_APP
{% elif port.portnum == 3 %}
POSITION_APP
{% elif port.portnum == 4 %}
NODEINFO_APP
{% elif port.portnum == 5 %}
ROUTING_APP
{% elif port.portnum == 8 %}
WAYPOINT_APP
{% elif port.portnum == 67 %}
TELEMETRY_APP
{% elif port.portnum == 70 %}
TRACEROUTE_APP
{% elif port.portnum == 71 %}
NEIGHBORINFO_APP
{% elif port.portnum == 73 %}
MAP_REPORT_APP
{% elif port.portnum == 0 %}
UNKNOWN_APP
{% else %}
{{ port.portnum }}
{% endif %}
</td>
<td>{{ port.packet_count }}</td>
</tr>
{% else %}
<tr>
<td colspan="2">No traffic data available for this node.</td>
</tr>
{% endfor %}
</tbody>
</table>
</section>
<footer class="footer">
<a href="/top">Back to Top Nodes</a>
</footer>
{% endblock %}


@@ -13,11 +13,13 @@
border-radius: 10px;
box-shadow: 0 4px 8px rgba(0,0,0,0.1);
}
/* Search UI */
.search-container {
position: absolute;
bottom: 100px;
left: 10px;
z-index: 10;1
z-index: 10;
display: flex;
flex-direction: column;
gap: 5px;
@@ -37,6 +39,8 @@
.search-container button:hover {
background-color: #0056b3;
}
/* Node info box */
#node-info {
position: absolute;
bottom: 10px;
@@ -52,6 +56,8 @@
max-height: 250px;
overflow-y: auto;
}
/* Legend */
#legend {
position: absolute;
bottom: 10px;
@@ -67,9 +73,6 @@
}
.legend-category {
margin-right: 10px;
code {
color: inherit;
}
}
.legend-box {
display: inline-block;
@@ -77,22 +80,23 @@
height: 12px;
margin-right: 5px;
border-radius: 3px;
&.circle {
border-radius: 6px;
}
}
.circle { border-radius: 6px; }
{% endblock %}
{% block body %}
<div id="mynetwork"></div>
<!-- SEARCH + FILTER -->
<div class="search-container">
<label for="channel-select" style="color:#333;">Channel:</label>
<label style="color:#333;">Channel:</label>
<select id="channel-select" onchange="filterByChannel()"></select>
<input type="text" id="node-search" placeholder="Search node...">
<button onclick="searchNode()">Search</button>
</div>
<!-- INFO BOX -->
<div id="node-info">
<b>Long Name:</b> <span id="node-long-name"></span><br>
<b>Short Name:</b> <span id="node-short-name"></span><br>
@@ -100,196 +104,280 @@
<b>Hardware Model:</b> <span id="node-hw-model"></span>
</div>
<!-- LEGEND -->
<div id="legend">
<div class="legend-category">
<div><span class="legend-box" style="background-color: #ff5733"></span> Traceroute</div>
<div><span class="legend-box" style="background-color: #049acd"></span> Neighbor</div>
<div><span class="legend-box" style="background-color: #ff5733"></span>Traceroute</div>
<div><span class="legend-box" style="background-color: #049acd"></span>Neighbor</div>
</div>
<div class="legend-category">
<div><span class="legend-box circle" style="background-color: #ff5733"></span> <code>ROUTER</code></div>
<div><span class="legend-box circle" style="background-color: #b65224"></span> <code>ROUTER_LATE</code></div>
<div><span class="legend-box circle" style="background-color: #ff5733"></span><code>ROUTER</code></div>
<div><span class="legend-box circle" style="background-color: #b65224"></span><code>ROUTER_LATE</code></div>
</div>
<div class="legend-category">
<div><span class="legend-box circle" style="background-color: #007bff"></span> <code>CLIENT</code></div>
<div><span class="legend-box circle" style="background-color: #00c3ff"></span> <code>CLIENT_MUTE</code></div>
<div><span class="legend-box circle" style="background-color: #007bff"></span><code>CLIENT</code></div>
<div><span class="legend-box circle" style="background-color: #00c3ff"></span><code>CLIENT_MUTE</code></div>
</div>
<div class="legend-category">
<div><span class="legend-box circle" style="background-color: #049acd"></span> <code>CLIENT_BASE</code></div>
<div><span class="legend-box circle" style="background-color: #ffbf00"></span> Other</div>
<div><span class="legend-box circle" style="background-color: #049acd"></span><code>CLIENT_BASE</code></div>
<div><span class="legend-box circle" style="background-color: #ffbf00"></span>Other</div>
</div>
<div class="legend-category">
<div><span class="legend-box circle" style="background-color: #6c757d"></span> Unknown</div>
<div><span class="legend-box circle" style="background-color: #6c757d"></span>Unknown</div>
</div>
</div>
<script>
const chart = echarts.init(document.getElementById('mynetwork'));
// Initialize ECharts
const chart = echarts.init(document.getElementById("mynetwork"));
// -----------------------------------
// COLOR + ROLE HELPERS
// -----------------------------------
const colors = {
edge: {
traceroute: '#ff5733',
neighbor: '#049acd',
},
edge: { traceroute:"#ff5733", neighbor:"#049acd" },
role: {
ROUTER: '#ff5733',
ROUTER_LATE: '#b65224',
CLIENT: '#007bff',
CLIENT_MUTE: '#00c3ff',
CLIENT_BASE: '#049acd',
other: '#ffbf00',
unknown: '#6c757d',
ROUTER:"#ff5733",
ROUTER_LATE:"#b65224",
CLIENT:"#007bff",
CLIENT_MUTE:"#00c3ff",
CLIENT_BASE:"#049acd",
other:"#ffbf00",
unknown:"#6c757d"
},
selection: '#ff8c00',
selection:"#ff8c00"
};
function getRoleColor(role) {
if (!role) return colors.role.unknown;
return colors.role[role] || colors.role.other;
}
function getSymbolSize (role) {
switch (role) {
case 'ROUTER': return 30;
case 'ROUTER_LATE': return 30;
case 'CLIENT_BASE': return 18;
case 'CLIENT': return 15;
case 'CLIENT_MUTE': return 7;
default: return 15; // Unknown or other roles
function getRoleColor(role) { return colors.role[role] || colors.role.other; }
function getSymbolSize(role) {
switch(role){
case "ROUTER":
case "ROUTER_LATE": return 30;
case "CLIENT_BASE": return 18;
case "CLIENT": return 15;
case "CLIENT_MUTE": return 7;
default: return 15;
}
}
function getLabel (role, short_name, long_name) {
if (role === 'ROUTER') return long_name;
if (role === 'ROUTER_LATE') return long_name;
if (role === 'CLIENT_BASE') return short_name;
if (role === 'CLIENT') return short_name;
if (role === 'CLIENT_MUTE') return short_name;
return short_name || '';
function getLabel(role, shortName, longName) {
if (role === "ROUTER" || role === "ROUTER_LATE") return longName;
return shortName || "";
}
// --- Nodes ---
const nodes = [
{% for node in nodes %}
{
name: "{{ node.node_id }}", // node_id as string
value: getLabel({{node.role | tojson}}, {{node.short_name | tojson }}, {{node.long_name | tojson}}), // display label
symbol: 'circle',
symbolSize: getSymbolSize({{node.role | tojson}}),
itemStyle: { color: getRoleColor({{node.role | tojson}}), opacity:1 },
label: { show:true, position:'right', color:'#333', fontSize:12, formatter: (p)=>p.data.value },
long_name: {{ node.long_name | tojson }},
short_name: {{ node.short_name | tojson }},
role: {{ node.role | tojson }},
hw_model: {{ node.hw_model | tojson }},
channel: {{ node.channel | tojson }}
},
{% endfor %}
];
// --- Edges ---
const edges = [
{% for edge in edges %}
{
source: "{{ edge.from }}", // edge source as string
target: "{{ edge.to }}", // edge target as string
originalColor: colors.edge[{{edge.type | tojson}}],
lineStyle: {
color: colors.edge[{{edge.type | tojson}}],
width: {{edge.weight | tojson}},
},
},
{% endfor %}
];
// -----------------------------------
// STATE
// -----------------------------------
let nodes = [];
let edges = [];
let filteredNodes = [];
let filteredEdges = [];
let selectedChannel = 'LongFast';
let lastSelectedNode = null;
let selectedChannel = null;
// -----------------------------------
// LOAD NODES + EDGES FROM API
// -----------------------------------
async function loadData() {
// 1. Load nodes
const n = await fetch("/api/nodes").then(r => r.json());
nodes = n.nodes.map(x => ({
name: String(x.node_id),
node_id: x.node_id,
long_name: x.long_name,
short_name: x.short_name,
hw_model: x.hw_model,
role: x.role,
channel: x.channel,
labelValue: getLabel(x.role, x.short_name, x.long_name),
symbolSize: getSymbolSize(x.role),
itemStyle: {
color: getRoleColor(x.role),
opacity: 1
},
label: {
show: true,
position: "right",
color: "#333",
fontSize: 12,
formatter: p => p.data.labelValue
}
}));
const allNodeIDs = new Set(nodes.map(n => n.name));
// 2. Load edges
const e = await fetch("/api/edges").then(r => r.json());
// Only keep edges that reference valid nodes
edges = e.edges
.filter(ed =>
allNodeIDs.has(String(ed.from)) &&
allNodeIDs.has(String(ed.to))
)
.map(ed => ({
source: String(ed.from),
target: String(ed.to),
edgeType: ed.type,
originalColor: colors.edge[ed.type] || "#ccc",
lineStyle: {
width: 2,
color: colors.edge[ed.type] || "#ccc",
opacity: 1
}
}));
// 3. Determine which nodes are actually used in edges
const usedNodeIDs = new Set();
edges.forEach(e => {
usedNodeIDs.add(e.source);
usedNodeIDs.add(e.target);
});
// 4. Remove unused (no-edge) nodes
nodes = nodes.filter(n => usedNodeIDs.has(n.name));
// 5. Double safety: remove any edges referencing removed nodes
edges = edges.filter(e =>
usedNodeIDs.has(e.source) && usedNodeIDs.has(e.target)
);
// 6. Now ready to build dropdown & render
populateChannelDropdown();
}
// -----------------------------------
// CHANNEL FILTER
// -----------------------------------
function populateChannelDropdown() {
const sel = document.getElementById('channel-select');
const unique = [...new Set(nodes.map(n=>n.channel).filter(Boolean))].sort();
unique.forEach(ch=>{
const opt = document.createElement('option');
opt.value=ch; opt.text=ch;
if(ch==='LongFast') opt.selected=true;
const sel = document.getElementById("channel-select");
const chans = [...new Set(nodes.map(n => n.channel))].sort();
chans.forEach(ch => {
const opt = document.createElement("option");
opt.value = ch;
opt.text = ch;
sel.appendChild(opt);
});
selectedChannel = sel.value;
selectedChannel = chans[0];
sel.value = selectedChannel;
filterByChannel();
}
function filterByChannel() {
selectedChannel = document.getElementById('channel-select').value;
filteredNodes = nodes.filter(n=>n.channel===selectedChannel);
const nodeSet = new Set(filteredNodes.map(n=>n.name));
filteredEdges = edges.filter(e=>nodeSet.has(e.source) && nodeSet.has(e.target));
lastSelectedNode=null;
selectedChannel = document.getElementById("channel-select").value;
filteredNodes = nodes.filter(n => n.channel === selectedChannel);
const allowed = new Set(filteredNodes.map(n => n.name));
filteredEdges = edges.filter(e => allowed.has(e.source) && allowed.has(e.target));
lastSelectedNode = null;
updateChart();
}
// -----------------------------------
// FORCE GRAPH UPDATE
// -----------------------------------
function updateChart() {
const updatedNodes = filteredNodes.map(node=>{
let opacity=1, color=getRoleColor(node.role), borderColor='transparent', borderWidth=6;
if(lastSelectedNode){
const connected = filteredEdges.some(e=>
(e.source===node.name && e.target===lastSelectedNode) ||
(e.target===node.name && e.source===lastSelectedNode)
const updatedNodes = filteredNodes.map(n => {
let opacity = 1;
let borderColor = "transparent";
if (lastSelectedNode) {
const connected = filteredEdges.some(
e => (e.source === n.name && e.target === lastSelectedNode) ||
(e.target === n.name && e.source === lastSelectedNode)
);
if(node.name === lastSelectedNode) {
opacity=1;
borderColor=colors.selection;
if (n.name === lastSelectedNode) {
borderColor = colors.selection;
} else if (!connected) {
opacity = 0.3;
}
else if(connected) opacity=1;
else opacity=0.4;
}
return {...node, itemStyle:{...node.itemStyle, color,opacity, borderColor, borderWidth}};
return {
...n,
itemStyle: { ...n.itemStyle, opacity, borderColor, borderWidth: 6 }
};
});
const updatedEdges = filteredEdges.map(edge=>{
let opacity=0.1, width=edge.lineStyle.width;
if(lastSelectedNode){
const connected = edge.source===lastSelectedNode || edge.target===lastSelectedNode;
opacity=connected?1:0.05; width=edge.lineStyle.width;
}
return {...edge, lineStyle:{color:edge.originalColor||'#d3d3d3', width, opacity}};
const updatedEdges = filteredEdges.map(e => {
const connected =
lastSelectedNode &&
(e.source === lastSelectedNode || e.target === lastSelectedNode);
return {
...e,
lineStyle: { ...e.lineStyle, opacity: connected ? 1 : 0.1 }
};
});
chart.setOption({series:[{type:'graph', layout:'force', data:updatedNodes, links:updatedEdges, roam:true, force:{repulsion:200, edgeLength:[80,120]}}]});
chart.setOption({
animation: false,
series: [{
type: "graph",
layout: "force",
roam: true,
data: updatedNodes,
links: updatedEdges,
force: { repulsion: 200, edgeLength: [80, 120] }
}]
});
}
chart.on('click', function(params){
if(params.dataType==='node') updateSelectedNode(params.data.name);
else{
lastSelectedNode=null; updateChart();
document.getElementById('node-long-name').innerText='';
document.getElementById('node-short-name').innerText='';
document.getElementById('node-role').innerText='';
document.getElementById('node-hw-model').innerText='';
// -----------------------------------
// CLICK EVENTS
// -----------------------------------
chart.on("click", function(params){
if (params.dataType === "node") {
updateSelectedNode(params.data.name);
} else {
lastSelectedNode = null;
updateChart();
document.getElementById("node-long-name").innerText = "";
document.getElementById("node-short-name").innerText = "";
document.getElementById("node-role").innerText = "";
document.getElementById("node-hw-model").innerText = "";
}
});
function updateSelectedNode(selNode){
lastSelectedNode=selNode; updateChart();
const n = filteredNodes.find(x=>x.name===selNode);
if(n){
document.getElementById('node-long-name').innerText=n.long_name;
document.getElementById('node-short-name').innerText=n.short_name;
document.getElementById('node-role').innerText=n.role;
document.getElementById('node-hw-model').innerText=n.hw_model;
}
function updateSelectedNode(id) {
lastSelectedNode = id;
updateChart();
const n = filteredNodes.find(n => n.name === id);
if (!n) return;
document.getElementById("node-long-name").innerText = n.long_name;
document.getElementById("node-short-name").innerText = n.short_name;
document.getElementById("node-role").innerText = n.role;
document.getElementById("node-hw-model").innerText = n.hw_model;
}
function searchNode(){
const q = document.getElementById('node-search').value.toLowerCase().trim();
if(!q) return;
const found = filteredNodes.find(n=>n.name.toLowerCase().includes(q) || n.long_name.toLowerCase().includes(q) || n.short_name.toLowerCase().includes(q));
if(found) updateSelectedNode(found.name);
// -----------------------------------
// SEARCH
// -----------------------------------
function searchNode() {
const q = document.getElementById("node-search").value.toLowerCase().trim();
if (!q) return;
const found = filteredNodes.find(n =>
n.name.toLowerCase().includes(q) ||
(n.long_name || "").toLowerCase().includes(q) ||
(n.short_name || "").toLowerCase().includes(q)
);
if (found) updateSelectedNode(found.name);
else alert("Node not found in current channel!");
}
populateChannelDropdown();
window.addEventListener('resize', ()=>chart.resize());
// -----------------------------------
loadData();
window.addEventListener("resize", () => chart.resize());
</script>
{% endblock %}


@@ -120,21 +120,10 @@ select, .export-btn, .search-box, .clear-btn {
<div class="filter-container">
<input type="text" id="search-box" class="search-box" placeholder="Search by name or ID..." />
<select id="role-filter">
<option value="">All Roles</option>
</select>
<select id="channel-filter">
<option value="">All Channels</option>
</select>
<select id="hw-filter">
<option value="">All HW Models</option>
</select>
<select id="firmware-filter">
<option value="">All Firmware</option>
</select>
<select id="role-filter"><option value="">All Roles</option></select>
<select id="channel-filter"><option value="">All Channels</option></select>
<select id="hw-filter"><option value="">All HW Models</option></select>
<select id="firmware-filter"><option value="">All Firmware</option></select>
<button class="favorites-btn" id="favorites-btn">⭐ Show Favorites</button>
<button class="export-btn" id="export-btn">Export CSV</button>
@@ -149,7 +138,7 @@ select, .export-btn, .search-box, .clear-btn {
<table>
<thead>
<tr>
<th>Short<span class="sort-icon"></span></th>
<th>Short <span class="sort-icon"></span></th>
<th>Long Name <span class="sort-icon"></span></th>
<th>HW Model <span class="sort-icon"></span></th>
<th>Firmware <span class="sort-icon"></span></th>
@@ -157,52 +146,75 @@ select, .export-btn, .search-box, .clear-btn {
<th>Last Latitude <span class="sort-icon"></span></th>
<th>Last Longitude <span class="sort-icon"></span></th>
<th>Channel <span class="sort-icon"></span></th>
<th>Last Update <span class="sort-icon"></span></th>
<th>Favorite</th>
<th>Last Seen <span class="sort-icon"></span></th>
<th> </th>
</tr>
</thead>
<tbody id="node-table-body">
<tr><td colspan="9" style="text-align:center; color:white;">Loading nodes...</td></tr>
<tr><td colspan="10" style="text-align:center; color:white;">Loading nodes...</td></tr>
</tbody>
</table>
</div>
<script>
// =====================================================
// GLOBALS
// =====================================================
let allNodes = [];
let sortColumn = "short_name"; // default sorted column
let sortAsc = true; // default ascending
let sortColumn = "short_name";
let sortAsc = true;
let showOnlyFavorites = false;
// Declare headers and keyMap BEFORE any function that uses them
const headers = document.querySelectorAll("thead th");
const keyMap = ["short_name","long_name","hw_model","firmware","role","last_lat","last_long","channel","last_update"];
const keyMap = [
"short_name","long_name","hw_model","firmware","role",
"last_lat","last_long","channel","last_seen_us"
];
// LocalStorage functions for favorites
// =====================================================
// FAVORITES SYSTEM (localStorage)
// =====================================================
function getFavorites() {
const favorites = localStorage.getItem('nodelist_favorites');
return favorites ? JSON.parse(favorites) : [];
}
function saveFavorites(favorites) {
localStorage.setItem('nodelist_favorites', JSON.stringify(favorites));
function saveFavorites(favs) {
localStorage.setItem('nodelist_favorites', JSON.stringify(favs));
}
function toggleFavorite(nodeId) {
let favorites = getFavorites();
const index = favorites.indexOf(nodeId);
if (index > -1) {
favorites.splice(index, 1);
} else {
favorites.push(nodeId);
}
saveFavorites(favorites);
applyFilters();
let favs = getFavorites();
const idx = favs.indexOf(nodeId);
if (idx >= 0) favs.splice(idx, 1);
else favs.push(nodeId);
saveFavorites(favs);
}
function isFavorite(nodeId) {
return getFavorites().includes(nodeId);
}
// =====================================================
// "TIME AGO" FORMATTER
// =====================================================
function timeAgo(usTimestamp) {
if (!usTimestamp) return "N/A";
const ms = usTimestamp / 1000;
const diff = Date.now() - ms;
if (diff < 60000) return "just now";
const mins = Math.floor(diff / 60000);
if (mins < 60) return `${mins} min ago`;
const hrs = Math.floor(mins / 60);
if (hrs < 24) return `${hrs} hr ago`;
const days = Math.floor(hrs / 24);
return `${days} day${days > 1 ? "s" : ""} ago`;
}
// =====================================================
// DOM LOADED: FETCH NODES
// =====================================================
document.addEventListener("DOMContentLoaded", async function() {
const tbody = document.getElementById("node-table-body");
const roleFilter = document.getElementById("role-filter");
@@ -216,15 +228,17 @@ document.addEventListener("DOMContentLoaded", async function() {
const favoritesBtn = document.getElementById("favorites-btn");
try {
const response = await fetch("/api/nodes?days_active=3");
if (!response.ok) throw new Error("Failed to fetch nodes");
const data = await response.json();
const res = await fetch("/api/nodes?days_active=3");
if (!res.ok) throw new Error("Failed to fetch nodes");
const data = await res.json();
allNodes = data.nodes;
populateFilters(allNodes);
renderTable(allNodes);
updateSortIcons();
} catch (err) {
tbody.innerHTML = `<tr><td colspan="9" style="text-align:center; color:red;">Error loading nodes: ${err.message}</td></tr>`;
tbody.innerHTML = `<tr><td colspan="10" style="text-align:center; color:red;">Error loading nodes: ${err.message}</td></tr>`;
}
roleFilter.addEventListener("change", applyFilters);
@@ -235,138 +249,140 @@ document.addEventListener("DOMContentLoaded", async function() {
exportBtn.addEventListener("click", exportToCSV);
clearBtn.addEventListener("click", clearFilters);
favoritesBtn.addEventListener("click", toggleFavoritesFilter);
// Use event delegation for star clicks
tbody.addEventListener("click", (e) => {
// STAR CLICK HANDLER
tbody.addEventListener("click", e => {
if (e.target.classList.contains('favorite-star')) {
const nodeId = parseInt(e.target.getAttribute('data-node-id'));
// Get current favorites
let favorites = getFavorites();
const index = favorites.indexOf(nodeId);
const isNowFavorite = index === -1; // Will it be a favorite after toggle?
// Update the star immediately for instant feedback
if (isNowFavorite) {
e.target.classList.add('active');
e.target.textContent = '★';
const nodeId = parseInt(e.target.dataset.nodeId);
const isFav = isFavorite(nodeId);
if (isFav) {
e.target.classList.remove("active");
e.target.textContent = "☆";
} else {
e.target.classList.remove('active');
e.target.textContent = '☆';
e.target.classList.add("active");
e.target.textContent = "★";
}
// Save to localStorage
toggleFavorite(nodeId);
}
});
// SORTING
headers.forEach((th, index) => {
th.addEventListener("click", () => {
const key = keyMap[index];
let key = keyMap[index];
sortAsc = (sortColumn === key) ? !sortAsc : true;
sortColumn = key;
applyFilters(); // apply filters and sort
applyFilters();
});
});
// =====================================================
// FILTER POPULATION
// =====================================================
function populateFilters(nodes) {
const roles = new Set();
const channels = new Set();
const hws = new Set();
const firmwares = new Set();
const roles = new Set(), channels = new Set(), hws = new Set(), fws = new Set();
nodes.forEach(n => {
if (n.role) roles.add(n.role);
if (n.channel) channels.add(n.channel);
if (n.hw_model) hws.add(n.hw_model);
if (n.firmware) firmwares.add(n.firmware);
if (n.firmware) fws.add(n.firmware);
});
fillSelect(roleFilter, roles);
fillSelect(channelFilter, channels);
fillSelect(hwFilter, hws);
fillSelect(firmwareFilter, firmwares);
fillSelect(firmwareFilter, fws);
}
function fillSelect(select, values) {
[...values].sort().forEach(v => {
const option = document.createElement("option");
option.value = v;
option.textContent = v;
select.appendChild(option);
const opt = document.createElement("option");
opt.value = v;
opt.textContent = v;
select.appendChild(opt);
});
}
// =====================================================
// FAVORITES FILTER
// =====================================================
function toggleFavoritesFilter() {
showOnlyFavorites = !showOnlyFavorites;
if (showOnlyFavorites) {
favoritesBtn.textContent = "⭐ Show All";
favoritesBtn.classList.add("active");
} else {
favoritesBtn.textContent = "⭐ Show Favorites";
favoritesBtn.classList.remove("active");
}
favoritesBtn.textContent = showOnlyFavorites ? "⭐ Show All" : "⭐ Show Favorites";
favoritesBtn.classList.toggle("active", showOnlyFavorites);
applyFilters();
}
// =====================================================
// APPLY FILTERS + SORT
// =====================================================
function applyFilters() {
const searchTerm = searchBox.value.trim().toLowerCase();
let filtered = allNodes.filter(node => {
const roleMatch = !roleFilter.value || node.role === roleFilter.value;
const channelMatch = !channelFilter.value || node.channel === channelFilter.value;
const hwMatch = !hwFilter.value || node.hw_model === hwFilter.value;
const firmwareMatch = !firmwareFilter.value || node.firmware === firmwareFilter.value;
let filtered = allNodes.filter(n => {
const roleMatch = !roleFilter.value || n.role === roleFilter.value;
const channelMatch = !channelFilter.value || n.channel === channelFilter.value;
const hwMatch = !hwFilter.value || n.hw_model === hwFilter.value;
const fwMatch = !firmwareFilter.value || n.firmware === firmwareFilter.value;
const searchMatch =
!searchTerm ||
(n.long_name && n.long_name.toLowerCase().includes(searchTerm)) ||
(n.short_name && n.short_name.toLowerCase().includes(searchTerm)) ||
n.node_id.toString().includes(searchTerm);
const favMatch = !showOnlyFavorites || isFavorite(n.node_id);
return roleMatch && channelMatch && hwMatch && fwMatch && searchMatch && favMatch;
});
if (sortColumn) {
filtered = sortNodes(filtered, sortColumn, sortAsc);
}
renderTable(filtered);
updateSortIcons();
}
// =====================================================
// RENDER TABLE
// =====================================================
function renderTable(nodes) {
tbody.innerHTML = "";
if (!nodes.length) {
tbody.innerHTML = `<tr><td colspan="10" style="text-align:center; color:white;">No nodes found</td></tr>`;
countSpan.textContent = 0;
return;
}
nodes.forEach(node => {
const isFav = isFavorite(node.node_id);
const star = isFav ? "★" : "☆";
const row = document.createElement("tr");
row.innerHTML = `
<td>${node.short_name || "N/A"}</td>
<td><a href="/node/${node.node_id}">${node.long_name || "N/A"}</a></td>
<td>${node.hw_model || "N/A"}</td>
<td>${node.firmware || "N/A"}</td>
<td>${node.role || "N/A"}</td>
<td>${node.last_lat ? (node.last_lat / 1e7).toFixed(7) : "N/A"}</td>
<td>${node.last_long ? (node.last_long / 1e7).toFixed(7) : "N/A"}</td>
<td>${node.channel || "N/A"}</td>
<td>${timeAgo(node.last_seen_us)}</td>
<td style="text-align:center;">
<span class="favorite-star ${isFav ? "active" : ""}" data-node-id="${node.node_id}">${star}</span>
</td>
`;
tbody.appendChild(row);
});
countSpan.textContent = nodes.length;
}
// =====================================================
// CLEAR FILTERS
// =====================================================
function clearFilters() {
roleFilter.value = "";
channelFilter.value = "";
@@ -378,56 +394,60 @@ document.addEventListener("DOMContentLoaded", async function() {
showOnlyFavorites = false;
favoritesBtn.textContent = "⭐ Show Favorites";
favoritesBtn.classList.remove("active");
renderTable(allNodes);
updateSortIcons();
}
// =====================================================
// EXPORT CSV
// =====================================================
function exportToCSV() {
const rows = [];
const headerList = Array.from(headers).map(h => `"${h.innerText.replace(/▲|▼/g,'')}"`);
rows.push(headerList.join(","));
const trs = tbody.querySelectorAll("tr");
trs.forEach(tr => {
const cells = Array.from(tr.children).map(td => `"${td.innerText.replace(/"/g,'""')}"`);
rows.push(cells.join(","));
});
const csv = "data:text/csv;charset=utf-8,\uFEFF" + rows.join("\n");
const a = document.createElement("a");
a.href = encodeURI(csv);
a.download = "nodelist.csv";
a.click();
}
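The export relies on the standard CSV quoting rule: wrap each field in double quotes and double any embedded quote. A minimal standalone sketch of that rule (the `csvField` helper name is illustrative, not part of the page):

```javascript
// Minimal sketch of the field quoting used by exportToCSV above:
// wrap the field in quotes and double any embedded quote (RFC 4180 style).
function csvField(text) {
  return `"${String(text).replace(/"/g, '""')}"`;
}

// Commas and quotes inside a field survive the round trip.
const row = ["Base Node", 'He said "hi"', "ch,0"].map(csvField).join(",");
// row → "Base Node","He said ""hi""","ch,0"
```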
// =====================================================
// SORT NODES
// =====================================================
function sortNodes(nodes, key, asc) {
return [...nodes].sort((a, b) => {
let A = a[key];
let B = b[key];
// special handling for timestamp
if (key === "last_seen_us") {
A = A || 0;
B = B || 0;
}
if (A < B) return asc ? -1 : 1;
if (A > B) return asc ? 1 : -1;
return 0;
});
}
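The comparator treats a missing `last_seen_us` as 0, so never-seen nodes land at the bottom of a descending sort. A self-contained sketch with made-up node objects showing that behavior:

```javascript
// Standalone copy of the null-safe comparator above, for illustration.
function sortNodes(nodes, key, asc) {
  return [...nodes].sort((a, b) => {
    let A = a[key];
    let B = b[key];
    // missing timestamps are coerced to 0 so they sort last (descending)
    if (key === "last_seen_us") {
      A = A || 0;
      B = B || 0;
    }
    if (A < B) return asc ? -1 : 1;
    if (A > B) return asc ? 1 : -1;
    return 0;
  });
}

// Hypothetical sample data (not from the real API):
const sample = [
  { node_id: 1, last_seen_us: 2000 },
  { node_id: 2 },                      // never seen
  { node_id: 3, last_seen_us: 1000 },
];
const desc = sortNodes(sample, "last_seen_us", false).map(n => n.node_id);
// desc → [1, 3, 2]
```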
// =====================================================
// SORT ICONS
// =====================================================
function updateSortIcons() {
headers.forEach((th, i) => {
const span = th.querySelector(".sort-icon");
if (!span) return;
span.textContent = keyMap[i] === sortColumn ? (sortAsc ? "▲" : "▼") : "";
});
}
});


@@ -1,69 +1,478 @@
<div class="card mt-2">
<div class="card-header">
{% set from_me = packet.from_node_id == node_id %}
{% set to_me = packet.to_node_id == node_id %}
<span {% if from_me %} class="fw-bold" {% endif %}>
{{packet.from_node.long_name}}(
{%- if not from_me -%}
<a href="/node_search?q={{packet.from_node_id|node_id_to_hex}}">
{%- endif -%}
{{packet.from_node_id|node_id_to_hex}}
{%- if not from_me -%}
</a>
{%- endif -%}
)
</span>
<span {% if to_me %} class="fw-bold" {% endif %}>
{{packet.to_node.long_name}}(
{%- if not to_me -%}
<a hx-target="#node" href="/node_search?q={{packet.to_node_id|node_id_to_hex}}">
{%- endif -%}
{{packet.to_node_id|node_id_to_hex}}
{%- if not to_me -%}
</a>
{%- endif -%}
)
</span>
</div>
<div class="card-body">
<div class="card-title">
{{packet.id}}
<a href="/packet/{{packet.id}}">🔎</a>
</div>
<div class="card-text text-start">
<dl>
<dt>Import Time</dt>
<dd>{{packet.import_time.strftime('%-I:%M:%S %p - %m-%d-%Y')}}</dd>
<dt>packet</dt>
<dd><pre>{{packet.data}}</pre></dd>
<dt>payload</dt>
<dd>
{% if packet.pretty_payload %}
<div>{{packet.pretty_payload}}</div>
{% endif %}
{% if packet.raw_mesh_packet and packet.raw_mesh_packet.decoded and packet.raw_mesh_packet.decoded.reply_id %}
<i>(Replying to: <a href="/packet/{{ packet.raw_mesh_packet.decoded.reply_id }}">{{ packet.raw_mesh_packet.decoded.reply_id }}</a>)</i>
{% endif %}
{% if packet.raw_mesh_packet.decoded and packet.raw_mesh_packet.decoded.portnum == 70 %}
<ul>
{% for node_id in packet.raw_payload.route %}
<li><a
href="/packet_list/{{node_id}}"
>
{{node_id | node_id_to_hex}}
</a>
</li>
{% endfor %}
</ul>
{% if packet.raw_mesh_packet.decoded.want_response %}
<a href="/graph/traceroute/{{packet.id}}">graph</a>
{% else %}
<a href="/graph/traceroute/{{packet.raw_mesh_packet.decoded.request_id}}">graph</a>
{% endif %}
{% endif %}
<pre>{{packet.payload}}</pre>
</dd>
</dl>
{% extends "base.html" %}
{% block title %}Packet Details{% endblock %}
{% block css %}
{{ super() }}
<style>
/* --- Packet page container --- */
.packet-container {
max-width: 900px;
margin: 0 auto;
padding: 20px 15px;
font-family: "JetBrains Mono", monospace;
}
/* --- Packet Details Card --- */
.packet-card .card-body { padding: 26px 30px; }
.packet-card {
background-color: #1e1f22;
border: 1px solid #3a3a3a;
border-radius: 12px;
color: #ddd;
margin-top: 35px;
box-shadow: 0 0 20px rgba(0,0,0,0.35);
overflow: hidden;
}
.packet-card .card-header {
background: linear-gradient(90deg, #2c2f35, #25262a);
border-bottom: 1px solid #3f3f3f;
font-weight: 600;
font-size: 1.1em;
padding: 14px 18px;
color: #e2e6ea;
display: flex;
justify-content: space-between;
align-items: center;
}
/* --- Map --- */
#map {
width: 100%;
height: 640px;
border-radius: 10px;
margin-top: 20px;
border: 1px solid #333;
display: none;
}
/* --- SOURCE MARKER (slightly bigger) --- */
.source-marker {
width: 24px;
height: 24px;
background: rgba(255,0,0,0.55);
border: 3px solid #ff0000;
border-radius: 50%;
box-shadow: 0 0 6px rgba(255,0,0,0.7);
}
/* --- Seen Table --- */
.seen-table {
border-collapse: separate;
border-spacing: 0 6px;
font-size: 0.92em;
}
.seen-table thead th {
background-color: #2a2b2f;
color: #e2e2e2;
padding: 10px 12px;
border: none !important;
text-transform: uppercase;
font-size: 0.75em;
}
.seen-table tbody td {
background: #323338;
color: #f0f0f0;
border-top: 1px solid #4a4c4f !important;
border-bottom: 1px solid #4a4c4f !important;
padding: 10px 12px !important;
}
.seen-table tbody tr:hover td { background-color: #3a3c41 !important; }
.seen-table tbody tr td:first-child {
border-left: 1px solid #4a4c4f;
border-top-left-radius: 8px;
border-bottom-left-radius: 8px;
}
.seen-table tbody tr td:last-child {
border-right: 1px solid #4a4c4f;
border-top-right-radius: 8px;
border-bottom-right-radius: 8px;
}
</style>
{% endblock %}
{% block body %}
<div class="container mt-4 mb-5 packet-container">
<div id="loading">Loading packet information...</div>
<div id="packet-card" class="packet-card d-none"></div>
<div id="map"></div>
<div id="seen-container" class="mt-4 d-none">
<h5 style="color:#ccc; margin:15px 0 10px 0;">
📡 Seen By <span id="seen-count" style="color:#4da6ff;"></span>
</h5>
<div class="table-responsive">
<table class="table table-dark table-sm seen-table">
<thead>
<tr>
<th>Gateway</th>
<th>RSSI</th>
<th>SNR</th>
<th>Hop</th>
<th>Channel</th>
<th>Time</th>
</tr>
</thead>
<tbody id="seen-table-body"></tbody>
</table>
</div>
</div>
</div>
<script>
document.addEventListener("DOMContentLoaded", async () => {
const packetCard = document.getElementById("packet-card");
const loading = document.getElementById("loading");
const mapDiv = document.getElementById("map");
const seenContainer = document.getElementById("seen-container");
const seenTableBody = document.getElementById("seen-table-body");
const seenCountSpan = document.getElementById("seen-count");
/* ---------------------------------------------
Identify packet ID
----------------------------------------------*/
const match = window.location.pathname.match(/\/packet\/(\d+)/);
if (!match) {
loading.textContent = "Invalid packet URL";
return;
}
const packetId = match[1];
/* PORT NAME MAP */
const PORT_NAMES = {
0:"UNKNOWN APP",
1:"Text",
3:"Position",
4:"Node Info",
5:"Routing",
6:"Admin",
67:"Telemetry",
70:"Traceroute",
71:"Neighbor"
};
/* ---------------------------------------------
Fetch packet
----------------------------------------------*/
const packetRes = await fetch(`/api/packets?packet_id=${packetId}`);
const packetData = await packetRes.json();
if (!packetData.packets.length) {
loading.textContent = "Packet not found.";
return;
}
const p = packetData.packets[0];
/* ---------------------------------------------
Fetch all nodes
----------------------------------------------*/
const nodesRes = await fetch("/api/nodes");
const nodesData = await nodesRes.json();
const nodeLookup = {};
(nodesData.nodes || []).forEach(n => nodeLookup[n.node_id] = n);
const fromNodeObj = nodeLookup[p.from_node_id];
const toNodeObj = nodeLookup[p.to_node_id];
const fromNodeLabel = fromNodeObj?.long_name || p.from_node_id;
const toNodeLabel =
p.to_node_id == 4294967295 ? "All" : (toNodeObj?.long_name || p.to_node_id);
/* ---------------------------------------------
Parse payload for lat/lon if this *packet* is a position packet
----------------------------------------------*/
let lat = null, lon = null;
const parsed = {};
if (p.payload?.includes(":")) {
p.payload.split("\n").forEach(line => {
const [k, v] = line.split(":").map(x=>x.trim());
if (k && v !== undefined) {
parsed[k] = v;
if (k === "latitude_i") lat = Number(v) / 1e7;
if (k === "longitude_i") lon = Number(v) / 1e7;
}
});
}
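The loop above splits each `key: value` line of the payload and rescales `latitude_i`/`longitude_i`, which Meshtastic encodes as integer degrees × 1e7. A sketch of the same parsing wrapped in a function for illustration (the payload text is a made-up example):

```javascript
// Illustrative refactor of the inline payload parsing above.
function parsePayload(payload) {
  const parsed = {};
  let lat = null, lon = null;
  (payload || "").split("\n").forEach(line => {
    const [k, v] = line.split(":").map(x => x.trim());
    if (k && v !== undefined) {
      parsed[k] = v;
      if (k === "latitude_i") lat = Number(v) / 1e7;   // integer degrees × 1e7
      if (k === "longitude_i") lon = Number(v) / 1e7;
    }
  });
  return { parsed, lat, lon };
}

// Hypothetical position payload:
const { lat, lon } = parsePayload("latitude_i: 377749000\nlongitude_i: -1224194000");
// lat → 37.7749, lon → -122.4194
```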
/* ---------------------------------------------
Render packet header & details
----------------------------------------------*/
const time = p.import_time_us
? new Date(p.import_time_us / 1000).toLocaleString()
: "—";
const telemetryExtras = [];
if (parsed.PDOP) telemetryExtras.push(`PDOP: ${parsed.PDOP}`);
if (parsed.sats_in_view) telemetryExtras.push(`Sats: ${parsed.sats_in_view}`);
if (parsed.ground_speed) telemetryExtras.push(`Speed: ${parsed.ground_speed}`);
if (parsed.altitude) telemetryExtras.push(`Altitude: ${parsed.altitude}`);
packetCard.innerHTML = `
<div class="card-header">
<span>Packet ID: <i>${p.id}</i></span>
<small>${time}</small>
</div>
<div class="card-body">
<dl>
<dt>From Node:</dt>
<dd><a href="/node/${p.from_node_id}">${fromNodeLabel}</a></dd>
<dt>To Node:</dt>
<dd>${
p.to_node_id === 4294967295
? `<i>All</i>`
: p.to_node_id === 1
? `<i>Direct to MQTT</i>`
: `<a href="/node/${p.to_node_id}">${toNodeLabel}</a>`
}</dd>
<dt>Channel:</dt><dd>${p.channel ?? "—"}</dd>
<dt>Port:</dt>
<dd><i>${PORT_NAMES[p.portnum] || "UNKNOWN APP"}</i> (${p.portnum})</dd>
<dt>Raw Payload:</dt>
<dd><pre>${escapeHtml(p.payload ?? "—")}</pre></dd>
${
telemetryExtras.length
? `<dt>Decoded Telemetry</dt>
<dd><pre>${telemetryExtras.join("\n")}</pre></dd>`
: ""
}
${
lat && lon
? `<dt>Location:</dt><dd>${lat.toFixed(6)}, ${lon.toFixed(6)}</dd>`
: ""
}
</dl>
</div>
`;
loading.classList.add("d-none");
packetCard.classList.remove("d-none");
/* ---------------------------------------------
Map initialization
----------------------------------------------*/
const map = L.map("map");
mapDiv.style.display = "block";
L.tileLayer("https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png", {
maxZoom: 19
}).addTo(map);
const allBounds = [];
/* ---------------------------------------------
ALWAYS SHOW SOURCE POSITION
Priority:
1) position from packet payload
2) fallback: last_lat/last_long from /api/nodes
----------------------------------------------*/
let srcLat = lat;
let srcLon = lon;
if ((!srcLat || !srcLon) && fromNodeObj?.last_lat && fromNodeObj?.last_long) {
srcLat = fromNodeObj.last_lat / 1e7;
srcLon = fromNodeObj.last_long / 1e7;
}
if (srcLat && srcLon) {
allBounds.push([srcLat, srcLon]);
const sourceIcon = L.divIcon({
html: `<div class="source-marker"></div>`,
className: "",
iconSize: [26, 26],
iconAnchor: [13, 13]
});
const sourceMarker = L.marker([srcLat, srcLon], {
icon: sourceIcon,
zIndexOffset: 9999
}).addTo(map);
sourceMarker.bindPopup(`
<div style="font-size:0.9em">
<b>Packet Source</b><br>
Lat: ${srcLat.toFixed(6)}<br>
Lon: ${srcLon.toFixed(6)}<br>
From Node: ${fromNodeLabel}<br>
Channel: ${p.channel ?? "—"}<br>
Port: ${PORT_NAMES[p.portnum] || "UNKNOWN"} (${p.portnum})
</div>
`);
} else {
map.setView([0,0], 2);
}
/* ---------------------------------------------
Color for hop indicator markers (warm → cold)
----------------------------------------------*/
function hopColor(hopValue){
const colors = [
"#ff3b30",
"#ff6b22",
"#ff9f0c",
"#ffd60a",
"#87d957",
"#57d9c4",
"#3db2ff",
"#1e63ff"
];
let h = Number(hopValue);
if (isNaN(h)) return "#aaa";
if (h < 0) h = 0;
if (h > 7) h = 7;
return colors[h];
}
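`hopColor` clamps the hop value into the 8-entry warm-to-cold palette and falls back to grey for non-numeric input, so out-of-range hop counts never index past the array. A runnable copy demonstrating the edge cases:

```javascript
// Standalone copy of hopColor above, to show the clamping behavior.
function hopColor(hopValue) {
  const colors = [
    "#ff3b30", "#ff6b22", "#ff9f0c", "#ffd60a",
    "#87d957", "#57d9c4", "#3db2ff", "#1e63ff",
  ];
  let h = Number(hopValue);
  if (isNaN(h)) return "#aaa"; // non-numeric → neutral grey
  if (h < 0) h = 0;            // clamp into [0, 7]
  if (h > 7) h = 7;
  return colors[h];
}

hopColor(0);     // → "#ff3b30" (direct reception, warmest)
hopColor(12);    // → "#1e63ff" (clamped to coldest)
hopColor("n/a"); // → "#aaa"
```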
/* Distance helper */
function haversine(lat1,lon1,lat2,lon2){
const R=6371;
const dLat=(lat2-lat1)*Math.PI/180;
const dLon=(lon2-lon1)*Math.PI/180;
const a=Math.sin(dLat/2)**2+
Math.cos(lat1*Math.PI/180)*
Math.cos(lat2*Math.PI/180)*
Math.sin(dLon/2)**2;
return R*(2*Math.atan2(Math.sqrt(a),Math.sqrt(1-a)));
}
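The helper returns the great-circle distance in kilometres (R = 6371 km mean Earth radius); the gateway popup then converts to miles with the 0.621371 factor. A quick sanity check: one degree of longitude on the equator is about 111.19 km.

```javascript
// Standalone copy of the haversine helper above.
function haversine(lat1, lon1, lat2, lon2) {
  const R = 6371; // mean Earth radius, km
  const dLat = (lat2 - lat1) * Math.PI / 180;
  const dLon = (lon2 - lon1) * Math.PI / 180;
  const a = Math.sin(dLat / 2) ** 2 +
            Math.cos(lat1 * Math.PI / 180) *
            Math.cos(lat2 * Math.PI / 180) *
            Math.sin(dLon / 2) ** 2;
  return R * (2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a)));
}

const km = haversine(0, 0, 0, 1); // 1° of longitude on the equator
const mi = km * 0.621371;         // same km→mi conversion the popup uses
// km ≈ 111.19, mi ≈ 69.09
```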
/* ---------------------------------------------
Fetch packets_seen
----------------------------------------------*/
const seenRes = await fetch(`/api/packets_seen/${packetId}`);
const seenData = await seenRes.json();
const seenList = seenData.seen ?? [];
/* sort by hop_start descending (warm → cold) */
const seenSorted = seenList.slice().sort((a,b)=>{
const A=a.hop_start??-999;
const B=b.hop_start??-999;
return B-A;
});
if (seenSorted.length){
seenContainer.classList.remove("d-none");
seenCountSpan.textContent=`(${seenSorted.length} gateways)`;
}
/* ---------------------------------------------
Gateway markers and seen table
----------------------------------------------*/
seenTableBody.innerHTML = seenSorted.map(s=>{
const node=nodeLookup[s.node_id];
const label=node?(node.long_name||node.node_id):s.node_id;
const timeStr = s.import_time_us
? new Date(s.import_time_us/1000).toLocaleTimeString()
: "—";
if(node?.last_lat && node.last_long){
const rlat=node.last_lat/1e7;
const rlon=node.last_long/1e7;
allBounds.push([rlat,rlon]);
const start = Number(s.hop_start ?? 0);
const limit = Number(s.hop_limit ?? 0);
const hopValue = start - limit;
const color = hopColor(hopValue);
const iconHtml = `
<div style="
background:${color};
width:24px;
height:24px;
border-radius:50%;
display:flex;
align-items:center;
justify-content:center;
color:white;
font-size:11px;
font-weight:700;
border:2px solid rgba(0,0,0,0.35);
box-shadow:0 0 5px rgba(0,0,0,0.45);
">${hopValue}</div>`;
const marker=L.marker([rlat,rlon],{
icon:L.divIcon({
html:iconHtml,
className:"",
iconSize:[24,24],
iconAnchor:[12,12]
})
}).addTo(map);
let distKm=null,distMi=null;
if(srcLat&&srcLon){
distKm=haversine(srcLat,srcLon,rlat,rlon);
distMi=distKm*0.621371;
}
marker.bindPopup(`
<div style="font-size:0.9em">
<b>${node?.long_name || s.node_id}</b><br>
Node ID: <a href="/node/${s.node_id}">${s.node_id}</a><br>
HW: ${node?.hw_model ?? "—"}<br>
Channel: ${s.channel ?? "—"}<br><br>
<b>Signal</b><br>
RSSI: ${s.rx_rssi ?? "—"}<br>
SNR: ${s.rx_snr ?? "—"}<br><br>
<b>Hops</b>: ${hopValue}<br>
<b>Distance</b><br>
${
distKm
? `${distKm.toFixed(2)} km (${distMi.toFixed(2)} mi)`
: "—"
}
</div>
`);
}
return `
<tr>
<td><a href="/node/${s.node_id}">${label}</a></td>
<td>${s.rx_rssi ?? "—"}</td>
<td>${s.rx_snr ?? "—"}</td>
<td>${s.hop_start ?? "—"} / ${s.hop_limit ?? "—"}</td>
<td>${s.channel ?? "—"}</td>
<td>${timeStr}</td>
</tr>`;
}).join("");
/* ---------------------------------------------
Fit map to all markers
----------------------------------------------*/
if(allBounds.length>0){
map.fitBounds(allBounds,{padding:[40,40]});
}
/* ---------------------------------------------
Escape HTML
----------------------------------------------*/
function escapeHtml(unsafe) {
return (unsafe??"").replace(/[&<"'>]/g,m=>({
"&":"&amp;",
"<":"&lt;",
">":"&gt;",
"\"":"&quot;",
"'":"&#039;"
})[m]);
}
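`escapeHtml` maps the five HTML-significant characters in a single regex pass before payload text is embedded via `innerHTML`, and coerces nullish input to an empty string. A runnable copy:

```javascript
// Standalone copy of escapeHtml above.
function escapeHtml(unsafe) {
  return (unsafe ?? "").replace(/[&<"'>]/g, m => ({
    "&": "&amp;",
    "<": "&lt;",
    ">": "&gt;",
    "\"": "&quot;",
    "'": "&#039;",
  })[m]);
}

escapeHtml('<b onclick="x()">&</b>');
// → "&lt;b onclick=&quot;x()&quot;&gt;&amp;&lt;/b&gt;"
escapeHtml(null); // → "" (nullish input becomes an empty string)
```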
});
</script>
{% endblock %}


@@ -1,132 +0,0 @@
<div id="details_map"></div>
{% for seen in packets_seen %}
<div class="card mt-2">
<div class="card-header">
{{seen.node.long_name}}(
<a hx-target="#node" href="/node_search?q={{seen.node_id|node_id_to_hex}}">
{{seen.node_id|node_id_to_hex}}
</a>
)
</div>
<div class="card-body">
<div class="card-text text-start">
<dl>
<dt>Import Time</dt>
<dd>{{seen.import_time.strftime('%-I:%M:%S %p - %m-%d-%Y')}}</dd>
<dt>rx_time</dt>
<dd>{{seen.rx_time|format_timestamp}}</dd>
<dt>hop_limit</dt>
<dd>{{seen.hop_limit}}</dd>
<dt>hop_start</dt>
<dd>{{seen.hop_start}}</dd>
<dt>channel</dt>
<dd>{{seen.channel}}</dd>
<dt>rx_snr</dt>
<dd>{{seen.rx_snr}}</dd>
<dt>rx_rssi</dt>
<dd>{{seen.rx_rssi}}</dd>
<dt>topic</dt>
<dd>{{seen.topic}}</dd>
</dl>
</div>
</div>
</div>
{% endfor %}
{% if map_center %}
<script>
var details_map = L.map('details_map').setView({{ map_center | tojson }}, 8);
var markers = L.featureGroup();
markers.addTo(details_map);
L.tileLayer('https://tile.openstreetmap.org/{z}/{x}/{y}.png', {
maxZoom: 15,
attribution: '&copy; <a href="http://www.openstreetmap.org/copyright">OpenStreetMap</a>'
}).addTo(details_map);
function getDistanceInMiles(latlng1, latlng2) {
var meters = latlng1.distanceTo(latlng2);
return meters * 0.000621371;
}
{% if from_node_cord %}
var fromNodeLatLng = L.latLng({{ from_node_cord | tojson }});
var fromNode = L.circleMarker(fromNodeLatLng, {
radius: 10,
color: 'red',
weight: 1,
fillColor: 'red',
fillOpacity: .4
}).addTo(markers);
fromNode.bindPopup(`
Sent by: <b>{{node.long_name}}</b><br/>
<b>Short:</b> {{node.short_name}}<br/>
<b>Channel:</b> {{node.channel}}<br/>
<b>Hardware:</b> {{node.hw_model}}<br/>
<b>Role:</b> {{node.role}}<br/>
<b>Firmware:</b> {{node.firmware}}<br/>
<b>Coordinates:</b> [{{node.last_lat}}, {{node.last_long}}]
`, { permanent: false, direction: 'top', opacity: 0.9 });
{% endif %}
{% for u in uplinked_nodes %}
var uplinkNodeLatLng = L.latLng([{{ u.lat }}, {{ u.long }}]);
{% if from_node_cord %}
var distanceMiles = getDistanceInMiles(fromNodeLatLng, uplinkNodeLatLng).toFixed(1);
{% endif %}
var node = L.marker(uplinkNodeLatLng, {
icon: L.divIcon({
className: 'text-icon',
html: `<div style="font-size: 12px; color: white; font-weight: bold; display: flex; justify-content: center; align-items: center; height: 16px; width: 16px; border-radius: 50%; background-color: blue; border: 1px solid blue;">{{u.hops}}</div>`,
iconSize: [16, 16],
iconAnchor: [8, 8]
})
}).addTo(markers);
node.setZIndexOffset({{u.hops}}*-1);
node.bindPopup(`
Heard by: <b>{{u.long_name}}</b><br>
<b>{{ u.short_name }}</b><br/>
<b>Hops:</b> {{ u.hops }}<br/>
<b>SNR:</b> {{ u.snr }}<br/>
<b>RSSI:</b> {{ u.rssi }}<br/>
{% if from_node_cord %}
<b>Distance:</b> ${distanceMiles} miles <br/>
{% endif %}
<b>Coordinates:</b> [{{u.lat}}, {{u.long}}]
`, { permanent: false, direction: 'top', opacity: 0.9 });
{% endfor %}
if (markers.getLayers().length > 0) {
details_map.fitBounds(markers.getBounds().pad(0.1), { animate: true });
}
var legend = L.control({ position: 'bottomleft' });
legend.onAdd = function(map) {
var div = L.DomUtil.create('div', 'info legend');
div.style.background = 'white';
div.style.padding = '8px';
div.style.border = '1px solid black';
div.style.borderRadius = '5px';
div.style.boxShadow = '0 0 5px rgba(0,0,0,0.3)';
div.style.color = 'black';
div.style.textAlign = 'left';
div.innerHTML = `
<b>Legend</b><br>
<svg width="20" height="20">
<circle cx="8" cy="8" r="6" fill="blue" stroke="blue" stroke-width="1" fill-opacity="0.9"/>
</svg> Receiving Node (Number is hop count)<br>
<svg width="20" height="20">
<circle cx="10" cy="10" r="8" fill="red" stroke="red" stroke-width="1" fill-opacity="0.4"/>
</svg> Sending Node<br>
`;
return div;
};
legend.addTo(details_map);
</script>
{% endif %}


@@ -1,24 +0,0 @@
{% extends "base.html" %}
{% block css %}
/* Set the maximum width of the page to 900px */
.container {
max-width: 900px;
margin: 0 auto; /* Center the content horizontally */
}
{% endblock %}
{% block body %}
<div class="container">
<div class="row">
<div>
{% include 'packet.html' %}
</div>
<div
id="packet_details"
hx-get="/packet_details/{{packet.id}}"
hx-trigger="load"
>
</div>
</div>
</div>
{% endblock %}


@@ -1,7 +0,0 @@
<div class="col" id="packet_list">
{% for packet in packets %}
{% include 'packet.html' %}
{% else %}
No packets found.
{% endfor %}
</div>


@@ -1,21 +0,0 @@
{% extends "base.html" %}
{% block body %}
{% include "search_form.html" %}
<ul>
{% for node in nodes %}
<li>
<a href="/packet_list/{{node.node_id}}?{{query_string}}">
{{node.node_id | node_id_to_hex}}
{% if node.long_name %}
{{node.short_name}} &mdash; {{node.long_name}}
{% endif %}
</a>
</li>
{% endfor %}
</ul>
{% endblock %}


@@ -1,44 +0,0 @@
<form
class="container p-2 sticky-top mx-auto"
id="search_form"
action="/node_search"
>
<div class="row">
<input
class="col m-2"
id="q"
type="text"
name="q"
placeholder="Node id"
autocomplete="off"
list="node_options"
value="{{raw_node_id}}"
hx-trigger="input delay:100ms"
hx-get="/node_match"
hx-target="#node_options"
/>
{% include "datalist.html" %}
{% set options = {
1: "Text Message",
3: "Position",
4: "Node Info",
67: "Telemetry",
70: "Traceroute",
71: "Neighbor Info",
}
%}
<select name="portnum" class="col-2 m-2">
<option
value = ""
{% if portnum not in options %}selected{% endif %}
>All</option>
{% for value, name in options.items() %}
<option
value="{{value}}"
{% if value == portnum %}selected{% endif %}
>{{ name }}</option>
{% endfor %}
</select>
<input type="submit" value="Go to Node" class="col-2 m-2" />
</div>
</form>


@@ -93,26 +93,31 @@
{% block body %}
<div class="main-container">
<h2 class="main-header" data-translate-lang="mesh_stats_summary">
Mesh Statistics - Summary (all available in Database)
</h2>
<!-- Summary cards now fully driven by API + JS -->
<div class="summary-container" style="display:flex; justify-content:space-between; gap:10px; margin-bottom:20px;">
<div class="summary-card" style="flex:1;">
<p data-translate-lang="total_nodes">Total Nodes</p>
<div class="summary-count" id="summary_nodes">0</div>
</div>
<div class="summary-card" style="flex:1;">
<p data-translate-lang="total_packets">Total Packets</p>
<div class="summary-count" id="summary_packets">0</div>
</div>
<div class="summary-card" style="flex:1;">
<p data-translate-lang="total_packets_seen">Total Packets Seen</p>
<div class="summary-count" id="summary_seen">0</div>
</div>
</div>
<!-- Daily Charts -->
<div class="card-section">
<p class="section-header" data-translate-lang="packets_per_day_all">
Packets per Day - All Ports (Last 14 Days)
</p>
<div id="total_daily_all" class="total-count">Total: 0</div>
<button class="expand-btn" data-chart="chart_daily_all" data-translate-lang="expand_chart">Expand Chart</button>
<button class="export-btn" data-chart="chart_daily_all" data-translate-lang="export_csv">Export CSV</button>
@@ -121,7 +126,9 @@
<!-- Packet Types Pie Chart with Channel Selector -->
<div class="card-section">
<p class="section-header" data-translate-lang="packet_types_last_24h">
Packet Types - Last 24 Hours
</p>
<select id="channelSelect">
<option value="" data-translate-lang="all_channels">All Channels</option>
</select>
@@ -131,7 +138,9 @@
</div>
<div class="card-section">
<p class="section-header" data-translate-lang="packets_per_day_text">
Packets per Day - Text Messages (Port 1, Last 14 Days)
</p>
<div id="total_daily_portnum_1" class="total-count">Total: 0</div>
<button class="expand-btn" data-chart="chart_daily_portnum_1" data-translate-lang="expand_chart">Expand Chart</button>
<button class="export-btn" data-chart="chart_daily_portnum_1" data-translate-lang="export_csv">Export CSV</button>
@@ -140,7 +149,9 @@
<!-- Hourly Charts -->
<div class="card-section">
<p class="section-header" data-translate-lang="packets_per_hour_all">
Packets per Hour - All Ports
</p>
<div id="total_hourly_all" class="total-count">Total: 0</div>
<button class="expand-btn" data-chart="chart_hourly_all" data-translate-lang="expand_chart">Expand Chart</button>
<button class="export-btn" data-chart="chart_hourly_all" data-translate-lang="export_csv">Export CSV</button>
@@ -148,7 +159,9 @@
</div>
<div class="card-section">
<p class="section-header" data-translate-lang="packets_per_hour_text">
Packets per Hour - Text Messages (Port 1)
</p>
<div id="total_portnum_1" class="total-count">Total: 0</div>
<button class="expand-btn" data-chart="chart_portnum_1" data-translate-lang="expand_chart">Expand Chart</button>
<button class="export-btn" data-chart="chart_portnum_1" data-translate-lang="export_csv">Export CSV</button>
@@ -214,17 +227,123 @@ async function fetchStats(period_type,length,portnum=null,channel=null){
}catch{return [];}
}
async function fetchNodes(){
try{
const res=await fetch("/api/nodes");
const json=await res.json();
return json.nodes||[];
}catch{
return [];
}
}
async function fetchChannels(){
try{
const res = await fetch("/api/channels");
const json = await res.json();
return json.channels || [];
}catch{
return [];
}
}
function processCountField(nodes,field){
const counts={};
nodes.forEach(n=>{
const key=n[field]||"Unknown";
counts[key]=(counts[key]||0)+1;
});
return Object.entries(counts).map(([name,value])=>({name,value}));
}
function updateTotalCount(domId,data){
const el=document.getElementById(domId);
if(!el||!data.length) return;
const total=data.reduce((acc,d)=>acc+(d.count??d.packet_count??0),0);
el.textContent=`Total: ${total.toLocaleString()}`;
}
function prepareTopN(data,n=20){
data.sort((a,b)=>b.value-a.value);
let top=data.slice(0,n);
if(data.length>n){
const otherValue=data.slice(n).reduce((sum,item)=>sum+item.value,0);
top.push({name:"Other", value:otherValue});
}
return top;
}
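`prepareTopN` sorts descending by count and folds everything past the cutoff into a single "Other" slice so the pie chart stays legible. A runnable sketch with made-up port counts:

```javascript
// Standalone copy of prepareTopN above.
function prepareTopN(data, n = 20) {
  data.sort((a, b) => b.value - a.value); // descending by count (mutates input)
  let top = data.slice(0, n);
  if (data.length > n) {
    // fold the tail into a single "Other" bucket
    const otherValue = data.slice(n).reduce((sum, item) => sum + item.value, 0);
    top.push({ name: "Other", value: otherValue });
  }
  return top;
}

// Hypothetical breakdown:
const counts = [
  { name: "Telemetry", value: 50 },
  { name: "Position", value: 30 },
  { name: "Text", value: 10 },
  { name: "Routing", value: 5 },
];
const top3 = prepareTopN(counts, 2);
// top3 → [Telemetry:50, Position:30, Other:15]
```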
// --- Chart Rendering ---
function renderChart(domId,data,type,color){
const el=document.getElementById(domId);
if(!el) return;
const chart=echarts.init(el);
    const periods=data.map(d=>(d.period!=null)?d.period.toString():'');
const counts=data.map(d=>d.count??d.packet_count??0);
chart.setOption({
backgroundColor:'#272b2f',
tooltip:{trigger:'axis'},
grid:{left:'6%', right:'6%', bottom:'18%'},
xAxis:{
type:'category',
data:periods,
axisLine:{lineStyle:{color:'#aaa'}},
axisLabel:{rotate:45,color:'#ccc'}
},
yAxis:{
type:'value',
axisLine:{lineStyle:{color:'#aaa'}},
axisLabel:{color:'#ccc'}
},
series:[{
data:counts,
type:type,
smooth:type==='line',
itemStyle:{color:color},
areaStyle:type==='line'?{}:undefined
}]
});
return chart;
}
function renderPieChart(elId,data,name){
const el=document.getElementById(elId);
if(!el) return;
const chart=echarts.init(el);
const top20=prepareTopN(data,20);
chart.setOption({
backgroundColor:"#272b2f",
tooltip:{
trigger:"item",
formatter: params=>`${params.name}: ${Math.round(params.percent)}% (${params.value})`
},
series:[{
name:name,
type:"pie",
radius:["30%","70%"],
center:["50%","50%"],
avoidLabelOverlap:true,
itemStyle:{
borderRadius:6,
borderColor:"#272b2f",
borderWidth:2
},
label:{
show:true,
formatter:"{b}\n{d}%",
color:"#ccc",
fontSize:10
},
labelLine:{
show:true,
length:10,
length2:6
},
data:top20
}]
});
return chart;
}
// --- Packet Type Pie Chart ---
async function fetchPacketTypeBreakdown(channel=null) {
@@ -234,8 +353,10 @@ async function fetchPacketTypeBreakdown(channel=null) {
const total = (data || []).reduce((sum,d)=>sum+(d.count??d.packet_count??0),0);
return {portnum: pn, count: total};
});
const allData = await fetchStats('hour',24,null,channel);
const totalAll = allData.reduce((sum,d)=>sum+(d.count??d.packet_count??0),0);
const results = await Promise.all(requests);
const trackedTotal = results.reduce((sum,d)=>sum+d.count,0);
const other = Math.max(totalAll - trackedTotal,0);
@@ -250,40 +371,102 @@ let chartHwModel, chartRole, chartChannel;
let chartPacketTypes;
async function init(){
// Channel selector
const channels = await fetchChannels();
const select = document.getElementById("channelSelect");
channels.forEach(ch=>{
const opt = document.createElement("option");
opt.value = ch;
opt.textContent = ch;
select.appendChild(opt);
});
// Daily all ports
const dailyAllData=await fetchStats('day',14);
updateTotalCount('total_daily_all',dailyAllData);
chartDailyAll=renderChart('chart_daily_all',dailyAllData,'line','#66bb6a');
// Daily port 1
const dailyPort1Data=await fetchStats('day',14,1);
updateTotalCount('total_daily_portnum_1',dailyPort1Data);
chartDailyPortnum1=renderChart('chart_daily_portnum_1',dailyPort1Data,'bar','#ff5722');
// Hourly all ports
const hourlyAllData=await fetchStats('hour',24);
updateTotalCount('total_hourly_all',hourlyAllData);
chartHourlyAll=renderChart('chart_hourly_all',hourlyAllData,'bar','#03dac6');
// Hourly per port
const portnums=[1,3,4,67,70,71];
const colors=['#ff5722','#2196f3','#9c27b0','#ffeb3b','#795548','#4caf50'];
const domIds=['chart_portnum_1','chart_portnum_3','chart_portnum_4','chart_portnum_67','chart_portnum_70','chart_portnum_71'];
const totalIds=['total_portnum_1','total_portnum_3','total_portnum_4','total_portnum_67','total_portnum_70','total_portnum_71'];
const allData=await Promise.all(portnums.map(pn=>fetchStats('hour',24,pn)));
for(let i=0;i<portnums.length;i++){
updateTotalCount(totalIds[i],allData[i]);
window['chartPortnum'+portnums[i]]=renderChart(domIds[i],allData[i],'bar',colors[i]);
}
// Nodes for breakdown + summary node count
const nodes=await fetchNodes();
chartHwModel=renderPieChart("chart_hw_model",processCountField(nodes,"hw_model"),"Hardware");
chartRole=renderPieChart("chart_role",processCountField(nodes,"role"),"Role");
chartChannel=renderPieChart("chart_channel",processCountField(nodes,"channel"),"Channel");
const summaryNodesEl = document.getElementById("summary_nodes");
if (summaryNodesEl) {
summaryNodesEl.textContent = nodes.length.toLocaleString();
}
// Packet types pie
const packetTypesData = await fetchPacketTypeBreakdown();
const formatted = packetTypesData
.filter(d=>d.count>0)
.map(d=>({
name: d.portnum==="other"
? "Other"
: (PORTNUM_LABELS[d.portnum]||`Port ${d.portnum}`),
value: d.count
}));
chartPacketTypes = renderPieChart("chart_packet_types",formatted,"Packet Types (Last 24h)");
// Total packet + total seen from /api/stats/count
try {
const countsRes = await fetch("/api/stats/count");
if (countsRes.ok) {
const countsJson = await countsRes.json();
const elPackets = document.getElementById("summary_packets");
const elSeen = document.getElementById("summary_seen");
if (elPackets) {
elPackets.textContent = (countsJson.total_packets || 0).toLocaleString();
}
if (elSeen) {
elSeen.textContent = (countsJson.total_seen || 0).toLocaleString();
}
}
} catch (err) {
console.error("Failed to load /api/stats/count:", err);
}
}
window.addEventListener('resize',()=>{
[
chartHourlyAll,
chartPortnum1,
chartPortnum3,
chartPortnum4,
chartPortnum67,
chartPortnum70,
chartPortnum71,
chartDailyAll,
chartDailyPortnum1,
chartHwModel,
chartRole,
chartChannel,
chartPacketTypes
].forEach(c=>c?.resize());
});
const modal=document.getElementById("chartModal");
const modalChartEl=document.getElementById("modalChart");
@@ -345,31 +528,51 @@ document.querySelectorAll(".export-btn").forEach(btn=>{
document.getElementById("channelSelect").addEventListener("change", async (e)=>{
const channel = e.target.value;
const packetTypesData = await fetchPacketTypeBreakdown(channel);
const formatted = packetTypesData
.filter(d=>d.count>0)
.map(d=>({
name: d.portnum==="other"
? "Other"
: (PORTNUM_LABELS[d.portnum]||`Port ${d.portnum}`),
value: d.count
}));
chartPacketTypes?.dispose();
chartPacketTypes = renderPieChart("chart_packet_types",formatted,"Packet Types (Last 24h)");
});
// Kick everything off
init();
// --- Load config and translations ---
async function loadConfigAndTranslations() {
let langCode = "en";
try {
const resConfig = await fetch("/api/config");
const cfg = await resConfig.json();
window.site_config = cfg;
langCode = cfg?.site?.language || "en";
} catch(err) {
console.error("Failed to load /api/config:", err);
window.site_config = { site: { language: "en" } };
}
try {
const resLang = await fetch(`/api/lang?lang=${langCode}&section=stats`);
window.statsTranslations = await resLang.json();
} catch(err) {
console.error("Stats translation load failed:", err);
window.statsTranslations = {};
}
}
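`loadConfigAndTranslations` resolves the page language from `/api/config` and falls back to `"en"` whenever the config or the `site.language` field is missing. The resolution logic — mirroring `cfg?.site?.language || "en"` — as a pure Python helper (name is hypothetical):

```python
def resolve_language(config, default="en"):
    """Mirror cfg?.site?.language || 'en': optional chaining with a default."""
    if not isinstance(config, dict):
        return default
    site = config.get("site") or {}
    return site.get("language") or default

print(resolve_language({"site": {"language": "es"}}))  # es
print(resolve_language({"site": {}}))                  # en
print(resolve_language(None))                          # en
```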
function applyTranslations() {
// Apply translations
const t = window.statsTranslations || {};
document.querySelectorAll("[data-translate-lang]").forEach(el=>{
const key = el.getAttribute("data-translate-lang");
if(t[key]) el.textContent = t[key];
});
}
// Load config and translations, then apply them
loadConfigAndTranslations().then(applyTranslations);
</script>
{% endblock %}

View File

@@ -1,80 +0,0 @@
{% extends "base.html" %}
{% block css %}
#packet_details {
height: 95vh;
overflow: auto;
}
.main-container, .container {
max-width: 600px;
margin: 0 auto;
text-align: center;
}
.card-section {
background-color: #272b2f;
border: 1px solid #474b4e;
padding: 15px 20px;
margin-bottom: 10px;
border-radius: 10px;
transition: background-color 0.2s ease;
}
.card-section:hover {
background-color: #2f3338;
}
.section-header {
font-size: 16px;
margin: 0;
font-weight: 500;
}
.section-value {
font-weight: 700;
color: #03dac6;
}
.percentage {
font-size: 12px;
color: #ffeb3b;
font-weight: 400;
}
.main-header {
font-size: 22px;
margin-bottom: 20px;
font-weight: 600;
}
{% endblock %}
{% block body %}
<div class="main-container">
<h2 class="main-header">Mesh Statistics</h2>
<!-- Section for Total Nodes -->
<div class="card-section">
<p class="section-header">
Total Active Nodes (24 hours): <br>
<span class="section-value">{{ "{:,}".format(total_nodes) }}</span>
</p>
</div>
<!-- Section for Total Packets -->
<div class="card-section">
<p class="section-header">
Total Packets (14 days):
<span class="section-value">{{ "{:,}".format(total_packets) }}</span>
</p>
</div>
<!-- Section for Total MQTT Reports -->
<div class="card-section">
<p class="section-header">
Total MQTT Reports (14 days):
<span class="section-value">{{ "{:,}".format(total_packets_seen) }}</span>
</p>
</div>
</div>
{% endblock %}

View File

@@ -2,287 +2,283 @@
{% block css %}
<style>
body { background-color: #121212; color: #ddd; }
@media (max-width: 768px) {
table th, table td {
padding: 8px;
}
}
.top-container {
max-width: 1100px;
margin: 25px auto;
padding: 0 15px;
}
h1 {
text-align: center;
color: #ddd;
margin-top: 20px;
margin-bottom: 20px;
}
.filter-bar {
display: flex;
flex-wrap: wrap;
align-items: center;
gap: 10px;
margin-bottom: 15px;
}
#bellCurveChart {
height: 400px;
width: 100%;
max-width: 100%;
margin-top: 30px;
}
.filter-bar select {
background-color: #1f2327;
border: 1px solid #444;
color: #ddd;
}
#stats {
text-align: center;
margin-top: 20px;
color: #fff;
}
table {
width: 100%;
border-collapse: collapse;
margin-top: 10px;
}
select {
margin: 10px auto;
display: block;
padding: 6px 10px;
border-radius: 6px;
background-color: #333;
color: #fff;
border: 1px solid #555;
}
table th, table td {
padding: 12px;
border: 1px solid #444;
text-align: left;
color: #ddd;
}
table th { background-color: #333; }
table tbody tr:nth-child(odd) { background-color: #272b2f; }
table tbody tr:nth-child(even) { background-color: #212529; }
table tbody tr:hover { background-color: #555; cursor: pointer; }
.node-link {
color: #9fd4ff;
text-decoration: none;
}
.node-link:hover { text-decoration: underline; }
.good-x { color: #81ff81; font-weight: bold; }
.ok-x { color: #e8e86d; font-weight: bold; }
.bad-x { color: #ff6464; font-weight: bold; }
</style>
{% endblock %}
{% block body %}
<h1 data-translate-lang="top_traffic_nodes">Top Traffic Nodes (last 24 hours)</h1>
<div id="stats">
<p data-translate-lang="chart_description_1">
This chart shows a bell curve (normal distribution) based on the total <strong>"Times Seen"</strong> values for all nodes. It helps visualize how frequently nodes are heard, relative to the average.
</p>
<p data-translate-lang="chart_description_2">
This "Times Seen" value is the closest that we can get to Mesh utilization by node.
</p>
<p>
<strong data-translate-lang="mean_label">Mean:</strong> <span id="mean"></span> -
<strong data-translate-lang="stddev_label">Standard Deviation:</strong> <span id="stdDev"></span>
</p>
</div>
<div class="top-container">
<div class="filter-bar">
<div>
<label for="channelFilter">Channel:</label>
<select id="channelFilter" class="form-select form-select-sm" style="width:auto;"></select>
</div>
<div>
<label for="nodeSearch">Search:</label>
<input id="nodeSearch" type="text" class="form-control form-control-sm"
placeholder="Search nodes..."
style="width:180px; display:inline-block;">
</div>
</div>
<!-- Chart -->
<div id="bellCurveChart"></div>
<!-- Table -->
{% if nodes %}
<div class="table-responsive">
<table id="nodesTable">
<thead>
<tr>
<th>Long Name</th>
<th>Short Name</th>
<th>Channel</th>
<th>Sent (24h)</th>
<th>Seen (24h)</th>
<th>Avg Gateways</th>
</tr>
</thead>
<tbody></tbody>
</table>
</div>
</div>
{% else %}
<p style="text-align: center;" data-translate-lang="no_nodes">No top traffic nodes available.</p>
{% endif %}
<script src="https://cdn.jsdelivr.net/npm/echarts@5.3.2/dist/echarts.min.js"></script>
<script>
const nodes = {{ nodes | tojson }};
let filteredNodes = [];
let allNodes = [];
// --- Language support ---
async function loadTopTranslations() {
const langCode = "{{ site_config.get('site', {}).get('language','en') }}";
try {
const res = await fetch(`/api/lang?lang=${langCode}&section=top`);
window.topTranslations = await res.json();
} catch(err) {
console.error("Top page translation load failed:", err);
window.topTranslations = {};
}
}
async function loadChannels() {
try {
const res = await fetch("/api/channels");
const data = await res.json();
const channels = data.channels || [];
const select = document.getElementById("channelFilter");
// Default LongFast first
if (channels.includes("LongFast")) {
const opt = document.createElement("option");
opt.value = "LongFast";
opt.textContent = "LongFast";
select.appendChild(opt);
}
for (const ch of channels) {
if (ch === "LongFast") continue;
const opt = document.createElement("option");
opt.value = ch;
opt.textContent = ch;
select.appendChild(opt);
}
select.addEventListener("change", renderTable);
} catch (err) {
console.error("Error loading channels:", err);
}
}
function applyTopTranslations() {
const t = window.topTranslations || {};
document.querySelectorAll("[data-translate-lang]").forEach(el=>{
const key = el.getAttribute("data-translate-lang");
if(t[key]) el.textContent = t[key];
});
}
async function loadNodes() {
try {
const res = await fetch("/api/nodes");
const data = await res.json();
allNodes = data.nodes || [];
} catch (err) {
console.error("Error loading nodes:", err);
}
}
// --- Chart & Table code ---
const chart = echarts.init(document.getElementById('bellCurveChart'));
const meanEl = document.getElementById('mean');
const stdEl = document.getElementById('stdDev');
async function fetchNodeStats(nodeId) {
try {
const url = `/api/stats/count?from_node=${nodeId}&period_type=day&length=1`;
const res = await fetch(url);
const data = await res.json();
const sent = data.total_packets || 0;
const seen = data.total_seen || 0;
const avg = seen / Math.max(sent, 1);
return {
sent,
seen,
avg: avg
};
} catch (err) {
console.error("Stat error", err);
return { sent: 0, seen: 0, avg: 0 };
}
}
// Normal distribution
function normalDistribution(x, mean, stdDev) {
return (1 / (stdDev * Math.sqrt(2 * Math.PI))) * Math.exp(-0.5 * Math.pow((x - mean) / stdDev, 2));
}
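`normalDistribution` is the standard Gaussian probability density function, f(x) = exp(-((x-μ)/σ)²/2) / (σ√(2π)). A Python sketch verifying the formula at its peak:

```python
import math

def normal_pdf(x, mean, std_dev):
    """Probability density of N(mean, std_dev**2) at x."""
    coeff = 1.0 / (std_dev * math.sqrt(2.0 * math.pi))
    return coeff * math.exp(-0.5 * ((x - mean) / std_dev) ** 2)

# At x == mean, the density is 1 / (std_dev * sqrt(2*pi)) — about 0.3989 for sigma = 1.
print(normal_pdf(0.0, 0.0, 1.0))
```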
function avgClass(v) {
if (v >= 10) return "good-x"; // Very strong node
if (v >= 2) return "ok-x"; // Normal node
return "bad-x"; // Weak node
}
async function renderTable() {
const tbody = document.querySelector("#nodesTable tbody");
tbody.innerHTML = "";
const channel = document.getElementById("channelFilter").value;
const searchText = document.getElementById("nodeSearch").value.trim().toLowerCase();
// Filter nodes by channel FIRST
let filtered = allNodes.filter(n => n.channel === channel);
// Then apply search
if (searchText !== "") {
filtered = filtered.filter(n =>
(n.long_name && n.long_name.toLowerCase().includes(searchText)) ||
(n.short_name && n.short_name.toLowerCase().includes(searchText)) ||
String(n.node_id).includes(searchText)
);
}
// --- Create placeholder rows ---
const rowRefs = filtered.map(n => {
const tr = document.createElement("tr");
tr.addEventListener("click", () => {
window.location.href = `/node/${n.node_id}`;
});
const tdLong = document.createElement("td");
const a = document.createElement("a");
a.href = `/node/${n.node_id}`;
a.textContent = n.long_name || n.node_id;
a.className = "node-link";
a.addEventListener("click", e => e.stopPropagation());
tdLong.appendChild(a);
const tdShort = document.createElement("td");
tdShort.textContent = n.short_name || "";
const tdChannel = document.createElement("td");
tdChannel.textContent = n.channel || "";
const tdSent = document.createElement("td");
tdSent.textContent = "Loading...";
const tdSeen = document.createElement("td");
tdSeen.textContent = "Loading...";
const tdAvg = document.createElement("td");
tdAvg.textContent = "Loading...";
tr.appendChild(tdLong);
tr.appendChild(tdShort);
tr.appendChild(tdChannel);
tr.appendChild(tdSent);
tr.appendChild(tdSeen);
tr.appendChild(tdAvg);
tbody.appendChild(tr);
return { node: n, tr, tdSent, tdSeen, tdAvg };
});
// --- Stats fetch ---
const statsList = await Promise.all(
rowRefs.map(ref => fetchNodeStats(ref.node.node_id))
);
// --- Update + cleanup empty nodes ---
let combined = rowRefs.map((ref, i) => {
const stats = statsList[i];
ref.tdSent.textContent = stats.sent;
ref.tdSeen.textContent = stats.seen;
ref.tdAvg.innerHTML = `<span class="${avgClass(stats.avg)}">${stats.avg.toFixed(1)}</span>`;
return {
tr: ref.tr,
sent: stats.sent,
seen: stats.seen
};
});
// Remove nodes with no traffic
combined = combined.filter(r => !(r.sent === 0 && r.seen === 0));
// Sort by traffic (seen)
combined.sort((a, b) => b.seen - a.seen);
// Rebuild table
tbody.innerHTML = "";
for (const r of combined) {
tbody.appendChild(r.tr);
}
}
// Update chart & stats
function updateStatsAndChart() {
const timesSeen = filteredNodes.map(n => n.total_times_seen);
const mean = timesSeen.reduce((sum,v)=>sum+v,0)/(timesSeen.length||1);
const stdDev = Math.sqrt(timesSeen.reduce((sum,v)=>sum+Math.pow(v-mean,2),0)/(timesSeen.length||1));
meanEl.textContent = mean.toFixed(2);
stdEl.textContent = stdDev.toFixed(2);
const min = Math.min(...timesSeen);
const max = Math.max(...timesSeen);
const step = (max - min) / 100;
const xData=[], yData=[];
for(let x=min;x<=max;x+=step){ xData.push(x); yData.push(normalDistribution(x,mean,stdDev)); }
chart.setOption({
animation:false,
tooltip:{ trigger:'axis' },
xAxis:{ name:'Total Times Seen', type:'value', min, max },
yAxis:{ name:'Probability Density', type:'value' },
series:[{ data:xData.map((x,i)=>[x,yData[i]]), type:'line', smooth:true, color:'blue', lineStyle:{ width:3 }}]
});
chart.resize();
}
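`updateStatsAndChart` computes the mean and a population standard deviation (sum of squared deviations divided by N, not N-1). The same computation with Python's `statistics` module, on hypothetical "times seen" values:

```python
import statistics

times_seen = [4, 8, 6, 2]
mean = statistics.fmean(times_seen)
std_dev = statistics.pstdev(times_seen)  # population form, matching the /N divisor above
print(mean, std_dev)  # 5.0 and sqrt(5) ~= 2.236
```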
(async () => {
await loadNodes();
await loadChannels();
document.getElementById("channelFilter").value = "LongFast";
// Sort table
function sortTable(n) {
const table = document.getElementById("trafficTable");
const rows = Array.from(table.rows).slice(1);
const header = table.rows[0].cells[n];
const isNumeric = !isNaN(rows[0].cells[n].innerText.replace('%',''));
let sortedRows = rows.sort((a,b)=>{
const valA = isNumeric ? parseFloat(a.cells[n].innerText.replace('%','')) : a.cells[n].innerText.toLowerCase();
const valB = isNumeric ? parseFloat(b.cells[n].innerText.replace('%','')) : b.cells[n].innerText.toLowerCase();
return valA > valB ? 1 : -1;
});
if(header.getAttribute('data-sort-direction')==='asc'){ sortedRows.reverse(); header.setAttribute('data-sort-direction','desc'); }
else header.setAttribute('data-sort-direction','asc');
const tbody = table.tBodies[0];
sortedRows.forEach(row=>tbody.appendChild(row));
}
document.getElementById("nodeSearch").addEventListener("input", renderTable);
// Initialize
await loadTopTranslations();
applyTopTranslations();
window.addEventListener('resize',()=>chart.resize());
renderTable();
})();
</script>
{% if timing_data %}
<!-- Performance Metrics Summary -->
<div style="background-color: #1a1d21; border: 1px solid #444; border-radius: 8px; padding: 15px; margin: 20px auto; max-width: 800px; color: #fff;">
<h3 style="margin-top: 0; color: #4CAF50;">⚡ Performance Metrics</h3>
<div style="display: grid; grid-template-columns: repeat(auto-fit, minmax(200px, 1fr)); gap: 15px;">
<div>
<strong>Database Query:</strong><br>
<span style="color: #FFD700; font-size: 1.2em;">{{ timing_data.db_query_ms }}ms</span>
</div>
<div>
<strong>Data Processing:</strong><br>
<span style="color: #FFD700; font-size: 1.2em;">{{ timing_data.processing_ms }}ms</span>
</div>
<div>
<strong>Total Time:</strong><br>
<span style="color: #FFD700; font-size: 1.2em;">{{ timing_data.total_ms }}ms</span>
</div>
<div>
<strong>Nodes Processed:</strong><br>
<span style="color: #4CAF50; font-size: 1.2em;">{{ timing_data.node_count }}</span>
</div>
<div>
<strong>Total Packets:</strong><br>
<span style="color: #4CAF50; font-size: 1.2em;">{{ "{:,}".format(timing_data.total_packets) }}</span>
</div>
<div>
<strong>Times Seen:</strong><br>
<span style="color: #4CAF50; font-size: 1.2em;">{{ "{:,}".format(timing_data.total_seen) }}</span>
</div>
</div>
<p style="margin-bottom: 0; margin-top: 10px; font-size: 0.9em; color: #888;">
📊 Use these metrics to measure performance before and after database index changes
</p>
</div>
{% endif %}
{% endblock %}

View File

@@ -1,94 +0,0 @@
{% block head %}
<script src="https://cdn.jsdelivr.net/npm/echarts/dist/echarts.min.js"></script>
{% endblock %}
{% block body %}
<div id="mynetwork" style="width: 100%; height: 800px;"></div>
<script type="text/javascript">
const chart = echarts.init(document.getElementById('mynetwork'));
const rawNodes = {{ chart_data['nodes'] | tojson }};
const rawEdges = {{ chart_data['edges'] | tojson }};
// Build DAG layout
const layers = {};
const nodeDepth = {};
// Organize nodes into layers by hop count
for (const edge of rawEdges) {
const { source, target } = edge;
if (!(source in nodeDepth)) nodeDepth[source] = 0;
const nextDepth = nodeDepth[source] + 1;
nodeDepth[target] = Math.max(nodeDepth[target] || 0, nextDepth);
}
for (const node of rawNodes) {
const depth = nodeDepth[node.name] || 0;
if (!(depth in layers)) layers[depth] = [];
layers[depth].push(node);
}
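The layering above assigns each node a depth equal to the longest edge-path reaching it, then groups nodes into layers by that depth. A Python sketch of the same single-pass logic (like the JS, it assumes edges arrive roughly in topological order; sample data is hypothetical):

```python
def layer_by_depth(edges, nodes):
    """Assign depths by longest incoming path, then bucket nodes into layers."""
    depth = {}
    for source, target in edges:
        depth.setdefault(source, 0)
        depth[target] = max(depth.get(target, 0), depth[source] + 1)
    layers = {}
    for node in nodes:
        layers.setdefault(depth.get(node, 0), []).append(node)
    return layers

edges = [("gw", "a"), ("a", "b"), ("gw", "b")]
print(layer_by_depth(edges, ["gw", "a", "b"]))  # {0: ['gw'], 1: ['a'], 2: ['b']}
```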
// Position nodes manually
const chartNodes = [];
const layerKeys = Object.keys(layers).sort((a, b) => +a - +b);
const verticalSpacing = 100;
const horizontalSpacing = 180;
layerKeys.forEach((depth, layerIndex) => {
const layer = layers[depth];
const y = layerIndex * verticalSpacing;
const xStart = -(layer.length - 1) * horizontalSpacing / 2;
layer.forEach((node, i) => {
chartNodes.push({
...node,
x: xStart + i * horizontalSpacing,
y: y,
itemStyle: {
color: '#dddddd',
borderColor: '#222',
borderWidth: 2,
},
label: {
show: true,
position: 'inside',
color: '#000',
fontSize: 12,
formatter: node.short_name || node.name,
},
});
});
});
const chartEdges = rawEdges.map(edge => ({
source: edge.source,
target: edge.target,
lineStyle: {
color: edge.originalColor || '#ccc',
width: 2,
type: 'solid',
},
}));
const option = {
backgroundColor: '#fff',
tooltip: {},
animation: false,
series: [{
type: 'graph',
layout: 'none',
coordinateSystem: null,
data: chartNodes,
links: chartEdges,
roam: true,
edgeSymbol: ['none', 'arrow'],
edgeSymbolSize: [0, 10],
lineStyle: {
curveness: 0,
},
}],
};
chart.setOption(option);
</script>
{% endblock %}

File diff suppressed because it is too large Load Diff

View File

@@ -0,0 +1 @@
"""Web submodule for MeshView API endpoints."""

meshview/web_api/api.py Normal file
View File

@@ -0,0 +1,692 @@
"""API endpoints for MeshView."""
import datetime
import json
import logging
import os
from aiohttp import web
from sqlalchemy import text
from meshtastic.protobuf.portnums_pb2 import PortNum
from meshview import database, decode_payload, store
from meshview.__version__ import __version__, _git_revision_short, get_version_info
from meshview.config import CONFIG
logger = logging.getLogger(__name__)
# Will be set by web.py during initialization
Packet = None
SEQ_REGEX = None
LANG_DIR = None
# Create dedicated route table for API endpoints
routes = web.RouteTableDef()
def init_api_module(packet_class, seq_regex, lang_dir):
"""Initialize API module with dependencies from main web module."""
global Packet, SEQ_REGEX, LANG_DIR
Packet = packet_class
SEQ_REGEX = seq_regex
LANG_DIR = lang_dir
@routes.get("/api/channels")
async def api_channels(request: web.Request):
period_type = request.query.get("period_type", "hour")
length = int(request.query.get("length", 24))
try:
channels = await store.get_channels_in_period(period_type, length)
return web.json_response({"channels": channels})
except Exception as e:
return web.json_response({"channels": [], "error": str(e)})
@routes.get("/api/nodes")
async def api_nodes(request):
try:
# Optional query parameters
role = request.query.get("role")
channel = request.query.get("channel")
hw_model = request.query.get("hw_model")
days_active = request.query.get("days_active")
if days_active:
try:
days_active = int(days_active)
except ValueError:
days_active = None
# Fetch nodes from database
nodes = await store.get_nodes(
role=role, channel=channel, hw_model=hw_model, days_active=days_active
)
# Prepare the JSON response
nodes_data = []
for n in nodes:
nodes_data.append(
{
"id": getattr(n, "id", None),
"node_id": n.node_id,
"long_name": n.long_name,
"short_name": n.short_name,
"hw_model": n.hw_model,
"firmware": n.firmware,
"role": n.role,
"last_lat": getattr(n, "last_lat", None),
"last_long": getattr(n, "last_long", None),
"channel": n.channel,
# "last_update": n.last_update.isoformat(),
"last_seen_us": n.last_seen_us,
}
)
return web.json_response({"nodes": nodes_data})
except Exception as e:
logger.error(f"Error in /api/nodes: {e}")
return web.json_response({"error": "Failed to fetch nodes"}, status=500)
@routes.get("/api/packets")
async def api_packets(request):
try:
# --- Parse query parameters ---
packet_id_str = request.query.get("packet_id")
limit_str = request.query.get("limit", "50")
since_str = request.query.get("since")
portnum_str = request.query.get("portnum")
contains = request.query.get("contains")
# NEW — explicit filters
from_node_id_str = request.query.get("from_node_id")
to_node_id_str = request.query.get("to_node_id")
node_id_str = request.query.get("node_id") # legacy: match either from/to
# --- If a packet_id is provided, return only that packet ---
if packet_id_str:
try:
packet_id = int(packet_id_str)
except ValueError:
return web.json_response({"error": "Invalid packet_id format"}, status=400)
packet = await store.get_packet(packet_id)
if not packet:
return web.json_response({"packets": []})
p = Packet.from_model(packet)
data = {
"id": p.id,
"from_node_id": p.from_node_id,
"to_node_id": p.to_node_id,
"portnum": int(p.portnum) if p.portnum is not None else None,
"payload": (p.payload or "").strip(),
"import_time_us": p.import_time_us,
"import_time": p.import_time.isoformat() if p.import_time else None,
"channel": getattr(p.from_node, "channel", ""),
"long_name": getattr(p.from_node, "long_name", ""),
}
return web.json_response({"packets": [data]})
# --- Parse limit ---
try:
limit = min(max(int(limit_str), 1), 100)
except ValueError:
limit = 50
# --- Parse since timestamp ---
since = None
if since_str:
try:
since = int(since_str)
except ValueError:
logger.warning(f"Invalid 'since' value (expected microseconds): {since_str}")
# --- Parse portnum ---
portnum = None
if portnum_str:
try:
portnum = int(portnum_str)
except ValueError:
logger.warning(f"Invalid portnum: {portnum_str}")
# --- Parse node filters ---
from_node_id = None
to_node_id = None
node_id = None # legacy: match either from/to
if from_node_id_str:
try:
from_node_id = int(from_node_id_str, 0)
except ValueError:
logger.warning(f"Invalid from_node_id: {from_node_id_str}")
if to_node_id_str:
try:
to_node_id = int(to_node_id_str, 0)
except ValueError:
logger.warning(f"Invalid to_node_id: {to_node_id_str}")
if node_id_str:
try:
node_id = int(node_id_str, 0)
except ValueError:
logger.warning(f"Invalid node_id: {node_id_str}")
# --- Fetch packets using explicit filters ---
packets = await store.get_packets(
from_node_id=from_node_id,
to_node_id=to_node_id,
node_id=node_id,
portnum=portnum,
after=since,
contains=contains,
limit=limit,
)
ui_packets = [Packet.from_model(p) for p in packets]
# --- Text message filtering ---
if portnum == PortNum.TEXT_MESSAGE_APP:
ui_packets = [p for p in ui_packets if p.payload and not SEQ_REGEX.fullmatch(p.payload)]
if contains:
ui_packets = [p for p in ui_packets if contains.lower() in p.payload.lower()]
# --- Sort descending by import_time_us ---
ui_packets.sort(
key=lambda p: (p.import_time_us is not None, p.import_time_us or 0), reverse=True
)
ui_packets = ui_packets[:limit]
# --- Build JSON output ---
packets_data = []
for p in ui_packets:
packet_dict = {
"id": p.id,
"import_time_us": p.import_time_us,
"import_time": p.import_time.isoformat() if p.import_time else None,
"channel": getattr(p.from_node, "channel", ""),
"from_node_id": p.from_node_id,
"to_node_id": p.to_node_id,
"portnum": int(p.portnum) if p.portnum is not None else None,
"long_name": getattr(p.from_node, "long_name", ""),
"payload": (p.payload or "").strip(),
}
reply_id = getattr(
getattr(getattr(p, "raw_mesh_packet", None), "decoded", None),
"reply_id",
None,
)
if reply_id:
packet_dict["reply_id"] = reply_id
packets_data.append(packet_dict)
# --- Latest import_time for incremental fetch ---
latest_import_time = None
if packets_data:
for p in packets_data:
if p.get("import_time_us") and p["import_time_us"] > 0:
latest_import_time = max(latest_import_time or 0, p["import_time_us"])
elif p.get("import_time") and latest_import_time is None:
try:
dt = datetime.datetime.fromisoformat(
p["import_time"].replace("Z", "+00:00")
)
latest_import_time = int(dt.timestamp() * 1_000_000)
except Exception:
pass
response = {"packets": packets_data}
if latest_import_time is not None:
response["latest_import_time"] = latest_import_time
return web.json_response(response)
except Exception as e:
logger.error(f"Error in /api/packets: {e}")
return web.json_response({"error": "Failed to fetch packets"}, status=500)
@routes.get("/api/stats")
async def api_stats(request):
"""
Enhanced stats endpoint:
- Supports global stats (existing behavior)
- Supports per-node stats using ?node=<node_id>
returning both sent AND seen counts in the specified period
"""
allowed_periods = {"hour", "day"}
period_type = request.query.get("period_type", "hour").lower()
if period_type not in allowed_periods:
return web.json_response(
{"error": f"Invalid period_type. Must be one of {allowed_periods}"},
status=400,
)
try:
length = int(request.query.get("length", 24))
except ValueError:
return web.json_response({"error": "length must be an integer"}, status=400)
    # Optional combined per-node stats (sent and seen)
node_str = request.query.get("node")
if node_str:
try:
node_id = int(node_str)
except ValueError:
return web.json_response({"error": "node must be an integer"}, status=400)
# Fetch sent packets
sent = await store.get_packet_stats(
period_type=period_type,
length=length,
from_node=node_id,
)
# Fetch seen packets
seen = await store.get_packet_stats(
period_type=period_type,
length=length,
to_node=node_id,
)
return web.json_response(
{
"node_id": node_id,
"period_type": period_type,
"length": length,
"sent": sent.get("total", 0),
"seen": seen.get("total", 0),
}
)
    # ---- Full stats mode ----
channel = request.query.get("channel")
def parse_int_param(name):
value = request.query.get(name)
if value is not None:
try:
return int(value)
except ValueError:
raise web.HTTPBadRequest(
text=json.dumps({"error": f"{name} must be an integer"}),
content_type="application/json",
) from None
return None
portnum = parse_int_param("portnum")
to_node = parse_int_param("to_node")
from_node = parse_int_param("from_node")
stats = await store.get_packet_stats(
period_type=period_type,
length=length,
channel=channel,
portnum=portnum,
to_node=to_node,
from_node=from_node,
)
return web.json_response(stats)
@routes.get("/api/stats/count")
async def api_stats_count(request):
"""
Returns packet and packet_seen totals.
Behavior:
• If no filters → total packets ever + total seen ever
• If filters → apply window/channel/from/to + packet_id
"""
# -------- Parse request parameters --------
packet_id_str = request.query.get("packet_id")
packet_id = None
if packet_id_str:
try:
packet_id = int(packet_id_str)
except ValueError:
return web.json_response({"error": "packet_id must be integer"}, status=400)
period_type = request.query.get("period_type")
length_str = request.query.get("length")
length = None
if length_str:
try:
length = int(length_str)
except ValueError:
return web.json_response({"error": "length must be integer"}, status=400)
channel = request.query.get("channel")
def parse_int(name):
value = request.query.get(name)
if value is None:
return None
try:
return int(value)
except ValueError:
raise web.HTTPBadRequest(
text=json.dumps({"error": f"{name} must be integer"}),
content_type="application/json",
) from None
from_node = parse_int("from_node")
to_node = parse_int("to_node")
# -------- Case 1: NO FILTERS → return global totals --------
no_filters = (
period_type is None
and length is None
and channel is None
and from_node is None
and to_node is None
and packet_id is None
)
if no_filters:
total_packets = await store.get_total_packet_count()
total_seen = await store.get_total_packet_seen_count()
return web.json_response({"total_packets": total_packets, "total_seen": total_seen})
# -------- Case 2: Apply filters → compute totals --------
total_packets = await store.get_total_packet_count(
period_type=period_type,
length=length,
channel=channel,
from_node=from_node,
to_node=to_node,
)
total_seen = await store.get_total_packet_seen_count(
packet_id=packet_id,
period_type=period_type,
length=length,
channel=channel,
from_node=from_node,
to_node=to_node,
)
return web.json_response({"total_packets": total_packets, "total_seen": total_seen})
@routes.get("/api/edges")
async def api_edges(request):
since = datetime.datetime.now() - datetime.timedelta(hours=48)
filter_type = request.query.get("type")
edges = {}
# Only build traceroute edges if requested
if filter_type in (None, "traceroute"):
async for tr in store.get_traceroutes(since):
try:
route = decode_payload.decode_payload(PortNum.TRACEROUTE_APP, tr.route)
except Exception as e:
logger.error(f"Error decoding Traceroute {tr.id}: {e}")
continue
path = [tr.packet.from_node_id] + list(route.route)
path.append(tr.packet.to_node_id if tr.done else tr.gateway_node_id)
for a, b in zip(path, path[1:], strict=False):
edges[(a, b)] = "traceroute"
# Only build neighbor edges if requested
if filter_type in (None, "neighbor"):
packets = await store.get_packets(portnum=PortNum.NEIGHBORINFO_APP, after=since)
for packet in packets:
try:
_, neighbor_info = decode_payload.decode(packet)
for node in neighbor_info.neighbors:
edges.setdefault((node.node_id, packet.from_node_id), "neighbor")
except Exception as e:
logger.error(
f"Error decoding NeighborInfo packet {getattr(packet, 'id', '?')}: {e}"
)
# Convert edges dict to list format for JSON response
edges_list = [
{"from": frm, "to": to, "type": edge_type} for (frm, to), edge_type in edges.items()
]
return web.json_response({"edges": edges_list})
@routes.get("/api/config")
async def api_config(request):
try:
# ------------------ Helpers ------------------
def get(section, key, default=None):
"""Safe getter for both dict and ConfigParser."""
if isinstance(section, dict):
return section.get(key, default)
return section.get(key, fallback=default)
def get_bool(section, key, default=False):
val = get(section, key, default)
if isinstance(val, bool):
return "true" if val else "false"
if isinstance(val, str):
return "true" if val.lower() in ("1", "true", "yes", "on") else "false"
return "true" if bool(val) else "false"
def get_float(section, key, default=0.0):
try:
return float(get(section, key, default))
except Exception:
return float(default)
def get_int(section, key, default=0):
try:
return int(get(section, key, default))
except Exception:
return default
def get_str(section, key, default=""):
val = get(section, key, default)
return str(val) if val is not None else str(default)
# ------------------ SITE ------------------
site = CONFIG.get("site", {})
safe_site = {
"domain": get_str(site, "domain", ""),
"language": get_str(site, "language", "en"),
"title": get_str(site, "title", ""),
"message": get_str(site, "message", ""),
"starting": get_str(site, "starting", "/chat"),
"nodes": get_bool(site, "nodes", True),
"chat": get_bool(site, "chat", True),
"everything": get_bool(site, "everything", True),
"graphs": get_bool(site, "graphs", True),
"stats": get_bool(site, "stats", True),
"net": get_bool(site, "net", True),
"map": get_bool(site, "map", True),
"top": get_bool(site, "top", True),
"map_top_left_lat": get_float(site, "map_top_left_lat", 39.0),
"map_top_left_lon": get_float(site, "map_top_left_lon", -123.0),
"map_bottom_right_lat": get_float(site, "map_bottom_right_lat", 36.0),
"map_bottom_right_lon": get_float(site, "map_bottom_right_lon", -121.0),
"map_interval": get_int(site, "map_interval", 3),
"firehose_interval": get_int(site, "firehose_interval", 3),
"weekly_net_message": get_str(
site, "weekly_net_message", "Weekly Mesh check-in message."
),
"net_tag": get_str(site, "net_tag", "#BayMeshNet"),
"version": str(__version__),
}
# ------------------ MQTT ------------------
mqtt = CONFIG.get("mqtt", {})
topics_raw = get(mqtt, "topics", [])
if isinstance(topics_raw, str):
try:
topics = json.loads(topics_raw)
except Exception:
topics = [topics_raw]
elif isinstance(topics_raw, list):
topics = topics_raw
else:
topics = []
safe_mqtt = {
"server": get_str(mqtt, "server", ""),
"topics": topics,
}
# ------------------ CLEANUP ------------------
cleanup = CONFIG.get("cleanup", {})
safe_cleanup = {
"enabled": get_bool(cleanup, "enabled", False),
"days_to_keep": get_str(cleanup, "days_to_keep", "14"),
"hour": get_str(cleanup, "hour", "2"),
"minute": get_str(cleanup, "minute", "0"),
"vacuum": get_bool(cleanup, "vacuum", False),
}
safe_config = {
"site": safe_site,
"mqtt": safe_mqtt,
"cleanup": safe_cleanup,
}
return web.json_response(safe_config)
except Exception as e:
return web.json_response({"error": str(e)}, status=500)
@routes.get("/api/lang")
async def api_lang(request):
# Language from ?lang=xx, fallback to config, then to "en"
lang_code = request.query.get("lang") or CONFIG.get("site", {}).get("language", "en")
section = request.query.get("section")
lang_file = os.path.join(LANG_DIR, f"{lang_code}.json")
if not os.path.exists(lang_file):
lang_file = os.path.join(LANG_DIR, "en.json")
# Load JSON translations
with open(lang_file, encoding="utf-8") as f:
translations = json.load(f)
if section:
section = section.lower()
if section in translations:
return web.json_response(translations[section])
else:
return web.json_response(
{"error": f"Section '{section}' not found in {lang_code}"}, status=404
)
# if no section requested → return full translation file
return web.json_response(translations)
@routes.get("/health")
async def health_check(request):
"""Health check endpoint for monitoring and load balancers."""
health_status = {
"status": "healthy",
"timestamp": datetime.datetime.now(datetime.UTC).isoformat(),
"version": __version__,
"git_revision": _git_revision_short,
}
# Check database connectivity
try:
async with database.async_session() as session:
await session.execute(text("SELECT 1"))
health_status["database"] = "connected"
except Exception as e:
logger.error(f"Database health check failed: {e}")
health_status["database"] = "disconnected"
health_status["status"] = "unhealthy"
return web.json_response(health_status, status=503)
# Get database file size
try:
db_url = CONFIG.get("database", {}).get("connection_string", "")
# Extract file path from SQLite connection string (e.g., "sqlite+aiosqlite:///packets.db")
if "sqlite" in db_url.lower():
db_path = db_url.split("///")[-1].split("?")[0]
if os.path.exists(db_path):
db_size_bytes = os.path.getsize(db_path)
# Convert to human-readable format
if db_size_bytes < 1024:
health_status["database_size"] = f"{db_size_bytes} B"
elif db_size_bytes < 1024 * 1024:
health_status["database_size"] = f"{db_size_bytes / 1024:.2f} KB"
elif db_size_bytes < 1024 * 1024 * 1024:
health_status["database_size"] = f"{db_size_bytes / (1024 * 1024):.2f} MB"
else:
health_status["database_size"] = (
f"{db_size_bytes / (1024 * 1024 * 1024):.2f} GB"
)
health_status["database_size_bytes"] = db_size_bytes
except Exception as e:
logger.warning(f"Failed to get database size: {e}")
# Don't fail health check if we can't get size
return web.json_response(health_status)
@routes.get("/version")
async def version_endpoint(request):
"""Return version information including semver and git revision."""
try:
version_info = get_version_info()
return web.json_response(version_info)
except Exception as e:
logger.error(f"Error in /version: {e}")
return web.json_response({"error": "Failed to fetch version info"}, status=500)
@routes.get("/api/packets_seen/{packet_id}")
async def api_packets_seen(request):
try:
# --- Validate packet_id ---
try:
packet_id = int(request.match_info["packet_id"])
except (KeyError, ValueError):
return web.json_response(
{"error": "Invalid or missing packet_id"},
status=400,
)
        # --- Fetch seen rows via the store helper ---
        rows = await store.get_packets_seen(packet_id)
        items = []
        for row in rows:
items.append(
{
"packet_id": row.packet_id,
"node_id": row.node_id,
"rx_time": row.rx_time,
"hop_limit": row.hop_limit,
"hop_start": row.hop_start,
"channel": row.channel,
"rx_snr": row.rx_snr,
"rx_rssi": row.rx_rssi,
"topic": row.topic,
"import_time": (row.import_time.isoformat() if row.import_time else None),
"import_time_us": row.import_time_us,
}
)
return web.json_response({"seen": items})
except Exception:
logger.exception("Error in /api/packets_seen")
return web.json_response(
{"error": "Internal server error"},
status=500,
)
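The byte-size formatting cascade in the /health handler above could be factored into a single helper. A minimal sketch of the same thresholds (the name `human_size` is my own, not from the codebase):

```python
def human_size(n: int) -> str:
    """Format a byte count with the same 1024-based thresholds as /health."""
    for unit, factor in (("GB", 1024**3), ("MB", 1024**2), ("KB", 1024)):
        if n >= factor:
            return f"{n / factor:.2f} {unit}"
    return f"{n} B"
```

This keeps the handler flat and makes the formatting independently testable.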


@@ -59,14 +59,11 @@ def signal_handler(sig, frame):
# Run python in subprocess
-def run_script(script_name, pid_file, *args):
+def run_script(python_executable, script_name, pid_file, *args):
process = None
try:
-        # Path to the Python interpreter inside the virtual environment
-        python_executable = './env/bin/python'
        # Combine the script name and arguments
-        command = [python_executable, script_name] + list(args)
+        command = [python_executable, '-u', script_name] + list(args)
# Run the subprocess (output goes directly to console for real-time viewing)
process = subprocess.Popen(command)
@@ -101,11 +98,13 @@ def main():
# Add --config runtime argument
parser.add_argument('--config', help="Path to the configuration file.", default='config.ini')
parser.add_argument('--pid_dir', help="PID files path.", default='.')
+    parser.add_argument('--py_exec', help="Path to the Python executable.", default=sys.executable)
args = parser.parse_args()
# PID file paths
-    db_pid_file = 'meshview-db.pid'
-    web_pid_file = 'meshview-web.pid'
+    db_pid_file = os.path.join(args.pid_dir, 'meshview-db.pid')
+    web_pid_file = os.path.join(args.pid_dir, 'meshview-web.pid')
# Track PID files globally for cleanup
pid_files.append(db_pid_file)
@@ -113,12 +112,12 @@ def main():
# Database Thread
dbthrd = threading.Thread(
-        target=run_script, args=('startdb.py', db_pid_file, '--config', args.config)
+        target=run_script, args=(args.py_exec, 'startdb.py', db_pid_file, '--config', args.config)
)
# Web server thread
webthrd = threading.Thread(
-        target=run_script, args=('main.py', web_pid_file, '--config', args.config)
+        target=run_script, args=(args.py_exec, 'main.py', web_pid_file, '--config', args.config)
)
# Start Meshview subprocess threads
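The effect of this change can be sketched in isolation: the launcher now builds each child command from a caller-supplied interpreter (defaulting to `sys.executable`), passes `-u` for unbuffered output, and places PID files under `--pid_dir`. A minimal sketch under those assumptions (the helper names are mine):

```python
import os
import sys

def build_command(python_executable: str, script_name: str, *args: str) -> list[str]:
    # '-u' keeps the child's stdout/stderr unbuffered for real-time log viewing
    return [python_executable, "-u", script_name, *args]

def pid_file_path(pid_dir: str, name: str) -> str:
    # PID files are resolved relative to --pid_dir instead of the CWD
    return os.path.join(pid_dir, name)

cmd = build_command(sys.executable, "startdb.py", "--config", "config.ini")
# e.g. ['/usr/bin/python3', '-u', 'startdb.py', '--config', 'config.ini']
```

Using `sys.executable` rather than a hardcoded `./env/bin/python` makes the launcher work from any virtual environment.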


@@ -1,3 +1,49 @@
[project]
name = "meshview"
version = "3.0.0"
description = "Real-time monitoring and diagnostic tool for the Meshtastic mesh network"
readme = "README.md"
requires-python = ">=3.10"
dependencies = [
# Core async + networking
"aiohttp>=3.11.12,<4.0.0",
"aiohttp-sse",
"aiodns>=3.2.0,<4.0.0",
"aiomqtt>=2.3.0,<3.0.0",
"asyncpg>=0.30.0,<0.31.0",
"aiosqlite>=0.21.0,<0.22.0",
# Database + ORM
"sqlalchemy[asyncio]>=2.0.38,<3.0.0",
"alembic>=1.14.0,<2.0.0",
# Serialization / security
"protobuf>=5.29.3,<6.0.0",
"cryptography>=44.0.1,<45.0.0",
# Templates
"Jinja2>=3.1.5,<4.0.0",
"MarkupSafe>=3.0.2,<4.0.0",
# Graphs / diagrams
"pydot>=3.0.4,<4.0.0",
]
[project.optional-dependencies]
dev = [
# Data science stack
"numpy>=2.2.3,<3.0.0",
"pandas>=2.2.3,<3.0.0",
"matplotlib>=3.10.0,<4.0.0",
"seaborn>=0.13.2,<1.0.0",
"plotly>=6.0.0,<7.0.0",
# Image support
"pillow>=11.1.0,<12.0.0",
# Debugging / profiling
"psutil>=7.0.0,<8.0.0",
"objgraph>=3.6.2,<4.0.0",
# Testing
"pytest>=8.3.4,<9.0.0",
"pytest-aiohttp>=1.0.5,<2.0.0",
"pytest-asyncio>=0.24.0,<1.0.0",
]
[tool.ruff]
# Linting
target-version = "py313"


@@ -12,6 +12,7 @@ aiosqlite~=0.21.0
# Database + ORM
sqlalchemy[asyncio]~=2.0.38
+alembic~=1.14.0
# Serialization / security
protobuf~=5.29.3
@@ -42,3 +43,8 @@ pillow~=11.1.0
# Debugging / profiling
psutil~=7.0.0
objgraph~=3.6.2
+# Testing
+pytest~=8.3.4
+pytest-aiohttp~=1.0.5
+pytest-asyncio~=0.24.0


@@ -99,6 +99,15 @@ minute = 00
# Run VACUUM after cleanup
vacuum = False
+# Enable database backups (independent of cleanup)
+backup_enabled = False
+# Directory to store database backups (relative or absolute path)
+backup_dir = ./backups
+# Time to run daily backup (24-hour format)
+# If not specified, uses cleanup hour/minute
+backup_hour = 2
+backup_minute = 00
# -------------------------
# Logging Configuration
@@ -109,3 +118,5 @@ vacuum = False
# Application logs (errors, startup messages, etc.) are unaffected
# Set to True to enable, False to disable (default: False)
access_log = False
+# Database cleanup logfile
+db_cleanup_logfile = dbcleanup.log
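Putting the new keys in context, a `[cleanup]` section that enables both retention cleanup and daily backups might look like this (values are illustrative, not defaults):

```ini
[cleanup]
enabled = True
days_to_keep = 14
hour = 2
minute = 00
vacuum = False
backup_enabled = True
backup_dir = ./backups
backup_hour = 2
backup_minute = 00
```

When `backup_hour`/`backup_minute` match the cleanup time, the cleanup task waits briefly so the backup captures the pre-cleanup database.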

setup-dev.sh Executable file

@@ -0,0 +1,84 @@
#!/bin/bash
#
# setup-dev.sh
#
# Development environment setup script for MeshView
# This script sets up the Python virtual environment and installs development tools
set -e
echo "Setting up MeshView development environment..."
echo ""
# Check if uv is installed
if ! command -v uv &> /dev/null; then
echo "Error: 'uv' is not installed."
echo "Install it with: curl -LsSf https://astral.sh/uv/install.sh | sh"
exit 1
fi
# Create virtual environment if it doesn't exist
if [ ! -d "env" ]; then
echo "Creating Python virtual environment with uv..."
uv venv env
echo "✓ Virtual environment created"
else
echo "✓ Virtual environment already exists"
fi
# Install requirements
echo ""
echo "Installing requirements..."
uv pip install -r requirements.txt
echo "✓ Requirements installed"
# Install development tools
echo ""
echo "Installing development tools..."
uv pip install pre-commit pytest pytest-asyncio pytest-aiohttp
echo "✓ Development tools installed"
# Install pre-commit hooks
echo ""
echo "Installing pre-commit hooks..."
./env/bin/pre-commit install
echo "✓ Pre-commit hooks installed"
# Check for graphviz
echo ""
if command -v dot &> /dev/null; then
echo "✓ graphviz is installed"
else
echo "⚠ Warning: graphviz is not installed"
echo " Install it with:"
echo " macOS: brew install graphviz"
echo " Debian: sudo apt-get install graphviz"
fi
# Create config.ini if it doesn't exist
echo ""
if [ ! -f "config.ini" ]; then
echo "Creating config.ini from sample..."
cp sample.config.ini config.ini
echo "✓ config.ini created"
echo " Edit config.ini to configure your MQTT and site settings"
else
echo "✓ config.ini already exists"
fi
echo ""
echo "=========================================="
echo "Development environment setup complete!"
echo "=========================================="
echo ""
echo "Next steps:"
echo " 1. Edit config.ini with your MQTT settings"
echo " 2. Run: ./env/bin/python mvrun.py"
echo " 3. Open: http://localhost:8081"
echo ""
echo "Pre-commit hooks are now active:"
echo " - Ruff will auto-format and fix issues before each commit"
echo " - If files are changed, you'll need to git add and commit again"
echo ""
echo "Run tests with: ./env/bin/pytest tests/"
echo ""


@@ -1,19 +1,32 @@
import asyncio
import datetime
+import gzip
import json
import logging
+import shutil
+from pathlib import Path
from sqlalchemy import delete
-from meshview import models, mqtt_database, mqtt_reader, mqtt_store
+from meshview import migrations, models, mqtt_database, mqtt_reader, mqtt_store
from meshview.config import CONFIG
# -------------------------
# Basic logging configuration
# -------------------------
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s %(filename)s:%(lineno)d [pid:%(process)d] %(levelname)s - %(message)s",
datefmt="%Y-%m-%d %H:%M:%S",
)
# -------------------------
# Logging for cleanup
# -------------------------
cleanup_logger = logging.getLogger("dbcleanup")
cleanup_logger.setLevel(logging.INFO)
-file_handler = logging.FileHandler("dbcleanup.log")
+cleanup_logfile = CONFIG.get("logging", {}).get("db_cleanup_logfile", "dbcleanup.log")
+file_handler = logging.FileHandler(cleanup_logfile)
file_handler.setLevel(logging.INFO)
formatter = logging.Formatter('%(asctime)s [%(levelname)s] %(message)s')
file_handler.setFormatter(formatter)
@@ -40,11 +53,91 @@ def get_int(config, section, key, default=0):
db_lock = asyncio.Lock()
# -------------------------
# Database backup function
# -------------------------
async def backup_database(database_url: str, backup_dir: str = ".") -> None:
"""
Create a compressed backup of the database file.
Args:
database_url: SQLAlchemy connection string
backup_dir: Directory to store backups (default: current directory)
"""
try:
# Extract database file path from connection string
# Format: sqlite+aiosqlite:///path/to/db.db
if not database_url.startswith("sqlite"):
cleanup_logger.warning("Backup only supported for SQLite databases")
return
db_path = database_url.split("///", 1)[1] if "///" in database_url else None
if not db_path:
cleanup_logger.error("Could not extract database path from connection string")
return
db_file = Path(db_path)
if not db_file.exists():
cleanup_logger.error(f"Database file not found: {db_file}")
return
# Create backup directory if it doesn't exist
backup_path = Path(backup_dir)
backup_path.mkdir(parents=True, exist_ok=True)
# Generate backup filename with timestamp
timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
backup_filename = f"{db_file.stem}_backup_{timestamp}.db.gz"
backup_file = backup_path / backup_filename
cleanup_logger.info(f"Creating backup: {backup_file}")
# Copy and compress the database file
with open(db_file, 'rb') as f_in:
with gzip.open(backup_file, 'wb', compresslevel=9) as f_out:
shutil.copyfileobj(f_in, f_out)
# Get file sizes for logging
original_size = db_file.stat().st_size / (1024 * 1024) # MB
compressed_size = backup_file.stat().st_size / (1024 * 1024) # MB
compression_ratio = (1 - compressed_size / original_size) * 100 if original_size > 0 else 0
cleanup_logger.info(
f"Backup created successfully: {backup_file.name} "
f"({original_size:.2f} MB -> {compressed_size:.2f} MB, "
f"{compression_ratio:.1f}% compression)"
)
except Exception as e:
cleanup_logger.error(f"Error creating database backup: {e}")
# -------------------------
# Database backup scheduler
# -------------------------
async def daily_backup_at(hour: int = 2, minute: int = 0, backup_dir: str = "."):
while True:
now = datetime.datetime.now()
next_run = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
if next_run <= now:
next_run += datetime.timedelta(days=1)
delay = (next_run - now).total_seconds()
cleanup_logger.info(f"Next backup scheduled at {next_run}")
await asyncio.sleep(delay)
database_url = CONFIG["database"]["connection_string"]
await backup_database(database_url, backup_dir)
# -------------------------
# Database cleanup using ORM
# -------------------------
async def daily_cleanup_at(
-    hour: int = 2, minute: int = 0, days_to_keep: int = 14, vacuum_db: bool = True
+    hour: int = 2,
+    minute: int = 0,
+    days_to_keep: int = 14,
+    vacuum_db: bool = True,
+    wait_for_backup: bool = False,
):
while True:
now = datetime.datetime.now()
@@ -55,6 +148,11 @@ async def daily_cleanup_at(
cleanup_logger.info(f"Next cleanup scheduled at {next_run}")
await asyncio.sleep(delay)
# If backup is enabled, wait a bit to let backup complete first
if wait_for_backup:
cleanup_logger.info("Waiting 60 seconds for backup to complete...")
await asyncio.sleep(60)
# Local-time cutoff as string for SQLite DATETIME comparison
cutoff = (datetime.datetime.now() - datetime.timedelta(days=days_to_keep)).strftime(
"%Y-%m-%d %H:%M:%S"
@@ -134,9 +232,39 @@ async def load_database_from_mqtt(
# Main function
# -------------------------
async def main():
logger = logging.getLogger(__name__)
# Initialize database
-    mqtt_database.init_database(CONFIG["database"]["connection_string"])
-    await mqtt_database.create_tables()
+    database_url = CONFIG["database"]["connection_string"]
+    mqtt_database.init_database(database_url)
# Create migration status table
await migrations.create_migration_status_table(mqtt_database.engine)
# Set migration in progress flag
await migrations.set_migration_in_progress(mqtt_database.engine, True)
logger.info("Migration status set to 'in progress'")
try:
# Check if migrations are needed before running them
logger.info("Checking for pending database migrations...")
if await migrations.is_database_up_to_date(mqtt_database.engine, database_url):
logger.info("Database schema is already up to date, skipping migrations")
else:
logger.info("Database schema needs updating, running migrations...")
migrations.run_migrations(database_url)
logger.info("Database migrations completed")
# Create tables if needed (for backwards compatibility)
logger.info("Creating database tables...")
await mqtt_database.create_tables()
logger.info("Database tables created")
finally:
# Clear migration in progress flag
logger.info("Clearing migration status...")
await migrations.set_migration_in_progress(mqtt_database.engine, False)
logger.info("Migration status cleared - database ready")
mqtt_user = CONFIG["mqtt"].get("username") or None
mqtt_passwd = CONFIG["mqtt"].get("password") or None
@@ -148,6 +276,21 @@ async def main():
cleanup_hour = get_int(CONFIG, "cleanup", "hour", 2)
cleanup_minute = get_int(CONFIG, "cleanup", "minute", 0)
backup_enabled = get_bool(CONFIG, "cleanup", "backup_enabled", False)
backup_dir = CONFIG.get("cleanup", {}).get("backup_dir", "./backups")
backup_hour = get_int(CONFIG, "cleanup", "backup_hour", cleanup_hour)
backup_minute = get_int(CONFIG, "cleanup", "backup_minute", cleanup_minute)
logger.info(f"Starting MQTT ingestion from {CONFIG['mqtt']['server']}:{CONFIG['mqtt']['port']}")
if cleanup_enabled:
logger.info(
f"Daily cleanup enabled: keeping {cleanup_days} days of data at {cleanup_hour:02d}:{cleanup_minute:02d}"
)
if backup_enabled:
logger.info(
f"Daily backups enabled: storing in {backup_dir} at {backup_hour:02d}:{backup_minute:02d}"
)
async with asyncio.TaskGroup() as tg:
tg.create_task(
load_database_from_mqtt(
@@ -159,10 +302,25 @@ async def main():
)
)
+        # Start backup task if enabled
+        if backup_enabled:
+            tg.create_task(daily_backup_at(backup_hour, backup_minute, backup_dir))
+        # Start cleanup task if enabled (waits for backup if both run at same time)
if cleanup_enabled:
-            tg.create_task(daily_cleanup_at(cleanup_hour, cleanup_minute, cleanup_days, vacuum_db))
-        else:
-            cleanup_logger.info("Daily cleanup is disabled by configuration.")
+            wait_for_backup = (
+                backup_enabled
+                and (backup_hour == cleanup_hour)
+                and (backup_minute == cleanup_minute)
+            )
+            tg.create_task(
+                daily_cleanup_at(
+                    cleanup_hour, cleanup_minute, cleanup_days, vacuum_db, wait_for_backup
+                )
+            )
+        if not cleanup_enabled and not backup_enabled:
+            cleanup_logger.info("Daily cleanup and backups are both disabled by configuration.")
# -------------------------