forked from iarv/meshcore-stats
* test: add comprehensive pytest test suite with 95% coverage

  Add full unit and integration test coverage for the meshcore-stats project:
  - 1020 tests covering all modules (db, charts, html, reports, client, etc.)
  - 95.95% code coverage with pytest-cov (95% threshold enforced)
  - GitHub Actions CI workflow for automated testing on push/PR
  - Proper mocking of external dependencies (meshcore, serial, filesystem)
  - SVG snapshot infrastructure for chart regression testing
  - Integration tests for collection and rendering pipelines

  Test organization:
  - tests/charts/: Chart rendering and statistics
  - tests/client/: MeshCore client and connection handling
  - tests/config/: Environment and configuration parsing
  - tests/database/: SQLite operations and migrations
  - tests/html/: HTML generation and Jinja templates
  - tests/reports/: Report generation and formatting
  - tests/retry/: Circuit breaker and retry logic
  - tests/unit/: Pure unit tests for utilities
  - tests/integration/: End-to-end pipeline tests

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* chore: add test-engineer agent configuration

  Add project-local test-engineer agent for pytest test development,
  coverage analysis, and test review tasks.
  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* docs: comprehensive test suite review with 956 tests analyzed

  Conducted thorough review of all 956 test cases across 47 test files:
  - Unit Tests: 338 tests (battery, metrics, log, telemetry, env, charts, html, reports, formatters)
  - Config Tests: 53 tests (env loading, config file parsing)
  - Database Tests: 115 tests (init, insert, queries, migrations, maintenance, validation)
  - Retry Tests: 59 tests (circuit breaker, async retries, factory)
  - Charts Tests: 76 tests (transforms, statistics, timeseries, rendering, I/O)
  - HTML Tests: 81 tests (site generation, Jinja2, metrics builders, reports index)
  - Reports Tests: 149 tests (location, JSON/TXT formatting, aggregation, counter totals)
  - Client Tests: 63 tests (contacts, connection, meshcore availability, commands)
  - Integration Tests: 22 tests (reports, collection, rendering pipelines)

  Results:
  - Overall pass rate: 99.7% (953/956)
  - 3 tests marked for improvement (empty test bodies in client tests)
  - 0 tests requiring fixes

  Key findings documented in test_review/tests.md, including quality observations,
  F.I.R.S.T. principle adherence, and recommendations.
  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* test: implement snapshot testing for charts and reports

  Add comprehensive snapshot testing infrastructure.

  SVG chart snapshots:
  - Deterministic fixtures with fixed timestamps (2024-01-15 12:00:00)
  - Tests for gauge/counter metrics in light/dark themes
  - Empty chart and single-point edge cases
  - Extended normalize_svg_for_snapshot_full() for reproducible comparisons

  TXT report snapshots:
  - Monthly/yearly report snapshots for repeater and companion
  - Empty report handling tests
  - Tests in tests/reports/test_snapshots.py

  Infrastructure:
  - tests/snapshots/conftest.py with shared fixtures
  - UPDATE_SNAPSHOTS=1 environment variable for regeneration
  - scripts/generate_snapshots.py for batch snapshot generation

  Run `UPDATE_SNAPSHOTS=1 pytest tests/charts/test_chart_render.py::TestSvgSnapshots`
  to generate initial snapshots.

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* test: fix SVG normalization and generate initial snapshots

  Fix normalize_svg_for_snapshot() to handle:
  - clipPath IDs like id="p47c77a2a6e"
  - url(#p...) references
  - xlink:href="#p..." references
  - <dc:date> timestamps

  Generated initial snapshot files:
  - 7 SVG chart snapshots (gauge, counter, empty, single-point in light/dark)
  - 6 TXT report snapshots (monthly/yearly for repeater/companion + empty)

  All 13 snapshot tests now pass.
  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* test: fix SVG normalization to preserve axis rendering

  The SVG normalization was replacing all matplotlib-generated IDs with the
  same value, causing duplicate IDs that broke SVG rendering:
  - Font glyphs, clipPaths, and tick marks all got id="normalized"
  - References couldn't resolve to the correct elements
  - X and Y axes failed to render in normalized snapshots

  Fix uses type-specific prefixes with sequential numbering:
  - glyph_N for font glyphs (DejaVuSans-XX patterns)
  - clip_N for clipPath definitions (p[0-9a-f]{8,} patterns)
  - tick_N for tick marks (m[0-9a-f]{8,} patterns)

  This ensures all IDs remain unique while still being deterministic for
  snapshot comparison.

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* chore: add coverage and pytest artifacts to gitignore

  Add .coverage, .coverage.*, htmlcov/, and .pytest_cache/ to prevent test
  artifacts from being committed.

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* style: fix all ruff lint errors across codebase

  - Sort and organize imports (I001)
  - Use modern type annotations (X | Y instead of Union, collections.abc)
  - Remove unused imports (F401)
  - Combine nested if statements (SIM102)
  - Use ternary operators where appropriate (SIM108)
  - Combine nested with statements (SIM117)
  - Use contextlib.suppress instead of try-except-pass (SIM105)
  - Add noqa comments for intentional SIM115 violations (file locks)
  - Add TYPE_CHECKING import for forward references
  - Fix exception chaining (B904)

  All 1033 tests pass.
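The type-prefixed ID renaming described in that fix can be sketched roughly as follows. The function name and exact regex patterns are assumptions inferred from the commit text (glyph/clip/tick patterns), not the project's actual implementation:

```python
import re


def normalize_svg_ids(svg: str) -> str:
    """Replace matplotlib-generated IDs with deterministic, type-specific
    names (glyph_N, clip_N, tick_N).

    Every distinct original ID maps to a unique new name, so references
    such as url(#...) and xlink:href="#..." still resolve, while repeated
    renders of the same chart normalize to identical output.
    """
    mapping: dict[str, str] = {}
    counters = {"glyph": 0, "clip": 0, "tick": 0}

    def rename(match: re.Match[str]) -> str:
        old = match.group(0)
        if old not in mapping:
            # Classify the ID by its matplotlib naming convention.
            if old.startswith("DejaVuSans"):
                kind = "glyph"
            elif old.startswith("p"):
                kind = "clip"
            else:
                kind = "tick"
            counters[kind] += 1
            mapping[old] = f"{kind}_{counters[kind]}"
        return mapping[old]

    # Glyph names, clipPath IDs (p + hex), and tick marker IDs (m + hex);
    # the same substitution rewrites both definitions and references.
    pattern = re.compile(r"DejaVuSans-[0-9a-f]+|p[0-9a-f]{8,}|m[0-9a-f]{8,}")
    return pattern.sub(rename, svg)
```

Because the counters restart on each call, two renders of the same chart normalize to byte-identical SVG, which is what makes snapshot comparison stable.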
  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* docs: add TDD workflow and pre-commit requirements to CLAUDE.md

  - Add mandatory test-driven development workflow (write tests first)
  - Add pre-commit requirements (must run lint and tests before committing)
  - Document test organization and running commands
  - Document 95% coverage requirement

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* fix: resolve mypy type checking errors with proper structural fixes

  - charts.py: Create PeriodConfig dataclass for type-safe period
    configuration, use mdates.date2num() for matplotlib datetime handling,
    fix x-axis limits for single-point charts
  - db.py: Add explicit int() conversion with None handling for SQLite returns
  - env.py: Add class-level type annotations to Config class
  - html.py: Add MetricDisplay TypedDict, fix import order, add proper type
    annotations for table data functions
  - meshcore_client.py: Add return type annotation

  Update tests to use new dataclass attribute access and regenerate SVG
  snapshots. Add mypy step to CLAUDE.md pre-commit requirements.

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* fix: cast Jinja2 template.render() to str for mypy

  Jinja2's type stubs declare render() as returning Any, but it actually
  returns str. Wrap with str() to satisfy mypy's no-any-return check.
  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* ci: improve workflow security and reliability

  - test.yml: Pin all actions by SHA, add concurrency control to cancel
    in-progress runs on rapid pushes
  - release-please.yml: Pin action by SHA, add 10-minute timeout
  - conftest.py: Fix snapshot_base_time to use explicit UTC timezone for
    consistent behavior across CI and local environments

  Regenerate SVG snapshots with UTC-aware timestamps.

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* fix: add mypy command to permissions in settings.local.json

* test: add comprehensive script tests with coroutine warning fixes

  - Add tests/scripts/ with tests for collect_companion, collect_repeater,
    and render scripts (1135 tests total, 96% coverage)
  - Fix unawaited coroutine warnings by using AsyncMock properly for async
    functions and async_context_manager_factory fixture for context managers
  - Add --cov=scripts to CI workflow and pyproject.toml coverage config
  - Omit scripts/generate_snapshots.py from coverage (dev utility)

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* docs: migrate claude setup to codex skills

* feat: migrate dependencies to uv (#31)

  * fix: run tests through uv
  * test: fix ruff lint issues in tests
    Consolidate patch context managers and clean unused imports/variables.
    Use datetime.UTC in snapshot fixtures.
  * test: avoid unawaited async mocks in entrypoint tests
  * ci: replace codecov with github coverage artifacts
    Add junit XML output and coverage summary in job output.
    Upload HTML and XML coverage artifacts (3.12 only) on every run.

  ---------

  Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
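The unawaited-coroutine fix mentioned above relies on `unittest.mock.AsyncMock`: its child attributes are themselves AsyncMocks, so a mocked async method returns an awaitable rather than a bare value. A minimal illustration (`fetch_battery` is a hypothetical caller for demonstration, not project code):

```python
import asyncio
from unittest.mock import AsyncMock


async def fetch_battery(client) -> int:
    """Hypothetical code under test that awaits an async client method."""
    return await client.get_battery_mv()


def test_fetch_battery() -> None:
    # AsyncMock's attributes are AsyncMocks too, so get_battery_mv() yields
    # an awaitable. A plain MagicMock here would make `await` fail and can
    # leave "coroutine was never awaited" warnings behind.
    client = AsyncMock()
    client.get_battery_mv.return_value = 4100
    assert asyncio.run(fetch_battery(client)) == 4100
    client.get_battery_mv.assert_awaited_once()
```

The same principle applies to async context managers, which the commit handles with a dedicated fixture rather than a raw mock.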
213 lines
7.5 KiB
Python
Executable File
#!/usr/bin/env python3
"""
Phase 1: Collect data from companion node.

Connects to the local companion node via serial and collects:
- Device info
- Battery status
- Time
- Self telemetry
- Custom vars
- Contacts list

Outputs:
- Concise summary to stdout
- Metrics written to SQLite database (EAV schema)
"""

import asyncio
import sys
import time
from pathlib import Path

# Add src to path for imports
sys.path.insert(0, str(Path(__file__).parent.parent / "src"))

from meshmon import log
from meshmon.db import init_db, insert_metrics
from meshmon.env import get_config
from meshmon.meshcore_client import connect_with_lock, run_command
from meshmon.telemetry import extract_lpp_from_payload, extract_telemetry_metrics


async def collect_companion() -> int:
    """
    Collect data from companion node.

    Returns:
        Exit code (0 = success, 1 = connection failed)
    """
    cfg = get_config()
    ts = int(time.time())

    # Metrics to insert (firmware field names)
    metrics: dict[str, float] = {}
    commands_succeeded = 0

    log.debug("Connecting to companion node...")
    async with connect_with_lock() as mc:
        if mc is None:
            log.error("Failed to connect to companion node")
            return 1

        # Commands are accessed via mc.commands
        cmd = mc.commands

        try:
            # send_appstart (already called during connect, but call again to get self_info)
            ok, evt_type, payload, err = await run_command(
                mc, cmd.send_appstart(), "send_appstart"
            )
            if ok:
                commands_succeeded += 1
                log.debug(f"appstart: {evt_type}")
            else:
                log.error(f"appstart failed: {err}")

            # send_device_query
            ok, evt_type, payload, err = await run_command(
                mc, cmd.send_device_query(), "send_device_query"
            )
            if ok:
                commands_succeeded += 1
                log.debug(f"device_query: {payload}")
            else:
                log.error(f"device_query failed: {err}")

            # get_time
            ok, evt_type, payload, err = await run_command(
                mc, cmd.get_time(), "get_time"
            )
            if ok:
                commands_succeeded += 1
                log.debug(f"get_time: {payload}")
            else:
                log.error(f"get_time failed: {err}")

            # get_self_telemetry - collect environmental sensor data
            # Note: The call happens regardless of telemetry_enabled for device query
            # completeness, but we only extract and store metrics if the feature is enabled.
            ok, evt_type, payload, err = await run_command(
                mc, cmd.get_self_telemetry(), "get_self_telemetry"
            )
            if ok:
                commands_succeeded += 1
                log.debug(f"get_self_telemetry: {payload}")
                # Extract and store telemetry if enabled
                if cfg.telemetry_enabled:
                    lpp_data = extract_lpp_from_payload(payload)
                    if lpp_data is not None:
                        telemetry_metrics = extract_telemetry_metrics(lpp_data)
                        if telemetry_metrics:
                            metrics.update(telemetry_metrics)
                            log.debug(f"Extracted {len(telemetry_metrics)} telemetry metrics")
            else:
                # Debug level because not all devices have sensors attached - this is expected
                log.debug(f"get_self_telemetry failed: {err}")

            # get_custom_vars
            ok, evt_type, payload, err = await run_command(
                mc, cmd.get_custom_vars(), "get_custom_vars"
            )
            if ok:
                commands_succeeded += 1
                log.debug(f"get_custom_vars: {payload}")
            else:
                log.debug(f"get_custom_vars failed: {err}")

            # get_contacts - count contacts
            ok, evt_type, payload, err = await run_command(
                mc, cmd.get_contacts(), "get_contacts"
            )
            if ok:
                commands_succeeded += 1
                contacts_count = len(payload) if payload else 0
                metrics["contacts"] = float(contacts_count)
                log.debug(f"get_contacts: found {contacts_count} contacts")
            else:
                log.error(f"get_contacts failed: {err}")

            # Get statistics - these contain the main metrics
            # Core stats (battery_mv, uptime_secs, errors, queue_len)
            ok, evt_type, payload, err = await run_command(
                mc, cmd.get_stats_core(), "get_stats_core"
            )
            if ok and payload and isinstance(payload, dict):
                commands_succeeded += 1
                # Insert all numeric fields from stats_core
                for key, value in payload.items():
                    if isinstance(value, (int, float)):
                        metrics[key] = float(value)
                log.debug(f"stats_core: {payload}")

            # Radio stats (noise_floor, last_rssi, last_snr, tx_air_secs, rx_air_secs)
            ok, evt_type, payload, err = await run_command(
                mc, cmd.get_stats_radio(), "get_stats_radio"
            )
            if ok and payload and isinstance(payload, dict):
                commands_succeeded += 1
                for key, value in payload.items():
                    if isinstance(value, (int, float)):
                        metrics[key] = float(value)
                log.debug(f"stats_radio: {payload}")

            # Packet stats (recv, sent, flood_tx, direct_tx, flood_rx, direct_rx)
            ok, evt_type, payload, err = await run_command(
                mc, cmd.get_stats_packets(), "get_stats_packets"
            )
            if ok and payload and isinstance(payload, dict):
                commands_succeeded += 1
                for key, value in payload.items():
                    if isinstance(value, (int, float)):
                        metrics[key] = float(value)
                log.debug(f"stats_packets: {payload}")

        except Exception as e:
            log.error(f"Error during collection: {e}")

    # Connection closed and lock released by context manager

    # Print summary
    summary_parts = [f"ts={ts}"]
    if "battery_mv" in metrics:
        bat_v = metrics["battery_mv"] / 1000.0
        summary_parts.append(f"bat={bat_v:.2f}V")
    if "contacts" in metrics:
        summary_parts.append(f"contacts={int(metrics['contacts'])}")
    if "recv" in metrics:
        summary_parts.append(f"rx={int(metrics['recv'])}")
    if "sent" in metrics:
        summary_parts.append(f"tx={int(metrics['sent'])}")
    # Add telemetry count to summary if present
    telemetry_count = sum(1 for k in metrics if k.startswith("telemetry."))
    if telemetry_count > 0:
        summary_parts.append(f"telem={telemetry_count}")

    log.info(f"Companion: {', '.join(summary_parts)}")

    # Write metrics to database
    if commands_succeeded > 0 and metrics:
        try:
            inserted = insert_metrics(ts=ts, role="companion", metrics=metrics)
            log.debug(f"Inserted {inserted} metrics to database (ts={ts})")
        except Exception as e:
            log.error(f"Failed to write metrics to database: {e}")
            return 1
        return 0
    else:
        log.error("No commands succeeded or no metrics collected")
        return 1


def main():
    """Entry point."""
    # Ensure database is initialized
    init_db()

    exit_code = asyncio.run(collect_companion())
    sys.exit(exit_code)


if __name__ == "__main__":
    main()