meshcore-stats/tests/unit/test_metrics.py
Jorijn Schrijvershof a9f6926104 test: add comprehensive pytest test suite with 95% coverage (#29)
* test: add comprehensive pytest test suite with 95% coverage

Add full unit and integration test coverage for the meshcore-stats project:

- 1020 tests covering all modules (db, charts, html, reports, client, etc.)
- 95.95% code coverage with pytest-cov (95% threshold enforced)
- GitHub Actions CI workflow for automated testing on push/PR
- Proper mocking of external dependencies (meshcore, serial, filesystem)
- SVG snapshot infrastructure for chart regression testing
- Integration tests for collection and rendering pipelines

Test organization:
- tests/charts/: Chart rendering and statistics
- tests/client/: MeshCore client and connection handling
- tests/config/: Environment and configuration parsing
- tests/database/: SQLite operations and migrations
- tests/html/: HTML generation and Jinja templates
- tests/reports/: Report generation and formatting
- tests/retry/: Circuit breaker and retry logic
- tests/unit/: Pure unit tests for utilities
- tests/integration/: End-to-end pipeline tests

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* chore: add test-engineer agent configuration

Add project-local test-engineer agent for pytest test development,
coverage analysis, and test review tasks.

* docs: comprehensive test suite review with 956 tests analyzed

Conducted thorough review of all 956 test cases across 47 test files:

- Unit Tests: 338 tests (battery, metrics, log, telemetry, env, charts, html, reports, formatters)
- Config Tests: 53 tests (env loading, config file parsing)
- Database Tests: 115 tests (init, insert, queries, migrations, maintenance, validation)
- Retry Tests: 59 tests (circuit breaker, async retries, factory)
- Charts Tests: 76 tests (transforms, statistics, timeseries, rendering, I/O)
- HTML Tests: 81 tests (site generation, Jinja2, metrics builders, reports index)
- Reports Tests: 149 tests (location, JSON/TXT formatting, aggregation, counter totals)
- Client Tests: 63 tests (contacts, connection, meshcore availability, commands)
- Integration Tests: 22 tests (reports, collection, rendering pipelines)

Results:
- Overall Pass Rate: 99.7% (953/956)
- 3 tests marked for improvement (empty test bodies in client tests)
- 0 tests requiring fixes

Key findings documented in test_review/tests.md including quality
observations, F.I.R.S.T. principle adherence, and recommendations.

* test: implement snapshot testing for charts and reports

Add comprehensive snapshot testing infrastructure:

SVG Chart Snapshots:
- Deterministic fixtures with fixed timestamps (2024-01-15 12:00:00)
- Tests for gauge/counter metrics in light/dark themes
- Empty chart and single-point edge cases
- Extended normalize_svg_for_snapshot_full() for reproducible comparisons

TXT Report Snapshots:
- Monthly/yearly report snapshots for repeater and companion
- Empty report handling tests
- Tests in tests/reports/test_snapshots.py

Infrastructure:
- tests/snapshots/conftest.py with shared fixtures
- UPDATE_SNAPSHOTS=1 environment variable for regeneration
- scripts/generate_snapshots.py for batch snapshot generation

Run `UPDATE_SNAPSHOTS=1 pytest tests/charts/test_chart_render.py::TestSvgSnapshots`
to generate initial snapshots.
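A minimal sketch of how such an UPDATE_SNAPSHOTS toggle is commonly wired (the helper name and snapshot layout here are assumptions for illustration, not the project's actual code):

```python
import os
from pathlib import Path


def assert_matches_snapshot(rendered: str, snapshot_path: Path) -> None:
    """Compare rendered output against a stored snapshot file.

    With UPDATE_SNAPSHOTS=1 (or when the snapshot file is missing), the
    snapshot is (re)written instead of compared, which is the regeneration
    workflow described above.
    """
    if os.environ.get("UPDATE_SNAPSHOTS") == "1" or not snapshot_path.exists():
        snapshot_path.parent.mkdir(parents=True, exist_ok=True)
        snapshot_path.write_text(rendered)
        return
    assert rendered == snapshot_path.read_text(), f"snapshot mismatch: {snapshot_path}"
```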

* test: fix SVG normalization and generate initial snapshots

Fix normalize_svg_for_snapshot() to handle:
- clipPath IDs like id="p47c77a2a6e"
- url(#p...) references
- xlink:href="#p..." references
- <dc:date> timestamps

Generated initial snapshot files:
- 7 SVG chart snapshots (gauge, counter, empty, single-point in light/dark)
- 6 TXT report snapshots (monthly/yearly for repeater/companion + empty)

All 13 snapshot tests now pass.

* test: fix SVG normalization to preserve axis rendering

The SVG normalization was replacing all matplotlib-generated IDs with
the same value, causing duplicate IDs that broke SVG rendering:
- Font glyphs, clipPaths, and tick marks all got id="normalized"
- References couldn't resolve to the correct elements
- X and Y axes failed to render in normalized snapshots

Fix uses type-specific prefixes with sequential numbering:
- glyph_N for font glyphs (DejaVuSans-XX patterns)
- clip_N for clipPath definitions (p[0-9a-f]{8,} patterns)
- tick_N for tick marks (m[0-9a-f]{8,} patterns)

This ensures all IDs remain unique while still being deterministic
for snapshot comparison.
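A sketch of the type-prefixed renaming described above (the regexes approximate matplotlib's ID patterns; the real normalize_svg_for_snapshot() may differ in detail):

```python
import re


def normalize_svg_ids(svg: str) -> str:
    """Replace matplotlib's random IDs with deterministic, still-unique ones.

    Each distinct original ID maps to one replacement, so id= definitions,
    url(#...) and xlink:href="#..." references all stay consistent.
    """
    counters = {"glyph": 0, "clip": 0, "tick": 0}
    mapping: dict[str, str] = {}

    def rename(kind: str, old: str) -> str:
        if old not in mapping:
            counters[kind] += 1
            mapping[old] = f"{kind}_{counters[kind]}"
        return mapping[old]

    # Font glyphs: DejaVuSans-XX style IDs.
    svg = re.sub(r"DejaVuSans-[0-9a-f]+", lambda m: rename("glyph", m.group(0)), svg)
    # clipPath definitions: p followed by 8+ hex digits.
    svg = re.sub(r"\bp[0-9a-f]{8,}\b", lambda m: rename("clip", m.group(0)), svg)
    # Tick marks: m followed by 8+ hex digits.
    svg = re.sub(r"\bm[0-9a-f]{8,}\b", lambda m: rename("tick", m.group(0)), svg)
    return svg
```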

* chore: add coverage and pytest artifacts to gitignore

Add .coverage, .coverage.*, htmlcov/, and .pytest_cache/ to prevent
test artifacts from being committed.

* style: fix all ruff lint errors across codebase

- Sort and organize imports (I001)
- Use modern type annotations (X | Y instead of Union, collections.abc)
- Remove unused imports (F401)
- Combine nested if statements (SIM102)
- Use ternary operators where appropriate (SIM108)
- Combine nested with statements (SIM117)
- Use contextlib.suppress instead of try-except-pass (SIM105)
- Add noqa comments for intentional SIM115 violations (file locks)
- Add TYPE_CHECKING import for forward references
- Fix exception chaining (B904)

All 1033 tests pass.
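For example, the SIM105 fix replaces a try/except/pass with contextlib.suppress (illustrative snippet, not the project's actual code):

```python
import contextlib
import os

# Before (flagged by ruff SIM105):
# try:
#     os.remove("stale.lock")
# except FileNotFoundError:
#     pass

# After: the intent is explicit and the body stays flat.
with contextlib.suppress(FileNotFoundError):
    os.remove("stale.lock")
```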

* docs: add TDD workflow and pre-commit requirements to CLAUDE.md

- Add mandatory test-driven development workflow (write tests first)
- Add pre-commit requirements (must run lint and tests before committing)
- Document test organization and running commands
- Document 95% coverage requirement

* fix: resolve mypy type checking errors with proper structural fixes

- charts.py: Create PeriodConfig dataclass for type-safe period configuration,
  use mdates.date2num() for matplotlib datetime handling, fix x-axis limits
  for single-point charts
- db.py: Add explicit int() conversion with None handling for SQLite returns
- env.py: Add class-level type annotations to Config class
- html.py: Add MetricDisplay TypedDict, fix import order, add proper type
  annotations for table data functions
- meshcore_client.py: Add return type annotation

Update tests to use new dataclass attribute access and regenerate SVG
snapshots. Add mypy step to CLAUDE.md pre-commit requirements.
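A plausible shape for such a dataclass (the field names here are illustrative guesses, not the actual PeriodConfig):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class PeriodConfig:
    """Type-safe chart-period configuration (hypothetical fields)."""

    name: str          # e.g. "day", "month", "year"
    window_hours: int  # how much history the chart covers
    tick_hours: int    # major tick spacing on the x-axis
```

Attribute access (`period.window_hours`) is what mypy can check, unlike the string-keyed dicts it replaces.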

* fix: cast Jinja2 template.render() to str for mypy

Jinja2's type stubs declare render() as returning Any, but it actually
returns str. Wrap with str() to satisfy mypy's no-any-return check.
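The fix is a one-line wrap at the render call site, roughly (requires jinja2 installed; the template here is invented for illustration):

```python
from jinja2 import Environment


def render_page(title: str) -> str:
    env = Environment(autoescape=True)
    template = env.from_string("<h1>{{ title }}</h1>")
    # Jinja2's stubs type render() as returning Any; str() satisfies
    # mypy's no-any-return check without changing runtime behavior.
    return str(template.render(title=title))
```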

* ci: improve workflow security and reliability

- test.yml: Pin all actions by SHA, add concurrency control to cancel
  in-progress runs on rapid pushes
- release-please.yml: Pin action by SHA, add 10-minute timeout
- conftest.py: Fix snapshot_base_time to use explicit UTC timezone for
  consistent behavior across CI and local environments

Regenerate SVG snapshots with UTC-aware timestamps.

* fix: add mypy command to permissions in settings.local.json

* test: add comprehensive script tests with coroutine warning fixes

- Add tests/scripts/ with tests for collect_companion, collect_repeater,
  and render scripts (1135 tests total, 96% coverage)
- Fix unawaited coroutine warnings by using AsyncMock properly for async
  functions and async_context_manager_factory fixture for context managers
- Add --cov=scripts to CI workflow and pyproject.toml coverage config
- Omit scripts/generate_snapshots.py from coverage (dev utility)
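The AsyncMock pattern referred to above, in miniature (the client and method names are invented for illustration):

```python
import asyncio
from unittest.mock import AsyncMock


async def collect_once(client) -> dict:
    # Production code awaits the client; patching with a plain Mock would
    # return a non-awaitable and emit "coroutine was never awaited".
    return await client.fetch_stats()


client = AsyncMock()
client.fetch_stats.return_value = {"bat": 3850}
stats = asyncio.run(collect_once(client))
```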

* docs: migrate claude setup to codex skills

* feat: migrate dependencies to uv (#31)

* fix: run tests through uv

* test: fix ruff lint issues in tests

Consolidate patch context managers and clean unused imports/variables

Use datetime.UTC in snapshot fixtures

* test: avoid unawaited async mocks in entrypoint tests

* ci: replace codecov with github coverage artifacts

Add junit XML output and coverage summary in job output

Upload HTML and XML coverage artifacts (3.12 only) on every run

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-08 17:16:53 +01:00


"""Tests for metrics configuration and helper functions."""
import pytest
from meshmon.metrics import (
COMPANION_CHART_METRICS,
METRIC_CONFIG,
REPEATER_CHART_METRICS,
MetricConfig,
get_chart_metrics,
get_graph_scale,
get_metric_config,
get_metric_label,
get_metric_unit,
is_counter_metric,
transform_value,
)
class TestMetricConfig:
"""Test MetricConfig dataclass."""
def test_default_values(self):
"""Test MetricConfig default values."""
config = MetricConfig(label="Test", unit="V")
assert config.label == "Test"
assert config.unit == "V"
assert config.type == "gauge"
assert config.scale == 1.0
assert config.transform is None
def test_counter_type(self):
"""Test counter metric configuration."""
config = MetricConfig(label="Packets", unit="/min", type="counter", scale=60)
assert config.type == "counter"
assert config.scale == 60
def test_with_transform(self):
"""Test metric with transform."""
config = MetricConfig(label="Battery", unit="V", transform="mv_to_v")
assert config.transform == "mv_to_v"
def test_frozen_dataclass(self):
"""MetricConfig should be immutable (frozen)."""
config = MetricConfig(label="Test", unit="V")
with pytest.raises(AttributeError):
config.label = "Changed"
class TestMetricConfigDict:
"""Test the METRIC_CONFIG dictionary."""
def test_companion_metrics_exist(self):
"""All companion chart metrics should be in METRIC_CONFIG."""
for metric in COMPANION_CHART_METRICS:
assert metric in METRIC_CONFIG, f"Missing config for companion metric: {metric}"
def test_repeater_metrics_exist(self):
"""All repeater chart metrics should be in METRIC_CONFIG."""
for metric in REPEATER_CHART_METRICS:
assert metric in METRIC_CONFIG, f"Missing config for repeater metric: {metric}"
def test_battery_voltage_metrics_have_transform(self):
"""Battery voltage metrics should have mv_to_v transform."""
voltage_metrics = ["battery_mv", "bat"]
for metric in voltage_metrics:
config = METRIC_CONFIG[metric]
assert config.transform == "mv_to_v", (
f"{metric} should have mv_to_v transform"
)
def test_counter_metrics_have_scale_60(self):
"""Counter metrics showing /min should have scale=60."""
for name, config in METRIC_CONFIG.items():
if config.type == "counter" and "/min" in config.unit:
assert config.scale == 60, (
f"Counter metric {name} with /min unit should have scale=60"
)
class TestGetChartMetrics:
"""Test get_chart_metrics function."""
def test_companion_metrics(self):
"""get_chart_metrics('companion') returns companion metrics."""
metrics = get_chart_metrics("companion")
assert metrics == COMPANION_CHART_METRICS
def test_repeater_metrics(self):
"""get_chart_metrics('repeater') returns repeater metrics."""
metrics = get_chart_metrics("repeater")
assert metrics == REPEATER_CHART_METRICS
def test_invalid_role_raises(self):
"""get_chart_metrics with invalid role raises ValueError."""
with pytest.raises(ValueError, match="Unknown role"):
get_chart_metrics("invalid")
def test_empty_role_raises(self):
"""get_chart_metrics with empty role raises ValueError."""
with pytest.raises(ValueError, match="Unknown role"):
get_chart_metrics("")
class TestGetMetricConfig:
"""Test get_metric_config function."""
def test_existing_metric(self):
"""get_metric_config returns config for known metrics."""
config = get_metric_config("bat")
assert config is not None
assert config.label == "Battery Voltage"
assert config.unit == "V"
def test_unknown_metric(self):
"""get_metric_config returns None for unknown metrics."""
config = get_metric_config("nonexistent_metric")
assert config is None
def test_empty_string(self):
"""get_metric_config returns None for empty string."""
config = get_metric_config("")
assert config is None
class TestIsCounterMetric:
"""Test is_counter_metric function."""
@pytest.mark.parametrize(
"metric",
["recv", "sent", "nb_recv", "nb_sent", "airtime", "rx_airtime"],
)
def test_counter_metrics(self, metric: str):
"""Known counter metrics return True."""
assert is_counter_metric(metric) is True
@pytest.mark.parametrize(
"metric",
["bat", "battery_mv", "bat_pct", "last_rssi", "last_snr", "uptime"],
)
def test_gauge_metrics(self, metric: str):
"""Known gauge metrics return False."""
assert is_counter_metric(metric) is False
def test_unknown_metric(self):
"""Unknown metrics return False (not True)."""
assert is_counter_metric("unknown_metric") is False
class TestGetGraphScale:
"""Test get_graph_scale function."""
def test_counter_with_scale(self):
"""Counter metrics should return their configured scale."""
# nb_recv has scale=60 for per-minute display
scale = get_graph_scale("nb_recv")
assert scale == 60
def test_gauge_default_scale(self):
"""Gauge metrics with default scale return 1.0."""
# last_rssi has no special scale
scale = get_graph_scale("last_rssi")
assert scale == 1.0
def test_uptime_scale(self):
"""Uptime metrics have fractional scale for days display."""
# uptime has scale = 1/86400 to convert seconds to days
scale = get_graph_scale("uptime")
assert scale == pytest.approx(1 / 86400)
def test_unknown_metric(self):
"""Unknown metrics return default scale of 1.0."""
scale = get_graph_scale("unknown_metric")
assert scale == 1.0
class TestGetMetricLabel:
"""Test get_metric_label function."""
def test_existing_metric(self):
"""Known metrics return their configured label."""
label = get_metric_label("bat")
assert label == "Battery Voltage"
def test_unknown_metric_returns_name(self):
"""Unknown metrics return the metric name as label."""
label = get_metric_label("unknown_metric")
assert label == "unknown_metric"
class TestGetMetricUnit:
"""Test get_metric_unit function."""
def test_voltage_unit(self):
"""Voltage metrics return 'V' unit."""
unit = get_metric_unit("bat")
assert unit == "V"
def test_counter_unit(self):
"""Counter metrics return their configured unit."""
unit = get_metric_unit("nb_recv")
assert unit == "/min"
def test_unitless_metric(self):
"""Unitless metrics return empty string."""
unit = get_metric_unit("contacts")
assert unit == ""
def test_unknown_metric(self):
"""Unknown metrics return empty string."""
unit = get_metric_unit("unknown_metric")
assert unit == ""
class TestTransformValue:
"""Test transform_value function."""
def test_mv_to_v_transform(self):
"""Metrics with mv_to_v transform convert millivolts to volts."""
# bat metric has mv_to_v transform
result = transform_value("bat", 3850.0)
assert result == pytest.approx(3.85)
def test_battery_mv_transform(self):
"""battery_mv metric also has mv_to_v transform."""
result = transform_value("battery_mv", 4200.0)
assert result == pytest.approx(4.2)
def test_no_transform(self):
"""Metrics without transform return value unchanged."""
result = transform_value("last_rssi", -85.0)
assert result == -85.0
def test_unknown_metric_no_transform(self):
"""Unknown metrics return value unchanged."""
result = transform_value("unknown_metric", 12345.0)
assert result == 12345.0
def test_transform_with_zero(self):
"""Transform handles zero values correctly."""
result = transform_value("bat", 0.0)
assert result == 0.0
def test_transform_with_negative(self):
"""Transform handles negative values (edge case)."""
result = transform_value("bat", -100.0)
assert result == pytest.approx(-0.1)