Mirror of https://github.com/jorijn/meshcore-stats.git
Synced 2026-03-28 17:42:55 +01:00
* test: add comprehensive pytest test suite with 95% coverage

  Add full unit and integration test coverage for the meshcore-stats project:
  - 1020 tests covering all modules (db, charts, html, reports, client, etc.)
  - 95.95% code coverage with pytest-cov (95% threshold enforced)
  - GitHub Actions CI workflow for automated testing on push/PR
  - Proper mocking of external dependencies (meshcore, serial, filesystem)
  - SVG snapshot infrastructure for chart regression testing
  - Integration tests for collection and rendering pipelines

  Test organization:
  - tests/charts/: Chart rendering and statistics
  - tests/client/: MeshCore client and connection handling
  - tests/config/: Environment and configuration parsing
  - tests/database/: SQLite operations and migrations
  - tests/html/: HTML generation and Jinja templates
  - tests/reports/: Report generation and formatting
  - tests/retry/: Circuit breaker and retry logic
  - tests/unit/: Pure unit tests for utilities
  - tests/integration/: End-to-end pipeline tests

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* chore: add test-engineer agent configuration

  Add project-local test-engineer agent for pytest test development,
  coverage analysis, and test review tasks.

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* docs: comprehensive test suite review with 956 tests analyzed

  Conducted a thorough review of all 956 test cases across 47 test files:
  - Unit Tests: 338 tests (battery, metrics, log, telemetry, env, charts, html, reports, formatters)
  - Config Tests: 53 tests (env loading, config file parsing)
  - Database Tests: 115 tests (init, insert, queries, migrations, maintenance, validation)
  - Retry Tests: 59 tests (circuit breaker, async retries, factory)
  - Charts Tests: 76 tests (transforms, statistics, timeseries, rendering, I/O)
  - HTML Tests: 81 tests (site generation, Jinja2, metrics builders, reports index)
  - Reports Tests: 149 tests (location, JSON/TXT formatting, aggregation, counter totals)
  - Client Tests: 63 tests (contacts, connection, meshcore availability, commands)
  - Integration Tests: 22 tests (reports, collection, rendering pipelines)

  Results:
  - Overall Pass Rate: 99.7% (953/956)
  - 3 tests marked for improvement (empty test bodies in client tests)
  - 0 tests requiring fixes

  Key findings documented in test_review/tests.md, including quality observations,
  F.I.R.S.T. principle adherence, and recommendations.

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* test: implement snapshot testing for charts and reports

  Add comprehensive snapshot testing infrastructure.

  SVG Chart Snapshots:
  - Deterministic fixtures with fixed timestamps (2024-01-15 12:00:00)
  - Tests for gauge/counter metrics in light/dark themes
  - Empty chart and single-point edge cases
  - Extended normalize_svg_for_snapshot_full() for reproducible comparisons

  TXT Report Snapshots:
  - Monthly/yearly report snapshots for repeater and companion
  - Empty report handling tests
  - Tests in tests/reports/test_snapshots.py

  Infrastructure:
  - tests/snapshots/conftest.py with shared fixtures
  - UPDATE_SNAPSHOTS=1 environment variable for regeneration
  - scripts/generate_snapshots.py for batch snapshot generation

  Run `UPDATE_SNAPSHOTS=1 pytest tests/charts/test_chart_render.py::TestSvgSnapshots`
  to generate initial snapshots.

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* test: fix SVG normalization and generate initial snapshots

  Fix normalize_svg_for_snapshot() to handle:
  - clipPath IDs like id="p47c77a2a6e"
  - url(#p...) references
  - xlink:href="#p..." references
  - <dc:date> timestamps

  Generated initial snapshot files:
  - 7 SVG chart snapshots (gauge, counter, empty, single-point in light/dark)
  - 6 TXT report snapshots (monthly/yearly for repeater/companion + empty)

  All 13 snapshot tests now pass.

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* test: fix SVG normalization to preserve axis rendering

  The SVG normalization was replacing all matplotlib-generated IDs with the
  same value, causing duplicate IDs that broke SVG rendering:
  - Font glyphs, clipPaths, and tick marks all got id="normalized"
  - References couldn't resolve to the correct elements
  - X and Y axes failed to render in normalized snapshots

  The fix uses type-specific prefixes with sequential numbering:
  - glyph_N for font glyphs (DejaVuSans-XX patterns)
  - clip_N for clipPath definitions (p[0-9a-f]{8,} patterns)
  - tick_N for tick marks (m[0-9a-f]{8,} patterns)

  This ensures all IDs remain unique while still being deterministic for
  snapshot comparison.

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* chore: add coverage and pytest artifacts to gitignore

  Add .coverage, .coverage.*, htmlcov/, and .pytest_cache/ to prevent test
  artifacts from being committed.

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* style: fix all ruff lint errors across codebase

  - Sort and organize imports (I001)
  - Use modern type annotations (X | Y instead of Union, collections.abc)
  - Remove unused imports (F401)
  - Combine nested if statements (SIM102)
  - Use ternary operators where appropriate (SIM108)
  - Combine nested with statements (SIM117)
  - Use contextlib.suppress instead of try-except-pass (SIM105)
  - Add noqa comments for intentional SIM115 violations (file locks)
  - Add TYPE_CHECKING import for forward references
  - Fix exception chaining (B904)

  All 1033 tests pass.

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* docs: add TDD workflow and pre-commit requirements to CLAUDE.md

  - Add mandatory test-driven development workflow (write tests first)
  - Add pre-commit requirements (must run lint and tests before committing)
  - Document test organization and running commands
  - Document 95% coverage requirement

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* fix: resolve mypy type checking errors with proper structural fixes

  - charts.py: Create PeriodConfig dataclass for type-safe period configuration,
    use mdates.date2num() for matplotlib datetime handling, fix x-axis limits
    for single-point charts
  - db.py: Add explicit int() conversion with None handling for SQLite returns
  - env.py: Add class-level type annotations to Config class
  - html.py: Add MetricDisplay TypedDict, fix import order, add proper type
    annotations for table data functions
  - meshcore_client.py: Add return type annotation

  Update tests to use new dataclass attribute access and regenerate SVG
  snapshots. Add mypy step to CLAUDE.md pre-commit requirements.

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* fix: cast Jinja2 template.render() to str for mypy

  Jinja2's type stubs declare render() as returning Any, but it actually
  returns str. Wrap with str() to satisfy mypy's no-any-return check.

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* ci: improve workflow security and reliability

  - test.yml: Pin all actions by SHA, add concurrency control to cancel
    in-progress runs on rapid pushes
  - release-please.yml: Pin action by SHA, add 10-minute timeout
  - conftest.py: Fix snapshot_base_time to use explicit UTC timezone for
    consistent behavior across CI and local environments

  Regenerate SVG snapshots with UTC-aware timestamps.

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* fix: add mypy command to permissions in settings.local.json

* test: add comprehensive script tests with coroutine warning fixes

  - Add tests/scripts/ with tests for collect_companion, collect_repeater,
    and render scripts (1135 tests total, 96% coverage)
  - Fix unawaited coroutine warnings by using AsyncMock properly for async
    functions and the async_context_manager_factory fixture for context managers
  - Add --cov=scripts to CI workflow and pyproject.toml coverage config
  - Omit scripts/generate_snapshots.py from coverage (dev utility)

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* docs: migrate claude setup to codex skills

* feat: migrate dependencies to uv (#31)

* fix: run tests through uv

* test: fix ruff lint issues in tests

  Consolidate patch context managers and clean unused imports/variables.
  Use datetime.UTC in snapshot fixtures.

* test: avoid unawaited async mocks in entrypoint tests

* ci: replace codecov with github coverage artifacts

  Add junit XML output and coverage summary in job output.
  Upload HTML and XML coverage artifacts (3.12 only) on every run.

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
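The type-specific ID renaming described in the "fix SVG normalization to preserve axis rendering" commit can be sketched as follows. This is an illustrative reconstruction, not the repository's actual `normalize_svg_for_snapshot()` implementation: the function name `normalize_svg_ids` and the glyph regex are assumptions, while the `p[0-9a-f]{8,}` and `m[0-9a-f]{8,}` patterns come straight from the commit message.

```python
import re


def normalize_svg_ids(svg: str) -> str:
    """Replace matplotlib's randomized SVG IDs with deterministic, unique ones.

    Type-specific prefixes (glyph_N, clip_N, tick_N) keep every ID unique,
    so url(#...) and xlink:href references still resolve and the axes render,
    while the output stays stable across runs for snapshot comparison.
    """
    mapping: dict[str, str] = {}
    counters = {"glyph": 0, "clip": 0, "tick": 0}

    def rename(old: str, kind: str) -> str:
        # Same original ID always maps to the same replacement
        if old not in mapping:
            counters[kind] += 1
            mapping[old] = f"{kind}_{counters[kind]}"
        return mapping[old]

    # Font glyphs (DejaVuSans-XX), clipPaths (p<hex>), tick markers (m<hex>)
    for pattern, kind in [
        (r"DejaVuSans-[0-9a-f]+", "glyph"),
        (r"p[0-9a-f]{8,}", "clip"),
        (r"m[0-9a-f]{8,}", "tick"),
    ]:
        svg = re.sub(pattern, lambda m, k=kind: rename(m.group(0), k), svg)
    return svg
```

Because definitions and their references get the same replacement, a clipPath and the `url(#...)` that points at it stay linked after normalization.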
190 lines
5.5 KiB
Python
"""Script-specific test fixtures."""

import importlib.util
import sys
from contextlib import contextmanager
from pathlib import Path
from unittest.mock import AsyncMock, MagicMock

import pytest

# Ensure scripts can import from src
SCRIPTS_DIR = Path(__file__).parent.parent.parent / "scripts"
SRC_DIR = Path(__file__).parent.parent.parent / "src"

if str(SRC_DIR) not in sys.path:
    sys.path.insert(0, str(SRC_DIR))

# Track dynamically loaded script modules for cleanup
_loaded_script_modules: set[str] = set()


def load_script_module(script_name: str):
    """Load a script as a module and track it for cleanup.

    Args:
        script_name: Name of script file (e.g., "collect_companion.py")

    Returns:
        Loaded module object
    """
    script_path = SCRIPTS_DIR / script_name
    module_name = script_name.replace(".py", "")

    spec = importlib.util.spec_from_file_location(module_name, script_path)
    assert spec is not None, f"Could not load spec for {script_path}"
    assert spec.loader is not None, f"No loader for {script_path}"

    module = importlib.util.module_from_spec(spec)
    sys.modules[module_name] = module
    _loaded_script_modules.add(module_name)

    spec.loader.exec_module(module)
    return module

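As a self-contained illustration of the importlib mechanics that `load_script_module` relies on, the following sketch loads a hypothetical throwaway script (`demo_script.py` is invented for the demo, not one of the repository's real scripts) and then removes it from `sys.modules`, mirroring what the autouse cleanup fixture does:

```python
import importlib.util
import sys
import tempfile
from pathlib import Path

# Hypothetical demo: load a throwaway script file as a module, the same way
# load_script_module() does, then clean it out of sys.modules afterwards.
with tempfile.TemporaryDirectory() as tmp:
    script = Path(tmp) / "demo_script.py"
    script.write_text("ANSWER = 42\n")

    spec = importlib.util.spec_from_file_location("demo_script", script)
    assert spec is not None and spec.loader is not None
    module = importlib.util.module_from_spec(spec)
    sys.modules["demo_script"] = module  # register before exec, like the helper
    spec.loader.exec_module(module)

    assert module.ANSWER == 42

    # Cleanup mirrors the autouse fixture: drop the module between tests
    del sys.modules["demo_script"]
```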
@pytest.fixture(autouse=True)
def cleanup_script_modules():
    """Clean up dynamically loaded script modules after each test.

    This prevents test pollution where module-level state persists
    between tests, potentially causing false positives or flaky tests.
    """
    # Clear tracking before test
    _loaded_script_modules.clear()

    yield

    # Clean up after test
    for module_name in _loaded_script_modules:
        if module_name in sys.modules:
            del sys.modules[module_name]
    _loaded_script_modules.clear()


@pytest.fixture
def scripts_dir():
    """Path to the scripts directory."""
    return SCRIPTS_DIR


@contextmanager
def mock_async_context_manager(return_value=None):
    """Create a mock that works as an async context manager.

    Usage:
        with patch.object(module, "connect_with_lock") as mock_connect:
            mock_connect.return_value = mock_async_context_manager(mc)
            # or for None return:
            mock_connect.return_value = mock_async_context_manager(None)

    Args:
        return_value: Value to return from __aenter__

    Returns:
        A mock configured as an async context manager
    """
    mock = MagicMock()
    mock.__aenter__ = AsyncMock(return_value=return_value)
    mock.__aexit__ = AsyncMock(return_value=None)
    yield mock


class AsyncContextManagerMock:
    """A class-based async context manager mock for more complex scenarios.

    Can be configured with enter/exit callbacks and exception handling.
    """

    def __init__(self, return_value=None, exit_exception=None):
        """Initialize the mock.

        Args:
            return_value: Value to return from __aenter__
            exit_exception: Exception to raise in __aexit__ (for testing cleanup)
        """
        self.return_value = return_value
        self.exit_exception = exit_exception
        self.entered = False
        self.exited = False
        self.exit_args = None

    async def __aenter__(self):
        self.entered = True
        return self.return_value

    async def __aexit__(self, exc_type, exc_val, exc_tb):
        self.exited = True
        self.exit_args = (exc_type, exc_val, exc_tb)
        if self.exit_exception:
            raise self.exit_exception
        return None

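The class can be exercised directly with `async with`. This standalone sketch inlines a minimal copy of the helper so it runs on its own; the names `mc` and `demo` are illustrative:

```python
import asyncio
from unittest.mock import MagicMock


class AsyncContextManagerMock:
    """Minimal inline copy of the conftest helper, so this demo is standalone."""

    def __init__(self, return_value=None, exit_exception=None):
        self.return_value = return_value
        self.exit_exception = exit_exception
        self.entered = False
        self.exited = False

    async def __aenter__(self):
        self.entered = True
        return self.return_value

    async def __aexit__(self, exc_type, exc_val, exc_tb):
        self.exited = True
        if self.exit_exception:
            raise self.exit_exception
        return None


async def demo() -> bool:
    mc = MagicMock()
    ctx = AsyncContextManagerMock(return_value=mc)
    # `async with` drives __aenter__/__aexit__, as a patched connect_with_lock would
    async with ctx as conn:
        assert conn is mc
    return ctx.entered and ctx.exited


result = asyncio.run(demo())
```

The `entered`/`exited` flags let a test assert that connection setup and teardown both actually happened, which a bare `AsyncMock` does not record as directly.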
@pytest.fixture
def async_context_manager_factory():
    """Factory fixture to create async context manager mocks.

    Usage:
        def test_something(async_context_manager_factory):
            mc = MagicMock()
            ctx_mock = async_context_manager_factory(mc)
            with patch.object(module, "connect_with_lock", return_value=ctx_mock):
                ...
    """

    def factory(return_value=None, exit_exception=None):
        return AsyncContextManagerMock(return_value, exit_exception)

    return factory


@pytest.fixture
def mock_repeater_contact():
    """Mock repeater contact for testing."""
    return {
        "adv_name": "TestRepeater",
        "public_key": "abc123def456",
        "last_seen": 1234567890,
    }


@pytest.fixture
def mock_repeater_status(sample_repeater_metrics):
    """Mock repeater status response."""
    return sample_repeater_metrics.copy()


@pytest.fixture
def mock_run_command_factory():
    """Factory to create mock run_command functions with configurable responses.

    Usage:
        def test_something(mock_run_command_factory):
            responses = {
                "send_appstart": (True, "SELF_INFO", {}, None),
                "get_stats_core": (True, "STATS_CORE", {"battery_mv": 3850}, None),
            }
            mock_run = mock_run_command_factory(responses)
            with patch.object(module, "run_command", side_effect=mock_run):
                ...
    """

    def factory(responses: dict, default_response=None):
        """Create a mock run_command function.

        Args:
            responses: Dict mapping command names to (ok, evt_type, payload, err) tuples
            default_response: Response for commands not in responses dict.
                If None, returns (False, None, None, "Unknown command")
        """
        if default_response is None:
            default_response = (False, None, None, "Unknown command")

        async def mock_run_command(mc, coro, name):
            return responses.get(name, default_response)

        return mock_run_command

    return factory
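To show how the factory's canned responses are consumed, here is a standalone sketch that inlines the same lookup logic. The command names and payloads match the docstring example above, but the `demo` wrapper and the `None` placeholders for `mc`/`coro` are illustrative:

```python
import asyncio

# Canned (ok, evt_type, payload, err) tuples, keyed by command name,
# matching the shape mock_run_command_factory expects.
responses = {
    "send_appstart": (True, "SELF_INFO", {}, None),
    "get_stats_core": (True, "STATS_CORE", {"battery_mv": 3850}, None),
}
default = (False, None, None, "Unknown command")


async def mock_run_command(mc, coro, name):
    # Same behavior as the factory's inner function: look up a canned response,
    # falling back to the default for unknown commands.
    return responses.get(name, default)


async def demo():
    ok, evt, payload, err = await mock_run_command(None, None, "get_stats_core")
    assert ok and evt == "STATS_CORE" and payload["battery_mv"] == 3850
    ok, _, _, err = await mock_run_command(None, None, "bogus")
    assert not ok
    return err


err = asyncio.run(demo())
```

Because the mock is an `async def`, patching it in with `side_effect=mock_run` yields a real awaitable and avoids the unawaited-coroutine warnings the commit log mentions.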