mirror of
https://github.com/jorijn/meshcore-stats.git
synced 2026-03-28 17:42:55 +01:00
Commit history:

* test: add comprehensive pytest test suite with 95% coverage — 1020 tests covering all modules (db, charts, html, reports, client, etc.), 95.95% coverage with a 95% threshold enforced via pytest-cov, a GitHub Actions CI workflow, mocking of external dependencies (meshcore, serial, filesystem), SVG snapshot infrastructure, and integration tests for the collection and rendering pipelines. Tests are organized by area under tests/ (charts, client, config, database, html, reports, retry, unit, integration).
* chore: add test-engineer agent configuration — project-local agent for pytest test development, coverage analysis, and test review tasks.
* docs: comprehensive test suite review with 956 tests analyzed — reviewed all 956 test cases across 47 test files; 99.7% pass rate (953/956), 3 tests marked for improvement (empty test bodies in client tests), 0 requiring fixes. Findings documented in test_review/tests.md, including F.I.R.S.T. principle adherence and recommendations.
* test: implement snapshot testing for charts and reports — deterministic SVG chart fixtures (fixed timestamps, gauge/counter metrics in light/dark themes, empty and single-point edge cases), TXT report snapshots in tests/reports/test_snapshots.py, shared fixtures in tests/snapshots/conftest.py, an UPDATE_SNAPSHOTS=1 environment variable for regeneration, and scripts/generate_snapshots.py for batch generation.
* test: fix SVG normalization and generate initial snapshots — handle clipPath IDs like id="p47c77a2a6e", url(#p...) references, xlink:href="#p..." references, and <dc:date> timestamps; generated 7 SVG and 6 TXT snapshots, all 13 snapshot tests passing.
* test: fix SVG normalization to preserve axis rendering — replacing all matplotlib-generated IDs with the same value produced duplicate IDs that broke SVG rendering (references could not resolve, so the X and Y axes failed to render). The fix uses type-specific prefixes with sequential numbering: glyph_N for font glyphs (DejaVuSans-XX patterns), clip_N for clipPath definitions (p[0-9a-f]{8,} patterns), and tick_N for tick marks (m[0-9a-f]{8,} patterns), keeping IDs unique yet deterministic.
* chore: add coverage and pytest artifacts to gitignore — .coverage, .coverage.*, htmlcov/, and .pytest_cache/.
* style: fix all ruff lint errors across codebase — import sorting (I001), modern type annotations (X | Y, collections.abc), unused imports (F401), SIM102/SIM105/SIM108/SIM117 simplifications, noqa for intentional SIM115 violations, TYPE_CHECKING imports, and exception chaining (B904). All 1033 tests pass.
* docs: add TDD workflow and pre-commit requirements to CLAUDE.md — write tests first, run lint and tests before committing, document the 95% coverage requirement.
* fix: resolve mypy type checking errors with proper structural fixes — PeriodConfig dataclass and mdates.date2num() handling in charts.py, explicit int() conversion for SQLite returns in db.py, class-level annotations in env.py, MetricDisplay TypedDict in html.py, return type annotation in meshcore_client.py.
* fix: cast Jinja2 template.render() to str for mypy — Jinja2's type stubs declare render() as returning Any; wrapping with str() satisfies the no-any-return check.
* ci: improve workflow security and reliability — pin all actions by SHA, add concurrency control and a 10-minute timeout, and fix snapshot_base_time to use an explicit UTC timezone for consistent behavior across CI and local environments.
* fix: add mypy command to permissions in settings.local.json.
* test: add comprehensive script tests with coroutine warning fixes — tests/scripts/ for collect_companion, collect_repeater, and render (1135 tests total, 96% coverage); unawaited-coroutine warnings fixed by using AsyncMock properly and an async_context_manager_factory fixture.
* docs: migrate claude setup to codex skills.
* feat: migrate dependencies to uv (#31) — run tests through uv, fix ruff lint issues in tests, avoid unawaited async mocks in entrypoint tests, and replace codecov with GitHub coverage artifacts (junit XML output, coverage summary in job output, HTML/XML artifacts uploaded on 3.12).
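The axis-rendering fix above hinges on mapping each matplotlib-generated ID to a unique, deterministic replacement instead of a single shared placeholder. A minimal sketch of that idea follows; the function name `normalize_svg_ids` and the exact regexes are illustrative, not the project's actual `normalize_svg_for_snapshot()` implementation.

```python
# Sketch of type-specific SVG ID normalization: each kind of
# matplotlib-generated ID gets its own prefix and a sequential number,
# so IDs stay unique (references still resolve) yet deterministic
# across runs. Illustrative only, not the project's real code.
import re


def normalize_svg_ids(svg: str) -> str:
    """Replace volatile matplotlib IDs with stable glyph_N/clip_N/tick_N names."""
    counters = {"glyph": 0, "clip": 0, "tick": 0}
    mapping: dict[str, str] = {}

    def renamed(kind: str, old: str) -> str:
        # First sighting of an ID assigns the next number for its kind;
        # later sightings reuse the same name so references stay consistent.
        if old not in mapping:
            counters[kind] += 1
            mapping[old] = f"{kind}_{counters[kind]}"
        return mapping[old]

    # Collect every ID in document order, per kind.
    for pattern, kind in [
        (r"DejaVuSans-[0-9A-Fa-f]+", "glyph"),  # font glyph IDs
        (r"p[0-9a-f]{8,}", "clip"),             # clipPath IDs
        (r"m[0-9a-f]{8,}", "tick"),             # tick mark IDs
    ]:
        for match in re.findall(pattern, svg):
            renamed(kind, match)

    # Replace longest IDs first to avoid clobbering overlapping prefixes.
    for old in sorted(mapping, key=len, reverse=True):
        svg = svg.replace(old, mapping[old])
    return svg
```

Because both the `id="..."` definition and every `url(#...)` or `xlink:href="#..."` reference go through the same mapping, each reference still points at its own element after normalization, which is what keeps the axes rendering.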
558 lines
18 KiB
Python
"""Snapshot tests for text report formatting.
|
|
|
|
These tests compare generated TXT reports against saved snapshots
|
|
to detect unintended changes in report layout and formatting.
|
|
|
|
To update snapshots, run: UPDATE_SNAPSHOTS=1 pytest tests/reports/test_snapshots.py
|
|
"""
|
|
|
|
import os
|
|
from datetime import date, datetime
|
|
from pathlib import Path
|
|
|
|
import pytest
|
|
|
|
from meshmon.reports import (
|
|
DailyAggregate,
|
|
LocationInfo,
|
|
MetricStats,
|
|
MonthlyAggregate,
|
|
YearlyAggregate,
|
|
format_monthly_txt,
|
|
format_yearly_txt,
|
|
)
|
|
|
|
|
|
class TestTxtReportSnapshots:
|
|
"""Snapshot tests for WeeWX-style ASCII text reports."""
|
|
|
|
@pytest.fixture
|
|
def update_snapshots(self):
|
|
"""Return True if snapshots should be updated."""
|
|
return os.environ.get("UPDATE_SNAPSHOTS", "").lower() in ("1", "true", "yes")
|
|
|
|
    @pytest.fixture
    def txt_snapshots_dir(self):
        """Path to TXT snapshots directory."""
        return Path(__file__).parent.parent / "snapshots" / "txt"

    @pytest.fixture
    def sample_location(self):
        """Create sample LocationInfo for testing."""
        return LocationInfo(
            name="Test Observatory",
            lat=52.3676,  # Amsterdam
            lon=4.9041,
            elev=2.0,
        )
    @pytest.fixture
    def repeater_monthly_aggregate(self):
        """Create sample MonthlyAggregate for repeater role testing."""
        daily_data = []

        # Create 5 days of sample data
        for day in range(1, 6):
            daily_data.append(
                DailyAggregate(
                    date=date(2024, 1, day),
                    metrics={
                        "bat": MetricStats(
                            min_value=3600 + day * 10,
                            min_time=datetime(2024, 1, day, 4, 0),
                            max_value=3900 + day * 10,
                            max_time=datetime(2024, 1, day, 14, 0),
                            mean=3750 + day * 10,
                            count=96,
                        ),
                        "bat_pct": MetricStats(
                            mean=65.0 + day * 2,
                            count=96,
                        ),
                        "last_rssi": MetricStats(
                            mean=-85.0 - day,
                            count=96,
                        ),
                        "last_snr": MetricStats(
                            mean=8.5 + day * 0.2,
                            count=96,
                        ),
                        "noise_floor": MetricStats(
                            mean=-115.0,
                            count=96,
                        ),
                        "nb_recv": MetricStats(
                            total=500 + day * 100,
                            count=96,
                            reboot_count=0,
                        ),
                        "nb_sent": MetricStats(
                            total=200 + day * 50,
                            count=96,
                            reboot_count=0,
                        ),
                        "airtime": MetricStats(
                            total=120 + day * 20,
                            count=96,
                            reboot_count=0,
                        ),
                    },
                    snapshot_count=96,
                )
            )

        return MonthlyAggregate(
            year=2024,
            month=1,
            role="repeater",
            daily=daily_data,
            summary={
                "bat": MetricStats(
                    min_value=3610,
                    min_time=datetime(2024, 1, 1, 4, 0),
                    max_value=3950,
                    max_time=datetime(2024, 1, 5, 14, 0),
                    mean=3780,
                    count=480,
                ),
                "bat_pct": MetricStats(
                    mean=71.0,
                    count=480,
                ),
                "last_rssi": MetricStats(
                    mean=-88.0,
                    count=480,
                ),
                "last_snr": MetricStats(
                    mean=9.1,
                    count=480,
                ),
                "noise_floor": MetricStats(
                    mean=-115.0,
                    count=480,
                ),
                "nb_recv": MetricStats(
                    total=4000,
                    count=480,
                    reboot_count=0,
                ),
                "nb_sent": MetricStats(
                    total=1750,
                    count=480,
                    reboot_count=0,
                ),
                "airtime": MetricStats(
                    total=900,
                    count=480,
                    reboot_count=0,
                ),
            },
        )
    @pytest.fixture
    def companion_monthly_aggregate(self):
        """Create sample MonthlyAggregate for companion role testing."""
        daily_data = []

        # Create 5 days of sample data
        for day in range(1, 6):
            daily_data.append(
                DailyAggregate(
                    date=date(2024, 1, day),
                    metrics={
                        "battery_mv": MetricStats(
                            min_value=3700 + day * 10,
                            min_time=datetime(2024, 1, day, 5, 0),
                            max_value=4000 + day * 10,
                            max_time=datetime(2024, 1, day, 12, 0),
                            mean=3850 + day * 10,
                            count=1440,
                        ),
                        "bat_pct": MetricStats(
                            mean=75.0 + day * 2,
                            count=1440,
                        ),
                        "contacts": MetricStats(
                            mean=8 + day,
                            count=1440,
                        ),
                        "recv": MetricStats(
                            total=1000 + day * 200,
                            count=1440,
                            reboot_count=0,
                        ),
                        "sent": MetricStats(
                            total=500 + day * 100,
                            count=1440,
                            reboot_count=0,
                        ),
                    },
                    snapshot_count=1440,
                )
            )

        return MonthlyAggregate(
            year=2024,
            month=1,
            role="companion",
            daily=daily_data,
            summary={
                "battery_mv": MetricStats(
                    min_value=3710,
                    min_time=datetime(2024, 1, 1, 5, 0),
                    max_value=4050,
                    max_time=datetime(2024, 1, 5, 12, 0),
                    mean=3880,
                    count=7200,
                ),
                "bat_pct": MetricStats(
                    mean=81.0,
                    count=7200,
                ),
                "contacts": MetricStats(
                    mean=11.0,
                    count=7200,
                ),
                "recv": MetricStats(
                    total=8000,
                    count=7200,
                    reboot_count=0,
                ),
                "sent": MetricStats(
                    total=4000,
                    count=7200,
                    reboot_count=0,
                ),
            },
        )
    @pytest.fixture
    def repeater_yearly_aggregate(self):
        """Create sample YearlyAggregate for repeater role testing."""
        monthly_data = []

        # Create 3 months of sample data
        for month in range(1, 4):
            monthly_data.append(
                MonthlyAggregate(
                    year=2024,
                    month=month,
                    role="repeater",
                    daily=[],  # Daily details not needed for yearly summary
                    summary={
                        "bat": MetricStats(
                            min_value=3500 + month * 50,
                            min_time=datetime(2024, month, 15, 4, 0),
                            max_value=3950 + month * 20,
                            max_time=datetime(2024, month, 20, 14, 0),
                            mean=3700 + month * 30,
                            count=2976,  # ~31 days * 96 readings
                        ),
                        "bat_pct": MetricStats(
                            mean=60.0 + month * 5,
                            count=2976,
                        ),
                        "last_rssi": MetricStats(
                            mean=-90.0 + month,
                            count=2976,
                        ),
                        "last_snr": MetricStats(
                            mean=7.5 + month * 0.5,
                            count=2976,
                        ),
                        "nb_recv": MetricStats(
                            total=30000 + month * 5000,
                            count=2976,
                            reboot_count=0,
                        ),
                        "nb_sent": MetricStats(
                            total=15000 + month * 2500,
                            count=2976,
                            reboot_count=0,
                        ),
                    },
                )
            )

        return YearlyAggregate(
            year=2024,
            role="repeater",
            monthly=monthly_data,
            summary={
                "bat": MetricStats(
                    min_value=3550,
                    min_time=datetime(2024, 1, 15, 4, 0),
                    max_value=4010,
                    max_time=datetime(2024, 3, 20, 14, 0),
                    mean=3760,
                    count=8928,
                ),
                "bat_pct": MetricStats(
                    mean=70.0,
                    count=8928,
                ),
                "last_rssi": MetricStats(
                    mean=-88.0,
                    count=8928,
                ),
                "last_snr": MetricStats(
                    mean=8.5,
                    count=8928,
                ),
                "nb_recv": MetricStats(
                    total=120000,
                    count=8928,
                    reboot_count=0,
                ),
                "nb_sent": MetricStats(
                    total=60000,
                    count=8928,
                    reboot_count=0,
                ),
            },
        )
    @pytest.fixture
    def companion_yearly_aggregate(self):
        """Create sample YearlyAggregate for companion role testing."""
        monthly_data = []

        # Create 3 months of sample data
        for month in range(1, 4):
            monthly_data.append(
                MonthlyAggregate(
                    year=2024,
                    month=month,
                    role="companion",
                    daily=[],
                    summary={
                        "battery_mv": MetricStats(
                            min_value=3600 + month * 30,
                            min_time=datetime(2024, month, 10, 5, 0),
                            max_value=4100 + month * 20,
                            max_time=datetime(2024, month, 25, 12, 0),
                            mean=3850 + month * 25,
                            count=44640,  # ~31 days * 1440 readings
                        ),
                        "bat_pct": MetricStats(
                            mean=70.0 + month * 3,
                            count=44640,
                        ),
                        "contacts": MetricStats(
                            mean=10 + month,
                            count=44640,
                        ),
                        "recv": MetricStats(
                            total=50000 + month * 10000,
                            count=44640,
                            reboot_count=0,
                        ),
                        "sent": MetricStats(
                            total=25000 + month * 5000,
                            count=44640,
                            reboot_count=0,
                        ),
                    },
                )
            )

        return YearlyAggregate(
            year=2024,
            role="companion",
            monthly=monthly_data,
            summary={
                "battery_mv": MetricStats(
                    min_value=3630,
                    min_time=datetime(2024, 1, 10, 5, 0),
                    max_value=4160,
                    max_time=datetime(2024, 3, 25, 12, 0),
                    mean=3900,
                    count=133920,
                ),
                "bat_pct": MetricStats(
                    mean=76.0,
                    count=133920,
                ),
                "contacts": MetricStats(
                    mean=12.0,
                    count=133920,
                ),
                "recv": MetricStats(
                    total=210000,
                    count=133920,
                    reboot_count=0,
                ),
                "sent": MetricStats(
                    total=105000,
                    count=133920,
                    reboot_count=0,
                ),
            },
        )
    def _assert_snapshot_match(
        self,
        actual: str,
        snapshot_path: Path,
        update: bool,
    ) -> None:
        """Compare TXT report against snapshot, with optional update mode."""
        if update:
            # Update mode: write actual to snapshot
            snapshot_path.parent.mkdir(parents=True, exist_ok=True)
            snapshot_path.write_text(actual, encoding="utf-8")
            pytest.skip(f"Snapshot updated: {snapshot_path}")
        else:
            # Compare mode
            if not snapshot_path.exists():
                # Create new snapshot if it doesn't exist
                snapshot_path.parent.mkdir(parents=True, exist_ok=True)
                snapshot_path.write_text(actual, encoding="utf-8")
                pytest.fail(
                    f"Snapshot created: {snapshot_path}\n"
                    "Run tests again to verify, or set UPDATE_SNAPSHOTS=1 to regenerate."
                )

            expected = snapshot_path.read_text(encoding="utf-8")

            if actual != expected:
                # Show differences for debugging
                actual_lines = actual.splitlines()
                expected_lines = expected.splitlines()

                diff_info = []
                for i, (a, e) in enumerate(
                    zip(actual_lines, expected_lines, strict=False), 1
                ):
                    if a != e:
                        diff_info.append(f"Line {i} differs:")
                        diff_info.append(f"  Expected: '{e}'")
                        diff_info.append(f"  Actual:   '{a}'")
                        if len(diff_info) > 15:
                            diff_info.append("  (more differences omitted)")
                            break

                if len(actual_lines) != len(expected_lines):
                    diff_info.append(
                        f"Line count: expected {len(expected_lines)}, got {len(actual_lines)}"
                    )

                pytest.fail(
                    f"Snapshot mismatch: {snapshot_path}\n"
                    f"Set UPDATE_SNAPSHOTS=1 to regenerate.\n\n"
                    + "\n".join(diff_info)
                )
    def test_monthly_report_repeater(
        self,
        repeater_monthly_aggregate,
        sample_location,
        txt_snapshots_dir,
        update_snapshots,
    ):
        """Monthly repeater report matches snapshot."""
        result = format_monthly_txt(
            repeater_monthly_aggregate,
            "Test Repeater",
            sample_location,
        )

        snapshot_path = txt_snapshots_dir / "monthly_report_repeater.txt"
        self._assert_snapshot_match(result, snapshot_path, update_snapshots)

    def test_monthly_report_companion(
        self,
        companion_monthly_aggregate,
        sample_location,
        txt_snapshots_dir,
        update_snapshots,
    ):
        """Monthly companion report matches snapshot."""
        result = format_monthly_txt(
            companion_monthly_aggregate,
            "Test Companion",
            sample_location,
        )

        snapshot_path = txt_snapshots_dir / "monthly_report_companion.txt"
        self._assert_snapshot_match(result, snapshot_path, update_snapshots)

    def test_yearly_report_repeater(
        self,
        repeater_yearly_aggregate,
        sample_location,
        txt_snapshots_dir,
        update_snapshots,
    ):
        """Yearly repeater report matches snapshot."""
        result = format_yearly_txt(
            repeater_yearly_aggregate,
            "Test Repeater",
            sample_location,
        )

        snapshot_path = txt_snapshots_dir / "yearly_report_repeater.txt"
        self._assert_snapshot_match(result, snapshot_path, update_snapshots)

    def test_yearly_report_companion(
        self,
        companion_yearly_aggregate,
        sample_location,
        txt_snapshots_dir,
        update_snapshots,
    ):
        """Yearly companion report matches snapshot."""
        result = format_yearly_txt(
            companion_yearly_aggregate,
            "Test Companion",
            sample_location,
        )

        snapshot_path = txt_snapshots_dir / "yearly_report_companion.txt"
        self._assert_snapshot_match(result, snapshot_path, update_snapshots)

    def test_empty_monthly_report(
        self,
        sample_location,
        txt_snapshots_dir,
        update_snapshots,
    ):
        """Empty monthly report matches snapshot."""
        empty_aggregate = MonthlyAggregate(
            year=2024,
            month=1,
            role="repeater",
            daily=[],
            summary={},
        )

        result = format_monthly_txt(
            empty_aggregate,
            "Test Repeater",
            sample_location,
        )

        snapshot_path = txt_snapshots_dir / "empty_monthly_report.txt"
        self._assert_snapshot_match(result, snapshot_path, update_snapshots)

    def test_empty_yearly_report(
        self,
        sample_location,
        txt_snapshots_dir,
        update_snapshots,
    ):
        """Empty yearly report matches snapshot."""
        empty_aggregate = YearlyAggregate(
            year=2024,
            role="repeater",
            monthly=[],
            summary={},
        )

        result = format_yearly_txt(
            empty_aggregate,
            "Test Repeater",
            sample_location,
        )

        snapshot_path = txt_snapshots_dir / "empty_yearly_report.txt"
        self._assert_snapshot_match(result, snapshot_path, update_snapshots)