mirror of
https://github.com/jorijn/meshcore-stats.git
synced 2026-03-28 17:42:55 +01:00
* test: add comprehensive pytest test suite with 95% coverage

  Add full unit and integration test coverage for the meshcore-stats project:
  - 1020 tests covering all modules (db, charts, html, reports, client, etc.)
  - 95.95% code coverage with pytest-cov (95% threshold enforced)
  - GitHub Actions CI workflow for automated testing on push/PR
  - Proper mocking of external dependencies (meshcore, serial, filesystem)
  - SVG snapshot infrastructure for chart regression testing
  - Integration tests for collection and rendering pipelines

  Test organization:
  - tests/charts/: Chart rendering and statistics
  - tests/client/: MeshCore client and connection handling
  - tests/config/: Environment and configuration parsing
  - tests/database/: SQLite operations and migrations
  - tests/html/: HTML generation and Jinja templates
  - tests/reports/: Report generation and formatting
  - tests/retry/: Circuit breaker and retry logic
  - tests/unit/: Pure unit tests for utilities
  - tests/integration/: End-to-end pipeline tests

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* chore: add test-engineer agent configuration

  Add a project-local test-engineer agent for pytest test development, coverage analysis, and test review tasks.

* docs: comprehensive test suite review with 956 tests analyzed

  Conducted a thorough review of all 956 test cases across 47 test files:
  - Unit tests: 338 tests (battery, metrics, log, telemetry, env, charts, html, reports, formatters)
  - Config tests: 53 tests (env loading, config file parsing)
  - Database tests: 115 tests (init, insert, queries, migrations, maintenance, validation)
  - Retry tests: 59 tests (circuit breaker, async retries, factory)
  - Charts tests: 76 tests (transforms, statistics, timeseries, rendering, I/O)
  - HTML tests: 81 tests (site generation, Jinja2, metrics builders, reports index)
  - Reports tests: 149 tests (location, JSON/TXT formatting, aggregation, counter totals)
  - Client tests: 63 tests (contacts, connection, meshcore availability, commands)
  - Integration tests: 22 tests (reports, collection, rendering pipelines)

  Results:
  - Overall pass rate: 99.7% (953/956)
  - 3 tests marked for improvement (empty test bodies in client tests)
  - 0 tests requiring fixes

  Key findings are documented in test_review/tests.md, including quality observations, F.I.R.S.T. principle adherence, and recommendations.

* test: implement snapshot testing for charts and reports

  Add comprehensive snapshot testing infrastructure.

  SVG chart snapshots:
  - Deterministic fixtures with fixed timestamps (2024-01-15 12:00:00)
  - Tests for gauge/counter metrics in light/dark themes
  - Empty chart and single-point edge cases
  - Extended normalize_svg_for_snapshot_full() for reproducible comparisons

  TXT report snapshots:
  - Monthly/yearly report snapshots for repeater and companion
  - Empty report handling tests
  - Tests in tests/reports/test_snapshots.py

  Infrastructure:
  - tests/snapshots/conftest.py with shared fixtures
  - UPDATE_SNAPSHOTS=1 environment variable for regeneration
  - scripts/generate_snapshots.py for batch snapshot generation

  Run `UPDATE_SNAPSHOTS=1 pytest tests/charts/test_chart_render.py::TestSvgSnapshots` to generate initial snapshots.

* test: fix SVG normalization and generate initial snapshots

  Fix normalize_svg_for_snapshot() to handle:
  - clipPath IDs like id="p47c77a2a6e"
  - url(#p...) references
  - xlink:href="#p..." references
  - <dc:date> timestamps

  Generated initial snapshot files:
  - 7 SVG chart snapshots (gauge, counter, empty, single-point in light/dark)
  - 6 TXT report snapshots (monthly/yearly for repeater/companion + empty)

  All 13 snapshot tests now pass.

* test: fix SVG normalization to preserve axis rendering

  The SVG normalization was replacing all matplotlib-generated IDs with the same value, causing duplicate IDs that broke SVG rendering:
  - Font glyphs, clipPaths, and tick marks all got id="normalized"
  - References couldn't resolve to the correct elements
  - X and Y axes failed to render in normalized snapshots

  The fix uses type-specific prefixes with sequential numbering:
  - glyph_N for font glyphs (DejaVuSans-XX patterns)
  - clip_N for clipPath definitions (p[0-9a-f]{8,} patterns)
  - tick_N for tick marks (m[0-9a-f]{8,} patterns)

  This ensures all IDs remain unique while still being deterministic for snapshot comparison.

* chore: add coverage and pytest artifacts to gitignore

  Add .coverage, .coverage.*, htmlcov/, and .pytest_cache/ to prevent test artifacts from being committed.

* style: fix all ruff lint errors across codebase

  - Sort and organize imports (I001)
  - Use modern type annotations (X | Y instead of Union, collections.abc)
  - Remove unused imports (F401)
  - Combine nested if statements (SIM102)
  - Use ternary operators where appropriate (SIM108)
  - Combine nested with statements (SIM117)
  - Use contextlib.suppress instead of try-except-pass (SIM105)
  - Add noqa comments for intentional SIM115 violations (file locks)
  - Add TYPE_CHECKING import for forward references
  - Fix exception chaining (B904)

  All 1033 tests pass.

* docs: add TDD workflow and pre-commit requirements to CLAUDE.md

  - Add mandatory test-driven development workflow (write tests first)
  - Add pre-commit requirements (must run lint and tests before committing)
  - Document test organization and running commands
  - Document the 95% coverage requirement

* fix: resolve mypy type checking errors with proper structural fixes

  - charts.py: Create a PeriodConfig dataclass for type-safe period configuration, use mdates.date2num() for matplotlib datetime handling, fix x-axis limits for single-point charts
  - db.py: Add explicit int() conversion with None handling for SQLite returns
  - env.py: Add class-level type annotations to the Config class
  - html.py: Add a MetricDisplay TypedDict, fix import order, add proper type annotations for table data functions
  - meshcore_client.py: Add a return type annotation

  Update tests to use the new dataclass attribute access and regenerate SVG snapshots. Add a mypy step to the CLAUDE.md pre-commit requirements.

* fix: cast Jinja2 template.render() to str for mypy

  Jinja2's type stubs declare render() as returning Any, but it actually returns str. Wrap it with str() to satisfy mypy's no-any-return check.

* ci: improve workflow security and reliability

  - test.yml: Pin all actions by SHA, add concurrency control to cancel in-progress runs on rapid pushes
  - release-please.yml: Pin the action by SHA, add a 10-minute timeout
  - conftest.py: Fix snapshot_base_time to use an explicit UTC timezone for consistent behavior across CI and local environments

  Regenerate SVG snapshots with UTC-aware timestamps.

* fix: add mypy command to permissions in settings.local.json

* test: add comprehensive script tests with coroutine warning fixes

  - Add tests/scripts/ with tests for the collect_companion, collect_repeater, and render scripts (1135 tests total, 96% coverage)
  - Fix unawaited coroutine warnings by using AsyncMock properly for async functions and an async_context_manager_factory fixture for context managers
  - Add --cov=scripts to the CI workflow and pyproject.toml coverage config
  - Omit scripts/generate_snapshots.py from coverage (dev utility)

* docs: migrate claude setup to codex skills

* feat: migrate dependencies to uv (#31)

* fix: run tests through uv

* test: fix ruff lint issues in tests

  Consolidate patch context managers and clean up unused imports and variables. Use datetime.UTC in snapshot fixtures.

* test: avoid unawaited async mocks in entrypoint tests

* ci: replace codecov with github coverage artifacts

  Add JUnit XML output and a coverage summary in the job output. Upload HTML and XML coverage artifacts (3.12 only) on every run.

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
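The type-specific ID renumbering described in the "fix SVG normalization to preserve axis rendering" commit can be sketched as follows. This is a minimal illustration assuming only the regex patterns quoted in the commit message; the project's actual `normalize_svg_for_snapshot()` is more thorough.

```python
import re


def normalize_svg_for_snapshot(svg: str) -> str:
    """Rewrite matplotlib's hash-based IDs as deterministic, unique ones.

    Each ID class gets its own prefix and a sequential counter, so IDs stay
    unique (axes still render) while being stable across runs (snapshots
    still compare equal).
    """
    counters = {"glyph": 0, "clip": 0, "tick": 0}
    mapping: dict[str, str] = {}

    def renumber(kind: str, old: str) -> str:
        # Assign each distinct original ID one deterministic replacement.
        if old not in mapping:
            mapping[old] = f"{kind}_{counters[kind]}"
            counters[kind] += 1
        return mapping[old]

    # Font glyphs (DejaVuSans-XX), clipPaths (p<hex>), tick marks (m<hex>).
    # Replacing the raw token also rewrites url(#p...) and xlink:href="#p...".
    for kind, pattern in [
        ("glyph", r"DejaVuSans-[0-9a-f]+"),
        ("clip", r"\bp[0-9a-f]{8,}\b"),
        ("tick", r"\bm[0-9a-f]{8,}\b"),
    ]:
        for old in re.findall(pattern, svg):
            svg = svg.replace(old, renumber(kind, old))

    # Blank out the embedded creation timestamp.
    return re.sub(r"<dc:date>[^<]*</dc:date>", "<dc:date></dc:date>", svg)
```

Because the counters advance in document order, two renders of the same chart normalize to byte-identical SVG, which is exactly what snapshot comparison needs.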
211 lines
7.9 KiB
Python
"""Tests for database validation and security functions."""

import pytest

from meshmon.db import (
    VALID_ROLES,
    _validate_role,
    get_available_metrics,
    get_distinct_timestamps,
    get_latest_metrics,
    get_metric_count,
    get_metrics_for_period,
    insert_metric,
    insert_metrics,
)


class TestValidateRole:
    """Tests for _validate_role function."""

    def test_accepts_companion(self):
        """Accepts 'companion' as valid role."""
        result = _validate_role("companion")
        assert result == "companion"

    def test_accepts_repeater(self):
        """Accepts 'repeater' as valid role."""
        result = _validate_role("repeater")
        assert result == "repeater"

    def test_returns_input_on_success(self):
        """Returns the validated role string."""
        for role in VALID_ROLES:
            result = _validate_role(role)
            assert result == role

    def test_rejects_invalid_role(self):
        """Rejects invalid role names."""
        with pytest.raises(ValueError, match="Invalid role"):
            _validate_role("invalid")

    def test_rejects_empty_string(self):
        """Rejects empty string as role."""
        with pytest.raises(ValueError, match="Invalid role"):
            _validate_role("")

    def test_rejects_none(self):
        """Rejects None as role."""
        with pytest.raises(ValueError):
            _validate_role(None)

    def test_case_sensitive(self):
        """Role validation is case-sensitive."""
        with pytest.raises(ValueError, match="Invalid role"):
            _validate_role("Companion")

        with pytest.raises(ValueError, match="Invalid role"):
            _validate_role("REPEATER")

    def test_rejects_whitespace_variants(self):
        """Rejects roles with leading/trailing whitespace."""
        with pytest.raises(ValueError, match="Invalid role"):
            _validate_role(" companion")

        with pytest.raises(ValueError, match="Invalid role"):
            _validate_role("repeater ")

        with pytest.raises(ValueError, match="Invalid role"):
            _validate_role(" companion ")


class TestSqlInjectionPrevention:
    """Tests to verify SQL injection is prevented via role validation."""

    @pytest.mark.parametrize("malicious_role", [
        "'; DROP TABLE metrics; --",
        "admin'; DROP TABLE metrics;--",
        "companion OR 1=1",
        "companion; DELETE FROM metrics",
        "companion' UNION SELECT * FROM db_meta --",
        "companion\"; DROP TABLE metrics; --",
        "1 OR 1=1",
        "companion/*comment*/",
    ])
    def test_insert_metric_rejects_injection(self, initialized_db, malicious_role):
        """insert_metric rejects SQL injection attempts."""
        with pytest.raises(ValueError, match="Invalid role"):
            insert_metric(1000, malicious_role, "test", 1.0, initialized_db)

    @pytest.mark.parametrize("malicious_role", [
        "'; DROP TABLE metrics; --",
        "companion OR 1=1",
    ])
    def test_insert_metrics_rejects_injection(self, initialized_db, malicious_role):
        """insert_metrics rejects SQL injection attempts."""
        with pytest.raises(ValueError, match="Invalid role"):
            insert_metrics(1000, malicious_role, {"test": 1.0}, initialized_db)

    @pytest.mark.parametrize("malicious_role", [
        "'; DROP TABLE metrics; --",
        "companion OR 1=1",
    ])
    def test_get_metrics_for_period_rejects_injection(self, initialized_db, malicious_role):
        """get_metrics_for_period rejects SQL injection attempts."""
        with pytest.raises(ValueError, match="Invalid role"):
            get_metrics_for_period(malicious_role, 0, 100, initialized_db)

    @pytest.mark.parametrize("malicious_role", [
        "'; DROP TABLE metrics; --",
        "companion OR 1=1",
    ])
    def test_get_latest_metrics_rejects_injection(self, initialized_db, malicious_role):
        """get_latest_metrics rejects SQL injection attempts."""
        with pytest.raises(ValueError, match="Invalid role"):
            get_latest_metrics(malicious_role, initialized_db)

    @pytest.mark.parametrize("malicious_role", [
        "'; DROP TABLE metrics; --",
        "companion OR 1=1",
    ])
    def test_get_metric_count_rejects_injection(self, initialized_db, malicious_role):
        """get_metric_count rejects SQL injection attempts."""
        with pytest.raises(ValueError, match="Invalid role"):
            get_metric_count(malicious_role, initialized_db)

    @pytest.mark.parametrize("malicious_role", [
        "'; DROP TABLE metrics; --",
        "companion OR 1=1",
    ])
    def test_get_distinct_timestamps_rejects_injection(self, initialized_db, malicious_role):
        """get_distinct_timestamps rejects SQL injection attempts."""
        with pytest.raises(ValueError, match="Invalid role"):
            get_distinct_timestamps(malicious_role, initialized_db)

    @pytest.mark.parametrize("malicious_role", [
        "'; DROP TABLE metrics; --",
        "companion OR 1=1",
    ])
    def test_get_available_metrics_rejects_injection(self, initialized_db, malicious_role):
        """get_available_metrics rejects SQL injection attempts."""
        with pytest.raises(ValueError, match="Invalid role"):
            get_available_metrics(malicious_role, initialized_db)


class TestValidRolesConstant:
    """Tests for VALID_ROLES constant."""

    def test_contains_companion(self):
        """VALID_ROLES includes 'companion'."""
        assert "companion" in VALID_ROLES

    def test_contains_repeater(self):
        """VALID_ROLES includes 'repeater'."""
        assert "repeater" in VALID_ROLES

    def test_is_tuple(self):
        """VALID_ROLES is immutable (tuple)."""
        assert isinstance(VALID_ROLES, tuple)

    def test_exactly_two_roles(self):
        """There are exactly two valid roles."""
        assert len(VALID_ROLES) == 2


class TestMetricNameValidation:
    """Tests for metric name handling (not validated, but should be handled safely)."""

    def test_metric_name_with_special_chars(self, initialized_db):
        """Metric names with special chars are handled via parameterized queries."""
        # These should work because we use parameterized queries
        insert_metric(1000, "companion", "test.metric", 1.0, initialized_db)
        insert_metric(1001, "companion", "test-metric", 2.0, initialized_db)
        insert_metric(1002, "companion", "test_metric", 3.0, initialized_db)

        metrics = get_available_metrics("companion", initialized_db)
        assert "test.metric" in metrics
        assert "test-metric" in metrics
        assert "test_metric" in metrics

    def test_metric_name_with_spaces(self, initialized_db):
        """Metric names with spaces are handled safely."""
        insert_metric(1000, "companion", "test metric", 1.0, initialized_db)

        metrics = get_available_metrics("companion", initialized_db)
        assert "test metric" in metrics

    def test_metric_name_unicode(self, initialized_db):
        """Unicode metric names are handled safely."""
        insert_metric(1000, "companion", "température", 1.0, initialized_db)
        insert_metric(1001, "companion", "温度", 2.0, initialized_db)

        metrics = get_available_metrics("companion", initialized_db)
        assert "température" in metrics
        assert "温度" in metrics

    def test_empty_metric_name(self, initialized_db):
        """Empty metric name is allowed (not validated)."""
        # Empty string is allowed as metric name
        insert_metric(1000, "companion", "", 1.0, initialized_db)

        metrics = get_available_metrics("companion", initialized_db)
        assert "" in metrics

    def test_very_long_metric_name(self, initialized_db):
        """Very long metric names are handled."""
        long_name = "a" * 1000
        insert_metric(1000, "companion", long_name, 1.0, initialized_db)

        metrics = get_available_metrics("companion", initialized_db)
        assert long_name in metrics