Compare commits

..

7 Commits

Author SHA1 Message Date
Jorijn Schrijvershof
7001ada265 Merge branch 'main' into jorijn-patch-1 2026-01-11 08:37:14 +01:00
renovate[bot]
18ca787f7f chore(deps): update ghcr.io/astral-sh/uv docker tag to v0.9.24 (#61)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-01-11 07:36:05 +00:00
Jorijn Schrijvershof
f727028246 chore: add lockFileMaintenance to update types 2026-01-11 08:35:43 +01:00
Jorijn Schrijvershof
6fc2e762cf chore: enable renovate automerge for patch and digest updates (#64) 2026-01-11 08:30:29 +01:00
renovate[bot]
df0c374b65 chore(deps): update nginx:1.29-alpine docker digest to c083c37 (#62)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-01-11 08:16:01 +01:00
Jorijn Schrijvershof
570a086c8c chore(main): release 0.2.13 (#60) 2026-01-09 09:06:00 +01:00
Jorijn Schrijvershof
3a0306043c chore: drop digest from compose image (#59) 2026-01-09 09:04:27 +01:00
10 changed files with 36 additions and 148 deletions

View File

@@ -1,140 +0,0 @@
---
name: test-engineer
description: Use this agent when you need to write, review, or improve tests for code. This includes unit tests, integration tests, end-to-end tests, and test infrastructure. Also use when evaluating test coverage, identifying untested edge cases, or refactoring existing tests for better maintainability. Examples:\n\n<example>\nContext: User has just written a new function and wants tests for it.\nuser: "I just wrote this battery voltage calculation function, can you help me test it?"\nassistant: "Let me use the test-engineer agent to create comprehensive tests for your battery calculation function."\n<commentary>\nSince the user wants tests written for new code, use the test-engineer agent to create thorough test cases covering normal operation, edge cases, and error conditions.\n</commentary>\n</example>\n\n<example>\nContext: User wants to improve existing test coverage.\nuser: "Our test coverage for the database module is only 60%, can you help improve it?"\nassistant: "I'll use the test-engineer agent to analyze the gaps in your database module tests and create additional test cases."\n<commentary>\nThe user needs help identifying untested code paths and writing new tests, which is exactly what the test-engineer agent specializes in.\n</commentary>\n</example>\n\n<example>\nContext: User has written code and the assistant should proactively suggest testing.\nuser: "Here's my new retry logic with exponential backoff"\nassistant: "I've reviewed your retry logic implementation. Now let me use the test-engineer agent to create tests that verify the backoff behavior, edge cases like max retries, and error handling."\n<commentary>\nAfter code is written, proactively use the test-engineer agent to ensure the new functionality has proper test coverage.\n</commentary>\n</example>
model: opus
---
You are an expert test engineer with deep expertise in software testing methodologies, test-driven development, and quality assurance. Your primary focus is Python testing (pytest, unittest, hypothesis), but you are also proficient in testing frameworks across JavaScript/TypeScript (Jest, Vitest, Mocha), Go, Rust, and other languages.
## Core Expertise
### Testing Principles
- Write tests that are fast, isolated, repeatable, self-validating, and timely (F.I.R.S.T.)
- Follow the Arrange-Act-Assert (AAA) pattern for clear test structure
- Apply the testing pyramid: prioritize unit tests, supplement with integration tests, minimize end-to-end tests
- Test behavior, not implementation details
- Each test should verify one specific behavior
### Python Testing (Primary Focus)
- **pytest**: fixtures, parametrization, markers, conftest.py organization, plugins
- **unittest**: TestCase classes, setUp/tearDown, mock module
- **hypothesis**: property-based testing, strategies, shrinking
- **coverage.py**: measuring and improving test coverage
- **mocking**: unittest.mock, pytest-mock, when and how to mock appropriately
- **async testing**: pytest-asyncio, testing coroutines and async generators
### Test Categories You Handle
1. **Unit Tests**: Isolated function/method testing with mocked dependencies
2. **Integration Tests**: Testing component interactions, database operations, API calls
3. **End-to-End Tests**: Full system testing, UI automation
4. **Property-Based Tests**: Generating test cases to find edge cases
5. **Regression Tests**: Preventing bug recurrence
6. **Performance Tests**: Benchmarking, load testing considerations
## Your Approach
### When Writing Tests
1. Identify the function/module's contract: inputs, outputs, side effects, exceptions
2. List test cases covering:
- Happy path (normal operation)
- Edge cases (empty inputs, boundaries, None/null values)
- Error conditions (invalid inputs, exceptions)
- State transitions (if applicable)
3. Write clear, descriptive test names that explain what is being tested
4. Use fixtures for common setup, parametrize for similar test variations
5. Keep tests independent - no test should depend on another's execution
### When Reviewing Tests
1. Check for missing edge cases and error scenarios
2. Identify flaky tests (time-dependent, order-dependent, external dependencies)
3. Look for over-mocking that makes tests meaningless
4. Verify assertions are specific and meaningful
5. Ensure test names clearly describe what they verify
6. Check for proper cleanup and resource management
### Test Naming Convention
Use descriptive names that explain the scenario:
- `test_<function>_<scenario>_<expected_result>`
- Example: `test_calculate_battery_percentage_at_minimum_voltage_returns_zero`
## Code Quality Standards
### Test Structure
```python
def test_function_name_describes_behavior():
# Arrange - set up test data and dependencies
input_data = create_test_data()
# Act - call the function under test
result = function_under_test(input_data)
# Assert - verify the expected outcome
assert result == expected_value
```
### Fixture Best Practices
- Use fixtures for reusable setup, not for test logic
- Prefer function-scoped fixtures unless sharing is necessary
- Use `yield` for cleanup in fixtures
- Document what each fixture provides
### Mocking Guidelines
- Mock at the boundary (external services, databases, file systems)
- Don't mock the thing you're testing
- Verify mock calls when the interaction itself is the behavior being tested
- Use `autospec=True` to catch interface mismatches
## Edge Cases to Always Consider
### For Numeric Functions
- Zero, negative numbers, very large numbers
- Floating point precision issues
- Integer overflow (in typed languages)
- Division by zero scenarios
### For String/Text Functions
- Empty strings, whitespace-only strings
- Unicode characters, emoji, RTL text
- Very long strings
- Special characters and escape sequences
### For Collections
- Empty collections
- Single-element collections
- Very large collections
- None/null elements within collections
- Duplicate elements
### For Time/Date Functions
- Timezone boundaries, DST transitions
- Leap years, month boundaries
- Unix epoch edge cases
- Far future/past dates
### For I/O Operations
- File not found, permission denied
- Network timeouts, connection failures
- Partial reads/writes
- Concurrent access
## Output Format
When writing tests, provide:
1. Complete, runnable test code
2. Brief explanation of what each test verifies
3. Any additional test cases that should be considered
4. Required fixtures or test utilities
When reviewing tests, provide:
1. Specific issues found with line references
2. Missing test cases that should be added
3. Suggested improvements with code examples
4. Overall assessment of test quality and coverage
## Project-Specific Considerations
When working in projects with existing test conventions:
- Follow the established test file organization
- Use existing fixtures and utilities where appropriate
- Match the naming conventions already in use
- Respect any project-specific testing requirements from documentation like CLAUDE.md

View File

@@ -1,3 +1,3 @@
{
".": "0.2.12"
".": "0.2.13"
}

View File

@@ -370,7 +370,7 @@ Jobs configured in `docker/ofelia.ini`:
**Nightly rebuilds** ensure version tags always include the latest OS security patches. This is a common pattern used by official Docker images (nginx, postgres, node). Users needing reproducibility should pin by SHA digest or use dated nightly tags.
GitHub Actions use version tags in workflows, and Renovate is configured in `renovate.json` to pin action digests and maintain lockfiles.
GitHub Actions use version tags in workflows, and Renovate is configured in `renovate.json` to pin action digests, maintain lockfiles, and auto-merge patch + digest updates once required checks pass (with automatic rebases when behind `main`).
The test and lint workflow (`.github/workflows/test.yml`) installs dependencies with uv (`uv sync --locked --extra dev`) and runs commands via `uv run`, using `uv.lock` as the source of truth.

View File

@@ -4,6 +4,13 @@ All notable changes to this project will be documented in this file.
This changelog is automatically generated by [release-please](https://github.com/googleapis/release-please) based on [Conventional Commits](https://www.conventionalcommits.org/).
## [0.2.13](https://github.com/jorijn/meshcore-stats/compare/v0.2.12...v0.2.13) (2026-01-09)
### Miscellaneous Chores
* drop digest from compose image ([#59](https://github.com/jorijn/meshcore-stats/issues/59)) ([3a03060](https://github.com/jorijn/meshcore-stats/commit/3a0306043c8a8dfeb1b5b6df6fa988322cc64e98))
## [0.2.12](https://github.com/jorijn/meshcore-stats/compare/v0.2.11...v0.2.12) (2026-01-09)

View File

@@ -1,7 +1,7 @@
# =============================================================================
# Stage 0: uv binary
# =============================================================================
FROM ghcr.io/astral-sh/uv:0.9.22@sha256:2320e6c239737dc73cccce393a8bb89eba2383d17018ee91a59773df802c20e6 AS uv
FROM ghcr.io/astral-sh/uv:0.9.24@sha256:816fdce3387ed2142e37d2e56e1b1b97ccc1ea87731ba199dc8a25c04e4997c5 AS uv
# =============================================================================
# Stage 1: Build dependencies

View File

@@ -15,7 +15,7 @@ services:
# MeshCore Stats - Data collection and rendering
# ==========================================================================
meshcore-stats:
image: ghcr.io/jorijn/meshcore-stats:0.2.12@sha256:82ca2230abba7d8846315a4fa09f1a2407695caf73fef029e6e6ea83c60c4290 # x-release-please-version
image: ghcr.io/jorijn/meshcore-stats:0.2.13 # x-release-please-version
container_name: meshcore-stats
restart: unless-stopped
@@ -78,7 +78,7 @@ services:
# nginx - Static site server
# ==========================================================================
nginx:
image: nginx:1.29-alpine@sha256:8491795299c8e739b7fcc6285d531d9812ce2666e07bd3dd8db00020ad132295
image: nginx:1.29-alpine@sha256:c083c3799197cfff91fe5c3c558db3d2eea65ccbbfd419fa42a64d2c39a24027
container_name: meshcore-stats-nginx
restart: unless-stopped

View File

@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
[project]
name = "meshcore-stats"
version = "0.2.12"
version = "0.2.13"
description = "MeshCore LoRa mesh network monitoring and statistics"
readme = "README.md"
requires-python = ">=3.11"

View File

@@ -3,6 +3,9 @@
"extends": [
"config:best-practices"
],
"rebaseWhen": "behind-base-branch",
"automergeType": "pr",
"platformAutomerge": true,
"lockFileMaintenance": {
"enabled": true
},
@@ -13,6 +16,24 @@
"github-actions"
],
"pinDigests": true
},
{
"matchManagers": [
"docker-compose"
],
"matchPackageNames": [
"ghcr.io/jorijn/meshcore-stats"
],
"pinDigests": false
},
{
"description": "Auto-merge patch and digest updates once checks pass",
"matchUpdateTypes": [
"patch",
"digest",
"lockFileMaintenance"
],
"automerge": true
}
]
}

View File

@@ -1,3 +1,3 @@
"""MeshCore network monitoring library."""
__version__ = "0.2.12" # x-release-please-version
__version__ = "0.2.13" # x-release-please-version

2
uv.lock generated
View File

@@ -752,7 +752,7 @@ wheels = [
[[package]]
name = "meshcore-stats"
version = "0.2.12"
version = "0.2.13"
source = { editable = "." }
dependencies = [
{ name = "jinja2" },