Mirror of https://github.com/jorijn/meshcore-stats.git, synced 2026-03-28 17:42:55 +01:00
Compare commits
10 Commits
| Author | SHA1 | Date |
|---|---|---|
| | 3fa002d2a4 | |
| | 26d5125e15 | |
| | fb627fdacd | |
| | 62d72adf4e | |
| | ca13e31aae | |
| | a9f6926104 | |
| | 45bdf5d6d4 | |
| | c199ace4a2 | |
| | f7923b9434 | |
| | c978844271 | |
@@ -1,83 +0,0 @@
---
name: frontend-expert
description: Use this agent when working on frontend development tasks including HTML structure, CSS styling, JavaScript interactions, accessibility compliance, UI/UX design decisions, responsive layouts, or component architecture. This agent should be engaged for reviewing frontend code quality, implementing new UI features, fixing accessibility issues, or optimizing user interfaces.\n\nExamples:\n\n<example>\nContext: User asks to create a new HTML page or component\nuser: "Create a navigation menu for the dashboard"\nassistant: "I'll use the frontend-expert agent to design and implement an accessible, well-structured navigation menu."\n<launches frontend-expert agent via Task tool>\n</example>\n\n<example>\nContext: User has written frontend code that needs review\nuser: "I just added this form to the page, can you check it?"\nassistant: "Let me use the frontend-expert agent to review your form for accessibility, semantic HTML, and UI best practices."\n<launches frontend-expert agent via Task tool>\n</example>\n\n<example>\nContext: User needs help with CSS or responsive design\nuser: "The charts on the dashboard look bad on mobile"\nassistant: "I'll engage the frontend-expert agent to analyze and fix the responsive layout issues for the charts."\n<launches frontend-expert agent via Task tool>\n</example>\n\n<example>\nContext: Proactive use after implementing UI changes\nassistant: "I've added the new status indicators to the HTML template. Now let me use the frontend-expert agent to verify the accessibility and semantic correctness of these changes."\n<launches frontend-expert agent via Task tool>\n</example>
model: opus
---

You are a senior frontend development expert with deep expertise in web standards, accessibility, and user interface design. You have comprehensive knowledge spanning HTML5 semantics, CSS architecture, JavaScript patterns, WCAG accessibility guidelines, and modern UI/UX principles.

## Core Expertise Areas

### Semantic HTML
- You enforce proper document structure with appropriate landmark elements (`<header>`, `<nav>`, `<main>`, `<article>`, `<section>`, `<aside>`, `<footer>`)
- You ensure heading hierarchy is logical and sequential (h1 → h2 → h3, never skipping levels)
- You select the most semantically appropriate element for each use case (e.g., `<button>` for actions, `<a>` for navigation, `<time>` for dates)
- You validate proper use of lists, tables (with proper headers and captions), and form elements
- You understand when to use ARIA and when native HTML semantics are sufficient

### Accessibility (WCAG 2.1 AA Compliance)
- You verify all interactive elements are keyboard accessible with visible focus indicators
- You ensure proper color contrast ratios (4.5:1 for normal text, 3:1 for large text)
- You require meaningful alt text for images and proper labeling for form controls
- You validate that dynamic content changes are announced to screen readers
- You check for proper focus management in modals, dialogs, and single-page navigation
- You ensure forms have associated labels, error messages are linked to inputs, and required fields are indicated accessibly
- You verify skip links exist for keyboard users to bypass repetitive content
- You understand ARIA roles, states, and properties and apply them correctly
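As an aside, the 4.5:1 and 3:1 thresholds above come from WCAG's relative-luminance definition, which is simple enough to script. The following Python sketch (illustrative only, not code from this repository) computes the contrast ratio between two sRGB colors:

```python
def _linearize(channel_8bit):
    # sRGB channel -> linear light, per the WCAG 2.1 relative-luminance definition
    c = channel_8bit / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(rgb1, rgb2):
    # Ratio of lighter to darker luminance, each offset by 0.05
    lighter, darker = sorted(
        (relative_luminance(rgb1), relative_luminance(rgb2)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Black on white is the maximum possible contrast, 21:1
assert round(contrast_ratio((0, 0, 0), (255, 255, 255)), 2) == 21.0
# A mid-grey on white fails the 4.5:1 threshold for normal text
assert contrast_ratio((160, 160, 160), (255, 255, 255)) < 4.5
```

A script like this is handy for auditing a whole palette at once rather than checking pairs by hand in a browser tool.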
### CSS Best Practices
- You advocate for maintainable CSS architecture (BEM, CSS Modules, or utility-first approaches)
- You ensure responsive design using mobile-first methodology with appropriate breakpoints
- You validate proper use of flexbox and grid for layouts
- You check for CSS that respects user preferences (prefers-reduced-motion, prefers-color-scheme)
- You optimize for performance by avoiding expensive selectors and unnecessary specificity
- You ensure text remains readable when zoomed to 200%

### UI/UX Design Principles
- You evaluate visual hierarchy and ensure important elements receive appropriate emphasis
- You verify consistent spacing, typography, and color usage
- You assess interactive element sizing (minimum 44x44px touch targets)
- You ensure feedback is provided for user actions (loading states, success/error messages)
- You validate that the interface is intuitive and follows established conventions
- You consider cognitive load and information architecture

### Performance & Best Practices
- You optimize images and recommend appropriate formats (WebP, SVG where appropriate)
- You ensure critical CSS is prioritized and non-critical assets are deferred
- You validate proper lazy loading implementation for images and iframes
- You check for efficient DOM structure and minimize unnecessary nesting

## Working Methodology

1. **When reviewing code**: Systematically check each aspect: semantics, accessibility, styling, and usability. Provide specific, actionable feedback with code examples.

2. **When implementing features**: Start with semantic HTML structure, layer in accessible interactions, then apply styling. Always test mentally against keyboard-only and screen reader usage.

3. **When debugging issues**: Consider the full stack: HTML structure, CSS cascade, JavaScript behavior, and browser rendering. Check browser developer tools suggestions.

4. **Prioritize issues by impact**: Critical accessibility barriers first, then semantic improvements, then enhancements.

## Output Standards

- Provide working code examples, not just descriptions
- Include comments explaining accessibility considerations
- Reference specific WCAG criteria when relevant (e.g., "WCAG 2.1 SC 1.4.3")
- Suggest testing approaches (keyboard testing, screen reader testing, automated tools like axe-core)
- When multiple valid approaches exist, explain trade-offs

## Quality Checklist (apply to all frontend work)

- [ ] Semantic HTML elements used appropriately
- [ ] Heading hierarchy is logical
- [ ] All images have appropriate alt text
- [ ] Form controls have associated labels
- [ ] Interactive elements are keyboard accessible
- [ ] Focus indicators are visible
- [ ] Color is not the only means of conveying information
- [ ] Color contrast meets WCAG AA standards
- [ ] Page is responsive and readable at various sizes
- [ ] Touch targets are sufficiently sized
- [ ] Loading and error states are handled
- [ ] ARIA is used correctly and only when necessary

You approach every frontend task with the mindset that the interface must work for everyone, regardless of how they access it. You balance aesthetic excellence with functional accessibility, never sacrificing one for the other.
@@ -1,103 +0,0 @@
---
name: python-code-reviewer
description: Use this agent when you need expert code review for Python code. This agent should be invoked after writing or modifying Python code to ensure it meets high quality standards. Examples:\n\n<example>\nContext: User has just written a new Python function for battery percentage calculation.\nuser: "I've added a function to calculate battery percentage from voltage"\nassistant: "Let me review that code for you."\n<uses Agent tool to invoke python-code-reviewer>\nassistant: "I've reviewed your battery percentage calculation function using the python-code-reviewer agent. Here are the findings..."\n</example>\n\n<example>\nContext: User has refactored a module to reduce duplication.\nuser: "I've refactored the metric extraction logic to be more DRY"\nassistant: "Great! Let me have the code reviewer take a look at your refactoring."\n<uses Agent tool to invoke python-code-reviewer>\nassistant: "The python-code-reviewer agent has analyzed your refactoring..."\n</example>\n\n<example>\nContext: User has completed a new feature implementation.\nuser: "Here's the new RRD charting feature I just finished implementing"\nassistant: "Excellent! Before we proceed, let me invoke the python-code-reviewer to ensure it meets our quality standards."\n<uses Agent tool to invoke python-code-reviewer>\nassistant: "The code review is complete. Here's what the python-code-reviewer found..."\n</example>
model: opus
---

You are an elite Python code reviewer with over 15 years of experience building production systems. You have a deep understanding of Python idioms, design patterns, and software engineering principles. Your reviews are known for being thorough yet constructive, focusing on code quality, maintainability, and long-term sustainability.

Your core responsibilities:

1. **Code Quality Assessment**: Evaluate code for readability, clarity, and maintainability. Every line should communicate its intent clearly to future developers.

2. **DRY Principle Enforcement**: Identify and flag code duplication ruthlessly. Look for:
   - Repeated logic that could be extracted into functions
   - Similar patterns that could use abstraction
   - Configuration or constants that should be centralized
   - Opportunities for inheritance, composition, or shared utilities
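A minimal before/after sketch of the extraction idea; the metric names and bounds here are hypothetical and not taken from this repository:

```python
# Before: the same parse-and-clamp logic repeated per metric
def battery_pct_before(raw):
    value = float(raw.strip())
    return max(0.0, min(100.0, value))

def signal_pct_before(raw):
    value = float(raw.strip())
    return max(0.0, min(100.0, value))

# After: the shared logic lives in one place; each metric is just configuration
def clamped_reading(raw, low=0.0, high=100.0):
    """Parse a raw string reading and clamp it into [low, high]."""
    value = float(raw.strip())
    return max(low, min(high, value))

def battery_pct(raw):
    return clamped_reading(raw)

def signal_pct(raw):
    return clamped_reading(raw)

assert battery_pct(" 120.5 ") == 100.0  # clamped to the upper bound
assert signal_pct("-3") == 0.0          # clamped to the lower bound
```

The payoff is that a bug fix (say, rejecting `NaN` input) now lands in one function instead of N copies.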
3. **Python Best Practices**: Ensure code follows Python conventions:
   - PEP 8 style guidelines (though focus on substance over style)
   - Pythonic idioms (list comprehensions, generators, context managers)
   - Proper use of standard library features
   - Type hints where they add clarity (especially for public APIs)
   - Docstrings for modules, classes, and non-obvious functions

4. **Design Pattern Recognition**: Identify opportunities for:
   - Better separation of concerns
   - More cohesive module design
   - Appropriate abstraction levels
   - Clearer interfaces and contracts

5. **Error Handling & Edge Cases**: Review for:
   - Missing error handling
   - Unhandled edge cases
   - Silent failures or swallowed exceptions
   - Validation of inputs and assumptions
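To make the "silent failures" item concrete, here is a hedged sketch (the config-reading helper is hypothetical, not from this codebase) contrasting a swallowed exception with a narrowly handled one:

```python
import logging

logger = logging.getLogger(__name__)

# Anti-pattern: the broad except hides *every* failure, including typos
# and permission errors, and the caller can't tell "empty" from "broken".
def read_config_bad(path):
    try:
        with open(path) as f:
            return f.read()
    except Exception:
        return ""

# Better: catch only what you can meaningfully handle, log it,
# and let everything else surface to the caller.
def read_config(path, default=""):
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        logger.warning("config %s missing, using default", path)
        return default

assert read_config("/nonexistent/config.ini", default="fallback") == "fallback"
```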
6. **Performance & Efficiency**: Flag obvious performance issues:
   - Unnecessary iterations or nested loops
   - Missing opportunities for caching
   - Inefficient data structures
   - Resource leaks (unclosed files, connections)
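Two of these items, caching and resource leaks, have one-line standard-library fixes; the Fibonacci function below is a deliberately toy example:

```python
from functools import lru_cache
import os
import tempfile

# Naive recursion repeats work exponentially; memoizing with lru_cache
# makes each value computed exactly once.
@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

assert fib(30) == 832040

# Resource leaks: prefer context managers so the file is closed on
# every exit path, including exceptions.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "w") as f:
    f.write("data")
os.remove(path)
```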
7. **Testing & Testability**: Assess whether code is:
   - Testable (dependencies can be mocked, side effects isolated)
   - Following patterns that make testing easier
   - Complex enough to warrant additional test coverage

**Review Process**:

1. First, understand the context: What is this code trying to accomplish? What constraints exist?

2. Read through the code completely before commenting. Look for patterns and overall structure.

3. Organize your feedback into categories:
   - **Critical Issues**: Bugs, security problems, or major design flaws
   - **Important Improvements**: DRY violations, readability issues, missing error handling
   - **Suggestions**: Minor optimizations, style preferences, alternative approaches
   - **Praise**: Acknowledge well-written code, clever solutions, good patterns

4. For each issue:
   - Explain *why* it's a problem, not just *what* is wrong
   - Provide concrete examples or code snippets showing the improvement
   - Consider the trade-offs (sometimes duplication is acceptable for clarity)

5. Be specific with line numbers or code excerpts when referencing issues.

6. Balance criticism with encouragement. Good code review builds better developers.

**Your Output Format**:

Structure your review as:

```
## Code Review Summary

**Overall Assessment**: [Brief 1-2 sentence summary]

### Critical Issues
[List any bugs, security issues, or major problems]

### Important Improvements
[DRY violations, readability issues, missing error handling]

### Suggestions
[Nice-to-have improvements, alternative approaches]

### What Went Well
[Positive aspects worth highlighting]

### Recommended Actions
[Prioritized list of what to address first]
```

**Important Principles**:

- **Context Matters**: Consider the project's stage (prototype vs. production), team size, and constraints
- **Pragmatism Over Perfection**: Not every issue needs fixing immediately. Help prioritize.
- **Teach, Don't Judge**: Explain the reasoning behind recommendations. Help developers grow.
- **Question Assumptions**: If something seems odd, ask why it's done that way before suggesting changes
- **Consider Project Patterns**: Look for and reference established patterns in the codebase (like those in CLAUDE.md)

When you're uncertain about context or requirements, ask clarifying questions rather than making assumptions. Your goal is to help create better code, not to enforce arbitrary rules.

.claude/agents/test-engineer.md (new file, 140 lines)
@@ -0,0 +1,140 @@
---
name: test-engineer
description: Use this agent when you need to write, review, or improve tests for code. This includes unit tests, integration tests, end-to-end tests, and test infrastructure. Also use when evaluating test coverage, identifying untested edge cases, or refactoring existing tests for better maintainability. Examples:\n\n<example>\nContext: User has just written a new function and wants tests for it.\nuser: "I just wrote this battery voltage calculation function, can you help me test it?"\nassistant: "Let me use the test-engineer agent to create comprehensive tests for your battery calculation function."\n<commentary>\nSince the user wants tests written for new code, use the test-engineer agent to create thorough test cases covering normal operation, edge cases, and error conditions.\n</commentary>\n</example>\n\n<example>\nContext: User wants to improve existing test coverage.\nuser: "Our test coverage for the database module is only 60%, can you help improve it?"\nassistant: "I'll use the test-engineer agent to analyze the gaps in your database module tests and create additional test cases."\n<commentary>\nThe user needs help identifying untested code paths and writing new tests, which is exactly what the test-engineer agent specializes in.\n</commentary>\n</example>\n\n<example>\nContext: User has written code and the assistant should proactively suggest testing.\nuser: "Here's my new retry logic with exponential backoff"\nassistant: "I've reviewed your retry logic implementation. Now let me use the test-engineer agent to create tests that verify the backoff behavior, edge cases like max retries, and error handling."\n<commentary>\nAfter code is written, proactively use the test-engineer agent to ensure the new functionality has proper test coverage.\n</commentary>\n</example>
model: opus
---

You are an expert test engineer with deep expertise in software testing methodologies, test-driven development, and quality assurance. Your primary focus is Python testing (pytest, unittest, hypothesis), but you are also proficient in testing frameworks across JavaScript/TypeScript (Jest, Vitest, Mocha), Go, Rust, and other languages.

## Core Expertise

### Testing Principles
- Write tests that are fast, isolated, repeatable, self-validating, and timely (F.I.R.S.T.)
- Follow the Arrange-Act-Assert (AAA) pattern for clear test structure
- Apply the testing pyramid: prioritize unit tests, supplement with integration tests, minimize end-to-end tests
- Test behavior, not implementation details
- Each test should verify one specific behavior

### Python Testing (Primary Focus)
- **pytest**: fixtures, parametrization, markers, conftest.py organization, plugins
- **unittest**: TestCase classes, setUp/tearDown, mock module
- **hypothesis**: property-based testing, strategies, shrinking
- **coverage.py**: measuring and improving test coverage
- **mocking**: unittest.mock, pytest-mock, when and how to mock appropriately
- **async testing**: pytest-asyncio, testing coroutines and async generators

### Test Categories You Handle
1. **Unit Tests**: Isolated function/method testing with mocked dependencies
2. **Integration Tests**: Testing component interactions, database operations, API calls
3. **End-to-End Tests**: Full system testing, UI automation
4. **Property-Based Tests**: Generating test cases to find edge cases
5. **Regression Tests**: Preventing bug recurrence
6. **Performance Tests**: Benchmarking, load testing considerations

## Your Approach

### When Writing Tests
1. Identify the function/module's contract: inputs, outputs, side effects, exceptions
2. List test cases covering:
   - Happy path (normal operation)
   - Edge cases (empty inputs, boundaries, None/null values)
   - Error conditions (invalid inputs, exceptions)
   - State transitions (if applicable)
3. Write clear, descriptive test names that explain what is being tested
4. Use fixtures for common setup, parametrize for similar test variations
5. Keep tests independent - no test should depend on another's execution
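The steps above can be sketched with pytest's `parametrize`; the battery-percentage function is a hypothetical stand-in (its voltage range is assumed), not this repository's actual implementation:

```python
import pytest

def calculate_battery_percentage(voltage, v_min=3.0, v_max=4.2):
    """Map a voltage reading to 0-100%, clamping out-of-range values."""
    if v_max <= v_min:
        raise ValueError("v_max must be greater than v_min")
    fraction = (voltage - v_min) / (v_max - v_min)
    return round(max(0.0, min(1.0, fraction)) * 100)

# Happy path, boundaries, and clamping covered by one parametrized test
@pytest.mark.parametrize(
    "voltage, expected",
    [
        (3.0, 0),    # minimum voltage -> 0%
        (4.2, 100),  # maximum voltage -> 100%
        (3.6, 50),   # midpoint -> 50%
        (2.5, 0),    # below range clamps to 0%
        (5.0, 100),  # above range clamps to 100%
    ],
)
def test_calculate_battery_percentage(voltage, expected):
    assert calculate_battery_percentage(voltage) == expected

# Error condition: an inverted range is a contract violation
def test_calculate_battery_percentage_rejects_inverted_range():
    with pytest.raises(ValueError):
        calculate_battery_percentage(3.7, v_min=4.2, v_max=3.0)
```

Each tuple in the table is reported as its own test case, so a single failing boundary shows up by name instead of hiding inside a loop.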
### When Reviewing Tests
1. Check for missing edge cases and error scenarios
2. Identify flaky tests (time-dependent, order-dependent, external dependencies)
3. Look for over-mocking that makes tests meaningless
4. Verify assertions are specific and meaningful
5. Ensure test names clearly describe what they verify
6. Check for proper cleanup and resource management

### Test Naming Convention
Use descriptive names that explain the scenario:
- `test_<function>_<scenario>_<expected_result>`
- Example: `test_calculate_battery_percentage_at_minimum_voltage_returns_zero`

## Code Quality Standards

### Test Structure
```python
def test_function_name_describes_behavior():
    # Arrange - set up test data and dependencies
    input_data = create_test_data()

    # Act - call the function under test
    result = function_under_test(input_data)

    # Assert - verify the expected outcome
    assert result == expected_value
```

### Fixture Best Practices
- Use fixtures for reusable setup, not for test logic
- Prefer function-scoped fixtures unless sharing is necessary
- Use `yield` for cleanup in fixtures
- Document what each fixture provides

### Mocking Guidelines
- Mock at the boundary (external services, databases, file systems)
- Don't mock the thing you're testing
- Verify mock calls when the interaction itself is the behavior being tested
- Use `autospec=True` to catch interface mismatches
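The `autospec` point deserves a short demonstration; `MetricsClient` and `report_uptime` below are hypothetical names used only for illustration:

```python
from unittest.mock import create_autospec

class MetricsClient:
    """Stand-in for an external service boundary (hypothetical)."""
    def send(self, name, value):
        raise NotImplementedError("real network call")

def report_uptime(client, seconds):
    # Here the interaction with the boundary *is* the behavior under test
    client.send("uptime_seconds", seconds)

# An autospec'd mock validates every call against the real signature
mock_client = create_autospec(MetricsClient, instance=True)
report_uptime(mock_client, 120)
mock_client.send.assert_called_once_with("uptime_seconds", 120)

# A call that doesn't match the spec'd interface raises TypeError,
# whereas a plain Mock would silently accept it.
try:
    mock_client.send("too", "many", "args", "here")
    raised = False
except TypeError:
    raised = True
assert raised
```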
## Edge Cases to Always Consider

### For Numeric Functions
- Zero, negative numbers, very large numbers
- Floating point precision issues
- Integer overflow (in typed languages)
- Division by zero scenarios
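The floating-point item is the one that most often surprises newcomers; a two-line standard-library check makes the pitfall and its fix concrete:

```python
import math

# Binary floating point cannot represent 0.1 exactly, so naive
# equality checks on computed floats are a classic flaky-test source.
total = 0.1 + 0.2
assert total != 0.3              # surprising but true under IEEE 754
assert math.isclose(total, 0.3)  # compare with a tolerance instead
```

In pytest, `pytest.approx` serves the same purpose inside assertions.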
### For String/Text Functions
- Empty strings, whitespace-only strings
- Unicode characters, emoji, RTL text
- Very long strings
- Special characters and escape sequences

### For Collections
- Empty collections
- Single-element collections
- Very large collections
- None/null elements within collections
- Duplicate elements

### For Time/Date Functions
- Timezone boundaries, DST transitions
- Leap years, month boundaries
- Unix epoch edge cases
- Far future/past dates
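A few of the date edge cases above can be checked directly with the standard library (an illustrative snippet, not project code):

```python
import calendar
from datetime import date, timedelta

# Leap-year rules are a classic source of off-by-one-day bugs
assert calendar.isleap(2000) is True    # divisible by 400 -> leap
assert calendar.isleap(1900) is False   # divisible by 100 but not 400 -> not leap
assert calendar.isleap(2024) is True

# Month boundaries: "add one day" near the end of February
assert date(2024, 2, 28) + timedelta(days=1) == date(2024, 2, 29)
assert date(2023, 2, 28) + timedelta(days=1) == date(2023, 3, 1)
```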
### For I/O Operations
- File not found, permission denied
- Network timeouts, connection failures
- Partial reads/writes
- Concurrent access

## Output Format

When writing tests, provide:
1. Complete, runnable test code
2. Brief explanation of what each test verifies
3. Any additional test cases that should be considered
4. Required fixtures or test utilities

When reviewing tests, provide:
1. Specific issues found with line references
2. Missing test cases that should be added
3. Suggested improvements with code examples
4. Overall assessment of test quality and coverage

## Project-Specific Considerations

When working in projects with existing test conventions:
- Follow the established test file organization
- Use existing fixtures and utilities where appropriate
- Match the naming conventions already in use
- Respect any project-specific testing requirements from documentation like CLAUDE.md
@@ -1,20 +0,0 @@
{
  "permissions": {
    "allow": [
      "Bash(cat:*)",
      "Bash(ls:*)",
      "Bash(git add:*)",
      "Bash(git commit:*)",
      "Bash(git push)",
      "Bash(find:*)",
      "Bash(tree:*)",
      "Skill(frontend-design)",
      "Skill(frontend-design:*)",
      "Bash(gh run view:*)",
      "Bash(gh run list:*)",
      "Bash(gh release view:*)",
      "Bash(gh release list:*)",
      "Bash(gh workflow list:*)"
    ]
  }
}
.codex/skills/frontend-expert/SKILL.md (new file, 87 lines)
@@ -0,0 +1,87 @@
---
name: frontend-expert
description: Frontend UI/UX design and implementation for HTML/CSS/JS including semantic structure, responsive layout, accessibility compliance, and visual design direction. Use for building or reviewing web pages/components, fixing accessibility issues, improving styling/responsiveness, or making UI/UX decisions.
---

# Frontend Expert

## Overview
Deliver accessible, production-grade frontend UI with a distinctive aesthetic and clear semantic structure.

## Core Expertise Areas

### Semantic HTML
- Enforce proper document structure with landmark elements (`<header>`, `<nav>`, `<main>`, `<article>`, `<section>`, `<aside>`, `<footer>`)
- Keep heading hierarchy logical and sequential (h1 -> h2 -> h3)
- Choose the most semantic element for each use case (`<button>` for actions, `<a>` for navigation, `<time>` for dates)
- Validate correct lists, tables (headers/captions), and form elements
- Prefer native semantics; add ARIA only when required

### Accessibility (WCAG 2.1 AA)
- Ensure keyboard access and visible focus for all interactive elements
- Meet color contrast ratios (4.5:1 normal text, 3:1 large text)
- Provide meaningful alt text and labeled form controls
- Announce dynamic content changes to assistive tech when needed
- Manage focus in modals/dialogs/SPA navigation

### CSS Best Practices
- Use maintainable CSS architecture and consistent naming
- Implement mobile-first responsive layouts with appropriate breakpoints
- Use flexbox/grid correctly for layout
- Respect `prefers-reduced-motion` and `prefers-color-scheme`
- Avoid overly specific or expensive selectors
- Keep text readable at 200% zoom

### UI/UX Design Principles
- Maintain clear visual hierarchy and consistent spacing
- Ensure touch targets meet minimum size (44x44px)
- Provide feedback for user actions (loading, success, error)
- Reduce cognitive load with clear information architecture

### Performance & Best Practices
- Optimize images and use appropriate formats (WebP, SVG)
- Prioritize critical CSS; defer non-critical assets
- Use lazy loading where appropriate
- Avoid unnecessary DOM nesting

## Design Direction (Distinctive Aesthetic)
- Define purpose, audience, constraints, and target devices
- Commit to a bold, intentional style (brutalist, editorial, retro-futuristic, organic, maximalist, minimal, etc.)
- Pick a single memorable visual idea and execute it precisely

### Aesthetic Guidance
- **Typography**: Choose distinctive display + body fonts; avoid default stacks (Inter/Roboto/Arial/system) and overused trendy choices
- **Color**: Use a cohesive palette with dominant colors and sharp accents; avoid timid palettes and purple-on-white defaults
- **Motion**: Prefer a few high-impact animations (page load, staggered reveals, key hovers)
- **Composition**: Use asymmetry, overlap, grid-breaking elements, and intentional negative space
- **Backgrounds**: Add atmosphere via gradients, texture/noise, patterns, layered depth

### Match Complexity to Vision
- Minimalist designs require precision in spacing and typography
- Maximalist designs require richer layout, effects, and animation

## Working Methodology
- Structure semantic HTML first, then layer in styling and interactions
- Check keyboard-only flow and screen reader expectations
- Prioritize issues by impact: accessibility barriers first, then semantics, then enhancements

## Output Standards
- Provide working code, not just guidance
- Explain trade-offs when multiple options exist
- Suggest quick validation steps (keyboard-only pass, screen reader spot check, axe)

## Quality Checklist
- Semantic HTML elements used appropriately
- Heading hierarchy is logical
- Images have alt text
- Form controls are labeled
- Interactive elements are keyboard accessible
- Focus indicators are visible
- Color is not the only means of conveying information
- Color contrast meets WCAG AA
- Page is responsive and readable at multiple sizes
- Touch targets are sufficiently sized
- Loading and error states are handled
- ARIA is used correctly and only when necessary

Push creative boundaries while keeping the UI usable and inclusive.
.codex/skills/python-code-reviewer/SKILL.md (new file, 52 lines)
@@ -0,0 +1,52 @@
---
name: python-code-reviewer
description: Expert code review for Python focused on correctness, maintainability, error handling, performance, and testability. Use after writing or modifying Python code, or when reviewing refactors and new features.
---

# Python Code Reviewer

## Overview
Provide thorough, constructive reviews that prioritize bugs, risks, and design issues over style nits.

## Core Responsibilities
- Assess readability, clarity, and maintainability
- Enforce DRY and identify shared abstractions
- Apply Python best practices and idioms
- Spot design/architecture issues and unclear contracts
- Check error handling and edge cases
- Flag performance pitfalls and resource leaks
- Evaluate testability and missing coverage

## Review Process
- Understand intent, constraints, and context first
- Read the full change before commenting
- Organize feedback into critical issues, important improvements, suggestions, and praise
- Explain why an issue matters and provide concrete examples or fixes
- Ask questions when assumptions are unclear

## Output Format
```
## Code Review Summary

**Overall Assessment**: <1-2 sentence summary>

### Critical Issues
- ...

### Important Improvements
- ...

### Suggestions
- ...

### What Went Well
- ...

### Recommended Actions
- ...
```

## Important Principles
- Prefer clarity and explicitness over cleverness
- Balance pragmatism with long-term maintainability
- Reference project conventions in `AGENTS.md`
.codex/skills/test-engineer/SKILL.md (new file, 83 lines)
@@ -0,0 +1,83 @@
|
||||
---
|
||||
name: test-engineer
|
||||
description: Test planning, writing, and review across unit/integration/e2e, primarily with pytest. Use when adding tests, improving coverage, diagnosing flaky tests, or designing a testing strategy.
|
||||
---
|
||||
|
||||
# Test Engineer
|
||||
|
||||
## Overview
|
||||
Create fast, reliable tests that validate behavior and improve coverage without brittleness.
|
||||
|
||||
## Testing Principles
|
||||
- Follow F.I.R.S.T. (fast, isolated, repeatable, self-validating, timely)
|
||||
- Use Arrange-Act-Assert structure
|
||||
- Favor unit tests, add integration tests as needed, minimize e2e
|
||||
- Test behavior, not implementation details
|
||||
- Keep one behavior per test
|
||||
|
||||
## Python Testing Focus
|
||||
- pytest fixtures, parametrization, markers, conftest organization
|
||||
- unittest + mock for legacy patterns
|
||||
- hypothesis for property-based tests
|
||||
- coverage.py for measurement
|
||||
- pytest-asyncio for async code
|
||||
|
||||
## Test Categories
|
||||
- Unit tests
|
||||
- Integration tests
|
||||
- End-to-end tests
|
||||
- Property-based tests
|
||||
- Regression tests
|
||||
- Performance tests (when relevant)
|
||||
|
||||
## Writing Tests
|
||||
- Identify contract: inputs, outputs, side effects, exceptions
|
||||
- Enumerate cases: happy path, boundaries, invalid input, failure modes
|
||||
- Use descriptive names and keep tests independent
|
||||
- Use fixtures for shared setup; parametrize for variations
|
||||
|
||||
## Reviewing Tests
|
||||
- Look for missing edge cases and error scenarios
|
||||
- Identify flakiness (time/order/external dependencies)
|
||||
- Avoid over-mocking; mock only boundaries
|
||||
- Ensure assertions are specific and meaningful
|
||||
- Verify cleanup and resource management
|
||||
|
||||
## Naming Convention
|
||||
Use `test_<function>_<scenario>_<expected_result>`.
|
||||
|
||||
## Test Structure
|
||||
```python
|
||||
def test_function_name_describes_behavior():
|
||||
# Arrange
|
||||
input_data = create_test_data()
|
||||
|
||||
# Act
|
||||
result = function_under_test(input_data)
|
||||
|
||||
# Assert
|
||||
assert result == expected_value
|
||||
```
|
||||
|
||||
## Fixture Best Practices
|
||||
- Prefer function-scoped fixtures
|
||||
- Use `yield` for cleanup
|
||||
- Document fixture purpose
|
||||
|
||||
## Mocking Guidelines

- Mock at the boundary (DB, filesystem, network)
- Do not mock the unit under test
- Verify interactions when they are the behavior
- Use `autospec=True` to catch interface mismatches
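The autospec bullet can be sketched with the stdlib `unittest.mock`; `MetricsDB` and `record` are hypothetical stand-ins for a real database boundary:

```python
from unittest import mock


class MetricsDB:
    """Hypothetical boundary class standing in for the real database layer."""

    def insert_metrics(self, role: str, values: dict) -> None: ...


def record(db: MetricsDB, values: dict) -> None:
    # Unit under test: forwards values to the DB boundary.
    db.insert_metrics("repeater", values)


def test_record_inserts_at_boundary():
    # Autospec'd mock: only attributes that exist on MetricsDB are allowed,
    # and call signatures are checked against the real methods.
    db = mock.create_autospec(MetricsDB, instance=True)
    record(db, {"bat": 3800})
    db.insert_metrics.assert_called_once_with("repeater", {"bat": 3800})
```

With a plain `Mock`, a typo like `db.insert_metric(...)` would silently pass; with autospec it raises `AttributeError`.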
## Edge Cases to Consider

- Numeric: zero, negative, large, precision
- Strings: empty, whitespace, unicode, long, special chars
- Collections: empty, single, large, duplicates, None elements
- Time: DST, leap years, month boundaries, epoch edges
- I/O: not found, permission denied, timeouts, partial writes, concurrency
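The time-related edges can be pinned down with stdlib `datetime` and `zoneinfo` alone; a minimal sketch (assumes tzdata is available on the host):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo


def test_leap_year_february_has_29_days():
    # Month-boundary edge: in a leap year, Feb 28 + 1 day stays in February.
    assert (datetime(2024, 2, 28) + timedelta(days=1)).day == 29


def test_dst_transition_changes_utc_offset():
    # DST edge: the same wall-clock time has different UTC offsets
    # in winter and summer for a DST-observing zone.
    tz = ZoneInfo("Europe/Amsterdam")
    winter = datetime(2024, 1, 15, 12, 0, tzinfo=tz)
    summer = datetime(2024, 7, 15, 12, 0, tzinfo=tz)
    assert winter.utcoffset() != summer.utcoffset()
```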
## Output Expectations

- Provide runnable tests with brief explanations
- Call out missing coverage or risky gaps
- Follow project conventions in `AGENTS.md`
@@ -35,7 +35,7 @@ docs/
!README.md

# Development files
.claude/
.codex/
*.log

# macOS

1 .github/workflows/docker-publish.yml vendored
@@ -33,6 +33,7 @@ permissions:
  packages: write
  id-token: write
  attestations: write
  artifact-metadata: write

concurrency:
  group: docker-${{ github.ref }}

3 .github/workflows/release-please.yml vendored
@@ -23,9 +23,10 @@ permissions:
jobs:
  release-please:
    runs-on: ubuntu-latest
    timeout-minutes: 10
    steps:
      - name: Release Please
        uses: googleapis/release-please-action@v4
        uses: googleapis/release-please-action@c3fc4de07084f75a2b61a5b933069bda6edf3d5c # v4
        with:
          token: ${{ secrets.RELEASE_PLEASE_TOKEN }}
          config-file: release-please-config.json

119 .github/workflows/test.yml vendored Normal file
@@ -0,0 +1,119 @@
name: Tests

on:
  push:
    branches: [main, feat/*]
  pull_request:
    branches: [main]

concurrency:
  group: test-${{ github.ref }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest
    timeout-minutes: 15
    strategy:
      fail-fast: false
      matrix:
        python-version: ["3.11", "3.12"]

    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4

      - uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5
        with:
          python-version: ${{ matrix.python-version }}

      - name: Set up uv
        uses: astral-sh/setup-uv@61cb8a9741eeb8a550a1b8544337180c0fc8476b # v7.2.0
        with:
          enable-cache: true
          python-version: ${{ matrix.python-version }}

      - name: Install dependencies
        run: uv sync --locked --extra dev

      - name: Set up matplotlib cache
        run: |
          echo "MPLCONFIGDIR=$RUNNER_TEMP/matplotlib" >> "$GITHUB_ENV"
          mkdir -p "$RUNNER_TEMP/matplotlib"

      - name: Run tests with coverage
        run: |
          uv run pytest \
            --cov=src/meshmon \
            --cov=scripts \
            --cov-report=xml \
            --cov-report=html \
            --cov-report=term-missing \
            --cov-fail-under=95 \
            --junitxml=test-results.xml \
            -n auto \
            --tb=short \
            -q

      - name: Coverage summary
        if: always()
        run: |
          {
            echo "### Coverage (Python ${{ matrix.python-version }})"
            if [ -f .coverage ]; then
              uv run coverage report -m
            else
              echo "No coverage data found."
            fi
            echo ""
          } >> "$GITHUB_STEP_SUMMARY"

      - name: Upload coverage HTML report
        uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
        if: always() && matrix.python-version == '3.12'
        with:
          name: coverage-report-html-${{ matrix.python-version }}
          path: htmlcov/
          if-no-files-found: warn
          retention-days: 7

      - name: Upload coverage XML report
        uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
        if: always() && matrix.python-version == '3.12'
        with:
          name: coverage-report-xml-${{ matrix.python-version }}
          path: coverage.xml
          if-no-files-found: warn
          retention-days: 7

      - name: Upload test results
        uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
        if: always()
        with:
          name: test-results-${{ matrix.python-version }}
          path: test-results.xml
          if-no-files-found: warn
          retention-days: 7

  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4

      - uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5
        with:
          python-version: "3.12"

      - name: Set up uv
        uses: astral-sh/setup-uv@61cb8a9741eeb8a550a1b8544337180c0fc8476b # v7.2.0
        with:
          enable-cache: true
          python-version: "3.12"

      - name: Install linters
        run: uv sync --locked --extra dev --no-install-project

      - name: Run ruff
        run: uv run ruff check src/ tests/ scripts/

      - name: Run mypy
        run: uv run mypy src/meshmon --ignore-missing-imports --no-error-summary

6 .gitignore vendored
@@ -12,6 +12,12 @@ env/
dist/
build/

# Testing/Coverage
.coverage
.coverage.*
htmlcov/
.pytest_cache/

# Environment
.envrc
.env
@@ -1,3 +1,3 @@
{
  ".": "0.2.8"
  ".": "0.2.11"
}
@@ -1,4 +1,4 @@
# CLAUDE.md - MeshCore Stats Project Guide
# AGENTS.md - MeshCore Stats Project Guide

> **Maintenance Note**: This file should always reflect the current state of the project. When making changes to the codebase (adding features, changing architecture, modifying configuration), update this document accordingly. Keep it accurate and comprehensive for future reference.

@@ -26,6 +26,121 @@ python scripts/render_site.py

Configuration is automatically loaded from `meshcore.conf` (if it exists). Environment variables always take precedence over the config file.

## Development Workflow

### Test-Driven Development (TDD)

**MANDATORY: Always write tests BEFORE implementing functionality.**

When implementing new features or fixing bugs, follow this workflow:

1. **Write the test first**
   - Create test cases that define the expected behavior
   - Tests should fail initially (red phase)
   - Cover happy path, edge cases, and error conditions

2. **Implement the minimum code to pass**
   - Write only enough code to make tests pass (green phase)
   - Don't over-engineer or add unrequested features

3. **Refactor if needed**
   - Clean up code while keeping tests green
   - Extract common patterns, improve naming

Example workflow for adding a new function:

```python
# Step 1: Write the test first (tests/unit/test_battery.py)
def test_voltage_to_percentage_at_full_charge():
    """4.20V should return 100%."""
    assert voltage_to_percentage(4.20) == 100.0

def test_voltage_to_percentage_at_empty():
    """3.00V should return 0%."""
    assert voltage_to_percentage(3.00) == 0.0

# Step 2: Run tests - they should FAIL
# Step 3: Implement the function to make tests pass
# Step 4: Run tests again - they should PASS
```

### Pre-Commit Requirements

**MANDATORY: Before committing ANY changes, run lint, type check, and tests.**

```bash
# Always run these commands before committing:
source .venv/bin/activate

# 1. Run linter (must pass with no errors)
ruff check src/ tests/ scripts/

# 2. Run type checker (must pass with no errors)
python -m mypy src/meshmon --ignore-missing-imports

# 3. Run test suite (must pass)
python -m pytest tests/ -q

# 4. Only then commit
git add . && git commit -m "..."
```

If lint, type check, or tests fail:
1. Fix all lint errors before committing
2. Fix all type errors before committing - use proper fixes, not `# type: ignore`
3. Fix all failing tests before committing
4. Never commit with `--no-verify` or skip checks

### Running Tests

```bash
# Run all tests
python -m pytest tests/

# Run with coverage report
python -m pytest tests/ --cov=src/meshmon --cov-report=term-missing

# Run specific test file
python -m pytest tests/unit/test_battery.py

# Run specific test
python -m pytest tests/unit/test_battery.py::test_voltage_to_percentage_at_full_charge

# Run tests matching a pattern
python -m pytest tests/ -k "battery"

# Run with verbose output
python -m pytest tests/ -v
```

### Test Organization

```
tests/
├── conftest.py              # Root fixtures (clean_env, tmp dirs, sample data)
├── unit/                    # Unit tests (isolated, fast)
│   ├── test_battery.py
│   ├── test_metrics.py
│   └── ...
├── database/                # Database tests (use temp SQLite)
│   ├── conftest.py          # DB-specific fixtures
│   └── test_db_*.py
├── integration/             # Integration tests (multiple components)
│   └── test_*_pipeline.py
├── charts/                  # Chart rendering tests
│   ├── conftest.py          # SVG normalization, themes
│   └── test_chart_*.py
└── snapshots/               # Golden files for snapshot testing
    ├── svg/                 # Reference SVG charts
    └── txt/                 # Reference TXT reports
```

### Coverage Requirements

- **Minimum coverage: 95%** (enforced in CI)
- Coverage is measured against `src/meshmon/`
- Run `python -m pytest tests/ --cov=src/meshmon --cov-fail-under=95`

## Commit Message Guidelines

This project uses [Conventional Commits](https://www.conventionalcommits.org/) with [release-please](https://github.com/googleapis/release-please) for automated releases. **Commit messages directly control versioning and changelog generation.**

@@ -139,6 +254,7 @@ Example: `fix(charts): prevent crash when no data points available`
2. release-please creates/updates a "Release PR" with:
   - Updated `CHANGELOG.md`
   - Updated version in `src/meshmon/__init__.py`
   - Updated `uv.lock` (project version entry)
3. When the Release PR is merged:
   - A GitHub Release is created
   - A git tag (e.g., `v0.2.0`) is created

@@ -255,6 +371,8 @@ Jobs configured in `docker/ofelia.ini`:

All GitHub Actions are pinned by full SHA for security. Dependabot can be configured to update these automatically.

The test and lint workflow (`.github/workflows/test.yml`) installs dependencies with uv (`uv sync --locked --extra dev`) and runs commands via `uv run`, using `uv.lock` as the source of truth.

### Version Placeholder

The version in `docker-compose.yml` uses release-please's placeholder syntax:

37 CHANGELOG.md
@@ -4,6 +4,43 @@ All notable changes to this project will be documented in this file.

This changelog is automatically generated by [release-please](https://github.com/googleapis/release-please) based on [Conventional Commits](https://www.conventionalcommits.org/).

## [0.2.11](https://github.com/jorijn/meshcore-stats/compare/v0.2.10...v0.2.11) (2026-01-08)


### Bug Fixes

* **docker:** skip project install in uv sync ([#35](https://github.com/jorijn/meshcore-stats/issues/35)) ([26d5125](https://github.com/jorijn/meshcore-stats/commit/26d5125e15a78fd7b3fddd09292b4aff6efd23b7))


### Miscellaneous Chores

* **release:** track uv.lock in release-please ([#33](https://github.com/jorijn/meshcore-stats/issues/33)) ([fb627fd](https://github.com/jorijn/meshcore-stats/commit/fb627fdacd1b58d0c8fc10b8d3d8738a1bdce799))

## [0.2.10](https://github.com/jorijn/meshcore-stats/compare/v0.2.9...v0.2.10) (2026-01-08)


### Documentation

* add TZ timezone setting to example config ([45bdf5d](https://github.com/jorijn/meshcore-stats/commit/45bdf5d6d47aacb7ebaba8e420bc9f8d917d06a3))


### Tests

* add comprehensive pytest test suite with 95% coverage ([#29](https://github.com/jorijn/meshcore-stats/issues/29)) ([a9f6926](https://github.com/jorijn/meshcore-stats/commit/a9f69261049e45b36119fd502dd0d7fc2be2691c))
* stabilize suite and broaden integration coverage ([#32](https://github.com/jorijn/meshcore-stats/issues/32)) ([ca13e31](https://github.com/jorijn/meshcore-stats/commit/ca13e31aae1bff561b278608c16df8e17424f9eb))

## [0.2.9](https://github.com/jorijn/meshcore-stats/compare/v0.2.8...v0.2.9) (2026-01-06)


### Bug Fixes

* tooltip positioning and locale-aware time formatting ([f7923b9](https://github.com/jorijn/meshcore-stats/commit/f7923b94346c3d492e7291ecca208ab704176308))


### Continuous Integration

* add artifact-metadata permission for attestation storage records ([c978844](https://github.com/jorijn/meshcore-stats/commit/c978844271eafd35f4778d748d7c832309d1614f))

## [0.2.8](https://github.com/jorijn/meshcore-stats/compare/v0.2.7...v0.2.8) (2026-01-06)
@@ -34,12 +34,13 @@ RUN set -ex; \

# Create virtual environment
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
ENV PATH="/opt/venv/bin:$PATH" \
    UV_PROJECT_ENVIRONMENT=/opt/venv

# Install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir --upgrade pip && \
    pip install --no-cache-dir -r requirements.txt
COPY pyproject.toml uv.lock ./
RUN pip install --no-cache-dir --upgrade pip uv && \
    uv sync --frozen --no-dev --no-install-project

# =============================================================================
# Stage 2: Runtime
@@ -164,14 +164,15 @@ For environments where Docker is not available.

- Python 3.10+
- SQLite3
- [uv](https://github.com/astral-sh/uv)

#### Setup

```bash
cd meshcore-stats
python3 -m venv .venv
uv venv
source .venv/bin/activate
pip install -r requirements.txt
uv sync
cp meshcore.conf.example meshcore.conf
# Edit meshcore.conf with your settings
```
@@ -15,7 +15,7 @@ services:
  # MeshCore Stats - Data collection and rendering
  # ==========================================================================
  meshcore-stats:
    image: ghcr.io/jorijn/meshcore-stats:0.2.8 # x-release-please-version
    image: ghcr.io/jorijn/meshcore-stats:0.2.11 # x-release-please-version
    container_name: meshcore-stats
    restart: unless-stopped
@@ -6,6 +6,13 @@
# This format is compatible with both Docker env_file and shell 'source' command.
# Comments start with # and blank lines are ignored.

# =============================================================================
# Timezone (for Docker deployments)
# =============================================================================
# Set the timezone for timestamps in charts and reports.
# Uses IANA timezone names: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
# TZ=Europe/Amsterdam

# =============================================================================
# Connection Settings
# =============================================================================

84 pyproject.toml Normal file
@@ -0,0 +1,84 @@
[build-system]
requires = ["setuptools>=61.0"]
build-backend = "setuptools.build_meta"

[project]
name = "meshcore-stats"
version = "0.2.11"
description = "MeshCore LoRa mesh network monitoring and statistics"
readme = "README.md"
requires-python = ">=3.11"
dependencies = [
    "meshcore>=2.2.3",
    "meshcore-cli>=1.0.0",
    "pyserial>=3.5",
    "jinja2>=3.1.0",
    "matplotlib>=3.8.0",
]

[project.optional-dependencies]
dev = [
    "pytest>=8.0.0",
    "pytest-asyncio>=0.24.0",
    "pytest-cov>=5.0.0",
    "pytest-xdist>=3.5.0",
    "coverage[toml]>=7.4.0",
    "freezegun>=1.2.0",
    "ruff>=0.3.0",
    "mypy>=1.8.0",
]

[tool.setuptools.packages.find]
where = ["src"]

[tool.pytest.ini_options]
testpaths = ["tests"]
asyncio_mode = "auto"
asyncio_default_fixture_loop_scope = "function"
addopts = ["-v", "--strict-markers", "-ra", "--tb=short"]
markers = [
    "slow: marks tests as slow (deselect with '-m \"not slow\"')",
    "integration: marks integration tests",
    "snapshot: marks snapshot comparison tests",
]
filterwarnings = [
    "ignore::DeprecationWarning:matplotlib.*",
]

[tool.coverage.run]
source = ["src/meshmon", "scripts"]
branch = true
omit = [
    "src/meshmon/__init__.py",
    "scripts/generate_snapshots.py",  # Dev utility for test fixtures
]

[tool.coverage.report]
fail_under = 95
show_missing = true
skip_covered = false
exclude_lines = [
    "pragma: no cover",
    "if TYPE_CHECKING:",
    "raise NotImplementedError",
    "if not MESHCORE_AVAILABLE:",
    "except ImportError:",
    "if __name__ == .__main__.:",
]

[tool.coverage.html]
directory = "htmlcov"

[tool.ruff]
target-version = "py311"
line-length = 100

[tool.ruff.lint]
select = ["E", "F", "I", "UP", "B", "SIM"]
ignore = ["E501"]

[tool.mypy]
python_version = "3.11"
warn_return_any = true
warn_unused_ignores = true
ignore_missing_imports = true
@@ -14,6 +14,11 @@
      "type": "generic",
      "path": "docker-compose.yml",
      "glob": false
    },
    {
      "jsonpath": "$.package[?(@.name.value=='meshcore-stats')].version",
      "path": "uv.lock",
      "type": "toml"
    }
  ],
  "changelog-sections": [
@@ -1,5 +0,0 @@
meshcore>=2.2.3
meshcore-cli>=1.0.0
pyserial>=3.5
jinja2>=3.1.0
matplotlib>=3.8.0
@@ -23,10 +23,10 @@ from pathlib import Path
# Add src to path for imports
sys.path.insert(0, str(Path(__file__).parent.parent / "src"))

from meshmon.env import get_config
from meshmon import log
from meshmon.meshcore_client import connect_with_lock, run_command
from meshmon.db import init_db, insert_metrics
from meshmon.env import get_config
from meshmon.meshcore_client import connect_with_lock, run_command
from meshmon.telemetry import extract_lpp_from_payload, extract_telemetry_metrics
@@ -18,27 +18,28 @@ Outputs:
import asyncio
import sys
import time
from collections.abc import Callable, Coroutine
from pathlib import Path
from typing import Any, Callable, Coroutine, Optional
from typing import Any

# Add src to path for imports
sys.path.insert(0, str(Path(__file__).parent.parent / "src"))

from meshmon.env import get_config
from meshmon import log
from meshmon.db import init_db, insert_metrics
from meshmon.env import get_config
from meshmon.meshcore_client import (
    connect_with_lock,
    run_command,
    get_contact_by_name,
    get_contact_by_key_prefix,
    extract_contact_info,
    get_contact_by_key_prefix,
    get_contact_by_name,
    run_command,
)
from meshmon.db import init_db, insert_metrics
from meshmon.retry import get_repeater_circuit_breaker, with_retries
from meshmon.telemetry import extract_lpp_from_payload, extract_telemetry_metrics


async def find_repeater_contact(mc: Any) -> Optional[Any]:
async def find_repeater_contact(mc: Any) -> Any | None:
    """
    Find the repeater contact by name or key prefix.

@@ -69,7 +70,7 @@ async def find_repeater_contact(mc: Any) -> Optional[Any]:
        return contact

    # Manual search in payload dict
    for pk, c in contacts_dict.items():
    for _pk, c in contacts_dict.items():
        if isinstance(c, dict):
            name = c.get("adv_name", "")
            if name and name.lower() == cfg.repeater_name.lower():

@@ -105,7 +106,7 @@ async def query_repeater_with_retry(
    contact: Any,
    command_name: str,
    command_coro_fn: Callable[[], Coroutine[Any, Any, Any]],
) -> tuple[bool, Optional[dict], Optional[str]]:
) -> tuple[bool, dict | None, str | None]:
    """
    Query repeater with retry logic.

357 scripts/generate_snapshots.py Normal file
@@ -0,0 +1,357 @@
#!/usr/bin/env python3
"""Generate initial snapshot files for tests.

This script creates the initial SVG and TXT snapshots for snapshot testing.
Run this once to generate the baseline snapshots, then use pytest to verify them.

Usage:
    python scripts/generate_snapshots.py
"""

import re
import sys
from datetime import date, datetime, timedelta
from pathlib import Path

# Add src to path
sys.path.insert(0, str(Path(__file__).parent.parent / "src"))

from meshmon.charts import (
    CHART_THEMES,
    DataPoint,
    TimeSeries,
    render_chart_svg,
)
from meshmon.reports import (
    DailyAggregate,
    LocationInfo,
    MetricStats,
    MonthlyAggregate,
    YearlyAggregate,
    format_monthly_txt,
    format_yearly_txt,
)


def normalize_svg_for_snapshot(svg: str) -> str:
    """Normalize SVG for deterministic snapshot comparison."""
    # 1. Normalize matplotlib-generated IDs (prefixed with random hex)
    svg = re.sub(r'id="[a-zA-Z0-9]+-[0-9a-f]+"', 'id="normalized"', svg)
    svg = re.sub(r'id="m[0-9a-f]{8,}"', 'id="normalized"', svg)

    # 2. Normalize url(#...) references to match
    svg = re.sub(r'url\(#[a-zA-Z0-9]+-[0-9a-f]+\)', 'url(#normalized)', svg)
    svg = re.sub(r'url\(#m[0-9a-f]{8,}\)', 'url(#normalized)', svg)

    # 3. Normalize clip-path IDs
    svg = re.sub(r'clip-path="url\(#[^)]+\)"', 'clip-path="url(#clip)"', svg)

    # 4. Normalize xlink:href="#..." references
    svg = re.sub(r'xlink:href="#[a-zA-Z0-9]+-[0-9a-f]+"', 'xlink:href="#normalized"', svg)
    svg = re.sub(r'xlink:href="#m[0-9a-f]{8,}"', 'xlink:href="#normalized"', svg)

    # 5. Remove matplotlib version comment (changes between versions)
    svg = re.sub(r'<!-- Created with matplotlib.*?-->', '', svg)

    # 6. Normalize whitespace (but preserve newlines for readability)
    svg = re.sub(r'[ \t]+', ' ', svg)
    svg = re.sub(r' ?\n ?', '\n', svg)

    return svg.strip()


def generate_svg_snapshots():
    """Generate all SVG snapshot files."""
    print("Generating SVG snapshots...")

    svg_dir = Path(__file__).parent.parent / "tests" / "snapshots" / "svg"
    svg_dir.mkdir(parents=True, exist_ok=True)

    light_theme = CHART_THEMES["light"]
    dark_theme = CHART_THEMES["dark"]

    # Fixed base time for deterministic tests
    base_time = datetime(2024, 1, 15, 12, 0, 0)

    # Generate gauge timeseries (battery voltage)
    gauge_points = []
    for i in range(24):
        ts = base_time - timedelta(hours=23 - i)
        value = 3.7 + 0.3 * abs(12 - i) / 12
        gauge_points.append(DataPoint(timestamp=ts, value=value))

    gauge_ts = TimeSeries(
        metric="bat",
        role="repeater",
        period="day",
        points=gauge_points,
    )

    # Generate counter timeseries (packet rate)
    counter_points = []
    for i in range(24):
        ts = base_time - timedelta(hours=23 - i)
        hour = (i + 12) % 24
        value = 2.0 + (hour - 6) * 0.3 if 6 <= hour <= 18 else 0.5 + hour % 6 * 0.1
        counter_points.append(DataPoint(timestamp=ts, value=value))

    counter_ts = TimeSeries(
        metric="nb_recv",
        role="repeater",
        period="day",
        points=counter_points,
    )

    # Empty timeseries
    empty_ts = TimeSeries(
        metric="bat",
        role="repeater",
        period="day",
        points=[],
    )

    # Single point timeseries
    single_point_ts = TimeSeries(
        metric="bat",
        role="repeater",
        period="day",
        points=[DataPoint(timestamp=base_time, value=3.85)],
    )

    # Generate snapshots
    snapshots = [
        ("bat_day_light.svg", gauge_ts, light_theme, 3.0, 4.2),
        ("bat_day_dark.svg", gauge_ts, dark_theme, 3.0, 4.2),
        ("nb_recv_day_light.svg", counter_ts, light_theme, None, None),
        ("nb_recv_day_dark.svg", counter_ts, dark_theme, None, None),
        ("empty_day_light.svg", empty_ts, light_theme, None, None),
        ("empty_day_dark.svg", empty_ts, dark_theme, None, None),
        ("single_point_day_light.svg", single_point_ts, light_theme, 3.0, 4.2),
    ]

    for filename, ts, theme, y_min, y_max in snapshots:
        svg = render_chart_svg(ts, theme, y_min=y_min, y_max=y_max)
        normalized = normalize_svg_for_snapshot(svg)

        output_path = svg_dir / filename
        output_path.write_text(normalized, encoding="utf-8")
        print(f" Created: {output_path}")


def generate_txt_snapshots():
    """Generate all TXT report snapshot files."""
    print("Generating TXT snapshots...")

    txt_dir = Path(__file__).parent.parent / "tests" / "snapshots" / "txt"
    txt_dir.mkdir(parents=True, exist_ok=True)

    sample_location = LocationInfo(
        name="Test Observatory",
        lat=52.3676,
        lon=4.9041,
        elev=2.0,
    )

    # Repeater monthly aggregate
    repeater_daily_data = []
    for day in range(1, 6):
        repeater_daily_data.append(
            DailyAggregate(
                date=date(2024, 1, day),
                metrics={
                    "bat": MetricStats(
                        min_value=3600 + day * 10,
                        min_time=datetime(2024, 1, day, 4, 0),
                        max_value=3900 + day * 10,
                        max_time=datetime(2024, 1, day, 14, 0),
                        mean=3750 + day * 10,
                        count=96,
                    ),
                    "bat_pct": MetricStats(mean=65.0 + day * 2, count=96),
                    "last_rssi": MetricStats(mean=-85.0 - day, count=96),
                    "last_snr": MetricStats(mean=8.5 + day * 0.2, count=96),
                    "noise_floor": MetricStats(mean=-115.0, count=96),
                    "nb_recv": MetricStats(total=500 + day * 100, count=96),
                    "nb_sent": MetricStats(total=200 + day * 50, count=96),
                    "airtime": MetricStats(total=120 + day * 20, count=96),
                },
                snapshot_count=96,
            )
        )

    repeater_monthly = MonthlyAggregate(
        year=2024,
        month=1,
        role="repeater",
        daily=repeater_daily_data,
        summary={
            "bat": MetricStats(
                min_value=3610, min_time=datetime(2024, 1, 1, 4, 0),
                max_value=3950, max_time=datetime(2024, 1, 5, 14, 0),
                mean=3780, count=480,
            ),
            "bat_pct": MetricStats(mean=71.0, count=480),
            "last_rssi": MetricStats(mean=-88.0, count=480),
            "last_snr": MetricStats(mean=9.1, count=480),
            "noise_floor": MetricStats(mean=-115.0, count=480),
            "nb_recv": MetricStats(total=4000, count=480),
            "nb_sent": MetricStats(total=1750, count=480),
            "airtime": MetricStats(total=900, count=480),
        },
    )

    # Companion monthly aggregate
    companion_daily_data = []
    for day in range(1, 6):
        companion_daily_data.append(
            DailyAggregate(
                date=date(2024, 1, day),
                metrics={
                    "battery_mv": MetricStats(
                        min_value=3700 + day * 10,
                        min_time=datetime(2024, 1, day, 5, 0),
                        max_value=4000 + day * 10,
                        max_time=datetime(2024, 1, day, 12, 0),
                        mean=3850 + day * 10,
                        count=1440,
                    ),
                    "bat_pct": MetricStats(mean=75.0 + day * 2, count=1440),
                    "contacts": MetricStats(mean=8 + day, count=1440),
                    "recv": MetricStats(total=1000 + day * 200, count=1440),
                    "sent": MetricStats(total=500 + day * 100, count=1440),
                },
                snapshot_count=1440,
            )
        )

    companion_monthly = MonthlyAggregate(
        year=2024,
        month=1,
        role="companion",
        daily=companion_daily_data,
        summary={
            "battery_mv": MetricStats(
                min_value=3710, min_time=datetime(2024, 1, 1, 5, 0),
                max_value=4050, max_time=datetime(2024, 1, 5, 12, 0),
                mean=3880, count=7200,
            ),
            "bat_pct": MetricStats(mean=81.0, count=7200),
            "contacts": MetricStats(mean=11.0, count=7200),
            "recv": MetricStats(total=8000, count=7200),
            "sent": MetricStats(total=4000, count=7200),
        },
    )

    # Repeater yearly aggregate
    repeater_yearly_monthly = []
    for month in range(1, 4):
        repeater_yearly_monthly.append(
            MonthlyAggregate(
                year=2024,
                month=month,
                role="repeater",
                daily=[],
                summary={
                    "bat": MetricStats(
                        min_value=3500 + month * 50,
                        min_time=datetime(2024, month, 15, 4, 0),
                        max_value=3950 + month * 20,
                        max_time=datetime(2024, month, 20, 14, 0),
                        mean=3700 + month * 30,
                        count=2976,
                    ),
                    "bat_pct": MetricStats(mean=60.0 + month * 5, count=2976),
                    "last_rssi": MetricStats(mean=-90.0 + month, count=2976),
                    "last_snr": MetricStats(mean=7.5 + month * 0.5, count=2976),
                    "nb_recv": MetricStats(total=30000 + month * 5000, count=2976),
                    "nb_sent": MetricStats(total=15000 + month * 2500, count=2976),
                },
            )
        )

    repeater_yearly = YearlyAggregate(
        year=2024,
        role="repeater",
        monthly=repeater_yearly_monthly,
        summary={
            "bat": MetricStats(
                min_value=3550, min_time=datetime(2024, 1, 15, 4, 0),
                max_value=4010, max_time=datetime(2024, 3, 20, 14, 0),
                mean=3760, count=8928,
            ),
            "bat_pct": MetricStats(mean=70.0, count=8928),
            "last_rssi": MetricStats(mean=-88.0, count=8928),
            "last_snr": MetricStats(mean=8.5, count=8928),
            "nb_recv": MetricStats(total=120000, count=8928),
            "nb_sent": MetricStats(total=60000, count=8928),
        },
    )

    # Companion yearly aggregate
    companion_yearly_monthly = []
    for month in range(1, 4):
        companion_yearly_monthly.append(
            MonthlyAggregate(
                year=2024,
                month=month,
                role="companion",
                daily=[],
                summary={
                    "battery_mv": MetricStats(
                        min_value=3600 + month * 30,
                        min_time=datetime(2024, month, 10, 5, 0),
                        max_value=4100 + month * 20,
                        max_time=datetime(2024, month, 25, 12, 0),
                        mean=3850 + month * 25,
                        count=44640,
                    ),
                    "bat_pct": MetricStats(mean=70.0 + month * 3, count=44640),
                    "contacts": MetricStats(mean=10 + month, count=44640),
                    "recv": MetricStats(total=50000 + month * 10000, count=44640),
                    "sent": MetricStats(total=25000 + month * 5000, count=44640),
                },
            )
        )

    companion_yearly = YearlyAggregate(
        year=2024,
        role="companion",
        monthly=companion_yearly_monthly,
        summary={
            "battery_mv": MetricStats(
                min_value=3630, min_time=datetime(2024, 1, 10, 5, 0),
                max_value=4160, max_time=datetime(2024, 3, 25, 12, 0),
                mean=3900, count=133920,
            ),
            "bat_pct": MetricStats(mean=76.0, count=133920),
            "contacts": MetricStats(mean=12.0, count=133920),
            "recv": MetricStats(total=210000, count=133920),
            "sent": MetricStats(total=105000, count=133920),
        },
    )

    # Empty aggregates
    empty_monthly = MonthlyAggregate(year=2024, month=1, role="repeater", daily=[], summary={})
    empty_yearly = YearlyAggregate(year=2024, role="repeater", monthly=[], summary={})

    # Generate all TXT snapshots
    txt_snapshots = [
        ("monthly_report_repeater.txt", format_monthly_txt(repeater_monthly, "Test Repeater", sample_location)),
        ("monthly_report_companion.txt", format_monthly_txt(companion_monthly, "Test Companion", sample_location)),
        ("yearly_report_repeater.txt", format_yearly_txt(repeater_yearly, "Test Repeater", sample_location)),
        ("yearly_report_companion.txt", format_yearly_txt(companion_yearly, "Test Companion", sample_location)),
        ("empty_monthly_report.txt", format_monthly_txt(empty_monthly, "Test Repeater", sample_location)),
        ("empty_yearly_report.txt", format_yearly_txt(empty_yearly, "Test Repeater", sample_location)),
    ]
|
||||
for filename, content in txt_snapshots:
|
||||
output_path = txt_dir / filename
|
||||
output_path.write_text(content, encoding="utf-8")
|
||||
print(f" Created: {output_path}")
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
generate_svg_snapshots()
|
||||
generate_txt_snapshots()
|
||||
print("\nSnapshot generation complete!")
|
||||
print("Run pytest to verify the snapshots work correctly.")
|
||||
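The script above regenerates golden files that the test suite then compares against. A minimal sketch of that snapshot-testing idea (the helper name and paths here are hypothetical, not the repo's actual test code, assuming the standard "compare generated text against a stored file" pattern):

```python
import tempfile
from pathlib import Path


def matches_snapshot(generated: str, snapshot_path: Path) -> bool:
    """Compare freshly generated output against a stored golden file."""
    if not snapshot_path.exists():
        return False
    return snapshot_path.read_text(encoding="utf-8") == generated


# Simulate a stored snapshot and check both a matching and a stale regeneration
with tempfile.TemporaryDirectory() as tmp:
    golden = Path(tmp) / "monthly_report_repeater.txt"
    golden.write_text("Monthly report\n", encoding="utf-8")
    ok = matches_snapshot("Monthly report\n", golden)
    stale = matches_snapshot("Monthly report (changed)\n", golden)
```

When a formatter intentionally changes, rerunning the generator script refreshes the golden files and the comparison passes again.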
@@ -12,9 +12,9 @@ from pathlib import Path
# Add src to path for imports
sys.path.insert(0, str(Path(__file__).parent.parent / "src"))

from meshmon.db import init_db, get_metric_count
from meshmon import log
from meshmon.charts import render_all_charts, save_chart_stats
from meshmon.db import get_metric_count, init_db


def main():
@@ -25,14 +25,24 @@ import calendar
import json
import sys
from pathlib import Path
from typing import Optional

# Add src to path for imports
sys.path.insert(0, str(Path(__file__).parent.parent / "src"))

from meshmon import log
from meshmon.db import init_db
from meshmon.env import get_config
from meshmon import log
from meshmon.html import render_report_page, render_reports_index
from meshmon.reports import (
    LocationInfo,
    aggregate_monthly,
    aggregate_yearly,
    format_monthly_txt,
    format_yearly_txt,
    get_available_periods,
    monthly_to_json,
    yearly_to_json,
)


def safe_write(path: Path, content: str) -> bool:
@@ -48,24 +58,11 @@ def safe_write(path: Path, content: str) -> bool:
    try:
        path.write_text(content, encoding="utf-8")
        return True
    except IOError as e:
    except OSError as e:
        log.error(f"Failed to write {path}: {e}")
        return False


from meshmon.reports import (
    LocationInfo,
    aggregate_monthly,
    aggregate_yearly,
    format_monthly_txt,
    format_yearly_txt,
    get_available_periods,
    monthly_to_json,
    yearly_to_json,
)
from meshmon.html import render_report_page, render_reports_index


def get_node_name(role: str) -> str:
    """Get display name for a node role from configuration."""
    cfg = get_config()
@@ -91,8 +88,8 @@ def render_monthly_report(
    role: str,
    year: int,
    month: int,
    prev_period: Optional[tuple[int, int]] = None,
    next_period: Optional[tuple[int, int]] = None,
    prev_period: tuple[int, int] | None = None,
    next_period: tuple[int, int] | None = None,
) -> None:
    """Render monthly report in all formats.

@@ -152,8 +149,8 @@ def render_monthly_report(
def render_yearly_report(
    role: str,
    year: int,
    prev_year: Optional[int] = None,
    next_year: Optional[int] = None,
    prev_year: int | None = None,
    next_year: int | None = None,
) -> None:
    """Render yearly report in all formats.

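The `except IOError` to `except OSError` change in `safe_write` does not alter behavior: since Python 3.3 (PEP 3151), `IOError` is simply an alias of `OSError`, so the rename only modernizes the spelling. A quick check:

```python
# Since Python 3.3, IOError is the same class object as OSError,
# so `except OSError` catches exactly what `except IOError` did.
alias_check = IOError is OSError

try:
    open("/nonexistent/dir/file.txt")
except OSError as exc:  # also catches subclasses like FileNotFoundError
    caught = type(exc).__name__
```

The same holds for `EnvironmentError` and `socket.error`, which are aliases of `OSError` as well.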
@@ -13,9 +13,9 @@ from pathlib import Path
# Add src to path for imports
sys.path.insert(0, str(Path(__file__).parent.parent / "src"))

from meshmon.db import init_db, get_latest_metrics
from meshmon.env import get_config
from meshmon import log
from meshmon.db import get_latest_metrics, init_db
from meshmon.env import get_config
from meshmon.html import write_site
@@ -1,3 +1,3 @@
"""MeshCore network monitoring library."""

__version__ = "0.2.8"  # x-release-please-version
__version__ = "0.2.11"  # x-release-please-version
@@ -10,23 +10,23 @@ import re
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from pathlib import Path
from typing import Any, Literal, Optional
from typing import Any, Literal

import matplotlib
matplotlib.use('Agg')  # Non-interactive backend for server-side rendering
import matplotlib.pyplot as plt
import matplotlib.dates as mdates

matplotlib.use('Agg')  # Non-interactive backend for server-side rendering
import matplotlib.dates as mdates
import matplotlib.pyplot as plt

from . import log
from .db import get_metrics_for_period
from .env import get_config
from .metrics import (
    get_chart_metrics,
    is_counter_metric,
    get_graph_scale,
    is_counter_metric,
    transform_value,
)
from . import log


# Type alias for theme names
ThemeName = Literal["light", "dark"]
@@ -37,27 +37,35 @@ BIN_30_MINUTES = 1800  # 30 minutes in seconds
BIN_2_HOURS = 7200  # 2 hours in seconds
BIN_1_DAY = 86400  # 1 day in seconds

# Period configuration: lookback duration and aggregation bin size

@dataclass(frozen=True)
class PeriodConfig:
    """Configuration for a chart time period."""

    lookback: timedelta
    bin_seconds: int | None = None  # None = no binning (raw data)


# Period configuration for chart rendering
# Target: ~100-400 data points per chart for clean visualization
# Chart plot area is ~640px, so aim for 1.5-6px per point
PERIOD_CONFIG = {
    "day": {
        "lookback": timedelta(days=1),
        "bin_seconds": None,  # No binning - raw data (~96 points at 15-min intervals)
    },
    "week": {
        "lookback": timedelta(days=7),
        "bin_seconds": BIN_30_MINUTES,  # 30-min bins (~336 points, ~2px per point)
    },
    "month": {
        "lookback": timedelta(days=31),
        "bin_seconds": BIN_2_HOURS,  # 2-hour bins (~372 points, ~1.7px per point)
    },
    "year": {
        "lookback": timedelta(days=365),
        "bin_seconds": BIN_1_DAY,  # 1-day bins (~365 points, ~1.8px per point)
    },
PERIOD_CONFIG: dict[str, PeriodConfig] = {
    "day": PeriodConfig(
        lookback=timedelta(days=1),
        bin_seconds=None,  # No binning - raw data (~96 points at 15-min intervals)
    ),
    "week": PeriodConfig(
        lookback=timedelta(days=7),
        bin_seconds=BIN_30_MINUTES,  # 30-min bins (~336 points, ~2px per point)
    ),
    "month": PeriodConfig(
        lookback=timedelta(days=31),
        bin_seconds=BIN_2_HOURS,  # 2-hour bins (~372 points, ~1.7px per point)
    ),
    "year": PeriodConfig(
        lookback=timedelta(days=365),
        bin_seconds=BIN_1_DAY,  # 1-day bins (~365 points, ~1.8px per point)
    ),
}
@@ -134,12 +142,12 @@ class TimeSeries:
class ChartStatistics:
    """Statistics for a time series (min/avg/max/current)."""

    min_value: Optional[float] = None
    avg_value: Optional[float] = None
    max_value: Optional[float] = None
    current_value: Optional[float] = None
    min_value: float | None = None
    avg_value: float | None = None
    max_value: float | None = None
    current_value: float | None = None

    def to_dict(self) -> dict[str, Optional[float]]:
    def to_dict(self) -> dict[str, float | None]:
        """Convert to dict matching existing chart_stats.json format."""
        return {
            "min": self.min_value,
@@ -167,7 +175,7 @@ def load_timeseries_from_db(
    end_time: datetime,
    lookback: timedelta,
    period: str,
    all_metrics: Optional[dict[str, list[tuple[int, float]]]] = None,
    all_metrics: dict[str, list[tuple[int, float]]] | None = None,
) -> TimeSeries:
    """Load time series data from SQLite database.

@@ -241,11 +249,9 @@ def load_timeseries_from_db(
        raw_points = [(ts, val * scale) for ts, val in raw_points]

    # Apply time binning if configured
    period_cfg = PERIOD_CONFIG.get(period, {})
    bin_seconds = period_cfg.get("bin_seconds")

    if bin_seconds and len(raw_points) > 1:
        raw_points = _aggregate_bins(raw_points, bin_seconds)
    period_cfg = PERIOD_CONFIG.get(period)
    if period_cfg and period_cfg.bin_seconds and len(raw_points) > 1:
        raw_points = _aggregate_bins(raw_points, period_cfg.bin_seconds)

    # Convert to DataPoints
    points = [DataPoint(timestamp=ts, value=val) for ts, val in raw_points]
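`_aggregate_bins` itself is not shown in this diff; a plausible mean-per-bin sketch over `(timestamp, value)` tuples (an assumption about its behavior, not the repo's actual implementation) would floor each timestamp to a bin boundary and average the values that land in it:

```python
from collections import defaultdict


def aggregate_bins(points: list[tuple[int, float]], bin_seconds: int) -> list[tuple[int, float]]:
    """Group points into fixed-width time bins and average each bin.

    Hypothetical stand-in for the _aggregate_bins helper referenced above.
    """
    bins: dict[int, list[float]] = defaultdict(list)
    for ts, val in points:
        bins[ts - ts % bin_seconds].append(val)  # floor to bin start
    return [(ts, sum(vals) / len(vals)) for ts, vals in sorted(bins.items())]


# Four raw samples collapse into two 30-minute bins
raw = [(0, 1.0), (600, 3.0), (1800, 10.0), (2400, 20.0)]
binned = aggregate_bins(raw, 1800)
```

Averaging per bin is what keeps the week/month/year charts near the ~100-400 point target mentioned in the config comments; for counter metrics a sum or last-value rule could apply instead.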
@@ -318,10 +324,10 @@ def render_chart_svg(
    theme: ChartTheme,
    width: int = 800,
    height: int = 280,
    y_min: Optional[float] = None,
    y_max: Optional[float] = None,
    x_start: Optional[datetime] = None,
    x_end: Optional[datetime] = None,
    y_min: float | None = None,
    y_max: float | None = None,
    x_start: datetime | None = None,
    x_end: datetime | None = None,
) -> str:
    """Render time series as SVG using matplotlib.

@@ -380,10 +386,14 @@ def render_chart_svg(
    timestamps = ts.timestamps
    values = ts.values

    # Convert datetime to matplotlib date numbers for proper typing
    # and correct axis formatter behavior
    x_dates = mdates.date2num(timestamps)

    # Plot area fill
    area_color = _hex_to_rgba(theme.area)
    area = ax.fill_between(
        timestamps,
        x_dates,
        values,
        alpha=area_color[3],
        color=f"#{theme.line}",
@@ -392,7 +402,7 @@ def render_chart_svg(

    # Plot line
    (line,) = ax.plot(
        timestamps,
        x_dates,
        values,
        color=f"#{theme.line}",
        linewidth=2,
@@ -414,7 +424,17 @@ def render_chart_svg(

    # Set X-axis limits first (before configuring ticks)
    if x_start is not None and x_end is not None:
        ax.set_xlim(x_start, x_end)
        ax.set_xlim(mdates.date2num(x_start), mdates.date2num(x_end))
    else:
        # Compute sensible x-axis limits from data
        # For single point or sparse data, add padding based on period
        x_min_dt = min(timestamps)
        x_max_dt = max(timestamps)
        if x_min_dt == x_max_dt:
            # Single point: use period lookback for range
            period_cfg = PERIOD_CONFIG.get(ts.period, PERIOD_CONFIG["day"])
            x_min_dt = x_max_dt - period_cfg.lookback
        ax.set_xlim(mdates.date2num(x_min_dt), mdates.date2num(x_max_dt))

    # Format X-axis based on period (after setting limits)
    _configure_x_axis(ax, ts.period)
@@ -464,10 +484,10 @@ def _inject_data_attributes(
    svg: str,
    ts: TimeSeries,
    theme_name: str,
    x_start: Optional[datetime] = None,
    x_end: Optional[datetime] = None,
    y_min: Optional[float] = None,
    y_max: Optional[float] = None,
    x_start: datetime | None = None,
    x_end: datetime | None = None,
    y_min: float | None = None,
    y_max: float | None = None,
) -> str:
    """Inject data-* attributes into SVG for tooltip support.

@@ -516,19 +536,17 @@ def _inject_data_attributes(
        count=1
    )

    # Add data-points to the main path element (the line, not the fill)
    def add_data_to_id(match):
        return f'<path{match.group(1)} data-points="{data_points_attr}"'

    # Add data-points to the line path inside the #chart-line group
    # matplotlib creates <g id="chart-line"><path d="..."></g>
    svg, count = re.subn(
        r'<path([^>]*(?:id|gid)="chart-line"[^>]*)',
        add_data_to_id,
        r'(<g[^>]*id="chart-line"[^>]*>\s*<path\b)',
        rf'\1 data-points="{data_points_attr}"',
        svg,
        count=1,
    )

    if count == 0:
        # Look for the second path element (first is usually the fill area)
        # Fallback: look for the second path element (first is usually the fill area)
        path_count = 0

        def add_data_to_path(match):
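The new injection regex no longer expects an `id` on the `<path>` itself; it anchors on the enclosing `<g id="chart-line">` group and re-emits the matched prefix via a `\1` backreference before appending the attribute. The mechanics in isolation, on a toy SVG string:

```python
import re

svg = '<g id="chart-line"><path d="M0 0 L10 10"/></g>'
attr = "1,2;3,4"

# The capture group covers the opening <g ...> tag, optional whitespace, and
# the start of the first <path>; \1 re-emits it and the attribute follows.
patched, count = re.subn(
    r'(<g[^>]*id="chart-line"[^>]*>\s*<path\b)',
    rf'\1 data-points="{attr}"',
    svg,
    count=1,
)
```

Using a replacement string with `\1` instead of a callback keeps the substitution declarative; `count=1` ensures only the first matching path is tagged.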
@@ -545,7 +563,7 @@ def _inject_data_attributes(

def render_all_charts(
    role: str,
    metrics: Optional[list[str]] = None,
    metrics: list[str] | None = None,
) -> tuple[list[Path], dict[str, dict[str, dict[str, Any]]]]:
    """Render all charts for a role in both light and dark themes.

@@ -589,7 +607,7 @@ def render_all_charts(
    for period in periods:
        period_cfg = PERIOD_CONFIG[period]
        x_end = now
        x_start = now - period_cfg["lookback"]
        x_start = now - period_cfg.lookback

        start_ts = int(x_start.timestamp())
        end_ts = int(x_end.timestamp())
@@ -601,7 +619,7 @@ def render_all_charts(
            role=role,
            metric=metric,
            end_time=now,
            lookback=period_cfg["lookback"],
            lookback=period_cfg.lookback,
            period=period,
            all_metrics=all_metrics,
        )
@@ -677,7 +695,8 @@ def load_chart_stats(role: str) -> dict[str, dict[str, dict[str, Any]]]:

    try:
        with open(stats_path) as f:
            return json.load(f)
            data: dict[str, dict[str, dict[str, Any]]] = json.load(f)
            return data
    except Exception as e:
        log.debug(f"Failed to load chart stats: {e}")
        return {}
@@ -19,14 +19,14 @@ Migration system:

import sqlite3
from collections import defaultdict
from collections.abc import Iterator
from contextlib import contextmanager
from pathlib import Path
from typing import Any, Iterator, Optional
from typing import Any

from . import log
from .battery import voltage_to_percentage
from .env import get_config
from . import log


# Path to migrations directory (relative to this file)
MIGRATIONS_DIR = Path(__file__).parent / "migrations"
@@ -176,7 +176,7 @@ def get_db_path() -> Path:
    return cfg.state_dir / "metrics.db"


def init_db(db_path: Optional[Path] = None) -> None:
def init_db(db_path: Path | None = None) -> None:
    """Initialize database with schema and apply pending migrations.

    Creates tables if they don't exist. Safe to call multiple times.
@@ -212,7 +212,7 @@ def init_db(db_path: Optional[Path] = None) -> None:

@contextmanager
def get_connection(
    db_path: Optional[Path] = None,
    db_path: Path | None = None,
    readonly: bool = False
) -> Iterator[sqlite3.Connection]:
    """Context manager for database connections.

@@ -259,7 +259,7 @@ def insert_metric(
    role: str,
    metric: str,
    value: float,
    db_path: Optional[Path] = None,
    db_path: Path | None = None,
) -> bool:
    """Insert a single metric value.

@@ -293,7 +293,7 @@ def insert_metrics(
    ts: int,
    role: str,
    metrics: dict[str, Any],
    db_path: Optional[Path] = None,
    db_path: Path | None = None,
) -> int:
    """Insert multiple metrics from a dict (e.g., firmware status response).

@@ -348,7 +348,7 @@ def get_metrics_for_period(
    role: str,
    start_ts: int,
    end_ts: int,
    db_path: Optional[Path] = None,
    db_path: Path | None = None,
) -> dict[str, list[tuple[int, float]]]:
    """Fetch all metrics for a role within a time range.

@@ -403,8 +403,8 @@ def get_metrics_for_period(

def get_latest_metrics(
    role: str,
    db_path: Optional[Path] = None,
) -> Optional[dict[str, Any]]:
    db_path: Path | None = None,
) -> dict[str, Any] | None:
    """Get the most recent metrics for a role.

    Returns all metrics at the most recent timestamp as a flat dict.
@@ -455,7 +455,7 @@ def get_latest_metrics(

def get_metric_count(
    role: str,
    db_path: Optional[Path] = None,
    db_path: Path | None = None,
) -> int:
    """Get total number of metric rows for a role.

@@ -476,12 +476,13 @@ def get_metric_count(
            "SELECT COUNT(*) FROM metrics WHERE role = ?",
            (role,)
        )
        return cursor.fetchone()[0]
        row = cursor.fetchone()
        return int(row[0]) if row else 0


def get_distinct_timestamps(
    role: str,
    db_path: Optional[Path] = None,
    db_path: Path | None = None,
) -> int:
    """Get count of distinct timestamps for a role.

@@ -501,12 +502,13 @@ def get_distinct_timestamps(
            "SELECT COUNT(DISTINCT ts) FROM metrics WHERE role = ?",
            (role,)
        )
        return cursor.fetchone()[0]
        row = cursor.fetchone()
        return int(row[0]) if row else 0


def get_available_metrics(
    role: str,
    db_path: Optional[Path] = None,
    db_path: Path | None = None,
) -> list[str]:
    """Get list of all metric names stored for a role.

@@ -529,7 +531,7 @@ def get_available_metrics(
        return [row["metric"] for row in cursor]


def vacuum_db(db_path: Optional[Path] = None) -> None:
def vacuum_db(db_path: Path | None = None) -> None:
    """Compact database and rebuild indexes.

    Should be run periodically (e.g., weekly via cron).
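The `fetchone()` guard added to the count helpers is defensive typing hygiene: `SELECT COUNT(*)` always yields exactly one row, but `cursor.fetchone()[0]` would raise a `TypeError` if a query ever returned nothing, and the bare index also types as `Any`. The guarded form in a self-contained example (table layout simplified from the real schema):

```python
import sqlite3

# In-memory database with a simplified metrics table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE metrics (ts INTEGER, role TEXT, metric TEXT, value REAL)")
conn.execute("INSERT INTO metrics VALUES (1700000000, 'repeater', 'bat', 3900)")

cursor = conn.execute("SELECT COUNT(*) FROM metrics WHERE role = ?", ("repeater",))
row = cursor.fetchone()
count = int(row[0]) if row else 0  # the guarded form from the diff
conn.close()
```

The explicit `int(...)` also gives mypy a concrete return type instead of `Any`, which matches the typing cleanup running through the rest of this diff.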
@@ -4,7 +4,6 @@ import os
import re
import warnings
from pathlib import Path
from typing import Optional


def _parse_config_value(value: str) -> str:
@@ -79,14 +78,14 @@ def _load_config_file() -> None:
                os.environ[key] = value

    except (OSError, UnicodeDecodeError) as e:
        warnings.warn(f"Failed to load {config_path}: {e}")
        warnings.warn(f"Failed to load {config_path}: {e}", stacklevel=2)


# Load config file at module import time, before Config is instantiated
_load_config_file()


def get_str(key: str, default: Optional[str] = None) -> Optional[str]:
def get_str(key: str, default: str | None = None) -> str | None:
    """Get string env var."""
    return os.environ.get(key, default)

@@ -130,9 +129,65 @@ def get_path(key: str, default: str) -> Path:
class Config:
    """Configuration loaded from environment variables."""

    def __init__(self):
        # Connection settings
    mesh_transport: str
    mesh_serial_port: str | None
    mesh_serial_baud: int
    mesh_tcp_host: str | None
    mesh_tcp_port: int
    mesh_ble_addr: str | None
    mesh_ble_pin: str | None
    mesh_debug: bool

    # Remote repeater identity
    repeater_name: str | None
    repeater_key_prefix: str | None
    repeater_password: str | None

    # Intervals and timeouts
    companion_step: int
    repeater_step: int
    remote_timeout_s: int
    remote_retry_attempts: int
    remote_retry_backoff_s: int
    remote_cb_fails: int
    remote_cb_cooldown_s: int

    # Telemetry
    telemetry_enabled: bool
    telemetry_timeout_s: int
    telemetry_retry_attempts: int
    telemetry_retry_backoff_s: int

    # Paths
    state_dir: Path
    out_dir: Path

    # Report location metadata
    report_location_name: str | None
    report_location_short: str | None
    report_lat: float
    report_lon: float
    report_elev: float
    report_elev_unit: str | None

    # Node display names
    repeater_display_name: str | None
    companion_display_name: str | None
    repeater_pubkey_prefix: str | None
    companion_pubkey_prefix: str | None
    repeater_hardware: str | None
    companion_hardware: str | None

    # Radio configuration
    radio_frequency: str | None
    radio_bandwidth: str | None
    radio_spread_factor: str | None
    radio_coding_rate: str | None

    def __init__(self) -> None:
        # Connection settings
        self.mesh_transport = get_str("MESH_TRANSPORT", "serial")
        self.mesh_transport = get_str("MESH_TRANSPORT", "serial") or "serial"
        self.mesh_serial_port = get_str("MESH_SERIAL_PORT")  # None = auto-detect
        self.mesh_serial_baud = get_int("MESH_SERIAL_BAUD", 115200)
        self.mesh_tcp_host = get_str("MESH_TCP_HOST", "localhost")
@@ -203,7 +258,7 @@ class Config:


# Global config instance
_config: Optional[Config] = None
_config: Config | None = None


def get_config() -> Config:
@@ -1,14 +1,14 @@
"""Shared formatting functions for display values."""

from datetime import datetime
from typing import Any, Optional, Union

Number = Union[int, float]
from typing import Any

from .battery import voltage_to_percentage

Number = int | float


def format_time(ts: Optional[int]) -> str:
def format_time(ts: int | None) -> str:
    """Format Unix timestamp to human readable string."""
    if ts is None:
        return "N/A"
@@ -28,14 +28,14 @@ def format_value(value: Any) -> str:
    return str(value)


def format_number(value: Optional[int]) -> str:
def format_number(value: int | None) -> str:
    """Format an integer with thousands separators."""
    if value is None:
        return "N/A"
    return f"{value:,}"


def format_duration(seconds: Optional[int]) -> str:
def format_duration(seconds: int | None) -> str:
    """Format duration in seconds to human readable string (days, hours, minutes, seconds)."""
    if seconds is None:
        return "N/A"
@@ -57,7 +57,7 @@ def format_duration(seconds: Optional[int]) -> str:
    return " ".join(parts)


def format_uptime(seconds: Optional[int]) -> str:
def format_uptime(seconds: int | None) -> str:
    """Format uptime seconds to human readable string (days, hours, minutes)."""
    if seconds is None:
        return "N/A"
@@ -76,7 +76,7 @@ def format_uptime(seconds: Optional[int]) -> str:
    return " ".join(parts)


def format_voltage_with_pct(mv: Optional[float]) -> str:
def format_voltage_with_pct(mv: float | None) -> str:
    """Format millivolts as voltage with battery percentage."""
    if mv is None:
        return "N/A"
@@ -85,7 +85,7 @@ def format_voltage_with_pct(mv: Optional[float]) -> str:
    return f"{v:.2f} V ({pct:.0f}%)"


def format_compact_number(value: Optional[Number], precision: int = 1) -> str:
def format_compact_number(value: Number | None, precision: int = 1) -> str:
    """Format a number using compact notation (k, M suffixes).

    Rules:
@@ -119,7 +119,7 @@ def format_compact_number(value: Optional[Number], precision: int = 1) -> str:
    return str(int(value))


def format_duration_compact(seconds: Optional[int]) -> str:
def format_duration_compact(seconds: int | None) -> str:
    """Format duration showing only the two most significant units.

    Uses truncation (floor), not rounding.
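The body of `format_compact_number` is outside this hunk; a minimal sketch of the k/M notation its docstring describes (thresholds and rounding here are assumptions, not the module's exact rules):

```python
def compact(value: float, precision: int = 1) -> str:
    """Render large counts with k/M suffixes, e.g. 12300 -> '12.3k'.

    Hypothetical stand-in for format_compact_number above.
    """
    if value >= 1_000_000:
        return f"{value / 1_000_000:.{precision}f}M"
    if value >= 1_000:
        return f"{value / 1_000:.{precision}f}k"
    return str(int(value))


examples = [compact(950), compact(12_300), compact(2_500_000)]
```

Compact notation keeps traffic counters like `nb_recv` readable in the narrow chart-footer cells without thousands separators.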
@@ -1,27 +1,40 @@
"""HTML rendering helpers using Jinja2 templates."""

from __future__ import annotations

import calendar
import shutil
from datetime import datetime
from pathlib import Path
from typing import Any, Optional
from typing import TYPE_CHECKING, Any, TypedDict

from jinja2 import Environment, PackageLoader, select_autoescape

from . import log
from .charts import load_chart_stats
from .env import get_config
from .formatters import (
    format_time,
    format_value,
    format_number,
    format_duration,
    format_uptime,
    format_compact_number,
    format_duration,
    format_duration_compact,
    format_number,
    format_time,
    format_uptime,
    format_value,
)
from .charts import load_chart_stats
from .metrics import get_chart_metrics, get_metric_label
from . import log

if TYPE_CHECKING:
    from .reports import MonthlyAggregate, YearlyAggregate


class MetricDisplay(TypedDict, total=False):
    """A metric display item for the UI."""

    label: str
    value: str
    unit: str | None
    raw_value: int

# Status indicator thresholds (seconds)
STATUS_ONLINE_THRESHOLD = 1800  # 30 minutes
@@ -76,7 +89,7 @@ COMPANION_CHART_GROUPS = [
|
||||
]
|
||||
|
||||
# Singleton Jinja2 environment
|
||||
_jinja_env: Optional[Environment] = None
|
||||
_jinja_env: Environment | None = None
|
||||
|
||||
|
||||
def get_jinja_env() -> Environment:
|
||||
@@ -110,7 +123,7 @@ def get_jinja_env() -> Environment:
|
||||
return env
|
||||
|
||||
|
||||
def get_status(ts: Optional[int]) -> tuple[str, str]:
|
||||
def get_status(ts: int | None) -> tuple[str, str]:
|
||||
"""Determine status based on timestamp age.
|
||||
|
||||
Returns:
|
||||
@@ -128,7 +141,7 @@ def get_status(ts: Optional[int]) -> tuple[str, str]:
|
||||
return ("offline", "Offline")
|
||||
|
||||
|
||||
def build_repeater_metrics(row: Optional[dict]) -> dict:
|
||||
def build_repeater_metrics(row: dict | None) -> dict:
|
||||
"""Build metrics data from repeater database row.
|
||||
|
||||
Args:
|
||||
@@ -242,7 +255,7 @@ def build_repeater_metrics(row: Optional[dict]) -> dict:
|
||||
}
|
||||
|
||||
|
||||
def build_companion_metrics(row: Optional[dict]) -> dict:
|
||||
def build_companion_metrics(row: dict | None) -> dict:
|
||||
"""Build metrics data from companion database row.
|
||||
|
||||
Args:
|
||||
@@ -296,7 +309,7 @@ def build_companion_metrics(row: Optional[dict]) -> dict:
|
||||
})
|
||||
|
||||
# Secondary metrics (empty for companion)
|
||||
secondary_metrics = []
|
||||
secondary_metrics: list[MetricDisplay] = []
|
||||
|
||||
# Traffic metrics for companion
|
||||
traffic_metrics = []
|
||||
@@ -402,7 +415,7 @@ def build_radio_config() -> list[dict]:
|
||||
]
|
||||
|
||||
|
||||
def _format_stat_value(value: Optional[float], metric: str) -> str:
|
||||
def _format_stat_value(value: float | None, metric: str) -> str:
|
||||
"""Format a statistic value for display in chart footer.
|
||||
|
||||
Args:
|
||||
@@ -444,7 +457,7 @@ def _format_stat_value(value: Optional[float], metric: str) -> str:
|
||||
return f"{value:.2f}"
|
||||
|
||||
|
||||
def _load_svg_content(path: Path) -> Optional[str]:
|
||||
def _load_svg_content(path: Path) -> str | None:
|
||||
"""Load SVG file content for inline embedding.
|
||||
|
||||
Args:
|
||||
@@ -466,7 +479,7 @@ def _load_svg_content(path: Path) -> Optional[str]:
|
||||
def build_chart_groups(
|
||||
role: str,
|
||||
period: str,
|
||||
chart_stats: Optional[dict] = None,
|
||||
chart_stats: dict | None = None,
|
||||
) -> list[dict]:
|
||||
"""Build chart groups for template.
|
||||
|
||||
@@ -523,7 +536,8 @@ def build_chart_groups(
|
||||
{"label": "Max", "value": _format_stat_value(max_val, metric)},
|
||||
]
|
||||
|
||||
chart_data = {
|
||||
# Build chart data for template - mixed types require Any
|
||||
chart_data: dict[str, Any] = {
|
||||
"label": get_metric_label(metric),
|
||||
"metric": metric,
|
||||
"current": current_formatted,
|
||||
@@ -555,7 +569,7 @@ def build_chart_groups(
|
||||
def build_page_context(
|
||||
role: str,
|
||||
period: str,
|
||||
row: Optional[dict],
|
||||
row: dict | None,
|
||||
at_root: bool,
|
||||
) -> dict[str, Any]:
|
||||
"""Build template context dictionary for node pages.
|
||||
@@ -569,16 +583,10 @@ def build_page_context(
|
||||
cfg = get_config()
|
||||
|
||||
# Get node name from config
|
||||
if role == "repeater":
|
||||
node_name = cfg.repeater_display_name
|
||||
else:
|
||||
node_name = cfg.companion_display_name
|
||||
node_name = cfg.repeater_display_name if role == "repeater" else cfg.companion_display_name
|
||||
|
||||
# Pubkey prefix from config
|
||||
if role == "repeater":
|
||||
pubkey_pre = cfg.repeater_pubkey_prefix
|
||||
else:
|
||||
pubkey_pre = cfg.companion_pubkey_prefix
|
||||
pubkey_pre = cfg.repeater_pubkey_prefix if role == "repeater" else cfg.companion_pubkey_prefix
|
||||
|
||||
# Status based on timestamp
|
||||
ts = row.get("ts") if row else None
|
||||
@@ -675,7 +683,7 @@ def build_page_context(
 def render_node_page(
     role: str,
     period: str,
-    row: Optional[dict],
+    row: dict | None,
     at_root: bool = False,
 ) -> str:
     """Render a node page (companion or repeater).
@@ -689,7 +697,7 @@ def render_node_page(
     env = get_jinja_env()
     context = build_page_context(role, period, row, at_root)
     template = env.get_template("node.html")
-    return template.render(**context)
+    return str(template.render(**context))


 def copy_static_assets():
@@ -712,8 +720,8 @@ def copy_static_assets():


 def write_site(
-    companion_row: Optional[dict],
-    repeater_row: Optional[dict],
+    companion_row: dict | None,
+    repeater_row: dict | None,
 ) -> list[Path]:
     """
     Write all static site pages.
@@ -794,8 +802,8 @@ def _fmt_val_plain(value: float | None, fmt: str = ".2f") -> str:


 def build_monthly_table_data(
-    agg: "MonthlyAggregate", role: str
-) -> tuple[list[dict], list[dict], list[dict]]:
+    agg: MonthlyAggregate, role: str
+) -> tuple[list[dict[str, Any]], list[dict[str, Any]], list[dict[str, Any]]]:
     """Build table column groups, headers and rows for a monthly report.

     Args:
@@ -807,6 +815,11 @@ def build_monthly_table_data(
     """
     from .reports import MetricStats

+    # Define types upfront for mypy
+    col_groups: list[dict[str, Any]]
+    headers: list[dict[str, Any]]
+    rows: list[dict[str, Any]]
+
     if role == "repeater":
         # Column groups matching redesign/reports/monthly.html
         col_groups = [
@@ -986,8 +999,8 @@ def _fmt_val_month(value: float | None, time_obj, fmt: str = ".2f") -> str:


 def build_yearly_table_data(
-    agg: "YearlyAggregate", role: str
-) -> tuple[list[dict], list[dict], list[dict]]:
+    agg: YearlyAggregate, role: str
+) -> tuple[list[dict[str, Any]], list[dict[str, Any]], list[dict[str, Any]]]:
     """Build table column groups, headers and rows for a yearly report.

     Args:
@@ -999,6 +1012,11 @@ def build_yearly_table_data(
     """
     from .reports import MetricStats

+    # Define types upfront for mypy
+    col_groups: list[dict[str, Any]]
+    headers: list[dict[str, Any]]
+    rows: list[dict[str, Any]]
+
     if role == "repeater":
         # Column groups matching redesign/reports/yearly.html
         col_groups = [
@@ -1166,8 +1184,8 @@ def render_report_page(
     agg: Any,
     node_name: str,
     report_type: str,
-    prev_report: Optional[dict] = None,
-    next_report: Optional[dict] = None,
+    prev_report: dict | None = None,
+    next_report: dict | None = None,
 ) -> str:
     """Render a report page (monthly or yearly).

@@ -1239,7 +1257,7 @@ def render_report_page(
     }

     template = env.get_template("report.html")
-    return template.render(**context)
+    return str(template.render(**context))


 def render_reports_index(report_sections: list[dict]) -> str:
@@ -1276,4 +1294,4 @@ def render_reports_index(report_sections: list[dict]) -> str:
     }

     template = env.get_template("report_index.html")
-    return template.render(**context)
+    return str(template.render(**context))
@@ -2,6 +2,7 @@

 import sys
+from datetime import datetime

 from .env import get_config
@@ -2,16 +2,17 @@

 import asyncio
 import fcntl
+from collections.abc import AsyncIterator, Coroutine
 from contextlib import asynccontextmanager
 from pathlib import Path
-from typing import Any, AsyncIterator, Callable, Coroutine, Optional
+from typing import Any

-from .env import get_config
 from . import log
+from .env import get_config

 # Try to import meshcore - will fail gracefully if not installed
 try:
-    from meshcore import MeshCore, EventType
+    from meshcore import EventType, MeshCore
     MESHCORE_AVAILABLE = True
 except ImportError:
     MESHCORE_AVAILABLE = False
@@ -19,7 +20,7 @@ except ImportError:
     EventType = None


-def auto_detect_serial_port() -> Optional[str]:
+def auto_detect_serial_port() -> str | None:
     """
     Auto-detect a suitable serial port for MeshCore device.
     Prefers /dev/ttyACM* or /dev/ttyUSB* devices.
@@ -39,20 +40,20 @@ def auto_detect_serial_port() -> Optional[str]:
     for port in ports:
         if "ttyACM" in port.device:
             log.info(f"Auto-detected serial port: {port.device} ({port.description})")
-            return port.device
+            return str(port.device)

     for port in ports:
         if "ttyUSB" in port.device:
             log.info(f"Auto-detected serial port: {port.device} ({port.description})")
-            return port.device
+            return str(port.device)

     # Fall back to first available
     port = ports[0]
     log.info(f"Using first available port: {port.device} ({port.description})")
-    return port.device
+    return str(port.device)


-async def connect_from_env() -> Optional[Any]:
+async def connect_from_env() -> Any | None:
     """
     Connect to MeshCore device using environment configuration.

@@ -127,19 +128,19 @@ async def _acquire_lock_async(
         try:
             fcntl.flock(lock_file.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
             return
-        except BlockingIOError:
+        except BlockingIOError as err:
             if loop.time() >= deadline:
                 raise TimeoutError(
                     f"Could not acquire serial lock within {timeout}s. "
                     "Another process may be using the serial port."
-                ) 
+                ) from err
             await asyncio.sleep(poll_interval)

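The `_acquire_lock_async` hunk above polls a non-blocking `flock` until a deadline, chaining the original `BlockingIOError` with `from err`. A self-contained sketch of that pattern (Unix-only; names and timeouts here are illustrative, not the project's API):

```python
import asyncio
import fcntl
import tempfile

async def acquire_lock(lock_file, timeout: float = 1.0, poll_interval: float = 0.05) -> bool:
    # Poll LOCK_EX | LOCK_NB until the lock is free or the deadline passes.
    loop = asyncio.get_running_loop()
    deadline = loop.time() + timeout
    while True:
        try:
            fcntl.flock(lock_file.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
            return True
        except BlockingIOError as err:
            if loop.time() >= deadline:
                # Chain the original error, as the diff does with "from err".
                raise TimeoutError("could not acquire lock") from err
            await asyncio.sleep(poll_interval)

with tempfile.NamedTemporaryFile() as f:
    # Uncontended lock: acquired on the first try.
    print(asyncio.run(acquire_lock(f)))  # True
```

Chaining via `from err` preserves the `BlockingIOError` as `__cause__` on the `TimeoutError`, which is what the Ruff rule behind this change enforces.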
 @asynccontextmanager
 async def connect_with_lock(
     lock_timeout: float = 60.0,
-) -> AsyncIterator[Optional[Any]]:
+) -> AsyncIterator[Any | None]:
     """Connect to MeshCore with serial port locking to prevent concurrent access.

     For serial transport: Acquires exclusive file lock before connecting.
@@ -162,7 +163,7 @@ async def connect_with_lock(
     lock_path.parent.mkdir(parents=True, exist_ok=True)

     # Use 'a' mode: doesn't truncate, creates if missing
-    lock_file = open(lock_path, "a")
+    lock_file = open(lock_path, "a")  # noqa: SIM115 - must stay open for lock
     try:
         await _acquire_lock_async(lock_file, timeout=lock_timeout)
         log.debug(f"Acquired serial lock: {lock_path}")
@@ -193,7 +194,7 @@ async def run_command(
     mc: Any,
     cmd_coro: Coroutine,
     name: str,
-) -> tuple[bool, Optional[str], Optional[dict], Optional[str]]:
+) -> tuple[bool, str | None, dict | None, str | None]:
     """
     Run a MeshCore command and capture result.

@@ -218,10 +219,7 @@ async def run_command(
     # Extract event type name
     event_type_name = None
     if hasattr(event, "type"):
-        if hasattr(event.type, "name"):
-            event_type_name = event.type.name
-        else:
-            event_type_name = str(event.type)
+        event_type_name = event.type.name if hasattr(event.type, "name") else str(event.type)

     # Check for error
     if EventType and hasattr(event, "type") and event.type == EventType.ERROR:
@@ -246,13 +244,13 @@ async def run_command(
         log.debug(f"Command {name} returned: {event_type_name}")
         return (True, event_type_name, payload, None)

-    except asyncio.TimeoutError:
+    except TimeoutError:
         return (False, None, None, "Timeout")
     except Exception as e:
         return (False, None, None, str(e))


-def get_contact_by_name(mc: Any, name: str) -> Optional[Any]:
+def get_contact_by_name(mc: Any, name: str) -> Any | None:
     """
     Find a contact by advertised name.

@@ -276,7 +274,7 @@ def get_contact_by_name(mc: Any, name: str) -> Optional[Any]:
     return None


-def get_contact_by_key_prefix(mc: Any, prefix: str) -> Optional[Any]:
+def get_contact_by_key_prefix(mc: Any, prefix: str) -> Any | None:
     """
     Find a contact by public key prefix.

@@ -12,7 +12,6 @@ See docs/firmware-responses.md for the complete field reference.
 """

 from dataclasses import dataclass
-from typing import Optional


 @dataclass(frozen=True)
@@ -30,7 +29,7 @@ class MetricConfig:
     unit: str
     type: str = "gauge"
     scale: float = 1.0
-    transform: Optional[str] = None
+    transform: str | None = None


 # =============================================================================
@@ -223,7 +222,7 @@ def get_chart_metrics(role: str) -> list[str]:
     raise ValueError(f"Unknown role: {role}")


-def get_metric_config(metric: str) -> Optional[MetricConfig]:
+def get_metric_config(metric: str) -> MetricConfig | None:
     """Get configuration for a metric.

     Args:

@@ -14,12 +14,11 @@ Metric names use firmware field names directly:
 """

 import calendar
 import json
 from dataclasses import dataclass, field
-from datetime import date, datetime, timedelta
-from typing import Any, Optional
+from datetime import date, datetime
+from typing import Any

-from .db import get_connection, get_metrics_for_period, VALID_ROLES
+from .db import VALID_ROLES, get_connection, get_metrics_for_period
 from .metrics import (
     is_counter_metric,
 )
@@ -88,12 +87,12 @@ class MetricStats:
     For counter metrics: total (sum of positive deltas), reboot_count.
     """

-    mean: Optional[float] = None
-    min_value: Optional[float] = None
-    min_time: Optional[datetime] = None
-    max_value: Optional[float] = None
-    max_time: Optional[datetime] = None
-    total: Optional[int] = None  # For counters: sum of positive deltas
+    mean: float | None = None
+    min_value: float | None = None
+    min_time: datetime | None = None
+    max_value: float | None = None
+    max_time: datetime | None = None
+    total: int | None = None  # For counters: sum of positive deltas
     count: int = 0
     reboot_count: int = 0  # Number of counter resets detected

@@ -177,7 +176,7 @@ def get_rows_for_date(role: str, d: date) -> list[dict[str, Any]]:

 def compute_counter_total(
     values: list[tuple[datetime, int]],
-) -> tuple[Optional[int], int]:
+) -> tuple[int | None, int]:
     """Compute total for a counter metric, handling reboots.

     Sums positive deltas between consecutive readings. Negative deltas
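The `compute_counter_total` docstring describes summing positive deltas and treating a negative delta as a counter reset (reboot). A simplified illustration of that idea (not the project's exact implementation):

```python
from datetime import datetime, timedelta

def counter_total(values: list[tuple[datetime, int]]) -> tuple[int, int]:
    # Sum positive deltas between consecutive readings; a negative delta
    # indicates the counter reset (device reboot) and is not subtracted.
    total, reboots = 0, 0
    for (_, prev), (_, cur) in zip(values, values[1:]):
        delta = cur - prev
        if delta >= 0:
            total += delta
        else:
            reboots += 1
    return total, reboots

t0 = datetime(2025, 1, 1)
readings = [(t0, 100), (t0 + timedelta(hours=1), 150), (t0 + timedelta(hours=2), 20)]
print(counter_total(readings))  # (50, 1) - the drop from 150 to 20 is a reset
```

This is why counters report a `total` and a `reboot_count` rather than a plain max-minus-min, which would go negative across a reset.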
@@ -311,8 +310,8 @@ def _aggregate_daily_gauge_to_summary(
     """
     total_sum = 0.0
     total_count = 0
-    overall_min: Optional[tuple[float, datetime]] = None
-    overall_max: Optional[tuple[float, datetime]] = None
+    overall_min: tuple[float, datetime] | None = None
+    overall_max: tuple[float, datetime] | None = None

     for daily in daily_list:
         if ds_name not in daily.metrics or not daily.metrics[ds_name].has_data:
@@ -326,14 +325,20 @@ def _aggregate_daily_gauge_to_summary(
         total_count += stats.count

         # Track overall min
-        if stats.min_value is not None and stats.min_time is not None:
-            if overall_min is None or stats.min_value < overall_min[0]:
-                overall_min = (stats.min_value, stats.min_time)
+        if (
+            stats.min_value is not None
+            and stats.min_time is not None
+            and (overall_min is None or stats.min_value < overall_min[0])
+        ):
+            overall_min = (stats.min_value, stats.min_time)

         # Track overall max
-        if stats.max_value is not None and stats.max_time is not None:
-            if overall_max is None or stats.max_value > overall_max[0]:
-                overall_max = (stats.max_value, stats.max_time)
+        if (
+            stats.max_value is not None
+            and stats.max_time is not None
+            and (overall_max is None or stats.max_value > overall_max[0])
+        ):
+            overall_max = (stats.max_value, stats.max_time)

     if total_count == 0:
         return MetricStats()
@@ -422,8 +427,8 @@ def _aggregate_monthly_gauge_to_summary(
     """Aggregate monthly gauge stats into a yearly summary."""
     total_sum = 0.0
     total_count = 0
-    overall_min: Optional[tuple[float, datetime]] = None
-    overall_max: Optional[tuple[float, datetime]] = None
+    overall_min: tuple[float, datetime] | None = None
+    overall_max: tuple[float, datetime] | None = None

     for monthly in monthly_list:
         if ds_name not in monthly.summary or not monthly.summary[ds_name].has_data:
@@ -435,13 +440,19 @@ def _aggregate_monthly_gauge_to_summary(
         total_sum += stats.mean * stats.count
         total_count += stats.count

-        if stats.min_value is not None and stats.min_time is not None:
-            if overall_min is None or stats.min_value < overall_min[0]:
-                overall_min = (stats.min_value, stats.min_time)
+        if (
+            stats.min_value is not None
+            and stats.min_time is not None
+            and (overall_min is None or stats.min_value < overall_min[0])
+        ):
+            overall_min = (stats.min_value, stats.min_time)

-        if stats.max_value is not None and stats.max_time is not None:
-            if overall_max is None or stats.max_value > overall_max[0]:
-                overall_max = (stats.max_value, stats.max_time)
+        if (
+            stats.max_value is not None
+            and stats.max_time is not None
+            and (overall_max is None or stats.max_value > overall_max[0])
+        ):
+            overall_max = (stats.max_value, stats.max_time)

     if total_count == 0:
         return MetricStats()
@@ -496,12 +507,18 @@ def aggregate_yearly(role: str, year: int) -> YearlyAggregate:
     """
     agg = YearlyAggregate(year=year, role=role)
     metrics = get_metrics_for_role(role)
+    today = date.today()

-    # Process month by month to limit memory usage
-    for month in range(1, 13):
-        # Don't aggregate future months
-        if date(year, month, 1) > date.today():
-            break
+    periods = get_available_periods(role)
+    months_with_data = sorted({month for y, month in periods if y == year})
+
+    if year > today.year:
+        months_with_data = []
+    elif year == today.year:
+        months_with_data = [month for month in months_with_data if month <= today.month]
+
+    # Process only months that have data to avoid unnecessary daily scans.
+    for month in months_with_data:
         monthly = aggregate_monthly(role, year, month)
         if monthly.daily:  # Has data
             agg.monthly.append(monthly)
@@ -624,28 +641,28 @@ class LocationInfo:
     )


-def _fmt_val(val: Optional[float], width: int = 6, decimals: int = 1) -> str:
+def _fmt_val(val: float | None, width: int = 6, decimals: int = 1) -> str:
     """Format a value with fixed width, or dashes if None."""
     if val is None:
         return "-".center(width)
     return f"{val:>{width}.{decimals}f}"


-def _fmt_int(val: Optional[int], width: int = 6) -> str:
+def _fmt_int(val: int | None, width: int = 6) -> str:
     """Format an integer with fixed width and comma separators, or dashes if None."""
     if val is None:
         return "-".center(width)
     return f"{val:>{width},}"


-def _fmt_time(dt: Optional[datetime], fmt: str = "%H:%M") -> str:
+def _fmt_time(dt: datetime | None, fmt: str = "%H:%M") -> str:
     """Format a datetime, or dashes if None."""
     if dt is None:
         return "--:--"
     return dt.strftime(fmt)


-def _fmt_day(dt: Optional[datetime]) -> str:
+def _fmt_day(dt: datetime | None) -> str:
     """Format datetime as day number, or dashes if None."""
     if dt is None:
         return "--"
@@ -669,10 +686,7 @@ class Column:
         if value is None:
             text = "-"
         elif isinstance(value, int):
-            if self.comma_sep:
-                text = f"{value:,}"
-            else:
-                text = str(value)
+            text = f"{value:,}" if self.comma_sep else str(value)
         elif isinstance(value, float):
             text = f"{value:.{self.decimals}f}"
         else:
@@ -688,7 +702,7 @@ class Column:

 def _format_row(columns: list[Column], values: list[Any]) -> str:
     """Format a row of values using column specs."""
-    return "".join(col.format(val) for col, val in zip(columns, values))
+    return "".join(col.format(val) for col, val in zip(columns, values, strict=False))

 def _format_separator(columns: list[Column], char: str = "-") -> str:
@@ -706,10 +720,7 @@ def _get_bat_v(m: dict[str, MetricStats], role: str) -> MetricStats:
     Returns:
         MetricStats with values in volts
     """
-    if role == "companion":
-        bat = m.get("battery_mv", MetricStats())
-    else:
-        bat = m.get("bat", MetricStats())
+    bat = m.get("battery_mv", MetricStats()) if role == "companion" else m.get("bat", MetricStats())

     if not bat.has_data:
         return bat

@@ -3,11 +3,12 @@

 import asyncio
 import json
 import time
+from collections.abc import Callable, Coroutine
 from pathlib import Path
-from typing import Any, Callable, Coroutine, Optional, TypeVar
+from typing import Any, TypeVar

-from .env import get_config
 from . import log
+from .env import get_config

 T = TypeVar("T")

@@ -88,7 +89,7 @@ async def with_retries(
     attempts: int = 2,
     backoff_s: float = 4.0,
     name: str = "operation",
-) -> tuple[bool, Optional[T], Optional[Exception]]:
+) -> tuple[bool, T | None, Exception | None]:
     """
     Execute async function with retries.

@@ -101,7 +102,7 @@ async def with_retries(
     Returns:
         (success, result, last_exception)
     """
-    last_exception: Optional[Exception] = None
+    last_exception: Exception | None = None

     for attempt in range(1, attempts + 1):
         try:

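The `with_retries` hunks above show a helper returning a `(success, result, last_exception)` tuple. A minimal sketch of such a helper (illustrative only; the backoff is shortened so it runs quickly, and the real function also logs under `name`):

```python
import asyncio

async def with_retries(fn, attempts: int = 2, backoff_s: float = 0.01):
    # Returns (success, result, last_exception), mirroring the signature above.
    last_exception: Exception | None = None
    for attempt in range(1, attempts + 1):
        try:
            return True, await fn(), None
        except Exception as e:
            last_exception = e
            if attempt < attempts:
                await asyncio.sleep(backoff_s)
    return False, None, last_exception

calls = {"n": 0}

async def flaky():
    # Fails once, then succeeds.
    calls["n"] += 1
    if calls["n"] < 2:
        raise RuntimeError("transient")
    return "ok"

print(asyncio.run(with_retries(flaky)))  # (True, 'ok', None)
```

Returning the last exception instead of raising lets callers record a partial failure and continue polling on the next cycle.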
@@ -1,6 +1,7 @@
 """Telemetry data extraction from Cayenne LPP format."""

 from typing import Any

+from . import log

 __all__ = ["extract_lpp_from_payload", "extract_telemetry_metrics"]
@@ -83,9 +84,7 @@ def extract_telemetry_metrics(lpp_data: Any) -> dict[str, float]:

     # Note: Check bool before int because bool is a subclass of int in Python.
     # Some sensors may report digital on/off values as booleans.
-    if isinstance(value, bool):
-        metrics[base_key] = float(value)
-    elif isinstance(value, (int, float)):
+    if isinstance(value, (bool, int, float)):
         metrics[base_key] = float(value)
     elif isinstance(value, dict):
         for subkey, subval in value.items():
@@ -94,9 +93,7 @@ def extract_telemetry_metrics(lpp_data: Any) -> dict[str, float]:
             subkey_clean = subkey.strip().lower().replace(" ", "_")
             if not subkey_clean:
                 continue
-            if isinstance(subval, bool):
-                metrics[f"{base_key}.{subkey_clean}"] = float(subval)
-            elif isinstance(subval, (int, float)):
+            if isinstance(subval, (bool, int, float)):
                 metrics[f"{base_key}.{subkey_clean}"] = float(subval)

     return metrics
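The telemetry hunks collapse the separate `bool` and numeric branches into one `isinstance(value, (bool, int, float))` check, which is safe precisely because `bool` is a subclass of `int` and both branches did the same `float(...)` conversion:

```python
# bool is a subclass of int, so one isinstance check covers digital
# on/off readings as well as numeric sensor values.
assert isinstance(True, int)

def to_metric(value):
    if isinstance(value, (bool, int, float)):
        return float(value)
    return None  # non-numeric payloads are skipped

print(to_metric(True))   # 1.0
print(to_metric(21.5))   # 21.5
print(to_metric("n/a"))  # None
```

The "check bool before int" ordering only mattered when the branches behaved differently; once both coerce with `float(...)`, a single tuple check is equivalent.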
@@ -1,143 +1,331 @@
|
||||
/**
|
||||
* Chart tooltip enhancement for MeshCore Stats
|
||||
* Chart Tooltip Enhancement for MeshCore Stats
|
||||
*
|
||||
* Progressive enhancement: charts work fully without JS,
|
||||
* but this adds interactive tooltips on hover.
|
||||
* Progressive enhancement: charts display fully without JavaScript.
|
||||
* This module adds interactive tooltips showing datetime and value on hover,
|
||||
* with an indicator dot that follows the data line.
|
||||
*
|
||||
* Data sources:
|
||||
* - Data points: path.dataset.points or svg.dataset.points (JSON array of {ts, v})
|
||||
* - Time range: svg.dataset.xStart, svg.dataset.xEnd (Unix timestamps)
|
||||
* - Value range: svg.dataset.yMin, svg.dataset.yMax
|
||||
* - Plot bounds: Derived from clipPath rect or line path bounding box
|
||||
*/
|
||||
(function() {
|
||||
(function () {
|
||||
'use strict';
|
||||
|
||||
// Create tooltip element
|
||||
const tooltip = document.createElement('div');
|
||||
tooltip.className = 'chart-tooltip';
|
||||
tooltip.innerHTML = '<div class="tooltip-time"></div><div class="tooltip-value"></div>';
|
||||
document.body.appendChild(tooltip);
|
||||
// ============================================================================
|
||||
// Configuration
|
||||
// ============================================================================
|
||||
|
||||
const tooltipTime = tooltip.querySelector('.tooltip-time');
|
||||
const tooltipValue = tooltip.querySelector('.tooltip-value');
|
||||
|
||||
// Track the current indicator element
|
||||
let currentIndicator = null;
|
||||
let currentSvg = null;
|
||||
|
||||
// Metric display labels and units (using firmware field names)
|
||||
const metricLabels = {
|
||||
// Companion metrics
|
||||
'battery_mv': { label: 'Voltage', unit: 'V', decimals: 2 },
|
||||
'uptime_secs': { label: 'Uptime', unit: 'days', decimals: 2 },
|
||||
'contacts': { label: 'Contacts', unit: '', decimals: 0 },
|
||||
'recv': { label: 'Received', unit: '/min', decimals: 1 },
|
||||
'sent': { label: 'Sent', unit: '/min', decimals: 1 },
|
||||
|
||||
// Repeater metrics
|
||||
'bat': { label: 'Voltage', unit: 'V', decimals: 2 },
|
||||
'bat_pct': { label: 'Charge', unit: '%', decimals: 0 },
|
||||
'uptime': { label: 'Uptime', unit: 'days', decimals: 2 },
|
||||
'last_rssi': { label: 'RSSI', unit: 'dBm', decimals: 0 },
|
||||
'last_snr': { label: 'SNR', unit: 'dB', decimals: 1 },
|
||||
'noise_floor': { label: 'Noise', unit: 'dBm', decimals: 0 },
|
||||
'tx_queue_len': { label: 'Queue', unit: '', decimals: 0 },
|
||||
'nb_recv': { label: 'Received', unit: '/min', decimals: 1 },
|
||||
'nb_sent': { label: 'Sent', unit: '/min', decimals: 1 },
|
||||
'airtime': { label: 'TX Air', unit: 's/min', decimals: 2 },
|
||||
'rx_airtime': { label: 'RX Air', unit: 's/min', decimals: 2 },
|
||||
'flood_dups': { label: 'Dropped', unit: '/min', decimals: 1 },
|
||||
'direct_dups': { label: 'Dropped', unit: '/min', decimals: 1 },
|
||||
'sent_flood': { label: 'Sent', unit: '/min', decimals: 1 },
|
||||
'recv_flood': { label: 'Received', unit: '/min', decimals: 1 },
|
||||
'sent_direct': { label: 'Sent', unit: '/min', decimals: 1 },
|
||||
'recv_direct': { label: 'Received', unit: '/min', decimals: 1 },
|
||||
var CONFIG = {
|
||||
tooltipOffset: 15,
|
||||
viewportPadding: 10,
|
||||
indicatorRadius: 5,
|
||||
indicatorStrokeWidth: 2,
|
||||
colors: {
|
||||
light: { fill: '#b45309', stroke: '#ffffff' },
|
||||
dark: { fill: '#f59e0b', stroke: '#0f1114' }
|
||||
}
|
||||
};
|
||||
|
||||
/**
|
||||
* Format a timestamp as a readable date/time string
|
||||
* Metric display configuration keyed by firmware field name.
|
||||
* Each entry defines how to format values for that metric.
|
||||
*/
|
||||
function formatTime(ts, period) {
|
||||
const date = new Date(ts * 1000);
|
||||
const options = {
|
||||
var METRIC_CONFIG = {
|
||||
// Companion metrics
|
||||
battery_mv: { label: 'Voltage', unit: 'V', decimals: 2 },
|
||||
uptime_secs: { label: 'Uptime', unit: 'days', decimals: 2 },
|
||||
contacts: { label: 'Contacts', unit: '', decimals: 0 },
|
||||
recv: { label: 'Received', unit: '/min', decimals: 1 },
|
||||
sent: { label: 'Sent', unit: '/min', decimals: 1 },
|
||||
|
||||
// Repeater metrics
|
||||
bat: { label: 'Voltage', unit: 'V', decimals: 2 },
|
||||
bat_pct: { label: 'Charge', unit: '%', decimals: 0 },
|
||||
uptime: { label: 'Uptime', unit: 'days', decimals: 2 },
|
||||
last_rssi: { label: 'RSSI', unit: 'dBm', decimals: 0 },
|
||||
last_snr: { label: 'SNR', unit: 'dB', decimals: 1 },
|
||||
noise_floor: { label: 'Noise', unit: 'dBm', decimals: 0 },
|
||||
tx_queue_len: { label: 'Queue', unit: '', decimals: 0 },
|
||||
nb_recv: { label: 'Received', unit: '/min', decimals: 1 },
|
||||
nb_sent: { label: 'Sent', unit: '/min', decimals: 1 },
|
||||
airtime: { label: 'TX Air', unit: 's/min', decimals: 2 },
|
||||
rx_airtime: { label: 'RX Air', unit: 's/min', decimals: 2 },
|
||||
flood_dups: { label: 'Dropped', unit: '/min', decimals: 1 },
|
||||
direct_dups: { label: 'Dropped', unit: '/min', decimals: 1 },
|
||||
sent_flood: { label: 'Sent', unit: '/min', decimals: 1 },
|
||||
recv_flood: { label: 'Received', unit: '/min', decimals: 1 },
|
||||
sent_direct: { label: 'Sent', unit: '/min', decimals: 1 },
|
||||
recv_direct: { label: 'Received', unit: '/min', decimals: 1 }
|
||||
};
|
||||
|
||||
// ============================================================================
|
||||
// Formatting Utilities
|
||||
// ============================================================================
|
||||
|
||||
/**
|
||||
* Format a Unix timestamp as a localized date/time string.
|
||||
* Uses browser language preference for locale (determines 12/24 hour format).
|
||||
* Includes year only for year-period charts.
|
||||
*/
|
||||
function formatTimestamp(timestamp, period) {
|
||||
var date = new Date(timestamp * 1000);
|
||||
var options = {
|
||||
month: 'short',
|
||||
day: 'numeric',
|
||||
hour: '2-digit',
|
||||
hour: 'numeric',
|
||||
minute: '2-digit',
|
||||
timeZoneName: 'short'
|
||||
};
|
||||
|
||||
// For year view, include year
|
||||
if (period === 'year') {
|
||||
options.year = 'numeric';
|
||||
}
|
||||
|
||||
return date.toLocaleString(undefined, options);
|
||||
// Use browser's language preference (navigator.language), not system locale
|
||||
// Empty array [] or undefined would use OS regional settings instead
|
||||
return date.toLocaleString(navigator.language, options);
|
||||
}
|
||||
|
||||
/**
|
||||
* Format a value with appropriate decimals and unit
|
||||
* Format a numeric value with the appropriate decimals and unit for a metric.
|
||||
*/
|
||||
function formatValue(value, metric) {
|
||||
const config = metricLabels[metric] || { label: metric, unit: '', decimals: 2 };
|
||||
const formatted = value.toFixed(config.decimals);
|
||||
return `${formatted}${config.unit ? ' ' + config.unit : ''}`;
|
||||
function formatMetricValue(value, metric) {
|
||||
var config = METRIC_CONFIG[metric] || { label: metric, unit: '', decimals: 2 };
|
||||
var formatted = value.toFixed(config.decimals);
|
||||
return config.unit ? formatted + ' ' + config.unit : formatted;
|
||||
}
|
||||
|
||||
// ============================================================================
|
||||
// Data Point Utilities
|
||||
// ============================================================================
|
||||
|
||||
/**
|
||||
* Find the closest data point to a timestamp, returning index too
|
||||
* Find the data point closest to the target timestamp.
|
||||
* Returns the point object or null if no points available.
|
||||
*/
|
||||
function findClosestPoint(dataPoints, targetTs) {
|
||||
if (!dataPoints || dataPoints.length === 0) return null;
|
||||
function findClosestDataPoint(dataPoints, targetTimestamp) {
|
||||
if (!dataPoints || dataPoints.length === 0) {
|
||||
return null;
|
||||
}
|
||||
|
||||
let closestIdx = 0;
|
||||
let minDiff = Math.abs(dataPoints[0].ts - targetTs);
|
||||
var closest = dataPoints[0];
|
||||
var minDiff = Math.abs(closest.ts - targetTimestamp);
|
||||
|
||||
for (let i = 1; i < dataPoints.length; i++) {
|
||||
const diff = Math.abs(dataPoints[i].ts - targetTs);
|
||||
for (var i = 1; i < dataPoints.length; i++) {
|
||||
var diff = Math.abs(dataPoints[i].ts - targetTimestamp);
|
||||
if (diff < minDiff) {
|
||||
minDiff = diff;
|
||||
closestIdx = i;
|
||||
closest = dataPoints[i];
|
||||
}
|
||||
}
|
||||
|
||||
return { point: dataPoints[closestIdx], index: closestIdx };
|
||||
return closest;
|
||||
}
|
||||
|
||||
/**
|
||||
* Create or get the indicator circle for an SVG
|
||||
* Parse and cache data points on an SVG element.
|
||||
* Handles HTML entity encoding from server-side JSON embedding.
|
||||
*/
|
||||
function getDataPoints(svg, rawJson) {
|
||||
if (svg._dataPoints) {
|
||||
return svg._dataPoints;
|
||||
}
|
||||
|
||||
try {
|
||||
var json = rawJson.replace(/"/g, '"');
|
||||
svg._dataPoints = JSON.parse(json);
|
||||
return svg._dataPoints;
|
||||
} catch (error) {
|
||||
console.warn('Chart tooltip: failed to parse data points', error);
|
||||
return null;
|
||||
}
|
||||
}
|
||||
|
||||
// ============================================================================
|
||||
// SVG Coordinate Utilities
|
||||
// ============================================================================
|
||||
|
||||
/**
|
||||
* Get and cache the plot area bounds for an SVG chart.
|
||||
* Prefers the clip path rect (defines full plot area) over line path bbox
|
||||
* (which only covers the actual data range).
|
||||
*/
|
||||
function getPlotAreaBounds(svg, fallbackPath) {
|
||||
if (svg._plotArea) {
|
||||
return svg._plotArea;
|
||||
}
|
||||
|
||||
var clipRect = svg.querySelector('clipPath rect');
|
||||
if (clipRect) {
|
||||
svg._plotArea = {
|
||||
x: parseFloat(clipRect.getAttribute('x')),
|
||||
y: parseFloat(clipRect.getAttribute('y')),
|
||||
width: parseFloat(clipRect.getAttribute('width')),
|
||||
height: parseFloat(clipRect.getAttribute('height'))
|
||||
};
|
||||
} else if (fallbackPath) {
|
||||
svg._plotArea = fallbackPath.getBBox();
|
||||
}
|
||||
|
||||
return svg._plotArea;
|
||||
}
|
||||
|
||||
/**
|
||||
* Find the chart line path element within an SVG.
|
||||
* Tries multiple selectors for compatibility with different SVG structures.
|
||||
*/
|
||||
function findLinePath(svg) {
|
||||
return (
|
||||
svg.querySelector('#chart-line path') ||
|
||||
svg.querySelector('path#chart-line') ||
|
||||
svg.querySelector('[gid="chart-line"] path') ||
|
||||
svg.querySelector('path[gid="chart-line"]') ||
|
||||
svg.querySelector('path[data-points]')
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Convert a screen X coordinate to SVG coordinate space.
|
||||
*/
|
||||
function screenToSvgX(svg, clientX) {
|
||||
var svgRect = svg.getBoundingClientRect();
|
||||
var viewBox = svg.viewBox.baseVal;
|
||||
var scale = viewBox.width / svgRect.width;
|
||||
return (clientX - svgRect.left) * scale + viewBox.x;
|
||||
}
|
||||
|
||||
/**
|
||||
* Map a timestamp to an X coordinate within the plot area.
|
||||
*/
|
||||
function timestampToX(timestamp, xStart, xEnd, plotArea) {
|
||||
var relativePosition = (timestamp - xStart) / (xEnd - xStart);
|
||||
return plotArea.x + relativePosition * plotArea.width;
|
||||
}
|
||||
|
||||
/**
|
||||
 * Map a value to a Y coordinate within the plot area.
 * SVG Y-axis is inverted (0 at top), so higher values map to lower Y.
 */
function valueToY(value, yMin, yMax, plotArea) {
  var ySpan = yMax - yMin || 1;
  var relativePosition = (value - yMin) / ySpan;
  return plotArea.y + plotArea.height - relativePosition * plotArea.height;
}

// ============================================================================
// Tooltip Element
// ============================================================================

var tooltip = null;
var tooltipTimeEl = null;
var tooltipValueEl = null;

/**
 * Create the tooltip DOM element (called once on init).
 */
function createTooltipElement() {
  tooltip = document.createElement('div');
  tooltip.className = 'chart-tooltip';
  tooltip.innerHTML =
    '<div class="tooltip-time"></div>' + '<div class="tooltip-value"></div>';
  document.body.appendChild(tooltip);

  tooltipTimeEl = tooltip.querySelector('.tooltip-time');
  tooltipValueEl = tooltip.querySelector('.tooltip-value');
}

/**
 * Update tooltip content and position it near the cursor.
 */
function showTooltip(event, timeText, valueText) {
  tooltipTimeEl.textContent = timeText;
  tooltipValueEl.textContent = valueText;

  var left = event.pageX + CONFIG.tooltipOffset;
  var top = event.pageY + CONFIG.tooltipOffset;

  // Keep tooltip within viewport
  var rect = tooltip.getBoundingClientRect();
  if (left + rect.width > window.innerWidth - CONFIG.viewportPadding) {
    left = event.pageX - rect.width - CONFIG.tooltipOffset;
  }
  if (top + rect.height > window.innerHeight - CONFIG.viewportPadding) {
    top = event.pageY - rect.height - CONFIG.tooltipOffset;
  }

  tooltip.style.left = left + 'px';
  tooltip.style.top = top + 'px';
  tooltip.classList.add('visible');
}

/**
 * Hide the tooltip.
 */
function hideTooltip() {
  tooltip.classList.remove('visible');
}

// ============================================================================
// Indicator Dot
// ============================================================================

var currentIndicator = null;
var currentIndicatorSvg = null;

/**
 * Get or create the indicator circle for an SVG chart.
 * Reuses existing indicator if still on the same chart.
 */
function getIndicator(svg) {
  if (currentIndicatorSvg === svg && currentIndicator) {
    return currentIndicator;
  }

  // Remove indicator from previous chart
  if (currentIndicator && currentIndicator.parentNode) {
    currentIndicator.parentNode.removeChild(currentIndicator);
  }

  // Create new indicator circle
  var indicator = document.createElementNS(
    'http://www.w3.org/2000/svg',
    'circle'
  );
  indicator.setAttribute('r', CONFIG.indicatorRadius);
  indicator.setAttribute('class', 'chart-indicator');
  indicator.setAttribute('stroke-width', CONFIG.indicatorStrokeWidth);
  indicator.style.pointerEvents = 'none';

  // Apply theme-appropriate colors
  var theme = svg.dataset.theme === 'dark' ? 'dark' : 'light';
  indicator.setAttribute('fill', CONFIG.colors[theme].fill);
  indicator.setAttribute('stroke', CONFIG.colors[theme].stroke);

  svg.appendChild(indicator);
  currentIndicator = indicator;
  currentIndicatorSvg = svg;

  return indicator;
}

/**
 * Position the indicator at a specific data point.
 */
function positionIndicator(svg, dataPoint, xStart, xEnd, yMin, yMax, plotArea) {
  var indicator = getIndicator(svg);
  var x = timestampToX(dataPoint.ts, xStart, xEnd, plotArea);
  var y = valueToY(dataPoint.v, yMin, yMax, plotArea);

  indicator.setAttribute('cx', x);
  indicator.setAttribute('cy', y);
  indicator.style.display = '';
}

/**
 * Hide the indicator dot.
 */
function hideIndicator() {
  if (currentIndicator) {
@@ -145,193 +333,137 @@
  }
}

// ============================================================================
// Event Handlers
// ============================================================================

/**
 * Convert a touch event to a mouse-like event object.
 */
function touchToMouseEvent(touchEvent) {
  var touch = touchEvent.touches[0];
  return {
    currentTarget: touchEvent.currentTarget,
    clientX: touch.clientX,
    clientY: touch.clientY,
    pageX: touch.pageX,
    pageY: touch.pageY
  };
}

/**
 * Handle pointer movement over a chart (mouse or touch).
 * Finds the closest data point and updates tooltip and indicator.
 */
function handlePointerMove(event) {
  var svg = event.currentTarget;

  // Extract chart metadata
  var metric = svg.dataset.metric;
  var period = svg.dataset.period;
  var xStart = parseInt(svg.dataset.xStart, 10);
  var xEnd = parseInt(svg.dataset.xEnd, 10);
  var yMin = parseFloat(svg.dataset.yMin);
  var yMax = parseFloat(svg.dataset.yMax);

  // Find the line path and data points source
  var linePath = findLinePath(svg);
  if (!linePath) {
    return;
  }

  var rawPoints = linePath.dataset.points || svg.dataset.points;
  if (!rawPoints) {
    return;
  }

  // Parse data points (cached on svg element)
  var dataPoints = getDataPoints(svg, rawPoints);
  if (!dataPoints) {
    return;
  }

  // Get plot area bounds (cached on svg element)
  var plotArea = getPlotAreaBounds(svg, linePath);
  if (!plotArea) {
    return;
  }

  // Convert screen position to timestamp
  var svgX = screenToSvgX(svg, event.clientX);
  var relativeX = Math.max(0, Math.min(1, (svgX - plotArea.x) / plotArea.width));
  var targetTimestamp = xStart + relativeX * (xEnd - xStart);

  // Find and display closest data point
  var closestPoint = findClosestDataPoint(dataPoints, targetTimestamp);
  if (!closestPoint) {
    return;
  }

  showTooltip(
    event,
    formatTimestamp(closestPoint.ts, period),
    formatMetricValue(closestPoint.v, metric)
  );

  positionIndicator(svg, closestPoint, xStart, xEnd, yMin, yMax, plotArea);
}

/**
 * Handle pointer leaving the chart area.
 */
function handlePointerLeave() {
  hideTooltip();
  hideIndicator();
}

/**
 * Handle touch start event.
 */
function handleTouchStart(event) {
  handlePointerMove(touchToMouseEvent(event));
}

/**
 * Handle touch move event.
 */
function handleTouchMove(event) {
  handlePointerMove(touchToMouseEvent(event));
}

// ============================================================================
// Initialization
// ============================================================================

/**
 * Attach event listeners to all chart SVG elements.
 */
function initializeChartTooltips() {
  createTooltipElement();

  var chartSvgs = document.querySelectorAll('svg[data-metric][data-period]');

  chartSvgs.forEach(function (svg) {
    // Desktop mouse events
    svg.addEventListener('mousemove', handlePointerMove);
    svg.addEventListener('mouseleave', handlePointerLeave);

    // Mobile touch events
    svg.addEventListener('touchstart', handleTouchStart, { passive: true });
    svg.addEventListener('touchmove', handleTouchMove, { passive: true });
    svg.addEventListener('touchend', handlePointerLeave);
    svg.addEventListener('touchcancel', handlePointerLeave);

    // Visual affordance for interactivity
    svg.style.cursor = 'crosshair';

    // Allow vertical scrolling but prevent horizontal pan on mobile
    svg.style.touchAction = 'pan-y';
  });
}

// Run initialization when DOM is ready
if (document.readyState === 'loading') {
  document.addEventListener('DOMContentLoaded', initializeChartTooltips);
} else {
  initializeChartTooltips();
}
})();
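The handlers above call `findClosestDataPoint(dataPoints, targetTimestamp)`, but that helper's definition falls outside the hunks shown in this diff. As a minimal sketch only (not the repository's actual implementation), a linear nearest-timestamp scan over points shaped `{ ts, v }` — the shape the data-points JSON uses — would satisfy the call site:

```javascript
// Hypothetical sketch of the elided helper: return the point whose timestamp
// is nearest to the target, or null when there is no data. The { ts, v }
// point shape matches the data-points JSON parsed by the handlers above.
function findClosestDataPoint(dataPoints, targetTimestamp) {
  if (!dataPoints || dataPoints.length === 0) {
    return null;
  }
  var closest = dataPoints[0];
  var bestDistance = Math.abs(dataPoints[0].ts - targetTimestamp);
  for (var i = 1; i < dataPoints.length; i++) {
    var distance = Math.abs(dataPoints[i].ts - targetTimestamp);
    if (distance < bestDistance) {
      bestDistance = distance;
      closest = dataPoints[i];
    }
  }
  return closest;
}
```

If the series is sorted by timestamp, a binary search would cut this to O(log n); the linear scan keeps the sketch short.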
4329  test_review/tests.md  Normal file
File diff suppressed because it is too large

1  tests/__init__.py  Normal file
@@ -0,0 +1 @@
"""Test suite for meshcore-stats."""

1  tests/charts/__init__.py  Normal file
@@ -0,0 +1 @@
"""Tests for chart rendering."""

339  tests/charts/conftest.py  Normal file
@@ -0,0 +1,339 @@
"""Fixtures for chart tests."""

import json
import re
from datetime import UTC, datetime, timedelta
from pathlib import Path

import pytest

from meshmon.charts import (
    CHART_THEMES,
    DataPoint,
    TimeSeries,
)


@pytest.fixture
def light_theme():
    """Light chart theme."""
    return CHART_THEMES["light"]


@pytest.fixture
def dark_theme():
    """Dark chart theme."""
    return CHART_THEMES["dark"]


@pytest.fixture
def sample_timeseries():
    """Sample time series with 24 hours of data."""
    now = datetime.now()
    points = []
    for i in range(24):
        ts = now - timedelta(hours=23 - i)
        # Simulate battery voltage pattern (higher during day, lower at night)
        value = 3.7 + 0.3 * abs(12 - i) / 12
        points.append(DataPoint(timestamp=ts, value=value))

    return TimeSeries(
        metric="bat",
        role="repeater",
        period="day",
        points=points,
    )


@pytest.fixture
def empty_timeseries():
    """Empty time series (no data)."""
    return TimeSeries(
        metric="bat",
        role="repeater",
        period="day",
        points=[],
    )


@pytest.fixture
def single_point_timeseries():
    """Time series with a single data point."""
    now = datetime.now()
    return TimeSeries(
        metric="bat",
        role="repeater",
        period="day",
        points=[DataPoint(timestamp=now, value=3.85)],
    )


@pytest.fixture
def counter_timeseries():
    """Sample counter time series (for rate calculation testing)."""
    now = datetime.now()
    points = []
    for i in range(24):
        ts = now - timedelta(hours=23 - i)
        # Simulate increasing counter
        value = float(i * 100)
        points.append(DataPoint(timestamp=ts, value=value))

    return TimeSeries(
        metric="nb_recv",
        role="repeater",
        period="day",
        points=points,
    )


@pytest.fixture
def week_timeseries():
    """Sample week time series for binning tests."""
    now = datetime.now()
    points = []
    # One point per hour for 7 days = 168 points
    for i in range(168):
        ts = now - timedelta(hours=167 - i)
        value = 3.7 + 0.2 * (i % 24) / 24
        points.append(DataPoint(timestamp=ts, value=value))

    return TimeSeries(
        metric="bat",
        role="repeater",
        period="week",
        points=points,
    )


def normalize_svg_for_snapshot(svg: str) -> str:
    """Normalize SVG for deterministic snapshot comparison.

    Handles matplotlib's dynamic ID generation while preserving
    semantic content that affects chart appearance. Uses sequential
    normalized IDs to preserve relationships between definitions
    and references.

    IMPORTANT: Each ID type gets its own prefix to maintain uniqueness:
    - tick_N: matplotlib tick marks (m[0-9a-f]{8,})
    - clip_N: clipPath definitions (p[0-9a-f]{8,})
    - glyph_N: font glyph definitions (DejaVuSans-XX)

    This ensures that:
    1. All IDs remain unique (no duplicates)
    2. References (xlink:href, url(#...)) correctly resolve
    3. SVG renders identically to the original
    """
    # Patterns for matplotlib's random IDs, each with its own prefix
    # to maintain uniqueness across different ID types
    id_type_patterns = [
        (r'm[0-9a-f]{8,}', 'tick'),  # matplotlib tick marks
        (r'p[0-9a-f]{8,}', 'clip'),  # matplotlib clipPaths
        (r'DejaVuSans-[0-9a-f]+', 'glyph'),  # font glyphs (hex-named)
    ]

    # Find all IDs in the document
    all_ids = re.findall(r'id="([^"]+)"', svg)

    # Create mapping for IDs that match random patterns.
    # Use separate counters per type to ensure predictable naming.
    id_mapping = {}
    type_counters = {prefix: 0 for _, prefix in id_type_patterns}

    for id_val in all_ids:
        if id_val in id_mapping:
            continue
        for pattern, prefix in id_type_patterns:
            if re.fullmatch(pattern, id_val):
                new_id = f"{prefix}_{type_counters[prefix]}"
                id_mapping[id_val] = new_id
                type_counters[prefix] += 1
                break

    # Replace all occurrences of mapped IDs (definitions and references).
    # Process in a deterministic order (sorted by original ID) for consistency.
    for old_id, new_id in sorted(id_mapping.items()):
        # Replace id definitions
        svg = svg.replace(f'id="{old_id}"', f'id="{new_id}"')
        # Replace url(#...) references
        svg = svg.replace(f'url(#{old_id})', f'url(#{new_id})')
        # Replace xlink:href references
        svg = svg.replace(f'xlink:href="#{old_id}"', f'xlink:href="#{new_id}"')
        # Replace href references (SVG 2.0 style without xlink prefix)
        svg = svg.replace(f'href="#{old_id}"', f'href="#{new_id}"')

    # Remove matplotlib version comment (changes between versions)
    svg = re.sub(r'<!-- Created with matplotlib.*?-->', '', svg)

    # Normalize dc:date timestamp (changes on each render)
    svg = re.sub(r'<dc:date>[^<]+</dc:date>', '<dc:date>NORMALIZED</dc:date>', svg)

    # Normalize whitespace (but preserve newlines for readability)
    svg = re.sub(r'[ \t]+', ' ', svg)
    svg = re.sub(r' ?\n ?', '\n', svg)

    return svg.strip()


def extract_svg_data_attributes(svg: str) -> dict:
    """Extract data-* attributes from SVG for validation.

    Args:
        svg: SVG string

    Returns:
        Dict with extracted data attributes
    """
    data = {}

    # Extract data-points JSON
    points_match = re.search(r'data-points="([^"]+)"', svg)
    if points_match:
        points_str = points_match.group(1).replace("&quot;", '"')
        try:
            data["points"] = json.loads(points_str)
        except json.JSONDecodeError:
            data["points_raw"] = points_str

    # Extract other data attributes
    for attr in ["data-metric", "data-period", "data-theme",
                 "data-x-start", "data-x-end", "data-y-min", "data-y-max"]:
        match = re.search(rf'{attr}="([^"]+)"', svg)
        if match:
            key = attr.replace("data-", "").replace("-", "_")
            data[key] = match.group(1)

    return data


@pytest.fixture
def snapshots_dir():
    """Path to snapshots directory."""
    return Path(__file__).parent.parent / "snapshots" / "svg"


@pytest.fixture
def sample_raw_points():
    """Raw points for aggregation testing."""
    now = datetime.now()
    return [
        (now - timedelta(hours=2), 3.7),
        (now - timedelta(hours=1, minutes=45), 3.72),
        (now - timedelta(hours=1, minutes=30), 3.75),
        (now - timedelta(hours=1), 3.8),
        (now - timedelta(minutes=30), 3.82),
        (now, 3.85),
    ]


# --- Deterministic fixtures for snapshot testing ---
# These use fixed timestamps to produce consistent SVG output


@pytest.fixture
def snapshot_base_time():
    """Fixed base time for deterministic snapshot tests.

    Uses 2024-01-15 12:00:00 UTC as a stable reference point.
    Explicitly set to UTC to ensure consistent behavior across all machines.
    """
    return datetime(2024, 1, 15, 12, 0, 0, tzinfo=UTC)


@pytest.fixture
def snapshot_gauge_timeseries(snapshot_base_time):
    """Deterministic gauge time series for snapshot testing.

    Creates a battery voltage pattern over 24 hours with fixed timestamps.
    """
    points = []
    for i in range(24):
        ts = snapshot_base_time - timedelta(hours=23 - i)
        # Simulate battery voltage pattern (higher during day, lower at night)
        value = 3.7 + 0.3 * abs(12 - i) / 12
        points.append(DataPoint(timestamp=ts, value=value))

    return TimeSeries(
        metric="bat",
        role="repeater",
        period="day",
        points=points,
    )


@pytest.fixture
def snapshot_counter_timeseries(snapshot_base_time):
    """Deterministic counter time series for snapshot testing.

    Creates a packet rate pattern over 24 hours with fixed timestamps.
    This represents rate values (already converted from counter deltas).
    """
    points = []
    for i in range(24):
        ts = snapshot_base_time - timedelta(hours=23 - i)
        # Simulate packet rate - higher during day hours (6-18)
        hour = (i + 12) % 24  # Convert to actual hour of day
        value = (
            2.0 + (hour - 6) * 0.3  # 2.0 to 5.6 packets/min
            if 6 <= hour <= 18
            else 0.5 + (hour % 6) * 0.1  # 0.5 to 1.1 packets/min (night)
        )
        points.append(DataPoint(timestamp=ts, value=value))

    return TimeSeries(
        metric="nb_recv",
        role="repeater",
        period="day",
        points=points,
    )


@pytest.fixture
def snapshot_empty_timeseries():
    """Empty time series for snapshot testing."""
    return TimeSeries(
        metric="bat",
        role="repeater",
        period="day",
        points=[],
    )


@pytest.fixture
def snapshot_single_point_timeseries(snapshot_base_time):
    """Time series with a single data point for snapshot testing."""
    return TimeSeries(
        metric="bat",
        role="repeater",
        period="day",
        points=[DataPoint(timestamp=snapshot_base_time, value=3.85)],
    )


def normalize_svg_for_snapshot_full(svg: str) -> str:
    """Extended SVG normalization for full snapshot comparison.

    In addition to standard normalization, this also:
    - Removes timestamps from data-points to allow content-only comparison
    - Normalizes floating point precision

    Used when you want to compare the visual structure but not exact data values.
    """
    # Apply standard normalization first
    svg = normalize_svg_for_snapshot(svg)

    # Normalize data-points timestamps (keep structure, normalize values).
    # This allows charts with different base times to still match structure.
    svg = re.sub(r'"ts":\s*\d+', '"ts":0', svg)

    # Normalize floating point values to 2 decimal places in attributes
    def normalize_float(match):
        try:
            val = float(match.group(1))
            return f'{val:.2f}'
        except ValueError:
            return match.group(0)

    svg = re.sub(r'(\d+\.\d{3,})', normalize_float, svg)

    return svg
201  tests/charts/test_chart_io.py  Normal file
@@ -0,0 +1,201 @@
"""Tests for chart statistics I/O functions."""

import json
from pathlib import Path

import pytest

from meshmon.charts import (
    load_chart_stats,
    save_chart_stats,
)


class TestSaveChartStats:
    """Tests for save_chart_stats function."""

    def test_saves_stats_to_file(self, configured_env):
        """Saves stats dict to JSON file."""
        stats = {
            "bat": {
                "day": {"min": 3.5, "avg": 3.7, "max": 3.9, "current": 3.85},
                "week": {"min": 3.4, "avg": 3.65, "max": 3.95, "current": 3.85},
            }
        }

        path = save_chart_stats("repeater", stats)

        assert path.exists()
        with open(path) as f:
            loaded = json.load(f)
        assert loaded == stats

    def test_creates_directories(self, configured_env):
        """Creates parent directories if needed."""
        stats = {"test": {"day": {"min": 1.0}}}

        path = save_chart_stats("repeater", stats)

        assert path.parent.exists()
        assert path.parent.name == "repeater"

    def test_returns_path(self, configured_env):
        """Returns path to saved file."""
        stats = {"test": {"day": {}}}

        path = save_chart_stats("companion", stats)

        assert isinstance(path, Path)
        assert path.name == "chart_stats.json"
        assert "companion" in str(path)

    def test_overwrites_existing(self, configured_env):
        """Overwrites existing stats file."""
        stats1 = {"metric1": {"day": {"min": 1.0}}}
        stats2 = {"metric2": {"day": {"min": 2.0}}}

        path1 = save_chart_stats("repeater", stats1)
        path2 = save_chart_stats("repeater", stats2)

        assert path1 == path2
        with open(path2) as f:
            loaded = json.load(f)
        assert loaded == stats2

    def test_empty_stats(self, configured_env):
        """Saves empty stats dict."""
        stats = {}

        path = save_chart_stats("repeater", stats)

        with open(path) as f:
            loaded = json.load(f)
        assert loaded == {}

    def test_nested_stats_structure(self, configured_env):
        """Preserves nested structure of stats."""
        stats = {
            "bat": {
                "day": {"min": 3.5, "avg": 3.7, "max": 3.9, "current": 3.85},
                "week": {"min": 3.4, "avg": 3.65, "max": 3.95, "current": None},
            },
            "nb_recv": {
                "day": {"min": 0, "avg": 50.5, "max": 100, "current": 75},
            },
        }

        path = save_chart_stats("repeater", stats)

        with open(path) as f:
            loaded = json.load(f)
        assert loaded["bat"]["week"]["current"] is None
        assert loaded["nb_recv"]["day"]["avg"] == 50.5


class TestLoadChartStats:
    """Tests for load_chart_stats function."""

    def test_loads_existing_stats(self, configured_env):
        """Loads stats from existing file."""
        stats = {
            "bat": {
                "day": {"min": 3.5, "avg": 3.7, "max": 3.9, "current": 3.85},
            }
        }
        save_chart_stats("repeater", stats)

        loaded = load_chart_stats("repeater")

        assert loaded == stats

    def test_returns_empty_when_missing(self, configured_env):
        """Returns empty dict when file doesn't exist."""
        loaded = load_chart_stats("repeater")

        assert loaded == {}

    def test_returns_empty_on_invalid_json(self, configured_env):
        """Returns empty dict on invalid JSON."""
        stats_path = configured_env["out_dir"] / "assets" / "repeater" / "chart_stats.json"
        stats_path.parent.mkdir(parents=True, exist_ok=True)
        stats_path.write_text("not valid json {{{", encoding="utf-8")

        loaded = load_chart_stats("repeater")

        assert loaded == {}

    def test_preserves_none_values(self, configured_env):
        """None values are preserved through save/load cycle."""
        stats = {
            "bat": {
                "day": {"min": None, "avg": None, "max": None, "current": None},
            }
        }
        save_chart_stats("repeater", stats)

        loaded = load_chart_stats("repeater")

        assert loaded["bat"]["day"]["min"] is None
        assert loaded["bat"]["day"]["avg"] is None

    def test_loads_different_roles(self, configured_env):
        """Loads correct file for each role."""
        companion_stats = {"battery_mv": {"day": {"min": 3.5}}}
        repeater_stats = {"bat": {"day": {"min": 3.6}}}

        save_chart_stats("companion", companion_stats)
        save_chart_stats("repeater", repeater_stats)

        assert load_chart_stats("companion") == companion_stats
        assert load_chart_stats("repeater") == repeater_stats


class TestStatsRoundTrip:
    """Tests for complete save/load round trips."""

    def test_complex_stats_roundtrip(self, configured_env):
        """Complex stats survive round trip unchanged."""
        stats = {
            "bat": {
                "day": {"min": 3.5, "avg": 3.7, "max": 3.9, "current": 3.85},
                "week": {"min": 3.4, "avg": 3.65, "max": 3.95, "current": 3.8},
                "month": {"min": 3.3, "avg": 3.6, "max": 4.0, "current": 3.75},
                "year": {"min": 3.2, "avg": 3.55, "max": 4.1, "current": 3.7},
            },
            "bat_pct": {
                "day": {"min": 50.0, "avg": 70.0, "max": 90.0, "current": 85.0},
                "week": {"min": 45.0, "avg": 65.0, "max": 95.0, "current": 80.0},
                "month": {"min": 40.0, "avg": 60.0, "max": 100.0, "current": 75.0},
                "year": {"min": 30.0, "avg": 55.0, "max": 100.0, "current": 70.0},
            },
            "nb_recv": {
                "day": {"min": 0, "avg": 50.5, "max": 100, "current": 75},
                "week": {"min": 0, "avg": 48.2, "max": 150, "current": 60},
                "month": {"min": 0, "avg": 45.8, "max": 200, "current": 55},
                "year": {"min": 0, "avg": 42.1, "max": 250, "current": 50},
            },
        }

        save_chart_stats("repeater", stats)
        loaded = load_chart_stats("repeater")

        assert loaded == stats

    def test_float_precision_preserved(self, configured_env):
        """Float precision is preserved in round trip."""
        stats = {
            "test": {
                "day": {
                    "min": 3.141592653589793,
                    "avg": 2.718281828459045,
                    "max": 1.4142135623730951,
                    "current": 0.0001234567890123,
                }
            }
        }

        save_chart_stats("repeater", stats)
        loaded = load_chart_stats("repeater")

        assert loaded["test"]["day"]["min"] == pytest.approx(3.141592653589793)
        assert loaded["test"]["day"]["avg"] == pytest.approx(2.718281828459045)
433  tests/charts/test_chart_render.py  Normal file
@@ -0,0 +1,433 @@
"""Tests for SVG chart rendering."""

import os
from datetime import datetime, timedelta
from pathlib import Path
from xml.etree import ElementTree as ET

import pytest

from meshmon.charts import (
    CHART_THEMES,
    DataPoint,
    TimeSeries,
    render_chart_svg,
)

from .conftest import extract_svg_data_attributes, normalize_svg_for_snapshot


def _svg_viewbox_dims(svg: str) -> tuple[float, float]:
    root = ET.fromstring(svg)
    viewbox = root.attrib.get("viewBox")
    assert viewbox is not None
    _, _, width, height = viewbox.split()
    return float(width), float(height)


class TestRenderChartSvg:
    """Tests for render_chart_svg function."""

    def test_returns_svg_string(self, sample_timeseries, light_theme):
        """Returns valid SVG string."""
        svg = render_chart_svg(sample_timeseries, light_theme)

        assert svg.startswith("<?xml") or svg.startswith("<svg")
        assert "</svg>" in svg

    def test_includes_svg_namespace(self, sample_timeseries, light_theme):
        """SVG includes xmlns namespace."""
        svg = render_chart_svg(sample_timeseries, light_theme)

        assert 'xmlns="http://www.w3.org/2000/svg"' in svg

    def test_respects_width_height(self, sample_timeseries, light_theme):
        """SVG respects specified dimensions."""
        svg_default = render_chart_svg(sample_timeseries, light_theme)
        svg_small = render_chart_svg(sample_timeseries, light_theme, width=600, height=200)

        default_w, default_h = _svg_viewbox_dims(svg_default)
        small_w, small_h = _svg_viewbox_dims(svg_small)

        assert small_w < default_w
        assert small_h < default_h

    def test_uses_theme_colors(self, sample_timeseries, light_theme, dark_theme):
        """Different themes produce different colors."""
        light_svg = render_chart_svg(sample_timeseries, light_theme)
        dark_svg = render_chart_svg(sample_timeseries, dark_theme)

        # Check theme colors are present
        assert light_theme.line in light_svg or f"#{light_theme.line}" in light_svg
        assert dark_theme.line in dark_svg or f"#{dark_theme.line}" in dark_svg
|
||||
|
||||
|
||||
class TestEmptyChartRendering:
|
||||
"""Tests for rendering empty charts."""
|
||||
|
||||
def test_empty_chart_renders(self, empty_timeseries, light_theme):
|
||||
"""Empty time series renders without error."""
|
||||
svg = render_chart_svg(empty_timeseries, light_theme)
|
||||
|
||||
assert "</svg>" in svg
|
||||
|
||||
def test_empty_chart_shows_message(self, empty_timeseries, light_theme):
|
||||
"""Empty chart shows 'No data available' message."""
|
||||
svg = render_chart_svg(empty_timeseries, light_theme)
|
||||
|
||||
assert "No data available" in svg
|
||||
|
||||
|
||||
class TestDataPointsInjection:
|
||||
"""Tests for data-points attribute injection."""
|
||||
|
||||
def test_includes_data_points(self, sample_timeseries, light_theme):
|
||||
"""SVG includes data-points attribute."""
|
||||
svg = render_chart_svg(sample_timeseries, light_theme)
|
||||
|
||||
assert "data-points=" in svg
|
||||
|
||||
def test_data_points_valid_json(self, sample_timeseries, light_theme):
|
||||
"""data-points contains valid JSON array."""
|
||||
svg = render_chart_svg(sample_timeseries, light_theme)
|
||||
data = extract_svg_data_attributes(svg)
|
||||
|
||||
assert "points" in data
|
||||
assert isinstance(data["points"], list)
|
||||
|
||||
def test_data_points_count_matches(self, sample_timeseries, light_theme):
|
||||
"""data-points count matches time series points."""
|
||||
svg = render_chart_svg(sample_timeseries, light_theme)
|
||||
data = extract_svg_data_attributes(svg)
|
||||
|
||||
assert len(data["points"]) == len(sample_timeseries.points)
|
||||
|
||||
def test_data_points_structure(self, sample_timeseries, light_theme):
|
||||
"""Each data point has ts and v keys."""
|
||||
svg = render_chart_svg(sample_timeseries, light_theme)
|
||||
data = extract_svg_data_attributes(svg)
|
||||
|
||||
for point in data["points"]:
|
||||
assert "ts" in point
|
||||
assert "v" in point
|
||||
assert isinstance(point["ts"], int)
|
||||
assert isinstance(point["v"], (int, float))
|
||||
|
||||
def test_includes_metadata_attributes(self, sample_timeseries, light_theme):
|
||||
"""SVG includes metric, period, theme attributes."""
|
||||
svg = render_chart_svg(sample_timeseries, light_theme)
|
||||
data = extract_svg_data_attributes(svg)
|
||||
|
||||
assert data.get("metric") == "bat"
|
||||
assert data.get("period") == "day"
|
||||
assert data.get("theme") == "light"
|
||||
|
||||
def test_includes_axis_range_attributes(self, sample_timeseries, light_theme):
|
||||
"""SVG includes x and y axis range attributes."""
|
||||
svg = render_chart_svg(sample_timeseries, light_theme)
|
||||
data = extract_svg_data_attributes(svg)
|
||||
|
||||
assert "x_start" in data
|
||||
assert "x_end" in data
|
||||
assert "y_min" in data
|
||||
assert "y_max" in data
|
||||
|
||||
|
||||
class TestYAxisLimits:
|
||||
"""Tests for Y-axis limit handling."""
|
||||
|
||||
def test_fixed_y_limits(self, sample_timeseries, light_theme):
|
||||
"""Fixed Y limits are applied."""
|
||||
svg = render_chart_svg(
|
||||
sample_timeseries, light_theme,
|
||||
y_min=3.0, y_max=4.5
|
||||
)
|
||||
data = extract_svg_data_attributes(svg)
|
||||
|
||||
assert float(data["y_min"]) == 3.0
|
||||
assert float(data["y_max"]) == 4.5
|
||||
|
||||
def test_auto_y_limits_with_padding(self, light_theme):
|
||||
"""Auto Y limits add padding around data."""
|
||||
now = datetime.now()
|
||||
points = [
|
||||
DataPoint(timestamp=now, value=10.0),
|
||||
DataPoint(timestamp=now + timedelta(hours=1), value=20.0),
|
||||
]
|
||||
ts = TimeSeries(metric="test", role="repeater", period="day", points=points)
|
||||
|
||||
svg = render_chart_svg(ts, light_theme)
|
||||
data = extract_svg_data_attributes(svg)
|
||||
|
||||
y_min = float(data["y_min"])
|
||||
y_max = float(data["y_max"])
|
||||
|
||||
# Auto limits should extend beyond data range
|
||||
assert y_min < 10.0
|
||||
assert y_max > 20.0
|
||||
|
||||
|
||||
class TestXAxisLimits:
|
||||
"""Tests for X-axis limit handling."""
|
||||
|
||||
def test_fixed_x_limits(self, sample_timeseries, light_theme):
|
||||
"""Fixed X limits are applied."""
|
||||
x_start = datetime(2024, 1, 1, 0, 0, 0)
|
||||
x_end = datetime(2024, 1, 2, 0, 0, 0)
|
||||
|
||||
svg = render_chart_svg(
|
||||
sample_timeseries, light_theme,
|
||||
x_start=x_start, x_end=x_end
|
||||
)
|
||||
data = extract_svg_data_attributes(svg)
|
||||
|
||||
assert int(data["x_start"]) == int(x_start.timestamp())
|
||||
assert int(data["x_end"]) == int(x_end.timestamp())
|
||||
|
||||
|
||||
class TestChartThemes:
|
||||
"""Tests for chart theme constants."""
|
||||
|
||||
def test_light_theme_exists(self):
|
||||
"""Light theme is defined."""
|
||||
assert "light" in CHART_THEMES
|
||||
|
||||
def test_dark_theme_exists(self):
|
||||
"""Dark theme is defined."""
|
||||
assert "dark" in CHART_THEMES
|
||||
|
||||
def test_themes_have_required_colors(self):
|
||||
"""Themes have all required color attributes."""
|
||||
required = ["background", "canvas", "text", "axis", "grid", "line", "area"]
|
||||
|
||||
for theme in CHART_THEMES.values():
|
||||
for attr in required:
|
||||
assert hasattr(theme, attr), f"Theme missing {attr}"
|
||||
assert getattr(theme, attr), f"Theme {attr} is empty"
|
||||
|
||||
def test_theme_colors_are_valid_hex(self):
|
||||
"""Theme colors are valid hex strings."""
|
||||
import re
|
||||
hex_pattern = re.compile(r'^[0-9a-fA-F]{6,8}$')
|
||||
|
||||
for name, theme in CHART_THEMES.items():
|
||||
for attr in ["background", "canvas", "text", "axis", "grid", "line", "area"]:
|
||||
color = getattr(theme, attr)
|
||||
assert hex_pattern.match(color), f"{name}.{attr} = {color} is not valid hex"
|
||||
|
||||
|
||||
class TestSvgNormalization:
|
||||
"""Tests for SVG snapshot normalization helper."""
|
||||
|
||||
def test_normalize_removes_matplotlib_ids(self, sample_timeseries, light_theme):
|
||||
"""Normalization removes matplotlib-generated IDs."""
|
||||
svg = render_chart_svg(sample_timeseries, light_theme)
|
||||
normalized = normalize_svg_for_snapshot(svg)
|
||||
|
||||
# Should not have matplotlib's randomized IDs
|
||||
import re
|
||||
# Look for patterns like id="abc123-def456"
|
||||
random_ids = re.findall(r'id="[a-z0-9]+-[0-9a-f]{8,}"', normalized)
|
||||
assert len(random_ids) == 0
|
||||
|
||||
def test_normalize_preserves_data_attributes(self, sample_timeseries, light_theme):
|
||||
"""Normalization preserves data-* attributes."""
|
||||
svg = render_chart_svg(sample_timeseries, light_theme)
|
||||
normalized = normalize_svg_for_snapshot(svg)
|
||||
|
||||
assert "data-metric=" in normalized
|
||||
assert "data-points=" in normalized
|
||||
|
||||
def test_normalize_removes_matplotlib_comment(self, sample_timeseries, light_theme):
|
||||
"""Normalization removes matplotlib version comment."""
|
||||
svg = render_chart_svg(sample_timeseries, light_theme)
|
||||
normalized = normalize_svg_for_snapshot(svg)
|
||||
|
||||
assert "Created with matplotlib" not in normalized
|
||||
|
||||
|
||||
class TestSvgSnapshots:
|
||||
"""Snapshot tests for SVG chart rendering.
|
||||
|
||||
These tests compare rendered SVG output against saved snapshots
|
||||
to detect unintended changes in chart appearance.
|
||||
|
||||
To update snapshots, run: UPDATE_SNAPSHOTS=1 pytest tests/charts/test_chart_render.py
|
||||
"""
|
||||
|
||||
@pytest.fixture
|
||||
def update_snapshots(self):
|
||||
"""Return True if snapshots should be updated."""
|
||||
return os.environ.get("UPDATE_SNAPSHOTS", "").lower() in ("1", "true", "yes")
|
||||
|
||||
def _assert_snapshot_match(
|
||||
self,
|
||||
actual: str,
|
||||
snapshot_path: Path,
|
||||
update: bool,
|
||||
) -> None:
|
||||
"""Compare SVG against snapshot, with optional update mode."""
|
||||
# Normalize for comparison
|
||||
normalized = normalize_svg_for_snapshot(actual)
|
||||
|
||||
if update:
|
||||
# Update mode: write normalized SVG to snapshot
|
||||
snapshot_path.parent.mkdir(parents=True, exist_ok=True)
|
||||
snapshot_path.write_text(normalized, encoding="utf-8")
|
||||
pytest.skip(f"Snapshot updated: {snapshot_path}")
|
||||
else:
|
||||
# Compare mode
|
||||
if not snapshot_path.exists():
|
||||
# Create new snapshot if it doesn't exist
|
||||
snapshot_path.parent.mkdir(parents=True, exist_ok=True)
|
||||
snapshot_path.write_text(normalized, encoding="utf-8")
|
||||
pytest.fail(
|
||||
f"Snapshot created: {snapshot_path}\n"
|
||||
f"Run tests again to verify, or set UPDATE_SNAPSHOTS=1 to regenerate."
|
||||
)
|
||||
|
||||
expected = snapshot_path.read_text(encoding="utf-8")
|
||||
|
||||
if normalized != expected:
|
||||
# Show first difference for debugging
|
||||
norm_lines = normalized.splitlines()
|
||||
exp_lines = expected.splitlines()
|
||||
|
||||
diff_info = []
|
||||
for i, (n, e) in enumerate(zip(norm_lines, exp_lines, strict=False), 1):
|
||||
if n != e:
|
||||
diff_info.append(f"Line {i} differs:")
|
||||
diff_info.append(f" Expected: {e[:100]}...")
|
||||
diff_info.append(f" Actual: {n[:100]}...")
|
||||
if len(diff_info) > 12:
|
||||
diff_info.append(" (more differences omitted)")
|
||||
break
|
||||
|
||||
if len(norm_lines) != len(exp_lines):
|
||||
diff_info.append(
|
||||
f"Line count: expected {len(exp_lines)}, got {len(norm_lines)}"
|
||||
)
|
||||
|
||||
pytest.fail(
|
||||
f"Snapshot mismatch: {snapshot_path}\n"
|
||||
f"Set UPDATE_SNAPSHOTS=1 to regenerate.\n\n"
|
||||
+ "\n".join(diff_info)
|
||||
)
|
||||
|
||||
def test_gauge_chart_light_theme(
|
||||
self,
|
||||
snapshot_gauge_timeseries,
|
||||
light_theme,
|
||||
snapshots_dir,
|
||||
update_snapshots,
|
||||
):
|
||||
"""Gauge metric chart with light theme matches snapshot."""
|
||||
svg = render_chart_svg(
|
||||
snapshot_gauge_timeseries,
|
||||
light_theme,
|
||||
y_min=3.0,
|
||||
y_max=4.2,
|
||||
)
|
||||
|
||||
snapshot_path = snapshots_dir / "bat_day_light.svg"
|
||||
self._assert_snapshot_match(svg, snapshot_path, update_snapshots)
|
||||
|
||||
def test_gauge_chart_dark_theme(
|
||||
self,
|
||||
snapshot_gauge_timeseries,
|
||||
dark_theme,
|
||||
snapshots_dir,
|
||||
update_snapshots,
|
||||
):
|
||||
"""Gauge metric chart with dark theme matches snapshot."""
|
||||
svg = render_chart_svg(
|
||||
snapshot_gauge_timeseries,
|
||||
dark_theme,
|
||||
y_min=3.0,
|
||||
y_max=4.2,
|
||||
)
|
||||
|
||||
snapshot_path = snapshots_dir / "bat_day_dark.svg"
|
||||
self._assert_snapshot_match(svg, snapshot_path, update_snapshots)
|
||||
|
||||
def test_counter_chart_light_theme(
|
||||
self,
|
||||
snapshot_counter_timeseries,
|
||||
light_theme,
|
||||
snapshots_dir,
|
||||
update_snapshots,
|
||||
):
|
||||
"""Counter metric (rate) chart with light theme matches snapshot."""
|
||||
svg = render_chart_svg(
|
||||
snapshot_counter_timeseries,
|
||||
light_theme,
|
||||
)
|
||||
|
||||
snapshot_path = snapshots_dir / "nb_recv_day_light.svg"
|
||||
self._assert_snapshot_match(svg, snapshot_path, update_snapshots)
|
||||
|
||||
def test_counter_chart_dark_theme(
|
||||
self,
|
||||
snapshot_counter_timeseries,
|
||||
dark_theme,
|
||||
snapshots_dir,
|
||||
update_snapshots,
|
||||
):
|
||||
"""Counter metric (rate) chart with dark theme matches snapshot."""
|
||||
svg = render_chart_svg(
|
||||
snapshot_counter_timeseries,
|
||||
dark_theme,
|
||||
)
|
||||
|
||||
snapshot_path = snapshots_dir / "nb_recv_day_dark.svg"
|
||||
self._assert_snapshot_match(svg, snapshot_path, update_snapshots)
|
||||
|
||||
def test_empty_chart_light_theme(
|
||||
self,
|
||||
snapshot_empty_timeseries,
|
||||
light_theme,
|
||||
snapshots_dir,
|
||||
update_snapshots,
|
||||
):
|
||||
"""Empty chart with 'No data available' matches snapshot."""
|
||||
svg = render_chart_svg(
|
||||
snapshot_empty_timeseries,
|
||||
light_theme,
|
||||
)
|
||||
|
||||
snapshot_path = snapshots_dir / "empty_day_light.svg"
|
||||
self._assert_snapshot_match(svg, snapshot_path, update_snapshots)
|
||||
|
||||
def test_empty_chart_dark_theme(
|
||||
self,
|
||||
snapshot_empty_timeseries,
|
||||
dark_theme,
|
||||
snapshots_dir,
|
||||
update_snapshots,
|
||||
):
|
||||
"""Empty chart with dark theme matches snapshot."""
|
||||
svg = render_chart_svg(
|
||||
snapshot_empty_timeseries,
|
||||
dark_theme,
|
||||
)
|
||||
|
||||
snapshot_path = snapshots_dir / "empty_day_dark.svg"
|
||||
self._assert_snapshot_match(svg, snapshot_path, update_snapshots)
|
||||
|
||||
def test_single_point_chart(
|
||||
self,
|
||||
snapshot_single_point_timeseries,
|
||||
light_theme,
|
||||
snapshots_dir,
|
||||
update_snapshots,
|
||||
):
|
||||
"""Chart with single data point matches snapshot."""
|
||||
svg = render_chart_svg(
|
||||
snapshot_single_point_timeseries,
|
||||
light_theme,
|
||||
y_min=3.0,
|
||||
y_max=4.2,
|
||||
)
|
||||
|
||||
snapshot_path = snapshots_dir / "single_point_day_light.svg"
|
||||
self._assert_snapshot_match(svg, snapshot_path, update_snapshots)
|
||||
185 tests/charts/test_statistics.py Normal file
@@ -0,0 +1,185 @@
"""Tests for chart statistics calculation."""

from datetime import datetime, timedelta

import pytest

from meshmon.charts import (
    ChartStatistics,
    DataPoint,
    TimeSeries,
    calculate_statistics,
)

BASE_TIME = datetime(2024, 1, 1, 0, 0, 0)


class TestCalculateStatistics:
    """Tests for calculate_statistics function."""

    def test_calculates_min(self, sample_timeseries):
        """Calculates minimum value."""
        stats = calculate_statistics(sample_timeseries)

        assert stats.min_value is not None
        assert stats.min_value == min(p.value for p in sample_timeseries.points)

    def test_calculates_max(self, sample_timeseries):
        """Calculates maximum value."""
        stats = calculate_statistics(sample_timeseries)

        assert stats.max_value is not None
        assert stats.max_value == max(p.value for p in sample_timeseries.points)

    def test_calculates_avg(self, sample_timeseries):
        """Calculates average value."""
        stats = calculate_statistics(sample_timeseries)

        expected_avg = sum(p.value for p in sample_timeseries.points) / len(sample_timeseries.points)
        assert stats.avg_value is not None
        assert stats.avg_value == pytest.approx(expected_avg)

    def test_calculates_current(self, sample_timeseries):
        """Current is the last value."""
        stats = calculate_statistics(sample_timeseries)

        assert stats.current_value is not None
        assert stats.current_value == sample_timeseries.points[-1].value

    def test_empty_series_returns_none_values(self, empty_timeseries):
        """Empty time series returns None for all stats."""
        stats = calculate_statistics(empty_timeseries)

        assert stats.min_value is None
        assert stats.avg_value is None
        assert stats.max_value is None
        assert stats.current_value is None

    def test_single_point_stats(self, single_point_timeseries):
        """Single point: min=avg=max=current."""
        stats = calculate_statistics(single_point_timeseries)
        value = single_point_timeseries.points[0].value

        assert stats.min_value == value
        assert stats.avg_value == value
        assert stats.max_value == value
        assert stats.current_value == value


class TestChartStatistics:
    """Tests for ChartStatistics dataclass."""

    def test_to_dict(self):
        """Converts to dict with correct keys."""
        stats = ChartStatistics(
            min_value=3.0,
            avg_value=3.5,
            max_value=4.0,
            current_value=3.8,
        )

        d = stats.to_dict()

        assert d == {
            "min": 3.0,
            "avg": 3.5,
            "max": 4.0,
            "current": 3.8,
        }

    def test_to_dict_with_none_values(self):
        """None values preserved in dict."""
        stats = ChartStatistics()

        d = stats.to_dict()

        assert d == {
            "min": None,
            "avg": None,
            "max": None,
            "current": None,
        }

    def test_default_values_are_none(self):
        """Default values are all None."""
        stats = ChartStatistics()

        assert stats.min_value is None
        assert stats.avg_value is None
        assert stats.max_value is None
        assert stats.current_value is None


class TestStatisticsWithVariousData:
    """Tests for statistics with various data patterns."""

    def test_constant_values(self):
        """All same values gives min=avg=max."""
        now = BASE_TIME
        points = [DataPoint(timestamp=now + timedelta(hours=i), value=5.0) for i in range(10)]
        ts = TimeSeries(metric="test", role="companion", period="day", points=points)

        stats = calculate_statistics(ts)

        assert stats.min_value == 5.0
        assert stats.avg_value == 5.0
        assert stats.max_value == 5.0

    def test_increasing_values(self):
        """Increasing values have correct stats."""
        now = BASE_TIME
        points = [DataPoint(timestamp=now + timedelta(hours=i), value=float(i)) for i in range(10)]
        ts = TimeSeries(metric="test", role="companion", period="day", points=points)

        stats = calculate_statistics(ts)

        assert stats.min_value == 0.0
        assert stats.max_value == 9.0
        assert stats.avg_value == 4.5  # Mean of 0-9
        assert stats.current_value == 9.0  # Last value

    def test_negative_values(self):
        """Handles negative values correctly."""
        now = BASE_TIME
        points = [
            DataPoint(timestamp=now, value=-10.0),
            DataPoint(timestamp=now + timedelta(hours=1), value=-5.0),
            DataPoint(timestamp=now + timedelta(hours=2), value=0.0),
        ]
        ts = TimeSeries(metric="test", role="companion", period="day", points=points)

        stats = calculate_statistics(ts)

        assert stats.min_value == -10.0
        assert stats.max_value == 0.0
        assert stats.avg_value == -5.0

    def test_large_values(self):
        """Handles large values correctly."""
        now = BASE_TIME
        points = [
            DataPoint(timestamp=now, value=1e10),
            DataPoint(timestamp=now + timedelta(hours=1), value=1e11),
        ]
        ts = TimeSeries(metric="test", role="companion", period="day", points=points)

        stats = calculate_statistics(ts)

        assert stats.min_value == 1e10
        assert stats.max_value == 1e11

    def test_small_decimal_values(self):
        """Handles small decimal values correctly."""
        now = BASE_TIME
        points = [
            DataPoint(timestamp=now, value=0.001),
            DataPoint(timestamp=now + timedelta(hours=1), value=0.002),
            DataPoint(timestamp=now + timedelta(hours=2), value=0.003),
        ]
        ts = TimeSeries(metric="test", role="companion", period="day", points=points)

        stats = calculate_statistics(ts)

        assert stats.min_value == pytest.approx(0.001)
        assert stats.max_value == pytest.approx(0.003)
        assert stats.avg_value == pytest.approx(0.002)
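As a reader aid (not part of the diff): the behavior pinned down by the statistics tests above can be summarized in a small standalone sketch. `Stats` and `calc` are illustrative stand-ins for `ChartStatistics` and `calculate_statistics` in `meshmon.charts`, whose actual implementations are not shown in this diff.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Stats:
    """Stand-in for ChartStatistics: every field defaults to None."""
    min_value: Optional[float] = None
    avg_value: Optional[float] = None
    max_value: Optional[float] = None
    current_value: Optional[float] = None


def calc(values: list) -> Stats:
    """Stand-in for calculate_statistics: an empty series yields all-None
    stats; otherwise min/avg/max over all values, 'current' = last value."""
    if not values:
        return Stats()
    return Stats(
        min_value=min(values),
        avg_value=sum(values) / len(values),
        max_value=max(values),
        current_value=values[-1],
    )
```

This matches the test expectations above: constant series give min=avg=max, and a single point makes all four fields equal.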
187 tests/charts/test_timeseries.py Normal file
@@ -0,0 +1,187 @@
"""Tests for TimeSeries data class and loading."""

from datetime import datetime, timedelta

import pytest

from meshmon.charts import (
    DataPoint,
    TimeSeries,
    load_timeseries_from_db,
)
from meshmon.db import insert_metrics

BASE_TIME = datetime(2024, 1, 1, 0, 0, 0)


class TestDataPoint:
    """Tests for DataPoint dataclass."""

    def test_stores_timestamp_and_value(self):
        """Stores timestamp and value."""
        ts = BASE_TIME
        dp = DataPoint(timestamp=ts, value=3.85)

        assert dp.timestamp == ts
        assert dp.value == 3.85

    def test_value_types(self):
        """Accepts float and int values."""
        ts = BASE_TIME

        dp_float = DataPoint(timestamp=ts, value=3.85)
        assert dp_float.value == 3.85

        dp_int = DataPoint(timestamp=ts, value=100)
        assert dp_int.value == 100


class TestTimeSeries:
    """Tests for TimeSeries dataclass."""

    def test_stores_metadata(self):
        """Stores metric, role, period metadata."""
        ts = TimeSeries(
            metric="bat",
            role="repeater",
            period="day",
        )

        assert ts.metric == "bat"
        assert ts.role == "repeater"
        assert ts.period == "day"

    def test_empty_by_default(self):
        """Points list is empty by default."""
        ts = TimeSeries(metric="bat", role="repeater", period="day")

        assert ts.points == []
        assert ts.is_empty is True

    def test_timestamps_property(self, sample_timeseries):
        """timestamps property returns list of timestamps."""
        timestamps = sample_timeseries.timestamps

        assert len(timestamps) == len(sample_timeseries.points)
        assert all(isinstance(t, datetime) for t in timestamps)

    def test_values_property(self, sample_timeseries):
        """values property returns list of values."""
        values = sample_timeseries.values

        assert len(values) == len(sample_timeseries.points)
        assert all(isinstance(v, float) for v in values)

    def test_is_empty_false_with_data(self, sample_timeseries):
        """is_empty is False when points exist."""
        assert sample_timeseries.is_empty is False

    def test_is_empty_true_without_data(self, empty_timeseries):
        """is_empty is True when no points."""
        assert empty_timeseries.is_empty is True


class TestLoadTimeseriesFromDb:
    """Tests for load_timeseries_from_db function."""

    def test_loads_metric_data(self, initialized_db, configured_env):
        """Loads metric data from database."""
        base_ts = 1704067200
        insert_metrics(base_ts, "repeater", {"bat": 3850.0}, initialized_db)
        insert_metrics(base_ts + 900, "repeater", {"bat": 3860.0}, initialized_db)

        ts = load_timeseries_from_db(
            role="repeater",
            metric="bat",
            end_time=datetime.fromtimestamp(base_ts + 1000),
            lookback=timedelta(hours=1),
            period="day",
        )

        assert len(ts.points) == 2

    def test_filters_by_time_range(self, initialized_db, configured_env):
        """Only loads data within time range."""
        base_ts = 1704067200

        # Insert data outside and inside range
        insert_metrics(base_ts - 7200, "repeater", {"bat": 3800.0}, initialized_db)  # Outside
        insert_metrics(base_ts, "repeater", {"bat": 3850.0}, initialized_db)  # Inside
        insert_metrics(base_ts + 7200, "repeater", {"bat": 3900.0}, initialized_db)  # Outside

        ts = load_timeseries_from_db(
            role="repeater",
            metric="bat",
            end_time=datetime.fromtimestamp(base_ts + 1800),
            lookback=timedelta(hours=1),
            period="day",
        )

        assert len(ts.points) == 1
        assert ts.points[0].value == pytest.approx(3.85)  # Transformed to volts

    def test_returns_correct_metadata(self, initialized_db, configured_env):
        """Returned TimeSeries has correct metadata."""
        ts = load_timeseries_from_db(
            role="repeater",
            metric="bat",
            end_time=BASE_TIME,
            lookback=timedelta(hours=1),
            period="week",
        )

        assert ts.metric == "bat"
        assert ts.role == "repeater"
        assert ts.period == "week"

    def test_uses_prefetched_metrics(self, initialized_db, configured_env):
        """Can use pre-fetched metrics dict."""
        base_ts = 1704067200
        insert_metrics(base_ts, "repeater", {"bat": 3850.0}, initialized_db)

        # Pre-fetch metrics
        from meshmon.db import get_metrics_for_period
        all_metrics = get_metrics_for_period("repeater", base_ts - 3600, base_ts + 3600)

        ts = load_timeseries_from_db(
            role="repeater",
            metric="bat",
            end_time=datetime.fromtimestamp(base_ts + 3600),
            lookback=timedelta(hours=2),
            period="day",
            all_metrics=all_metrics,
        )

        assert len(ts.points) == 1

    def test_handles_missing_metric(self, initialized_db, configured_env):
        """Returns empty TimeSeries for missing metric."""
        ts = load_timeseries_from_db(
            role="repeater",
            metric="nonexistent_metric",
            end_time=BASE_TIME,
            lookback=timedelta(hours=1),
            period="day",
        )

        assert ts.is_empty

    def test_sorts_by_timestamp(self, initialized_db, configured_env):
        """Points are sorted by timestamp."""
        base_ts = 1704067200

        # Insert out of order
        insert_metrics(base_ts + 300, "repeater", {"bat": 3860.0}, initialized_db)
        insert_metrics(base_ts, "repeater", {"bat": 3850.0}, initialized_db)
        insert_metrics(base_ts + 150, "repeater", {"bat": 3855.0}, initialized_db)

        ts = load_timeseries_from_db(
            role="repeater",
            metric="bat",
            end_time=datetime.fromtimestamp(base_ts + 600),
            lookback=timedelta(hours=1),
            period="day",
        )

        timestamps = [p.timestamp for p in ts.points]
        assert timestamps == sorted(timestamps)
215 tests/charts/test_transforms.py Normal file
@@ -0,0 +1,215 @@
"""Tests for chart data transformations (counter-to-rate, etc.)."""

from datetime import datetime, timedelta

import pytest

from meshmon.charts import (
    PERIOD_CONFIG,
    load_timeseries_from_db,
)
from meshmon.db import insert_metrics

BASE_TIME = datetime(2024, 1, 1, 0, 0, 0)


class TestCounterToRateConversion:
    """Tests for counter metric rate conversion."""

    def test_calculates_rate_from_deltas(self, initialized_db, configured_env):
        """Counter values are converted to rate of change."""
        base_ts = 1704067200  # 2024-01-01 00:00:00 UTC

        # Insert increasing counter values (15 min apart)
        for i in range(5):
            ts = base_ts + (i * 900)  # 15 minutes
            insert_metrics(ts, "repeater", {"nb_recv": float(i * 100)}, initialized_db)

        ts = load_timeseries_from_db(
            role="repeater",
            metric="nb_recv",
            end_time=datetime.fromtimestamp(base_ts + 4 * 900),
            lookback=timedelta(hours=2),
            period="day",
        )

        # Counter produces N-1 rate points from N values
        assert len(ts.points) == 4

        # All rates should be positive (counter increasing)
        expected_rate = (100.0 / 900.0) * 60.0
        for p in ts.points:
            assert p.value == pytest.approx(expected_rate)

    def test_handles_counter_reset(self, initialized_db, configured_env):
        """Counter resets (negative delta) are skipped."""
        base_ts = 1704067200

        # Insert values with a reset
        insert_metrics(base_ts, "repeater", {"nb_recv": 100.0}, initialized_db)
        insert_metrics(base_ts + 900, "repeater", {"nb_recv": 200.0}, initialized_db)
        insert_metrics(base_ts + 1800, "repeater", {"nb_recv": 50.0}, initialized_db)  # Reset!
        insert_metrics(base_ts + 2700, "repeater", {"nb_recv": 150.0}, initialized_db)

        ts = load_timeseries_from_db(
            role="repeater",
            metric="nb_recv",
            end_time=datetime.fromtimestamp(base_ts + 2700),
            lookback=timedelta(hours=1),
            period="day",
        )

        # Reset point should be skipped, so fewer points
        assert len(ts.points) == 2  # Only valid deltas
        expected_rate = (100.0 / 900.0) * 60.0
        assert ts.points[0].timestamp == datetime.fromtimestamp(base_ts + 900)
        assert ts.points[1].timestamp == datetime.fromtimestamp(base_ts + 2700)
        assert ts.points[0].value == pytest.approx(expected_rate)
        assert ts.points[1].value == pytest.approx(expected_rate)

    def test_applies_scale_factor(self, initialized_db, configured_env):
        """Counter rate is scaled (typically x60 for per-minute)."""
        base_ts = 1704067200

        # Insert values 60 seconds apart for easy math
        insert_metrics(base_ts, "repeater", {"nb_recv": 0.0}, initialized_db)
        insert_metrics(base_ts + 60, "repeater", {"nb_recv": 60.0}, initialized_db)

        ts = load_timeseries_from_db(
            role="repeater",
            metric="nb_recv",
            end_time=datetime.fromtimestamp(base_ts + 60),
            lookback=timedelta(hours=1),
            period="day",
        )

        # 60 packets in 60 seconds = 1/sec = 60/min with scale=60
        assert len(ts.points) == 1
        assert ts.points[0].value == pytest.approx(60.0)

    def test_single_value_returns_empty(self, initialized_db, configured_env):
        """Single counter value cannot compute rate."""
        base_ts = 1704067200
        insert_metrics(base_ts, "repeater", {"nb_recv": 100.0}, initialized_db)

        ts = load_timeseries_from_db(
            role="repeater",
            metric="nb_recv",
            end_time=datetime.fromtimestamp(base_ts),
            lookback=timedelta(hours=1),
            period="day",
        )

        assert ts.is_empty


class TestGaugeValueTransform:
    """Tests for gauge metric value transformation."""

    def test_applies_voltage_transform(self, initialized_db, configured_env):
        """Voltage transform converts mV to V."""
        base_ts = 1704067200

        # Insert millivolt value
        insert_metrics(base_ts, "companion", {"battery_mv": 3850.0}, initialized_db)

        ts = load_timeseries_from_db(
            role="companion",
            metric="battery_mv",
            end_time=datetime.fromtimestamp(base_ts),
            lookback=timedelta(hours=1),
            period="day",
        )

        # Should be converted to volts
        assert len(ts.points) == 1
        assert ts.points[0].value == pytest.approx(3.85)

    def test_no_transform_for_bat_pct(self, initialized_db, configured_env):
        """Battery percentage has no transform."""
        base_ts = 1704067200
        insert_metrics(base_ts, "repeater", {"bat_pct": 75.0}, initialized_db)

        ts = load_timeseries_from_db(
            role="repeater",
            metric="bat_pct",
            end_time=datetime.fromtimestamp(base_ts),
            lookback=timedelta(hours=1),
            period="day",
        )

        assert ts.points[0].value == pytest.approx(75.0)


class TestTimeBinning:
    """Tests for time series aggregation/binning."""

    def test_no_binning_for_day(self):
        """Day period uses raw data (no binning)."""
        assert PERIOD_CONFIG["day"].bin_seconds is None

    def test_30_min_bins_for_week(self):
        """Week period uses 30-minute bins."""
        assert PERIOD_CONFIG["week"].bin_seconds == 1800

    def test_2_hour_bins_for_month(self):
        """Month period uses 2-hour bins."""
        assert PERIOD_CONFIG["month"].bin_seconds == 7200

    def test_1_day_bins_for_year(self):
        """Year period uses 1-day bins."""
        assert PERIOD_CONFIG["year"].bin_seconds == 86400

    def test_binning_reduces_point_count(self, initialized_db, configured_env):
        """Binning aggregates multiple points per bin."""
        base_ts = 1704067200

        # Insert many points (one per minute for an hour)
        for i in range(60):
            ts = base_ts + (i * 60)
            insert_metrics(ts, "repeater", {"bat": 3850.0 + i}, initialized_db)
|
||||
|
||||
ts = load_timeseries_from_db(
|
||||
role="repeater",
|
||||
metric="bat",
|
||||
end_time=datetime.fromtimestamp(base_ts + 3600),
|
||||
lookback=timedelta(days=7), # Week period has 30-min bins
|
||||
period="week",
|
||||
)
|
||||
|
||||
# 60 points over 1 hour with 30-min bins = 2-3 bins
|
||||
assert len(ts.points) <= 3
|
||||
|
||||
|
||||
class TestEmptyData:
|
||||
"""Tests for handling empty/missing data."""
|
||||
|
||||
def test_empty_when_no_metric_data(self, initialized_db, configured_env):
|
||||
"""Returns empty TimeSeries when metric has no data."""
|
||||
ts = load_timeseries_from_db(
|
||||
role="repeater",
|
||||
metric="nonexistent",
|
||||
end_time=BASE_TIME,
|
||||
lookback=timedelta(days=1),
|
||||
period="day",
|
||||
)
|
||||
|
||||
assert ts.is_empty
|
||||
assert ts.metric == "nonexistent"
|
||||
assert ts.role == "repeater"
|
||||
assert ts.period == "day"
|
||||
|
||||
def test_empty_when_no_data_in_range(self, initialized_db, configured_env):
|
||||
"""Returns empty TimeSeries when no data in time range."""
|
||||
old_ts = 1000000 # Very old timestamp
|
||||
insert_metrics(old_ts, "repeater", {"bat": 3850.0}, initialized_db)
|
||||
|
||||
ts = load_timeseries_from_db(
|
||||
role="repeater",
|
||||
metric="bat",
|
||||
end_time=BASE_TIME,
|
||||
lookback=timedelta(hours=1),
|
||||
period="day",
|
||||
)
|
||||
|
||||
assert ts.is_empty
|
||||
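The counter-rate behavior these tests pin down (per-interval deltas converted to per-minute rates, with counter resets skipped) can be sketched as a small standalone function. This is a hypothetical illustration of the logic under test, not the repository's implementation; the function name and signature are assumptions.

```python
# Hypothetical sketch of the counter-rate logic exercised above (not the
# repository's code): each consecutive pair of samples yields one point,
# delta / elapsed_seconds * scale, and negative deltas (counter resets)
# are skipped entirely.
def counter_rates(samples, scale=60.0):
    """samples: list of (timestamp, counter_value) pairs, sorted by time.

    Returns a list of (timestamp, rate) points, one per valid interval.
    """
    points = []
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        delta = v1 - v0
        if delta < 0:  # counter reset: no meaningful rate for this interval
            continue
        points.append((t1, delta / (t1 - t0) * scale))
    return points
```

With samples at 0, 900, 1800 (reset), and 2700 seconds, the reset interval is dropped and the two valid intervals each produce `(100 / 900) * 60` per minute, matching the assertions in the tests above.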
1
tests/client/__init__.py
Normal file
@@ -0,0 +1 @@
"""Tests for MeshCore client wrapper."""
99
tests/client/conftest.py
Normal file
@@ -0,0 +1,99 @@
"""Fixtures for MeshCore client tests."""

from unittest.mock import AsyncMock, MagicMock, patch

import pytest


@pytest.fixture
def mock_meshcore_module():
    """Mock the entire meshcore module at import level."""
    mock_mc = MagicMock()
    mock_mc.MeshCore = MagicMock()
    mock_mc.EventType = MagicMock()
    mock_mc.EventType.ERROR = "ERROR"

    with patch.dict("sys.modules", {"meshcore": mock_mc}):
        yield mock_mc


@pytest.fixture
def mock_meshcore_client():
    """Create mock MeshCore client with AsyncMock for coroutines."""
    mc = MagicMock()
    mc.commands = MagicMock()
    mc.contacts = {}

    # Async methods
    mc.disconnect = AsyncMock()
    mc.commands.send_appstart = MagicMock(return_value=AsyncMock())
    mc.commands.get_contacts = MagicMock(return_value=AsyncMock())
    mc.commands.req_status_sync = MagicMock(return_value=AsyncMock())

    # Synchronous methods
    mc.get_contact_by_name = MagicMock(return_value=None)
    mc.get_contact_by_key_prefix = MagicMock(return_value=None)

    return mc


@pytest.fixture
def mock_serial_port():
    """Mock pyserial for serial port detection."""
    mock_serial = MagicMock()
    mock_port = MagicMock()
    mock_port.device = "/dev/ttyACM0"
    mock_port.description = "Mock MeshCore Device"
    mock_serial.tools = MagicMock()
    mock_serial.tools.list_ports = MagicMock()
    mock_serial.tools.list_ports.comports = MagicMock(return_value=[mock_port])

    with patch.dict("sys.modules", {
        "serial": mock_serial,
        "serial.tools": mock_serial.tools,
        "serial.tools.list_ports": mock_serial.tools.list_ports,
    }):
        yield mock_serial


def make_mock_event(event_type: str, payload: dict = None):
    """Helper: Create a mock MeshCore event.

    Args:
        event_type: Event type name (e.g., "SELF_INFO", "ERROR")
        payload: Event payload dict

    Returns:
        Mock event object
    """
    event = MagicMock()
    event.type = MagicMock()
    event.type.name = event_type
    event.payload = payload if payload is not None else {}
    return event


@pytest.fixture
def sample_contact():
    """Sample contact object."""
    contact = MagicMock()
    contact.adv_name = "TestNode"
    contact.name = "Test"
    contact.pubkey_prefix = "abc123"
    contact.public_key = b"\x01\x02\x03\x04"
    contact.type = 1
    contact.flags = 0
    return contact


@pytest.fixture
def sample_contact_dict():
    """Sample contact as dictionary."""
    return {
        "adv_name": "TestNode",
        "name": "Test",
        "pubkey_prefix": "abc123",
        "public_key": b"\x01\x02\x03\x04",
        "type": 1,
        "flags": 0,
    }
449
tests/client/test_connect.py
Normal file
@@ -0,0 +1,449 @@
"""Tests for MeshCore connection functions."""

from unittest.mock import AsyncMock, MagicMock

import pytest

from meshmon.meshcore_client import (
    _acquire_lock_async,
    auto_detect_serial_port,
    connect_from_env,
    connect_with_lock,
)


def _reset_config():
    import meshmon.env

    meshmon.env._config = None
    return meshmon.env.get_config()


class TestAutoDetectSerialPort:
    """Tests for auto_detect_serial_port function."""

    def test_prefers_acm_devices(self, mock_serial_port):
        """Prefers /dev/ttyACM* devices."""
        mock_port_acm = MagicMock()
        mock_port_acm.device = "/dev/ttyACM0"
        mock_port_acm.description = "ACM Device"

        mock_port_usb = MagicMock()
        mock_port_usb.device = "/dev/ttyUSB0"
        mock_port_usb.description = "USB Device"

        mock_serial_port.tools.list_ports.comports.return_value = [mock_port_usb, mock_port_acm]

        result = auto_detect_serial_port()

        assert result == "/dev/ttyACM0"

    def test_falls_back_to_usb(self, mock_serial_port):
        """Falls back to /dev/ttyUSB* if no ACM."""
        mock_port = MagicMock()
        mock_port.device = "/dev/ttyUSB0"
        mock_port.description = "USB Device"

        mock_serial_port.tools.list_ports.comports.return_value = [mock_port]

        result = auto_detect_serial_port()

        assert result == "/dev/ttyUSB0"

    def test_falls_back_to_first_available(self, mock_serial_port):
        """Falls back to first available port."""
        mock_port = MagicMock()
        mock_port.device = "/dev/ttyS0"
        mock_port.description = "Serial Port"

        mock_serial_port.tools.list_ports.comports.return_value = [mock_port]

        result = auto_detect_serial_port()

        assert result == "/dev/ttyS0"

    def test_returns_none_when_no_ports(self, mock_serial_port):
        """Returns None when no ports available."""
        mock_serial_port.tools.list_ports.comports.return_value = []

        result = auto_detect_serial_port()

        assert result is None

    def test_handles_import_error(self, monkeypatch):
        """Returns None when pyserial not installed."""
        import builtins

        real_import = builtins.__import__

        def mock_import(name, *args, **kwargs):
            if name in {"serial", "serial.tools.list_ports"}:
                raise ImportError("No module named 'serial'")
            return real_import(name, *args, **kwargs)

        monkeypatch.setattr(builtins, "__import__", mock_import)

        assert auto_detect_serial_port() is None


class TestConnectFromEnv:
    """Tests for connect_from_env function."""

    @pytest.mark.asyncio
    async def test_returns_none_when_meshcore_unavailable(self, configured_env, monkeypatch):
        """Returns None when meshcore library not available."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", False)

        result = await connect_from_env()

        assert result is None

    @pytest.mark.asyncio
    async def test_serial_connection(self, configured_env, monkeypatch, mock_serial_port):
        """Connects via serial when configured."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)
        monkeypatch.setenv("MESH_TRANSPORT", "serial")
        monkeypatch.setenv("MESH_SERIAL_PORT", "/dev/ttyACM0")
        monkeypatch.setenv("MESH_SERIAL_BAUD", "57600")
        monkeypatch.setenv("MESH_DEBUG", "1")

        _reset_config()

        mock_client = MagicMock()
        mock_create = AsyncMock(return_value=mock_client)
        mock_meshcore = MagicMock()
        mock_meshcore.create_serial = mock_create

        monkeypatch.setattr("meshmon.meshcore_client.MeshCore", mock_meshcore)

        result = await connect_from_env()

        assert result is mock_client
        mock_create.assert_called_once_with("/dev/ttyACM0", 57600, debug=True)

    @pytest.mark.asyncio
    async def test_tcp_connection(self, configured_env, monkeypatch):
        """Connects via TCP when configured."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)
        monkeypatch.setenv("MESH_TRANSPORT", "tcp")
        monkeypatch.setenv("MESH_TCP_HOST", "localhost")
        monkeypatch.setenv("MESH_TCP_PORT", "4403")

        _reset_config()

        mock_client = MagicMock()
        mock_create = AsyncMock(return_value=mock_client)
        mock_meshcore = MagicMock()
        mock_meshcore.create_tcp = mock_create

        monkeypatch.setattr("meshmon.meshcore_client.MeshCore", mock_meshcore)

        result = await connect_from_env()

        assert result is mock_client
        mock_create.assert_called_once_with("localhost", 4403)

    @pytest.mark.asyncio
    async def test_unknown_transport(self, configured_env, monkeypatch):
        """Returns None for unknown transport."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)
        monkeypatch.setenv("MESH_TRANSPORT", "unknown")

        _reset_config()

        result = await connect_from_env()

        assert result is None

    @pytest.mark.asyncio
    async def test_handles_connection_error(self, configured_env, monkeypatch, mock_serial_port):
        """Returns None on connection error."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)
        monkeypatch.setenv("MESH_TRANSPORT", "serial")
        monkeypatch.setenv("MESH_SERIAL_PORT", "/dev/ttyACM0")

        _reset_config()

        mock_create = AsyncMock(side_effect=Exception("Connection failed"))
        mock_meshcore = MagicMock()
        mock_meshcore.create_serial = mock_create

        monkeypatch.setattr("meshmon.meshcore_client.MeshCore", mock_meshcore)

        result = await connect_from_env()

        assert result is None
        mock_create.assert_called_once()

    @pytest.mark.asyncio
    async def test_ble_connection(self, configured_env, monkeypatch):
        """Connects via BLE when configured."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)
        monkeypatch.setenv("MESH_TRANSPORT", "ble")
        monkeypatch.setenv("MESH_BLE_ADDR", "AA:BB:CC:DD:EE:FF")
        monkeypatch.setenv("MESH_BLE_PIN", "123456")

        _reset_config()

        mock_client = MagicMock()
        mock_create = AsyncMock(return_value=mock_client)
        mock_meshcore = MagicMock()
        mock_meshcore.create_ble = mock_create

        monkeypatch.setattr("meshmon.meshcore_client.MeshCore", mock_meshcore)

        result = await connect_from_env()

        assert result is mock_client
        mock_create.assert_called_once_with("AA:BB:CC:DD:EE:FF", pin="123456")

    @pytest.mark.asyncio
    async def test_ble_missing_address(self, configured_env, monkeypatch):
        """Returns None when BLE address not configured."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)
        monkeypatch.setenv("MESH_TRANSPORT", "ble")
        # Don't set MESH_BLE_ADDR

        _reset_config()

        result = await connect_from_env()

        assert result is None

    @pytest.mark.asyncio
    async def test_serial_auto_detect(self, configured_env, monkeypatch, mock_serial_port):
        """Auto-detects serial port when not configured."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)
        monkeypatch.setenv("MESH_TRANSPORT", "serial")
        # Don't set MESH_SERIAL_PORT to trigger auto-detection

        _reset_config()

        # Set up mock port detection
        mock_port = MagicMock()
        mock_port.device = "/dev/ttyACM0"
        mock_serial_port.tools.list_ports.comports.return_value = [mock_port]

        mock_client = MagicMock()
        mock_create = AsyncMock(return_value=mock_client)
        mock_meshcore = MagicMock()
        mock_meshcore.create_serial = mock_create

        monkeypatch.setattr("meshmon.meshcore_client.MeshCore", mock_meshcore)

        result = await connect_from_env()

        assert result is mock_client
        mock_create.assert_called_once_with("/dev/ttyACM0", 115200, debug=False)

    @pytest.mark.asyncio
    async def test_serial_auto_detect_fails(self, configured_env, monkeypatch, mock_serial_port):
        """Returns None when serial auto-detection fails."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)
        monkeypatch.setenv("MESH_TRANSPORT", "serial")
        # Don't set MESH_SERIAL_PORT to trigger auto-detection

        _reset_config()

        # No ports available
        mock_serial_port.tools.list_ports.comports.return_value = []

        result = await connect_from_env()

        assert result is None


class TestConnectWithLock:
    """Tests for connect_with_lock context manager."""

    @pytest.mark.asyncio
    async def test_yields_client_on_success(self, configured_env, monkeypatch, mock_serial_port):
        """Yields connected client on success."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)
        monkeypatch.setenv("MESH_TRANSPORT", "serial")
        monkeypatch.setenv("MESH_SERIAL_PORT", "/dev/ttyACM0")

        _reset_config()

        mock_client = MagicMock()
        mock_client.disconnect = AsyncMock()
        mock_create = AsyncMock(return_value=mock_client)
        mock_meshcore = MagicMock()
        mock_meshcore.create_serial = mock_create

        monkeypatch.setattr("meshmon.meshcore_client.MeshCore", mock_meshcore)

        async with connect_with_lock() as mc:
            assert mc is mock_client

        # Should disconnect when exiting context
        mock_client.disconnect.assert_called_once()

    @pytest.mark.asyncio
    async def test_yields_none_on_connection_failure(self, configured_env, monkeypatch, mock_serial_port):
        """Yields None when connection fails."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)
        monkeypatch.setenv("MESH_TRANSPORT", "serial")
        monkeypatch.setenv("MESH_SERIAL_PORT", "/dev/ttyACM0")

        _reset_config()

        mock_create = AsyncMock(side_effect=Exception("Connection failed"))
        mock_meshcore = MagicMock()
        mock_meshcore.create_serial = mock_create

        monkeypatch.setattr("meshmon.meshcore_client.MeshCore", mock_meshcore)

        async with connect_with_lock() as mc:
            assert mc is None

    @pytest.mark.asyncio
    async def test_acquires_lock_for_serial(self, configured_env, monkeypatch, mock_serial_port):
        """Acquires lock file for serial transport."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)
        monkeypatch.setenv("MESH_TRANSPORT", "serial")
        monkeypatch.setenv("MESH_SERIAL_PORT", "/dev/ttyACM0")

        cfg = _reset_config()

        mock_client = MagicMock()
        mock_client.disconnect = AsyncMock()
        mock_create = AsyncMock(return_value=mock_client)
        mock_meshcore = MagicMock()
        mock_meshcore.create_serial = mock_create

        monkeypatch.setattr("meshmon.meshcore_client.MeshCore", mock_meshcore)

        async with connect_with_lock():
            # Lock file should exist while connected
            lock_path = cfg.state_dir / "serial.lock"
            assert lock_path.exists()

    @pytest.mark.asyncio
    async def test_no_lock_for_tcp(self, configured_env, monkeypatch):
        """Does not acquire lock for TCP transport."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)
        monkeypatch.setenv("MESH_TRANSPORT", "tcp")
        monkeypatch.setenv("MESH_TCP_HOST", "localhost")
        monkeypatch.setenv("MESH_TCP_PORT", "4403")

        cfg = _reset_config()

        mock_client = MagicMock()
        mock_client.disconnect = AsyncMock()
        mock_create = AsyncMock(return_value=mock_client)
        mock_meshcore = MagicMock()
        mock_meshcore.create_tcp = mock_create

        monkeypatch.setattr("meshmon.meshcore_client.MeshCore", mock_meshcore)

        lock_path = cfg.state_dir / "serial.lock"

        async with connect_with_lock():
            # Lock file should not exist for TCP
            assert not lock_path.exists()

    @pytest.mark.asyncio
    async def test_handles_disconnect_error(self, configured_env, monkeypatch, mock_serial_port):
        """Handles disconnect error gracefully."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)
        monkeypatch.setenv("MESH_TRANSPORT", "serial")
        monkeypatch.setenv("MESH_SERIAL_PORT", "/dev/ttyACM0")

        _reset_config()

        mock_client = MagicMock()
        mock_client.disconnect = AsyncMock(side_effect=Exception("Disconnect error"))
        mock_create = AsyncMock(return_value=mock_client)
        mock_meshcore = MagicMock()
        mock_meshcore.create_serial = mock_create

        monkeypatch.setattr("meshmon.meshcore_client.MeshCore", mock_meshcore)

        # Should not raise even when disconnect fails
        async with connect_with_lock() as mc:
            assert mc is mock_client

        # Disconnect was still called
        mock_client.disconnect.assert_called_once()

    @pytest.mark.asyncio
    async def test_releases_lock_on_failure(self, configured_env, monkeypatch, mock_serial_port):
        """Releases lock even when connection fails."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)
        monkeypatch.setenv("MESH_TRANSPORT", "serial")
        monkeypatch.setenv("MESH_SERIAL_PORT", "/dev/ttyACM0")

        cfg = _reset_config()

        mock_create = AsyncMock(side_effect=Exception("Connection failed"))
        mock_meshcore = MagicMock()
        mock_meshcore.create_serial = mock_create

        monkeypatch.setattr("meshmon.meshcore_client.MeshCore", mock_meshcore)

        async with connect_with_lock() as mc:
            assert mc is None

        # Lock should be released after exiting context
        # We can verify by acquiring it again without timeout
        lock_path = cfg.state_dir / "serial.lock"
        if lock_path.exists():
            import fcntl
            with open(lock_path, "a") as f:
                fcntl.flock(f.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)


class TestAcquireLockAsync:
    """Tests for _acquire_lock_async function."""

    @pytest.mark.asyncio
    async def test_acquires_lock_immediately(self, tmp_path):
        """Acquires lock when not held by others."""
        lock_file = tmp_path / "test.lock"

        with open(lock_file, "w") as f:
            await _acquire_lock_async(f, timeout=1.0)
            # If we get here, lock was acquired

    @pytest.mark.asyncio
    async def test_times_out_when_locked(self, tmp_path):
        """Times out when lock held by another."""
        import fcntl

        lock_file = tmp_path / "test.lock"

        # Hold the lock in this process
        holder = open(lock_file, "w")  # noqa: SIM115 - must stay open for lock
        fcntl.flock(holder.fileno(), fcntl.LOCK_EX)

        try:
            # Try to acquire with different file handle
            with open(lock_file, "a") as f, pytest.raises(TimeoutError):
                await _acquire_lock_async(f, timeout=0.2, poll_interval=0.05)
        finally:
            holder.close()

    @pytest.mark.asyncio
    async def test_waits_for_lock_release(self, tmp_path):
        """Waits and acquires when lock released."""
        import asyncio
        import fcntl

        lock_file = tmp_path / "test.lock"

        holder = open(lock_file, "w")  # noqa: SIM115 - must stay open for lock
        fcntl.flock(holder.fileno(), fcntl.LOCK_EX)

        async def release_later():
            await asyncio.sleep(0.1)
            holder.close()

        # Start release task
        release_task = asyncio.create_task(release_later())

        # Try to acquire - should succeed after release
        with open(lock_file, "a") as f:
            await _acquire_lock_async(f, timeout=2.0, poll_interval=0.05)

        await release_task
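The lock behavior tested above (non-blocking flock retried until a timeout, without blocking the event loop) can be sketched as follows. This is a hypothetical illustration of what a helper like `_acquire_lock_async` might do; the exact name, signature, and defaults in the repository may differ.

```python
# Hypothetical sketch (assumed names/signature) of an async flock helper:
# attempt a non-blocking exclusive flock, and if the lock is held elsewhere,
# sleep and retry until a deadline, raising TimeoutError when it passes.
import asyncio
import errno
import fcntl
import time


async def acquire_lock_async(f, timeout=10.0, poll_interval=0.1):
    """Acquire an exclusive flock on open file `f`, polling asynchronously."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            # Non-blocking attempt; raises OSError if another holder exists
            fcntl.flock(f.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
            return
        except OSError as e:
            if e.errno not in (errno.EAGAIN, errno.EACCES):
                raise  # real I/O error, not contention
        if time.monotonic() >= deadline:
            raise TimeoutError("could not acquire lock within timeout")
        # Yield to the event loop instead of blocking in flock(LOCK_EX)
        await asyncio.sleep(poll_interval)
```

Polling with `LOCK_NB` keeps the coroutine cooperative: a blocking `flock(LOCK_EX)` would stall the whole event loop while waiting for the other holder.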
271
tests/client/test_contacts.py
Normal file
@@ -0,0 +1,271 @@
"""Tests for contact lookup functions."""

from types import SimpleNamespace
from unittest.mock import MagicMock


class TestGetContactByName:
    """Tests for get_contact_by_name function."""

    def test_returns_contact_when_found(self, mock_meshcore_client):
        """Returns contact when found by name."""
        from meshmon.meshcore_client import get_contact_by_name

        contact = MagicMock()
        contact.adv_name = "TestNode"
        mock_meshcore_client.get_contact_by_name.return_value = contact

        result = get_contact_by_name(mock_meshcore_client, "TestNode")

        assert result == contact
        mock_meshcore_client.get_contact_by_name.assert_called_once_with("TestNode")

    def test_returns_none_when_not_found(self, mock_meshcore_client):
        """Returns None when contact not found."""
        from meshmon.meshcore_client import get_contact_by_name

        mock_meshcore_client.get_contact_by_name.return_value = None

        result = get_contact_by_name(mock_meshcore_client, "NonExistent")

        assert result is None
        mock_meshcore_client.get_contact_by_name.assert_called_once_with("NonExistent")

    def test_returns_none_when_method_not_available(self):
        """Returns None when get_contact_by_name method not available."""
        from meshmon.meshcore_client import get_contact_by_name

        mc = MagicMock(spec=[])  # No methods

        result = get_contact_by_name(mc, "TestNode")

        assert result is None

    def test_returns_none_on_exception(self, mock_meshcore_client):
        """Returns None when method raises exception."""
        from meshmon.meshcore_client import get_contact_by_name

        mock_meshcore_client.get_contact_by_name.side_effect = RuntimeError("Connection lost")

        result = get_contact_by_name(mock_meshcore_client, "TestNode")

        assert result is None
        mock_meshcore_client.get_contact_by_name.assert_called_once_with("TestNode")


class TestGetContactByKeyPrefix:
    """Tests for get_contact_by_key_prefix function."""

    def test_returns_contact_when_found(self, mock_meshcore_client):
        """Returns contact when found by key prefix."""
        from meshmon.meshcore_client import get_contact_by_key_prefix

        contact = MagicMock()
        contact.pubkey_prefix = "abc123"
        mock_meshcore_client.get_contact_by_key_prefix.return_value = contact

        result = get_contact_by_key_prefix(mock_meshcore_client, "abc123")

        assert result == contact
        mock_meshcore_client.get_contact_by_key_prefix.assert_called_once_with("abc123")

    def test_returns_none_when_not_found(self, mock_meshcore_client):
        """Returns None when contact not found."""
        from meshmon.meshcore_client import get_contact_by_key_prefix

        mock_meshcore_client.get_contact_by_key_prefix.return_value = None

        result = get_contact_by_key_prefix(mock_meshcore_client, "xyz789")

        assert result is None
        mock_meshcore_client.get_contact_by_key_prefix.assert_called_once_with("xyz789")

    def test_returns_none_when_method_not_available(self):
        """Returns None when get_contact_by_key_prefix method not available."""
        from meshmon.meshcore_client import get_contact_by_key_prefix

        mc = MagicMock(spec=[])  # No methods

        result = get_contact_by_key_prefix(mc, "abc123")

        assert result is None

    def test_returns_none_on_exception(self, mock_meshcore_client):
        """Returns None when method raises exception."""
        from meshmon.meshcore_client import get_contact_by_key_prefix

        mock_meshcore_client.get_contact_by_key_prefix.side_effect = RuntimeError("Connection lost")

        result = get_contact_by_key_prefix(mock_meshcore_client, "abc123")

        assert result is None
        mock_meshcore_client.get_contact_by_key_prefix.assert_called_once_with("abc123")


class TestExtractContactInfo:
    """Tests for extract_contact_info function."""

    def test_extracts_from_dict_contact(self):
        """Extracts info from dict-based contact."""
        from meshmon.meshcore_client import extract_contact_info

        contact = {
            "adv_name": "TestNode",
            "name": "test",
            "pubkey_prefix": "abc123",
            "public_key": "abc123def456",
            "type": 1,
            "flags": 0,
        }

        result = extract_contact_info(contact)

        assert result["adv_name"] == "TestNode"
        assert result["name"] == "test"
        assert result["pubkey_prefix"] == "abc123"
        assert result["public_key"] == "abc123def456"
        assert result["type"] == 1
        assert result["flags"] == 0

    def test_extracts_from_object_contact(self):
        """Extracts info from object-based contact."""
        from meshmon.meshcore_client import extract_contact_info

        contact = SimpleNamespace(
            adv_name="TestNode",
            name="test",
            pubkey_prefix="abc123",
            public_key="abc123def456",
            type=1,
            flags=0,
        )

        result = extract_contact_info(contact)

        assert result["adv_name"] == "TestNode"
        assert result["name"] == "test"
        assert result["pubkey_prefix"] == "abc123"

    def test_converts_bytes_to_hex(self):
        """Converts bytes values to hex strings."""
        from meshmon.meshcore_client import extract_contact_info

        contact = {
            "adv_name": "TestNode",
            "public_key": bytes.fromhex("abc123def456"),
        }

        result = extract_contact_info(contact)

        assert result["adv_name"] == "TestNode"
        assert result["public_key"] == "abc123def456"

    def test_converts_bytes_from_object(self):
        """Converts bytes values from object attributes to hex."""
        from meshmon.meshcore_client import extract_contact_info

        contact = SimpleNamespace(
            adv_name="TestNode",
            public_key=bytes.fromhex("deadbeef"),
        )

        result = extract_contact_info(contact)

        assert result["adv_name"] == "TestNode"
        assert result["public_key"] == "deadbeef"

    def test_skips_none_values(self):
        """Skips None values in contact."""
        from meshmon.meshcore_client import extract_contact_info

        contact = {
            "adv_name": "TestNode",
            "name": None,
            "pubkey_prefix": None,
        }

        result = extract_contact_info(contact)

        assert result["adv_name"] == "TestNode"
        assert "name" not in result
        assert "pubkey_prefix" not in result

    def test_skips_missing_attributes(self):
        """Skips missing attributes in dict contact."""
        from meshmon.meshcore_client import extract_contact_info

        contact = {"adv_name": "TestNode"}

        result = extract_contact_info(contact)

        assert result == {"adv_name": "TestNode"}

    def test_empty_contact_returns_empty_dict(self):
        """Empty contact returns empty dict."""
        from meshmon.meshcore_client import extract_contact_info

        result = extract_contact_info({})

        assert result == {}


class TestListContactsSummary:
    """Tests for list_contacts_summary function."""

    def test_returns_list_of_contact_info(self):
        """Returns list of extracted contact info."""
        from meshmon.meshcore_client import list_contacts_summary

        contacts = [
            {"adv_name": "Node1", "type": 1},
            {"adv_name": "Node2", "type": 2},
            {"adv_name": "Node3", "type": 1},
        ]

        result = list_contacts_summary(contacts)

        assert len(result) == 3
        assert result[0]["adv_name"] == "Node1"
        assert result[1]["adv_name"] == "Node2"
        assert result[2]["adv_name"] == "Node3"

    def test_handles_mixed_contact_types(self):
        """Handles mix of dict and object contacts."""
        from meshmon.meshcore_client import list_contacts_summary

        obj_contact = SimpleNamespace(adv_name="ObjectNode")

        contacts = [
            {"adv_name": "DictNode"},
            obj_contact,
        ]

        result = list_contacts_summary(contacts)

        assert len(result) == 2
        assert result[0]["adv_name"] == "DictNode"
        assert result[1]["adv_name"] == "ObjectNode"

    def test_empty_list_returns_empty_list(self):
        """Empty contacts list returns empty list."""
        from meshmon.meshcore_client import list_contacts_summary

        result = list_contacts_summary([])

        assert result == []

    def test_preserves_order(self):
        """Preserves contact order in output."""
        from meshmon.meshcore_client import list_contacts_summary

        contacts = [
            {"adv_name": "Zebra"},
            {"adv_name": "Alpha"},
            {"adv_name": "Middle"},
        ]

        result = list_contacts_summary(contacts)

        assert result[0]["adv_name"] == "Zebra"
        assert result[1]["adv_name"] == "Alpha"
        assert result[2]["adv_name"] == "Middle"
tests/client/test_meshcore_available.py (new file, 245 lines)
@@ -0,0 +1,245 @@
"""Tests for MESHCORE_AVAILABLE flag handling."""

from unittest.mock import AsyncMock, MagicMock

import pytest


class TestMeshcoreAvailableTrue:
    """Tests when MESHCORE_AVAILABLE is True."""

    @pytest.mark.asyncio
    async def test_run_command_executes_when_available(self, mock_meshcore_client, monkeypatch):
        """run_command executes command when meshcore available."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)

        from meshmon.meshcore_client import run_command

        from .conftest import make_mock_event

        event = make_mock_event("SELF_INFO", {"bat": 3850})

        async def cmd():
            return event

        success, event_type, payload, error = await run_command(
            mock_meshcore_client, cmd(), "test"
        )

        assert success is True
        assert event_type == "SELF_INFO"
        assert payload == {"bat": 3850}
        assert error is None

    @pytest.mark.asyncio
    async def test_connect_from_env_attempts_connection(self, monkeypatch, tmp_path):
        """connect_from_env attempts to connect when meshcore available."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)

        # Mock MeshCore.create_serial
        mock_mc = MagicMock()
        mock_meshcore = MagicMock()
        mock_meshcore.create_serial = AsyncMock(return_value=mock_mc)
        monkeypatch.setattr("meshmon.meshcore_client.MeshCore", mock_meshcore)

        # Configure environment
        monkeypatch.setenv("MESH_TRANSPORT", "serial")
        monkeypatch.setenv("MESH_SERIAL_PORT", "/dev/ttyACM0")
        monkeypatch.setenv("STATE_DIR", str(tmp_path))
        monkeypatch.setenv("OUT_DIR", str(tmp_path / "out"))

        import meshmon.env
        meshmon.env._config = None

        from meshmon.meshcore_client import connect_from_env

        result = await connect_from_env()

        assert result == mock_mc
        mock_meshcore.create_serial.assert_called_once_with("/dev/ttyACM0", 115200, debug=False)


class TestMeshcoreAvailableFalse:
    """Tests when MESHCORE_AVAILABLE is False."""

    @pytest.mark.asyncio
    async def test_run_command_returns_failure(self, mock_meshcore_client, monkeypatch):
        """run_command returns failure when meshcore not available."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", False)

        from meshmon.meshcore_client import run_command

        async def cmd():
            return None

        # Create the coroutine
        cmd_coro = cmd()

        success, event_type, payload, error = await run_command(
            mock_meshcore_client, cmd_coro, "test"
        )

        # Close the coroutine to prevent "never awaited" warning
        # since run_command returns early when MESHCORE_AVAILABLE=False
        cmd_coro.close()

        assert success is False
        assert event_type is None
        assert payload is None
        assert "not available" in error

    @pytest.mark.asyncio
    async def test_connect_from_env_returns_none(self, monkeypatch, tmp_path):
        """connect_from_env returns None when meshcore not available."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", False)

        # Configure environment
        monkeypatch.setenv("STATE_DIR", str(tmp_path))
        monkeypatch.setenv("OUT_DIR", str(tmp_path / "out"))

        import meshmon.env
        meshmon.env._config = None

        from meshmon.meshcore_client import connect_from_env

        result = await connect_from_env()

        assert result is None


class TestMeshcoreImportFallback:
    """Tests for import fallback behavior."""

    def test_meshcore_none_when_import_fails(self, monkeypatch):
        """MeshCore is None when import fails."""
        import builtins
        import importlib

        import meshmon.meshcore_client as module

        real_import = builtins.__import__

        def mock_import(name, *args, **kwargs):
            if name == "meshcore":
                raise ImportError("No module named 'meshcore'")
            return real_import(name, *args, **kwargs)

        monkeypatch.setattr(builtins, "__import__", mock_import)

        module = importlib.reload(module)

        assert module.MESHCORE_AVAILABLE is False
        assert module.MeshCore is None
        assert module.EventType is None

        monkeypatch.setattr(builtins, "__import__", real_import)
        importlib.reload(module)

    @pytest.mark.asyncio
    async def test_event_type_check_handles_none(self, monkeypatch):
        """EventType checks handle None gracefully."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)
        monkeypatch.setattr("meshmon.meshcore_client.EventType", None)

        from meshmon.meshcore_client import run_command

        from .conftest import make_mock_event

        event = make_mock_event("SELF_INFO", {"bat": 3850})

        async def cmd():
            return event

        success, event_type, payload, error = await run_command(
            MagicMock(), cmd(), "test"
        )

        assert success is True
        assert event_type == "SELF_INFO"
        assert payload == {"bat": 3850}
        assert error is None


class TestContactFunctionsWithUnavailableMeshcore:
    """Tests that contact functions work regardless of MESHCORE_AVAILABLE."""

    def test_get_contact_by_name_works_when_unavailable(self, mock_meshcore_client, monkeypatch):
        """get_contact_by_name works even when meshcore unavailable."""
        # Contact functions don't check MESHCORE_AVAILABLE - they work with any client
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", False)

        from meshmon.meshcore_client import get_contact_by_name

        contact = MagicMock()
        contact.adv_name = "TestNode"
        mock_meshcore_client.get_contact_by_name.return_value = contact

        result = get_contact_by_name(mock_meshcore_client, "TestNode")

        assert result == contact

    def test_get_contact_by_key_prefix_works_when_unavailable(
        self, mock_meshcore_client, monkeypatch
    ):
        """get_contact_by_key_prefix works even when meshcore unavailable."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", False)

        from meshmon.meshcore_client import get_contact_by_key_prefix

        contact = MagicMock()
        contact.pubkey_prefix = "abc123"
        mock_meshcore_client.get_contact_by_key_prefix.return_value = contact

        result = get_contact_by_key_prefix(mock_meshcore_client, "abc123")

        assert result == contact

    def test_extract_contact_info_works_when_unavailable(self, monkeypatch):
        """extract_contact_info works even when meshcore unavailable."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", False)

        from meshmon.meshcore_client import extract_contact_info

        contact = {"adv_name": "TestNode", "type": 1}

        result = extract_contact_info(contact)

        assert result["adv_name"] == "TestNode"
        assert result["type"] == 1

    def test_list_contacts_summary_works_when_unavailable(self, monkeypatch):
        """list_contacts_summary works even when meshcore unavailable."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", False)

        from meshmon.meshcore_client import list_contacts_summary

        contacts = [{"adv_name": "Node1"}, {"adv_name": "Node2"}]

        result = list_contacts_summary(contacts)

        assert len(result) == 2
        assert result[0]["adv_name"] == "Node1"


class TestAutoDetectWithUnavailablePyserial:
    """Tests for auto_detect_serial_port when pyserial unavailable."""

    def test_returns_none_when_pyserial_not_installed(self, monkeypatch):
        """Returns None when pyserial not installed."""
        # Mock the import to fail
        import builtins

        real_import = builtins.__import__

        def mock_import(name, *args, **kwargs):
            if name == "serial.tools.list_ports" or name == "serial":
                raise ImportError("No module named 'serial'")
            return real_import(name, *args, **kwargs)

        monkeypatch.setattr(builtins, "__import__", mock_import)

        from meshmon.meshcore_client import auto_detect_serial_port

        result = auto_detect_serial_port()

        assert result is None
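The optional-import pattern these tests exercise (try the import, fall back to `None` sentinels and an availability flag) can be sketched in isolation. This is an illustrative helper, not the actual `meshmon.meshcore_client` source; `import_optional` and the module names are assumptions for demonstration only:

```python
def import_optional(module_name: str):
    """Try to import a module; return (module_or_None, available_flag)."""
    try:
        module = __import__(module_name)
        return module, True
    except ImportError:
        # Dependency missing: callers check the flag before using the module.
        return None, False


# A stdlib module imports fine; a nonsense name takes the fallback path.
json_mod, json_ok = import_optional("json")
missing_mod, missing_ok = import_optional("definitely_not_a_real_module_xyz")
```

Code guarded by such a flag degrades gracefully, which is exactly what `TestMeshcoreAvailableFalse` verifies for `run_command` and `connect_from_env`.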
tests/client/test_run_command.py (new file, 237 lines)
@@ -0,0 +1,237 @@
"""Tests for run_command function."""

from unittest.mock import MagicMock

import pytest

from meshmon.meshcore_client import run_command

from .conftest import make_mock_event


class TestRunCommandSuccess:
    """Tests for successful command execution."""

    @pytest.mark.asyncio
    async def test_returns_success_tuple(self, mock_meshcore_client, monkeypatch):
        """Returns (True, event_type, payload, None) on success."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)

        event = make_mock_event("SELF_INFO", {"bat": 3850})

        async def cmd():
            return event

        success, event_type, payload, error = await run_command(
            mock_meshcore_client, cmd(), "test"
        )

        assert success is True
        assert event_type == "SELF_INFO"
        assert payload == {"bat": 3850}
        assert error is None

    @pytest.mark.asyncio
    async def test_extracts_payload_dict(self, mock_meshcore_client, monkeypatch):
        """Extracts payload when it's a dict."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)

        payload_data = {"voltage": 3.85, "uptime": 86400}
        event = make_mock_event("SELF_INFO", payload_data)

        async def cmd():
            return event

        success, _, payload, _ = await run_command(
            mock_meshcore_client, cmd(), "test"
        )

        assert payload == payload_data

    @pytest.mark.asyncio
    async def test_converts_object_payload(self, mock_meshcore_client, monkeypatch):
        """Converts object payload to dict."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)

        # Create object-like payload using a simple class with instance attributes
        # vars() only returns instance attributes, not class attributes
        class ObjPayload:
            def __init__(self):
                self.voltage = 3.85

        obj_payload = ObjPayload()

        event = make_mock_event("SELF_INFO", payload=obj_payload)

        async def cmd():
            return event

        success, _, payload, _ = await run_command(
            mock_meshcore_client, cmd(), "test"
        )

        assert payload == {"voltage": 3.85}

    @pytest.mark.asyncio
    async def test_converts_namedtuple_payload(self, mock_meshcore_client, monkeypatch):
        """Converts namedtuple payload to dict."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)

        from collections import namedtuple
        Payload = namedtuple("Payload", ["voltage", "uptime"])
        nt_payload = Payload(voltage=3.85, uptime=86400)

        event = make_mock_event("SELF_INFO")
        event.payload = nt_payload

        async def cmd():
            return event

        success, _, payload, _ = await run_command(
            mock_meshcore_client, cmd(), "test"
        )

        assert payload == {"voltage": 3.85, "uptime": 86400}


class TestRunCommandFailure:
    """Tests for command failure scenarios."""

    @pytest.mark.asyncio
    async def test_returns_failure_when_unavailable(self, mock_meshcore_client, monkeypatch):
        """Returns failure when meshcore not available."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", False)

        async def cmd():
            return None

        # Create the coroutine
        cmd_coro = cmd()

        success, event_type, payload, error = await run_command(
            mock_meshcore_client, cmd_coro, "test"
        )

        # Close the coroutine to prevent "never awaited" warning
        # since run_command returns early when MESHCORE_AVAILABLE=False
        cmd_coro.close()

        assert success is False
        assert event_type is None
        assert payload is None
        assert error == "meshcore not available"

    @pytest.mark.asyncio
    async def test_returns_failure_on_none_event(self, mock_meshcore_client, monkeypatch):
        """Returns failure when no event received."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)

        async def cmd():
            return None

        success, _, _, error = await run_command(
            mock_meshcore_client, cmd(), "test"
        )

        assert success is False
        assert error == "No response received"

    @pytest.mark.asyncio
    async def test_returns_failure_on_error_event(self, mock_meshcore_client, monkeypatch):
        """Returns failure on ERROR event type."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)

        # Set up EventType mock
        mock_event_type = MagicMock()
        mock_event_type.ERROR = "ERROR"
        monkeypatch.setattr("meshmon.meshcore_client.EventType", mock_event_type)

        event = MagicMock()
        event.type = mock_event_type.ERROR
        event.payload = "Command failed"

        async def cmd():
            return event

        success, event_type, payload, error = await run_command(
            mock_meshcore_client, cmd(), "test"
        )

        assert success is False
        assert event_type == "ERROR"
        assert payload is None
        assert error == "Command failed"

    @pytest.mark.asyncio
    async def test_returns_failure_on_timeout(self, mock_meshcore_client, monkeypatch):
        """Returns failure on timeout."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)

        async def cmd():
            raise TimeoutError()

        success, _, _, error = await run_command(
            mock_meshcore_client, cmd(), "test"
        )

        assert success is False
        assert error == "Timeout"

    @pytest.mark.asyncio
    async def test_returns_failure_on_exception(self, mock_meshcore_client, monkeypatch):
        """Returns failure on general exception."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)

        async def cmd():
            raise RuntimeError("Connection lost")

        success, _, _, error = await run_command(
            mock_meshcore_client, cmd(), "test"
        )

        assert success is False
        assert error == "Connection lost"


class TestRunCommandEventTypeParsing:
    """Tests for event type name extraction."""

    @pytest.mark.asyncio
    async def test_extracts_type_name_attribute(self, mock_meshcore_client, monkeypatch):
        """Extracts event type from .type.name attribute."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)

        event = make_mock_event("CUSTOM_EVENT", {})

        async def cmd():
            return event

        success, event_type, payload, error = await run_command(
            mock_meshcore_client, cmd(), "test"
        )

        assert success is True
        assert event_type == "CUSTOM_EVENT"
        assert payload == {}
        assert error is None

    @pytest.mark.asyncio
    async def test_falls_back_to_str_type(self, mock_meshcore_client, monkeypatch):
        """Falls back to str(type) when no .name."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)

        event = MagicMock()
        event.type = "STRING_TYPE"
        event.payload = {}

        async def cmd():
            return event

        success, event_type, payload, error = await run_command(
            mock_meshcore_client, cmd(), "test"
        )

        assert success is True
        assert event_type == "STRING_TYPE"
        assert payload == {}
        assert error is None
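The `cmd_coro.close()` calls in the unavailable-path tests exist because a coroutine object that is created but never awaited raises a `RuntimeWarning` when it is garbage-collected. A standalone sketch of the pattern (names here are illustrative, not from meshmon):

```python
import asyncio


async def cmd():
    # Stand-in for a command coroutine passed to an early-returning caller.
    return 42


# Early-return path: the coroutine object is created but never awaited.
coro = cmd()
# Explicitly closing it suppresses the "coroutine ... was never awaited"
# RuntimeWarning that would otherwise fire at garbage collection.
coro.close()

# The normal path still works with a fresh coroutine object.
result = asyncio.run(cmd())
```

Closing is per-object: a closed coroutine cannot be awaited later, which is why the tests build a fresh one for each call.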
tests/config/__init__.py (new file, 1 line)
@@ -0,0 +1 @@
"""Configuration tests."""
tests/config/conftest.py (new file, 49 lines)
@@ -0,0 +1,49 @@
"""Fixtures for configuration tests."""

import os

import pytest


@pytest.fixture
def config_file(tmp_path, monkeypatch):
    """Create a temporary config file and set up paths.

    Returns a helper to write config content.
    """
    config_path = tmp_path / "meshcore.conf"

    # Helper function to write config content
    def write_config(content: str):
        config_path.write_text(content)
        return config_path

    return write_config


@pytest.fixture
def isolate_config_loading(monkeypatch):
    """Isolate config loading by clearing all mesh-related env vars.

    This fixture goes beyond clean_env by ensuring a completely
    clean slate for testing config file loading.
    """
    # Clear all env vars that might affect config
    env_prefixes = (
        "MESH_", "REPEATER_", "COMPANION_", "REMOTE_",
        "TELEMETRY_", "REPORT_", "RADIO_", "STATE_DIR", "OUT_DIR"
    )
    for key in list(os.environ.keys()):
        for prefix in env_prefixes:
            if key.startswith(prefix):
                monkeypatch.delenv(key, raising=False)
                break

    # Reset config singleton
    import meshmon.env
    meshmon.env._config = None

    yield

    # Reset again after test
    meshmon.env._config = None
tests/config/test_config_file.py (new file, 253 lines)
@@ -0,0 +1,253 @@
"""Tests for meshcore.conf file parsing."""

import os

from meshmon.env import _parse_config_value


def _load_config_from_content(tmp_path, monkeypatch, content: str | None) -> None:
    import meshmon.env as env

    config_path = tmp_path / "meshcore.conf"
    if content is not None:
        config_path.write_text(content)

    fake_env_path = tmp_path / "src" / "meshmon" / "env.py"
    fake_env_path.parent.mkdir(parents=True, exist_ok=True)
    fake_env_path.write_text("")

    monkeypatch.setattr(env, "__file__", str(fake_env_path))
    env._load_config_file()


class TestParseConfigValueDetailed:
    """Detailed tests for _parse_config_value."""

    # ==========================================================================
    # Empty/whitespace handling
    # ==========================================================================

    def test_empty_string(self):
        assert _parse_config_value("") == ""

    def test_only_spaces(self):
        assert _parse_config_value(" ") == ""

    def test_only_tabs(self):
        assert _parse_config_value("\t\t") == ""

    # ==========================================================================
    # Unquoted values
    # ==========================================================================

    def test_simple_value(self):
        assert _parse_config_value("hello") == "hello"

    def test_value_with_leading_trailing_space(self):
        assert _parse_config_value(" hello ") == "hello"

    def test_value_with_internal_spaces(self):
        assert _parse_config_value("hello world") == "hello world"

    def test_numeric_value(self):
        assert _parse_config_value("12345") == "12345"

    def test_path_value(self):
        assert _parse_config_value("/dev/ttyUSB0") == "/dev/ttyUSB0"

    # ==========================================================================
    # Double-quoted strings
    # ==========================================================================

    def test_double_quoted_simple(self):
        assert _parse_config_value('"hello"') == "hello"

    def test_double_quoted_with_spaces(self):
        assert _parse_config_value('"hello world"') == "hello world"

    def test_double_quoted_with_special_chars(self):
        assert _parse_config_value('"hello #world"') == "hello #world"

    def test_double_quoted_unclosed(self):
        assert _parse_config_value('"hello') == "hello"

    def test_double_quoted_empty(self):
        assert _parse_config_value('""') == ""

    def test_double_quoted_with_trailing_content(self):
        # Only extracts content within first pair of quotes
        assert _parse_config_value('"hello" # comment') == "hello"

    # ==========================================================================
    # Single-quoted strings
    # ==========================================================================

    def test_single_quoted_simple(self):
        assert _parse_config_value("'hello'") == "hello"

    def test_single_quoted_with_spaces(self):
        assert _parse_config_value("'hello world'") == "hello world"

    def test_single_quoted_unclosed(self):
        assert _parse_config_value("'hello") == "hello"

    def test_single_quoted_empty(self):
        assert _parse_config_value("''") == ""

    # ==========================================================================
    # Inline comments
    # ==========================================================================

    def test_inline_comment_with_space(self):
        assert _parse_config_value("hello # comment") == "hello"

    def test_inline_comment_multiple_spaces(self):
        assert _parse_config_value("hello # comment here") == "hello"

    def test_hash_without_space_kept(self):
        # Hash without preceding space is kept (not a comment)
        assert _parse_config_value("color#ffffff") == "color#ffffff"

    def test_hash_at_start_kept(self):
        # Hash at start is kept (though unusual for a value)
        assert _parse_config_value("#ffffff") == "#ffffff"

    # ==========================================================================
    # Mixed scenarios
    # ==========================================================================

    def test_quoted_preserves_hash_comment_style(self):
        assert _parse_config_value('"test # not a comment"') == "test # not a comment"

    def test_value_ending_with_hash(self):
        # "test#" has no space before #, so kept
        assert _parse_config_value("test#") == "test#"


class TestLoadConfigFileBehavior:
    """Tests for _load_config_file behavior."""

    def test_nonexistent_file_no_error(self, tmp_path, monkeypatch, isolate_config_loading):
        """Missing config file doesn't raise error."""
        _load_config_from_content(tmp_path, monkeypatch, content=None)

        assert "MESH_TRANSPORT" not in os.environ

    def test_skips_empty_lines(self, tmp_path, monkeypatch, isolate_config_loading):
        """Empty lines are skipped."""
        config_content = """
MESH_TRANSPORT=tcp

MESH_DEBUG=1

"""
        _load_config_from_content(tmp_path, monkeypatch, config_content)

        assert os.environ["MESH_TRANSPORT"] == "tcp"
        assert os.environ["MESH_DEBUG"] == "1"

    def test_skips_comment_lines(self, tmp_path, monkeypatch, isolate_config_loading):
        """Lines starting with # are skipped."""
        config_content = """# This is a comment
MESH_TRANSPORT=tcp
# Another comment
"""
        _load_config_from_content(tmp_path, monkeypatch, config_content)

        assert os.environ["MESH_TRANSPORT"] == "tcp"

    def test_handles_export_prefix(self, tmp_path, monkeypatch, isolate_config_loading):
        """Lines with 'export ' prefix are handled."""
        config_content = "export MESH_TRANSPORT=tcp\n"
        _load_config_from_content(tmp_path, monkeypatch, config_content)

        assert os.environ["MESH_TRANSPORT"] == "tcp"

    def test_skips_lines_without_equals(self, tmp_path, monkeypatch, isolate_config_loading):
        """Lines without = are skipped."""
        config_content = """MESH_TRANSPORT=tcp
this line has no equals
MESH_DEBUG=1
"""
        _load_config_from_content(tmp_path, monkeypatch, config_content)

        assert os.environ["MESH_TRANSPORT"] == "tcp"
        assert os.environ["MESH_DEBUG"] == "1"

    def test_env_vars_take_precedence(self, tmp_path, monkeypatch, isolate_config_loading):
        """Environment variables override config file values."""
        # Set env var first
        monkeypatch.setenv("MESH_TRANSPORT", "ble")

        # Config file has different value
        config_content = "MESH_TRANSPORT=serial\n"
        _load_config_from_content(tmp_path, monkeypatch, config_content)

        # After loading, env var should still be "ble"
        assert os.environ.get("MESH_TRANSPORT") == "ble"


class TestConfigFileFormats:
    """Test various config file format scenarios."""

    def test_standard_format(self):
        """Standard KEY=value format."""
        assert _parse_config_value("value") == "value"

    def test_spaces_around_equals(self):
        """Key = value with spaces (handled by partition)."""
        # Note: _parse_config_value only handles the value part
        # The key=value split happens in _load_config_file
        assert _parse_config_value(" value ") == "value"

    def test_quoted_path_with_spaces(self):
        """Path with spaces must be quoted."""
        assert _parse_config_value('"/path/with spaces/file.txt"') == "/path/with spaces/file.txt"

    def test_url_value(self):
        """URL values work correctly."""
        assert _parse_config_value("https://example.com:8080/path") == "https://example.com:8080/path"

    def test_email_value(self):
        """Email values work correctly."""
        assert _parse_config_value("user@example.com") == "user@example.com"

    def test_json_like_value(self):
        """JSON-like values need quoting if they have spaces."""
        # Without spaces, works fine
        assert _parse_config_value("{key:value}") == "{key:value}"
        # With spaces, needs quotes
        assert _parse_config_value('"{key: value}"') == "{key: value}"


class TestValidKeyPatterns:
    """Test key validation patterns."""

    def test_valid_key_patterns(self):
        """Valid shell identifier patterns."""
        # These would be tested in _load_config_file
        # Valid: starts with letter or underscore, contains letters/numbers/underscores
        valid_keys = [
            "MESH_TRANSPORT",
            "_PRIVATE",
            "var123",
            "MY_VAR_2",
        ]
        # All should match: ^[A-Za-z_][A-Za-z0-9_]*$
        import re
        pattern = r"^[A-Za-z_][A-Za-z0-9_]*$"
        for key in valid_keys:
            assert re.match(pattern, key), f"{key} should be valid"

    def test_invalid_key_patterns(self):
        """Invalid key patterns are rejected."""
        invalid_keys = [
            "123_starts_with_number",
            "has-dash",
            "has.dot",
            "has space",
            "",
        ]
        import re
        pattern = r"^[A-Za-z_][A-Za-z0-9_]*$"
        for key in invalid_keys:
            assert not re.match(pattern, key), f"{key} should be invalid"
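The value-parsing behaviours asserted above (quote stripping, unclosed quotes tolerated, an unquoted `#` starting a comment only after whitespace) can be approximated by a small parser. This is an illustrative re-implementation for readers, not the actual `meshmon.env._parse_config_value`:

```python
def parse_value(raw: str) -> str:
    """Parse a shell-style config value: strip quotes, drop ' # ...' comments."""
    value = raw.strip()
    if value.startswith('"'):
        # Content up to the closing double quote (or end of string if unclosed).
        return value[1:].split('"', 1)[0]
    if value.startswith("'"):
        return value[1:].split("'", 1)[0]
    # An unquoted '#' only starts a comment when preceded by whitespace.
    for i, ch in enumerate(value):
        if ch == "#" and i > 0 and value[i - 1] in " \t":
            return value[:i].rstrip()
    return value
```

Handling quotes before scanning for `#` is what lets a quoted value like `"hello #world"` keep its hash while `hello # comment` loses it.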
tests/config/test_env.py (new file, 211 lines)
@@ -0,0 +1,211 @@
"""Tests for environment variable parsing and Config class."""


import pytest

from meshmon.env import (
    Config,
    get_bool,
    get_config,
    get_int,
    get_str,
)


class TestGetStrEdgeCases:
    """Additional edge case tests for get_str."""

    def test_whitespace_value_preserved(self, monkeypatch):
        """Whitespace-only value is preserved."""
        monkeypatch.setenv("TEST_VAR", " ")
        assert get_str("TEST_VAR") == " "

    def test_special_characters(self, monkeypatch):
        """Special characters are preserved."""
        monkeypatch.setenv("TEST_VAR", "hello@world#123!")
        assert get_str("TEST_VAR") == "hello@world#123!"


class TestGetIntEdgeCases:
    """Additional edge case tests for get_int."""

    def test_leading_zeros(self, monkeypatch):
        """Leading zeros work (not octal)."""
        monkeypatch.setenv("TEST_INT", "042")
        assert get_int("TEST_INT", 0) == 42

    def test_whitespace_around_number(self, monkeypatch):
        """Whitespace around number is tolerated by int()."""
        monkeypatch.setenv("TEST_INT", " 42 ")
        # Python's int() handles whitespace
        assert get_int("TEST_INT", 0) == 42


class TestGetBoolEdgeCases:
    """Additional edge case tests for get_bool."""

    def test_mixed_case(self, monkeypatch):
        """Mixed case variants work."""
        monkeypatch.setenv("TEST_BOOL", "TrUe")
        assert get_bool("TEST_BOOL") is True

    def test_with_spaces(self, monkeypatch):
        """Whitespace causes a non-match since get_bool does not strip."""
        monkeypatch.setenv("TEST_BOOL", " yes ")
        # .lower() doesn't strip, so " yes " != "yes"
        # This will return False
        assert get_bool("TEST_BOOL") is False


class TestConfigComplete:
    """Complete Config class tests."""

    def test_all_connection_settings(self, clean_env, monkeypatch):
        """All connection settings are loaded."""
        monkeypatch.setenv("MESH_TRANSPORT", "tcp")
        monkeypatch.setenv("MESH_SERIAL_PORT", "/dev/ttyUSB0")
        monkeypatch.setenv("MESH_SERIAL_BAUD", "9600")
        monkeypatch.setenv("MESH_TCP_HOST", "192.168.1.1")
        monkeypatch.setenv("MESH_TCP_PORT", "8080")
        monkeypatch.setenv("MESH_BLE_ADDR", "AA:BB:CC:DD:EE:FF")
        monkeypatch.setenv("MESH_BLE_PIN", "1234")
        monkeypatch.setenv("MESH_DEBUG", "true")

        config = Config()

        assert config.mesh_transport == "tcp"
        assert config.mesh_serial_port == "/dev/ttyUSB0"
        assert config.mesh_serial_baud == 9600
        assert config.mesh_tcp_host == "192.168.1.1"
        assert config.mesh_tcp_port == 8080
        assert config.mesh_ble_addr == "AA:BB:CC:DD:EE:FF"
        assert config.mesh_ble_pin == "1234"
        assert config.mesh_debug is True

    def test_all_repeater_settings(self, clean_env, monkeypatch):
        """All repeater identity settings are loaded."""
        monkeypatch.setenv("REPEATER_NAME", "HilltopRepeater")
        monkeypatch.setenv("REPEATER_KEY_PREFIX", "abc123")
        monkeypatch.setenv("REPEATER_PASSWORD", "secret")
        monkeypatch.setenv("REPEATER_DISPLAY_NAME", "Hilltop Relay")
        monkeypatch.setenv("REPEATER_PUBKEY_PREFIX", "!abc123")
        monkeypatch.setenv("REPEATER_HARDWARE", "RAK4631 with Solar")

        config = Config()

        assert config.repeater_name == "HilltopRepeater"
        assert config.repeater_key_prefix == "abc123"
        assert config.repeater_password == "secret"
        assert config.repeater_display_name == "Hilltop Relay"
        assert config.repeater_pubkey_prefix == "!abc123"
        assert config.repeater_hardware == "RAK4631 with Solar"

    def test_all_timeout_settings(self, clean_env, monkeypatch):
        """All timeout and retry settings are loaded."""
        monkeypatch.setenv("REMOTE_TIMEOUT_S", "30")
        monkeypatch.setenv("REMOTE_RETRY_ATTEMPTS", "5")
        monkeypatch.setenv("REMOTE_RETRY_BACKOFF_S", "10")
        monkeypatch.setenv("REMOTE_CB_FAILS", "10")
        monkeypatch.setenv("REMOTE_CB_COOLDOWN_S", "7200")

        config = Config()

        assert config.remote_timeout_s == 30
|
||||
assert config.remote_retry_attempts == 5
|
||||
assert config.remote_retry_backoff_s == 10
|
||||
assert config.remote_cb_fails == 10
|
||||
assert config.remote_cb_cooldown_s == 7200
|
||||
|
||||
def test_all_telemetry_settings(self, clean_env, monkeypatch):
|
||||
"""All telemetry settings are loaded."""
|
||||
monkeypatch.setenv("TELEMETRY_ENABLED", "yes")
|
||||
monkeypatch.setenv("TELEMETRY_TIMEOUT_S", "20")
|
||||
monkeypatch.setenv("TELEMETRY_RETRY_ATTEMPTS", "3")
|
||||
monkeypatch.setenv("TELEMETRY_RETRY_BACKOFF_S", "5")
|
||||
|
||||
config = Config()
|
||||
|
||||
assert config.telemetry_enabled is True
|
||||
assert config.telemetry_timeout_s == 20
|
||||
assert config.telemetry_retry_attempts == 3
|
||||
assert config.telemetry_retry_backoff_s == 5
|
||||
|
||||
def test_all_location_settings(self, clean_env, monkeypatch):
|
||||
"""All location/report settings are loaded."""
|
||||
monkeypatch.setenv("REPORT_LOCATION_NAME", "Mountain Peak Observatory")
|
||||
monkeypatch.setenv("REPORT_LOCATION_SHORT", "Mountain Peak")
|
||||
monkeypatch.setenv("REPORT_LAT", "46.8523")
|
||||
monkeypatch.setenv("REPORT_LON", "9.5369")
|
||||
monkeypatch.setenv("REPORT_ELEV", "2500")
|
||||
monkeypatch.setenv("REPORT_ELEV_UNIT", "ft")
|
||||
|
||||
config = Config()
|
||||
|
||||
assert config.report_location_name == "Mountain Peak Observatory"
|
||||
assert config.report_location_short == "Mountain Peak"
|
||||
assert config.report_lat == pytest.approx(46.8523)
|
||||
assert config.report_lon == pytest.approx(9.5369)
|
||||
assert config.report_elev == pytest.approx(2500)
|
||||
assert config.report_elev_unit == "ft"
|
||||
|
||||
def test_all_radio_settings(self, clean_env, monkeypatch):
|
||||
"""All radio configuration settings are loaded."""
|
||||
monkeypatch.setenv("RADIO_FREQUENCY", "915.000 MHz")
|
||||
monkeypatch.setenv("RADIO_BANDWIDTH", "125 kHz")
|
||||
monkeypatch.setenv("RADIO_SPREAD_FACTOR", "SF12")
|
||||
monkeypatch.setenv("RADIO_CODING_RATE", "CR5")
|
||||
|
||||
config = Config()
|
||||
|
||||
assert config.radio_frequency == "915.000 MHz"
|
||||
assert config.radio_bandwidth == "125 kHz"
|
||||
assert config.radio_spread_factor == "SF12"
|
||||
assert config.radio_coding_rate == "CR5"
|
||||
|
||||
def test_companion_settings(self, clean_env, monkeypatch):
|
||||
"""Companion display settings are loaded."""
|
||||
monkeypatch.setenv("COMPANION_DISPLAY_NAME", "Base Station")
|
||||
monkeypatch.setenv("COMPANION_PUBKEY_PREFIX", "!def456")
|
||||
monkeypatch.setenv("COMPANION_HARDWARE", "T-Beam Supreme")
|
||||
|
||||
config = Config()
|
||||
|
||||
assert config.companion_display_name == "Base Station"
|
||||
assert config.companion_pubkey_prefix == "!def456"
|
||||
assert config.companion_hardware == "T-Beam Supreme"
|
||||
|
||||
|
||||
class TestGetConfigSingleton:
|
||||
"""Tests for get_config singleton behavior."""
|
||||
|
||||
def test_config_persists_across_calls(self, clean_env, monkeypatch):
|
||||
"""Config values persist across multiple get_config calls."""
|
||||
monkeypatch.setenv("MESH_TRANSPORT", "tcp")
|
||||
|
||||
config1 = get_config()
|
||||
assert config1.mesh_transport == "tcp"
|
||||
|
||||
# Change env var - should NOT affect cached config
|
||||
monkeypatch.setenv("MESH_TRANSPORT", "ble")
|
||||
|
||||
config2 = get_config()
|
||||
assert config2.mesh_transport == "tcp" # Still tcp, cached
|
||||
assert config1 is config2
|
||||
|
||||
def test_reset_allows_new_config(self, clean_env, monkeypatch):
|
||||
"""Resetting singleton allows new config."""
|
||||
monkeypatch.setenv("MESH_TRANSPORT", "tcp")
|
||||
|
||||
config1 = get_config()
|
||||
assert config1.mesh_transport == "tcp"
|
||||
|
||||
# Reset singleton
|
||||
import meshmon.env
|
||||
meshmon.env._config = None
|
||||
|
||||
# Change env var
|
||||
monkeypatch.setenv("MESH_TRANSPORT", "ble")
|
||||
|
||||
config2 = get_config()
|
||||
assert config2.mesh_transport == "ble"
|
||||
assert config1 is not config2
|
||||
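The edge cases above pin down the semantics the tests expect from the env helpers and the cached config. A minimal sketch consistent with those tests — the real `meshmon.env` module may differ, and the `"serial"` default and `Config` body here are assumptions for illustration only:

```python
import os

_config = None  # module-level singleton cache


def get_str(name, default=None):
    # Value is returned verbatim: whitespace and special chars preserved
    return os.environ.get(name, default)


def get_int(name, default):
    raw = os.environ.get(name)
    if raw is None:
        return default
    # int() strips surrounding whitespace and treats "042" as decimal 42
    return int(raw)


def get_bool(name, default=False):
    raw = os.environ.get(name)
    if raw is None:
        return default
    # Lowercased but NOT stripped, so " yes " does not match
    return raw.lower() in ("1", "true", "yes", "on")


class Config:
    """Hypothetical: snapshots the environment at construction time."""

    def __init__(self):
        self.mesh_transport = get_str("MESH_TRANSPORT", "serial")  # assumed default


def get_config():
    # Cached: later env changes do not affect an existing instance
    global _config
    if _config is None:
        _config = Config()
    return _config
```

This explains why `test_config_persists_across_calls` must see the old value after changing the env var, and why the conftest fixture has to null out `meshmon.env._config` between tests.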
168
tests/conftest.py
Normal file
@@ -0,0 +1,168 @@
"""Root fixtures for all tests."""

import os
from pathlib import Path

import pytest


@pytest.fixture(autouse=True)
def clean_env(monkeypatch):
    """Clear mesh-related env vars and reset config singleton before each test."""
    env_prefixes = (
        "MESH_",
        "REPEATER_",
        "COMPANION_",
        "REMOTE_",
        "TELEMETRY_",
        "REPORT_",
        "RADIO_",
        "STATE_DIR",
        "OUT_DIR",
    )

    for key in list(os.environ.keys()):
        for prefix in env_prefixes:
            if key.startswith(prefix):
                monkeypatch.delenv(key, raising=False)
                break

    # Reset config singleton
    import meshmon.env

    meshmon.env._config = None

    yield

    # Reset again after test
    meshmon.env._config = None


@pytest.fixture
def tmp_state_dir(tmp_path):
    """Create temp directory for state files (DB, circuit breaker)."""
    state_dir = tmp_path / "state"
    state_dir.mkdir()
    return state_dir


@pytest.fixture
def tmp_out_dir(tmp_path):
    """Create temp directory for rendered output."""
    out_dir = tmp_path / "out"
    out_dir.mkdir()
    return out_dir


@pytest.fixture
def configured_env(tmp_state_dir, tmp_out_dir, monkeypatch):
    """Set up test environment with temp directories."""
    monkeypatch.setenv("STATE_DIR", str(tmp_state_dir))
    monkeypatch.setenv("OUT_DIR", str(tmp_out_dir))
    # Reset config to pick up new values
    import meshmon.env

    meshmon.env._config = None
    return {"state_dir": tmp_state_dir, "out_dir": tmp_out_dir}


@pytest.fixture
def sample_companion_metrics():
    """Sample companion metrics using firmware field names."""
    return {
        "battery_mv": 3850.0,
        "uptime_secs": 86400,
        "contacts": 5,
        "recv": 1234,
        "sent": 567,
        "errors": 0,
    }


@pytest.fixture
def sample_repeater_metrics():
    """Sample repeater metrics using firmware field names."""
    return {
        "bat": 3920.0,
        "uptime": 172800,
        "last_rssi": -85,
        "last_snr": 7.5,
        "noise_floor": -115,
        "tx_queue_len": 0,
        "nb_recv": 5678,
        "nb_sent": 2345,
        "airtime": 3600,
        "rx_airtime": 7200,
        "flood_dups": 12,
        "direct_dups": 5,
        "sent_flood": 100,
        "recv_flood": 200,
        "sent_direct": 50,
        "recv_direct": 75,
    }


@pytest.fixture
def project_root():
    """Path to the project root directory."""
    return Path(__file__).parent.parent


@pytest.fixture
def src_root(project_root):
    """Path to the src/meshmon directory."""
    return project_root / "src" / "meshmon"


@pytest.fixture
def db_path(tmp_state_dir):
    """Database path in temp state directory."""
    return tmp_state_dir / "metrics.db"


@pytest.fixture
def migrations_dir(project_root):
    """Path to actual migrations directory."""
    return project_root / "src" / "meshmon" / "migrations"


@pytest.fixture
def initialized_db(db_path, configured_env, monkeypatch):
    """Fresh database with migrations applied."""
    from meshmon.db import init_db

    init_db()
    return db_path


@pytest.fixture
def populated_db(initialized_db, sample_companion_metrics, sample_repeater_metrics):
    """Database with 7 days of sample data."""
    import time

    from meshmon.db import insert_metrics

    now = int(time.time())
    day_seconds = 86400

    # Insert 7 days of companion data (every hour)
    for day in range(7):
        for hour in range(24):
            ts = now - (day * day_seconds) - (hour * 3600)
            metrics = sample_companion_metrics.copy()
            metrics["battery_mv"] = 3700 + (hour * 10) + (day * 5)
            metrics["recv"] = 100 * (day + 1) + hour
            metrics["sent"] = 50 * (day + 1) + hour
            insert_metrics(ts, "companion", metrics)

    # Insert 7 days of repeater data (every 15 minutes)
    for day in range(7):
        for interval in range(96):  # 24 * 4
            ts = now - (day * day_seconds) - (interval * 900)
            metrics = sample_repeater_metrics.copy()
            metrics["bat"] = 3700 + (interval * 2) + (day * 5)
            metrics["nb_recv"] = 1000 * (day + 1) + interval * 10
            metrics["nb_sent"] = 500 * (day + 1) + interval * 5
            insert_metrics(ts, "repeater", metrics)

    return initialized_db
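The `populated_db` loops imply a fixed number of samples per metric: one companion snapshot per hour and one repeater snapshot per 15 minutes, each for 7 days. A quick check of that arithmetic:

```python
# One companion snapshot per hour for 7 days
companion_samples = 7 * 24

# One repeater snapshot per 15 minutes (24 * 4 per day) for 7 days
repeater_samples = 7 * 96

assert companion_samples == 168
assert repeater_samples == 672
```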
1
tests/database/__init__.py
Normal file
@@ -0,0 +1 @@
"""Database tests."""
59
tests/database/conftest.py
Normal file
@@ -0,0 +1,59 @@
"""Fixtures for database tests."""

import time
from pathlib import Path

import pytest


@pytest.fixture
def db_path(tmp_state_dir):
    """Database path in temp state directory."""
    return tmp_state_dir / "metrics.db"


@pytest.fixture
def migrations_dir():
    """Path to actual migrations directory."""
    return Path(__file__).parent.parent.parent / "src" / "meshmon" / "migrations"


@pytest.fixture
def initialized_db(db_path, configured_env):
    """Fresh database with migrations applied."""
    from meshmon.db import init_db

    init_db(db_path)
    return db_path


@pytest.fixture
def populated_db(initialized_db, sample_companion_metrics, sample_repeater_metrics):
    """Database with 7 days of sample data."""
    from meshmon.db import insert_metrics

    now = int(time.time())
    day_seconds = 86400

    # Insert 7 days of companion data (every hour)
    for day in range(7):
        for hour in range(24):
            ts = now - (day * day_seconds) - (hour * 3600)
            metrics = sample_companion_metrics.copy()
            # Vary values slightly
            metrics["battery_mv"] = 3700 + (hour * 10) + (day * 5)
            metrics["recv"] = 100 * (day + 1) + hour
            metrics["sent"] = 50 * (day + 1) + hour
            insert_metrics(ts, "companion", metrics, initialized_db)

    # Insert 7 days of repeater data (every 15 minutes)
    for day in range(7):
        for interval in range(96):  # 24 * 4
            ts = now - (day * day_seconds) - (interval * 900)
            metrics = sample_repeater_metrics.copy()
            # Vary values slightly
            metrics["bat"] = 3700 + (interval * 2) + (day * 5)
            metrics["nb_recv"] = 1000 * (day + 1) + interval * 10
            metrics["nb_sent"] = 500 * (day + 1) + interval * 5
            insert_metrics(ts, "repeater", metrics, initialized_db)

    return initialized_db
179
tests/database/test_db_init.py
Normal file
@@ -0,0 +1,179 @@
"""Tests for database initialization and migrations."""

import sqlite3

import pytest

from meshmon.db import (
    _get_schema_version,
    get_connection,
    init_db,
)


class TestInitDb:
    """Tests for init_db function."""

    def test_creates_database_file(self, db_path, configured_env):
        """Creates database file if it doesn't exist."""
        assert not db_path.exists()

        init_db(db_path)

        assert db_path.exists()

    def test_creates_parent_directories(self, tmp_path, configured_env):
        """Creates parent directories if needed."""
        nested_path = tmp_path / "deep" / "nested" / "metrics.db"
        assert not nested_path.parent.exists()

        init_db(nested_path)

        assert nested_path.exists()

    def test_applies_migrations(self, db_path, configured_env):
        """Applies schema migrations."""
        init_db(db_path)

        with get_connection(db_path, readonly=True) as conn:
            version = _get_schema_version(conn)
            assert version >= 1

    def test_safe_to_call_multiple_times(self, db_path, configured_env):
        """Can be called multiple times without error."""
        init_db(db_path)
        init_db(db_path)  # Should not raise
        init_db(db_path)  # Should not raise

        with get_connection(db_path, readonly=True) as conn:
            version = _get_schema_version(conn)
            assert version >= 1

    def test_enables_wal_mode(self, db_path, configured_env):
        """Enables WAL journal mode."""
        init_db(db_path)

        conn = sqlite3.connect(db_path)
        try:
            cursor = conn.execute("PRAGMA journal_mode")
            mode = cursor.fetchone()[0]
            assert mode.lower() == "wal"
        finally:
            conn.close()

    def test_creates_metrics_table(self, db_path, configured_env):
        """Creates metrics table with correct schema."""
        init_db(db_path)

        with get_connection(db_path, readonly=True) as conn:
            # Check table exists
            cursor = conn.execute(
                "SELECT name FROM sqlite_master WHERE type='table' AND name='metrics'"
            )
            assert cursor.fetchone() is not None

            # Check columns
            cursor = conn.execute("PRAGMA table_info(metrics)")
            columns = {row["name"]: row for row in cursor}
            assert "ts" in columns
            assert "role" in columns
            assert "metric" in columns
            assert "value" in columns

    def test_creates_db_meta_table(self, db_path, configured_env):
        """Creates db_meta table for schema versioning."""
        init_db(db_path)

        with get_connection(db_path, readonly=True) as conn:
            cursor = conn.execute(
                "SELECT name FROM sqlite_master WHERE type='table' AND name='db_meta'"
            )
            assert cursor.fetchone() is not None


class TestGetConnection:
    """Tests for get_connection context manager."""

    def test_returns_connection(self, initialized_db):
        """Returns a working connection."""
        with get_connection(initialized_db) as conn:
            assert conn is not None
            cursor = conn.execute("SELECT 1")
            assert cursor.fetchone()[0] == 1

    def test_row_factory_enabled(self, initialized_db):
        """Row factory is set to sqlite3.Row."""
        with get_connection(initialized_db) as conn:
            conn.execute(
                "INSERT INTO metrics (ts, role, metric, value) VALUES (1, 'companion', 'test', 1.0)"
            )
        with get_connection(initialized_db, readonly=True) as conn:
            cursor = conn.execute("SELECT * FROM metrics WHERE metric = 'test'")
            row = cursor.fetchone()
            # sqlite3.Row supports dict-like access
            assert row["metric"] == "test"
            assert row["value"] == 1.0

    def test_commits_on_success(self, initialized_db):
        """Commits transaction on normal exit."""
        with get_connection(initialized_db) as conn:
            conn.execute(
                "INSERT INTO metrics (ts, role, metric, value) VALUES (1, 'companion', 'test', 1.0)"
            )

        # Check data persisted
        with get_connection(initialized_db, readonly=True) as conn:
            cursor = conn.execute("SELECT COUNT(*) FROM metrics WHERE metric = 'test'")
            assert cursor.fetchone()[0] == 1

    def test_rollback_on_exception(self, initialized_db):
        """Rolls back transaction on exception."""
        try:
            with get_connection(initialized_db) as conn:
                conn.execute(
                    "INSERT INTO metrics (ts, role, metric, value) VALUES (2, 'companion', 'test2', 1.0)"
                )
                raise ValueError("Test error")
        except ValueError:
            pass

        # Check data was rolled back
        with get_connection(initialized_db, readonly=True) as conn:
            cursor = conn.execute("SELECT COUNT(*) FROM metrics WHERE metric = 'test2'")
            assert cursor.fetchone()[0] == 0

    def test_readonly_mode(self, initialized_db):
        """Read-only mode prevents writes."""
        with (
            get_connection(initialized_db, readonly=True) as conn,
            pytest.raises(sqlite3.OperationalError),
        ):
            conn.execute(
                "INSERT INTO metrics (ts, role, metric, value) VALUES (1, 'companion', 'test', 1.0)"
            )


class TestMigrationsDirectory:
    """Tests for migrations directory and files."""

    def test_migrations_dir_exists(self, migrations_dir):
        """Migrations directory exists."""
        assert migrations_dir.exists()
        assert migrations_dir.is_dir()

    def test_has_initial_migration(self, migrations_dir):
        """Has at least the initial schema migration."""
        sql_files = list(migrations_dir.glob("*.sql"))
        assert len(sql_files) >= 1

        # Check for 001 prefixed file
        initial = [f for f in sql_files if f.stem.startswith("001")]
        assert len(initial) == 1

    def test_migrations_are_numbered(self, migrations_dir):
        """Migration files follow NNN_description.sql pattern."""
        import re

        pattern = re.compile(r"^\d{3}_.*\.sql$")
        for sql_file in migrations_dir.glob("*.sql"):
            assert pattern.match(sql_file.name), f"{sql_file.name} doesn't match pattern"
207
tests/database/test_db_insert.py
Normal file
@@ -0,0 +1,207 @@
"""Tests for database insert functions."""

import pytest

from meshmon.db import (
    get_connection,
    insert_metric,
    insert_metrics,
)

BASE_TS = 1704067200


class TestInsertMetric:
    """Tests for insert_metric function."""

    def test_inserts_single_metric(self, initialized_db):
        """Inserts a single metric successfully."""
        ts = BASE_TS

        result = insert_metric(ts, "companion", "battery_mv", 3850.0, initialized_db)

        assert result is True

        with get_connection(initialized_db, readonly=True) as conn:
            cursor = conn.execute(
                "SELECT value FROM metrics WHERE ts = ? AND role = ? AND metric = ?",
                (ts, "companion", "battery_mv"),
            )
            row = cursor.fetchone()
            assert row is not None
            assert row["value"] == 3850.0

    def test_returns_false_on_duplicate(self, initialized_db):
        """Returns False for duplicate (ts, role, metric) tuple."""
        ts = BASE_TS

        # First insert succeeds
        assert insert_metric(ts, "companion", "test", 1.0, initialized_db) is True

        # Second insert with same key returns False
        assert insert_metric(ts, "companion", "test", 2.0, initialized_db) is False

    def test_different_roles_not_duplicate(self, initialized_db):
        """Same ts/metric with different roles are not duplicates."""
        ts = BASE_TS

        assert insert_metric(ts, "companion", "test", 1.0, initialized_db) is True
        assert insert_metric(ts, "repeater", "test", 2.0, initialized_db) is True

    def test_different_metrics_not_duplicate(self, initialized_db):
        """Same ts/role with different metrics are not duplicates."""
        ts = BASE_TS

        assert insert_metric(ts, "companion", "test1", 1.0, initialized_db) is True
        assert insert_metric(ts, "companion", "test2", 2.0, initialized_db) is True

    def test_invalid_role_raises(self, initialized_db):
        """Invalid role raises ValueError."""
        ts = BASE_TS

        with pytest.raises(ValueError, match="Invalid role"):
            insert_metric(ts, "invalid", "test", 1.0, initialized_db)

    def test_sql_injection_blocked(self, initialized_db):
        """SQL injection attempt raises ValueError."""
        ts = BASE_TS

        with pytest.raises(ValueError, match="Invalid role"):
            insert_metric(ts, "'; DROP TABLE metrics; --", "test", 1.0, initialized_db)


class TestInsertMetrics:
    """Tests for insert_metrics function (bulk insert)."""

    def test_inserts_multiple_metrics(self, initialized_db):
        """Inserts multiple metrics from dict."""
        ts = BASE_TS
        metrics = {
            "battery_mv": 3850.0,
            "contacts": 5,
            "uptime_secs": 86400,
        }

        count = insert_metrics(ts, "companion", metrics, initialized_db)

        assert count == 3

        with get_connection(initialized_db, readonly=True) as conn:
            cursor = conn.execute(
                "SELECT COUNT(*) FROM metrics WHERE ts = ?",
                (ts,),
            )
            assert cursor.fetchone()[0] == 3

    def test_returns_insert_count(self, initialized_db):
        """Returns correct count of inserted metrics."""
        ts = BASE_TS
        metrics = {"a": 1.0, "b": 2.0, "c": 3.0}

        count = insert_metrics(ts, "companion", metrics, initialized_db)

        assert count == 3

    def test_skips_non_numeric_values(self, initialized_db):
        """Non-numeric values are silently skipped."""
        ts = BASE_TS
        metrics = {
            "battery_mv": 3850.0,  # Numeric - inserted
            "name": "test",  # String - skipped
            "status": None,  # None - skipped
            "flags": [1, 2, 3],  # List - skipped
            "nested": {"a": 1},  # Dict - skipped
        }

        count = insert_metrics(ts, "companion", metrics, initialized_db)

        assert count == 1  # Only battery_mv

    def test_handles_int_and_float(self, initialized_db):
        """Both int and float values are inserted."""
        ts = BASE_TS
        metrics = {
            "int_value": 42,
            "float_value": 3.14,
        }

        count = insert_metrics(ts, "companion", metrics, initialized_db)

        assert count == 2

    def test_converts_int_to_float(self, initialized_db):
        """Integer values are stored as float."""
        ts = BASE_TS
        metrics = {"contacts": 5}

        insert_metrics(ts, "companion", metrics, initialized_db)

        with get_connection(initialized_db, readonly=True) as conn:
            cursor = conn.execute(
                "SELECT value FROM metrics WHERE metric = 'contacts'"
            )
            row = cursor.fetchone()
            assert row["value"] == 5.0
            assert isinstance(row["value"], float)

    def test_empty_dict_returns_zero(self, initialized_db):
        """Empty dict returns 0."""
        ts = BASE_TS

        count = insert_metrics(ts, "companion", {}, initialized_db)

        assert count == 0

    def test_skips_duplicates_silently(self, initialized_db):
        """Duplicate metrics are skipped without error."""
        ts = BASE_TS
        metrics = {"test": 1.0}

        # First insert
        count1 = insert_metrics(ts, "companion", metrics, initialized_db)
        assert count1 == 1

        # Second insert - same key
        count2 = insert_metrics(ts, "companion", metrics, initialized_db)
        assert count2 == 0  # Duplicate skipped

    def test_partial_duplicates(self, initialized_db):
        """Partial duplicates: some inserted, some skipped."""
        ts = BASE_TS

        # First insert
        insert_metrics(ts, "companion", {"existing": 1.0}, initialized_db)

        # Second insert with mix
        metrics = {
            "existing": 2.0,  # Duplicate - skipped
            "new": 3.0,  # New - inserted
        }
        count = insert_metrics(ts, "companion", metrics, initialized_db)

        assert count == 1  # Only "new" inserted

    def test_invalid_role_raises(self, initialized_db):
        """Invalid role raises ValueError."""
        ts = BASE_TS

        with pytest.raises(ValueError, match="Invalid role"):
            insert_metrics(ts, "invalid", {"test": 1.0}, initialized_db)

    def test_companion_metrics(self, initialized_db, sample_companion_metrics):
        """Inserts companion metrics dict."""
        ts = BASE_TS

        count = insert_metrics(ts, "companion", sample_companion_metrics, initialized_db)

        # Should insert all numeric fields
        assert count == len(sample_companion_metrics)

    def test_repeater_metrics(self, initialized_db, sample_repeater_metrics):
        """Inserts repeater metrics dict."""
        ts = BASE_TS

        count = insert_metrics(ts, "repeater", sample_repeater_metrics, initialized_db)

        # Should insert all numeric fields
        assert count == len(sample_repeater_metrics)
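These tests imply an implementation built on a role whitelist plus `INSERT OR IGNORE` against a unique `(ts, role, metric)` key: duplicates report `False`/are skipped rather than raising, and an unknown role fails fast with `ValueError` before any SQL runs. A sketch of that shape, with signatures simplified to take an open connection (the real `meshmon.db` functions take a database path as the last argument):

```python
import sqlite3

VALID_ROLES = {"companion", "repeater"}


def insert_metric(conn, ts, role, metric, value):
    # Whitelist the role; parameters are bound below, so injection is
    # already impossible, but an unknown role is a caller bug
    if role not in VALID_ROLES:
        raise ValueError(f"Invalid role: {role!r}")
    cur = conn.execute(
        "INSERT OR IGNORE INTO metrics (ts, role, metric, value) VALUES (?, ?, ?, ?)",
        (ts, role, metric, float(value)),  # ints stored as float
    )
    # rowcount == 0 means the (ts, role, metric) key already existed
    return cur.rowcount == 1


def insert_metrics(conn, ts, role, metrics):
    count = 0
    for name, value in metrics.items():
        # Skip non-numeric values; bool is excluded despite being an int
        # subclass (an assumption consistent with the tests above)
        if isinstance(value, (int, float)) and not isinstance(value, bool):
            if insert_metric(conn, ts, role, name, value):
                count += 1
    return count
```

`INSERT OR IGNORE` is what makes partial duplicates cheap: the statement succeeds either way, and the caller distinguishes the cases by `rowcount` instead of catching `IntegrityError`.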
203
tests/database/test_db_maintenance.py
Normal file
@@ -0,0 +1,203 @@
|
||||
"""Tests for database maintenance functions."""
|
||||
|
||||
import os
|
||||
import sqlite3
|
||||
|
||||
from meshmon.db import (
|
||||
get_db_path,
|
||||
init_db,
|
||||
vacuum_db,
|
||||
)
|
||||
|
||||
|
||||
class TestVacuumDb:
|
||||
"""Tests for vacuum_db function."""
|
||||
|
||||
def test_vacuums_existing_db(self, initialized_db):
|
||||
"""Vacuum should run without error on initialized database."""
|
||||
# Add some data then vacuum
|
||||
conn = sqlite3.connect(initialized_db)
|
||||
conn.execute(
|
||||
"INSERT INTO metrics (ts, role, metric, value) VALUES (1, 'companion', 'test', 1.0)"
|
||||
)
|
||||
conn.commit()
|
||||
conn.close()
|
||||
|
||||
# Should not raise
|
||||
vacuum_db(initialized_db)
|
||||
|
||||
def test_runs_analyze(self, initialized_db):
|
||||
"""ANALYZE should be run after VACUUM."""
|
||||
conn = sqlite3.connect(initialized_db)
|
||||
conn.execute(
|
||||
"INSERT INTO metrics (ts, role, metric, value) VALUES (1, 'companion', 'test', 1.0)"
|
||||
)
|
||||
conn.commit()
|
||||
conn.close()
|
||||
|
||||
# Vacuum includes ANALYZE
|
||||
vacuum_db(initialized_db)
|
||||
|
||||
# Check that database stats were updated
|
||||
conn = sqlite3.connect(initialized_db)
|
||||
cursor = conn.execute("SELECT COUNT(*) FROM sqlite_stat1")
|
||||
count = cursor.fetchone()[0]
|
||||
conn.close()
|
||||
assert count > 0
|
||||
|
||||
def test_uses_default_path_when_none(self, configured_env, monkeypatch):
|
||||
"""Uses get_db_path() when no path provided."""
|
||||
# Initialize db at default location
|
||||
init_db()
|
||||
|
||||
# vacuum_db with None should use default path
|
||||
vacuum_db(None)
|
||||
|
||||
def test_can_vacuum_empty_db(self, initialized_db):
|
||||
"""Can vacuum an empty database."""
|
||||
vacuum_db(initialized_db)
|
||||
|
||||
def test_reclaims_space_after_delete(self, initialized_db):
|
||||
"""Vacuum should reclaim space after deleting rows."""
|
||||
conn = sqlite3.connect(initialized_db)
|
||||
|
||||
# Insert many rows
|
||||
for i in range(1000):
|
||||
conn.execute(
|
||||
"INSERT INTO metrics (ts, role, metric, value) VALUES (?, 'companion', 'test', 1.0)",
|
||||
(i,)
|
||||
)
|
||||
conn.commit()
|
||||
|
||||
# Get size before delete
|
||||
conn.close()
|
||||
size_before = os.path.getsize(initialized_db)
|
||||
|
||||
# Delete all rows
|
||||
conn = sqlite3.connect(initialized_db)
|
||||
conn.execute("DELETE FROM metrics")
|
||||
conn.commit()
|
||||
conn.close()
|
||||
|
||||
# Vacuum
|
||||
vacuum_db(initialized_db)
|
||||
|
||||
# Size should be smaller (or at least not larger)
|
||||
size_after = os.path.getsize(initialized_db)
|
||||
# Note: Due to WAL mode, this might not always shrink dramatically
|
||||
# but vacuum should at least complete without error
|
||||
assert size_after <= size_before + 4096 # Allow for some overhead
|
||||
|
||||
|
||||
class TestGetDbPath:
|
||||
"""Tests for get_db_path function."""
|
||||
|
||||
def test_returns_path_in_state_dir(self, configured_env):
|
||||
"""Path should be in the configured state directory."""
|
||||
path = get_db_path()
|
||||
|
||||
assert path.name == "metrics.db"
|
||||
assert str(configured_env["state_dir"]) in str(path)
|
||||
|
||||
def test_returns_path_object(self, configured_env):
|
||||
"""Should return a Path object."""
|
||||
from pathlib import Path
|
||||
|
||||
path = get_db_path()
|
||||
|
||||
assert isinstance(path, Path)
|
||||
|
||||
|
||||
class TestDatabaseIntegrity:
|
||||
"""Tests for database integrity after operations."""
|
||||
|
||||
def test_wal_mode_enabled(self, initialized_db):
|
||||
"""Database should be in WAL mode."""
|
||||
conn = sqlite3.connect(initialized_db)
|
||||
cursor = conn.execute("PRAGMA journal_mode")
|
||||
mode = cursor.fetchone()[0]
|
||||
conn.close()
|
||||
|
||||
assert mode.lower() == "wal"
|
||||
|
||||
def test_foreign_keys_disabled_by_default(self, initialized_db):
|
||||
"""Foreign keys should be disabled (SQLite default)."""
|
||||
conn = sqlite3.connect(initialized_db)
|
||||
cursor = conn.execute("PRAGMA foreign_keys")
|
||||
enabled = cursor.fetchone()[0]
|
||||
conn.close()
|
||||
|
||||
# Default is off, and we don't explicitly enable them
|
||||
assert enabled == 0
|
||||
|
||||
def test_metrics_table_exists(self, initialized_db):
|
||||
"""Metrics table should exist after init."""
|
||||
conn = sqlite3.connect(initialized_db)
|
||||
cursor = conn.execute(
|
||||
"SELECT name FROM sqlite_master WHERE type='table' AND name='metrics'"
|
||||
)
|
||||
result = cursor.fetchone()
|
||||
conn.close()
|
||||
|
||||
assert result is not None
|
||||
assert result[0] == "metrics"
|
||||
|
||||
def test_db_meta_table_exists(self, initialized_db):
|
||||
"""db_meta table should exist after init."""
|
||||
conn = sqlite3.connect(initialized_db)
|
||||
cursor = conn.execute(
|
||||
"SELECT name FROM sqlite_master WHERE type='table' AND name='db_meta'"
|
||||
)
|
||||
result = cursor.fetchone()
|
||||
conn.close()
|
||||
|
||||
assert result is not None
|
||||
|
||||
def test_metrics_index_exists(self, initialized_db):
|
||||
"""Index on metrics(role, ts) should exist."""
|
||||
conn = sqlite3.connect(initialized_db)
|
||||
cursor = conn.execute(
|
||||
"SELECT name FROM sqlite_master WHERE type='index' AND name='idx_metrics_role_ts'"
|
||||
)
|
||||
result = cursor.fetchone()
|
||||
conn.close()
|
||||
|
||||
assert result is not None
|
||||
|
||||
def test_vacuum_preserves_data(self, initialized_db):
|
||||
"""Vacuum should not lose any data."""
|
||||
conn = sqlite3.connect(initialized_db)
|
||||
for i in range(100):
|
||||
conn.execute(
|
||||
"INSERT INTO metrics (ts, role, metric, value) VALUES (?, 'companion', 'test', ?)",
|
||||
(i, float(i))
|
||||
)
|
||||
conn.commit()
|
||||
conn.close()
|
||||
|
||||
# Vacuum
|
||||
vacuum_db(initialized_db)
|
||||
|
||||
# Check data is still there
|
||||
conn = sqlite3.connect(initialized_db)
|
||||
cursor = conn.execute("SELECT COUNT(*) FROM metrics")
|
||||
count = cursor.fetchone()[0]
|
||||
conn.close()
|
||||
|
||||
assert count == 100
|
||||
|
||||
def test_vacuum_preserves_schema_version(self, initialized_db):
|
||||
"""Vacuum should not change schema version."""
|
||||
from meshmon.db import _get_schema_version
|
||||
|
||||
conn = sqlite3.connect(initialized_db)
|
||||
version_before = _get_schema_version(conn)
|
||||
conn.close()
|
||||
|
||||
vacuum_db(initialized_db)
|
||||
|
||||
conn = sqlite3.connect(initialized_db)
|
||||
version_after = _get_schema_version(conn)
|
||||
conn.close()
|
||||
|
||||
assert version_before == version_after
|
||||
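The two vacuum tests above pin down the observable contract of `vacuum_db`: rows and the stored schema version must survive a vacuum. A minimal sketch consistent with that contract — assuming `vacuum_db` takes a database path, as the test calls `vacuum_db(initialized_db)` suggest; the real implementation in `meshmon.db` may differ:

```python
import sqlite3


def vacuum_db(db_path):
    """Rebuild the database file to reclaim free pages.

    VACUUM copies all content into a fresh file, so metric rows and the
    db_meta schema_version entry come through unchanged - exactly what
    test_vacuum_preserves_data and test_vacuum_preserves_schema_version check.
    """
    conn = sqlite3.connect(db_path)
    try:
        # VACUUM cannot run inside a transaction; a fresh connection with no
        # pending writes is in autocommit state, so this executes directly.
        conn.execute("VACUUM")
    finally:
        conn.close()
```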
331
tests/database/test_db_migrations.py
Normal file
@@ -0,0 +1,331 @@
"""Tests for database migration system."""

import sqlite3
from pathlib import Path

import pytest

from meshmon.db import (
    _apply_migrations,
    _get_migration_files,
    _get_schema_version,
    _set_schema_version,
    get_schema_version,
)


class TestGetMigrationFiles:
    """Tests for _get_migration_files function."""

    def test_finds_migration_files(self):
        """Should find actual migration files in MIGRATIONS_DIR."""
        migrations = _get_migration_files()

        assert len(migrations) >= 2
        # Should include 001 and 002
        versions = [v for v, _ in migrations]
        assert 1 in versions
        assert 2 in versions

    def test_returns_sorted_by_version(self):
        """Migrations should be sorted by version number."""
        migrations = _get_migration_files()

        versions = [v for v, _ in migrations]
        assert versions == sorted(versions)

    def test_returns_path_objects(self):
        """Each migration should have a Path object."""
        migrations = _get_migration_files()

        for _version, path in migrations:
            assert isinstance(path, Path)
            assert path.exists()
            assert path.suffix == ".sql"

    def test_extracts_version_from_filename(self):
        """Version number extracted from filename prefix."""
        migrations = _get_migration_files()

        for version, path in migrations:
            filename_version = int(path.stem.split("_")[0])
            assert version == filename_version

    def test_empty_when_no_migrations_dir(self, tmp_path, monkeypatch):
        """Returns empty list when migrations dir doesn't exist."""
        fake_dir = tmp_path / "nonexistent"
        monkeypatch.setattr("meshmon.db.MIGRATIONS_DIR", fake_dir)

        migrations = _get_migration_files()

        assert migrations == []

    def test_skips_invalid_filenames(self, tmp_path, monkeypatch):
        """Skips files without valid version prefix."""
        migrations_dir = tmp_path / "migrations"
        migrations_dir.mkdir()

        # Create valid migration
        (migrations_dir / "001_valid.sql").write_text("-- valid")
        # Create invalid migrations
        (migrations_dir / "invalid_name.sql").write_text("-- invalid")
        (migrations_dir / "abc_noversion.sql").write_text("-- no version")

        monkeypatch.setattr("meshmon.db.MIGRATIONS_DIR", migrations_dir)

        migrations = _get_migration_files()

        assert len(migrations) == 1
        assert migrations[0][0] == 1


class TestGetSchemaVersion:
    """Tests for _get_schema_version internal function."""

    def test_returns_zero_for_fresh_db(self, tmp_path):
        """Fresh database with no db_meta returns 0."""
        db_path = tmp_path / "test.db"
        conn = sqlite3.connect(db_path)

        version = _get_schema_version(conn)

        assert version == 0
        conn.close()

    def test_returns_stored_version(self, tmp_path):
        """Returns version from db_meta table."""
        db_path = tmp_path / "test.db"
        conn = sqlite3.connect(db_path)
        conn.execute("""
            CREATE TABLE db_meta (
                key TEXT PRIMARY KEY NOT NULL,
                value TEXT NOT NULL
            )
        """)
        conn.execute(
            "INSERT INTO db_meta (key, value) VALUES ('schema_version', '5')"
        )
        conn.commit()

        version = _get_schema_version(conn)

        assert version == 5
        conn.close()

    def test_returns_zero_when_key_missing(self, tmp_path):
        """Returns 0 if db_meta exists but schema_version key is missing."""
        db_path = tmp_path / "test.db"
        conn = sqlite3.connect(db_path)
        conn.execute("""
            CREATE TABLE db_meta (
                key TEXT PRIMARY KEY NOT NULL,
                value TEXT NOT NULL
            )
        """)
        conn.execute(
            "INSERT INTO db_meta (key, value) VALUES ('other_key', 'value')"
        )
        conn.commit()

        version = _get_schema_version(conn)

        assert version == 0
        conn.close()


class TestSetSchemaVersion:
    """Tests for _set_schema_version internal function."""

    def test_inserts_new_version(self, tmp_path):
        """Can insert schema version into fresh db_meta."""
        db_path = tmp_path / "test.db"
        conn = sqlite3.connect(db_path)
        conn.execute("""
            CREATE TABLE db_meta (
                key TEXT PRIMARY KEY NOT NULL,
                value TEXT NOT NULL
            )
        """)

        _set_schema_version(conn, 3)
        conn.commit()

        cursor = conn.execute(
            "SELECT value FROM db_meta WHERE key = 'schema_version'"
        )
        assert cursor.fetchone()[0] == "3"
        conn.close()

    def test_updates_existing_version(self, tmp_path):
        """Can update existing schema version."""
        db_path = tmp_path / "test.db"
        conn = sqlite3.connect(db_path)
        conn.execute("""
            CREATE TABLE db_meta (
                key TEXT PRIMARY KEY NOT NULL,
                value TEXT NOT NULL
            )
        """)
        conn.execute(
            "INSERT INTO db_meta (key, value) VALUES ('schema_version', '1')"
        )
        conn.commit()

        _set_schema_version(conn, 5)
        conn.commit()

        cursor = conn.execute(
            "SELECT value FROM db_meta WHERE key = 'schema_version'"
        )
        assert cursor.fetchone()[0] == "5"
        conn.close()


class TestApplyMigrations:
    """Tests for _apply_migrations function."""

    def test_applies_all_migrations_to_fresh_db(self, tmp_path, monkeypatch):
        """Applies all migrations to a fresh database."""
        # Create mock migrations
        migrations_dir = tmp_path / "migrations"
        migrations_dir.mkdir()

        (migrations_dir / "001_initial.sql").write_text("""
            CREATE TABLE IF NOT EXISTS db_meta (
                key TEXT PRIMARY KEY NOT NULL,
                value TEXT NOT NULL
            );
            CREATE TABLE test1 (id INTEGER);
        """)
        (migrations_dir / "002_second.sql").write_text("""
            CREATE TABLE test2 (id INTEGER);
        """)

        monkeypatch.setattr("meshmon.db.MIGRATIONS_DIR", migrations_dir)

        db_path = tmp_path / "test.db"
        conn = sqlite3.connect(db_path)

        _apply_migrations(conn)

        # Check both tables exist
        cursor = conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
        )
        tables = [row[0] for row in cursor]
        assert "test1" in tables
        assert "test2" in tables
        assert "db_meta" in tables

        # Check version is updated
        assert _get_schema_version(conn) == 2
        conn.close()

    def test_skips_already_applied_migrations(self, tmp_path, monkeypatch):
        """Skips migrations that have already been applied."""
        migrations_dir = tmp_path / "migrations"
        migrations_dir.mkdir()

        (migrations_dir / "001_initial.sql").write_text("""
            CREATE TABLE IF NOT EXISTS db_meta (
                key TEXT PRIMARY KEY NOT NULL,
                value TEXT NOT NULL
            );
            CREATE TABLE test1 (id INTEGER);
        """)
        (migrations_dir / "002_second.sql").write_text("""
            CREATE TABLE test2 (id INTEGER);
        """)

        monkeypatch.setattr("meshmon.db.MIGRATIONS_DIR", migrations_dir)

        db_path = tmp_path / "test.db"
        conn = sqlite3.connect(db_path)

        # Apply first time
        _apply_migrations(conn)

        # Apply second time - should not fail
        _apply_migrations(conn)

        assert _get_schema_version(conn) == 2
        conn.close()

    def test_raises_when_no_migrations(self, tmp_path, monkeypatch):
        """Raises error when no migration files exist."""
        empty_dir = tmp_path / "empty_migrations"
        empty_dir.mkdir()
        monkeypatch.setattr("meshmon.db.MIGRATIONS_DIR", empty_dir)

        db_path = tmp_path / "test.db"
        conn = sqlite3.connect(db_path)

        with pytest.raises(RuntimeError, match="No migration files found"):
            _apply_migrations(conn)

        conn.close()

    def test_rolls_back_failed_migration(self, tmp_path, monkeypatch):
        """Rolls back if a migration fails."""
        migrations_dir = tmp_path / "migrations"
        migrations_dir.mkdir()

        (migrations_dir / "001_initial.sql").write_text("""
            CREATE TABLE IF NOT EXISTS db_meta (
                key TEXT PRIMARY KEY NOT NULL,
                value TEXT NOT NULL
            );
            CREATE TABLE test1 (id INTEGER);
        """)
        (migrations_dir / "002_broken.sql").write_text("""
            THIS IS NOT VALID SQL;
        """)

        monkeypatch.setattr("meshmon.db.MIGRATIONS_DIR", migrations_dir)

        db_path = tmp_path / "test.db"
        conn = sqlite3.connect(db_path)

        with pytest.raises(RuntimeError, match="Migration.*failed"):
            _apply_migrations(conn)

        # Version should still be 1 (first migration applied)
        assert _get_schema_version(conn) == 1
        conn.close()


class TestPublicGetSchemaVersion:
    """Tests for public get_schema_version function."""

    def test_returns_zero_when_db_missing(self, configured_env):
        """Returns 0 when database file doesn't exist."""
        version = get_schema_version()
        assert version == 0

    def test_returns_version_from_existing_db(self, initialized_db):
        """Returns schema version from initialized database."""
        version = get_schema_version()

        # Should be at least version 2 (we have 2 migrations)
        assert version >= 2

    def test_uses_readonly_connection(self, initialized_db, monkeypatch):
        """Opens database in readonly mode."""
        calls = []
        original_get_connection = __import__(
            "meshmon.db", fromlist=["get_connection"]
        ).get_connection

        from contextlib import contextmanager

        @contextmanager
        def mock_get_connection(*args, **kwargs):
            calls.append(kwargs)
            with original_get_connection(*args, **kwargs) as conn:
                yield conn

        monkeypatch.setattr("meshmon.db.get_connection", mock_get_connection)

        get_schema_version()

        assert any(call.get("readonly") is True for call in calls)
312
tests/database/test_db_queries.py
Normal file
@@ -0,0 +1,312 @@
"""Tests for database query functions."""

import pytest

from meshmon.db import (
    get_available_metrics,
    get_distinct_timestamps,
    get_latest_metrics,
    get_metric_count,
    get_metrics_for_period,
    insert_metrics,
)

BASE_TS = 1704067200


class TestGetMetricsForPeriod:
    """Tests for get_metrics_for_period function."""

    def test_returns_dict_by_metric(self, initialized_db):
        """Returns dict with metric names as keys."""
        ts = BASE_TS
        insert_metrics(ts, "companion", {
            "battery_mv": 3850.0,
            "contacts": 5,
        }, initialized_db)

        result = get_metrics_for_period(
            "companion", ts - 100, ts + 100, initialized_db
        )

        assert isinstance(result, dict)
        assert "battery_mv" in result
        assert "contacts" in result

    def test_returns_timestamp_value_tuples(self, initialized_db):
        """Each metric has list of (ts, value) tuples."""
        ts = BASE_TS
        insert_metrics(ts, "companion", {"test": 1.0}, initialized_db)

        result = get_metrics_for_period(
            "companion", ts - 100, ts + 100, initialized_db
        )

        assert len(result["test"]) == 1
        assert result["test"][0] == (ts, 1.0)

    def test_sorted_by_timestamp(self, initialized_db):
        """Results are sorted by timestamp ascending."""
        base_ts = BASE_TS

        # Insert out of order
        insert_metrics(base_ts + 200, "companion", {"test": 3.0}, initialized_db)
        insert_metrics(base_ts, "companion", {"test": 1.0}, initialized_db)
        insert_metrics(base_ts + 100, "companion", {"test": 2.0}, initialized_db)

        result = get_metrics_for_period(
            "companion", base_ts - 100, base_ts + 300, initialized_db
        )

        values = [v for ts, v in result["test"]]
        assert values == [1.0, 2.0, 3.0]

    def test_respects_time_range(self, initialized_db):
        """Only returns data within specified time range."""
        base_ts = BASE_TS

        insert_metrics(base_ts - 200, "companion", {"test": 1.0}, initialized_db)  # Outside
        insert_metrics(base_ts, "companion", {"test": 2.0}, initialized_db)  # Inside
        insert_metrics(base_ts + 200, "companion", {"test": 3.0}, initialized_db)  # Outside

        result = get_metrics_for_period(
            "companion", base_ts - 100, base_ts + 100, initialized_db
        )

        assert len(result["test"]) == 1
        assert result["test"][0][1] == 2.0

    def test_filters_by_role(self, initialized_db):
        """Only returns data for specified role."""
        ts = BASE_TS
        insert_metrics(ts, "companion", {"test": 1.0}, initialized_db)
        insert_metrics(ts, "repeater", {"test": 2.0}, initialized_db)

        result = get_metrics_for_period(
            "companion", ts - 100, ts + 100, initialized_db
        )

        assert result["test"][0][1] == 1.0

    def test_computes_bat_pct(self, initialized_db):
        """Computes bat_pct from battery voltage."""
        ts = BASE_TS
        # 4200 mV = 4.2V = 100%
        insert_metrics(ts, "companion", {"battery_mv": 4200.0}, initialized_db)

        result = get_metrics_for_period(
            "companion", ts - 100, ts + 100, initialized_db
        )

        assert "bat_pct" in result
        assert result["bat_pct"][0][1] == pytest.approx(100.0)

    def test_bat_pct_for_repeater(self, initialized_db):
        """Computes bat_pct for repeater using 'bat' field."""
        ts = BASE_TS
        # 3000 mV = 3.0V = 0%
        insert_metrics(ts, "repeater", {"bat": 3000.0}, initialized_db)

        result = get_metrics_for_period(
            "repeater", ts - 100, ts + 100, initialized_db
        )

        assert "bat_pct" in result
        assert result["bat_pct"][0][1] == pytest.approx(0.0)

    def test_empty_period_returns_empty(self, initialized_db):
        """Empty time period returns empty dict."""
        result = get_metrics_for_period(
            "companion", 0, 1, initialized_db
        )

        assert result == {}

    def test_invalid_role_raises(self, initialized_db):
        """Invalid role raises ValueError."""
        with pytest.raises(ValueError, match="Invalid role"):
            get_metrics_for_period("invalid", 0, 100, initialized_db)


class TestGetLatestMetrics:
    """Tests for get_latest_metrics function."""

    def test_returns_most_recent(self, initialized_db):
        """Returns metrics at most recent timestamp."""
        base_ts = BASE_TS

        insert_metrics(base_ts, "companion", {"test": 1.0}, initialized_db)
        insert_metrics(base_ts + 100, "companion", {"test": 2.0}, initialized_db)

        result = get_latest_metrics("companion", initialized_db)

        assert result["test"] == 2.0
        assert result["ts"] == base_ts + 100

    def test_includes_ts(self, initialized_db):
        """Result includes 'ts' key with timestamp."""
        ts = BASE_TS
        insert_metrics(ts, "companion", {"test": 1.0}, initialized_db)

        result = get_latest_metrics("companion", initialized_db)

        assert "ts" in result
        assert result["ts"] == ts

    def test_includes_all_metrics(self, initialized_db):
        """Result includes all metrics at that timestamp."""
        ts = BASE_TS
        insert_metrics(ts, "companion", {
            "battery_mv": 3850.0,
            "contacts": 5,
            "uptime_secs": 86400,
        }, initialized_db)

        result = get_latest_metrics("companion", initialized_db)

        assert result["battery_mv"] == 3850.0
        assert result["contacts"] == 5.0
        assert result["uptime_secs"] == 86400.0

    def test_computes_bat_pct(self, initialized_db):
        """Computes bat_pct from battery voltage."""
        ts = BASE_TS
        insert_metrics(ts, "companion", {"battery_mv": 3820.0}, initialized_db)

        result = get_latest_metrics("companion", initialized_db)

        assert "bat_pct" in result
        assert result["bat_pct"] == pytest.approx(50.0)

    def test_returns_none_when_empty(self, initialized_db):
        """Returns None when no data exists."""
        result = get_latest_metrics("companion", initialized_db)

        assert result is None

    def test_filters_by_role(self, initialized_db):
        """Only returns data for specified role."""
        ts = BASE_TS
        insert_metrics(ts, "companion", {"test": 1.0}, initialized_db)
        insert_metrics(ts + 100, "repeater", {"test": 2.0}, initialized_db)

        result = get_latest_metrics("companion", initialized_db)

        assert result["ts"] == ts
        assert result["test"] == 1.0

    def test_invalid_role_raises(self, initialized_db):
        """Invalid role raises ValueError."""
        with pytest.raises(ValueError, match="Invalid role"):
            get_latest_metrics("invalid", initialized_db)


class TestGetMetricCount:
    """Tests for get_metric_count function."""

    def test_counts_rows(self, initialized_db):
        """Counts total metric rows for role."""
        ts = BASE_TS
        insert_metrics(ts, "companion", {"a": 1.0, "b": 2.0, "c": 3.0}, initialized_db)

        count = get_metric_count("companion", initialized_db)

        assert count == 3

    def test_filters_by_role(self, initialized_db):
        """Only counts rows for specified role."""
        ts = BASE_TS
        insert_metrics(ts, "companion", {"a": 1.0}, initialized_db)
        insert_metrics(ts, "repeater", {"b": 2.0, "c": 3.0}, initialized_db)

        assert get_metric_count("companion", initialized_db) == 1
        assert get_metric_count("repeater", initialized_db) == 2

    def test_returns_zero_when_empty(self, initialized_db):
        """Returns 0 when no data exists."""
        count = get_metric_count("companion", initialized_db)
        assert count == 0

    def test_invalid_role_raises(self, initialized_db):
        """Invalid role raises ValueError."""
        with pytest.raises(ValueError, match="Invalid role"):
            get_metric_count("invalid", initialized_db)


class TestGetDistinctTimestamps:
    """Tests for get_distinct_timestamps function."""

    def test_counts_unique_timestamps(self, initialized_db):
        """Counts distinct timestamps."""
        ts = BASE_TS
        insert_metrics(ts, "companion", {"a": 1.0, "b": 2.0}, initialized_db)  # 1 ts
        insert_metrics(ts + 100, "companion", {"a": 3.0}, initialized_db)  # 2nd ts

        count = get_distinct_timestamps("companion", initialized_db)

        assert count == 2

    def test_filters_by_role(self, initialized_db):
        """Only counts timestamps for specified role."""
        ts = BASE_TS
        insert_metrics(ts, "companion", {"a": 1.0}, initialized_db)
        insert_metrics(ts + 100, "companion", {"a": 2.0}, initialized_db)
        insert_metrics(ts, "repeater", {"a": 3.0}, initialized_db)

        assert get_distinct_timestamps("companion", initialized_db) == 2
        assert get_distinct_timestamps("repeater", initialized_db) == 1

    def test_returns_zero_when_empty(self, initialized_db):
        """Returns 0 when no data exists."""
        count = get_distinct_timestamps("companion", initialized_db)
        assert count == 0


class TestGetAvailableMetrics:
    """Tests for get_available_metrics function."""

    def test_returns_metric_names(self, initialized_db):
        """Returns list of distinct metric names."""
        ts = BASE_TS
        insert_metrics(ts, "companion", {
            "battery_mv": 3850.0,
            "contacts": 5,
            "recv": 100,
        }, initialized_db)

        metrics = get_available_metrics("companion", initialized_db)

        assert "battery_mv" in metrics
        assert "contacts" in metrics
        assert "recv" in metrics

    def test_sorted_alphabetically(self, initialized_db):
        """Metrics are sorted alphabetically."""
        ts = BASE_TS
        insert_metrics(ts, "companion", {
            "zebra": 1.0,
            "apple": 2.0,
            "mango": 3.0,
        }, initialized_db)

        metrics = get_available_metrics("companion", initialized_db)

        assert metrics == sorted(metrics)

    def test_filters_by_role(self, initialized_db):
        """Only returns metrics for specified role."""
        ts = BASE_TS
        insert_metrics(ts, "companion", {"companion_metric": 1.0}, initialized_db)
        insert_metrics(ts, "repeater", {"repeater_metric": 2.0}, initialized_db)

        companion_metrics = get_available_metrics("companion", initialized_db)
        repeater_metrics = get_available_metrics("repeater", initialized_db)

        assert "companion_metric" in companion_metrics
        assert "repeater_metric" not in companion_metrics
        assert "repeater_metric" in repeater_metrics

    def test_returns_empty_when_no_data(self, initialized_db):
        """Returns empty list when no data exists."""
        metrics = get_available_metrics("companion", initialized_db)
        assert metrics == []
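All the query functions above share one shape: validate the role, then run a parameterized SQL query. As one example, the behavior `TestGetAvailableMetrics` pins down (distinct names, alphabetical order, per-role filtering, ValueError on a bad role) could be sketched as follows — the function signature mirrors the test calls, and the `metrics(ts, role, metric, value)` layout is taken from the INSERT in `test_vacuum_preserves_data`; the real implementation may differ:

```python
import sqlite3

VALID_ROLES = ("companion", "repeater")


def get_available_metrics(role, db_path):
    """Distinct metric names recorded for a role, sorted alphabetically."""
    if role not in VALID_ROLES:
        # Allow-list check; the role never reaches the SQL text anyway,
        # since it is bound as a parameter below.
        raise ValueError(f"Invalid role: {role!r}")
    conn = sqlite3.connect(db_path)
    try:
        cursor = conn.execute(
            "SELECT DISTINCT metric FROM metrics WHERE role = ? ORDER BY metric",
            (role,),
        )
        return [row[0] for row in cursor]
    finally:
        conn.close()
```

`ORDER BY metric` gives the alphabetical ordering the tests expect without a Python-side sort.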
210
tests/database/test_db_validation.py
Normal file
@@ -0,0 +1,210 @@
|
||||
"""Tests for database validation and security functions."""
|
||||
|
||||
import pytest
|
||||
|
||||
from meshmon.db import (
|
||||
VALID_ROLES,
|
||||
_validate_role,
|
||||
get_available_metrics,
|
||||
get_distinct_timestamps,
|
||||
get_latest_metrics,
|
||||
get_metric_count,
|
||||
get_metrics_for_period,
|
||||
insert_metric,
|
||||
insert_metrics,
|
||||
)
|
||||
|
||||
|
||||
class TestValidateRole:
|
||||
"""Tests for _validate_role function."""
|
||||
|
||||
def test_accepts_companion(self):
|
||||
"""Accepts 'companion' as valid role."""
|
||||
result = _validate_role("companion")
|
||||
assert result == "companion"
|
||||
|
||||
def test_accepts_repeater(self):
|
||||
"""Accepts 'repeater' as valid role."""
|
||||
result = _validate_role("repeater")
|
||||
assert result == "repeater"
|
||||
|
||||
def test_returns_input_on_success(self):
|
||||
"""Returns the validated role string."""
|
||||
for role in VALID_ROLES:
|
||||
result = _validate_role(role)
|
||||
assert result == role
|
||||
|
||||
def test_rejects_invalid_role(self):
|
||||
"""Rejects invalid role names."""
|
||||
with pytest.raises(ValueError, match="Invalid role"):
|
||||
_validate_role("invalid")
|
||||
|
||||
def test_rejects_empty_string(self):
|
||||
"""Rejects empty string as role."""
|
||||
with pytest.raises(ValueError, match="Invalid role"):
|
||||
_validate_role("")
|
||||
|
||||
def test_rejects_none(self):
|
||||
"""Rejects None as role."""
|
||||
with pytest.raises(ValueError):
|
||||
_validate_role(None)
|
||||
|
||||
def test_case_sensitive(self):
|
||||
"""Role validation is case-sensitive."""
|
||||
with pytest.raises(ValueError, match="Invalid role"):
|
||||
_validate_role("Companion")
|
||||
|
||||
with pytest.raises(ValueError, match="Invalid role"):
|
||||
_validate_role("REPEATER")
|
||||
|
||||
def test_rejects_whitespace_variants(self):
|
||||
"""Rejects roles with leading/trailing whitespace."""
|
||||
with pytest.raises(ValueError, match="Invalid role"):
|
||||
_validate_role(" companion")
|
||||
|
||||
with pytest.raises(ValueError, match="Invalid role"):
|
||||
_validate_role("repeater ")
|
||||
|
||||
with pytest.raises(ValueError, match="Invalid role"):
|
||||
_validate_role(" companion ")
|
||||
|
||||
|
||||
class TestSqlInjectionPrevention:
|
||||
"""Tests to verify SQL injection is prevented via role validation."""
|
||||
|
||||
@pytest.mark.parametrize("malicious_role", [
|
||||
"'; DROP TABLE metrics; --",
|
||||
"admin'; DROP TABLE metrics;--",
|
||||
"companion OR 1=1",
|
||||
"companion; DELETE FROM metrics",
|
||||
"companion' UNION SELECT * FROM db_meta --",
|
||||
"companion\"; DROP TABLE metrics; --",
|
||||
"1 OR 1=1",
|
||||
"companion/*comment*/",
|
||||
])
|
||||
def test_insert_metric_rejects_injection(self, initialized_db, malicious_role):
|
||||
"""insert_metric rejects SQL injection attempts."""
|
||||
with pytest.raises(ValueError, match="Invalid role"):
|
||||
insert_metric(1000, malicious_role, "test", 1.0, initialized_db)
|
||||
|
||||
@pytest.mark.parametrize("malicious_role", [
|
||||
"'; DROP TABLE metrics; --",
|
||||
"companion OR 1=1",
|
||||
])
|
||||
def test_insert_metrics_rejects_injection(self, initialized_db, malicious_role):
|
||||
"""insert_metrics rejects SQL injection attempts."""
|
||||
with pytest.raises(ValueError, match="Invalid role"):
|
||||
insert_metrics(1000, malicious_role, {"test": 1.0}, initialized_db)
|
||||
|
||||
@pytest.mark.parametrize("malicious_role", [
|
||||
"'; DROP TABLE metrics; --",
|
||||
"companion OR 1=1",
|
||||
])
|
||||
def test_get_metrics_for_period_rejects_injection(self, initialized_db, malicious_role):
|
||||
"""get_metrics_for_period rejects SQL injection attempts."""
|
||||
with pytest.raises(ValueError, match="Invalid role"):
|
||||
get_metrics_for_period(malicious_role, 0, 100, initialized_db)
|
||||
|
||||
@pytest.mark.parametrize("malicious_role", [
|
||||
"'; DROP TABLE metrics; --",
|
||||
"companion OR 1=1",
|
||||
])
|
||||
def test_get_latest_metrics_rejects_injection(self, initialized_db, malicious_role):
|
||||
"""get_latest_metrics rejects SQL injection attempts."""
|
||||
with pytest.raises(ValueError, match="Invalid role"):
|
||||
get_latest_metrics(malicious_role, initialized_db)
|
||||
|
||||
@pytest.mark.parametrize("malicious_role", [
|
||||
"'; DROP TABLE metrics; --",
|
||||
"companion OR 1=1",
|
||||
])
|
||||
def test_get_metric_count_rejects_injection(self, initialized_db, malicious_role):
|
||||
"""get_metric_count rejects SQL injection attempts."""
|
||||
with pytest.raises(ValueError, match="Invalid role"):
|
||||
get_metric_count(malicious_role, initialized_db)
|
||||
|
||||
@pytest.mark.parametrize("malicious_role", [
|
||||
"'; DROP TABLE metrics; --",
|
||||
"companion OR 1=1",
|
||||
])
|
||||
def test_get_distinct_timestamps_rejects_injection(self, initialized_db, malicious_role):
|
||||
"""get_distinct_timestamps rejects SQL injection attempts."""
|
||||
with pytest.raises(ValueError, match="Invalid role"):
|
||||
get_distinct_timestamps(malicious_role, initialized_db)
|
||||
|
||||
@pytest.mark.parametrize("malicious_role", [
|
||||
"'; DROP TABLE metrics; --",
|
||||
"companion OR 1=1",
|
||||
])
|
||||
def test_get_available_metrics_rejects_injection(self, initialized_db, malicious_role):
|
||||
"""get_available_metrics rejects SQL injection attempts."""
|
||||
with pytest.raises(ValueError, match="Invalid role"):
|
||||
get_available_metrics(malicious_role, initialized_db)
|
||||
|
||||
|
||||
class TestValidRolesConstant:
|
||||
"""Tests for VALID_ROLES constant."""
|
||||
|
||||
def test_contains_companion(self):
|
||||
"""VALID_ROLES includes 'companion'."""
|
||||
assert "companion" in VALID_ROLES
|
||||
|
||||
def test_contains_repeater(self):
|
||||
"""VALID_ROLES includes 'repeater'."""
|
||||
assert "repeater" in VALID_ROLES
|
||||
|
||||
def test_is_tuple(self):
|
||||
"""VALID_ROLES is immutable (tuple)."""
|
||||
assert isinstance(VALID_ROLES, tuple)
|
||||
|
||||
def test_exactly_two_roles(self):
|
||||
"""There are exactly two valid roles."""
|
||||
assert len(VALID_ROLES) == 2
|
||||
|
||||
|
||||
class TestMetricNameValidation:
|
||||
"""Tests for metric name handling (not validated, but should handle safely)."""
|
||||
|
||||
def test_metric_name_with_special_chars(self, initialized_db):
|
||||
"""Metric names with special chars are handled via parameterized queries."""
|
||||
# These should work because we use parameterized queries
|
||||
insert_metric(1000, "companion", "test.metric", 1.0, initialized_db)
|
||||
insert_metric(1001, "companion", "test-metric", 2.0, initialized_db)
|
||||
insert_metric(1002, "companion", "test_metric", 3.0, initialized_db)
|
||||
|
||||
metrics = get_available_metrics("companion", initialized_db)
|
||||
assert "test.metric" in metrics
|
||||
assert "test-metric" in metrics
|
||||
assert "test_metric" in metrics
|
||||
|
||||
def test_metric_name_with_spaces(self, initialized_db):
|
||||
"""Metric names with spaces are handled safely."""
|
||||
insert_metric(1000, "companion", "test metric", 1.0, initialized_db)
|
||||
|
||||
metrics = get_available_metrics("companion", initialized_db)
|
||||
assert "test metric" in metrics
|
||||
|
||||
def test_metric_name_unicode(self, initialized_db):
|
||||
"""Unicode metric names are handled safely."""
|
||||
insert_metric(1000, "companion", "température", 1.0, initialized_db)
|
||||
insert_metric(1001, "companion", "温度", 2.0, initialized_db)
|
||||
|
||||
metrics = get_available_metrics("companion", initialized_db)
|
||||
assert "température" in metrics
|
||||
assert "温度" in metrics
|
||||
|
||||
def test_empty_metric_name(self, initialized_db):
|
||||
"""Empty metric name is allowed (not validated)."""
|
||||
# Empty string is allowed as metric name
|
||||
insert_metric(1000, "companion", "", 1.0, initialized_db)
|
||||
|
||||
metrics = get_available_metrics("companion", initialized_db)
|
||||
assert "" in metrics
|
||||
|
||||
def test_very_long_metric_name(self, initialized_db):
|
||||
"""Very long metric names are handled."""
|
||||
long_name = "a" * 1000
|
||||
insert_metric(1000, "companion", long_name, 1.0, initialized_db)
|
||||
|
||||
metrics = get_available_metrics("companion", initialized_db)
|
||||
assert long_name in metrics
|
||||
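The comment "These should work because we use parameterized queries" is the load-bearing point of this class: with `?` placeholders the metric name is bound as data, so it can never change the SQL text. A self-contained sketch of that guarantee using the stdlib `sqlite3` module, against a hypothetical one-table schema (not the project's real schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE metrics (ts INTEGER, role TEXT, name TEXT, value REAL)")

# Dots, dashes, spaces, quotes, and Unicode in the name are all bound as
# data by the "?" placeholders, so they cannot alter the statement itself.
for ts, name in [
    (1000, "test.metric"),
    (1001, "test metric"),
    (1002, 'evil"; DROP TABLE metrics; --'),
    (1003, "température"),
]:
    conn.execute(
        "INSERT INTO metrics (ts, role, name, value) VALUES (?, ?, ?, ?)",
        (ts, "companion", name, 1.0),
    )

names = {row[0] for row in conn.execute("SELECT DISTINCT name FROM metrics")}
```

The injection-looking name round-trips as an ordinary string, and the table survives intact.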
1
tests/html/__init__.py
Normal file
@@ -0,0 +1 @@
"""Tests for HTML generation."""
74
tests/html/conftest.py
Normal file
@@ -0,0 +1,74 @@
"""Fixtures for HTML tests."""

from pathlib import Path

import pytest


@pytest.fixture
def sample_chart_stats():
    """Sample chart statistics for template rendering."""
    return {
        "bat": {
            "day": {"min": 3.5, "avg": 3.7, "max": 3.9, "current": 3.85},
            "week": {"min": 3.4, "avg": 3.65, "max": 3.95, "current": 3.8},
            "month": {"min": 3.3, "avg": 3.6, "max": 4.0, "current": 3.75},
            "year": {"min": 3.2, "avg": 3.55, "max": 4.1, "current": 3.7},
        },
        "bat_pct": {
            "day": {"min": 50, "avg": 70, "max": 90, "current": 85},
            "week": {"min": 45, "avg": 65, "max": 95, "current": 80},
        },
        "nb_recv": {
            "day": {"min": 0, "avg": 50.5, "max": 100, "current": 75},
            "week": {"min": 0, "avg": 48.2, "max": 150, "current": 60},
        },
    }


@pytest.fixture
def sample_latest_metrics():
    """Sample latest metrics for page rendering."""
    return {
        "ts": 1704067200,  # 2024-01-01 00:00:00 UTC
        "bat": 3850.0,
        "bat_pct": 75.0,
        "uptime": 86400,
        "last_rssi": -85,
        "last_snr": 7.5,
        "noise_floor": -115,
        "nb_recv": 1234,
        "nb_sent": 567,
        "tx_queue_len": 0,
    }


@pytest.fixture
def sample_companion_latest():
    """Sample companion latest metrics."""
    return {
        "ts": 1704067200,
        "battery_mv": 3850.0,
        "bat_pct": 75.0,
        "uptime_secs": 86400,
        "contacts": 5,
        "recv": 1234,
        "sent": 567,
    }


@pytest.fixture
def templates_dir():
    """Path to templates directory."""
    return Path(__file__).parent.parent.parent / "src" / "meshmon" / "templates"


@pytest.fixture
def sample_svg_content():
    """Sample SVG content for testing."""
    return """<?xml version="1.0" encoding="utf-8"?>
<svg xmlns="http://www.w3.org/2000/svg" width="800" height="280"
     data-metric="bat" data-period="day" data-theme="light">
  <rect width="100%" height="100%" fill="#ffffff"/>
  <path d="M0,0 L100,100"/>
</svg>"""
149
tests/html/test_jinja_env.py
Normal file
@@ -0,0 +1,149 @@
"""Tests for Jinja2 environment and custom filters."""

import re

import pytest
from jinja2 import Environment

from meshmon.html import get_jinja_env


class TestGetJinjaEnv:
    """Tests for get_jinja_env function."""

    def test_returns_environment(self):
        """Returns a Jinja2 Environment."""
        env = get_jinja_env()
        assert isinstance(env, Environment)

    def test_has_autoescape(self):
        """Environment has autoescape enabled."""
        env = get_jinja_env()
        # Default is to autoescape HTML files
        assert env.autoescape is True or callable(env.autoescape)

    def test_can_load_templates(self, templates_dir):
        """Can load templates from the templates directory."""
        env = get_jinja_env()

        # Should be able to get the base template
        template = env.get_template("base.html")
        assert template is not None

    def test_returns_same_instance(self):
        """Returns the same environment instance (cached)."""
        env1 = get_jinja_env()
        env2 = get_jinja_env()
        assert env1 is env2

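`test_returns_same_instance` pins down that `get_jinja_env` returns one cached object. A common way to get that behavior is a module-level `functools.lru_cache`; this is an assumed pattern for illustration, with a stand-in class so the sketch runs without Jinja2, not necessarily what `meshmon.html` actually does:

```python
from functools import lru_cache


class FakeEnvironment:
    """Stand-in for jinja2.Environment so the sketch is self-contained."""


@lru_cache(maxsize=1)
def get_jinja_env():
    # Built on the first call; every later call returns the identical object.
    return FakeEnvironment()


env1 = get_jinja_env()
env2 = get_jinja_env()
```

Because the cache key is the (empty) argument tuple, the identity check `env1 is env2` holds for the life of the process.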
class TestJinjaFilters:
    """Tests for custom Jinja2 filters."""

    @pytest.fixture
    def env(self):
        """Get Jinja2 environment."""
        return get_jinja_env()

    def test_format_number_filter_exists(self, env):
        """format_number filter is registered."""
        assert "format_number" in env.filters

    def test_format_number_formats_thousands(self, env):
        """format_number adds thousand separators."""
        template = env.from_string("{{ value|format_number }}")

        result = template.render(value=1234567)
        assert result == "1,234,567"

    def test_format_number_handles_none(self, env):
        """format_number handles None gracefully."""
        template = env.from_string("{{ value|format_number }}")

        result = template.render(value=None)
        assert result == "N/A"

    def test_format_time_filter_exists(self, env):
        """format_time filter is registered."""
        assert "format_time" in env.filters

    def test_format_time_formats_timestamp(self, env):
        """format_time formats Unix timestamp."""
        template = env.from_string("{{ value|format_time }}")

        ts = 1704067200
        result = template.render(value=ts)
        assert re.match(r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}$", result)

    def test_format_time_handles_none(self, env):
        """format_time handles None gracefully."""
        template = env.from_string("{{ value|format_time }}")

        result = template.render(value=None)
        assert result == "N/A"

    def test_format_uptime_filter_exists(self, env):
        """format_uptime filter is registered."""
        assert "format_uptime" in env.filters

    def test_format_uptime_formats_seconds(self, env):
        """format_uptime formats seconds to human readable."""
        template = env.from_string("{{ value|format_uptime }}")

        # 1 day, 2 hours, 30 minutes = 95400 seconds
        result = template.render(value=95400)
        assert result == "1d 2h 30m"

    def test_format_duration_filter_exists(self, env):
        """format_duration filter is registered."""
        assert "format_duration" in env.filters

    def test_format_value_filter_exists(self, env):
        """format_value filter is registered."""
        assert "format_value" in env.filters

    def test_format_compact_number_filter_exists(self, env):
        """format_compact_number filter is registered."""
        assert "format_compact_number" in env.filters


class TestTemplateRendering:
    """Tests for basic template rendering."""

    def test_base_template_renders(self):
        """Base template renders without error."""
        env = get_jinja_env()
        template = env.get_template("base.html")

        # Render with minimal context
        html = template.render(
            role="repeater",
            period="day",
            title="Test",
        )

        assert "</html>" in html

    def test_node_template_extends_base(self):
        """Node template extends base template."""
        env = get_jinja_env()
        template = env.get_template("node.html")

        # Should have access to base template blocks
        assert template is not None

    def test_template_has_html_structure(self):
        """Rendered template has proper HTML structure."""
        env = get_jinja_env()
        template = env.get_template("base.html")

        html = template.render(
            role="repeater",
            period="day",
            title="Test Page",
        )

        assert "<!DOCTYPE html>" in html or "<!doctype html>" in html.lower()
        assert "<html" in html
        assert "<head>" in html
        assert "<body>" in html
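The filter contracts these tests pin down (thousand separators, "N/A" for None, "1d 2h 30m" uptime) can be sketched as plain functions. These are illustrative stand-ins consistent with the assertions above, not the project's actual filter implementations:

```python
def format_number(value):
    # Thousand separators for counts; None renders as a placeholder.
    if value is None:
        return "N/A"
    return f"{value:,}"


def format_uptime(value):
    # Seconds -> "<d>d <h>h <m>m", e.g. 95400 -> "1d 2h 30m".
    if value is None:
        return "N/A"
    days, rem = divmod(int(value), 86400)
    hours, rem = divmod(rem, 3600)
    minutes = rem // 60
    return f"{days}d {hours}h {minutes}m"
```

Registering such a function is then one line on a `jinja2.Environment`, e.g. `env.filters["format_number"] = format_number`, which is what makes `"format_number" in env.filters` true.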
192
tests/html/test_metrics_builders.py
Normal file
@@ -0,0 +1,192 @@
"""Tests for metrics builder functions."""


from meshmon.html import (
    _build_traffic_table_rows,
    build_companion_metrics,
    build_node_details,
    build_radio_config,
    build_repeater_metrics,
)


class TestBuildRepeaterMetrics:
    """Tests for build_repeater_metrics function."""

    def test_returns_dict(self, sample_repeater_metrics):
        """Returns a dictionary."""
        # build_repeater_metrics takes a row dict (firmware field names)
        result = build_repeater_metrics(sample_repeater_metrics)
        assert isinstance(result, dict)

    def test_returns_dict_structure(self, sample_repeater_metrics):
        """Returns dict with expected keys."""
        result = build_repeater_metrics(sample_repeater_metrics)
        # Should have critical_metrics, secondary_metrics, traffic_metrics
        assert "critical_metrics" in result
        assert "secondary_metrics" in result
        assert "traffic_metrics" in result

    def test_critical_metrics_is_list(self, sample_repeater_metrics):
        """Critical metrics is a list."""
        result = build_repeater_metrics(sample_repeater_metrics)
        assert isinstance(result["critical_metrics"], list)

    def test_handles_none(self):
        """Handles None row."""
        result = build_repeater_metrics(None)
        assert isinstance(result, dict)
        assert result["critical_metrics"] == []

    def test_handles_empty_dict(self):
        """Handles empty dict."""
        result = build_repeater_metrics({})
        assert isinstance(result, dict)


class TestBuildCompanionMetrics:
    """Tests for build_companion_metrics function."""

    def test_returns_dict(self, sample_companion_metrics):
        """Returns a dictionary."""
        result = build_companion_metrics(sample_companion_metrics)
        assert isinstance(result, dict)

    def test_returns_dict_structure(self, sample_companion_metrics):
        """Returns dict with expected keys."""
        result = build_companion_metrics(sample_companion_metrics)
        assert "critical_metrics" in result
        assert "secondary_metrics" in result
        assert "traffic_metrics" in result

    def test_handles_none(self):
        """Handles None row."""
        result = build_companion_metrics(None)
        assert isinstance(result, dict)
        assert result["critical_metrics"] == []

    def test_handles_empty_dict(self):
        """Handles empty dict."""
        result = build_companion_metrics({})
        assert isinstance(result, dict)


class TestBuildNodeDetails:
    """Tests for build_node_details function."""

    def test_returns_list(self, configured_env):
        """Returns a list of detail items."""
        result = build_node_details("repeater")
        assert isinstance(result, list)

    def test_items_have_label_value(self, configured_env):
        """Each item has label and value."""
        result = build_node_details("repeater")
        for item in result:
            assert isinstance(item, dict)
            assert "label" in item
            assert "value" in item

    def test_includes_hardware_info(self, configured_env, monkeypatch):
        """Includes hardware model info."""
        monkeypatch.setenv("REPEATER_HARDWARE", "Test LoRa Device")
        import meshmon.env
        meshmon.env._config = None

        result = build_node_details("repeater")

        # Should have hardware in one of the items
        hardware = next(item for item in result if item.get("label") == "Hardware")
        assert hardware["value"] == "Test LoRa Device"

    def test_different_roles(self, configured_env):
        """Different roles return details."""
        repeater_details = build_node_details("repeater")
        companion_details = build_node_details("companion")

        assert isinstance(repeater_details, list)
        assert isinstance(companion_details, list)


class TestBuildRadioConfig:
    """Tests for build_radio_config function."""

    def test_returns_list(self, configured_env):
        """Returns a list of config items."""
        result = build_radio_config()
        assert isinstance(result, list)

    def test_items_have_label_value(self, configured_env):
        """Each item has label and value."""
        result = build_radio_config()
        for item in result:
            assert isinstance(item, dict)
            assert "label" in item
            assert "value" in item

    def test_includes_frequency_when_set(self, configured_env, monkeypatch):
        """Includes frequency when configured."""
        monkeypatch.setenv("RADIO_FREQUENCY", "869.618 MHz")
        import meshmon.env
        meshmon.env._config = None

        result = build_radio_config()

        freq = next(item for item in result if item.get("label") == "Frequency")
        assert freq["value"] == "869.618 MHz"

    def test_handles_missing_config(self, configured_env):
        """Returns list even with default config."""
        result = build_radio_config()
        assert isinstance(result, list)


class TestBuildTrafficTableRows:
    """Tests for _build_traffic_table_rows function."""

    def test_returns_list(self):
        """Returns a list of rows."""
        # Input is list of traffic metric dicts
        traffic_metrics = [
            {"label": "RX", "value": "100", "raw_value": 100, "unit": "/min"},
            {"label": "TX", "value": "50", "raw_value": 50, "unit": "/min"},
        ]
        rows = _build_traffic_table_rows(traffic_metrics)
        assert isinstance(rows, list)

    def test_rows_have_structure(self):
        """Each row has expected structure."""
        traffic_metrics = [
            {"label": "RX", "value": "100", "raw_value": 100, "unit": "/min"},
            {"label": "TX", "value": "50", "raw_value": 50, "unit": "/min"},
        ]
        rows = _build_traffic_table_rows(traffic_metrics)

        for row in rows:
            assert isinstance(row, dict)
            assert "label" in row
            assert "rx" in row
            assert "tx" in row
            assert "rx_raw" in row
            assert "tx_raw" in row
            assert "unit" in row

    def test_handles_empty_list(self):
        """Handles empty traffic metrics list."""
        rows = _build_traffic_table_rows([])
        assert isinstance(rows, list)
        assert len(rows) == 0

    def test_combines_rx_tx_pairs(self):
        """Combines RX and TX into single row."""
        traffic_metrics = [
            {"label": "Flood RX", "value": "100", "raw_value": 100, "unit": "/min"},
            {"label": "Flood TX", "value": "50", "raw_value": 50, "unit": "/min"},
        ]
        rows = _build_traffic_table_rows(traffic_metrics)

        # Should have one "Flood" row with both rx and tx
        assert len(rows) == 1
        assert rows[0]["label"] == "Flood"
        assert rows[0]["rx"] == "100"
        assert rows[0]["tx"] == "50"
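The RX/TX pairing behavior that `test_combines_rx_tx_pairs` pins down can be sketched as a small stand-alone function. This is an assumed implementation consistent with the assertions above (including the skip behavior for unpaired labels, which is a guess), not the project's actual `_build_traffic_table_rows`:

```python
def build_traffic_table_rows(traffic_metrics):
    # Pair "<Label> RX" and "<Label> TX" entries into one table row per label.
    rows, by_label = [], {}
    for metric in traffic_metrics:
        label, _, direction = metric["label"].rpartition(" ")
        if direction not in ("RX", "TX") or not label:
            continue  # assumed: labels without an "RX"/"TX" suffix are skipped
        row = by_label.get(label)
        if row is None:
            row = {"label": label, "rx": None, "tx": None,
                   "rx_raw": None, "tx_raw": None, "unit": metric["unit"]}
            by_label[label] = row
            rows.append(row)
        key = direction.lower()
        row[key] = metric["value"]
        row[f"{key}_raw"] = metric["raw_value"]
    return rows


rows = build_traffic_table_rows([
    {"label": "Flood RX", "value": "100", "raw_value": 100, "unit": "/min"},
    {"label": "Flood TX", "value": "50", "raw_value": 50, "unit": "/min"},
])
```

Keeping both the formatted string (`rx`) and the raw number (`rx_raw`) per row matches the template's need to display one and sort or scale by the other.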
233
tests/html/test_page_context.py
Normal file
@@ -0,0 +1,233 @@
"""Tests for page context building."""

from datetime import datetime, timedelta

import pytest

from meshmon.html import (
    build_page_context,
    get_status,
)

FIXED_NOW = datetime(2024, 1, 1, 12, 0, 0)


@pytest.fixture
def fixed_now(monkeypatch):
    class FixedDatetime(datetime):
        @classmethod
        def now(cls):
            return FIXED_NOW

    monkeypatch.setattr("meshmon.html.datetime", FixedDatetime)
    return FIXED_NOW


class TestGetStatus:
    """Tests for get_status function."""

    def test_online_for_recent_data(self, fixed_now):
        """Returns 'online' for data less than 30 minutes old."""
        # 10 minutes ago
        recent_ts = int(fixed_now.timestamp()) - 600

        status_class, status_label = get_status(recent_ts)

        assert status_class == "online"

    def test_stale_for_medium_age_data(self, fixed_now):
        """Returns 'stale' for data 30 minutes to 2 hours old."""
        # 1 hour ago
        medium_ts = int(fixed_now.timestamp()) - 3600

        status_class, status_label = get_status(medium_ts)

        assert status_class == "stale"

    def test_offline_for_old_data(self, fixed_now):
        """Returns 'offline' for data more than 2 hours old."""
        # 3 hours ago
        old_ts = int(fixed_now.timestamp()) - 10800

        status_class, status_label = get_status(old_ts)

        assert status_class == "offline"

    def test_offline_for_very_old_data(self, fixed_now):
        """Returns 'offline' for very old data."""
        # 7 days ago
        very_old_ts = int(fixed_now.timestamp()) - int(timedelta(days=7).total_seconds())

        status_class, status_label = get_status(very_old_ts)

        assert status_class == "offline"

    def test_offline_for_none(self):
        """Returns 'offline' for None timestamp."""
        status_class, status_label = get_status(None)

        assert status_class == "offline"

    def test_offline_for_zero(self):
        """Returns 'offline' for zero timestamp."""
        status_class, status_label = get_status(0)

        assert status_class == "offline"

    def test_online_for_current_time(self, fixed_now):
        """Returns 'online' for current timestamp."""
        now_ts = int(fixed_now.timestamp())

        status_class, status_label = get_status(now_ts)

        assert status_class == "online"

    def test_boundary_30_minutes(self, fixed_now):
        """Tests boundary at exactly 30 minutes."""
        # Exactly 30 minutes ago
        boundary_ts = int(fixed_now.timestamp()) - 1800

        status_class, _ = get_status(boundary_ts)
        assert status_class == "stale"

    def test_boundary_2_hours(self, fixed_now):
        """Tests boundary at exactly 2 hours."""
        # Exactly 2 hours ago
        boundary_ts = int(fixed_now.timestamp()) - 7200

        status_class, _ = get_status(boundary_ts)
        assert status_class == "offline"

    def test_returns_tuple(self, fixed_now):
        """Returns tuple of (status_class, status_label)."""
        status = get_status(int(fixed_now.timestamp()))
        assert isinstance(status, tuple)
        assert len(status) == 2

    def test_status_label_is_string(self, fixed_now):
        """Status label is a string."""
        _, status_label = get_status(int(fixed_now.timestamp()))
        assert isinstance(status_label, str)


class TestBuildPageContext:
    """Tests for build_page_context function."""

    @pytest.fixture
    def sample_row(self, sample_repeater_metrics, fixed_now):
        """Create a sample row with timestamp."""
        row = sample_repeater_metrics.copy()
        row["ts"] = int(fixed_now.timestamp()) - 300  # 5 minutes ago
        return row

    def test_returns_dict(self, configured_env, sample_row):
        """Returns a dictionary."""
        context = build_page_context(
            role="repeater",
            period="day",
            row=sample_row,
            at_root=True,
        )

        assert isinstance(context, dict)

    def test_includes_role_and_period(self, configured_env, sample_row):
        """Context includes role and period."""
        context = build_page_context(
            role="repeater",
            period="day",
            row=sample_row,
            at_root=True,
        )

        assert context.get("role") == "repeater"
        assert context.get("period") == "day"

    def test_includes_status(self, configured_env, sample_row):
        """Context includes status indicator."""
        context = build_page_context(
            role="repeater",
            period="day",
            row=sample_row,
            at_root=True,
        )

        assert context["status_class"] == "online"

    def test_handles_none_row(self, configured_env):
        """Handles None row gracefully."""
        context = build_page_context(
            role="repeater",
            period="day",
            row=None,
            at_root=True,
        )

        assert context.get("status_class") == "offline"

    def test_includes_node_name(self, configured_env, sample_row, monkeypatch):
        """Context includes node name from config."""
        monkeypatch.setenv("REPEATER_DISPLAY_NAME", "Test Repeater")
        import meshmon.env
        meshmon.env._config = None

        context = build_page_context(
            role="repeater",
            period="day",
            row=sample_row,
            at_root=True,
        )

        assert "node_name" in context
        assert context["node_name"] == "Test Repeater"

    def test_includes_period(self, configured_env, sample_row):
        """Context includes current period."""
        context = build_page_context(
            role="repeater",
            period="day",
            row=sample_row,
            at_root=True,
        )

        assert "period" in context
        assert context["period"] == "day"

    def test_different_roles(self, configured_env, sample_row, sample_companion_metrics, fixed_now):
        """Context varies by role."""
        companion_row = sample_companion_metrics.copy()
        companion_row["ts"] = int(fixed_now.timestamp()) - 300

        repeater_context = build_page_context(
            role="repeater",
            period="day",
            row=sample_row,
            at_root=True,
        )
        companion_context = build_page_context(
            role="companion",
            period="day",
            row=companion_row,
            at_root=False,
        )

        assert repeater_context["role"] == "repeater"
        assert companion_context["role"] == "companion"

    def test_at_root_affects_css_path(self, configured_env, sample_row):
        """at_root parameter affects CSS path."""
        root_context = build_page_context(
            role="repeater",
            period="day",
            row=sample_row,
            at_root=True,
        )
        non_root_context = build_page_context(
            role="companion",
            period="day",
            row=sample_row,
            at_root=False,
        )

        assert root_context["css_path"] == "/"
        assert non_root_context["css_path"] == "../"
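Taken together, the TestGetStatus cases fully determine the classification boundaries: under 30 minutes is online, 30 minutes up to (but not including) 2 hours is stale, and 2 hours or more, or a missing/zero timestamp, is offline. A minimal sketch consistent with those assertions; the labels are assumptions, since the tests only require the second element to be a string:

```python
from datetime import datetime


def get_status(ts, now=None):
    # Classify a Unix timestamp into a (status_class, status_label) pair.
    if not ts:  # None or 0: never seen
        return ("offline", "Offline")
    if now is None:
        now = datetime.now().timestamp()
    age = now - ts
    if age < 1800:  # under 30 minutes
        return ("online", "Online")
    if age < 7200:  # under 2 hours
        return ("stale", "Stale")
    return ("offline", "Offline")


now = datetime(2024, 1, 1, 12, 0, 0).timestamp()
```

Using strict `<` comparisons puts the exact 30-minute and 2-hour marks into the older bucket, which is what the two boundary tests require.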
110
tests/html/test_reports_index.py
Normal file
@@ -0,0 +1,110 @@
"""Tests for reports index page generation."""


import pytest

from meshmon.html import render_reports_index


class TestRenderReportsIndex:
    """Tests for render_reports_index function."""

    @pytest.fixture
    def sample_report_sections(self):
        """Sample report sections for testing."""
        return [
            {
                "role": "repeater",
                "years": [
                    {
                        "year": 2024,
                        "months": [
                            {"month": 1, "name": "January"},
                            {"month": 2, "name": "February"},
                        ]
                    }
                ]
            },
            {
                "role": "companion",
                "years": [
                    {
                        "year": 2024,
                        "months": [
                            {"month": 1, "name": "January"},
                        ]
                    }
                ]
            },
        ]

    def test_returns_html_string(self, configured_env, sample_report_sections):
        """Returns an HTML string."""
        html = render_reports_index(sample_report_sections)

        assert isinstance(html, str)
        assert len(html) > 0

    def test_html_structure(self, configured_env, sample_report_sections):
        """Generated HTML has proper structure."""
        html = render_reports_index(sample_report_sections)

        assert "<!DOCTYPE html>" in html or "<!doctype html>" in html.lower()
        assert "</html>" in html

    def test_includes_title(self, configured_env, sample_report_sections):
        """Index page includes title."""
        html = render_reports_index(sample_report_sections)

        assert "Reports Archive" in html

    def test_includes_year(self, configured_env, sample_report_sections):
        """Lists available years."""
        html = render_reports_index(sample_report_sections)

        assert "/reports/repeater/2024/" in html

    def test_handles_empty_sections(self, configured_env):
        """Handles empty report sections."""
        html = render_reports_index([])

        assert isinstance(html, str)
        assert "</html>" in html

    def test_includes_role_names(self, configured_env, sample_report_sections):
        """Includes role names in output."""
        html = render_reports_index(sample_report_sections)

        assert "Repeater" in html
        assert "Companion" in html

    def test_includes_descriptions(self, configured_env, sample_report_sections, monkeypatch):
        """Includes role descriptions from config."""
        monkeypatch.setenv("REPEATER_DISPLAY_NAME", "Alpha Repeater")
        monkeypatch.setenv("COMPANION_DISPLAY_NAME", "Beta Node")
        monkeypatch.setenv("REPORT_LOCATION_SHORT", "Test Ridge")
        import meshmon.env
        meshmon.env._config = None

        html = render_reports_index(sample_report_sections)

        assert "Alpha Repeater — Remote node in Test Ridge" in html
        assert "Beta Node — Local USB-connected node" in html

    def test_includes_css_reference(self, configured_env, sample_report_sections):
        """Includes reference to stylesheet."""
        html = render_reports_index(sample_report_sections)

        assert "styles.css" in html

    def test_handles_sections_without_years(self, configured_env):
        """Handles sections with no years."""
        sections = [
            {"role": "repeater", "years": []},
            {"role": "companion", "years": []},
        ]

        html = render_reports_index(sections)

        assert isinstance(html, str)
        assert "No reports available yet." in html
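Several tests in this suite follow a `monkeypatch.setenv(...)` call with `meshmon.env._config = None`. That reset only makes sense if the config module caches a lazily built snapshot of the environment at module level. A sketch of the assumed pattern (the names and keys here are illustrative, not the real module):

```python
import os

_config = None  # module-level cache; tests set this to None to force a re-read


def get_config():
    global _config
    if _config is None:
        # Snapshot the environment on first access only.
        _config = {
            "repeater_display_name": os.environ.get("REPEATER_DISPLAY_NAME", "Repeater"),
            "companion_display_name": os.environ.get("COMPANION_DISPLAY_NAME", "Companion"),
        }
    return _config


os.environ["REPEATER_DISPLAY_NAME"] = "Alpha Repeater"
_config = None  # what the tests do after changing env vars
```

Without that reset, a test that changes an environment variable would still see whatever values an earlier test had already cached.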
280
tests/html/test_write_site.py
Normal file
@@ -0,0 +1,280 @@
"""Tests for write_site and related output functions."""

import pytest

from meshmon.db import get_latest_metrics
from meshmon.html import (
    copy_static_assets,
    write_site,
)

BASE_TS = 1704067200


def _sample_companion_metrics() -> dict[str, float]:
    return {
        "battery_mv": 3850.0,
        "uptime_secs": 86400.0,
        "contacts": 5.0,
        "recv": 1234.0,
        "sent": 567.0,
        "errors": 0.0,
    }


def _sample_repeater_metrics() -> dict[str, float]:
    return {
        "bat": 3920.0,
        "uptime": 172800.0,
        "last_rssi": -85.0,
        "last_snr": 7.5,
        "noise_floor": -115.0,
        "tx_queue_len": 0.0,
        "nb_recv": 5678.0,
        "nb_sent": 2345.0,
        "airtime": 3600.0,
        "rx_airtime": 7200.0,
        "flood_dups": 12.0,
        "direct_dups": 5.0,
        "sent_flood": 100.0,
        "recv_flood": 200.0,
        "sent_direct": 50.0,
        "recv_direct": 75.0,
    }


@pytest.fixture(scope="module")
def html_db_cache(tmp_path_factory):
    """Create and populate a shared DB once for HTML write_site tests."""
    from meshmon.db import init_db, insert_metrics

    root_dir = tmp_path_factory.mktemp("html-db")
    state_dir = root_dir / "state"
    state_dir.mkdir()

    db_path = state_dir / "metrics.db"
    init_db(db_path=db_path)

    now = BASE_TS
    day_seconds = 86400

    sample_companion_metrics = _sample_companion_metrics()
    sample_repeater_metrics = _sample_repeater_metrics()

    # Insert 7 days of companion data (every hour)
    for day in range(7):
        for hour in range(24):
            ts = now - (day * day_seconds) - (hour * 3600)
            metrics = sample_companion_metrics.copy()
            metrics["battery_mv"] = 3700 + (hour * 10) + (day * 5)
            metrics["recv"] = 100 * (day + 1) + hour
            metrics["sent"] = 50 * (day + 1) + hour
            insert_metrics(ts, "companion", metrics, db_path=db_path)

    # Insert 7 days of repeater data (every 15 minutes)
    for day in range(7):
        for interval in range(96):  # 24 * 4
            ts = now - (day * day_seconds) - (interval * 900)
            metrics = sample_repeater_metrics.copy()
            metrics["bat"] = 3700 + (interval * 2) + (day * 5)
            metrics["nb_recv"] = 1000 * (day + 1) + interval * 10
            metrics["nb_sent"] = 500 * (day + 1) + interval * 5
            insert_metrics(ts, "repeater", metrics, db_path=db_path)

    return {"state_dir": state_dir, "db_path": db_path}


@pytest.fixture
def html_env(html_db_cache, tmp_out_dir, monkeypatch):
    """Env with shared DB and per-test output directory."""
    monkeypatch.setenv("STATE_DIR", str(html_db_cache["state_dir"]))
    monkeypatch.setenv("OUT_DIR", str(tmp_out_dir))

    import meshmon.env
    meshmon.env._config = None

    return {"state_dir": html_db_cache["state_dir"], "out_dir": tmp_out_dir}


@pytest.fixture
def metrics_rows(html_env):
    """Get latest metrics rows for both roles."""
    companion_row = get_latest_metrics("companion")
    repeater_row = get_latest_metrics("repeater")
    return {"companion": companion_row, "repeater": repeater_row}


class TestWriteSite:
    """Tests for write_site function."""

    def test_creates_output_directory(self, html_env, metrics_rows):
        """Creates output directory if it doesn't exist."""
        out_dir = html_env["out_dir"]

        write_site(metrics_rows["companion"], metrics_rows["repeater"])

        assert out_dir.exists()

    def test_generates_repeater_pages(self, html_env, metrics_rows):
        """Generates repeater HTML pages at root."""
        out_dir = html_env["out_dir"]

        write_site(metrics_rows["companion"], metrics_rows["repeater"])

        # Repeater pages are at root
        for period in ["day", "week", "month", "year"]:
            assert (out_dir / f"{period}.html").exists()

    def test_generates_companion_pages(self, html_env, metrics_rows):
        """Generates companion HTML pages in subdirectory."""
        out_dir = html_env["out_dir"]

        write_site(metrics_rows["companion"], metrics_rows["repeater"])

        companion_dir = out_dir / "companion"
        assert companion_dir.exists()
        for period in ["day", "week", "month", "year"]:
            assert (companion_dir / f"{period}.html").exists()

    def test_html_files_are_valid(self, html_env, metrics_rows):
        """Generated HTML files have valid structure."""
        out_dir = html_env["out_dir"]

        write_site(metrics_rows["companion"], metrics_rows["repeater"])

        html_file = out_dir / "day.html"
        content = html_file.read_text()

        assert "<!DOCTYPE html>" in content or "<!doctype html>" in content.lower()
        assert "</html>" in content

    def test_handles_empty_database(self, configured_env, initialized_db):
        """Handles empty database gracefully."""
        out_dir = configured_env["out_dir"]

        # Should not raise - pass None for empty database
        write_site(None, None)

        # Should still generate pages
        assert (out_dir / "day.html").exists()


class TestCopyStaticAssets:
    """Tests for copy_static_assets function."""

    def test_copies_css(self, html_env):
        """Copies CSS stylesheet."""
        out_dir = html_env["out_dir"]
        out_dir.mkdir(parents=True, exist_ok=True)

        copy_static_assets()

        css_file = out_dir / "styles.css"
        assert css_file.exists()

    def test_copies_javascript(self, html_env):
        """Copies JavaScript files."""
        out_dir = html_env["out_dir"]
        out_dir.mkdir(parents=True, exist_ok=True)

        copy_static_assets()

        js_file = out_dir / "chart-tooltip.js"
        assert js_file.exists()

    def test_css_is_valid(self, html_env):
        """Copied CSS has expected content."""
        out_dir = html_env["out_dir"]
        out_dir.mkdir(parents=True, exist_ok=True)

        copy_static_assets()

        css_file = out_dir / "styles.css"
        content = css_file.read_text()

        assert "--bg-primary" in content

    def test_requires_output_directory(self, html_env):
        """Requires output directory to exist."""
        out_dir = html_env["out_dir"]
        # Ensure out_dir exists
        out_dir.mkdir(parents=True, exist_ok=True)

        # Should not raise when directory exists
        copy_static_assets()

        assert (out_dir / "styles.css").exists()

    def test_overwrites_existing(self, html_env):
        """Overwrites existing static files."""
        out_dir = html_env["out_dir"]
        out_dir.mkdir(parents=True, exist_ok=True)

        # Create a fake CSS file
        css_file = out_dir / "styles.css"
        css_file.write_text("/* fake */")

        copy_static_assets()

        # Should be overwritten with real content
        content = css_file.read_text()
        assert content != "/* fake */"


class TestHtmlOutput:
    """Tests for HTML output structure."""

    def test_pages_include_navigation(self, html_env, metrics_rows):
        """HTML pages include navigation."""
        out_dir = html_env["out_dir"]
|
||||
write_site(metrics_rows["companion"], metrics_rows["repeater"])
|
||||
|
||||
content = (out_dir / "day.html").read_text()
|
||||
|
||||
# Should have links to other periods
|
||||
assert "week" in content.lower()
|
||||
assert "month" in content.lower()
|
||||
|
||||
def test_pages_include_meta_tags(self, html_env, metrics_rows):
|
||||
"""HTML pages include meta tags."""
|
||||
out_dir = html_env["out_dir"]
|
||||
|
||||
write_site(metrics_rows["companion"], metrics_rows["repeater"])
|
||||
|
||||
content = (out_dir / "day.html").read_text()
|
||||
|
||||
assert "<meta" in content
|
||||
assert "charset" in content.lower() or "utf-8" in content.lower()
|
||||
|
||||
def test_pages_include_title(self, html_env, metrics_rows):
|
||||
"""HTML pages include title tag."""
|
||||
out_dir = html_env["out_dir"]
|
||||
|
||||
write_site(metrics_rows["companion"], metrics_rows["repeater"])
|
||||
|
||||
content = (out_dir / "day.html").read_text()
|
||||
|
||||
assert "<title>" in content
|
||||
assert "</title>" in content
|
||||
|
||||
def test_pages_reference_css(self, html_env, metrics_rows):
|
||||
"""HTML pages reference stylesheet."""
|
||||
out_dir = html_env["out_dir"]
|
||||
|
||||
write_site(metrics_rows["companion"], metrics_rows["repeater"])
|
||||
|
||||
content = (out_dir / "day.html").read_text()
|
||||
|
||||
assert "styles.css" in content
|
||||
|
||||
def test_companion_pages_relative_css(self, html_env, metrics_rows):
|
||||
"""Companion pages use relative path to CSS."""
|
||||
out_dir = html_env["out_dir"]
|
||||
|
||||
write_site(metrics_rows["companion"], metrics_rows["repeater"])
|
||||
|
||||
content = (out_dir / "companion" / "day.html").read_text()
|
||||
|
||||
# Should reference parent directory CSS
|
||||
assert "../styles.css" in content or "styles.css" in content
|
||||
1  tests/integration/__init__.py  Normal file
@@ -0,0 +1 @@
"""Integration tests for end-to-end pipelines."""
278  tests/integration/conftest.py  Normal file
@@ -0,0 +1,278 @@
"""Integration test fixtures."""

import os
import time
from unittest.mock import AsyncMock, MagicMock

import pytest

_INTEGRATION_ENV = {
    "REPORT_LOCATION_NAME": "Test Location",
    "REPORT_LOCATION_SHORT": "Test",
    "REPEATER_DISPLAY_NAME": "Test Repeater",
    "COMPANION_DISPLAY_NAME": "Test Companion",
    "MESH_TRANSPORT": "serial",
    "MESH_SERIAL_PORT": "/dev/ttyACM0",
}
RENDERED_CHART_METRICS = {
    "companion": ["battery_mv", "recv", "contacts"],
    "repeater": ["bat", "nb_recv", "last_rssi"],
}


def _sample_companion_metrics() -> dict[str, float]:
    return {
        "battery_mv": 3850.0,
        "uptime_secs": 86400.0,
        "contacts": 5.0,
        "recv": 1234.0,
        "sent": 567.0,
        "errors": 0.0,
    }


def _sample_repeater_metrics() -> dict[str, float]:
    return {
        "bat": 3920.0,
        "uptime": 172800.0,
        "last_rssi": -85.0,
        "last_snr": 7.5,
        "noise_floor": -115.0,
        "tx_queue_len": 0.0,
        "nb_recv": 5678.0,
        "nb_sent": 2345.0,
        "airtime": 3600.0,
        "rx_airtime": 7200.0,
        "flood_dups": 12.0,
        "direct_dups": 5.0,
        "sent_flood": 100.0,
        "recv_flood": 200.0,
        "sent_direct": 50.0,
        "recv_direct": 75.0,
    }


def _populate_db_with_history(
    db_path,
    sample_companion_metrics: dict[str, float],
    sample_repeater_metrics: dict[str, float],
    days: int = 30,
    companion_step_seconds: int = 3600,
    repeater_step_seconds: int = 900,
) -> None:
    from meshmon.db import insert_metrics

    now = int(time.time())
    day_seconds = 86400
    companion_steps = day_seconds // companion_step_seconds
    repeater_steps = day_seconds // repeater_step_seconds

    # Insert companion data (default: 30 days, hourly)
    for day in range(days):
        for step in range(companion_steps):
            ts = now - (day * day_seconds) - (step * companion_step_seconds)
            metrics = sample_companion_metrics.copy()
            # Vary values to create realistic patterns
            metrics["battery_mv"] = 3700 + (step * 5) + (day % 7) * 10
            metrics["recv"] = 100 + day * 10 + step
            metrics["sent"] = 50 + day * 5 + step
            metrics["uptime_secs"] = (days - day) * day_seconds + step * companion_step_seconds
            insert_metrics(ts, "companion", metrics, db_path=db_path)

    # Insert repeater data (default: 30 days, every 15 minutes)
    for day in range(days):
        for interval in range(repeater_steps):
            ts = now - (day * day_seconds) - (interval * repeater_step_seconds)
            metrics = sample_repeater_metrics.copy()
            # Vary values to create realistic patterns
            metrics["bat"] = 3800 + (interval % 24) * 5 + (day % 7) * 10
            metrics["nb_recv"] = 1000 + day * 100 + interval
            metrics["nb_sent"] = 500 + day * 50 + interval
            metrics["uptime"] = (days - day) * day_seconds + interval * repeater_step_seconds
            metrics["last_rssi"] = -90 + (interval % 20)
            metrics["last_snr"] = 5 + (interval % 10) * 0.5
            insert_metrics(ts, "repeater", metrics, db_path=db_path)


@pytest.fixture
def reports_env(reports_db_cache, tmp_out_dir, monkeypatch):
    """Integration env wired to the shared reports DB and per-test output."""
    monkeypatch.setenv("STATE_DIR", str(reports_db_cache["state_dir"]))
    monkeypatch.setenv("OUT_DIR", str(tmp_out_dir))
    for key, value in _INTEGRATION_ENV.items():
        monkeypatch.setenv(key, value)

    import meshmon.env
    meshmon.env._config = None

    return {
        "state_dir": reports_db_cache["state_dir"],
        "out_dir": tmp_out_dir,
    }


@pytest.fixture(scope="session")
def rendered_chart_metrics():
    """Minimal chart set to keep integration rendering tests fast."""
    return RENDERED_CHART_METRICS


@pytest.fixture
def populated_db_with_history(reports_db_cache, reports_env):
    """Shared database populated with a fixed history window for integration tests."""
    return reports_db_cache["db_path"]


@pytest.fixture(scope="module")
def reports_db_cache(tmp_path_factory):
    """Create and populate a shared reports DB once per module."""
    from meshmon.db import init_db

    root_dir = tmp_path_factory.mktemp("reports-db")
    state_dir = root_dir / "state"
    state_dir.mkdir()

    db_path = state_dir / "metrics.db"
    init_db(db_path=db_path)
    _populate_db_with_history(
        db_path,
        _sample_companion_metrics(),
        _sample_repeater_metrics(),
        days=14,
        companion_step_seconds=7200,
        repeater_step_seconds=7200,
    )

    return {
        "state_dir": state_dir,
        "db_path": db_path,
    }


@pytest.fixture(scope="module")
def rendered_charts_cache(tmp_path_factory):
    """Cache rendered charts once per module to speed up integration tests."""
    from meshmon.charts import render_all_charts, save_chart_stats
    from meshmon.db import init_db

    root_dir = tmp_path_factory.mktemp("rendered-charts")
    state_dir = root_dir / "state"
    out_dir = root_dir / "out"
    state_dir.mkdir()
    out_dir.mkdir()

    env_keys = ["STATE_DIR", "OUT_DIR", *_INTEGRATION_ENV.keys()]
    previous_env = {key: os.environ.get(key) for key in env_keys}

    os.environ["STATE_DIR"] = str(state_dir)
    os.environ["OUT_DIR"] = str(out_dir)
    for key, value in _INTEGRATION_ENV.items():
        os.environ[key] = value

    import meshmon.env
    meshmon.env._config = None

    db_path = state_dir / "metrics.db"
    init_db(db_path=db_path)
    _populate_db_with_history(
        db_path,
        _sample_companion_metrics(),
        _sample_repeater_metrics(),
        days=7,
        companion_step_seconds=3600,
        repeater_step_seconds=3600,
    )

    for role in ["companion", "repeater"]:
        charts, stats = render_all_charts(role, metrics=RENDERED_CHART_METRICS[role])
        save_chart_stats(role, stats)

    yield {
        "state_dir": state_dir,
        "out_dir": out_dir,
        "db_path": db_path,
    }

    for key, value in previous_env.items():
        if value is None:
            os.environ.pop(key, None)
        else:
            os.environ[key] = value

    meshmon.env._config = None


@pytest.fixture
def rendered_charts(rendered_charts_cache, monkeypatch):
    """Expose cached charts with env wired for per-test access."""
    state_dir = rendered_charts_cache["state_dir"]
    out_dir = rendered_charts_cache["out_dir"]

    monkeypatch.setenv("STATE_DIR", str(state_dir))
    monkeypatch.setenv("OUT_DIR", str(out_dir))
    for key, value in _INTEGRATION_ENV.items():
        monkeypatch.setenv(key, value)

    import meshmon.env
    meshmon.env._config = None

    return rendered_charts_cache


@pytest.fixture
def mock_meshcore_successful_collection(sample_companion_metrics):
    """Mock MeshCore client that returns successful responses."""
    mc = MagicMock()
    mc.commands = MagicMock()
    mc.contacts = {}
    mc.disconnect = AsyncMock()

    # Helper to create successful event
    def make_event(event_type: str, payload: dict):
        event = MagicMock()
        event.type = MagicMock()
        event.type.name = event_type
        event.payload = payload
        return event

    # Mock all commands to return success - use AsyncMock directly without invoking
    mc.commands.send_appstart = AsyncMock(return_value=make_event("SELF_INFO", {}))
    mc.commands.send_device_query = AsyncMock(return_value=make_event("DEVICE_INFO", {}))
    mc.commands.get_time = AsyncMock(return_value=make_event("TIME", {"time": 1234567890}))
    mc.commands.get_self_telemetry = AsyncMock(return_value=make_event("TELEMETRY", {}))
    mc.commands.get_custom_vars = AsyncMock(return_value=make_event("CUSTOM_VARS", {}))
    mc.commands.get_contacts = AsyncMock(
        return_value=make_event("CONTACTS", {"contact1": {}, "contact2": {}})
    )
    mc.commands.get_stats_core = AsyncMock(
        return_value=make_event(
            "STATS_CORE",
            {"battery_mv": sample_companion_metrics["battery_mv"], "uptime_secs": 86400},
        )
    )
    mc.commands.get_stats_radio = AsyncMock(
        return_value=make_event("STATS_RADIO", {"noise_floor": -115, "last_rssi": -85})
    )
    mc.commands.get_stats_packets = AsyncMock(
        return_value=make_event(
            "STATS_PACKETS",
            {"recv": sample_companion_metrics["recv"], "sent": sample_companion_metrics["sent"]},
        )
    )

    return mc


@pytest.fixture
def full_integration_env(configured_env, monkeypatch):
    """Full integration environment with per-test directories."""
    for key, value in _INTEGRATION_ENV.items():
        monkeypatch.setenv(key, value)

    import meshmon.env
    meshmon.env._config = None

    return {
        "state_dir": configured_env["state_dir"],
        "out_dir": configured_env["out_dir"],
    }
184  tests/integration/test_collection_pipeline.py  Normal file
@@ -0,0 +1,184 @@
"""Integration tests for data collection pipeline."""

from contextlib import asynccontextmanager
from unittest.mock import patch

import pytest

from tests.scripts.conftest import load_script_module

BASE_TS = 1704067200


@pytest.mark.integration
class TestCompanionCollectionPipeline:
    """Test companion collection end-to-end."""

    @pytest.mark.asyncio
    async def test_successful_collection_stores_metrics(
        self,
        mock_meshcore_successful_collection,
        full_integration_env,
        monkeypatch,
    ):
        """Successful collection should store all metrics in database."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)

        # Mock connect_with_lock to return our mock client
        @asynccontextmanager
        async def mock_connect_with_lock(*args, **kwargs):
            yield mock_meshcore_successful_collection

        with patch(
            "meshmon.meshcore_client.connect_with_lock",
            mock_connect_with_lock,
        ):
            # Initialize database
            from meshmon.db import get_latest_metrics, init_db

            init_db()

            # Import and run collection (inline to avoid import issues)
            # Note: We import the function directly rather than the script
            from meshmon.db import insert_metrics

            # Simulate collection logic
            ts = BASE_TS
            metrics = {}

            async with mock_connect_with_lock() as mc:
                assert mc is not None

                # Get stats_core
                event = await mc.commands.get_stats_core()
                if event and hasattr(event, "payload") and isinstance(event.payload, dict):
                    for key, value in event.payload.items():
                        if isinstance(value, (int, float)):
                            metrics[key] = float(value)

                # Get stats_packets
                event = await mc.commands.get_stats_packets()
                if event and hasattr(event, "payload") and isinstance(event.payload, dict):
                    for key, value in event.payload.items():
                        if isinstance(value, (int, float)):
                            metrics[key] = float(value)

                # Get contacts
                event = await mc.commands.get_contacts()
                if event and hasattr(event, "payload"):
                    contacts_count = len(event.payload) if event.payload else 0
                    metrics["contacts"] = float(contacts_count)

            # Insert metrics
            inserted = insert_metrics(ts=ts, role="companion", metrics=metrics)
            assert inserted > 0

            # Verify data was stored
            latest = get_latest_metrics("companion")
            assert latest is not None
            assert "battery_mv" in latest
            assert "recv" in latest
            assert "sent" in latest

    @pytest.mark.asyncio
    async def test_collection_fails_gracefully_on_connection_error(
        self, full_integration_env, monkeypatch
    ):
        """Collection should fail gracefully when connection fails."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)

        @asynccontextmanager
        async def mock_connect_with_lock_failing(*args, **kwargs):
            yield None

        with patch(
            "meshmon.meshcore_client.connect_with_lock",
            mock_connect_with_lock_failing,
        ):
            from meshmon.db import get_latest_metrics, init_db

            init_db()

            # Simulate collection with failed connection
            async with mock_connect_with_lock_failing() as mc:
                assert mc is None

            # Database should be empty
            latest = get_latest_metrics("companion")
            assert latest is None


@pytest.mark.integration
class TestCollectionWithCircuitBreaker:
    """Test collection with circuit breaker integration."""

    @pytest.mark.asyncio
    async def test_circuit_breaker_prevents_collection_when_open(
        self, full_integration_env, monkeypatch
    ):
        """Collection should be skipped when circuit breaker is open."""
        from meshmon.retry import CircuitBreaker

        # Create an open circuit breaker
        state_dir = full_integration_env["state_dir"]
        cb = CircuitBreaker(state_dir / "repeater_circuit.json")
        cb.record_failure(max_failures=1, cooldown_s=3600)

        # Verify circuit is open
        assert cb.is_open() is True

        module = load_script_module("collect_repeater.py")
        connect_called = False

        @asynccontextmanager
        async def mock_connect_with_lock(*args, **kwargs):
            nonlocal connect_called
            connect_called = True
            yield None

        monkeypatch.setattr(module, "connect_with_lock", mock_connect_with_lock)

        result = await module.collect_repeater()

        assert result == 0
        assert connect_called is False

    @pytest.mark.asyncio
    async def test_circuit_breaker_records_failure(self, full_integration_env, monkeypatch):
        """Circuit breaker should record failures."""
        from meshmon.retry import CircuitBreaker

        state_dir = full_integration_env["state_dir"]
        cb = CircuitBreaker(state_dir / "test_circuit.json")

        assert cb.consecutive_failures == 0

        # Record failures (requires max_failures and cooldown_s args)
        cb.record_failure(max_failures=5, cooldown_s=60)
        cb.record_failure(max_failures=5, cooldown_s=60)
        cb.record_failure(max_failures=5, cooldown_s=60)

        assert cb.consecutive_failures == 3

        # Success resets counter
        cb.record_success()
        assert cb.consecutive_failures == 0

    @pytest.mark.asyncio
    async def test_circuit_breaker_state_persists(self, full_integration_env):
        """Circuit breaker state should persist to disk."""
        from meshmon.retry import CircuitBreaker

        state_dir = full_integration_env["state_dir"]
        state_file = state_dir / "persist_test_circuit.json"

        # Create and configure circuit breaker
        cb1 = CircuitBreaker(state_file)
        cb1.record_failure(max_failures=1, cooldown_s=1800)

        # Load in new instance
        cb2 = CircuitBreaker(state_file)

        assert cb2.consecutive_failures == 1
        assert cb2.cooldown_until == cb1.cooldown_until
231  tests/integration/test_rendering_pipeline.py  Normal file
@@ -0,0 +1,231 @@
"""Integration tests for chart and HTML rendering pipeline."""

import contextlib

import pytest


@pytest.mark.integration
class TestChartRenderingPipeline:
    """Test chart rendering end-to-end."""

    def test_renders_all_chart_periods(self, rendered_charts):
        """Should render charts for all periods (day/week/month/year)."""
        out_dir = rendered_charts["out_dir"]

        for role in ["companion", "repeater"]:
            assets_dir = out_dir / "assets" / role
            assert assets_dir.exists()

            for period in ["day", "week", "month", "year"]:
                period_svgs = list(assets_dir.glob(f"*_{period}_*.svg"))
                assert period_svgs, f"No {period} charts found for {role}"

    def test_chart_files_created(self, rendered_charts):
        """Should create SVG chart files in output directory."""
        out_dir = rendered_charts["out_dir"]

        # Check SVG files exist
        assets_dir = out_dir / "assets" / "repeater"
        assert assets_dir.exists()

        # Should have SVG files
        svg_files = list(assets_dir.glob("*.svg"))
        assert len(svg_files) > 0

        # Check stats file exists
        stats_file = assets_dir / "chart_stats.json"
        assert stats_file.exists()

    def test_chart_statistics_calculated(self, rendered_charts):
        """Should calculate correct statistics for charts."""
        from meshmon.charts import load_chart_stats

        # Load and verify stats
        loaded_stats = load_chart_stats("repeater")

        assert loaded_stats is not None

        # Check that stats have expected structure
        # Stats are nested: {metric_name: {period: {min, max, avg, current}}}
        for _metric_name, metric_stats in loaded_stats.items():
            if metric_stats:  # Skip empty stats
                # Each metric has period keys like 'day', 'week', 'month', 'year'
                for _period, period_stats in metric_stats.items():
                    if period_stats:
                        assert "min" in period_stats
                        assert "max" in period_stats
                        assert "avg" in period_stats
                        assert "current" in period_stats


@pytest.mark.integration
class TestHtmlRenderingPipeline:
    """Test HTML site rendering end-to-end."""

    def test_renders_site_pages(self, rendered_charts):
        """Should render all HTML site pages."""
        from meshmon.db import get_latest_metrics
        from meshmon.html import write_site

        out_dir = rendered_charts["out_dir"]

        # Get latest metrics for write_site
        companion_row = get_latest_metrics("companion")
        repeater_row = get_latest_metrics("repeater")

        # Render site
        write_site(companion_row, repeater_row)

        # Check main pages exist
        assert (out_dir / "day.html").exists()
        assert (out_dir / "week.html").exists()
        assert (out_dir / "month.html").exists()
        assert (out_dir / "year.html").exists()

        # Check companion pages exist
        assert (out_dir / "companion" / "day.html").exists()
        assert (out_dir / "companion" / "week.html").exists()
        assert (out_dir / "companion" / "month.html").exists()
        assert (out_dir / "companion" / "year.html").exists()

    def test_copies_static_assets(self, full_integration_env):
        """Should copy static assets (CSS, JS)."""
        from meshmon.html import copy_static_assets

        out_dir = full_integration_env["out_dir"]

        copy_static_assets()

        # Check static files exist
        assert (out_dir / "styles.css").exists()
        assert (out_dir / "chart-tooltip.js").exists()

    def test_html_contains_chart_data(self, rendered_charts):
        """HTML should contain embedded chart SVGs."""
        from meshmon.db import get_latest_metrics
        from meshmon.html import write_site

        out_dir = rendered_charts["out_dir"]

        # Get latest metrics for write_site
        companion_row = get_latest_metrics("companion")
        repeater_row = get_latest_metrics("repeater")

        # Render site
        write_site(companion_row, repeater_row)

        # Check HTML contains SVG
        day_html = (out_dir / "day.html").read_text()

        # Should contain SVG elements
        assert "<svg" in day_html
        # Should contain chart data attributes
        assert "data-metric" in day_html
        assert "data-points" in day_html

    def test_html_has_correct_status_indicator(self, rendered_charts):
        """HTML should have correct status indicator based on data freshness."""
        from meshmon.db import get_latest_metrics
        from meshmon.html import write_site

        out_dir = rendered_charts["out_dir"]

        # Get latest metrics for write_site
        companion_row = get_latest_metrics("companion")
        repeater_row = get_latest_metrics("repeater")

        write_site(companion_row, repeater_row)

        # Check status indicator exists
        day_html = (out_dir / "day.html").read_text()

        assert "status-badge" in day_html
        assert any(label in day_html for label in ["Online", "Stale", "Offline"])


@pytest.mark.integration
class TestFullRenderingChain:
    """Test complete rendering chain: data -> charts -> HTML."""

    def test_full_chain_from_database_to_html(self, rendered_charts):
        """Complete chain: database metrics -> charts -> HTML site."""
        from meshmon.db import get_latest_metrics, get_metric_count
        from meshmon.html import copy_static_assets, write_site

        out_dir = rendered_charts["out_dir"]

        # 1. Verify database has data
        assert get_metric_count("repeater") > 0
        assert get_metric_count("companion") > 0

        # 2. Verify rendered charts exist for both roles
        for role in ["repeater", "companion"]:
            assets_dir = out_dir / "assets" / role
            svg_files = list(assets_dir.glob("*.svg"))
            assert svg_files, f"No charts found for {role}"

        # 3. Copy static assets
        copy_static_assets()

        # 4. Get latest metrics for write_site
        companion_row = get_latest_metrics("companion")
        repeater_row = get_latest_metrics("repeater")

        # 5. Render HTML site
        write_site(companion_row, repeater_row)

        # 6. Verify output structure
        assert (out_dir / "day.html").exists()
        assert (out_dir / "styles.css").exists()
        assert (out_dir / "chart-tooltip.js").exists()
        assert (out_dir / "assets" / "repeater").exists()
        assert (out_dir / "assets" / "companion").exists()

        # 7. Verify HTML is valid (basic check)
        html_content = (out_dir / "day.html").read_text()
        assert "<!DOCTYPE html>" in html_content or "<!doctype html>" in html_content.lower()
        assert "</html>" in html_content

    def test_empty_database_renders_gracefully(
        self,
        full_integration_env,
        rendered_chart_metrics,
    ):
        """Should handle empty database gracefully."""
        from meshmon.charts import render_all_charts, save_chart_stats
        from meshmon.db import get_latest_metrics, get_metric_count, init_db
        from meshmon.html import copy_static_assets, write_site

        # Initialize empty database
        init_db()

        # Verify no data
        assert get_metric_count("repeater") == 0
        assert get_metric_count("companion") == 0

        # Rendering with no data should not crash
        for role in ["repeater", "companion"]:
            charts, stats = render_all_charts(role, metrics=rendered_chart_metrics[role])
            save_chart_stats(role, stats)
            # Should have no charts (or empty charts)
            # The important thing is it doesn't crash

        copy_static_assets()

        # Get empty metrics
        companion_row = get_latest_metrics("companion")
        repeater_row = get_latest_metrics("repeater")

        # Site rendering might fail or show "no data" - verify it handles gracefully
        # Some implementations might raise an exception for empty data - acceptable
        with contextlib.suppress(Exception):
            write_site(companion_row, repeater_row)
381  tests/integration/test_reports_pipeline.py  Normal file
@@ -0,0 +1,381 @@
"""Integration tests for report generation pipeline."""

import calendar
import json
from datetime import datetime

import pytest

BASE_TS = 1704067200


@pytest.mark.integration
class TestReportGenerationPipeline:
    """Test report generation end-to-end."""

    def test_generates_monthly_reports(self, populated_db_with_history, reports_env):
        """Should generate monthly reports for available data."""
        from meshmon.html import render_report_page
        from meshmon.reports import aggregate_monthly, format_monthly_txt, get_available_periods

        # Get available periods
        periods = get_available_periods("repeater")
        assert periods

        # Get the current month (should have data)
        year, month = periods[-1]
        month_name = calendar.month_name[month]

        # Aggregate monthly data
        agg = aggregate_monthly("repeater", year, month)

        assert agg is not None
        assert agg.year == year
        assert agg.month == month
        assert agg.role == "repeater"
        assert agg.daily
        assert agg.summary["bat"].count > 0
        assert agg.summary["bat"].min_value is not None
        assert agg.summary["nb_recv"].total is not None
        assert agg.summary["nb_recv"].count > 0

        # Generate TXT report
        from meshmon.reports import LocationInfo

        location = LocationInfo(
            name="Test Location",
            lat=52.0,
            lon=4.0,
            elev=10.0,
        )
        txt_report = format_monthly_txt(agg, "Test Repeater", location)

        assert txt_report is not None
        assert len(txt_report) > 0
        assert f"MONTHLY MESHCORE REPORT for {month_name} {year}" in txt_report
        assert "NODE: Test Repeater" in txt_report
        assert "NAME: Test Location" in txt_report

        # Generate HTML report
        html_report = render_report_page(agg, "Test Repeater", "monthly")

        assert html_report is not None
        assert "<html" in html_report.lower()
        assert f"{month_name} {year}" in html_report
        assert "Test Repeater" in html_report

    def test_generates_yearly_reports(self, populated_db_with_history, reports_env):
        """Should generate yearly reports for available data."""
        from meshmon.html import render_report_page
        from meshmon.reports import aggregate_yearly, format_yearly_txt, get_available_periods

        # Get available periods
        periods = get_available_periods("repeater")
        assert len(periods) > 0

        # Get the current year
        year = periods[-1][0]

        # Aggregate yearly data
        agg = aggregate_yearly("repeater", year)

        assert agg is not None
        assert agg.year == year
        assert agg.role == "repeater"
        assert agg.monthly
        assert agg.summary["bat"].count > 0
        assert agg.summary["nb_recv"].total is not None

        # Generate TXT report
        from meshmon.reports import LocationInfo

        location = LocationInfo(
            name="Test Location",
            lat=52.0,
            lon=4.0,
            elev=10.0,
        )
        txt_report = format_yearly_txt(agg, "Test Repeater", location)

        assert txt_report is not None
        assert len(txt_report) > 0
        assert f"YEARLY MESHCORE REPORT for {year}" in txt_report
        assert "NODE: Test Repeater" in txt_report

        # Generate HTML report
        html_report = render_report_page(agg, "Test Repeater", "yearly")

        assert html_report is not None
        assert "<html" in html_report.lower()
        assert "Yearly report for Test Repeater" in html_report

    def test_generates_json_reports(self, populated_db_with_history, reports_env):
        """Should generate valid JSON reports."""
        from meshmon.reports import (
            aggregate_monthly,
            aggregate_yearly,
            get_available_periods,
            monthly_to_json,
            yearly_to_json,
        )

        periods = get_available_periods("repeater")
        year, month = periods[-1]

        # Monthly JSON
        monthly_agg = aggregate_monthly("repeater", year, month)
        monthly_json = monthly_to_json(monthly_agg)

        assert monthly_json is not None
        assert monthly_json["report_type"] == "monthly"
        assert "year" in monthly_json
        assert "month" in monthly_json
        assert monthly_json["role"] == "repeater"
        assert monthly_json["days_with_data"] == len(monthly_agg.daily)
        assert "daily" in monthly_json
        assert "bat" in monthly_json["summary"]

        # Verify it's valid JSON
        json_str = json.dumps(monthly_json)
        parsed = json.loads(json_str)
        assert parsed == monthly_json

        # Yearly JSON
        yearly_agg = aggregate_yearly("repeater", year)
        yearly_json = yearly_to_json(yearly_agg)

        assert yearly_json is not None
        assert yearly_json["report_type"] == "yearly"
|
||||
assert "year" in yearly_json
|
||||
assert yearly_json["role"] == "repeater"
|
||||
assert yearly_json["months_with_data"] == len(yearly_agg.monthly)
|
||||
assert "monthly" in yearly_json
|
||||
assert "bat" in yearly_json["summary"]
|
||||
|
||||
def test_report_files_created(self, populated_db_with_history, reports_env):
|
||||
"""Should create report files in correct directory structure."""
|
||||
from meshmon.html import render_report_page
|
||||
from meshmon.reports import (
|
||||
LocationInfo,
|
||||
aggregate_monthly,
|
||||
format_monthly_txt,
|
||||
get_available_periods,
|
||||
monthly_to_json,
|
||||
)
|
||||
|
||||
out_dir = reports_env["out_dir"]
|
||||
|
||||
periods = get_available_periods("repeater")
|
||||
year, month = periods[-1]
|
||||
month_name = calendar.month_name[month]
|
||||
|
||||
# Create output directory
|
||||
report_dir = out_dir / "reports" / "repeater" / str(year) / f"{month:02d}"
|
||||
report_dir.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
# Generate reports
|
||||
agg = aggregate_monthly("repeater", year, month)
|
||||
location = LocationInfo(name="Test", lat=0.0, lon=0.0, elev=0.0)
|
||||
|
||||
# Write files
|
||||
html = render_report_page(agg, "Test Repeater", "monthly")
|
||||
txt = format_monthly_txt(agg, "Test Repeater", location)
|
||||
json_data = monthly_to_json(agg)
|
||||
|
||||
(report_dir / "index.html").write_text(html, encoding="utf-8")
|
||||
(report_dir / "report.txt").write_text(txt, encoding="utf-8")
|
||||
(report_dir / "report.json").write_text(json.dumps(json_data), encoding="utf-8")
|
||||
|
||||
# Verify files exist
|
||||
assert (report_dir / "index.html").exists()
|
||||
assert (report_dir / "report.txt").exists()
|
||||
assert (report_dir / "report.json").exists()
|
||||
|
||||
# Verify content is not empty
|
||||
assert len((report_dir / "index.html").read_text()) > 0
|
||||
assert len((report_dir / "report.txt").read_text()) > 0
|
||||
assert len((report_dir / "report.json").read_text()) > 0
|
||||
assert f"{month_name} {year}" in (report_dir / "index.html").read_text()
|
||||
assert "NODE: Test Repeater" in (report_dir / "report.txt").read_text()
|
||||
|
||||
parsed_json = json.loads((report_dir / "report.json").read_text())
|
||||
assert parsed_json["report_type"] == "monthly"
|
||||
assert parsed_json["year"] == year
|
||||
assert parsed_json["month"] == month
|
||||
|
||||
|
||||
@pytest.mark.integration
|
||||
class TestReportsIndex:
|
||||
"""Test reports index page generation."""
|
||||
|
||||
def test_generates_reports_index(self, populated_db_with_history, reports_env):
|
||||
"""Should generate reports index with all available periods."""
|
||||
from meshmon.html import render_reports_index
|
||||
from meshmon.reports import get_available_periods
|
||||
|
||||
out_dir = reports_env["out_dir"]
|
||||
|
||||
# Build sections data (mimicking render_reports.py)
|
||||
sections = []
|
||||
latest_periods: dict[str, tuple[int, int]] = {}
|
||||
for role in ["repeater", "companion"]:
|
||||
periods = get_available_periods(role)
|
||||
|
||||
if not periods:
|
||||
sections.append({"role": role, "years": []})
|
||||
continue
|
||||
latest_periods[role] = periods[-1]
|
||||
|
||||
years_data = {}
|
||||
for year, month in periods:
|
||||
if year not in years_data:
|
||||
years_data[year] = []
|
||||
years_data[year].append(
|
||||
{
|
||||
"month": month,
|
||||
"name": calendar.month_name[month],
|
||||
}
|
||||
)
|
||||
|
||||
years = []
|
||||
for year in sorted(years_data.keys(), reverse=True):
|
||||
years.append(
|
||||
{
|
||||
"year": year,
|
||||
"months": sorted(years_data[year], key=lambda m: m["month"]),
|
||||
}
|
||||
)
|
||||
|
||||
sections.append({"role": role, "years": years})
|
||||
|
||||
# Render index
|
||||
html = render_reports_index(sections)
|
||||
|
||||
assert html is not None
|
||||
assert "<html" in html.lower()
|
||||
assert "reports archive" in html.lower()
|
||||
|
||||
for role, (year, month) in latest_periods.items():
|
||||
assert f"../reports/{role}/{year}/" in html
|
||||
assert f"../reports/{role}/{year}/{month:02d}/" in html
|
||||
|
||||
# Write and verify file
|
||||
reports_dir = out_dir / "reports"
|
||||
reports_dir.mkdir(parents=True, exist_ok=True)
|
||||
(reports_dir / "index.html").write_text(html)
|
||||
|
||||
assert (reports_dir / "index.html").exists()
|
||||
|
||||
|
||||
@pytest.mark.integration
|
||||
class TestCounterAggregation:
|
||||
"""Test counter metrics aggregation (handles reboots)."""
|
||||
|
||||
def test_counter_aggregation_handles_reboots(self, full_integration_env):
|
||||
"""Counter aggregation should correctly handle device reboots."""
|
||||
from meshmon.db import init_db, insert_metrics
|
||||
from meshmon.reports import aggregate_daily
|
||||
|
||||
init_db()
|
||||
|
||||
# Insert data with a simulated reboot
|
||||
day_start = BASE_TS - (BASE_TS % 86400)
|
||||
|
||||
# Before reboot: counter increases
|
||||
for i in range(10):
|
||||
ts = day_start + i * 900
|
||||
insert_metrics(
|
||||
ts, "repeater", {"nb_recv": float(100 + i * 10)} # 100, 110, 120, ..., 190
|
||||
)
|
||||
|
||||
# Reboot: counter resets
|
||||
insert_metrics(day_start + 10 * 900, "repeater", {"nb_recv": 0.0})
|
||||
|
||||
# After reboot: counter increases again
|
||||
for i in range(5):
|
||||
ts = day_start + (11 + i) * 900
|
||||
insert_metrics(ts, "repeater", {"nb_recv": float(i * 20)}) # 0, 20, 40, 60, 80
|
||||
|
||||
# Aggregate daily data
|
||||
dt = datetime.fromtimestamp(day_start)
|
||||
agg = aggregate_daily("repeater", dt.date())
|
||||
|
||||
# Should have data for nb_recv
|
||||
# The counter total should account for the reboot
|
||||
assert agg is not None
|
||||
assert agg.snapshot_count == 16
|
||||
stats = agg.metrics["nb_recv"]
|
||||
assert stats.count == 16
|
||||
assert stats.reboot_count == 1
|
||||
assert stats.total == 170
|
||||
|
||||
def test_gauge_aggregation_computes_stats(self, full_integration_env):
|
||||
"""Gauge metrics should compute min/max/avg correctly."""
|
||||
from meshmon.db import init_db, insert_metrics
|
||||
from meshmon.reports import aggregate_daily
|
||||
|
||||
init_db()
|
||||
|
||||
day_start = BASE_TS - (BASE_TS % 86400)
|
||||
|
||||
# Insert battery readings with known pattern
|
||||
values = [3.7, 3.8, 3.9, 4.0, 3.85] # min=3.7, max_value=4.0, avg≈3.85
|
||||
for i, val in enumerate(values):
|
||||
ts = day_start + i * 3600
|
||||
insert_metrics(ts, "repeater", {"bat": val * 1000}) # Store in mV
|
||||
|
||||
dt = datetime.fromtimestamp(day_start)
|
||||
agg = aggregate_daily("repeater", dt.date())
|
||||
|
||||
assert agg is not None
|
||||
assert agg.snapshot_count == len(values)
|
||||
stats = agg.metrics["bat"]
|
||||
assert stats.count == len(values)
|
||||
assert stats.min_value == 3700.0
|
||||
assert stats.max_value == 4000.0
|
||||
assert stats.mean == pytest.approx(3850.0)
|
||||
assert stats.min_time == datetime.fromtimestamp(day_start)
|
||||
assert stats.max_time == datetime.fromtimestamp(day_start + 3 * 3600)
|
||||
|
||||
|
||||
@pytest.mark.integration
|
||||
class TestReportConsistency:
|
||||
"""Test consistency across different report formats."""
|
||||
|
||||
def test_txt_json_html_contain_same_data(
|
||||
self, populated_db_with_history, reports_env
|
||||
):
|
||||
"""TXT, JSON, and HTML reports should contain consistent data."""
|
||||
from meshmon.html import render_report_page
|
||||
from meshmon.reports import (
|
||||
LocationInfo,
|
||||
aggregate_monthly,
|
||||
format_monthly_txt,
|
||||
get_available_periods,
|
||||
monthly_to_json,
|
||||
)
|
||||
|
||||
periods = get_available_periods("repeater")
|
||||
year, month = periods[-1]
|
||||
|
||||
agg = aggregate_monthly("repeater", year, month)
|
||||
location = LocationInfo(name="Test", lat=52.0, lon=4.0, elev=10.0)
|
||||
|
||||
txt = format_monthly_txt(agg, "Test Repeater", location)
|
||||
json_data = monthly_to_json(agg)
|
||||
html = render_report_page(agg, "Test Repeater", "monthly")
|
||||
|
||||
# All should reference the same year/month
|
||||
month_name = calendar.month_name[month]
|
||||
assert str(year) in txt
|
||||
assert json_data["year"] == year
|
||||
assert json_data["month"] == month
|
||||
assert json_data["role"] == "repeater"
|
||||
assert json_data["report_type"] == "monthly"
|
||||
assert str(year) in html
|
||||
assert f"{month_name} {year}" in html
|
||||
|
||||
# All should have the same number of days
|
||||
num_days = len(agg.daily)
|
||||
assert len(json_data["daily"]) == num_days
|
||||
assert json_data["days_with_data"] == num_days
|
||||
1 tests/reports/__init__.py Normal file
@@ -0,0 +1 @@
"""Tests for report generation."""
111 tests/reports/conftest.py Normal file
@@ -0,0 +1,111 @@
"""Fixtures for reports tests."""

from datetime import date, datetime, timedelta

import pytest


@pytest.fixture
def sample_daily_data():
    """Sample daily metrics data for report generation."""
    base_date = date(2024, 1, 15)
    return {
        "date": base_date,
        "bat": {
            "min": 3.5,
            "avg": 3.7,
            "max": 3.9,
            "count": 96,  # 15-min intervals for a day
        },
        "bat_pct": {
            "min": 50.0,
            "avg": 70.0,
            "max": 90.0,
            "count": 96,
        },
        "nb_recv": {
            "total": 12000,  # Counter total for the day
            "count": 96,
        },
        "nb_sent": {
            "total": 5000,
            "count": 96,
        },
    }


@pytest.fixture
def sample_monthly_data():
    """Sample monthly aggregated data."""
    return {
        "year": 2024,
        "month": 1,
        "bat": {
            "min": 3.3,
            "avg": 3.65,
            "max": 4.0,
            "count": 2976,  # ~31 days * 96 readings
        },
        "bat_pct": {
            "min": 40.0,
            "avg": 65.0,
            "max": 100.0,
            "count": 2976,
        },
        "nb_recv": {
            "total": 360000,
            "count": 2976,
        },
    }


@pytest.fixture
def sample_yearly_data():
    """Sample yearly aggregated data."""
    return {
        "year": 2024,
        "bat": {
            "min": 3.0,
            "avg": 3.6,
            "max": 4.2,
            "count": 35040,  # ~365 days * 96 readings
        },
        "nb_recv": {
            "total": 4320000,
            "count": 35040,
        },
    }


@pytest.fixture
def sample_counter_values():
    """Sample counter values with timestamps for reboot detection."""
    base_ts = datetime(2024, 1, 15, 0, 0, 0)
    return [
        (base_ts, 100),
        (base_ts + timedelta(minutes=15), 150),
        (base_ts + timedelta(minutes=30), 200),
        (base_ts + timedelta(minutes=45), 250),
        (base_ts + timedelta(hours=1), 300),
    ]


@pytest.fixture
def sample_counter_values_with_reboot():
    """Sample counter values with a device reboot."""
    base_ts = datetime(2024, 1, 15, 0, 0, 0)
    return [
        (base_ts, 100),
        (base_ts + timedelta(minutes=15), 150),
        (base_ts + timedelta(minutes=30), 200),
        (base_ts + timedelta(minutes=45), 50),  # Reboot! Counter reset
        (base_ts + timedelta(hours=1), 100),
    ]


@pytest.fixture
def reports_out_dir(configured_env):
    """Output directory for reports."""
    reports_dir = configured_env["out_dir"] / "reports"
    reports_dir.mkdir(parents=True, exist_ok=True)
    return reports_dir
215 tests/reports/test_aggregation.py Normal file
@@ -0,0 +1,215 @@
"""Tests for report data aggregation functions."""

from datetime import date, datetime

import pytest

from meshmon.db import insert_metrics
from meshmon.reports import (
    DailyAggregate,
    aggregate_daily,
    aggregate_monthly,
    aggregate_yearly,
    get_rows_for_date,
)

BASE_DATE = date(2024, 1, 15)
BASE_TS = int(datetime(2024, 1, 15, 0, 0, 0).timestamp())


class TestGetRowsForDate:
    """Tests for get_rows_for_date function."""

    def test_returns_list(self, initialized_db, configured_env):
        """Returns a list."""
        result = get_rows_for_date("repeater", BASE_DATE)
        assert isinstance(result, list)

    def test_filters_by_date(self, initialized_db, configured_env):
        """Only returns rows for the specified date."""
        # Insert data for different dates
        ts_jan14 = int(datetime(2024, 1, 14, 12, 0, 0).timestamp())
        ts_jan15 = int(datetime(2024, 1, 15, 12, 0, 0).timestamp())
        ts_jan16 = int(datetime(2024, 1, 16, 12, 0, 0).timestamp())

        insert_metrics(ts_jan14, "repeater", {"bat": 3800.0})
        insert_metrics(ts_jan15, "repeater", {"bat": 3850.0})
        insert_metrics(ts_jan16, "repeater", {"bat": 3900.0})

        result = get_rows_for_date("repeater", BASE_DATE)

        # Should have data for Jan 15 only
        assert len(result) == 1
        assert result[0]["ts"] == ts_jan15
        assert result[0]["bat"] == 3850.0

    def test_filters_by_role(self, initialized_db, configured_env):
        """Only returns rows for the specified role."""
        ts = int(datetime(2024, 1, 15, 12, 0, 0).timestamp())

        insert_metrics(ts, "repeater", {"bat": 3800.0})
        insert_metrics(ts, "companion", {"battery_mv": 3850.0})

        repeater_result = get_rows_for_date("repeater", BASE_DATE)
        companion_result = get_rows_for_date("companion", BASE_DATE)

        assert len(repeater_result) == 1
        assert "bat" in repeater_result[0]
        assert "battery_mv" not in repeater_result[0]
        assert len(companion_result) == 1
        assert "battery_mv" in companion_result[0]
        assert "bat" not in companion_result[0]

    def test_returns_empty_for_no_data(self, initialized_db, configured_env):
        """Returns empty list when no data for date."""
        result = get_rows_for_date("repeater", BASE_DATE)
        assert result == []


class TestAggregateDaily:
    """Tests for aggregate_daily function."""

    def test_returns_daily_aggregate(self, initialized_db, configured_env):
        """Returns a DailyAggregate."""
        result = aggregate_daily("repeater", BASE_DATE)
        assert isinstance(result, DailyAggregate)

    def test_calculates_gauge_stats(self, initialized_db, configured_env):
        """Calculates stats for gauge metrics."""
        # Insert several values
        for i, value in enumerate([3700.0, 3800.0, 3900.0, 4000.0]):
            insert_metrics(BASE_TS + i * 3600, "repeater", {"bat": value})

        result = aggregate_daily("repeater", BASE_DATE)

        assert "bat" in result.metrics
        bat_stats = result.metrics["bat"]
        assert bat_stats.count == 4
        assert bat_stats.min_value == 3700.0
        assert bat_stats.max_value == 4000.0
        assert bat_stats.mean == pytest.approx(3850.0)
        assert bat_stats.min_time == datetime.fromtimestamp(BASE_TS)
        assert bat_stats.max_time == datetime.fromtimestamp(BASE_TS + 3 * 3600)

    def test_calculates_counter_total(self, initialized_db, configured_env):
        """Calculates total for counter metrics."""
        # Insert increasing counter values
        for i in range(5):
            insert_metrics(BASE_TS + i * 900, "repeater", {"nb_recv": float(i * 100)})

        result = aggregate_daily("repeater", BASE_DATE)

        assert "nb_recv" in result.metrics
        counter_stats = result.metrics["nb_recv"]
        assert counter_stats.count == 5
        assert counter_stats.reboot_count == 0
        assert counter_stats.total == 400

    def test_returns_empty_for_no_data(self, initialized_db, configured_env):
        """Returns aggregate with empty metrics when no data."""
        result = aggregate_daily("repeater", BASE_DATE)

        assert isinstance(result, DailyAggregate)
        assert result.snapshot_count == 0
        assert result.metrics == {}


class TestAggregateMonthly:
    """Tests for aggregate_monthly function."""

    def test_returns_monthly_aggregate(self, initialized_db, configured_env):
        """Returns a MonthlyAggregate."""
        from meshmon.reports import MonthlyAggregate

        result = aggregate_monthly("repeater", 2024, 1)
        assert isinstance(result, MonthlyAggregate)

    def test_aggregates_all_days(self, initialized_db, configured_env):
        """Aggregates data from all days in month."""
        # Insert data for multiple days
        for day in [1, 5, 15, 20, 31]:
            ts = int(datetime(2024, 1, day, 12, 0, 0).timestamp())
            insert_metrics(ts, "repeater", {"bat": 3800.0 + day * 10})

        result = aggregate_monthly("repeater", 2024, 1)

        # Should have daily data
        assert result.year == 2024
        assert result.month == 1
        assert len(result.daily) == 5
        assert all(d.snapshot_count == 1 for d in result.daily)
        summary = result.summary["bat"]
        assert summary.count == 5
        assert summary.min_value == 3810.0
        assert summary.max_value == 4110.0
        assert summary.mean == pytest.approx(3944.0)
        assert summary.min_time.day == 1
        assert summary.max_time.day == 31

    def test_handles_partial_month(self, initialized_db, configured_env):
        """Handles months with partial data."""
        # Insert data for only a few days
        for day in [10, 11, 12]:
            ts = int(datetime(2024, 1, day, 12, 0, 0).timestamp())
            insert_metrics(ts, "repeater", {"bat": 3800.0})

        result = aggregate_monthly("repeater", 2024, 1)

        assert result.year == 2024
        assert result.month == 1
        assert len(result.daily) == 3
        summary = result.summary["bat"]
        assert summary.count == 3
        assert summary.mean == pytest.approx(3800.0)


class TestAggregateYearly:
    """Tests for aggregate_yearly function."""

    def test_returns_yearly_aggregate(self, initialized_db, configured_env):
        """Returns a YearlyAggregate."""
        from meshmon.reports import YearlyAggregate

        result = aggregate_yearly("repeater", 2024)
        assert isinstance(result, YearlyAggregate)

    def test_aggregates_all_months(self, initialized_db, configured_env):
        """Aggregates data from all months in year."""
        # Insert data for multiple months
        for month in [1, 3, 6, 12]:
            ts = int(datetime(2024, month, 15, 12, 0, 0).timestamp())
            insert_metrics(ts, "repeater", {"bat": 3800.0 + month * 10})

        result = aggregate_yearly("repeater", 2024)

        assert result.year == 2024
        # Should have monthly aggregates
        assert len(result.monthly) == 4
        summary = result.summary["bat"]
        assert summary.count == 4
        assert summary.min_value == 3810.0
        assert summary.max_value == 3920.0
        assert summary.mean == pytest.approx(3855.0)
        assert summary.min_time.month == 1
        assert summary.max_time.month == 12

    def test_returns_empty_for_no_data(self, initialized_db, configured_env):
        """Returns aggregate with empty monthly when no data."""
        result = aggregate_yearly("repeater", 2024)

        assert result.year == 2024
        # Empty year may have no monthly data
        assert result.monthly == []

    def test_handles_leap_year(self, initialized_db, configured_env):
        """Correctly handles leap years."""
        # Insert data for Feb 29 (2024 is a leap year)
        ts = int(datetime(2024, 2, 29, 12, 0, 0).timestamp())
        insert_metrics(ts, "repeater", {"bat": 3800.0})

        result = aggregate_yearly("repeater", 2024)

        assert result.year == 2024
        months = [monthly.month for monthly in result.monthly]
        assert 2 in months
        assert result.summary["bat"].count == 1
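The counter tests above pin down a specific reboot-handling rule: sum the positive deltas between consecutive readings, and when a reading decreases, treat it as a reboot and count the new reading itself as the delta since the counter restarted at zero. A minimal sketch of that rule (a hypothetical stand-alone helper, not the repository's actual `_compute_counter_stats`):

```python
def counter_total(samples):
    """Reboot-aware counter aggregation sketch.

    samples: list of (timestamp, value) pairs in time order.
    Sums positive deltas between consecutive readings; when a value
    decreases, assumes the device rebooted and the counter restarted
    at zero, so the new reading itself is counted as the delta.
    Returns (total, reboot_count); total is None with fewer than
    two samples, since no delta can be computed.
    """
    if len(samples) < 2:
        return None, 0
    total, reboots = 0, 0
    prev = samples[0][1]
    for _, value in samples[1:]:
        if value >= prev:
            total += value - prev
        else:  # reboot: counter reset, count the post-reset value
            reboots += 1
            total += value
        prev = value
    return total, reboots
```

With the reboot sequence from `test_handles_counter_reboot` (100, 150, 20, 50) this yields a total of 100 and one reboot, matching the expected assertions.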
424 tests/reports/test_aggregation_helpers.py Normal file
@@ -0,0 +1,424 @@
"""Tests for report aggregation helper functions."""

from datetime import date, datetime

import pytest

from meshmon.reports import (
    DailyAggregate,
    MetricStats,
    MonthlyAggregate,
    _aggregate_daily_counter_to_summary,
    _aggregate_daily_gauge_to_summary,
    _aggregate_monthly_counter_to_summary,
    _aggregate_monthly_gauge_to_summary,
    _compute_counter_stats,
    _compute_gauge_stats,
)


class TestComputeGaugeStats:
    """Tests for _compute_gauge_stats function."""

    def test_returns_metric_stats(self):
        """Returns a MetricStats dataclass."""
        values = [
            (datetime(2024, 1, 1, 0, 0), 3.8),
            (datetime(2024, 1, 1, 1, 0), 3.9),
            (datetime(2024, 1, 1, 2, 0), 4.0),
        ]
        result = _compute_gauge_stats(values)
        assert isinstance(result, MetricStats)

    def test_computes_min_max_mean(self):
        """Computes correct min, max, and mean."""
        values = [
            (datetime(2024, 1, 1, 0, 0), 3.8),
            (datetime(2024, 1, 1, 1, 0), 3.9),
            (datetime(2024, 1, 1, 2, 0), 4.0),
        ]
        result = _compute_gauge_stats(values)
        assert result.min_value == 3.8
        assert result.max_value == 4.0
        assert result.mean == pytest.approx(3.9)
        assert result.count == 3

    def test_handles_single_value(self):
        """Handles single value correctly."""
        values = [(datetime(2024, 1, 1, 0, 0), 3.85)]
        result = _compute_gauge_stats(values)
        assert result.min_value == 3.85
        assert result.max_value == 3.85
        assert result.mean == 3.85
        assert result.count == 1
        assert result.min_time == datetime(2024, 1, 1, 0, 0)
        assert result.max_time == datetime(2024, 1, 1, 0, 0)

    def test_handles_empty_list(self):
        """Handles empty list gracefully."""
        result = _compute_gauge_stats([])
        assert result.min_value is None
        assert result.max_value is None
        assert result.mean is None
        assert result.count == 0

    def test_tracks_count(self):
        """Tracks the number of values."""
        values = [
            (datetime(2024, 1, 1, i, 0), 3.8 + i * 0.01)
            for i in range(10)
        ]
        result = _compute_gauge_stats(values)
        assert result.count == 10

    def test_tracks_min_time(self):
        """Tracks timestamp of minimum value."""
        values = [
            (datetime(2024, 1, 1, 0, 0), 3.9),
            (datetime(2024, 1, 1, 1, 0), 3.7),  # Min
            (datetime(2024, 1, 1, 2, 0), 3.8),
        ]
        result = _compute_gauge_stats(values)
        assert result.min_time == datetime(2024, 1, 1, 1, 0)

    def test_tracks_max_time(self):
        """Tracks timestamp of maximum value."""
        values = [
            (datetime(2024, 1, 1, 0, 0), 3.9),
            (datetime(2024, 1, 1, 1, 0), 4.1),  # Max
            (datetime(2024, 1, 1, 2, 0), 3.8),
        ]
        result = _compute_gauge_stats(values)
        assert result.max_time == datetime(2024, 1, 1, 1, 0)


class TestComputeCounterStats:
    """Tests for _compute_counter_stats function."""

    def test_returns_metric_stats(self):
        """Returns a MetricStats dataclass."""
        values = [
            (datetime(2024, 1, 1, 0, 0), 100),
            (datetime(2024, 1, 1, 1, 0), 150),
            (datetime(2024, 1, 1, 2, 0), 200),
        ]
        result = _compute_counter_stats(values)
        assert isinstance(result, MetricStats)

    def test_computes_total_delta(self):
        """Computes total delta from counter values."""
        values = [
            (datetime(2024, 1, 1, 0, 0), 100),
            (datetime(2024, 1, 1, 1, 0), 150),  # +50
            (datetime(2024, 1, 1, 2, 0), 200),  # +50
        ]
        result = _compute_counter_stats(values)
        # Total should be 100 (50 + 50)
        assert result.total == 100
        assert result.count == 3
        assert result.reboot_count == 0

    def test_handles_counter_reboot(self):
        """Handles counter reboot (value decrease)."""
        values = [
            (datetime(2024, 1, 1, 0, 0), 100),
            (datetime(2024, 1, 1, 1, 0), 150),  # +50
            (datetime(2024, 1, 1, 2, 0), 20),  # Reboot - counts from 0
            (datetime(2024, 1, 1, 3, 0), 50),  # +30
        ]
        result = _compute_counter_stats(values)
        # Total: 50 + 20 + 30 = 100
        assert result.total == 100
        assert result.reboot_count == 1
        assert result.count == 4

    def test_tracks_reboot_count(self):
        """Tracks number of reboots."""
        values = [
            (datetime(2024, 1, 1, 0, 0), 100),
            (datetime(2024, 1, 1, 1, 0), 150),
            (datetime(2024, 1, 1, 2, 0), 20),  # Reboot 1
            (datetime(2024, 1, 1, 3, 0), 50),
            (datetime(2024, 1, 1, 4, 0), 10),  # Reboot 2
        ]
        result = _compute_counter_stats(values)
        assert result.reboot_count == 2
        assert result.total == 110
        assert result.count == 5

    def test_handles_empty_list(self):
        """Handles empty list gracefully."""
        result = _compute_counter_stats([])
        assert result.total is None
        assert result.count == 0
        assert result.reboot_count == 0

    def test_handles_single_value(self):
        """Handles single value (no delta possible)."""
        values = [(datetime(2024, 1, 1, 0, 0), 100)]
        result = _compute_counter_stats(values)
        # Single value means no delta can be computed
        assert result.total is None
        assert result.count == 1
        assert result.reboot_count == 0


class TestAggregateDailyGaugeToSummary:
    """Tests for _aggregate_daily_gauge_to_summary function."""

    @pytest.fixture
    def daily_gauge_data(self):
        """Sample daily gauge aggregates."""
        return [
            DailyAggregate(
                date=date(2024, 1, 1),
                metrics={
                    "battery": MetricStats(
                        min_value=3.7, min_time=datetime(2024, 1, 1, 3, 0),
                        max_value=3.9, max_time=datetime(2024, 1, 1, 15, 0),
                        mean=3.8, count=96
                    )
                }
            ),
            DailyAggregate(
                date=date(2024, 1, 2),
                metrics={
                    "battery": MetricStats(
                        min_value=3.6, min_time=datetime(2024, 1, 2, 4, 0),
                        max_value=4.0, max_time=datetime(2024, 1, 2, 12, 0),
                        mean=3.85, count=96
                    )
                }
            ),
            DailyAggregate(
                date=date(2024, 1, 3),
                metrics={
                    "battery": MetricStats(
                        min_value=3.8, min_time=datetime(2024, 1, 3, 2, 0),
                        max_value=4.1, max_time=datetime(2024, 1, 3, 18, 0),
                        mean=3.95, count=96
                    )
                }
            ),
        ]

    def test_returns_metric_stats(self, daily_gauge_data):
        """Returns a MetricStats object."""
        result = _aggregate_daily_gauge_to_summary(daily_gauge_data, "battery")
        assert isinstance(result, MetricStats)

    def test_finds_overall_min(self, daily_gauge_data):
        """Finds minimum across all days."""
        result = _aggregate_daily_gauge_to_summary(daily_gauge_data, "battery")
        assert result.min_value == 3.6
        assert result.min_time == datetime(2024, 1, 2, 4, 0)

    def test_finds_overall_max(self, daily_gauge_data):
        """Finds maximum across all days."""
        result = _aggregate_daily_gauge_to_summary(daily_gauge_data, "battery")
        assert result.max_value == 4.1
        assert result.max_time == datetime(2024, 1, 3, 18, 0)

    def test_computes_weighted_mean(self, daily_gauge_data):
        """Computes weighted mean based on count."""
        result = _aggregate_daily_gauge_to_summary(daily_gauge_data, "battery")
        # All have same count, so simple average: (3.8 + 3.85 + 3.95) / 3 = 3.8667
        assert result.mean == pytest.approx(3.8667, rel=0.01)
        assert result.count == 288

    def test_handles_empty_list(self):
        """Handles empty daily list."""
        result = _aggregate_daily_gauge_to_summary([], "battery")
        assert result.min_value is None
        assert result.max_value is None
        assert result.mean is None
        assert result.count == 0

    def test_handles_missing_metric(self, daily_gauge_data):
        """Handles when metric doesn't exist in daily data."""
        result = _aggregate_daily_gauge_to_summary(daily_gauge_data, "nonexistent")
        assert result.min_value is None
        assert result.max_value is None
        assert result.mean is None
        assert result.count == 0


class TestAggregateDailyCounterToSummary:
    """Tests for _aggregate_daily_counter_to_summary function."""

    @pytest.fixture
    def daily_counter_data(self):
        """Sample daily counter aggregates."""
        return [
            DailyAggregate(
                date=date(2024, 1, 1),
                metrics={
                    "packets_rx": MetricStats(total=1000, reboot_count=0, count=96)
                }
            ),
            DailyAggregate(
                date=date(2024, 1, 2),
                metrics={
                    "packets_rx": MetricStats(total=1500, reboot_count=1, count=96)
                }
            ),
            DailyAggregate(
                date=date(2024, 1, 3),
                metrics={
                    "packets_rx": MetricStats(total=800, reboot_count=0, count=96)
                }
            ),
        ]

    def test_returns_metric_stats(self, daily_counter_data):
        """Returns a MetricStats object."""
        result = _aggregate_daily_counter_to_summary(daily_counter_data, "packets_rx")
        assert isinstance(result, MetricStats)

    def test_sums_totals(self, daily_counter_data):
        """Sums totals across all days."""
        result = _aggregate_daily_counter_to_summary(daily_counter_data, "packets_rx")
        assert result.total == 3300  # 1000 + 1500 + 800
        assert result.count == 288

    def test_sums_reboots(self, daily_counter_data):
        """Sums reboot counts across all days."""
        result = _aggregate_daily_counter_to_summary(daily_counter_data, "packets_rx")
        assert result.reboot_count == 1

    def test_handles_empty_list(self):
        """Handles empty daily list."""
        result = _aggregate_daily_counter_to_summary([], "packets_rx")
        assert result.total is None
        assert result.count == 0
        assert result.reboot_count == 0

    def test_handles_missing_metric(self, daily_counter_data):
        """Handles when metric doesn't exist in daily data."""
        result = _aggregate_daily_counter_to_summary(daily_counter_data, "nonexistent")
        assert result.total is None
        assert result.count == 0
        assert result.reboot_count == 0


class TestAggregateMonthlyGaugeToSummary:
    """Tests for _aggregate_monthly_gauge_to_summary function."""

    @pytest.fixture
    def monthly_gauge_data(self):
        """Sample monthly gauge aggregates."""
        return [
            MonthlyAggregate(
                year=2024,
                month=1,
                role="companion",
                summary={
                    "battery": MetricStats(
                        min_value=3.6, min_time=datetime(2024, 1, 15, 4, 0),
                        max_value=4.0, max_time=datetime(2024, 1, 20, 14, 0),
                        mean=3.8, count=2976
                    )
                }
            ),
            MonthlyAggregate(
                year=2024,
                month=2,
                role="companion",
                summary={
                    "battery": MetricStats(
                        min_value=3.5, min_time=datetime(2024, 2, 10, 5, 0),
                        max_value=4.1, max_time=datetime(2024, 2, 25, 16, 0),
                        mean=3.9, count=2784
                    )
                }
            ),
        ]

    def test_returns_metric_stats(self, monthly_gauge_data):
|
||||
"""Returns a MetricStats object."""
|
||||
result = _aggregate_monthly_gauge_to_summary(monthly_gauge_data, "battery")
|
||||
assert isinstance(result, MetricStats)
|
||||
|
||||
def test_finds_overall_min(self, monthly_gauge_data):
|
||||
"""Finds minimum across all months."""
|
||||
result = _aggregate_monthly_gauge_to_summary(monthly_gauge_data, "battery")
|
||||
assert result.min_value == 3.5
|
||||
assert result.min_time == datetime(2024, 2, 10, 5, 0)
|
||||
|
||||
def test_finds_overall_max(self, monthly_gauge_data):
|
||||
"""Finds maximum across all months."""
|
||||
result = _aggregate_monthly_gauge_to_summary(monthly_gauge_data, "battery")
|
||||
assert result.max_value == 4.1
|
||||
assert result.max_time == datetime(2024, 2, 25, 16, 0)
|
||||
|
||||
def test_computes_weighted_mean(self, monthly_gauge_data):
|
||||
"""Computes weighted mean based on count."""
|
||||
result = _aggregate_monthly_gauge_to_summary(monthly_gauge_data, "battery")
|
||||
# Weighted: (3.8 * 2976 + 3.9 * 2784) / (2976 + 2784)
|
||||
expected = (3.8 * 2976 + 3.9 * 2784) / (2976 + 2784)
|
||||
assert result.mean == pytest.approx(expected, rel=0.01)
|
||||
assert result.count == 5760
|
||||
|
||||
def test_handles_empty_list(self):
|
||||
"""Handles empty monthly list."""
|
||||
result = _aggregate_monthly_gauge_to_summary([], "battery")
|
||||
assert result.min_value is None
|
||||
assert result.max_value is None
|
||||
assert result.mean is None
|
||||
assert result.count == 0
|
||||
|
||||
|
||||
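The gauge-summary tests above pin down three behaviours: the overall minimum and maximum carry their timestamps along, and the mean is weighted by each period's sample count. A minimal sketch of such an aggregator, under the assumption that per-period stats arrive as plain tuples (the project's real `_aggregate_monthly_gauge_to_summary` operates on `MetricStats` and may differ in detail):

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class GaugeSummary:
    min_value: Optional[float] = None
    min_time: Optional[datetime] = None
    max_value: Optional[float] = None
    max_time: Optional[datetime] = None
    mean: Optional[float] = None
    count: int = 0


def aggregate_gauge(stats_list):
    """Fold per-period gauge stats into one summary.

    Each item is (min_value, min_time, max_value, max_time, mean, count).
    The mean is weighted by each period's sample count, matching the
    (3.8 * 2976 + 3.9 * 2784) / (2976 + 2784) expectation in the tests.
    """
    out = GaugeSummary()
    weighted_sum = 0.0
    for mn, mn_t, mx, mx_t, mean, count in stats_list:
        if out.min_value is None or mn < out.min_value:
            out.min_value, out.min_time = mn, mn_t
        if out.max_value is None or mx > out.max_value:
            out.max_value, out.max_time = mx, mx_t
        weighted_sum += mean * count
        out.count += count
    if out.count:
        out.mean = weighted_sum / out.count
    return out
```

Weighting by count rather than averaging the per-month means matters whenever months contribute different numbers of samples, which is exactly what the January/February fixture (2976 vs. 2784 samples) exercises.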
class TestAggregateMonthlyCounterToSummary:
    """Tests for _aggregate_monthly_counter_to_summary function."""

    @pytest.fixture
    def monthly_counter_data(self):
        """Sample monthly counter aggregates."""
        return [
            MonthlyAggregate(
                year=2024,
                month=1,
                role="companion",
                summary={
                    "packets_rx": MetricStats(total=50000, reboot_count=2, count=2976)
                },
            ),
            MonthlyAggregate(
                year=2024,
                month=2,
                role="companion",
                summary={
                    "packets_rx": MetricStats(total=45000, reboot_count=1, count=2784)
                },
            ),
        ]

    def test_returns_metric_stats(self, monthly_counter_data):
        """Returns a MetricStats object."""
        result = _aggregate_monthly_counter_to_summary(monthly_counter_data, "packets_rx")
        assert isinstance(result, MetricStats)

    def test_sums_totals(self, monthly_counter_data):
        """Sums totals across all months."""
        result = _aggregate_monthly_counter_to_summary(monthly_counter_data, "packets_rx")
        assert result.total == 95000
        assert result.count == 5760

    def test_sums_reboots(self, monthly_counter_data):
        """Sums reboot counts across all months."""
        result = _aggregate_monthly_counter_to_summary(monthly_counter_data, "packets_rx")
        assert result.reboot_count == 3

    def test_handles_empty_list(self):
        """Handles empty monthly list."""
        result = _aggregate_monthly_counter_to_summary([], "packets_rx")
        assert result.total is None
        assert result.count == 0
        assert result.reboot_count == 0

    def test_handles_missing_metric(self, monthly_counter_data):
        """Handles when metric doesn't exist in monthly data."""
        result = _aggregate_monthly_counter_to_summary(monthly_counter_data, "nonexistent")
        assert result.total is None
        assert result.count == 0
        assert result.reboot_count == 0

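For counters the fold the tests describe is simpler: totals, reboot counts, and sample counts are summed, and an empty or metric-less input yields a `None` total with zero counts. A hypothetical stand-in for `_aggregate_daily_counter_to_summary` / `_aggregate_monthly_counter_to_summary`, reduced to tuples for illustration:

```python
def aggregate_counter(stats_list):
    """Sum (total, reboot_count, count) triples.

    Returns (total, reboot_count, count); total is None when there is
    nothing to aggregate, mirroring the empty-list and missing-metric tests.
    """
    if not stats_list:
        return None, 0, 0
    total = sum(t for t, _, _ in stats_list)
    reboots = sum(r for _, r, _ in stats_list)
    count = sum(c for _, _, c in stats_list)
    return total, reboots, count
```

With the daily fixture's numbers this reproduces the expected 3300 packets over 288 samples with one reboot.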
152  tests/reports/test_counter_total.py  Normal file
@@ -0,0 +1,152 @@
"""Tests for counter total computation with reboot handling."""
|
||||
|
||||
from datetime import datetime, timedelta
|
||||
|
||||
import pytest
|
||||
|
||||
from meshmon.reports import compute_counter_total
|
||||
|
||||
|
||||
class TestComputeCounterTotal:
|
||||
"""Tests for compute_counter_total function."""
|
||||
|
||||
def test_calculates_total_from_deltas(self, sample_counter_values):
|
||||
"""Calculates total as sum of positive deltas."""
|
||||
total, reboots = compute_counter_total(sample_counter_values)
|
||||
|
||||
# Values: 100, 150, 200, 250, 300
|
||||
# Deltas: +50, +50, +50, +50 = 200
|
||||
assert total == 200
|
||||
assert reboots == 0
|
||||
|
||||
def test_handles_single_value(self):
|
||||
"""Single value cannot compute delta, returns None."""
|
||||
values = [(datetime(2024, 1, 15, 0, 0, 0), 100)]
|
||||
|
||||
total, reboots = compute_counter_total(values)
|
||||
|
||||
assert total is None
|
||||
assert reboots == 0
|
||||
|
||||
def test_handles_empty_values(self):
|
||||
"""Empty values returns None."""
|
||||
total, reboots = compute_counter_total([])
|
||||
|
||||
assert total is None
|
||||
assert reboots == 0
|
||||
|
||||
def test_detects_single_reboot(self, sample_counter_values_with_reboot):
|
||||
"""Detects reboot and handles counter reset."""
|
||||
total, reboots = compute_counter_total(sample_counter_values_with_reboot)
|
||||
|
||||
# Values: 100, 150, 200, 50 (reboot!), 100
|
||||
# Deltas: +50, +50, (reset to 50), +50
|
||||
# Total should be: 50 + 50 + 50 + 50 = 200
|
||||
# Or: (150-100) + (200-150) + 50 + (100-50) = 200
|
||||
assert total == 200
|
||||
assert reboots == 1
|
||||
|
||||
def test_handles_multiple_reboots(self):
|
||||
"""Handles multiple reboots in sequence."""
|
||||
base_ts = datetime(2024, 1, 15, 0, 0, 0)
|
||||
values = [
|
||||
(base_ts, 100),
|
||||
(base_ts + timedelta(minutes=15), 150), # +50
|
||||
(base_ts + timedelta(minutes=30), 50), # Reboot 1
|
||||
(base_ts + timedelta(minutes=45), 80), # +30
|
||||
(base_ts + timedelta(hours=1), 30), # Reboot 2
|
||||
(base_ts + timedelta(hours=1, minutes=15), 50), # +20
|
||||
]
|
||||
|
||||
total, reboots = compute_counter_total(values)
|
||||
|
||||
# Deltas: 50 + 50 + 30 + 30 + 20 = 180
|
||||
assert reboots == 2
|
||||
assert total == 50 + 50 + 30 + 30 + 20
|
||||
|
||||
def test_zero_delta(self):
|
||||
"""Handles zero delta (no change)."""
|
||||
base_ts = datetime(2024, 1, 15, 0, 0, 0)
|
||||
values = [
|
||||
(base_ts, 100),
|
||||
(base_ts + timedelta(minutes=15), 100), # No change
|
||||
(base_ts + timedelta(minutes=30), 100), # No change
|
||||
]
|
||||
|
||||
total, reboots = compute_counter_total(values)
|
||||
|
||||
assert total == 0
|
||||
assert reboots == 0
|
||||
|
||||
def test_large_values(self):
|
||||
"""Handles large counter values."""
|
||||
base_ts = datetime(2024, 1, 15, 0, 0, 0)
|
||||
values = [
|
||||
(base_ts, 1000000000),
|
||||
(base_ts + timedelta(minutes=15), 1000001000), # +1000
|
||||
(base_ts + timedelta(minutes=30), 1000002500), # +1500
|
||||
]
|
||||
|
||||
total, reboots = compute_counter_total(values)
|
||||
|
||||
assert total == 2500
|
||||
assert reboots == 0
|
||||
|
||||
def test_sorted_values_required(self):
|
||||
"""Function expects pre-sorted values by timestamp."""
|
||||
base_ts = datetime(2024, 1, 15, 0, 0, 0)
|
||||
# Properly sorted by timestamp
|
||||
values = [
|
||||
(base_ts, 100),
|
||||
(base_ts + timedelta(minutes=15), 150),
|
||||
(base_ts + timedelta(minutes=30), 200),
|
||||
]
|
||||
|
||||
total, reboots = compute_counter_total(values)
|
||||
|
||||
# Deltas: 50, 50 = 100
|
||||
assert total == 100
|
||||
assert reboots == 0
|
||||
|
||||
def test_two_values(self):
|
||||
"""Two values gives single delta."""
|
||||
base_ts = datetime(2024, 1, 15, 0, 0, 0)
|
||||
values = [
|
||||
(base_ts, 100),
|
||||
(base_ts + timedelta(minutes=15), 175),
|
||||
]
|
||||
|
||||
total, reboots = compute_counter_total(values)
|
||||
|
||||
assert total == 75
|
||||
assert reboots == 0
|
||||
|
||||
def test_reboot_to_zero(self):
|
||||
"""Handles reboot to exactly zero."""
|
||||
base_ts = datetime(2024, 1, 15, 0, 0, 0)
|
||||
values = [
|
||||
(base_ts, 100),
|
||||
(base_ts + timedelta(minutes=15), 150), # +50
|
||||
(base_ts + timedelta(minutes=30), 0), # Reboot to 0
|
||||
(base_ts + timedelta(minutes=45), 30), # +30
|
||||
]
|
||||
|
||||
total, reboots = compute_counter_total(values)
|
||||
|
||||
assert total == 50 + 0 + 30
|
||||
assert reboots == 1
|
||||
|
||||
def test_float_values(self):
|
||||
"""Handles float counter values."""
|
||||
base_ts = datetime(2024, 1, 15, 0, 0, 0)
|
||||
values = [
|
||||
(base_ts, 100.5),
|
||||
(base_ts + timedelta(minutes=15), 150.7),
|
||||
(base_ts + timedelta(minutes=30), 200.3),
|
||||
]
|
||||
|
||||
total, reboots = compute_counter_total(values)
|
||||
|
||||
expected = (150.7 - 100.5) + (200.3 - 150.7)
|
||||
assert total == pytest.approx(expected)
|
||||
assert reboots == 0
|
||||
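Taken together, these tests specify the reboot-handling rule precisely: sum consecutive deltas, and when a reading drops below its predecessor, treat the new reading itself as that interval's delta (the counter restarted from zero) while counting a reboot. A minimal reference implementation under those assumptions, shown as a sketch rather than the shipped `compute_counter_total`:

```python
def counter_total(values):
    """values: list of (timestamp, counter) pairs, sorted by timestamp.

    Returns (total, reboot_count). With fewer than two samples no delta
    can be formed, so total is None.
    """
    if len(values) < 2:
        return None, 0

    total = 0
    reboots = 0
    prev = values[0][1]
    for _, cur in values[1:]:
        if cur < prev:
            # Counter reset: assume it restarted at 0, so the whole
            # current reading counts as the delta for this interval.
            total += cur
            reboots += 1
        else:
            total += cur - prev
        prev = cur
    return total, reboots
```

Note the trade-off this rule implies: any samples the counter accumulated between the last reading before the reboot and the reboot itself are lost, so the total is a lower bound across reboot boundaries.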
316  tests/reports/test_format_json.py  Normal file
@@ -0,0 +1,316 @@
"""Tests for JSON report formatting."""
|
||||
|
||||
import json
|
||||
from datetime import date, datetime
|
||||
|
||||
import pytest
|
||||
|
||||
from meshmon.reports import (
|
||||
DailyAggregate,
|
||||
MetricStats,
|
||||
MonthlyAggregate,
|
||||
YearlyAggregate,
|
||||
monthly_to_json,
|
||||
yearly_to_json,
|
||||
)
|
||||
|
||||
|
||||
class TestMonthlyToJson:
|
||||
"""Tests for monthly_to_json function."""
|
||||
|
||||
@pytest.fixture
|
||||
def sample_monthly_aggregate(self):
|
||||
"""Create sample MonthlyAggregate for testing."""
|
||||
daily_data = [
|
||||
DailyAggregate(
|
||||
date=date(2024, 1, 1),
|
||||
metrics={
|
||||
"bat": MetricStats(min_value=3.7, max_value=3.9, mean=3.8, count=24),
|
||||
"nb_recv": MetricStats(total=720, count=24),
|
||||
},
|
||||
),
|
||||
DailyAggregate(
|
||||
date=date(2024, 1, 2),
|
||||
metrics={
|
||||
"bat": MetricStats(min_value=3.6, max_value=3.85, mean=3.75, count=24),
|
||||
"nb_recv": MetricStats(total=840, count=24),
|
||||
},
|
||||
),
|
||||
]
|
||||
|
||||
return MonthlyAggregate(
|
||||
year=2024,
|
||||
month=1,
|
||||
role="repeater",
|
||||
daily=daily_data,
|
||||
summary={
|
||||
"bat": MetricStats(
|
||||
min_value=3.6,
|
||||
min_time=datetime(2024, 1, 2, 1, 0),
|
||||
max_value=3.9,
|
||||
max_time=datetime(2024, 1, 1, 23, 0),
|
||||
mean=3.775,
|
||||
count=48,
|
||||
),
|
||||
"nb_recv": MetricStats(total=1560, count=48, reboot_count=1),
|
||||
},
|
||||
)
|
||||
|
||||
def test_returns_dict(self, sample_monthly_aggregate):
|
||||
"""Returns a dictionary."""
|
||||
result = monthly_to_json(sample_monthly_aggregate)
|
||||
assert isinstance(result, dict)
|
||||
|
||||
def test_includes_report_type(self, sample_monthly_aggregate):
|
||||
"""Includes report_type field."""
|
||||
result = monthly_to_json(sample_monthly_aggregate)
|
||||
assert result["report_type"] == "monthly"
|
||||
|
||||
def test_includes_year_and_month(self, sample_monthly_aggregate):
|
||||
"""Includes year and month."""
|
||||
result = monthly_to_json(sample_monthly_aggregate)
|
||||
assert result["year"] == 2024
|
||||
assert result["month"] == 1
|
||||
|
||||
def test_includes_role(self, sample_monthly_aggregate):
|
||||
"""Includes role identifier."""
|
||||
result = monthly_to_json(sample_monthly_aggregate)
|
||||
assert result["role"] == "repeater"
|
||||
|
||||
def test_includes_daily_data(self, sample_monthly_aggregate):
|
||||
"""Includes daily breakdown."""
|
||||
result = monthly_to_json(sample_monthly_aggregate)
|
||||
assert "daily" in result
|
||||
assert len(result["daily"]) == 2
|
||||
assert result["days_with_data"] == 2
|
||||
|
||||
def test_daily_data_has_date(self, sample_monthly_aggregate):
|
||||
"""Daily data includes date."""
|
||||
result = monthly_to_json(sample_monthly_aggregate)
|
||||
first_day = result["daily"][0]
|
||||
assert "date" in first_day
|
||||
assert first_day["date"] == "2024-01-01"
|
||||
|
||||
def test_daily_metrics_include_units_and_values(self, sample_monthly_aggregate):
|
||||
"""Daily metrics include units and expected values."""
|
||||
result = monthly_to_json(sample_monthly_aggregate)
|
||||
first_day = result["daily"][0]
|
||||
|
||||
bat_stats = first_day["metrics"]["bat"]
|
||||
assert bat_stats["unit"] == "mV"
|
||||
assert bat_stats["min"] == 3.7
|
||||
assert bat_stats["max"] == 3.9
|
||||
assert bat_stats["mean"] == 3.8
|
||||
assert bat_stats["count"] == 24
|
||||
|
||||
rx_stats = first_day["metrics"]["nb_recv"]
|
||||
assert rx_stats["unit"] == "packets"
|
||||
assert rx_stats["total"] == 720
|
||||
assert rx_stats["count"] == 24
|
||||
|
||||
def test_is_json_serializable(self, sample_monthly_aggregate):
|
||||
"""Result is JSON serializable."""
|
||||
result = monthly_to_json(sample_monthly_aggregate)
|
||||
# Should not raise
|
||||
json_str = json.dumps(result)
|
||||
assert isinstance(json_str, str)
|
||||
|
||||
def test_summary_includes_times_and_reboots(self, sample_monthly_aggregate):
|
||||
"""Summary includes time fields and reboot counts when provided."""
|
||||
result = monthly_to_json(sample_monthly_aggregate)
|
||||
summary = result["summary"]
|
||||
|
||||
assert summary["bat"]["min_time"] == "2024-01-02T01:00:00"
|
||||
assert summary["bat"]["max_time"] == "2024-01-01T23:00:00"
|
||||
assert summary["nb_recv"]["total"] == 1560
|
||||
assert summary["nb_recv"]["reboot_count"] == 1
|
||||
|
||||
def test_handles_empty_daily(self):
|
||||
"""Handles aggregate with no daily data."""
|
||||
agg = MonthlyAggregate(
|
||||
year=2024,
|
||||
month=1,
|
||||
role="repeater",
|
||||
daily=[],
|
||||
summary={},
|
||||
)
|
||||
|
||||
result = monthly_to_json(agg)
|
||||
assert result["daily"] == []
|
||||
assert result["days_with_data"] == 0
|
||||
assert result["summary"] == {}
|
||||
|
||||
|
||||
class TestYearlyToJson:
|
||||
"""Tests for yearly_to_json function."""
|
||||
|
||||
@pytest.fixture
|
||||
def sample_yearly_aggregate(self):
|
||||
"""Create sample YearlyAggregate for testing."""
|
||||
monthly_data = [
|
||||
MonthlyAggregate(
|
||||
year=2024,
|
||||
month=1,
|
||||
role="repeater",
|
||||
daily=[],
|
||||
summary={"bat": MetricStats(min_value=3.6, max_value=3.9, mean=3.75, count=720)},
|
||||
),
|
||||
MonthlyAggregate(
|
||||
year=2024,
|
||||
month=2,
|
||||
role="repeater",
|
||||
daily=[],
|
||||
summary={"bat": MetricStats(min_value=3.5, max_value=3.85, mean=3.7, count=672)},
|
||||
),
|
||||
]
|
||||
|
||||
return YearlyAggregate(
|
||||
year=2024,
|
||||
role="repeater",
|
||||
monthly=monthly_data,
|
||||
summary={"bat": MetricStats(min_value=3.5, max_value=3.9, mean=3.725, count=1392)},
|
||||
)
|
||||
|
||||
def test_returns_dict(self, sample_yearly_aggregate):
|
||||
"""Returns a dictionary."""
|
||||
result = yearly_to_json(sample_yearly_aggregate)
|
||||
assert isinstance(result, dict)
|
||||
|
||||
def test_includes_report_type(self, sample_yearly_aggregate):
|
||||
"""Includes report_type field."""
|
||||
result = yearly_to_json(sample_yearly_aggregate)
|
||||
assert result["report_type"] == "yearly"
|
||||
|
||||
def test_includes_year(self, sample_yearly_aggregate):
|
||||
"""Includes year."""
|
||||
result = yearly_to_json(sample_yearly_aggregate)
|
||||
assert result["year"] == 2024
|
||||
|
||||
def test_includes_role(self, sample_yearly_aggregate):
|
||||
"""Includes role identifier."""
|
||||
result = yearly_to_json(sample_yearly_aggregate)
|
||||
assert result["role"] == "repeater"
|
||||
|
||||
def test_includes_monthly_data(self, sample_yearly_aggregate):
|
||||
"""Includes monthly breakdown."""
|
||||
result = yearly_to_json(sample_yearly_aggregate)
|
||||
assert "monthly" in result
|
||||
assert len(result["monthly"]) == 2
|
||||
assert result["months_with_data"] == 2
|
||||
|
||||
def test_is_json_serializable(self, sample_yearly_aggregate):
|
||||
"""Result is JSON serializable."""
|
||||
result = yearly_to_json(sample_yearly_aggregate)
|
||||
json_str = json.dumps(result)
|
||||
assert isinstance(json_str, str)
|
||||
|
||||
def test_summary_and_monthly_entries(self, sample_yearly_aggregate):
|
||||
"""Summary and monthly entries include expected fields."""
|
||||
result = yearly_to_json(sample_yearly_aggregate)
|
||||
|
||||
assert result["summary"]["bat"]["count"] == 1392
|
||||
assert result["summary"]["bat"]["unit"] == "mV"
|
||||
|
||||
first_month = result["monthly"][0]
|
||||
assert first_month["year"] == 2024
|
||||
assert first_month["month"] == 1
|
||||
assert first_month["days_with_data"] == 0
|
||||
assert first_month["summary"]["bat"]["mean"] == 3.75
|
||||
|
||||
def test_handles_empty_monthly(self):
|
||||
"""Handles aggregate with no monthly data."""
|
||||
agg = YearlyAggregate(
|
||||
year=2024,
|
||||
role="repeater",
|
||||
monthly=[],
|
||||
summary={},
|
||||
)
|
||||
|
||||
result = yearly_to_json(agg)
|
||||
assert result["monthly"] == []
|
||||
assert result["months_with_data"] == 0
|
||||
assert result["summary"] == {}
|
||||
|
||||
|
||||
class TestJsonStructure:
|
||||
"""Tests for JSON output structure."""
|
||||
|
||||
def test_metric_stats_converted(self):
|
||||
"""MetricStats are properly converted to dicts."""
|
||||
agg = MonthlyAggregate(
|
||||
year=2024,
|
||||
month=1,
|
||||
role="repeater",
|
||||
daily=[],
|
||||
summary={"bat": MetricStats(min_value=3.5, max_value=4.0, mean=3.75, count=100)},
|
||||
)
|
||||
|
||||
result = monthly_to_json(agg)
|
||||
|
||||
# Summary should contain stats
|
||||
assert isinstance(result["summary"], dict)
|
||||
assert result["summary"]["bat"]["min"] == 3.5
|
||||
assert result["summary"]["bat"]["max"] == 4.0
|
||||
assert result["summary"]["bat"]["mean"] == 3.75
|
||||
assert result["summary"]["bat"]["unit"] == "mV"
|
||||
|
||||
def test_nested_structure_serializes(self):
|
||||
"""Nested structures serialize correctly."""
|
||||
daily = DailyAggregate(
|
||||
date=date(2024, 1, 1),
|
||||
metrics={"bat": MetricStats(min_value=3.7, max_value=3.9, mean=3.8, count=24)},
|
||||
)
|
||||
|
||||
agg = MonthlyAggregate(
|
||||
year=2024,
|
||||
month=1,
|
||||
role="companion",
|
||||
daily=[daily],
|
||||
summary={},
|
||||
)
|
||||
|
||||
result = monthly_to_json(agg)
|
||||
json_str = json.dumps(result, indent=2)
|
||||
|
||||
# Should be valid JSON with proper structure
|
||||
reparsed = json.loads(json_str)
|
||||
assert reparsed == result
|
||||
|
||||
|
||||
class TestJsonRoundTrip:
|
||||
"""Tests for JSON data round-trip integrity."""
|
||||
|
||||
def test_parse_and_serialize_identical(self):
|
||||
"""Parsing and re-serializing produces same structure."""
|
||||
agg = MonthlyAggregate(
|
||||
year=2024,
|
||||
month=1,
|
||||
role="repeater",
|
||||
daily=[],
|
||||
summary={"bat": MetricStats(min_value=3.5, max_value=4.0, mean=3.75, count=100)},
|
||||
)
|
||||
|
||||
result = monthly_to_json(agg)
|
||||
json_str = json.dumps(result)
|
||||
parsed = json.loads(json_str)
|
||||
reserialized = json.dumps(parsed)
|
||||
reparsed = json.loads(reserialized)
|
||||
|
||||
assert parsed == reparsed
|
||||
|
||||
def test_numeric_values_preserved(self):
|
||||
"""Numeric values are preserved through round-trip."""
|
||||
agg = MonthlyAggregate(
|
||||
year=2024,
|
||||
month=6,
|
||||
role="repeater",
|
||||
daily=[],
|
||||
summary={},
|
||||
)
|
||||
|
||||
result = monthly_to_json(agg)
|
||||
json_str = json.dumps(result)
|
||||
parsed = json.loads(json_str)
|
||||
|
||||
assert parsed["year"] == 2024
|
||||
assert parsed["month"] == 6
|
||||
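The JSON tests require `date` and `datetime` fields to come out as ISO-8601 strings ("2024-01-01", "2024-01-02T01:00:00") so that `json.dumps` succeeds on the nested result. One common way to achieve that, shown here as a sketch rather than the project's actual approach (the formatter may instead convert fields explicitly while building the dict), is a `default=` hook:

```python
import json
from datetime import date, datetime


def _json_default(obj):
    """Serialize date/datetime as ISO-8601; reject anything else.

    datetime is a subclass of date, so one isinstance check covers both.
    """
    if isinstance(obj, (date, datetime)):
        return obj.isoformat()
    raise TypeError(f"not JSON serializable: {type(obj).__name__}")


# Hypothetical fragment of a report payload with temporal fields.
payload = {
    "date": date(2024, 1, 1),
    "min_time": datetime(2024, 1, 2, 1, 0),
}
json_str = json.dumps(payload, default=_json_default)
```

Raising `TypeError` for unknown types (rather than falling back to `str`) keeps serialization failures loud, which is what the `test_is_json_serializable` tests implicitly rely on.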
647  tests/reports/test_format_txt.py  Normal file
@@ -0,0 +1,647 @@
"""Tests for WeeWX-style ASCII text report formatting."""
|
||||
|
||||
from datetime import date
|
||||
|
||||
import pytest
|
||||
|
||||
from meshmon.reports import (
|
||||
Column,
|
||||
DailyAggregate,
|
||||
LocationInfo,
|
||||
MetricStats,
|
||||
MonthlyAggregate,
|
||||
YearlyAggregate,
|
||||
_format_row,
|
||||
_format_separator,
|
||||
format_monthly_txt,
|
||||
format_yearly_txt,
|
||||
)
|
||||
|
||||
|
||||
class TestColumn:
|
||||
"""Tests for Column dataclass."""
|
||||
|
||||
def test_format_with_value(self):
|
||||
"""Formats value with specified width and alignment."""
|
||||
col = Column(width=6, align="right")
|
||||
|
||||
result = col.format(42)
|
||||
|
||||
assert result == " 42"
|
||||
|
||||
def test_format_with_none(self):
|
||||
"""Formats None as dash."""
|
||||
col = Column(width=10)
|
||||
|
||||
result = col.format(None)
|
||||
|
||||
assert result == "-".rjust(10)
|
||||
|
||||
def test_left_alignment(self):
|
||||
"""Left alignment pads on right."""
|
||||
col = Column(width=10, align="left")
|
||||
|
||||
result = col.format("Hi")
|
||||
|
||||
assert result == "Hi".ljust(10)
|
||||
|
||||
def test_right_alignment(self):
|
||||
"""Right alignment pads on left."""
|
||||
col = Column(width=10, align="right")
|
||||
|
||||
result = col.format("Hi")
|
||||
|
||||
assert result == "Hi".rjust(10)
|
||||
|
||||
def test_center_alignment(self):
|
||||
"""Center alignment pads on both sides."""
|
||||
col = Column(width=10, align="center")
|
||||
|
||||
result = col.format("Hi")
|
||||
|
||||
assert result == "Hi".center(10)
|
||||
|
||||
def test_decimals_formatting(self):
|
||||
"""Formats floats with specified decimals."""
|
||||
col = Column(width=10, decimals=2)
|
||||
|
||||
result = col.format(3.14159)
|
||||
|
||||
assert result == "3.14".rjust(10)
|
||||
|
||||
def test_comma_separator(self):
|
||||
"""Uses comma separator for large integers."""
|
||||
col = Column(width=15, comma_sep=True)
|
||||
|
||||
result = col.format(1000000)
|
||||
|
||||
assert result == "1,000,000".rjust(15)
|
||||
|
||||
|
||||
class TestFormatRow:
|
||||
"""Tests for _format_row function."""
|
||||
|
||||
def test_joins_values_with_columns(self):
|
||||
"""Joins formatted values using column specs."""
|
||||
columns = [
|
||||
Column(width=5),
|
||||
Column(width=5),
|
||||
]
|
||||
|
||||
row = _format_row(columns, [1, 2])
|
||||
|
||||
assert row == " 1 2"
|
||||
|
||||
def test_handles_fewer_values(self):
|
||||
"""Handles fewer values than columns."""
|
||||
columns = [
|
||||
Column(width=5),
|
||||
Column(width=5),
|
||||
Column(width=5),
|
||||
]
|
||||
|
||||
# Should not raise - zip stops at shorter list
|
||||
row = _format_row(columns, ["X", "Y"])
|
||||
|
||||
assert row is not None
|
||||
assert "X" in row
|
||||
assert "Y" in row
|
||||
assert len(row) == 10
|
||||
|
||||
|
||||
class TestFormatSeparator:
|
||||
"""Tests for _format_separator function."""
|
||||
|
||||
def test_creates_separator_line(self):
|
||||
"""Creates separator line matching column widths."""
|
||||
columns = [
|
||||
Column(width=10),
|
||||
Column(width=8),
|
||||
]
|
||||
|
||||
separator = _format_separator(columns)
|
||||
|
||||
assert separator == "-" * 18
|
||||
|
||||
def test_matches_total_width(self):
|
||||
"""Separator width matches total column width."""
|
||||
columns = [
|
||||
Column(width=10),
|
||||
Column(width=10),
|
||||
]
|
||||
|
||||
separator = _format_separator(columns)
|
||||
|
||||
assert len(separator) == 20
|
||||
assert set(separator) == {"-"}
|
||||
|
||||
def test_custom_separator_char(self):
|
||||
"""Uses custom separator character."""
|
||||
columns = [Column(width=10)]
|
||||
|
||||
separator = _format_separator(columns, char="=")
|
||||
|
||||
assert separator == "=" * 10
|
||||
|
||||
|
||||
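The `TestColumn` assertions fully determine the formatting contract: `None` renders as a dash, floats honour `decimals`, large integers honour `comma_sep`, and alignment maps onto `ljust`/`rjust`/`center`. A hypothetical stand-in consistent with those assertions (the real `Column` in `meshmon.reports` may differ in field names or defaults):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Col:
    width: int
    align: str = "right"  # "left" | "right" | "center"
    decimals: Optional[int] = None
    comma_sep: bool = False

    def format(self, value) -> str:
        # Missing values render as a dash, per test_format_with_none.
        if value is None:
            text = "-"
        elif self.decimals is not None and isinstance(value, float):
            text = f"{value:.{self.decimals}f}"
        elif self.comma_sep and isinstance(value, int):
            text = f"{value:,}"
        else:
            text = str(value)
        pad = {"left": str.ljust, "right": str.rjust, "center": str.center}[self.align]
        return pad(text, self.width)
```

Since every cell is padded to exactly `width`, a row is just the concatenation of formatted cells and a separator is just a dash run of the summed widths, which is what `_format_row` and `_format_separator` exercise.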
class TestFormatMonthlyTxt:
    """Tests for format_monthly_txt function."""

    @pytest.fixture
    def sample_monthly_aggregate(self):
        """Create sample MonthlyAggregate for testing."""
        daily_data = [
            DailyAggregate(
                date=date(2024, 1, 1),
                metrics={
                    "bat": MetricStats(min_value=3700, max_value=3900, mean=3800, count=24),
                    "nb_recv": MetricStats(total=720, count=24),
                },
            ),
            DailyAggregate(
                date=date(2024, 1, 2),
                metrics={
                    "bat": MetricStats(min_value=3600, max_value=3850, mean=3750, count=24),
                    "nb_recv": MetricStats(total=840, count=24),
                },
            ),
        ]

        return MonthlyAggregate(
            year=2024,
            month=1,
            role="repeater",
            daily=daily_data,
            summary={"bat": MetricStats(min_value=3600, max_value=3900, mean=3775, count=48)},
        )

    @pytest.fixture
    def sample_location(self):
        """Create sample LocationInfo for testing."""
        return LocationInfo(
            name="Test Location",
            lat=52.0,
            lon=4.0,
            elev=10.0,
        )

    def test_returns_string(self, sample_monthly_aggregate, sample_location):
        """Returns a string."""
        result = format_monthly_txt(sample_monthly_aggregate, "Test Repeater", sample_location)

        assert isinstance(result, str)

    def test_includes_header(self, sample_monthly_aggregate, sample_location):
        """Includes report header with month/year."""
        result = format_monthly_txt(sample_monthly_aggregate, "Test Repeater", sample_location)

        assert "MONTHLY MESHCORE REPORT for January 2024" in result

    def test_includes_node_name(self, sample_monthly_aggregate, sample_location):
        """Includes node name."""
        result = format_monthly_txt(sample_monthly_aggregate, "Test Repeater", sample_location)

        assert "Test Repeater" in result

    def test_has_table_structure(self, sample_monthly_aggregate, sample_location):
        """Has ASCII table structure with separators."""
        result = format_monthly_txt(sample_monthly_aggregate, "Test Repeater", sample_location)

        assert "BATTERY (V)" in result
        assert result.count("-" * 95) == 2

    def test_daily_rows_rendered(self, sample_monthly_aggregate, sample_location):
        """Renders one row per day with battery values."""
        result = format_monthly_txt(sample_monthly_aggregate, "Test Repeater", sample_location)
        lines = result.splitlines()
        daily_lines = [line for line in lines if line[:3].strip().isdigit()]

        assert [line[:3].strip() for line in daily_lines] == ["1", "2"]
        assert any("3.80" in line for line in daily_lines)
        assert any("3.75" in line for line in daily_lines)

    def test_handles_empty_daily(self, sample_location):
        """Handles aggregate with no daily data."""
        agg = MonthlyAggregate(
            year=2024,
            month=1,
            role="repeater",
            daily=[],
            summary={},
        )

        result = format_monthly_txt(agg, "Test Repeater", sample_location)

        assert isinstance(result, str)
        lines = result.splitlines()
        daily_lines = [line for line in lines if line[:3].strip().isdigit()]
        assert daily_lines == []

    def test_includes_location_info(self, sample_monthly_aggregate, sample_location):
        """Includes location information."""
        result = format_monthly_txt(sample_monthly_aggregate, "Test Repeater", sample_location)

        assert "NAME: Test Location" in result
        assert "COORDS:" in result
        assert "ELEV: 10 meters" in result


class TestFormatYearlyTxt:
    """Tests for format_yearly_txt function."""

    @pytest.fixture
    def sample_yearly_aggregate(self):
        """Create sample YearlyAggregate for testing."""
        monthly_data = [
            MonthlyAggregate(
                year=2024,
                month=1,
                role="repeater",
                daily=[],
                summary={"bat": MetricStats(min_value=3600, max_value=3900, mean=3750, count=720)},
            ),
            MonthlyAggregate(
                year=2024,
                month=2,
                role="repeater",
                daily=[],
                summary={"bat": MetricStats(min_value=3500, max_value=3850, mean=3700, count=672)},
            ),
        ]

        return YearlyAggregate(
            year=2024,
            role="repeater",
            monthly=monthly_data,
            summary={"bat": MetricStats(min_value=3500, max_value=3900, mean=3725, count=1392)},
        )

    @pytest.fixture
    def sample_location(self):
        """Create sample LocationInfo for testing."""
        return LocationInfo(
            name="Test Location",
            lat=52.0,
            lon=4.0,
            elev=10.0,
        )

    def test_returns_string(self, sample_yearly_aggregate, sample_location):
        """Returns a string."""
        result = format_yearly_txt(sample_yearly_aggregate, "Test Repeater", sample_location)

        assert isinstance(result, str)

    def test_includes_year(self, sample_yearly_aggregate, sample_location):
        """Includes year in header."""
        result = format_yearly_txt(sample_yearly_aggregate, "Test Repeater", sample_location)

        assert "YEARLY MESHCORE REPORT for 2024" in result
        assert "NODE: Test Repeater" in result
        assert "NAME: Test Location" in result

    def test_has_monthly_breakdown(self, sample_yearly_aggregate, sample_location):
        """Shows monthly breakdown."""
        result = format_yearly_txt(sample_yearly_aggregate, "Test Repeater", sample_location)

        lines = result.splitlines()
        monthly_lines = [line for line in lines if line.strip().startswith("2024")]
        months = [line[4:8].strip() for line in monthly_lines]
        assert months == ["01", "02"]

    def test_handles_empty_monthly(self, sample_location):
        """Handles aggregate with no monthly data."""
        agg = YearlyAggregate(
            year=2024,
            role="repeater",
            monthly=[],
            summary={},
        )

        result = format_yearly_txt(agg, "Test Repeater", sample_location)

        assert isinstance(result, str)


class TestFormatYearlyCompanionTxt:
    """Tests for format_yearly_txt with companion role."""

    @pytest.fixture
    def sample_companion_yearly_aggregate(self):
        """Create sample YearlyAggregate for companion role testing."""
        from datetime import datetime as dt

        monthly_data = [
            MonthlyAggregate(
                year=2024,
                month=1,
                role="companion",
                daily=[],
                summary={
                    "battery_mv": MetricStats(
                        min_value=3600, min_time=dt(2024, 1, 15, 4, 0),
                        max_value=3900, max_time=dt(2024, 1, 20, 14, 0),
                        mean=3750, count=720,
                    ),
                    "bat_pct": MetricStats(mean=75, count=720),
                    "contacts": MetricStats(mean=10, count=720),
                    "recv": MetricStats(total=5000, count=720),
                    "sent": MetricStats(total=3000, count=720),
                },
            ),
            MonthlyAggregate(
                year=2024,
                month=2,
                role="companion",
                daily=[],
                summary={
                    "battery_mv": MetricStats(
                        min_value=3500, min_time=dt(2024, 2, 10, 5, 0),
                        max_value=3850, max_time=dt(2024, 2, 25, 16, 0),
                        mean=3700, count=672,
                    ),
                    "bat_pct": MetricStats(mean=70, count=672),
                    "contacts": MetricStats(mean=12, count=672),
                    "recv": MetricStats(total=4500, count=672),
                    "sent": MetricStats(total=2800, count=672),
                },
            ),
        ]

        return YearlyAggregate(
            year=2024,
            role="companion",
            monthly=monthly_data,
            summary={
                "battery_mv": MetricStats(
                    min_value=3500, min_time=dt(2024, 2, 10, 5, 0),
                    max_value=3900, max_time=dt(2024, 1, 20, 14, 0),
                    mean=3725, count=1392,
                ),
                "bat_pct": MetricStats(mean=72.5, count=1392),
                "contacts": MetricStats(mean=11, count=1392),
                "recv": MetricStats(total=9500, count=1392),
                "sent": MetricStats(total=5800, count=1392),
            },
        )

    @pytest.fixture
    def sample_location(self):
        """Create sample LocationInfo for testing."""
        return LocationInfo(
            name="Test Location",
            lat=52.0,
            lon=4.0,
            elev=10.0,
        )

    def test_returns_string(self, sample_companion_yearly_aggregate, sample_location):
        """Returns a string."""
        result = format_yearly_txt(sample_companion_yearly_aggregate, "Test Companion", sample_location)

        assert isinstance(result, str)

    def test_includes_year(self, sample_companion_yearly_aggregate, sample_location):
|
||||
"""Includes year in header."""
|
||||
result = format_yearly_txt(sample_companion_yearly_aggregate, "Test Companion", sample_location)
|
||||
|
||||
assert "YEARLY MESHCORE REPORT for 2024" in result
|
||||
assert "NODE: Test Companion" in result
|
||||
assert "NAME: Test Location" in result
|
||||
|
||||
def test_includes_node_name(self, sample_companion_yearly_aggregate, sample_location):
|
||||
"""Includes node name."""
|
||||
result = format_yearly_txt(sample_companion_yearly_aggregate, "Test Companion", sample_location)
|
||||
|
||||
assert "Test Companion" in result
|
||||
|
||||
def test_has_monthly_breakdown(self, sample_companion_yearly_aggregate, sample_location):
|
||||
"""Shows monthly breakdown."""
|
||||
result = format_yearly_txt(sample_companion_yearly_aggregate, "Test Companion", sample_location)
|
||||
|
||||
lines = result.splitlines()
|
||||
monthly_lines = [line for line in lines if line.strip().startswith("2024")]
|
||||
months = [line[4:8].strip() for line in monthly_lines]
|
||||
assert months == ["01", "02"]
|
||||
|
||||
def test_has_battery_data(self, sample_companion_yearly_aggregate, sample_location):
|
||||
"""Contains battery voltage data."""
|
||||
result = format_yearly_txt(sample_companion_yearly_aggregate, "Test Companion", sample_location)
|
||||
|
||||
# Battery header or VOLT should be present
|
||||
assert "BATT" in result or "VOLT" in result
|
||||
|
||||
def test_has_packet_counts(self, sample_companion_yearly_aggregate, sample_location):
|
||||
"""Contains packet count data."""
|
||||
result = format_yearly_txt(sample_companion_yearly_aggregate, "Test Companion", sample_location)
|
||||
|
||||
# RX and TX columns should be present
|
||||
assert "RX" in result
|
||||
assert "TX" in result
|
||||
|
||||
def test_handles_empty_monthly(self, sample_location):
|
||||
"""Handles aggregate with no monthly data."""
|
||||
agg = YearlyAggregate(
|
||||
year=2024,
|
||||
role="companion",
|
||||
monthly=[],
|
||||
summary={},
|
||||
)
|
||||
|
||||
result = format_yearly_txt(agg, "Test Companion", sample_location)
|
||||
|
||||
assert isinstance(result, str)
|
||||
|
||||
|
||||
class TestFormatMonthlyCompanionTxt:
|
||||
"""Tests for format_monthly_txt with companion role."""
|
||||
|
||||
@pytest.fixture
|
||||
def sample_companion_monthly_aggregate(self):
|
||||
"""Create sample MonthlyAggregate for companion role testing."""
|
||||
from datetime import datetime as dt
|
||||
daily_data = [
|
||||
DailyAggregate(
|
||||
date=date(2024, 1, 1),
|
||||
metrics={
|
||||
"battery_mv": MetricStats(
|
||||
min_value=3700, min_time=dt(2024, 1, 1, 4, 0),
|
||||
max_value=3900, max_time=dt(2024, 1, 1, 14, 0),
|
||||
mean=3800, count=24
|
||||
),
|
||||
"bat_pct": MetricStats(mean=75, count=24),
|
||||
"contacts": MetricStats(mean=10, count=24),
|
||||
"recv": MetricStats(total=500, count=24),
|
||||
"sent": MetricStats(total=300, count=24),
|
||||
},
|
||||
),
|
||||
DailyAggregate(
|
||||
date=date(2024, 1, 2),
|
||||
metrics={
|
||||
"battery_mv": MetricStats(
|
||||
min_value=3650, min_time=dt(2024, 1, 2, 5, 0),
|
||||
max_value=3850, max_time=dt(2024, 1, 2, 12, 0),
|
||||
mean=3750, count=24
|
||||
),
|
||||
"bat_pct": MetricStats(mean=70, count=24),
|
||||
"contacts": MetricStats(mean=11, count=24),
|
||||
"recv": MetricStats(total=450, count=24),
|
||||
"sent": MetricStats(total=280, count=24),
|
||||
},
|
||||
),
|
||||
]
|
||||
|
||||
return MonthlyAggregate(
|
||||
year=2024,
|
||||
month=1,
|
||||
role="companion",
|
||||
daily=daily_data,
|
||||
summary={
|
||||
"battery_mv": MetricStats(
|
||||
min_value=3650, min_time=dt(2024, 1, 2, 5, 0),
|
||||
max_value=3900, max_time=dt(2024, 1, 1, 14, 0),
|
||||
mean=3775, count=48
|
||||
),
|
||||
"bat_pct": MetricStats(mean=72.5, count=48),
|
||||
"contacts": MetricStats(mean=10.5, count=48),
|
||||
"recv": MetricStats(total=950, count=48),
|
||||
"sent": MetricStats(total=580, count=48),
|
||||
},
|
||||
)
|
||||
|
||||
@pytest.fixture
|
||||
def sample_location(self):
|
||||
"""Create sample LocationInfo for testing."""
|
||||
return LocationInfo(
|
||||
name="Test Location",
|
||||
lat=52.0,
|
||||
lon=4.0,
|
||||
elev=10.0,
|
||||
)
|
||||
|
||||
def test_returns_string(self, sample_companion_monthly_aggregate, sample_location):
|
||||
"""Returns a string."""
|
||||
result = format_monthly_txt(sample_companion_monthly_aggregate, "Test Companion", sample_location)
|
||||
|
||||
assert isinstance(result, str)
|
||||
|
||||
def test_includes_month_year(self, sample_companion_monthly_aggregate, sample_location):
|
||||
"""Includes month and year in header."""
|
||||
result = format_monthly_txt(sample_companion_monthly_aggregate, "Test Companion", sample_location)
|
||||
|
||||
assert "MONTHLY MESHCORE REPORT for January 2024" in result
|
||||
assert "NODE: Test Companion" in result
|
||||
|
||||
def test_has_daily_breakdown(self, sample_companion_monthly_aggregate, sample_location):
|
||||
"""Shows daily breakdown."""
|
||||
result = format_monthly_txt(sample_companion_monthly_aggregate, "Test Companion", sample_location)
|
||||
|
||||
lines = result.splitlines()
|
||||
daily_lines = [line for line in lines if line[:3].strip().isdigit()]
|
||||
assert [line[:3].strip() for line in daily_lines] == ["1", "2"]
|
||||
|
||||
def test_has_packet_counts(self, sample_companion_monthly_aggregate, sample_location):
|
||||
"""Contains packet count data."""
|
||||
result = format_monthly_txt(sample_companion_monthly_aggregate, "Test Companion", sample_location)
|
||||
|
||||
# RX and TX columns should be present
|
||||
assert "RX" in result
|
||||
assert "TX" in result
|
||||
|
||||
|
||||
class TestTextReportContent:
|
||||
"""Tests for text report content quality."""
|
||||
|
||||
@pytest.fixture
|
||||
def sample_monthly_aggregate(self):
|
||||
"""Create sample MonthlyAggregate for testing."""
|
||||
daily_data = [
|
||||
DailyAggregate(
|
||||
date=date(2024, 1, 1),
|
||||
metrics={"bat": MetricStats(min_value=3700, max_value=3900, mean=3800, count=24)},
|
||||
),
|
||||
]
|
||||
|
||||
return MonthlyAggregate(
|
||||
year=2024,
|
||||
month=1,
|
||||
role="repeater",
|
||||
daily=daily_data,
|
||||
summary={"bat": MetricStats(min_value=3700, max_value=3900, mean=3800, count=24)},
|
||||
)
|
||||
|
||||
@pytest.fixture
|
||||
def sample_location(self):
|
||||
"""Create sample LocationInfo for testing."""
|
||||
return LocationInfo(
|
||||
name="Test Location",
|
||||
lat=52.0,
|
||||
lon=4.0,
|
||||
elev=10.0,
|
||||
)
|
||||
|
||||
def test_readable_numbers(self, sample_monthly_aggregate, sample_location):
|
||||
"""Numbers are formatted readably."""
|
||||
result = format_monthly_txt(sample_monthly_aggregate, "Test Repeater", sample_location)
|
||||
|
||||
# Should contain numeric values
|
||||
assert any(c.isdigit() for c in result)
|
||||
|
||||
def test_aligned_columns(self, sample_monthly_aggregate, sample_location):
|
||||
"""Columns appear aligned."""
|
||||
result = format_monthly_txt(sample_monthly_aggregate, "Test Repeater", sample_location)
|
||||
lines = result.split("\n")
|
||||
|
||||
# Find lines that start with day numbers (data rows)
|
||||
# These are the actual data rows that should be aligned
|
||||
data_lines = [line for line in lines if line.strip() and line.strip()[:2].isdigit()]
|
||||
if len(data_lines) >= 2:
|
||||
lengths = [len(line) for line in data_lines]
|
||||
# Data rows should be same length (well aligned)
|
||||
assert max(lengths) - min(lengths) < 10
|
||||
|
||||
|
||||
class TestCompanionFormatting:
|
||||
"""Tests for companion-specific formatting."""
|
||||
|
||||
@pytest.fixture
|
||||
def companion_monthly_aggregate(self):
|
||||
"""Create sample companion MonthlyAggregate."""
|
||||
daily_data = [
|
||||
DailyAggregate(
|
||||
date=date(2024, 1, 1),
|
||||
metrics={
|
||||
"battery_mv": MetricStats(min_value=3700, max_value=3900, mean=3800, count=24),
|
||||
"contacts": MetricStats(min_value=5, max_value=10, mean=7, count=24),
|
||||
"recv": MetricStats(total=720, count=24),
|
||||
},
|
||||
),
|
||||
]
|
||||
|
||||
return MonthlyAggregate(
|
||||
year=2024,
|
||||
month=1,
|
||||
role="companion",
|
||||
daily=daily_data,
|
||||
summary={
|
||||
"battery_mv": MetricStats(min_value=3700, max_value=3900, mean=3800, count=24),
|
||||
},
|
||||
)
|
||||
|
||||
@pytest.fixture
|
||||
def sample_location(self):
|
||||
"""Create sample LocationInfo."""
|
||||
return LocationInfo(
|
||||
name="Test Location",
|
||||
lat=52.0,
|
||||
lon=4.0,
|
||||
elev=10.0,
|
||||
)
|
||||
|
||||
def test_companion_monthly_format(self, companion_monthly_aggregate, sample_location):
|
||||
"""Companion monthly report formats correctly."""
|
||||
result = format_monthly_txt(companion_monthly_aggregate, "Test Companion", sample_location)
|
||||
|
||||
assert isinstance(result, str)
|
||||
assert "MONTHLY MESHCORE REPORT for January 2024" in result
|
||||
assert "NODE: Test Companion" in result
|
||||
assert "NAME: Test Location" in result
|
||||
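The monthly-breakdown tests above pull the month out of each report row with `line[4:8].strip()`, which implies a fixed-width `YYYY MM …` row layout. A minimal sketch of that assumption (the column positions are inferred from the tests' slicing, not from the real `format_yearly_txt` output, and the sample row values are made up):

```python
# Assumed fixed-width report row: year in columns 0-3, month in columns 4-7.
# This layout is inferred from the tests' slicing, not from meshmon itself.
def month_from_row(line: str) -> str:
    """Extract the zero-padded month from a 'YYYY MM ...' report row."""
    return line[4:8].strip()


rows = [
    "2024 01   3750   75   10   5000   3000",
    "2024 02   3700   70   12   4500   2800",
]
print([month_from_row(r) for r in rows])  # -> ['01', '02']
```

This is why the assertions compare against `["01", "02"]`: the slice keeps the zero padding.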
189  tests/reports/test_location.py  Normal file
@@ -0,0 +1,189 @@
"""Tests for location formatting functions."""

from meshmon.reports import (
    LocationInfo,
    format_lat_lon,
    format_lat_lon_dms,
)


class TestFormatLatLon:
    """Tests for format_lat_lon function."""

    def test_formats_positive_coordinates(self):
        """Formats positive lat/lon with N/E."""
        lat_str, lon_str = format_lat_lon(51.5074, 0.1278)

        assert lat_str == "51-30.44 N"
        assert lon_str == "000-07.67 E"

    def test_formats_negative_latitude(self):
        """Negative latitude shows S."""
        lat_str, lon_str = format_lat_lon(-33.8688, 151.2093)

        assert lat_str == "33-52.13 S"
        assert lon_str == "151-12.56 E"

    def test_formats_negative_longitude(self):
        """Negative longitude shows W."""
        lat_str, lon_str = format_lat_lon(51.5074, -0.1278)

        assert lon_str == "000-07.67 W"

    def test_formats_positive_longitude(self):
        """Positive longitude shows E."""
        lat_str, lon_str = format_lat_lon(0.0, 4.0)

        assert lon_str == "004-00.00 E"

    def test_includes_degrees_minutes(self):
        """Includes degrees and minutes."""
        lat_str, lon_str = format_lat_lon(3.5, 7.25)

        assert lat_str.startswith("03-")
        assert lon_str.startswith("007-")

    def test_handles_zero(self):
        """Handles zero coordinates."""
        lat_str, lon_str = format_lat_lon(0.0, 0.0)

        assert lat_str == "00-00.00 N"
        assert lon_str == "000-00.00 E"

    def test_handles_extremes(self):
        """Handles extreme coordinates."""
        # North pole
        lat_str_north, lon_str_north = format_lat_lon(90.0, 0.0)
        assert lat_str_north == "90-00.00 N"

        # South pole
        lat_str_south, lon_str_south = format_lat_lon(-90.0, 0.0)
        assert lat_str_south == "90-00.00 S"


class TestFormatLatLonDms:
    """Tests for format_lat_lon_dms function."""

    def test_returns_dms_format(self):
        """Returns degrees-minutes-seconds format."""
        result = format_lat_lon_dms(51.5074, -0.1278)

        assert result == "51°30'26\"N 000°07'40\"W"

    def test_includes_direction(self):
        """Includes N/S/E/W directions."""
        result = format_lat_lon_dms(51.5074, -0.1278)

        assert "N" in result
        assert "W" in result

    def test_correct_conversion(self):
        """Converts decimal to DMS correctly."""
        result = format_lat_lon_dms(0.0, 0.0)

        assert result == "00°00'00\"N 000°00'00\"E"

    def test_handles_fractional_seconds(self):
        """Handles fractional seconds."""
        result = format_lat_lon_dms(51.123456, -0.987654)

        assert result == "51°07'24\"N 000°59'15\"W"

    def test_combines_lat_and_lon(self):
        """Returns combined string with both lat and lon."""
        result = format_lat_lon_dms(52.0, 4.0)

        assert result == "52°00'00\"N 004°00'00\"E"


class TestLocationInfo:
    """Tests for LocationInfo dataclass."""

    def test_stores_all_fields(self):
        """Stores all location fields."""
        loc = LocationInfo(
            name="Test Location",
            lat=51.5074,
            lon=-0.1278,
            elev=11.0,
        )

        assert loc.name == "Test Location"
        assert loc.lat == 51.5074
        assert loc.lon == -0.1278
        assert loc.elev == 11.0

    def test_format_header(self):
        """format_header returns formatted string."""
        loc = LocationInfo(
            name="Test Location",
            lat=51.5074,
            lon=-0.1278,
            elev=11.0,
        )

        header = loc.format_header()

        assert header == (
            "NAME: Test Location\n"
            "COORDS: 51°30'26\"N 000°07'40\"W ELEV: 11 meters"
        )

    def test_format_header_includes_coordinates(self):
        """Header includes formatted coordinates."""
        loc = LocationInfo(
            name="Test Location",
            lat=51.5074,
            lon=-0.1278,
            elev=11.0,
        )

        header = loc.format_header()

        assert "COORDS: 51°30'26\"N 000°07'40\"W" in header

    def test_format_header_includes_elevation(self):
        """Header includes elevation with unit."""
        loc = LocationInfo(
            name="London",
            lat=51.5074,
            lon=-0.1278,
            elev=11.0,
        )

        header = loc.format_header()

        assert "ELEV: 11 meters" in header


class TestLocationCoordinates:
    """Tests for various coordinate scenarios."""

    def test_equator(self):
        """Handles equator (0° latitude)."""
        lat_str, lon_str = format_lat_lon(0.0, 45.0)

        assert lat_str == "00-00.00 N"
        assert lon_str == "045-00.00 E"

    def test_prime_meridian(self):
        """Handles prime meridian (0° longitude)."""
        lat_str, lon_str = format_lat_lon(45.0, 0.0)

        assert lat_str == "45-00.00 N"
        assert lon_str == "000-00.00 E"

    def test_international_date_line(self):
        """Handles international date line (180° longitude)."""
        lat_str, lon_str = format_lat_lon(0.0, 180.0)

        assert lat_str == "00-00.00 N"
        assert lon_str == "180-00.00 E"

    def test_very_precise_coordinates(self):
        """Handles high-precision coordinates."""
        lat_str, lon_str = format_lat_lon(51.50735509, -0.12775829)

        assert lat_str == "51-30.44 N"
        assert lon_str == "000-07.67 W"
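The expected strings in these location tests pin down the two coordinate formats quite precisely. The following is a reference sketch inferred from those assertions, not the actual `meshmon.reports` implementation; note that the `-0.987654` case implies seconds are truncated rather than rounded (15.55" becomes 15, not 16):

```python
# Sketches of the coordinate formats the assertions above expect.
# Inferred from the expected strings, not taken from meshmon.reports.

def fmt_lat(lat: float) -> str:
    """Degrees-decimal-minutes latitude, e.g. 51.5074 -> '51-30.44 N'."""
    hemi = "S" if lat < 0 else "N"
    deg = int(abs(lat))
    minutes = (abs(lat) - deg) * 60
    return f"{deg:02d}-{minutes:05.2f} {hemi}"


def fmt_lon(lon: float) -> str:
    """Same as fmt_lat but with three degree digits and E/W."""
    hemi = "W" if lon < 0 else "E"
    deg = int(abs(lon))
    minutes = (abs(lon) - deg) * 60
    return f"{deg:03d}-{minutes:05.2f} {hemi}"


def fmt_dms(lat: float, lon: float) -> str:
    """Combined DMS string, e.g. 51°30'26\"N 000°07'40\"W."""
    def one(value: float, pos: str, neg: str, width: int) -> str:
        hemi = neg if value < 0 else pos
        total = abs(value) * 3600  # total seconds of arc
        deg = int(total // 3600)
        minutes = int(total % 3600 // 60)
        seconds = int(total % 60)  # truncated, per the -0.987654 case
        return f"{deg:0{width}d}°{minutes:02d}'{seconds:02d}\"{hemi}"

    return f"{one(lat, 'N', 'S', 2)} {one(lon, 'E', 'W', 3)}"


print(fmt_lat(51.5074))           # -> 51-30.44 N
print(fmt_dms(51.5074, -0.1278))  # -> 51°30'26"N 000°07'40"W
```

The minutes format `:05.2f` is what keeps the leading zero in `000-07.67 E`.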
557  tests/reports/test_snapshots.py  Normal file
@@ -0,0 +1,557 @@
"""Snapshot tests for text report formatting.

These tests compare generated TXT reports against saved snapshots
to detect unintended changes in report layout and formatting.

To update snapshots, run: UPDATE_SNAPSHOTS=1 pytest tests/reports/test_snapshots.py
"""

import os
from datetime import date, datetime
from pathlib import Path

import pytest

from meshmon.reports import (
    DailyAggregate,
    LocationInfo,
    MetricStats,
    MonthlyAggregate,
    YearlyAggregate,
    format_monthly_txt,
    format_yearly_txt,
)


class TestTxtReportSnapshots:
    """Snapshot tests for WeeWX-style ASCII text reports."""

    @pytest.fixture
    def update_snapshots(self):
        """Return True if snapshots should be updated."""
        return os.environ.get("UPDATE_SNAPSHOTS", "").lower() in ("1", "true", "yes")

    @pytest.fixture
    def txt_snapshots_dir(self):
        """Path to TXT snapshots directory."""
        return Path(__file__).parent.parent / "snapshots" / "txt"

    @pytest.fixture
    def sample_location(self):
        """Create sample LocationInfo for testing."""
        return LocationInfo(
            name="Test Observatory",
            lat=52.3676,  # Amsterdam
            lon=4.9041,
            elev=2.0,
        )

    @pytest.fixture
    def repeater_monthly_aggregate(self):
        """Create sample MonthlyAggregate for repeater role testing."""
        daily_data = []

        # Create 5 days of sample data
        for day in range(1, 6):
            daily_data.append(
                DailyAggregate(
                    date=date(2024, 1, day),
                    metrics={
                        "bat": MetricStats(
                            min_value=3600 + day * 10,
                            min_time=datetime(2024, 1, day, 4, 0),
                            max_value=3900 + day * 10,
                            max_time=datetime(2024, 1, day, 14, 0),
                            mean=3750 + day * 10,
                            count=96,
                        ),
                        "bat_pct": MetricStats(
                            mean=65.0 + day * 2,
                            count=96,
                        ),
                        "last_rssi": MetricStats(
                            mean=-85.0 - day,
                            count=96,
                        ),
                        "last_snr": MetricStats(
                            mean=8.5 + day * 0.2,
                            count=96,
                        ),
                        "noise_floor": MetricStats(
                            mean=-115.0,
                            count=96,
                        ),
                        "nb_recv": MetricStats(
                            total=500 + day * 100,
                            count=96,
                            reboot_count=0,
                        ),
                        "nb_sent": MetricStats(
                            total=200 + day * 50,
                            count=96,
                            reboot_count=0,
                        ),
                        "airtime": MetricStats(
                            total=120 + day * 20,
                            count=96,
                            reboot_count=0,
                        ),
                    },
                    snapshot_count=96,
                )
            )

        return MonthlyAggregate(
            year=2024,
            month=1,
            role="repeater",
            daily=daily_data,
            summary={
                "bat": MetricStats(
                    min_value=3610,
                    min_time=datetime(2024, 1, 1, 4, 0),
                    max_value=3950,
                    max_time=datetime(2024, 1, 5, 14, 0),
                    mean=3780,
                    count=480,
                ),
                "bat_pct": MetricStats(
                    mean=71.0,
                    count=480,
                ),
                "last_rssi": MetricStats(
                    mean=-88.0,
                    count=480,
                ),
                "last_snr": MetricStats(
                    mean=9.1,
                    count=480,
                ),
                "noise_floor": MetricStats(
                    mean=-115.0,
                    count=480,
                ),
                "nb_recv": MetricStats(
                    total=4000,
                    count=480,
                    reboot_count=0,
                ),
                "nb_sent": MetricStats(
                    total=1750,
                    count=480,
                    reboot_count=0,
                ),
                "airtime": MetricStats(
                    total=900,
                    count=480,
                    reboot_count=0,
                ),
            },
        )

    @pytest.fixture
    def companion_monthly_aggregate(self):
        """Create sample MonthlyAggregate for companion role testing."""
        daily_data = []

        # Create 5 days of sample data
        for day in range(1, 6):
            daily_data.append(
                DailyAggregate(
                    date=date(2024, 1, day),
                    metrics={
                        "battery_mv": MetricStats(
                            min_value=3700 + day * 10,
                            min_time=datetime(2024, 1, day, 5, 0),
                            max_value=4000 + day * 10,
                            max_time=datetime(2024, 1, day, 12, 0),
                            mean=3850 + day * 10,
                            count=1440,
                        ),
                        "bat_pct": MetricStats(
                            mean=75.0 + day * 2,
                            count=1440,
                        ),
                        "contacts": MetricStats(
                            mean=8 + day,
                            count=1440,
                        ),
                        "recv": MetricStats(
                            total=1000 + day * 200,
                            count=1440,
                            reboot_count=0,
                        ),
                        "sent": MetricStats(
                            total=500 + day * 100,
                            count=1440,
                            reboot_count=0,
                        ),
                    },
                    snapshot_count=1440,
                )
            )

        return MonthlyAggregate(
            year=2024,
            month=1,
            role="companion",
            daily=daily_data,
            summary={
                "battery_mv": MetricStats(
                    min_value=3710,
                    min_time=datetime(2024, 1, 1, 5, 0),
                    max_value=4050,
                    max_time=datetime(2024, 1, 5, 12, 0),
                    mean=3880,
                    count=7200,
                ),
                "bat_pct": MetricStats(
                    mean=81.0,
                    count=7200,
                ),
                "contacts": MetricStats(
                    mean=11.0,
                    count=7200,
                ),
                "recv": MetricStats(
                    total=8000,
                    count=7200,
                    reboot_count=0,
                ),
                "sent": MetricStats(
                    total=4000,
                    count=7200,
                    reboot_count=0,
                ),
            },
        )

    @pytest.fixture
    def repeater_yearly_aggregate(self):
        """Create sample YearlyAggregate for repeater role testing."""
        monthly_data = []

        # Create 3 months of sample data
        for month in range(1, 4):
            monthly_data.append(
                MonthlyAggregate(
                    year=2024,
                    month=month,
                    role="repeater",
                    daily=[],  # Daily details not needed for yearly summary
                    summary={
                        "bat": MetricStats(
                            min_value=3500 + month * 50,
                            min_time=datetime(2024, month, 15, 4, 0),
                            max_value=3950 + month * 20,
                            max_time=datetime(2024, month, 20, 14, 0),
                            mean=3700 + month * 30,
                            count=2976,  # ~31 days * 96 readings
                        ),
                        "bat_pct": MetricStats(
                            mean=60.0 + month * 5,
                            count=2976,
                        ),
                        "last_rssi": MetricStats(
                            mean=-90.0 + month,
                            count=2976,
                        ),
                        "last_snr": MetricStats(
                            mean=7.5 + month * 0.5,
                            count=2976,
                        ),
                        "nb_recv": MetricStats(
                            total=30000 + month * 5000,
                            count=2976,
                            reboot_count=0,
                        ),
                        "nb_sent": MetricStats(
                            total=15000 + month * 2500,
                            count=2976,
                            reboot_count=0,
                        ),
                    },
                )
            )

        return YearlyAggregate(
            year=2024,
            role="repeater",
            monthly=monthly_data,
            summary={
                "bat": MetricStats(
                    min_value=3550,
                    min_time=datetime(2024, 1, 15, 4, 0),
                    max_value=4010,
                    max_time=datetime(2024, 3, 20, 14, 0),
                    mean=3760,
                    count=8928,
                ),
                "bat_pct": MetricStats(
                    mean=70.0,
                    count=8928,
                ),
                "last_rssi": MetricStats(
                    mean=-88.0,
                    count=8928,
                ),
                "last_snr": MetricStats(
                    mean=8.5,
                    count=8928,
                ),
                "nb_recv": MetricStats(
                    total=120000,
                    count=8928,
                    reboot_count=0,
                ),
                "nb_sent": MetricStats(
                    total=60000,
                    count=8928,
                    reboot_count=0,
                ),
            },
        )

    @pytest.fixture
    def companion_yearly_aggregate(self):
        """Create sample YearlyAggregate for companion role testing."""
        monthly_data = []

        # Create 3 months of sample data
        for month in range(1, 4):
            monthly_data.append(
                MonthlyAggregate(
                    year=2024,
                    month=month,
                    role="companion",
                    daily=[],
                    summary={
                        "battery_mv": MetricStats(
                            min_value=3600 + month * 30,
                            min_time=datetime(2024, month, 10, 5, 0),
                            max_value=4100 + month * 20,
                            max_time=datetime(2024, month, 25, 12, 0),
                            mean=3850 + month * 25,
                            count=44640,  # ~31 days * 1440 readings
                        ),
                        "bat_pct": MetricStats(
                            mean=70.0 + month * 3,
                            count=44640,
                        ),
                        "contacts": MetricStats(
                            mean=10 + month,
                            count=44640,
                        ),
                        "recv": MetricStats(
                            total=50000 + month * 10000,
                            count=44640,
                            reboot_count=0,
                        ),
                        "sent": MetricStats(
                            total=25000 + month * 5000,
                            count=44640,
                            reboot_count=0,
                        ),
                    },
                )
            )

        return YearlyAggregate(
            year=2024,
            role="companion",
            monthly=monthly_data,
            summary={
                "battery_mv": MetricStats(
                    min_value=3630,
                    min_time=datetime(2024, 1, 10, 5, 0),
                    max_value=4160,
                    max_time=datetime(2024, 3, 25, 12, 0),
                    mean=3900,
                    count=133920,
                ),
                "bat_pct": MetricStats(
                    mean=76.0,
                    count=133920,
                ),
                "contacts": MetricStats(
                    mean=12.0,
                    count=133920,
                ),
                "recv": MetricStats(
                    total=210000,
                    count=133920,
                    reboot_count=0,
                ),
                "sent": MetricStats(
                    total=105000,
                    count=133920,
                    reboot_count=0,
                ),
            },
        )

    def _assert_snapshot_match(
        self,
        actual: str,
        snapshot_path: Path,
        update: bool,
    ) -> None:
        """Compare TXT report against snapshot, with optional update mode."""
        if update:
            # Update mode: write actual to snapshot
            snapshot_path.parent.mkdir(parents=True, exist_ok=True)
            snapshot_path.write_text(actual, encoding="utf-8")
            pytest.skip(f"Snapshot updated: {snapshot_path}")
        else:
            # Compare mode
            if not snapshot_path.exists():
                # Create new snapshot if it doesn't exist
                snapshot_path.parent.mkdir(parents=True, exist_ok=True)
                snapshot_path.write_text(actual, encoding="utf-8")
                pytest.fail(
                    f"Snapshot created: {snapshot_path}\n"
                    f"Run tests again to verify, or set UPDATE_SNAPSHOTS=1 to regenerate."
                )

            expected = snapshot_path.read_text(encoding="utf-8")

            if actual != expected:
                # Show differences for debugging
                actual_lines = actual.splitlines()
                expected_lines = expected.splitlines()

                diff_info = []
                for i, (a, e) in enumerate(zip(actual_lines, expected_lines, strict=False), 1):
                    if a != e:
                        diff_info.append(f"Line {i} differs:")
                        diff_info.append(f"  Expected: '{e}'")
                        diff_info.append(f"  Actual:   '{a}'")
                        if len(diff_info) > 15:
                            diff_info.append("  (more differences omitted)")
                            break

                if len(actual_lines) != len(expected_lines):
                    diff_info.append(
                        f"Line count: expected {len(expected_lines)}, got {len(actual_lines)}"
                    )

                pytest.fail(
                    f"Snapshot mismatch: {snapshot_path}\n"
                    f"Set UPDATE_SNAPSHOTS=1 to regenerate.\n\n"
                    + "\n".join(diff_info)
                )

    def test_monthly_report_repeater(
        self,
        repeater_monthly_aggregate,
        sample_location,
        txt_snapshots_dir,
        update_snapshots,
    ):
        """Monthly repeater report matches snapshot."""
        result = format_monthly_txt(
            repeater_monthly_aggregate,
            "Test Repeater",
            sample_location,
        )

        snapshot_path = txt_snapshots_dir / "monthly_report_repeater.txt"
        self._assert_snapshot_match(result, snapshot_path, update_snapshots)

    def test_monthly_report_companion(
        self,
        companion_monthly_aggregate,
        sample_location,
        txt_snapshots_dir,
        update_snapshots,
    ):
        """Monthly companion report matches snapshot."""
        result = format_monthly_txt(
            companion_monthly_aggregate,
            "Test Companion",
            sample_location,
        )

        snapshot_path = txt_snapshots_dir / "monthly_report_companion.txt"
        self._assert_snapshot_match(result, snapshot_path, update_snapshots)

    def test_yearly_report_repeater(
        self,
        repeater_yearly_aggregate,
        sample_location,
        txt_snapshots_dir,
        update_snapshots,
    ):
        """Yearly repeater report matches snapshot."""
        result = format_yearly_txt(
            repeater_yearly_aggregate,
            "Test Repeater",
            sample_location,
        )

        snapshot_path = txt_snapshots_dir / "yearly_report_repeater.txt"
        self._assert_snapshot_match(result, snapshot_path, update_snapshots)

    def test_yearly_report_companion(
        self,
        companion_yearly_aggregate,
        sample_location,
        txt_snapshots_dir,
        update_snapshots,
    ):
        """Yearly companion report matches snapshot."""
        result = format_yearly_txt(
            companion_yearly_aggregate,
            "Test Companion",
            sample_location,
        )

        snapshot_path = txt_snapshots_dir / "yearly_report_companion.txt"
        self._assert_snapshot_match(result, snapshot_path, update_snapshots)

    def test_empty_monthly_report(
        self,
        sample_location,
        txt_snapshots_dir,
        update_snapshots,
    ):
        """Empty monthly report matches snapshot."""
        empty_aggregate = MonthlyAggregate(
            year=2024,
            month=1,
            role="repeater",
            daily=[],
            summary={},
        )

        result = format_monthly_txt(
            empty_aggregate,
            "Test Repeater",
            sample_location,
        )

        snapshot_path = txt_snapshots_dir / "empty_monthly_report.txt"
        self._assert_snapshot_match(result, snapshot_path, update_snapshots)

    def test_empty_yearly_report(
        self,
        sample_location,
        txt_snapshots_dir,
        update_snapshots,
    ):
        """Empty yearly report matches snapshot."""
        empty_aggregate = YearlyAggregate(
            year=2024,
            role="repeater",
            monthly=[],
            summary={},
        )

        result = format_yearly_txt(
            empty_aggregate,
            "Test Repeater",
            sample_location,
        )

        snapshot_path = txt_snapshots_dir / "empty_yearly_report.txt"
        self._assert_snapshot_match(result, snapshot_path, update_snapshots)
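The update/compare flow in `_assert_snapshot_match` above reduces to a small reusable pattern: in update mode (or when no snapshot exists yet) write the actual output out as the new snapshot, otherwise diff line by line. A sketch with a hypothetical helper name (the test class wires the same idea into `pytest.skip`/`pytest.fail` instead of returning a list):

```python
from pathlib import Path


def check_snapshot(actual: str, path: Path, update: bool = False) -> list[str]:
    """Return a list of line-level differences; empty means the snapshot matches.

    In update mode, or when no snapshot exists yet, the actual output is
    written out as the new snapshot and no differences are reported.
    """
    if update or not path.exists():
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(actual, encoding="utf-8")
        return []

    expected = path.read_text(encoding="utf-8")
    return [
        f"line {i}: expected {e!r}, got {a!r}"
        for i, (a, e) in enumerate(
            zip(actual.splitlines(), expected.splitlines()), start=1
        )
        if a != e
    ]
```

Note one simplification: because `zip` stops at the shorter side, this sketch does not flag trailing extra lines, which is why the test helper above also compares line counts explicitly.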
363  tests/reports/test_table_builders.py  Normal file
@@ -0,0 +1,363 @@
"""Tests for report table building functions."""

from datetime import date

import pytest

from meshmon.html import (
    build_monthly_table_data,
    build_yearly_table_data,
)
from meshmon.reports import (
    DailyAggregate,
    MetricStats,
    MonthlyAggregate,
    YearlyAggregate,
)


class TestBuildMonthlyTableData:
    """Tests for build_monthly_table_data function."""

    @pytest.fixture
    def sample_monthly_aggregate(self):
        """Create sample MonthlyAggregate for testing."""
        daily_data = [
            DailyAggregate(
                date=date(2024, 1, 1),
                metrics={
                    "bat": MetricStats(min_value=3700, max_value=3900, mean=3800, count=24),
                    "last_rssi": MetricStats(min_value=-95, max_value=-80, mean=-87, count=24),
                    "nb_recv": MetricStats(total=720, count=24),
                },
            ),
            DailyAggregate(
                date=date(2024, 1, 2),
                metrics={
                    "bat": MetricStats(min_value=3600, max_value=3850, mean=3750, count=24),
                    "last_rssi": MetricStats(min_value=-93, max_value=-78, mean=-85, count=24),
                    "nb_recv": MetricStats(total=840, count=24),
                },
            ),
        ]

        return MonthlyAggregate(
            year=2024,
            month=1,
            role="repeater",
            daily=daily_data,
            summary={
                "bat": MetricStats(min_value=3600, max_value=3900, mean=3775, count=48),
                "last_rssi": MetricStats(min_value=-95, max_value=-78, mean=-86, count=48),
                "nb_recv": MetricStats(total=1560, count=48),
            },
        )

    def test_returns_tuple_of_three_lists(self, sample_monthly_aggregate):
        """Returns tuple of (column_groups, headers, rows)."""
        result = build_monthly_table_data(sample_monthly_aggregate, "repeater")

        assert isinstance(result, tuple)
        assert len(result) == 3

        column_groups, headers, rows = result
        assert isinstance(column_groups, list)
        assert isinstance(headers, list)
        assert isinstance(rows, list)

    def test_rows_match_daily_count(self, sample_monthly_aggregate):
        """Number of rows matches number of daily aggregates (plus summary)."""
        _, _, rows = build_monthly_table_data(sample_monthly_aggregate, "repeater")

        # Should have 2 data rows + 1 summary row = 3 total
        data_rows = [r for r in rows if not r.get("is_summary", False)]
        assert len(data_rows) == 2
        assert len(rows) == 3
        assert rows[-1]["is_summary"] is True

    def test_headers_have_labels(self, sample_monthly_aggregate):
        """Headers include label information."""
        _, headers, _ = build_monthly_table_data(sample_monthly_aggregate, "repeater")

        expected_labels = [
            "Day",
            "Avg V",
            "Avg %",
            "Min V",
            "Max V",
            "RSSI",
            "SNR",
            "Noise",
            "RX",
            "TX",
            "Secs",
        ]
        assert [header["label"] for header in headers] == expected_labels

    def test_rows_have_date(self, sample_monthly_aggregate):
        """Each data row includes date information via cells."""
        _, _, rows = build_monthly_table_data(sample_monthly_aggregate, "repeater")

        data_rows = [r for r in rows if not r.get("is_summary", False)]
        for row in data_rows:
            assert isinstance(row, dict)
            # Row has cells with date value
            assert "cells" in row
            # First cell should be the day
            assert len(row["cells"]) > 0
        assert [row["cells"][0]["value"] for row in data_rows] == ["01", "02"]

    def test_daily_row_values(self, sample_monthly_aggregate):
        """Daily rows include formatted values and placeholders."""
        _, _, rows = build_monthly_table_data(sample_monthly_aggregate, "repeater")
        first_row = next(r for r in rows if not r.get("is_summary", False))
        cells = first_row["cells"]

        assert cells[0]["value"] == "01"
        assert cells[1]["value"] == "3.80"
        assert cells[2]["value"] == "-"
        assert cells[5]["value"] == "-87"
        assert cells[6]["value"] == "-"
        assert cells[8]["value"] == "720"

    def test_handles_empty_aggregate(self):
        """Handles aggregate with no daily data."""
        agg = MonthlyAggregate(
            year=2024,
            month=1,
            role="repeater",
            daily=[],
            summary={},
        )

        result = build_monthly_table_data(agg, "repeater")

        column_groups, headers, rows = result
        assert isinstance(rows, list)
        # Empty aggregate should have only summary row or no data rows
        data_rows = [r for r in rows if not r.get("is_summary", False)]
        assert len(data_rows) == 0


class TestBuildYearlyTableData:
    """Tests for build_yearly_table_data function."""

    @pytest.fixture
    def sample_yearly_aggregate(self):
        """Create sample YearlyAggregate for testing."""
        monthly_data = [
            MonthlyAggregate(
                year=2024,
                month=1,
                role="repeater",
                daily=[],
                summary={"bat": MetricStats(min_value=3600, max_value=3900, mean=3750, count=720)},
            ),
            MonthlyAggregate(
                year=2024,
                month=2,
                role="repeater",
                daily=[],
                summary={"bat": MetricStats(min_value=3500, max_value=3850, mean=3700, count=672)},
            ),
        ]

        return YearlyAggregate(
            year=2024,
            role="repeater",
            monthly=monthly_data,
            summary={"bat": MetricStats(min_value=3500, max_value=3900, mean=3725, count=1392)},
        )

    def test_returns_tuple_of_three_lists(self, sample_yearly_aggregate):
        """Returns tuple of (column_groups, headers, rows)."""
        result = build_yearly_table_data(sample_yearly_aggregate, "repeater")

        assert isinstance(result, tuple)
        assert len(result) == 3

        column_groups, headers, rows = result
        assert isinstance(column_groups, list)
        assert isinstance(headers, list)
        assert isinstance(rows, list)

    def test_rows_match_monthly_count(self, sample_yearly_aggregate):
        """Number of rows matches number of monthly data (plus summary)."""
        _, _, rows = build_yearly_table_data(sample_yearly_aggregate, "repeater")

        # Should have 2 data rows + 1 summary row
        data_rows = [r for r in rows if not r.get("is_summary", False)]
        assert len(data_rows) == 2
        assert len(rows) == 3
        assert rows[-1]["is_summary"] is True

    def test_headers_have_labels(self, sample_yearly_aggregate):
        """Headers include label information."""
        _, headers, _ = build_yearly_table_data(sample_yearly_aggregate, "repeater")

        expected_labels = [
            "Year",
            "Mo",
            "Volt",
            "%",
            "High",
            "Low",
            "RSSI",
            "SNR",
            "RX",
            "TX",
        ]
        assert [header["label"] for header in headers] == expected_labels

    def test_rows_have_month(self, sample_yearly_aggregate):
        """Each row includes month information."""
        _, _, rows = build_yearly_table_data(sample_yearly_aggregate, "repeater")

        data_rows = [r for r in rows if not r.get("is_summary", False)]
        months = [row["cells"][1]["value"] for row in data_rows]
        assert months == ["01", "02"]

    def test_yearly_row_values(self, sample_yearly_aggregate):
        """Yearly rows include formatted values and placeholders."""
        _, _, rows = build_yearly_table_data(sample_yearly_aggregate, "repeater")
        first_row = next(r for r in rows if not r.get("is_summary", False))
        cells = first_row["cells"]

        assert cells[0]["value"] == "2024"
        assert cells[1]["value"] == "01"
        assert cells[2]["value"] == "3.75"
        assert cells[3]["value"] == "-"

    def test_handles_empty_aggregate(self):
        """Handles aggregate with no monthly data."""
        agg = YearlyAggregate(
            year=2024,
            role="repeater",
            monthly=[],
            summary={},
        )

        result = build_yearly_table_data(agg, "repeater")

        column_groups, headers, rows = result
        assert isinstance(rows, list)
        # Empty aggregate should have only summary row or no data rows
        data_rows = [r for r in rows if not r.get("is_summary", False)]
        assert len(data_rows) == 0


class TestTableColumnGroups:
    """Tests for column grouping in tables."""

    @pytest.fixture
    def monthly_aggregate_with_data(self):
        """Aggregate with data for column group testing."""
        daily = DailyAggregate(
            date=date(2024, 1, 1),
            metrics={
                "bat": MetricStats(min_value=3700, max_value=3900, mean=3800, count=24),
                "last_rssi": MetricStats(min_value=-95, max_value=-80, mean=-87, count=24),
                "nb_recv": MetricStats(total=720, count=24),
            },
        )

        return MonthlyAggregate(
            year=2024,
            month=1,
            role="repeater",
            daily=[daily],
            summary={},
        )

    def test_column_groups_structure(self, monthly_aggregate_with_data):
        """Column groups have expected structure."""
        column_groups, _, _ = build_monthly_table_data(monthly_aggregate_with_data, "repeater")

        assert column_groups == [
            {"label": "", "colspan": 1},
            {"label": "Battery", "colspan": 4},
            {"label": "Signal", "colspan": 3},
            {"label": "Packets", "colspan": 2},
            {"label": "Air", "colspan": 1},
        ]

    def test_column_groups_span_matches_headers(self, monthly_aggregate_with_data):
        """Column group spans should add up to header count."""
        column_groups, headers, _ = build_monthly_table_data(monthly_aggregate_with_data, "repeater")

        total_span = sum(
            g.get("span", g.get("colspan", len(g.get("columns", []))))
            for g in column_groups
        )

        assert total_span == len(headers)


class TestTableRolesHandling:
    """Tests for different role handling in tables."""

    @pytest.fixture
    def companion_aggregate(self):
        """Aggregate for companion role."""
        daily = DailyAggregate(
            date=date(2024, 1, 1),
            metrics={
                "battery_mv": MetricStats(min_value=3700, max_value=3900, mean=3800, count=24),
                "contacts": MetricStats(min_value=5, max_value=10, mean=7, count=24),
                "recv": MetricStats(total=720, count=24),
            },
        )

        return MonthlyAggregate(
            year=2024,
            month=1,
            role="companion",
            daily=[daily],
            summary={},
        )

    def test_companion_role_works(self, companion_aggregate):
        """Table building works for companion role."""
        result = build_monthly_table_data(companion_aggregate, "companion")

        column_groups, headers, rows = result
        assert isinstance(rows, list)
        # 1 data row + summary row
        data_rows = [r for r in rows if not r.get("is_summary", False)]
        assert len(data_rows) == 1
        assert [header["label"] for header in headers] == [
            "Day",
            "Avg V",
            "Avg %",
            "Min V",
            "Max V",
            "Contacts",
            "RX",
            "TX",
        ]

    def test_different_roles_different_columns(self, companion_aggregate):
        """Different roles may have different column structures."""
        # Create a repeater aggregate
        repeater_daily = DailyAggregate(
            date=date(2024, 1, 1),
            metrics={
                "bat": MetricStats(min_value=3700, max_value=3900, mean=3800, count=24),
            },
        )

        repeater_agg = MonthlyAggregate(
            year=2024,
            month=1,
            role="repeater",
            daily=[repeater_daily],
            summary={},
        )

        companion_result = build_monthly_table_data(companion_aggregate, "companion")
        repeater_result = build_monthly_table_data(repeater_agg, "repeater")

        # Both should return valid data
        assert len(companion_result) == 3
        assert len(repeater_result) == 3
        assert [h["label"] for h in companion_result[1]] != [h["label"] for h in repeater_result[1]]
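The tests above pin down the shape that `build_monthly_table_data` returns without showing the implementation. As a hedged illustration only (the real builder lives in `meshmon.html` and computes cells from the aggregates; this hardcoded sketch merely reproduces the asserted structure), the `(column_groups, headers, rows)` contract looks like:

```python
def sketch_monthly_table():
    """Illustrative-only reproduction of the table-data shape the tests assert."""
    # Group labels span runs of columns; colspans must sum to len(headers).
    column_groups = [
        {"label": "", "colspan": 1},
        {"label": "Battery", "colspan": 4},
        {"label": "Signal", "colspan": 3},
        {"label": "Packets", "colspan": 2},
        {"label": "Air", "colspan": 1},
    ]
    headers = [
        {"label": label}
        for label in [
            "Day", "Avg V", "Avg %", "Min V", "Max V",
            "RSSI", "SNR", "Noise", "RX", "TX", "Secs",
        ]
    ]
    # Each row carries pre-formatted cell strings; "-" marks missing metrics.
    rows = [
        {"is_summary": False,
         "cells": [{"value": "01"}, {"value": "3.80"}] + [{"value": "-"}] * 9},
        {"is_summary": True,
         "cells": [{"value": "Sum"}] + [{"value": "-"}] * 10},
    ]
    return column_groups, headers, rows
```

A template can then render `column_groups` as a first `<tr>` of `colspan` headers, `headers` as the second, and `rows` as the body, styling the row where `is_summary` is true.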
1
tests/retry/__init__.py
Normal file
@@ -0,0 +1 @@
"""Tests for retry logic and circuit breaker."""
67
tests/retry/conftest.py
Normal file
@@ -0,0 +1,67 @@
"""Fixtures for retry and circuit breaker tests."""

import json

import pytest

BASE_TS = 1704067200


@pytest.fixture
def circuit_state_file(tmp_path):
    """Path for circuit breaker state file."""
    return tmp_path / "circuit.json"


@pytest.fixture
def closed_circuit(circuit_state_file):
    """Circuit breaker state file with closed circuit (no failures)."""
    state = {
        "consecutive_failures": 0,
        "cooldown_until": 0,
        "last_success": BASE_TS,
    }
    circuit_state_file.write_text(json.dumps(state))
    return circuit_state_file


@pytest.fixture
def open_circuit(circuit_state_file):
    """Circuit breaker state file with open circuit (in cooldown)."""
    state = {
        "consecutive_failures": 10,
        "cooldown_until": BASE_TS + 3600,  # 1 hour from BASE_TS
        "last_success": BASE_TS - 7200,  # 2 hours before BASE_TS
    }
    circuit_state_file.write_text(json.dumps(state))
    return circuit_state_file


@pytest.fixture
def expired_cooldown_circuit(circuit_state_file):
    """Circuit breaker state file with expired cooldown."""
    state = {
        "consecutive_failures": 10,
        "cooldown_until": BASE_TS - 100,  # Expired 100s before BASE_TS
        "last_success": BASE_TS - 7200,
    }
    circuit_state_file.write_text(json.dumps(state))
    return circuit_state_file


@pytest.fixture
def corrupted_state_file(circuit_state_file):
    """Circuit breaker state file with corrupted JSON."""
    circuit_state_file.write_text("not valid json {{{")
    return circuit_state_file


@pytest.fixture
def partial_state_file(circuit_state_file):
    """Circuit breaker state file with missing keys."""
    state = {
        "consecutive_failures": 5,
        # Missing cooldown_until and last_success
    }
    circuit_state_file.write_text(json.dumps(state))
    return circuit_state_file
325
tests/retry/test_circuit_breaker.py
Normal file
@@ -0,0 +1,325 @@
"""Tests for CircuitBreaker class."""

import json

import pytest

from meshmon.retry import CircuitBreaker

BASE_TS = 1704067200


@pytest.fixture
def time_controller(monkeypatch):
    """Control time.time() within meshmon.retry."""
    state = {"now": BASE_TS}

    def _time():
        return state["now"]

    monkeypatch.setattr("meshmon.retry.time.time", _time)
    return state


class TestCircuitBreakerInit:
    """Tests for CircuitBreaker initialization."""

    def test_creates_with_fresh_state(self, circuit_state_file):
        """Fresh circuit breaker has zero failures and no cooldown."""
        cb = CircuitBreaker(circuit_state_file)

        assert cb.consecutive_failures == 0
        assert cb.cooldown_until == 0
        assert cb.last_success == 0

    def test_loads_existing_state(self, closed_circuit):
        """Loads state from existing file."""
        cb = CircuitBreaker(closed_circuit)

        assert cb.consecutive_failures == 0
        assert cb.cooldown_until == 0
        assert cb.last_success > 0

    def test_loads_open_circuit_state(self, open_circuit, time_controller):
        """Loads open circuit state correctly."""
        cb = CircuitBreaker(open_circuit)

        assert cb.consecutive_failures == 10
        assert cb.cooldown_until == BASE_TS + 3600
        assert cb.is_open() is True

    def test_handles_corrupted_file(self, corrupted_state_file):
        """Corrupted JSON file loads defaults without crashing."""
        cb = CircuitBreaker(corrupted_state_file)

        # Should use defaults
        assert cb.consecutive_failures == 0
        assert cb.cooldown_until == 0
        assert cb.last_success == 0

    def test_handles_partial_state(self, partial_state_file):
        """Missing keys in state file use defaults."""
        cb = CircuitBreaker(partial_state_file)

        assert cb.consecutive_failures == 5  # Present in file
        assert cb.cooldown_until == 0  # Default
        assert cb.last_success == 0  # Default

    def test_handles_nonexistent_file(self, circuit_state_file):
        """Nonexistent state file uses defaults."""
        assert not circuit_state_file.exists()

        cb = CircuitBreaker(circuit_state_file)

        assert cb.consecutive_failures == 0

    def test_stores_state_file_path(self, circuit_state_file):
        """Stores the state file path."""
        cb = CircuitBreaker(circuit_state_file)

        assert cb.state_file == circuit_state_file


class TestCircuitBreakerIsOpen:
    """Tests for is_open method."""

    def test_closed_circuit_returns_false(self, closed_circuit):
        """Closed circuit (no cooldown) returns False."""
        cb = CircuitBreaker(closed_circuit)

        assert cb.is_open() is False

    def test_open_circuit_returns_true(self, open_circuit, time_controller):
        """Open circuit (in cooldown) returns True."""
        cb = CircuitBreaker(open_circuit)

        assert cb.is_open() is True

    def test_expired_cooldown_returns_false(self, expired_cooldown_circuit, time_controller):
        """Expired cooldown returns False (circuit closes)."""
        cb = CircuitBreaker(expired_cooldown_circuit)

        assert cb.is_open() is False

    def test_cooldown_expiry(self, circuit_state_file, time_controller):
        """Circuit closes when cooldown expires."""
        # Set cooldown to 10 seconds from now
        state = {
            "consecutive_failures": 10,
            "cooldown_until": BASE_TS + 10,
            "last_success": 0,
        }
        circuit_state_file.write_text(json.dumps(state))

        cb = CircuitBreaker(circuit_state_file)
        assert cb.is_open() is True

        time_controller["now"] = BASE_TS + 11
        assert cb.is_open() is False


class TestCooldownRemaining:
    """Tests for cooldown_remaining method."""

    def test_returns_zero_when_closed(self, closed_circuit):
        """Returns 0 when circuit is closed."""
        cb = CircuitBreaker(closed_circuit)

        assert cb.cooldown_remaining() == 0

    def test_returns_seconds_when_open(self, circuit_state_file, time_controller):
        """Returns remaining seconds when in cooldown."""
        state = {
            "consecutive_failures": 10,
            "cooldown_until": BASE_TS + 100,
            "last_success": 0,
        }
        circuit_state_file.write_text(json.dumps(state))

        cb = CircuitBreaker(circuit_state_file)
        remaining = cb.cooldown_remaining()

        assert remaining == 100

    def test_returns_zero_when_expired(self, expired_cooldown_circuit):
        """Returns 0 when cooldown has expired."""
        cb = CircuitBreaker(expired_cooldown_circuit)

        assert cb.cooldown_remaining() == 0

    def test_returns_integer(self, open_circuit, time_controller):
        """Returns an integer, not float."""
        cb = CircuitBreaker(open_circuit)

        assert isinstance(cb.cooldown_remaining(), int)


class TestRecordSuccess:
    """Tests for record_success method."""

    def test_resets_failure_count(self, circuit_state_file):
        """Success resets consecutive failure count to 0."""
        state = {
            "consecutive_failures": 5,
            "cooldown_until": 0,
            "last_success": 0,
        }
        circuit_state_file.write_text(json.dumps(state))

        cb = CircuitBreaker(circuit_state_file)
        cb.record_success()

        assert cb.consecutive_failures == 0

    def test_updates_last_success(self, circuit_state_file, time_controller):
        """Success updates last_success timestamp."""
        cb = CircuitBreaker(circuit_state_file)
        time_controller["now"] = BASE_TS + 5
        cb.record_success()

        assert cb.last_success == BASE_TS + 5

    def test_persists_to_file(self, circuit_state_file):
        """Success state is persisted to file."""
        cb = CircuitBreaker(circuit_state_file)
        cb.consecutive_failures = 5
        cb.record_success()

        # Read file directly
        data = json.loads(circuit_state_file.read_text())
        assert data["consecutive_failures"] == 0
        assert data["last_success"] > 0

    def test_creates_parent_dirs(self, tmp_path):
        """Creates parent directories if they don't exist."""
        nested_path = tmp_path / "deep" / "nested" / "circuit.json"
        cb = CircuitBreaker(nested_path)
        cb.record_success()

        assert nested_path.exists()


class TestRecordFailure:
    """Tests for record_failure method."""

    def test_increments_failure_count(self, circuit_state_file):
        """Failure increments consecutive failure count."""
        cb = CircuitBreaker(circuit_state_file)
        cb.record_failure(max_failures=10, cooldown_s=3600)

        assert cb.consecutive_failures == 1

    def test_opens_circuit_at_threshold(self, circuit_state_file, time_controller):
        """Circuit opens when failures reach threshold."""
        cb = CircuitBreaker(circuit_state_file)

        # Record failures up to threshold
        for _ in range(5):
            cb.record_failure(max_failures=5, cooldown_s=3600)

        assert cb.is_open() is True
        assert cb.cooldown_until == BASE_TS + 3600

    def test_does_not_open_before_threshold(self, circuit_state_file, time_controller):
        """Circuit stays closed before reaching threshold."""
        cb = CircuitBreaker(circuit_state_file)

        for _ in range(4):
            cb.record_failure(max_failures=5, cooldown_s=3600)

        assert cb.is_open() is False

    def test_cooldown_duration(self, circuit_state_file, time_controller):
        """Cooldown is set to specified duration."""
        cb = CircuitBreaker(circuit_state_file)

        for _ in range(5):
            cb.record_failure(max_failures=5, cooldown_s=100)

        # Cooldown should be ~100 seconds from now
        assert cb.cooldown_until == BASE_TS + 100

    def test_persists_to_file(self, circuit_state_file):
        """Failure state is persisted to file."""
        cb = CircuitBreaker(circuit_state_file)
        cb.record_failure(max_failures=10, cooldown_s=3600)

        data = json.loads(circuit_state_file.read_text())
        assert data["consecutive_failures"] == 1


class TestToDict:
    """Tests for to_dict method."""

    def test_includes_all_fields(self, closed_circuit):
        """Dict includes all state fields."""
        cb = CircuitBreaker(closed_circuit)
        d = cb.to_dict()

        assert "consecutive_failures" in d
        assert "cooldown_until" in d
        assert "last_success" in d
        assert "is_open" in d
        assert "cooldown_remaining_s" in d

    def test_is_open_reflects_state(self, open_circuit, time_controller):
        """is_open in dict reflects actual circuit state."""
        cb = CircuitBreaker(open_circuit)
        d = cb.to_dict()

        assert d["is_open"] is True

    def test_cooldown_remaining_reflects_state(self, open_circuit, time_controller):
        """cooldown_remaining_s reflects actual remaining time."""
        cb = CircuitBreaker(open_circuit)
        d = cb.to_dict()

        assert d["cooldown_remaining_s"] > 0

    def test_closed_circuit_dict(self, closed_circuit):
        """Closed circuit has expected dict values."""
        cb = CircuitBreaker(closed_circuit)
        d = cb.to_dict()

        assert d["consecutive_failures"] == 0
        assert d["is_open"] is False
        assert d["cooldown_remaining_s"] == 0


class TestStatePersistence:
    """Tests for state persistence across instances."""

    def test_state_survives_reload(self, circuit_state_file):
        """State persists across CircuitBreaker instances."""
        cb1 = CircuitBreaker(circuit_state_file)
        cb1.record_failure(max_failures=10, cooldown_s=3600)
        cb1.record_failure(max_failures=10, cooldown_s=3600)
        cb1.record_failure(max_failures=10, cooldown_s=3600)

        # Create new instance
        cb2 = CircuitBreaker(circuit_state_file)

        assert cb2.consecutive_failures == 3

    def test_success_resets_across_reload(self, circuit_state_file):
        """Success reset persists across instances."""
        cb1 = CircuitBreaker(circuit_state_file)
        for _ in range(5):
            cb1.record_failure(max_failures=10, cooldown_s=3600)

        cb1.record_success()

        cb2 = CircuitBreaker(circuit_state_file)
        assert cb2.consecutive_failures == 0

    def test_open_state_survives_reload(self, circuit_state_file, time_controller):
        """Open circuit state persists across instances."""
        cb1 = CircuitBreaker(circuit_state_file)
        for _ in range(10):
            cb1.record_failure(max_failures=10, cooldown_s=3600)

        assert cb1.is_open() is True

        cb2 = CircuitBreaker(circuit_state_file)
        assert cb2.is_open() is True
        assert cb2.consecutive_failures == 10
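Taken together, these tests fully specify the `CircuitBreaker` API: JSON-file-backed state, defaults on missing or corrupt files, a failure threshold that opens the circuit for a cooldown window. A minimal sketch that would satisfy them (an assumption about the internals; the actual `meshmon.retry` implementation may differ) is:

```python
import json
import time
from pathlib import Path


class CircuitBreaker:
    """Sketch of a file-persisted circuit breaker matching the tests above."""

    def __init__(self, state_file):
        self.state_file = Path(state_file)
        data = {}
        try:
            data = json.loads(self.state_file.read_text())
        except (OSError, ValueError):
            pass  # Missing or corrupted file: fall back to defaults.
        self.consecutive_failures = data.get("consecutive_failures", 0)
        self.cooldown_until = data.get("cooldown_until", 0)
        self.last_success = data.get("last_success", 0)

    def is_open(self):
        # Open only while the cooldown deadline lies in the future.
        return time.time() < self.cooldown_until

    def cooldown_remaining(self):
        return max(0, int(self.cooldown_until - time.time()))

    def record_success(self):
        self.consecutive_failures = 0
        self.cooldown_until = 0
        self.last_success = int(time.time())
        self._save()

    def record_failure(self, max_failures, cooldown_s):
        self.consecutive_failures += 1
        if self.consecutive_failures >= max_failures:
            self.cooldown_until = int(time.time()) + cooldown_s
        self._save()

    def _save(self):
        # Create parent dirs so a nested state path works on first write.
        self.state_file.parent.mkdir(parents=True, exist_ok=True)
        self.state_file.write_text(json.dumps({
            "consecutive_failures": self.consecutive_failures,
            "cooldown_until": self.cooldown_until,
            "last_success": self.last_success,
        }))

    def to_dict(self):
        return {
            "consecutive_failures": self.consecutive_failures,
            "cooldown_until": self.cooldown_until,
            "last_success": self.last_success,
            "is_open": self.is_open(),
            "cooldown_remaining_s": self.cooldown_remaining(),
        }
```

State is only written on `record_success`/`record_failure`, which is why the "nonexistent file" test passes without a file ever being created.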
63
tests/retry/test_get_circuit_breaker.py
Normal file
@@ -0,0 +1,63 @@
"""Tests for get_repeater_circuit_breaker function."""


from meshmon.retry import CircuitBreaker, get_repeater_circuit_breaker


class TestGetRepeaterCircuitBreaker:
    """Tests for get_repeater_circuit_breaker function."""

    def test_returns_circuit_breaker(self, configured_env):
        """Returns a CircuitBreaker instance."""
        cb = get_repeater_circuit_breaker()

        assert isinstance(cb, CircuitBreaker)

    def test_uses_state_dir(self, configured_env):
        """Uses state_dir from config."""
        cb = get_repeater_circuit_breaker()

        expected_path = configured_env["state_dir"] / "repeater_circuit.json"
        assert cb.state_file == expected_path

    def test_state_file_name(self, configured_env):
        """State file is named repeater_circuit.json."""
        cb = get_repeater_circuit_breaker()

        assert cb.state_file.name == "repeater_circuit.json"

    def test_each_call_creates_new_instance(self, configured_env):
        """Each call creates a new CircuitBreaker instance."""
        cb1 = get_repeater_circuit_breaker()
        cb2 = get_repeater_circuit_breaker()

        assert cb1 is not cb2

    def test_instances_share_state_file(self, configured_env):
        """Multiple instances share the same state file."""
        cb1 = get_repeater_circuit_breaker()
        cb2 = get_repeater_circuit_breaker()

        assert cb1.state_file == cb2.state_file

    def test_state_persists_across_instances(self, configured_env):
        """State changes persist across instances."""
        cb1 = get_repeater_circuit_breaker()
        cb1.record_failure(max_failures=10, cooldown_s=3600)
        cb1.record_failure(max_failures=10, cooldown_s=3600)

        cb2 = get_repeater_circuit_breaker()

        assert cb2.consecutive_failures == 2

    def test_creates_state_file_on_write(self, configured_env):
        """State file is created when recording success/failure."""
        state_dir = configured_env["state_dir"]
        state_file = state_dir / "repeater_circuit.json"

        assert not state_file.exists()

        cb = get_repeater_circuit_breaker()
        cb.record_success()

        assert state_file.exists()
352
tests/retry/test_with_retries.py
Normal file
@@ -0,0 +1,352 @@
|
||||
"""Tests for with_retries async function."""
|
||||
|
||||
import asyncio
|
||||
|
||||
import pytest
|
||||
|
||||
from meshmon.retry import with_retries
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def sleep_spy(monkeypatch):
|
||||
"""Capture asyncio.sleep calls without waiting."""
|
||||
calls = []
|
||||
|
||||
async def fake_sleep(delay):
|
||||
calls.append(delay)
|
||||
|
||||
monkeypatch.setattr("meshmon.retry.asyncio.sleep", fake_sleep)
|
||||
return calls
|
||||
|
||||
|
||||
class TestWithRetriesSuccess:
|
||||
"""Tests for successful operation scenarios."""
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_returns_result_on_success(self):
|
||||
"""Returns result when operation succeeds."""
|
||||
async def success_fn():
|
||||
return "result"
|
||||
|
||||
success, result, exception = await with_retries(success_fn)
|
||||
|
||||
assert success is True
|
||||
assert result == "result"
|
||||
assert exception is None
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_single_attempt_on_success(self):
|
||||
"""Only calls function once when successful."""
|
||||
call_count = 0
|
||||
|
||||
async def counting_fn():
|
||||
nonlocal call_count
|
||||
call_count += 1
|
||||
return "done"
|
||||
|
||||
await with_retries(counting_fn, attempts=3)
|
||||
|
||||
assert call_count == 1
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_returns_complex_result(self):
|
||||
"""Returns complex result types correctly."""
|
||||
async def complex_fn():
|
||||
return {"status": "ok", "data": [1, 2, 3]}
|
||||
        success, result, _ = await with_retries(complex_fn)

        assert result == {"status": "ok", "data": [1, 2, 3]}

    @pytest.mark.asyncio
    async def test_returns_none_result(self):
        """Returns None result correctly (distinct from failure)."""

        async def none_fn():
            return None

        success, result, exception = await with_retries(none_fn)

        assert success is True
        assert result is None
        assert exception is None


class TestWithRetriesFailure:
    """Tests for failure scenarios."""

    @pytest.mark.asyncio
    async def test_returns_false_on_exhausted_attempts(self):
        """Returns failure when all attempts exhausted."""

        async def failing_fn():
            raise ValueError("always fails")

        success, result, exception = await with_retries(
            failing_fn, attempts=3, backoff_s=0
        )

        assert success is False
        assert result is None
        assert isinstance(exception, ValueError)

    @pytest.mark.asyncio
    async def test_retries_specified_times(self):
        """Retries the specified number of times."""
        call_count = 0

        async def failing_fn():
            nonlocal call_count
            call_count += 1
            raise RuntimeError("fail")

        await with_retries(failing_fn, attempts=5, backoff_s=0)

        assert call_count == 5

    @pytest.mark.asyncio
    async def test_returns_last_exception(self):
        """Returns the exception from the last attempt."""
        attempt = 0

        async def changing_error_fn():
            nonlocal attempt
            attempt += 1
            raise ValueError(f"error {attempt}")

        success, result, exception = await with_retries(
            changing_error_fn, attempts=3, backoff_s=0
        )

        assert str(exception) == "error 3"


class TestWithRetriesRetryBehavior:
    """Tests for retry behavior."""

    @pytest.mark.asyncio
    async def test_succeeds_on_retry(self):
        """Succeeds if operation succeeds on retry."""
        attempt = 0

        async def eventually_succeeds():
            nonlocal attempt
            attempt += 1
            if attempt < 3:
                raise RuntimeError("not yet")
            return "success"

        success, result, exception = await with_retries(
            eventually_succeeds, attempts=5, backoff_s=0
        )

        assert success is True
        assert result == "success"
        assert exception is None
        assert attempt == 3

    @pytest.mark.asyncio
    async def test_backoff_timing(self, sleep_spy):
        """Waits backoff_s between retries."""

        async def failing_fn():
            raise RuntimeError("fail")

        await with_retries(failing_fn, attempts=3, backoff_s=0.1)

        assert sleep_spy == [0.1, 0.1]

    @pytest.mark.asyncio
    async def test_no_backoff_after_last_attempt(self, sleep_spy):
        """Does not wait after final failed attempt."""

        async def failing_fn():
            raise RuntimeError("fail")

        await with_retries(failing_fn, attempts=2, backoff_s=0.5)

        assert sleep_spy == [0.5]
class TestWithRetriesParameters:
    """Tests for parameter handling."""

    @pytest.mark.asyncio
    async def test_default_attempts(self):
        """Uses default of 2 attempts."""
        call_count = 0

        async def failing_fn():
            nonlocal call_count
            call_count += 1
            raise RuntimeError("fail")

        await with_retries(failing_fn, backoff_s=0)

        assert call_count == 2

    @pytest.mark.asyncio
    async def test_single_attempt(self):
        """Works with single attempt (no retry)."""
        call_count = 0

        async def failing_fn():
            nonlocal call_count
            call_count += 1
            raise RuntimeError("fail")

        await with_retries(failing_fn, attempts=1, backoff_s=0)

        assert call_count == 1

    @pytest.mark.asyncio
    async def test_zero_backoff(self):
        """Works with zero backoff."""
        call_count = 0

        async def failing_fn():
            nonlocal call_count
            call_count += 1
            raise RuntimeError("fail")

        await with_retries(failing_fn, attempts=3, backoff_s=0)

        assert call_count == 3

    @pytest.mark.asyncio
    async def test_name_parameter_for_logging(self, monkeypatch, sleep_spy):
        """Name parameter is used in logging."""
        messages = []

        def fake_info(msg):
            messages.append(msg)

        def fake_debug(msg):
            messages.append(msg)

        monkeypatch.setattr("meshmon.retry.log.info", fake_info)
        monkeypatch.setattr("meshmon.retry.log.debug", fake_debug)

        async def failing_fn():
            raise RuntimeError("fail")

        await with_retries(
            failing_fn, attempts=2, backoff_s=0.1, name="test_operation"
        )

        assert any("test_operation" in msg for msg in messages)


class TestWithRetriesExceptionTypes:
    """Tests for different exception types."""

    @pytest.mark.asyncio
    async def test_handles_value_error(self):
        """Handles ValueError correctly."""

        async def fn():
            raise ValueError("value error")

        success, _, exception = await with_retries(fn, attempts=1)

        assert success is False
        assert isinstance(exception, ValueError)

    @pytest.mark.asyncio
    async def test_handles_runtime_error(self):
        """Handles RuntimeError correctly."""

        async def fn():
            raise RuntimeError("runtime error")

        success, _, exception = await with_retries(fn, attempts=1)

        assert success is False
        assert isinstance(exception, RuntimeError)

    @pytest.mark.asyncio
    async def test_handles_timeout_error(self):
        """Handles asyncio.TimeoutError correctly."""

        async def fn():
            raise TimeoutError("timeout")

        success, _, exception = await with_retries(fn, attempts=1)

        assert success is False
        assert isinstance(exception, asyncio.TimeoutError)

    @pytest.mark.asyncio
    async def test_handles_os_error(self):
        """Handles OSError correctly."""

        async def fn():
            raise OSError("os error")

        success, _, exception = await with_retries(fn, attempts=1)

        assert success is False
        assert isinstance(exception, OSError)

    @pytest.mark.asyncio
    async def test_handles_custom_exception(self):
        """Handles custom exception types correctly."""

        class CustomError(Exception):
            pass

        async def fn():
            raise CustomError("custom")

        success, _, exception = await with_retries(fn, attempts=1)

        assert success is False
        assert isinstance(exception, CustomError)


class TestWithRetriesAsyncBehavior:
    """Tests for async-specific behavior."""

    @pytest.mark.asyncio
    async def test_concurrent_retries_independent(self):
        """Multiple concurrent retry operations are independent."""
        calls_a = 0
        calls_b = 0

        async def fn_a():
            nonlocal calls_a
            calls_a += 1
            if calls_a < 2:
                raise RuntimeError("a fails first")
            return "a"

        async def fn_b():
            nonlocal calls_b
            calls_b += 1
            if calls_b < 3:
                raise RuntimeError("b fails more")
            return "b"

        results = await asyncio.gather(
            with_retries(fn_a, attempts=3, backoff_s=0.01),
            with_retries(fn_b, attempts=4, backoff_s=0.01),
        )

        assert results[0] == (True, "a", None)
        assert results[1] == (True, "b", None)
        assert calls_a == 2
        assert calls_b == 3

    @pytest.mark.asyncio
    async def test_does_not_block_event_loop(self):
        """Backoff uses asyncio.sleep, not blocking sleep."""
        events = []

        async def fn():
            events.append("fn")
            raise RuntimeError("fail")

        async def background():
            await asyncio.sleep(0.05)
            events.append("bg")
            await asyncio.sleep(0.05)
            events.append("bg")

        await asyncio.gather(
            with_retries(fn, attempts=2, backoff_s=0.08),
            background(),
        )

        # Background task should interleave with retry backoff
        assert "bg" in events
1 tests/scripts/__init__.py Normal file
@@ -0,0 +1 @@
# Tests for executable scripts

189 tests/scripts/conftest.py Normal file
@@ -0,0 +1,189 @@
"""Script-specific test fixtures."""

import importlib.util
import sys
from contextlib import contextmanager
from pathlib import Path
from unittest.mock import AsyncMock, MagicMock

import pytest

# Ensure scripts can import from src
SCRIPTS_DIR = Path(__file__).parent.parent.parent / "scripts"
SRC_DIR = Path(__file__).parent.parent.parent / "src"

if str(SRC_DIR) not in sys.path:
    sys.path.insert(0, str(SRC_DIR))

# Track dynamically loaded script modules for cleanup
_loaded_script_modules: set[str] = set()


def load_script_module(script_name: str):
    """Load a script as a module and track it for cleanup.

    Args:
        script_name: Name of script file (e.g., "collect_companion.py")

    Returns:
        Loaded module object
    """
    script_path = SCRIPTS_DIR / script_name
    module_name = script_name.replace(".py", "")

    spec = importlib.util.spec_from_file_location(module_name, script_path)
    assert spec is not None, f"Could not load spec for {script_path}"
    assert spec.loader is not None, f"No loader for {script_path}"

    module = importlib.util.module_from_spec(spec)
    sys.modules[module_name] = module
    _loaded_script_modules.add(module_name)

    spec.loader.exec_module(module)
    return module


@pytest.fixture(autouse=True)
def cleanup_script_modules():
    """Clean up dynamically loaded script modules after each test.

    This prevents test pollution where module-level state persists
    between tests, potentially causing false positives or flaky tests.
    """
    # Clear tracking before test
    _loaded_script_modules.clear()

    yield

    # Clean up after test
    for module_name in _loaded_script_modules:
        if module_name in sys.modules:
            del sys.modules[module_name]
    _loaded_script_modules.clear()


@pytest.fixture
def scripts_dir():
    """Path to the scripts directory."""
    return SCRIPTS_DIR


@contextmanager
def mock_async_context_manager(return_value=None):
    """Create a mock that works as an async context manager.

    Usage:
        with patch.object(module, "connect_with_lock") as mock_connect:
            mock_connect.return_value = mock_async_context_manager(mc)
            # or for None return:
            mock_connect.return_value = mock_async_context_manager(None)

    Args:
        return_value: Value to return from __aenter__

    Returns:
        A mock configured as an async context manager
    """
    mock = MagicMock()
    mock.__aenter__ = AsyncMock(return_value=return_value)
    mock.__aexit__ = AsyncMock(return_value=None)
    yield mock


class AsyncContextManagerMock:
    """A class-based async context manager mock for more complex scenarios.

    Can be configured with enter/exit callbacks and exception handling.
    """

    def __init__(self, return_value=None, exit_exception=None):
        """Initialize the mock.

        Args:
            return_value: Value to return from __aenter__
            exit_exception: Exception to raise in __aexit__ (for testing cleanup)
        """
        self.return_value = return_value
        self.exit_exception = exit_exception
        self.entered = False
        self.exited = False
        self.exit_args = None

    async def __aenter__(self):
        self.entered = True
        return self.return_value

    async def __aexit__(self, exc_type, exc_val, exc_tb):
        self.exited = True
        self.exit_args = (exc_type, exc_val, exc_tb)
        if self.exit_exception:
            raise self.exit_exception
        return None


@pytest.fixture
def async_context_manager_factory():
    """Factory fixture to create async context manager mocks.

    Usage:
        def test_something(async_context_manager_factory):
            mc = MagicMock()
            ctx_mock = async_context_manager_factory(mc)
            with patch.object(module, "connect_with_lock", return_value=ctx_mock):
                ...
    """

    def factory(return_value=None, exit_exception=None):
        return AsyncContextManagerMock(return_value, exit_exception)

    return factory


@pytest.fixture
def mock_repeater_contact():
    """Mock repeater contact for testing."""
    return {
        "adv_name": "TestRepeater",
        "public_key": "abc123def456",
        "last_seen": 1234567890,
    }


@pytest.fixture
def mock_repeater_status(sample_repeater_metrics):
    """Mock repeater status response."""
    return sample_repeater_metrics.copy()


@pytest.fixture
def mock_run_command_factory():
    """Factory to create mock run_command functions with configurable responses.

    Usage:
        def test_something(mock_run_command_factory):
            responses = {
                "send_appstart": (True, "SELF_INFO", {}, None),
                "get_stats_core": (True, "STATS_CORE", {"battery_mv": 3850}, None),
            }
            mock_run = mock_run_command_factory(responses)
            with patch.object(module, "run_command", side_effect=mock_run):
                ...
    """

    def factory(responses: dict, default_response=None):
        """Create a mock run_command function.

        Args:
            responses: Dict mapping command names to (ok, evt_type, payload, err) tuples
            default_response: Response for commands not in responses dict.
                If None, returns (False, None, None, "Unknown command")
        """
        if default_response is None:
            default_response = (False, None, None, "Unknown command")

        async def mock_run_command(mc, coro, name):
            return responses.get(name, default_response)

        return mock_run_command

    return factory
641 tests/scripts/test_collect_companion.py Normal file
@@ -0,0 +1,641 @@
"""Tests for collect_companion.py script entry point.

These tests verify the actual script behavior, not just the library code.
The script is the entry point that users run - if it breaks, everything breaks.
"""

import inspect
from unittest.mock import MagicMock, patch

import pytest

from tests.scripts.conftest import load_script_module


def load_collect_companion():
    """Load collect_companion.py as a module."""
    return load_script_module("collect_companion.py")


class TestCollectCompanionImport:
    """Verify script can be imported without errors."""

    def test_imports_successfully(self, configured_env):
        """Script should import without errors."""
        module = load_collect_companion()

        assert hasattr(module, "main")
        assert hasattr(module, "collect_companion")
        assert callable(module.main)

    def test_collect_companion_is_async(self, configured_env):
        """collect_companion() should be an async function."""
        module = load_collect_companion()
        assert inspect.iscoroutinefunction(module.collect_companion)


class TestCollectCompanionExitCodes:
    """Test exit code behavior - critical for monitoring."""

    @pytest.mark.asyncio
    async def test_returns_zero_on_successful_collection(
        self, configured_env, async_context_manager_factory, mock_run_command_factory
    ):
        """Successful collection should return exit code 0."""
        module = load_collect_companion()

        responses = {
            "send_appstart": (True, "SELF_INFO", {}, None),
            "send_device_query": (True, "DEVICE_INFO", {}, None),
            "get_time": (True, "TIME", {"time": 1234567890}, None),
            "get_self_telemetry": (True, "TELEMETRY", {}, None),
            "get_custom_vars": (True, "CUSTOM_VARS", {}, None),
            "get_contacts": (True, "CONTACTS", {"c1": {}, "c2": {}}, None),
            "get_stats_core": (
                True,
                "STATS_CORE",
                {"battery_mv": 3850, "uptime_secs": 86400},
                None,
            ),
            "get_stats_radio": (True, "STATS_RADIO", {"noise_floor": -115}, None),
            "get_stats_packets": (True, "STATS_PACKETS", {"recv": 100, "sent": 50}, None),
        }

        mc = MagicMock()
        mc.commands = MagicMock()
        ctx_mock = async_context_manager_factory(mc)

        with (
            patch.object(module, "connect_with_lock", return_value=ctx_mock),
            patch.object(
                module, "run_command", side_effect=mock_run_command_factory(responses)
            ),
            patch.object(module, "insert_metrics", return_value=5),
        ):
            exit_code = await module.collect_companion()

        assert exit_code == 0

    @pytest.mark.asyncio
    async def test_returns_one_on_connection_failure(
        self, configured_env, async_context_manager_factory
    ):
        """Failed connection should return exit code 1."""
        module = load_collect_companion()

        # Connection returns None (failed)
        ctx_mock = async_context_manager_factory(None)

        with patch.object(module, "connect_with_lock", return_value=ctx_mock):
            exit_code = await module.collect_companion()

        assert exit_code == 1

    @pytest.mark.asyncio
    async def test_returns_one_when_no_commands_succeed(
        self, configured_env, async_context_manager_factory
    ):
        """No successful commands should return exit code 1."""
        module = load_collect_companion()

        # All commands fail
        async def mock_run_command_fail(mc, coro, name):
            return (False, None, None, "Command failed")

        mc = MagicMock()
        mc.commands = MagicMock()
        ctx_mock = async_context_manager_factory(mc)

        with (
            patch.object(module, "connect_with_lock", return_value=ctx_mock),
            patch.object(module, "run_command", side_effect=mock_run_command_fail),
        ):
            exit_code = await module.collect_companion()

        assert exit_code == 1

    @pytest.mark.asyncio
    async def test_returns_one_on_database_error(
        self, configured_env, async_context_manager_factory, mock_run_command_factory
    ):
        """Database write failure should return exit code 1."""
        module = load_collect_companion()

        responses = {
            "get_stats_core": (True, "STATS_CORE", {"battery_mv": 3850}, None),
        }
        # Default to success for other commands
        default = (True, "OK", {}, None)

        mc = MagicMock()
        mc.commands = MagicMock()
        ctx_mock = async_context_manager_factory(mc)

        with (
            patch.object(module, "connect_with_lock", return_value=ctx_mock),
            patch.object(
                module,
                "run_command",
                side_effect=mock_run_command_factory(responses, default),
            ),
            patch.object(module, "insert_metrics", side_effect=Exception("DB error")),
        ):
            exit_code = await module.collect_companion()

        assert exit_code == 1


class TestCollectCompanionMetrics:
    """Test metric collection behavior."""

    @pytest.mark.asyncio
    async def test_collects_all_numeric_fields_from_stats(
        self, configured_env, async_context_manager_factory, mock_run_command_factory
    ):
        """Should insert all numeric fields from stats responses."""
        module = load_collect_companion()
        collected_metrics = {}

        responses = {
            "send_appstart": (True, "SELF_INFO", {}, None),
            "send_device_query": (True, "DEVICE_INFO", {}, None),
            "get_time": (True, "TIME", {}, None),
            "get_self_telemetry": (True, "TELEMETRY", {}, None),
            "get_custom_vars": (True, "CUSTOM_VARS", {}, None),
            "get_contacts": (True, "CONTACTS", {"c1": {}, "c2": {}, "c3": {}}, None),
            "get_stats_core": (
                True,
                "STATS_CORE",
                {"battery_mv": 3850, "uptime_secs": 86400, "errors": 0},
                None,
            ),
            "get_stats_radio": (
                True,
                "STATS_RADIO",
                {"noise_floor": -115, "last_rssi": -85, "last_snr": 7.5},
                None,
            ),
            "get_stats_packets": (True, "STATS_PACKETS", {"recv": 100, "sent": 50}, None),
        }

        def capture_metrics(ts, role, metrics, conn=None):
            collected_metrics.update(metrics)
            return len(metrics)

        mc = MagicMock()
        mc.commands = MagicMock()
        ctx_mock = async_context_manager_factory(mc)

        with (
            patch.object(module, "connect_with_lock", return_value=ctx_mock),
            patch.object(
                module, "run_command", side_effect=mock_run_command_factory(responses)
            ),
            patch.object(module, "insert_metrics", side_effect=capture_metrics),
        ):
            await module.collect_companion()

        # Verify all expected metrics were collected
        assert collected_metrics["battery_mv"] == 3850
        assert collected_metrics["uptime_secs"] == 86400
        assert collected_metrics["contacts"] == 3  # From get_contacts count
        assert collected_metrics["recv"] == 100
        assert collected_metrics["sent"] == 50
        assert collected_metrics["noise_floor"] == -115

    @pytest.mark.asyncio
    async def test_telemetry_not_extracted_when_disabled(
        self, configured_env, async_context_manager_factory, monkeypatch
    ):
        """Telemetry metrics should NOT be extracted when TELEMETRY_ENABLED=0 (default)."""
        module = load_collect_companion()
        collected_metrics = {}

        async def mock_run_command(mc, coro, name):
            if name == "get_self_telemetry":
                # Return telemetry payload with LPP data
                return (True, "TELEMETRY", {"lpp": b"\x00\x67\x01\x00"}, None)
            if name == "get_stats_core":
                return (True, "STATS_CORE", {"battery_mv": 3850}, None)
            return (True, "OK", {}, None)

        def capture_metrics(ts, role, metrics, conn=None):
            collected_metrics.update(metrics)
            return len(metrics)

        mc = MagicMock()
        mc.commands = MagicMock()
        ctx_mock = async_context_manager_factory(mc)

        with (
            patch.object(module, "connect_with_lock", return_value=ctx_mock),
            patch.object(module, "run_command", side_effect=mock_run_command),
            patch.object(module, "insert_metrics", side_effect=capture_metrics),
        ):
            await module.collect_companion()

        # No telemetry.* keys should be present
        telemetry_keys = [k for k in collected_metrics if k.startswith("telemetry.")]
        assert len(telemetry_keys) == 0

    @pytest.mark.asyncio
    async def test_telemetry_extracted_when_enabled(
        self, configured_env, async_context_manager_factory, monkeypatch
    ):
        """Telemetry metrics SHOULD be extracted when TELEMETRY_ENABLED=1."""
        # Enable telemetry BEFORE loading the module
        monkeypatch.setenv("TELEMETRY_ENABLED", "1")
        import meshmon.env

        meshmon.env._config = None

        module = load_collect_companion()
        collected_metrics = {}

        # LPP data format: list of dictionaries with type, channel, value
        # This matches the format from the MeshCore API
        lpp_data = [
            {"type": "temperature", "channel": 0, "value": 25.5},
        ]

        async def mock_run_command(mc, coro, name):
            if name == "get_self_telemetry":
                return (True, "TELEMETRY", {"lpp": lpp_data}, None)
            if name == "get_stats_core":
                return (True, "STATS_CORE", {"battery_mv": 3850}, None)
            return (True, "OK", {}, None)

        def capture_metrics(ts, role, metrics, conn=None):
            collected_metrics.update(metrics)
            return len(metrics)

        mc = MagicMock()
        mc.commands = MagicMock()
        ctx_mock = async_context_manager_factory(mc)

        with (
            patch.object(module, "connect_with_lock", return_value=ctx_mock),
            patch.object(module, "run_command", side_effect=mock_run_command),
            patch.object(module, "insert_metrics", side_effect=capture_metrics),
        ):
            exit_code = await module.collect_companion()

        assert exit_code == 0
        # Telemetry keys should be present
        telemetry_keys = [k for k in collected_metrics if k.startswith("telemetry.")]
        assert len(telemetry_keys) > 0, (
            f"Expected telemetry keys, got: {collected_metrics.keys()}"
        )
        assert "telemetry.temperature.0" in collected_metrics
        assert collected_metrics["telemetry.temperature.0"] == 25.5

    @pytest.mark.asyncio
    async def test_telemetry_extraction_handles_invalid_lpp(
        self, configured_env, async_context_manager_factory, monkeypatch
    ):
        """Telemetry extraction should handle invalid LPP data gracefully."""
        monkeypatch.setenv("TELEMETRY_ENABLED", "1")
        import meshmon.env

        meshmon.env._config = None

        module = load_collect_companion()
        collected_metrics = {}

        async def mock_run_command(mc, coro, name):
            if name == "get_self_telemetry":
                # Invalid LPP data (too short)
                return (True, "TELEMETRY", {"lpp": b"\x00"}, None)
            if name == "get_stats_core":
                return (True, "STATS_CORE", {"battery_mv": 3850}, None)
            return (True, "OK", {}, None)

        def capture_metrics(ts, role, metrics, conn=None):
            collected_metrics.update(metrics)
            return len(metrics)

        mc = MagicMock()
        mc.commands = MagicMock()
        ctx_mock = async_context_manager_factory(mc)

        with (
            patch.object(module, "connect_with_lock", return_value=ctx_mock),
            patch.object(module, "run_command", side_effect=mock_run_command),
            patch.object(module, "insert_metrics", side_effect=capture_metrics),
        ):
            exit_code = await module.collect_companion()

        # Should still succeed - just no telemetry extracted
        assert exit_code == 0
        # No telemetry keys because LPP was invalid
        telemetry_keys = [k for k in collected_metrics if k.startswith("telemetry.")]
        assert len(telemetry_keys) == 0


class TestPartialSuccessScenarios:
    """Test behavior when only some commands succeed."""

    @pytest.mark.asyncio
    async def test_succeeds_with_only_stats_core(
        self, configured_env, async_context_manager_factory
    ):
        """Should succeed if only stats_core returns metrics."""
        module = load_collect_companion()
        collected_metrics = {}

        async def mock_run_command(mc, coro, name):
            if name == "get_stats_core":
                return (
                    True,
                    "STATS_CORE",
                    {"battery_mv": 3850, "uptime_secs": 1000},
                    None,
                )
            # All other commands fail
            return (False, None, None, "Timeout")

        def capture_metrics(ts, role, metrics, conn=None):
            collected_metrics.update(metrics)
            return len(metrics)

        mc = MagicMock()
        mc.commands = MagicMock()
        ctx_mock = async_context_manager_factory(mc)

        with (
            patch.object(module, "connect_with_lock", return_value=ctx_mock),
            patch.object(module, "run_command", side_effect=mock_run_command),
            patch.object(module, "insert_metrics", side_effect=capture_metrics),
        ):
            exit_code = await module.collect_companion()

        # Should succeed because stats_core succeeded and had metrics
        assert exit_code == 0
        assert collected_metrics["battery_mv"] == 3850

    @pytest.mark.asyncio
    async def test_succeeds_with_only_contacts(
        self, configured_env, async_context_manager_factory
    ):
        """Should succeed if only the contacts command returns data."""
        module = load_collect_companion()
        collected_metrics = {}

        async def mock_run_command(mc, coro, name):
            if name == "get_contacts":
                return (True, "CONTACTS", {"c1": {}, "c2": {}}, None)
            # Stats commands succeed but return no numeric data
            if name.startswith("get_stats"):
                return (True, "OK", {}, None)
            # Other commands succeed
            return (True, "OK", {}, None)

        def capture_metrics(ts, role, metrics, conn=None):
            collected_metrics.update(metrics)
            return len(metrics)

        mc = MagicMock()
        mc.commands = MagicMock()
        ctx_mock = async_context_manager_factory(mc)

        with (
            patch.object(module, "connect_with_lock", return_value=ctx_mock),
            patch.object(module, "run_command", side_effect=mock_run_command),
            patch.object(module, "insert_metrics", side_effect=capture_metrics),
        ):
            exit_code = await module.collect_companion()

        assert exit_code == 0
        assert collected_metrics["contacts"] == 2

    @pytest.mark.asyncio
    async def test_fails_when_metrics_empty_despite_success(
        self, configured_env, async_context_manager_factory
    ):
        """Should fail if commands succeed but no metrics are collected."""
        module = load_collect_companion()

        async def mock_run_command(mc, coro, name):
            # Commands succeed but return empty/non-dict payloads
            if name == "get_stats_core":
                return (True, "STATS_CORE", None, None)  # No payload
            if name == "get_stats_radio":
                return (True, "STATS_RADIO", "not a dict", None)  # Invalid payload
            if name == "get_stats_packets":
                return (True, "STATS_PACKETS", {}, None)  # Empty payload
            if name == "get_contacts":
                return (False, None, None, "Failed")  # Fails
            return (True, "OK", {}, None)

        mc = MagicMock()
        mc.commands = MagicMock()
        ctx_mock = async_context_manager_factory(mc)

        with (
            patch.object(module, "connect_with_lock", return_value=ctx_mock),
            patch.object(module, "run_command", side_effect=mock_run_command),
        ):
            exit_code = await module.collect_companion()

        # Should fail because no metrics were collected
        assert exit_code == 1


class TestExceptionHandling:
    """Test exception handling in the command loop (lines 165-166)."""

    @pytest.mark.asyncio
    async def test_handles_exception_in_command_loop(
        self, configured_env, async_context_manager_factory
    ):
        """Should catch and log exceptions during command execution."""
        module = load_collect_companion()

        call_count = 0

        async def mock_run_command_with_exception(mc, coro, name):
            nonlocal call_count
            call_count += 1
            if call_count == 3:  # Fail on third command
                raise RuntimeError("Unexpected network error")
            return (True, "OK", {}, None)

        mc = MagicMock()
        mc.commands = MagicMock()
        ctx_mock = async_context_manager_factory(mc)

        with (
            patch.object(module, "connect_with_lock", return_value=ctx_mock),
            patch.object(
                module, "run_command", side_effect=mock_run_command_with_exception
            ),
            patch.object(module, "log") as mock_log,
        ):
            exit_code = await module.collect_companion()

        # Should have logged the error
        error_calls = [
            c
            for c in mock_log.error.call_args_list
            if "Error during collection" in str(c)
        ]
        assert len(error_calls) > 0

        # Should return 1 because the exception interrupted collection
        assert exit_code == 1

    @pytest.mark.asyncio
    async def test_exception_closes_connection_properly(
        self, configured_env, async_context_manager_factory
    ):
        """Context manager should still exit properly after an exception."""
        module = load_collect_companion()

        async def mock_run_command_raise(mc, coro, name):
            raise RuntimeError("Connection lost")

        mc = MagicMock()
        mc.commands = MagicMock()
        ctx_mock = async_context_manager_factory(mc)

        with (
            patch.object(module, "connect_with_lock", return_value=ctx_mock),
            patch.object(module, "run_command", side_effect=mock_run_command_raise),
        ):
            await module.collect_companion()

        # Verify the context manager was properly exited
        assert ctx_mock.exited is True


class TestMainEntryPoint:
    """Test the main() entry point behavior."""

    def test_main_calls_init_db(self, configured_env):
        """main() should initialize the database before collection."""
        module = load_collect_companion()

        with (
            patch.object(module, "init_db") as mock_init,
            patch.object(module, "collect_companion", new=MagicMock(return_value=0)),
            patch.object(module, "asyncio") as mock_asyncio,
            patch.object(module, "sys"),
        ):
            # collect_companion is patched to return a non-coroutine to avoid an
            # unawaited-coroutine warning
            mock_asyncio.run.return_value = 0
            module.main()

        mock_init.assert_called_once()

    def test_main_exits_with_collection_result(self, configured_env):
        """main() should exit with the collection exit code."""
        module = load_collect_companion()

        with (
            patch.object(module, "init_db"),
            patch.object(module, "collect_companion", new=MagicMock(return_value=1)),
            patch.object(module, "asyncio") as mock_asyncio,
            patch.object(module, "sys") as mock_sys,
        ):
            # collect_companion is patched to return a non-coroutine to avoid an
            # unawaited-coroutine warning
            mock_asyncio.run.return_value = 1  # Collection failed
            module.main()

        mock_sys.exit.assert_called_once_with(1)

    def test_main_runs_collect_companion_async(self, configured_env):
        """main() should run collect_companion() with asyncio.run()."""
        module = load_collect_companion()

        with (
            patch.object(module, "init_db"),
            patch.object(module, "collect_companion", new=MagicMock(return_value=0)),
            patch.object(module, "asyncio") as mock_asyncio,
            patch.object(module, "sys"),
        ):
            # collect_companion is patched to return a non-coroutine to avoid an
            # unawaited-coroutine warning
            mock_asyncio.run.return_value = 0
            module.main()

        # asyncio.run should be called exactly once
        mock_asyncio.run.assert_called_once()


class TestDatabaseIntegration:
    """Test that collection actually writes to the database."""

    @pytest.mark.asyncio
    async def test_writes_metrics_to_database(
        self,
        configured_env,
        initialized_db,
        async_context_manager_factory,
        mock_run_command_factory,
    ):
"""Collection should write metrics to database."""
|
||||
from meshmon.db import get_latest_metrics
|
||||
|
||||
module = load_collect_companion()
|
||||
|
||||
responses = {
|
||||
"send_appstart": (True, "SELF_INFO", {}, None),
|
||||
"send_device_query": (True, "DEVICE_INFO", {}, None),
|
||||
"get_time": (True, "TIME", {}, None),
|
||||
"get_self_telemetry": (True, "TELEMETRY", {}, None),
|
||||
"get_custom_vars": (True, "CUSTOM_VARS", {}, None),
|
||||
"get_contacts": (True, "CONTACTS", {"c1": {}}, None),
|
||||
"get_stats_core": (
|
||||
True,
|
||||
"STATS_CORE",
|
||||
{"battery_mv": 3777, "uptime_secs": 12345},
|
||||
None,
|
||||
),
|
||||
"get_stats_radio": (True, "STATS_RADIO", {}, None),
|
||||
"get_stats_packets": (True, "STATS_PACKETS", {"recv": 999, "sent": 888}, None),
|
||||
}
|
||||
|
||||
mc = MagicMock()
|
||||
mc.commands = MagicMock()
|
||||
ctx_mock = async_context_manager_factory(mc)
|
||||
|
||||
with (
|
||||
patch.object(module, "connect_with_lock", return_value=ctx_mock),
|
||||
patch.object(
|
||||
module, "run_command", side_effect=mock_run_command_factory(responses)
|
||||
),
|
||||
):
|
||||
exit_code = await module.collect_companion()
|
||||
|
||||
assert exit_code == 0
|
||||
|
||||
# Verify data was written to database
|
||||
latest = get_latest_metrics("companion")
|
||||
assert latest is not None
|
||||
assert latest["battery_mv"] == 3777
|
||||
assert latest["recv"] == 999
|
||||
assert latest["sent"] == 888
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_writes_telemetry_to_database_when_enabled(
|
||||
self, configured_env, initialized_db, async_context_manager_factory, monkeypatch
|
||||
):
|
||||
"""Telemetry should be written to database when enabled."""
|
||||
monkeypatch.setenv("TELEMETRY_ENABLED", "1")
|
||||
import meshmon.env
|
||||
|
||||
meshmon.env._config = None
|
||||
|
||||
from meshmon.db import get_latest_metrics
|
||||
|
||||
module = load_collect_companion()
|
||||
|
||||
# LPP data format: list of dictionaries with type, channel, value
|
||||
lpp_data = [
|
||||
{"type": "temperature", "channel": 0, "value": 25.5},
|
||||
]
|
||||
|
||||
async def mock_run_command(mc, coro, name):
|
||||
if name == "get_self_telemetry":
|
||||
return (True, "TELEMETRY", {"lpp": lpp_data}, None)
|
||||
if name == "get_stats_core":
|
||||
return (True, "STATS_CORE", {"battery_mv": 3850}, None)
|
||||
return (True, "OK", {}, None)
|
||||
|
||||
mc = MagicMock()
|
||||
mc.commands = MagicMock()
|
||||
ctx_mock = async_context_manager_factory(mc)
|
||||
|
||||
with (
|
||||
patch.object(module, "connect_with_lock", return_value=ctx_mock),
|
||||
patch.object(module, "run_command", side_effect=mock_run_command),
|
||||
):
|
||||
exit_code = await module.collect_companion()
|
||||
|
||||
assert exit_code == 0
|
||||
|
||||
# Verify telemetry was written to database
|
||||
latest = get_latest_metrics("companion")
|
||||
assert latest is not None
|
||||
assert "telemetry.temperature.0" in latest
|
||||
assert latest["telemetry.temperature.0"] == 25.5
|
||||
880 tests/scripts/test_collect_repeater.py Normal file
@@ -0,0 +1,880 @@
"""Tests for collect_repeater.py script entry point.

These tests verify the actual script behavior, including:
- Finding repeater contact by name or key prefix
- Circuit breaker integration
- Exit codes for monitoring
- Database writes
"""

import inspect
from unittest.mock import AsyncMock, MagicMock, patch

import pytest

from tests.scripts.conftest import load_script_module


def load_collect_repeater():
    """Load collect_repeater.py as a module."""
    return load_script_module("collect_repeater.py")


class TestCollectRepeaterImport:
    """Verify script can be imported without errors."""

    def test_imports_successfully(self, configured_env):
        """Script should import without errors."""
        module = load_collect_repeater()

        assert hasattr(module, "main")
        assert hasattr(module, "collect_repeater")
        assert hasattr(module, "find_repeater_contact")
        assert hasattr(module, "query_repeater_with_retry")
        assert callable(module.main)

    def test_collect_repeater_is_async(self, configured_env):
        """collect_repeater() should be an async function."""
        module = load_collect_repeater()
        assert inspect.iscoroutinefunction(module.collect_repeater)

    def test_find_repeater_contact_is_async(self, configured_env):
        """find_repeater_contact() should be an async function."""
        module = load_collect_repeater()
        assert inspect.iscoroutinefunction(module.find_repeater_contact)


class TestFindRepeaterContact:
    """Test the find_repeater_contact function."""

    @pytest.mark.asyncio
    async def test_finds_contact_by_name(self, configured_env, monkeypatch):
        """Should find repeater by advertised name."""
        monkeypatch.setenv("REPEATER_NAME", "MyRepeater")
        import meshmon.env

        meshmon.env._config = None

        module = load_collect_repeater()

        mc = MagicMock()
        mc.commands = MagicMock()
        mc.contacts = {"abc123": {"adv_name": "MyRepeater", "public_key": "abc123"}}

        with (
            patch.object(module, "run_command") as mock_run,
            patch.object(module, "get_contact_by_name") as mock_get,
        ):
            mock_run.return_value = (True, "CONTACTS", mc.contacts, None)
            mock_get.return_value = mc.contacts["abc123"]

            contact = await module.find_repeater_contact(mc)

        assert contact is not None
        assert contact["adv_name"] == "MyRepeater"
        mock_get.assert_called_once_with(mc, "MyRepeater")

    @pytest.mark.asyncio
    async def test_finds_contact_by_key_prefix(self, configured_env, monkeypatch):
        """Should find repeater by public key prefix when name not set."""
        monkeypatch.setenv("REPEATER_KEY_PREFIX", "abc123")
        import meshmon.env

        meshmon.env._config = None

        module = load_collect_repeater()

        mc = MagicMock()
        mc.commands = MagicMock()
        mc.contacts = {"abc123def456": {"adv_name": "SomeNode", "public_key": "abc123def456"}}

        with (
            patch.object(module, "run_command") as mock_run,
            patch.object(module, "get_contact_by_name", return_value=None),
            patch.object(module, "get_contact_by_key_prefix") as mock_get,
        ):
            mock_run.return_value = (True, "CONTACTS", mc.contacts, None)
            mock_get.return_value = mc.contacts["abc123def456"]

            contact = await module.find_repeater_contact(mc)

        assert contact is not None
        assert contact["public_key"] == "abc123def456"
        mock_get.assert_called_once()

    @pytest.mark.asyncio
    async def test_fallback_to_manual_name_search(self, configured_env, monkeypatch):
        """Should fall back to manual name search in payload dict."""
        monkeypatch.setenv("REPEATER_NAME", "ManualFind")
        import meshmon.env

        meshmon.env._config = None

        module = load_collect_repeater()

        mc = MagicMock()
        mc.commands = MagicMock()
        contacts_dict = {"xyz789": {"adv_name": "ManualFind", "public_key": "xyz789"}}

        with (
            patch.object(module, "run_command") as mock_run,
            patch.object(module, "get_contact_by_name", return_value=None),
        ):
            mock_run.return_value = (True, "CONTACTS", contacts_dict, None)
            # get_contact_by_name returns None, forcing manual search
            contact = await module.find_repeater_contact(mc)

        assert contact is not None
        assert contact["adv_name"] == "ManualFind"

    @pytest.mark.asyncio
    async def test_case_insensitive_name_match(self, configured_env, monkeypatch):
        """Name search should be case-insensitive."""
        monkeypatch.setenv("REPEATER_NAME", "myrepeater")  # lowercase
        import meshmon.env

        meshmon.env._config = None

        module = load_collect_repeater()

        mc = MagicMock()
        mc.commands = MagicMock()
        contacts_dict = {"key1": {"adv_name": "MyRepeater", "public_key": "key1"}}  # Mixed case

        with (
            patch.object(module, "run_command") as mock_run,
            patch.object(module, "get_contact_by_name", return_value=None),
        ):
            mock_run.return_value = (True, "CONTACTS", contacts_dict, None)
            contact = await module.find_repeater_contact(mc)

        assert contact is not None
        assert contact["adv_name"] == "MyRepeater"

    @pytest.mark.asyncio
    async def test_returns_none_when_not_found(self, configured_env, monkeypatch):
        """Should return None when repeater not in contacts."""
        monkeypatch.setenv("REPEATER_NAME", "NonExistent")
        import meshmon.env

        meshmon.env._config = None

        module = load_collect_repeater()

        mc = MagicMock()
        mc.commands = MagicMock()

        with (
            patch.object(module, "run_command") as mock_run,
            patch.object(module, "get_contact_by_name", return_value=None),
        ):
            mock_run.return_value = (True, "CONTACTS", {}, None)
            contact = await module.find_repeater_contact(mc)

        assert contact is None

    @pytest.mark.asyncio
    async def test_returns_none_when_get_contacts_fails(self, configured_env, monkeypatch):
        """Should return None when get_contacts command fails."""
        monkeypatch.setenv("REPEATER_NAME", "AnyName")
        import meshmon.env

        meshmon.env._config = None

        module = load_collect_repeater()

        mc = MagicMock()
        mc.commands = MagicMock()

        with patch.object(module, "run_command") as mock_run:
            mock_run.return_value = (False, None, None, "Connection failed")

            contact = await module.find_repeater_contact(mc)

        assert contact is None


class TestCircuitBreakerIntegration:
    """Test circuit breaker integration in collect_repeater."""

    @pytest.mark.asyncio
    async def test_skips_collection_when_circuit_open(self, configured_env):
        """Should return 0 and skip collection when circuit breaker is open."""
        module = load_collect_repeater()

        # Create mock circuit breaker that is open
        mock_cb = MagicMock()
        mock_cb.is_open.return_value = True
        mock_cb.cooldown_remaining.return_value = 1800

        with (
            patch.object(module, "get_repeater_circuit_breaker", return_value=mock_cb),
            patch.object(module, "connect_with_lock") as mock_connect,
        ):
            exit_code = await module.collect_repeater()

        # Should return 0 (not an error, just skipped)
        assert exit_code == 0
        # Should not have tried to connect
        mock_cb.is_open.assert_called_once()
        mock_connect.assert_not_called()

    @pytest.mark.asyncio
    async def test_records_success_on_successful_status(
        self, configured_env, monkeypatch, async_context_manager_factory
    ):
        """Should record success when status query succeeds."""
        monkeypatch.setenv("REPEATER_NAME", "TestRepeater")
        import meshmon.env

        meshmon.env._config = None

        module = load_collect_repeater()

        mock_cb = MagicMock()
        mock_cb.is_open.return_value = False

        mc = MagicMock()
        mc.commands = MagicMock()
        ctx_mock = async_context_manager_factory(mc)

        with (
            patch.object(module, "get_repeater_circuit_breaker", return_value=mock_cb),
            patch.object(module, "connect_with_lock", return_value=ctx_mock),
            patch.object(module, "run_command") as mock_run,
            patch.object(module, "find_repeater_contact") as mock_find,
            patch.object(module, "query_repeater_with_retry") as mock_query,
            patch.object(module, "insert_metrics", return_value=2),
        ):
            mock_run.return_value = (True, "OK", {}, None)
            mock_find.return_value = {"adv_name": "TestRepeater"}
            mock_query.return_value = (True, {"bat": 3850, "uptime": 86400}, None)

            await module.collect_repeater()

        mock_cb.record_success.assert_called_once()
        mock_cb.record_failure.assert_not_called()

    @pytest.mark.asyncio
    async def test_records_failure_on_status_timeout(
        self, configured_env, monkeypatch, async_context_manager_factory
    ):
        """Should record failure when status query times out."""
        monkeypatch.setenv("REPEATER_NAME", "TestRepeater")
        import meshmon.env

        meshmon.env._config = None

        module = load_collect_repeater()

        mock_cb = MagicMock()
        mock_cb.is_open.return_value = False

        mc = MagicMock()
        mc.commands = MagicMock()
        ctx_mock = async_context_manager_factory(mc)

        with (
            patch.object(module, "get_repeater_circuit_breaker", return_value=mock_cb),
            patch.object(module, "connect_with_lock", return_value=ctx_mock),
            patch.object(module, "run_command") as mock_run,
            patch.object(module, "find_repeater_contact") as mock_find,
            patch.object(module, "query_repeater_with_retry") as mock_query,
        ):
            mock_run.return_value = (True, "OK", {}, None)
            mock_find.return_value = {"adv_name": "TestRepeater"}
            mock_query.return_value = (False, None, "Timeout")

            exit_code = await module.collect_repeater()

        mock_cb.record_failure.assert_called_once()
        mock_cb.record_success.assert_not_called()
        assert exit_code == 1


class TestCollectRepeaterExitCodes:
    """Test exit code behavior - critical for monitoring."""

    @pytest.mark.asyncio
    async def test_returns_zero_on_successful_collection(
        self, configured_env, monkeypatch, async_context_manager_factory
    ):
        """Successful collection should return exit code 0."""
        monkeypatch.setenv("REPEATER_NAME", "TestRepeater")
        import meshmon.env

        meshmon.env._config = None

        module = load_collect_repeater()

        mock_cb = MagicMock()
        mock_cb.is_open.return_value = False

        mc = MagicMock()
        mc.commands = MagicMock()
        ctx_mock = async_context_manager_factory(mc)

        with (
            patch.object(module, "get_repeater_circuit_breaker", return_value=mock_cb),
            patch.object(module, "connect_with_lock", return_value=ctx_mock),
            patch.object(module, "run_command") as mock_run,
            patch.object(module, "find_repeater_contact") as mock_find,
            patch.object(module, "query_repeater_with_retry") as mock_query,
            patch.object(module, "insert_metrics") as mock_insert,
        ):
            mock_run.return_value = (True, "OK", {}, None)
            mock_find.return_value = {"adv_name": "TestRepeater"}
            mock_query.return_value = (
                True,
                {"bat": 3850, "uptime": 86400, "nb_recv": 100},
                None,
            )

            exit_code = await module.collect_repeater()

        assert exit_code == 0
        mock_insert.assert_called_once()
        insert_kwargs = mock_insert.call_args.kwargs
        assert insert_kwargs["role"] == "repeater"
        assert insert_kwargs["metrics"]["bat"] == 3850
        assert insert_kwargs["metrics"]["nb_recv"] == 100

    @pytest.mark.asyncio
    async def test_returns_one_on_connection_failure(
        self, configured_env, async_context_manager_factory
    ):
        """Failed connection should return exit code 1."""
        module = load_collect_repeater()

        mock_cb = MagicMock()
        mock_cb.is_open.return_value = False
        ctx_mock = async_context_manager_factory(None)  # Connection returns None

        with (
            patch.object(module, "get_repeater_circuit_breaker", return_value=mock_cb),
            patch.object(module, "connect_with_lock", return_value=ctx_mock),
        ):
            exit_code = await module.collect_repeater()

        assert exit_code == 1

    @pytest.mark.asyncio
    async def test_returns_one_when_repeater_not_found(
        self, configured_env, monkeypatch, async_context_manager_factory
    ):
        """Should return 1 when repeater contact not found."""
        monkeypatch.setenv("REPEATER_NAME", "NonExistent")
        import meshmon.env

        meshmon.env._config = None

        module = load_collect_repeater()

        mock_cb = MagicMock()
        mock_cb.is_open.return_value = False

        mc = MagicMock()
        mc.commands = MagicMock()
        ctx_mock = async_context_manager_factory(mc)

        with (
            patch.object(module, "get_repeater_circuit_breaker", return_value=mock_cb),
            patch.object(module, "connect_with_lock", return_value=ctx_mock),
            patch.object(module, "run_command") as mock_run,
            patch.object(module, "find_repeater_contact") as mock_find,
        ):
            mock_run.return_value = (True, "OK", {}, None)
            mock_find.return_value = None  # Not found

            exit_code = await module.collect_repeater()

        assert exit_code == 1


class TestQueryRepeaterWithRetry:
    """Test the retry wrapper for repeater queries."""

    @pytest.mark.asyncio
    async def test_returns_success_on_first_try(self, configured_env):
        """Should return success when command succeeds immediately."""
        module = load_collect_repeater()

        mc = MagicMock()
        contact = {"adv_name": "Test"}

        async def successful_command():
            return {"bat": 3850}

        with patch.object(module, "with_retries") as mock_retries:
            mock_retries.return_value = (True, {"bat": 3850}, None)

            success, payload, err = await module.query_repeater_with_retry(
                mc, contact, "test_cmd", successful_command
            )

        assert success is True
        assert payload == {"bat": 3850}
        assert err is None

    @pytest.mark.asyncio
    async def test_returns_failure_after_retries_exhausted(self, configured_env):
        """Should return failure when all retries fail."""
        module = load_collect_repeater()

        mc = MagicMock()
        contact = {"adv_name": "Test"}

        async def failing_command():
            raise Exception("Timeout")

        with patch.object(module, "with_retries") as mock_retries:
            mock_retries.return_value = (False, None, Exception("Timeout"))

            success, payload, err = await module.query_repeater_with_retry(
                mc, contact, "test_cmd", failing_command
            )

        assert success is False
        assert payload is None
        assert "Timeout" in err

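The circuit-breaker tests in this file only mock an object exposing `is_open()`, `cooldown_remaining()`, `record_success()`, and `record_failure()`. A minimal sketch of a breaker satisfying that interface (hypothetical: the real object returned by `get_repeater_circuit_breaker()` in meshmon may differ; the method names are taken from the mocks, while `failure_threshold` and `cooldown_secs` are illustrative parameters):

```python
import time


class RepeaterCircuitBreaker:
    """Open the circuit after repeated failures; close it after a cooldown."""

    def __init__(self, failure_threshold=3, cooldown_secs=3600):
        self.failure_threshold = failure_threshold
        self.cooldown_secs = cooldown_secs
        self.failures = 0
        self.opened_at = None  # monotonic timestamp when the circuit opened

    def is_open(self):
        if self.opened_at is None:
            return False
        if time.monotonic() - self.opened_at >= self.cooldown_secs:
            # Cooldown elapsed: close the circuit and allow another attempt
            self.opened_at = None
            self.failures = 0
            return False
        return True

    def cooldown_remaining(self):
        if self.opened_at is None:
            return 0
        return max(0, int(self.cooldown_secs - (time.monotonic() - self.opened_at)))

    def record_success(self):
        self.failures = 0
        self.opened_at = None

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.failure_threshold:
            self.opened_at = time.monotonic()
```

Under this sketch, `collect_repeater()` checking `is_open()` first and returning 0 without connecting matches `test_skips_collection_when_circuit_open`: an open circuit is a deliberate skip, not a monitoring failure.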
class TestMainEntryPoint:
    """Test the main() entry point behavior."""

    def test_main_calls_init_db(self, configured_env):
        """main() should initialize database before collection."""
        module = load_collect_repeater()

        with (
            patch.object(module, "init_db") as mock_init,
            patch.object(module, "collect_repeater", new=MagicMock(return_value=0)),
            patch.object(module, "asyncio") as mock_asyncio,
            patch.object(module, "sys"),
        ):
            # Patch collect_repeater to return a non-coroutine to avoid unawaited coroutine warning
            mock_asyncio.run.return_value = 0
            module.main()

        mock_init.assert_called_once()

    def test_main_exits_with_collection_result(self, configured_env):
        """main() should exit with the collection exit code."""
        module = load_collect_repeater()

        with (
            patch.object(module, "init_db"),
            patch.object(module, "collect_repeater", new=MagicMock(return_value=1)),
            patch.object(module, "asyncio") as mock_asyncio,
            patch.object(module, "sys") as mock_sys,
        ):
            # Patch collect_repeater to return a non-coroutine to avoid unawaited coroutine warning
            mock_asyncio.run.return_value = 1  # Collection failed
            module.main()

        mock_sys.exit.assert_called_once_with(1)


class TestDatabaseIntegration:
    """Test that collection actually writes to database."""

    @pytest.mark.asyncio
    async def test_writes_metrics_to_database(
        self, configured_env, initialized_db, monkeypatch, async_context_manager_factory
    ):
        """Collection should write metrics to database."""
        from meshmon.db import get_latest_metrics

        monkeypatch.setenv("REPEATER_NAME", "TestRepeater")
        import meshmon.env

        meshmon.env._config = None

        module = load_collect_repeater()

        mock_cb = MagicMock()
        mock_cb.is_open.return_value = False

        mc = MagicMock()
        mc.commands = MagicMock()
        ctx_mock = async_context_manager_factory(mc)

        with (
            patch.object(module, "get_repeater_circuit_breaker", return_value=mock_cb),
            patch.object(module, "connect_with_lock", return_value=ctx_mock),
            patch.object(module, "run_command") as mock_run,
            patch.object(module, "find_repeater_contact") as mock_find,
            patch.object(module, "query_repeater_with_retry") as mock_query,
        ):
            mock_run.return_value = (True, "OK", {}, None)
            mock_find.return_value = {"adv_name": "TestRepeater"}
            mock_query.return_value = (
                True,
                {"bat": 3777, "uptime": 99999, "nb_recv": 1234, "nb_sent": 567},
                None,
            )

            exit_code = await module.collect_repeater()

        assert exit_code == 0

        # Verify data was written to database
        latest = get_latest_metrics("repeater")
        assert latest is not None
        assert latest["bat"] == 3777
        assert latest["nb_recv"] == 1234


class TestFindRepeaterContactEdgeCases:
    """Test edge cases in find_repeater_contact."""

    @pytest.mark.asyncio
    async def test_finds_contact_in_payload_dict(self, configured_env, monkeypatch):
        """Should find contact in payload dict when mc.contacts is empty."""
        monkeypatch.setenv("REPEATER_NAME", "PayloadRepeater")
        import meshmon.env

        meshmon.env._config = None

        module = load_collect_repeater()

        mc = MagicMock()
        mc.commands = MagicMock()
        mc.contacts = {}  # Empty contacts attribute
        payload_dict = {"pk123": {"adv_name": "PayloadRepeater", "public_key": "pk123"}}

        with (
            patch.object(module, "run_command") as mock_run,
            patch.object(module, "get_contact_by_name", return_value=None),
        ):
            # Return contacts in payload
            mock_run.return_value = (True, "CONTACTS", payload_dict, None)
            # get_contact_by_name returns None, forcing manual search in payload
            contact = await module.find_repeater_contact(mc)

        assert contact is not None
        assert contact["adv_name"] == "PayloadRepeater"

    @pytest.mark.asyncio
    async def test_finds_contact_by_key_prefix_manual_search(self, configured_env, monkeypatch):
        """Should find contact by key prefix via manual search in payload."""
        monkeypatch.setenv("REPEATER_KEY_PREFIX", "abc")
        import meshmon.env

        meshmon.env._config = None

        module = load_collect_repeater()

        mc = MagicMock()
        mc.commands = MagicMock()
        contacts_dict = {"abc123xyz": {"adv_name": "KeyPrefixNode"}}

        with (
            patch.object(module, "run_command") as mock_run,
            patch.object(module, "get_contact_by_name", return_value=None),
            patch.object(module, "get_contact_by_key_prefix", return_value=None),
        ):
            mock_run.return_value = (True, "CONTACTS", contacts_dict, None)
            # Both helper functions return None, forcing manual search
            contact = await module.find_repeater_contact(mc)

        assert contact is not None
        assert contact["adv_name"] == "KeyPrefixNode"

    @pytest.mark.asyncio
    async def test_prints_available_contacts_when_not_found(self, configured_env, monkeypatch):
        """Should print available contacts when repeater not found."""
        monkeypatch.setenv("REPEATER_NAME", "NonExistent")
        import meshmon.env

        meshmon.env._config = None

        module = load_collect_repeater()

        mc = MagicMock()
        mc.commands = MagicMock()
        contacts_dict = {
            "key1": {"adv_name": "Node1", "name": "alt1"},
            "key2": {"adv_name": "Node2"},
            "key3": {},  # No name fields
        }

        with (
            patch.object(module, "run_command") as mock_run,
            patch.object(module, "get_contact_by_name", return_value=None),
            patch.object(module, "log") as mock_log,
        ):
            mock_run.return_value = (True, "CONTACTS", contacts_dict, None)
            contact = await module.find_repeater_contact(mc)

        assert contact is None
        # Should have logged available contacts
        mock_log.info.assert_called()


class TestLoginFunctionality:
    """Test optional login functionality."""

    @pytest.mark.asyncio
    async def test_attempts_login_when_password_set(
        self, configured_env, monkeypatch, async_context_manager_factory
    ):
        """Should attempt login when REPEATER_PASSWORD is set."""
        monkeypatch.setenv("REPEATER_NAME", "TestRepeater")
        monkeypatch.setenv("REPEATER_PASSWORD", "secret123")
        import meshmon.env

        meshmon.env._config = None

        module = load_collect_repeater()

        mock_cb = MagicMock()
        mock_cb.is_open.return_value = False

        mc = MagicMock()
        mc.commands = MagicMock()
        mc.commands.send_login = MagicMock()
        ctx_mock = async_context_manager_factory(mc)

        with (
            patch.object(module, "get_repeater_circuit_breaker", return_value=mock_cb),
            patch.object(module, "connect_with_lock", return_value=ctx_mock),
            patch.object(module, "run_command") as mock_run,
            patch.object(module, "find_repeater_contact") as mock_find,
            patch.object(module, "extract_contact_info") as mock_extract,
            patch.object(module, "query_repeater_with_retry") as mock_query,
            patch.object(module, "insert_metrics", return_value=1),
        ):
            # Return success for all commands
            mock_run.return_value = (True, "OK", {}, None)
            mock_find.return_value = {"adv_name": "TestRepeater"}
            mock_extract.return_value = {"adv_name": "TestRepeater"}
            mock_query.return_value = (True, {"bat": 3850}, None)

            await module.collect_repeater()

        # Verify login was attempted (run_command called with send_login)
        login_calls = [c for c in mock_run.call_args_list if c[0][2] == "send_login"]
        assert len(login_calls) == 1

    @pytest.mark.asyncio
    async def test_handles_login_exception(
        self, configured_env, monkeypatch, async_context_manager_factory
    ):
        """Should handle exception during login gracefully."""
        monkeypatch.setenv("REPEATER_NAME", "TestRepeater")
        monkeypatch.setenv("REPEATER_PASSWORD", "secret123")
        import meshmon.env

        meshmon.env._config = None

        module = load_collect_repeater()

        mock_cb = MagicMock()
        mock_cb.is_open.return_value = False

        mc = MagicMock()
        mc.commands = MagicMock()
        mc.commands.send_login = MagicMock(side_effect=Exception("Login not supported"))
        ctx_mock = async_context_manager_factory(mc)

        call_count = 0

        async def mock_run_command(mc, coro, name):
            nonlocal call_count
            call_count += 1
            if name == "send_login":
                raise Exception("Login not supported")
            return (True, "OK", {}, None)

        with (
            patch.object(module, "get_repeater_circuit_breaker", return_value=mock_cb),
            patch.object(module, "connect_with_lock", return_value=ctx_mock),
            patch.object(module, "run_command", side_effect=mock_run_command),
            patch.object(module, "find_repeater_contact") as mock_find,
            patch.object(module, "extract_contact_info") as mock_extract,
            patch.object(module, "query_repeater_with_retry") as mock_query,
            patch.object(module, "insert_metrics", return_value=1),
        ):
            mock_find.return_value = {"adv_name": "TestRepeater"}
            mock_extract.return_value = {"adv_name": "TestRepeater"}
            mock_query.return_value = (True, {"bat": 3850}, None)

            # Should not raise - login failure should be handled
            exit_code = await module.collect_repeater()
            assert exit_code == 0


class TestTelemetryCollection:
    """Test telemetry collection when enabled."""

    @pytest.mark.asyncio
    async def test_collects_telemetry_when_enabled(
        self, configured_env, monkeypatch, initialized_db, async_context_manager_factory
    ):
        """Should collect telemetry when TELEMETRY_ENABLED=1."""
        monkeypatch.setenv("REPEATER_NAME", "TestRepeater")
        monkeypatch.setenv("TELEMETRY_ENABLED", "1")
        import meshmon.env

        meshmon.env._config = None

        module = load_collect_repeater()

        mock_cb = MagicMock()
        mock_cb.is_open.return_value = False

        mc = MagicMock()
        mc.commands = MagicMock()
        ctx_mock = async_context_manager_factory(mc)

        with (
            patch.object(module, "get_repeater_circuit_breaker", return_value=mock_cb),
            patch.object(module, "connect_with_lock", return_value=ctx_mock),
            patch.object(module, "run_command") as mock_run,
            patch.object(module, "find_repeater_contact") as mock_find,
            patch.object(module, "extract_contact_info") as mock_extract,
            patch.object(module, "query_repeater_with_retry") as mock_query,
            patch.object(
                module,
                "with_retries",
                new=AsyncMock(return_value=(True, {"lpp": b"\x00\x67\x01\x00"}, None)),
            ),
            patch.object(module, "extract_lpp_from_payload") as mock_lpp,
            patch.object(module, "extract_telemetry_metrics") as mock_telem,
        ):
            mock_run.return_value = (True, "OK", {}, None)
            mock_find.return_value = {"adv_name": "TestRepeater"}
            mock_extract.return_value = {"adv_name": "TestRepeater"}
            mock_query.return_value = (True, {"bat": 3850}, None)
            mock_lpp.return_value = {"temperature": [(0, 25.5)]}
            mock_telem.return_value = {"telemetry.temperature.0": 25.5}

            exit_code = await module.collect_repeater()

        assert exit_code == 0
        # Verify telemetry was processed
        mock_lpp.assert_called_once()
        mock_telem.assert_called_once()

    @pytest.mark.asyncio
    async def test_handles_telemetry_failure_gracefully(
        self, configured_env, monkeypatch, async_context_manager_factory
    ):
        """Should continue when telemetry collection fails."""
        monkeypatch.setenv("REPEATER_NAME", "TestRepeater")
        monkeypatch.setenv("TELEMETRY_ENABLED", "1")
        import meshmon.env

        meshmon.env._config = None

        module = load_collect_repeater()

        mock_cb = MagicMock()
        mock_cb.is_open.return_value = False

        mc = MagicMock()
        mc.commands = MagicMock()
        ctx_mock = async_context_manager_factory(mc)

        with (
||||
patch.object(module, "get_repeater_circuit_breaker", return_value=mock_cb),
|
||||
patch.object(module, "connect_with_lock", return_value=ctx_mock),
|
||||
patch.object(module, "run_command") as mock_run,
|
||||
patch.object(module, "find_repeater_contact") as mock_find,
|
||||
patch.object(module, "extract_contact_info") as mock_extract,
|
||||
patch.object(module, "query_repeater_with_retry") as mock_query,
|
||||
patch.object(module, "insert_metrics", return_value=1),
|
||||
patch.object(
|
||||
module,
|
||||
"with_retries",
|
||||
new=AsyncMock(return_value=(False, None, Exception("Timeout"))),
|
||||
),
|
||||
):
|
||||
mock_run.return_value = (True, "OK", {}, None)
|
||||
mock_find.return_value = {"adv_name": "TestRepeater"}
|
||||
mock_extract.return_value = {"adv_name": "TestRepeater"}
|
||||
mock_query.return_value = (True, {"bat": 3850}, None)
|
||||
|
||||
# Should still succeed (status metrics were saved)
|
||||
exit_code = await module.collect_repeater()
|
||||
assert exit_code == 0
|
||||
|
||||
|
||||
class TestDatabaseErrorHandling:
|
||||
"""Test database error handling."""
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_returns_one_on_status_db_error(
|
||||
self, configured_env, monkeypatch, async_context_manager_factory
|
||||
):
|
||||
"""Should return 1 when status metrics DB write fails."""
|
||||
monkeypatch.setenv("REPEATER_NAME", "TestRepeater")
|
||||
import meshmon.env
|
||||
|
||||
meshmon.env._config = None
|
||||
|
||||
module = load_collect_repeater()
|
||||
|
||||
mock_cb = MagicMock()
|
||||
mock_cb.is_open.return_value = False
|
||||
|
||||
mc = MagicMock()
|
||||
mc.commands = MagicMock()
|
||||
ctx_mock = async_context_manager_factory(mc)
|
||||
|
||||
with (
|
||||
patch.object(module, "get_repeater_circuit_breaker", return_value=mock_cb),
|
||||
patch.object(module, "connect_with_lock", return_value=ctx_mock),
|
||||
patch.object(module, "run_command") as mock_run,
|
||||
patch.object(module, "find_repeater_contact") as mock_find,
|
||||
patch.object(module, "extract_contact_info") as mock_extract,
|
||||
patch.object(module, "query_repeater_with_retry") as mock_query,
|
||||
patch.object(module, "insert_metrics", side_effect=Exception("DB error")),
|
||||
):
|
||||
mock_run.return_value = (True, "OK", {}, None)
|
||||
mock_find.return_value = {"adv_name": "TestRepeater"}
|
||||
mock_extract.return_value = {"adv_name": "TestRepeater"}
|
||||
mock_query.return_value = (True, {"bat": 3850}, None)
|
||||
|
||||
exit_code = await module.collect_repeater()
|
||||
assert exit_code == 1
|
||||
|
||||
|
||||
class TestExceptionHandling:
|
||||
"""Test general exception handling."""
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_records_failure_on_exception(
|
||||
self, configured_env, monkeypatch, async_context_manager_factory
|
||||
):
|
||||
"""Should record circuit breaker failure on unexpected exception."""
|
||||
monkeypatch.setenv("REPEATER_NAME", "TestRepeater")
|
||||
import meshmon.env
|
||||
|
||||
meshmon.env._config = None
|
||||
|
||||
module = load_collect_repeater()
|
||||
|
||||
mock_cb = MagicMock()
|
||||
mock_cb.is_open.return_value = False
|
||||
|
||||
mc = MagicMock()
|
||||
mc.commands = MagicMock()
|
||||
ctx_mock = async_context_manager_factory(mc)
|
||||
|
||||
with (
|
||||
patch.object(module, "get_repeater_circuit_breaker", return_value=mock_cb),
|
||||
patch.object(module, "connect_with_lock", return_value=ctx_mock),
|
||||
patch.object(module, "run_command") as mock_run,
|
||||
patch.object(module, "find_repeater_contact") as mock_find,
|
||||
patch.object(module, "extract_contact_info") as mock_extract,
|
||||
):
|
||||
mock_run.return_value = (True, "OK", {}, None)
|
||||
mock_find.return_value = {"adv_name": "TestRepeater"}
|
||||
mock_extract.side_effect = Exception("Unexpected error")
|
||||
|
||||
await module.collect_repeater()
|
||||
|
||||
# Circuit breaker should record failure
|
||||
mock_cb.record_failure.assert_called_once()
|
||||