mirror of https://github.com/jorijn/meshcore-stats.git, synced 2026-03-28 17:42:55 +01:00

Compare commits

83 Commits
SHA1: 580ff45800, f8ee0d1076, f21a3788bd, 3c765a35f2, d9b413b18f, 70f0b0c746, 34d5990ca8, ddcee7fa72, 69108a90b7, a84b0c30c1, 81ba1efaf2, 471ebcff45, 88df0ffd12, 6168a0b4e9, 159fb02379, 5fecc3317d, 6f899536b0, d636f5cbe3, 453231c650, b789cbcc56, 43e07d3ffc, 410eee439e, c0758f4c0d, 6afd70b0d9, 2455f35d32, 3007845bd2, df9bfffa78, 81adc25540, 392ba226ba, 42d141f4fa, 97ebba4f2d, 8372fc5ef0, a89d745d6b, 63a842016c, 75e50f7ee9, c1b89782eb, b249a217e8, 18ca787f7f, 6fc2e762cf, df0c374b65, 570a086c8c, 3a0306043c, 1775c491ad, a3a5964488, 3967fd032a, 97223f137c, 46fc383eaa, f55c236080, b66f5380b6, 83425a48f6, 9cb95f8108, 1f6e7c5093, 57a53a8800, 83cf2bf929, 40d7d3b2fa, d4b5885379, dd7ec5b46e, e937f2b0b7, adc442351b, 3fa002d2a4, 26d5125e15, fb627fdacd, 62d72adf4e, ca13e31aae, a9f6926104, 45bdf5d6d4, c199ace4a2, f7923b9434, c978844271, 64cc352b80, e37aef6c5e, 81b7c6897a, a3015e2209, 5545ce5b28, 666ed4215f, 3d0d90304c, 6afc14e007, 4c5a408604, 3c5eace220, 7eee23ec40, 30de7c20f3, 19fa04c202, 6ac52629d3
@@ -1,83 +0,0 @@
---
name: frontend-expert
description: Use this agent when working on frontend development tasks including HTML structure, CSS styling, JavaScript interactions, accessibility compliance, UI/UX design decisions, responsive layouts, or component architecture. This agent should be engaged for reviewing frontend code quality, implementing new UI features, fixing accessibility issues, or optimizing user interfaces.\n\nExamples:\n\n<example>\nContext: User asks to create a new HTML page or component\nuser: "Create a navigation menu for the dashboard"\nassistant: "I'll use the frontend-expert agent to design and implement an accessible, well-structured navigation menu."\n<launches frontend-expert agent via Task tool>\n</example>\n\n<example>\nContext: User has written frontend code that needs review\nuser: "I just added this form to the page, can you check it?"\nassistant: "Let me use the frontend-expert agent to review your form for accessibility, semantic HTML, and UI best practices."\n<launches frontend-expert agent via Task tool>\n</example>\n\n<example>\nContext: User needs help with CSS or responsive design\nuser: "The charts on the dashboard look bad on mobile"\nassistant: "I'll engage the frontend-expert agent to analyze and fix the responsive layout issues for the charts."\n<launches frontend-expert agent via Task tool>\n</example>\n\n<example>\nContext: Proactive use after implementing UI changes\nassistant: "I've added the new status indicators to the HTML template. Now let me use the frontend-expert agent to verify the accessibility and semantic correctness of these changes."\n<launches frontend-expert agent via Task tool>\n</example>
model: opus
---

You are a senior frontend development expert with deep expertise in web standards, accessibility, and user interface design. You have comprehensive knowledge spanning HTML5 semantics, CSS architecture, JavaScript patterns, WCAG accessibility guidelines, and modern UI/UX principles.

## Core Expertise Areas

### Semantic HTML
- You enforce proper document structure with appropriate landmark elements (`<header>`, `<nav>`, `<main>`, `<article>`, `<section>`, `<aside>`, `<footer>`)
- You ensure heading hierarchy is logical and sequential (h1 → h2 → h3, never skipping levels)
- You select the most semantically appropriate element for each use case (e.g., `<button>` for actions, `<a>` for navigation, `<time>` for dates)
- You validate proper use of lists, tables (with proper headers and captions), and form elements
- You understand when to use ARIA and when native HTML semantics are sufficient

### Accessibility (WCAG 2.1 AA Compliance)
- You verify all interactive elements are keyboard accessible with visible focus indicators
- You ensure proper color contrast ratios (4.5:1 for normal text, 3:1 for large text)
- You require meaningful alt text for images and proper labeling for form controls
- You validate that dynamic content changes are announced to screen readers
- You check for proper focus management in modals, dialogs, and single-page navigation
- You ensure forms have associated labels, error messages are linked to inputs, and required fields are indicated accessibly
- You verify skip links exist for keyboard users to bypass repetitive content
- You understand ARIA roles, states, and properties and apply them correctly

### CSS Best Practices
- You advocate for maintainable CSS architecture (BEM, CSS Modules, or utility-first approaches)
- You ensure responsive design using mobile-first methodology with appropriate breakpoints
- You validate proper use of flexbox and grid for layouts
- You check for CSS that respects user preferences (prefers-reduced-motion, prefers-color-scheme)
- You optimize for performance by avoiding expensive selectors and unnecessary specificity
- You ensure text remains readable when zoomed to 200%

### UI/UX Design Principles
- You evaluate visual hierarchy and ensure important elements receive appropriate emphasis
- You verify consistent spacing, typography, and color usage
- You assess interactive element sizing (minimum 44x44px touch targets)
- You ensure feedback is provided for user actions (loading states, success/error messages)
- You validate that the interface is intuitive and follows established conventions
- You consider cognitive load and information architecture

### Performance & Best Practices
- You optimize images and recommend appropriate formats (WebP, SVG where appropriate)
- You ensure critical CSS is prioritized and non-critical assets are deferred
- You validate proper lazy loading implementation for images and iframes
- You check for efficient DOM structure and minimize unnecessary nesting

## Working Methodology

1. **When reviewing code**: Systematically check each aspect: semantics, accessibility, styling, and usability. Provide specific, actionable feedback with code examples.

2. **When implementing features**: Start with semantic HTML structure, layer in accessible interactions, then apply styling. Always test mentally against keyboard-only and screen reader usage.

3. **When debugging issues**: Consider the full stack: HTML structure, CSS cascade, JavaScript behavior, and browser rendering. Check browser developer tools suggestions.

4. **Prioritize issues by impact**: Critical accessibility barriers first, then semantic improvements, then enhancements.

## Output Standards

- Provide working code examples, not just descriptions
- Include comments explaining accessibility considerations
- Reference specific WCAG criteria when relevant (e.g., "WCAG 2.1 SC 1.4.3")
- Suggest testing approaches (keyboard testing, screen reader testing, automated tools like axe-core)
- When multiple valid approaches exist, explain trade-offs

## Quality Checklist (apply to all frontend work)

- [ ] Semantic HTML elements used appropriately
- [ ] Heading hierarchy is logical
- [ ] All images have appropriate alt text
- [ ] Form controls have associated labels
- [ ] Interactive elements are keyboard accessible
- [ ] Focus indicators are visible
- [ ] Color is not the only means of conveying information
- [ ] Color contrast meets WCAG AA standards
- [ ] Page is responsive and readable at various sizes
- [ ] Touch targets are sufficiently sized
- [ ] Loading and error states are handled
- [ ] ARIA is used correctly and only when necessary

You approach every frontend task with the mindset that the interface must work for everyone, regardless of how they access it. You balance aesthetic excellence with functional accessibility, never sacrificing one for the other.
@@ -1,103 +0,0 @@
---
name: python-code-reviewer
description: Use this agent when you need expert code review for Python code. This agent should be invoked after writing or modifying Python code to ensure it meets high quality standards. Examples:\n\n<example>\nContext: User has just written a new Python function for battery percentage calculation.\nuser: "I've added a function to calculate battery percentage from voltage"\nassistant: "Let me review that code for you."\n<uses Agent tool to invoke python-code-reviewer>\nassistant: "I've reviewed your battery percentage calculation function using the python-code-reviewer agent. Here are the findings..."\n</example>\n\n<example>\nContext: User has refactored a module to reduce duplication.\nuser: "I've refactored the metric extraction logic to be more DRY"\nassistant: "Great! Let me have the code reviewer take a look at your refactoring."\n<uses Agent tool to invoke python-code-reviewer>\nassistant: "The python-code-reviewer agent has analyzed your refactoring..."\n</example>\n\n<example>\nContext: User has completed a new feature implementation.\nuser: "Here's the new RRD charting feature I just finished implementing"\nassistant: "Excellent! Before we proceed, let me invoke the python-code-reviewer to ensure it meets our quality standards."\n<uses Agent tool to invoke python-code-reviewer>\nassistant: "The code review is complete. Here's what the python-code-reviewer found..."\n</example>
model: opus
---

You are an elite Python code reviewer with over 15 years of experience building production systems. You have a deep understanding of Python idioms, design patterns, and software engineering principles. Your reviews are known for being thorough yet constructive, focusing on code quality, maintainability, and long-term sustainability.

Your core responsibilities:

1. **Code Quality Assessment**: Evaluate code for readability, clarity, and maintainability. Every line should communicate its intent clearly to future developers.

2. **DRY Principle Enforcement**: Identify and flag code duplication ruthlessly. Look for:
   - Repeated logic that could be extracted into functions
   - Similar patterns that could use abstraction
   - Configuration or constants that should be centralized
   - Opportunities for inheritance, composition, or shared utilities

3. **Python Best Practices**: Ensure code follows Python conventions:
   - PEP 8 style guidelines (though focus on substance over style)
   - Pythonic idioms (list comprehensions, generators, context managers)
   - Proper use of standard library features
   - Type hints where they add clarity (especially for public APIs)
   - Docstrings for modules, classes, and non-obvious functions

4. **Design Pattern Recognition**: Identify opportunities for:
   - Better separation of concerns
   - More cohesive module design
   - Appropriate abstraction levels
   - Clearer interfaces and contracts

5. **Error Handling & Edge Cases**: Review for:
   - Missing error handling
   - Unhandled edge cases
   - Silent failures or swallowed exceptions
   - Validation of inputs and assumptions

6. **Performance & Efficiency**: Flag obvious performance issues:
   - Unnecessary iterations or nested loops
   - Missing opportunities for caching
   - Inefficient data structures
   - Resource leaks (unclosed files, connections)

7. **Testing & Testability**: Assess whether code is:
   - Testable (dependencies can be mocked, side effects isolated)
   - Following patterns that make testing easier
   - Complex enough to warrant additional test coverage

**Review Process**:

1. First, understand the context: What is this code trying to accomplish? What constraints exist?

2. Read through the code completely before commenting. Look for patterns and overall structure.

3. Organize your feedback into categories:
   - **Critical Issues**: Bugs, security problems, or major design flaws
   - **Important Improvements**: DRY violations, readability issues, missing error handling
   - **Suggestions**: Minor optimizations, style preferences, alternative approaches
   - **Praise**: Acknowledge well-written code, clever solutions, good patterns

4. For each issue:
   - Explain *why* it's a problem, not just *what* is wrong
   - Provide concrete examples or code snippets showing the improvement
   - Consider the trade-offs (sometimes duplication is acceptable for clarity)

5. Be specific with line numbers or code excerpts when referencing issues.

6. Balance criticism with encouragement. Good code review builds better developers.

**Your Output Format**:

Structure your review as:

```
## Code Review Summary

**Overall Assessment**: [Brief 1-2 sentence summary]

### Critical Issues
[List any bugs, security issues, or major problems]

### Important Improvements
[DRY violations, readability issues, missing error handling]

### Suggestions
[Nice-to-have improvements, alternative approaches]

### What Went Well
[Positive aspects worth highlighting]

### Recommended Actions
[Prioritized list of what to address first]
```

**Important Principles**:

- **Context Matters**: Consider the project's stage (prototype vs. production), team size, and constraints
- **Pragmatism Over Perfection**: Not every issue needs fixing immediately. Help prioritize.
- **Teach, Don't Judge**: Explain the reasoning behind recommendations. Help developers grow.
- **Question Assumptions**: If something seems odd, ask why it's done that way before suggesting changes
- **Consider Project Patterns**: Look for and reference established patterns in the codebase (like those in CLAUDE.md)

When you're uncertain about context or requirements, ask clarifying questions rather than making assumptions. Your goal is to help create better code, not to enforce arbitrary rules.
@@ -1,20 +0,0 @@
{
  "permissions": {
    "allow": [
      "Bash(cat:*)",
      "Bash(ls:*)",
      "Bash(git add:*)",
      "Bash(git commit:*)",
      "Bash(git push)",
      "Bash(find:*)",
      "Bash(tree:*)",
      "Skill(frontend-design)",
      "Skill(frontend-design:*)",
      "Bash(gh run view:*)",
      "Bash(gh run list:*)",
      "Bash(gh release view:*)",
      "Bash(gh release list:*)",
      "Bash(gh workflow list:*)"
    ]
  }
}
87 .codex/skills/frontend-expert/SKILL.md (Normal file)
@@ -0,0 +1,87 @@
---
name: frontend-expert
description: Frontend UI/UX design and implementation for HTML/CSS/JS including semantic structure, responsive layout, accessibility compliance, and visual design direction. Use for building or reviewing web pages/components, fixing accessibility issues, improving styling/responsiveness, or making UI/UX decisions.
---

# Frontend Expert

## Overview
Deliver accessible, production-grade frontend UI with a distinctive aesthetic and clear semantic structure.

## Core Expertise Areas

### Semantic HTML
- Enforce proper document structure with landmark elements (`<header>`, `<nav>`, `<main>`, `<article>`, `<section>`, `<aside>`, `<footer>`)
- Keep heading hierarchy logical and sequential (h1 -> h2 -> h3)
- Choose the most semantic element for each use case (`<button>` for actions, `<a>` for navigation, `<time>` for dates)
- Validate correct lists, tables (headers/captions), and form elements
- Prefer native semantics; add ARIA only when required
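The heading-hierarchy rule above can be spot-checked mechanically. A minimal regex-based sketch, not a real HTML parser; the function name and approach are illustrative and not part of this skill file:

```python
import re

def heading_skips(html: str) -> list[tuple[int, int]]:
    """Return (previous_level, level) pairs where the heading hierarchy
    jumps by more than one step, e.g. h2 -> h4 skipping h3."""
    # Collect heading levels in document order; "<h(" won't match "</h1>"
    levels = [int(m.group(1)) for m in re.finditer(r"<h([1-6])\b", html, re.I)]
    return [(a, b) for a, b in zip(levels, levels[1:]) if b > a + 1]

# h1 -> h2 is sequential; h2 -> h4 skips a level
assert heading_skips("<h1>T</h1><h2>A</h2><h4>B</h4>") == [(2, 4)]
assert heading_skips("<h1>T</h1><h2>A</h2><h3>B</h3>") == []
```

A real audit would use an HTML parser or a tool like axe-core, but a check of this shape is enough to catch skipped levels in templates.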
### Accessibility (WCAG 2.1 AA)
- Ensure keyboard access and visible focus for all interactive elements
- Meet color contrast ratios (4.5:1 normal text, 3:1 large text)
- Provide meaningful alt text and labeled form controls
- Announce dynamic content changes to assistive tech when needed
- Manage focus in modals/dialogs/SPA navigation
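The 4.5:1 and 3:1 thresholds come from WCAG's relative-luminance definition; a stdlib-only sketch of the computation:

```python
def _linear(c8: int) -> float:
    # sRGB channel (0-255) to linear, per the WCAG 2.1 relative-luminance formula
    c = c8 / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb: tuple[int, int, int]) -> float:
    r, g, b = (_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black on white is the maximum possible ratio, exactly 21:1
assert round(contrast_ratio((0, 0, 0), (255, 255, 255)), 2) == 21.0
# #767676 on white is a commonly cited gray that just clears AA for normal text
assert contrast_ratio((118, 118, 118), (255, 255, 255)) >= 4.5
```

This is the same math browser dev tools and contrast checkers apply; automated tools are still preferable for auditing a whole page.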
### CSS Best Practices
- Use maintainable CSS architecture and consistent naming
- Implement mobile-first responsive layouts with appropriate breakpoints
- Use flexbox/grid correctly for layout
- Respect `prefers-reduced-motion` and `prefers-color-scheme`
- Avoid overly specific or expensive selectors
- Keep text readable at 200% zoom

### UI/UX Design Principles
- Maintain clear visual hierarchy and consistent spacing
- Ensure touch targets meet minimum size (44x44px)
- Provide feedback for user actions (loading, success, error)
- Reduce cognitive load with clear information architecture

### Performance & Best Practices
- Optimize images and use appropriate formats (WebP, SVG)
- Prioritize critical CSS; defer non-critical assets
- Use lazy loading where appropriate
- Avoid unnecessary DOM nesting

## Design Direction (Distinctive Aesthetic)
- Define purpose, audience, constraints, and target devices
- Commit to a bold, intentional style (brutalist, editorial, retro-futuristic, organic, maximalist, minimal, etc.)
- Pick a single memorable visual idea and execute it precisely

### Aesthetic Guidance
- **Typography**: Choose distinctive display + body fonts; avoid default stacks (Inter/Roboto/Arial/system) and overused trendy choices
- **Color**: Use a cohesive palette with dominant colors and sharp accents; avoid timid palettes and purple-on-white defaults
- **Motion**: Prefer a few high-impact animations (page load, staggered reveals, key hovers)
- **Composition**: Use asymmetry, overlap, grid-breaking elements, and intentional negative space
- **Backgrounds**: Add atmosphere via gradients, texture/noise, patterns, layered depth

### Match Complexity to Vision
- Minimalist designs require precision in spacing and typography
- Maximalist designs require richer layout, effects, and animation

## Working Methodology
- Structure semantic HTML first, then layer in styling and interactions
- Check keyboard-only flow and screen reader expectations
- Prioritize issues by impact: accessibility barriers first, then semantics, then enhancements

## Output Standards
- Provide working code, not just guidance
- Explain trade-offs when multiple options exist
- Suggest quick validation steps (keyboard-only pass, screen reader spot check, axe)

## Quality Checklist
- Semantic HTML elements used appropriately
- Heading hierarchy is logical
- Images have alt text
- Form controls are labeled
- Interactive elements are keyboard accessible
- Focus indicators are visible
- Color is not the only means of conveying information
- Color contrast meets WCAG AA
- Page is responsive and readable at multiple sizes
- Touch targets are sufficiently sized
- Loading and error states are handled
- ARIA is used correctly and only when necessary

Push creative boundaries while keeping the UI usable and inclusive.
52 .codex/skills/python-code-reviewer/SKILL.md (Normal file)
@@ -0,0 +1,52 @@
---
name: python-code-reviewer
description: Expert code review for Python focused on correctness, maintainability, error handling, performance, and testability. Use after writing or modifying Python code, or when reviewing refactors and new features.
---

# Python Code Reviewer

## Overview
Provide thorough, constructive reviews that prioritize bugs, risks, and design issues over style nits.

## Core Responsibilities
- Assess readability, clarity, and maintainability
- Enforce DRY and identify shared abstractions
- Apply Python best practices and idioms
- Spot design/architecture issues and unclear contracts
- Check error handling and edge cases
- Flag performance pitfalls and resource leaks
- Evaluate testability and missing coverage
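As a concrete sketch of the DRY point, two per-device copies of a voltage-to-percentage conversion collapse into one shared helper. Names and the 3300-4200 mV thresholds are hypothetical, loosely echoing the battery-percentage example in this skill's history, not taken from the repo:

```python
def battery_pct(mv: int, vmin: int = 3300, vmax: int = 4200) -> float:
    """Map a battery voltage in millivolts to 0-100%, clamped.

    One shared helper instead of per-node copies of the same linear
    interpolation keeps the thresholds in a single place.
    """
    return max(0.0, min(100.0, (mv - vmin) / (vmax - vmin) * 100.0))

assert battery_pct(4200) == 100.0
assert battery_pct(3300) == 0.0
assert battery_pct(3750) == 50.0
assert battery_pct(5000) == 100.0  # clamped above vmax
```

The review comment that motivates the extraction should say why: duplicated thresholds drift independently, and a single helper gives one place to test and change.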
## Review Process
- Understand intent, constraints, and context first
- Read the full change before commenting
- Organize feedback into critical issues, important improvements, suggestions, and praise
- Explain why an issue matters and provide concrete examples or fixes
- Ask questions when assumptions are unclear

## Output Format
```
## Code Review Summary

**Overall Assessment**: <1-2 sentence summary>

### Critical Issues
- ...

### Important Improvements
- ...

### Suggestions
- ...

### What Went Well
- ...

### Recommended Actions
- ...
```

## Important Principles
- Prefer clarity and explicitness over cleverness
- Balance pragmatism with long-term maintainability
- Reference project conventions in `AGENTS.md`
83 .codex/skills/test-engineer/SKILL.md (Normal file)
@@ -0,0 +1,83 @@
---
name: test-engineer
description: Test planning, writing, and review across unit/integration/e2e, primarily with pytest. Use when adding tests, improving coverage, diagnosing flaky tests, or designing a testing strategy.
---

# Test Engineer

## Overview
Create fast, reliable tests that validate behavior and improve coverage without brittleness.

## Testing Principles
- Follow F.I.R.S.T. (fast, isolated, repeatable, self-validating, timely)
- Use Arrange-Act-Assert structure
- Favor unit tests, add integration tests as needed, minimize e2e
- Test behavior, not implementation details
- Keep one behavior per test

## Python Testing Focus
- pytest fixtures, parametrization, markers, conftest organization
- unittest + mock for legacy patterns
- hypothesis for property-based tests
- coverage.py for measurement
- pytest-asyncio for async code

## Test Categories
- Unit tests
- Integration tests
- End-to-end tests
- Property-based tests
- Regression tests
- Performance tests (when relevant)

## Writing Tests
- Identify contract: inputs, outputs, side effects, exceptions
- Enumerate cases: happy path, boundaries, invalid input, failure modes
- Use descriptive names and keep tests independent
- Use fixtures for shared setup; parametrize for variations

## Reviewing Tests
- Look for missing edge cases and error scenarios
- Identify flakiness (time/order/external dependencies)
- Avoid over-mocking; mock only boundaries
- Ensure assertions are specific and meaningful
- Verify cleanup and resource management

## Naming Convention
Use `test_<function>_<scenario>_<expected_result>`.

## Test Structure
```python
def test_function_name_describes_behavior():
    # Arrange
    input_data = create_test_data()

    # Act
    result = function_under_test(input_data)

    # Assert
    assert result == expected_value
```

## Fixture Best Practices
- Prefer function-scoped fixtures
- Use `yield` for cleanup
- Document fixture purpose

## Mocking Guidelines
- Mock at the boundary (DB, filesystem, network)
- Do not mock the unit under test
- Verify interactions when they are the behavior
- Use `autospec=True` to catch interface mismatches
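A short sketch of the autospec guideline in action; `PaymentGateway` is a hypothetical boundary class, not part of this project:

```python
from unittest.mock import create_autospec

class PaymentGateway:
    """Hypothetical external boundary; only its interface matters here."""
    def charge(self, amount_cents: int, currency: str) -> str:
        raise NotImplementedError

gw = create_autospec(PaymentGateway, instance=True)
gw.charge.return_value = "txn_123"

assert gw.charge(500, "EUR") == "txn_123"
gw.charge.assert_called_once_with(500, "EUR")

# An autospecced mock rejects calls that don't match the real signature,
# catching interface drift that a bare Mock() would silently accept.
try:
    gw.charge(500)  # missing 'currency'
    raise AssertionError("expected TypeError")
except TypeError:
    pass
```

The same protection applies with `mock.patch(..., autospec=True)`: if the real interface changes, tests that call the mock with the old signature fail instead of passing vacuously.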
## Edge Cases to Consider
- Numeric: zero, negative, large, precision
- Strings: empty, whitespace, unicode, long, special chars
- Collections: empty, single, large, duplicates, None elements
- Time: DST, leap years, month boundaries, epoch edges
- I/O: not found, permission denied, timeouts, partial writes, concurrency
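These edge-case families fit naturally into table-driven tests; a stdlib-only sketch where `normalize_name` is a hypothetical unit under test (with pytest, the same table would feed `@pytest.mark.parametrize`):

```python
def normalize_name(raw: str) -> str:
    """Collapse internal whitespace runs and strip the ends."""
    return " ".join(raw.split())

# One (input, expected) row per edge case keeps failures easy to localize
CASES = [
    ("", ""),                        # empty string
    ("   ", ""),                     # whitespace only
    ("  alice  bob ", "alice bob"),  # internal runs collapsed
    ("caf\u00e9", "caf\u00e9"),      # unicode preserved
    ("a" * 10_000, "a" * 10_000),    # long input
]

for raw, expected in CASES:
    assert normalize_name(raw) == expected, (raw, expected)
```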
## Output Expectations
- Provide runnable tests with brief explanations
- Call out missing coverage or risky gaps
- Follow project conventions in `AGENTS.md`
@@ -35,7 +35,7 @@ docs/
 !README.md

 # Development files
-.claude/
+.codex/
 *.log

 # macOS
50 .github/workflows/docker-publish.yml (vendored)
@@ -20,6 +20,17 @@ on:
     # Daily at 4 AM UTC - rebuild with fresh base image
     - cron: "0 4 * * *"

+  pull_request:
+    paths:
+      - Dockerfile
+      - .dockerignore
+      - docker/**
+      - pyproject.toml
+      - uv.lock
+      - src/**
+      - scripts/**
+      - .github/workflows/docker-publish.yml
+
   workflow_dispatch:
     inputs:
       push:
@@ -33,6 +44,7 @@ permissions:
   packages: write
   id-token: write
   attestations: write
+  artifact-metadata: write

 concurrency:
   group: docker-${{ github.ref }}
@@ -44,12 +56,13 @@ env:

 jobs:
   build:
+    if: github.event_name != 'pull_request'
     runs-on: ubuntu-latest
     timeout-minutes: 30

     steps:
       - name: Checkout repository
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2

       # For nightly builds, get the latest release version
       - name: Get latest release version
@@ -196,7 +209,7 @@
       # Vulnerability scanning
       - name: Run Trivy vulnerability scanner
         if: "!(github.event_name == 'schedule' && steps.get-version.outputs.skip == 'true')"
-        uses: aquasecurity/trivy-action@b6643a29fecd7f34b3597bc6acb0a98b03d33ff8 # v0.33.1
+        uses: aquasecurity/trivy-action@b6643a29fecd7f34b3597bc6acb0a98b03d33ff8 # 0.33.1
         with:
           image-ref: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ steps.image-tag.outputs.tag }}
           format: "sarif"
@@ -206,7 +219,7 @@

       - name: Upload Trivy scan results
         if: "!(github.event_name == 'schedule' && steps.get-version.outputs.skip == 'true')"
-        uses: github/codeql-action/upload-sarif@6e4b8622b82fab3c6ad2a7814fad1effc7615bc8 # v3.28.4
+        uses: github/codeql-action/upload-sarif@19b2f06db2b6f5108140aeb04014ef02b648f789 # v4.31.11
         with:
           sarif_file: "trivy-results.sarif"
         continue-on-error: true
@@ -227,8 +240,37 @@
       # Attestation (releases only)
       - name: Generate attestation
         if: github.event_name == 'release'
-        uses: actions/attest-build-provenance@46a583fd92dfbf46b772907a9740f888f4324bb9 # v3.1.0
+        uses: actions/attest-build-provenance@00014ed6ed5efc5b1ab7f7f34a39eb55d41aa4f8 # v3.1.0
         with:
           subject-name: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
           subject-digest: ${{ steps.build-release.outputs.digest }}
           push-to-registry: true
+
+  build-pr:
+    if: github.event_name == 'pull_request'
+    runs-on: ubuntu-latest
+    timeout-minutes: 30
+
+    steps:
+      - name: Checkout repository
+        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+
+      - name: Set up Docker Buildx
+        uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3.12.0
+
+      - name: Build image (PR)
+        id: build-pr
+        uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
+        with:
+          context: .
+          platforms: linux/amd64
+          load: true
+          push: false
+          tags: meshcore-stats:pr-${{ github.event.pull_request.number }}
+          cache-from: type=gha
+          cache-to: type=gha,mode=max
+
+      - name: Smoke test (PR)
+        run: |
+          docker run --rm meshcore-stats:pr-${{ github.event.pull_request.number }} \
+            python -c "from meshmon.db import init_db; from meshmon.env import get_config; print('Smoke test passed')"
3 .github/workflows/release-please.yml (vendored)
@@ -23,9 +23,10 @@ permissions:
 jobs:
   release-please:
     runs-on: ubuntu-latest
+    timeout-minutes: 10
     steps:
       - name: Release Please
-        uses: googleapis/release-please-action@v4
+        uses: googleapis/release-please-action@16a9c90856f42705d54a6fda1823352bdc62cf38 # v4
         with:
           token: ${{ secrets.RELEASE_PLEASE_TOKEN }}
           config-file: release-please-config.json
119 .github/workflows/test.yml (vendored, Normal file)
@@ -0,0 +1,119 @@
name: Tests

on:
  push:
    branches: [main, feat/*]
  pull_request:
    branches: [main]

concurrency:
  group: test-${{ github.ref }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest
    timeout-minutes: 15
    strategy:
      fail-fast: false
      matrix:
        python-version: ["3.11", "3.12", "3.13", "3.14"]

    steps:
      - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6

      - uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6
        with:
          python-version: ${{ matrix.python-version }}

      - name: Set up uv
        uses: astral-sh/setup-uv@803947b9bd8e9f986429fa0c5a41c367cd732b41 # v7.2.1
        with:
          enable-cache: true
          python-version: ${{ matrix.python-version }}

      - name: Install dependencies
        run: uv sync --locked --extra dev

      - name: Set up matplotlib cache
        run: |
          echo "MPLCONFIGDIR=$RUNNER_TEMP/matplotlib" >> "$GITHUB_ENV"
          mkdir -p "$RUNNER_TEMP/matplotlib"

      - name: Run tests with coverage
        run: |
          uv run pytest \
            --cov=src/meshmon \
            --cov=scripts \
            --cov-report=xml \
            --cov-report=html \
            --cov-report=term-missing \
            --cov-fail-under=95 \
            --junitxml=test-results.xml \
            -n auto \
            --tb=short \
            -q

      - name: Coverage summary
        if: always()
        run: |
          {
            echo "### Coverage (Python ${{ matrix.python-version }})"
            if [ -f .coverage ]; then
              uv run coverage report -m
            else
              echo "No coverage data found."
            fi
            echo ""
          } >> "$GITHUB_STEP_SUMMARY"

      - name: Upload coverage HTML report
        uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
        if: always() && matrix.python-version == '3.14'
        with:
          name: coverage-report-html-${{ matrix.python-version }}
          path: htmlcov/
          if-no-files-found: warn
          retention-days: 7

      - name: Upload coverage XML report
        uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
        if: always() && matrix.python-version == '3.14'
        with:
          name: coverage-report-xml-${{ matrix.python-version }}
          path: coverage.xml
          if-no-files-found: warn
          retention-days: 7

      - name: Upload test results
        uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
        if: always()
        with:
          name: test-results-${{ matrix.python-version }}
          path: test-results.xml
          if-no-files-found: warn
          retention-days: 7

  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6

      - uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6
        with:
          python-version: "3.14"

      - name: Set up uv
        uses: astral-sh/setup-uv@803947b9bd8e9f986429fa0c5a41c367cd732b41 # v7.2.1
        with:
          enable-cache: true
          python-version: "3.14"

      - name: Install linters
        run: uv sync --locked --extra dev --no-install-project

      - name: Run ruff
        run: uv run ruff check src/ tests/ scripts/

      - name: Run mypy
        run: uv run mypy src/meshmon --ignore-missing-imports --no-error-summary
6  .gitignore  (vendored)
@@ -12,6 +12,12 @@ env/
dist/
build/

+# Testing/Coverage
+.coverage
+.coverage.*
+htmlcov/
+.pytest_cache/
+
# Environment
.envrc
.env
@@ -1,3 +1,3 @@
{
-  ".": "0.2.3"
+  ".": "0.2.15"
}
@@ -1,4 +1,4 @@
-# CLAUDE.md - MeshCore Stats Project Guide
+# AGENTS.md - MeshCore Stats Project Guide

> **Maintenance Note**: This file should always reflect the current state of the project. When making changes to the codebase (adding features, changing architecture, modifying configuration), update this document accordingly. Keep it accurate and comprehensive for future reference.

@@ -16,6 +16,8 @@ Always edit the source templates, then regenerate with `python scripts/render_site.py`.

## Running Commands

**IMPORTANT: Always activate the virtual environment before running any Python commands.**

```bash
cd /path/to/meshcore-stats
source .venv/bin/activate
@@ -24,6 +26,121 @@ python scripts/render_site.py

Configuration is automatically loaded from `meshcore.conf` (if it exists). Environment variables always take precedence over the config file.
## Development Workflow

### Test-Driven Development (TDD)

**MANDATORY: Always write tests BEFORE implementing functionality.**

When implementing new features or fixing bugs, follow this workflow:

1. **Write the test first**
   - Create test cases that define the expected behavior
   - Tests should fail initially (red phase)
   - Cover happy path, edge cases, and error conditions

2. **Implement the minimum code to pass**
   - Write only enough code to make tests pass (green phase)
   - Don't over-engineer or add unrequested features

3. **Refactor if needed**
   - Clean up code while keeping tests green
   - Extract common patterns, improve naming

Example workflow for adding a new function:

```python
# Step 1: Write the test first (tests/unit/test_battery.py)
def test_voltage_to_percentage_at_full_charge():
    """4.20V should return 100%."""
    assert voltage_to_percentage(4.20) == 100.0


def test_voltage_to_percentage_at_empty():
    """3.00V should return 0%."""
    assert voltage_to_percentage(3.00) == 0.0


# Step 2: Run tests - they should FAIL
# Step 3: Implement the function to make tests pass
# Step 4: Run tests again - they should PASS
```
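Step 3 admits a minimal implementation. This sketch assumes a simple linear interpolation between 3.00 V (empty) and 4.20 V (full) with clamping at the ends; the assumption is illustrative, not the project's actual battery model:

```python
def voltage_to_percentage(voltage: float) -> float:
    """Map a LiPo cell voltage to a 0-100% charge estimate.

    Assumption: linear interpolation between 3.00 V (empty) and
    4.20 V (full); a real implementation may use a discharge-curve table.
    """
    v_empty, v_full = 3.00, 4.20
    fraction = (voltage - v_empty) / (v_full - v_empty)
    # Clamp so out-of-range readings still yield a sane 0-100% value
    return max(0.0, min(1.0, fraction)) * 100.0
```

This is just enough code to turn the tests above green; any curve-fitting refinement belongs in the refactor phase, guarded by the same tests.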
### Pre-Commit Requirements

**MANDATORY: Before committing ANY changes, run lint, type check, and tests.**

```bash
# Always run these commands before committing:
source .venv/bin/activate

# 1. Run linter (must pass with no errors)
ruff check src/ tests/ scripts/

# 2. Run type checker (must pass with no errors)
python -m mypy src/meshmon --ignore-missing-imports

# 3. Run test suite (must pass)
python -m pytest tests/ -q

# 4. Only then commit
git add . && git commit -m "..."
```

If lint, type check, or tests fail:
1. Fix all lint errors before committing
2. Fix all type errors before committing - use proper fixes, not `# type: ignore`
3. Fix all failing tests before committing
4. Never commit with `--no-verify` or skip checks

### Running Tests

```bash
# Run all tests
python -m pytest tests/

# Run with coverage report
python -m pytest tests/ --cov=src/meshmon --cov-report=term-missing

# Run specific test file
python -m pytest tests/unit/test_battery.py

# Run specific test
python -m pytest tests/unit/test_battery.py::test_voltage_to_percentage_at_full_charge

# Run tests matching a pattern
python -m pytest tests/ -k "battery"

# Run with verbose output
python -m pytest tests/ -v
```

### Test Organization

```
tests/
├── conftest.py          # Root fixtures (clean_env, tmp dirs, sample data)
├── unit/                # Unit tests (isolated, fast)
│   ├── test_battery.py
│   ├── test_metrics.py
│   └── ...
├── database/            # Database tests (use temp SQLite)
│   ├── conftest.py      # DB-specific fixtures
│   └── test_db_*.py
├── integration/         # Integration tests (multiple components)
│   └── test_*_pipeline.py
├── charts/              # Chart rendering tests
│   ├── conftest.py      # SVG normalization, themes
│   └── test_chart_*.py
└── snapshots/           # Golden files for snapshot testing
    ├── svg/             # Reference SVG charts
    └── txt/             # Reference TXT reports
```

### Coverage Requirements

- **Minimum coverage: 95%** (enforced in CI)
- Coverage is measured against `src/meshmon/`
- Run `python -m pytest tests/ --cov=src/meshmon --cov-fail-under=95`

## Commit Message Guidelines

This project uses [Conventional Commits](https://www.conventionalcommits.org/) with [release-please](https://github.com/googleapis/release-please) for automated releases. **Commit messages directly control versioning and changelog generation.**
@@ -137,6 +254,7 @@ Example: `fix(charts): prevent crash when no data points available`
2. release-please creates/updates a "Release PR" with:
   - Updated `CHANGELOG.md`
   - Updated version in `src/meshmon/__init__.py`
+  - Updated `uv.lock` (project version entry)
3. When the Release PR is merged:
   - A GitHub Release is created
   - A git tag (e.g., `v0.2.0`) is created
@@ -248,10 +366,13 @@ Jobs configured in `docker/ofelia.ini`:
| Release | `X.Y.Z`, `X.Y`, `latest` |
| Nightly (4 AM UTC) | Rebuilds all version tags + `nightly`, `nightly-YYYYMMDD` |
| Manual | `sha-xxxxxx` |
+| Pull request | Builds image (linux/amd64) without pushing and runs a smoke test |

**Nightly rebuilds** ensure version tags always include the latest OS security patches. This is a common pattern used by official Docker images (nginx, postgres, node). Users needing reproducibility should pin by SHA digest or use dated nightly tags.

-All GitHub Actions are pinned by full SHA for security. Dependabot can be configured to update these automatically.
+GitHub Actions use version tags in workflows, and Renovate is configured in `renovate.json` to pin action digests, maintain lockfiles, and auto-merge patch + digest updates once required checks pass (with automatic rebases when behind `main`).
+
+The test and lint workflow (`.github/workflows/test.yml`) installs dependencies with uv (`uv sync --locked --extra dev`) and runs commands via `uv run`, using `uv.lock` as the source of truth.

### Version Placeholder
@@ -354,11 +475,19 @@ All configuration via `meshcore.conf` or environment variables. The config file

### Timeouts & Retry
- `REMOTE_TIMEOUT_S`: Minimum timeout for LoRa requests (default: 10)
-- `REMOTE_RETRY_ATTEMPTS`: Number of retry attempts (default: 5)
+- `REMOTE_RETRY_ATTEMPTS`: Number of retry attempts (default: 2)
- `REMOTE_RETRY_BACKOFF_S`: Seconds between retries (default: 4)
- `REMOTE_CB_FAILS`: Failures before circuit breaker opens (default: 6)
- `REMOTE_CB_COOLDOWN_S`: Circuit breaker cooldown (default: 3600)
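The retry settings above amount to a simple retry loop. A sketch of that behavior (illustrative only; the actual collector also feeds failures into the circuit breaker governed by `REMOTE_CB_FAILS`):

```python
import time


def with_retries(fn, attempts=2, backoff_s=4):
    """Call fn(); on exception, retry up to `attempts` more times,
    sleeping `backoff_s` seconds between tries.

    Mirrors REMOTE_RETRY_ATTEMPTS / REMOTE_RETRY_BACKOFF_S as a sketch;
    the helper name and signature are illustrative, not the project's API.
    """
    last_exc = None
    for attempt in range(attempts + 1):  # initial try + `attempts` retries
        try:
            return fn()
        except Exception as exc:
            last_exc = exc
            if attempt < attempts:
                time.sleep(backoff_s)
    raise last_exc
```

With the defaults (`attempts=2`, `backoff_s=4`), a failing LoRa request is tried three times over roughly eight seconds before the failure is reported.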
+### Telemetry Collection
+- `TELEMETRY_ENABLED`: Enable environmental telemetry collection from repeater (0/1, default: 0)
+- `TELEMETRY_TIMEOUT_S`: Timeout for telemetry requests (default: 10)
+- `TELEMETRY_RETRY_ATTEMPTS`: Retry attempts for telemetry (default: 2)
+- `TELEMETRY_RETRY_BACKOFF_S`: Backoff between telemetry retries (default: 4)
+- When enabled, repeater telemetry charts are auto-discovered from `telemetry.*` metrics present in the database.
+- `telemetry.voltage.*` and `telemetry.gps.*` metrics are intentionally excluded from chart rendering.
### Intervals
- `COMPANION_STEP`: Collection interval for companion (default: 60s)
- `REPEATER_STEP`: Collection interval for repeater (default: 900s / 15min)
@@ -370,6 +499,7 @@ All configuration via `meshcore.conf` or environment variables. The config file
- `REPORT_LON`: Longitude in decimal degrees (default: 0.0)
- `REPORT_ELEV`: Elevation (default: 0.0)
- `REPORT_ELEV_UNIT`: Elevation unit, "m" or "ft" (default: "m")
+- `DISPLAY_UNIT_SYSTEM`: `metric` or `imperial` for telemetry display formatting (default: `metric`)
- `REPEATER_DISPLAY_NAME`: Display name for repeater in UI (default: "Repeater Node")
- `COMPANION_DISPLAY_NAME`: Display name for companion in UI (default: "Companion Node")
- `REPEATER_HARDWARE`: Repeater hardware model for sidebar (default: "LoRa Repeater")
@@ -410,6 +540,15 @@ Metrics are classified as either **gauge** or **counter** in `src/meshmon/metrics.py`:

Counter metrics are converted to rates during chart rendering by calculating deltas between consecutive readings.

+- **TELEMETRY**: Environmental sensor data (when `TELEMETRY_ENABLED=1`):
+  - Stored with `telemetry.` prefix: `telemetry.temperature.0`, `telemetry.humidity.0`, `telemetry.barometer.0`
+  - Channel number distinguishes multiple sensors of the same type
+  - Compound values (e.g., GPS) stored as: `telemetry.gps.0.latitude`, `telemetry.gps.0.longitude`
+  - Telemetry collection does NOT affect circuit breaker state
+  - Repeater telemetry charts are auto-discovered from available `telemetry.*` metrics
+  - `telemetry.voltage.*` and `telemetry.gps.*` are collected but not charted
+  - Display conversion is chart/UI-only (DB values remain raw firmware values)

## Database Schema

Metrics are stored in a SQLite database at `data/state/metrics.db` with WAL mode enabled for concurrent access.
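The delta calculation for counter metrics can be sketched as follows (illustrative; per the 0.2.15 changelog entry, the real renderer also skips short intervals, which this sketch omits):

```python
def counter_to_rates(samples):
    """Convert cumulative counter samples [(unix_ts, value), ...] into
    (unix_ts, per-minute rate) points by differencing consecutive readings.

    Sketch only: counter resets (value decreasing) and zero-length
    intervals are skipped rather than producing negative or infinite rates.
    """
    rates = []
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        dt_min = (t1 - t0) / 60.0
        if dt_min <= 0 or v1 < v0:  # counter reset or bad timestamps
            continue
        rates.append((t1, (v1 - v0) / dt_min))
    return rates
```

For example, readings of 0, 30, and 90 packets at one-minute spacing yield rates of 30 and 60 packets/min for the two intervals.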
@@ -545,6 +684,7 @@ The static site uses a modern, responsive design with the following features:
- **Repeater pages at root**: `/day.html`, `/week.html`, etc. (entry point)
- **Companion pages**: `/companion/day.html`, `/companion/week.html`, etc.
- **`.htaccess`**: Sets `DirectoryIndex day.html` so `/` loads repeater day view
+- **Relative links**: All internal navigation and static asset references are relative (no leading `/`) so the dashboard can be served from a reverse-proxy subpath.

### Page Layout
1. **Header**: Site branding, node name, pubkey prefix, status indicator, last updated time
@@ -564,6 +704,7 @@ Color-coded based on data freshness:
- Shows datetime and value when hovering over chart data
- Works without JavaScript (charts still display, just no tooltips)
- Uses `data-points`, `data-x-start`, `data-x-end` attributes embedded in SVG
+- Telemetry tooltip units/precision follow `DISPLAY_UNIT_SYSTEM`

### Social Sharing
Open Graph and Twitter Card meta tags for link previews:
@@ -598,6 +739,18 @@ Charts are generated as inline SVGs using matplotlib (`src/meshmon/charts.py`).
- **Inline**: SVGs are embedded directly in HTML for zero additional requests
- **Tooltips**: Data points embedded as JSON in SVG `data-points` attribute

+### Telemetry Chart Discovery
+- Applies to repeater charts only (companion telemetry is not grouped/rendered in dashboard UI)
+- Active only when `TELEMETRY_ENABLED=1`
+- Discovers all `telemetry.<type>.<channel>[.<subkey>]` metrics found in DB metadata
+- Excludes `telemetry.voltage.*` and `telemetry.gps.*` from charts
+- Appends a `Telemetry` chart section at the end of the repeater dashboard when metrics are present
+- Uses display-only unit conversion based on `DISPLAY_UNIT_SYSTEM`:
+  - `temperature`: `°C` -> `°F` (imperial)
+  - `barometer`/`pressure`: `hPa` -> `inHg` (imperial)
+  - `altitude`: `m` -> `ft` (imperial)
+  - `humidity`: unchanged (`%`)
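The unit mapping above can be expressed as a small display-only helper. The conversion factors are standard, but the function name and signature here are illustrative, not the project's actual API:

```python
def convert_for_display(metric_type, value, unit_system):
    """Convert a raw firmware value for display only (DB values stay raw).

    Sketch of the DISPLAY_UNIT_SYSTEM mapping: metric passes through
    unchanged; imperial converts temperature, pressure, and altitude.
    """
    if unit_system != "imperial":
        return value
    if metric_type == "temperature":              # °C -> °F
        return value * 9 / 5 + 32
    if metric_type in ("barometer", "pressure"):  # hPa -> inHg
        return value * 0.02953
    if metric_type == "altitude":                 # m -> ft
        return value * 3.28084
    return value  # humidity and unknown types unchanged
```

Keeping the conversion at the chart/tooltip layer matches the note above that database values remain raw firmware values.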
### Time Aggregation (Binning)
Data points are aggregated into bins to keep chart file sizes reasonable and lines clean:
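Binning can be sketched as fixed-width time buckets keyed by timestamp, averaging the values in each bucket (illustrative; the actual bin widths per day/week/month/year view and the aggregation function may differ):

```python
def bin_points(points, bin_seconds):
    """Aggregate (unix_ts, value) points into fixed-width time bins.

    Sketch: each point is assigned to the bin starting at
    ts - (ts % bin_seconds), and values within a bin are averaged.
    """
    bins = {}
    for ts, value in points:
        key = ts - (ts % bin_seconds)
        bins.setdefault(key, []).append(value)
    return [(key, sum(vals) / len(vals)) for key, vals in sorted(bins.items())]
```

Wider bins for longer views keep the number of SVG points (and thus file size) roughly constant regardless of the time range.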
@@ -639,6 +792,9 @@ Metrics use firmware field names directly from `req_status_sync`:
| `sent_direct` | counter | Packets/min | Direct packets transmitted |
| `recv_direct` | counter | Packets/min | Direct packets received |

+Telemetry charts are discovered dynamically when telemetry is enabled and data exists.
+Units/labels are generated from metric keys at runtime, with display conversion controlled by `DISPLAY_UNIT_SYSTEM`.

### Companion Metrics Summary

Metrics use firmware field names directly from `get_stats_*`:
@@ -694,16 +850,14 @@ meshcore-cli -s /dev/ttyACM0 reset_path "repeater name"

## Cron Setup (Example)

-Use `flock` to prevent USB serial conflicts when companion and repeater collection overlap.

```cron
MESHCORE=/path/to/meshcore-stats

# Companion: every minute
-* * * * * cd $MESHCORE && flock -w 60 /tmp/meshcore.lock .venv/bin/python scripts/collect_companion.py
+* * * * * cd $MESHCORE && .venv/bin/python scripts/collect_companion.py

# Repeater: every 15 minutes (offset by 1 min for staggering)
-1,16,31,46 * * * * cd $MESHCORE && flock -w 60 /tmp/meshcore.lock .venv/bin/python scripts/collect_repeater.py
+1,16,31,46 * * * * cd $MESHCORE && .venv/bin/python scripts/collect_repeater.py

# Charts: every 5 minutes (generates SVG charts from database)
*/5 * * * * cd $MESHCORE && .venv/bin/python scripts/render_charts.py
@@ -717,7 +871,7 @@ MESHCORE=/path/to/meshcore-stats

**Notes:**
- `cd $MESHCORE` is required because paths in the config are relative to the project root
-- `flock -w 60` waits up to 60 seconds for the lock, preventing USB serial conflicts
+- Serial port locking is handled automatically via `fcntl.flock()` in Python (no external `flock` needed)
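The in-Python locking mentioned in the notes can be sketched with `fcntl.flock()` (the helper name and lock-file path here are illustrative, not the project's actual implementation):

```python
import fcntl


def acquire_serial_lock(port_path="/dev/ttyACM0", lock_dir="/tmp"):
    """Take an exclusive advisory lock guarding a serial port.

    Sketch of the fcntl.flock() approach: returns the open lock file,
    which must stay open for the duration of the serial session;
    raises BlockingIOError if another collector already holds the lock.
    """
    lock_path = f"{lock_dir}/{port_path.replace('/', '_')}.lock"
    lock_file = open(lock_path, "w")
    try:
        # LOCK_NB makes the call fail immediately instead of blocking
        fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except OSError:
        lock_file.close()
        raise
    return lock_file
```

Because the lock lives inside the collector process, overlapping cron invocations of the companion and repeater scripts cannot open the USB serial port at the same time, which is why the cron entries above no longer need an external `flock` wrapper.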
## Adding New Metrics

@@ -729,6 +883,7 @@ With the EAV schema, adding new metrics is simple:
   - `METRIC_CONFIG` in `src/meshmon/metrics.py` (label, unit, type, transform)
   - `COMPANION_CHART_METRICS` or `REPEATER_CHART_METRICS` in `src/meshmon/metrics.py`
   - `COMPANION_CHART_GROUPS` or `REPEATER_CHART_GROUPS` in `src/meshmon/html.py`
+   - Exception: repeater `telemetry.*` metrics are auto-discovered, so they do not need to be added to static chart lists/groups.

3. **To display in reports**: Add the firmware field name to:
   - `COMPANION_REPORT_METRICS` or `REPEATER_REPORT_METRICS` in `src/meshmon/reports.py`
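As an illustration of the chart step, a hypothetical `METRIC_CONFIG` entry built from the fields named above (label, unit, type, transform) and the `sent_direct` row of the repeater metrics table; the exact schema in `src/meshmon/metrics.py` may differ:

```python
# Hypothetical shape only -- the real METRIC_CONFIG lives in
# src/meshmon/metrics.py and its exact schema may differ.
METRIC_CONFIG = {
    "sent_direct": {
        "label": "Direct packets transmitted",
        "unit": "Packets/min",
        "type": "counter",  # counters are rendered as rates
        "transform": None,  # optional value-transform hook
    },
}
```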
140  CHANGELOG.md
@@ -4,6 +4,146 @@ All notable changes to this project will be documented in this file.

This changelog is automatically generated by [release-please](https://github.com/googleapis/release-please) based on [Conventional Commits](https://www.conventionalcommits.org/).

## [0.2.15](https://github.com/jorijn/meshcore-stats/compare/v0.2.14...v0.2.15) (2026-01-13)

### Bug Fixes

* **charts:** skip short counter intervals ([#73](https://github.com/jorijn/meshcore-stats/issues/73)) ([97ebba4](https://github.com/jorijn/meshcore-stats/commit/97ebba4f2da723100ec87d21b6f8780ee0793e46))

### Miscellaneous Chores

* **deps:** update python:3.14-slim-bookworm docker digest to 55b18d5 ([#69](https://github.com/jorijn/meshcore-stats/issues/69)) ([392ba22](https://github.com/jorijn/meshcore-stats/commit/392ba226babdaa7bd4beb0c6ff7b832a3aca5e71))

## [0.2.14](https://github.com/jorijn/meshcore-stats/compare/v0.2.13...v0.2.14) (2026-01-13)

### Miscellaneous Chores

* add lockFileMaintenance to update types ([#65](https://github.com/jorijn/meshcore-stats/issues/65)) ([b249a21](https://github.com/jorijn/meshcore-stats/commit/b249a217e85031a0ce73865e577d37583c3af5ea))
* **deps:** lock file maintenance ([#66](https://github.com/jorijn/meshcore-stats/issues/66)) ([a89d745](https://github.com/jorijn/meshcore-stats/commit/a89d745d6bcb4aae13fab3f0c0d7dd7c1a643f3a))
* **deps:** update ghcr.io/astral-sh/uv docker tag to v0.9.24 ([#61](https://github.com/jorijn/meshcore-stats/issues/61)) ([18ca787](https://github.com/jorijn/meshcore-stats/commit/18ca787f7fe054a425af4fba16306621fead7ced))
* **deps:** update github/codeql-action action to v4.31.10 ([#67](https://github.com/jorijn/meshcore-stats/issues/67)) ([c1b8978](https://github.com/jorijn/meshcore-stats/commit/c1b89782eb374bb1f161ef86bebd64dc8ece9e1c))
* **deps:** update golang docker tag to v1.25 ([#70](https://github.com/jorijn/meshcore-stats/issues/70)) ([63a8420](https://github.com/jorijn/meshcore-stats/commit/63a842016cd14d0e338840fb4e41abb17bb32ba5))
* **deps:** update nginx:1.29-alpine docker digest to c083c37 ([#62](https://github.com/jorijn/meshcore-stats/issues/62)) ([df0c374](https://github.com/jorijn/meshcore-stats/commit/df0c374b654606c2b6d36ae3fa5134691885cd5d))
* enable renovate automerge for patch and digest updates ([#64](https://github.com/jorijn/meshcore-stats/issues/64)) ([6fc2e76](https://github.com/jorijn/meshcore-stats/commit/6fc2e762cfbea31ebca4a120d0d0e1a3547b0455))

### Build System

* **docker:** add armv7 container support ([#68](https://github.com/jorijn/meshcore-stats/issues/68)) ([75e50f7](https://github.com/jorijn/meshcore-stats/commit/75e50f7ee95404b7ab9c0abeec12fa5e17ad24f6))

## [0.2.13](https://github.com/jorijn/meshcore-stats/compare/v0.2.12...v0.2.13) (2026-01-09)

### Miscellaneous Chores

* drop digest from compose image ([#59](https://github.com/jorijn/meshcore-stats/issues/59)) ([3a03060](https://github.com/jorijn/meshcore-stats/commit/3a0306043c8a8dfeb1b5b6df6fa988322cc64e98))

## [0.2.12](https://github.com/jorijn/meshcore-stats/compare/v0.2.11...v0.2.12) (2026-01-09)

### Miscellaneous Chores

* **deps:** lock file maintenance ([#52](https://github.com/jorijn/meshcore-stats/issues/52)) ([d4b5885](https://github.com/jorijn/meshcore-stats/commit/d4b5885379c06988bd8261039c67c6a6724b7704))
* **deps:** lock file maintenance ([#58](https://github.com/jorijn/meshcore-stats/issues/58)) ([a3a5964](https://github.com/jorijn/meshcore-stats/commit/a3a5964488e7fbda5b6d792fa9f0f712e0a0d0c3))
* **deps:** pin dependencies ([#55](https://github.com/jorijn/meshcore-stats/issues/55)) ([9cb95f8](https://github.com/jorijn/meshcore-stats/commit/9cb95f8108738ff21a8346f8922fcd218843fb7d))
* **deps:** pin python docker tag to e8a1ad8 ([#57](https://github.com/jorijn/meshcore-stats/issues/57)) ([f55c236](https://github.com/jorijn/meshcore-stats/commit/f55c236080f6c9bc7a7f090f4382cd53281fc2ac))
* **deps:** update actions/attest-build-provenance digest to 00014ed ([#40](https://github.com/jorijn/meshcore-stats/issues/40)) ([e937f2b](https://github.com/jorijn/meshcore-stats/commit/e937f2b0b7a34bb5c7f3f51b60a592f78a78079d))
* **deps:** update actions/checkout action to v6 ([#48](https://github.com/jorijn/meshcore-stats/issues/48)) ([3967fd0](https://github.com/jorijn/meshcore-stats/commit/3967fd032ad95873bc50c438351ba52e6448a335))
* **deps:** update actions/setup-python action to v6 ([#49](https://github.com/jorijn/meshcore-stats/issues/49)) ([97223f1](https://github.com/jorijn/meshcore-stats/commit/97223f137ca069f6f2632e2e849274cced91a8b3))
* **deps:** update actions/upload-artifact action to v6 ([#50](https://github.com/jorijn/meshcore-stats/issues/50)) ([46fc383](https://github.com/jorijn/meshcore-stats/commit/46fc383eaa9cd99185a5b2112e58d5ff163f3185))
* **deps:** update ghcr.io/astral-sh/uv docker tag to v0.9.22 ([#44](https://github.com/jorijn/meshcore-stats/issues/44)) ([83cf2bf](https://github.com/jorijn/meshcore-stats/commit/83cf2bf929bfba9f7019e78767abf04abe7700d2))
* **deps:** update github/codeql-action action to v4 ([#51](https://github.com/jorijn/meshcore-stats/issues/51)) ([83425a4](https://github.com/jorijn/meshcore-stats/commit/83425a48f67a5d974065b9d33ad0a24a044d67d0))
* **deps:** update github/codeql-action digest to ee117c9 ([#41](https://github.com/jorijn/meshcore-stats/issues/41)) ([dd7ec5b](https://github.com/jorijn/meshcore-stats/commit/dd7ec5b46e92365dbf2731f2378b2168c24f0b88))
* **deps:** update nginx docker tag to v1.29 ([#47](https://github.com/jorijn/meshcore-stats/issues/47)) ([57a53a8](https://github.com/jorijn/meshcore-stats/commit/57a53a8800c9c97459ef5139310a8c23c7540943))
* support python 3.14 in CI and docker ([#56](https://github.com/jorijn/meshcore-stats/issues/56)) ([b66f538](https://github.com/jorijn/meshcore-stats/commit/b66f5380b69108f22d53aaf1a48642c240788d3f))
* switch to Renovate and pin uv image ([#38](https://github.com/jorijn/meshcore-stats/issues/38)) ([adc4423](https://github.com/jorijn/meshcore-stats/commit/adc442351bc84beb6216eafedd8e2eaa95109bfd))

### Continuous Integration

* **docker:** add PR build and smoke test ([#53](https://github.com/jorijn/meshcore-stats/issues/53)) ([40d7d3b](https://github.com/jorijn/meshcore-stats/commit/40d7d3b2faef5ae7c268cd1ecc9616d1dd421f12))
* switch actions to version tags for renovate digests ([#54](https://github.com/jorijn/meshcore-stats/issues/54)) ([1f6e7c5](https://github.com/jorijn/meshcore-stats/commit/1f6e7c50935265579be4faadeb5dc88c4098a71c))

## [0.2.11](https://github.com/jorijn/meshcore-stats/compare/v0.2.10...v0.2.11) (2026-01-08)

### Bug Fixes

* **docker:** skip project install in uv sync ([#35](https://github.com/jorijn/meshcore-stats/issues/35)) ([26d5125](https://github.com/jorijn/meshcore-stats/commit/26d5125e15a78fd7b3fddd09292b4aff6efd23b7))

### Miscellaneous Chores

* **release:** track uv.lock in release-please ([#33](https://github.com/jorijn/meshcore-stats/issues/33)) ([fb627fd](https://github.com/jorijn/meshcore-stats/commit/fb627fdacd1b58d0c8fc10b8d3d8738a1bdce799))

## [0.2.10](https://github.com/jorijn/meshcore-stats/compare/v0.2.9...v0.2.10) (2026-01-08)

### Documentation

* add TZ timezone setting to example config ([45bdf5d](https://github.com/jorijn/meshcore-stats/commit/45bdf5d6d47aacb7ebaba8e420bc9f8d917d06a3))

### Tests

* add comprehensive pytest test suite with 95% coverage ([#29](https://github.com/jorijn/meshcore-stats/issues/29)) ([a9f6926](https://github.com/jorijn/meshcore-stats/commit/a9f69261049e45b36119fd502dd0d7fc2be2691c))
* stabilize suite and broaden integration coverage ([#32](https://github.com/jorijn/meshcore-stats/issues/32)) ([ca13e31](https://github.com/jorijn/meshcore-stats/commit/ca13e31aae1bff561b278608c16df8e17424f9eb))

## [0.2.9](https://github.com/jorijn/meshcore-stats/compare/v0.2.8...v0.2.9) (2026-01-06)

### Bug Fixes

* tooltip positioning and locale-aware time formatting ([f7923b9](https://github.com/jorijn/meshcore-stats/commit/f7923b94346c3d492e7291ecca208ab704176308))

### Continuous Integration

* add artifact-metadata permission for attestation storage records ([c978844](https://github.com/jorijn/meshcore-stats/commit/c978844271eafd35f4778d748d7c832309d1614f))

## [0.2.8](https://github.com/jorijn/meshcore-stats/compare/v0.2.7...v0.2.8) (2026-01-06)

### Bug Fixes

* normalize reporting outputs and chart tooltips ([e37aef6](https://github.com/jorijn/meshcore-stats/commit/e37aef6c5e55d2077baf4ee35abdff0562983d69))

## [0.2.7](https://github.com/jorijn/meshcore-stats/compare/v0.2.6...v0.2.7) (2026-01-06)

### Features

* add telemetry collection for companion and repeater nodes ([#24](https://github.com/jorijn/meshcore-stats/issues/24)) ([a3015e2](https://github.com/jorijn/meshcore-stats/commit/a3015e2209781bdd7c317fa992ced6afa19efe61))

## [0.2.6](https://github.com/jorijn/meshcore-stats/compare/v0.2.5...v0.2.6) (2026-01-05)

### Bug Fixes

* add tmpfs mount for fontconfig cache to fix read-only filesystem errors ([3d0d903](https://github.com/jorijn/meshcore-stats/commit/3d0d90304cec5ebcdb34935400de31afd62e258d))

## [0.2.5](https://github.com/jorijn/meshcore-stats/compare/v0.2.4...v0.2.5) (2026-01-05)

### Features

* add automatic serial port locking to prevent concurrent access ([3c5eace](https://github.com/jorijn/meshcore-stats/commit/3c5eace2207279c55401dd8fa27294d5a94bb682))

### Documentation

* fix formatting in architecture diagram ([7eee23e](https://github.com/jorijn/meshcore-stats/commit/7eee23ec40ff9441515b4ac18fbb7cd3f87fa4b5))

## [0.2.4](https://github.com/jorijn/meshcore-stats/compare/v0.2.3...v0.2.4) (2026-01-05)

### Documentation

* rewrite README with Docker-first installation guide ([6ac5262](https://github.com/jorijn/meshcore-stats/commit/6ac52629d3025db69f9334d3185b97ce16cd3e4b))

## [0.2.3](https://github.com/jorijn/meshcore-stats/compare/v0.2.2...v0.2.3) (2026-01-05)
19  Dockerfile
@@ -1,7 +1,12 @@
+# =============================================================================
+# Stage 0: uv binary
+# =============================================================================
+FROM ghcr.io/astral-sh/uv:0.9.30@sha256:538e0b39736e7feae937a65983e49d2ab75e1559d35041f9878b7b7e51de91e4 AS uv
+
# =============================================================================
# Stage 1: Build dependencies
# =============================================================================
-FROM python:3.12-slim-bookworm AS builder
+FROM python:3.14-slim-bookworm@sha256:f0540d0436a220db0a576ccfe75631ab072391e43a24b88972ef9833f699095f AS builder

# Ofelia version and checksums (verified from GitHub releases)
ARG OFELIA_VERSION=0.3.12
@@ -34,17 +39,21 @@ RUN set -ex; \

# Create virtual environment
RUN python -m venv /opt/venv
-ENV PATH="/opt/venv/bin:$PATH"
+ENV PATH="/opt/venv/bin:$PATH" \
+    UV_PROJECT_ENVIRONMENT=/opt/venv

# Copy uv binary from pinned image
COPY --from=uv /uv /usr/local/bin/uv

# Install Python dependencies
-COPY requirements.txt .
+COPY pyproject.toml uv.lock ./
-RUN pip install --no-cache-dir --upgrade pip && \
-    pip install --no-cache-dir -r requirements.txt
+RUN uv sync --frozen --no-dev --no-install-project

# =============================================================================
# Stage 2: Runtime
# =============================================================================
-FROM python:3.12-slim-bookworm
+FROM python:3.14-slim-bookworm@sha256:f0540d0436a220db0a576ccfe75631ab072391e43a24b88972ef9833f699095f

# OCI Labels
LABEL org.opencontainers.image.source="https://github.com/jorijn/meshcore-stats"
709  README.md
@@ -1,6 +1,6 @@

# MeshCore Stats

A Python-based monitoring system for a MeshCore repeater node and its companion. Collects metrics from both devices, stores them in a SQLite database, and generates a static website with interactive SVG charts and statistics.

A monitoring system for MeshCore LoRa mesh networks. Collects metrics from companion and repeater nodes, stores them in SQLite, and generates a static dashboard with interactive charts.

**Live demo:** [meshcore.jorijn.com](https://meshcore.jorijn.com)

@@ -9,500 +9,371 @@ A Python-based monitoring system for a MeshCore repeater node and its companion.

<img src="docs/screenshot-2.png" width="49%" alt="MeshCore Stats Reports">
</p>

## Features

- **Data Collection** - Collect metrics from companion (local) and repeater (remote) nodes
- **Chart Rendering** - Generate interactive SVG charts from the database using matplotlib
- **Static Site** - Generate a static HTML website with day/week/month/year views
- **Reports** - Generate monthly and yearly statistics reports

## Requirements

### Python Dependencies

- Python 3.10+
- meshcore >= 2.2.3
- pyserial >= 3.5
- jinja2 >= 3.1.0
- matplotlib >= 3.8.0

### System Dependencies

- sqlite3 (for database maintenance script)

## Setup

### 1. Create Virtual Environment

> **Linux only** - macOS and Windows users see [Platform Notes](#platform-notes) first.

```bash
cd /path/to/meshcore-stats
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```
### 2. Configure

Copy the example configuration file and customize it:

```bash
cp meshcore.conf.example meshcore.conf
# Edit meshcore.conf with your settings
```

The configuration file is automatically loaded by the scripts. Key settings to configure:

- **Connection**: `MESH_SERIAL_PORT`, `MESH_TRANSPORT`
- **Repeater Identity**: `REPEATER_NAME`, `REPEATER_PASSWORD`
- **Display Names**: `REPEATER_DISPLAY_NAME`, `COMPANION_DISPLAY_NAME`
- **Location**: `REPORT_LOCATION_NAME`, `REPORT_LAT`, `REPORT_LON`, `REPORT_ELEV`
- **Hardware Info**: `REPEATER_HARDWARE`, `COMPANION_HARDWARE`
- **Radio Config**: `RADIO_FREQUENCY`, `RADIO_BANDWIDTH`, etc. (includes presets for different regions)

See `meshcore.conf.example` for all available options with documentation.

## Usage

### Manual Execution

```bash
cd /path/to/meshcore-stats
source .venv/bin/activate

# Collect companion data
python scripts/collect_companion.py

# Collect repeater data
python scripts/collect_repeater.py

# Generate static site (includes chart rendering)
python scripts/render_site.py

# Generate reports
python scripts/render_reports.py
```

The configuration is automatically loaded from `meshcore.conf`.
### Cron Setup

Add these entries to your crontab (`crontab -e`):

```cron
# MeshCore Stats - adjust path as needed
MESHCORE=/home/user/meshcore-stats

# Every minute: collect companion data
* * * * * cd $MESHCORE && flock -w 60 /tmp/meshcore.lock .venv/bin/python scripts/collect_companion.py

# Every 15 minutes: collect repeater data
1,16,31,46 * * * * cd $MESHCORE && flock -w 60 /tmp/meshcore.lock .venv/bin/python scripts/collect_repeater.py

# Every 5 minutes: render site
*/5 * * * * cd $MESHCORE && .venv/bin/python scripts/render_site.py

# Daily at midnight: generate reports
0 0 * * * cd $MESHCORE && .venv/bin/python scripts/render_reports.py

# Monthly at 3 AM on the 1st: database maintenance
0 3 1 * * $MESHCORE/scripts/db_maintenance.sh
```

**Notes:**
- `cd $MESHCORE` is required because paths in the config are relative to the project root
- `flock` prevents USB serial conflicts when companion and repeater collection overlap
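
The `flock -w 60` wrapper serializes the two collectors on the shared serial port. The same idea can be sketched in Python with advisory file locks (illustrative only, not project code; `with_serial_lock` and the lock path are made up):

```python
import fcntl
import time

def with_serial_lock(path, work, timeout_s=60.0):
    """Run `work()` while holding an exclusive advisory lock on `path`,
    waiting up to `timeout_s` seconds, mirroring `flock -w 60 <path>`."""
    deadline = time.monotonic() + timeout_s
    with open(path, "w") as fh:
        while True:
            try:
                fcntl.flock(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
                break
            except BlockingIOError:
                if time.monotonic() >= deadline:
                    raise TimeoutError(f"could not lock {path} within {timeout_s}s")
                time.sleep(0.1)
        try:
            return work()
        finally:
            fcntl.flock(fh, fcntl.LOCK_UN)

result = with_serial_lock("/tmp/meshcore-demo.lock", lambda: "collected")
```

Like `flock(1)`, this is advisory: it only works if every collector goes through the same lock file.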

### Docker Installation

The recommended way to run MeshCore Stats is with Docker Compose. This provides automatic scheduling of all collection and rendering tasks.

#### Quick Start

```bash
# Clone and configure
git clone https://github.com/jorijn/meshcore-stats.git
cd meshcore-stats

# Create configuration
cp meshcore.conf.example meshcore.conf
# Edit meshcore.conf with your repeater name and password

# Create data directories (container runs as UID 1000)
mkdir -p data/state out
sudo chown -R 1000:1000 data out
# Alternative: chmod -R 777 data out (less secure, use chown if possible)

# Start the containers
docker compose up -d

# View logs
docker compose logs -f
```

The web interface will be available at `http://localhost:8080`.

#### Architecture

The Docker setup uses two containers:

| Container | Purpose |
|-----------|---------|
| `meshcore-stats` | Runs Ofelia scheduler for data collection and rendering |
| `nginx` | Serves the static website |

#### Configuration

Configuration is loaded from `meshcore.conf` via the `env_file` directive. Key settings:

```bash
# Required: Serial device for companion node
MESH_SERIAL_PORT=/dev/ttyUSB0  # Adjust for your system

# Required: Repeater identity
REPEATER_NAME="Your Repeater Name"
REPEATER_PASSWORD="your-password"

# Display names (shown in UI)
REPEATER_DISPLAY_NAME="My Repeater"
COMPANION_DISPLAY_NAME="My Companion"
```

See `meshcore.conf.example` for all available options.

#### Serial Device Access

For serial transport, the container needs access to your USB serial device. Create a `docker-compose.override.yml` file (gitignored) to specify your device:

```bash
# Add your serial device (docker-compose.override.yml is not tracked in git)
cat > docker-compose.override.yml << 'EOF'
services:
  meshcore-stats:
    devices:
      - /dev/ttyACM0:/dev/ttyACM0
      # - /dev/ttyUSB0:/dev/ttyUSB0:rw  # alternative for USB-to-serial adapters
EOF

# Start
docker compose up -d

# Verify it's working. The various collection and render jobs will trigger after a few minutes.
docker compose ps
docker compose logs meshcore-stats | head -20

# View dashboard at http://localhost:8080
```

This file is automatically merged with `docker-compose.yml` when running `docker compose up`.

> **Note**: TCP transport users (e.g., macOS with socat) don't need a devices section - just configure `MESH_TRANSPORT=tcp` in your `meshcore.conf`.

On the host, ensure the device is accessible:

```bash
# Add user to dialout group (Linux)
sudo usermod -a -G dialout $USER
```

## Features

- **Data Collection** - Metrics from local companion and remote repeater nodes
- **Interactive Charts** - SVG charts with day/week/month/year views and tooltips
- **Auto Telemetry Charts** - Repeater `telemetry.*` metrics are charted automatically when telemetry is enabled (`telemetry.voltage.*` excluded)
- **Statistics Reports** - Monthly and yearly report generation
- **Light/Dark Theme** - Automatic theme switching based on system preference

## Prerequisites

- Docker and Docker Compose V2
- MeshCore companion node connected via USB serial
- Remote repeater node reachable via LoRa from the companion

**Resource requirements:** ~100MB memory, ~100MB disk per year of data.

## Installation

### Docker (Recommended)

#### 1. Clone the Repository

```bash
git clone https://github.com/jorijn/meshcore-stats.git
cd meshcore-stats
```

#### 2. Configure

Copy the example configuration and edit it:

```bash
cp meshcore.conf.example meshcore.conf
```

**Minimal required settings:**

```ini
# Repeater identity (required)
REPEATER_NAME=Your Repeater Name
REPEATER_PASSWORD=your-admin-password

# Display names
REPEATER_DISPLAY_NAME=My Repeater
COMPANION_DISPLAY_NAME=My Companion
```

See [meshcore.conf.example](meshcore.conf.example) for all available options.

Optional telemetry display settings:

```ini
# Enable environmental telemetry collection from repeater
TELEMETRY_ENABLED=1

# Telemetry display units only (DB values stay unchanged)
DISPLAY_UNIT_SYSTEM=metric  # or imperial
```
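The display-unit setting only affects rendering; stored values stay metric in the database. A hedged sketch of what such a conversion can look like (function name and conversion rules here are illustrative, not the project's actual code):

```python
def for_display(sensor_type: str, value: float, unit_system: str = "metric") -> float:
    """Convert a metric database value for display only."""
    if unit_system != "imperial":
        return value
    if sensor_type == "temperature":   # Celsius -> Fahrenheit
        return value * 9.0 / 5.0 + 32.0
    if sensor_type == "barometer":     # hPa -> inHg
        return value * 0.02953
    return value  # unknown types pass through unchanged

shown = for_display("temperature", 23.5, "imperial")
```

Keeping conversion at the render layer means switching `DISPLAY_UNIT_SYSTEM` never rewrites historical data.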
#### 3. Create Data Directories

```bash
mkdir -p data/state out
sudo chown -R 1000:1000 data out
```

The container runs as UID 1000, so directories must be writable by this user. If `sudo` is not available, you can relax the permissions with `chmod 777 data out`, but this is less secure.

#### 4. Configure Serial Device

Create `docker-compose.override.yml` to specify your serial device:

```yaml
services:
  meshcore-stats:
    devices:
      - /dev/ttyACM0:/dev/ttyACM0
```

Ensure your user has serial port access:

```bash
sudo usermod -aG dialout $USER
# Log out and back in for changes to take effect
```

#### 5. Start the Containers

```bash
docker compose up -d
```

After the various collection and render jobs have run, the dashboard will be available at **http://localhost:8080**.

#### Development Mode

For local development with live code changes:

```bash
docker compose -f docker-compose.yml -f docker-compose.dev.yml up --build
```

This mounts `src/` and `scripts/` into the container, so changes take effect immediately without rebuilding.

#### Verify Installation

```bash
# Check container status
docker compose ps

# View logs
docker compose logs -f meshcore-stats
```

#### Image Tags

Images are published to `ghcr.io/jorijn/meshcore-stats`:

| Tag | Description |
|-----|-------------|
| `X.Y.Z` | Specific version (e.g., `0.3.0`) |
| `latest` | Latest release |
| `nightly` | Latest release rebuilt with OS patches |
| `nightly-YYYYMMDD` | Dated nightly build |

Version tags are rebuilt nightly to include OS security patches. For reproducible deployments, pin by SHA digest:

```yaml
image: ghcr.io/jorijn/meshcore-stats@sha256:abc123...
```

### Common Docker Commands

```bash
# View real-time logs
docker compose logs -f meshcore-stats

# Restart after configuration changes
docker compose restart meshcore-stats

# Update to latest version (database migrations are automatic)
docker compose pull && docker compose up -d

# Stop all containers
docker compose down

# Backup database
cp data/state/metrics.db data/state/metrics.db.backup
```
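
A plain `cp` on a live WAL database can capture a mid-write state. When Python is handy, sqlite3's online backup API copies a consistent snapshot instead (an illustrative sketch; `backup_db` is not part of the project):

```python
import os
import sqlite3
import tempfile

def backup_db(src_path: str, dst_path: str) -> None:
    """Copy a consistent snapshot of a (possibly live) SQLite database."""
    src = sqlite3.connect(src_path)
    dst = sqlite3.connect(dst_path)
    try:
        src.backup(dst)  # the backup API handles WAL correctly
    finally:
        dst.close()
        src.close()

# demo on a throwaway database
tmp = tempfile.mkdtemp()
src_p = os.path.join(tmp, "metrics.db")
dst_p = os.path.join(tmp, "metrics.db.backup")
con = sqlite3.connect(src_p)
con.execute("CREATE TABLE t (x INTEGER)")
con.execute("INSERT INTO t VALUES (1)")
con.commit()
con.close()
backup_db(src_p, dst_p)
copied = sqlite3.connect(dst_p).execute("SELECT x FROM t").fetchone()[0]
```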

> **Note**: `docker compose down` preserves your data. Use `docker compose down -v` only if you want to delete everything.

### Manual Installation (Alternative)

For environments where Docker is not available.

#### Requirements

- Python 3.11+ (3.14 recommended)
- SQLite3
- [uv](https://github.com/astral-sh/uv)

#### Setup

```bash
cd meshcore-stats
uv venv
source .venv/bin/activate
uv sync
cp meshcore.conf.example meshcore.conf
# Edit meshcore.conf with your settings
```

#### Cron Setup

Add to your crontab (`crontab -e`):

```cron
MESHCORE=/path/to/meshcore-stats

# Companion: every minute
* * * * * cd $MESHCORE && .venv/bin/python scripts/collect_companion.py

# Repeater: every 15 minutes
1,16,31,46 * * * * cd $MESHCORE && .venv/bin/python scripts/collect_repeater.py

# Charts: every 5 minutes
*/5 * * * * cd $MESHCORE && .venv/bin/python scripts/render_charts.py

# Site: every 5 minutes
*/5 * * * * cd $MESHCORE && .venv/bin/python scripts/render_site.py

# Reports: daily at midnight
0 0 * * * cd $MESHCORE && .venv/bin/python scripts/render_reports.py
```

Serve the `out/` directory with any web server.

## Platform Notes

<details>
<summary><strong>Linux</strong></summary>

Docker can access USB serial devices directly. Add your device to `docker-compose.override.yml`:

```yaml
services:
  meshcore-stats:
    devices:
      - /dev/ttyACM0:/dev/ttyACM0
```

Common device paths:

- `/dev/ttyACM0` - Arduino/native USB
- `/dev/ttyUSB0` - USB-to-serial adapters

</details>

#### Volumes

| Path | Purpose |
|------|---------|
| `./data/state` | SQLite database and circuit breaker state |
| `./out` | Generated static site (served by nginx) |

Both directories must be writable by UID 1000 (the container user). See Quick Start for setup.

<details>
<summary><strong>macOS</strong></summary>

Docker Desktop for macOS runs in a Linux VM and **cannot directly access USB serial devices**.

**Option 1: TCP Bridge (Recommended)**

Expose the serial port over TCP using socat:

```bash
# Install socat
brew install socat

# Bridge serial to TCP (run in background)
socat TCP-LISTEN:5000,fork,reuseaddr OPEN:/dev/cu.usbserial-0001,rawer,nonblock,ispeed=115200,ospeed=115200
```

Configure in `meshcore.conf`:

```ini
MESH_TRANSPORT=tcp
MESH_TCP_HOST=host.docker.internal
MESH_TCP_PORT=5000
```

**Option 2: Native Installation**

Use the manual installation method with cron instead of Docker.

</details>

#### Resource Limits

Default resource limits in `docker-compose.yml`:

| Container | CPU | Memory |
|-----------|-----|--------|
| meshcore-stats | 1.0 | 512MB |
| nginx | 0.5 | 64MB |

Adjust in `docker-compose.yml` if needed.

#### Important Notes

- **Single instance only**: SQLite uses WAL mode which requires exclusive access. Do not run multiple container instances.
- **Persistent storage**: Mount `./data/state` to preserve your database across container restarts.
- **Health checks**: Both containers have health checks. Use `docker compose ps` to verify status.

Environment variables always take precedence over `meshcore.conf`.

### Serving the Site

The static site is generated in the `out/` directory. You can serve it with any web server:

```bash
# Simple Python server for testing
cd out && python3 -m http.server 8080

# Or configure nginx/caddy to serve the out/ directory
```

## Project Structure

```
meshcore-stats/
├── requirements.txt
├── README.md
├── meshcore.conf.example     # Example configuration
├── meshcore.conf             # Your configuration (create this)
├── src/meshmon/
│   ├── __init__.py
│   ├── env.py                # Environment variable parsing
│   ├── log.py                # Logging helper
│   ├── meshcore_client.py    # MeshCore connection and commands
│   ├── db.py                 # SQLite database module
│   ├── retry.py              # Retry logic and circuit breaker
│   ├── charts.py             # Matplotlib SVG chart generation
│   ├── html.py               # HTML rendering
│   ├── reports.py            # Report generation
│   ├── metrics.py            # Metric type definitions
│   ├── battery.py            # Battery voltage to percentage conversion
│   ├── migrations/           # SQL schema migrations
│   │   ├── 001_initial_schema.sql
│   │   └── 002_eav_schema.sql
│   └── templates/            # Jinja2 HTML templates
├── scripts/
│   ├── collect_companion.py  # Collect metrics from companion node
│   ├── collect_repeater.py   # Collect metrics from repeater node
│   ├── render_charts.py      # Generate SVG charts from database
│   ├── render_site.py        # Generate static HTML site
│   ├── render_reports.py     # Generate monthly/yearly reports
│   └── db_maintenance.sh     # Database VACUUM/ANALYZE
├── data/
│   └── state/
│       ├── metrics.db        # SQLite database (WAL mode)
│       └── repeater_circuit.json
└── out/                      # Generated site
    ├── .htaccess             # Apache config (DirectoryIndex, caching)
    ├── styles.css            # Stylesheet
    ├── chart-tooltip.js      # Chart tooltip enhancement
    ├── day.html              # Repeater pages (entry point)
    ├── week.html
    ├── month.html
    ├── year.html
    ├── companion/
    │   ├── day.html
    │   ├── week.html
    │   ├── month.html
    │   └── year.html
    └── reports/
        ├── index.html
        ├── repeater/         # YYYY/MM reports
        └── companion/
```

## Chart Features

Charts are rendered as inline SVG using matplotlib with the following features:

- **Theme Support**: Automatic light/dark mode via CSS `prefers-color-scheme`
- **Interactive Tooltips**: Hover to see exact values and timestamps
- **Data Point Indicator**: Visual marker shows position on the chart line
- **Mobile Support**: Touch-friendly tooltips
- **Statistics**: Min/Avg/Max values displayed below each chart
- **Period Views**: Day, week, month, and year time ranges
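The inline-SVG idea can be illustrated with a tiny hand-rolled sketch (the project itself uses matplotlib; this stand-alone function is not project code):

```python
def render_polyline_svg(samples, width=300, height=100):
    """Return an SVG string plotting `samples` as a single polyline,
    scaled so the min/max of the data fill the chart height."""
    lo, hi = min(samples), max(samples)
    span = (hi - lo) or 1.0  # avoid division by zero for flat series
    pts = []
    for i, v in enumerate(samples):
        x = width * i / max(len(samples) - 1, 1)
        y = height - height * (v - lo) / span  # SVG y axis grows downward
        pts.append(f"{x:.1f},{y:.1f}")
    points = " ".join(pts)
    return (f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}">'
            f'<polyline fill="none" stroke="currentColor" points="{points}"/></svg>')

svg = render_polyline_svg([3.95, 3.97, 4.01, 3.99])
```

Because the markup is inline, CSS on the page (e.g. `currentColor` plus `prefers-color-scheme`) can restyle the chart without regenerating it.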
<details>
<summary><strong>Windows (WSL2)</strong></summary>

WSL2 and Docker Desktop for Windows cannot directly access COM ports.

Use the TCP bridge approach (similar to macOS) or native installation.

</details>

## Troubleshooting

### Serial Device Not Found

If you see "No serial ports found" or connection fails:

1. Check that your device is connected:
```bash
ls -la /dev/ttyUSB* /dev/ttyACM*
```

2. Check permissions (add user to dialout group):
```bash
sudo usermod -a -G dialout $USER
# Log out and back in for changes to take effect
```

3. Try specifying the port explicitly:
```bash
export MESH_SERIAL_PORT=/dev/ttyACM0
```

4. Check dmesg for device detection:
```bash
dmesg | tail -20
```

### Repeater Not Found

If the script cannot find the repeater contact:

1. The script will print all discovered contacts - check for the correct name
2. Verify `REPEATER_NAME` matches exactly (case-sensitive)
3. Try using `REPEATER_KEY_PREFIX` instead with the first 6-12 hex chars of the public key

### Circuit Breaker

If repeater collection shows "cooldown active":

1. This is normal after multiple failed remote requests
2. Wait for the cooldown period (default 1 hour) or reset manually:
```bash
rm data/state/repeater_circuit.json
```
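
The shape of the logic behind `data/state/repeater_circuit.json` can be sketched as follows (field names and file layout here are assumptions for illustration; the real state file may differ):

```python
import json
import os
import tempfile
import time

CB_FAILS = 6          # mirrors REMOTE_CB_FAILS
CB_COOLDOWN_S = 3600  # mirrors REMOTE_CB_COOLDOWN_S

def record_failure(path: str) -> dict:
    """Increment the failure count; open the circuit once it hits the limit."""
    state = {"failures": 0, "opened_at": None}
    if os.path.exists(path):
        with open(path) as fh:
            state = json.load(fh)
    state["failures"] += 1
    if state["failures"] >= CB_FAILS:
        state["opened_at"] = time.time()
    with open(path, "w") as fh:
        json.dump(state, fh)
    return state

def in_cooldown(path: str) -> bool:
    """True while the circuit is open - i.e. skip remote requests."""
    if not os.path.exists(path):
        return False  # deleting the file resets the breaker
    with open(path) as fh:
        state = json.load(fh)
    opened = state.get("opened_at")
    return opened is not None and time.time() - opened < CB_COOLDOWN_S

# demo: six failures trip the breaker
demo = os.path.join(tempfile.mkdtemp(), "repeater_circuit.json")
for _ in range(CB_FAILS):
    record_failure(demo)
tripped = in_cooldown(demo)
```

This also shows why `rm data/state/repeater_circuit.json` works as a manual reset: with no state file, the breaker reads as closed.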

### Docker on macOS: Serial Devices Not Available

Docker on macOS (including Docker Desktop and OrbStack) runs containers inside a Linux virtual machine. USB and serial devices connected to the Mac host cannot be passed through to this VM, so the `devices:` section in `docker-compose.yml` will fail with:

```
error gathering device information while adding custom device "/dev/cu.usbserial-0001": no such file or directory
```

**Workarounds:**

1. **Use TCP transport**: Run a serial-to-TCP bridge on the host and configure the container to connect via TCP:
```bash
# On macOS host, expose serial port over TCP (install socat via Homebrew)
socat TCP-LISTEN:5000,fork,reuseaddr OPEN:/dev/cu.usbserial-0001,rawer,nonblock,ispeed=115200,ospeed=115200
```
Then configure in meshcore.conf:
```bash
MESH_TRANSPORT=tcp
MESH_TCP_HOST=host.docker.internal
MESH_TCP_PORT=5000
```

2. **Run natively on macOS**: Use the cron-based setup instead of Docker (see "Cron Setup" section).

3. **Use a Linux host**: Docker on Linux can pass through USB devices directly.

Note: OrbStack has [USB passthrough on their roadmap](https://github.com/orbstack/orbstack/issues/89) but it is not yet available.
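
Before starting the stack, the socat bridge from workaround 1 can be sanity-checked with a quick reachability probe (illustrative helper, not project code; host/port mirror the `MESH_TCP_*` settings):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# From inside the container the bridge would be checked with
# can_connect("host.docker.internal", 5000); a closed port returns False:
reachable = can_connect("127.0.0.1", 1, timeout=0.5)
```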

## Configuration Reference

| Variable | Default | Description |
|----------|---------|-------------|
| **Repeater Identity** | | |
| `REPEATER_NAME` | *required* | Advertised name to find in contacts |
| `REPEATER_PASSWORD` | *required* | Admin password for repeater |
| `REPEATER_KEY_PREFIX` | - | Alternative to `REPEATER_NAME`: hex prefix of public key |
| **Connection** | | |
| `MESH_TRANSPORT` | serial | Transport type: `serial`, `tcp`, or `ble` |
| `MESH_SERIAL_PORT` | auto | Serial port path |
| `MESH_SERIAL_BAUD` | 115200 | Baud rate |
| `MESH_TCP_HOST` | localhost | TCP host (for TCP transport) |
| `MESH_TCP_PORT` | 5000 | TCP port (for TCP transport) |
| `MESH_BLE_ADDR` | - | BLE device address |
| `MESH_BLE_PIN` | - | BLE PIN |
| `MESH_DEBUG` | 0 | Enable debug output |
| **Display** | | |
| `REPEATER_DISPLAY_NAME` | Repeater Node | Name shown in UI |
| `COMPANION_DISPLAY_NAME` | Companion Node | Name shown in UI |
| `REPEATER_HARDWARE` | LoRa Repeater | Hardware model for sidebar |
| `COMPANION_HARDWARE` | LoRa Node | Hardware model for sidebar |
| **Location** | | |
| `REPORT_LOCATION_NAME` | Your Location | Full location name for reports |
| `REPORT_LOCATION_SHORT` | Your Location | Short location for sidebar/meta |
| `REPORT_LAT` | 0.0 | Latitude in decimal degrees |
| `REPORT_LON` | 0.0 | Longitude in decimal degrees |
| `REPORT_ELEV` | 0.0 | Elevation |
| `REPORT_ELEV_UNIT` | m | Elevation unit: "m" or "ft" |
| **Intervals** | | |
| `COMPANION_STEP` | 60 | Companion data collection interval (seconds) |
| `REPEATER_STEP` | 900 | Repeater data collection interval (seconds) |
| `REMOTE_TIMEOUT_S` | 10 | Remote request timeout |
| `REMOTE_RETRY_ATTEMPTS` | 2 | Max retry attempts |
| `REMOTE_RETRY_BACKOFF_S` | 4 | Retry backoff delay |
| `REMOTE_CB_FAILS` | 6 | Failures before circuit opens |
| `REMOTE_CB_COOLDOWN_S` | 3600 | Circuit breaker cooldown |
| **Paths** | | |
| `STATE_DIR` | ./data/state | State file path |
| `OUT_DIR` | ./out | Output site path |
| **Radio** (display only) | | |
| `RADIO_FREQUENCY` | 869.618 MHz | Frequency shown in sidebar |
| `RADIO_BANDWIDTH` | 62.5 kHz | Bandwidth |
| `RADIO_SPREAD_FACTOR` | SF8 | Spread factor |
| `RADIO_CODING_RATE` | CR8 | Coding rate |
See [meshcore.conf.example](meshcore.conf.example) for all options with regional radio presets.

## Troubleshooting

| Symptom | Cause | Solution |
|---------|-------|----------|
| "Permission denied" on serial port | User not in dialout group | `sudo usermod -aG dialout $USER` then re-login |
| Repeater shows "offline" status | No data or circuit breaker tripped | Check logs; delete `data/state/repeater_circuit.json` to reset |
| Empty charts | Not enough data collected | Wait for 2+ collection cycles |
| Container exits immediately | Missing or invalid configuration | Verify `meshcore.conf` exists and has required values |
| "No serial ports found" | Device not connected/detected | Check `ls /dev/tty*` and device permissions |
| Device path changed after reboot | USB enumeration order changed | Update path in `docker-compose.override.yml` or use udev rules |
| "database is locked" errors | Maintenance script running | Wait for completion; check if VACUUM is running |

## Metrics Reference

The system uses an EAV (Entity-Attribute-Value) schema where firmware field names are stored directly in the database. This allows new metrics to be captured automatically without schema changes.

### Repeater Metrics

| Metric | Type | Display Unit | Description |
|--------|------|--------------|-------------|
| `bat` | Gauge | Voltage (V) | Battery voltage (stored in mV, displayed as V) |
| `bat_pct` | Gauge | Battery (%) | Battery percentage (computed from voltage) |
| `last_rssi` | Gauge | RSSI (dBm) | Signal strength of last packet |
| `last_snr` | Gauge | SNR (dB) | Signal-to-noise ratio |
| `noise_floor` | Gauge | dBm | Background RF noise |
| `uptime` | Gauge | Days | Time since reboot (seconds ÷ 86400) |
| `tx_queue_len` | Gauge | Queue depth | TX queue length |
| `nb_recv` | Counter | Packets/min | Total packets received |
| `nb_sent` | Counter | Packets/min | Total packets transmitted |
| `airtime` | Counter | Seconds/min | TX airtime rate |
| `rx_airtime` | Counter | Seconds/min | RX airtime rate |
| `flood_dups` | Counter | Packets/min | Flood duplicate packets |
| `direct_dups` | Counter | Packets/min | Direct duplicate packets |
| `sent_flood` | Counter | Packets/min | Flood packets transmitted |
| `recv_flood` | Counter | Packets/min | Flood packets received |
| `sent_direct` | Counter | Packets/min | Direct packets transmitted |
| `recv_direct` | Counter | Packets/min | Direct packets received |
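
A minimal sketch of what an EAV layout looks like in SQLite (table and column names here are hypothetical; the real schema lives in `src/meshmon/migrations/`):

```python
import sqlite3
import time

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE samples (
    entity    TEXT    NOT NULL,  -- 'repeater' or 'companion'
    attribute TEXT    NOT NULL,  -- firmware field name, e.g. 'last_rssi'
    ts        INTEGER NOT NULL,  -- unix timestamp
    value     REAL    NOT NULL
)""")
now = int(time.time())
# a new firmware field is just a new `attribute` string - no schema change
con.executemany("INSERT INTO samples VALUES (?, ?, ?, ?)", [
    ("repeater", "last_rssi", now, -97.0),
    ("repeater", "bat", now, 4012.0),  # millivolts
])
rssi = con.execute(
    "SELECT value FROM samples WHERE entity = 'repeater' AND attribute = 'last_rssi'"
).fetchone()[0]
```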

### Companion Metrics

| Metric | Type | Display Unit | Description |
|--------|------|--------------|-------------|
| `battery_mv` | Gauge | Voltage (V) | Battery voltage (stored in mV, displayed as V) |
| `bat_pct` | Gauge | Battery (%) | Battery percentage (computed from voltage) |
| `contacts` | Gauge | Count | Known mesh nodes |
| `uptime_secs` | Gauge | Days | Time since reboot (seconds ÷ 86400) |
| `recv` | Counter | Packets/min | Total packets received |
| `sent` | Counter | Packets/min | Total packets transmitted |

### Debug Logging

```bash
# Enable debug mode in meshcore.conf
MESH_DEBUG=1

# View detailed logs
docker compose logs -f meshcore-stats
```

### Metric Types

- **Gauge**: Instantaneous values stored as-is (battery voltage, RSSI, queue depth)
- **Counter**: Cumulative values where the rate of change is calculated (packets, airtime). Charts display per-minute rates.

### Circuit Breaker

The repeater collector uses a circuit breaker to avoid spamming LoRa when the repeater is unreachable. After multiple failures, it enters a cooldown period (default: 1 hour).

To reset manually:

```bash
rm data/state/repeater_circuit.json
docker compose restart meshcore-stats
```

## Database

Metrics are stored in a SQLite database at `data/state/metrics.db` with WAL mode enabled for concurrent read/write access.
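
Counter metrics are stored as cumulative totals; per-minute chart rates come from deltas between consecutive database samples. A sketch of that computation (illustrative, not the project's actual code):

```python
def per_minute_rates(samples):
    """samples: list of (ts_seconds, counter_value) pairs, oldest first.
    Returns the per-minute rate between each consecutive pair."""
    rates = []
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        delta, dt = v1 - v0, t1 - t0
        if delta < 0 or dt <= 0:  # counter reset after a reboot, or bad clock
            continue
        rates.append(delta * 60.0 / dt)
    return rates

# 60 packets over 60 s -> 60/min; the drop to 10 (device reboot) is skipped
rates = per_minute_rates([(0, 100), (60, 160), (120, 10)])
```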

### Schema Migrations

Database migrations are stored as SQL files in `src/meshmon/migrations/` and are applied automatically when the database is initialized. Migration files follow the naming convention `NNN_description.sql` (e.g., `001_initial_schema.sql`).

## Architecture

```
┌─────────────────┐     LoRa      ┌─────────────────┐
│    Companion    │◄─────────────►│    Repeater     │
│  (USB Serial)   │               │    (Remote)     │
└────────┬────────┘               └─────────────────┘
         │
         │ Serial/TCP
         ▼
┌─────────────────┐
│   Docker Host   │
│  ┌───────────┐  │
│  │ meshcore- │  │     ┌─────────┐
│  │   stats   │──┼────►│  nginx  │──► :8080
│  └───────────┘  │     └─────────┘
│        │        │
│        ▼        │
│  SQLite + SVG   │
└─────────────────┘
```

The system runs two containers:

- **meshcore-stats**: Collects data on schedule (Ofelia) and generates charts
- **nginx**: Serves the static dashboard

## Documentation

- [docs/firmware-responses.md](docs/firmware-responses.md) - MeshCore firmware response formats

## License

MIT

## Public Instances

Public MeshCore Stats installations. Want to add yours? [Open a pull request](https://github.com/jorijn/meshcore-stats/pulls)!

| URL | Hardware | Location |
|-----|----------|----------|
| [meshcore.jorijn.com](https://meshcore.jorijn.com) | SenseCAP Solar Node P1 Pro + 6.5dBi Mikrotik antenna | Oosterhout, The Netherlands |

@@ -15,7 +15,7 @@ services:
  # MeshCore Stats - Data collection and rendering
  # ==========================================================================
  meshcore-stats:
    image: ghcr.io/jorijn/meshcore-stats:0.2.3   # x-release-please-version
    image: ghcr.io/jorijn/meshcore-stats:0.2.15  # x-release-please-version
    container_name: meshcore-stats
    restart: unless-stopped

@@ -47,6 +47,7 @@ services:
    read_only: true
    tmpfs:
      - /tmp:noexec,nosuid,size=64m
      - /var/cache/fontconfig:noexec,nosuid,size=4m

    # Resource limits
    deploy:

@@ -77,7 +78,7 @@ services:
  # nginx - Static site server
  # ==========================================================================
  nginx:
    image: nginx:1.27-alpine
    image: nginx:1.29-alpine@sha256:1d13701a5f9f3fb01aaa88cef2344d65b6b5bf6b7d9fa4cf0dca557a8d7702ba
    container_name: meshcore-stats-nginx
    restart: unless-stopped

@@ -102,6 +102,84 @@ Returns a single dict with all status fields.

---

## Telemetry Data

Environmental telemetry is requested via `req_telemetry_sync(contact)` and returns Cayenne LPP formatted sensor data. This requires `TELEMETRY_ENABLED=1` and a sensor board attached to the repeater.

### Payload Format

Both `req_telemetry_sync()` and `get_self_telemetry()` return a dict containing the LPP data list and a public key prefix:

```python
{
    'pubkey_pre': 'a5c14f5244d6',
    'lpp': [
        {'channel': 0, 'type': 'temperature', 'value': 23.5},
        {'channel': 0, 'type': 'humidity', 'value': 45.2},
    ]
}
```

The `extract_lpp_from_payload()` helper in `src/meshmon/telemetry.py` handles extracting the `lpp` list from this wrapper format.
|
||||
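As a rough illustration of that unwrapping, a minimal sketch (the actual helper in `src/meshmon/telemetry.py` may handle more cases; the bare-list fallback below is an assumption):

```python
def extract_lpp_from_payload(payload):
    """Return the LPP entry list from a telemetry payload, or None.

    Sketch of the unwrapping described above; the real helper in
    src/meshmon/telemetry.py may differ in detail.
    """
    if isinstance(payload, dict):
        lpp = payload.get("lpp")
        if isinstance(lpp, list):
            return lpp
        return None
    # Assumption: tolerate firmware that returns the bare list directly
    if isinstance(payload, list):
        return payload
    return None
```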
### `req_telemetry_sync(contact)`

Returns sensor readings from a remote node in Cayenne LPP format:

```python
[
    {'channel': 0, 'type': 'temperature', 'value': 23.5},
    {'channel': 0, 'type': 'humidity', 'value': 45.2},
    {'channel': 0, 'type': 'barometer', 'value': 1013.25},
    {'channel': 1, 'type': 'gps', 'value': {'latitude': 51.5, 'longitude': -0.1, 'altitude': 10}},
]
```

**Common sensor types:**

| Type | Unit | Description |
|------|------|-------------|
| `temperature` | Celsius | Temperature reading |
| `humidity` | % | Relative humidity |
| `barometer` | hPa/mbar | Barometric pressure |
| `voltage` | V | Voltage reading |
| `gps` | compound | GPS with `latitude`, `longitude`, `altitude` |

**Stored as:**
- `telemetry.temperature.0` - Temperature on channel 0
- `telemetry.humidity.0` - Humidity on channel 0
- `telemetry.gps.1.latitude` - GPS latitude on channel 1
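The flattening from LPP entries to those key names can be sketched as follows (illustrative only; the project's actual `extract_telemetry_metrics()` may differ):

```python
def extract_telemetry_metrics(lpp):
    """Flatten LPP entries into 'telemetry.<type>.<channel>' metric keys.

    Illustrative sketch of the key scheme described above; compound
    values such as gps expand to one key per component
    (e.g. telemetry.gps.1.latitude).
    """
    metrics = {}
    for entry in lpp:
        base = f"telemetry.{entry['type']}.{entry['channel']}"
        value = entry["value"]
        if isinstance(value, dict):  # compound sensor, e.g. gps
            for part, v in value.items():
                metrics[f"{base}.{part}"] = float(v)
        else:
            metrics[base] = float(value)
    return metrics
```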
**Notes:**
- Requires environmental sensor board (BME280, BME680, etc.) on repeater
- Channel number distinguishes multiple sensors of the same type
- Not all repeaters have environmental sensors attached
- Telemetry collection does not affect circuit breaker state
- Telemetry failures are logged as warnings and do not block status collection

### `get_self_telemetry()`

Returns self telemetry from the companion node's attached sensors.
Same Cayenne LPP format as `req_telemetry_sync()`.

```python
[
    {'channel': 0, 'type': 'temperature', 'value': 23.5},
    {'channel': 0, 'type': 'humidity', 'value': 45.2},
]
```

**Notes:**
- Requires environmental sensor board attached to companion
- Returns empty list if no sensors attached
- Uses same format as repeater telemetry

---

## Derived Metrics

These are computed at query time, not stored:
@@ -6,6 +6,13 @@
 # This format is compatible with both Docker env_file and shell 'source' command.
 # Comments start with # and blank lines are ignored.

+# =============================================================================
+# Timezone (for Docker deployments)
+# =============================================================================
+# Set the timezone for timestamps in charts and reports.
+# Uses IANA timezone names: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
+# TZ=Europe/Amsterdam
+
 # =============================================================================
 # Connection Settings
 # =============================================================================
@@ -45,6 +52,15 @@ COMPANION_DISPLAY_NAME=My Companion
 # REPEATER_PUBKEY_PREFIX=!a1b2c3d4
 # COMPANION_PUBKEY_PREFIX=!e5f6g7h8

+# =============================================================================
+# Display Units (telemetry formatting only)
+# =============================================================================
+# Select telemetry display unit system:
+#   metric   -> °C, hPa, m
+#   imperial -> °F, inHg, ft
+# Default: metric
+# DISPLAY_UNIT_SYSTEM=metric
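For reference, the metric-to-imperial mappings listed in that setting follow the standard unit conversions; a minimal sketch (hypothetical helper, not the project's actual formatting code):

```python
def to_imperial(kind: str, value: float) -> float:
    """Convert a metric telemetry value to its imperial display unit.

    Hypothetical helper illustrating the mappings above; the project's
    real formatting code is not shown here.
    """
    if kind == "temperature":  # °C -> °F
        return value * 9 / 5 + 32
    if kind == "barometer":    # hPa -> inHg
        return value / 33.8639
    if kind == "altitude":     # m -> ft
        return value / 0.3048
    return value
```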

 # =============================================================================
 # Location Metadata (for reports and sidebar display)
 # =============================================================================
@@ -113,6 +129,25 @@ RADIO_CODING_RATE=CR8
 # REMOTE_CB_FAILS=6
 # REMOTE_CB_COOLDOWN_S=3600

+# =============================================================================
+# Telemetry Collection (Environmental Sensors)
+# =============================================================================
+# Enable telemetry collection from repeater's environmental sensors
+# (temperature, humidity, barometric pressure, etc.)
+# Requires sensor board attached to repeater (e.g., BME280, BME680)
+# Repeater dashboard charts for telemetry are auto-discovered from telemetry.* keys.
+# telemetry.voltage.* and telemetry.gps.* are collected but intentionally not charted.
+# Default: 0 (disabled)
+# TELEMETRY_ENABLED=1
+
+# Telemetry-specific timeout and retry settings
+# Defaults match status settings. Separate config allows tuning if telemetry
+# proves problematic (e.g., firmware doesn't support it, sensor board missing).
+# You can reduce these if telemetry collection is causing issues.
+# TELEMETRY_TIMEOUT_S=10
+# TELEMETRY_RETRY_ATTEMPTS=2
+# TELEMETRY_RETRY_BACKOFF_S=4
+
 # =============================================================================
 # Paths (Native installation only)
 # =============================================================================
84
pyproject.toml
Normal file
@@ -0,0 +1,84 @@
[build-system]
requires = ["setuptools>=61.0"]
build-backend = "setuptools.build_meta"

[project]
name = "meshcore-stats"
version = "0.2.15"
description = "MeshCore LoRa mesh network monitoring and statistics"
readme = "README.md"
requires-python = ">=3.11"
dependencies = [
    "meshcore>=2.2.3",
    "meshcore-cli>=1.0.0",
    "pyserial>=3.5",
    "jinja2>=3.1.0",
    "matplotlib>=3.8.0",
]

[project.optional-dependencies]
dev = [
    "pytest>=8.0.0",
    "pytest-asyncio>=0.24.0",
    "pytest-cov>=5.0.0",
    "pytest-xdist>=3.5.0",
    "coverage[toml]>=7.4.0",
    "freezegun>=1.2.0",
    "ruff>=0.3.0",
    "mypy>=1.8.0",
]

[tool.setuptools.packages.find]
where = ["src"]

[tool.pytest.ini_options]
testpaths = ["tests"]
asyncio_mode = "auto"
asyncio_default_fixture_loop_scope = "function"
addopts = ["-v", "--strict-markers", "-ra", "--tb=short"]
markers = [
    "slow: marks tests as slow (deselect with '-m \"not slow\"')",
    "integration: marks integration tests",
    "snapshot: marks snapshot comparison tests",
]
filterwarnings = [
    "ignore::DeprecationWarning:matplotlib.*",
]

[tool.coverage.run]
source = ["src/meshmon", "scripts"]
branch = true
omit = [
    "src/meshmon/__init__.py",
    "scripts/generate_snapshots.py",  # Dev utility for test fixtures
]

[tool.coverage.report]
fail_under = 95
show_missing = true
skip_covered = false
exclude_lines = [
    "pragma: no cover",
    "if TYPE_CHECKING:",
    "raise NotImplementedError",
    "if not MESHCORE_AVAILABLE:",
    "except ImportError:",
    "if __name__ == .__main__.:",
]

[tool.coverage.html]
directory = "htmlcov"

[tool.ruff]
target-version = "py311"
line-length = 100

[tool.ruff.lint]
select = ["E", "F", "I", "UP", "B", "SIM"]
ignore = ["E501"]

[tool.mypy]
python_version = "3.11"
warn_return_any = true
warn_unused_ignores = true
ignore_missing_imports = true
@@ -14,6 +14,11 @@
       "type": "generic",
       "path": "docker-compose.yml",
       "glob": false
     },
+    {
+      "jsonpath": "$.package[?(@.name.value=='meshcore-stats')].version",
+      "path": "uv.lock",
+      "type": "toml"
+    }
   ],
   "changelog-sections": [
39
renovate.json
Normal file
@@ -0,0 +1,39 @@
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": [
    "config:best-practices"
  ],
  "rebaseWhen": "behind-base-branch",
  "automergeType": "pr",
  "platformAutomerge": true,
  "lockFileMaintenance": {
    "enabled": true
  },
  "dependencyDashboard": true,
  "packageRules": [
    {
      "matchManagers": [
        "github-actions"
      ],
      "pinDigests": true
    },
    {
      "matchManagers": [
        "docker-compose"
      ],
      "matchPackageNames": [
        "ghcr.io/jorijn/meshcore-stats"
      ],
      "pinDigests": false
    },
    {
      "description": "Auto-merge patch and digest updates once checks pass",
      "matchUpdateTypes": [
        "patch",
        "digest",
        "lockFileMaintenance"
      ],
      "automerge": true
    }
  ]
}
@@ -1,5 +0,0 @@
-meshcore>=2.2.3
-meshcore-cli>=1.0.0
-pyserial>=3.5
-jinja2>=3.1.0
-matplotlib>=3.8.0
@@ -23,10 +23,11 @@ from pathlib import Path
 # Add src to path for imports
 sys.path.insert(0, str(Path(__file__).parent.parent / "src"))

-from meshmon.env import get_config
 from meshmon import log
-from meshmon.meshcore_client import connect_from_env, run_command
 from meshmon.db import init_db, insert_metrics
+from meshmon.env import get_config
+from meshmon.meshcore_client import connect_with_lock, run_command
+from meshmon.telemetry import extract_lpp_from_payload, extract_telemetry_metrics


 async def collect_companion() -> int:
@@ -39,138 +40,132 @@ async def collect_companion() -> int:
     cfg = get_config()
     ts = int(time.time())

-    log.debug("Connecting to companion node...")
-    mc = await connect_from_env()
-
-    if mc is None:
-        log.error("Failed to connect to companion node")
-        return 1
-
     # Metrics to insert (firmware field names)
     metrics: dict[str, float] = {}
     commands_succeeded = 0

-    # Commands are accessed via mc.commands
-    cmd = mc.commands
+    log.debug("Connecting to companion node...")
+    async with connect_with_lock() as mc:
+        if mc is None:
+            log.error("Failed to connect to companion node")
+            return 1

-    try:
-        # send_appstart (already called during connect, but call again to get self_info)
-        ok, evt_type, payload, err = await run_command(
-            mc, cmd.send_appstart(), "send_appstart"
-        )
-        if ok:
-            commands_succeeded += 1
-            log.debug(f"appstart: {evt_type}")
-        else:
-            log.error(f"appstart failed: {err}")
+        # Commands are accessed via mc.commands
+        cmd = mc.commands

-        # send_device_query
-        ok, evt_type, payload, err = await run_command(
-            mc, cmd.send_device_query(), "send_device_query"
-        )
-        if ok:
-            commands_succeeded += 1
-            log.debug(f"device_query: {payload}")
-        else:
-            log.error(f"device_query failed: {err}")
+        try:
+            # send_appstart (already called during connect, but call again to get self_info)
+            ok, evt_type, payload, err = await run_command(
+                mc, cmd.send_appstart(), "send_appstart"
+            )
+            if ok:
+                commands_succeeded += 1
+                log.debug(f"appstart: {evt_type}")
+            else:
+                log.error(f"appstart failed: {err}")

-        # get_bat
-        ok, evt_type, payload, err = await run_command(
-            mc, cmd.get_bat(), "get_bat"
-        )
-        if ok:
-            commands_succeeded += 1
-            log.debug(f"get_bat: {payload}")
-        else:
-            log.error(f"get_bat failed: {err}")
+            # send_device_query
+            ok, evt_type, payload, err = await run_command(
+                mc, cmd.send_device_query(), "send_device_query"
+            )
+            if ok:
+                commands_succeeded += 1
+                log.debug(f"device_query: {payload}")
+            else:
+                log.error(f"device_query failed: {err}")

-        # get_time
-        ok, evt_type, payload, err = await run_command(
-            mc, cmd.get_time(), "get_time"
-        )
-        if ok:
-            commands_succeeded += 1
-            log.debug(f"get_time: {payload}")
-        else:
-            log.error(f"get_time failed: {err}")
+            # get_time
+            ok, evt_type, payload, err = await run_command(
+                mc, cmd.get_time(), "get_time"
+            )
+            if ok:
+                commands_succeeded += 1
+                log.debug(f"get_time: {payload}")
+            else:
+                log.error(f"get_time failed: {err}")

-        # get_self_telemetry
-        ok, evt_type, payload, err = await run_command(
-            mc, cmd.get_self_telemetry(), "get_self_telemetry"
-        )
-        if ok:
-            commands_succeeded += 1
-            log.debug(f"get_self_telemetry: {payload}")
-        else:
-            log.error(f"get_self_telemetry failed: {err}")
+            # get_self_telemetry - collect environmental sensor data
+            # Note: The call happens regardless of telemetry_enabled for device query completeness,
+            # but we only extract and store metrics if the feature is enabled.
+            ok, evt_type, payload, err = await run_command(
+                mc, cmd.get_self_telemetry(), "get_self_telemetry"
+            )
+            if ok:
+                commands_succeeded += 1
+                log.debug(f"get_self_telemetry: {payload}")
+                # Extract and store telemetry if enabled
+                if cfg.telemetry_enabled:
+                    lpp_data = extract_lpp_from_payload(payload)
+                    if lpp_data is not None:
+                        telemetry_metrics = extract_telemetry_metrics(lpp_data)
+                        if telemetry_metrics:
+                            metrics.update(telemetry_metrics)
+                            log.debug(f"Extracted {len(telemetry_metrics)} telemetry metrics")
+            else:
+                # Debug level because not all devices have sensors attached - this is expected
+                log.debug(f"get_self_telemetry failed: {err}")

-        # get_custom_vars
-        ok, evt_type, payload, err = await run_command(
-            mc, cmd.get_custom_vars(), "get_custom_vars"
-        )
-        if ok:
-            commands_succeeded += 1
-            log.debug(f"get_custom_vars: {payload}")
-        else:
-            log.debug(f"get_custom_vars failed: {err}")
+            # get_custom_vars
+            ok, evt_type, payload, err = await run_command(
+                mc, cmd.get_custom_vars(), "get_custom_vars"
+            )
+            if ok:
+                commands_succeeded += 1
+                log.debug(f"get_custom_vars: {payload}")
+            else:
+                log.debug(f"get_custom_vars failed: {err}")

-        # get_contacts - count contacts
-        ok, evt_type, payload, err = await run_command(
-            mc, cmd.get_contacts(), "get_contacts"
-        )
-        if ok:
-            commands_succeeded += 1
-            contacts_count = len(payload) if payload else 0
-            metrics["contacts"] = float(contacts_count)
-            log.debug(f"get_contacts: found {contacts_count} contacts")
-        else:
-            log.error(f"get_contacts failed: {err}")
+            # get_contacts - count contacts
+            ok, evt_type, payload, err = await run_command(
+                mc, cmd.get_contacts(), "get_contacts"
+            )
+            if ok:
+                commands_succeeded += 1
+                contacts_count = len(payload) if payload else 0
+                metrics["contacts"] = float(contacts_count)
+                log.debug(f"get_contacts: found {contacts_count} contacts")
+            else:
+                log.error(f"get_contacts failed: {err}")

-        # Get statistics - these contain the main metrics
-        # Core stats (battery_mv, uptime_secs, errors, queue_len)
-        ok, evt_type, payload, err = await run_command(
-            mc, cmd.get_stats_core(), "get_stats_core"
-        )
-        if ok and payload and isinstance(payload, dict):
-            commands_succeeded += 1
-            # Insert all numeric fields from stats_core
-            for key, value in payload.items():
-                if isinstance(value, (int, float)):
-                    metrics[key] = float(value)
-            log.debug(f"stats_core: {payload}")
+            # Get statistics - these contain the main metrics
+            # Core stats (battery_mv, uptime_secs, errors, queue_len)
+            ok, evt_type, payload, err = await run_command(
+                mc, cmd.get_stats_core(), "get_stats_core"
+            )
+            if ok and payload and isinstance(payload, dict):
+                commands_succeeded += 1
+                # Insert all numeric fields from stats_core
+                for key, value in payload.items():
+                    if isinstance(value, (int, float)):
+                        metrics[key] = float(value)
+                log.debug(f"stats_core: {payload}")

-        # Radio stats (noise_floor, last_rssi, last_snr, tx_air_secs, rx_air_secs)
-        ok, evt_type, payload, err = await run_command(
-            mc, cmd.get_stats_radio(), "get_stats_radio"
-        )
-        if ok and payload and isinstance(payload, dict):
-            commands_succeeded += 1
-            for key, value in payload.items():
-                if isinstance(value, (int, float)):
-                    metrics[key] = float(value)
-            log.debug(f"stats_radio: {payload}")
+            # Radio stats (noise_floor, last_rssi, last_snr, tx_air_secs, rx_air_secs)
+            ok, evt_type, payload, err = await run_command(
+                mc, cmd.get_stats_radio(), "get_stats_radio"
+            )
+            if ok and payload and isinstance(payload, dict):
+                commands_succeeded += 1
+                for key, value in payload.items():
+                    if isinstance(value, (int, float)):
+                        metrics[key] = float(value)
+                log.debug(f"stats_radio: {payload}")

-        # Packet stats (recv, sent, flood_tx, direct_tx, flood_rx, direct_rx)
-        ok, evt_type, payload, err = await run_command(
-            mc, cmd.get_stats_packets(), "get_stats_packets"
-        )
-        if ok and payload and isinstance(payload, dict):
-            commands_succeeded += 1
-            for key, value in payload.items():
-                if isinstance(value, (int, float)):
-                    metrics[key] = float(value)
-            log.debug(f"stats_packets: {payload}")
+            # Packet stats (recv, sent, flood_tx, direct_tx, flood_rx, direct_rx)
+            ok, evt_type, payload, err = await run_command(
+                mc, cmd.get_stats_packets(), "get_stats_packets"
+            )
+            if ok and payload and isinstance(payload, dict):
+                commands_succeeded += 1
+                for key, value in payload.items():
+                    if isinstance(value, (int, float)):
+                        metrics[key] = float(value)
+                log.debug(f"stats_packets: {payload}")

-    except Exception as e:
-        log.error(f"Error during collection: {e}")
+        except Exception as e:
+            log.error(f"Error during collection: {e}")

-    finally:
-        # Close connection
-        if hasattr(mc, "disconnect"):
-            try:
-                await mc.disconnect()
-            except Exception:
-                pass
+    # Connection closed and lock released by context manager

     # Print summary
     summary_parts = [f"ts={ts}"]
@@ -183,6 +178,10 @@ async def collect_companion() -> int:
         summary_parts.append(f"rx={int(metrics['recv'])}")
     if "sent" in metrics:
         summary_parts.append(f"tx={int(metrics['sent'])}")
+    # Add telemetry count to summary if present
+    telemetry_count = sum(1 for k in metrics if k.startswith("telemetry."))
+    if telemetry_count > 0:
+        summary_parts.append(f"telem={telemetry_count}")

     log.info(f"Companion: {', '.join(summary_parts)}")
@@ -18,27 +18,28 @@ Outputs:
 import asyncio
 import sys
 import time
+from collections.abc import Callable, Coroutine
 from pathlib import Path
-from typing import Any, Callable, Coroutine, Optional
+from typing import Any

 # Add src to path for imports
 sys.path.insert(0, str(Path(__file__).parent.parent / "src"))

-from meshmon.env import get_config
 from meshmon import log
-from meshmon.meshcore_client import (
-    connect_from_env,
-    run_command,
-    get_contact_by_name,
-    get_contact_by_key_prefix,
-    extract_contact_info,
-    list_contacts_summary,
-)
 from meshmon.db import init_db, insert_metrics
+from meshmon.env import get_config
+from meshmon.meshcore_client import (
+    connect_with_lock,
+    extract_contact_info,
+    get_contact_by_key_prefix,
+    get_contact_by_name,
+    run_command,
+)
+from meshmon.retry import get_repeater_circuit_breaker, with_retries
+from meshmon.telemetry import extract_lpp_from_payload, extract_telemetry_metrics


-async def find_repeater_contact(mc: Any) -> Optional[Any]:
+async def find_repeater_contact(mc: Any) -> Any | None:
     """
     Find the repeater contact by name or key prefix.

@@ -69,7 +70,7 @@ async def find_repeater_contact(mc: Any) -> Optional[Any]:
             return contact

     # Manual search in payload dict
-    for pk, c in contacts_dict.items():
+    for _pk, c in contacts_dict.items():
         if isinstance(c, dict):
             name = c.get("adv_name", "")
             if name and name.lower() == cfg.repeater_name.lower():
@@ -105,7 +106,7 @@ async def query_repeater_with_retry(
     contact: Any,
     command_name: str,
     command_coro_fn: Callable[[], Coroutine[Any, Any, Any]],
-) -> tuple[bool, Optional[dict], Optional[str]]:
+) -> tuple[bool, dict | None, str | None]:
     """
     Query repeater with retry logic.

@@ -143,8 +144,10 @@ async def query_repeater_with_retry(


 async def collect_repeater() -> int:
-    """
-    Collect data from remote repeater node.
+    """Collect data from remote repeater node.
+
+    Collects status metrics (battery, uptime, packet counters, etc.) and
+    optionally telemetry data (temperature, humidity, pressure) if enabled.

     Returns:
         Exit code (0 = success, 1 = error)
@@ -161,122 +164,154 @@ async def collect_repeater() -> int:
         # Skip collection - no metrics to write
         return 0

-    # Connect to companion
-    log.debug("Connecting to companion node...")
-    mc = await connect_from_env()
-
-    if mc is None:
-        log.error("Failed to connect to companion node")
-        return 1
-
     # Metrics to insert (firmware field names from req_status_sync)
-    metrics: dict[str, float] = {}
+    status_metrics: dict[str, float] = {}
+    telemetry_metrics: dict[str, float] = {}
     node_name = "unknown"
     status_ok = False

-    # Commands are accessed via mc.commands
-    cmd = mc.commands
-
-    try:
-        # Initialize (appstart already called during connect)
-        ok, evt_type, payload, err = await run_command(
-            mc, cmd.send_appstart(), "send_appstart"
-        )
-        if not ok:
-            log.error(f"appstart failed: {err}")
-
-        # Find repeater contact
-        contact = await find_repeater_contact(mc)
-
-        if contact is None:
-            log.error("Cannot find repeater contact")
-            return 1
-
-        # Store contact info
-        contact_info = extract_contact_info(contact)
-        node_name = contact_info.get("adv_name", "unknown")
-
-        log.debug(f"Found repeater: {node_name}")
-
-        # Optional login (if command exists)
-        if cfg.repeater_password and hasattr(cmd, "send_login"):
-            log.debug("Attempting login...")
-            try:
-                ok, evt_type, payload, err = await run_command(
-                    mc,
-                    cmd.send_login(contact, cfg.repeater_password),
-                    "send_login",
-                )
-                if ok:
-                    log.debug("Login successful")
-                else:
-                    log.debug(f"Login failed or not supported: {err}")
-            except Exception as e:
-                log.debug(f"Login not supported: {e}")
-
-        # Query status (using _sync version which returns payload directly)
-        # Use timeout=0 to let the device suggest timeout, with min_timeout as floor
-        log.debug("Querying repeater status...")
-        success, payload, err = await query_repeater_with_retry(
-            mc,
-            contact,
-            "req_status_sync",
-            lambda: cmd.req_status_sync(contact, timeout=0, min_timeout=cfg.remote_timeout_s),
-        )
-        if success and payload and isinstance(payload, dict):
-            status_ok = True
-            # Insert all numeric fields from status response
-            for key, value in payload.items():
-                if isinstance(value, (int, float)):
-                    metrics[key] = float(value)
-            log.debug(f"req_status_sync: {payload}")
-        else:
-            log.warn(f"req_status_sync failed: {err}")
-
-        # Update circuit breaker
-        if status_ok:
-            cb.record_success()
-            log.debug("Circuit breaker: recorded success")
-        else:
-            cb.record_failure(cfg.remote_cb_fails, cfg.remote_cb_cooldown_s)
-            log.debug(f"Circuit breaker: recorded failure ({cb.consecutive_failures}/{cfg.remote_cb_fails})")
+    # Connect to companion
+    log.debug("Connecting to companion node...")
+    async with connect_with_lock() as mc:
+        if mc is None:
+            log.error("Failed to connect to companion node")
+            return 1
+
+        # Commands are accessed via mc.commands
+        cmd = mc.commands
+
+        try:
+            # Initialize (appstart already called during connect)
+            ok, evt_type, payload, err = await run_command(
+                mc, cmd.send_appstart(), "send_appstart"
+            )
+            if not ok:
+                log.error(f"appstart failed: {err}")
+
+            # Find repeater contact
+            contact = await find_repeater_contact(mc)
+
+            if contact is None:
+                log.error("Cannot find repeater contact")
+                return 1
+
+            # Store contact info
+            contact_info = extract_contact_info(contact)
+            node_name = contact_info.get("adv_name", "unknown")
+
+            log.debug(f"Found repeater: {node_name}")
+
+            # Optional login (if command exists)
+            if cfg.repeater_password and hasattr(cmd, "send_login"):
+                log.debug("Attempting login...")
+                try:
+                    ok, evt_type, payload, err = await run_command(
+                        mc,
+                        cmd.send_login(contact, cfg.repeater_password),
+                        "send_login",
+                    )
+                    if ok:
+                        log.debug("Login successful")
+                    else:
+                        log.debug(f"Login failed or not supported: {err}")
+                except Exception as e:
+                    log.debug(f"Login not supported: {e}")
+
+            # Phase 1: Status collection (affects circuit breaker)
+            # Use timeout=0 to let the device suggest timeout, with min_timeout as floor
+            log.debug("Querying repeater status...")
+            success, payload, err = await query_repeater_with_retry(
+                mc,
+                contact,
+                "req_status_sync",
+                lambda: cmd.req_status_sync(contact, timeout=0, min_timeout=cfg.remote_timeout_s),
+            )
+            if success and payload and isinstance(payload, dict):
+                status_ok = True
+                # Insert all numeric fields from status response
+                for key, value in payload.items():
+                    if isinstance(value, (int, float)):
+                        status_metrics[key] = float(value)
+                log.debug(f"req_status_sync: {payload}")
+            else:
+                log.warn(f"req_status_sync failed: {err}")
+
+            # Update circuit breaker based on status result
+            if status_ok:
+                cb.record_success()
+                log.debug("Circuit breaker: recorded success")
+            else:
+                cb.record_failure(cfg.remote_cb_fails, cfg.remote_cb_cooldown_s)
+                log.debug(f"Circuit breaker: recorded failure ({cb.consecutive_failures}/{cfg.remote_cb_fails})")
+
+            # CRITICAL: Store status metrics immediately before attempting telemetry
+            # This ensures critical data is saved even if telemetry fails
+            if status_ok and status_metrics:
+                try:
+                    inserted = insert_metrics(ts=ts, role="repeater", metrics=status_metrics)
+                    log.debug(f"Stored {inserted} status metrics (ts={ts})")
+                except Exception as e:
+                    log.error(f"Failed to store status metrics: {e}")
+                    return 1
+
+            # Phase 2: Telemetry collection (does NOT affect circuit breaker)
+            if cfg.telemetry_enabled and status_ok:
+                log.debug("Querying repeater telemetry...")
+                try:
+                    # Note: Telemetry uses its own retry settings and does NOT
+                    # affect circuit breaker. Status success proves the link is up;
+                    # telemetry failures are likely firmware/capability issues.
+                    telem_success, telem_payload, telem_err = await with_retries(
+                        lambda: cmd.req_telemetry_sync(
+                            contact, timeout=0, min_timeout=cfg.telemetry_timeout_s
+                        ),
+                        attempts=cfg.telemetry_retry_attempts,
+                        backoff_s=cfg.telemetry_retry_backoff_s,
+                        name="req_telemetry_sync",
+                    )
+
+                    if telem_success and telem_payload:
+                        log.debug(f"req_telemetry_sync: {telem_payload}")
+                        lpp_data = extract_lpp_from_payload(telem_payload)
+                        if lpp_data is not None:
+                            telemetry_metrics = extract_telemetry_metrics(lpp_data)
+                            log.debug(f"Extracted {len(telemetry_metrics)} telemetry metrics")
+
+                            # Store telemetry metrics
+                            if telemetry_metrics:
+                                try:
+                                    inserted = insert_metrics(ts=ts, role="repeater", metrics=telemetry_metrics)
+                                    log.debug(f"Stored {inserted} telemetry metrics")
+                                except Exception as e:
+                                    log.warn(f"Failed to store telemetry metrics: {e}")
+                    else:
+                        log.warn(f"req_telemetry_sync failed: {telem_err}")
+                except Exception as e:
+                    log.warn(f"Telemetry collection error (continuing): {e}")

-    except Exception as e:
-        log.error(f"Error during collection: {e}")
-        cb.record_failure(cfg.remote_cb_fails, cfg.remote_cb_cooldown_s)
-
-    finally:
-        # Close connection
-        if hasattr(mc, "disconnect"):
-            try:
-                await mc.disconnect()
-            except Exception:
-                pass
+        except Exception as e:
+            log.error(f"Error during collection: {e}")
+            cb.record_failure(cfg.remote_cb_fails, cfg.remote_cb_cooldown_s)
+    # Connection closed and lock released by context manager

     # Print summary
     summary_parts = [f"ts={ts}"]
-    if "bat" in metrics:
-        bat_v = metrics["bat"] / 1000.0
+    if "bat" in status_metrics:
+        bat_v = status_metrics["bat"] / 1000.0
         summary_parts.append(f"bat={bat_v:.2f}V")
-    if "uptime" in metrics:
-        uptime_days = metrics["uptime"] // 86400
+    if "uptime" in status_metrics:
+        uptime_days = status_metrics["uptime"] // 86400
         summary_parts.append(f"uptime={int(uptime_days)}d")
-    if "nb_recv" in metrics:
-        summary_parts.append(f"rx={int(metrics['nb_recv'])}")
-    if "nb_sent" in metrics:
-        summary_parts.append(f"tx={int(metrics['nb_sent'])}")
+    if "nb_recv" in status_metrics:
+        summary_parts.append(f"rx={int(status_metrics['nb_recv'])}")
+    if "nb_sent" in status_metrics:
+        summary_parts.append(f"tx={int(status_metrics['nb_sent'])}")
+    if telemetry_metrics:
+        summary_parts.append(f"telem={len(telemetry_metrics)}")

     log.info(f"Repeater ({node_name}): {', '.join(summary_parts)}")

-    # Write metrics to database
-    if status_ok and metrics:
-        try:
-            inserted = insert_metrics(ts=ts, role="repeater", metrics=metrics)
-            log.debug(f"Inserted {inserted} metrics to database (ts={ts})")
-        except Exception as e:
-            log.error(f"Failed to write metrics to database: {e}")
-            return 1
-
     return 0 if status_ok else 1
357
scripts/generate_snapshots.py
Normal file
@@ -0,0 +1,357 @@
#!/usr/bin/env python3
"""Generate initial snapshot files for tests.

This script creates the initial SVG and TXT snapshots for snapshot testing.
Run this once to generate the baseline snapshots, then use pytest to verify them.

Usage:
    python scripts/generate_snapshots.py
"""

import re
import sys
from datetime import date, datetime, timedelta
from pathlib import Path

# Add src to path
sys.path.insert(0, str(Path(__file__).parent.parent / "src"))

from meshmon.charts import (
    CHART_THEMES,
    DataPoint,
    TimeSeries,
    render_chart_svg,
)
from meshmon.reports import (
    DailyAggregate,
    LocationInfo,
    MetricStats,
    MonthlyAggregate,
    YearlyAggregate,
    format_monthly_txt,
    format_yearly_txt,
)


def normalize_svg_for_snapshot(svg: str) -> str:
    """Normalize SVG for deterministic snapshot comparison."""
    # 1. Normalize matplotlib-generated IDs (prefixed with random hex)
    svg = re.sub(r'id="[a-zA-Z0-9]+-[0-9a-f]+"', 'id="normalized"', svg)
    svg = re.sub(r'id="m[0-9a-f]{8,}"', 'id="normalized"', svg)

    # 2. Normalize url(#...) references to match
    svg = re.sub(r'url\(#[a-zA-Z0-9]+-[0-9a-f]+\)', 'url(#normalized)', svg)
    svg = re.sub(r'url\(#m[0-9a-f]{8,}\)', 'url(#normalized)', svg)

    # 3. Normalize clip-path IDs
    svg = re.sub(r'clip-path="url\(#[^)]+\)"', 'clip-path="url(#clip)"', svg)

    # 4. Normalize xlink:href="#..." references
    svg = re.sub(r'xlink:href="#[a-zA-Z0-9]+-[0-9a-f]+"', 'xlink:href="#normalized"', svg)
    svg = re.sub(r'xlink:href="#m[0-9a-f]{8,}"', 'xlink:href="#normalized"', svg)

    # 5. Remove matplotlib version comment (changes between versions)
    svg = re.sub(r'<!-- Created with matplotlib.*?-->', '', svg)

    # 6. Normalize whitespace (but preserve newlines for readability)
    svg = re.sub(r'[ \t]+', ' ', svg)
    svg = re.sub(r' ?\n ?', '\n', svg)

    return svg.strip()


def generate_svg_snapshots():
    """Generate all SVG snapshot files."""
    print("Generating SVG snapshots...")

    svg_dir = Path(__file__).parent.parent / "tests" / "snapshots" / "svg"
    svg_dir.mkdir(parents=True, exist_ok=True)

    light_theme = CHART_THEMES["light"]
    dark_theme = CHART_THEMES["dark"]

    # Fixed base time for deterministic tests
    base_time = datetime(2024, 1, 15, 12, 0, 0)

    # Generate gauge timeseries (battery voltage)
    gauge_points = []
    for i in range(24):
        ts = base_time - timedelta(hours=23 - i)
        value = 3.7 + 0.3 * abs(12 - i) / 12
        gauge_points.append(DataPoint(timestamp=ts, value=value))

    gauge_ts = TimeSeries(
        metric="bat",
        role="repeater",
        period="day",
        points=gauge_points,
    )

    # Generate counter timeseries (packet rate)
    counter_points = []
    for i in range(24):
        ts = base_time - timedelta(hours=23 - i)
        hour = (i + 12) % 24
        value = 2.0 + (hour - 6) * 0.3 if 6 <= hour <= 18 else 0.5 + hour % 6 * 0.1
        counter_points.append(DataPoint(timestamp=ts, value=value))

    counter_ts = TimeSeries(
        metric="nb_recv",
        role="repeater",
        period="day",
        points=counter_points,
    )

    # Empty timeseries
    empty_ts = TimeSeries(
        metric="bat",
        role="repeater",
        period="day",
        points=[],
    )

    # Single point timeseries
    single_point_ts = TimeSeries(
        metric="bat",
        role="repeater",
        period="day",
        points=[DataPoint(timestamp=base_time, value=3.85)],
    )

    # Generate snapshots
    snapshots = [
        ("bat_day_light.svg", gauge_ts, light_theme, 3.0, 4.2),
        ("bat_day_dark.svg", gauge_ts, dark_theme, 3.0, 4.2),
        ("nb_recv_day_light.svg", counter_ts, light_theme, None, None),
        ("nb_recv_day_dark.svg", counter_ts, dark_theme, None, None),
        ("empty_day_light.svg", empty_ts, light_theme, None, None),
        ("empty_day_dark.svg", empty_ts, dark_theme, None, None),
        ("single_point_day_light.svg", single_point_ts, light_theme, 3.0, 4.2),
    ]

    for filename, ts, theme, y_min, y_max in snapshots:
        svg = render_chart_svg(ts, theme, y_min=y_min, y_max=y_max)
        normalized = normalize_svg_for_snapshot(svg)

        output_path = svg_dir / filename
        output_path.write_text(normalized, encoding="utf-8")
        print(f"  Created: {output_path}")


def generate_txt_snapshots():
    """Generate all TXT report snapshot files."""
    print("Generating TXT snapshots...")

    txt_dir = Path(__file__).parent.parent / "tests" / "snapshots" / "txt"
    txt_dir.mkdir(parents=True, exist_ok=True)

    sample_location = LocationInfo(
        name="Test Observatory",
        lat=52.3676,
        lon=4.9041,
        elev=2.0,
    )

    # Repeater monthly aggregate
    repeater_daily_data = []
    for day in range(1, 6):
        repeater_daily_data.append(
            DailyAggregate(
                date=date(2024, 1, day),
                metrics={
                    "bat": MetricStats(
                        min_value=3600 + day * 10,
                        min_time=datetime(2024, 1, day, 4, 0),
                        max_value=3900 + day * 10,
                        max_time=datetime(2024, 1, day, 14, 0),
                        mean=3750 + day * 10,
                        count=96,
                    ),
                    "bat_pct": MetricStats(mean=65.0 + day * 2, count=96),
                    "last_rssi": MetricStats(mean=-85.0 - day, count=96),
                    "last_snr": MetricStats(mean=8.5 + day * 0.2, count=96),
                    "noise_floor": MetricStats(mean=-115.0, count=96),
                    "nb_recv": MetricStats(total=500 + day * 100, count=96),
                    "nb_sent": MetricStats(total=200 + day * 50, count=96),
                    "airtime": MetricStats(total=120 + day * 20, count=96),
                },
                snapshot_count=96,
            )
        )

    repeater_monthly = MonthlyAggregate(
        year=2024,
        month=1,
        role="repeater",
        daily=repeater_daily_data,
        summary={
            "bat": MetricStats(
                min_value=3610, min_time=datetime(2024, 1, 1, 4, 0),
                max_value=3950, max_time=datetime(2024, 1, 5, 14, 0),
                mean=3780, count=480,
            ),
            "bat_pct": MetricStats(mean=71.0, count=480),
            "last_rssi": MetricStats(mean=-88.0, count=480),
            "last_snr": MetricStats(mean=9.1, count=480),
            "noise_floor": MetricStats(mean=-115.0, count=480),
            "nb_recv": MetricStats(total=4000, count=480),
            "nb_sent": MetricStats(total=1750, count=480),
            "airtime": MetricStats(total=900, count=480),
        },
    )

    # Companion monthly aggregate
    companion_daily_data = []
    for day in range(1, 6):
        companion_daily_data.append(
            DailyAggregate(
                date=date(2024, 1, day),
                metrics={
                    "battery_mv": MetricStats(
                        min_value=3700 + day * 10,
                        min_time=datetime(2024, 1, day, 5, 0),
                        max_value=4000 + day * 10,
                        max_time=datetime(2024, 1, day, 12, 0),
                        mean=3850 + day * 10,
                        count=1440,
                    ),
                    "bat_pct": MetricStats(mean=75.0 + day * 2, count=1440),
                    "contacts": MetricStats(mean=8 + day, count=1440),
                    "recv": MetricStats(total=1000 + day * 200, count=1440),
                    "sent": MetricStats(total=500 + day * 100, count=1440),
                },
                snapshot_count=1440,
            )
        )

    companion_monthly = MonthlyAggregate(
        year=2024,
        month=1,
        role="companion",
        daily=companion_daily_data,
        summary={
            "battery_mv": MetricStats(
                min_value=3710, min_time=datetime(2024, 1, 1, 5, 0),
                max_value=4050, max_time=datetime(2024, 1, 5, 12, 0),
                mean=3880, count=7200,
            ),
            "bat_pct": MetricStats(mean=81.0, count=7200),
            "contacts": MetricStats(mean=11.0, count=7200),
            "recv": MetricStats(total=8000, count=7200),
            "sent": MetricStats(total=4000, count=7200),
        },
    )

    # Repeater yearly aggregate
    repeater_yearly_monthly = []
    for month in range(1, 4):
        repeater_yearly_monthly.append(
            MonthlyAggregate(
                year=2024,
                month=month,
                role="repeater",
                daily=[],
                summary={
                    "bat": MetricStats(
                        min_value=3500 + month * 50,
                        min_time=datetime(2024, month, 15, 4, 0),
                        max_value=3950 + month * 20,
                        max_time=datetime(2024, month, 20, 14, 0),
                        mean=3700 + month * 30,
                        count=2976,
                    ),
                    "bat_pct": MetricStats(mean=60.0 + month * 5, count=2976),
                    "last_rssi": MetricStats(mean=-90.0 + month, count=2976),
                    "last_snr": MetricStats(mean=7.5 + month * 0.5, count=2976),
                    "nb_recv": MetricStats(total=30000 + month * 5000, count=2976),
                    "nb_sent": MetricStats(total=15000 + month * 2500, count=2976),
                },
            )
        )

    repeater_yearly = YearlyAggregate(
        year=2024,
        role="repeater",
        monthly=repeater_yearly_monthly,
        summary={
            "bat": MetricStats(
                min_value=3550, min_time=datetime(2024, 1, 15, 4, 0),
                max_value=4010, max_time=datetime(2024, 3, 20, 14, 0),
                mean=3760, count=8928,
            ),
            "bat_pct": MetricStats(mean=70.0, count=8928),
            "last_rssi": MetricStats(mean=-88.0, count=8928),
            "last_snr": MetricStats(mean=8.5, count=8928),
            "nb_recv": MetricStats(total=120000, count=8928),
            "nb_sent": MetricStats(total=60000, count=8928),
        },
    )

    # Companion yearly aggregate
    companion_yearly_monthly = []
    for month in range(1, 4):
        companion_yearly_monthly.append(
            MonthlyAggregate(
                year=2024,
                month=month,
                role="companion",
                daily=[],
                summary={
                    "battery_mv": MetricStats(
                        min_value=3600 + month * 30,
                        min_time=datetime(2024, month, 10, 5, 0),
                        max_value=4100 + month * 20,
                        max_time=datetime(2024, month, 25, 12, 0),
                        mean=3850 + month * 25,
                        count=44640,
                    ),
                    "bat_pct": MetricStats(mean=70.0 + month * 3, count=44640),
                    "contacts": MetricStats(mean=10 + month, count=44640),
                    "recv": MetricStats(total=50000 + month * 10000, count=44640),
                    "sent": MetricStats(total=25000 + month * 5000, count=44640),
                },
            )
        )

    companion_yearly = YearlyAggregate(
        year=2024,
        role="companion",
        monthly=companion_yearly_monthly,
        summary={
            "battery_mv": MetricStats(
                min_value=3630, min_time=datetime(2024, 1, 10, 5, 0),
                max_value=4160, max_time=datetime(2024, 3, 25, 12, 0),
                mean=3900, count=133920,
            ),
            "bat_pct": MetricStats(mean=76.0, count=133920),
            "contacts": MetricStats(mean=12.0, count=133920),
            "recv": MetricStats(total=210000, count=133920),
            "sent": MetricStats(total=105000, count=133920),
        },
    )

    # Empty aggregates
    empty_monthly = MonthlyAggregate(year=2024, month=1, role="repeater", daily=[], summary={})
    empty_yearly = YearlyAggregate(year=2024, role="repeater", monthly=[], summary={})

    # Generate all TXT snapshots
    txt_snapshots = [
        ("monthly_report_repeater.txt", format_monthly_txt(repeater_monthly, "Test Repeater", sample_location)),
        ("monthly_report_companion.txt", format_monthly_txt(companion_monthly, "Test Companion", sample_location)),
        ("yearly_report_repeater.txt", format_yearly_txt(repeater_yearly, "Test Repeater", sample_location)),
        ("yearly_report_companion.txt", format_yearly_txt(companion_yearly, "Test Companion", sample_location)),
        ("empty_monthly_report.txt", format_monthly_txt(empty_monthly, "Test Repeater", sample_location)),
        ("empty_yearly_report.txt", format_yearly_txt(empty_yearly, "Test Repeater", sample_location)),
    ]

    for filename, content in txt_snapshots:
        output_path = txt_dir / filename
        output_path.write_text(content, encoding="utf-8")
        print(f"  Created: {output_path}")


if __name__ == "__main__":
    generate_svg_snapshots()
    generate_txt_snapshots()
    print("\nSnapshot generation complete!")
    print("Run pytest to verify the snapshots work correctly.")
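The normalization step above strips matplotlib's randomly generated element IDs so that snapshots compare deterministically across runs. A minimal standalone sketch of the same idea (the sample SVG fragment and the `normalize_ids` name are invented for illustration; only two of the script's six substitutions are shown):

```python
import re

def normalize_ids(svg: str) -> str:
    # Replace matplotlib-style random IDs and their url(#...) references
    svg = re.sub(r'id="m[0-9a-f]{8,}"', 'id="normalized"', svg)
    svg = re.sub(r'url\(#m[0-9a-f]{8,}\)', 'url(#normalized)', svg)
    return svg

sample = '<path id="m1a2b3c4d5e" clip-path="url(#m1a2b3c4d5e)"/>'
print(normalize_ids(sample))
# → <path id="normalized" clip-path="url(#normalized)"/>
```

Two renders of the same data then produce byte-identical files, which is what makes file-level snapshot comparison viable.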
@@ -12,9 +12,9 @@ from pathlib import Path
# Add src to path for imports
sys.path.insert(0, str(Path(__file__).parent.parent / "src"))

from meshmon.db import init_db, get_metric_count
from meshmon import log
from meshmon.charts import render_all_charts, save_chart_stats
from meshmon.db import get_metric_count, init_db


def main():

@@ -25,14 +25,24 @@ import calendar
import json
import sys
from pathlib import Path
from typing import Optional

# Add src to path for imports
sys.path.insert(0, str(Path(__file__).parent.parent / "src"))

from meshmon import log
from meshmon.db import init_db
from meshmon.env import get_config
from meshmon import log
from meshmon.html import render_report_page, render_reports_index
from meshmon.reports import (
    LocationInfo,
    aggregate_monthly,
    aggregate_yearly,
    format_monthly_txt,
    format_yearly_txt,
    get_available_periods,
    monthly_to_json,
    yearly_to_json,
)


def safe_write(path: Path, content: str) -> bool:

@@ -48,24 +58,11 @@ def safe_write(path: Path, content: str) -> bool:
    try:
        path.write_text(content, encoding="utf-8")
        return True
    except IOError as e:
    except OSError as e:
        log.error(f"Failed to write {path}: {e}")
        return False


from meshmon.reports import (
    LocationInfo,
    aggregate_monthly,
    aggregate_yearly,
    format_monthly_txt,
    format_yearly_txt,
    get_available_periods,
    monthly_to_json,
    yearly_to_json,
)
from meshmon.html import render_report_page, render_reports_index


def get_node_name(role: str) -> str:
    """Get display name for a node role from configuration."""
    cfg = get_config()
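The `safe_write` change from `IOError` to `OSError` does not alter behavior: since Python 3.3, `IOError` has been an alias of `OSError`, so the diff just switches to the canonical name. A minimal sketch of the pattern (the path below is invented and intentionally unwritable):

```python
from pathlib import Path

def safe_write(path: Path, content: str) -> bool:
    # OSError covers FileNotFoundError, PermissionError, full disks, etc.
    try:
        path.write_text(content, encoding="utf-8")
        return True
    except OSError:
        return False

print(IOError is OSError)  # → True
print(safe_write(Path("/nonexistent-dir-for-demo/out.txt"), "hello"))
# → False (parent directory does not exist)
```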
@@ -91,8 +88,8 @@ def render_monthly_report(
    role: str,
    year: int,
    month: int,
    prev_period: Optional[tuple[int, int]] = None,
    next_period: Optional[tuple[int, int]] = None,
    prev_period: tuple[int, int] | None = None,
    next_period: tuple[int, int] | None = None,
) -> None:
    """Render monthly report in all formats.

@@ -152,8 +149,8 @@ def render_monthly_report(
def render_yearly_report(
    role: str,
    year: int,
    prev_year: Optional[int] = None,
    next_year: Optional[int] = None,
    prev_year: int | None = None,
    next_year: int | None = None,
) -> None:
    """Render yearly report in all formats.

@@ -13,9 +13,9 @@ from pathlib import Path
# Add src to path for imports
sys.path.insert(0, str(Path(__file__).parent.parent / "src"))

from meshmon.db import init_db, get_latest_metrics
from meshmon.env import get_config
from meshmon import log
from meshmon.db import get_latest_metrics, init_db
from meshmon.env import get_config
from meshmon.html import write_site

@@ -1,3 +1,3 @@
"""MeshCore network monitoring library."""

__version__ = "0.2.3"  # x-release-please-version
__version__ = "0.2.15"  # x-release-please-version

@@ -10,23 +10,24 @@ import re
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from pathlib import Path
from typing import Any, Literal, Optional
from typing import Any, Literal

import matplotlib
matplotlib.use('Agg')  # Non-interactive backend for server-side rendering
import matplotlib.pyplot as plt
import matplotlib.dates as mdates

from .db import get_metrics_for_period
matplotlib.use('Agg')  # Non-interactive backend for server-side rendering
import matplotlib.dates as mdates
import matplotlib.pyplot as plt

from . import log
from .db import get_available_metrics, get_metrics_for_period
from .env import get_config
from .metrics import (
    convert_telemetry_value,
    get_chart_metrics,
    is_counter_metric,
    get_graph_scale,
    is_counter_metric,
    transform_value,
)
from . import log


# Type alias for theme names
ThemeName = Literal["light", "dark"]

@@ -36,28 +37,37 @@ ThemeName = Literal["light", "dark"]
BIN_30_MINUTES = 1800  # 30 minutes in seconds
BIN_2_HOURS = 7200  # 2 hours in seconds
BIN_1_DAY = 86400  # 1 day in seconds
MIN_COUNTER_INTERVAL_RATIO = 0.9  # Allow small scheduling jitter


@dataclass(frozen=True)
class PeriodConfig:
    """Configuration for a chart time period."""

    lookback: timedelta
    bin_seconds: int | None = None  # None = no binning (raw data)


# Period configuration: lookback duration and aggregation bin size
# Period configuration for chart rendering
# Target: ~100-400 data points per chart for clean visualization
# Chart plot area is ~640px, so aim for 1.5-6px per point
PERIOD_CONFIG = {
    "day": {
        "lookback": timedelta(days=1),
        "bin_seconds": None,  # No binning - raw data (~96 points at 15-min intervals)
    },
    "week": {
        "lookback": timedelta(days=7),
        "bin_seconds": BIN_30_MINUTES,  # 30-min bins (~336 points, ~2px per point)
    },
    "month": {
        "lookback": timedelta(days=31),
        "bin_seconds": BIN_2_HOURS,  # 2-hour bins (~372 points, ~1.7px per point)
    },
    "year": {
        "lookback": timedelta(days=365),
        "bin_seconds": BIN_1_DAY,  # 1-day bins (~365 points, ~1.8px per point)
    },
PERIOD_CONFIG: dict[str, PeriodConfig] = {
    "day": PeriodConfig(
        lookback=timedelta(days=1),
        bin_seconds=None,  # No binning - raw data (~96 points at 15-min intervals)
    ),
    "week": PeriodConfig(
        lookback=timedelta(days=7),
        bin_seconds=BIN_30_MINUTES,  # 30-min bins (~336 points, ~2px per point)
    ),
    "month": PeriodConfig(
        lookback=timedelta(days=31),
        bin_seconds=BIN_2_HOURS,  # 2-hour bins (~372 points, ~1.7px per point)
    ),
    "year": PeriodConfig(
        lookback=timedelta(days=365),
        bin_seconds=BIN_1_DAY,  # 1-day bins (~365 points, ~1.8px per point)
    ),
}
@@ -134,12 +144,12 @@ class TimeSeries:
class ChartStatistics:
    """Statistics for a time series (min/avg/max/current)."""

    min_value: Optional[float] = None
    avg_value: Optional[float] = None
    max_value: Optional[float] = None
    current_value: Optional[float] = None
    min_value: float | None = None
    avg_value: float | None = None
    max_value: float | None = None
    current_value: float | None = None

    def to_dict(self) -> dict[str, Optional[float]]:
    def to_dict(self) -> dict[str, float | None]:
        """Convert to dict matching existing chart_stats.json format."""
        return {
            "min": self.min_value,

@@ -167,6 +177,7 @@ def load_timeseries_from_db(
    end_time: datetime,
    lookback: timedelta,
    period: str,
    all_metrics: dict[str, list[tuple[int, float]]] | None = None,
) -> TimeSeries:
    """Load time series data from SQLite database.

@@ -179,6 +190,7 @@ def load_timeseries_from_db(
        end_time: End of the time range (typically now)
        lookback: How far back to look
        period: Period name for binning config ("day", "week", etc.)
        all_metrics: Optional pre-fetched metrics dict for this period

    Returns:
        TimeSeries with extracted data points

@@ -188,7 +200,8 @@
    end_ts = int(end_time.timestamp())

    # Fetch all metrics for this role/period (returns pivoted dict)
    all_metrics = get_metrics_for_period(role, start_ts, end_ts)
    if all_metrics is None:
        all_metrics = get_metrics_for_period(role, start_ts, end_ts)

    # Get data for this specific metric
    metric_data = all_metrics.get(metric, [])

@@ -198,6 +211,7 @@

    is_counter = is_counter_metric(metric)
    scale = get_graph_scale(metric)
    unit_system = get_config().display_unit_system

    # Convert to (datetime, value) tuples with transform applied
    raw_points: list[tuple[datetime, float]] = []

@@ -212,37 +226,55 @@
    # For counter metrics, calculate rate of change
    if is_counter:
        rate_points: list[tuple[datetime, float]] = []
        cfg = get_config()
        min_interval = max(
            1.0,
            (cfg.companion_step if role == "companion" else cfg.repeater_step)
            * MIN_COUNTER_INTERVAL_RATIO,
        )

        for i in range(1, len(raw_points)):
            prev_ts, prev_val = raw_points[i - 1]
            curr_ts, curr_val = raw_points[i]

            delta_val = curr_val - prev_val
        prev_ts, prev_val = raw_points[0]
        for curr_ts, curr_val in raw_points[1:]:
            delta_secs = (curr_ts - prev_ts).total_seconds()

            if delta_secs <= 0:
                continue
            if delta_secs < min_interval:
                log.debug(
                    f"Skipping counter sample for {metric} at {curr_ts} "
                    f"({delta_secs:.1f}s < {min_interval:.1f}s)"
                )
                continue

            delta_val = curr_val - prev_val

            # Skip negative deltas (device reboot)
            if delta_val < 0:
                log.debug(f"Counter reset detected for {metric} at {curr_ts}")
                prev_ts, prev_val = curr_ts, curr_val
                continue

            # Calculate per-second rate, then apply scaling (typically x60 for per-minute)
            rate = (delta_val / delta_secs) * scale
            rate_points.append((curr_ts, rate))
            prev_ts, prev_val = curr_ts, curr_val

        raw_points = rate_points
    else:
        # For gauges, just apply scaling
        raw_points = [(ts, val * scale) for ts, val in raw_points]

    # Apply time binning if configured
    period_cfg = PERIOD_CONFIG.get(period, {})
    bin_seconds = period_cfg.get("bin_seconds")
    # Convert telemetry values to selected display unit system (display-only)
    if metric.startswith("telemetry."):
        raw_points = [
            (ts, convert_telemetry_value(metric, val, unit_system))
            for ts, val in raw_points
        ]

    if bin_seconds and len(raw_points) > 1:
        raw_points = _aggregate_bins(raw_points, bin_seconds)
    # Apply time binning if configured
    period_cfg = PERIOD_CONFIG.get(period)
    if period_cfg and period_cfg.bin_seconds and len(raw_points) > 1:
        raw_points = _aggregate_bins(raw_points, period_cfg.bin_seconds)

    # Convert to DataPoints
    points = [DataPoint(timestamp=ts, value=val) for ts, val in raw_points]
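The counter branch above turns monotonically increasing totals into rates: samples that arrive closer together than ~90% of the polling interval are skipped as scheduling jitter, and a negative delta is treated as a device reboot that resets the baseline. A self-contained sketch of that logic with integer-second timestamps (the `counter_rates` name and sample data are invented for illustration):

```python
def counter_rates(samples, scale=60.0, min_interval=54.0):
    """samples: list of (ts_seconds, counter_value); returns (ts, rate) points."""
    rates = []
    prev_ts, prev_val = samples[0]
    for curr_ts, curr_val in samples[1:]:
        delta_secs = curr_ts - prev_ts
        if delta_secs < min_interval:
            continue  # scheduling jitter: too close to the previous sample
        delta_val = curr_val - prev_val
        if delta_val < 0:  # counter reset (device reboot)
            prev_ts, prev_val = curr_ts, curr_val
            continue
        # per-second rate, scaled (x60 gives per-minute)
        rates.append((curr_ts, delta_val / delta_secs * scale))
        prev_ts, prev_val = curr_ts, curr_val
    return rates

samples = [(0, 100), (60, 160), (120, 10), (180, 70)]
print(counter_rates(samples))
# → [(60, 60.0), (180, 60.0)]
```

Note that the reboot at t=120 produces no point at all rather than a huge negative spike, and rating resumes from the post-reset value.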
@@ -315,10 +347,10 @@ def render_chart_svg(
    theme: ChartTheme,
    width: int = 800,
    height: int = 280,
    y_min: Optional[float] = None,
    y_max: Optional[float] = None,
    x_start: Optional[datetime] = None,
    x_end: Optional[datetime] = None,
    y_min: float | None = None,
    y_max: float | None = None,
    x_start: datetime | None = None,
    x_end: datetime | None = None,
) -> str:
    """Render time series as SVG using matplotlib.

@@ -377,12 +409,28 @@
    timestamps = ts.timestamps
    values = ts.values

    # Convert datetime to matplotlib date numbers for proper typing
    # and correct axis formatter behavior
    x_dates = mdates.date2num(timestamps)

    # Plot area fill
    area_color = _hex_to_rgba(theme.area)
    ax.fill_between(timestamps, values, alpha=area_color[3], color=f"#{theme.line}")
    area = ax.fill_between(
        x_dates,
        values,
        alpha=area_color[3],
        color=f"#{theme.line}",
    )
    area.set_gid("chart-area")

    # Plot line
    ax.plot(timestamps, values, color=f"#{theme.line}", linewidth=2)
    (line,) = ax.plot(
        x_dates,
        values,
        color=f"#{theme.line}",
        linewidth=2,
    )
    line.set_gid("chart-line")

    # Set Y-axis limits and track actual values used
    if y_min is not None and y_max is not None:

@@ -399,7 +447,17 @@

    # Set X-axis limits first (before configuring ticks)
    if x_start is not None and x_end is not None:
        ax.set_xlim(x_start, x_end)
        ax.set_xlim(mdates.date2num(x_start), mdates.date2num(x_end))
    else:
        # Compute sensible x-axis limits from data
        # For single point or sparse data, add padding based on period
        x_min_dt = min(timestamps)
        x_max_dt = max(timestamps)
        if x_min_dt == x_max_dt:
            # Single point: use period lookback for range
            period_cfg = PERIOD_CONFIG.get(ts.period, PERIOD_CONFIG["day"])
            x_min_dt = x_max_dt - period_cfg.lookback
        ax.set_xlim(mdates.date2num(x_min_dt), mdates.date2num(x_max_dt))

    # Format X-axis based on period (after setting limits)
    _configure_x_axis(ax, ts.period)

@@ -449,16 +507,16 @@ def _inject_data_attributes(
    svg: str,
    ts: TimeSeries,
    theme_name: str,
    x_start: Optional[datetime] = None,
    x_end: Optional[datetime] = None,
    y_min: Optional[float] = None,
    y_max: Optional[float] = None,
    x_start: datetime | None = None,
    x_end: datetime | None = None,
    y_min: float | None = None,
    y_max: float | None = None,
) -> str:
    """Inject data-* attributes into SVG for tooltip support.

    Adds:
    - data-metric, data-period, data-theme, data-x-start, data-x-end, data-y-min, data-y-max to root <svg>
    - data-points JSON array to the chart path element
    - data-points JSON array to the root <svg> and chart line path

    Args:
        svg: Raw SVG string

@@ -495,29 +553,40 @@
        r'<svg\b',
        f'<svg data-metric="{ts.metric}" data-period="{ts.period}" data-theme="{theme_name}" '
        f'data-x-start="{x_start_ts}" data-x-end="{x_end_ts}" '
        f'data-y-min="{y_min_val}" data-y-max="{y_max_val}"',
        f'data-y-min="{y_min_val}" data-y-max="{y_max_val}" '
        f'data-points="{data_points_attr}"',
        svg,
        count=1
    )

    # Add data-points to the main path element (the line, not the fill)
    # Look for the second path element (first is usually the fill area)
    path_count = 0
    def add_data_to_path(match):
        nonlocal path_count
        path_count += 1
        if path_count == 2:  # The line path
            return f'<path data-points="{data_points_attr}"'
        return match.group(0)
    # Add data-points to the line path inside the #chart-line group
    # matplotlib creates <g id="chart-line"><path d="..."></g>
    svg, count = re.subn(
        r'(<g[^>]*id="chart-line"[^>]*>\s*<path\b)',
        rf'\1 data-points="{data_points_attr}"',
        svg,
        count=1,
    )

    svg = re.sub(r'<path\b', add_data_to_path, svg)
    if count == 0:
        # Fallback: look for the second path element (first is usually the fill area)
        path_count = 0

        def add_data_to_path(match):
            nonlocal path_count
            path_count += 1
            if path_count == 2:  # The line path
                return f'<path data-points="{data_points_attr}"'
            return match.group(0)

        svg = re.sub(r'<path\b', add_data_to_path, svg)

    return svg
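The rewritten injection targets the `#chart-line` group directly via `re.subn`, which reports how many substitutions happened, so the old second-`<path>` heuristic only runs as a fallback when the group is missing. A small sketch of that pattern (the SVG fragment is invented):

```python
import re

svg = ('<svg><g id="chart-area"><path d="M0 0"/></g>'
       '<g id="chart-line"><path d="M1 1"/></g></svg>')

# subn returns (new_string, number_of_substitutions)
svg, count = re.subn(
    r'(<g[^>]*id="chart-line"[^>]*>\s*<path\b)',
    r'\1 data-points="[1,2]"',
    svg,
    count=1,
)
print(count)                 # → 1
print('data-points' in svg)  # → True
```

When `count == 0` the caller knows the targeted pattern did not match and can fall back to a positional heuristic, rather than silently emitting an SVG without tooltip data.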

def render_all_charts(
    role: str,
    metrics: Optional[list[str]] = None,
    metrics: list[str] | None = None,
) -> tuple[list[Path], dict[str, dict[str, dict[str, Any]]]]:
    """Render all charts for a role in both light and dark themes.
@@ -531,10 +600,15 @@ def render_all_charts(
        Tuple of (list of generated chart paths, stats dict)
        Stats dict structure: {metric_name: {period: {min, avg, max, current}}}
    """
    if metrics is None:
        metrics = get_chart_metrics(role)

    cfg = get_config()
    if metrics is None:
        available_metrics = get_available_metrics(role)
        metrics = get_chart_metrics(
            role,
            available_metrics=available_metrics,
            telemetry_enabled=cfg.telemetry_enabled,
        )

    charts_dir = cfg.out_dir / "assets" / role
    charts_dir.mkdir(parents=True, exist_ok=True)

@@ -558,16 +632,24 @@
    for metric in metrics:
        all_stats[metric] = {}

        for period in periods:
            period_cfg = PERIOD_CONFIG[period]
    for period in periods:
        period_cfg = PERIOD_CONFIG[period]
        x_end = now
        x_start = now - period_cfg.lookback

        start_ts = int(x_start.timestamp())
        end_ts = int(x_end.timestamp())
        all_metrics = get_metrics_for_period(role, start_ts, end_ts)

        for metric in metrics:
            # Load time series from database
            ts = load_timeseries_from_db(
                role=role,
                metric=metric,
                end_time=now,
                lookback=period_cfg["lookback"],
                lookback=period_cfg.lookback,
                period=period,
                all_metrics=all_metrics,
            )

            # Calculate and store statistics

@@ -579,10 +661,6 @@
            y_min = y_range[0] if y_range else None
            y_max = y_range[1] if y_range else None

            # Calculate X-axis range for full period padding
            x_end = now
            x_start = now - period_cfg["lookback"]

            # Render chart for each theme
            for theme_name in themes:
                theme = CHART_THEMES[theme_name]

@@ -645,7 +723,8 @@ def load_chart_stats(role: str) -> dict[str, dict[str, dict[str, Any]]]:

    try:
        with open(stats_path) as f:
            return json.load(f)
            data: dict[str, dict[str, dict[str, Any]]] = json.load(f)
            return data
    except Exception as e:
        log.debug(f"Failed to load chart stats: {e}")
        return {}

@@ -19,14 +19,14 @@ Migration system:

import sqlite3
from collections import defaultdict
from collections.abc import Iterator
from contextlib import contextmanager
from pathlib import Path
from typing import Any, Iterator, Optional
from typing import Any

from . import log
from .battery import voltage_to_percentage
from .env import get_config
from . import log


# Path to migrations directory (relative to this file)
MIGRATIONS_DIR = Path(__file__).parent / "migrations"

@@ -176,7 +176,7 @@ def get_db_path() -> Path:
    return cfg.state_dir / "metrics.db"


def init_db(db_path: Optional[Path] = None) -> None:
def init_db(db_path: Path | None = None) -> None:
    """Initialize database with schema and apply pending migrations.

    Creates tables if they don't exist. Safe to call multiple times.

@@ -212,7 +212,7 @@ def init_db(db_path: Optional[Path] = None) -> None:

@contextmanager
def get_connection(
    db_path: Optional[Path] = None,
    db_path: Path | None = None,
    readonly: bool = False
) -> Iterator[sqlite3.Connection]:
    """Context manager for database connections.
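The `get_connection` body is not shown in this hunk, but a `readonly` flag on a SQLite connection typically maps to SQLite's URI mode, the standard way to open a database read-only. A hedged sketch of what such a context manager can look like (this is an assumption about the implementation, not the project's actual code):

```python
import sqlite3
from collections.abc import Iterator
from contextlib import contextmanager
from pathlib import Path

@contextmanager
def get_connection(
    db_path: Path,
    readonly: bool = False,
) -> Iterator[sqlite3.Connection]:
    # mode=ro makes any write fail with sqlite3.OperationalError
    if readonly:
        conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    else:
        conn = sqlite3.connect(db_path)
    try:
        yield conn
    finally:
        conn.close()
```

Opening read-only is useful for report/chart generation paths that should never mutate the metrics store, even on bugs.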
@@ -259,7 +259,7 @@ def insert_metric(
    role: str,
    metric: str,
    value: float,
    db_path: Optional[Path] = None,
    db_path: Path | None = None,
) -> bool:
    """Insert a single metric value.

@@ -293,7 +293,7 @@ def insert_metrics(
    ts: int,
    role: str,
    metrics: dict[str, Any],
    db_path: Optional[Path] = None,
    db_path: Path | None = None,
) -> int:
    """Insert multiple metrics from a dict (e.g., firmware status response).

@@ -348,7 +348,7 @@ def get_metrics_for_period(
    role: str,
    start_ts: int,
    end_ts: int,
    db_path: Optional[Path] = None,
    db_path: Path | None = None,
) -> dict[str, list[tuple[int, float]]]:
    """Fetch all metrics for a role within a time range.

@@ -403,8 +403,8 @@

def get_latest_metrics(
    role: str,
    db_path: Optional[Path] = None,
) -> Optional[dict[str, Any]]:
    db_path: Path | None = None,
) -> dict[str, Any] | None:
    """Get the most recent metrics for a role.

    Returns all metrics at the most recent timestamp as a flat dict.

@@ -455,7 +455,7 @@ def get_latest_metrics(

def get_metric_count(
    role: str,
    db_path: Optional[Path] = None,
    db_path: Path | None = None,
) -> int:
    """Get total number of metric rows for a role.

@@ -476,12 +476,13 @@ def get_metric_count(
        "SELECT COUNT(*) FROM metrics WHERE role = ?",
        (role,)
    )
    return cursor.fetchone()[0]
    row = cursor.fetchone()
    return int(row[0]) if row else 0
def get_distinct_timestamps(
|
||||
role: str,
|
||||
db_path: Optional[Path] = None,
|
||||
db_path: Path | None = None,
|
||||
) -> int:
|
||||
"""Get count of distinct timestamps for a role.
|
||||
|
||||
@@ -501,12 +502,13 @@ def get_distinct_timestamps(
|
||||
"SELECT COUNT(DISTINCT ts) FROM metrics WHERE role = ?",
|
||||
(role,)
|
||||
)
|
||||
return cursor.fetchone()[0]
|
||||
row = cursor.fetchone()
|
||||
return int(row[0]) if row else 0
|
||||
|
||||
|
||||
def get_available_metrics(
|
||||
role: str,
|
||||
db_path: Optional[Path] = None,
|
||||
db_path: Path | None = None,
|
||||
) -> list[str]:
|
||||
"""Get list of all metric names stored for a role.
|
||||
|
||||
@@ -529,7 +531,7 @@ def get_available_metrics(
|
||||
return [row["metric"] for row in cursor]
|
||||
|
||||
|
||||
def vacuum_db(db_path: Optional[Path] = None) -> None:
|
||||
def vacuum_db(db_path: Path | None = None) -> None:
|
||||
"""Compact database and rebuild indexes.
|
||||
|
||||
Should be run periodically (e.g., weekly via cron).
|
||||
|
||||
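The `get_metric_count` and `get_distinct_timestamps` hunks replace `cursor.fetchone()[0]` with a guarded two-step fetch. A self-contained sketch against an in-memory database (table schema assumed for illustration): `Cursor.fetchone()` is typed as returning `Any | None`, so the guard both satisfies mypy and avoids indexing a hypothetical `None`.

```python
import sqlite3


def metric_count(conn: sqlite3.Connection, role: str) -> int:
    """Count metric rows for a role without assuming fetchone() succeeded.

    COUNT(*) always yields one row, but `int(row[0]) if row else 0`
    gives a typed int and never indexes into None.
    """
    cursor = conn.execute(
        "SELECT COUNT(*) FROM metrics WHERE role = ?", (role,)
    )
    row = cursor.fetchone()
    return int(row[0]) if row else 0
```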
@@ -4,7 +4,6 @@ import os
 import re
 import warnings
 from pathlib import Path
-from typing import Optional


 def _parse_config_value(value: str) -> str:
@@ -79,14 +78,14 @@ def _load_config_file() -> None:
                 os.environ[key] = value

     except (OSError, UnicodeDecodeError) as e:
-        warnings.warn(f"Failed to load {config_path}: {e}")
+        warnings.warn(f"Failed to load {config_path}: {e}", stacklevel=2)


 # Load config file at module import time, before Config is instantiated
 _load_config_file()


-def get_str(key: str, default: Optional[str] = None) -> Optional[str]:
+def get_str(key: str, default: str | None = None) -> str | None:
     """Get string env var."""
     return os.environ.get(key, default)

@@ -127,12 +126,80 @@ def get_path(key: str, default: str) -> Path:
     return Path(val).expanduser().resolve()


+def get_unit_system(key: str, default: str = "metric") -> str:
+    """Get display unit system env var, normalized to metric/imperial."""
+    val = os.environ.get(key, default).strip().lower()
+    if val in ("metric", "imperial"):
+        return val
+    return default
+
+
 class Config:
     """Configuration loaded from environment variables."""

-    def __init__(self):
-        # Connection settings
+    # Connection settings
+    mesh_transport: str
+    mesh_serial_port: str | None
+    mesh_serial_baud: int
+    mesh_tcp_host: str | None
+    mesh_tcp_port: int
+    mesh_ble_addr: str | None
+    mesh_ble_pin: str | None
+    mesh_debug: bool
+
+    # Remote repeater identity
+    repeater_name: str | None
+    repeater_key_prefix: str | None
+    repeater_password: str | None
+
+    # Intervals and timeouts
+    companion_step: int
+    repeater_step: int
+    remote_timeout_s: int
+    remote_retry_attempts: int
+    remote_retry_backoff_s: int
+    remote_cb_fails: int
+    remote_cb_cooldown_s: int
+
+    # Telemetry
+    telemetry_enabled: bool
+    telemetry_timeout_s: int
+    telemetry_retry_attempts: int
+    telemetry_retry_backoff_s: int
+
+    # Paths
+    state_dir: Path
+    out_dir: Path
+    html_path: str
+
+    # Report location metadata
+    report_location_name: str | None
+    report_location_short: str | None
+    report_lat: float
+    report_lon: float
+    report_elev: float
+    report_elev_unit: str | None
+
+    # Node display names
+    repeater_display_name: str | None
+    companion_display_name: str | None
+    repeater_pubkey_prefix: str | None
+    companion_pubkey_prefix: str | None
+    repeater_hardware: str | None
+    companion_hardware: str | None
+
+    # Radio configuration
+    radio_frequency: str | None
+    radio_bandwidth: str | None
+    radio_spread_factor: str | None
+    radio_coding_rate: str | None
+
+    # Display formatting
+    display_unit_system: str
+
+    def __init__(self) -> None:
+        # Connection settings
-        self.mesh_transport = get_str("MESH_TRANSPORT", "serial")
+        self.mesh_transport = get_str("MESH_TRANSPORT", "serial") or "serial"
         self.mesh_serial_port = get_str("MESH_SERIAL_PORT")  # None = auto-detect
         self.mesh_serial_baud = get_int("MESH_SERIAL_BAUD", 115200)
         self.mesh_tcp_host = get_str("MESH_TCP_HOST", "localhost")
@@ -155,6 +222,14 @@ class Config:
         self.remote_cb_fails = get_int("REMOTE_CB_FAILS", 6)
         self.remote_cb_cooldown_s = get_int("REMOTE_CB_COOLDOWN_S", 3600)

+        # Telemetry collection (requires sensor board on repeater)
+        self.telemetry_enabled = get_bool("TELEMETRY_ENABLED", False)
+        # Separate settings allow tuning if telemetry proves problematic
+        # Defaults match status settings - tune down if needed
+        self.telemetry_timeout_s = get_int("TELEMETRY_TIMEOUT_S", 10)
+        self.telemetry_retry_attempts = get_int("TELEMETRY_RETRY_ATTEMPTS", 2)
+        self.telemetry_retry_backoff_s = get_int("TELEMETRY_RETRY_BACKOFF_S", 4)
+
         # Paths (defaults are Docker container paths; native installs override via config)
         self.state_dir = get_path("STATE_DIR", "/data/state")
         self.out_dir = get_path("OUT_DIR", "/out")
@@ -193,9 +268,13 @@ class Config:
         self.radio_spread_factor = get_str("RADIO_SPREAD_FACTOR", "SF8")
         self.radio_coding_rate = get_str("RADIO_CODING_RATE", "CR8")

+        # Display formatting
+        self.display_unit_system = get_unit_system("DISPLAY_UNIT_SYSTEM", "metric")
+
         self.html_path = get_str("HTML_PATH", "") or ""

 # Global config instance
-_config: Optional[Config] = None
+_config: Config | None = None


 def get_config() -> Config:
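The new `get_unit_system` helper added in the `env.py` hunk above can be exercised standalone; this is the function exactly as the diff adds it, reading a display-unit env var and normalizing whitespace and case before validating against the two accepted systems.

```python
import os


def get_unit_system(key: str, default: str = "metric") -> str:
    """Get display unit system env var, normalized to metric/imperial."""
    val = os.environ.get(key, default).strip().lower()
    if val in ("metric", "imperial"):
        return val
    return default
```

Unknown or malformed values fall back to the default rather than leaking arbitrary strings into the templates.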
@@ -1,14 +1,14 @@
 """Shared formatting functions for display values."""

 from datetime import datetime
-from typing import Any, Optional, Union
+from typing import Any

-Number = Union[int, float]
+from .battery import voltage_to_percentage

+Number = int | float


-def format_time(ts: Optional[int]) -> str:
+def format_time(ts: int | None) -> str:
     """Format Unix timestamp to human readable string."""
     if ts is None:
         return "N/A"
@@ -28,14 +28,14 @@ def format_value(value: Any) -> str:
     return str(value)


-def format_number(value: Optional[int]) -> str:
+def format_number(value: int | None) -> str:
     """Format an integer with thousands separators."""
     if value is None:
         return "N/A"
     return f"{value:,}"


-def format_duration(seconds: Optional[int]) -> str:
+def format_duration(seconds: int | None) -> str:
     """Format duration in seconds to human readable string (days, hours, minutes, seconds)."""
     if seconds is None:
         return "N/A"
@@ -57,7 +57,7 @@ def format_duration(seconds: Optional[int]) -> str:
     return " ".join(parts)


-def format_uptime(seconds: Optional[int]) -> str:
+def format_uptime(seconds: int | None) -> str:
     """Format uptime seconds to human readable string (days, hours, minutes)."""
     if seconds is None:
         return "N/A"
@@ -76,7 +76,7 @@ def format_uptime(seconds: Optional[int]) -> str:
     return " ".join(parts)


-def format_voltage_with_pct(mv: Optional[float]) -> str:
+def format_voltage_with_pct(mv: float | None) -> str:
     """Format millivolts as voltage with battery percentage."""
     if mv is None:
         return "N/A"
@@ -85,7 +85,7 @@ def format_voltage_with_pct(mv: Optional[float]) -> str:
     return f"{v:.2f} V ({pct:.0f}%)"


-def format_compact_number(value: Optional[Number], precision: int = 1) -> str:
+def format_compact_number(value: Number | None, precision: int = 1) -> str:
     """Format a number using compact notation (k, M suffixes).

     Rules:
@@ -119,7 +119,7 @@ def format_compact_number(value: Optional[Number], precision: int = 1) -> str:
     return str(int(value))


-def format_duration_compact(seconds: Optional[int]) -> str:
+def format_duration_compact(seconds: int | None) -> str:
     """Format duration showing only the two most significant units.

     Uses truncation (floor), not rounding.
@@ -1,27 +1,46 @@
 """HTML rendering helpers using Jinja2 templates."""

+from __future__ import annotations
+
 import calendar
 import shutil
 from datetime import datetime
 from pathlib import Path
-from typing import Any, Optional
+from typing import TYPE_CHECKING, Any, TypedDict

 from jinja2 import Environment, PackageLoader, select_autoescape

+from . import log
+from .charts import load_chart_stats
 from .env import get_config
 from .formatters import (
-    format_time,
-    format_value,
-    format_number,
-    format_duration,
-    format_uptime,
     format_compact_number,
+    format_duration,
     format_duration_compact,
+    format_number,
+    format_time,
+    format_uptime,
+    format_value,
 )
-from .charts import load_chart_stats
-from .metrics import get_chart_metrics, get_metric_label
-from . import log
+from .metrics import (
+    get_chart_metrics,
+    get_metric_label,
+    get_metric_unit,
+    get_telemetry_metric_decimals,
+    is_telemetry_metric,
+)
+
+if TYPE_CHECKING:
+    from .reports import MonthlyAggregate, YearlyAggregate
+
+
+class MetricDisplay(TypedDict, total=False):
+    """A metric display item for the UI."""
+
+    label: str
+    value: str
+    unit: str | None
+    raw_value: int

 # Status indicator thresholds (seconds)
 STATUS_ONLINE_THRESHOLD = 1800  # 30 minutes
@@ -76,7 +95,7 @@ COMPANION_CHART_GROUPS = [
 ]

 # Singleton Jinja2 environment
-_jinja_env: Optional[Environment] = None
+_jinja_env: Environment | None = None


 def get_jinja_env() -> Environment:
@@ -110,7 +129,7 @@ def get_jinja_env() -> Environment:
     return env


-def get_status(ts: Optional[int]) -> tuple[str, str]:
+def get_status(ts: int | None) -> tuple[str, str]:
     """Determine status based on timestamp age.

     Returns:
@@ -128,7 +147,7 @@ def get_status(ts: Optional[int]) -> tuple[str, str]:
     return ("offline", "Offline")


-def build_repeater_metrics(row: Optional[dict]) -> dict:
+def build_repeater_metrics(row: dict | None) -> dict:
     """Build metrics data from repeater database row.

     Args:
@@ -242,7 +261,7 @@ def build_repeater_metrics(row: Optional[dict]) -> dict:
     }


-def build_companion_metrics(row: Optional[dict]) -> dict:
+def build_companion_metrics(row: dict | None) -> dict:
     """Build metrics data from companion database row.

     Args:
@@ -296,7 +315,7 @@ def build_companion_metrics(row: Optional[dict]) -> dict:
         })

     # Secondary metrics (empty for companion)
-    secondary_metrics = []
+    secondary_metrics: list[MetricDisplay] = []

     # Traffic metrics for companion
     traffic_metrics = []
@@ -402,7 +421,7 @@ def build_radio_config() -> list[dict]:
     ]


-def _format_stat_value(value: Optional[float], metric: str) -> str:
+def _format_stat_value(value: float | None, metric: str) -> str:
     """Format a statistic value for display in chart footer.

     Args:
@@ -415,6 +434,14 @@ def _format_stat_value(value: Optional[float], metric: str) -> str:
     if value is None:
         return "-"

+    # Telemetry metrics can be auto-discovered and need dynamic unit conversion.
+    if is_telemetry_metric(metric):
+        cfg = get_config()
+        decimals = get_telemetry_metric_decimals(metric, cfg.display_unit_system)
+        unit = get_metric_unit(metric, cfg.display_unit_system)
+        formatted = f"{value:.{decimals}f}"
+        return f"{formatted} {unit}" if unit else formatted
+
     # Determine format and suffix based on metric (using firmware field names)
     # Battery voltage (already transformed to volts in charts.py)
     if metric in ("bat", "battery_mv"):
@@ -444,7 +471,7 @@ def _format_stat_value(value: Optional[float], metric: str) -> str:
     return f"{value:.2f}"


-def _load_svg_content(path: Path) -> Optional[str]:
+def _load_svg_content(path: Path) -> str | None:
     """Load SVG file content for inline embedding.

     Args:
@@ -466,7 +493,8 @@
 def build_chart_groups(
     role: str,
     period: str,
-    chart_stats: Optional[dict] = None,
+    chart_stats: dict | None = None,
+    asset_prefix: str = "",
 ) -> list[dict]:
     """Build chart groups for template.

@@ -477,10 +505,31 @@
         role: "companion" or "repeater"
         period: Time period ("day", "week", etc.)
         chart_stats: Stats dict from chart_stats.json (optional)
+        asset_prefix: Relative path prefix to reach /assets from page location
     """
     cfg = get_config()
-    groups_config = REPEATER_CHART_GROUPS if role == "repeater" else COMPANION_CHART_GROUPS
-    chart_metrics = get_chart_metrics(role)
+    available_metrics = sorted(chart_stats.keys()) if chart_stats else []
+    chart_metrics = get_chart_metrics(
+        role,
+        available_metrics=available_metrics,
+        telemetry_enabled=cfg.telemetry_enabled,
+    )
+    groups_config = [
+        {"title": group["title"], "metrics": list(group["metrics"])}
+        for group in (
+            REPEATER_CHART_GROUPS if role == "repeater" else COMPANION_CHART_GROUPS
+        )
+    ]
+
+    if role == "repeater" and cfg.telemetry_enabled:
+        telemetry_metrics = [metric for metric in chart_metrics if is_telemetry_metric(metric)]
+        if telemetry_metrics:
+            groups_config.append(
+                {
+                    "title": "Telemetry",
+                    "metrics": telemetry_metrics,
+                }
+            )

     if chart_stats is None:
         chart_stats = {}
@@ -523,7 +572,8 @@
             {"label": "Max", "value": _format_stat_value(max_val, metric)},
         ]

-        chart_data = {
+        # Build chart data for template - mixed types require Any
+        chart_data: dict[str, Any] = {
             "label": get_metric_label(metric),
             "metric": metric,
             "current": current_formatted,
@@ -537,8 +587,9 @@
             chart_data["use_svg"] = True
         else:
             # Fallback to PNG paths
-            chart_data["src_light"] = f"/assets/{role}/{metric}_{period}_light.png"
-            chart_data["src_dark"] = f"/assets/{role}/{metric}_{period}_dark.png"
+            asset_base = f"{asset_prefix}assets/{role}/"
+            chart_data["src_light"] = f"{asset_base}{metric}_{period}_light.png"
+            chart_data["src_dark"] = f"{asset_base}{metric}_{period}_dark.png"
             chart_data["use_svg"] = False

         charts.append(chart_data)
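The PNG-fallback hunk above replaces absolute `/assets/...` URLs with an `asset_prefix`-relative base so pages work when the site is served under a subpath. The path construction in isolation — the helper name `chart_image_paths` is hypothetical; the diff inlines this logic inside `build_chart_groups`:

```python
def chart_image_paths(
    role: str, metric: str, period: str, asset_prefix: str = ""
) -> tuple[str, str]:
    """Build light/dark chart image paths relative to the page location.

    asset_prefix is "" for pages at the site root and "../" for pages one
    level down (e.g. /companion/), so no absolute paths are emitted.
    """
    asset_base = f"{asset_prefix}assets/{role}/"
    return (
        f"{asset_base}{metric}_{period}_light.png",
        f"{asset_base}{metric}_{period}_dark.png",
    )
```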
@@ -555,7 +606,7 @@
 def build_page_context(
     role: str,
     period: str,
-    row: Optional[dict],
+    row: dict | None,
     at_root: bool,
 ) -> dict[str, Any]:
     """Build template context dictionary for node pages.
@@ -569,16 +620,10 @@
     cfg = get_config()

     # Get node name from config
-    if role == "repeater":
-        node_name = cfg.repeater_display_name
-    else:
-        node_name = cfg.companion_display_name
+    node_name = cfg.repeater_display_name if role == "repeater" else cfg.companion_display_name

     # Pubkey prefix from config
-    if role == "repeater":
-        pubkey_pre = cfg.repeater_pubkey_prefix
-    else:
-        pubkey_pre = cfg.companion_pubkey_prefix
+    pubkey_pre = cfg.repeater_pubkey_prefix if role == "repeater" else cfg.companion_pubkey_prefix

     # Status based on timestamp
     ts = row.get("ts") if row else None
@@ -588,8 +633,8 @@
     last_updated = None
     last_updated_iso = None
     if ts:
-        dt = datetime.fromtimestamp(ts)
-        last_updated = dt.strftime("%b %d, %Y at %H:%M UTC")
+        dt = datetime.fromtimestamp(ts).astimezone()
+        last_updated = dt.strftime("%b %d, %Y at %H:%M %Z")
         last_updated_iso = dt.isoformat()

     # Build metrics for sidebar
@@ -606,7 +651,10 @@

     # Load chart stats and build chart groups
     chart_stats = load_chart_stats(role)
-    chart_groups = build_chart_groups(role, period, chart_stats)
+
+    # Relative path prefixes (avoid absolute paths for subpath deployments)
+    css_path = "" if at_root else "../"
+    asset_prefix = "" if at_root else "../"

     # Period config
     page_title, page_subtitle = PERIOD_CONFIG.get(period, ("Observations", "Radio telemetry"))
@@ -626,9 +674,18 @@
         ),
     }

-    # CSS and link paths - depend on whether we're at root or in /companion/
-    css_path = "/" if at_root else "../"
-    base_path = "" if at_root else "/companion"
+    chart_groups = build_chart_groups(role, period, chart_stats, asset_prefix=asset_prefix)
+
+    # Navigation links depend on whether we're at root or in /companion/
+    base_path = ""
+    if at_root:
+        repeater_link = "day.html"
+        companion_link = "companion/day.html"
+        reports_link = "reports/"
+    else:
+        repeater_link = "../day.html"
+        companion_link = "day.html"
+        reports_link = "../reports/"

     return {
         # Page meta
@@ -636,6 +693,7 @@
         "meta_description": meta_descriptions.get(role, "MeshCore mesh network statistics dashboard."),
         "og_image": None,
         "css_path": css_path,
+        "display_unit_system": cfg.display_unit_system,

         # Node info
         "node_name": node_name,
@@ -657,9 +715,9 @@
         # Navigation
         "period": period,
         "base_path": base_path,
-        "repeater_link": f"{css_path}day.html",
-        "companion_link": f"{css_path}companion/day.html",
-        "reports_link": f"{css_path}reports/",
+        "repeater_link": repeater_link,
+        "companion_link": companion_link,
+        "reports_link": reports_link,

         # Timestamps
         "last_updated": last_updated,
@@ -675,7 +733,7 @@
 def render_node_page(
     role: str,
     period: str,
-    row: Optional[dict],
+    row: dict | None,
     at_root: bool = False,
 ) -> str:
     """Render a node page (companion or repeater).
@@ -689,7 +747,7 @@
     env = get_jinja_env()
     context = build_page_context(role, period, row, at_root)
     template = env.get_template("node.html")
-    return template.render(**context)
+    return str(template.render(**context))


 def copy_static_assets():
@@ -712,8 +770,8 @@ def copy_static_assets():


 def write_site(
-    companion_row: Optional[dict],
-    repeater_row: Optional[dict],
+    companion_row: dict | None,
+    repeater_row: dict | None,
 ) -> list[Path]:
     """
     Write all static site pages.
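The timestamp hunk above stops hardcoding "UTC" into the label: `datetime.fromtimestamp(ts)` yields a naive local time, so labeling it "UTC" was wrong whenever the host wasn't on UTC. Calling `.astimezone()` attaches the system's local timezone, letting `%Z` render the actual zone. A standalone sketch of the pattern (function name illustrative):

```python
from datetime import datetime


def format_last_updated(ts: int) -> tuple[str, str]:
    """Format a Unix timestamp for display, plus an ISO-8601 string.

    .astimezone() on a naive local datetime attaches the system's local
    timezone, so %Z shows the real zone instead of a hardcoded "UTC".
    """
    dt = datetime.fromtimestamp(ts).astimezone()
    return dt.strftime("%b %d, %Y at %H:%M %Z"), dt.isoformat()
```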
@@ -794,8 +852,8 @@ def _fmt_val_plain(value: float | None, fmt: str = ".2f") -> str:
|
||||
|
||||
|
||||
def build_monthly_table_data(
|
||||
agg: "MonthlyAggregate", role: str
|
||||
) -> tuple[list[dict], list[dict], list[dict]]:
|
||||
agg: MonthlyAggregate, role: str
|
||||
) -> tuple[list[dict[str, Any]], list[dict[str, Any]], list[dict[str, Any]]]:
|
||||
"""Build table column groups, headers and rows for a monthly report.
|
||||
|
||||
Args:
|
||||
@@ -807,6 +865,11 @@ def build_monthly_table_data(
|
||||
"""
|
||||
from .reports import MetricStats
|
||||
|
||||
# Define types upfront for mypy
|
||||
col_groups: list[dict[str, Any]]
|
||||
headers: list[dict[str, Any]]
|
||||
rows: list[dict[str, Any]]
|
||||
|
||||
if role == "repeater":
|
||||
# Column groups matching redesign/reports/monthly.html
|
||||
col_groups = [
|
||||
@@ -845,24 +908,24 @@ def build_monthly_table_data(
|
||||
airtime = m.get("airtime", MetricStats())
|
||||
|
||||
# Convert mV to V for display
|
||||
bat_v_mean = bat.mean / 1000.0 if bat.mean else None
|
||||
bat_v_min = bat.min_value / 1000.0 if bat.min_value else None
|
||||
bat_v_max = bat.max_value / 1000.0 if bat.max_value else None
|
||||
bat_v_mean = bat.mean / 1000.0 if bat.mean is not None else None
|
||||
bat_v_min = bat.min_value / 1000.0 if bat.min_value is not None else None
|
||||
bat_v_max = bat.max_value / 1000.0 if bat.max_value is not None else None
|
||||
|
||||
rows.append({
|
||||
"is_summary": False,
|
||||
"cells": [
|
||||
{"value": f"{daily.date.day:02d}", "class": None},
|
||||
{"value": f"{bat_v_mean:.2f}" if bat_v_mean else "-", "class": None},
|
||||
{"value": f"{bat_pct.mean:.0f}" if bat_pct.mean else "-", "class": None},
|
||||
{"value": f"{bat_v_mean:.2f}" if bat_v_mean is not None else "-", "class": None},
|
||||
{"value": f"{bat_pct.mean:.0f}" if bat_pct.mean is not None else "-", "class": None},
|
||||
{"value": _fmt_val_time(bat_v_min, bat.min_time), "class": "muted"},
|
||||
{"value": _fmt_val_time(bat_v_max, bat.max_time), "class": "muted"},
|
||||
{"value": f"{rssi.mean:.0f}" if rssi.mean else "-", "class": None},
|
||||
{"value": f"{snr.mean:.1f}" if snr.mean else "-", "class": None},
|
||||
{"value": f"{noise.mean:.0f}" if noise.mean else "-", "class": None},
|
||||
{"value": f"{rx.total:,}" if rx.total else "-", "class": "highlight"},
|
||||
{"value": f"{tx.total:,}" if tx.total else "-", "class": None},
|
||||
{"value": f"{airtime.total:,}" if airtime.total else "-", "class": None},
|
||||
{"value": f"{rssi.mean:.0f}" if rssi.mean is not None else "-", "class": None},
|
||||
{"value": f"{snr.mean:.1f}" if snr.mean is not None else "-", "class": None},
|
||||
{"value": f"{noise.mean:.0f}" if noise.mean is not None else "-", "class": None},
|
||||
{"value": f"{rx.total:,}" if rx.total is not None else "-", "class": "highlight"},
|
||||
{"value": f"{tx.total:,}" if tx.total is not None else "-", "class": None},
|
||||
{"value": f"{airtime.total:,}" if airtime.total is not None else "-", "class": None},
|
||||
],
|
||||
})
|
||||
|
||||
@@ -877,24 +940,24 @@ def build_monthly_table_data(
|
||||
tx = s.get("nb_sent", MetricStats())
|
||||
airtime = s.get("airtime", MetricStats())
|
||||
|
||||
bat_v_mean = bat.mean / 1000.0 if bat.mean else None
|
||||
bat_v_min = bat.min_value / 1000.0 if bat.min_value else None
|
||||
bat_v_max = bat.max_value / 1000.0 if bat.max_value else None
|
||||
bat_v_mean = bat.mean / 1000.0 if bat.mean is not None else None
|
||||
bat_v_min = bat.min_value / 1000.0 if bat.min_value is not None else None
|
||||
bat_v_max = bat.max_value / 1000.0 if bat.max_value is not None else None
|
||||
|
||||
rows.append({
|
||||
"is_summary": True,
|
||||
"cells": [
|
||||
{"value": "", "class": None},
|
||||
{"value": f"{bat_v_mean:.2f}" if bat_v_mean else "-", "class": None},
|
||||
{"value": f"{bat_pct.mean:.0f}" if bat_pct.mean else "-", "class": None},
|
||||
{"value": f"{bat_v_mean:.2f}" if bat_v_mean is not None else "-", "class": None},
|
||||
{"value": f"{bat_pct.mean:.0f}" if bat_pct.mean is not None else "-", "class": None},
|
||||
{"value": _fmt_val_day(bat_v_min, bat.min_time), "class": "muted"},
|
||||
{"value": _fmt_val_day(bat_v_max, bat.max_time), "class": "muted"},
|
||||
{"value": f"{rssi.mean:.0f}" if rssi.mean else "-", "class": None},
|
||||
{"value": f"{snr.mean:.1f}" if snr.mean else "-", "class": None},
|
||||
{"value": f"{noise.mean:.0f}" if noise.mean else "-", "class": None},
|
||||
{"value": f"{rx.total:,}" if rx.total else "-", "class": "highlight"},
|
||||
{"value": f"{tx.total:,}" if tx.total else "-", "class": None},
|
||||
{"value": f"{airtime.total:,}" if airtime.total else "-", "class": None},
|
||||
{"value": f"{rssi.mean:.0f}" if rssi.mean is not None else "-", "class": None},
|
||||
{"value": f"{snr.mean:.1f}" if snr.mean is not None else "-", "class": None},
|
||||
{"value": f"{noise.mean:.0f}" if noise.mean is not None else "-", "class": None},
|
||||
{"value": f"{rx.total:,}" if rx.total is not None else "-", "class": "highlight"},
|
||||
{"value": f"{tx.total:,}" if tx.total is not None else "-", "class": None},
|
||||
{"value": f"{airtime.total:,}" if airtime.total is not None else "-", "class": None},
|
||||
],
|
||||
})
|
||||
|
||||
@@ -928,21 +991,21 @@ def build_monthly_table_data(
|
||||
tx = m.get("sent", MetricStats())
|
||||
|
||||
# Convert mV to V for display
|
||||
bat_v_mean = bat.mean / 1000.0 if bat.mean else None
|
||||
bat_v_min = bat.min_value / 1000.0 if bat.min_value else None
|
||||
bat_v_max = bat.max_value / 1000.0 if bat.max_value else None
|
||||
bat_v_mean = bat.mean / 1000.0 if bat.mean is not None else None
|
||||
bat_v_min = bat.min_value / 1000.0 if bat.min_value is not None else None
|
||||
bat_v_max = bat.max_value / 1000.0 if bat.max_value is not None else None
|
||||
|
||||
rows.append({
|
||||
"is_summary": False,
|
||||
"cells": [
|
||||
{"value": f"{daily.date.day:02d}", "class": None},
|
||||
{"value": f"{bat_v_mean:.2f}" if bat_v_mean else "-", "class": None},
|
||||
{"value": f"{bat_pct.mean:.0f}" if bat_pct.mean else "-", "class": None},
|
||||
{"value": f"{bat_v_mean:.2f}" if bat_v_mean is not None else "-", "class": None},
|
||||
{"value": f"{bat_pct.mean:.0f}" if bat_pct.mean is not None else "-", "class": None},
|
||||
{"value": _fmt_val_time(bat_v_min, bat.min_time), "class": "muted"},
|
||||
{"value": _fmt_val_time(bat_v_max, bat.max_time), "class": "muted"},
|
||||
{"value": f"{contacts.mean:.0f}" if contacts.mean else "-", "class": None},
|
||||
{"value": f"{rx.total:,}" if rx.total else "-", "class": "highlight"},
|
||||
{"value": f"{tx.total:,}" if tx.total else "-", "class": None},
|
||||
{"value": f"{contacts.mean:.0f}" if contacts.mean is not None else "-", "class": None},
|
||||
{"value": f"{rx.total:,}" if rx.total is not None else "-", "class": "highlight"},
|
||||
{"value": f"{tx.total:,}" if tx.total is not None else "-", "class": None},
|
||||
],
|
||||
})
|
||||
|
||||
@@ -954,21 +1017,21 @@ def build_monthly_table_data(
|
||||
rx = s.get("recv", MetricStats())
|
||||
tx = s.get("sent", MetricStats())
|
||||
|
||||
bat_v_mean = bat.mean / 1000.0 if bat.mean else None
|
||||
bat_v_min = bat.min_value / 1000.0 if bat.min_value else None
|
||||
bat_v_max = bat.max_value / 1000.0 if bat.max_value else None
|
||||
bat_v_mean = bat.mean / 1000.0 if bat.mean is not None else None
|
||||
bat_v_min = bat.min_value / 1000.0 if bat.min_value is not None else None
|
||||
bat_v_max = bat.max_value / 1000.0 if bat.max_value is not None else None
|
||||
|
||||
rows.append({
|
||||
"is_summary": True,
|
||||
"cells": [
|
||||
{"value": "", "class": None},
|
||||
{"value": f"{bat_v_mean:.2f}" if bat_v_mean else "-", "class": None},
|
||||
{"value": f"{bat_pct.mean:.0f}" if bat_pct.mean else "-", "class": None},
|
||||
{"value": f"{bat_v_mean:.2f}" if bat_v_mean is not None else "-", "class": None},
|
||||
{"value": f"{bat_pct.mean:.0f}" if bat_pct.mean is not None else "-", "class": None},
|
||||
{"value": _fmt_val_day(bat_v_min, bat.min_time), "class": "muted"},
|
||||
{"value": _fmt_val_day(bat_v_max, bat.max_time), "class": "muted"},
|
||||
{"value": f"{contacts.mean:.0f}" if contacts.mean else "-", "class": None},
|
||||
{"value": f"{rx.total:,}" if rx.total else "-", "class": "highlight"},
|
||||
                    {"value": f"{tx.total:,}" if tx.total else "-", "class": None},
                    {"value": f"{contacts.mean:.0f}" if contacts.mean is not None else "-", "class": None},
                    {"value": f"{rx.total:,}" if rx.total is not None else "-", "class": "highlight"},
                    {"value": f"{tx.total:,}" if tx.total is not None else "-", "class": None},
                ],
            })

@@ -986,8 +1049,8 @@ def _fmt_val_month(value: float | None, time_obj, fmt: str = ".2f") -> str:


def build_yearly_table_data(
    agg: "YearlyAggregate", role: str
) -> tuple[list[dict], list[dict], list[dict]]:
    agg: YearlyAggregate, role: str
) -> tuple[list[dict[str, Any]], list[dict[str, Any]], list[dict[str, Any]]]:
    """Build table column groups, headers and rows for a yearly report.

    Args:
@@ -999,6 +1062,11 @@ def build_yearly_table_data(
    """
    from .reports import MetricStats

    # Define types upfront for mypy
    col_groups: list[dict[str, Any]]
    headers: list[dict[str, Any]]
    rows: list[dict[str, Any]]

    if role == "repeater":
        # Column groups matching redesign/reports/yearly.html
        col_groups = [
@@ -1033,23 +1101,23 @@ def build_yearly_table_data(
            tx = s.get("nb_sent", MetricStats())

            # Convert mV to V
            bat_v_mean = bat.mean / 1000.0 if bat.mean else None
            bat_v_min = bat.min_value / 1000.0 if bat.min_value else None
            bat_v_max = bat.max_value / 1000.0 if bat.max_value else None
            bat_v_mean = bat.mean / 1000.0 if bat.mean is not None else None
            bat_v_min = bat.min_value / 1000.0 if bat.min_value is not None else None
            bat_v_max = bat.max_value / 1000.0 if bat.max_value is not None else None

            rows.append({
                "is_summary": False,
                "cells": [
                    {"value": str(agg.year), "class": None},
                    {"value": f"{monthly.month:02d}", "class": None},
                    {"value": f"{bat_v_mean:.2f}" if bat_v_mean else "-", "class": None},
                    {"value": f"{bat_pct.mean:.0f}" if bat_pct.mean else "-", "class": None},
                    {"value": f"{bat_v_mean:.2f}" if bat_v_mean is not None else "-", "class": None},
                    {"value": f"{bat_pct.mean:.0f}" if bat_pct.mean is not None else "-", "class": None},
                    {"value": _fmt_val_day(bat_v_max, bat.max_time), "class": "muted"},
                    {"value": _fmt_val_day(bat_v_min, bat.min_time), "class": "muted"},
                    {"value": f"{rssi.mean:.0f}" if rssi.mean else "-", "class": None},
                    {"value": f"{snr.mean:.1f}" if snr.mean else "-", "class": None},
                    {"value": f"{rx.total:,}" if rx.total else "-", "class": "highlight"},
                    {"value": f"{tx.total:,}" if tx.total else "-", "class": None},
                    {"value": f"{rssi.mean:.0f}" if rssi.mean is not None else "-", "class": None},
                    {"value": f"{snr.mean:.1f}" if snr.mean is not None else "-", "class": None},
                    {"value": f"{rx.total:,}" if rx.total is not None else "-", "class": "highlight"},
                    {"value": f"{tx.total:,}" if tx.total is not None else "-", "class": None},
                ],
            })

@@ -1062,23 +1130,23 @@ def build_yearly_table_data(
        rx = s.get("nb_recv", MetricStats())
        tx = s.get("nb_sent", MetricStats())

        bat_v_mean = bat.mean / 1000.0 if bat.mean else None
        bat_v_min = bat.min_value / 1000.0 if bat.min_value else None
        bat_v_max = bat.max_value / 1000.0 if bat.max_value else None
        bat_v_mean = bat.mean / 1000.0 if bat.mean is not None else None
        bat_v_min = bat.min_value / 1000.0 if bat.min_value is not None else None
        bat_v_max = bat.max_value / 1000.0 if bat.max_value is not None else None

        rows.append({
            "is_summary": True,
            "cells": [
                {"value": "", "class": None},
                {"value": "", "class": None},
                {"value": f"{bat_v_mean:.2f}" if bat_v_mean else "-", "class": None},
                {"value": f"{bat_pct.mean:.0f}" if bat_pct.mean else "-", "class": None},
                {"value": f"{bat_v_mean:.2f}" if bat_v_mean is not None else "-", "class": None},
                {"value": f"{bat_pct.mean:.0f}" if bat_pct.mean is not None else "-", "class": None},
                {"value": _fmt_val_month(bat_v_max, bat.max_time), "class": "muted"},
                {"value": _fmt_val_month(bat_v_min, bat.min_time), "class": "muted"},
                {"value": f"{rssi.mean:.0f}" if rssi.mean else "-", "class": None},
                {"value": f"{snr.mean:.1f}" if snr.mean else "-", "class": None},
                {"value": f"{rx.total:,}" if rx.total else "-", "class": "highlight"},
                {"value": f"{tx.total:,}" if tx.total else "-", "class": None},
                {"value": f"{rssi.mean:.0f}" if rssi.mean is not None else "-", "class": None},
                {"value": f"{snr.mean:.1f}" if snr.mean is not None else "-", "class": None},
                {"value": f"{rx.total:,}" if rx.total is not None else "-", "class": "highlight"},
                {"value": f"{tx.total:,}" if tx.total is not None else "-", "class": None},
            ],
        })

@@ -1113,22 +1181,22 @@ def build_yearly_table_data(
            tx = s.get("sent", MetricStats())

            # Convert mV to V
            bat_v_mean = bat.mean / 1000.0 if bat.mean else None
            bat_v_min = bat.min_value / 1000.0 if bat.min_value else None
            bat_v_max = bat.max_value / 1000.0 if bat.max_value else None
            bat_v_mean = bat.mean / 1000.0 if bat.mean is not None else None
            bat_v_min = bat.min_value / 1000.0 if bat.min_value is not None else None
            bat_v_max = bat.max_value / 1000.0 if bat.max_value is not None else None

            rows.append({
                "is_summary": False,
                "cells": [
                    {"value": str(agg.year), "class": None},
                    {"value": f"{monthly.month:02d}", "class": None},
                    {"value": f"{bat_v_mean:.2f}" if bat_v_mean else "-", "class": None},
                    {"value": f"{bat_pct.mean:.0f}" if bat_pct.mean else "-", "class": None},
                    {"value": f"{bat_v_mean:.2f}" if bat_v_mean is not None else "-", "class": None},
                    {"value": f"{bat_pct.mean:.0f}" if bat_pct.mean is not None else "-", "class": None},
                    {"value": _fmt_val_day(bat_v_max, bat.max_time), "class": "muted"},
                    {"value": _fmt_val_day(bat_v_min, bat.min_time), "class": "muted"},
                    {"value": f"{contacts.mean:.0f}" if contacts.mean else "-", "class": None},
                    {"value": f"{rx.total:,}" if rx.total else "-", "class": "highlight"},
                    {"value": f"{tx.total:,}" if tx.total else "-", "class": None},
                    {"value": f"{contacts.mean:.0f}" if contacts.mean is not None else "-", "class": None},
                    {"value": f"{rx.total:,}" if rx.total is not None else "-", "class": "highlight"},
                    {"value": f"{tx.total:,}" if tx.total is not None else "-", "class": None},
                ],
            })

@@ -1140,22 +1208,22 @@ def build_yearly_table_data(
        rx = s.get("recv", MetricStats())
        tx = s.get("sent", MetricStats())

        bat_v_mean = bat.mean / 1000.0 if bat.mean else None
        bat_v_min = bat.min_value / 1000.0 if bat.min_value else None
        bat_v_max = bat.max_value / 1000.0 if bat.max_value else None
        bat_v_mean = bat.mean / 1000.0 if bat.mean is not None else None
        bat_v_min = bat.min_value / 1000.0 if bat.min_value is not None else None
        bat_v_max = bat.max_value / 1000.0 if bat.max_value is not None else None

        rows.append({
            "is_summary": True,
            "cells": [
                {"value": "", "class": None},
                {"value": "", "class": None},
                {"value": f"{bat_v_mean:.2f}" if bat_v_mean else "-", "class": None},
                {"value": f"{bat_pct.mean:.0f}" if bat_pct.mean else "-", "class": None},
                {"value": f"{bat_v_mean:.2f}" if bat_v_mean is not None else "-", "class": None},
                {"value": f"{bat_pct.mean:.0f}" if bat_pct.mean is not None else "-", "class": None},
                {"value": _fmt_val_month(bat_v_max, bat.max_time), "class": "muted"},
                {"value": _fmt_val_month(bat_v_min, bat.min_time), "class": "muted"},
                {"value": f"{contacts.mean:.0f}" if contacts.mean else "-", "class": None},
                {"value": f"{rx.total:,}" if rx.total else "-", "class": "highlight"},
                {"value": f"{tx.total:,}" if tx.total else "-", "class": None},
                {"value": f"{contacts.mean:.0f}" if contacts.mean is not None else "-", "class": None},
                {"value": f"{rx.total:,}" if rx.total is not None else "-", "class": "highlight"},
                {"value": f"{tx.total:,}" if tx.total is not None else "-", "class": None},
            ],
        })

@@ -1166,8 +1234,8 @@ def render_report_page(
    agg: Any,
    node_name: str,
    report_type: str,
    prev_report: Optional[dict] = None,
    next_report: Optional[dict] = None,
    prev_report: dict | None = None,
    next_report: dict | None = None,
) -> str:
    """Render a report page (monthly or yearly).

@@ -1239,7 +1307,7 @@ def render_report_page(
    }

    template = env.get_template("report.html")
    return template.render(**context)
    return str(template.render(**context))


def render_reports_index(report_sections: list[dict]) -> str:
@@ -1276,4 +1344,4 @@ def render_reports_index(report_sections: list[dict]) -> str:
    }

    template = env.get_template("report_index.html")
    return template.render(**context)
    return str(template.render(**context))
@@ -2,6 +2,7 @@

import sys
from datetime import datetime

from .env import get_config

@@ -1,14 +1,18 @@
"""MeshCore client wrapper with safe command execution and contact lookup."""

import asyncio
from typing import Any, Optional, Callable, Coroutine
import fcntl
from collections.abc import AsyncIterator, Coroutine
from contextlib import asynccontextmanager
from pathlib import Path
from typing import Any

from .env import get_config
from . import log
from .env import get_config

# Try to import meshcore - will fail gracefully if not installed
try:
    from meshcore import MeshCore, EventType
    from meshcore import EventType, MeshCore
    MESHCORE_AVAILABLE = True
except ImportError:
    MESHCORE_AVAILABLE = False
@@ -16,7 +20,7 @@ except ImportError:
    EventType = None


def auto_detect_serial_port() -> Optional[str]:
def auto_detect_serial_port() -> str | None:
    """
    Auto-detect a suitable serial port for MeshCore device.
    Prefers /dev/ttyACM* or /dev/ttyUSB* devices.
@@ -36,20 +40,20 @@ def auto_detect_serial_port() -> Optional[str]:
    for port in ports:
        if "ttyACM" in port.device:
            log.info(f"Auto-detected serial port: {port.device} ({port.description})")
            return port.device
            return str(port.device)

    for port in ports:
        if "ttyUSB" in port.device:
            log.info(f"Auto-detected serial port: {port.device} ({port.description})")
            return port.device
            return str(port.device)

    # Fall back to first available
    port = ports[0]
    log.info(f"Using first available port: {port.device} ({port.description})")
    return port.device
    return str(port.device)


async def connect_from_env() -> Optional[Any]:
async def connect_from_env() -> Any | None:
    """
    Connect to MeshCore device using environment configuration.

@@ -100,11 +104,97 @@ async def connect_from_env() -> Optional[Any]:
    return None


async def _acquire_lock_async(
    lock_file,
    timeout: float = 60.0,
    poll_interval: float = 0.1,
) -> None:
    """Acquire exclusive file lock without blocking the event loop.

    Uses non-blocking LOCK_NB with async polling to avoid freezing the event loop.

    Args:
        lock_file: Open file handle to lock
        timeout: Maximum seconds to wait for lock
        poll_interval: Seconds between lock attempts

    Raises:
        TimeoutError: If lock cannot be acquired within timeout
    """
    loop = asyncio.get_running_loop()
    deadline = loop.time() + timeout

    while True:
        try:
            fcntl.flock(lock_file.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
            return
        except BlockingIOError as err:
            if loop.time() >= deadline:
                raise TimeoutError(
                    f"Could not acquire serial lock within {timeout}s. "
                    "Another process may be using the serial port."
                ) from err
            await asyncio.sleep(poll_interval)


@asynccontextmanager
async def connect_with_lock(
    lock_timeout: float = 60.0,
) -> AsyncIterator[Any | None]:
    """Connect to MeshCore with serial port locking to prevent concurrent access.

    For serial transport: Acquires exclusive file lock before connecting.
    For TCP/BLE: No locking needed (protocol handles multiple connections).

    Args:
        lock_timeout: Maximum seconds to wait for serial lock

    Yields:
        MeshCore client instance, or None if connection failed
    """
    cfg = get_config()
    lock_file = None
    mc = None
    needs_lock = cfg.mesh_transport.lower() == "serial"

    try:
        if needs_lock:
            lock_path: Path = cfg.state_dir / "serial.lock"
            lock_path.parent.mkdir(parents=True, exist_ok=True)

            # Use 'a' mode: doesn't truncate, creates if missing
            lock_file = open(lock_path, "a")  # noqa: SIM115 - must stay open for lock
            try:
                await _acquire_lock_async(lock_file, timeout=lock_timeout)
                log.debug(f"Acquired serial lock: {lock_path}")
            except Exception:
                # If lock acquisition fails, close file before re-raising
                lock_file.close()
                lock_file = None
                raise

        mc = await connect_from_env()
        yield mc

    finally:
        # Disconnect first (while we still hold the lock)
        if mc is not None and hasattr(mc, "disconnect"):
            try:
                await mc.disconnect()
            except Exception as e:
                log.debug(f"Error during disconnect (ignored): {e}")

        # Release lock by closing the file (close() auto-releases flock)
        if lock_file is not None:
            lock_file.close()
            log.debug("Released serial lock")


async def run_command(
    mc: Any,
    cmd_coro: Coroutine,
    name: str,
) -> tuple[bool, Optional[str], Optional[dict], Optional[str]]:
) -> tuple[bool, str | None, dict | None, str | None]:
    """
    Run a MeshCore command and capture result.

@@ -129,10 +219,7 @@ async def run_command(
    # Extract event type name
    event_type_name = None
    if hasattr(event, "type"):
        if hasattr(event.type, "name"):
            event_type_name = event.type.name
        else:
            event_type_name = str(event.type)
        event_type_name = event.type.name if hasattr(event.type, "name") else str(event.type)

    # Check for error
    if EventType and hasattr(event, "type") and event.type == EventType.ERROR:
@@ -157,13 +244,13 @@ async def run_command(
        log.debug(f"Command {name} returned: {event_type_name}")
        return (True, event_type_name, payload, None)

    except asyncio.TimeoutError:
    except TimeoutError:
        return (False, None, None, "Timeout")
    except Exception as e:
        return (False, None, None, str(e))


def get_contact_by_name(mc: Any, name: str) -> Optional[Any]:
def get_contact_by_name(mc: Any, name: str) -> Any | None:
    """
    Find a contact by advertised name.

@@ -187,7 +274,7 @@ def get_contact_by_name(mc: Any, name: str) -> Optional[Any]:
    return None


def get_contact_by_key_prefix(mc: Any, prefix: str) -> Optional[Any]:
def get_contact_by_key_prefix(mc: Any, prefix: str) -> Any | None:
    """
    Find a contact by public key prefix.

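The `_acquire_lock_async` helper added in this diff combines a non-blocking `flock` (`LOCK_EX | LOCK_NB`) with `asyncio.sleep` polling, so the event loop stays responsive while another process holds the serial port. A minimal standalone sketch of the same pattern (simplified names and timeouts, Unix-only since it relies on `fcntl`):

```python
import asyncio
import fcntl
import tempfile


async def acquire_lock_async(lock_file, timeout: float = 2.0, poll_interval: float = 0.05) -> None:
    # Non-blocking flock attempt; on contention we yield to the event loop
    # instead of blocking the whole process in a LOCK_EX call.
    loop = asyncio.get_running_loop()
    deadline = loop.time() + timeout
    while True:
        try:
            fcntl.flock(lock_file.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
            return
        except BlockingIOError as err:
            if loop.time() >= deadline:
                raise TimeoutError("lock not acquired in time") from err
            await asyncio.sleep(poll_interval)


async def main() -> None:
    with tempfile.NamedTemporaryFile() as f:
        await acquire_lock_async(f)  # uncontended: returns immediately
        fcntl.flock(f.fileno(), fcntl.LOCK_UN)


asyncio.run(main())
```

Closing the file releases the lock automatically, which is why `connect_with_lock` above only ever calls `lock_file.close()` in its `finally` block.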
@@ -11,8 +11,24 @@ Firmware field names are used directly (e.g., 'bat', 'nb_recv', 'battery_mv').
See docs/firmware-responses.md for the complete field reference.
"""

import re
from dataclasses import dataclass
from typing import Optional

TELEMETRY_METRIC_RE = re.compile(
    r"^telemetry\.([a-z0-9_]+)\.(\d+)(?:\.([a-z0-9_]+))?$"
)
TELEMETRY_EXCLUDED_SENSOR_TYPES = {"gps", "voltage"}
HPA_TO_INHG = 0.029529983071445
M_TO_FT = 3.280839895013123


@dataclass(frozen=True)
class TelemetryMetricParts:
    """Parsed telemetry metric parts."""

    sensor_type: str
    channel: int
    subkey: str | None = None


@dataclass(frozen=True)
@@ -30,7 +46,7 @@ class MetricConfig:
    unit: str
    type: str = "gauge"
    scale: float = 1.0
    transform: Optional[str] = None
    transform: str | None = None


# =============================================================================
@@ -206,24 +222,150 @@ REPEATER_CHART_METRICS = [
# Helper functions
# =============================================================================

def get_chart_metrics(role: str) -> list[str]:
def parse_telemetry_metric(metric: str) -> TelemetryMetricParts | None:
    """Parse telemetry metric key into its parts.

    Expected format: telemetry.<type>.<channel>[.<subkey>]
    """
    match = TELEMETRY_METRIC_RE.match(metric)
    if not match:
        return None
    sensor_type, channel_raw, subkey = match.groups()
    return TelemetryMetricParts(
        sensor_type=sensor_type,
        channel=int(channel_raw),
        subkey=subkey,
    )


def is_telemetry_metric(metric: str) -> bool:
    """Check if metric key is a telemetry metric."""
    return parse_telemetry_metric(metric) is not None


def _normalize_unit_system(unit_system: str) -> str:
    """Normalize unit system string to metric/imperial."""
    return unit_system if unit_system in ("metric", "imperial") else "metric"


def _humanize_token(token: str) -> str:
    """Convert snake_case token to display title, preserving common acronyms."""
    if token.lower() == "gps":
        return "GPS"
    return token.replace("_", " ").title()


def get_telemetry_metric_label(metric: str) -> str:
    """Get human-readable label for a telemetry metric key."""
    parts = parse_telemetry_metric(metric)
    if parts is None:
        return metric

    base = _humanize_token(parts.sensor_type)
    if parts.subkey:
        base = f"{base} {_humanize_token(parts.subkey)}"
    return f"{base} (CH{parts.channel})"


def get_telemetry_metric_unit(metric: str, unit_system: str = "metric") -> str:
    """Get telemetry unit based on metric type and selected unit system."""
    parts = parse_telemetry_metric(metric)
    if parts is None:
        return ""

    unit_system = _normalize_unit_system(unit_system)

    if parts.sensor_type == "temperature":
        return "°F" if unit_system == "imperial" else "°C"
    if parts.sensor_type == "humidity":
        return "%"
    if parts.sensor_type in ("barometer", "pressure"):
        return "inHg" if unit_system == "imperial" else "hPa"
    if parts.sensor_type == "altitude":
        return "ft" if unit_system == "imperial" else "m"
    return ""


def get_telemetry_metric_decimals(metric: str, unit_system: str = "metric") -> int:
    """Get display decimal precision for telemetry metrics."""
    parts = parse_telemetry_metric(metric)
    if parts is None:
        return 2

    unit_system = _normalize_unit_system(unit_system)

    if parts.sensor_type in ("temperature", "humidity", "altitude"):
        return 1
    if parts.sensor_type in ("barometer", "pressure"):
        return 2 if unit_system == "imperial" else 1
    return 2


def convert_telemetry_value(metric: str, value: float, unit_system: str = "metric") -> float:
    """Convert telemetry value to selected display unit system."""
    parts = parse_telemetry_metric(metric)
    if parts is None:
        return value

    unit_system = _normalize_unit_system(unit_system)
    if unit_system != "imperial":
        return value

    if parts.sensor_type == "temperature":
        return (value * 9.0 / 5.0) + 32.0
    if parts.sensor_type in ("barometer", "pressure"):
        return value * HPA_TO_INHG
    if parts.sensor_type == "altitude":
        return value * M_TO_FT
    return value


def discover_telemetry_chart_metrics(available_metrics: list[str]) -> list[str]:
    """Discover telemetry metrics to chart from available metric keys."""
    discovered: set[str] = set()
    for metric in available_metrics:
        parts = parse_telemetry_metric(metric)
        if parts is None:
            continue
        if parts.sensor_type in TELEMETRY_EXCLUDED_SENSOR_TYPES:
            continue
        discovered.add(metric)

    return sorted(
        discovered,
        key=lambda metric: (get_telemetry_metric_label(metric).lower(), metric),
    )


def get_chart_metrics(
    role: str,
    available_metrics: list[str] | None = None,
    telemetry_enabled: bool = False,
) -> list[str]:
    """Get list of metrics to chart for a role.

    Args:
        role: 'companion' or 'repeater'
        available_metrics: Optional list of available metrics for discovery
        telemetry_enabled: Whether telemetry charts should be included

    Returns:
        List of metric names in display order
    """
    if role == "companion":
        return COMPANION_CHART_METRICS
        return list(COMPANION_CHART_METRICS)
    elif role == "repeater":
        return REPEATER_CHART_METRICS
        metrics = list(REPEATER_CHART_METRICS)
        if telemetry_enabled and available_metrics:
            for metric in discover_telemetry_chart_metrics(available_metrics):
                if metric not in metrics:
                    metrics.append(metric)
        return metrics
    else:
        raise ValueError(f"Unknown role: {role}")


def get_metric_config(metric: str) -> Optional[MetricConfig]:
def get_metric_config(metric: str) -> MetricConfig | None:
    """Get configuration for a metric.

    Args:
@@ -274,20 +416,29 @@ def get_metric_label(metric: str) -> str:
        Display label or the metric name if not configured
    """
    config = METRIC_CONFIG.get(metric)
    return config.label if config else metric
    if config:
        return config.label
    if is_telemetry_metric(metric):
        return get_telemetry_metric_label(metric)
    return metric


def get_metric_unit(metric: str) -> str:
def get_metric_unit(metric: str, unit_system: str = "metric") -> str:
    """Get display unit for a metric.

    Args:
        metric: Firmware field name
        unit_system: Unit system for telemetry metrics ('metric' or 'imperial')

    Returns:
        Unit string or empty string if not configured
    """
    config = METRIC_CONFIG.get(metric)
    return config.unit if config else ""
    if config:
        return config.unit
    if is_telemetry_metric(metric):
        return get_telemetry_metric_unit(metric, unit_system)
    return ""


def transform_value(metric: str, value: float) -> float:
@@ -14,20 +14,14 @@ Metric names use firmware field names directly:
"""

import calendar
import json
from dataclasses import dataclass, field
from datetime import date, datetime, timedelta
from pathlib import Path
from typing import Any, Optional
from datetime import date, datetime
from typing import Any

from .db import get_connection, get_metrics_for_period, VALID_ROLES
from .env import get_config
from .db import VALID_ROLES, get_connection, get_metrics_for_period
from .metrics import (
    is_counter_metric,
    get_chart_metrics,
    transform_value,
)
from . import log


def _validate_role(role: str) -> str:
@@ -59,6 +53,32 @@ def get_metrics_for_role(role: str) -> list[str]:
    raise ValueError(f"Unknown role: {role}")


REPORT_UNITS_RAW = {
    "battery_mv": "mV",
    "bat": "mV",
    "bat_pct": "%",
    "uptime": "s",
    "uptime_secs": "s",
    "last_rssi": "dBm",
    "last_snr": "dB",
    "noise_floor": "dBm",
    "tx_queue_len": "count",
    "contacts": "count",
    "recv": "packets",
    "sent": "packets",
    "nb_recv": "packets",
    "nb_sent": "packets",
    "airtime": "s",
    "rx_airtime": "s",
    "flood_dups": "packets",
    "direct_dups": "packets",
    "sent_flood": "packets",
    "recv_flood": "packets",
    "sent_direct": "packets",
    "recv_direct": "packets",
}


@dataclass
class MetricStats:
    """Statistics for a single metric over a period.
@@ -67,12 +87,12 @@ class MetricStats:
    For counter metrics: total (sum of positive deltas), reboot_count.
    """

    mean: Optional[float] = None
    min_value: Optional[float] = None
    min_time: Optional[datetime] = None
    max_value: Optional[float] = None
    max_time: Optional[datetime] = None
    total: Optional[int] = None  # For counters: sum of positive deltas
    mean: float | None = None
    min_value: float | None = None
    min_time: datetime | None = None
    max_value: float | None = None
    max_time: datetime | None = None
    total: int | None = None  # For counters: sum of positive deltas
    count: int = 0
    reboot_count: int = 0  # Number of counter resets detected

@@ -156,7 +176,7 @@ def get_rows_for_date(role: str, d: date) -> list[dict[str, Any]]:

def compute_counter_total(
    values: list[tuple[datetime, int]],
) -> tuple[Optional[int], int]:
) -> tuple[int | None, int]:
    """Compute total for a counter metric, handling reboots.

    Sums positive deltas between consecutive readings. Negative deltas
@@ -290,8 +310,8 @@ def _aggregate_daily_gauge_to_summary(
    """
    total_sum = 0.0
    total_count = 0
    overall_min: Optional[tuple[float, datetime]] = None
    overall_max: Optional[tuple[float, datetime]] = None
    overall_min: tuple[float, datetime] | None = None
    overall_max: tuple[float, datetime] | None = None

    for daily in daily_list:
        if ds_name not in daily.metrics or not daily.metrics[ds_name].has_data:
@@ -305,14 +325,20 @@ def _aggregate_daily_gauge_to_summary(
        total_count += stats.count

        # Track overall min
        if stats.min_value is not None and stats.min_time is not None:
            if overall_min is None or stats.min_value < overall_min[0]:
                overall_min = (stats.min_value, stats.min_time)
        if (
            stats.min_value is not None
            and stats.min_time is not None
            and (overall_min is None or stats.min_value < overall_min[0])
        ):
            overall_min = (stats.min_value, stats.min_time)

        # Track overall max
        if stats.max_value is not None and stats.max_time is not None:
            if overall_max is None or stats.max_value > overall_max[0]:
                overall_max = (stats.max_value, stats.max_time)
        if (
            stats.max_value is not None
            and stats.max_time is not None
            and (overall_max is None or stats.max_value > overall_max[0])
        ):
            overall_max = (stats.max_value, stats.max_time)

    if total_count == 0:
        return MetricStats()
@@ -401,8 +427,8 @@ def _aggregate_monthly_gauge_to_summary(
    """Aggregate monthly gauge stats into a yearly summary."""
    total_sum = 0.0
    total_count = 0
    overall_min: Optional[tuple[float, datetime]] = None
    overall_max: Optional[tuple[float, datetime]] = None
    overall_min: tuple[float, datetime] | None = None
    overall_max: tuple[float, datetime] | None = None

    for monthly in monthly_list:
        if ds_name not in monthly.summary or not monthly.summary[ds_name].has_data:
@@ -414,13 +440,19 @@ def _aggregate_monthly_gauge_to_summary(
        total_sum += stats.mean * stats.count
        total_count += stats.count

        if stats.min_value is not None and stats.min_time is not None:
            if overall_min is None or stats.min_value < overall_min[0]:
                overall_min = (stats.min_value, stats.min_time)
        if (
            stats.min_value is not None
            and stats.min_time is not None
            and (overall_min is None or stats.min_value < overall_min[0])
        ):
            overall_min = (stats.min_value, stats.min_time)

        if stats.max_value is not None and stats.max_time is not None:
            if overall_max is None or stats.max_value > overall_max[0]:
                overall_max = (stats.max_value, stats.max_time)
        if (
            stats.max_value is not None
            and stats.max_time is not None
            and (overall_max is None or stats.max_value > overall_max[0])
        ):
            overall_max = (stats.max_value, stats.max_time)

    if total_count == 0:
        return MetricStats()
@@ -475,12 +507,18 @@ def aggregate_yearly(role: str, year: int) -> YearlyAggregate:
    """
    agg = YearlyAggregate(year=year, role=role)
    metrics = get_metrics_for_role(role)
    today = date.today()

    # Process month by month to limit memory usage
    for month in range(1, 13):
        # Don't aggregate future months
        if date(year, month, 1) > date.today():
            break
    periods = get_available_periods(role)
    months_with_data = sorted({month for y, month in periods if y == year})

    if year > today.year:
        months_with_data = []
    elif year == today.year:
        months_with_data = [month for month in months_with_data if month <= today.month]

    # Process only months that have data to avoid unnecessary daily scans.
    for month in months_with_data:
        monthly = aggregate_monthly(role, year, month)
        if monthly.daily:  # Has data
            agg.monthly.append(monthly)
@@ -603,28 +641,28 @@ class LocationInfo:
    )


def _fmt_val(val: Optional[float], width: int = 6, decimals: int = 1) -> str:
def _fmt_val(val: float | None, width: int = 6, decimals: int = 1) -> str:
    """Format a value with fixed width, or dashes if None."""
    if val is None:
        return "-".center(width)
    return f"{val:>{width}.{decimals}f}"


def _fmt_int(val: Optional[int], width: int = 6) -> str:
def _fmt_int(val: int | None, width: int = 6) -> str:
    """Format an integer with fixed width and comma separators, or dashes if None."""
    if val is None:
        return "-".center(width)
    return f"{val:>{width},}"


def _fmt_time(dt: Optional[datetime], fmt: str = "%H:%M") -> str:
def _fmt_time(dt: datetime | None, fmt: str = "%H:%M") -> str:
    """Format a datetime, or dashes if None."""
    if dt is None:
        return "--:--"
    return dt.strftime(fmt)


def _fmt_day(dt: Optional[datetime]) -> str:
def _fmt_day(dt: datetime | None) -> str:
    """Format datetime as day number, or dashes if None."""
    if dt is None:
        return "--"
@@ -648,10 +686,7 @@ class Column:
        if value is None:
            text = "-"
        elif isinstance(value, int):
            if self.comma_sep:
                text = f"{value:,}"
            else:
                text = str(value)
            text = f"{value:,}" if self.comma_sep else str(value)
        elif isinstance(value, float):
            text = f"{value:.{self.decimals}f}"
        else:
@@ -667,7 +702,7 @@ class Column:

def _format_row(columns: list[Column], values: list[Any]) -> str:
    """Format a row of values using column specs."""
    return "".join(col.format(val) for col, val in zip(columns, values))
    return "".join(col.format(val) for col, val in zip(columns, values, strict=False))


def _format_separator(columns: list[Column], char: str = "-") -> str:
@@ -685,10 +720,7 @@ def _get_bat_v(m: dict[str, MetricStats], role: str) -> MetricStats:
    Returns:
        MetricStats with values in volts
    """
    if role == "companion":
        bat = m.get("battery_mv", MetricStats())
    else:
        bat = m.get("bat", MetricStats())
    bat = m.get("battery_mv", MetricStats()) if role == "companion" else m.get("bat", MetricStats())

    if not bat.has_data:
        return bat
@@ -1116,10 +1148,14 @@ def format_yearly_txt(
        return format_yearly_txt_companion(agg, node_name, location)


def _metric_stats_to_dict(stats: MetricStats) -> dict[str, Any]:
def _metric_stats_to_dict(stats: MetricStats, metric: str) -> dict[str, Any]:
    """Convert MetricStats to JSON-serializable dict."""
    result: dict[str, Any] = {"count": stats.count}

    unit = REPORT_UNITS_RAW.get(metric)
    if unit:
        result["unit"] = unit

    if stats.mean is not None:
        result["mean"] = round(stats.mean, 4)
    if stats.min_value is not None:
@@ -1144,7 +1180,7 @@ def _daily_to_dict(daily: DailyAggregate) -> dict[str, Any]:
        "date": daily.date.isoformat(),
        "snapshot_count": daily.snapshot_count,
        "metrics": {
            ds: _metric_stats_to_dict(stats)
            ds: _metric_stats_to_dict(stats, ds)
            for ds, stats in daily.metrics.items()
            if stats.has_data
        },
@@ -1167,7 +1203,7 @@ def monthly_to_json(agg: MonthlyAggregate) -> dict[str, Any]:
        "role": agg.role,
        "days_with_data": len(agg.daily),
        "summary": {
            ds: _metric_stats_to_dict(stats)
            ds: _metric_stats_to_dict(stats, ds)
            for ds, stats in agg.summary.items()
            if stats.has_data
        },
@@ -1190,7 +1226,7 @@ def yearly_to_json(agg: YearlyAggregate) -> dict[str, Any]:
        "role": agg.role,
        "months_with_data": len(agg.monthly),
        "summary": {
            ds: _metric_stats_to_dict(stats)
            ds: _metric_stats_to_dict(stats, ds)
            for ds, stats in agg.summary.items()
            if stats.has_data
        },
@@ -1200,7 +1236,7 @@ def yearly_to_json(agg: YearlyAggregate) -> dict[str, Any]:
            "month": m.month,
            "days_with_data": len(m.daily),
            "summary": {
                ds: _metric_stats_to_dict(stats)
                ds: _metric_stats_to_dict(stats, ds)
                for ds, stats in m.summary.items()
                if stats.has_data
            },
@@ -3,11 +3,12 @@
|
||||
import asyncio
|
||||
import json
|
||||
import time
|
||||
from collections.abc import Callable, Coroutine
|
||||
from pathlib import Path
|
||||
from typing import Any, Callable, Coroutine, Optional, TypeVar
|
||||
from typing import Any, TypeVar
|
||||
|
||||
from .env import get_config
|
||||
from . import log
|
||||
from .env import get_config
|
||||
|
||||
T = TypeVar("T")
|
||||
|
||||
@@ -88,7 +89,7 @@ async def with_retries(
|
||||
attempts: int = 2,
|
||||
backoff_s: float = 4.0,
|
||||
name: str = "operation",
|
||||
) -> tuple[bool, Optional[T], Optional[Exception]]:
|
||||
) -> tuple[bool, T | None, Exception | None]:
|
||||
"""
|
||||
Execute async function with retries.
|
||||
|
||||
@@ -101,7 +102,7 @@ async def with_retries(
|
||||
Returns:
|
||||
(success, result, last_exception)
|
||||
"""
|
||||
last_exception: Optional[Exception] = None
|
||||
last_exception: Exception | None = None
|
||||
|
||||
for attempt in range(1, attempts + 1):
|
||||
try:
|
||||
|
||||
src/meshmon/telemetry.py (Normal file, 99 lines)
@@ -0,0 +1,99 @@
"""Telemetry data extraction from Cayenne LPP format."""
|
||||
|
||||
from typing import Any
|
||||
|
||||
from . import log
|
||||
|
||||
__all__ = ["extract_lpp_from_payload", "extract_telemetry_metrics"]
|
||||
|
||||
|
||||
def extract_lpp_from_payload(payload: Any) -> list | None:
|
||||
"""Extract LPP data list from telemetry payload.
|
||||
|
||||
Handles both formats returned by the MeshCore API:
|
||||
- Dict format: {'pubkey_pre': '...', 'lpp': [...]}
|
||||
- Direct list format: [...]
|
||||
|
||||
Args:
|
||||
payload: Raw telemetry payload from get_self_telemetry() or req_telemetry_sync()
|
||||
|
||||
Returns:
|
||||
The LPP data list, or None if not extractable.
|
||||
"""
|
||||
if payload is None:
|
||||
return None
|
||||
|
||||
if isinstance(payload, dict):
|
||||
lpp = payload.get("lpp")
|
||||
if lpp is None:
|
||||
log.debug("No 'lpp' key in telemetry payload dict")
|
||||
return None
|
||||
if not isinstance(lpp, list):
|
||||
log.debug(f"Unexpected LPP data type in payload: {type(lpp).__name__}")
|
||||
return None
|
||||
return lpp
|
||||
|
||||
if isinstance(payload, list):
|
||||
return payload
|
||||
|
||||
log.debug(f"Unexpected telemetry payload type: {type(payload).__name__}")
|
||||
return None
|
||||
|
||||
|
||||
def extract_telemetry_metrics(lpp_data: Any) -> dict[str, float]:
|
||||
"""Extract numeric telemetry values from Cayenne LPP response.
|
||||
|
||||
Expected format:
|
||||
[
|
||||
{"type": "temperature", "channel": 0, "value": 23.5},
|
||||
{"type": "gps", "channel": 1, "value": {"latitude": 51.5, "longitude": -0.1, "altitude": 10}}
|
||||
]
|
||||
|
||||
Keys are formatted as:
|
||||
- telemetry.{type}.{channel} for scalar values
|
||||
- telemetry.{type}.{channel}.{subkey} for compound values (e.g., GPS)
|
||||
|
||||
Returns:
|
||||
Dict mapping metric keys to float values. Invalid readings are skipped.
|
||||
"""
|
||||
if not isinstance(lpp_data, list):
|
||||
log.warn(f"Expected list for LPP data, got {type(lpp_data).__name__}")
|
||||
return {}
|
||||
|
||||
metrics: dict[str, float] = {}
|
||||
|
||||
for i, reading in enumerate(lpp_data):
|
||||
if not isinstance(reading, dict):
|
||||
log.debug(f"Skipping non-dict LPP reading at index {i}")
|
||||
continue
|
||||
|
||||
sensor_type = reading.get("type")
|
||||
if not isinstance(sensor_type, str) or not sensor_type.strip():
|
||||
log.debug(f"Skipping reading with invalid type at index {i}")
|
||||
continue
|
||||
|
||||
# Normalize sensor type for use as metric key component
|
||||
sensor_type = sensor_type.strip().lower().replace(" ", "_")
|
||||
|
||||
channel = reading.get("channel", 0)
|
||||
if not isinstance(channel, int):
|
||||
channel = 0
|
||||
|
||||
value = reading.get("value")
|
||||
base_key = f"telemetry.{sensor_type}.{channel}"
|
||||
|
||||
# Note: Check bool before int because bool is a subclass of int in Python.
|
||||
# Some sensors may report digital on/off values as booleans.
|
||||
if isinstance(value, (bool, int, float)):
|
||||
metrics[base_key] = float(value)
|
||||
elif isinstance(value, dict):
|
||||
for subkey, subval in value.items():
|
||||
if not isinstance(subkey, str):
|
||||
continue
|
||||
subkey_clean = subkey.strip().lower().replace(" ", "_")
|
||||
if not subkey_clean:
|
||||
continue
|
||||
if isinstance(subval, (bool, int, float)):
|
||||
metrics[f"{base_key}.{subkey_clean}"] = float(subval)
|
||||
|
||||
return metrics
|
||||
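The key scheme above ("telemetry.{type}.{channel}" with an optional ".{subkey}" for compound readings) can be exercised with a small standalone sketch. This is a trimmed re-implementation of `extract_telemetry_metrics` without the logging; the function name `extract_metrics` and the sample readings are invented for illustration:

```python
def extract_metrics(lpp_data):
    """Trimmed sketch of extract_telemetry_metrics: flatten LPP readings
    into 'telemetry.{type}.{channel}[.{subkey}]' -> float."""
    metrics = {}
    for reading in lpp_data:
        if not isinstance(reading, dict):
            continue
        sensor_type = reading.get("type")
        if not isinstance(sensor_type, str) or not sensor_type.strip():
            continue
        sensor_type = sensor_type.strip().lower().replace(" ", "_")
        channel = reading.get("channel", 0)
        if not isinstance(channel, int):
            channel = 0
        value = reading.get("value")
        base_key = f"telemetry.{sensor_type}.{channel}"
        if isinstance(value, (bool, int, float)):
            # bool is a subclass of int, so digital on/off readings coerce cleanly
            metrics[base_key] = float(value)
        elif isinstance(value, dict):
            # compound reading (e.g. GPS): one key per numeric sub-value
            for subkey, subval in value.items():
                if isinstance(subkey, str) and subkey.strip() and isinstance(subval, (bool, int, float)):
                    metrics[f"{base_key}.{subkey.strip().lower().replace(' ', '_')}"] = float(subval)
    return metrics

sample = [
    {"type": "temperature", "channel": 0, "value": 23.5},
    {"type": "gps", "channel": 1, "value": {"latitude": 51.5, "longitude": -0.1}},
]
print(extract_metrics(sample))
# {'telemetry.temperature.0': 23.5, 'telemetry.gps.1.latitude': 51.5, 'telemetry.gps.1.longitude': -0.1}
```

Invalid readings (non-dict entries, missing or blank types, non-numeric values) simply drop out, which matches the "Invalid readings are skipped" contract of the docstring.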
@@ -1,5 +1,5 @@
 <!DOCTYPE html>
-<html lang="en">
+<html lang="en" data-unit-system="{{ display_unit_system | default('metric') }}">
 <head>
 <meta charset="UTF-8">
 <meta name="viewport" content="width=device-width, initial-scale=1.0">

@@ -1,142 +1,424 @@
/**
 * Chart tooltip enhancement for MeshCore Stats
 * Chart Tooltip Enhancement for MeshCore Stats
 *
 * Progressive enhancement: charts work fully without JS,
 * but this adds interactive tooltips on hover.
 * Progressive enhancement: charts display fully without JavaScript.
 * This module adds interactive tooltips showing datetime and value on hover,
 * with an indicator dot that follows the data line.
 *
 * Data sources:
 * - Data points: path.dataset.points or svg.dataset.points (JSON array of {ts, v})
 * - Time range: svg.dataset.xStart, svg.dataset.xEnd (Unix timestamps)
 * - Value range: svg.dataset.yMin, svg.dataset.yMax
 * - Plot bounds: Derived from clipPath rect or line path bounding box
 */
(function() {
(function () {
  'use strict';

  // Create tooltip element
  const tooltip = document.createElement('div');
  tooltip.className = 'chart-tooltip';
  tooltip.innerHTML = '<div class="tooltip-time"></div><div class="tooltip-value"></div>';
  document.body.appendChild(tooltip);
  // ============================================================================
  // Configuration
  // ============================================================================

  const tooltipTime = tooltip.querySelector('.tooltip-time');
  const tooltipValue = tooltip.querySelector('.tooltip-value');

  // Track the current indicator element
  let currentIndicator = null;
  let currentSvg = null;

  // Metric display labels and units (using firmware field names)
  const metricLabels = {
    // Companion metrics
    'battery_mv': { label: 'Voltage', unit: 'V', decimals: 2 },
    'uptime_secs': { label: 'Uptime', unit: 'days', decimals: 2 },
    'contacts': { label: 'Contacts', unit: '', decimals: 0 },
    'recv': { label: 'Received', unit: '/min', decimals: 1 },
    'sent': { label: 'Sent', unit: '/min', decimals: 1 },

    // Repeater metrics
    'bat': { label: 'Voltage', unit: 'V', decimals: 2 },
    'bat_pct': { label: 'Charge', unit: '%', decimals: 0 },
    'uptime': { label: 'Uptime', unit: 'days', decimals: 2 },
    'last_rssi': { label: 'RSSI', unit: 'dBm', decimals: 0 },
    'last_snr': { label: 'SNR', unit: 'dB', decimals: 1 },
    'noise_floor': { label: 'Noise', unit: 'dBm', decimals: 0 },
    'tx_queue_len': { label: 'Queue', unit: '', decimals: 0 },
    'nb_recv': { label: 'Received', unit: '/min', decimals: 1 },
    'nb_sent': { label: 'Sent', unit: '/min', decimals: 1 },
    'airtime': { label: 'TX Air', unit: 's/min', decimals: 2 },
    'rx_airtime': { label: 'RX Air', unit: 's/min', decimals: 2 },
    'flood_dups': { label: 'Dropped', unit: '/min', decimals: 1 },
    'direct_dups': { label: 'Dropped', unit: '/min', decimals: 1 },
    'sent_flood': { label: 'Sent', unit: '/min', decimals: 1 },
    'recv_flood': { label: 'Received', unit: '/min', decimals: 1 },
    'sent_direct': { label: 'Sent', unit: '/min', decimals: 1 },
    'recv_direct': { label: 'Received', unit: '/min', decimals: 1 },
  var CONFIG = {
    tooltipOffset: 15,
    viewportPadding: 10,
    indicatorRadius: 5,
    indicatorStrokeWidth: 2,
    colors: {
      light: { fill: '#b45309', stroke: '#ffffff' },
      dark: { fill: '#f59e0b', stroke: '#0f1114' }
    }
  };
  var UNIT_SYSTEM =
    (document.documentElement &&
      document.documentElement.dataset &&
      document.documentElement.dataset.unitSystem) ||
    'metric';
  if (UNIT_SYSTEM !== 'imperial') {
    UNIT_SYSTEM = 'metric';
  }

  var TELEMETRY_REGEX = /^telemetry\.([a-z0-9_]+)\.(\d+)(?:\.([a-z0-9_]+))?$/;

  /**
   * Format a timestamp as a readable date/time string
   * Metric display configuration keyed by firmware field name.
   * Each entry defines how to format values for that metric.
   */
  function formatTime(ts, period) {
    const date = new Date(ts * 1000);
    const options = {
  var METRIC_CONFIG = {
    // Companion metrics
    battery_mv: { label: 'Voltage', unit: 'V', decimals: 2 },
    uptime_secs: { label: 'Uptime', unit: 'days', decimals: 2 },
    contacts: { label: 'Contacts', unit: '', decimals: 0 },
    recv: { label: 'Received', unit: '/min', decimals: 1 },
    sent: { label: 'Sent', unit: '/min', decimals: 1 },

    // Repeater metrics
    bat: { label: 'Voltage', unit: 'V', decimals: 2 },
    bat_pct: { label: 'Charge', unit: '%', decimals: 0 },
    uptime: { label: 'Uptime', unit: 'days', decimals: 2 },
    last_rssi: { label: 'RSSI', unit: 'dBm', decimals: 0 },
    last_snr: { label: 'SNR', unit: 'dB', decimals: 1 },
    noise_floor: { label: 'Noise', unit: 'dBm', decimals: 0 },
    tx_queue_len: { label: 'Queue', unit: '', decimals: 0 },
    nb_recv: { label: 'Received', unit: '/min', decimals: 1 },
    nb_sent: { label: 'Sent', unit: '/min', decimals: 1 },
    airtime: { label: 'TX Air', unit: 's/min', decimals: 2 },
    rx_airtime: { label: 'RX Air', unit: 's/min', decimals: 2 },
    flood_dups: { label: 'Dropped', unit: '/min', decimals: 1 },
    direct_dups: { label: 'Dropped', unit: '/min', decimals: 1 },
    sent_flood: { label: 'Sent', unit: '/min', decimals: 1 },
    recv_flood: { label: 'Received', unit: '/min', decimals: 1 },
    sent_direct: { label: 'Sent', unit: '/min', decimals: 1 },
    recv_direct: { label: 'Received', unit: '/min', decimals: 1 }
  };

  // ============================================================================
  // Formatting Utilities
  // ============================================================================

  function parseTelemetryMetric(metric) {
    var match = TELEMETRY_REGEX.exec(metric);
    if (!match) {
      return null;
    }

    return {
      sensorType: match[1],
      channel: parseInt(match[2], 10),
      subkey: match[3] || null
    };
  }

  function humanizeToken(token) {
    if (!token) {
      return '';
    }
    if (token.toLowerCase() === 'gps') {
      return 'GPS';
    }
    return token
      .split('_')
      .map(function (part) {
        if (!part) {
          return '';
        }
        return part.charAt(0).toUpperCase() + part.slice(1);
      })
      .join(' ');
  }

  function getTelemetryLabel(metric) {
    var telemetry = parseTelemetryMetric(metric);
    if (!telemetry) {
      return metric;
    }
    var label = humanizeToken(telemetry.sensorType);
    if (telemetry.subkey) {
      label += ' ' + humanizeToken(telemetry.subkey);
    }
    return label + ' (CH' + telemetry.channel + ')';
  }

  function getTelemetryFormat(sensorType, unitSystem) {
    if (sensorType === 'temperature') {
      return { unit: unitSystem === 'imperial' ? '\u00B0F' : '\u00B0C', decimals: 1 };
    }
    if (sensorType === 'humidity') {
      return { unit: '%', decimals: 1 };
    }
    if (sensorType === 'barometer' || sensorType === 'pressure') {
      return {
        unit: unitSystem === 'imperial' ? 'inHg' : 'hPa',
        decimals: unitSystem === 'imperial' ? 2 : 1
      };
    }
    if (sensorType === 'altitude') {
      return { unit: unitSystem === 'imperial' ? 'ft' : 'm', decimals: 1 };
    }
    return { unit: '', decimals: 2 };
  }

  /**
   * Format a Unix timestamp as a localized date/time string.
   * Uses browser language preference for locale (determines 12/24 hour format).
   * Includes year only for year-period charts.
   */
  function formatTimestamp(timestamp, period) {
    var date = new Date(timestamp * 1000);
    var options = {
      month: 'short',
      day: 'numeric',
      hour: '2-digit',
      minute: '2-digit'
      hour: 'numeric',
      minute: '2-digit',
      timeZoneName: 'short'
    };

    // For year view, include year
    if (period === 'year') {
      options.year = 'numeric';
    }

    return date.toLocaleString(undefined, options);
    // Use browser's language preference (navigator.language), not system locale
    // Empty array [] or undefined would use OS regional settings instead
    return date.toLocaleString(navigator.language, options);
  }

  /**
   * Format a value with appropriate decimals and unit
   * Format a numeric value with the appropriate decimals and unit for a metric.
   */
  function formatValue(value, metric) {
    const config = metricLabels[metric] || { label: metric, unit: '', decimals: 2 };
    const formatted = value.toFixed(config.decimals);
    return `${formatted}${config.unit ? ' ' + config.unit : ''}`;
  function formatMetricValue(value, metric) {
    var telemetry = parseTelemetryMetric(metric);
    if (telemetry) {
      var telemetryFormat = getTelemetryFormat(telemetry.sensorType, UNIT_SYSTEM);
      var telemetryFormatted = value.toFixed(telemetryFormat.decimals);
      return telemetryFormat.unit
        ? telemetryFormatted + ' ' + telemetryFormat.unit
        : telemetryFormatted;
    }

    var config = METRIC_CONFIG[metric] || { label: metric, unit: '', decimals: 2 };
    var formatted = value.toFixed(config.decimals);
    return config.unit ? formatted + ' ' + config.unit : formatted;
  }

  function getMetricLabel(metric) {
    var telemetry = parseTelemetryMetric(metric);
    if (telemetry) {
      return getTelemetryLabel(metric);
    }
    var config = METRIC_CONFIG[metric];
    if (config && config.label) {
      return config.label;
    }
    return metric;
  }

  // ============================================================================
  // Data Point Utilities
  // ============================================================================

  /**
   * Find the closest data point to a timestamp, returning index too
   * Find the data point closest to the target timestamp.
   * Returns the point object or null if no points available.
   */
  function findClosestPoint(dataPoints, targetTs) {
    if (!dataPoints || dataPoints.length === 0) return null;
  function findClosestDataPoint(dataPoints, targetTimestamp) {
    if (!dataPoints || dataPoints.length === 0) {
      return null;
    }

    let closestIdx = 0;
    let minDiff = Math.abs(dataPoints[0].ts - targetTs);
    var closest = dataPoints[0];
    var minDiff = Math.abs(closest.ts - targetTimestamp);

    for (let i = 1; i < dataPoints.length; i++) {
      const diff = Math.abs(dataPoints[i].ts - targetTs);
    for (var i = 1; i < dataPoints.length; i++) {
      var diff = Math.abs(dataPoints[i].ts - targetTimestamp);
      if (diff < minDiff) {
        minDiff = diff;
        closestIdx = i;
        closest = dataPoints[i];
      }
    }

    return { point: dataPoints[closestIdx], index: closestIdx };
    return closest;
  }

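The nearest-point logic above is a plain linear scan for the minimum absolute timestamp difference, with ties going to the earlier point. A hedged Python analogue (the name `find_closest` and the sample points are invented for illustration):

```python
def find_closest(points, target_ts):
    # Same behavior as findClosestDataPoint: scan for the minimum
    # absolute timestamp difference; ties keep the earlier point,
    # because min() returns the first of equal keys.
    if not points:
        return None
    return min(points, key=lambda p: abs(p["ts"] - target_ts))

points = [{"ts": 100, "v": 1.0}, {"ts": 160, "v": 2.0}, {"ts": 220, "v": 3.0}]
print(find_closest(points, 150))  # {'ts': 160, 'v': 2.0}
```

A linear scan is fine here because each chart carries at most a few hundred points; with larger series a bisect on the sorted timestamps would be the usual optimization.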
  /**
   * Create or get the indicator circle for an SVG
   * Parse and cache data points on an SVG element.
   * Handles HTML entity encoding from server-side JSON embedding.
   */
  function getDataPoints(svg, rawJson) {
    if (svg._dataPoints) {
      return svg._dataPoints;
    }

    try {
      var json = rawJson.replace(/&quot;/g, '"');
      svg._dataPoints = JSON.parse(json);
      return svg._dataPoints;
    } catch (error) {
      console.warn('Chart tooltip: failed to parse data points', error);
      return null;
    }
  }

  // ============================================================================
  // SVG Coordinate Utilities
  // ============================================================================

  /**
   * Get and cache the plot area bounds for an SVG chart.
   * Prefers the clip path rect (defines full plot area) over line path bbox
   * (which only covers the actual data range).
   */
  function getPlotAreaBounds(svg, fallbackPath) {
    if (svg._plotArea) {
      return svg._plotArea;
    }

    var clipRect = svg.querySelector('clipPath rect');
    if (clipRect) {
      svg._plotArea = {
        x: parseFloat(clipRect.getAttribute('x')),
        y: parseFloat(clipRect.getAttribute('y')),
        width: parseFloat(clipRect.getAttribute('width')),
        height: parseFloat(clipRect.getAttribute('height'))
      };
    } else if (fallbackPath) {
      svg._plotArea = fallbackPath.getBBox();
    }

    return svg._plotArea;
  }

  /**
   * Find the chart line path element within an SVG.
   * Tries multiple selectors for compatibility with different SVG structures.
   */
  function findLinePath(svg) {
    return (
      svg.querySelector('#chart-line path') ||
      svg.querySelector('path#chart-line') ||
      svg.querySelector('[gid="chart-line"] path') ||
      svg.querySelector('path[gid="chart-line"]') ||
      svg.querySelector('path[data-points]')
    );
  }

  /**
   * Convert a screen X coordinate to SVG coordinate space.
   */
  function screenToSvgX(svg, clientX) {
    var svgRect = svg.getBoundingClientRect();
    var viewBox = svg.viewBox.baseVal;
    var scale = viewBox.width / svgRect.width;
    return (clientX - svgRect.left) * scale + viewBox.x;
  }

  /**
   * Map a timestamp to an X coordinate within the plot area.
   */
  function timestampToX(timestamp, xStart, xEnd, plotArea) {
    var relativePosition = (timestamp - xStart) / (xEnd - xStart);
    return plotArea.x + relativePosition * plotArea.width;
  }

  /**
   * Map a value to a Y coordinate within the plot area.
   * SVG Y-axis is inverted (0 at top), so higher values map to lower Y.
   */
  function valueToY(value, yMin, yMax, plotArea) {
    var ySpan = yMax - yMin || 1;
    var relativePosition = (value - yMin) / ySpan;
    return plotArea.y + plotArea.height - relativePosition * plotArea.height;
  }

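The two mappings above are plain linear interpolation, with the Y mapping flipped because SVG's y axis grows downward. A sketch of the Y mapping in Python (the function name `value_to_y` and the plot-area numbers are invented; the `or 1` guard mirrors the `|| 1` division-by-zero protection in valueToY):

```python
def value_to_y(value, y_min, y_max, plot_y, plot_h):
    # Mirrors valueToY: normalize value into [0, 1] over the data range,
    # then flip, since y_min must land on the bottom edge of the plot.
    span = (y_max - y_min) or 1  # avoid division by zero on flat series
    rel = (value - y_min) / span
    return plot_y + plot_h - rel * plot_h

# Plot area spanning y = 20..120 (height 100), data range 3.0..4.0 volts:
print(value_to_y(3.0, 3.0, 4.0, 20, 100))  # 120.0 (bottom edge)
print(value_to_y(4.0, 3.0, 4.0, 20, 100))  # 20.0 (top edge)
```

timestampToX is the same interpolation without the flip: x grows rightward, so the start of the time range maps directly onto the left edge of the plot area.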
  // ============================================================================
  // Tooltip Element
  // ============================================================================

  var tooltip = null;
  var tooltipTimeEl = null;
  var tooltipValueEl = null;

  /**
   * Create the tooltip DOM element (called once on init).
   */
  function createTooltipElement() {
    tooltip = document.createElement('div');
    tooltip.className = 'chart-tooltip';
    tooltip.innerHTML =
      '<div class="tooltip-time"></div>' + '<div class="tooltip-value"></div>';
    document.body.appendChild(tooltip);

    tooltipTimeEl = tooltip.querySelector('.tooltip-time');
    tooltipValueEl = tooltip.querySelector('.tooltip-value');
  }

  /**
   * Update tooltip content and position it near the cursor.
   */
  function showTooltip(event, timeText, valueText) {
    tooltipTimeEl.textContent = timeText;
    tooltipValueEl.textContent = valueText;

    var left = event.pageX + CONFIG.tooltipOffset;
    var top = event.pageY + CONFIG.tooltipOffset;

    // Keep tooltip within viewport
    var rect = tooltip.getBoundingClientRect();
    if (left + rect.width > window.innerWidth - CONFIG.viewportPadding) {
      left = event.pageX - rect.width - CONFIG.tooltipOffset;
    }
    if (top + rect.height > window.innerHeight - CONFIG.viewportPadding) {
      top = event.pageY - rect.height - CONFIG.tooltipOffset;
    }

    tooltip.style.left = left + 'px';
    tooltip.style.top = top + 'px';
    tooltip.classList.add('visible');
  }

  /**
   * Hide the tooltip.
   */
  function hideTooltip() {
    tooltip.classList.remove('visible');
  }

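The viewport-clamping logic in showTooltip prefers placing the tooltip to the lower right of the cursor and flips it to the other side when it would overflow. A simplified one-axis sketch in Python (the name `place_tooltip` is invented; this ignores the page vs client coordinate distinction that matters once the page is scrolled):

```python
def place_tooltip(page_x, tip_w, viewport_w, offset=15, padding=10):
    # Mirrors showTooltip's horizontal logic: prefer the right of the
    # cursor, flip to the left when the tooltip would leave the viewport.
    left = page_x + offset
    if left + tip_w > viewport_w - padding:
        left = page_x - tip_w - offset
    return left

print(place_tooltip(100, 200, 1000))  # 115 (fits to the right)
print(place_tooltip(900, 200, 1000))  # 685 (flipped to the left)
```

The same check runs independently for the vertical axis, so a cursor near the bottom-right corner gets the tooltip above and to the left.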
// ============================================================================
|
||||
// Indicator Dot
|
||||
// ============================================================================
|
||||
|
||||
var currentIndicator = null;
|
||||
var currentIndicatorSvg = null;
|
||||
|
||||
/**
|
||||
* Get or create the indicator circle for an SVG chart.
|
||||
* Reuses existing indicator if still on the same chart.
|
||||
*/
|
||||
function getIndicator(svg) {
|
||||
if (currentSvg === svg && currentIndicator) {
|
||||
if (currentIndicatorSvg === svg && currentIndicator) {
|
||||
return currentIndicator;
|
||||
}
|
||||
|
||||
// Remove old indicator if switching charts
|
||||
// Remove indicator from previous chart
|
||||
if (currentIndicator && currentIndicator.parentNode) {
|
||||
currentIndicator.parentNode.removeChild(currentIndicator);
|
||||
}
|
||||
|
||||
// Create new indicator as an SVG circle
|
||||
const indicator = document.createElementNS('http://www.w3.org/2000/svg', 'circle');
|
||||
indicator.setAttribute('r', '5');
|
||||
// Create new indicator circle
|
||||
var indicator = document.createElementNS(
|
||||
'http://www.w3.org/2000/svg',
|
||||
'circle'
|
||||
);
|
||||
indicator.setAttribute('r', CONFIG.indicatorRadius);
|
||||
indicator.setAttribute('class', 'chart-indicator');
|
||||
indicator.setAttribute('stroke-width', CONFIG.indicatorStrokeWidth);
|
||||
indicator.style.pointerEvents = 'none';
|
||||
|
||||
// Get theme from SVG data attribute for color
|
||||
const theme = svg.dataset.theme;
|
||||
if (theme === 'dark') {
|
||||
indicator.setAttribute('fill', '#f59e0b');
|
||||
indicator.setAttribute('stroke', '#0f1114');
|
||||
} else {
|
||||
indicator.setAttribute('fill', '#b45309');
|
||||
indicator.setAttribute('stroke', '#ffffff');
|
||||
}
|
||||
indicator.setAttribute('stroke-width', '2');
|
||||
// Apply theme-appropriate colors
|
||||
var theme = svg.dataset.theme === 'dark' ? 'dark' : 'light';
|
||||
indicator.setAttribute('fill', CONFIG.colors[theme].fill);
|
||||
indicator.setAttribute('stroke', CONFIG.colors[theme].stroke);
|
||||
|
||||
svg.appendChild(indicator);
|
||||
currentIndicator = indicator;
|
||||
currentSvg = svg;
|
||||
currentIndicatorSvg = svg;
|
||||
|
||||
return indicator;
|
||||
}
|
||||
|
||||
/**
|
||||
* Hide and clean up the indicator
|
||||
* Position the indicator at a specific data point.
|
||||
*/
|
||||
function positionIndicator(svg, dataPoint, xStart, xEnd, yMin, yMax, plotArea) {
|
||||
var indicator = getIndicator(svg);
|
||||
var x = timestampToX(dataPoint.ts, xStart, xEnd, plotArea);
|
||||
var y = valueToY(dataPoint.v, yMin, yMax, plotArea);
|
||||
|
||||
indicator.setAttribute('cx', x);
|
||||
indicator.setAttribute('cy', y);
|
||||
indicator.style.display = '';
|
||||
}
|
||||
|
||||
/**
|
||||
* Hide the indicator dot.
|
||||
*/
|
||||
function hideIndicator() {
|
||||
if (currentIndicator) {
|
||||
@@ -144,185 +426,137 @@
|
||||
}
|
||||
}
|
||||
|
||||
// ============================================================================
|
||||
// Event Handlers
|
||||
// ============================================================================
|
||||
|
||||
/**
|
||||
* Position tooltip near the mouse cursor
|
||||
* Convert a touch event to a mouse-like event object.
|
||||
*/
|
||||
function positionTooltip(event) {
|
||||
const offset = 15;
|
||||
let left = event.pageX + offset;
|
||||
let top = event.pageY + offset;
|
||||
|
||||
// Keep tooltip on screen
|
||||
const rect = tooltip.getBoundingClientRect();
|
||||
const viewportWidth = window.innerWidth;
|
||||
const viewportHeight = window.innerHeight;
|
||||
|
||||
if (left + rect.width > viewportWidth - 10) {
|
||||
left = event.pageX - rect.width - offset;
|
||||
}
|
||||
if (top + rect.height > viewportHeight - 10) {
|
||||
top = event.pageY - rect.height - offset;
|
||||
}
|
||||
|
||||
tooltip.style.left = left + 'px';
|
||||
tooltip.style.top = top + 'px';
|
||||
function touchToMouseEvent(touchEvent) {
|
||||
var touch = touchEvent.touches[0];
|
||||
return {
|
||||
currentTarget: touchEvent.currentTarget,
|
||||
clientX: touch.clientX,
|
||||
clientY: touch.clientY,
|
||||
pageX: touch.pageX,
|
||||
pageY: touch.pageY
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Handle mouse move over chart SVG
|
||||
* Handle pointer movement over a chart (mouse or touch).
|
||||
* Finds the closest data point and updates tooltip and indicator.
|
||||
*/
|
||||
function handleMouseMove(event) {
|
||||
const svg = event.currentTarget;
|
||||
const metric = svg.dataset.metric;
|
||||
const period = svg.dataset.period;
|
||||
const xStart = parseInt(svg.dataset.xStart, 10);
|
||||
const xEnd = parseInt(svg.dataset.xEnd, 10);
|
||||
const yMin = parseFloat(svg.dataset.yMin);
|
||||
const yMax = parseFloat(svg.dataset.yMax);
|
||||
function handlePointerMove(event) {
|
||||
var svg = event.currentTarget;
|
||||
|
||||
// Find the path with data-points
|
||||
const path = svg.querySelector('path[data-points]');
|
||||
if (!path) return;
|
||||
// Extract chart metadata
|
||||
var metric = svg.dataset.metric;
|
||||
var period = svg.dataset.period;
|
||||
var xStart = parseInt(svg.dataset.xStart, 10);
|
||||
var xEnd = parseInt(svg.dataset.xEnd, 10);
|
||||
var yMin = parseFloat(svg.dataset.yMin);
|
||||
var yMax = parseFloat(svg.dataset.yMax);
|
||||
|
||||
// Parse and cache data points and path coordinates on first access
|
||||
if (!path._dataPoints) {
|
||||
try {
|
||||
const json = path.dataset.points.replace(/"/g, '"');
|
||||
path._dataPoints = JSON.parse(json);
|
||||
} catch (e) {
|
||||
console.warn('Failed to parse chart data:', e);
|
||||
return;
|
||||
}
|
||||
// Find the line path and data points source
|
||||
var linePath = findLinePath(svg);
|
||||
if (!linePath) {
|
||||
return;
|
||||
}
|
||||
|
||||
// Cache the path's bounding box for coordinate mapping
|
||||
if (!path._pathBox) {
|
||||
path._pathBox = path.getBBox();
|
||||
var rawPoints = linePath.dataset.points || svg.dataset.points;
|
||||
if (!rawPoints) {
|
||||
return;
|
||||
}
|
||||
|
||||
const pathBox = path._pathBox;
|
||||
// Parse data points (cached on svg element)
|
||||
var dataPoints = getDataPoints(svg, rawPoints);
|
||||
if (!dataPoints) {
|
||||
return;
|
||||
}
|
||||
|
||||
// Get mouse position in SVG coordinate space
|
||||
const svgRect = svg.getBoundingClientRect();
|
||||
const viewBox = svg.viewBox.baseVal;
|
||||
// Get plot area bounds (cached on svg element)
|
||||
var plotArea = getPlotAreaBounds(svg, linePath);
|
||||
if (!plotArea) {
|
||||
return;
|
||||
}
|
||||
|
||||
// Convert screen X coordinate to SVG coordinate
|
||||
const scaleX = viewBox.width / svgRect.width;
|
||||
const svgX = (event.clientX - svgRect.left) * scaleX + viewBox.x;
|
||||
// Convert screen position to timestamp
|
||||
var svgX = screenToSvgX(svg, event.clientX);
|
||||
var relativeX = Math.max(0, Math.min(1, (svgX - plotArea.x) / plotArea.width));
|
||||
var targetTimestamp = xStart + relativeX * (xEnd - xStart);
|
||||
|
||||
// Calculate relative X position within the plot area (pathBox)
|
||||
const relX = (svgX - pathBox.x) / pathBox.width;
|
||||
// Find and display closest data point
|
||||
var closestPoint = findClosestDataPoint(dataPoints, targetTimestamp);
|
||||
if (!closestPoint) {
|
||||
return;
|
||||
}
|
||||
|
||||
// Clamp to plot area bounds
|
||||
const clampedRelX = Math.max(0, Math.min(1, relX));
|
||||
showTooltip(
|
||||
event,
|
||||
formatTimestamp(closestPoint.ts, period),
|
||||
getMetricLabel(metric) + ': ' + formatMetricValue(closestPoint.v, metric)
|
||||
);
|
||||
|
||||
// Map relative X position to timestamp using the chart's X-axis range
|
||||
const targetTs = xStart + clampedRelX * (xEnd - xStart);
|
||||
|
||||
// Find closest data point by timestamp
|
||||
const result = findClosestPoint(path._dataPoints, targetTs);
|
||||
if (!result) return;
|
||||
|
||||
const { point } = result;
|
||||
|
||||
// Update tooltip content
|
||||
tooltipTime.textContent = formatTime(point.ts, period);
|
||||
tooltipValue.textContent = formatValue(point.v, metric);
|
||||
|
||||
// Position and show tooltip
|
||||
positionTooltip(event);
|
||||
tooltip.classList.add('visible');
|
||||
|
||||
// Position the indicator at the data point
|
||||
const indicator = getIndicator(svg);
|
||||
|
||||
// Calculate X position: map timestamp to path coordinate space
|
||||
const pointRelX = (point.ts - xStart) / (xEnd - xStart);
|
||||
const indicatorX = pathBox.x + pointRelX * pathBox.width;
|
||||
|
||||
// Calculate Y position using the actual Y-axis range from the chart
|
||||
const ySpan = yMax - yMin || 1;
|
||||
// Y is inverted in SVG (0 at top)
|
||||
const pointRelY = 1 - (point.v - yMin) / ySpan;
|
||||
const indicatorY = pathBox.y + pointRelY * pathBox.height;
|
||||
|
||||
indicator.setAttribute('cx', indicatorX);
|
||||
indicator.setAttribute('cy', indicatorY);
|
||||
indicator.style.display = '';
|
||||
positionIndicator(svg, closestPoint, xStart, xEnd, yMin, yMax, plotArea);
|
||||
}
|
||||
    /**
     * Handle pointer leaving the chart area.
     */
    function handlePointerLeave() {
        hideTooltip();
        hideIndicator();
    }

    /**
     * Handle touch start event.
     */
    function handleTouchStart(event) {
        handlePointerMove(touchToMouseEvent(event));
    }

    /**
     * Handle touch move event.
     */
    function handleTouchMove(event) {
        handlePointerMove(touchToMouseEvent(event));
    }

    // ============================================================================
    // Initialization
    // ============================================================================

    /**
     * Attach event listeners to all chart SVG elements.
     */
    function initializeChartTooltips() {
        createTooltipElement();

        var chartSvgs = document.querySelectorAll('svg[data-metric][data-period]');

        chartSvgs.forEach(function (svg) {
            // Desktop mouse events
            svg.addEventListener('mousemove', handlePointerMove);
            svg.addEventListener('mouseleave', handlePointerLeave);

            // Mobile touch events
            svg.addEventListener('touchstart', handleTouchStart, { passive: true });
            svg.addEventListener('touchmove', handleTouchMove, { passive: true });
            svg.addEventListener('touchend', handlePointerLeave);
            svg.addEventListener('touchcancel', handlePointerLeave);

            // Visual affordance for interactivity
            svg.style.cursor = 'crosshair';

            // Allow vertical scrolling but prevent horizontal pan on mobile
            svg.style.touchAction = 'pan-y';
        });
    }

    // Run initialization when DOM is ready
    if (document.readyState === 'loading') {
        document.addEventListener('DOMContentLoaded', initializeChartTooltips);
    } else {
        initializeChartTooltips();
    }
})();
@@ -113,10 +113,10 @@
 <main class="main-content">
     <!-- Period Navigation -->
     <nav class="period-nav">
-        <a href="{{ base_path }}/day.html"{% if period == 'day' %} class="active"{% endif %}>Day</a>
-        <a href="{{ base_path }}/week.html"{% if period == 'week' %} class="active"{% endif %}>Week</a>
-        <a href="{{ base_path }}/month.html"{% if period == 'month' %} class="active"{% endif %}>Month</a>
-        <a href="{{ base_path }}/year.html"{% if period == 'year' %} class="active"{% endif %}>Year</a>
+        <a href="{{ base_path }}day.html"{% if period == 'day' %} class="active"{% endif %}>Day</a>
+        <a href="{{ base_path }}week.html"{% if period == 'week' %} class="active"{% endif %}>Week</a>
+        <a href="{{ base_path }}month.html"{% if period == 'month' %} class="active"{% endif %}>Month</a>
+        <a href="{{ base_path }}year.html"{% if period == 'year' %} class="active"{% endif %}>Year</a>
     </nav>

     <header class="page-header">
4329 test_review/tests.md Normal file
File diff suppressed because it is too large

1 tests/__init__.py Normal file
@@ -0,0 +1 @@
"""Test suite for meshcore-stats."""

1 tests/charts/__init__.py Normal file
@@ -0,0 +1 @@
"""Tests for chart rendering."""

339 tests/charts/conftest.py Normal file
@@ -0,0 +1,339 @@
"""Fixtures for chart tests."""
|
||||
|
||||
import json
|
||||
import re
|
||||
from datetime import UTC, datetime, timedelta
|
||||
from pathlib import Path
|
||||
|
||||
import pytest
|
||||
|
||||
from meshmon.charts import (
|
||||
CHART_THEMES,
|
||||
DataPoint,
|
||||
TimeSeries,
|
||||
)
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def light_theme():
|
||||
"""Light chart theme."""
|
||||
return CHART_THEMES["light"]
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def dark_theme():
|
||||
"""Dark chart theme."""
|
||||
return CHART_THEMES["dark"]
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def sample_timeseries():
|
||||
"""Sample time series with 24 hours of data."""
|
||||
now = datetime.now()
|
||||
points = []
|
||||
for i in range(24):
|
||||
ts = now - timedelta(hours=23 - i)
|
||||
# Simulate battery voltage pattern (higher during day, lower at night)
|
||||
value = 3.7 + 0.3 * abs(12 - i) / 12
|
||||
points.append(DataPoint(timestamp=ts, value=value))
|
||||
|
||||
return TimeSeries(
|
||||
metric="bat",
|
||||
role="repeater",
|
||||
period="day",
|
||||
points=points,
|
||||
)
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def empty_timeseries():
|
||||
"""Empty time series (no data)."""
|
||||
return TimeSeries(
|
||||
metric="bat",
|
||||
role="repeater",
|
||||
period="day",
|
||||
points=[],
|
||||
)
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def single_point_timeseries():
|
||||
"""Time series with single data point."""
|
||||
now = datetime.now()
|
||||
return TimeSeries(
|
||||
metric="bat",
|
||||
role="repeater",
|
||||
period="day",
|
||||
points=[DataPoint(timestamp=now, value=3.85)],
|
||||
)
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def counter_timeseries():
|
||||
"""Sample counter time series (for rate calculation testing)."""
|
||||
now = datetime.now()
|
||||
points = []
|
||||
for i in range(24):
|
||||
ts = now - timedelta(hours=23 - i)
|
||||
# Simulate increasing counter
|
||||
value = float(i * 100)
|
||||
points.append(DataPoint(timestamp=ts, value=value))
|
||||
|
||||
return TimeSeries(
|
||||
metric="nb_recv",
|
||||
role="repeater",
|
||||
period="day",
|
||||
points=points,
|
||||
)
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def week_timeseries():
|
||||
"""Sample week time series for binning tests."""
|
||||
now = datetime.now()
|
||||
points = []
|
||||
# One point per hour for 7 days = 168 points
|
||||
for i in range(168):
|
||||
ts = now - timedelta(hours=167 - i)
|
||||
value = 3.7 + 0.2 * (i % 24) / 24
|
||||
points.append(DataPoint(timestamp=ts, value=value))
|
||||
|
||||
return TimeSeries(
|
||||
metric="bat",
|
||||
role="repeater",
|
||||
period="week",
|
||||
points=points,
|
||||
)
|
||||
|
||||
|
||||
def normalize_svg_for_snapshot(svg: str) -> str:
|
||||
"""Normalize SVG for deterministic snapshot comparison.
|
||||
|
||||
Handles matplotlib's dynamic ID generation while preserving
|
||||
semantic content that affects chart appearance. Uses sequential
|
||||
normalized IDs to preserve relationships between definitions
|
||||
and references.
|
||||
|
||||
IMPORTANT: Each ID type gets its own prefix to maintain uniqueness:
|
||||
- tick_N: matplotlib tick marks (m[0-9a-f]{8,})
|
||||
- clip_N: clipPath definitions (p[0-9a-f]{8,})
|
||||
- glyph_N: font glyph definitions (DejaVuSans-XX)
|
||||
|
||||
This ensures that:
|
||||
1. All IDs remain unique (no duplicates)
|
||||
2. References (xlink:href, url(#...)) correctly resolve
|
||||
3. SVG renders identically to the original
|
||||
"""
|
||||
# Patterns for matplotlib's random IDs, each with its own prefix
|
||||
# to maintain uniqueness across different ID types
|
||||
id_type_patterns = [
|
||||
(r'm[0-9a-f]{8,}', 'tick'), # matplotlib tick marks
|
||||
(r'p[0-9a-f]{8,}', 'clip'), # matplotlib clipPaths
|
||||
(r'DejaVuSans-[0-9a-f]+', 'glyph'), # font glyphs (hex-named)
|
||||
]
|
||||
|
||||
# Find all IDs in the document
|
||||
all_ids = re.findall(r'id="([^"]+)"', svg)
|
||||
|
||||
# Create mapping for IDs that match random patterns
|
||||
# Use separate counters per type to ensure predictable naming
|
||||
id_mapping = {}
|
||||
type_counters = {prefix: 0 for _, prefix in id_type_patterns}
|
||||
|
||||
for id_val in all_ids:
|
||||
if id_val in id_mapping:
|
||||
continue
|
||||
for pattern, prefix in id_type_patterns:
|
||||
if re.fullmatch(pattern, id_val):
|
||||
new_id = f"{prefix}_{type_counters[prefix]}"
|
||||
id_mapping[id_val] = new_id
|
||||
type_counters[prefix] += 1
|
||||
break
|
||||
|
||||
# Replace all occurrences of mapped IDs (definitions and references)
|
||||
# Process in a deterministic order (sorted by original ID) for consistency
|
||||
for old_id, new_id in sorted(id_mapping.items()):
|
||||
# Replace id definitions
|
||||
svg = svg.replace(f'id="{old_id}"', f'id="{new_id}"')
|
||||
# Replace url(#...) references
|
||||
svg = svg.replace(f'url(#{old_id})', f'url(#{new_id})')
|
||||
# Replace xlink:href references
|
||||
svg = svg.replace(f'xlink:href="#{old_id}"', f'xlink:href="#{new_id}"')
|
||||
# Replace href references (SVG 2.0 style without xlink prefix)
|
||||
svg = svg.replace(f'href="#{old_id}"', f'href="#{new_id}"')
|
||||
|
||||
# Remove matplotlib version comment (changes between versions)
|
||||
svg = re.sub(r'<!-- Created with matplotlib.*?-->', '', svg)
|
||||
|
||||
# Normalize dc:date timestamp (changes on each render)
|
||||
svg = re.sub(r'<dc:date>[^<]+</dc:date>', '<dc:date>NORMALIZED</dc:date>', svg)
|
||||
|
||||
# Normalize whitespace (but preserve newlines for readability)
|
||||
svg = re.sub(r'[ \t]+', ' ', svg)
|
||||
svg = re.sub(r' ?\n ?', '\n', svg)
|
||||
|
||||
return svg.strip()
|
||||
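To make the per-type ID mapping above concrete, here is a toy run on a two-element SVG fragment. `remap_ids` is a stripped-down, hypothetical stand-in for the helper, keeping only the pattern matching, the per-prefix counters, and the `url(#...)` rewriting:

```python
import re


def remap_ids(svg, patterns):
    """Give matplotlib-style random IDs sequential, type-prefixed names."""
    counters = {prefix: 0 for _, prefix in patterns}
    mapping = {}
    for id_val in re.findall(r'id="([^"]+)"', svg):
        if id_val in mapping:
            continue
        for pattern, prefix in patterns:
            if re.fullmatch(pattern, id_val):
                mapping[id_val] = f"{prefix}_{counters[prefix]}"
                counters[prefix] += 1
                break
    for old, new in sorted(mapping.items()):
        svg = svg.replace(f'id="{old}"', f'id="{new}"')    # definitions
        svg = svg.replace(f'url(#{old})', f'url(#{new})')  # references
    return svg


fragment = '<clipPath id="p1234abcd"/><path clip-path="url(#p1234abcd)" id="mdeadbeef"/>'
out = remap_ids(fragment, [(r'm[0-9a-f]{8,}', 'tick'), (r'p[0-9a-f]{8,}', 'clip')])
# → '<clipPath id="clip_0"/><path clip-path="url(#clip_0)" id="tick_0"/>'
```

Because the `clip-path` reference is rewritten with the same mapping as the `id` definition, the fragment still renders identically — only the names are now stable across runs.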


def extract_svg_data_attributes(svg: str) -> dict:
    """Extract data-* attributes from SVG for validation.

    Args:
        svg: SVG string

    Returns:
        Dict with extracted data attributes
    """
    data = {}

    # Extract data-points JSON
    points_match = re.search(r'data-points="([^"]+)"', svg)
    if points_match:
        points_str = points_match.group(1).replace("&quot;", '"')
        try:
            data["points"] = json.loads(points_str)
        except json.JSONDecodeError:
            data["points_raw"] = points_str

    # Extract other data attributes
    for attr in ["data-metric", "data-period", "data-theme",
                 "data-x-start", "data-x-end", "data-y-min", "data-y-max"]:
        match = re.search(rf'{attr}="([^"]+)"', svg)
        if match:
            key = attr.replace("data-", "").replace("-", "_")
            data[key] = match.group(1)

    return data


@pytest.fixture
def snapshots_dir():
    """Path to snapshots directory."""
    return Path(__file__).parent.parent / "snapshots" / "svg"


@pytest.fixture
def sample_raw_points():
    """Raw points for aggregation testing."""
    now = datetime.now()
    return [
        (now - timedelta(hours=2), 3.7),
        (now - timedelta(hours=1, minutes=45), 3.72),
        (now - timedelta(hours=1, minutes=30), 3.75),
        (now - timedelta(hours=1), 3.8),
        (now - timedelta(minutes=30), 3.82),
        (now, 3.85),
    ]


# --- Deterministic fixtures for snapshot testing ---
# These use fixed timestamps to produce consistent SVG output


@pytest.fixture
def snapshot_base_time():
    """Fixed base time for deterministic snapshot tests.

    Uses 2024-01-15 12:00:00 UTC as a stable reference point.
    Explicitly set to UTC to ensure consistent behavior across all machines.
    """
    return datetime(2024, 1, 15, 12, 0, 0, tzinfo=UTC)


@pytest.fixture
def snapshot_gauge_timeseries(snapshot_base_time):
    """Deterministic gauge time series for snapshot testing.

    Creates a battery voltage pattern over 24 hours with fixed timestamps.
    """
    points = []
    for i in range(24):
        ts = snapshot_base_time - timedelta(hours=23 - i)
        # Simulate battery voltage pattern (higher during day, lower at night)
        value = 3.7 + 0.3 * abs(12 - i) / 12
        points.append(DataPoint(timestamp=ts, value=value))

    return TimeSeries(
        metric="bat",
        role="repeater",
        period="day",
        points=points,
    )


@pytest.fixture
def snapshot_counter_timeseries(snapshot_base_time):
    """Deterministic counter time series for snapshot testing.

    Creates a packet rate pattern over 24 hours with fixed timestamps.
    This represents rate values (already converted from counter deltas).
    """
    points = []
    for i in range(24):
        ts = snapshot_base_time - timedelta(hours=23 - i)
        # Simulate packet rate - higher during day hours (6-18)
        hour = (i + 12) % 24  # Convert to actual hour of day
        value = (
            2.0 + (hour - 6) * 0.3  # 2.0 to 5.6 packets/min
            if 6 <= hour <= 18
            else 0.5 + (hour % 6) * 0.1  # 0.5 to 1.1 packets/min (night)
        )
        points.append(DataPoint(timestamp=ts, value=value))

    return TimeSeries(
        metric="nb_recv",
        role="repeater",
        period="day",
        points=points,
    )


@pytest.fixture
def snapshot_empty_timeseries():
    """Empty time series for snapshot testing."""
    return TimeSeries(
        metric="bat",
        role="repeater",
        period="day",
        points=[],
    )


@pytest.fixture
def snapshot_single_point_timeseries(snapshot_base_time):
    """Time series with single data point for snapshot testing."""
    return TimeSeries(
        metric="bat",
        role="repeater",
        period="day",
        points=[DataPoint(timestamp=snapshot_base_time, value=3.85)],
    )


def normalize_svg_for_snapshot_full(svg: str) -> str:
    """Extended SVG normalization for full snapshot comparison.

    In addition to standard normalization, this also:
    - Removes timestamps from data-points to allow content-only comparison
    - Normalizes floating point precision

    Used when you want to compare the visual structure but not exact data values.
    """
    # Apply standard normalization first
    svg = normalize_svg_for_snapshot(svg)

    # Normalize data-points timestamps (keep structure, normalize values)
    # This allows charts with different base times to still match structure
    svg = re.sub(r'"ts":\s*\d+', '"ts":0', svg)

    # Normalize floating point values to 2 decimal places in attributes
    def normalize_float(match):
        try:
            val = float(match.group(1))
            return f'{val:.2f}'
        except ValueError:
            return match.group(0)

    svg = re.sub(r'(\d+\.\d{3,})', normalize_float, svg)

    return svg
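The float-rounding step can be tried in isolation. `normalize_floats` below is a hypothetical extraction of just that final `re.sub`: any value with three or more decimal places is rounded to two, while short values pass through unchanged:

```python
import re


def normalize_floats(svg: str) -> str:
    """Round any decimal with 3+ places to 2, leaving short values untouched."""
    def repl(match):
        return f"{float(match.group(1)):.2f}"
    return re.sub(r'(\d+\.\d{3,})', repl, svg)


normalized = normalize_floats('<path d="M 10.123456 20.5"/>')
# → '<path d="M 10.12 20.5"/>'
```

Note that `20.5` is untouched: the `\d{3,}` quantifier deliberately skips values that are already short enough to be stable between renders.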
201 tests/charts/test_chart_io.py Normal file
@@ -0,0 +1,201 @@
"""Tests for chart statistics I/O functions."""
|
||||
|
||||
import json
|
||||
from pathlib import Path
|
||||
|
||||
import pytest
|
||||
|
||||
from meshmon.charts import (
|
||||
load_chart_stats,
|
||||
save_chart_stats,
|
||||
)
|
||||
|
||||
|
||||
class TestSaveChartStats:
|
||||
"""Tests for save_chart_stats function."""
|
||||
|
||||
def test_saves_stats_to_file(self, configured_env):
|
||||
"""Saves stats dict to JSON file."""
|
||||
stats = {
|
||||
"bat": {
|
||||
"day": {"min": 3.5, "avg": 3.7, "max": 3.9, "current": 3.85},
|
||||
"week": {"min": 3.4, "avg": 3.65, "max": 3.95, "current": 3.85},
|
||||
}
|
||||
}
|
||||
|
||||
path = save_chart_stats("repeater", stats)
|
||||
|
||||
assert path.exists()
|
||||
with open(path) as f:
|
||||
loaded = json.load(f)
|
||||
assert loaded == stats
|
||||
|
||||
def test_creates_directories(self, configured_env):
|
||||
"""Creates parent directories if needed."""
|
||||
stats = {"test": {"day": {"min": 1.0}}}
|
||||
|
||||
path = save_chart_stats("repeater", stats)
|
||||
|
||||
assert path.parent.exists()
|
||||
assert path.parent.name == "repeater"
|
||||
|
||||
def test_returns_path(self, configured_env):
|
||||
"""Returns path to saved file."""
|
||||
stats = {"test": {"day": {}}}
|
||||
|
||||
path = save_chart_stats("companion", stats)
|
||||
|
||||
assert isinstance(path, Path)
|
||||
assert path.name == "chart_stats.json"
|
||||
assert "companion" in str(path)
|
||||
|
||||
def test_overwrites_existing(self, configured_env):
|
||||
"""Overwrites existing stats file."""
|
||||
stats1 = {"metric1": {"day": {"min": 1.0}}}
|
||||
stats2 = {"metric2": {"day": {"min": 2.0}}}
|
||||
|
||||
path1 = save_chart_stats("repeater", stats1)
|
||||
path2 = save_chart_stats("repeater", stats2)
|
||||
|
||||
assert path1 == path2
|
||||
with open(path2) as f:
|
||||
loaded = json.load(f)
|
||||
assert loaded == stats2
|
||||
|
||||
def test_empty_stats(self, configured_env):
|
||||
"""Saves empty stats dict."""
|
||||
stats = {}
|
||||
|
||||
path = save_chart_stats("repeater", stats)
|
||||
|
||||
with open(path) as f:
|
||||
loaded = json.load(f)
|
||||
assert loaded == {}
|
||||
|
||||
def test_nested_stats_structure(self, configured_env):
|
||||
"""Preserves nested structure of stats."""
|
||||
stats = {
|
||||
"bat": {
|
||||
"day": {"min": 3.5, "avg": 3.7, "max": 3.9, "current": 3.85},
|
||||
"week": {"min": 3.4, "avg": 3.65, "max": 3.95, "current": None},
|
||||
},
|
||||
"nb_recv": {
|
||||
"day": {"min": 0, "avg": 50.5, "max": 100, "current": 75},
|
||||
}
|
||||
}
|
||||
|
||||
path = save_chart_stats("repeater", stats)
|
||||
|
||||
with open(path) as f:
|
||||
loaded = json.load(f)
|
||||
assert loaded["bat"]["week"]["current"] is None
|
||||
assert loaded["nb_recv"]["day"]["avg"] == 50.5
|
||||
|
||||
|
||||
class TestLoadChartStats:
|
||||
"""Tests for load_chart_stats function."""
|
||||
|
||||
def test_loads_existing_stats(self, configured_env):
|
||||
"""Loads stats from existing file."""
|
||||
stats = {
|
||||
"bat": {
|
||||
"day": {"min": 3.5, "avg": 3.7, "max": 3.9, "current": 3.85},
|
||||
}
|
||||
}
|
||||
save_chart_stats("repeater", stats)
|
||||
|
||||
loaded = load_chart_stats("repeater")
|
||||
|
||||
assert loaded == stats
|
||||
|
||||
def test_returns_empty_when_missing(self, configured_env):
|
||||
"""Returns empty dict when file doesn't exist."""
|
||||
loaded = load_chart_stats("repeater")
|
||||
|
||||
assert loaded == {}
|
||||
|
||||
def test_returns_empty_on_invalid_json(self, configured_env):
|
||||
"""Returns empty dict on invalid JSON."""
|
||||
stats_path = configured_env["out_dir"] / "assets" / "repeater" / "chart_stats.json"
|
||||
stats_path.parent.mkdir(parents=True, exist_ok=True)
|
||||
stats_path.write_text("not valid json {{{", encoding="utf-8")
|
||||
|
||||
loaded = load_chart_stats("repeater")
|
||||
|
||||
assert loaded == {}
|
||||
|
||||
def test_preserves_none_values(self, configured_env):
|
||||
"""None values are preserved through save/load cycle."""
|
||||
stats = {
|
||||
"bat": {
|
||||
"day": {"min": None, "avg": None, "max": None, "current": None},
|
||||
}
|
||||
}
|
||||
save_chart_stats("repeater", stats)
|
||||
|
||||
loaded = load_chart_stats("repeater")
|
||||
|
||||
assert loaded["bat"]["day"]["min"] is None
|
||||
assert loaded["bat"]["day"]["avg"] is None
|
||||
|
||||
def test_loads_different_roles(self, configured_env):
|
||||
"""Loads correct file for each role."""
|
||||
companion_stats = {"battery_mv": {"day": {"min": 3.5}}}
|
||||
repeater_stats = {"bat": {"day": {"min": 3.6}}}
|
||||
|
||||
save_chart_stats("companion", companion_stats)
|
||||
save_chart_stats("repeater", repeater_stats)
|
||||
|
||||
assert load_chart_stats("companion") == companion_stats
|
||||
assert load_chart_stats("repeater") == repeater_stats
|
||||
|
||||
|
||||
class TestStatsRoundTrip:
|
||||
"""Tests for complete save/load round trips."""
|
||||
|
||||
def test_complex_stats_roundtrip(self, configured_env):
|
||||
"""Complex stats survive round trip unchanged."""
|
||||
stats = {
|
||||
"bat": {
|
||||
"day": {"min": 3.5, "avg": 3.7, "max": 3.9, "current": 3.85},
|
||||
"week": {"min": 3.4, "avg": 3.65, "max": 3.95, "current": 3.8},
|
||||
"month": {"min": 3.3, "avg": 3.6, "max": 4.0, "current": 3.75},
|
||||
"year": {"min": 3.2, "avg": 3.55, "max": 4.1, "current": 3.7},
|
||||
},
|
||||
"bat_pct": {
|
||||
"day": {"min": 50.0, "avg": 70.0, "max": 90.0, "current": 85.0},
|
||||
"week": {"min": 45.0, "avg": 65.0, "max": 95.0, "current": 80.0},
|
||||
"month": {"min": 40.0, "avg": 60.0, "max": 100.0, "current": 75.0},
|
||||
"year": {"min": 30.0, "avg": 55.0, "max": 100.0, "current": 70.0},
|
||||
},
|
||||
"nb_recv": {
|
||||
"day": {"min": 0, "avg": 50.5, "max": 100, "current": 75},
|
||||
"week": {"min": 0, "avg": 48.2, "max": 150, "current": 60},
|
||||
"month": {"min": 0, "avg": 45.8, "max": 200, "current": 55},
|
||||
"year": {"min": 0, "avg": 42.1, "max": 250, "current": 50},
|
||||
},
|
||||
}
|
||||
|
||||
save_chart_stats("repeater", stats)
|
||||
loaded = load_chart_stats("repeater")
|
||||
|
||||
assert loaded == stats
|
||||
|
||||
def test_float_precision_preserved(self, configured_env):
|
||||
"""Float precision is preserved in round trip."""
|
||||
stats = {
|
||||
"test": {
|
||||
"day": {
|
||||
"min": 3.141592653589793,
|
||||
"avg": 2.718281828459045,
|
||||
"max": 1.4142135623730951,
|
||||
"current": 0.0001234567890123,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
save_chart_stats("repeater", stats)
|
||||
loaded = load_chart_stats("repeater")
|
||||
|
||||
assert loaded["test"]["day"]["min"] == pytest.approx(3.141592653589793)
|
||||
assert loaded["test"]["day"]["avg"] == pytest.approx(2.718281828459045)
|
||||
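The behavior these I/O tests pin down — create parent directories on save, return `{}` for missing or corrupt files, round-trip `None` as JSON `null` — can be sketched independently of the project. `save_stats`, `load_stats`, and the `assets/<role>/chart_stats.json` layout below are assumptions modeled on the tests, not the real `meshmon.charts` implementation:

```python
import json
import tempfile
from pathlib import Path


def save_stats(base: Path, role: str, stats: dict) -> Path:
    """Write per-role chart stats, creating parent directories as needed."""
    path = base / "assets" / role / "chart_stats.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(stats), encoding="utf-8")
    return path


def load_stats(base: Path, role: str) -> dict:
    """Return saved stats, or {} when the file is missing or corrupt."""
    path = base / "assets" / role / "chart_stats.json"
    try:
        return json.loads(path.read_text(encoding="utf-8"))
    except (OSError, json.JSONDecodeError):
        return {}


with tempfile.TemporaryDirectory() as d:
    base = Path(d)
    save_stats(base, "repeater", {"bat": {"day": {"min": None}}})
    roundtrip = load_stats(base, "repeater")   # None survives as JSON null
    missing = load_stats(base, "companion")    # no file written for this role
```

Catching `OSError` alongside `json.JSONDecodeError` lets one code path cover both the "missing file" and "invalid JSON" cases the tests exercise separately.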
433 tests/charts/test_chart_render.py Normal file
@@ -0,0 +1,433 @@
"""Tests for SVG chart rendering."""
|
||||
|
||||
import os
|
||||
from datetime import datetime, timedelta
|
||||
from pathlib import Path
|
||||
from xml.etree import ElementTree as ET
|
||||
|
||||
import pytest
|
||||
|
||||
from meshmon.charts import (
|
||||
CHART_THEMES,
|
||||
DataPoint,
|
||||
TimeSeries,
|
||||
render_chart_svg,
|
||||
)
|
||||
|
||||
from .conftest import extract_svg_data_attributes, normalize_svg_for_snapshot
|
||||
|
||||
|
||||
def _svg_viewbox_dims(svg: str) -> tuple[float, float]:
|
||||
root = ET.fromstring(svg)
|
||||
viewbox = root.attrib.get("viewBox")
|
||||
assert viewbox is not None
|
||||
_, _, width, height = viewbox.split()
|
||||
return float(width), float(height)
|
||||
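`_svg_viewbox_dims` relies on the fact that, in ElementTree, attributes without a namespace prefix — such as `viewBox` — keep their plain names even on an `xmlns`-qualified root, so they can be read directly:

```python
from xml.etree import ElementTree as ET

svg = '<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 800 300"></svg>'
root = ET.fromstring(svg)

# Unprefixed attributes are not namespaced, so the key is just "viewBox"
_, _, width, height = root.attrib["viewBox"].split()
dims = (float(width), float(height))
# → (800.0, 300.0)
```

Element *tags*, by contrast, would come back as `{http://www.w3.org/2000/svg}svg`, which is why the helper reads the attribute rather than matching tag names.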

class TestRenderChartSvg:
    """Tests for render_chart_svg function."""

    def test_returns_svg_string(self, sample_timeseries, light_theme):
        """Returns valid SVG string."""
        svg = render_chart_svg(sample_timeseries, light_theme)

        assert svg.startswith("<?xml") or svg.startswith("<svg")
        assert "</svg>" in svg

    def test_includes_svg_namespace(self, sample_timeseries, light_theme):
        """SVG includes xmlns namespace."""
        svg = render_chart_svg(sample_timeseries, light_theme)

        assert 'xmlns="http://www.w3.org/2000/svg"' in svg

    def test_respects_width_height(self, sample_timeseries, light_theme):
        """SVG respects specified dimensions."""
        svg_default = render_chart_svg(sample_timeseries, light_theme)
        svg_small = render_chart_svg(sample_timeseries, light_theme, width=600, height=200)

        default_w, default_h = _svg_viewbox_dims(svg_default)
        small_w, small_h = _svg_viewbox_dims(svg_small)

        assert small_w < default_w
        assert small_h < default_h

    def test_uses_theme_colors(self, sample_timeseries, light_theme, dark_theme):
        """Different themes produce different colors."""
        light_svg = render_chart_svg(sample_timeseries, light_theme)
        dark_svg = render_chart_svg(sample_timeseries, dark_theme)

        # Check theme colors are present
        assert light_theme.line in light_svg or f"#{light_theme.line}" in light_svg
        assert dark_theme.line in dark_svg or f"#{dark_theme.line}" in dark_svg


class TestEmptyChartRendering:
    """Tests for rendering empty charts."""

    def test_empty_chart_renders(self, empty_timeseries, light_theme):
        """Empty time series renders without error."""
        svg = render_chart_svg(empty_timeseries, light_theme)

        assert "</svg>" in svg

    def test_empty_chart_shows_message(self, empty_timeseries, light_theme):
        """Empty chart shows 'No data available' message."""
        svg = render_chart_svg(empty_timeseries, light_theme)

        assert "No data available" in svg


class TestDataPointsInjection:
    """Tests for data-points attribute injection."""

    def test_includes_data_points(self, sample_timeseries, light_theme):
        """SVG includes data-points attribute."""
        svg = render_chart_svg(sample_timeseries, light_theme)

        assert "data-points=" in svg

    def test_data_points_valid_json(self, sample_timeseries, light_theme):
        """data-points contains valid JSON array."""
        svg = render_chart_svg(sample_timeseries, light_theme)
        data = extract_svg_data_attributes(svg)

        assert "points" in data
        assert isinstance(data["points"], list)

    def test_data_points_count_matches(self, sample_timeseries, light_theme):
        """data-points count matches time series points."""
        svg = render_chart_svg(sample_timeseries, light_theme)
        data = extract_svg_data_attributes(svg)

        assert len(data["points"]) == len(sample_timeseries.points)

    def test_data_points_structure(self, sample_timeseries, light_theme):
        """Each data point has ts and v keys."""
        svg = render_chart_svg(sample_timeseries, light_theme)
        data = extract_svg_data_attributes(svg)

        for point in data["points"]:
            assert "ts" in point
            assert "v" in point
            assert isinstance(point["ts"], int)
            assert isinstance(point["v"], (int, float))

    def test_includes_metadata_attributes(self, sample_timeseries, light_theme):
        """SVG includes metric, period, theme attributes."""
        svg = render_chart_svg(sample_timeseries, light_theme)
        data = extract_svg_data_attributes(svg)

        assert data.get("metric") == "bat"
        assert data.get("period") == "day"
        assert data.get("theme") == "light"

    def test_includes_axis_range_attributes(self, sample_timeseries, light_theme):
        """SVG includes x and y axis range attributes."""
        svg = render_chart_svg(sample_timeseries, light_theme)
        data = extract_svg_data_attributes(svg)

        assert "x_start" in data
        assert "x_end" in data
        assert "y_min" in data
        assert "y_max" in data
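The `data-points` attribute puts JSON inside an XML attribute, so the renderer must entity-escape the double quotes and the extractor must undo it (the `&quot;` replace in `extract_svg_data_attributes`). A self-contained round trip of that encoding, with all names hypothetical:

```python
import json
import re
from xml.sax.saxutils import escape, unescape

points = [{"ts": 1705320000, "v": 3.85}]

# Embed: JSON's double quotes must become &quot; inside the attribute value
encoded = escape(json.dumps(points), {'"': "&quot;"})
svg = '<svg data-points="%s" data-metric="bat"></svg>' % encoded

# Extract: the [^"]+ pattern works because no raw quotes remain in the value
match = re.search(r'data-points="([^"]+)"', svg)
decoded = json.loads(unescape(match.group(1), {"&quot;": '"'}))
# decoded == points
```

Without the escaping step, the first `"` inside the JSON would terminate the attribute and both the SVG and the regex-based extraction would break.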

class TestYAxisLimits:
    """Tests for Y-axis limit handling."""

    def test_fixed_y_limits(self, sample_timeseries, light_theme):
        """Fixed Y limits are applied."""
        svg = render_chart_svg(
            sample_timeseries, light_theme,
            y_min=3.0, y_max=4.5
        )
        data = extract_svg_data_attributes(svg)

        assert float(data["y_min"]) == 3.0
        assert float(data["y_max"]) == 4.5

    def test_auto_y_limits_with_padding(self, light_theme):
        """Auto Y limits add padding around data."""
        now = datetime.now()
        points = [
            DataPoint(timestamp=now, value=10.0),
            DataPoint(timestamp=now + timedelta(hours=1), value=20.0),
        ]
        ts = TimeSeries(metric="test", role="repeater", period="day", points=points)

        svg = render_chart_svg(ts, light_theme)
        data = extract_svg_data_attributes(svg)

        y_min = float(data["y_min"])
        y_max = float(data["y_max"])

        # Auto limits should extend beyond data range
        assert y_min < 10.0
        assert y_max > 20.0


class TestXAxisLimits:
    """Tests for X-axis limit handling."""

    def test_fixed_x_limits(self, sample_timeseries, light_theme):
        """Fixed X limits are applied."""
        x_start = datetime(2024, 1, 1, 0, 0, 0)
        x_end = datetime(2024, 1, 2, 0, 0, 0)

        svg = render_chart_svg(
            sample_timeseries, light_theme,
            x_start=x_start, x_end=x_end
        )
        data = extract_svg_data_attributes(svg)

        assert int(data["x_start"]) == int(x_start.timestamp())
        assert int(data["x_end"]) == int(x_end.timestamp())


class TestChartThemes:
    """Tests for chart theme constants."""

    def test_light_theme_exists(self):
        """Light theme is defined."""
        assert "light" in CHART_THEMES

    def test_dark_theme_exists(self):
        """Dark theme is defined."""
        assert "dark" in CHART_THEMES

    def test_themes_have_required_colors(self):
        """Themes have all required color attributes."""
        required = ["background", "canvas", "text", "axis", "grid", "line", "area"]

        for theme in CHART_THEMES.values():
            for attr in required:
                assert hasattr(theme, attr), f"Theme missing {attr}"
                assert getattr(theme, attr), f"Theme {attr} is empty"

    def test_theme_colors_are_valid_hex(self):
        """Theme colors are valid hex strings."""
        import re
        hex_pattern = re.compile(r'^[0-9a-fA-F]{6,8}$')

        for name, theme in CHART_THEMES.items():
            for attr in ["background", "canvas", "text", "axis", "grid", "line", "area"]:
                color = getattr(theme, attr)
                assert hex_pattern.match(color), f"{name}.{attr} = {color} is not valid hex"


class TestSvgNormalization:
    """Tests for SVG snapshot normalization helper."""

    def test_normalize_removes_matplotlib_ids(self, sample_timeseries, light_theme):
        """Normalization removes matplotlib-generated IDs."""
        svg = render_chart_svg(sample_timeseries, light_theme)
        normalized = normalize_svg_for_snapshot(svg)

        # Should not have matplotlib's randomized IDs
        import re
        # Look for patterns like id="abc123-def456"
        random_ids = re.findall(r'id="[a-z0-9]+-[0-9a-f]{8,}"', normalized)
        assert len(random_ids) == 0

    def test_normalize_preserves_data_attributes(self, sample_timeseries, light_theme):
        """Normalization preserves data-* attributes."""
        svg = render_chart_svg(sample_timeseries, light_theme)
        normalized = normalize_svg_for_snapshot(svg)

        assert "data-metric=" in normalized
        assert "data-points=" in normalized

    def test_normalize_removes_matplotlib_comment(self, sample_timeseries, light_theme):
        """Normalization removes matplotlib version comment."""
        svg = render_chart_svg(sample_timeseries, light_theme)
        normalized = normalize_svg_for_snapshot(svg)

        assert "Created with matplotlib" not in normalized


class TestSvgSnapshots:
    """Snapshot tests for SVG chart rendering.

    These tests compare rendered SVG output against saved snapshots
    to detect unintended changes in chart appearance.

    To update snapshots, run: UPDATE_SNAPSHOTS=1 pytest tests/charts/test_chart_render.py
    """

    @pytest.fixture
    def update_snapshots(self):
        """Return True if snapshots should be updated."""
        return os.environ.get("UPDATE_SNAPSHOTS", "").lower() in ("1", "true", "yes")

    def _assert_snapshot_match(
        self,
        actual: str,
        snapshot_path: Path,
        update: bool,
    ) -> None:
        """Compare SVG against snapshot, with optional update mode."""
        # Normalize for comparison
        normalized = normalize_svg_for_snapshot(actual)

        if update:
            # Update mode: write normalized SVG to snapshot
            snapshot_path.parent.mkdir(parents=True, exist_ok=True)
            snapshot_path.write_text(normalized, encoding="utf-8")
            pytest.skip(f"Snapshot updated: {snapshot_path}")
        else:
            # Compare mode
            if not snapshot_path.exists():
                # Create new snapshot if it doesn't exist
                snapshot_path.parent.mkdir(parents=True, exist_ok=True)
                snapshot_path.write_text(normalized, encoding="utf-8")
                pytest.fail(
                    f"Snapshot created: {snapshot_path}\n"
                    f"Run tests again to verify, or set UPDATE_SNAPSHOTS=1 to regenerate."
                )

            expected = snapshot_path.read_text(encoding="utf-8")

            if normalized != expected:
                # Show first difference for debugging
                norm_lines = normalized.splitlines()
                exp_lines = expected.splitlines()

                diff_info = []
                for i, (n, e) in enumerate(zip(norm_lines, exp_lines, strict=False), 1):
                    if n != e:
||||
diff_info.append(f"Line {i} differs:")
|
||||
diff_info.append(f" Expected: {e[:100]}...")
|
||||
diff_info.append(f" Actual: {n[:100]}...")
|
||||
if len(diff_info) > 12:
|
||||
diff_info.append(" (more differences omitted)")
|
||||
break
|
||||
|
||||
if len(norm_lines) != len(exp_lines):
|
||||
diff_info.append(
|
||||
f"Line count: expected {len(exp_lines)}, got {len(norm_lines)}"
|
||||
)
|
||||
|
||||
pytest.fail(
|
||||
f"Snapshot mismatch: {snapshot_path}\n"
|
||||
f"Set UPDATE_SNAPSHOTS=1 to regenerate.\n\n"
|
||||
+ "\n".join(diff_info)
|
||||
)
|
||||
|
||||
def test_gauge_chart_light_theme(
|
||||
self,
|
||||
snapshot_gauge_timeseries,
|
||||
light_theme,
|
||||
snapshots_dir,
|
||||
update_snapshots,
|
||||
):
|
||||
"""Gauge metric chart with light theme matches snapshot."""
|
||||
svg = render_chart_svg(
|
||||
snapshot_gauge_timeseries,
|
||||
light_theme,
|
||||
y_min=3.0,
|
||||
y_max=4.2,
|
||||
)
|
||||
|
||||
snapshot_path = snapshots_dir / "bat_day_light.svg"
|
||||
self._assert_snapshot_match(svg, snapshot_path, update_snapshots)
|
||||
|
||||
def test_gauge_chart_dark_theme(
|
||||
self,
|
||||
snapshot_gauge_timeseries,
|
||||
dark_theme,
|
||||
snapshots_dir,
|
||||
update_snapshots,
|
||||
):
|
||||
"""Gauge metric chart with dark theme matches snapshot."""
|
||||
svg = render_chart_svg(
|
||||
snapshot_gauge_timeseries,
|
||||
dark_theme,
|
||||
y_min=3.0,
|
||||
y_max=4.2,
|
||||
)
|
||||
|
||||
snapshot_path = snapshots_dir / "bat_day_dark.svg"
|
||||
self._assert_snapshot_match(svg, snapshot_path, update_snapshots)
|
||||
|
||||
def test_counter_chart_light_theme(
|
||||
self,
|
||||
snapshot_counter_timeseries,
|
||||
light_theme,
|
||||
snapshots_dir,
|
||||
update_snapshots,
|
||||
):
|
||||
"""Counter metric (rate) chart with light theme matches snapshot."""
|
||||
svg = render_chart_svg(
|
||||
snapshot_counter_timeseries,
|
||||
light_theme,
|
||||
)
|
||||
|
||||
snapshot_path = snapshots_dir / "nb_recv_day_light.svg"
|
||||
self._assert_snapshot_match(svg, snapshot_path, update_snapshots)
|
||||
|
||||
def test_counter_chart_dark_theme(
|
||||
self,
|
||||
snapshot_counter_timeseries,
|
||||
dark_theme,
|
||||
snapshots_dir,
|
||||
update_snapshots,
|
||||
):
|
||||
"""Counter metric (rate) chart with dark theme matches snapshot."""
|
||||
svg = render_chart_svg(
|
||||
snapshot_counter_timeseries,
|
||||
dark_theme,
|
||||
)
|
||||
|
||||
snapshot_path = snapshots_dir / "nb_recv_day_dark.svg"
|
||||
self._assert_snapshot_match(svg, snapshot_path, update_snapshots)
|
||||
|
||||
def test_empty_chart_light_theme(
|
||||
self,
|
||||
snapshot_empty_timeseries,
|
||||
light_theme,
|
||||
snapshots_dir,
|
||||
update_snapshots,
|
||||
):
|
||||
"""Empty chart with 'No data available' matches snapshot."""
|
||||
svg = render_chart_svg(
|
||||
snapshot_empty_timeseries,
|
||||
light_theme,
|
||||
)
|
||||
|
||||
snapshot_path = snapshots_dir / "empty_day_light.svg"
|
||||
self._assert_snapshot_match(svg, snapshot_path, update_snapshots)
|
||||
|
||||
def test_empty_chart_dark_theme(
|
||||
self,
|
||||
snapshot_empty_timeseries,
|
||||
dark_theme,
|
||||
snapshots_dir,
|
||||
update_snapshots,
|
||||
):
|
||||
"""Empty chart with dark theme matches snapshot."""
|
||||
svg = render_chart_svg(
|
||||
snapshot_empty_timeseries,
|
||||
dark_theme,
|
||||
)
|
||||
|
||||
snapshot_path = snapshots_dir / "empty_day_dark.svg"
|
||||
self._assert_snapshot_match(svg, snapshot_path, update_snapshots)
|
||||
|
||||
def test_single_point_chart(
|
||||
self,
|
||||
snapshot_single_point_timeseries,
|
||||
light_theme,
|
||||
snapshots_dir,
|
||||
update_snapshots,
|
||||
):
|
||||
"""Chart with single data point matches snapshot."""
|
||||
svg = render_chart_svg(
|
||||
snapshot_single_point_timeseries,
|
||||
light_theme,
|
||||
y_min=3.0,
|
||||
y_max=4.2,
|
||||
)
|
||||
|
||||
snapshot_path = snapshots_dir / "single_point_day_light.svg"
|
||||
self._assert_snapshot_match(svg, snapshot_path, update_snapshots)
|
||||
69
tests/charts/test_render_all_charts.py
Normal file
@@ -0,0 +1,69 @@
"""Tests for render_all_charts metric selection behavior."""

from __future__ import annotations

from datetime import datetime

import meshmon.charts as charts


def test_render_all_charts_includes_repeater_telemetry_when_enabled(configured_env, monkeypatch):
    """Repeater chart rendering auto-discovers telemetry metrics when enabled."""
    monkeypatch.setenv("TELEMETRY_ENABLED", "1")
    import meshmon.env
    meshmon.env._config = None

    base_ts = int(datetime(2024, 1, 1, 0, 0, 0).timestamp())

    monkeypatch.setattr(
        charts,
        "get_available_metrics",
        lambda role: [
            "bat",
            "telemetry.temperature.1",
            "telemetry.humidity.1",
            "telemetry.voltage.1",
            "telemetry.gps.0.latitude",
        ],
    )
    monkeypatch.setattr(
        charts,
        "get_metrics_for_period",
        lambda role, start_ts, end_ts: {
            "telemetry.temperature.1": [
                (base_ts, 6.0),
                (base_ts + 900, 7.0),
            ],
            "telemetry.humidity.1": [
                (base_ts, 84.0),
                (base_ts + 900, 85.0),
            ],
        },
    )
    monkeypatch.setattr(charts, "render_chart_svg", lambda *args, **kwargs: "<svg></svg>")

    _generated, stats = charts.render_all_charts("repeater")

    assert "telemetry.temperature.1" in stats
    assert "telemetry.humidity.1" in stats
    assert "telemetry.voltage.1" not in stats
    assert "telemetry.gps.0.latitude" not in stats


def test_render_all_charts_excludes_telemetry_when_disabled(configured_env, monkeypatch):
    """Telemetry metrics are not rendered when TELEMETRY_ENABLED=0."""
    monkeypatch.setenv("TELEMETRY_ENABLED", "0")
    import meshmon.env
    meshmon.env._config = None

    monkeypatch.setattr(
        charts,
        "get_available_metrics",
        lambda role: ["bat", "telemetry.temperature.1", "telemetry.humidity.1"],
    )
    monkeypatch.setattr(charts, "get_metrics_for_period", lambda role, start_ts, end_ts: {})
    monkeypatch.setattr(charts, "render_chart_svg", lambda *args, **kwargs: "<svg></svg>")

    _generated, stats = charts.render_all_charts("repeater")

    assert not any(metric.startswith("telemetry.") for metric in stats)
185
tests/charts/test_statistics.py
Normal file
@@ -0,0 +1,185 @@
"""Tests for chart statistics calculation."""

from datetime import datetime, timedelta

import pytest

from meshmon.charts import (
    ChartStatistics,
    DataPoint,
    TimeSeries,
    calculate_statistics,
)

BASE_TIME = datetime(2024, 1, 1, 0, 0, 0)


class TestCalculateStatistics:
    """Tests for calculate_statistics function."""

    def test_calculates_min(self, sample_timeseries):
        """Calculates minimum value."""
        stats = calculate_statistics(sample_timeseries)

        assert stats.min_value is not None
        assert stats.min_value == min(p.value for p in sample_timeseries.points)

    def test_calculates_max(self, sample_timeseries):
        """Calculates maximum value."""
        stats = calculate_statistics(sample_timeseries)

        assert stats.max_value is not None
        assert stats.max_value == max(p.value for p in sample_timeseries.points)

    def test_calculates_avg(self, sample_timeseries):
        """Calculates average value."""
        stats = calculate_statistics(sample_timeseries)

        expected_avg = sum(p.value for p in sample_timeseries.points) / len(sample_timeseries.points)
        assert stats.avg_value is not None
        assert stats.avg_value == pytest.approx(expected_avg)

    def test_calculates_current(self, sample_timeseries):
        """Current is the last value."""
        stats = calculate_statistics(sample_timeseries)

        assert stats.current_value is not None
        assert stats.current_value == sample_timeseries.points[-1].value

    def test_empty_series_returns_none_values(self, empty_timeseries):
        """Empty time series returns None for all stats."""
        stats = calculate_statistics(empty_timeseries)

        assert stats.min_value is None
        assert stats.avg_value is None
        assert stats.max_value is None
        assert stats.current_value is None

    def test_single_point_stats(self, single_point_timeseries):
        """Single point: min=avg=max=current."""
        stats = calculate_statistics(single_point_timeseries)
        value = single_point_timeseries.points[0].value

        assert stats.min_value == value
        assert stats.avg_value == value
        assert stats.max_value == value
        assert stats.current_value == value


class TestChartStatistics:
    """Tests for ChartStatistics dataclass."""

    def test_to_dict(self):
        """Converts to dict with correct keys."""
        stats = ChartStatistics(
            min_value=3.0,
            avg_value=3.5,
            max_value=4.0,
            current_value=3.8,
        )

        d = stats.to_dict()

        assert d == {
            "min": 3.0,
            "avg": 3.5,
            "max": 4.0,
            "current": 3.8,
        }

    def test_to_dict_with_none_values(self):
        """None values preserved in dict."""
        stats = ChartStatistics()

        d = stats.to_dict()

        assert d == {
            "min": None,
            "avg": None,
            "max": None,
            "current": None,
        }

    def test_default_values_are_none(self):
        """Default values are all None."""
        stats = ChartStatistics()

        assert stats.min_value is None
        assert stats.avg_value is None
        assert stats.max_value is None
        assert stats.current_value is None


class TestStatisticsWithVariousData:
    """Tests for statistics with various data patterns."""

    def test_constant_values(self):
        """All same values gives min=avg=max."""
        now = BASE_TIME
        points = [DataPoint(timestamp=now + timedelta(hours=i), value=5.0) for i in range(10)]
        ts = TimeSeries(metric="test", role="companion", period="day", points=points)

        stats = calculate_statistics(ts)

        assert stats.min_value == 5.0
        assert stats.avg_value == 5.0
        assert stats.max_value == 5.0

    def test_increasing_values(self):
        """Increasing values have correct stats."""
        now = BASE_TIME
        points = [DataPoint(timestamp=now + timedelta(hours=i), value=float(i)) for i in range(10)]
        ts = TimeSeries(metric="test", role="companion", period="day", points=points)

        stats = calculate_statistics(ts)

        assert stats.min_value == 0.0
        assert stats.max_value == 9.0
        assert stats.avg_value == 4.5  # Mean of 0-9
        assert stats.current_value == 9.0  # Last value

    def test_negative_values(self):
        """Handles negative values correctly."""
        now = BASE_TIME
        points = [
            DataPoint(timestamp=now, value=-10.0),
            DataPoint(timestamp=now + timedelta(hours=1), value=-5.0),
            DataPoint(timestamp=now + timedelta(hours=2), value=0.0),
        ]
        ts = TimeSeries(metric="test", role="companion", period="day", points=points)

        stats = calculate_statistics(ts)

        assert stats.min_value == -10.0
        assert stats.max_value == 0.0
        assert stats.avg_value == -5.0

    def test_large_values(self):
        """Handles large values correctly."""
        now = BASE_TIME
        points = [
            DataPoint(timestamp=now, value=1e10),
            DataPoint(timestamp=now + timedelta(hours=1), value=1e11),
        ]
        ts = TimeSeries(metric="test", role="companion", period="day", points=points)

        stats = calculate_statistics(ts)

        assert stats.min_value == 1e10
        assert stats.max_value == 1e11

    def test_small_decimal_values(self):
        """Handles small decimal values correctly."""
        now = BASE_TIME
        points = [
            DataPoint(timestamp=now, value=0.001),
            DataPoint(timestamp=now + timedelta(hours=1), value=0.002),
            DataPoint(timestamp=now + timedelta(hours=2), value=0.003),
        ]
        ts = TimeSeries(metric="test", role="companion", period="day", points=points)

        stats = calculate_statistics(ts)

        assert stats.min_value == pytest.approx(0.001)
        assert stats.max_value == pytest.approx(0.003)
        assert stats.avg_value == pytest.approx(0.002)
227
tests/charts/test_timeseries.py
Normal file
@@ -0,0 +1,227 @@
"""Tests for TimeSeries data class and loading."""

from datetime import datetime, timedelta

import pytest

from meshmon.charts import (
    DataPoint,
    TimeSeries,
    load_timeseries_from_db,
)
from meshmon.db import insert_metrics

BASE_TIME = datetime(2024, 1, 1, 0, 0, 0)


class TestDataPoint:
    """Tests for DataPoint dataclass."""

    def test_stores_timestamp_and_value(self):
        """Stores timestamp and value."""
        ts = BASE_TIME
        dp = DataPoint(timestamp=ts, value=3.85)

        assert dp.timestamp == ts
        assert dp.value == 3.85

    def test_value_types(self):
        """Accepts float and int values."""
        ts = BASE_TIME

        dp_float = DataPoint(timestamp=ts, value=3.85)
        assert dp_float.value == 3.85

        dp_int = DataPoint(timestamp=ts, value=100)
        assert dp_int.value == 100


class TestTimeSeries:
    """Tests for TimeSeries dataclass."""

    def test_stores_metadata(self):
        """Stores metric, role, period metadata."""
        ts = TimeSeries(
            metric="bat",
            role="repeater",
            period="day",
        )

        assert ts.metric == "bat"
        assert ts.role == "repeater"
        assert ts.period == "day"

    def test_empty_by_default(self):
        """Points list is empty by default."""
        ts = TimeSeries(metric="bat", role="repeater", period="day")

        assert ts.points == []
        assert ts.is_empty is True

    def test_timestamps_property(self, sample_timeseries):
        """timestamps property returns list of timestamps."""
        timestamps = sample_timeseries.timestamps

        assert len(timestamps) == len(sample_timeseries.points)
        assert all(isinstance(t, datetime) for t in timestamps)

    def test_values_property(self, sample_timeseries):
        """values property returns list of values."""
        values = sample_timeseries.values

        assert len(values) == len(sample_timeseries.points)
        assert all(isinstance(v, float) for v in values)

    def test_is_empty_false_with_data(self, sample_timeseries):
        """is_empty is False when points exist."""
        assert sample_timeseries.is_empty is False

    def test_is_empty_true_without_data(self, empty_timeseries):
        """is_empty is True when no points."""
        assert empty_timeseries.is_empty is True


class TestLoadTimeseriesFromDb:
    """Tests for load_timeseries_from_db function."""

    def test_loads_metric_data(self, initialized_db, configured_env):
        """Loads metric data from database."""
        base_ts = 1704067200
        insert_metrics(base_ts, "repeater", {"bat": 3850.0}, initialized_db)
        insert_metrics(base_ts + 900, "repeater", {"bat": 3860.0}, initialized_db)

        ts = load_timeseries_from_db(
            role="repeater",
            metric="bat",
            end_time=datetime.fromtimestamp(base_ts + 1000),
            lookback=timedelta(hours=1),
            period="day",
        )

        assert len(ts.points) == 2

    def test_filters_by_time_range(self, initialized_db, configured_env):
        """Only loads data within time range."""
        base_ts = 1704067200

        # Insert data outside and inside range
        insert_metrics(base_ts - 7200, "repeater", {"bat": 3800.0}, initialized_db)  # Outside
        insert_metrics(base_ts, "repeater", {"bat": 3850.0}, initialized_db)  # Inside
        insert_metrics(base_ts + 7200, "repeater", {"bat": 3900.0}, initialized_db)  # Outside

        ts = load_timeseries_from_db(
            role="repeater",
            metric="bat",
            end_time=datetime.fromtimestamp(base_ts + 1800),
            lookback=timedelta(hours=1),
            period="day",
        )

        assert len(ts.points) == 1
        assert ts.points[0].value == pytest.approx(3.85)  # Transformed to volts

    def test_returns_correct_metadata(self, initialized_db, configured_env):
        """Returned TimeSeries has correct metadata."""
        ts = load_timeseries_from_db(
            role="repeater",
            metric="bat",
            end_time=BASE_TIME,
            lookback=timedelta(hours=1),
            period="week",
        )

        assert ts.metric == "bat"
        assert ts.role == "repeater"
        assert ts.period == "week"

    def test_uses_prefetched_metrics(self, initialized_db, configured_env):
        """Can use pre-fetched metrics dict."""
        base_ts = 1704067200
        insert_metrics(base_ts, "repeater", {"bat": 3850.0}, initialized_db)

        # Pre-fetch metrics
        from meshmon.db import get_metrics_for_period
        all_metrics = get_metrics_for_period("repeater", base_ts - 3600, base_ts + 3600)

        ts = load_timeseries_from_db(
            role="repeater",
            metric="bat",
            end_time=datetime.fromtimestamp(base_ts + 3600),
            lookback=timedelta(hours=2),
            period="day",
            all_metrics=all_metrics,
        )

        assert len(ts.points) == 1

    def test_handles_missing_metric(self, initialized_db, configured_env):
        """Returns empty TimeSeries for missing metric."""
        ts = load_timeseries_from_db(
            role="repeater",
            metric="nonexistent_metric",
            end_time=BASE_TIME,
            lookback=timedelta(hours=1),
            period="day",
        )

        assert ts.is_empty

    def test_sorts_by_timestamp(self, initialized_db, configured_env):
        """Points are sorted by timestamp."""
        base_ts = 1704067200

        # Insert out of order
        insert_metrics(base_ts + 300, "repeater", {"bat": 3860.0}, initialized_db)
        insert_metrics(base_ts, "repeater", {"bat": 3850.0}, initialized_db)
        insert_metrics(base_ts + 150, "repeater", {"bat": 3855.0}, initialized_db)

        ts = load_timeseries_from_db(
            role="repeater",
            metric="bat",
            end_time=datetime.fromtimestamp(base_ts + 600),
            lookback=timedelta(hours=1),
            period="day",
        )

        timestamps = [p.timestamp for p in ts.points]
        assert timestamps == sorted(timestamps)

    def test_telemetry_temperature_converts_to_imperial(self, initialized_db, configured_env, monkeypatch):
        """Telemetry temperature converts from C to F when DISPLAY_UNIT_SYSTEM=imperial."""
        monkeypatch.setenv("DISPLAY_UNIT_SYSTEM", "imperial")
        import meshmon.env
        meshmon.env._config = None

        base_ts = 1704067200
        insert_metrics(base_ts, "repeater", {"telemetry.temperature.1": 0.0}, initialized_db)
        insert_metrics(base_ts + 900, "repeater", {"telemetry.temperature.1": 10.0}, initialized_db)

        ts = load_timeseries_from_db(
            role="repeater",
            metric="telemetry.temperature.1",
            end_time=datetime.fromtimestamp(base_ts + 1000),
            lookback=timedelta(hours=1),
            period="day",
        )

        assert [p.value for p in ts.points] == pytest.approx([32.0, 50.0])

    def test_telemetry_temperature_stays_metric(self, initialized_db, configured_env, monkeypatch):
        """Telemetry temperature remains Celsius when DISPLAY_UNIT_SYSTEM=metric."""
        monkeypatch.setenv("DISPLAY_UNIT_SYSTEM", "metric")
        import meshmon.env
        meshmon.env._config = None

        base_ts = 1704067200
        insert_metrics(base_ts, "repeater", {"telemetry.temperature.1": 0.0}, initialized_db)
        insert_metrics(base_ts + 900, "repeater", {"telemetry.temperature.1": 10.0}, initialized_db)

        ts = load_timeseries_from_db(
            role="repeater",
            metric="telemetry.temperature.1",
            end_time=datetime.fromtimestamp(base_ts + 1000),
            lookback=timedelta(hours=1),
            period="day",
        )

        assert [p.value for p in ts.points] == pytest.approx([0.0, 10.0])
254
tests/charts/test_transforms.py
Normal file
@@ -0,0 +1,254 @@
|
||||
"""Tests for chart data transformations (counter-to-rate, etc.)."""
|
||||
|
||||
from datetime import datetime, timedelta
|
||||
|
||||
import pytest
|
||||
|
||||
from meshmon.charts import (
|
||||
PERIOD_CONFIG,
|
||||
load_timeseries_from_db,
|
||||
)
|
||||
from meshmon.db import insert_metrics
|
||||
|
||||
BASE_TIME = datetime(2024, 1, 1, 0, 0, 0)
|
||||
|
||||
|
||||
class TestCounterToRateConversion:
|
||||
"""Tests for counter metric rate conversion."""
|
||||
|
||||
def test_calculates_rate_from_deltas(self, initialized_db, configured_env):
|
||||
"""Counter values are converted to rate of change."""
|
||||
base_ts = 1704067200 # 2024-01-01 00:00:00 UTC
|
||||
|
||||
# Insert increasing counter values (15 min apart)
|
||||
for i in range(5):
|
||||
ts = base_ts + (i * 900) # 15 minutes
|
||||
insert_metrics(ts, "repeater", {"nb_recv": float(i * 100)}, initialized_db)
|
||||
|
||||
ts = load_timeseries_from_db(
|
||||
role="repeater",
|
||||
metric="nb_recv",
|
||||
end_time=datetime.fromtimestamp(base_ts + 4 * 900),
|
||||
lookback=timedelta(hours=2),
|
||||
period="day",
|
||||
)
|
||||
|
||||
# Counter produces N-1 rate points from N values
|
||||
assert len(ts.points) == 4
|
||||
|
||||
# All rates should be positive (counter increasing)
|
||||
expected_rate = (100.0 / 900.0) * 60.0
|
||||
for p in ts.points:
|
||||
assert p.value == pytest.approx(expected_rate)
|
||||
|
||||
def test_handles_counter_reset(self, initialized_db, configured_env):
|
||||
"""Counter resets (negative delta) are skipped."""
|
||||
base_ts = 1704067200
|
||||
|
||||
# Insert values with a reset
|
||||
insert_metrics(base_ts, "repeater", {"nb_recv": 100.0}, initialized_db)
|
||||
insert_metrics(base_ts + 900, "repeater", {"nb_recv": 200.0}, initialized_db)
|
||||
insert_metrics(base_ts + 1800, "repeater", {"nb_recv": 50.0}, initialized_db) # Reset!
|
||||
insert_metrics(base_ts + 2700, "repeater", {"nb_recv": 150.0}, initialized_db)
|
||||
|
||||
ts = load_timeseries_from_db(
|
||||
role="repeater",
|
||||
metric="nb_recv",
|
||||
end_time=datetime.fromtimestamp(base_ts + 2700),
|
||||
lookback=timedelta(hours=1),
|
||||
period="day",
|
||||
)
|
||||
|
||||
# Reset point should be skipped, so fewer points
|
||||
assert len(ts.points) == 2 # Only valid deltas
|
||||
expected_rate = (100.0 / 900.0) * 60.0
|
||||
assert ts.points[0].timestamp == datetime.fromtimestamp(base_ts + 900)
|
||||
assert ts.points[1].timestamp == datetime.fromtimestamp(base_ts + 2700)
|
||||
assert ts.points[0].value == pytest.approx(expected_rate)
|
||||
assert ts.points[1].value == pytest.approx(expected_rate)
|
||||
|
||||
def test_counter_rate_short_interval_under_step_is_skipped(
|
||||
self,
|
||||
initialized_db,
|
||||
configured_env,
|
||||
monkeypatch,
|
||||
):
|
||||
"""Short sampling intervals are skipped to avoid rate spikes."""
|
||||
base_ts = 1704067200
|
||||
|
||||
monkeypatch.setenv("REPEATER_STEP", "900")
|
||||
import meshmon.env
|
||||
|
||||
meshmon.env._config = None
|
||||
|
||||
insert_metrics(base_ts, "repeater", {"nb_recv": 0.0}, initialized_db)
|
||||
insert_metrics(base_ts + 900, "repeater", {"nb_recv": 100.0}, initialized_db)
|
||||
insert_metrics(base_ts + 904, "repeater", {"nb_recv": 110.0}, initialized_db)
|
||||
insert_metrics(base_ts + 1800, "repeater", {"nb_recv": 200.0}, initialized_db)
|
||||
|
||||
ts = load_timeseries_from_db(
|
||||
role="repeater",
|
||||
metric="nb_recv",
|
||||
end_time=datetime.fromtimestamp(base_ts + 1800),
|
||||
lookback=timedelta(hours=2),
|
||||
period="day",
|
||||
)
|
||||
|
||||
expected_rate = (100.0 / 900.0) * 60.0
|
||||
assert len(ts.points) == 2
|
||||
assert ts.points[0].timestamp == datetime.fromtimestamp(base_ts + 900)
|
||||
assert ts.points[1].timestamp == datetime.fromtimestamp(base_ts + 1800)
|
||||
for point in ts.points:
|
||||
assert point.value == pytest.approx(expected_rate)
|
||||
|
||||
def test_applies_scale_factor(self, initialized_db, configured_env, monkeypatch):
|
||||
"""Counter rate is scaled (typically x60 for per-minute)."""
|
||||
base_ts = 1704067200
|
||||
|
||||
monkeypatch.setenv("REPEATER_STEP", "60")
|
||||
import meshmon.env
|
||||
|
||||
meshmon.env._config = None
|
||||
|
||||
# Insert values 60 seconds apart for easy math
|
||||
insert_metrics(base_ts, "repeater", {"nb_recv": 0.0}, initialized_db)
|
||||
insert_metrics(base_ts + 60, "repeater", {"nb_recv": 60.0}, initialized_db)
|
||||
|
||||
ts = load_timeseries_from_db(
|
||||
role="repeater",
|
||||
metric="nb_recv",
|
||||
end_time=datetime.fromtimestamp(base_ts + 60),
|
||||
lookback=timedelta(hours=1),
|
||||
period="day",
|
||||
)
|
||||
|
||||
# 60 packets in 60 seconds = 1/sec = 60/min with scale=60
|
||||
assert len(ts.points) == 1
|
||||
assert ts.points[0].value == pytest.approx(60.0)
|
||||
|
||||
def test_single_value_returns_empty(self, initialized_db, configured_env):
|
||||
"""Single counter value cannot compute rate."""
|
||||
base_ts = 1704067200
|
||||
insert_metrics(base_ts, "repeater", {"nb_recv": 100.0}, initialized_db)
|
||||
|
||||
ts = load_timeseries_from_db(
|
||||
role="repeater",
|
||||
metric="nb_recv",
|
||||
end_time=datetime.fromtimestamp(base_ts),
|
||||
lookback=timedelta(hours=1),
|
||||
period="day",
|
||||
)
|
||||
|
||||
assert ts.is_empty
|
||||
|
||||
|
||||
class TestGaugeValueTransform:
|
||||
"""Tests for gauge metric value transformation."""
|
||||
|
||||
def test_applies_voltage_transform(self, initialized_db, configured_env):
|
||||
"""Voltage transform converts mV to V."""
|
||||
base_ts = 1704067200
|
||||
|
||||
# Insert millivolt value
|
||||
insert_metrics(base_ts, "companion", {"battery_mv": 3850.0}, initialized_db)
|
||||
|
||||
ts = load_timeseries_from_db(
|
||||
role="companion",
|
||||
metric="battery_mv",
|
||||
end_time=datetime.fromtimestamp(base_ts),
|
||||
lookback=timedelta(hours=1),
|
||||
period="day",
|
||||
)
|
||||
|
||||
# Should be converted to volts
|
||||
assert len(ts.points) == 1
|
||||
assert ts.points[0].value == pytest.approx(3.85)
|
||||
|
||||
def test_no_transform_for_bat_pct(self, initialized_db, configured_env):
|
||||
"""Battery percentage has no transform."""
|
||||
base_ts = 1704067200
|
||||
insert_metrics(base_ts, "repeater", {"bat_pct": 75.0}, initialized_db)
|
||||
|
||||
ts = load_timeseries_from_db(
|
||||
role="repeater",
|
||||
metric="bat_pct",
|
||||
end_time=datetime.fromtimestamp(base_ts),
|
||||
lookback=timedelta(hours=1),
|
||||
period="day",
|
||||
)
|
||||
|
||||
assert ts.points[0].value == pytest.approx(75.0)
|
||||
|
||||
|
||||
class TestTimeBinning:
|
||||
"""Tests for time series aggregation/binning."""
|
||||
|
||||
def test_no_binning_for_day(self):
|
||||
"""Day period uses raw data (no binning)."""
|
||||
assert PERIOD_CONFIG["day"].bin_seconds is None
|
||||
|
||||
def test_30_min_bins_for_week(self):
|
||||
"""Week period uses 30-minute bins."""
|
||||
assert PERIOD_CONFIG["week"].bin_seconds == 1800
|
||||
|
||||
def test_2_hour_bins_for_month(self):
|
||||
"""Month period uses 2-hour bins."""
|
||||
assert PERIOD_CONFIG["month"].bin_seconds == 7200
|
||||
|
||||
def test_1_day_bins_for_year(self):
|
||||
"""Year period uses 1-day bins."""
|
||||
assert PERIOD_CONFIG["year"].bin_seconds == 86400
|
||||
|
||||
def test_binning_reduces_point_count(self, initialized_db, configured_env):
|
||||
"""Binning aggregates multiple points per bin."""
|
||||
base_ts = 1704067200
|
||||
|
||||
# Insert many points (one per minute for an hour)
|
||||
for i in range(60):
|
||||
ts = base_ts + (i * 60)
|
||||
insert_metrics(ts, "repeater", {"bat": 3850.0 + i}, initialized_db)
|
||||
|
||||
ts = load_timeseries_from_db(
|
||||
role="repeater",
|
||||
metric="bat",
|
||||
end_time=datetime.fromtimestamp(base_ts + 3600),
|
||||
lookback=timedelta(days=7), # Week period has 30-min bins
|
||||
period="week",
|
||||
)
|
||||
|
||||
# 60 points over 1 hour with 30-min bins = 2-3 bins
|
||||
assert len(ts.points) <= 3
|
||||
|
||||
|
||||
class TestEmptyData:
|
||||
"""Tests for handling empty/missing data."""
|
||||
|
||||
def test_empty_when_no_metric_data(self, initialized_db, configured_env):
|
||||
"""Returns empty TimeSeries when metric has no data."""
|
||||
ts = load_timeseries_from_db(
|
||||
role="repeater",
|
||||
metric="nonexistent",
|
||||
end_time=BASE_TIME,
|
||||
lookback=timedelta(days=1),
|
||||
period="day",
|
||||
)
|
||||
|
||||
assert ts.is_empty
|
||||
assert ts.metric == "nonexistent"
|
||||
assert ts.role == "repeater"
|
||||
assert ts.period == "day"
|
||||
|
||||
def test_empty_when_no_data_in_range(self, initialized_db, configured_env):
|
||||
"""Returns empty TimeSeries when no data in time range."""
|
||||
old_ts = 1000000 # Very old timestamp
|
||||
insert_metrics(old_ts, "repeater", {"bat": 3850.0}, initialized_db)
|
||||
|
||||
ts = load_timeseries_from_db(
|
||||
role="repeater",
|
||||
metric="bat",
|
||||
end_time=BASE_TIME,
|
||||
lookback=timedelta(hours=1),
|
||||
period="day",
|
||||
)
|
||||
|
||||
assert ts.is_empty
|
||||
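The binning behavior pinned down above can be illustrated with a small standalone sketch. `bin_points` below is a hypothetical helper (not the project's `load_timeseries_from_db`): it floors each timestamp to a fixed-width bin and averages the values per bin, which is why 60 per-minute samples collapse into a couple of 30-minute points.

```python
from collections import defaultdict


def bin_points(points, bin_seconds):
    """points: list of (unix_ts, value); returns sorted [(bin_start, mean)]."""
    bins = defaultdict(list)
    for ts, value in points:
        # Floor the timestamp to the start of its bin
        bins[ts - (ts % bin_seconds)].append(value)
    return sorted((start, sum(vals) / len(vals)) for start, vals in bins.items())


base_ts = 1704067200
samples = [(base_ts + i * 60, 3850.0 + i) for i in range(60)]
binned = bin_points(samples, 1800)
print(len(binned))  # -> 2 (60 one-minute samples fall into 2 half-hour bins)
```

The test's `<= 3` allowance exists because an hour of data can straddle up to three 30-minute bins when the start timestamp is not aligned to a bin boundary.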
tests/client/__init__.py (Normal file, 1 line)
@@ -0,0 +1 @@
"""Tests for MeshCore client wrapper."""
tests/client/conftest.py (Normal file, 99 lines)
@@ -0,0 +1,99 @@
"""Fixtures for MeshCore client tests."""

from unittest.mock import AsyncMock, MagicMock, patch

import pytest


@pytest.fixture
def mock_meshcore_module():
    """Mock the entire meshcore module at import level."""
    mock_mc = MagicMock()
    mock_mc.MeshCore = MagicMock()
    mock_mc.EventType = MagicMock()
    mock_mc.EventType.ERROR = "ERROR"

    with patch.dict("sys.modules", {"meshcore": mock_mc}):
        yield mock_mc


@pytest.fixture
def mock_meshcore_client():
    """Create mock MeshCore client with AsyncMock for coroutines."""
    mc = MagicMock()
    mc.commands = MagicMock()
    mc.contacts = {}

    # Async methods
    mc.disconnect = AsyncMock()
    mc.commands.send_appstart = MagicMock(return_value=AsyncMock())
    mc.commands.get_contacts = MagicMock(return_value=AsyncMock())
    mc.commands.req_status_sync = MagicMock(return_value=AsyncMock())

    # Synchronous methods
    mc.get_contact_by_name = MagicMock(return_value=None)
    mc.get_contact_by_key_prefix = MagicMock(return_value=None)

    return mc


@pytest.fixture
def mock_serial_port():
    """Mock pyserial for serial port detection."""
    mock_serial = MagicMock()
    mock_port = MagicMock()
    mock_port.device = "/dev/ttyACM0"
    mock_port.description = "Mock MeshCore Device"
    mock_serial.tools = MagicMock()
    mock_serial.tools.list_ports = MagicMock()
    mock_serial.tools.list_ports.comports = MagicMock(return_value=[mock_port])

    with patch.dict("sys.modules", {
        "serial": mock_serial,
        "serial.tools": mock_serial.tools,
        "serial.tools.list_ports": mock_serial.tools.list_ports,
    }):
        yield mock_serial


def make_mock_event(event_type: str, payload: dict = None):
    """Helper: Create a mock MeshCore event.

    Args:
        event_type: Event type name (e.g., "SELF_INFO", "ERROR")
        payload: Event payload dict

    Returns:
        Mock event object
    """
    event = MagicMock()
    event.type = MagicMock()
    event.type.name = event_type
    event.payload = payload if payload is not None else {}
    return event


@pytest.fixture
def sample_contact():
    """Sample contact object."""
    contact = MagicMock()
    contact.adv_name = "TestNode"
    contact.name = "Test"
    contact.pubkey_prefix = "abc123"
    contact.public_key = b"\x01\x02\x03\x04"
    contact.type = 1
    contact.flags = 0
    return contact


@pytest.fixture
def sample_contact_dict():
    """Sample contact as dictionary."""
    return {
        "adv_name": "TestNode",
        "name": "Test",
        "pubkey_prefix": "abc123",
        "public_key": b"\x01\x02\x03\x04",
        "type": 1,
        "flags": 0,
    }
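The `mock_meshcore_module` and `mock_serial_port` fixtures above both rely on `patch.dict("sys.modules", ...)` to substitute a fake module before any `import` resolves it. A standalone illustration of that technique, using a made-up module name (`fake_radio`) that is not installed anywhere:

```python
from unittest.mock import MagicMock, patch

fake = MagicMock()
fake.VERSION = "1.2.3"

with patch.dict("sys.modules", {"fake_radio": fake}):
    # The import machinery checks sys.modules first, so this resolves to
    # the mock rather than touching the filesystem.
    import fake_radio

    print(fake_radio.VERSION)  # -> 1.2.3

# Outside the context manager, patch.dict restores sys.modules, so the
# fake entry is gone again and a fresh import would fail.
```

This is why the fixtures `yield` inside the `with` block: the substitution must stay active for the duration of the test, then unwind automatically.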
tests/client/test_connect.py (Normal file, 449 lines)
@@ -0,0 +1,449 @@
"""Tests for MeshCore connection functions."""

from unittest.mock import AsyncMock, MagicMock

import pytest

from meshmon.meshcore_client import (
    _acquire_lock_async,
    auto_detect_serial_port,
    connect_from_env,
    connect_with_lock,
)


def _reset_config():
    import meshmon.env

    meshmon.env._config = None
    return meshmon.env.get_config()


class TestAutoDetectSerialPort:
    """Tests for auto_detect_serial_port function."""

    def test_prefers_acm_devices(self, mock_serial_port):
        """Prefers /dev/ttyACM* devices."""
        mock_port_acm = MagicMock()
        mock_port_acm.device = "/dev/ttyACM0"
        mock_port_acm.description = "ACM Device"

        mock_port_usb = MagicMock()
        mock_port_usb.device = "/dev/ttyUSB0"
        mock_port_usb.description = "USB Device"

        mock_serial_port.tools.list_ports.comports.return_value = [mock_port_usb, mock_port_acm]

        result = auto_detect_serial_port()

        assert result == "/dev/ttyACM0"

    def test_falls_back_to_usb(self, mock_serial_port):
        """Falls back to /dev/ttyUSB* if no ACM."""
        mock_port = MagicMock()
        mock_port.device = "/dev/ttyUSB0"
        mock_port.description = "USB Device"

        mock_serial_port.tools.list_ports.comports.return_value = [mock_port]

        result = auto_detect_serial_port()

        assert result == "/dev/ttyUSB0"

    def test_falls_back_to_first_available(self, mock_serial_port):
        """Falls back to first available port."""
        mock_port = MagicMock()
        mock_port.device = "/dev/ttyS0"
        mock_port.description = "Serial Port"

        mock_serial_port.tools.list_ports.comports.return_value = [mock_port]

        result = auto_detect_serial_port()

        assert result == "/dev/ttyS0"

    def test_returns_none_when_no_ports(self, mock_serial_port):
        """Returns None when no ports available."""
        mock_serial_port.tools.list_ports.comports.return_value = []

        result = auto_detect_serial_port()

        assert result is None

    def test_handles_import_error(self, monkeypatch):
        """Returns None when pyserial not installed."""
        import builtins

        real_import = builtins.__import__

        def mock_import(name, *args, **kwargs):
            if name in {"serial", "serial.tools.list_ports"}:
                raise ImportError("No module named 'serial'")
            return real_import(name, *args, **kwargs)

        monkeypatch.setattr(builtins, "__import__", mock_import)

        assert auto_detect_serial_port() is None


class TestConnectFromEnv:
    """Tests for connect_from_env function."""

    @pytest.mark.asyncio
    async def test_returns_none_when_meshcore_unavailable(self, configured_env, monkeypatch):
        """Returns None when meshcore library not available."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", False)

        result = await connect_from_env()

        assert result is None

    @pytest.mark.asyncio
    async def test_serial_connection(self, configured_env, monkeypatch, mock_serial_port):
        """Connects via serial when configured."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)
        monkeypatch.setenv("MESH_TRANSPORT", "serial")
        monkeypatch.setenv("MESH_SERIAL_PORT", "/dev/ttyACM0")
        monkeypatch.setenv("MESH_SERIAL_BAUD", "57600")
        monkeypatch.setenv("MESH_DEBUG", "1")

        _reset_config()

        mock_client = MagicMock()
        mock_create = AsyncMock(return_value=mock_client)
        mock_meshcore = MagicMock()
        mock_meshcore.create_serial = mock_create

        monkeypatch.setattr("meshmon.meshcore_client.MeshCore", mock_meshcore)

        result = await connect_from_env()

        assert result is mock_client
        mock_create.assert_called_once_with("/dev/ttyACM0", 57600, debug=True)

    @pytest.mark.asyncio
    async def test_tcp_connection(self, configured_env, monkeypatch):
        """Connects via TCP when configured."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)
        monkeypatch.setenv("MESH_TRANSPORT", "tcp")
        monkeypatch.setenv("MESH_TCP_HOST", "localhost")
        monkeypatch.setenv("MESH_TCP_PORT", "4403")

        _reset_config()

        mock_client = MagicMock()
        mock_create = AsyncMock(return_value=mock_client)
        mock_meshcore = MagicMock()
        mock_meshcore.create_tcp = mock_create

        monkeypatch.setattr("meshmon.meshcore_client.MeshCore", mock_meshcore)

        result = await connect_from_env()

        assert result is mock_client
        mock_create.assert_called_once_with("localhost", 4403)

    @pytest.mark.asyncio
    async def test_unknown_transport(self, configured_env, monkeypatch):
        """Returns None for unknown transport."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)
        monkeypatch.setenv("MESH_TRANSPORT", "unknown")

        _reset_config()

        result = await connect_from_env()

        assert result is None

    @pytest.mark.asyncio
    async def test_handles_connection_error(self, configured_env, monkeypatch, mock_serial_port):
        """Returns None on connection error."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)
        monkeypatch.setenv("MESH_TRANSPORT", "serial")
        monkeypatch.setenv("MESH_SERIAL_PORT", "/dev/ttyACM0")

        _reset_config()

        mock_create = AsyncMock(side_effect=Exception("Connection failed"))
        mock_meshcore = MagicMock()
        mock_meshcore.create_serial = mock_create

        monkeypatch.setattr("meshmon.meshcore_client.MeshCore", mock_meshcore)

        result = await connect_from_env()

        assert result is None
        mock_create.assert_called_once()

    @pytest.mark.asyncio
    async def test_ble_connection(self, configured_env, monkeypatch):
        """Connects via BLE when configured."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)
        monkeypatch.setenv("MESH_TRANSPORT", "ble")
        monkeypatch.setenv("MESH_BLE_ADDR", "AA:BB:CC:DD:EE:FF")
        monkeypatch.setenv("MESH_BLE_PIN", "123456")

        _reset_config()

        mock_client = MagicMock()
        mock_create = AsyncMock(return_value=mock_client)
        mock_meshcore = MagicMock()
        mock_meshcore.create_ble = mock_create

        monkeypatch.setattr("meshmon.meshcore_client.MeshCore", mock_meshcore)

        result = await connect_from_env()

        assert result is mock_client
        mock_create.assert_called_once_with("AA:BB:CC:DD:EE:FF", pin="123456")

    @pytest.mark.asyncio
    async def test_ble_missing_address(self, configured_env, monkeypatch):
        """Returns None when BLE address not configured."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)
        monkeypatch.setenv("MESH_TRANSPORT", "ble")
        # Don't set MESH_BLE_ADDR

        _reset_config()

        result = await connect_from_env()

        assert result is None

    @pytest.mark.asyncio
    async def test_serial_auto_detect(self, configured_env, monkeypatch, mock_serial_port):
        """Auto-detects serial port when not configured."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)
        monkeypatch.setenv("MESH_TRANSPORT", "serial")
        # Don't set MESH_SERIAL_PORT to trigger auto-detection

        _reset_config()

        # Set up mock port detection
        mock_port = MagicMock()
        mock_port.device = "/dev/ttyACM0"
        mock_serial_port.tools.list_ports.comports.return_value = [mock_port]

        mock_client = MagicMock()
        mock_create = AsyncMock(return_value=mock_client)
        mock_meshcore = MagicMock()
        mock_meshcore.create_serial = mock_create

        monkeypatch.setattr("meshmon.meshcore_client.MeshCore", mock_meshcore)

        result = await connect_from_env()

        assert result is mock_client
        mock_create.assert_called_once_with("/dev/ttyACM0", 115200, debug=False)

    @pytest.mark.asyncio
    async def test_serial_auto_detect_fails(self, configured_env, monkeypatch, mock_serial_port):
        """Returns None when serial auto-detection fails."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)
        monkeypatch.setenv("MESH_TRANSPORT", "serial")
        # Don't set MESH_SERIAL_PORT to trigger auto-detection

        _reset_config()

        # No ports available
        mock_serial_port.tools.list_ports.comports.return_value = []

        result = await connect_from_env()

        assert result is None


class TestConnectWithLock:
    """Tests for connect_with_lock context manager."""

    @pytest.mark.asyncio
    async def test_yields_client_on_success(self, configured_env, monkeypatch, mock_serial_port):
        """Yields connected client on success."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)
        monkeypatch.setenv("MESH_TRANSPORT", "serial")
        monkeypatch.setenv("MESH_SERIAL_PORT", "/dev/ttyACM0")

        _reset_config()

        mock_client = MagicMock()
        mock_client.disconnect = AsyncMock()
        mock_create = AsyncMock(return_value=mock_client)
        mock_meshcore = MagicMock()
        mock_meshcore.create_serial = mock_create

        monkeypatch.setattr("meshmon.meshcore_client.MeshCore", mock_meshcore)

        async with connect_with_lock() as mc:
            assert mc is mock_client

        # Should disconnect when exiting context
        mock_client.disconnect.assert_called_once()

    @pytest.mark.asyncio
    async def test_yields_none_on_connection_failure(self, configured_env, monkeypatch, mock_serial_port):
        """Yields None when connection fails."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)
        monkeypatch.setenv("MESH_TRANSPORT", "serial")
        monkeypatch.setenv("MESH_SERIAL_PORT", "/dev/ttyACM0")

        _reset_config()

        mock_create = AsyncMock(side_effect=Exception("Connection failed"))
        mock_meshcore = MagicMock()
        mock_meshcore.create_serial = mock_create

        monkeypatch.setattr("meshmon.meshcore_client.MeshCore", mock_meshcore)

        async with connect_with_lock() as mc:
            assert mc is None

    @pytest.mark.asyncio
    async def test_acquires_lock_for_serial(self, configured_env, monkeypatch, mock_serial_port):
        """Acquires lock file for serial transport."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)
        monkeypatch.setenv("MESH_TRANSPORT", "serial")
        monkeypatch.setenv("MESH_SERIAL_PORT", "/dev/ttyACM0")

        cfg = _reset_config()

        mock_client = MagicMock()
        mock_client.disconnect = AsyncMock()
        mock_create = AsyncMock(return_value=mock_client)
        mock_meshcore = MagicMock()
        mock_meshcore.create_serial = mock_create

        monkeypatch.setattr("meshmon.meshcore_client.MeshCore", mock_meshcore)

        async with connect_with_lock():
            # Lock file should exist while connected
            lock_path = cfg.state_dir / "serial.lock"
            assert lock_path.exists()

    @pytest.mark.asyncio
    async def test_no_lock_for_tcp(self, configured_env, monkeypatch):
        """Does not acquire lock for TCP transport."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)
        monkeypatch.setenv("MESH_TRANSPORT", "tcp")
        monkeypatch.setenv("MESH_TCP_HOST", "localhost")
        monkeypatch.setenv("MESH_TCP_PORT", "4403")

        cfg = _reset_config()

        mock_client = MagicMock()
        mock_client.disconnect = AsyncMock()
        mock_create = AsyncMock(return_value=mock_client)
        mock_meshcore = MagicMock()
        mock_meshcore.create_tcp = mock_create

        monkeypatch.setattr("meshmon.meshcore_client.MeshCore", mock_meshcore)

        lock_path = cfg.state_dir / "serial.lock"

        async with connect_with_lock():
            # Lock file should not exist for TCP
            assert not lock_path.exists()

    @pytest.mark.asyncio
    async def test_handles_disconnect_error(self, configured_env, monkeypatch, mock_serial_port):
        """Handles disconnect error gracefully."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)
        monkeypatch.setenv("MESH_TRANSPORT", "serial")
        monkeypatch.setenv("MESH_SERIAL_PORT", "/dev/ttyACM0")

        _reset_config()

        mock_client = MagicMock()
        mock_client.disconnect = AsyncMock(side_effect=Exception("Disconnect error"))
        mock_create = AsyncMock(return_value=mock_client)
        mock_meshcore = MagicMock()
        mock_meshcore.create_serial = mock_create

        monkeypatch.setattr("meshmon.meshcore_client.MeshCore", mock_meshcore)

        # Should not raise even when disconnect fails
        async with connect_with_lock() as mc:
            assert mc is mock_client

        # Disconnect was still called
        mock_client.disconnect.assert_called_once()

    @pytest.mark.asyncio
    async def test_releases_lock_on_failure(self, configured_env, monkeypatch, mock_serial_port):
        """Releases lock even when connection fails."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)
        monkeypatch.setenv("MESH_TRANSPORT", "serial")
        monkeypatch.setenv("MESH_SERIAL_PORT", "/dev/ttyACM0")

        cfg = _reset_config()

        mock_create = AsyncMock(side_effect=Exception("Connection failed"))
        mock_meshcore = MagicMock()
        mock_meshcore.create_serial = mock_create

        monkeypatch.setattr("meshmon.meshcore_client.MeshCore", mock_meshcore)

        async with connect_with_lock() as mc:
            assert mc is None

        # Lock should be released after exiting context
        # We can verify by acquiring it again without timeout
        lock_path = cfg.state_dir / "serial.lock"
        if lock_path.exists():
            import fcntl
            with open(lock_path, "a") as f:
                fcntl.flock(f.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)


class TestAcquireLockAsync:
    """Tests for _acquire_lock_async function."""

    @pytest.mark.asyncio
    async def test_acquires_lock_immediately(self, tmp_path):
        """Acquires lock when not held by others."""
        lock_file = tmp_path / "test.lock"

        with open(lock_file, "w") as f:
            await _acquire_lock_async(f, timeout=1.0)
            # If we get here, lock was acquired

    @pytest.mark.asyncio
    async def test_times_out_when_locked(self, tmp_path):
        """Times out when lock held by another."""
        import fcntl

        lock_file = tmp_path / "test.lock"

        # Hold the lock in this process
        holder = open(lock_file, "w")  # noqa: SIM115 - must stay open for lock
        fcntl.flock(holder.fileno(), fcntl.LOCK_EX)

        try:
            # Try to acquire with different file handle
            with open(lock_file, "a") as f, pytest.raises(TimeoutError):
                await _acquire_lock_async(f, timeout=0.2, poll_interval=0.05)
        finally:
            holder.close()

    @pytest.mark.asyncio
    async def test_waits_for_lock_release(self, tmp_path):
        """Waits and acquires when lock released."""
        import asyncio
        import fcntl

        lock_file = tmp_path / "test.lock"

        holder = open(lock_file, "w")  # noqa: SIM115 - must stay open for lock
        fcntl.flock(holder.fileno(), fcntl.LOCK_EX)

        async def release_later():
            await asyncio.sleep(0.1)
            holder.close()

        # Start release task
        release_task = asyncio.create_task(release_later())

        # Try to acquire - should succeed after release
        with open(lock_file, "a") as f:
            await _acquire_lock_async(f, timeout=2.0, poll_interval=0.05)

        await release_task
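The `_acquire_lock_async` tests above exercise a common pattern: poll a non-blocking `flock` until it succeeds or a deadline passes, sleeping between attempts so the event loop stays responsive. The project's actual implementation is not shown in this diff; `acquire_lock_async` below is an illustrative reimplementation of the semantics the tests assert (Unix-only, since it relies on `fcntl`).

```python
import asyncio
import fcntl
import time


async def acquire_lock_async(f, timeout: float = 10.0, poll_interval: float = 0.1):
    """Acquire an exclusive flock on open file f, polling until timeout."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            # Non-blocking attempt; raises BlockingIOError while another
            # open file description holds the exclusive lock.
            fcntl.flock(f.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
            return
        except BlockingIOError:
            if time.monotonic() >= deadline:
                raise TimeoutError("lock not acquired within timeout")
            # Yield to the event loop instead of blocking in flock(LOCK_EX)
            await asyncio.sleep(poll_interval)
```

Note that `flock` locks belong to the open file description, not the process, which is why the tests can provoke a conflict by opening the same path twice within one process.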
tests/client/test_contacts.py (Normal file, 271 lines)
@@ -0,0 +1,271 @@
"""Tests for contact lookup functions."""

from types import SimpleNamespace
from unittest.mock import MagicMock


class TestGetContactByName:
    """Tests for get_contact_by_name function."""

    def test_returns_contact_when_found(self, mock_meshcore_client):
        """Returns contact when found by name."""
        from meshmon.meshcore_client import get_contact_by_name

        contact = MagicMock()
        contact.adv_name = "TestNode"
        mock_meshcore_client.get_contact_by_name.return_value = contact

        result = get_contact_by_name(mock_meshcore_client, "TestNode")

        assert result == contact
        mock_meshcore_client.get_contact_by_name.assert_called_once_with("TestNode")

    def test_returns_none_when_not_found(self, mock_meshcore_client):
        """Returns None when contact not found."""
        from meshmon.meshcore_client import get_contact_by_name

        mock_meshcore_client.get_contact_by_name.return_value = None

        result = get_contact_by_name(mock_meshcore_client, "NonExistent")

        assert result is None
        mock_meshcore_client.get_contact_by_name.assert_called_once_with("NonExistent")

    def test_returns_none_when_method_not_available(self):
        """Returns None when get_contact_by_name method not available."""
        from meshmon.meshcore_client import get_contact_by_name

        mc = MagicMock(spec=[])  # No methods

        result = get_contact_by_name(mc, "TestNode")

        assert result is None

    def test_returns_none_on_exception(self, mock_meshcore_client):
        """Returns None when method raises exception."""
        from meshmon.meshcore_client import get_contact_by_name

        mock_meshcore_client.get_contact_by_name.side_effect = RuntimeError("Connection lost")

        result = get_contact_by_name(mock_meshcore_client, "TestNode")

        assert result is None
        mock_meshcore_client.get_contact_by_name.assert_called_once_with("TestNode")


class TestGetContactByKeyPrefix:
    """Tests for get_contact_by_key_prefix function."""

    def test_returns_contact_when_found(self, mock_meshcore_client):
        """Returns contact when found by key prefix."""
        from meshmon.meshcore_client import get_contact_by_key_prefix

        contact = MagicMock()
        contact.pubkey_prefix = "abc123"
        mock_meshcore_client.get_contact_by_key_prefix.return_value = contact

        result = get_contact_by_key_prefix(mock_meshcore_client, "abc123")

        assert result == contact
        mock_meshcore_client.get_contact_by_key_prefix.assert_called_once_with("abc123")

    def test_returns_none_when_not_found(self, mock_meshcore_client):
        """Returns None when contact not found."""
        from meshmon.meshcore_client import get_contact_by_key_prefix

        mock_meshcore_client.get_contact_by_key_prefix.return_value = None

        result = get_contact_by_key_prefix(mock_meshcore_client, "xyz789")

        assert result is None
        mock_meshcore_client.get_contact_by_key_prefix.assert_called_once_with("xyz789")

    def test_returns_none_when_method_not_available(self):
        """Returns None when get_contact_by_key_prefix method not available."""
        from meshmon.meshcore_client import get_contact_by_key_prefix

        mc = MagicMock(spec=[])  # No methods

        result = get_contact_by_key_prefix(mc, "abc123")

        assert result is None

    def test_returns_none_on_exception(self, mock_meshcore_client):
        """Returns None when method raises exception."""
        from meshmon.meshcore_client import get_contact_by_key_prefix

        mock_meshcore_client.get_contact_by_key_prefix.side_effect = RuntimeError("Connection lost")

        result = get_contact_by_key_prefix(mock_meshcore_client, "abc123")

        assert result is None
        mock_meshcore_client.get_contact_by_key_prefix.assert_called_once_with("abc123")


class TestExtractContactInfo:
    """Tests for extract_contact_info function."""

    def test_extracts_from_dict_contact(self):
        """Extracts info from dict-based contact."""
        from meshmon.meshcore_client import extract_contact_info

        contact = {
            "adv_name": "TestNode",
            "name": "test",
            "pubkey_prefix": "abc123",
            "public_key": "abc123def456",
            "type": 1,
            "flags": 0,
        }

        result = extract_contact_info(contact)

        assert result["adv_name"] == "TestNode"
        assert result["name"] == "test"
        assert result["pubkey_prefix"] == "abc123"
        assert result["public_key"] == "abc123def456"
        assert result["type"] == 1
        assert result["flags"] == 0

    def test_extracts_from_object_contact(self):
        """Extracts info from object-based contact."""
        from meshmon.meshcore_client import extract_contact_info

        contact = SimpleNamespace(
            adv_name="TestNode",
            name="test",
            pubkey_prefix="abc123",
            public_key="abc123def456",
            type=1,
            flags=0,
        )

        result = extract_contact_info(contact)

        assert result["adv_name"] == "TestNode"
        assert result["name"] == "test"
        assert result["pubkey_prefix"] == "abc123"

    def test_converts_bytes_to_hex(self):
        """Converts bytes values to hex strings."""
        from meshmon.meshcore_client import extract_contact_info

        contact = {
            "adv_name": "TestNode",
            "public_key": bytes.fromhex("abc123def456"),
        }

        result = extract_contact_info(contact)

        assert result["adv_name"] == "TestNode"
        assert result["public_key"] == "abc123def456"

    def test_converts_bytes_from_object(self):
        """Converts bytes values from object attributes to hex."""
        from meshmon.meshcore_client import extract_contact_info

        contact = SimpleNamespace(
            adv_name="TestNode",
            public_key=bytes.fromhex("deadbeef"),
        )

        result = extract_contact_info(contact)

        assert result["adv_name"] == "TestNode"
        assert result["public_key"] == "deadbeef"

    def test_skips_none_values(self):
        """Skips None values in contact."""
        from meshmon.meshcore_client import extract_contact_info

        contact = {
            "adv_name": "TestNode",
            "name": None,
            "pubkey_prefix": None,
        }

        result = extract_contact_info(contact)

        assert result["adv_name"] == "TestNode"
        assert "name" not in result
        assert "pubkey_prefix" not in result

    def test_skips_missing_attributes(self):
        """Skips missing attributes in dict contact."""
        from meshmon.meshcore_client import extract_contact_info

        contact = {"adv_name": "TestNode"}

        result = extract_contact_info(contact)

        assert result == {"adv_name": "TestNode"}

    def test_empty_contact_returns_empty_dict(self):
        """Empty contact returns empty dict."""
        from meshmon.meshcore_client import extract_contact_info

        result = extract_contact_info({})

        assert result == {}


class TestListContactsSummary:
    """Tests for list_contacts_summary function."""

    def test_returns_list_of_contact_info(self):
        """Returns list of extracted contact info."""
        from meshmon.meshcore_client import list_contacts_summary

        contacts = [
            {"adv_name": "Node1", "type": 1},
            {"adv_name": "Node2", "type": 2},
            {"adv_name": "Node3", "type": 1},
        ]

        result = list_contacts_summary(contacts)

        assert len(result) == 3
        assert result[0]["adv_name"] == "Node1"
        assert result[1]["adv_name"] == "Node2"
        assert result[2]["adv_name"] == "Node3"

    def test_handles_mixed_contact_types(self):
        """Handles mix of dict and object contacts."""
        from meshmon.meshcore_client import list_contacts_summary

        obj_contact = SimpleNamespace(adv_name="ObjectNode")

        contacts = [
            {"adv_name": "DictNode"},
            obj_contact,
        ]

        result = list_contacts_summary(contacts)

        assert len(result) == 2
        assert result[0]["adv_name"] == "DictNode"
        assert result[1]["adv_name"] == "ObjectNode"

    def test_empty_list_returns_empty_list(self):
        """Empty contacts list returns empty list."""
        from meshmon.meshcore_client import list_contacts_summary

        result = list_contacts_summary([])

        assert result == []

    def test_preserves_order(self):
        """Preserves contact order in output."""
        from meshmon.meshcore_client import list_contacts_summary

        contacts = [
            {"adv_name": "Zebra"},
            {"adv_name": "Alpha"},
            {"adv_name": "Middle"},
        ]

        result = list_contacts_summary(contacts)

        assert result[0]["adv_name"] == "Zebra"
        assert result[1]["adv_name"] == "Alpha"
        assert result[2]["adv_name"] == "Middle"
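The behavior `TestExtractContactInfo` pins down can be summarized in one small function. `extract_contact_info_sketch` below is a hypothetical stand-in for the project's `extract_contact_info`, written only to make the contract concrete: read a fixed field set from either a dict or an attribute-style contact, skip missing and `None` values, and render `bytes` as hex strings.

```python
# Field names taken from the assertions in the tests above
_FIELDS = ("adv_name", "name", "pubkey_prefix", "public_key", "type", "flags")


def extract_contact_info_sketch(contact):
    """Normalize a dict- or object-style contact into a plain dict."""
    result = {}
    for field in _FIELDS:
        if isinstance(contact, dict):
            value = contact.get(field)
        else:
            value = getattr(contact, field, None)
        if value is None:
            continue  # tests require None/missing fields to be skipped
        if isinstance(value, bytes):
            value = value.hex()  # tests require bytes rendered as hex
        result[field] = value
    return result


print(extract_contact_info_sketch({"adv_name": "TestNode", "public_key": b"\xde\xad\xbe\xef"}))
# -> {'adv_name': 'TestNode', 'public_key': 'deadbeef'}
```

Normalizing both shapes through one code path is what lets `list_contacts_summary` accept a mixed list of dicts and objects, as `test_handles_mixed_contact_types` checks.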
tests/client/test_meshcore_available.py (Normal file, 245 lines)
@@ -0,0 +1,245 @@
"""Tests for MESHCORE_AVAILABLE flag handling."""

from unittest.mock import AsyncMock, MagicMock

import pytest


class TestMeshcoreAvailableTrue:
    """Tests when MESHCORE_AVAILABLE is True."""

    @pytest.mark.asyncio
    async def test_run_command_executes_when_available(self, mock_meshcore_client, monkeypatch):
        """run_command executes command when meshcore available."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)

        from meshmon.meshcore_client import run_command

        from .conftest import make_mock_event

        event = make_mock_event("SELF_INFO", {"bat": 3850})

        async def cmd():
            return event

        success, event_type, payload, error = await run_command(
            mock_meshcore_client, cmd(), "test"
        )

        assert success is True
        assert event_type == "SELF_INFO"
        assert payload == {"bat": 3850}
        assert error is None

    @pytest.mark.asyncio
    async def test_connect_from_env_attempts_connection(self, monkeypatch, tmp_path):
        """connect_from_env attempts to connect when meshcore available."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)

        # Mock MeshCore.create_serial
        mock_mc = MagicMock()
        mock_meshcore = MagicMock()
        mock_meshcore.create_serial = AsyncMock(return_value=mock_mc)
        monkeypatch.setattr("meshmon.meshcore_client.MeshCore", mock_meshcore)

        # Configure environment
        monkeypatch.setenv("MESH_TRANSPORT", "serial")
        monkeypatch.setenv("MESH_SERIAL_PORT", "/dev/ttyACM0")
        monkeypatch.setenv("STATE_DIR", str(tmp_path))
        monkeypatch.setenv("OUT_DIR", str(tmp_path / "out"))

        import meshmon.env
        meshmon.env._config = None

        from meshmon.meshcore_client import connect_from_env

        result = await connect_from_env()

        assert result == mock_mc
        mock_meshcore.create_serial.assert_called_once_with("/dev/ttyACM0", 115200, debug=False)


class TestMeshcoreAvailableFalse:
    """Tests when MESHCORE_AVAILABLE is False."""

    @pytest.mark.asyncio
    async def test_run_command_returns_failure(self, mock_meshcore_client, monkeypatch):
        """run_command returns failure when meshcore not available."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", False)

        from meshmon.meshcore_client import run_command

        async def cmd():
            return None

        # Create the coroutine
        cmd_coro = cmd()

        success, event_type, payload, error = await run_command(
            mock_meshcore_client, cmd_coro, "test"
        )

        # Close the coroutine to prevent "never awaited" warning
        # since run_command returns early when MESHCORE_AVAILABLE=False
        cmd_coro.close()

        assert success is False
        assert event_type is None
        assert payload is None
        assert "not available" in error

    @pytest.mark.asyncio
    async def test_connect_from_env_returns_none(self, monkeypatch, tmp_path):
        """connect_from_env returns None when meshcore not available."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", False)

        # Configure environment
        monkeypatch.setenv("STATE_DIR", str(tmp_path))
        monkeypatch.setenv("OUT_DIR", str(tmp_path / "out"))

        import meshmon.env
        meshmon.env._config = None

        from meshmon.meshcore_client import connect_from_env

        result = await connect_from_env()

        assert result is None


class TestMeshcoreImportFallback:
    """Tests for import fallback behavior."""

    def test_meshcore_none_when_import_fails(self, monkeypatch):
        """MeshCore is None when import fails."""
        import builtins
        import importlib

        import meshmon.meshcore_client as module

        real_import = builtins.__import__
|
||||
|
||||
def mock_import(name, *args, **kwargs):
|
||||
if name == "meshcore":
|
||||
raise ImportError("No module named 'meshcore'")
|
||||
return real_import(name, *args, **kwargs)
|
||||
|
||||
monkeypatch.setattr(builtins, "__import__", mock_import)
|
||||
|
||||
module = importlib.reload(module)
|
||||
|
||||
assert module.MESHCORE_AVAILABLE is False
|
||||
assert module.MeshCore is None
|
||||
assert module.EventType is None
|
||||
|
||||
monkeypatch.setattr(builtins, "__import__", real_import)
|
||||
importlib.reload(module)
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_event_type_check_handles_none(self, monkeypatch):
|
||||
"""EventType checks handle None gracefully."""
|
||||
monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)
|
||||
monkeypatch.setattr("meshmon.meshcore_client.EventType", None)
|
||||
|
||||
from meshmon.meshcore_client import run_command
|
||||
|
||||
from .conftest import make_mock_event
|
||||
|
||||
event = make_mock_event("SELF_INFO", {"bat": 3850})
|
||||
|
||||
async def cmd():
|
||||
return event
|
||||
|
||||
success, event_type, payload, error = await run_command(
|
||||
MagicMock(), cmd(), "test"
|
||||
)
|
||||
|
||||
assert success is True
|
||||
assert event_type == "SELF_INFO"
|
||||
assert payload == {"bat": 3850}
|
||||
assert error is None
|
||||
|
||||
|
||||
class TestContactFunctionsWithUnavailableMeshcore:
|
||||
"""Tests that contact functions work regardless of MESHCORE_AVAILABLE."""
|
||||
|
||||
def test_get_contact_by_name_works_when_unavailable(self, mock_meshcore_client, monkeypatch):
|
||||
"""get_contact_by_name works even when meshcore unavailable."""
|
||||
# Contact functions don't check MESHCORE_AVAILABLE - they work with any client
|
||||
monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", False)
|
||||
|
||||
from meshmon.meshcore_client import get_contact_by_name
|
||||
|
||||
contact = MagicMock()
|
||||
contact.adv_name = "TestNode"
|
||||
mock_meshcore_client.get_contact_by_name.return_value = contact
|
||||
|
||||
result = get_contact_by_name(mock_meshcore_client, "TestNode")
|
||||
|
||||
assert result == contact
|
||||
|
||||
def test_get_contact_by_key_prefix_works_when_unavailable(
|
||||
self, mock_meshcore_client, monkeypatch
|
||||
):
|
||||
"""get_contact_by_key_prefix works even when meshcore unavailable."""
|
||||
monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", False)
|
||||
|
||||
from meshmon.meshcore_client import get_contact_by_key_prefix
|
||||
|
||||
contact = MagicMock()
|
||||
contact.pubkey_prefix = "abc123"
|
||||
mock_meshcore_client.get_contact_by_key_prefix.return_value = contact
|
||||
|
||||
result = get_contact_by_key_prefix(mock_meshcore_client, "abc123")
|
||||
|
||||
assert result == contact
|
||||
|
||||
def test_extract_contact_info_works_when_unavailable(self, monkeypatch):
|
||||
"""extract_contact_info works even when meshcore unavailable."""
|
||||
monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", False)
|
||||
|
||||
from meshmon.meshcore_client import extract_contact_info
|
||||
|
||||
contact = {"adv_name": "TestNode", "type": 1}
|
||||
|
||||
result = extract_contact_info(contact)
|
||||
|
||||
assert result["adv_name"] == "TestNode"
|
||||
assert result["type"] == 1
|
||||
|
||||
def test_list_contacts_summary_works_when_unavailable(self, monkeypatch):
|
||||
"""list_contacts_summary works even when meshcore unavailable."""
|
||||
monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", False)
|
||||
|
||||
from meshmon.meshcore_client import list_contacts_summary
|
||||
|
||||
contacts = [{"adv_name": "Node1"}, {"adv_name": "Node2"}]
|
||||
|
||||
result = list_contacts_summary(contacts)
|
||||
|
||||
assert len(result) == 2
|
||||
assert result[0]["adv_name"] == "Node1"
|
||||
|
||||
|
||||
class TestAutoDetectWithUnavailablePyserial:
|
||||
"""Tests for auto_detect_serial_port when pyserial unavailable."""
|
||||
|
||||
def test_returns_none_when_pyserial_not_installed(self, monkeypatch):
|
||||
"""Returns None when pyserial not installed."""
|
||||
# Mock the import to fail
|
||||
import builtins
|
||||
|
||||
real_import = builtins.__import__
|
||||
|
||||
def mock_import(name, *args, **kwargs):
|
||||
if name == "serial.tools.list_ports" or name == "serial":
|
||||
raise ImportError("No module named 'serial'")
|
||||
return real_import(name, *args, **kwargs)
|
||||
|
||||
monkeypatch.setattr(builtins, "__import__", mock_import)
|
||||
|
||||
from meshmon.meshcore_client import auto_detect_serial_port
|
||||
|
||||
result = auto_detect_serial_port()
|
||||
|
||||
assert result is None
|
||||
237 tests/client/test_run_command.py Normal file
@@ -0,0 +1,237 @@
"""Tests for run_command function."""

from unittest.mock import MagicMock

import pytest

from meshmon.meshcore_client import run_command

from .conftest import make_mock_event


class TestRunCommandSuccess:
    """Tests for successful command execution."""

    @pytest.mark.asyncio
    async def test_returns_success_tuple(self, mock_meshcore_client, monkeypatch):
        """Returns (True, event_type, payload, None) on success."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)

        event = make_mock_event("SELF_INFO", {"bat": 3850})

        async def cmd():
            return event

        success, event_type, payload, error = await run_command(
            mock_meshcore_client, cmd(), "test"
        )

        assert success is True
        assert event_type == "SELF_INFO"
        assert payload == {"bat": 3850}
        assert error is None

    @pytest.mark.asyncio
    async def test_extracts_payload_dict(self, mock_meshcore_client, monkeypatch):
        """Extracts payload when it's a dict."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)

        payload_data = {"voltage": 3.85, "uptime": 86400}
        event = make_mock_event("SELF_INFO", payload_data)

        async def cmd():
            return event

        success, _, payload, _ = await run_command(
            mock_meshcore_client, cmd(), "test"
        )

        assert payload == payload_data

    @pytest.mark.asyncio
    async def test_converts_object_payload(self, mock_meshcore_client, monkeypatch):
        """Converts object payload to dict."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)

        # Create object-like payload using a simple class with instance attributes
        # vars() only returns instance attributes, not class attributes
        class ObjPayload:
            def __init__(self):
                self.voltage = 3.85

        obj_payload = ObjPayload()

        event = make_mock_event("SELF_INFO", payload=obj_payload)

        async def cmd():
            return event

        success, _, payload, _ = await run_command(
            mock_meshcore_client, cmd(), "test"
        )

        assert payload == {"voltage": 3.85}

    @pytest.mark.asyncio
    async def test_converts_namedtuple_payload(self, mock_meshcore_client, monkeypatch):
        """Converts namedtuple payload to dict."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)

        from collections import namedtuple
        Payload = namedtuple("Payload", ["voltage", "uptime"])
        nt_payload = Payload(voltage=3.85, uptime=86400)

        event = make_mock_event("SELF_INFO")
        event.payload = nt_payload

        async def cmd():
            return event

        success, _, payload, _ = await run_command(
            mock_meshcore_client, cmd(), "test"
        )

        assert payload == {"voltage": 3.85, "uptime": 86400}


class TestRunCommandFailure:
    """Tests for command failure scenarios."""

    @pytest.mark.asyncio
    async def test_returns_failure_when_unavailable(self, mock_meshcore_client, monkeypatch):
        """Returns failure when meshcore not available."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", False)

        async def cmd():
            return None

        # Create the coroutine
        cmd_coro = cmd()

        success, event_type, payload, error = await run_command(
            mock_meshcore_client, cmd_coro, "test"
        )

        # Close the coroutine to prevent "never awaited" warning
        # since run_command returns early when MESHCORE_AVAILABLE=False
        cmd_coro.close()

        assert success is False
        assert event_type is None
        assert payload is None
        assert error == "meshcore not available"

    @pytest.mark.asyncio
    async def test_returns_failure_on_none_event(self, mock_meshcore_client, monkeypatch):
        """Returns failure when no event received."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)

        async def cmd():
            return None

        success, _, _, error = await run_command(
            mock_meshcore_client, cmd(), "test"
        )

        assert success is False
        assert error == "No response received"

    @pytest.mark.asyncio
    async def test_returns_failure_on_error_event(self, mock_meshcore_client, monkeypatch):
        """Returns failure on ERROR event type."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)

        # Set up EventType mock
        mock_event_type = MagicMock()
        mock_event_type.ERROR = "ERROR"
        monkeypatch.setattr("meshmon.meshcore_client.EventType", mock_event_type)

        event = MagicMock()
        event.type = mock_event_type.ERROR
        event.payload = "Command failed"

        async def cmd():
            return event

        success, event_type, payload, error = await run_command(
            mock_meshcore_client, cmd(), "test"
        )

        assert success is False
        assert event_type == "ERROR"
        assert payload is None
        assert error == "Command failed"

    @pytest.mark.asyncio
    async def test_returns_failure_on_timeout(self, mock_meshcore_client, monkeypatch):
        """Returns failure on timeout."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)

        async def cmd():
            raise TimeoutError()

        success, _, _, error = await run_command(
            mock_meshcore_client, cmd(), "test"
        )

        assert success is False
        assert error == "Timeout"

    @pytest.mark.asyncio
    async def test_returns_failure_on_exception(self, mock_meshcore_client, monkeypatch):
        """Returns failure on general exception."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)

        async def cmd():
            raise RuntimeError("Connection lost")

        success, _, _, error = await run_command(
            mock_meshcore_client, cmd(), "test"
        )

        assert success is False
        assert error == "Connection lost"


class TestRunCommandEventTypeParsing:
    """Tests for event type name extraction."""

    @pytest.mark.asyncio
    async def test_extracts_type_name_attribute(self, mock_meshcore_client, monkeypatch):
        """Extracts event type from .type.name attribute."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)

        event = make_mock_event("CUSTOM_EVENT", {})

        async def cmd():
            return event

        success, event_type, payload, error = await run_command(
            mock_meshcore_client, cmd(), "test"
        )

        assert success is True
        assert event_type == "CUSTOM_EVENT"
        assert payload == {}
        assert error is None

    @pytest.mark.asyncio
    async def test_falls_back_to_str_type(self, mock_meshcore_client, monkeypatch):
        """Falls back to str(type) when no .name."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)

        event = MagicMock()
        event.type = "STRING_TYPE"
        event.payload = {}

        async def cmd():
            return event

        success, event_type, payload, error = await run_command(
            mock_meshcore_client, cmd(), "test"
        )

        assert success is True
        assert event_type == "STRING_TYPE"
        assert payload == {}
        assert error is None
1 tests/config/__init__.py Normal file
@@ -0,0 +1 @@
"""Configuration tests."""
49 tests/config/conftest.py Normal file
@@ -0,0 +1,49 @@
"""Fixtures for configuration tests."""

import os

import pytest


@pytest.fixture
def config_file(tmp_path, monkeypatch):
    """Create a temporary config file and set up paths.

    Returns a helper to write config content.
    """
    config_path = tmp_path / "meshcore.conf"

    # Helper function to write config content
    def write_config(content: str):
        config_path.write_text(content)
        return config_path

    return write_config


@pytest.fixture
def isolate_config_loading(monkeypatch):
    """Isolate config loading by clearing all mesh-related env vars.

    This fixture goes beyond clean_env by ensuring a completely
    clean slate for testing config file loading.
    """
    # Clear all env vars that might affect config
    env_prefixes = (
        "MESH_", "REPEATER_", "COMPANION_", "REMOTE_",
        "TELEMETRY_", "REPORT_", "RADIO_", "STATE_DIR", "OUT_DIR"
    )
    for key in list(os.environ.keys()):
        for prefix in env_prefixes:
            if key.startswith(prefix):
                monkeypatch.delenv(key, raising=False)
                break

    # Reset config singleton
    import meshmon.env
    meshmon.env._config = None

    yield

    # Reset again after test
    meshmon.env._config = None
253 tests/config/test_config_file.py Normal file
@@ -0,0 +1,253 @@
"""Tests for meshcore.conf file parsing."""

import os

from meshmon.env import _parse_config_value


def _load_config_from_content(tmp_path, monkeypatch, content: str | None) -> None:
    import meshmon.env as env

    config_path = tmp_path / "meshcore.conf"
    if content is not None:
        config_path.write_text(content)

    fake_env_path = tmp_path / "src" / "meshmon" / "env.py"
    fake_env_path.parent.mkdir(parents=True, exist_ok=True)
    fake_env_path.write_text("")

    monkeypatch.setattr(env, "__file__", str(fake_env_path))
    env._load_config_file()


class TestParseConfigValueDetailed:
    """Detailed tests for _parse_config_value."""

    # ==========================================================================
    # Empty/whitespace handling
    # ==========================================================================

    def test_empty_string(self):
        assert _parse_config_value("") == ""

    def test_only_spaces(self):
        assert _parse_config_value("   ") == ""

    def test_only_tabs(self):
        assert _parse_config_value("\t\t") == ""

    # ==========================================================================
    # Unquoted values
    # ==========================================================================

    def test_simple_value(self):
        assert _parse_config_value("hello") == "hello"

    def test_value_with_leading_trailing_space(self):
        assert _parse_config_value(" hello ") == "hello"

    def test_value_with_internal_spaces(self):
        assert _parse_config_value("hello world") == "hello world"

    def test_numeric_value(self):
        assert _parse_config_value("12345") == "12345"

    def test_path_value(self):
        assert _parse_config_value("/dev/ttyUSB0") == "/dev/ttyUSB0"

    # ==========================================================================
    # Double-quoted strings
    # ==========================================================================

    def test_double_quoted_simple(self):
        assert _parse_config_value('"hello"') == "hello"

    def test_double_quoted_with_spaces(self):
        assert _parse_config_value('"hello world"') == "hello world"

    def test_double_quoted_with_special_chars(self):
        assert _parse_config_value('"hello #world"') == "hello #world"

    def test_double_quoted_unclosed(self):
        assert _parse_config_value('"hello') == "hello"

    def test_double_quoted_empty(self):
        assert _parse_config_value('""') == ""

    def test_double_quoted_with_trailing_content(self):
        # Only extracts content within first pair of quotes
        assert _parse_config_value('"hello" # comment') == "hello"

    # ==========================================================================
    # Single-quoted strings
    # ==========================================================================

    def test_single_quoted_simple(self):
        assert _parse_config_value("'hello'") == "hello"

    def test_single_quoted_with_spaces(self):
        assert _parse_config_value("'hello world'") == "hello world"

    def test_single_quoted_unclosed(self):
        assert _parse_config_value("'hello") == "hello"

    def test_single_quoted_empty(self):
        assert _parse_config_value("''") == ""

    # ==========================================================================
    # Inline comments
    # ==========================================================================

    def test_inline_comment_with_space(self):
        assert _parse_config_value("hello # comment") == "hello"

    def test_inline_comment_multiple_spaces(self):
        assert _parse_config_value("hello   # comment here") == "hello"

    def test_hash_without_space_kept(self):
        # Hash without preceding space is kept (not a comment)
        assert _parse_config_value("color#ffffff") == "color#ffffff"

    def test_hash_at_start_kept(self):
        # Hash at start is kept (though unusual for a value)
        assert _parse_config_value("#ffffff") == "#ffffff"

    # ==========================================================================
    # Mixed scenarios
    # ==========================================================================

    def test_quoted_preserves_hash_comment_style(self):
        assert _parse_config_value('"test # not a comment"') == "test # not a comment"

    def test_value_ending_with_hash(self):
        # "test#" has no space before #, so kept
        assert _parse_config_value("test#") == "test#"


class TestLoadConfigFileBehavior:
    """Tests for _load_config_file behavior."""

    def test_nonexistent_file_no_error(self, tmp_path, monkeypatch, isolate_config_loading):
        """Missing config file doesn't raise error."""
        _load_config_from_content(tmp_path, monkeypatch, content=None)

        assert "MESH_TRANSPORT" not in os.environ

    def test_skips_empty_lines(self, tmp_path, monkeypatch, isolate_config_loading):
        """Empty lines are skipped."""
        config_content = """
MESH_TRANSPORT=tcp

MESH_DEBUG=1

"""
        _load_config_from_content(tmp_path, monkeypatch, config_content)

        assert os.environ["MESH_TRANSPORT"] == "tcp"
        assert os.environ["MESH_DEBUG"] == "1"

    def test_skips_comment_lines(self, tmp_path, monkeypatch, isolate_config_loading):
        """Lines starting with # are skipped."""
        config_content = """# This is a comment
MESH_TRANSPORT=tcp
# Another comment
"""
        _load_config_from_content(tmp_path, monkeypatch, config_content)

        assert os.environ["MESH_TRANSPORT"] == "tcp"

    def test_handles_export_prefix(self, tmp_path, monkeypatch, isolate_config_loading):
        """Lines with 'export ' prefix are handled."""
        config_content = "export MESH_TRANSPORT=tcp\n"
        _load_config_from_content(tmp_path, monkeypatch, config_content)

        assert os.environ["MESH_TRANSPORT"] == "tcp"

    def test_skips_lines_without_equals(self, tmp_path, monkeypatch, isolate_config_loading):
        """Lines without = are skipped."""
        config_content = """MESH_TRANSPORT=tcp
this line has no equals
MESH_DEBUG=1
"""
        _load_config_from_content(tmp_path, monkeypatch, config_content)

        assert os.environ["MESH_TRANSPORT"] == "tcp"
        assert os.environ["MESH_DEBUG"] == "1"

    def test_env_vars_take_precedence(self, tmp_path, monkeypatch, isolate_config_loading):
        """Environment variables override config file values."""
        # Set env var first
        monkeypatch.setenv("MESH_TRANSPORT", "ble")

        # Config file has different value
        config_content = "MESH_TRANSPORT=serial\n"
        _load_config_from_content(tmp_path, monkeypatch, config_content)

        # After loading, env var should still be "ble"
        assert os.environ.get("MESH_TRANSPORT") == "ble"


class TestConfigFileFormats:
    """Test various config file format scenarios."""

    def test_standard_format(self):
        """Standard KEY=value format."""
        assert _parse_config_value("value") == "value"

    def test_spaces_around_equals(self):
        """Key = value with spaces (handled by partition)."""
        # Note: _parse_config_value only handles the value part
        # The key=value split happens in _load_config_file
        assert _parse_config_value(" value ") == "value"

    def test_quoted_path_with_spaces(self):
        """Path with spaces must be quoted."""
        assert _parse_config_value('"/path/with spaces/file.txt"') == "/path/with spaces/file.txt"

    def test_url_value(self):
        """URL values work correctly."""
        assert _parse_config_value("https://example.com:8080/path") == "https://example.com:8080/path"

    def test_email_value(self):
        """Email values work correctly."""
        assert _parse_config_value("user@example.com") == "user@example.com"

    def test_json_like_value(self):
        """JSON-like values need quoting if they have spaces."""
        # Without spaces, works fine
        assert _parse_config_value("{key:value}") == "{key:value}"
        # With spaces, needs quotes
        assert _parse_config_value('"{key: value}"') == "{key: value}"


class TestValidKeyPatterns:
    """Test key validation patterns."""

    def test_valid_key_patterns(self):
        """Valid shell identifier patterns."""
        # These would be tested in _load_config_file
        # Valid: starts with letter or underscore, contains letters/numbers/underscores
        valid_keys = [
            "MESH_TRANSPORT",
            "_PRIVATE",
            "var123",
            "MY_VAR_2",
        ]
        # All should match: ^[A-Za-z_][A-Za-z0-9_]*$
        import re
        pattern = r"^[A-Za-z_][A-Za-z0-9_]*$"
        for key in valid_keys:
            assert re.match(pattern, key), f"{key} should be valid"

    def test_invalid_key_patterns(self):
        """Invalid key patterns are rejected."""
        invalid_keys = [
            "123_starts_with_number",
            "has-dash",
            "has.dot",
            "has space",
            "",
        ]
        import re
        pattern = r"^[A-Za-z_][A-Za-z0-9_]*$"
        for key in invalid_keys:
            assert not re.match(pattern, key), f"{key} should be invalid"
228 tests/config/test_env.py Normal file
@@ -0,0 +1,228 @@
"""Tests for environment variable parsing and Config class."""


import pytest

from meshmon.env import (
    Config,
    get_bool,
    get_config,
    get_int,
    get_str,
)


class TestGetStrEdgeCases:
    """Additional edge case tests for get_str."""

    def test_whitespace_value_preserved(self, monkeypatch):
        """Whitespace-only value is preserved."""
        monkeypatch.setenv("TEST_VAR", "   ")
        assert get_str("TEST_VAR") == "   "

    def test_special_characters(self, monkeypatch):
        """Special characters are preserved."""
        monkeypatch.setenv("TEST_VAR", "hello@world#123!")
        assert get_str("TEST_VAR") == "hello@world#123!"


class TestGetIntEdgeCases:
    """Additional edge case tests for get_int."""

    def test_leading_zeros(self, monkeypatch):
        """Leading zeros work (not octal)."""
        monkeypatch.setenv("TEST_INT", "042")
        assert get_int("TEST_INT", 0) == 42

    def test_whitespace_around_number(self, monkeypatch):
        """Whitespace around number is tolerated by int()."""
        monkeypatch.setenv("TEST_INT", " 42 ")
        # Python's int() handles whitespace
        assert get_int("TEST_INT", 0) == 42


class TestGetBoolEdgeCases:
    """Additional edge case tests for get_bool."""

    def test_mixed_case(self, monkeypatch):
        """Mixed case variants work."""
        monkeypatch.setenv("TEST_BOOL", "TrUe")
        assert get_bool("TEST_BOOL") is True

    def test_with_spaces(self, monkeypatch):
        """Whitespace causes a non-match since get_bool does not strip."""
        monkeypatch.setenv("TEST_BOOL", " yes ")
        # .lower() doesn't strip, so " yes " != "yes"
        # This will return False
        assert get_bool("TEST_BOOL") is False


class TestConfigComplete:
    """Complete Config class tests."""

    def test_all_connection_settings(self, clean_env, monkeypatch):
        """All connection settings are loaded."""
        monkeypatch.setenv("MESH_TRANSPORT", "tcp")
        monkeypatch.setenv("MESH_SERIAL_PORT", "/dev/ttyUSB0")
        monkeypatch.setenv("MESH_SERIAL_BAUD", "9600")
        monkeypatch.setenv("MESH_TCP_HOST", "192.168.1.1")
        monkeypatch.setenv("MESH_TCP_PORT", "8080")
        monkeypatch.setenv("MESH_BLE_ADDR", "AA:BB:CC:DD:EE:FF")
        monkeypatch.setenv("MESH_BLE_PIN", "1234")
        monkeypatch.setenv("MESH_DEBUG", "true")

        config = Config()

        assert config.mesh_transport == "tcp"
        assert config.mesh_serial_port == "/dev/ttyUSB0"
        assert config.mesh_serial_baud == 9600
        assert config.mesh_tcp_host == "192.168.1.1"
        assert config.mesh_tcp_port == 8080
        assert config.mesh_ble_addr == "AA:BB:CC:DD:EE:FF"
        assert config.mesh_ble_pin == "1234"
        assert config.mesh_debug is True

    def test_all_repeater_settings(self, clean_env, monkeypatch):
        """All repeater identity settings are loaded."""
        monkeypatch.setenv("REPEATER_NAME", "HilltopRepeater")
        monkeypatch.setenv("REPEATER_KEY_PREFIX", "abc123")
        monkeypatch.setenv("REPEATER_PASSWORD", "secret")
        monkeypatch.setenv("REPEATER_DISPLAY_NAME", "Hilltop Relay")
        monkeypatch.setenv("REPEATER_PUBKEY_PREFIX", "!abc123")
        monkeypatch.setenv("REPEATER_HARDWARE", "RAK4631 with Solar")

        config = Config()

        assert config.repeater_name == "HilltopRepeater"
        assert config.repeater_key_prefix == "abc123"
        assert config.repeater_password == "secret"
        assert config.repeater_display_name == "Hilltop Relay"
        assert config.repeater_pubkey_prefix == "!abc123"
        assert config.repeater_hardware == "RAK4631 with Solar"

    def test_all_timeout_settings(self, clean_env, monkeypatch):
        """All timeout and retry settings are loaded."""
        monkeypatch.setenv("REMOTE_TIMEOUT_S", "30")
        monkeypatch.setenv("REMOTE_RETRY_ATTEMPTS", "5")
        monkeypatch.setenv("REMOTE_RETRY_BACKOFF_S", "10")
        monkeypatch.setenv("REMOTE_CB_FAILS", "10")
        monkeypatch.setenv("REMOTE_CB_COOLDOWN_S", "7200")

        config = Config()

        assert config.remote_timeout_s == 30
        assert config.remote_retry_attempts == 5
        assert config.remote_retry_backoff_s == 10
        assert config.remote_cb_fails == 10
        assert config.remote_cb_cooldown_s == 7200

    def test_all_telemetry_settings(self, clean_env, monkeypatch):
        """All telemetry settings are loaded."""
        monkeypatch.setenv("TELEMETRY_ENABLED", "yes")
        monkeypatch.setenv("TELEMETRY_TIMEOUT_S", "20")
        monkeypatch.setenv("TELEMETRY_RETRY_ATTEMPTS", "3")
        monkeypatch.setenv("TELEMETRY_RETRY_BACKOFF_S", "5")

        config = Config()

        assert config.telemetry_enabled is True
        assert config.telemetry_timeout_s == 20
        assert config.telemetry_retry_attempts == 3
        assert config.telemetry_retry_backoff_s == 5

    def test_display_unit_system_defaults_to_metric(self, clean_env):
        """DISPLAY_UNIT_SYSTEM defaults to metric."""
        config = Config()
        assert config.display_unit_system == "metric"

    def test_display_unit_system_accepts_imperial(self, clean_env, monkeypatch):
        """DISPLAY_UNIT_SYSTEM=imperial is honored."""
        monkeypatch.setenv("DISPLAY_UNIT_SYSTEM", "imperial")
        config = Config()
        assert config.display_unit_system == "imperial"

    def test_display_unit_system_invalid_falls_back_to_metric(self, clean_env, monkeypatch):
        """Invalid DISPLAY_UNIT_SYSTEM falls back to metric."""
        monkeypatch.setenv("DISPLAY_UNIT_SYSTEM", "kelvin")
        config = Config()
        assert config.display_unit_system == "metric"

    def test_all_location_settings(self, clean_env, monkeypatch):
        """All location/report settings are loaded."""
        monkeypatch.setenv("REPORT_LOCATION_NAME", "Mountain Peak Observatory")
        monkeypatch.setenv("REPORT_LOCATION_SHORT", "Mountain Peak")
        monkeypatch.setenv("REPORT_LAT", "46.8523")
        monkeypatch.setenv("REPORT_LON", "9.5369")
        monkeypatch.setenv("REPORT_ELEV", "2500")
        monkeypatch.setenv("REPORT_ELEV_UNIT", "ft")

        config = Config()

        assert config.report_location_name == "Mountain Peak Observatory"
        assert config.report_location_short == "Mountain Peak"
        assert config.report_lat == pytest.approx(46.8523)
        assert config.report_lon == pytest.approx(9.5369)
        assert config.report_elev == pytest.approx(2500)
        assert config.report_elev_unit == "ft"

    def test_all_radio_settings(self, clean_env, monkeypatch):
        """All radio configuration settings are loaded."""
        monkeypatch.setenv("RADIO_FREQUENCY", "915.000 MHz")
        monkeypatch.setenv("RADIO_BANDWIDTH", "125 kHz")
        monkeypatch.setenv("RADIO_SPREAD_FACTOR", "SF12")
        monkeypatch.setenv("RADIO_CODING_RATE", "CR5")

        config = Config()

        assert config.radio_frequency == "915.000 MHz"
        assert config.radio_bandwidth == "125 kHz"
        assert config.radio_spread_factor == "SF12"
        assert config.radio_coding_rate == "CR5"

    def test_companion_settings(self, clean_env, monkeypatch):
        """Companion display settings are loaded."""
        monkeypatch.setenv("COMPANION_DISPLAY_NAME", "Base Station")
        monkeypatch.setenv("COMPANION_PUBKEY_PREFIX", "!def456")
        monkeypatch.setenv("COMPANION_HARDWARE", "T-Beam Supreme")
|
||||
config = Config()
|
||||
|
||||
assert config.companion_display_name == "Base Station"
|
||||
assert config.companion_pubkey_prefix == "!def456"
|
||||
assert config.companion_hardware == "T-Beam Supreme"
|
||||
|
||||
|
||||
class TestGetConfigSingleton:
|
||||
"""Tests for get_config singleton behavior."""
|
||||
|
||||
def test_config_persists_across_calls(self, clean_env, monkeypatch):
|
||||
"""Config values persist across multiple get_config calls."""
|
||||
monkeypatch.setenv("MESH_TRANSPORT", "tcp")
|
||||
|
||||
config1 = get_config()
|
||||
assert config1.mesh_transport == "tcp"
|
||||
|
||||
# Change env var - should NOT affect cached config
|
||||
monkeypatch.setenv("MESH_TRANSPORT", "ble")
|
||||
|
||||
config2 = get_config()
|
||||
assert config2.mesh_transport == "tcp" # Still tcp, cached
|
||||
assert config1 is config2
|
||||
|
||||
def test_reset_allows_new_config(self, clean_env, monkeypatch):
|
||||
"""Resetting singleton allows new config."""
|
||||
monkeypatch.setenv("MESH_TRANSPORT", "tcp")
|
||||
|
||||
config1 = get_config()
|
||||
assert config1.mesh_transport == "tcp"
|
||||
|
||||
# Reset singleton
|
||||
import meshmon.env
|
||||
meshmon.env._config = None
|
||||
|
||||
# Change env var
|
||||
monkeypatch.setenv("MESH_TRANSPORT", "ble")
|
||||
|
||||
config2 = get_config()
|
||||
assert config2.mesh_transport == "ble"
|
||||
assert config1 is not config2
|
||||
169
tests/conftest.py
Normal file
@@ -0,0 +1,169 @@
"""Root fixtures for all tests."""

import os
from pathlib import Path

import pytest


@pytest.fixture(autouse=True)
def clean_env(monkeypatch):
    """Clear mesh-related env vars and reset config singleton before each test."""
    env_prefixes = (
        "MESH_",
        "REPEATER_",
        "COMPANION_",
        "REMOTE_",
        "TELEMETRY_",
        "DISPLAY_",
        "REPORT_",
        "RADIO_",
        "STATE_DIR",
        "OUT_DIR",
    )

    for key in list(os.environ.keys()):
        for prefix in env_prefixes:
            if key.startswith(prefix):
                monkeypatch.delenv(key, raising=False)
                break

    # Reset config singleton
    import meshmon.env

    meshmon.env._config = None

    yield

    # Reset again after test
    meshmon.env._config = None


@pytest.fixture
def tmp_state_dir(tmp_path):
    """Create temp directory for state files (DB, circuit breaker)."""
    state_dir = tmp_path / "state"
    state_dir.mkdir()
    return state_dir


@pytest.fixture
def tmp_out_dir(tmp_path):
    """Create temp directory for rendered output."""
    out_dir = tmp_path / "out"
    out_dir.mkdir()
    return out_dir


@pytest.fixture
def configured_env(tmp_state_dir, tmp_out_dir, monkeypatch):
    """Set up test environment with temp directories."""
    monkeypatch.setenv("STATE_DIR", str(tmp_state_dir))
    monkeypatch.setenv("OUT_DIR", str(tmp_out_dir))
    # Reset config to pick up new values
    import meshmon.env

    meshmon.env._config = None
    return {"state_dir": tmp_state_dir, "out_dir": tmp_out_dir}


@pytest.fixture
def sample_companion_metrics():
    """Sample companion metrics using firmware field names."""
    return {
        "battery_mv": 3850.0,
        "uptime_secs": 86400,
        "contacts": 5,
        "recv": 1234,
        "sent": 567,
        "errors": 0,
    }


@pytest.fixture
def sample_repeater_metrics():
    """Sample repeater metrics using firmware field names."""
    return {
        "bat": 3920.0,
        "uptime": 172800,
        "last_rssi": -85,
        "last_snr": 7.5,
        "noise_floor": -115,
        "tx_queue_len": 0,
        "nb_recv": 5678,
        "nb_sent": 2345,
        "airtime": 3600,
        "rx_airtime": 7200,
        "flood_dups": 12,
        "direct_dups": 5,
        "sent_flood": 100,
        "recv_flood": 200,
        "sent_direct": 50,
        "recv_direct": 75,
    }


@pytest.fixture
def project_root():
    """Path to the project root directory."""
    return Path(__file__).parent.parent


@pytest.fixture
def src_root(project_root):
    """Path to the src/meshmon directory."""
    return project_root / "src" / "meshmon"


@pytest.fixture
def db_path(tmp_state_dir):
    """Database path in temp state directory."""
    return tmp_state_dir / "metrics.db"


@pytest.fixture
def migrations_dir(project_root):
    """Path to actual migrations directory."""
    return project_root / "src" / "meshmon" / "migrations"


@pytest.fixture
def initialized_db(db_path, configured_env, monkeypatch):
    """Fresh database with migrations applied."""
    from meshmon.db import init_db

    init_db()
    return db_path


@pytest.fixture
def populated_db(initialized_db, sample_companion_metrics, sample_repeater_metrics):
    """Database with 7 days of sample data."""
    import time

    from meshmon.db import insert_metrics

    now = int(time.time())
    day_seconds = 86400

    # Insert 7 days of companion data (every hour)
    for day in range(7):
        for hour in range(24):
            ts = now - (day * day_seconds) - (hour * 3600)
            metrics = sample_companion_metrics.copy()
            metrics["battery_mv"] = 3700 + (hour * 10) + (day * 5)
            metrics["recv"] = 100 * (day + 1) + hour
            metrics["sent"] = 50 * (day + 1) + hour
            insert_metrics(ts, "companion", metrics)

    # Insert 7 days of repeater data (every 15 minutes)
    for day in range(7):
        for interval in range(96):  # 24 * 4
            ts = now - (day * day_seconds) - (interval * 900)
            metrics = sample_repeater_metrics.copy()
            metrics["bat"] = 3700 + (interval * 2) + (day * 5)
            metrics["nb_recv"] = 1000 * (day + 1) + interval * 10
            metrics["nb_sent"] = 500 * (day + 1) + interval * 5
            insert_metrics(ts, "repeater", metrics)

    return initialized_db
1
tests/database/__init__.py
Normal file
@@ -0,0 +1 @@
"""Database tests."""
59
tests/database/conftest.py
Normal file
@@ -0,0 +1,59 @@
"""Fixtures for database tests."""

import time
from pathlib import Path

import pytest


@pytest.fixture
def db_path(tmp_state_dir):
    """Database path in temp state directory."""
    return tmp_state_dir / "metrics.db"


@pytest.fixture
def migrations_dir():
    """Path to actual migrations directory."""
    return Path(__file__).parent.parent.parent / "src" / "meshmon" / "migrations"


@pytest.fixture
def initialized_db(db_path, configured_env):
    """Fresh database with migrations applied."""
    from meshmon.db import init_db

    init_db(db_path)
    return db_path


@pytest.fixture
def populated_db(initialized_db, sample_companion_metrics, sample_repeater_metrics):
    """Database with 7 days of sample data."""
    from meshmon.db import insert_metrics

    now = int(time.time())
    day_seconds = 86400

    # Insert 7 days of companion data (every hour)
    for day in range(7):
        for hour in range(24):
            ts = now - (day * day_seconds) - (hour * 3600)
            metrics = sample_companion_metrics.copy()
            # Vary values slightly
            metrics["battery_mv"] = 3700 + (hour * 10) + (day * 5)
            metrics["recv"] = 100 * (day + 1) + hour
            metrics["sent"] = 50 * (day + 1) + hour
            insert_metrics(ts, "companion", metrics, initialized_db)

    # Insert 7 days of repeater data (every 15 minutes)
    for day in range(7):
        for interval in range(96):  # 24 * 4
            ts = now - (day * day_seconds) - (interval * 900)
            metrics = sample_repeater_metrics.copy()
            # Vary values slightly
            metrics["bat"] = 3700 + (interval * 2) + (day * 5)
            metrics["nb_recv"] = 1000 * (day + 1) + interval * 10
            metrics["nb_sent"] = 500 * (day + 1) + interval * 5
            insert_metrics(ts, "repeater", metrics, initialized_db)

    return initialized_db
179
tests/database/test_db_init.py
Normal file
@@ -0,0 +1,179 @@
"""Tests for database initialization and migrations."""

import sqlite3

import pytest

from meshmon.db import (
    _get_schema_version,
    get_connection,
    init_db,
)


class TestInitDb:
    """Tests for init_db function."""

    def test_creates_database_file(self, db_path, configured_env):
        """Creates database file if it doesn't exist."""
        assert not db_path.exists()

        init_db(db_path)

        assert db_path.exists()

    def test_creates_parent_directories(self, tmp_path, configured_env):
        """Creates parent directories if needed."""
        nested_path = tmp_path / "deep" / "nested" / "metrics.db"
        assert not nested_path.parent.exists()

        init_db(nested_path)

        assert nested_path.exists()

    def test_applies_migrations(self, db_path, configured_env):
        """Applies schema migrations."""
        init_db(db_path)

        with get_connection(db_path, readonly=True) as conn:
            version = _get_schema_version(conn)
            assert version >= 1

    def test_safe_to_call_multiple_times(self, db_path, configured_env):
        """Can be called multiple times without error."""
        init_db(db_path)
        init_db(db_path)  # Should not raise
        init_db(db_path)  # Should not raise

        with get_connection(db_path, readonly=True) as conn:
            version = _get_schema_version(conn)
            assert version >= 1

    def test_enables_wal_mode(self, db_path, configured_env):
        """Enables WAL journal mode."""
        init_db(db_path)

        conn = sqlite3.connect(db_path)
        try:
            cursor = conn.execute("PRAGMA journal_mode")
            mode = cursor.fetchone()[0]
            assert mode.lower() == "wal"
        finally:
            conn.close()

    def test_creates_metrics_table(self, db_path, configured_env):
        """Creates metrics table with correct schema."""
        init_db(db_path)

        with get_connection(db_path, readonly=True) as conn:
            # Check table exists
            cursor = conn.execute(
                "SELECT name FROM sqlite_master WHERE type='table' AND name='metrics'"
            )
            assert cursor.fetchone() is not None

            # Check columns
            cursor = conn.execute("PRAGMA table_info(metrics)")
            columns = {row["name"]: row for row in cursor}
            assert "ts" in columns
            assert "role" in columns
            assert "metric" in columns
            assert "value" in columns

    def test_creates_db_meta_table(self, db_path, configured_env):
        """Creates db_meta table for schema versioning."""
        init_db(db_path)

        with get_connection(db_path, readonly=True) as conn:
            cursor = conn.execute(
                "SELECT name FROM sqlite_master WHERE type='table' AND name='db_meta'"
            )
            assert cursor.fetchone() is not None


class TestGetConnection:
    """Tests for get_connection context manager."""

    def test_returns_connection(self, initialized_db):
        """Returns a working connection."""
        with get_connection(initialized_db) as conn:
            assert conn is not None
            cursor = conn.execute("SELECT 1")
            assert cursor.fetchone()[0] == 1

    def test_row_factory_enabled(self, initialized_db):
        """Row factory is set to sqlite3.Row."""
        with get_connection(initialized_db) as conn:
            conn.execute(
                "INSERT INTO metrics (ts, role, metric, value) VALUES (1, 'companion', 'test', 1.0)"
            )
        with get_connection(initialized_db, readonly=True) as conn:
            cursor = conn.execute("SELECT * FROM metrics WHERE metric = 'test'")
            row = cursor.fetchone()
            # sqlite3.Row supports dict-like access
            assert row["metric"] == "test"
            assert row["value"] == 1.0

    def test_commits_on_success(self, initialized_db):
        """Commits transaction on normal exit."""
        with get_connection(initialized_db) as conn:
            conn.execute(
                "INSERT INTO metrics (ts, role, metric, value) VALUES (1, 'companion', 'test', 1.0)"
            )

        # Check data persisted
        with get_connection(initialized_db, readonly=True) as conn:
            cursor = conn.execute("SELECT COUNT(*) FROM metrics WHERE metric = 'test'")
            assert cursor.fetchone()[0] == 1

    def test_rollback_on_exception(self, initialized_db):
        """Rolls back transaction on exception."""
        try:
            with get_connection(initialized_db) as conn:
                conn.execute(
                    "INSERT INTO metrics (ts, role, metric, value) VALUES (2, 'companion', 'test2', 1.0)"
                )
                raise ValueError("Test error")
        except ValueError:
            pass

        # Check data was rolled back
        with get_connection(initialized_db, readonly=True) as conn:
            cursor = conn.execute("SELECT COUNT(*) FROM metrics WHERE metric = 'test2'")
            assert cursor.fetchone()[0] == 0

    def test_readonly_mode(self, initialized_db):
        """Read-only mode prevents writes."""
        with (
            get_connection(initialized_db, readonly=True) as conn,
            pytest.raises(sqlite3.OperationalError),
        ):
            conn.execute(
                "INSERT INTO metrics (ts, role, metric, value) VALUES (1, 'companion', 'test', 1.0)"
            )


class TestMigrationsDirectory:
    """Tests for migrations directory and files."""

    def test_migrations_dir_exists(self, migrations_dir):
        """Migrations directory exists."""
        assert migrations_dir.exists()
        assert migrations_dir.is_dir()

    def test_has_initial_migration(self, migrations_dir):
        """Has at least the initial schema migration."""
        sql_files = list(migrations_dir.glob("*.sql"))
        assert len(sql_files) >= 1

        # Check for 001 prefixed file
        initial = [f for f in sql_files if f.stem.startswith("001")]
        assert len(initial) == 1

    def test_migrations_are_numbered(self, migrations_dir):
        """Migration files follow NNN_description.sql pattern."""
        import re

        pattern = re.compile(r"^\d{3}_.*\.sql$")
        for sql_file in migrations_dir.glob("*.sql"):
            assert pattern.match(sql_file.name), f"{sql_file.name} doesn't match pattern"
207
tests/database/test_db_insert.py
Normal file
@@ -0,0 +1,207 @@
"""Tests for database insert functions."""

import pytest

from meshmon.db import (
    get_connection,
    insert_metric,
    insert_metrics,
)

BASE_TS = 1704067200


class TestInsertMetric:
    """Tests for insert_metric function."""

    def test_inserts_single_metric(self, initialized_db):
        """Inserts a single metric successfully."""
        ts = BASE_TS

        result = insert_metric(ts, "companion", "battery_mv", 3850.0, initialized_db)

        assert result is True

        with get_connection(initialized_db, readonly=True) as conn:
            cursor = conn.execute(
                "SELECT value FROM metrics WHERE ts = ? AND role = ? AND metric = ?",
                (ts, "companion", "battery_mv"),
            )
            row = cursor.fetchone()
            assert row is not None
            assert row["value"] == 3850.0

    def test_returns_false_on_duplicate(self, initialized_db):
        """Returns False for duplicate (ts, role, metric) tuple."""
        ts = BASE_TS

        # First insert succeeds
        assert insert_metric(ts, "companion", "test", 1.0, initialized_db) is True

        # Second insert with same key returns False
        assert insert_metric(ts, "companion", "test", 2.0, initialized_db) is False

    def test_different_roles_not_duplicate(self, initialized_db):
        """Same ts/metric with different roles are not duplicates."""
        ts = BASE_TS

        assert insert_metric(ts, "companion", "test", 1.0, initialized_db) is True
        assert insert_metric(ts, "repeater", "test", 2.0, initialized_db) is True

    def test_different_metrics_not_duplicate(self, initialized_db):
        """Same ts/role with different metrics are not duplicates."""
        ts = BASE_TS

        assert insert_metric(ts, "companion", "test1", 1.0, initialized_db) is True
        assert insert_metric(ts, "companion", "test2", 2.0, initialized_db) is True

    def test_invalid_role_raises(self, initialized_db):
        """Invalid role raises ValueError."""
        ts = BASE_TS

        with pytest.raises(ValueError, match="Invalid role"):
            insert_metric(ts, "invalid", "test", 1.0, initialized_db)

    def test_sql_injection_blocked(self, initialized_db):
        """SQL injection attempt raises ValueError."""
        ts = BASE_TS

        with pytest.raises(ValueError, match="Invalid role"):
            insert_metric(ts, "'; DROP TABLE metrics; --", "test", 1.0, initialized_db)


class TestInsertMetrics:
    """Tests for insert_metrics function (bulk insert)."""

    def test_inserts_multiple_metrics(self, initialized_db):
        """Inserts multiple metrics from dict."""
        ts = BASE_TS
        metrics = {
            "battery_mv": 3850.0,
            "contacts": 5,
            "uptime_secs": 86400,
        }

        count = insert_metrics(ts, "companion", metrics, initialized_db)

        assert count == 3

        with get_connection(initialized_db, readonly=True) as conn:
            cursor = conn.execute(
                "SELECT COUNT(*) FROM metrics WHERE ts = ?",
                (ts,),
            )
            assert cursor.fetchone()[0] == 3

    def test_returns_insert_count(self, initialized_db):
        """Returns correct count of inserted metrics."""
        ts = BASE_TS
        metrics = {"a": 1.0, "b": 2.0, "c": 3.0}

        count = insert_metrics(ts, "companion", metrics, initialized_db)

        assert count == 3

    def test_skips_non_numeric_values(self, initialized_db):
        """Non-numeric values are silently skipped."""
        ts = BASE_TS
        metrics = {
            "battery_mv": 3850.0,  # Numeric - inserted
            "name": "test",  # String - skipped
            "status": None,  # None - skipped
            "flags": [1, 2, 3],  # List - skipped
            "nested": {"a": 1},  # Dict - skipped
        }

        count = insert_metrics(ts, "companion", metrics, initialized_db)

        assert count == 1  # Only battery_mv

    def test_handles_int_and_float(self, initialized_db):
        """Both int and float values are inserted."""
        ts = BASE_TS
        metrics = {
            "int_value": 42,
            "float_value": 3.14,
        }

        count = insert_metrics(ts, "companion", metrics, initialized_db)

        assert count == 2

    def test_converts_int_to_float(self, initialized_db):
        """Integer values are stored as float."""
        ts = BASE_TS
        metrics = {"contacts": 5}

        insert_metrics(ts, "companion", metrics, initialized_db)

        with get_connection(initialized_db, readonly=True) as conn:
            cursor = conn.execute(
                "SELECT value FROM metrics WHERE metric = 'contacts'"
            )
            row = cursor.fetchone()
            assert row["value"] == 5.0
            assert isinstance(row["value"], float)

    def test_empty_dict_returns_zero(self, initialized_db):
        """Empty dict returns 0."""
        ts = BASE_TS

        count = insert_metrics(ts, "companion", {}, initialized_db)

        assert count == 0

    def test_skips_duplicates_silently(self, initialized_db):
        """Duplicate metrics are skipped without error."""
        ts = BASE_TS
        metrics = {"test": 1.0}

        # First insert
        count1 = insert_metrics(ts, "companion", metrics, initialized_db)
        assert count1 == 1

        # Second insert - same key
        count2 = insert_metrics(ts, "companion", metrics, initialized_db)
        assert count2 == 0  # Duplicate skipped

    def test_partial_duplicates(self, initialized_db):
        """Partial duplicates: some inserted, some skipped."""
        ts = BASE_TS

        # First insert
        insert_metrics(ts, "companion", {"existing": 1.0}, initialized_db)

        # Second insert with mix
        metrics = {
            "existing": 2.0,  # Duplicate - skipped
            "new": 3.0,  # New - inserted
        }
        count = insert_metrics(ts, "companion", metrics, initialized_db)

        assert count == 1  # Only "new" inserted

    def test_invalid_role_raises(self, initialized_db):
        """Invalid role raises ValueError."""
        ts = BASE_TS

        with pytest.raises(ValueError, match="Invalid role"):
            insert_metrics(ts, "invalid", {"test": 1.0}, initialized_db)

    def test_companion_metrics(self, initialized_db, sample_companion_metrics):
        """Inserts companion metrics dict."""
        ts = BASE_TS

        count = insert_metrics(ts, "companion", sample_companion_metrics, initialized_db)

        # Should insert all numeric fields
        assert count == len(sample_companion_metrics)

    def test_repeater_metrics(self, initialized_db, sample_repeater_metrics):
        """Inserts repeater metrics dict."""
        ts = BASE_TS

        count = insert_metrics(ts, "repeater", sample_repeater_metrics, initialized_db)

        # Should insert all numeric fields
        assert count == len(sample_repeater_metrics)
203
tests/database/test_db_maintenance.py
Normal file
@@ -0,0 +1,203 @@
"""Tests for database maintenance functions."""

import os
import sqlite3

from meshmon.db import (
    get_db_path,
    init_db,
    vacuum_db,
)


class TestVacuumDb:
    """Tests for vacuum_db function."""

    def test_vacuums_existing_db(self, initialized_db):
        """Vacuum should run without error on initialized database."""
        # Add some data then vacuum
        conn = sqlite3.connect(initialized_db)
        conn.execute(
            "INSERT INTO metrics (ts, role, metric, value) VALUES (1, 'companion', 'test', 1.0)"
        )
        conn.commit()
        conn.close()

        # Should not raise
        vacuum_db(initialized_db)

    def test_runs_analyze(self, initialized_db):
        """ANALYZE should be run after VACUUM."""
        conn = sqlite3.connect(initialized_db)
        conn.execute(
            "INSERT INTO metrics (ts, role, metric, value) VALUES (1, 'companion', 'test', 1.0)"
        )
        conn.commit()
        conn.close()

        # Vacuum includes ANALYZE
        vacuum_db(initialized_db)

        # Check that database stats were updated
        conn = sqlite3.connect(initialized_db)
        cursor = conn.execute("SELECT COUNT(*) FROM sqlite_stat1")
        count = cursor.fetchone()[0]
        conn.close()
        assert count > 0

    def test_uses_default_path_when_none(self, configured_env, monkeypatch):
        """Uses get_db_path() when no path provided."""
        # Initialize db at default location
        init_db()

        # vacuum_db with None should use default path
        vacuum_db(None)

    def test_can_vacuum_empty_db(self, initialized_db):
        """Can vacuum an empty database."""
        vacuum_db(initialized_db)

    def test_reclaims_space_after_delete(self, initialized_db):
        """Vacuum should reclaim space after deleting rows."""
        conn = sqlite3.connect(initialized_db)

        # Insert many rows
        for i in range(1000):
            conn.execute(
                "INSERT INTO metrics (ts, role, metric, value) VALUES (?, 'companion', 'test', 1.0)",
                (i,),
            )
        conn.commit()

        # Get size before delete
        conn.close()
        size_before = os.path.getsize(initialized_db)

        # Delete all rows
        conn = sqlite3.connect(initialized_db)
        conn.execute("DELETE FROM metrics")
        conn.commit()
        conn.close()

        # Vacuum
        vacuum_db(initialized_db)

        # Size should be smaller (or at least not larger)
        size_after = os.path.getsize(initialized_db)
        # Note: Due to WAL mode, this might not always shrink dramatically
        # but vacuum should at least complete without error
        assert size_after <= size_before + 4096  # Allow for some overhead


class TestGetDbPath:
    """Tests for get_db_path function."""

    def test_returns_path_in_state_dir(self, configured_env):
        """Path should be in the configured state directory."""
        path = get_db_path()

        assert path.name == "metrics.db"
        assert str(configured_env["state_dir"]) in str(path)

    def test_returns_path_object(self, configured_env):
        """Should return a Path object."""
        from pathlib import Path

        path = get_db_path()

        assert isinstance(path, Path)


class TestDatabaseIntegrity:
    """Tests for database integrity after operations."""

    def test_wal_mode_enabled(self, initialized_db):
        """Database should be in WAL mode."""
        conn = sqlite3.connect(initialized_db)
        cursor = conn.execute("PRAGMA journal_mode")
        mode = cursor.fetchone()[0]
        conn.close()

        assert mode.lower() == "wal"

    def test_foreign_keys_disabled_by_default(self, initialized_db):
        """Foreign keys should be disabled (SQLite default)."""
        conn = sqlite3.connect(initialized_db)
        cursor = conn.execute("PRAGMA foreign_keys")
        enabled = cursor.fetchone()[0]
        conn.close()

        # Default is off, and we don't explicitly enable them
        assert enabled == 0

    def test_metrics_table_exists(self, initialized_db):
        """Metrics table should exist after init."""
        conn = sqlite3.connect(initialized_db)
        cursor = conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table' AND name='metrics'"
        )
        result = cursor.fetchone()
        conn.close()

        assert result is not None
        assert result[0] == "metrics"

    def test_db_meta_table_exists(self, initialized_db):
        """db_meta table should exist after init."""
        conn = sqlite3.connect(initialized_db)
        cursor = conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table' AND name='db_meta'"
        )
        result = cursor.fetchone()
        conn.close()

        assert result is not None

    def test_metrics_index_exists(self, initialized_db):
        """Index on metrics(role, ts) should exist."""
        conn = sqlite3.connect(initialized_db)
        cursor = conn.execute(
            "SELECT name FROM sqlite_master WHERE type='index' AND name='idx_metrics_role_ts'"
        )
        result = cursor.fetchone()
        conn.close()

        assert result is not None

    def test_vacuum_preserves_data(self, initialized_db):
        """Vacuum should not lose any data."""
        conn = sqlite3.connect(initialized_db)
        for i in range(100):
            conn.execute(
                "INSERT INTO metrics (ts, role, metric, value) VALUES (?, 'companion', 'test', ?)",
                (i, float(i)),
            )
        conn.commit()
        conn.close()

        # Vacuum
        vacuum_db(initialized_db)

        # Check data is still there
        conn = sqlite3.connect(initialized_db)
        cursor = conn.execute("SELECT COUNT(*) FROM metrics")
        count = cursor.fetchone()[0]
        conn.close()

        assert count == 100

    def test_vacuum_preserves_schema_version(self, initialized_db):
        """Vacuum should not change schema version."""
        from meshmon.db import _get_schema_version

        conn = sqlite3.connect(initialized_db)
        version_before = _get_schema_version(conn)
        conn.close()

        vacuum_db(initialized_db)

        conn = sqlite3.connect(initialized_db)
        version_after = _get_schema_version(conn)
        conn.close()

        assert version_before == version_after
331
tests/database/test_db_migrations.py
Normal file
@@ -0,0 +1,331 @@
"""Tests for database migration system."""

import sqlite3
from pathlib import Path

import pytest

from meshmon.db import (
    _apply_migrations,
    _get_migration_files,
    _get_schema_version,
    _set_schema_version,
    get_schema_version,
)


class TestGetMigrationFiles:
    """Tests for _get_migration_files function."""

    def test_finds_migration_files(self):
        """Should find actual migration files in MIGRATIONS_DIR."""
        migrations = _get_migration_files()

        assert len(migrations) >= 2
        # Should include 001 and 002
        versions = [v for v, _ in migrations]
        assert 1 in versions
        assert 2 in versions

    def test_returns_sorted_by_version(self):
        """Migrations should be sorted by version number."""
        migrations = _get_migration_files()

        versions = [v for v, _ in migrations]
        assert versions == sorted(versions)

    def test_returns_path_objects(self):
        """Each migration should have a Path object."""
        migrations = _get_migration_files()

        for _version, path in migrations:
            assert isinstance(path, Path)
            assert path.exists()
            assert path.suffix == ".sql"

    def test_extracts_version_from_filename(self):
        """Version number extracted from filename prefix."""
        migrations = _get_migration_files()

        for version, path in migrations:
            filename_version = int(path.stem.split("_")[0])
            assert version == filename_version

    def test_empty_when_no_migrations_dir(self, tmp_path, monkeypatch):
        """Returns empty list when migrations dir doesn't exist."""
        fake_dir = tmp_path / "nonexistent"
        monkeypatch.setattr("meshmon.db.MIGRATIONS_DIR", fake_dir)

        migrations = _get_migration_files()

        assert migrations == []

    def test_skips_invalid_filenames(self, tmp_path, monkeypatch):
        """Skips files without valid version prefix."""
        migrations_dir = tmp_path / "migrations"
        migrations_dir.mkdir()

        # Create valid migration
        (migrations_dir / "001_valid.sql").write_text("-- valid")
        # Create invalid migrations
        (migrations_dir / "invalid_name.sql").write_text("-- invalid")
        (migrations_dir / "abc_noversion.sql").write_text("-- no version")

        monkeypatch.setattr("meshmon.db.MIGRATIONS_DIR", migrations_dir)

        migrations = _get_migration_files()

        assert len(migrations) == 1
        assert migrations[0][0] == 1


class TestGetSchemaVersion:
    """Tests for _get_schema_version internal function."""

    def test_returns_zero_for_fresh_db(self, tmp_path):
        """Fresh database with no db_meta returns 0."""
        db_path = tmp_path / "test.db"
        conn = sqlite3.connect(db_path)

        version = _get_schema_version(conn)

        assert version == 0
        conn.close()

    def test_returns_stored_version(self, tmp_path):
        """Returns version from db_meta table."""
        db_path = tmp_path / "test.db"
        conn = sqlite3.connect(db_path)
        conn.execute("""
            CREATE TABLE db_meta (
                key TEXT PRIMARY KEY NOT NULL,
                value TEXT NOT NULL
            )
        """)
        conn.execute(
            "INSERT INTO db_meta (key, value) VALUES ('schema_version', '5')"
        )
        conn.commit()

        version = _get_schema_version(conn)

        assert version == 5
        conn.close()

    def test_returns_zero_when_key_missing(self, tmp_path):
        """Returns 0 if db_meta exists but schema_version key is missing."""
        db_path = tmp_path / "test.db"
        conn = sqlite3.connect(db_path)
        conn.execute("""
            CREATE TABLE db_meta (
                key TEXT PRIMARY KEY NOT NULL,
                value TEXT NOT NULL
            )
        """)
        conn.execute(
            "INSERT INTO db_meta (key, value) VALUES ('other_key', 'value')"
        )
        conn.commit()

        version = _get_schema_version(conn)

        assert version == 0
        conn.close()


class TestSetSchemaVersion:
    """Tests for _set_schema_version internal function."""

    def test_inserts_new_version(self, tmp_path):
        """Can insert schema version into fresh db_meta."""
        db_path = tmp_path / "test.db"
        conn = sqlite3.connect(db_path)
        conn.execute("""
            CREATE TABLE db_meta (
                key TEXT PRIMARY KEY NOT NULL,
                value TEXT NOT NULL
            )
        """)

        _set_schema_version(conn, 3)
        conn.commit()

        cursor = conn.execute(
            "SELECT value FROM db_meta WHERE key = 'schema_version'"
        )
        assert cursor.fetchone()[0] == "3"
        conn.close()

    def test_updates_existing_version(self, tmp_path):
        """Can update existing schema version."""
        db_path = tmp_path / "test.db"
        conn = sqlite3.connect(db_path)
        conn.execute("""
            CREATE TABLE db_meta (
                key TEXT PRIMARY KEY NOT NULL,
                value TEXT NOT NULL
            )
        """)
        conn.execute(
            "INSERT INTO db_meta (key, value) VALUES ('schema_version', '1')"
        )
        conn.commit()

        _set_schema_version(conn, 5)
        conn.commit()

        cursor = conn.execute(
            "SELECT value FROM db_meta WHERE key = 'schema_version'"
        )
        assert cursor.fetchone()[0] == "5"
        conn.close()


class TestApplyMigrations:
    """Tests for _apply_migrations function."""

    def test_applies_all_migrations_to_fresh_db(self, tmp_path, monkeypatch):
        """Applies all migrations to a fresh database."""
        # Create mock migrations
        migrations_dir = tmp_path / "migrations"
        migrations_dir.mkdir()

        (migrations_dir / "001_initial.sql").write_text("""
            CREATE TABLE IF NOT EXISTS db_meta (
                key TEXT PRIMARY KEY NOT NULL,
                value TEXT NOT NULL
            );
            CREATE TABLE test1 (id INTEGER);
        """)
        (migrations_dir / "002_second.sql").write_text("""
            CREATE TABLE test2 (id INTEGER);
        """)

        monkeypatch.setattr("meshmon.db.MIGRATIONS_DIR", migrations_dir)

        db_path = tmp_path / "test.db"
        conn = sqlite3.connect(db_path)

        _apply_migrations(conn)

        # Check both tables exist
        cursor = conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
        )
        tables = [row[0] for row in cursor]
        assert "test1" in tables
        assert "test2" in tables
        assert "db_meta" in tables

        # Check version is updated
        assert _get_schema_version(conn) == 2
        conn.close()

    def test_skips_already_applied_migrations(self, tmp_path, monkeypatch):
        """Skips migrations that have already been applied."""
        migrations_dir = tmp_path / "migrations"
        migrations_dir.mkdir()

        (migrations_dir / "001_initial.sql").write_text("""
            CREATE TABLE IF NOT EXISTS db_meta (
                key TEXT PRIMARY KEY NOT NULL,
                value TEXT NOT NULL
            );
            CREATE TABLE test1 (id INTEGER);
        """)
        (migrations_dir / "002_second.sql").write_text("""
            CREATE TABLE test2 (id INTEGER);
        """)

        monkeypatch.setattr("meshmon.db.MIGRATIONS_DIR", migrations_dir)

        db_path = tmp_path / "test.db"
        conn = sqlite3.connect(db_path)

        # Apply first time
        _apply_migrations(conn)

        # Apply second time - should not fail
        _apply_migrations(conn)

        assert _get_schema_version(conn) == 2
        conn.close()

    def test_raises_when_no_migrations(self, tmp_path, monkeypatch):
        """Raises error when no migration files exist."""
        empty_dir = tmp_path / "empty_migrations"
        empty_dir.mkdir()
        monkeypatch.setattr("meshmon.db.MIGRATIONS_DIR", empty_dir)

        db_path = tmp_path / "test.db"
        conn = sqlite3.connect(db_path)

        with pytest.raises(RuntimeError, match="No migration files found"):
            _apply_migrations(conn)

        conn.close()

    def test_rolls_back_failed_migration(self, tmp_path, monkeypatch):
        """Rolls back if a migration fails."""
        migrations_dir = tmp_path / "migrations"
        migrations_dir.mkdir()

        (migrations_dir / "001_initial.sql").write_text("""
            CREATE TABLE IF NOT EXISTS db_meta (
                key TEXT PRIMARY KEY NOT NULL,
                value TEXT NOT NULL
            );
            CREATE TABLE test1 (id INTEGER);
        """)
        (migrations_dir / "002_broken.sql").write_text("""
            THIS IS NOT VALID SQL;
        """)

        monkeypatch.setattr("meshmon.db.MIGRATIONS_DIR", migrations_dir)

        db_path = tmp_path / "test.db"
        conn = sqlite3.connect(db_path)

        with pytest.raises(RuntimeError, match="Migration.*failed"):
            _apply_migrations(conn)

        # Version should still be 1 (first migration applied)
        assert _get_schema_version(conn) == 1
        conn.close()


class TestPublicGetSchemaVersion:
    """Tests for public get_schema_version function."""

    def test_returns_zero_when_db_missing(self, configured_env):
        """Returns 0 when database file doesn't exist."""
        version = get_schema_version()
        assert version == 0

    def test_returns_version_from_existing_db(self, initialized_db):
        """Returns schema version from initialized database."""
        version = get_schema_version()

        # Should be at least version 2 (we have 2 migrations)
        assert version >= 2

    def test_uses_readonly_connection(self, initialized_db, monkeypatch):
        """Opens database in readonly mode."""
        calls = []
        original_get_connection = __import__(
            "meshmon.db", fromlist=["get_connection"]
        ).get_connection

        from contextlib import contextmanager

        @contextmanager
        def mock_get_connection(*args, **kwargs):
            calls.append(kwargs)
            with original_get_connection(*args, **kwargs) as conn:
                yield conn

        monkeypatch.setattr("meshmon.db.get_connection", mock_get_connection)

        get_schema_version()

        assert any(call.get("readonly") is True for call in calls)
tests/database/test_db_queries.py (new file, 312 lines)
@@ -0,0 +1,312 @@
"""Tests for database query functions."""

import pytest

from meshmon.db import (
    get_available_metrics,
    get_distinct_timestamps,
    get_latest_metrics,
    get_metric_count,
    get_metrics_for_period,
    insert_metrics,
)

BASE_TS = 1704067200


class TestGetMetricsForPeriod:
    """Tests for get_metrics_for_period function."""

    def test_returns_dict_by_metric(self, initialized_db):
        """Returns dict with metric names as keys."""
        ts = BASE_TS
        insert_metrics(ts, "companion", {
            "battery_mv": 3850.0,
            "contacts": 5,
        }, initialized_db)

        result = get_metrics_for_period(
            "companion", ts - 100, ts + 100, initialized_db
        )

        assert isinstance(result, dict)
        assert "battery_mv" in result
        assert "contacts" in result

    def test_returns_timestamp_value_tuples(self, initialized_db):
        """Each metric has list of (ts, value) tuples."""
        ts = BASE_TS
        insert_metrics(ts, "companion", {"test": 1.0}, initialized_db)

        result = get_metrics_for_period(
            "companion", ts - 100, ts + 100, initialized_db
        )

        assert len(result["test"]) == 1
        assert result["test"][0] == (ts, 1.0)

    def test_sorted_by_timestamp(self, initialized_db):
        """Results are sorted by timestamp ascending."""
        base_ts = BASE_TS

        # Insert out of order
        insert_metrics(base_ts + 200, "companion", {"test": 3.0}, initialized_db)
        insert_metrics(base_ts, "companion", {"test": 1.0}, initialized_db)
        insert_metrics(base_ts + 100, "companion", {"test": 2.0}, initialized_db)

        result = get_metrics_for_period(
            "companion", base_ts - 100, base_ts + 300, initialized_db
        )

        values = [v for ts, v in result["test"]]
        assert values == [1.0, 2.0, 3.0]

    def test_respects_time_range(self, initialized_db):
        """Only returns data within specified time range."""
        base_ts = BASE_TS

        insert_metrics(base_ts - 200, "companion", {"test": 1.0}, initialized_db)  # Outside
        insert_metrics(base_ts, "companion", {"test": 2.0}, initialized_db)  # Inside
        insert_metrics(base_ts + 200, "companion", {"test": 3.0}, initialized_db)  # Outside

        result = get_metrics_for_period(
            "companion", base_ts - 100, base_ts + 100, initialized_db
        )

        assert len(result["test"]) == 1
        assert result["test"][0][1] == 2.0

    def test_filters_by_role(self, initialized_db):
        """Only returns data for specified role."""
        ts = BASE_TS
        insert_metrics(ts, "companion", {"test": 1.0}, initialized_db)
        insert_metrics(ts, "repeater", {"test": 2.0}, initialized_db)

        result = get_metrics_for_period(
            "companion", ts - 100, ts + 100, initialized_db
        )

        assert result["test"][0][1] == 1.0

    def test_computes_bat_pct(self, initialized_db):
        """Computes bat_pct from battery voltage."""
        ts = BASE_TS
        # 4200 mV = 4.2V = 100%
        insert_metrics(ts, "companion", {"battery_mv": 4200.0}, initialized_db)

        result = get_metrics_for_period(
            "companion", ts - 100, ts + 100, initialized_db
        )

        assert "bat_pct" in result
        assert result["bat_pct"][0][1] == pytest.approx(100.0)

    def test_bat_pct_for_repeater(self, initialized_db):
        """Computes bat_pct for repeater using 'bat' field."""
        ts = BASE_TS
        # 3000 mV = 3.0V = 0%
        insert_metrics(ts, "repeater", {"bat": 3000.0}, initialized_db)

        result = get_metrics_for_period(
            "repeater", ts - 100, ts + 100, initialized_db
        )

        assert "bat_pct" in result
        assert result["bat_pct"][0][1] == pytest.approx(0.0)

    def test_empty_period_returns_empty(self, initialized_db):
        """Empty time period returns empty dict."""
        result = get_metrics_for_period(
            "companion", 0, 1, initialized_db
        )

        assert result == {}

    def test_invalid_role_raises(self, initialized_db):
        """Invalid role raises ValueError."""
        with pytest.raises(ValueError, match="Invalid role"):
            get_metrics_for_period("invalid", 0, 100, initialized_db)


class TestGetLatestMetrics:
    """Tests for get_latest_metrics function."""

    def test_returns_most_recent(self, initialized_db):
        """Returns metrics at most recent timestamp."""
        base_ts = BASE_TS

        insert_metrics(base_ts, "companion", {"test": 1.0}, initialized_db)
        insert_metrics(base_ts + 100, "companion", {"test": 2.0}, initialized_db)

        result = get_latest_metrics("companion", initialized_db)

        assert result["test"] == 2.0
        assert result["ts"] == base_ts + 100

    def test_includes_ts(self, initialized_db):
        """Result includes 'ts' key with timestamp."""
        ts = BASE_TS
        insert_metrics(ts, "companion", {"test": 1.0}, initialized_db)

        result = get_latest_metrics("companion", initialized_db)

        assert "ts" in result
        assert result["ts"] == ts

    def test_includes_all_metrics(self, initialized_db):
        """Result includes all metrics at that timestamp."""
        ts = BASE_TS
        insert_metrics(ts, "companion", {
            "battery_mv": 3850.0,
            "contacts": 5,
            "uptime_secs": 86400,
        }, initialized_db)

        result = get_latest_metrics("companion", initialized_db)

        assert result["battery_mv"] == 3850.0
        assert result["contacts"] == 5.0
        assert result["uptime_secs"] == 86400.0

    def test_computes_bat_pct(self, initialized_db):
        """Computes bat_pct from battery voltage."""
        ts = BASE_TS
        insert_metrics(ts, "companion", {"battery_mv": 3820.0}, initialized_db)

        result = get_latest_metrics("companion", initialized_db)

        assert "bat_pct" in result
        assert result["bat_pct"] == pytest.approx(50.0)

    def test_returns_none_when_empty(self, initialized_db):
        """Returns None when no data exists."""
        result = get_latest_metrics("companion", initialized_db)

        assert result is None

    def test_filters_by_role(self, initialized_db):
        """Only returns data for specified role."""
        ts = BASE_TS
        insert_metrics(ts, "companion", {"test": 1.0}, initialized_db)
        insert_metrics(ts + 100, "repeater", {"test": 2.0}, initialized_db)

        result = get_latest_metrics("companion", initialized_db)

        assert result["ts"] == ts
        assert result["test"] == 1.0

    def test_invalid_role_raises(self, initialized_db):
        """Invalid role raises ValueError."""
        with pytest.raises(ValueError, match="Invalid role"):
            get_latest_metrics("invalid", initialized_db)


class TestGetMetricCount:
    """Tests for get_metric_count function."""

    def test_counts_rows(self, initialized_db):
        """Counts total metric rows for role."""
        ts = BASE_TS
        insert_metrics(ts, "companion", {"a": 1.0, "b": 2.0, "c": 3.0}, initialized_db)

        count = get_metric_count("companion", initialized_db)

        assert count == 3

    def test_filters_by_role(self, initialized_db):
        """Only counts rows for specified role."""
        ts = BASE_TS
        insert_metrics(ts, "companion", {"a": 1.0}, initialized_db)
        insert_metrics(ts, "repeater", {"b": 2.0, "c": 3.0}, initialized_db)

        assert get_metric_count("companion", initialized_db) == 1
        assert get_metric_count("repeater", initialized_db) == 2

    def test_returns_zero_when_empty(self, initialized_db):
        """Returns 0 when no data exists."""
        count = get_metric_count("companion", initialized_db)
        assert count == 0

    def test_invalid_role_raises(self, initialized_db):
        """Invalid role raises ValueError."""
        with pytest.raises(ValueError, match="Invalid role"):
            get_metric_count("invalid", initialized_db)


class TestGetDistinctTimestamps:
    """Tests for get_distinct_timestamps function."""

    def test_counts_unique_timestamps(self, initialized_db):
        """Counts distinct timestamps."""
        ts = BASE_TS
        insert_metrics(ts, "companion", {"a": 1.0, "b": 2.0}, initialized_db)  # 1 ts
        insert_metrics(ts + 100, "companion", {"a": 3.0}, initialized_db)  # 2nd ts

        count = get_distinct_timestamps("companion", initialized_db)

        assert count == 2

    def test_filters_by_role(self, initialized_db):
        """Only counts timestamps for specified role."""
        ts = BASE_TS
        insert_metrics(ts, "companion", {"a": 1.0}, initialized_db)
        insert_metrics(ts + 100, "companion", {"a": 2.0}, initialized_db)
        insert_metrics(ts, "repeater", {"a": 3.0}, initialized_db)

        assert get_distinct_timestamps("companion", initialized_db) == 2
        assert get_distinct_timestamps("repeater", initialized_db) == 1

    def test_returns_zero_when_empty(self, initialized_db):
        """Returns 0 when no data exists."""
        count = get_distinct_timestamps("companion", initialized_db)
        assert count == 0


class TestGetAvailableMetrics:
    """Tests for get_available_metrics function."""

    def test_returns_metric_names(self, initialized_db):
        """Returns list of distinct metric names."""
        ts = BASE_TS
        insert_metrics(ts, "companion", {
            "battery_mv": 3850.0,
            "contacts": 5,
            "recv": 100,
        }, initialized_db)

        metrics = get_available_metrics("companion", initialized_db)

        assert "battery_mv" in metrics
        assert "contacts" in metrics
        assert "recv" in metrics

    def test_sorted_alphabetically(self, initialized_db):
        """Metrics are sorted alphabetically."""
        ts = BASE_TS
        insert_metrics(ts, "companion", {
            "zebra": 1.0,
            "apple": 2.0,
            "mango": 3.0,
        }, initialized_db)

        metrics = get_available_metrics("companion", initialized_db)

        assert metrics == sorted(metrics)

    def test_filters_by_role(self, initialized_db):
        """Only returns metrics for specified role."""
        ts = BASE_TS
        insert_metrics(ts, "companion", {"companion_metric": 1.0}, initialized_db)
        insert_metrics(ts, "repeater", {"repeater_metric": 2.0}, initialized_db)

        companion_metrics = get_available_metrics("companion", initialized_db)
        repeater_metrics = get_available_metrics("repeater", initialized_db)

        assert "companion_metric" in companion_metrics
        assert "repeater_metric" not in companion_metrics
        assert "repeater_metric" in repeater_metrics

    def test_returns_empty_when_no_data(self, initialized_db):
        """Returns empty list when no data exists."""
        metrics = get_available_metrics("companion", initialized_db)
        assert metrics == []
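The query tests above imply a narrow EAV-style schema (ts, role, metric, value) and a result shape of `{metric: [(ts, value), ...]}` sorted by timestamp. A minimal sketch of the grouping core of `get_metrics_for_period` under those assumptions; unlike the real function it takes an open connection instead of a database path, and it omits the derived `bat_pct` series (the tests pin 4200 mV to 100% and 3000 mV to 0%, but 3820 mV to ~50%, so the real mapping follows a discharge curve rather than a linear scale):

```python
import sqlite3
from collections import defaultdict

VALID_ROLES = ("companion", "repeater")


def get_metrics_for_period(conn, role, start_ts, end_ts):
    """Group rows for one role into {metric: [(ts, value), ...]}, ts-ascending."""
    if role not in VALID_ROLES:
        raise ValueError(f"Invalid role: {role!r}")
    # Everything is bound as a parameter; metric names and timestamps
    # never reach the SQL text, matching the injection tests below.
    rows = conn.execute(
        "SELECT ts, metric, value FROM metrics"
        " WHERE role = ? AND ts >= ? AND ts <= ? ORDER BY ts",
        (role, start_ts, end_ts),
    )
    result = defaultdict(list)
    for ts, metric, value in rows:
        result[metric].append((ts, value))
    return dict(result)
```
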
tests/database/test_db_validation.py (new file, 210 lines)
@@ -0,0 +1,210 @@
"""Tests for database validation and security functions."""

import pytest

from meshmon.db import (
    VALID_ROLES,
    _validate_role,
    get_available_metrics,
    get_distinct_timestamps,
    get_latest_metrics,
    get_metric_count,
    get_metrics_for_period,
    insert_metric,
    insert_metrics,
)


class TestValidateRole:
    """Tests for _validate_role function."""

    def test_accepts_companion(self):
        """Accepts 'companion' as valid role."""
        result = _validate_role("companion")
        assert result == "companion"

    def test_accepts_repeater(self):
        """Accepts 'repeater' as valid role."""
        result = _validate_role("repeater")
        assert result == "repeater"

    def test_returns_input_on_success(self):
        """Returns the validated role string."""
        for role in VALID_ROLES:
            result = _validate_role(role)
            assert result == role

    def test_rejects_invalid_role(self):
        """Rejects invalid role names."""
        with pytest.raises(ValueError, match="Invalid role"):
            _validate_role("invalid")

    def test_rejects_empty_string(self):
        """Rejects empty string as role."""
        with pytest.raises(ValueError, match="Invalid role"):
            _validate_role("")

    def test_rejects_none(self):
        """Rejects None as role."""
        with pytest.raises(ValueError):
            _validate_role(None)

    def test_case_sensitive(self):
        """Role validation is case-sensitive."""
        with pytest.raises(ValueError, match="Invalid role"):
            _validate_role("Companion")

        with pytest.raises(ValueError, match="Invalid role"):
            _validate_role("REPEATER")

    def test_rejects_whitespace_variants(self):
        """Rejects roles with leading/trailing whitespace."""
        with pytest.raises(ValueError, match="Invalid role"):
            _validate_role(" companion")

        with pytest.raises(ValueError, match="Invalid role"):
            _validate_role("repeater ")

        with pytest.raises(ValueError, match="Invalid role"):
            _validate_role(" companion ")


class TestSqlInjectionPrevention:
    """Tests to verify SQL injection is prevented via role validation."""

    @pytest.mark.parametrize("malicious_role", [
        "'; DROP TABLE metrics; --",
        "admin'; DROP TABLE metrics;--",
        "companion OR 1=1",
        "companion; DELETE FROM metrics",
        "companion' UNION SELECT * FROM db_meta --",
        "companion\"; DROP TABLE metrics; --",
        "1 OR 1=1",
        "companion/*comment*/",
    ])
    def test_insert_metric_rejects_injection(self, initialized_db, malicious_role):
        """insert_metric rejects SQL injection attempts."""
        with pytest.raises(ValueError, match="Invalid role"):
            insert_metric(1000, malicious_role, "test", 1.0, initialized_db)

    @pytest.mark.parametrize("malicious_role", [
        "'; DROP TABLE metrics; --",
        "companion OR 1=1",
    ])
    def test_insert_metrics_rejects_injection(self, initialized_db, malicious_role):
        """insert_metrics rejects SQL injection attempts."""
        with pytest.raises(ValueError, match="Invalid role"):
            insert_metrics(1000, malicious_role, {"test": 1.0}, initialized_db)

    @pytest.mark.parametrize("malicious_role", [
        "'; DROP TABLE metrics; --",
        "companion OR 1=1",
    ])
    def test_get_metrics_for_period_rejects_injection(self, initialized_db, malicious_role):
        """get_metrics_for_period rejects SQL injection attempts."""
        with pytest.raises(ValueError, match="Invalid role"):
            get_metrics_for_period(malicious_role, 0, 100, initialized_db)

    @pytest.mark.parametrize("malicious_role", [
        "'; DROP TABLE metrics; --",
        "companion OR 1=1",
    ])
    def test_get_latest_metrics_rejects_injection(self, initialized_db, malicious_role):
        """get_latest_metrics rejects SQL injection attempts."""
        with pytest.raises(ValueError, match="Invalid role"):
            get_latest_metrics(malicious_role, initialized_db)

    @pytest.mark.parametrize("malicious_role", [
        "'; DROP TABLE metrics; --",
        "companion OR 1=1",
    ])
    def test_get_metric_count_rejects_injection(self, initialized_db, malicious_role):
        """get_metric_count rejects SQL injection attempts."""
        with pytest.raises(ValueError, match="Invalid role"):
            get_metric_count(malicious_role, initialized_db)

    @pytest.mark.parametrize("malicious_role", [
        "'; DROP TABLE metrics; --",
        "companion OR 1=1",
    ])
    def test_get_distinct_timestamps_rejects_injection(self, initialized_db, malicious_role):
        """get_distinct_timestamps rejects SQL injection attempts."""
        with pytest.raises(ValueError, match="Invalid role"):
            get_distinct_timestamps(malicious_role, initialized_db)

    @pytest.mark.parametrize("malicious_role", [
        "'; DROP TABLE metrics; --",
        "companion OR 1=1",
    ])
    def test_get_available_metrics_rejects_injection(self, initialized_db, malicious_role):
        """get_available_metrics rejects SQL injection attempts."""
        with pytest.raises(ValueError, match="Invalid role"):
            get_available_metrics(malicious_role, initialized_db)


class TestValidRolesConstant:
    """Tests for VALID_ROLES constant."""

    def test_contains_companion(self):
        """VALID_ROLES includes 'companion'."""
        assert "companion" in VALID_ROLES

    def test_contains_repeater(self):
        """VALID_ROLES includes 'repeater'."""
        assert "repeater" in VALID_ROLES

    def test_is_tuple(self):
        """VALID_ROLES is immutable (tuple)."""
        assert isinstance(VALID_ROLES, tuple)

    def test_exactly_two_roles(self):
        """There are exactly two valid roles."""
        assert len(VALID_ROLES) == 2


class TestMetricNameValidation:
    """Tests for metric name handling (not validated, but should handle safely)."""

    def test_metric_name_with_special_chars(self, initialized_db):
        """Metric names with special chars are handled via parameterized queries."""
        # These should work because we use parameterized queries
        insert_metric(1000, "companion", "test.metric", 1.0, initialized_db)
        insert_metric(1001, "companion", "test-metric", 2.0, initialized_db)
        insert_metric(1002, "companion", "test_metric", 3.0, initialized_db)

        metrics = get_available_metrics("companion", initialized_db)
        assert "test.metric" in metrics
        assert "test-metric" in metrics
        assert "test_metric" in metrics

    def test_metric_name_with_spaces(self, initialized_db):
        """Metric names with spaces are handled safely."""
        insert_metric(1000, "companion", "test metric", 1.0, initialized_db)

        metrics = get_available_metrics("companion", initialized_db)
        assert "test metric" in metrics

    def test_metric_name_unicode(self, initialized_db):
        """Unicode metric names are handled safely."""
        insert_metric(1000, "companion", "température", 1.0, initialized_db)
        insert_metric(1001, "companion", "温度", 2.0, initialized_db)

        metrics = get_available_metrics("companion", initialized_db)
        assert "température" in metrics
        assert "温度" in metrics

    def test_empty_metric_name(self, initialized_db):
        """Empty metric name is allowed (not validated)."""
        # Empty string is allowed as metric name
        insert_metric(1000, "companion", "", 1.0, initialized_db)

        metrics = get_available_metrics("companion", initialized_db)
        assert "" in metrics

    def test_very_long_metric_name(self, initialized_db):
        """Very long metric names are handled."""
        long_name = "a" * 1000
        insert_metric(1000, "companion", long_name, 1.0, initialized_db)

        metrics = get_available_metrics("companion", initialized_db)
        assert long_name in metrics
tests/html/__init__.py (new file, 1 line)
@@ -0,0 +1 @@
"""Tests for HTML generation."""
tests/html/conftest.py (new file, 74 lines)
@@ -0,0 +1,74 @@
"""Fixtures for HTML tests."""

from pathlib import Path

import pytest


@pytest.fixture
def sample_chart_stats():
    """Sample chart statistics for template rendering."""
    return {
        "bat": {
            "day": {"min": 3.5, "avg": 3.7, "max": 3.9, "current": 3.85},
            "week": {"min": 3.4, "avg": 3.65, "max": 3.95, "current": 3.8},
            "month": {"min": 3.3, "avg": 3.6, "max": 4.0, "current": 3.75},
            "year": {"min": 3.2, "avg": 3.55, "max": 4.1, "current": 3.7},
        },
        "bat_pct": {
            "day": {"min": 50, "avg": 70, "max": 90, "current": 85},
            "week": {"min": 45, "avg": 65, "max": 95, "current": 80},
        },
        "nb_recv": {
            "day": {"min": 0, "avg": 50.5, "max": 100, "current": 75},
            "week": {"min": 0, "avg": 48.2, "max": 150, "current": 60},
        },
    }


@pytest.fixture
def sample_latest_metrics():
    """Sample latest metrics for page rendering."""
    return {
        "ts": 1704067200,  # 2024-01-01 00:00:00 UTC
        "bat": 3850.0,
        "bat_pct": 75.0,
        "uptime": 86400,
        "last_rssi": -85,
        "last_snr": 7.5,
        "noise_floor": -115,
        "nb_recv": 1234,
        "nb_sent": 567,
        "tx_queue_len": 0,
    }


@pytest.fixture
def sample_companion_latest():
    """Sample companion latest metrics."""
    return {
        "ts": 1704067200,
        "battery_mv": 3850.0,
        "bat_pct": 75.0,
        "uptime_secs": 86400,
        "contacts": 5,
        "recv": 1234,
        "sent": 567,
    }


@pytest.fixture
def templates_dir():
    """Path to templates directory."""
    return Path(__file__).parent.parent.parent / "src" / "meshmon" / "templates"


@pytest.fixture
def sample_svg_content():
    """Sample SVG content for testing."""
    return """<?xml version="1.0" encoding="utf-8"?>
<svg xmlns="http://www.w3.org/2000/svg" width="800" height="280"
     data-metric="bat" data-period="day" data-theme="light">
  <rect width="100%" height="100%" fill="#ffffff"/>
  <path d="M0,0 L100,100"/>
</svg>"""
49  tests/html/test_chart_groups.py  Normal file
@@ -0,0 +1,49 @@
"""Tests for chart group building, including telemetry grouping behavior."""

from __future__ import annotations

import meshmon.html as html


def test_repeater_appends_telemetry_group_when_enabled(configured_env, monkeypatch):
    """Repeater chart groups append telemetry section when enabled and available."""
    monkeypatch.setenv("TELEMETRY_ENABLED", "1")
    import meshmon.env

    meshmon.env._config = None

    monkeypatch.setattr(html, "_load_svg_content", lambda path: "<svg></svg>")

    chart_stats = {
        "bat": {"day": {"min": 3.5, "avg": 3.7, "max": 3.9, "current": 3.8}},
        "telemetry.temperature.1": {"day": {"min": 5.0, "avg": 6.0, "max": 7.0, "current": 6.5}},
        "telemetry.humidity.1": {"day": {"min": 82.0, "avg": 84.0, "max": 86.0, "current": 85.0}},
        "telemetry.voltage.1": {"day": {"min": 3.9, "avg": 4.0, "max": 4.1, "current": 4.0}},
        "telemetry.gps.0.latitude": {"day": {"min": 52.1, "avg": 52.2, "max": 52.3, "current": 52.25}},
    }

    groups = html.build_chart_groups("repeater", "day", chart_stats)

    assert groups[-1]["title"] == "Telemetry"
    telemetry_metrics = [chart["metric"] for chart in groups[-1]["charts"]]
    assert "telemetry.temperature.1" in telemetry_metrics
    assert "telemetry.humidity.1" in telemetry_metrics
    assert "telemetry.voltage.1" not in telemetry_metrics
    assert "telemetry.gps.0.latitude" not in telemetry_metrics


def test_repeater_has_no_telemetry_group_when_disabled(configured_env, monkeypatch):
    """Repeater chart groups do not include telemetry section when disabled."""
    monkeypatch.setenv("TELEMETRY_ENABLED", "0")
    import meshmon.env

    meshmon.env._config = None

    monkeypatch.setattr(html, "_load_svg_content", lambda path: "<svg></svg>")

    chart_stats = {
        "bat": {"day": {"min": 3.5, "avg": 3.7, "max": 3.9, "current": 3.8}},
        "telemetry.temperature.1": {"day": {"min": 5.0, "avg": 6.0, "max": 7.0, "current": 6.5}},
    }

    groups = html.build_chart_groups("repeater", "day", chart_stats)

    assert "Telemetry" not in [group["title"] for group in groups]
149  tests/html/test_jinja_env.py  Normal file
@@ -0,0 +1,149 @@
"""Tests for Jinja2 environment and custom filters."""

import re

import pytest
from jinja2 import Environment

from meshmon.html import get_jinja_env


class TestGetJinjaEnv:
    """Tests for get_jinja_env function."""

    def test_returns_environment(self):
        """Returns a Jinja2 Environment."""
        env = get_jinja_env()
        assert isinstance(env, Environment)

    def test_has_autoescape(self):
        """Environment has autoescape enabled."""
        env = get_jinja_env()
        # Default is to autoescape HTML files
        assert env.autoescape is True or callable(env.autoescape)

    def test_can_load_templates(self, templates_dir):
        """Can load templates from the templates directory."""
        env = get_jinja_env()

        # Should be able to get the base template
        template = env.get_template("base.html")
        assert template is not None

    def test_returns_same_instance(self):
        """Returns the same environment instance (cached)."""
        env1 = get_jinja_env()
        env2 = get_jinja_env()
        assert env1 is env2


class TestJinjaFilters:
    """Tests for custom Jinja2 filters."""

    @pytest.fixture
    def env(self):
        """Get Jinja2 environment."""
        return get_jinja_env()

    def test_format_number_filter_exists(self, env):
        """format_number filter is registered."""
        assert "format_number" in env.filters

    def test_format_number_formats_thousands(self, env):
        """format_number adds thousand separators."""
        template = env.from_string("{{ value|format_number }}")

        result = template.render(value=1234567)
        assert result == "1,234,567"

    def test_format_number_handles_none(self, env):
        """format_number handles None gracefully."""
        template = env.from_string("{{ value|format_number }}")

        result = template.render(value=None)
        assert result == "N/A"

    def test_format_time_filter_exists(self, env):
        """format_time filter is registered."""
        assert "format_time" in env.filters

    def test_format_time_formats_timestamp(self, env):
        """format_time formats Unix timestamp."""
        template = env.from_string("{{ value|format_time }}")

        ts = 1704067200
        result = template.render(value=ts)
        assert re.match(r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}$", result)

    def test_format_time_handles_none(self, env):
        """format_time handles None gracefully."""
        template = env.from_string("{{ value|format_time }}")

        result = template.render(value=None)
        assert result == "N/A"

    def test_format_uptime_filter_exists(self, env):
        """format_uptime filter is registered."""
        assert "format_uptime" in env.filters

    def test_format_uptime_formats_seconds(self, env):
        """format_uptime formats seconds to human readable."""
        template = env.from_string("{{ value|format_uptime }}")

        # 1 day, 2 hours, 30 minutes = 95400 seconds
        result = template.render(value=95400)
        assert result == "1d 2h 30m"

    def test_format_duration_filter_exists(self, env):
        """format_duration filter is registered."""
        assert "format_duration" in env.filters

    def test_format_value_filter_exists(self, env):
        """format_value filter is registered."""
        assert "format_value" in env.filters

    def test_format_compact_number_filter_exists(self, env):
        """format_compact_number filter is registered."""
        assert "format_compact_number" in env.filters


class TestTemplateRendering:
    """Tests for basic template rendering."""

    def test_base_template_renders(self):
        """Base template renders without error."""
        env = get_jinja_env()
        template = env.get_template("base.html")

        # Render with minimal context
        html = template.render(
            role="repeater",
            period="day",
            title="Test",
        )

        assert "</html>" in html

    def test_node_template_extends_base(self):
        """Node template extends base template."""
        env = get_jinja_env()
        template = env.get_template("node.html")

        # Should have access to base template blocks
        assert template is not None

    def test_template_has_html_structure(self):
        """Rendered template has proper HTML structure."""
        env = get_jinja_env()
        template = env.get_template("base.html")

        html = template.render(
            role="repeater",
            period="day",
            title="Test Page",
        )

        assert "<!DOCTYPE html>" in html or "<!doctype html>" in html.lower()
        assert "<html" in html
        assert "<head>" in html
        assert "<body>" in html
192  tests/html/test_metrics_builders.py  Normal file
@@ -0,0 +1,192 @@
"""Tests for metrics builder functions."""


from meshmon.html import (
    _build_traffic_table_rows,
    build_companion_metrics,
    build_node_details,
    build_radio_config,
    build_repeater_metrics,
)


class TestBuildRepeaterMetrics:
    """Tests for build_repeater_metrics function."""

    def test_returns_dict(self, sample_repeater_metrics):
        """Returns a dictionary."""
        # build_repeater_metrics takes a row dict (firmware field names)
        result = build_repeater_metrics(sample_repeater_metrics)
        assert isinstance(result, dict)

    def test_returns_dict_structure(self, sample_repeater_metrics):
        """Returns dict with expected keys."""
        result = build_repeater_metrics(sample_repeater_metrics)
        # Should have critical_metrics, secondary_metrics, traffic_metrics
        assert "critical_metrics" in result
        assert "secondary_metrics" in result
        assert "traffic_metrics" in result

    def test_critical_metrics_is_list(self, sample_repeater_metrics):
        """Critical metrics is a list."""
        result = build_repeater_metrics(sample_repeater_metrics)
        assert isinstance(result["critical_metrics"], list)

    def test_handles_none(self):
        """Handles None row."""
        result = build_repeater_metrics(None)
        assert isinstance(result, dict)
        assert result["critical_metrics"] == []

    def test_handles_empty_dict(self):
        """Handles empty dict."""
        result = build_repeater_metrics({})
        assert isinstance(result, dict)


class TestBuildCompanionMetrics:
    """Tests for build_companion_metrics function."""

    def test_returns_dict(self, sample_companion_metrics):
        """Returns a dictionary."""
        result = build_companion_metrics(sample_companion_metrics)
        assert isinstance(result, dict)

    def test_returns_dict_structure(self, sample_companion_metrics):
        """Returns dict with expected keys."""
        result = build_companion_metrics(sample_companion_metrics)
        assert "critical_metrics" in result
        assert "secondary_metrics" in result
        assert "traffic_metrics" in result

    def test_handles_none(self):
        """Handles None row."""
        result = build_companion_metrics(None)
        assert isinstance(result, dict)
        assert result["critical_metrics"] == []

    def test_handles_empty_dict(self):
        """Handles empty dict."""
        result = build_companion_metrics({})
        assert isinstance(result, dict)


class TestBuildNodeDetails:
    """Tests for build_node_details function."""

    def test_returns_list(self, configured_env):
        """Returns a list of detail items."""
        result = build_node_details("repeater")
        assert isinstance(result, list)

    def test_items_have_label_value(self, configured_env):
        """Each item has label and value."""
        result = build_node_details("repeater")
        for item in result:
            assert isinstance(item, dict)
            assert "label" in item
            assert "value" in item

    def test_includes_hardware_info(self, configured_env, monkeypatch):
        """Includes hardware model info."""
        monkeypatch.setenv("REPEATER_HARDWARE", "Test LoRa Device")
        import meshmon.env

        meshmon.env._config = None

        result = build_node_details("repeater")

        # Should have hardware in one of the items
        hardware = next(item for item in result if item.get("label") == "Hardware")
        assert hardware["value"] == "Test LoRa Device"

    def test_different_roles(self, configured_env):
        """Different roles return details."""
        repeater_details = build_node_details("repeater")
        companion_details = build_node_details("companion")

        assert isinstance(repeater_details, list)
        assert isinstance(companion_details, list)


class TestBuildRadioConfig:
    """Tests for build_radio_config function."""

    def test_returns_list(self, configured_env):
        """Returns a list of config items."""
        result = build_radio_config()
        assert isinstance(result, list)

    def test_items_have_label_value(self, configured_env):
        """Each item has label and value."""
        result = build_radio_config()
        for item in result:
            assert isinstance(item, dict)
            assert "label" in item
            assert "value" in item

    def test_includes_frequency_when_set(self, configured_env, monkeypatch):
        """Includes frequency when configured."""
        monkeypatch.setenv("RADIO_FREQUENCY", "869.618 MHz")
        import meshmon.env

        meshmon.env._config = None

        result = build_radio_config()

        freq = next(item for item in result if item.get("label") == "Frequency")
        assert freq["value"] == "869.618 MHz"

    def test_handles_missing_config(self, configured_env):
        """Returns list even with default config."""
        result = build_radio_config()
        assert isinstance(result, list)


class TestBuildTrafficTableRows:
    """Tests for _build_traffic_table_rows function."""

    def test_returns_list(self):
        """Returns a list of rows."""
        # Input is list of traffic metric dicts
        traffic_metrics = [
            {"label": "RX", "value": "100", "raw_value": 100, "unit": "/min"},
            {"label": "TX", "value": "50", "raw_value": 50, "unit": "/min"},
        ]
        rows = _build_traffic_table_rows(traffic_metrics)
        assert isinstance(rows, list)

    def test_rows_have_structure(self):
        """Each row has expected structure."""
        traffic_metrics = [
            {"label": "RX", "value": "100", "raw_value": 100, "unit": "/min"},
            {"label": "TX", "value": "50", "raw_value": 50, "unit": "/min"},
        ]
        rows = _build_traffic_table_rows(traffic_metrics)

        for row in rows:
            assert isinstance(row, dict)
            assert "label" in row
            assert "rx" in row
            assert "tx" in row
            assert "rx_raw" in row
            assert "tx_raw" in row
            assert "unit" in row

    def test_handles_empty_list(self):
        """Handles empty traffic metrics list."""
        rows = _build_traffic_table_rows([])
        assert isinstance(rows, list)
        assert len(rows) == 0

    def test_combines_rx_tx_pairs(self):
        """Combines RX and TX into single row."""
        traffic_metrics = [
            {"label": "Flood RX", "value": "100", "raw_value": 100, "unit": "/min"},
            {"label": "Flood TX", "value": "50", "raw_value": 50, "unit": "/min"},
        ]
        rows = _build_traffic_table_rows(traffic_metrics)

        # Should have one "Flood" row with both rx and tx
        assert len(rows) == 1
        assert rows[0]["label"] == "Flood"
        assert rows[0]["rx"] == "100"
        assert rows[0]["tx"] == "50"
256  tests/html/test_page_context.py  Normal file
@@ -0,0 +1,256 @@
"""Tests for page context building."""

from datetime import datetime, timedelta

import pytest

from meshmon.html import (
    build_page_context,
    get_status,
)

FIXED_NOW = datetime(2024, 1, 1, 12, 0, 0)


@pytest.fixture
def fixed_now(monkeypatch):
    class FixedDatetime(datetime):
        @classmethod
        def now(cls):
            return FIXED_NOW

    monkeypatch.setattr("meshmon.html.datetime", FixedDatetime)
    return FIXED_NOW


class TestGetStatus:
    """Tests for get_status function."""

    def test_online_for_recent_data(self, fixed_now):
        """Returns 'online' for data less than 30 minutes old."""
        # 10 minutes ago
        recent_ts = int(fixed_now.timestamp()) - 600

        status_class, status_label = get_status(recent_ts)

        assert status_class == "online"

    def test_stale_for_medium_age_data(self, fixed_now):
        """Returns 'stale' for data 30 minutes to 2 hours old."""
        # 1 hour ago
        medium_ts = int(fixed_now.timestamp()) - 3600

        status_class, status_label = get_status(medium_ts)

        assert status_class == "stale"

    def test_offline_for_old_data(self, fixed_now):
        """Returns 'offline' for data more than 2 hours old."""
        # 3 hours ago
        old_ts = int(fixed_now.timestamp()) - 10800

        status_class, status_label = get_status(old_ts)

        assert status_class == "offline"

    def test_offline_for_very_old_data(self, fixed_now):
        """Returns 'offline' for very old data."""
        # 7 days ago
        very_old_ts = int(fixed_now.timestamp()) - int(timedelta(days=7).total_seconds())

        status_class, status_label = get_status(very_old_ts)

        assert status_class == "offline"

    def test_offline_for_none(self):
        """Returns 'offline' for None timestamp."""
        status_class, status_label = get_status(None)

        assert status_class == "offline"

    def test_offline_for_zero(self):
        """Returns 'offline' for zero timestamp."""
        status_class, status_label = get_status(0)

        assert status_class == "offline"

    def test_online_for_current_time(self, fixed_now):
        """Returns 'online' for current timestamp."""
        now_ts = int(fixed_now.timestamp())

        status_class, status_label = get_status(now_ts)

        assert status_class == "online"

    def test_boundary_30_minutes(self, fixed_now):
        """Tests boundary at exactly 30 minutes."""
        # Exactly 30 minutes ago
        boundary_ts = int(fixed_now.timestamp()) - 1800

        status_class, _ = get_status(boundary_ts)
        assert status_class == "stale"

    def test_boundary_2_hours(self, fixed_now):
        """Tests boundary at exactly 2 hours."""
        # Exactly 2 hours ago
        boundary_ts = int(fixed_now.timestamp()) - 7200

        status_class, _ = get_status(boundary_ts)
        assert status_class == "offline"

    def test_returns_tuple(self, fixed_now):
        """Returns tuple of (status_class, status_label)."""
        status = get_status(int(fixed_now.timestamp()))
        assert isinstance(status, tuple)
        assert len(status) == 2

    def test_status_label_is_string(self, fixed_now):
        """Status label is a string."""
        _, status_label = get_status(int(fixed_now.timestamp()))
        assert isinstance(status_label, str)


class TestBuildPageContext:
    """Tests for build_page_context function."""

    @pytest.fixture
    def sample_row(self, sample_repeater_metrics, fixed_now):
        """Create a sample row with timestamp."""
        row = sample_repeater_metrics.copy()
        row["ts"] = int(fixed_now.timestamp()) - 300  # 5 minutes ago
        return row

    def test_returns_dict(self, configured_env, sample_row):
        """Returns a dictionary."""
        context = build_page_context(
            role="repeater",
            period="day",
            row=sample_row,
            at_root=True,
        )

        assert isinstance(context, dict)

    def test_includes_role_and_period(self, configured_env, sample_row):
        """Context includes role and period."""
        context = build_page_context(
            role="repeater",
            period="day",
            row=sample_row,
            at_root=True,
        )

        assert context.get("role") == "repeater"
        assert context.get("period") == "day"

    def test_includes_status(self, configured_env, sample_row):
        """Context includes status indicator."""
        context = build_page_context(
            role="repeater",
            period="day",
            row=sample_row,
            at_root=True,
        )

        assert context["status_class"] == "online"

    def test_handles_none_row(self, configured_env):
        """Handles None row gracefully."""
        context = build_page_context(
            role="repeater",
            period="day",
            row=None,
            at_root=True,
        )

        assert context.get("status_class") == "offline"

    def test_includes_node_name(self, configured_env, sample_row, monkeypatch):
        """Context includes node name from config."""
        monkeypatch.setenv("REPEATER_DISPLAY_NAME", "Test Repeater")
        import meshmon.env

        meshmon.env._config = None

        context = build_page_context(
            role="repeater",
            period="day",
            row=sample_row,
            at_root=True,
        )

        assert "node_name" in context
        assert context["node_name"] == "Test Repeater"

    def test_includes_period(self, configured_env, sample_row):
        """Context includes current period."""
        context = build_page_context(
            role="repeater",
            period="day",
            row=sample_row,
            at_root=True,
        )

        assert "period" in context
        assert context["period"] == "day"

    def test_different_roles(self, configured_env, sample_row, sample_companion_metrics, fixed_now):
        """Context varies by role."""
        companion_row = sample_companion_metrics.copy()
        companion_row["ts"] = int(fixed_now.timestamp()) - 300

        repeater_context = build_page_context(
            role="repeater",
            period="day",
            row=sample_row,
            at_root=True,
        )
        companion_context = build_page_context(
            role="companion",
            period="day",
            row=companion_row,
            at_root=False,
        )

        assert repeater_context["role"] == "repeater"
        assert companion_context["role"] == "companion"

    def test_at_root_affects_css_path(self, configured_env, sample_row):
        """at_root parameter affects CSS path."""
        root_context = build_page_context(
            role="repeater",
            period="day",
            row=sample_row,
            at_root=True,
        )
        non_root_context = build_page_context(
            role="companion",
            period="day",
            row=sample_row,
            at_root=False,
        )

        assert root_context["css_path"] == ""
        assert non_root_context["css_path"] == "../"

    def test_links_use_relative_paths(self, configured_env, sample_row):
        """Navigation and asset links are relative for subpath deployments."""
        root_context = build_page_context(
            role="repeater",
            period="day",
            row=sample_row,
            at_root=True,
        )
        non_root_context = build_page_context(
            role="companion",
            period="day",
            row=sample_row,
            at_root=False,
        )

        assert root_context["repeater_link"] == "day.html"
        assert root_context["companion_link"] == "companion/day.html"
        assert root_context["reports_link"] == "reports/"

        assert non_root_context["repeater_link"] == "../day.html"
        assert non_root_context["companion_link"] == "day.html"
        assert non_root_context["reports_link"] == "../reports/"
110  tests/html/test_reports_index.py  Normal file
@@ -0,0 +1,110 @@
"""Tests for reports index page generation."""


import pytest

from meshmon.html import render_reports_index


class TestRenderReportsIndex:
    """Tests for render_reports_index function."""

    @pytest.fixture
    def sample_report_sections(self):
        """Sample report sections for testing."""
        return [
            {
                "role": "repeater",
                "years": [
                    {
                        "year": 2024,
                        "months": [
                            {"month": 1, "name": "January"},
                            {"month": 2, "name": "February"},
                        ],
                    }
                ],
            },
            {
                "role": "companion",
                "years": [
                    {
                        "year": 2024,
                        "months": [
                            {"month": 1, "name": "January"},
                        ],
                    }
                ],
            },
        ]

    def test_returns_html_string(self, configured_env, sample_report_sections):
        """Returns an HTML string."""
        html = render_reports_index(sample_report_sections)

        assert isinstance(html, str)
        assert len(html) > 0

    def test_html_structure(self, configured_env, sample_report_sections):
        """Generated HTML has proper structure."""
        html = render_reports_index(sample_report_sections)

        assert "<!DOCTYPE html>" in html or "<!doctype html>" in html.lower()
        assert "</html>" in html

    def test_includes_title(self, configured_env, sample_report_sections):
        """Index page includes title."""
        html = render_reports_index(sample_report_sections)

        assert "Reports Archive" in html

    def test_includes_year(self, configured_env, sample_report_sections):
        """Lists available years."""
        html = render_reports_index(sample_report_sections)

        assert "/reports/repeater/2024/" in html

    def test_handles_empty_sections(self, configured_env):
        """Handles empty report sections."""
        html = render_reports_index([])

        assert isinstance(html, str)
        assert "</html>" in html

    def test_includes_role_names(self, configured_env, sample_report_sections):
        """Includes role names in output."""
        html = render_reports_index(sample_report_sections)

        assert "Repeater" in html
        assert "Companion" in html

    def test_includes_descriptions(self, configured_env, sample_report_sections, monkeypatch):
        """Includes role descriptions from config."""
        monkeypatch.setenv("REPEATER_DISPLAY_NAME", "Alpha Repeater")
        monkeypatch.setenv("COMPANION_DISPLAY_NAME", "Beta Node")
        monkeypatch.setenv("REPORT_LOCATION_SHORT", "Test Ridge")
        import meshmon.env

        meshmon.env._config = None

        html = render_reports_index(sample_report_sections)

        assert "Alpha Repeater — Remote node in Test Ridge" in html
        assert "Beta Node — Local USB-connected node" in html

    def test_includes_css_reference(self, configured_env, sample_report_sections):
        """Includes reference to stylesheet."""
        html = render_reports_index(sample_report_sections)

        assert "styles.css" in html

    def test_handles_sections_without_years(self, configured_env):
        """Handles sections with no years."""
        sections = [
            {"role": "repeater", "years": []},
            {"role": "companion", "years": []},
        ]

        html = render_reports_index(sections)

        assert isinstance(html, str)
        assert "No reports available yet." in html
282  tests/html/test_write_site.py  Normal file
@@ -0,0 +1,282 @@
"""Tests for write_site and related output functions."""

import pytest

from meshmon.db import get_latest_metrics
from meshmon.html import (
    copy_static_assets,
    write_site,
)

BASE_TS = 1704067200


def _sample_companion_metrics() -> dict[str, float]:
    return {
        "battery_mv": 3850.0,
        "uptime_secs": 86400.0,
        "contacts": 5.0,
        "recv": 1234.0,
        "sent": 567.0,
        "errors": 0.0,
    }


def _sample_repeater_metrics() -> dict[str, float]:
    return {
        "bat": 3920.0,
        "uptime": 172800.0,
        "last_rssi": -85.0,
        "last_snr": 7.5,
        "noise_floor": -115.0,
        "tx_queue_len": 0.0,
        "nb_recv": 5678.0,
        "nb_sent": 2345.0,
        "airtime": 3600.0,
        "rx_airtime": 7200.0,
        "flood_dups": 12.0,
        "direct_dups": 5.0,
        "sent_flood": 100.0,
        "recv_flood": 200.0,
        "sent_direct": 50.0,
        "recv_direct": 75.0,
    }


@pytest.fixture(scope="module")
def html_db_cache(tmp_path_factory):
    """Create and populate a shared DB once for HTML write_site tests."""
    from meshmon.db import init_db, insert_metrics

    root_dir = tmp_path_factory.mktemp("html-db")
    state_dir = root_dir / "state"
    state_dir.mkdir()

    db_path = state_dir / "metrics.db"
    init_db(db_path=db_path)

    now = BASE_TS
    day_seconds = 86400

    sample_companion_metrics = _sample_companion_metrics()
    sample_repeater_metrics = _sample_repeater_metrics()

    # Insert 7 days of companion data (every hour)
    for day in range(7):
        for hour in range(24):
            ts = now - (day * day_seconds) - (hour * 3600)
            metrics = sample_companion_metrics.copy()
            metrics["battery_mv"] = 3700 + (hour * 10) + (day * 5)
            metrics["recv"] = 100 * (day + 1) + hour
            metrics["sent"] = 50 * (day + 1) + hour
            insert_metrics(ts, "companion", metrics, db_path=db_path)

    # Insert 7 days of repeater data (every 15 minutes)
    for day in range(7):
        for interval in range(96):  # 24 * 4
            ts = now - (day * day_seconds) - (interval * 900)
            metrics = sample_repeater_metrics.copy()
            metrics["bat"] = 3700 + (interval * 2) + (day * 5)
            metrics["nb_recv"] = 1000 * (day + 1) + interval * 10
            metrics["nb_sent"] = 500 * (day + 1) + interval * 5
            insert_metrics(ts, "repeater", metrics, db_path=db_path)

    return {"state_dir": state_dir, "db_path": db_path}


@pytest.fixture
def html_env(html_db_cache, tmp_out_dir, monkeypatch):
    """Env with shared DB and per-test output directory."""
    monkeypatch.setenv("STATE_DIR", str(html_db_cache["state_dir"]))
    monkeypatch.setenv("OUT_DIR", str(tmp_out_dir))

    import meshmon.env

    meshmon.env._config = None

    return {"state_dir": html_db_cache["state_dir"], "out_dir": tmp_out_dir}


@pytest.fixture
def metrics_rows(html_env):
    """Get latest metrics rows for both roles."""
    companion_row = get_latest_metrics("companion")
    repeater_row = get_latest_metrics("repeater")
    return {"companion": companion_row, "repeater": repeater_row}


class TestWriteSite:
    """Tests for write_site function."""

    def test_creates_output_directory(self, html_env, metrics_rows):
        """Creates output directory if it doesn't exist."""
        out_dir = html_env["out_dir"]

        write_site(metrics_rows["companion"], metrics_rows["repeater"])

        assert out_dir.exists()

    def test_generates_repeater_pages(self, html_env, metrics_rows):
        """Generates repeater HTML pages at root."""
        out_dir = html_env["out_dir"]

        write_site(metrics_rows["companion"], metrics_rows["repeater"])

        # Repeater pages are at root
        for period in ["day", "week", "month", "year"]:
            assert (out_dir / f"{period}.html").exists()

    def test_generates_companion_pages(self, html_env, metrics_rows):
        """Generates companion HTML pages in subdirectory."""
        out_dir = html_env["out_dir"]

        write_site(metrics_rows["companion"], metrics_rows["repeater"])

        companion_dir = out_dir / "companion"
        assert companion_dir.exists()
        for period in ["day", "week", "month", "year"]:
            assert (companion_dir / f"{period}.html").exists()

    def test_html_files_are_valid(self, html_env, metrics_rows):
        """Generated HTML files have valid structure."""
        out_dir = html_env["out_dir"]

        write_site(metrics_rows["companion"], metrics_rows["repeater"])

        html_file = out_dir / "day.html"
        content = html_file.read_text()

        assert "<!DOCTYPE html>" in content or "<!doctype html>" in content.lower()
        assert "</html>" in content

    def test_handles_empty_database(self, configured_env, initialized_db):
        """Handles empty database gracefully."""
        out_dir = configured_env["out_dir"]

        # Should not raise - pass None for empty database
        write_site(None, None)

        # Should still generate pages
        assert (out_dir / "day.html").exists()


class TestCopyStaticAssets:
    """Tests for copy_static_assets function."""

    def test_copies_css(self, html_env):
        """Copies CSS stylesheet."""
        out_dir = html_env["out_dir"]
        out_dir.mkdir(parents=True, exist_ok=True)

        copy_static_assets()

        css_file = out_dir / "styles.css"
        assert css_file.exists()

    def test_copies_javascript(self, html_env):
        """Copies JavaScript files."""
        out_dir = html_env["out_dir"]
        out_dir.mkdir(parents=True, exist_ok=True)

        copy_static_assets()

        js_file = out_dir / "chart-tooltip.js"
        assert js_file.exists()

    def test_css_is_valid(self, html_env):
        """Copied CSS has expected content."""
        out_dir = html_env["out_dir"]
        out_dir.mkdir(parents=True, exist_ok=True)

        copy_static_assets()

        css_file = out_dir / "styles.css"
        content = css_file.read_text()

        assert "--bg-primary" in content

    def test_requires_output_directory(self, html_env):
        """Requires output directory to exist."""
        out_dir = html_env["out_dir"]
        # Ensure out_dir exists
        out_dir.mkdir(parents=True, exist_ok=True)

        # Should not raise when directory exists
        copy_static_assets()

        assert (out_dir / "styles.css").exists()

def test_overwrites_existing(self, html_env):
|
||||
"""Overwrites existing static files."""
|
||||
out_dir = html_env["out_dir"]
|
||||
out_dir.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
# Create a fake CSS file
|
||||
css_file = out_dir / "styles.css"
|
||||
css_file.write_text("/* fake */")
|
||||
|
||||
copy_static_assets()
|
||||
|
||||
# Should be overwritten with real content
|
||||
content = css_file.read_text()
|
||||
assert content != "/* fake */"
|
||||
|
||||
|
||||
class TestHtmlOutput:
|
||||
"""Tests for HTML output structure."""
|
||||
|
||||
def test_pages_include_navigation(self, html_env, metrics_rows):
|
||||
"""HTML pages include navigation."""
|
||||
out_dir = html_env["out_dir"]
|
||||
|
||||
write_site(metrics_rows["companion"], metrics_rows["repeater"])
|
||||
|
||||
content = (out_dir / "day.html").read_text()
|
||||
|
||||
# Should have links to other periods
|
||||
assert "week" in content.lower()
|
||||
assert "month" in content.lower()
|
||||
|
||||
def test_pages_include_meta_tags(self, html_env, metrics_rows):
|
||||
"""HTML pages include meta tags."""
|
||||
out_dir = html_env["out_dir"]
|
||||
|
||||
write_site(metrics_rows["companion"], metrics_rows["repeater"])
|
||||
|
||||
content = (out_dir / "day.html").read_text()
|
||||
|
||||
assert "<meta" in content
|
||||
assert "charset" in content.lower() or "utf-8" in content.lower()
|
||||
|
||||
def test_pages_include_title(self, html_env, metrics_rows):
|
||||
"""HTML pages include title tag."""
|
||||
out_dir = html_env["out_dir"]
|
||||
|
||||
write_site(metrics_rows["companion"], metrics_rows["repeater"])
|
||||
|
||||
content = (out_dir / "day.html").read_text()
|
||||
|
||||
assert "<title>" in content
|
||||
assert "</title>" in content
|
||||
|
||||
def test_pages_reference_css(self, html_env, metrics_rows):
|
||||
"""HTML pages reference stylesheet."""
|
||||
out_dir = html_env["out_dir"]
|
||||
|
||||
write_site(metrics_rows["companion"], metrics_rows["repeater"])
|
||||
|
||||
content = (out_dir / "day.html").read_text()
|
||||
|
||||
assert 'href="styles.css"' in content
|
||||
assert 'href="/styles.css"' not in content
|
||||
|
||||
def test_companion_pages_relative_css(self, html_env, metrics_rows):
|
||||
"""Companion pages use relative path to CSS."""
|
||||
out_dir = html_env["out_dir"]
|
||||
|
||||
write_site(metrics_rows["companion"], metrics_rows["repeater"])
|
||||
|
||||
content = (out_dir / "companion" / "day.html").read_text()
|
||||
|
||||
# Should reference parent directory CSS
|
||||
assert 'href="../styles.css"' in content
|
||||
assert 'href="/styles.css"' not in content
|
||||
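The last two tests pin down the stylesheet-linking convention: root pages reference `styles.css` directly, pages one directory deep (the companion pages) reference `../styles.css`, and absolute `/styles.css` paths are never emitted, so the generated site stays servable from any sub-path. A minimal sketch of that convention — `css_href` is an illustrative helper, not the project's actual code:

```python
def css_href(depth: int) -> str:
    """Return the relative stylesheet path for a page `depth` directories below the site root."""
    # depth 0 -> "styles.css", depth 1 -> "../styles.css", and so on.
    # Relative paths avoid hard-coding the site's mount point.
    return "../" * depth + "styles.css"
```

Root pages would use `css_href(0)` and companion pages `css_href(1)`, matching the two assertions above.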
1	tests/integration/__init__.py	Normal file
@@ -0,0 +1 @@
"""Integration tests for end-to-end pipelines."""
278	tests/integration/conftest.py	Normal file
@@ -0,0 +1,278 @@
"""Integration test fixtures."""

import os
import time
from unittest.mock import AsyncMock, MagicMock

import pytest

_INTEGRATION_ENV = {
    "REPORT_LOCATION_NAME": "Test Location",
    "REPORT_LOCATION_SHORT": "Test",
    "REPEATER_DISPLAY_NAME": "Test Repeater",
    "COMPANION_DISPLAY_NAME": "Test Companion",
    "MESH_TRANSPORT": "serial",
    "MESH_SERIAL_PORT": "/dev/ttyACM0",
}
RENDERED_CHART_METRICS = {
    "companion": ["battery_mv", "recv", "contacts"],
    "repeater": ["bat", "nb_recv", "last_rssi"],
}


def _sample_companion_metrics() -> dict[str, float]:
    return {
        "battery_mv": 3850.0,
        "uptime_secs": 86400.0,
        "contacts": 5.0,
        "recv": 1234.0,
        "sent": 567.0,
        "errors": 0.0,
    }


def _sample_repeater_metrics() -> dict[str, float]:
    return {
        "bat": 3920.0,
        "uptime": 172800.0,
        "last_rssi": -85.0,
        "last_snr": 7.5,
        "noise_floor": -115.0,
        "tx_queue_len": 0.0,
        "nb_recv": 5678.0,
        "nb_sent": 2345.0,
        "airtime": 3600.0,
        "rx_airtime": 7200.0,
        "flood_dups": 12.0,
        "direct_dups": 5.0,
        "sent_flood": 100.0,
        "recv_flood": 200.0,
        "sent_direct": 50.0,
        "recv_direct": 75.0,
    }


def _populate_db_with_history(
    db_path,
    sample_companion_metrics: dict[str, float],
    sample_repeater_metrics: dict[str, float],
    days: int = 30,
    companion_step_seconds: int = 3600,
    repeater_step_seconds: int = 900,
) -> None:
    from meshmon.db import insert_metrics

    now = int(time.time())
    day_seconds = 86400
    companion_steps = day_seconds // companion_step_seconds
    repeater_steps = day_seconds // repeater_step_seconds

    # Insert companion data (default: 30 days, hourly)
    for day in range(days):
        for step in range(companion_steps):
            ts = now - (day * day_seconds) - (step * companion_step_seconds)
            metrics = sample_companion_metrics.copy()
            # Vary values to create realistic patterns
            metrics["battery_mv"] = 3700 + (step * 5) + (day % 7) * 10
            metrics["recv"] = 100 + day * 10 + step
            metrics["sent"] = 50 + day * 5 + step
            metrics["uptime_secs"] = (days - day) * day_seconds + step * companion_step_seconds
            insert_metrics(ts, "companion", metrics, db_path=db_path)

    # Insert repeater data (default: 30 days, every 15 minutes)
    for day in range(days):
        for interval in range(repeater_steps):
            ts = now - (day * day_seconds) - (interval * repeater_step_seconds)
            metrics = sample_repeater_metrics.copy()
            # Vary values to create realistic patterns
            metrics["bat"] = 3800 + (interval % 24) * 5 + (day % 7) * 10
            metrics["nb_recv"] = 1000 + day * 100 + interval
            metrics["nb_sent"] = 500 + day * 50 + interval
            metrics["uptime"] = (days - day) * day_seconds + interval * repeater_step_seconds
            metrics["last_rssi"] = -90 + (interval % 20)
            metrics["last_snr"] = 5 + (interval % 10) * 0.5
            insert_metrics(ts, "repeater", metrics, db_path=db_path)


@pytest.fixture
def reports_env(reports_db_cache, tmp_out_dir, monkeypatch):
    """Integration env wired to the shared reports DB and per-test output."""
    monkeypatch.setenv("STATE_DIR", str(reports_db_cache["state_dir"]))
    monkeypatch.setenv("OUT_DIR", str(tmp_out_dir))
    for key, value in _INTEGRATION_ENV.items():
        monkeypatch.setenv(key, value)

    import meshmon.env
    meshmon.env._config = None

    return {
        "state_dir": reports_db_cache["state_dir"],
        "out_dir": tmp_out_dir,
    }


@pytest.fixture(scope="session")
def rendered_chart_metrics():
    """Minimal chart set to keep integration rendering tests fast."""
    return RENDERED_CHART_METRICS


@pytest.fixture
def populated_db_with_history(reports_db_cache, reports_env):
    """Shared database populated with a fixed history window for integration tests."""
    return reports_db_cache["db_path"]


@pytest.fixture(scope="module")
def reports_db_cache(tmp_path_factory):
    """Create and populate a shared reports DB once per module."""
    from meshmon.db import init_db

    root_dir = tmp_path_factory.mktemp("reports-db")
    state_dir = root_dir / "state"
    state_dir.mkdir()

    db_path = state_dir / "metrics.db"
    init_db(db_path=db_path)
    _populate_db_with_history(
        db_path,
        _sample_companion_metrics(),
        _sample_repeater_metrics(),
        days=14,
        companion_step_seconds=7200,
        repeater_step_seconds=7200,
    )

    return {
        "state_dir": state_dir,
        "db_path": db_path,
    }


@pytest.fixture(scope="module")
def rendered_charts_cache(tmp_path_factory):
    """Cache rendered charts once per module to speed up integration tests."""
    from meshmon.charts import render_all_charts, save_chart_stats
    from meshmon.db import init_db

    root_dir = tmp_path_factory.mktemp("rendered-charts")
    state_dir = root_dir / "state"
    out_dir = root_dir / "out"
    state_dir.mkdir()
    out_dir.mkdir()

    env_keys = ["STATE_DIR", "OUT_DIR", *_INTEGRATION_ENV.keys()]
    previous_env = {key: os.environ.get(key) for key in env_keys}

    os.environ["STATE_DIR"] = str(state_dir)
    os.environ["OUT_DIR"] = str(out_dir)
    for key, value in _INTEGRATION_ENV.items():
        os.environ[key] = value

    import meshmon.env
    meshmon.env._config = None

    db_path = state_dir / "metrics.db"
    init_db(db_path=db_path)
    _populate_db_with_history(
        db_path,
        _sample_companion_metrics(),
        _sample_repeater_metrics(),
        days=7,
        companion_step_seconds=3600,
        repeater_step_seconds=3600,
    )

    for role in ["companion", "repeater"]:
        charts, stats = render_all_charts(role, metrics=RENDERED_CHART_METRICS[role])
        save_chart_stats(role, stats)

    yield {
        "state_dir": state_dir,
        "out_dir": out_dir,
        "db_path": db_path,
    }

    for key, value in previous_env.items():
        if value is None:
            os.environ.pop(key, None)
        else:
            os.environ[key] = value

    meshmon.env._config = None


@pytest.fixture
def rendered_charts(rendered_charts_cache, monkeypatch):
    """Expose cached charts with env wired for per-test access."""
    state_dir = rendered_charts_cache["state_dir"]
    out_dir = rendered_charts_cache["out_dir"]

    monkeypatch.setenv("STATE_DIR", str(state_dir))
    monkeypatch.setenv("OUT_DIR", str(out_dir))
    for key, value in _INTEGRATION_ENV.items():
        monkeypatch.setenv(key, value)

    import meshmon.env
    meshmon.env._config = None

    return rendered_charts_cache


@pytest.fixture
def mock_meshcore_successful_collection(sample_companion_metrics):
    """Mock MeshCore client that returns successful responses."""
    mc = MagicMock()
    mc.commands = MagicMock()
    mc.contacts = {}
    mc.disconnect = AsyncMock()

    # Helper to create successful event
    def make_event(event_type: str, payload: dict):
        event = MagicMock()
        event.type = MagicMock()
        event.type.name = event_type
        event.payload = payload
        return event

    # Mock all commands to return success - use AsyncMock directly without invoking
    mc.commands.send_appstart = AsyncMock(return_value=make_event("SELF_INFO", {}))
    mc.commands.send_device_query = AsyncMock(return_value=make_event("DEVICE_INFO", {}))
    mc.commands.get_time = AsyncMock(return_value=make_event("TIME", {"time": 1234567890}))
    mc.commands.get_self_telemetry = AsyncMock(return_value=make_event("TELEMETRY", {}))
    mc.commands.get_custom_vars = AsyncMock(return_value=make_event("CUSTOM_VARS", {}))
    mc.commands.get_contacts = AsyncMock(
        return_value=make_event("CONTACTS", {"contact1": {}, "contact2": {}})
    )
    mc.commands.get_stats_core = AsyncMock(
        return_value=make_event(
            "STATS_CORE",
            {"battery_mv": sample_companion_metrics["battery_mv"], "uptime_secs": 86400},
        )
    )
    mc.commands.get_stats_radio = AsyncMock(
        return_value=make_event("STATS_RADIO", {"noise_floor": -115, "last_rssi": -85})
    )
    mc.commands.get_stats_packets = AsyncMock(
        return_value=make_event(
            "STATS_PACKETS",
            {"recv": sample_companion_metrics["recv"], "sent": sample_companion_metrics["sent"]},
        )
    )

    return mc


@pytest.fixture
def full_integration_env(configured_env, monkeypatch):
    """Full integration environment with per-test directories."""
    for key, value in _INTEGRATION_ENV.items():
        monkeypatch.setenv(key, value)

    import meshmon.env
    meshmon.env._config = None

    return {
        "state_dir": configured_env["state_dir"],
        "out_dir": configured_env["out_dir"],
    }
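The module-scoped `rendered_charts_cache` fixture above cannot use `monkeypatch` (which is function-scoped), so it snapshots `os.environ` by hand before mutating it and restores it in the teardown half of the generator. The save/restore pattern in isolation — a generic sketch with made-up key names, not project code:

```python
import os


def set_env_restorable(overrides: dict) -> dict:
    """Apply overrides to os.environ and return the previous values (None = key was unset)."""
    previous = {key: os.environ.get(key) for key in overrides}
    os.environ.update(overrides)
    return previous


def restore_env(previous: dict) -> None:
    """Undo set_env_restorable: put back old values, removing keys that were unset before."""
    for key, value in previous.items():
        if value is None:
            os.environ.pop(key, None)
        else:
            os.environ[key] = value


# Round-trip: the environment is unchanged afterwards.
prev = set_env_restorable({"DEMO_STATE_DIR": "/tmp/state"})
restore_env(prev)
```

The fixture uses exactly this shape, with the snapshot taken before setup and the restore running after `yield`.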
184	tests/integration/test_collection_pipeline.py	Normal file
@@ -0,0 +1,184 @@
"""Integration tests for data collection pipeline."""

from contextlib import asynccontextmanager
from unittest.mock import patch

import pytest

from tests.scripts.conftest import load_script_module

BASE_TS = 1704067200


@pytest.mark.integration
class TestCompanionCollectionPipeline:
    """Test companion collection end-to-end."""

    @pytest.mark.asyncio
    async def test_successful_collection_stores_metrics(
        self,
        mock_meshcore_successful_collection,
        full_integration_env,
        monkeypatch,
    ):
        """Successful collection should store all metrics in database."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)

        # Mock connect_with_lock to return our mock client
        @asynccontextmanager
        async def mock_connect_with_lock(*args, **kwargs):
            yield mock_meshcore_successful_collection

        with patch(
            "meshmon.meshcore_client.connect_with_lock",
            mock_connect_with_lock,
        ):
            # Initialize database
            from meshmon.db import get_latest_metrics, init_db

            init_db()

            # Import and run collection (inline to avoid import issues)
            # Note: We import the function directly rather than the script
            from meshmon.db import insert_metrics

            # Simulate collection logic
            ts = BASE_TS
            metrics = {}

            async with mock_connect_with_lock() as mc:
                assert mc is not None

                # Get stats_core
                event = await mc.commands.get_stats_core()
                if event and hasattr(event, "payload") and isinstance(event.payload, dict):
                    for key, value in event.payload.items():
                        if isinstance(value, (int, float)):
                            metrics[key] = float(value)

                # Get stats_packets
                event = await mc.commands.get_stats_packets()
                if event and hasattr(event, "payload") and isinstance(event.payload, dict):
                    for key, value in event.payload.items():
                        if isinstance(value, (int, float)):
                            metrics[key] = float(value)

                # Get contacts
                event = await mc.commands.get_contacts()
                if event and hasattr(event, "payload"):
                    contacts_count = len(event.payload) if event.payload else 0
                    metrics["contacts"] = float(contacts_count)

            # Insert metrics
            inserted = insert_metrics(ts=ts, role="companion", metrics=metrics)
            assert inserted > 0

            # Verify data was stored
            latest = get_latest_metrics("companion")
            assert latest is not None
            assert "battery_mv" in latest
            assert "recv" in latest
            assert "sent" in latest

    @pytest.mark.asyncio
    async def test_collection_fails_gracefully_on_connection_error(
        self, full_integration_env, monkeypatch
    ):
        """Collection should fail gracefully when connection fails."""
        monkeypatch.setattr("meshmon.meshcore_client.MESHCORE_AVAILABLE", True)

        @asynccontextmanager
        async def mock_connect_with_lock_failing(*args, **kwargs):
            yield None

        with patch(
            "meshmon.meshcore_client.connect_with_lock",
            mock_connect_with_lock_failing,
        ):
            from meshmon.db import get_latest_metrics, init_db

            init_db()

            # Simulate collection with failed connection
            async with mock_connect_with_lock_failing() as mc:
                assert mc is None

            # Database should be empty
            latest = get_latest_metrics("companion")
            assert latest is None


@pytest.mark.integration
class TestCollectionWithCircuitBreaker:
    """Test collection with circuit breaker integration."""

    @pytest.mark.asyncio
    async def test_circuit_breaker_prevents_collection_when_open(
        self, full_integration_env, monkeypatch
    ):
        """Collection should be skipped when circuit breaker is open."""
        from meshmon.retry import CircuitBreaker

        # Create an open circuit breaker
        state_dir = full_integration_env["state_dir"]
        cb = CircuitBreaker(state_dir / "repeater_circuit.json")
        cb.record_failure(max_failures=1, cooldown_s=3600)

        # Verify circuit is open
        assert cb.is_open() is True

        module = load_script_module("collect_repeater.py")
        connect_called = False

        @asynccontextmanager
        async def mock_connect_with_lock(*args, **kwargs):
            nonlocal connect_called
            connect_called = True
            yield None

        monkeypatch.setattr(module, "connect_with_lock", mock_connect_with_lock)

        result = await module.collect_repeater()

        assert result == 0
        assert connect_called is False

    @pytest.mark.asyncio
    async def test_circuit_breaker_records_failure(self, full_integration_env, monkeypatch):
        """Circuit breaker should record failures."""
        from meshmon.retry import CircuitBreaker

        state_dir = full_integration_env["state_dir"]
        cb = CircuitBreaker(state_dir / "test_circuit.json")

        assert cb.consecutive_failures == 0

        # Record failures (requires max_failures and cooldown_s args)
        cb.record_failure(max_failures=5, cooldown_s=60)
        cb.record_failure(max_failures=5, cooldown_s=60)
        cb.record_failure(max_failures=5, cooldown_s=60)

        assert cb.consecutive_failures == 3

        # Success resets counter
        cb.record_success()
        assert cb.consecutive_failures == 0

    @pytest.mark.asyncio
    async def test_circuit_breaker_state_persists(self, full_integration_env):
        """Circuit breaker state should persist to disk."""
        from meshmon.retry import CircuitBreaker

        state_dir = full_integration_env["state_dir"]
        state_file = state_dir / "persist_test_circuit.json"

        # Create and configure circuit breaker
        cb1 = CircuitBreaker(state_file)
        cb1.record_failure(max_failures=1, cooldown_s=1800)

        # Load in new instance
        cb2 = CircuitBreaker(state_file)

        assert cb2.consecutive_failures == 1
        assert cb2.cooldown_until == cb1.cooldown_until
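The circuit-breaker tests above exercise a small contract: `record_failure(max_failures, cooldown_s)` opens the circuit once the consecutive-failure count reaches the threshold, `record_success()` resets it, and state survives re-instantiation from the same JSON file. A minimal sketch of that contract — field and file-format details are assumptions; `meshmon.retry.CircuitBreaker` is the authoritative implementation:

```python
import json
import time
from pathlib import Path


class SketchCircuitBreaker:
    """File-backed circuit breaker: opens after N consecutive failures for cooldown_s seconds."""

    def __init__(self, state_file: Path) -> None:
        self.state_file = state_file
        self.consecutive_failures = 0
        self.cooldown_until = 0.0
        if state_file.exists():  # reload persisted state, as the persistence test expects
            state = json.loads(state_file.read_text())
            self.consecutive_failures = state["consecutive_failures"]
            self.cooldown_until = state["cooldown_until"]

    def _save(self) -> None:
        self.state_file.write_text(json.dumps({
            "consecutive_failures": self.consecutive_failures,
            "cooldown_until": self.cooldown_until,
        }))

    def record_failure(self, max_failures: int, cooldown_s: int) -> None:
        self.consecutive_failures += 1
        if self.consecutive_failures >= max_failures:
            self.cooldown_until = time.time() + cooldown_s
        self._save()

    def record_success(self) -> None:
        self.consecutive_failures = 0
        self.cooldown_until = 0.0
        self._save()

    def is_open(self) -> bool:
        return time.time() < self.cooldown_until
```

The collection scripts then check `is_open()` before connecting and skip the attempt entirely while the cooldown is active, which is what `test_circuit_breaker_prevents_collection_when_open` verifies.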
231	tests/integration/test_rendering_pipeline.py	Normal file
@@ -0,0 +1,231 @@
"""Integration tests for chart and HTML rendering pipeline."""

import contextlib

import pytest


@pytest.mark.integration
class TestChartRenderingPipeline:
    """Test chart rendering end-to-end."""

    def test_renders_all_chart_periods(self, rendered_charts):
        """Should render charts for all periods (day/week/month/year)."""
        out_dir = rendered_charts["out_dir"]

        for role in ["companion", "repeater"]:
            assets_dir = out_dir / "assets" / role
            assert assets_dir.exists()

            for period in ["day", "week", "month", "year"]:
                period_svgs = list(assets_dir.glob(f"*_{period}_*.svg"))
                assert period_svgs, f"No {period} charts found for {role}"

    def test_chart_files_created(self, rendered_charts):
        """Should create SVG chart files in output directory."""
        out_dir = rendered_charts["out_dir"]

        # Check SVG files exist
        assets_dir = out_dir / "assets" / "repeater"
        assert assets_dir.exists()

        # Should have SVG files
        svg_files = list(assets_dir.glob("*.svg"))
        assert len(svg_files) > 0

        # Check stats file exists
        stats_file = assets_dir / "chart_stats.json"
        assert stats_file.exists()

    def test_chart_statistics_calculated(self, rendered_charts):
        """Should calculate correct statistics for charts."""
        from meshmon.charts import load_chart_stats

        # Load and verify stats
        loaded_stats = load_chart_stats("repeater")

        assert loaded_stats is not None

        # Check that stats have expected structure
        # Stats are nested: {metric_name: {period: {min, max, avg, current}}}
        for _metric_name, metric_stats in loaded_stats.items():
            if metric_stats:  # Skip empty stats
                # Each metric has period keys like 'day', 'week', 'month', 'year'
                for _period, period_stats in metric_stats.items():
                    if period_stats:
                        assert "min" in period_stats
                        assert "max" in period_stats
                        assert "avg" in period_stats
                        assert "current" in period_stats


@pytest.mark.integration
class TestHtmlRenderingPipeline:
    """Test HTML site rendering end-to-end."""

    def test_renders_site_pages(self, rendered_charts):
        """Should render all HTML site pages."""
        from meshmon.db import get_latest_metrics
        from meshmon.html import write_site

        out_dir = rendered_charts["out_dir"]

        # Get latest metrics for write_site
        companion_row = get_latest_metrics("companion")
        repeater_row = get_latest_metrics("repeater")

        # Render site
        write_site(companion_row, repeater_row)

        # Check main pages exist
        assert (out_dir / "day.html").exists()
        assert (out_dir / "week.html").exists()
        assert (out_dir / "month.html").exists()
        assert (out_dir / "year.html").exists()

        # Check companion pages exist
        assert (out_dir / "companion" / "day.html").exists()
        assert (out_dir / "companion" / "week.html").exists()
        assert (out_dir / "companion" / "month.html").exists()
        assert (out_dir / "companion" / "year.html").exists()

    def test_copies_static_assets(self, full_integration_env):
        """Should copy static assets (CSS, JS)."""
        from meshmon.html import copy_static_assets

        out_dir = full_integration_env["out_dir"]

        copy_static_assets()

        # Check static files exist
        assert (out_dir / "styles.css").exists()
        assert (out_dir / "chart-tooltip.js").exists()

    def test_html_contains_chart_data(self, rendered_charts):
        """HTML should contain embedded chart SVGs."""
        from meshmon.db import get_latest_metrics
        from meshmon.html import write_site

        out_dir = rendered_charts["out_dir"]

        # Get latest metrics for write_site
        companion_row = get_latest_metrics("companion")
        repeater_row = get_latest_metrics("repeater")

        # Render site
        write_site(companion_row, repeater_row)

        # Check HTML contains SVG
        day_html = (out_dir / "day.html").read_text()

        # Should contain SVG elements
        assert "<svg" in day_html
        # Should contain chart data attributes
        assert "data-metric" in day_html
        assert "data-points" in day_html

    def test_html_has_correct_status_indicator(self, rendered_charts):
        """HTML should have correct status indicator based on data freshness."""
        from meshmon.db import get_latest_metrics
        from meshmon.html import write_site

        out_dir = rendered_charts["out_dir"]

        # Get latest metrics for write_site
        companion_row = get_latest_metrics("companion")
        repeater_row = get_latest_metrics("repeater")

        write_site(companion_row, repeater_row)

        # Check status indicator exists
        day_html = (out_dir / "day.html").read_text()

        assert "status-badge" in day_html
        assert any(label in day_html for label in ["Online", "Stale", "Offline"])


@pytest.mark.integration
class TestFullRenderingChain:
    """Test complete rendering chain: data -> charts -> HTML."""

    def test_full_chain_from_database_to_html(self, rendered_charts):
        """Complete chain: database metrics -> charts -> HTML site."""
        from meshmon.db import get_latest_metrics, get_metric_count
        from meshmon.html import copy_static_assets, write_site

        out_dir = rendered_charts["out_dir"]

        # 1. Verify database has data
        assert get_metric_count("repeater") > 0
        assert get_metric_count("companion") > 0

        # 2. Verify rendered charts exist for both roles
        for role in ["repeater", "companion"]:
            assets_dir = out_dir / "assets" / role
            svg_files = list(assets_dir.glob("*.svg"))
            assert svg_files, f"No charts found for {role}"

        # 3. Copy static assets
        copy_static_assets()

        # 4. Get latest metrics for write_site
        companion_row = get_latest_metrics("companion")
        repeater_row = get_latest_metrics("repeater")

        # 5. Render HTML site
        write_site(companion_row, repeater_row)

        # 6. Verify output structure
        assert (out_dir / "day.html").exists()
        assert (out_dir / "styles.css").exists()
        assert (out_dir / "chart-tooltip.js").exists()
        assert (out_dir / "assets" / "repeater").exists()
        assert (out_dir / "assets" / "companion").exists()

        # 7. Verify HTML is valid (basic check)
        html_content = (out_dir / "day.html").read_text()
        assert "<!DOCTYPE html>" in html_content or "<!doctype html>" in html_content.lower()
        assert "</html>" in html_content

    def test_empty_database_renders_gracefully(
        self,
        full_integration_env,
        rendered_chart_metrics,
    ):
        """Should handle empty database gracefully."""
        from meshmon.charts import render_all_charts, save_chart_stats
        from meshmon.db import get_latest_metrics, get_metric_count, init_db
        from meshmon.html import copy_static_assets, write_site

        # Initialize empty database
        init_db()

        # Verify no data
        assert get_metric_count("repeater") == 0
        assert get_metric_count("companion") == 0

        # Rendering with no data should not crash
        for role in ["repeater", "companion"]:
            charts, stats = render_all_charts(role, metrics=rendered_chart_metrics[role])
            save_chart_stats(role, stats)
            # Should have no charts (or empty charts)
            # The important thing is it doesn't crash

        copy_static_assets()

        # Get empty metrics
        companion_row = get_latest_metrics("companion")
        repeater_row = get_latest_metrics("repeater")

        # Site rendering might fail or show "no data" - verify it handles gracefully
        # Some implementations might raise an exception for empty data - acceptable
        with contextlib.suppress(Exception):
            write_site(companion_row, repeater_row)
381	tests/integration/test_reports_pipeline.py	Normal file
@@ -0,0 +1,381 @@
"""Integration tests for report generation pipeline."""

import calendar
import json
from datetime import datetime

import pytest

BASE_TS = 1704067200


@pytest.mark.integration
class TestReportGenerationPipeline:
    """Test report generation end-to-end."""

    def test_generates_monthly_reports(self, populated_db_with_history, reports_env):
        """Should generate monthly reports for available data."""
        from meshmon.html import render_report_page
        from meshmon.reports import aggregate_monthly, format_monthly_txt, get_available_periods

        # Get available periods
        periods = get_available_periods("repeater")
        assert periods

        # Get the current month (should have data)
        year, month = periods[-1]
        month_name = calendar.month_name[month]

        # Aggregate monthly data
        agg = aggregate_monthly("repeater", year, month)

        assert agg is not None
        assert agg.year == year
        assert agg.month == month
        assert agg.role == "repeater"
        assert agg.daily
        assert agg.summary["bat"].count > 0
        assert agg.summary["bat"].min_value is not None
        assert agg.summary["nb_recv"].total is not None
        assert agg.summary["nb_recv"].count > 0

        # Generate TXT report
        from meshmon.reports import LocationInfo

        location = LocationInfo(
            name="Test Location",
            lat=52.0,
            lon=4.0,
            elev=10.0,
        )
        txt_report = format_monthly_txt(agg, "Test Repeater", location)

        assert txt_report is not None
        assert len(txt_report) > 0
        assert f"MONTHLY MESHCORE REPORT for {month_name} {year}" in txt_report
        assert "NODE: Test Repeater" in txt_report
        assert "NAME: Test Location" in txt_report

        # Generate HTML report
        html_report = render_report_page(agg, "Test Repeater", "monthly")

        assert html_report is not None
        assert "<html" in html_report.lower()
        assert f"{month_name} {year}" in html_report
        assert "Test Repeater" in html_report

    def test_generates_yearly_reports(self, populated_db_with_history, reports_env):
        """Should generate yearly reports for available data."""
        from meshmon.html import render_report_page
        from meshmon.reports import aggregate_yearly, format_yearly_txt, get_available_periods

        # Get available periods
        periods = get_available_periods("repeater")
        assert len(periods) > 0

        # Get the current year
        year = periods[-1][0]

        # Aggregate yearly data
        agg = aggregate_yearly("repeater", year)

        assert agg is not None
        assert agg.year == year
        assert agg.role == "repeater"
        assert agg.monthly
        assert agg.summary["bat"].count > 0
        assert agg.summary["nb_recv"].total is not None

        # Generate TXT report
        from meshmon.reports import LocationInfo

        location = LocationInfo(
            name="Test Location",
            lat=52.0,
            lon=4.0,
            elev=10.0,
        )
        txt_report = format_yearly_txt(agg, "Test Repeater", location)

        assert txt_report is not None
        assert len(txt_report) > 0
        assert f"YEARLY MESHCORE REPORT for {year}" in txt_report
        assert "NODE: Test Repeater" in txt_report

        # Generate HTML report
        html_report = render_report_page(agg, "Test Repeater", "yearly")

        assert html_report is not None
        assert "<html" in html_report.lower()
        assert "Yearly report for Test Repeater" in html_report

    def test_generates_json_reports(self, populated_db_with_history, reports_env):
        """Should generate valid JSON reports."""
        from meshmon.reports import (
            aggregate_monthly,
            aggregate_yearly,
            get_available_periods,
            monthly_to_json,
            yearly_to_json,
        )

        periods = get_available_periods("repeater")
        year, month = periods[-1]

        # Monthly JSON
        monthly_agg = aggregate_monthly("repeater", year, month)
        monthly_json = monthly_to_json(monthly_agg)
||||
|
||||
assert monthly_json is not None
|
||||
assert monthly_json["report_type"] == "monthly"
|
||||
assert "year" in monthly_json
|
||||
assert "month" in monthly_json
|
||||
assert monthly_json["role"] == "repeater"
|
||||
assert monthly_json["days_with_data"] == len(monthly_agg.daily)
|
||||
assert "daily" in monthly_json
|
||||
assert "bat" in monthly_json["summary"]
|
||||
|
||||
# Verify it's valid JSON
|
||||
json_str = json.dumps(monthly_json)
|
||||
parsed = json.loads(json_str)
|
||||
assert parsed == monthly_json
|
||||
|
||||
# Yearly JSON
|
||||
yearly_agg = aggregate_yearly("repeater", year)
|
||||
yearly_json = yearly_to_json(yearly_agg)
|
||||
|
||||
assert yearly_json is not None
|
||||
assert yearly_json["report_type"] == "yearly"
|
||||
assert "year" in yearly_json
|
||||
assert yearly_json["role"] == "repeater"
|
||||
assert yearly_json["months_with_data"] == len(yearly_agg.monthly)
|
||||
assert "monthly" in yearly_json
|
||||
assert "bat" in yearly_json["summary"]
|
||||
|
||||
def test_report_files_created(self, populated_db_with_history, reports_env):
|
||||
"""Should create report files in correct directory structure."""
|
||||
from meshmon.html import render_report_page
|
||||
from meshmon.reports import (
|
||||
LocationInfo,
|
||||
aggregate_monthly,
|
||||
format_monthly_txt,
|
||||
get_available_periods,
|
||||
monthly_to_json,
|
||||
)
|
||||
|
||||
out_dir = reports_env["out_dir"]
|
||||
|
||||
periods = get_available_periods("repeater")
|
||||
year, month = periods[-1]
|
||||
month_name = calendar.month_name[month]
|
||||
|
||||
# Create output directory
|
||||
report_dir = out_dir / "reports" / "repeater" / str(year) / f"{month:02d}"
|
||||
report_dir.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
# Generate reports
|
||||
agg = aggregate_monthly("repeater", year, month)
|
||||
location = LocationInfo(name="Test", lat=0.0, lon=0.0, elev=0.0)
|
||||
|
||||
# Write files
|
||||
html = render_report_page(agg, "Test Repeater", "monthly")
|
||||
txt = format_monthly_txt(agg, "Test Repeater", location)
|
||||
json_data = monthly_to_json(agg)
|
||||
|
||||
(report_dir / "index.html").write_text(html, encoding="utf-8")
|
||||
(report_dir / "report.txt").write_text(txt, encoding="utf-8")
|
||||
(report_dir / "report.json").write_text(json.dumps(json_data), encoding="utf-8")
|
||||
|
||||
# Verify files exist
|
||||
assert (report_dir / "index.html").exists()
|
||||
assert (report_dir / "report.txt").exists()
|
||||
assert (report_dir / "report.json").exists()
|
||||
|
||||
# Verify content is not empty
|
||||
assert len((report_dir / "index.html").read_text()) > 0
|
||||
assert len((report_dir / "report.txt").read_text()) > 0
|
||||
assert len((report_dir / "report.json").read_text()) > 0
|
||||
assert f"{month_name} {year}" in (report_dir / "index.html").read_text()
|
||||
assert "NODE: Test Repeater" in (report_dir / "report.txt").read_text()
|
||||
|
||||
parsed_json = json.loads((report_dir / "report.json").read_text())
|
||||
assert parsed_json["report_type"] == "monthly"
|
||||
assert parsed_json["year"] == year
|
||||
assert parsed_json["month"] == month
|
||||
|
||||
|
||||
@pytest.mark.integration
class TestReportsIndex:
    """Test reports index page generation."""

    def test_generates_reports_index(self, populated_db_with_history, reports_env):
        """Should generate reports index with all available periods."""
        from meshmon.html import render_reports_index
        from meshmon.reports import get_available_periods

        out_dir = reports_env["out_dir"]

        # Build sections data (mimicking render_reports.py)
        sections = []
        latest_periods: dict[str, tuple[int, int]] = {}
        for role in ["repeater", "companion"]:
            periods = get_available_periods(role)

            if not periods:
                sections.append({"role": role, "years": []})
                continue
            latest_periods[role] = periods[-1]

            years_data = {}
            for year, month in periods:
                if year not in years_data:
                    years_data[year] = []
                years_data[year].append(
                    {
                        "month": month,
                        "name": calendar.month_name[month],
                    }
                )

            years = []
            for year in sorted(years_data.keys(), reverse=True):
                years.append(
                    {
                        "year": year,
                        "months": sorted(years_data[year], key=lambda m: m["month"]),
                    }
                )

            sections.append({"role": role, "years": years})

        # Render index
        html = render_reports_index(sections)

        assert html is not None
        assert "<html" in html.lower()
        assert "reports archive" in html.lower()

        for role, (year, month) in latest_periods.items():
            assert f"../reports/{role}/{year}/" in html
            assert f"../reports/{role}/{year}/{month:02d}/" in html

        # Write and verify file
        reports_dir = out_dir / "reports"
        reports_dir.mkdir(parents=True, exist_ok=True)
        (reports_dir / "index.html").write_text(html)

        assert (reports_dir / "index.html").exists()

@pytest.mark.integration
class TestCounterAggregation:
    """Test counter metrics aggregation (handles reboots)."""

    def test_counter_aggregation_handles_reboots(self, full_integration_env):
        """Counter aggregation should correctly handle device reboots."""
        from meshmon.db import init_db, insert_metrics
        from meshmon.reports import aggregate_daily

        init_db()

        # Insert data with a simulated reboot
        day_start = BASE_TS - (BASE_TS % 86400)

        # Before reboot: counter increases
        for i in range(10):
            ts = day_start + i * 900
            insert_metrics(
                ts, "repeater", {"nb_recv": float(100 + i * 10)}  # 100, 110, 120, ..., 190
            )

        # Reboot: counter resets
        insert_metrics(day_start + 10 * 900, "repeater", {"nb_recv": 0.0})

        # After reboot: counter increases again
        for i in range(5):
            ts = day_start + (11 + i) * 900
            insert_metrics(ts, "repeater", {"nb_recv": float(i * 20)})  # 0, 20, 40, 60, 80

        # Aggregate daily data
        dt = datetime.fromtimestamp(day_start)
        agg = aggregate_daily("repeater", dt.date())

        # Should have data for nb_recv
        # The counter total should account for the reboot
        assert agg is not None
        assert agg.snapshot_count == 16
        stats = agg.metrics["nb_recv"]
        assert stats.count == 16
        assert stats.reboot_count == 1
        assert stats.total == 170
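For reference, the reboot-aware totaling asserted above (total == 170 with one reboot) can be sketched as a small standalone helper; `counter_total` is a hypothetical function written for illustration, not meshmon's implementation:

```python
# Hedged sketch (not the library code): sum deltas between successive
# counter samples; a decrease is treated as a reboot, after which the
# new value is counted from zero.
def counter_total(samples):
    """samples: list of (ts, value) pairs; returns (total, reboot_count)."""
    total, reboots = 0.0, 0
    prev = None
    for _, value in samples:
        if prev is not None:
            if value >= prev:
                total += value - prev  # normal monotonic increase
            else:
                reboots += 1           # counter reset: device rebooted
                total += value         # count from zero after the reset
        prev = value
    return total, reboots

# The same sample sequence as the test above: 100..190, reset, 0..80.
values = [100, 110, 120, 130, 140, 150, 160, 170, 180, 190, 0, 0, 20, 40, 60, 80]
print(counter_total([(i, float(v)) for i, v in enumerate(values)]))  # → (170.0, 1)
```

Note the first sample contributes no delta, so a day with a single snapshot yields no total.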

    def test_gauge_aggregation_computes_stats(self, full_integration_env):
        """Gauge metrics should compute min/max/avg correctly."""
        from meshmon.db import init_db, insert_metrics
        from meshmon.reports import aggregate_daily

        init_db()

        day_start = BASE_TS - (BASE_TS % 86400)

        # Insert battery readings with known pattern
        values = [3.7, 3.8, 3.9, 4.0, 3.85]  # min=3.7, max=4.0, avg≈3.85
        for i, val in enumerate(values):
            ts = day_start + i * 3600
            insert_metrics(ts, "repeater", {"bat": val * 1000})  # Store in mV

        dt = datetime.fromtimestamp(day_start)
        agg = aggregate_daily("repeater", dt.date())

        assert agg is not None
        assert agg.snapshot_count == len(values)
        stats = agg.metrics["bat"]
        assert stats.count == len(values)
        assert stats.min_value == 3700.0
        assert stats.max_value == 4000.0
        assert stats.mean == pytest.approx(3850.0)
        assert stats.min_time == datetime.fromtimestamp(day_start)
        assert stats.max_time == datetime.fromtimestamp(day_start + 3 * 3600)
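The gauge statistics checked here (min and max with the timestamp at which each occurred, plus the arithmetic mean) reduce to a small pure function. The `gauge_stats` helper below is a hypothetical sketch for illustration, not meshmon's code:

```python
# Hedged sketch: gauge metrics track min/max with their timestamps and
# a plain arithmetic mean over all samples.
def gauge_stats(samples):
    """samples: list of (ts, value); returns (min, max, mean, min_ts, max_ts)."""
    values = [v for _, v in samples]
    min_ts, min_v = min(samples, key=lambda s: s[1])  # first occurrence wins on ties
    max_ts, max_v = max(samples, key=lambda s: s[1])
    return min_v, max_v, sum(values) / len(values), min_ts, max_ts

# Mirrors the battery readings above (stored in mV, hourly offsets 0..4):
print(gauge_stats([(0, 3700.0), (1, 3800.0), (2, 3900.0), (3, 4000.0), (4, 3850.0)]))
# → (3700.0, 4000.0, 3850.0, 0, 3)
```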


@pytest.mark.integration
class TestReportConsistency:
    """Test consistency across different report formats."""

    def test_txt_json_html_contain_same_data(
        self, populated_db_with_history, reports_env
    ):
        """TXT, JSON, and HTML reports should contain consistent data."""
        from meshmon.html import render_report_page
        from meshmon.reports import (
            LocationInfo,
            aggregate_monthly,
            format_monthly_txt,
            get_available_periods,
            monthly_to_json,
        )

        periods = get_available_periods("repeater")
        year, month = periods[-1]

        agg = aggregate_monthly("repeater", year, month)
        location = LocationInfo(name="Test", lat=52.0, lon=4.0, elev=10.0)

        txt = format_monthly_txt(agg, "Test Repeater", location)
        json_data = monthly_to_json(agg)
        html = render_report_page(agg, "Test Repeater", "monthly")

        # All should reference the same year/month
        month_name = calendar.month_name[month]
        assert str(year) in txt
        assert json_data["year"] == year
        assert json_data["month"] == month
        assert json_data["role"] == "repeater"
        assert json_data["report_type"] == "monthly"
        assert str(year) in html
        assert f"{month_name} {year}" in html

        # All should have the same number of days
        num_days = len(agg.daily)
        assert len(json_data["daily"]) == num_days
        assert json_data["days_with_data"] == num_days
tests/reports/__init__.py (new file, 1 line)
@@ -0,0 +1 @@
"""Tests for report generation."""

tests/reports/conftest.py (new file, 111 lines)
@@ -0,0 +1,111 @@
"""Fixtures for reports tests."""

from datetime import date, datetime, timedelta

import pytest


@pytest.fixture
def sample_daily_data():
    """Sample daily metrics data for report generation."""
    base_date = date(2024, 1, 15)
    return {
        "date": base_date,
        "bat": {
            "min": 3.5,
            "avg": 3.7,
            "max": 3.9,
            "count": 96,  # 15-min intervals for a day
        },
        "bat_pct": {
            "min": 50.0,
            "avg": 70.0,
            "max": 90.0,
            "count": 96,
        },
        "nb_recv": {
            "total": 12000,  # Counter total for the day
            "count": 96,
        },
        "nb_sent": {
            "total": 5000,
            "count": 96,
        },
    }


@pytest.fixture
def sample_monthly_data():
    """Sample monthly aggregated data."""
    return {
        "year": 2024,
        "month": 1,
        "bat": {
            "min": 3.3,
            "avg": 3.65,
            "max": 4.0,
            "count": 2976,  # ~31 days * 96 readings
        },
        "bat_pct": {
            "min": 40.0,
            "avg": 65.0,
            "max": 100.0,
            "count": 2976,
        },
        "nb_recv": {
            "total": 360000,
            "count": 2976,
        },
    }


@pytest.fixture
def sample_yearly_data():
    """Sample yearly aggregated data."""
    return {
        "year": 2024,
        "bat": {
            "min": 3.0,
            "avg": 3.6,
            "max": 4.2,
            "count": 35040,  # ~365 days * 96 readings
        },
        "nb_recv": {
            "total": 4320000,
            "count": 35040,
        },
    }


@pytest.fixture
def sample_counter_values():
    """Sample counter values with timestamps for reboot detection."""
    base_ts = datetime(2024, 1, 15, 0, 0, 0)
    return [
        (base_ts, 100),
        (base_ts + timedelta(minutes=15), 150),
        (base_ts + timedelta(minutes=30), 200),
        (base_ts + timedelta(minutes=45), 250),
        (base_ts + timedelta(hours=1), 300),
    ]


@pytest.fixture
def sample_counter_values_with_reboot():
    """Sample counter values with a device reboot."""
    base_ts = datetime(2024, 1, 15, 0, 0, 0)
    return [
        (base_ts, 100),
        (base_ts + timedelta(minutes=15), 150),
        (base_ts + timedelta(minutes=30), 200),
        (base_ts + timedelta(minutes=45), 50),  # Reboot! Counter reset
        (base_ts + timedelta(hours=1), 100),
    ]


@pytest.fixture
def reports_out_dir(configured_env):
    """Output directory for reports."""
    reports_dir = configured_env["out_dir"] / "reports"
    reports_dir.mkdir(parents=True, exist_ok=True)
    return reports_dir
tests/reports/test_aggregation.py (new file, 215 lines)
@@ -0,0 +1,215 @@
"""Tests for report data aggregation functions."""

from datetime import date, datetime

import pytest

from meshmon.db import insert_metrics
from meshmon.reports import (
    DailyAggregate,
    aggregate_daily,
    aggregate_monthly,
    aggregate_yearly,
    get_rows_for_date,
)

BASE_DATE = date(2024, 1, 15)
BASE_TS = int(datetime(2024, 1, 15, 0, 0, 0).timestamp())


class TestGetRowsForDate:
    """Tests for get_rows_for_date function."""

    def test_returns_list(self, initialized_db, configured_env):
        """Returns a list."""
        result = get_rows_for_date("repeater", BASE_DATE)
        assert isinstance(result, list)

    def test_filters_by_date(self, initialized_db, configured_env):
        """Only returns rows for the specified date."""
        # Insert data for different dates
        ts_jan14 = int(datetime(2024, 1, 14, 12, 0, 0).timestamp())
        ts_jan15 = int(datetime(2024, 1, 15, 12, 0, 0).timestamp())
        ts_jan16 = int(datetime(2024, 1, 16, 12, 0, 0).timestamp())

        insert_metrics(ts_jan14, "repeater", {"bat": 3800.0})
        insert_metrics(ts_jan15, "repeater", {"bat": 3850.0})
        insert_metrics(ts_jan16, "repeater", {"bat": 3900.0})

        result = get_rows_for_date("repeater", BASE_DATE)

        # Should have data for Jan 15 only
        assert len(result) == 1
        assert result[0]["ts"] == ts_jan15
        assert result[0]["bat"] == 3850.0

    def test_filters_by_role(self, initialized_db, configured_env):
        """Only returns rows for the specified role."""
        ts = int(datetime(2024, 1, 15, 12, 0, 0).timestamp())

        insert_metrics(ts, "repeater", {"bat": 3800.0})
        insert_metrics(ts, "companion", {"battery_mv": 3850.0})

        repeater_result = get_rows_for_date("repeater", BASE_DATE)
        companion_result = get_rows_for_date("companion", BASE_DATE)

        assert len(repeater_result) == 1
        assert "bat" in repeater_result[0]
        assert "battery_mv" not in repeater_result[0]
        assert len(companion_result) == 1
        assert "battery_mv" in companion_result[0]
        assert "bat" not in companion_result[0]

    def test_returns_empty_for_no_data(self, initialized_db, configured_env):
        """Returns empty list when no data for date."""
        result = get_rows_for_date("repeater", BASE_DATE)
        assert result == []


class TestAggregateDaily:
    """Tests for aggregate_daily function."""

    def test_returns_daily_aggregate(self, initialized_db, configured_env):
        """Returns a DailyAggregate."""
        result = aggregate_daily("repeater", BASE_DATE)
        assert isinstance(result, DailyAggregate)

    def test_calculates_gauge_stats(self, initialized_db, configured_env):
        """Calculates stats for gauge metrics."""
        # Insert several values
        for i, value in enumerate([3700.0, 3800.0, 3900.0, 4000.0]):
            insert_metrics(BASE_TS + i * 3600, "repeater", {"bat": value})

        result = aggregate_daily("repeater", BASE_DATE)

        assert "bat" in result.metrics
        bat_stats = result.metrics["bat"]
        assert bat_stats.count == 4
        assert bat_stats.min_value == 3700.0
        assert bat_stats.max_value == 4000.0
        assert bat_stats.mean == pytest.approx(3850.0)
        assert bat_stats.min_time == datetime.fromtimestamp(BASE_TS)
        assert bat_stats.max_time == datetime.fromtimestamp(BASE_TS + 3 * 3600)

    def test_calculates_counter_total(self, initialized_db, configured_env):
        """Calculates total for counter metrics."""
        # Insert increasing counter values
        for i in range(5):
            insert_metrics(BASE_TS + i * 900, "repeater", {"nb_recv": float(i * 100)})

        result = aggregate_daily("repeater", BASE_DATE)

        assert "nb_recv" in result.metrics
        counter_stats = result.metrics["nb_recv"]
        assert counter_stats.count == 5
        assert counter_stats.reboot_count == 0
        assert counter_stats.total == 400

    def test_returns_empty_for_no_data(self, initialized_db, configured_env):
        """Returns aggregate with empty metrics when no data."""
        result = aggregate_daily("repeater", BASE_DATE)

        assert isinstance(result, DailyAggregate)
        assert result.snapshot_count == 0
        assert result.metrics == {}


class TestAggregateMonthly:
    """Tests for aggregate_monthly function."""

    def test_returns_monthly_aggregate(self, initialized_db, configured_env):
        """Returns a MonthlyAggregate."""
        from meshmon.reports import MonthlyAggregate

        result = aggregate_monthly("repeater", 2024, 1)
        assert isinstance(result, MonthlyAggregate)

    def test_aggregates_all_days(self, initialized_db, configured_env):
        """Aggregates data from all days in month."""
        # Insert data for multiple days
        for day in [1, 5, 15, 20, 31]:
            ts = int(datetime(2024, 1, day, 12, 0, 0).timestamp())
            insert_metrics(ts, "repeater", {"bat": 3800.0 + day * 10})

        result = aggregate_monthly("repeater", 2024, 1)

        # Should have daily data
        assert result.year == 2024
        assert result.month == 1
        assert len(result.daily) == 5
        assert all(d.snapshot_count == 1 for d in result.daily)
        summary = result.summary["bat"]
        assert summary.count == 5
        assert summary.min_value == 3810.0
        assert summary.max_value == 4110.0
        assert summary.mean == pytest.approx(3944.0)
        assert summary.min_time.day == 1
        assert summary.max_time.day == 31

    def test_handles_partial_month(self, initialized_db, configured_env):
        """Handles months with partial data."""
        # Insert data for only a few days
        for day in [10, 11, 12]:
            ts = int(datetime(2024, 1, day, 12, 0, 0).timestamp())
            insert_metrics(ts, "repeater", {"bat": 3800.0})

        result = aggregate_monthly("repeater", 2024, 1)

        assert result.year == 2024
        assert result.month == 1
        assert len(result.daily) == 3
        summary = result.summary["bat"]
        assert summary.count == 3
        assert summary.mean == pytest.approx(3800.0)


class TestAggregateYearly:
    """Tests for aggregate_yearly function."""

    def test_returns_yearly_aggregate(self, initialized_db, configured_env):
        """Returns a YearlyAggregate."""
        from meshmon.reports import YearlyAggregate

        result = aggregate_yearly("repeater", 2024)
        assert isinstance(result, YearlyAggregate)

    def test_aggregates_all_months(self, initialized_db, configured_env):
        """Aggregates data from all months in year."""
        # Insert data for multiple months
        for month in [1, 3, 6, 12]:
            ts = int(datetime(2024, month, 15, 12, 0, 0).timestamp())
            insert_metrics(ts, "repeater", {"bat": 3800.0 + month * 10})

        result = aggregate_yearly("repeater", 2024)

        assert result.year == 2024
        # Should have monthly aggregates
        assert len(result.monthly) == 4
        summary = result.summary["bat"]
        assert summary.count == 4
        assert summary.min_value == 3810.0
        assert summary.max_value == 3920.0
        assert summary.mean == pytest.approx(3855.0)
        assert summary.min_time.month == 1
        assert summary.max_time.month == 12

    def test_returns_empty_for_no_data(self, initialized_db, configured_env):
        """Returns aggregate with empty monthly when no data."""
        result = aggregate_yearly("repeater", 2024)

        assert result.year == 2024
        # Empty year may have no monthly data
        assert result.monthly == []

    def test_handles_leap_year(self, initialized_db, configured_env):
        """Correctly handles leap years."""
        # Insert data for Feb 29 (2024 is a leap year)
        ts = int(datetime(2024, 2, 29, 12, 0, 0).timestamp())
        insert_metrics(ts, "repeater", {"bat": 3800.0})

        result = aggregate_yearly("repeater", 2024)

        assert result.year == 2024
        months = [monthly.month for monthly in result.monthly]
        assert 2 in months
        assert result.summary["bat"].count == 1
tests/reports/test_aggregation_helpers.py (new file, 424 lines)
@@ -0,0 +1,424 @@
"""Tests for report aggregation helper functions."""

from datetime import date, datetime

import pytest

from meshmon.reports import (
    DailyAggregate,
    MetricStats,
    MonthlyAggregate,
    _aggregate_daily_counter_to_summary,
    _aggregate_daily_gauge_to_summary,
    _aggregate_monthly_counter_to_summary,
    _aggregate_monthly_gauge_to_summary,
    _compute_counter_stats,
    _compute_gauge_stats,
)


class TestComputeGaugeStats:
    """Tests for _compute_gauge_stats function."""

    def test_returns_metric_stats(self):
        """Returns a MetricStats dataclass."""
        values = [
            (datetime(2024, 1, 1, 0, 0), 3.8),
            (datetime(2024, 1, 1, 1, 0), 3.9),
            (datetime(2024, 1, 1, 2, 0), 4.0),
        ]
        result = _compute_gauge_stats(values)
        assert isinstance(result, MetricStats)

    def test_computes_min_max_mean(self):
        """Computes correct min, max, and mean."""
        values = [
            (datetime(2024, 1, 1, 0, 0), 3.8),
            (datetime(2024, 1, 1, 1, 0), 3.9),
            (datetime(2024, 1, 1, 2, 0), 4.0),
        ]
        result = _compute_gauge_stats(values)
        assert result.min_value == 3.8
        assert result.max_value == 4.0
        assert result.mean == pytest.approx(3.9)
        assert result.count == 3

    def test_handles_single_value(self):
        """Handles single value correctly."""
        values = [(datetime(2024, 1, 1, 0, 0), 3.85)]
        result = _compute_gauge_stats(values)
        assert result.min_value == 3.85
        assert result.max_value == 3.85
        assert result.mean == 3.85
        assert result.count == 1
        assert result.min_time == datetime(2024, 1, 1, 0, 0)
        assert result.max_time == datetime(2024, 1, 1, 0, 0)

    def test_handles_empty_list(self):
        """Handles empty list gracefully."""
        result = _compute_gauge_stats([])
        assert result.min_value is None
        assert result.max_value is None
        assert result.mean is None
        assert result.count == 0

    def test_tracks_count(self):
        """Tracks the number of values."""
        values = [
            (datetime(2024, 1, 1, i, 0), 3.8 + i * 0.01)
            for i in range(10)
        ]
        result = _compute_gauge_stats(values)
        assert result.count == 10

    def test_tracks_min_time(self):
        """Tracks timestamp of minimum value."""
        values = [
            (datetime(2024, 1, 1, 0, 0), 3.9),
            (datetime(2024, 1, 1, 1, 0), 3.7),  # Min
            (datetime(2024, 1, 1, 2, 0), 3.8),
        ]
        result = _compute_gauge_stats(values)
        assert result.min_time == datetime(2024, 1, 1, 1, 0)

    def test_tracks_max_time(self):
        """Tracks timestamp of maximum value."""
        values = [
            (datetime(2024, 1, 1, 0, 0), 3.9),
            (datetime(2024, 1, 1, 1, 0), 4.1),  # Max
            (datetime(2024, 1, 1, 2, 0), 3.8),
        ]
        result = _compute_gauge_stats(values)
        assert result.max_time == datetime(2024, 1, 1, 1, 0)


class TestComputeCounterStats:
    """Tests for _compute_counter_stats function."""

    def test_returns_metric_stats(self):
        """Returns a MetricStats dataclass."""
        values = [
            (datetime(2024, 1, 1, 0, 0), 100),
            (datetime(2024, 1, 1, 1, 0), 150),
            (datetime(2024, 1, 1, 2, 0), 200),
        ]
        result = _compute_counter_stats(values)
        assert isinstance(result, MetricStats)

    def test_computes_total_delta(self):
        """Computes total delta from counter values."""
        values = [
            (datetime(2024, 1, 1, 0, 0), 100),
            (datetime(2024, 1, 1, 1, 0), 150),  # +50
            (datetime(2024, 1, 1, 2, 0), 200),  # +50
        ]
        result = _compute_counter_stats(values)
        # Total should be 100 (50 + 50)
        assert result.total == 100
        assert result.count == 3
        assert result.reboot_count == 0

    def test_handles_counter_reboot(self):
        """Handles counter reboot (value decrease)."""
        values = [
            (datetime(2024, 1, 1, 0, 0), 100),
            (datetime(2024, 1, 1, 1, 0), 150),  # +50
            (datetime(2024, 1, 1, 2, 0), 20),  # Reboot - counts from 0
            (datetime(2024, 1, 1, 3, 0), 50),  # +30
        ]
        result = _compute_counter_stats(values)
        # Total: 50 + 20 + 30 = 100
        assert result.total == 100
        assert result.reboot_count == 1
        assert result.count == 4

    def test_tracks_reboot_count(self):
        """Tracks number of reboots."""
        values = [
            (datetime(2024, 1, 1, 0, 0), 100),
            (datetime(2024, 1, 1, 1, 0), 150),
            (datetime(2024, 1, 1, 2, 0), 20),  # Reboot 1
            (datetime(2024, 1, 1, 3, 0), 50),
            (datetime(2024, 1, 1, 4, 0), 10),  # Reboot 2
        ]
        result = _compute_counter_stats(values)
        assert result.reboot_count == 2
        assert result.total == 110
        assert result.count == 5

    def test_handles_empty_list(self):
        """Handles empty list gracefully."""
        result = _compute_counter_stats([])
        assert result.total is None
        assert result.count == 0
        assert result.reboot_count == 0

    def test_handles_single_value(self):
        """Handles single value (no delta possible)."""
        values = [(datetime(2024, 1, 1, 0, 0), 100)]
        result = _compute_counter_stats(values)
        # Single value means no delta can be computed
        assert result.total is None
        assert result.count == 1
        assert result.reboot_count == 0


class TestAggregateDailyGaugeToSummary:
    """Tests for _aggregate_daily_gauge_to_summary function."""

    @pytest.fixture
    def daily_gauge_data(self):
        """Sample daily gauge aggregates."""
        return [
            DailyAggregate(
                date=date(2024, 1, 1),
                metrics={
                    "battery": MetricStats(
                        min_value=3.7, min_time=datetime(2024, 1, 1, 3, 0),
                        max_value=3.9, max_time=datetime(2024, 1, 1, 15, 0),
                        mean=3.8, count=96
                    )
                }
            ),
            DailyAggregate(
                date=date(2024, 1, 2),
                metrics={
                    "battery": MetricStats(
                        min_value=3.6, min_time=datetime(2024, 1, 2, 4, 0),
                        max_value=4.0, max_time=datetime(2024, 1, 2, 12, 0),
                        mean=3.85, count=96
                    )
                }
            ),
            DailyAggregate(
                date=date(2024, 1, 3),
                metrics={
                    "battery": MetricStats(
                        min_value=3.8, min_time=datetime(2024, 1, 3, 2, 0),
                        max_value=4.1, max_time=datetime(2024, 1, 3, 18, 0),
                        mean=3.95, count=96
                    )
                }
            ),
        ]

    def test_returns_metric_stats(self, daily_gauge_data):
        """Returns a MetricStats object."""
        result = _aggregate_daily_gauge_to_summary(daily_gauge_data, "battery")
        assert isinstance(result, MetricStats)

    def test_finds_overall_min(self, daily_gauge_data):
        """Finds minimum across all days."""
        result = _aggregate_daily_gauge_to_summary(daily_gauge_data, "battery")
        assert result.min_value == 3.6
        assert result.min_time == datetime(2024, 1, 2, 4, 0)

    def test_finds_overall_max(self, daily_gauge_data):
        """Finds maximum across all days."""
        result = _aggregate_daily_gauge_to_summary(daily_gauge_data, "battery")
        assert result.max_value == 4.1
        assert result.max_time == datetime(2024, 1, 3, 18, 0)

    def test_computes_weighted_mean(self, daily_gauge_data):
        """Computes weighted mean based on count."""
        result = _aggregate_daily_gauge_to_summary(daily_gauge_data, "battery")
        # All have same count, so simple average: (3.8 + 3.85 + 3.95) / 3 = 3.8667
        assert result.mean == pytest.approx(3.8667, rel=0.01)
        assert result.count == 288
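The count-weighted pooling of per-day means asserted above can be sketched in a few lines; `weighted_mean` is a hypothetical helper written for illustration, not the meshmon implementation:

```python
# Hedged sketch: pool per-day means into one summary mean, weighting each
# day's mean by its sample count so partial days don't skew the result.
def weighted_mean(day_stats):
    """day_stats: list of (mean, count) pairs; returns (pooled_mean, total_count)."""
    total_count = sum(c for _, c in day_stats)
    if total_count == 0:
        return None, 0
    pooled = sum(m * c for m, c in day_stats) / total_count
    return pooled, total_count

# With the three fixture days above (equal counts, so a plain average):
pooled, n = weighted_mean([(3.8, 96), (3.85, 96), (3.95, 96)])
print(round(pooled, 4), n)  # → 3.8667 288
```

With unequal counts the pooled value shifts toward the better-sampled days, which is the point of weighting by count rather than averaging the daily means directly.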

    def test_handles_empty_list(self):
        """Handles empty daily list."""
        result = _aggregate_daily_gauge_to_summary([], "battery")
        assert result.min_value is None
        assert result.max_value is None
        assert result.mean is None
        assert result.count == 0

    def test_handles_missing_metric(self, daily_gauge_data):
        """Handles when metric doesn't exist in daily data."""
        result = _aggregate_daily_gauge_to_summary(daily_gauge_data, "nonexistent")
        assert result.min_value is None
        assert result.max_value is None
        assert result.mean is None
        assert result.count == 0


class TestAggregateDailyCounterToSummary:
    """Tests for _aggregate_daily_counter_to_summary function."""

    @pytest.fixture
    def daily_counter_data(self):
        """Sample daily counter aggregates."""
        return [
            DailyAggregate(
                date=date(2024, 1, 1),
                metrics={
                    "packets_rx": MetricStats(total=1000, reboot_count=0, count=96)
                }
            ),
            DailyAggregate(
                date=date(2024, 1, 2),
                metrics={
                    "packets_rx": MetricStats(total=1500, reboot_count=1, count=96)
                }
            ),
            DailyAggregate(
                date=date(2024, 1, 3),
                metrics={
                    "packets_rx": MetricStats(total=800, reboot_count=0, count=96)
                }
            ),
        ]

    def test_returns_metric_stats(self, daily_counter_data):
        """Returns a MetricStats object."""
        result = _aggregate_daily_counter_to_summary(daily_counter_data, "packets_rx")
        assert isinstance(result, MetricStats)

    def test_sums_totals(self, daily_counter_data):
        """Sums totals across all days."""
        result = _aggregate_daily_counter_to_summary(daily_counter_data, "packets_rx")
        assert result.total == 3300  # 1000 + 1500 + 800
        assert result.count == 288

    def test_sums_reboots(self, daily_counter_data):
        """Sums reboot counts across all days."""
        result = _aggregate_daily_counter_to_summary(daily_counter_data, "packets_rx")
        assert result.reboot_count == 1

    def test_handles_empty_list(self):
        """Handles empty daily list."""
        result = _aggregate_daily_counter_to_summary([], "packets_rx")
        assert result.total is None
        assert result.count == 0
        assert result.reboot_count == 0

    def test_handles_missing_metric(self, daily_counter_data):
        """Handles when metric doesn't exist in daily data."""
        result = _aggregate_daily_counter_to_summary(daily_counter_data, "nonexistent")
        assert result.total is None
        assert result.count == 0
        assert result.reboot_count == 0


class TestAggregateMonthlyGaugeToSummary:
    """Tests for _aggregate_monthly_gauge_to_summary function."""

    @pytest.fixture
def monthly_gauge_data(self):
|
||||
"""Sample monthly gauge aggregates."""
|
||||
return [
|
||||
MonthlyAggregate(
|
||||
year=2024,
|
||||
month=1,
|
||||
role="companion",
|
||||
summary={
|
||||
"battery": MetricStats(
|
||||
min_value=3.6, min_time=datetime(2024, 1, 15, 4, 0),
|
||||
max_value=4.0, max_time=datetime(2024, 1, 20, 14, 0),
|
||||
mean=3.8, count=2976
|
||||
)
|
||||
}
|
||||
),
|
||||
MonthlyAggregate(
|
||||
year=2024,
|
||||
month=2,
|
||||
role="companion",
|
||||
summary={
|
||||
"battery": MetricStats(
|
||||
min_value=3.5, min_time=datetime(2024, 2, 10, 5, 0),
|
||||
max_value=4.1, max_time=datetime(2024, 2, 25, 16, 0),
|
||||
mean=3.9, count=2784
|
||||
)
|
||||
}
|
||||
),
|
||||
]
|
||||
|
||||
def test_returns_metric_stats(self, monthly_gauge_data):
|
||||
"""Returns a MetricStats object."""
|
||||
result = _aggregate_monthly_gauge_to_summary(monthly_gauge_data, "battery")
|
||||
assert isinstance(result, MetricStats)
|
||||
|
||||
def test_finds_overall_min(self, monthly_gauge_data):
|
||||
"""Finds minimum across all months."""
|
||||
result = _aggregate_monthly_gauge_to_summary(monthly_gauge_data, "battery")
|
||||
assert result.min_value == 3.5
|
||||
assert result.min_time == datetime(2024, 2, 10, 5, 0)
|
||||
|
||||
def test_finds_overall_max(self, monthly_gauge_data):
|
||||
"""Finds maximum across all months."""
|
||||
result = _aggregate_monthly_gauge_to_summary(monthly_gauge_data, "battery")
|
||||
assert result.max_value == 4.1
|
||||
assert result.max_time == datetime(2024, 2, 25, 16, 0)
|
||||
|
||||
def test_computes_weighted_mean(self, monthly_gauge_data):
|
||||
"""Computes weighted mean based on count."""
|
||||
result = _aggregate_monthly_gauge_to_summary(monthly_gauge_data, "battery")
|
||||
# Weighted: (3.8 * 2976 + 3.9 * 2784) / (2976 + 2784)
|
||||
expected = (3.8 * 2976 + 3.9 * 2784) / (2976 + 2784)
|
||||
assert result.mean == pytest.approx(expected, rel=0.01)
|
||||
assert result.count == 5760
|
||||
|
||||
def test_handles_empty_list(self):
|
||||
"""Handles empty monthly list."""
|
||||
result = _aggregate_monthly_gauge_to_summary([], "battery")
|
||||
assert result.min_value is None
|
||||
assert result.max_value is None
|
||||
assert result.mean is None
|
||||
assert result.count == 0
|
||||
|
||||
|
||||
class TestAggregateMonthlyCounterToSummary:
|
||||
"""Tests for _aggregate_monthly_counter_to_summary function."""
|
||||
|
||||
@pytest.fixture
|
||||
def monthly_counter_data(self):
|
||||
"""Sample monthly counter aggregates."""
|
||||
return [
|
||||
MonthlyAggregate(
|
||||
year=2024,
|
||||
month=1,
|
||||
role="companion",
|
||||
summary={
|
||||
"packets_rx": MetricStats(total=50000, reboot_count=2, count=2976)
|
||||
}
|
||||
),
|
||||
MonthlyAggregate(
|
||||
year=2024,
|
||||
month=2,
|
||||
role="companion",
|
||||
summary={
|
||||
"packets_rx": MetricStats(total=45000, reboot_count=1, count=2784)
|
||||
}
|
||||
),
|
||||
]
|
||||
|
||||
def test_returns_metric_stats(self, monthly_counter_data):
|
||||
"""Returns a MetricStats object."""
|
||||
result = _aggregate_monthly_counter_to_summary(monthly_counter_data, "packets_rx")
|
||||
assert isinstance(result, MetricStats)
|
||||
|
||||
def test_sums_totals(self, monthly_counter_data):
|
||||
"""Sums totals across all months."""
|
||||
result = _aggregate_monthly_counter_to_summary(monthly_counter_data, "packets_rx")
|
||||
assert result.total == 95000
|
||||
assert result.count == 5760
|
||||
|
||||
def test_sums_reboots(self, monthly_counter_data):
|
||||
"""Sums reboot counts across all months."""
|
||||
result = _aggregate_monthly_counter_to_summary(monthly_counter_data, "packets_rx")
|
||||
assert result.reboot_count == 3
|
||||
|
||||
def test_handles_empty_list(self):
|
||||
"""Handles empty monthly list."""
|
||||
result = _aggregate_monthly_counter_to_summary([], "packets_rx")
|
||||
assert result.total is None
|
||||
assert result.count == 0
|
||||
assert result.reboot_count == 0
|
||||
|
||||
def test_handles_missing_metric(self, monthly_counter_data):
|
||||
"""Handles when metric doesn't exist in monthly data."""
|
||||
result = _aggregate_monthly_counter_to_summary(monthly_counter_data, "nonexistent")
|
||||
assert result.total is None
|
||||
assert result.count == 0
|
||||
assert result.reboot_count == 0
|
||||
152  tests/reports/test_counter_total.py  Normal file
@@ -0,0 +1,152 @@
"""Tests for counter total computation with reboot handling."""

from datetime import datetime, timedelta

import pytest

from meshmon.reports import compute_counter_total


class TestComputeCounterTotal:
    """Tests for compute_counter_total function."""

    def test_calculates_total_from_deltas(self, sample_counter_values):
        """Calculates total as sum of positive deltas."""
        total, reboots = compute_counter_total(sample_counter_values)

        # Values: 100, 150, 200, 250, 300
        # Deltas: +50, +50, +50, +50 = 200
        assert total == 200
        assert reboots == 0

    def test_handles_single_value(self):
        """Single value cannot compute delta, returns None."""
        values = [(datetime(2024, 1, 15, 0, 0, 0), 100)]

        total, reboots = compute_counter_total(values)

        assert total is None
        assert reboots == 0

    def test_handles_empty_values(self):
        """Empty values returns None."""
        total, reboots = compute_counter_total([])

        assert total is None
        assert reboots == 0

    def test_detects_single_reboot(self, sample_counter_values_with_reboot):
        """Detects reboot and handles counter reset."""
        total, reboots = compute_counter_total(sample_counter_values_with_reboot)

        # Values: 100, 150, 200, 50 (reboot!), 100
        # Deltas: +50, +50, (reset to 50), +50
        # Total should be: 50 + 50 + 50 + 50 = 200
        # Or: (150-100) + (200-150) + 50 + (100-50) = 200
        assert total == 200
        assert reboots == 1

    def test_handles_multiple_reboots(self):
        """Handles multiple reboots in sequence."""
        base_ts = datetime(2024, 1, 15, 0, 0, 0)
        values = [
            (base_ts, 100),
            (base_ts + timedelta(minutes=15), 150),  # +50
            (base_ts + timedelta(minutes=30), 50),   # Reboot 1
            (base_ts + timedelta(minutes=45), 80),   # +30
            (base_ts + timedelta(hours=1), 30),      # Reboot 2
            (base_ts + timedelta(hours=1, minutes=15), 50),  # +20
        ]

        total, reboots = compute_counter_total(values)

        # Deltas: 50 + 50 + 30 + 30 + 20 = 180
        assert reboots == 2
        assert total == 50 + 50 + 30 + 30 + 20

    def test_zero_delta(self):
        """Handles zero delta (no change)."""
        base_ts = datetime(2024, 1, 15, 0, 0, 0)
        values = [
            (base_ts, 100),
            (base_ts + timedelta(minutes=15), 100),  # No change
            (base_ts + timedelta(minutes=30), 100),  # No change
        ]

        total, reboots = compute_counter_total(values)

        assert total == 0
        assert reboots == 0

    def test_large_values(self):
        """Handles large counter values."""
        base_ts = datetime(2024, 1, 15, 0, 0, 0)
        values = [
            (base_ts, 1000000000),
            (base_ts + timedelta(minutes=15), 1000001000),  # +1000
            (base_ts + timedelta(minutes=30), 1000002500),  # +1500
        ]

        total, reboots = compute_counter_total(values)

        assert total == 2500
        assert reboots == 0

    def test_sorted_values_required(self):
        """Function expects pre-sorted values by timestamp."""
        base_ts = datetime(2024, 1, 15, 0, 0, 0)
        # Properly sorted by timestamp
        values = [
            (base_ts, 100),
            (base_ts + timedelta(minutes=15), 150),
            (base_ts + timedelta(minutes=30), 200),
        ]

        total, reboots = compute_counter_total(values)

        # Deltas: 50, 50 = 100
        assert total == 100
        assert reboots == 0

    def test_two_values(self):
        """Two values gives single delta."""
        base_ts = datetime(2024, 1, 15, 0, 0, 0)
        values = [
            (base_ts, 100),
            (base_ts + timedelta(minutes=15), 175),
        ]

        total, reboots = compute_counter_total(values)

        assert total == 75
        assert reboots == 0

    def test_reboot_to_zero(self):
        """Handles reboot to exactly zero."""
        base_ts = datetime(2024, 1, 15, 0, 0, 0)
        values = [
            (base_ts, 100),
            (base_ts + timedelta(minutes=15), 150),  # +50
            (base_ts + timedelta(minutes=30), 0),    # Reboot to 0
            (base_ts + timedelta(minutes=45), 30),   # +30
        ]

        total, reboots = compute_counter_total(values)

        assert total == 50 + 0 + 30
        assert reboots == 1

    def test_float_values(self):
        """Handles float counter values."""
        base_ts = datetime(2024, 1, 15, 0, 0, 0)
        values = [
            (base_ts, 100.5),
            (base_ts + timedelta(minutes=15), 150.7),
            (base_ts + timedelta(minutes=30), 200.3),
        ]

        total, reboots = compute_counter_total(values)

        expected = (150.7 - 100.5) + (200.3 - 150.7)
        assert total == pytest.approx(expected)
        assert reboots == 0
316  tests/reports/test_format_json.py  Normal file
@@ -0,0 +1,316 @@
"""Tests for JSON report formatting."""

import json
from datetime import date, datetime

import pytest

from meshmon.reports import (
    DailyAggregate,
    MetricStats,
    MonthlyAggregate,
    YearlyAggregate,
    monthly_to_json,
    yearly_to_json,
)


class TestMonthlyToJson:
    """Tests for monthly_to_json function."""

    @pytest.fixture
    def sample_monthly_aggregate(self):
        """Create sample MonthlyAggregate for testing."""
        daily_data = [
            DailyAggregate(
                date=date(2024, 1, 1),
                metrics={
                    "bat": MetricStats(min_value=3.7, max_value=3.9, mean=3.8, count=24),
                    "nb_recv": MetricStats(total=720, count=24),
                },
            ),
            DailyAggregate(
                date=date(2024, 1, 2),
                metrics={
                    "bat": MetricStats(min_value=3.6, max_value=3.85, mean=3.75, count=24),
                    "nb_recv": MetricStats(total=840, count=24),
                },
            ),
        ]

        return MonthlyAggregate(
            year=2024,
            month=1,
            role="repeater",
            daily=daily_data,
            summary={
                "bat": MetricStats(
                    min_value=3.6,
                    min_time=datetime(2024, 1, 2, 1, 0),
                    max_value=3.9,
                    max_time=datetime(2024, 1, 1, 23, 0),
                    mean=3.775,
                    count=48,
                ),
                "nb_recv": MetricStats(total=1560, count=48, reboot_count=1),
            },
        )

    def test_returns_dict(self, sample_monthly_aggregate):
        """Returns a dictionary."""
        result = monthly_to_json(sample_monthly_aggregate)
        assert isinstance(result, dict)

    def test_includes_report_type(self, sample_monthly_aggregate):
        """Includes report_type field."""
        result = monthly_to_json(sample_monthly_aggregate)
        assert result["report_type"] == "monthly"

    def test_includes_year_and_month(self, sample_monthly_aggregate):
        """Includes year and month."""
        result = monthly_to_json(sample_monthly_aggregate)
        assert result["year"] == 2024
        assert result["month"] == 1

    def test_includes_role(self, sample_monthly_aggregate):
        """Includes role identifier."""
        result = monthly_to_json(sample_monthly_aggregate)
        assert result["role"] == "repeater"

    def test_includes_daily_data(self, sample_monthly_aggregate):
        """Includes daily breakdown."""
        result = monthly_to_json(sample_monthly_aggregate)
        assert "daily" in result
        assert len(result["daily"]) == 2
        assert result["days_with_data"] == 2

    def test_daily_data_has_date(self, sample_monthly_aggregate):
        """Daily data includes date."""
        result = monthly_to_json(sample_monthly_aggregate)
        first_day = result["daily"][0]
        assert "date" in first_day
        assert first_day["date"] == "2024-01-01"

    def test_daily_metrics_include_units_and_values(self, sample_monthly_aggregate):
        """Daily metrics include units and expected values."""
        result = monthly_to_json(sample_monthly_aggregate)
        first_day = result["daily"][0]

        bat_stats = first_day["metrics"]["bat"]
        assert bat_stats["unit"] == "mV"
        assert bat_stats["min"] == 3.7
        assert bat_stats["max"] == 3.9
        assert bat_stats["mean"] == 3.8
        assert bat_stats["count"] == 24

        rx_stats = first_day["metrics"]["nb_recv"]
        assert rx_stats["unit"] == "packets"
        assert rx_stats["total"] == 720
        assert rx_stats["count"] == 24

    def test_is_json_serializable(self, sample_monthly_aggregate):
        """Result is JSON serializable."""
        result = monthly_to_json(sample_monthly_aggregate)
        # Should not raise
        json_str = json.dumps(result)
        assert isinstance(json_str, str)

    def test_summary_includes_times_and_reboots(self, sample_monthly_aggregate):
        """Summary includes time fields and reboot counts when provided."""
        result = monthly_to_json(sample_monthly_aggregate)
        summary = result["summary"]

        assert summary["bat"]["min_time"] == "2024-01-02T01:00:00"
        assert summary["bat"]["max_time"] == "2024-01-01T23:00:00"
        assert summary["nb_recv"]["total"] == 1560
        assert summary["nb_recv"]["reboot_count"] == 1

    def test_handles_empty_daily(self):
        """Handles aggregate with no daily data."""
        agg = MonthlyAggregate(
            year=2024,
            month=1,
            role="repeater",
            daily=[],
            summary={},
        )

        result = monthly_to_json(agg)
        assert result["daily"] == []
        assert result["days_with_data"] == 0
        assert result["summary"] == {}


class TestYearlyToJson:
    """Tests for yearly_to_json function."""

    @pytest.fixture
    def sample_yearly_aggregate(self):
        """Create sample YearlyAggregate for testing."""
        monthly_data = [
            MonthlyAggregate(
                year=2024,
                month=1,
                role="repeater",
                daily=[],
                summary={"bat": MetricStats(min_value=3.6, max_value=3.9, mean=3.75, count=720)},
            ),
            MonthlyAggregate(
                year=2024,
                month=2,
                role="repeater",
                daily=[],
                summary={"bat": MetricStats(min_value=3.5, max_value=3.85, mean=3.7, count=672)},
            ),
        ]

        return YearlyAggregate(
            year=2024,
            role="repeater",
            monthly=monthly_data,
            summary={"bat": MetricStats(min_value=3.5, max_value=3.9, mean=3.725, count=1392)},
        )

    def test_returns_dict(self, sample_yearly_aggregate):
        """Returns a dictionary."""
        result = yearly_to_json(sample_yearly_aggregate)
        assert isinstance(result, dict)

    def test_includes_report_type(self, sample_yearly_aggregate):
        """Includes report_type field."""
        result = yearly_to_json(sample_yearly_aggregate)
        assert result["report_type"] == "yearly"

    def test_includes_year(self, sample_yearly_aggregate):
        """Includes year."""
        result = yearly_to_json(sample_yearly_aggregate)
        assert result["year"] == 2024

    def test_includes_role(self, sample_yearly_aggregate):
        """Includes role identifier."""
        result = yearly_to_json(sample_yearly_aggregate)
        assert result["role"] == "repeater"

    def test_includes_monthly_data(self, sample_yearly_aggregate):
        """Includes monthly breakdown."""
        result = yearly_to_json(sample_yearly_aggregate)
        assert "monthly" in result
        assert len(result["monthly"]) == 2
        assert result["months_with_data"] == 2

    def test_is_json_serializable(self, sample_yearly_aggregate):
        """Result is JSON serializable."""
        result = yearly_to_json(sample_yearly_aggregate)
        json_str = json.dumps(result)
        assert isinstance(json_str, str)

    def test_summary_and_monthly_entries(self, sample_yearly_aggregate):
        """Summary and monthly entries include expected fields."""
        result = yearly_to_json(sample_yearly_aggregate)

        assert result["summary"]["bat"]["count"] == 1392
        assert result["summary"]["bat"]["unit"] == "mV"

        first_month = result["monthly"][0]
        assert first_month["year"] == 2024
        assert first_month["month"] == 1
        assert first_month["days_with_data"] == 0
        assert first_month["summary"]["bat"]["mean"] == 3.75

    def test_handles_empty_monthly(self):
        """Handles aggregate with no monthly data."""
        agg = YearlyAggregate(
            year=2024,
            role="repeater",
            monthly=[],
            summary={},
        )

        result = yearly_to_json(agg)
        assert result["monthly"] == []
        assert result["months_with_data"] == 0
        assert result["summary"] == {}


class TestJsonStructure:
    """Tests for JSON output structure."""

    def test_metric_stats_converted(self):
        """MetricStats are properly converted to dicts."""
        agg = MonthlyAggregate(
            year=2024,
            month=1,
            role="repeater",
            daily=[],
            summary={"bat": MetricStats(min_value=3.5, max_value=4.0, mean=3.75, count=100)},
        )

        result = monthly_to_json(agg)

        # Summary should contain stats
        assert isinstance(result["summary"], dict)
        assert result["summary"]["bat"]["min"] == 3.5
        assert result["summary"]["bat"]["max"] == 4.0
        assert result["summary"]["bat"]["mean"] == 3.75
        assert result["summary"]["bat"]["unit"] == "mV"

    def test_nested_structure_serializes(self):
        """Nested structures serialize correctly."""
        daily = DailyAggregate(
            date=date(2024, 1, 1),
            metrics={"bat": MetricStats(min_value=3.7, max_value=3.9, mean=3.8, count=24)},
        )

        agg = MonthlyAggregate(
            year=2024,
            month=1,
            role="companion",
            daily=[daily],
            summary={},
        )

        result = monthly_to_json(agg)
        json_str = json.dumps(result, indent=2)

        # Should be valid JSON with proper structure
        reparsed = json.loads(json_str)
        assert reparsed == result


class TestJsonRoundTrip:
    """Tests for JSON data round-trip integrity."""

    def test_parse_and_serialize_identical(self):
        """Parsing and re-serializing produces same structure."""
        agg = MonthlyAggregate(
            year=2024,
            month=1,
            role="repeater",
            daily=[],
            summary={"bat": MetricStats(min_value=3.5, max_value=4.0, mean=3.75, count=100)},
        )

        result = monthly_to_json(agg)
        json_str = json.dumps(result)
        parsed = json.loads(json_str)
        reserialized = json.dumps(parsed)
        reparsed = json.loads(reserialized)

        assert parsed == reparsed

    def test_numeric_values_preserved(self):
        """Numeric values are preserved through round-trip."""
        agg = MonthlyAggregate(
            year=2024,
            month=6,
            role="repeater",
            daily=[],
            summary={},
        )

        result = monthly_to_json(agg)
        json_str = json.dumps(result)
        parsed = json.loads(json_str)

        assert parsed["year"] == 2024
        assert parsed["month"] == 6
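The JSON tests fix a concrete wire shape: ISO-8601 dates and times, a `unit` tag per metric, and min/max/mean fields for gauges versus total/reboot_count for counters. A hedged sketch of that per-metric conversion, with field names taken from the assertions above; the `UNITS` map and the helper name are illustrative, not the real `meshmon.reports` API:

```python
from datetime import datetime

# Illustrative unit map, inferred from the assertions ("bat" -> mV, "nb_recv" -> packets).
UNITS = {"bat": "mV", "nb_recv": "packets"}


def metric_to_json(name, stats):
    """Convert one metric's stats (as a plain dict here) into the JSON shape
    the tests expect: unit + count always, then either counter or gauge fields."""
    out = {"unit": UNITS.get(name, ""), "count": stats.get("count", 0)}
    if stats.get("total") is not None:  # counter-style metric
        out["total"] = stats["total"]
        out["reboot_count"] = stats.get("reboot_count", 0)
    else:  # gauge-style metric
        out["min"] = stats.get("min_value")
        out["max"] = stats.get("max_value")
        out["mean"] = stats.get("mean")
        for key in ("min_time", "max_time"):
            if stats.get(key) is not None:
                out[key] = stats[key].isoformat()  # datetime -> "2024-01-02T01:00:00"
    return out
```

Every value in the resulting dict is a JSON-native type, which is why `json.dumps` round-trips cleanly in the tests above.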
647  tests/reports/test_format_txt.py  Normal file
@@ -0,0 +1,647 @@
"""Tests for WeeWX-style ASCII text report formatting."""

from datetime import date

import pytest

from meshmon.reports import (
    Column,
    DailyAggregate,
    LocationInfo,
    MetricStats,
    MonthlyAggregate,
    YearlyAggregate,
    _format_row,
    _format_separator,
    format_monthly_txt,
    format_yearly_txt,
)


class TestColumn:
    """Tests for Column dataclass."""

    def test_format_with_value(self):
        """Formats value with specified width and alignment."""
        col = Column(width=6, align="right")

        result = col.format(42)

        assert result == "    42"

    def test_format_with_none(self):
        """Formats None as dash."""
        col = Column(width=10)

        result = col.format(None)

        assert result == "-".rjust(10)

    def test_left_alignment(self):
        """Left alignment pads on right."""
        col = Column(width=10, align="left")

        result = col.format("Hi")

        assert result == "Hi".ljust(10)

    def test_right_alignment(self):
        """Right alignment pads on left."""
        col = Column(width=10, align="right")

        result = col.format("Hi")

        assert result == "Hi".rjust(10)

    def test_center_alignment(self):
        """Center alignment pads on both sides."""
        col = Column(width=10, align="center")

        result = col.format("Hi")

        assert result == "Hi".center(10)

    def test_decimals_formatting(self):
        """Formats floats with specified decimals."""
        col = Column(width=10, decimals=2)

        result = col.format(3.14159)

        assert result == "3.14".rjust(10)

    def test_comma_separator(self):
        """Uses comma separator for large integers."""
        col = Column(width=15, comma_sep=True)

        result = col.format(1000000)

        assert result == "1,000,000".rjust(15)


class TestFormatRow:
    """Tests for _format_row function."""

    def test_joins_values_with_columns(self):
        """Joins formatted values using column specs."""
        columns = [
            Column(width=5),
            Column(width=5),
        ]

        row = _format_row(columns, [1, 2])

        assert row == "    1    2"

    def test_handles_fewer_values(self):
        """Handles fewer values than columns."""
        columns = [
            Column(width=5),
            Column(width=5),
            Column(width=5),
        ]

        # Should not raise - zip stops at shorter list
        row = _format_row(columns, ["X", "Y"])

        assert row is not None
        assert "X" in row
        assert "Y" in row
        assert len(row) == 10


class TestFormatSeparator:
    """Tests for _format_separator function."""

    def test_creates_separator_line(self):
        """Creates separator line matching column widths."""
        columns = [
            Column(width=10),
            Column(width=8),
        ]

        separator = _format_separator(columns)

        assert separator == "-" * 18

    def test_matches_total_width(self):
        """Separator width matches total column width."""
        columns = [
            Column(width=10),
            Column(width=10),
        ]

        separator = _format_separator(columns)

        assert len(separator) == 20
        assert set(separator) == {"-"}

    def test_custom_separator_char(self):
        """Uses custom separator character."""
        columns = [Column(width=10)]

        separator = _format_separator(columns, char="=")

        assert separator == "=" * 10
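The three formatting helpers tested above are small enough to sketch in full: a column spec carries width, alignment, decimal precision and an optional thousands separator; the row formatter zips values against columns; the separator repeats a rule character across the combined width. A sketch consistent with the assertions, using hypothetical local names (the real `Column` in `meshmon.reports` may carry more fields):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ColumnSketch:
    width: int
    align: str = "right"
    decimals: Optional[int] = None
    comma_sep: bool = False

    def format(self, value):
        if value is None:
            text = "-"  # missing values render as a dash
        elif self.decimals is not None and isinstance(value, float):
            text = f"{value:.{self.decimals}f}"
        elif self.comma_sep and isinstance(value, int):
            text = f"{value:,}"
        else:
            text = str(value)
        pad = {"left": str.ljust, "right": str.rjust, "center": str.center}[self.align]
        return pad(text, self.width)


def format_row(columns, values):
    # zip() stops at the shorter sequence, so extra columns are simply dropped
    return "".join(col.format(v) for col, v in zip(columns, values))


def format_separator(columns, char="-"):
    return char * sum(col.width for col in columns)
```

Because `zip` truncates to the shorter input, a row with fewer values than columns still formats without error, exactly the behaviour `test_handles_fewer_values` relies on.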
class TestFormatMonthlyTxt:
    """Tests for format_monthly_txt function."""

    @pytest.fixture
    def sample_monthly_aggregate(self):
        """Create sample MonthlyAggregate for testing."""
        daily_data = [
            DailyAggregate(
                date=date(2024, 1, 1),
                metrics={
                    "bat": MetricStats(min_value=3700, max_value=3900, mean=3800, count=24),
                    "nb_recv": MetricStats(total=720, count=24),
                },
            ),
            DailyAggregate(
                date=date(2024, 1, 2),
                metrics={
                    "bat": MetricStats(min_value=3600, max_value=3850, mean=3750, count=24),
                    "nb_recv": MetricStats(total=840, count=24),
                },
            ),
        ]

        return MonthlyAggregate(
            year=2024,
            month=1,
            role="repeater",
            daily=daily_data,
            summary={"bat": MetricStats(min_value=3600, max_value=3900, mean=3775, count=48)},
        )

    @pytest.fixture
    def sample_location(self):
        """Create sample LocationInfo for testing."""
        return LocationInfo(
            name="Test Location",
            lat=52.0,
            lon=4.0,
            elev=10.0,
        )

    def test_returns_string(self, sample_monthly_aggregate, sample_location):
        """Returns a string."""
        result = format_monthly_txt(sample_monthly_aggregate, "Test Repeater", sample_location)

        assert isinstance(result, str)

    def test_includes_header(self, sample_monthly_aggregate, sample_location):
        """Includes report header with month/year."""
        result = format_monthly_txt(sample_monthly_aggregate, "Test Repeater", sample_location)

        assert "MONTHLY MESHCORE REPORT for January 2024" in result

    def test_includes_node_name(self, sample_monthly_aggregate, sample_location):
        """Includes node name."""
        result = format_monthly_txt(sample_monthly_aggregate, "Test Repeater", sample_location)

        assert "Test Repeater" in result

    def test_has_table_structure(self, sample_monthly_aggregate, sample_location):
        """Has ASCII table structure with separators."""
        result = format_monthly_txt(sample_monthly_aggregate, "Test Repeater", sample_location)

        assert "BATTERY (V)" in result
        assert result.count("-" * 95) == 2

    def test_daily_rows_rendered(self, sample_monthly_aggregate, sample_location):
        """Renders one row per day with battery values."""
        result = format_monthly_txt(sample_monthly_aggregate, "Test Repeater", sample_location)
        lines = result.splitlines()
        daily_lines = [line for line in lines if line[:3].strip().isdigit()]

        assert [line[:3].strip() for line in daily_lines] == ["1", "2"]
        assert any("3.80" in line for line in daily_lines)
        assert any("3.75" in line for line in daily_lines)

    def test_handles_empty_daily(self, sample_location):
        """Handles aggregate with no daily data."""
        agg = MonthlyAggregate(
            year=2024,
            month=1,
            role="repeater",
            daily=[],
            summary={},
        )

        result = format_monthly_txt(agg, "Test Repeater", sample_location)

        assert isinstance(result, str)
        lines = result.splitlines()
        daily_lines = [line for line in lines if line[:3].strip().isdigit()]
        assert daily_lines == []

    def test_includes_location_info(self, sample_monthly_aggregate, sample_location):
        """Includes location information."""
        result = format_monthly_txt(sample_monthly_aggregate, "Test Repeater", sample_location)

        assert "NAME: Test Location" in result
        assert "COORDS:" in result
        assert "ELEV: 10 meters" in result


class TestFormatYearlyTxt:
    """Tests for format_yearly_txt function."""

    @pytest.fixture
    def sample_yearly_aggregate(self):
        """Create sample YearlyAggregate for testing."""
        monthly_data = [
            MonthlyAggregate(
                year=2024,
                month=1,
                role="repeater",
                daily=[],
                summary={"bat": MetricStats(min_value=3600, max_value=3900, mean=3750, count=720)},
            ),
            MonthlyAggregate(
                year=2024,
                month=2,
                role="repeater",
                daily=[],
                summary={"bat": MetricStats(min_value=3500, max_value=3850, mean=3700, count=672)},
            ),
        ]

        return YearlyAggregate(
            year=2024,
            role="repeater",
            monthly=monthly_data,
            summary={"bat": MetricStats(min_value=3500, max_value=3900, mean=3725, count=1392)},
        )

    @pytest.fixture
    def sample_location(self):
        """Create sample LocationInfo for testing."""
        return LocationInfo(
            name="Test Location",
            lat=52.0,
            lon=4.0,
            elev=10.0,
        )

    def test_returns_string(self, sample_yearly_aggregate, sample_location):
        """Returns a string."""
        result = format_yearly_txt(sample_yearly_aggregate, "Test Repeater", sample_location)

        assert isinstance(result, str)

    def test_includes_year(self, sample_yearly_aggregate, sample_location):
        """Includes year in header."""
        result = format_yearly_txt(sample_yearly_aggregate, "Test Repeater", sample_location)

        assert "YEARLY MESHCORE REPORT for 2024" in result
        assert "NODE: Test Repeater" in result
        assert "NAME: Test Location" in result

    def test_has_monthly_breakdown(self, sample_yearly_aggregate, sample_location):
        """Shows monthly breakdown."""
        result = format_yearly_txt(sample_yearly_aggregate, "Test Repeater", sample_location)

        lines = result.splitlines()
        monthly_lines = [line for line in lines if line.strip().startswith("2024")]
        months = [line[4:8].strip() for line in monthly_lines]
        assert months == ["01", "02"]

    def test_handles_empty_monthly(self, sample_location):
        """Handles aggregate with no monthly data."""
        agg = YearlyAggregate(
            year=2024,
            role="repeater",
            monthly=[],
            summary={},
        )

        result = format_yearly_txt(agg, "Test Repeater", sample_location)

        assert isinstance(result, str)


class TestFormatYearlyCompanionTxt:
    """Tests for format_yearly_txt with companion role."""

    @pytest.fixture
    def sample_companion_yearly_aggregate(self):
        """Create sample YearlyAggregate for companion role testing."""
        from datetime import datetime as dt
        monthly_data = [
            MonthlyAggregate(
                year=2024,
                month=1,
                role="companion",
                daily=[],
                summary={
                    "battery_mv": MetricStats(
                        min_value=3600, min_time=dt(2024, 1, 15, 4, 0),
                        max_value=3900, max_time=dt(2024, 1, 20, 14, 0),
                        mean=3750, count=720
                    ),
                    "bat_pct": MetricStats(mean=75, count=720),
                    "contacts": MetricStats(mean=10, count=720),
                    "recv": MetricStats(total=5000, count=720),
                    "sent": MetricStats(total=3000, count=720),
                },
            ),
            MonthlyAggregate(
                year=2024,
                month=2,
                role="companion",
                daily=[],
                summary={
                    "battery_mv": MetricStats(
                        min_value=3500, min_time=dt(2024, 2, 10, 5, 0),
                        max_value=3850, max_time=dt(2024, 2, 25, 16, 0),
                        mean=3700, count=672
                    ),
                    "bat_pct": MetricStats(mean=70, count=672),
                    "contacts": MetricStats(mean=12, count=672),
                    "recv": MetricStats(total=4500, count=672),
                    "sent": MetricStats(total=2800, count=672),
                },
            ),
        ]

        return YearlyAggregate(
            year=2024,
            role="companion",
            monthly=monthly_data,
            summary={
                "battery_mv": MetricStats(
                    min_value=3500, min_time=dt(2024, 2, 10, 5, 0),
                    max_value=3900, max_time=dt(2024, 1, 20, 14, 0),
                    mean=3725, count=1392
                ),
                "bat_pct": MetricStats(mean=72.5, count=1392),
|
||||
"contacts": MetricStats(mean=11, count=1392),
|
||||
"recv": MetricStats(total=9500, count=1392),
|
||||
"sent": MetricStats(total=5800, count=1392),
|
||||
},
|
||||
)
|
||||
|
||||
@pytest.fixture
|
||||
def sample_location(self):
|
||||
"""Create sample LocationInfo for testing."""
|
||||
return LocationInfo(
|
||||
name="Test Location",
|
||||
lat=52.0,
|
||||
lon=4.0,
|
||||
elev=10.0,
|
||||
)
|
||||
|
||||
def test_returns_string(self, sample_companion_yearly_aggregate, sample_location):
|
||||
"""Returns a string."""
|
||||
result = format_yearly_txt(sample_companion_yearly_aggregate, "Test Companion", sample_location)
|
||||
|
||||
assert isinstance(result, str)
|
||||
|
||||
def test_includes_year(self, sample_companion_yearly_aggregate, sample_location):
|
||||
"""Includes year in header."""
|
||||
result = format_yearly_txt(sample_companion_yearly_aggregate, "Test Companion", sample_location)
|
||||
|
||||
assert "YEARLY MESHCORE REPORT for 2024" in result
|
||||
assert "NODE: Test Companion" in result
|
||||
assert "NAME: Test Location" in result
|
||||
|
||||
def test_includes_node_name(self, sample_companion_yearly_aggregate, sample_location):
|
||||
"""Includes node name."""
|
||||
result = format_yearly_txt(sample_companion_yearly_aggregate, "Test Companion", sample_location)
|
||||
|
||||
assert "Test Companion" in result
|
||||
|
||||
def test_has_monthly_breakdown(self, sample_companion_yearly_aggregate, sample_location):
|
||||
"""Shows monthly breakdown."""
|
||||
result = format_yearly_txt(sample_companion_yearly_aggregate, "Test Companion", sample_location)
|
||||
|
||||
lines = result.splitlines()
|
||||
monthly_lines = [line for line in lines if line.strip().startswith("2024")]
|
||||
months = [line[4:8].strip() for line in monthly_lines]
|
||||
assert months == ["01", "02"]
|
||||
|
||||
def test_has_battery_data(self, sample_companion_yearly_aggregate, sample_location):
|
||||
"""Contains battery voltage data."""
|
||||
result = format_yearly_txt(sample_companion_yearly_aggregate, "Test Companion", sample_location)
|
||||
|
||||
# Battery header or VOLT should be present
|
||||
assert "BATT" in result or "VOLT" in result
|
||||
|
||||
def test_has_packet_counts(self, sample_companion_yearly_aggregate, sample_location):
|
||||
"""Contains packet count data."""
|
||||
result = format_yearly_txt(sample_companion_yearly_aggregate, "Test Companion", sample_location)
|
||||
|
||||
# RX and TX columns should be present
|
||||
assert "RX" in result
|
||||
assert "TX" in result
|
||||
|
||||
def test_handles_empty_monthly(self, sample_location):
|
||||
"""Handles aggregate with no monthly data."""
|
||||
agg = YearlyAggregate(
|
||||
year=2024,
|
||||
role="companion",
|
||||
monthly=[],
|
||||
summary={},
|
||||
)
|
||||
|
||||
result = format_yearly_txt(agg, "Test Companion", sample_location)
|
||||
|
||||
assert isinstance(result, str)
|
||||
|
||||
|
||||
class TestFormatMonthlyCompanionTxt:
|
||||
"""Tests for format_monthly_txt with companion role."""
|
||||
|
||||
@pytest.fixture
|
||||
def sample_companion_monthly_aggregate(self):
|
||||
"""Create sample MonthlyAggregate for companion role testing."""
|
||||
from datetime import datetime as dt
|
||||
daily_data = [
|
||||
DailyAggregate(
|
||||
date=date(2024, 1, 1),
|
||||
metrics={
|
||||
"battery_mv": MetricStats(
|
||||
min_value=3700, min_time=dt(2024, 1, 1, 4, 0),
|
||||
max_value=3900, max_time=dt(2024, 1, 1, 14, 0),
|
||||
mean=3800, count=24
|
||||
),
|
||||
"bat_pct": MetricStats(mean=75, count=24),
|
||||
"contacts": MetricStats(mean=10, count=24),
|
||||
"recv": MetricStats(total=500, count=24),
|
||||
"sent": MetricStats(total=300, count=24),
|
||||
},
|
||||
),
|
||||
DailyAggregate(
|
||||
date=date(2024, 1, 2),
|
||||
metrics={
|
||||
"battery_mv": MetricStats(
|
||||
min_value=3650, min_time=dt(2024, 1, 2, 5, 0),
|
||||
max_value=3850, max_time=dt(2024, 1, 2, 12, 0),
|
||||
mean=3750, count=24
|
||||
),
|
||||
"bat_pct": MetricStats(mean=70, count=24),
|
||||
"contacts": MetricStats(mean=11, count=24),
|
||||
"recv": MetricStats(total=450, count=24),
|
||||
"sent": MetricStats(total=280, count=24),
|
||||
},
|
||||
),
|
||||
]
|
||||
|
||||
return MonthlyAggregate(
|
||||
year=2024,
|
||||
month=1,
|
||||
role="companion",
|
||||
daily=daily_data,
|
||||
summary={
|
||||
"battery_mv": MetricStats(
|
||||
min_value=3650, min_time=dt(2024, 1, 2, 5, 0),
|
||||
max_value=3900, max_time=dt(2024, 1, 1, 14, 0),
|
||||
mean=3775, count=48
|
||||
),
|
||||
"bat_pct": MetricStats(mean=72.5, count=48),
|
||||
"contacts": MetricStats(mean=10.5, count=48),
|
||||
"recv": MetricStats(total=950, count=48),
|
||||
"sent": MetricStats(total=580, count=48),
|
||||
},
|
||||
)
|
||||
|
||||
@pytest.fixture
|
||||
def sample_location(self):
|
||||
"""Create sample LocationInfo for testing."""
|
||||
return LocationInfo(
|
||||
name="Test Location",
|
||||
lat=52.0,
|
||||
lon=4.0,
|
||||
elev=10.0,
|
||||
)
|
||||
|
||||
def test_returns_string(self, sample_companion_monthly_aggregate, sample_location):
|
||||
"""Returns a string."""
|
||||
result = format_monthly_txt(sample_companion_monthly_aggregate, "Test Companion", sample_location)
|
||||
|
||||
assert isinstance(result, str)
|
||||
|
||||
def test_includes_month_year(self, sample_companion_monthly_aggregate, sample_location):
|
||||
"""Includes month and year in header."""
|
||||
result = format_monthly_txt(sample_companion_monthly_aggregate, "Test Companion", sample_location)
|
||||
|
||||
assert "MONTHLY MESHCORE REPORT for January 2024" in result
|
||||
assert "NODE: Test Companion" in result
|
||||
|
||||
def test_has_daily_breakdown(self, sample_companion_monthly_aggregate, sample_location):
|
||||
"""Shows daily breakdown."""
|
||||
result = format_monthly_txt(sample_companion_monthly_aggregate, "Test Companion", sample_location)
|
||||
|
||||
lines = result.splitlines()
|
||||
daily_lines = [line for line in lines if line[:3].strip().isdigit()]
|
||||
assert [line[:3].strip() for line in daily_lines] == ["1", "2"]
|
||||
|
||||
def test_has_packet_counts(self, sample_companion_monthly_aggregate, sample_location):
|
||||
"""Contains packet count data."""
|
||||
result = format_monthly_txt(sample_companion_monthly_aggregate, "Test Companion", sample_location)
|
||||
|
||||
# RX and TX columns should be present
|
||||
assert "RX" in result
|
||||
assert "TX" in result
|
||||
|
||||
|
||||
class TestTextReportContent:
|
||||
"""Tests for text report content quality."""
|
||||
|
||||
@pytest.fixture
|
||||
def sample_monthly_aggregate(self):
|
||||
"""Create sample MonthlyAggregate for testing."""
|
||||
daily_data = [
|
||||
DailyAggregate(
|
||||
date=date(2024, 1, 1),
|
||||
metrics={"bat": MetricStats(min_value=3700, max_value=3900, mean=3800, count=24)},
|
||||
),
|
||||
]
|
||||
|
||||
return MonthlyAggregate(
|
||||
year=2024,
|
||||
month=1,
|
||||
role="repeater",
|
||||
daily=daily_data,
|
||||
summary={"bat": MetricStats(min_value=3700, max_value=3900, mean=3800, count=24)},
|
||||
)
|
||||
|
||||
@pytest.fixture
|
||||
def sample_location(self):
|
||||
"""Create sample LocationInfo for testing."""
|
||||
return LocationInfo(
|
||||
name="Test Location",
|
||||
lat=52.0,
|
||||
lon=4.0,
|
||||
elev=10.0,
|
||||
)
|
||||
|
||||
def test_readable_numbers(self, sample_monthly_aggregate, sample_location):
|
||||
"""Numbers are formatted readably."""
|
||||
result = format_monthly_txt(sample_monthly_aggregate, "Test Repeater", sample_location)
|
||||
|
||||
# Should contain numeric values
|
||||
assert any(c.isdigit() for c in result)
|
||||
|
||||
def test_aligned_columns(self, sample_monthly_aggregate, sample_location):
|
||||
"""Columns appear aligned."""
|
||||
result = format_monthly_txt(sample_monthly_aggregate, "Test Repeater", sample_location)
|
||||
lines = result.split("\n")
|
||||
|
||||
# Find lines that start with day numbers (data rows)
|
||||
# These are the actual data rows that should be aligned
|
||||
data_lines = [line for line in lines if line.strip() and line.strip()[:2].isdigit()]
|
||||
if len(data_lines) >= 2:
|
||||
lengths = [len(line) for line in data_lines]
|
||||
# Data rows should be same length (well aligned)
|
||||
assert max(lengths) - min(lengths) < 10
|
||||
|
||||
|
||||
class TestCompanionFormatting:
|
||||
"""Tests for companion-specific formatting."""
|
||||
|
||||
@pytest.fixture
|
||||
def companion_monthly_aggregate(self):
|
||||
"""Create sample companion MonthlyAggregate."""
|
||||
daily_data = [
|
||||
DailyAggregate(
|
||||
date=date(2024, 1, 1),
|
||||
metrics={
|
||||
"battery_mv": MetricStats(min_value=3700, max_value=3900, mean=3800, count=24),
|
||||
"contacts": MetricStats(min_value=5, max_value=10, mean=7, count=24),
|
||||
"recv": MetricStats(total=720, count=24),
|
||||
},
|
||||
),
|
||||
]
|
||||
|
||||
return MonthlyAggregate(
|
||||
year=2024,
|
||||
month=1,
|
||||
role="companion",
|
||||
daily=daily_data,
|
||||
summary={
|
||||
"battery_mv": MetricStats(min_value=3700, max_value=3900, mean=3800, count=24),
|
||||
},
|
||||
)
|
||||
|
||||
@pytest.fixture
|
||||
def sample_location(self):
|
||||
"""Create sample LocationInfo."""
|
||||
return LocationInfo(
|
||||
name="Test Location",
|
||||
lat=52.0,
|
||||
lon=4.0,
|
||||
elev=10.0,
|
||||
)
|
||||
|
||||
def test_companion_monthly_format(self, companion_monthly_aggregate, sample_location):
|
||||
"""Companion monthly report formats correctly."""
|
||||
result = format_monthly_txt(companion_monthly_aggregate, "Test Companion", sample_location)
|
||||
|
||||
assert isinstance(result, str)
|
||||
assert "MONTHLY MESHCORE REPORT for January 2024" in result
|
||||
assert "NODE: Test Companion" in result
|
||||
assert "NAME: Test Location" in result
|
||||
tests/reports/test_location.py (new file, 189 lines)
@@ -0,0 +1,189 @@
"""Tests for location formatting functions."""


from meshmon.reports import (
    LocationInfo,
    format_lat_lon,
    format_lat_lon_dms,
)


class TestFormatLatLon:
    """Tests for format_lat_lon function."""

    def test_formats_positive_coordinates(self):
        """Formats positive lat/lon with N/E."""
        lat_str, lon_str = format_lat_lon(51.5074, 0.1278)

        assert lat_str == "51-30.44 N"
        assert lon_str == "000-07.67 E"

    def test_formats_negative_latitude(self):
        """Negative latitude shows S."""
        lat_str, lon_str = format_lat_lon(-33.8688, 151.2093)

        assert lat_str == "33-52.13 S"
        assert lon_str == "151-12.56 E"

    def test_formats_negative_longitude(self):
        """Negative longitude shows W."""
        lat_str, lon_str = format_lat_lon(51.5074, -0.1278)

        assert lon_str == "000-07.67 W"

    def test_formats_positive_longitude(self):
        """Positive longitude shows E."""
        lat_str, lon_str = format_lat_lon(0.0, 4.0)

        assert lon_str == "004-00.00 E"

    def test_includes_degrees_minutes(self):
        """Includes degrees and minutes."""
        lat_str, lon_str = format_lat_lon(3.5, 7.25)

        assert lat_str.startswith("03-")
        assert lon_str.startswith("007-")

    def test_handles_zero(self):
        """Handles zero coordinates."""
        lat_str, lon_str = format_lat_lon(0.0, 0.0)

        assert lat_str == "00-00.00 N"
        assert lon_str == "000-00.00 E"

    def test_handles_extremes(self):
        """Handles extreme coordinates."""
        # North pole
        lat_str_north, lon_str_north = format_lat_lon(90.0, 0.0)
        assert lat_str_north == "90-00.00 N"

        # South pole
        lat_str_south, lon_str_south = format_lat_lon(-90.0, 0.0)
        assert lat_str_south == "90-00.00 S"


class TestFormatLatLonDms:
    """Tests for format_lat_lon_dms function."""

    def test_returns_dms_format(self):
        """Returns degrees-minutes-seconds format."""
        result = format_lat_lon_dms(51.5074, -0.1278)

        assert result == "51°30'26\"N 000°07'40\"W"

    def test_includes_direction(self):
        """Includes N/S/E/W directions."""
        result = format_lat_lon_dms(51.5074, -0.1278)

        assert "N" in result
        assert "W" in result

    def test_correct_conversion(self):
        """Converts decimal to DMS correctly."""
        result = format_lat_lon_dms(0.0, 0.0)

        assert result == "00°00'00\"N 000°00'00\"E"

    def test_handles_fractional_seconds(self):
        """Handles fractional seconds."""
        result = format_lat_lon_dms(51.123456, -0.987654)

        assert result == "51°07'24\"N 000°59'15\"W"

    def test_combines_lat_and_lon(self):
        """Returns combined string with both lat and lon."""
        result = format_lat_lon_dms(52.0, 4.0)

        assert result == "52°00'00\"N 004°00'00\"E"


class TestLocationInfo:
    """Tests for LocationInfo dataclass."""

    def test_stores_all_fields(self):
        """Stores all location fields."""
        loc = LocationInfo(
            name="Test Location",
            lat=51.5074,
            lon=-0.1278,
            elev=11.0,
        )

        assert loc.name == "Test Location"
        assert loc.lat == 51.5074
        assert loc.lon == -0.1278
        assert loc.elev == 11.0

    def test_format_header(self):
        """format_header returns formatted string."""
        loc = LocationInfo(
            name="Test Location",
            lat=51.5074,
            lon=-0.1278,
            elev=11.0,
        )

        header = loc.format_header()

        assert header == (
            "NAME: Test Location\n"
            "COORDS: 51°30'26\"N 000°07'40\"W ELEV: 11 meters"
        )

    def test_format_header_includes_coordinates(self):
        """Header includes formatted coordinates."""
        loc = LocationInfo(
            name="Test Location",
            lat=51.5074,
            lon=-0.1278,
            elev=11.0,
        )

        header = loc.format_header()

        assert "COORDS: 51°30'26\"N 000°07'40\"W" in header

    def test_format_header_includes_elevation(self):
        """Header includes elevation with unit."""
        loc = LocationInfo(
            name="London",
            lat=51.5074,
            lon=-0.1278,
            elev=11.0,
        )

        header = loc.format_header()

        assert "ELEV: 11 meters" in header


class TestLocationCoordinates:
    """Tests for various coordinate scenarios."""

    def test_equator(self):
        """Handles equator (0° latitude)."""
        lat_str, lon_str = format_lat_lon(0.0, 45.0)

        assert lat_str == "00-00.00 N"
        assert lon_str == "045-00.00 E"

    def test_prime_meridian(self):
        """Handles prime meridian (0° longitude)."""
        lat_str, lon_str = format_lat_lon(45.0, 0.0)

        assert lat_str == "45-00.00 N"
        assert lon_str == "000-00.00 E"

    def test_international_date_line(self):
        """Handles international date line (180° longitude)."""
        lat_str, lon_str = format_lat_lon(0.0, 180.0)

        assert lat_str == "00-00.00 N"
        assert lon_str == "180-00.00 E"

    def test_very_precise_coordinates(self):
        """Handles high-precision coordinates."""
        lat_str, lon_str = format_lat_lon(51.50735509, -0.12775829)

        assert lat_str == "51-30.44 N"
        assert lon_str == "000-07.67 W"
tests/reports/test_snapshots.py (new file, 557 lines)
@@ -0,0 +1,557 @@
|
||||
"""Snapshot tests for text report formatting.
|
||||
|
||||
These tests compare generated TXT reports against saved snapshots
|
||||
to detect unintended changes in report layout and formatting.
|
||||
|
||||
To update snapshots, run: UPDATE_SNAPSHOTS=1 pytest tests/reports/test_snapshots.py
|
||||
"""
|
||||
|
||||
import os
|
||||
from datetime import date, datetime
|
||||
from pathlib import Path
|
||||
|
||||
import pytest
|
||||
|
||||
from meshmon.reports import (
|
||||
DailyAggregate,
|
||||
LocationInfo,
|
||||
MetricStats,
|
||||
MonthlyAggregate,
|
||||
YearlyAggregate,
|
||||
format_monthly_txt,
|
||||
format_yearly_txt,
|
||||
)
|
||||
|
||||
|
||||
class TestTxtReportSnapshots:
|
||||
"""Snapshot tests for WeeWX-style ASCII text reports."""
|
||||
|
||||
@pytest.fixture
|
||||
def update_snapshots(self):
|
||||
"""Return True if snapshots should be updated."""
|
||||
return os.environ.get("UPDATE_SNAPSHOTS", "").lower() in ("1", "true", "yes")
|
||||
|
||||
@pytest.fixture
|
||||
def txt_snapshots_dir(self):
|
||||
"""Path to TXT snapshots directory."""
|
||||
return Path(__file__).parent.parent / "snapshots" / "txt"
|
||||
|
||||
@pytest.fixture
|
||||
def sample_location(self):
|
||||
"""Create sample LocationInfo for testing."""
|
||||
return LocationInfo(
|
||||
name="Test Observatory",
|
||||
lat=52.3676, # Amsterdam
|
||||
lon=4.9041,
|
||||
elev=2.0,
|
||||
)
|
||||
|
||||
@pytest.fixture
|
||||
def repeater_monthly_aggregate(self):
|
||||
"""Create sample MonthlyAggregate for repeater role testing."""
|
||||
daily_data = []
|
||||
|
||||
# Create 5 days of sample data
|
||||
for day in range(1, 6):
|
||||
daily_data.append(
|
||||
DailyAggregate(
|
||||
date=date(2024, 1, day),
|
||||
metrics={
|
||||
"bat": MetricStats(
|
||||
min_value=3600 + day * 10,
|
||||
min_time=datetime(2024, 1, day, 4, 0),
|
||||
max_value=3900 + day * 10,
|
||||
max_time=datetime(2024, 1, day, 14, 0),
|
||||
mean=3750 + day * 10,
|
||||
count=96,
|
||||
),
|
||||
"bat_pct": MetricStats(
|
||||
mean=65.0 + day * 2,
|
||||
count=96,
|
||||
),
|
||||
"last_rssi": MetricStats(
|
||||
mean=-85.0 - day,
|
||||
count=96,
|
||||
),
|
||||
"last_snr": MetricStats(
|
||||
mean=8.5 + day * 0.2,
|
||||
count=96,
|
||||
),
|
||||
"noise_floor": MetricStats(
|
||||
mean=-115.0,
|
||||
count=96,
|
||||
),
|
||||
"nb_recv": MetricStats(
|
||||
total=500 + day * 100,
|
||||
count=96,
|
||||
reboot_count=0,
|
||||
),
|
||||
"nb_sent": MetricStats(
|
||||
total=200 + day * 50,
|
||||
count=96,
|
||||
reboot_count=0,
|
||||
),
|
||||
"airtime": MetricStats(
|
||||
total=120 + day * 20,
|
||||
count=96,
|
||||
reboot_count=0,
|
||||
),
|
||||
},
|
||||
snapshot_count=96,
|
||||
)
|
||||
)
|
||||
|
||||
return MonthlyAggregate(
|
||||
year=2024,
|
||||
month=1,
|
||||
role="repeater",
|
||||
daily=daily_data,
|
||||
summary={
|
||||
"bat": MetricStats(
|
||||
min_value=3610,
|
||||
min_time=datetime(2024, 1, 1, 4, 0),
|
||||
max_value=3950,
|
||||
max_time=datetime(2024, 1, 5, 14, 0),
|
||||
mean=3780,
|
||||
count=480,
|
||||
),
|
||||
"bat_pct": MetricStats(
|
||||
mean=71.0,
|
||||
count=480,
|
||||
),
|
||||
"last_rssi": MetricStats(
|
||||
mean=-88.0,
|
||||
count=480,
|
||||
),
|
||||
"last_snr": MetricStats(
|
||||
mean=9.1,
|
||||
count=480,
|
||||
),
|
||||
"noise_floor": MetricStats(
|
||||
mean=-115.0,
|
||||
count=480,
|
||||
),
|
||||
"nb_recv": MetricStats(
|
||||
total=4000,
|
||||
count=480,
|
||||
reboot_count=0,
|
||||
),
|
||||
"nb_sent": MetricStats(
|
||||
total=1750,
|
||||
count=480,
|
||||
reboot_count=0,
|
||||
),
|
||||
"airtime": MetricStats(
|
||||
total=900,
|
||||
count=480,
|
||||
reboot_count=0,
|
||||
),
|
||||
},
|
||||
)
|
||||
|
||||
@pytest.fixture
|
||||
def companion_monthly_aggregate(self):
|
||||
"""Create sample MonthlyAggregate for companion role testing."""
|
||||
daily_data = []
|
||||
|
||||
# Create 5 days of sample data
|
||||
for day in range(1, 6):
|
||||
daily_data.append(
|
||||
DailyAggregate(
|
||||
date=date(2024, 1, day),
|
||||
metrics={
|
||||
"battery_mv": MetricStats(
|
||||
min_value=3700 + day * 10,
|
||||
min_time=datetime(2024, 1, day, 5, 0),
|
||||
max_value=4000 + day * 10,
|
||||
max_time=datetime(2024, 1, day, 12, 0),
|
||||
mean=3850 + day * 10,
|
||||
count=1440,
|
||||
),
|
||||
"bat_pct": MetricStats(
|
||||
mean=75.0 + day * 2,
|
||||
count=1440,
|
||||
),
|
||||
"contacts": MetricStats(
|
||||
mean=8 + day,
|
||||
count=1440,
|
||||
),
|
||||
"recv": MetricStats(
|
||||
total=1000 + day * 200,
|
||||
count=1440,
|
||||
reboot_count=0,
|
||||
),
|
||||
"sent": MetricStats(
|
||||
total=500 + day * 100,
|
||||
count=1440,
|
||||
reboot_count=0,
|
||||
),
|
||||
},
|
||||
snapshot_count=1440,
|
||||
)
|
||||
)
|
||||
|
||||
return MonthlyAggregate(
|
||||
year=2024,
|
||||
month=1,
|
||||
role="companion",
|
||||
daily=daily_data,
|
||||
summary={
|
||||
"battery_mv": MetricStats(
|
||||
min_value=3710,
|
||||
min_time=datetime(2024, 1, 1, 5, 0),
|
||||
max_value=4050,
|
||||
max_time=datetime(2024, 1, 5, 12, 0),
|
||||
mean=3880,
|
||||
count=7200,
|
||||
),
|
||||
"bat_pct": MetricStats(
|
||||
mean=81.0,
|
||||
count=7200,
|
||||
),
|
||||
"contacts": MetricStats(
|
||||
mean=11.0,
|
||||
count=7200,
|
||||
),
|
||||
"recv": MetricStats(
|
||||
total=8000,
|
||||
count=7200,
|
||||
reboot_count=0,
|
||||
),
|
||||
"sent": MetricStats(
|
||||
total=4000,
|
||||
count=7200,
|
||||
reboot_count=0,
|
||||
),
|
||||
},
|
||||
)
|
||||
|
||||
@pytest.fixture
|
||||
def repeater_yearly_aggregate(self):
|
||||
"""Create sample YearlyAggregate for repeater role testing."""
|
||||
monthly_data = []
|
||||
|
||||
# Create 3 months of sample data
|
||||
for month in range(1, 4):
|
||||
monthly_data.append(
|
||||
MonthlyAggregate(
|
||||
year=2024,
|
||||
month=month,
|
||||
role="repeater",
|
||||
daily=[], # Daily details not needed for yearly summary
|
||||
summary={
|
||||
"bat": MetricStats(
|
||||
min_value=3500 + month * 50,
|
||||
min_time=datetime(2024, month, 15, 4, 0),
|
||||
max_value=3950 + month * 20,
|
||||
max_time=datetime(2024, month, 20, 14, 0),
|
||||
mean=3700 + month * 30,
|
||||
count=2976, # ~31 days * 96 readings
|
||||
),
|
||||
"bat_pct": MetricStats(
|
||||
mean=60.0 + month * 5,
|
||||
count=2976,
|
||||
),
|
||||
"last_rssi": MetricStats(
|
||||
mean=-90.0 + month,
|
||||
count=2976,
|
||||
),
|
||||
"last_snr": MetricStats(
|
||||
mean=7.5 + month * 0.5,
|
||||
count=2976,
|
||||
),
|
||||
"nb_recv": MetricStats(
|
||||
total=30000 + month * 5000,
|
||||
count=2976,
|
||||
reboot_count=0,
|
||||
),
|
||||
"nb_sent": MetricStats(
|
||||
total=15000 + month * 2500,
|
||||
count=2976,
|
||||
reboot_count=0,
|
||||
),
|
||||
},
|
||||
)
|
||||
)
|
||||
|
||||
return YearlyAggregate(
|
||||
year=2024,
|
||||
role="repeater",
|
||||
monthly=monthly_data,
|
||||
summary={
|
||||
"bat": MetricStats(
|
||||
min_value=3550,
|
||||
min_time=datetime(2024, 1, 15, 4, 0),
|
||||
max_value=4010,
|
||||
max_time=datetime(2024, 3, 20, 14, 0),
|
||||
mean=3760,
|
||||
count=8928,
|
||||
),
|
||||
"bat_pct": MetricStats(
|
||||
mean=70.0,
|
||||
count=8928,
|
||||
),
|
||||
"last_rssi": MetricStats(
|
||||
mean=-88.0,
|
||||
count=8928,
|
||||
),
|
||||
"last_snr": MetricStats(
|
||||
mean=8.5,
|
||||
count=8928,
|
||||
),
|
||||
"nb_recv": MetricStats(
|
||||
total=120000,
|
||||
count=8928,
|
||||
reboot_count=0,
|
||||
),
|
||||
"nb_sent": MetricStats(
|
||||
total=60000,
|
||||
count=8928,
|
||||
reboot_count=0,
|
||||
),
|
||||
},
|
||||
)
|
||||
|
||||
@pytest.fixture
|
||||
def companion_yearly_aggregate(self):
|
||||
"""Create sample YearlyAggregate for companion role testing."""
|
||||
monthly_data = []
|
||||
|
||||
# Create 3 months of sample data
|
||||
for month in range(1, 4):
|
||||
monthly_data.append(
|
||||
MonthlyAggregate(
|
||||
year=2024,
|
||||
month=month,
|
||||
role="companion",
|
||||
daily=[],
|
||||
summary={
|
||||
"battery_mv": MetricStats(
|
||||
min_value=3600 + month * 30,
|
||||
min_time=datetime(2024, month, 10, 5, 0),
|
||||
max_value=4100 + month * 20,
|
||||
max_time=datetime(2024, month, 25, 12, 0),
|
||||
mean=3850 + month * 25,
|
||||
count=44640, # ~31 days * 1440 readings
|
||||
),
|
||||
"bat_pct": MetricStats(
|
||||
mean=70.0 + month * 3,
|
||||
count=44640,
|
||||
),
|
||||
"contacts": MetricStats(
|
||||
mean=10 + month,
|
||||
count=44640,
|
||||
),
|
||||
"recv": MetricStats(
|
||||
total=50000 + month * 10000,
|
||||
count=44640,
|
||||
reboot_count=0,
|
||||
),
|
||||
"sent": MetricStats(
|
||||
total=25000 + month * 5000,
|
||||
count=44640,
|
||||
reboot_count=0,
|
||||
),
|
||||
},
|
||||
)
|
||||
)
|
||||
|
||||
return YearlyAggregate(
|
||||
year=2024,
|
||||
role="companion",
|
||||
monthly=monthly_data,
|
||||
summary={
|
||||
"battery_mv": MetricStats(
|
||||
min_value=3630,
|
||||
min_time=datetime(2024, 1, 10, 5, 0),
|
||||
max_value=4160,
|
||||
max_time=datetime(2024, 3, 25, 12, 0),
|
||||
mean=3900,
|
||||
count=133920,
|
||||
),
|
||||
"bat_pct": MetricStats(
|
||||
mean=76.0,
|
||||
count=133920,
|
||||
),
|
||||
"contacts": MetricStats(
|
||||
mean=12.0,
|
||||
count=133920,
|
||||
),
|
||||
"recv": MetricStats(
|
||||
total=210000,
|
||||
count=133920,
|
||||
reboot_count=0,
|
||||
),
|
||||
"sent": MetricStats(
|
||||
total=105000,
|
||||
count=133920,
|
||||
reboot_count=0,
|
||||
),
|
||||
},
|
||||
)
|
||||
|
||||
def _assert_snapshot_match(
|
||||
self,
|
||||
actual: str,
|
||||
snapshot_path: Path,
|
||||
update: bool,
|
||||
) -> None:
|
||||
"""Compare TXT report against snapshot, with optional update mode."""
|
||||
if update:
|
||||
# Update mode: write actual to snapshot
|
||||
snapshot_path.parent.mkdir(parents=True, exist_ok=True)
|
||||
snapshot_path.write_text(actual, encoding="utf-8")
|
||||
pytest.skip(f"Snapshot updated: {snapshot_path}")
|
||||
else:
|
||||
# Compare mode
|
||||
if not snapshot_path.exists():
|
||||
# Create new snapshot if it doesn't exist
|
||||
snapshot_path.parent.mkdir(parents=True, exist_ok=True)
|
||||
snapshot_path.write_text(actual, encoding="utf-8")
|
||||
pytest.fail(
|
||||
f"Snapshot created: {snapshot_path}\n"
|
||||
f"Run tests again to verify, or set UPDATE_SNAPSHOTS=1 to regenerate."
|
||||
)
|
||||
|
||||
expected = snapshot_path.read_text(encoding="utf-8")
|
||||
|
||||
if actual != expected:
|
||||
# Show differences for debugging
|
||||
actual_lines = actual.splitlines()
|
||||
expected_lines = expected.splitlines()
|
||||
|
||||
diff_info = []
|
||||
for i, (a, e) in enumerate(zip(actual_lines, expected_lines, strict=False), 1):
|
||||
if a != e:
|
||||
diff_info.append(f"Line {i} differs:")
|
||||
diff_info.append(f" Expected: '{e}'")
|
||||
diff_info.append(f" Actual: '{a}'")
|
||||
if len(diff_info) > 15:
|
||||
diff_info.append(" (more differences omitted)")
|
||||
break
|
||||
|
||||
if len(actual_lines) != len(expected_lines):
|
||||
diff_info.append(
|
||||
f"Line count: expected {len(expected_lines)}, got {len(actual_lines)}"
|
||||
)
|
||||
|
||||
pytest.fail(
|
||||
f"Snapshot mismatch: {snapshot_path}\n"
|
||||
f"Set UPDATE_SNAPSHOTS=1 to regenerate.\n\n"
|
||||
+ "\n".join(diff_info)
|
||||
)
|
||||
|
||||
def test_monthly_report_repeater(
|
||||
self,
|
||||
repeater_monthly_aggregate,
|
||||
sample_location,
|
||||
txt_snapshots_dir,
|
||||
update_snapshots,
|
||||
):
|
||||
"""Monthly repeater report matches snapshot."""
|
||||
result = format_monthly_txt(
|
||||
repeater_monthly_aggregate,
|
||||
"Test Repeater",
|
||||
sample_location,
|
||||
)
|
||||
|
||||
snapshot_path = txt_snapshots_dir / "monthly_report_repeater.txt"
|
||||
self._assert_snapshot_match(result, snapshot_path, update_snapshots)
|
||||
|
||||
def test_monthly_report_companion(
|
||||
self,
|
||||
companion_monthly_aggregate,
|
||||
sample_location,
|
||||
txt_snapshots_dir,
|
||||
update_snapshots,
|
||||
):
|
||||
"""Monthly companion report matches snapshot."""
|
||||
result = format_monthly_txt(
|
||||
companion_monthly_aggregate,
|
||||
"Test Companion",
|
||||
sample_location,
|
||||
)
|
||||
|
||||
snapshot_path = txt_snapshots_dir / "monthly_report_companion.txt"
|
||||
self._assert_snapshot_match(result, snapshot_path, update_snapshots)
|
||||
|
||||
    def test_yearly_report_repeater(
        self,
        repeater_yearly_aggregate,
        sample_location,
        txt_snapshots_dir,
        update_snapshots,
    ):
        """Yearly repeater report matches snapshot."""
        result = format_yearly_txt(
            repeater_yearly_aggregate,
            "Test Repeater",
            sample_location,
        )

        snapshot_path = txt_snapshots_dir / "yearly_report_repeater.txt"
        self._assert_snapshot_match(result, snapshot_path, update_snapshots)

    def test_yearly_report_companion(
        self,
        companion_yearly_aggregate,
        sample_location,
        txt_snapshots_dir,
        update_snapshots,
    ):
        """Yearly companion report matches snapshot."""
        result = format_yearly_txt(
            companion_yearly_aggregate,
            "Test Companion",
            sample_location,
        )

        snapshot_path = txt_snapshots_dir / "yearly_report_companion.txt"
        self._assert_snapshot_match(result, snapshot_path, update_snapshots)

    def test_empty_monthly_report(
        self,
        sample_location,
        txt_snapshots_dir,
        update_snapshots,
    ):
        """Empty monthly report matches snapshot."""
        empty_aggregate = MonthlyAggregate(
            year=2024,
            month=1,
            role="repeater",
            daily=[],
            summary={},
        )

        result = format_monthly_txt(
            empty_aggregate,
            "Test Repeater",
            sample_location,
        )

        snapshot_path = txt_snapshots_dir / "empty_monthly_report.txt"
        self._assert_snapshot_match(result, snapshot_path, update_snapshots)

    def test_empty_yearly_report(
        self,
        sample_location,
        txt_snapshots_dir,
        update_snapshots,
    ):
        """Empty yearly report matches snapshot."""
        empty_aggregate = YearlyAggregate(
            year=2024,
            role="repeater",
            monthly=[],
            summary={},
        )

        result = format_yearly_txt(
            empty_aggregate,
            "Test Repeater",
            sample_location,
        )

        snapshot_path = txt_snapshots_dir / "empty_yearly_report.txt"
        self._assert_snapshot_match(result, snapshot_path, update_snapshots)
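The `_assert_snapshot_match` helper these tests call is defined elsewhere in the suite and not shown in this hunk. As a rough guide to what such a helper does, here is a minimal sketch; the name, signature, and write-on-missing behaviour are assumptions inferred from how the tests use it, not the repo's actual implementation:

```python
from pathlib import Path


def assert_snapshot_match(result: str, snapshot_path: Path, update_snapshots: bool) -> None:
    """Compare rendered output against a stored snapshot file (sketch only).

    When update_snapshots is set, or no snapshot exists yet, write the
    current output as the new snapshot instead of asserting.
    """
    if update_snapshots or not snapshot_path.exists():
        snapshot_path.parent.mkdir(parents=True, exist_ok=True)
        snapshot_path.write_text(result)
        return
    expected = snapshot_path.read_text()
    assert result == expected, f"output does not match snapshot {snapshot_path}"
```

Under this pattern, `--update-snapshots`-style flags regenerate the stored files, and a normal run fails loudly on any drift.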
363  tests/reports/test_table_builders.py  Normal file
@@ -0,0 +1,363 @@
"""Tests for report table building functions."""

from datetime import date

import pytest

from meshmon.html import (
    build_monthly_table_data,
    build_yearly_table_data,
)
from meshmon.reports import (
    DailyAggregate,
    MetricStats,
    MonthlyAggregate,
    YearlyAggregate,
)


class TestBuildMonthlyTableData:
    """Tests for build_monthly_table_data function."""

    @pytest.fixture
    def sample_monthly_aggregate(self):
        """Create sample MonthlyAggregate for testing."""
        daily_data = [
            DailyAggregate(
                date=date(2024, 1, 1),
                metrics={
                    "bat": MetricStats(min_value=3700, max_value=3900, mean=3800, count=24),
                    "last_rssi": MetricStats(min_value=-95, max_value=-80, mean=-87, count=24),
                    "nb_recv": MetricStats(total=720, count=24),
                },
            ),
            DailyAggregate(
                date=date(2024, 1, 2),
                metrics={
                    "bat": MetricStats(min_value=3600, max_value=3850, mean=3750, count=24),
                    "last_rssi": MetricStats(min_value=-93, max_value=-78, mean=-85, count=24),
                    "nb_recv": MetricStats(total=840, count=24),
                },
            ),
        ]

        return MonthlyAggregate(
            year=2024,
            month=1,
            role="repeater",
            daily=daily_data,
            summary={
                "bat": MetricStats(min_value=3600, max_value=3900, mean=3775, count=48),
                "last_rssi": MetricStats(min_value=-95, max_value=-78, mean=-86, count=48),
                "nb_recv": MetricStats(total=1560, count=48),
            },
        )

    def test_returns_tuple_of_three_lists(self, sample_monthly_aggregate):
        """Returns tuple of (column_groups, headers, rows)."""
        result = build_monthly_table_data(sample_monthly_aggregate, "repeater")

        assert isinstance(result, tuple)
        assert len(result) == 3

        column_groups, headers, rows = result
        assert isinstance(column_groups, list)
        assert isinstance(headers, list)
        assert isinstance(rows, list)

    def test_rows_match_daily_count(self, sample_monthly_aggregate):
        """Number of rows matches number of daily aggregates (plus summary)."""
        _, _, rows = build_monthly_table_data(sample_monthly_aggregate, "repeater")

        # Should have 2 data rows + 1 summary row = 3 total
        data_rows = [r for r in rows if not r.get("is_summary", False)]
        assert len(data_rows) == 2
        assert len(rows) == 3
        assert rows[-1]["is_summary"] is True

    def test_headers_have_labels(self, sample_monthly_aggregate):
        """Headers include label information."""
        _, headers, _ = build_monthly_table_data(sample_monthly_aggregate, "repeater")

        expected_labels = [
            "Day",
            "Avg V",
            "Avg %",
            "Min V",
            "Max V",
            "RSSI",
            "SNR",
            "Noise",
            "RX",
            "TX",
            "Secs",
        ]
        assert [header["label"] for header in headers] == expected_labels

    def test_rows_have_date(self, sample_monthly_aggregate):
        """Each data row includes date information via cells."""
        _, _, rows = build_monthly_table_data(sample_monthly_aggregate, "repeater")

        data_rows = [r for r in rows if not r.get("is_summary", False)]
        for row in data_rows:
            assert isinstance(row, dict)
            # Row has cells with date value
            assert "cells" in row
            # First cell should be the day
            assert len(row["cells"]) > 0
        assert [row["cells"][0]["value"] for row in data_rows] == ["01", "02"]

    def test_daily_row_values(self, sample_monthly_aggregate):
        """Daily rows include formatted values and placeholders."""
        _, _, rows = build_monthly_table_data(sample_monthly_aggregate, "repeater")
        first_row = next(r for r in rows if not r.get("is_summary", False))
        cells = first_row["cells"]

        assert cells[0]["value"] == "01"
        assert cells[1]["value"] == "3.80"
        assert cells[2]["value"] == "-"
        assert cells[5]["value"] == "-87"
        assert cells[6]["value"] == "-"
        assert cells[8]["value"] == "720"

    def test_handles_empty_aggregate(self):
        """Handles aggregate with no daily data."""
        agg = MonthlyAggregate(
            year=2024,
            month=1,
            role="repeater",
            daily=[],
            summary={},
        )

        result = build_monthly_table_data(agg, "repeater")

        column_groups, headers, rows = result
        assert isinstance(rows, list)
        # Empty aggregate should have only summary row or no data rows
        data_rows = [r for r in rows if not r.get("is_summary", False)]
        assert len(data_rows) == 0


class TestBuildYearlyTableData:
    """Tests for build_yearly_table_data function."""

    @pytest.fixture
    def sample_yearly_aggregate(self):
        """Create sample YearlyAggregate for testing."""
        monthly_data = [
            MonthlyAggregate(
                year=2024,
                month=1,
                role="repeater",
                daily=[],
                summary={"bat": MetricStats(min_value=3600, max_value=3900, mean=3750, count=720)},
            ),
            MonthlyAggregate(
                year=2024,
                month=2,
                role="repeater",
                daily=[],
                summary={"bat": MetricStats(min_value=3500, max_value=3850, mean=3700, count=672)},
            ),
        ]

        return YearlyAggregate(
            year=2024,
            role="repeater",
            monthly=monthly_data,
            summary={"bat": MetricStats(min_value=3500, max_value=3900, mean=3725, count=1392)},
        )

    def test_returns_tuple_of_three_lists(self, sample_yearly_aggregate):
        """Returns tuple of (column_groups, headers, rows)."""
        result = build_yearly_table_data(sample_yearly_aggregate, "repeater")

        assert isinstance(result, tuple)
        assert len(result) == 3

        column_groups, headers, rows = result
        assert isinstance(column_groups, list)
        assert isinstance(headers, list)
        assert isinstance(rows, list)

    def test_rows_match_monthly_count(self, sample_yearly_aggregate):
        """Number of rows matches number of monthly data (plus summary)."""
        _, _, rows = build_yearly_table_data(sample_yearly_aggregate, "repeater")

        # Should have 2 data rows + 1 summary row
        data_rows = [r for r in rows if not r.get("is_summary", False)]
        assert len(data_rows) == 2
        assert len(rows) == 3
        assert rows[-1]["is_summary"] is True

    def test_headers_have_labels(self, sample_yearly_aggregate):
        """Headers include label information."""
        _, headers, _ = build_yearly_table_data(sample_yearly_aggregate, "repeater")

        expected_labels = [
            "Year",
            "Mo",
            "Volt",
            "%",
            "High",
            "Low",
            "RSSI",
            "SNR",
            "RX",
            "TX",
        ]
        assert [header["label"] for header in headers] == expected_labels

    def test_rows_have_month(self, sample_yearly_aggregate):
        """Each row includes month information."""
        _, _, rows = build_yearly_table_data(sample_yearly_aggregate, "repeater")

        data_rows = [r for r in rows if not r.get("is_summary", False)]
        months = [row["cells"][1]["value"] for row in data_rows]
        assert months == ["01", "02"]

    def test_yearly_row_values(self, sample_yearly_aggregate):
        """Yearly rows include formatted values and placeholders."""
        _, _, rows = build_yearly_table_data(sample_yearly_aggregate, "repeater")
        first_row = next(r for r in rows if not r.get("is_summary", False))
        cells = first_row["cells"]

        assert cells[0]["value"] == "2024"
        assert cells[1]["value"] == "01"
        assert cells[2]["value"] == "3.75"
        assert cells[3]["value"] == "-"

    def test_handles_empty_aggregate(self):
        """Handles aggregate with no monthly data."""
        agg = YearlyAggregate(
            year=2024,
            role="repeater",
            monthly=[],
            summary={},
        )

        result = build_yearly_table_data(agg, "repeater")

        column_groups, headers, rows = result
        assert isinstance(rows, list)
        # Empty aggregate should have only summary row or no data rows
        data_rows = [r for r in rows if not r.get("is_summary", False)]
        assert len(data_rows) == 0


class TestTableColumnGroups:
    """Tests for column grouping in tables."""

    @pytest.fixture
    def monthly_aggregate_with_data(self):
        """Aggregate with data for column group testing."""
        daily = DailyAggregate(
            date=date(2024, 1, 1),
            metrics={
                "bat": MetricStats(min_value=3700, max_value=3900, mean=3800, count=24),
                "last_rssi": MetricStats(min_value=-95, max_value=-80, mean=-87, count=24),
                "nb_recv": MetricStats(total=720, count=24),
            },
        )

        return MonthlyAggregate(
            year=2024,
            month=1,
            role="repeater",
            daily=[daily],
            summary={},
        )

    def test_column_groups_structure(self, monthly_aggregate_with_data):
        """Column groups have expected structure."""
        column_groups, _, _ = build_monthly_table_data(monthly_aggregate_with_data, "repeater")

        assert column_groups == [
            {"label": "", "colspan": 1},
            {"label": "Battery", "colspan": 4},
            {"label": "Signal", "colspan": 3},
            {"label": "Packets", "colspan": 2},
            {"label": "Air", "colspan": 1},
        ]

    def test_column_groups_span_matches_headers(self, monthly_aggregate_with_data):
        """Column group spans should add up to header count."""
        column_groups, headers, _ = build_monthly_table_data(monthly_aggregate_with_data, "repeater")

        total_span = sum(
            g.get("span", g.get("colspan", len(g.get("columns", []))))
            for g in column_groups
        )

        assert total_span == len(headers)


class TestTableRolesHandling:
    """Tests for different role handling in tables."""

    @pytest.fixture
    def companion_aggregate(self):
        """Aggregate for companion role."""
        daily = DailyAggregate(
            date=date(2024, 1, 1),
            metrics={
                "battery_mv": MetricStats(min_value=3700, max_value=3900, mean=3800, count=24),
                "contacts": MetricStats(min_value=5, max_value=10, mean=7, count=24),
                "recv": MetricStats(total=720, count=24),
            },
        )

        return MonthlyAggregate(
            year=2024,
            month=1,
            role="companion",
            daily=[daily],
            summary={},
        )

    def test_companion_role_works(self, companion_aggregate):
        """Table building works for companion role."""
        result = build_monthly_table_data(companion_aggregate, "companion")

        column_groups, headers, rows = result
        assert isinstance(rows, list)
        # 1 data row + summary row
        data_rows = [r for r in rows if not r.get("is_summary", False)]
        assert len(data_rows) == 1
        assert [header["label"] for header in headers] == [
            "Day",
            "Avg V",
            "Avg %",
            "Min V",
            "Max V",
            "Contacts",
            "RX",
            "TX",
        ]

    def test_different_roles_different_columns(self, companion_aggregate):
        """Different roles may have different column structures."""
        # Create a repeater aggregate
        repeater_daily = DailyAggregate(
            date=date(2024, 1, 1),
            metrics={
                "bat": MetricStats(min_value=3700, max_value=3900, mean=3800, count=24),
            },
        )

        repeater_agg = MonthlyAggregate(
            year=2024,
            month=1,
            role="repeater",
            daily=[repeater_daily],
            summary={},
        )

        companion_result = build_monthly_table_data(companion_aggregate, "companion")
        repeater_result = build_monthly_table_data(repeater_agg, "repeater")

        # Both should return valid data
        assert len(companion_result) == 3
        assert len(repeater_result) == 3
        assert [h["label"] for h in companion_result[1]] != [h["label"] for h in repeater_result[1]]
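Read together, the assertions in these tests pin down the shape the builders return: a `(column_groups, headers, rows)` tuple of dicts. A hypothetical minimal value satisfying those assertions, shown purely to make the implied structure explicit (the real values come from `build_monthly_table_data` in `meshmon.html`):

```python
# Hypothetical data illustrating the tuple shape the tests assert on.
column_groups = [
    {"label": "", "colspan": 1},
    {"label": "Battery", "colspan": 4},
    {"label": "Signal", "colspan": 3},
    {"label": "Packets", "colspan": 2},
    {"label": "Air", "colspan": 1},
]
headers = [{"label": lbl} for lbl in [
    "Day", "Avg V", "Avg %", "Min V", "Max V",
    "RSSI", "SNR", "Noise", "RX", "TX", "Secs",
]]
rows = [
    {"is_summary": False, "cells": [{"value": "01"}, {"value": "3.80"}]},
    {"is_summary": True, "cells": [{"value": "Sum"}]},
]

# The invariant checked by test_column_groups_span_matches_headers:
assert sum(g["colspan"] for g in column_groups) == len(headers)
```

Each row dict carries an `is_summary` flag plus a `cells` list whose entries hold pre-formatted string `value`s, with `"-"` as the placeholder for metrics absent from the aggregate.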
1  tests/retry/__init__.py  Normal file
@@ -0,0 +1 @@
"""Tests for retry logic and circuit breaker."""
67  tests/retry/conftest.py  Normal file
@@ -0,0 +1,67 @@
"""Fixtures for retry and circuit breaker tests."""

import json

import pytest

BASE_TS = 1704067200


@pytest.fixture
def circuit_state_file(tmp_path):
    """Path for circuit breaker state file."""
    return tmp_path / "circuit.json"


@pytest.fixture
def closed_circuit(circuit_state_file):
    """Circuit breaker state file with closed circuit (no failures)."""
    state = {
        "consecutive_failures": 0,
        "cooldown_until": 0,
        "last_success": BASE_TS,
    }
    circuit_state_file.write_text(json.dumps(state))
    return circuit_state_file


@pytest.fixture
def open_circuit(circuit_state_file):
    """Circuit breaker state file with open circuit (in cooldown)."""
    state = {
        "consecutive_failures": 10,
        "cooldown_until": BASE_TS + 3600,  # 1 hour from BASE_TS
        "last_success": BASE_TS - 7200,  # 2 hours before BASE_TS
    }
    circuit_state_file.write_text(json.dumps(state))
    return circuit_state_file


@pytest.fixture
def expired_cooldown_circuit(circuit_state_file):
    """Circuit breaker state file with expired cooldown."""
    state = {
        "consecutive_failures": 10,
        "cooldown_until": BASE_TS - 100,  # Expired 100s before BASE_TS
        "last_success": BASE_TS - 7200,
    }
    circuit_state_file.write_text(json.dumps(state))
    return circuit_state_file


@pytest.fixture
def corrupted_state_file(circuit_state_file):
    """Circuit breaker state file with corrupted JSON."""
    circuit_state_file.write_text("not valid json {{{")
    return circuit_state_file


@pytest.fixture
def partial_state_file(circuit_state_file):
    """Circuit breaker state file with missing keys."""
    state = {
        "consecutive_failures": 5,
        # Missing cooldown_until and last_success
    }
    circuit_state_file.write_text(json.dumps(state))
    return circuit_state_file
325  tests/retry/test_circuit_breaker.py  Normal file
@@ -0,0 +1,325 @@
"""Tests for CircuitBreaker class."""

import json

import pytest

from meshmon.retry import CircuitBreaker

BASE_TS = 1704067200


@pytest.fixture
def time_controller(monkeypatch):
    """Control time.time() within meshmon.retry."""
    state = {"now": BASE_TS}

    def _time():
        return state["now"]

    monkeypatch.setattr("meshmon.retry.time.time", _time)
    return state


class TestCircuitBreakerInit:
    """Tests for CircuitBreaker initialization."""

    def test_creates_with_fresh_state(self, circuit_state_file):
        """Fresh circuit breaker has zero failures and no cooldown."""
        cb = CircuitBreaker(circuit_state_file)

        assert cb.consecutive_failures == 0
        assert cb.cooldown_until == 0
        assert cb.last_success == 0

    def test_loads_existing_state(self, closed_circuit):
        """Loads state from existing file."""
        cb = CircuitBreaker(closed_circuit)

        assert cb.consecutive_failures == 0
        assert cb.cooldown_until == 0
        assert cb.last_success > 0

    def test_loads_open_circuit_state(self, open_circuit, time_controller):
        """Loads open circuit state correctly."""
        cb = CircuitBreaker(open_circuit)

        assert cb.consecutive_failures == 10
        assert cb.cooldown_until == BASE_TS + 3600
        assert cb.is_open() is True

    def test_handles_corrupted_file(self, corrupted_state_file):
        """Corrupted JSON file loads defaults without crashing."""
        cb = CircuitBreaker(corrupted_state_file)

        # Should use defaults
        assert cb.consecutive_failures == 0
        assert cb.cooldown_until == 0
        assert cb.last_success == 0

    def test_handles_partial_state(self, partial_state_file):
        """Missing keys in state file use defaults."""
        cb = CircuitBreaker(partial_state_file)

        assert cb.consecutive_failures == 5  # Present in file
        assert cb.cooldown_until == 0  # Default
        assert cb.last_success == 0  # Default

    def test_handles_nonexistent_file(self, circuit_state_file):
        """Nonexistent state file uses defaults."""
        assert not circuit_state_file.exists()

        cb = CircuitBreaker(circuit_state_file)

        assert cb.consecutive_failures == 0

    def test_stores_state_file_path(self, circuit_state_file):
        """Stores the state file path."""
        cb = CircuitBreaker(circuit_state_file)

        assert cb.state_file == circuit_state_file


class TestCircuitBreakerIsOpen:
    """Tests for is_open method."""

    def test_closed_circuit_returns_false(self, closed_circuit):
        """Closed circuit (no cooldown) returns False."""
        cb = CircuitBreaker(closed_circuit)

        assert cb.is_open() is False

    def test_open_circuit_returns_true(self, open_circuit, time_controller):
        """Open circuit (in cooldown) returns True."""
        cb = CircuitBreaker(open_circuit)

        assert cb.is_open() is True

    def test_expired_cooldown_returns_false(self, expired_cooldown_circuit, time_controller):
        """Expired cooldown returns False (circuit closes)."""
        cb = CircuitBreaker(expired_cooldown_circuit)

        assert cb.is_open() is False

    def test_cooldown_expiry(self, circuit_state_file, time_controller):
        """Circuit closes when cooldown expires."""
        # Set cooldown to 10 seconds from now
        state = {
            "consecutive_failures": 10,
            "cooldown_until": BASE_TS + 10,
            "last_success": 0,
        }
        circuit_state_file.write_text(json.dumps(state))

        cb = CircuitBreaker(circuit_state_file)
        assert cb.is_open() is True

        time_controller["now"] = BASE_TS + 11
        assert cb.is_open() is False


class TestCooldownRemaining:
    """Tests for cooldown_remaining method."""

    def test_returns_zero_when_closed(self, closed_circuit):
        """Returns 0 when circuit is closed."""
        cb = CircuitBreaker(closed_circuit)

        assert cb.cooldown_remaining() == 0

    def test_returns_seconds_when_open(self, circuit_state_file, time_controller):
        """Returns remaining seconds when in cooldown."""
        state = {
            "consecutive_failures": 10,
            "cooldown_until": BASE_TS + 100,
            "last_success": 0,
        }
        circuit_state_file.write_text(json.dumps(state))

        cb = CircuitBreaker(circuit_state_file)
        remaining = cb.cooldown_remaining()

        assert remaining == 100

    def test_returns_zero_when_expired(self, expired_cooldown_circuit):
        """Returns 0 when cooldown has expired."""
        cb = CircuitBreaker(expired_cooldown_circuit)

        assert cb.cooldown_remaining() == 0

    def test_returns_integer(self, open_circuit, time_controller):
        """Returns an integer, not float."""
        cb = CircuitBreaker(open_circuit)

        assert isinstance(cb.cooldown_remaining(), int)


class TestRecordSuccess:
    """Tests for record_success method."""

    def test_resets_failure_count(self, circuit_state_file):
        """Success resets consecutive failure count to 0."""
        state = {
            "consecutive_failures": 5,
            "cooldown_until": 0,
            "last_success": 0,
        }
        circuit_state_file.write_text(json.dumps(state))

        cb = CircuitBreaker(circuit_state_file)
        cb.record_success()

        assert cb.consecutive_failures == 0

    def test_updates_last_success(self, circuit_state_file, time_controller):
        """Success updates last_success timestamp."""
        cb = CircuitBreaker(circuit_state_file)
        time_controller["now"] = BASE_TS + 5
        cb.record_success()

        assert cb.last_success == BASE_TS + 5

    def test_persists_to_file(self, circuit_state_file):
        """Success state is persisted to file."""
        cb = CircuitBreaker(circuit_state_file)
        cb.consecutive_failures = 5
        cb.record_success()

        # Read file directly
        data = json.loads(circuit_state_file.read_text())
        assert data["consecutive_failures"] == 0
        assert data["last_success"] > 0

    def test_creates_parent_dirs(self, tmp_path):
        """Creates parent directories if they don't exist."""
        nested_path = tmp_path / "deep" / "nested" / "circuit.json"
        cb = CircuitBreaker(nested_path)
        cb.record_success()

        assert nested_path.exists()


class TestRecordFailure:
    """Tests for record_failure method."""

    def test_increments_failure_count(self, circuit_state_file):
        """Failure increments consecutive failure count."""
        cb = CircuitBreaker(circuit_state_file)
        cb.record_failure(max_failures=10, cooldown_s=3600)

        assert cb.consecutive_failures == 1

    def test_opens_circuit_at_threshold(self, circuit_state_file, time_controller):
        """Circuit opens when failures reach threshold."""
        cb = CircuitBreaker(circuit_state_file)

        # Record failures up to threshold
        for _ in range(5):
            cb.record_failure(max_failures=5, cooldown_s=3600)

        assert cb.is_open() is True
        assert cb.cooldown_until == BASE_TS + 3600

    def test_does_not_open_before_threshold(self, circuit_state_file, time_controller):
        """Circuit stays closed before reaching threshold."""
        cb = CircuitBreaker(circuit_state_file)

        for _ in range(4):
            cb.record_failure(max_failures=5, cooldown_s=3600)

        assert cb.is_open() is False

    def test_cooldown_duration(self, circuit_state_file, time_controller):
        """Cooldown is set to specified duration."""
        cb = CircuitBreaker(circuit_state_file)

        for _ in range(5):
            cb.record_failure(max_failures=5, cooldown_s=100)

        # Cooldown should be ~100 seconds from now
        assert cb.cooldown_until == BASE_TS + 100

    def test_persists_to_file(self, circuit_state_file):
        """Failure state is persisted to file."""
        cb = CircuitBreaker(circuit_state_file)
        cb.record_failure(max_failures=10, cooldown_s=3600)

        data = json.loads(circuit_state_file.read_text())
        assert data["consecutive_failures"] == 1


class TestToDict:
    """Tests for to_dict method."""

    def test_includes_all_fields(self, closed_circuit):
        """Dict includes all state fields."""
        cb = CircuitBreaker(closed_circuit)
        d = cb.to_dict()

        assert "consecutive_failures" in d
        assert "cooldown_until" in d
        assert "last_success" in d
        assert "is_open" in d
        assert "cooldown_remaining_s" in d

    def test_is_open_reflects_state(self, open_circuit, time_controller):
        """is_open in dict reflects actual circuit state."""
        cb = CircuitBreaker(open_circuit)
        d = cb.to_dict()

        assert d["is_open"] is True

    def test_cooldown_remaining_reflects_state(self, open_circuit, time_controller):
        """cooldown_remaining_s reflects actual remaining time."""
        cb = CircuitBreaker(open_circuit)
        d = cb.to_dict()

        assert d["cooldown_remaining_s"] > 0

    def test_closed_circuit_dict(self, closed_circuit):
        """Closed circuit has expected dict values."""
        cb = CircuitBreaker(closed_circuit)
        d = cb.to_dict()

        assert d["consecutive_failures"] == 0
        assert d["is_open"] is False
        assert d["cooldown_remaining_s"] == 0


class TestStatePersistence:
    """Tests for state persistence across instances."""

    def test_state_survives_reload(self, circuit_state_file):
        """State persists across CircuitBreaker instances."""
        cb1 = CircuitBreaker(circuit_state_file)
        cb1.record_failure(max_failures=10, cooldown_s=3600)
        cb1.record_failure(max_failures=10, cooldown_s=3600)
        cb1.record_failure(max_failures=10, cooldown_s=3600)

        # Create new instance
        cb2 = CircuitBreaker(circuit_state_file)

        assert cb2.consecutive_failures == 3

    def test_success_resets_across_reload(self, circuit_state_file):
        """Success reset persists across instances."""
        cb1 = CircuitBreaker(circuit_state_file)
        for _ in range(5):
            cb1.record_failure(max_failures=10, cooldown_s=3600)

        cb1.record_success()

        cb2 = CircuitBreaker(circuit_state_file)
        assert cb2.consecutive_failures == 0

    def test_open_state_survives_reload(self, circuit_state_file, time_controller):
        """Open circuit state persists across instances."""
        cb1 = CircuitBreaker(circuit_state_file)
        for _ in range(10):
            cb1.record_failure(max_failures=10, cooldown_s=3600)

        assert cb1.is_open() is True

        cb2 = CircuitBreaker(circuit_state_file)
        assert cb2.is_open() is True
        assert cb2.consecutive_failures == 10
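Taken together, these tests specify the CircuitBreaker contract: count consecutive failures, open for a cooldown window once a threshold is reached, tolerate missing or corrupted state files, and persist everything as JSON. A minimal sketch that would satisfy that contract; the real class in `meshmon.retry` may differ in detail, and the name `CircuitBreakerSketch` is chosen here to make that distinction explicit:

```python
import json
import time
from pathlib import Path


class CircuitBreakerSketch:
    """Minimal file-backed circuit breaker (illustrative sketch only)."""

    def __init__(self, state_file) -> None:
        self.state_file = Path(state_file)
        state = {}
        try:
            state = json.loads(self.state_file.read_text())
        except (OSError, ValueError):
            pass  # missing or corrupted file: fall back to defaults
        self.consecutive_failures = state.get("consecutive_failures", 0)
        self.cooldown_until = state.get("cooldown_until", 0)
        self.last_success = state.get("last_success", 0)

    def is_open(self) -> bool:
        # Open means "still inside the cooldown window".
        return time.time() < self.cooldown_until

    def cooldown_remaining(self) -> int:
        return max(0, int(self.cooldown_until - time.time()))

    def record_success(self) -> None:
        self.consecutive_failures = 0
        self.last_success = int(time.time())
        self._save()

    def record_failure(self, max_failures: int, cooldown_s: int) -> None:
        self.consecutive_failures += 1
        if self.consecutive_failures >= max_failures:
            self.cooldown_until = int(time.time()) + cooldown_s
        self._save()

    def to_dict(self) -> dict:
        return {
            "consecutive_failures": self.consecutive_failures,
            "cooldown_until": self.cooldown_until,
            "last_success": self.last_success,
            "is_open": self.is_open(),
            "cooldown_remaining_s": self.cooldown_remaining(),
        }

    def _save(self) -> None:
        # Persist after every state change so a fresh instance (e.g. the
        # next cron run) picks up where this one left off.
        self.state_file.parent.mkdir(parents=True, exist_ok=True)
        self.state_file.write_text(json.dumps(self.to_dict()))
```

Because every mutation is written through to disk, the persistence tests above hold for free: a second instance constructed from the same path reloads the failure count and cooldown.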
63  tests/retry/test_get_circuit_breaker.py  Normal file
@@ -0,0 +1,63 @@
"""Tests for get_repeater_circuit_breaker function."""

from meshmon.retry import CircuitBreaker, get_repeater_circuit_breaker


class TestGetRepeaterCircuitBreaker:
    """Tests for get_repeater_circuit_breaker function."""

    def test_returns_circuit_breaker(self, configured_env):
        """Returns a CircuitBreaker instance."""
        cb = get_repeater_circuit_breaker()

        assert isinstance(cb, CircuitBreaker)

    def test_uses_state_dir(self, configured_env):
        """Uses state_dir from config."""
        cb = get_repeater_circuit_breaker()

        expected_path = configured_env["state_dir"] / "repeater_circuit.json"
        assert cb.state_file == expected_path

    def test_state_file_name(self, configured_env):
        """State file is named repeater_circuit.json."""
        cb = get_repeater_circuit_breaker()

        assert cb.state_file.name == "repeater_circuit.json"

    def test_each_call_creates_new_instance(self, configured_env):
        """Each call creates a new CircuitBreaker instance."""
        cb1 = get_repeater_circuit_breaker()
        cb2 = get_repeater_circuit_breaker()

        assert cb1 is not cb2

    def test_instances_share_state_file(self, configured_env):
        """Multiple instances share the same state file."""
        cb1 = get_repeater_circuit_breaker()
        cb2 = get_repeater_circuit_breaker()

        assert cb1.state_file == cb2.state_file

    def test_state_persists_across_instances(self, configured_env):
        """State changes persist across instances."""
        cb1 = get_repeater_circuit_breaker()
        cb1.record_failure(max_failures=10, cooldown_s=3600)
        cb1.record_failure(max_failures=10, cooldown_s=3600)

        cb2 = get_repeater_circuit_breaker()

        assert cb2.consecutive_failures == 2

    def test_creates_state_file_on_write(self, configured_env):
        """State file is created when recording success/failure."""
        state_dir = configured_env["state_dir"]
        state_file = state_dir / "repeater_circuit.json"

        assert not state_file.exists()

        cb = get_repeater_circuit_breaker()
        cb.record_success()

        assert state_file.exists()
Some files were not shown because too many files have changed in this diff.