Description
Implement a compact logging format that eliminates redundant timestamp storage when multiple data samples share the same timestamp. This optimization applies to all output formats (CSV, JSON, ProtoBuf) and should reduce memory usage, save SD card space, and improve streaming throughput.
Problem
Currently, each data sample includes its own timestamp even when multiple samples are acquired at the same moment. This leads to:
- Redundant data in memory buffers
- Wasted SD card space
- Increased encoding overhead across all formats (CSV, JSON, ProtoBuf)
- Reduced maximum sustainable sample rates
Proposed Solution
Core Concept
Check timestamps at acquisition time and mark samples that share the same timestamp as their predecessor. This avoids storing redundant timestamp data from the point of acquisition, benefiting all downstream encoders.
Implementation Options
Option A: Null/Zero Timestamp Marker
- If a sample's timestamp equals the previous sample's timestamp, store a null/zero value
- All decoders interpret null timestamp as "use previous timestamp"
- Minimal memory overhead (just a comparison)
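A minimal sketch of Option A, assuming a hypothetical sample struct (field and function names are illustrative, not taken from the real AInSample.h):

```c
#include <stdint.h>

/* Hypothetical sample struct; field names are assumptions. */
typedef struct {
    uint64_t timestamp;   /* 0 = "same timestamp as previous sample" */
    double   value;
} Sample;

static uint64_t g_prev_timestamp;

/* Tag a freshly acquired sample at acquisition time: a single 64-bit
   compare, no extra buffering. */
void MarkSampleTimestamp(Sample *s, double value, uint64_t acquired_ts)
{
    s->value = value;
    if (acquired_ts == g_prev_timestamp) {
        s->timestamp = 0;              /* null marker: reuse previous */
    } else {
        s->timestamp = acquired_ts;
        g_prev_timestamp = acquired_ts;
    }
}
```

Note that a real acquisition timestamp of exactly 0 would collide with the marker; the real implementation would need to reserve or handle that value.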
Option B: Sample Grouping
- Group samples with matching timestamps into a single data structure
- Single timestamp followed by array of channel values
- More complex but potentially more efficient for all output formats
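Option B could be shaped roughly like the following; the struct layout, channel limit, and helper are assumptions for illustration only:

```c
#include <stdint.h>

#define MAX_CHANNELS 8  /* assumed channel count */

/* Hypothetical grouped structure: one timestamp, many channel values. */
typedef struct {
    uint64_t timestamp;              /* shared by every value in the group */
    uint8_t  count;                  /* number of valid entries in values[] */
    double   values[MAX_CHANNELS];
} SampleGroup;

/* Append a channel value; returns 0 on success, -1 when the group is full. */
int SampleGroup_Add(SampleGroup *g, double v)
{
    if (g->count >= MAX_CHANNELS)
        return -1;
    g->values[g->count++] = v;
    return 0;
}
```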
Option C: Delta Timestamps
- Store only the delta from the previous timestamp
- Zero delta indicates same timestamp
- Also reduces storage for sequential timestamps
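Option C in sketch form, as a pair of in-place transforms over a timestamp array (function names are illustrative):

```c
#include <stdint.h>

/* Delta-encode timestamps in place: a zero delta means "same timestamp
   as the previous sample". */
void encode_deltas(uint64_t *ts, int n)
{
    uint64_t prev = 0;
    for (int i = 0; i < n; i++) {
        uint64_t cur = ts[i];
        ts[i] = cur - prev;
        prev = cur;
    }
}

/* Decode back to absolute timestamps by accumulating the deltas. */
void decode_deltas(uint64_t *ts, int n)
{
    uint64_t acc = 0;
    for (int i = 0; i < n; i++) {
        acc += ts[i];
        ts[i] = acc;
    }
}
```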
Critical Considerations
- Check at Acquisition Time - Must determine timestamp equality when pulling ADC data, not after storage
- No Extra RAM for Redundant Data - Avoid buffering data just to deduplicate later
- Efficient Comparison - Timestamp check must be fast (single 32/64-bit compare)
- Consistent Across Formats - CSV, JSON, and ProtoBuf encoders must all handle the compact format consistently
Output Format Examples
CSV - Current:
1234567890,1.23,4.56,7.89
1234567890,2.34,5.67,8.90
1234567890,3.45,6.78,9.01
CSV - Compact:
1234567890,1.23,4.56,7.89
,2.34,5.67,8.90
,3.45,6.78,9.01
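A possible CSV row writer for the compact format above; the function name and signature are assumptions, not the real csv_encoder.c API. Timestamp 0 is treated as the "unchanged" marker, so the first column is simply left empty:

```c
#include <stdio.h>
#include <stdint.h>

/* Write one compact CSV row into buf; returns the number of bytes written. */
int csv_write_row(char *buf, int cap, uint64_t ts, const double *vals, int n)
{
    int off = 0;
    if (ts != 0)   /* omit the timestamp column when unchanged */
        off += snprintf(buf + off, (size_t)(cap - off), "%llu",
                        (unsigned long long)ts);
    for (int i = 0; i < n && off < cap; i++)
        off += snprintf(buf + off, (size_t)(cap - off), ",%.2f", vals[i]);
    return off;
}
```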
JSON - Current:
{"ts":1234567890,"ch0":1.23,"ch1":4.56}
{"ts":1234567890,"ch0":2.34,"ch1":5.67}
JSON - Compact:
{"ts":1234567890,"ch0":1.23,"ch1":4.56}
{"ch0":2.34,"ch1":5.67}
ProtoBuf - Compact:
- Omit timestamp field when unchanged (field not present = use previous)
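On the decoding side, field absence maps naturally onto "carry the previous timestamp forward". A client-side sketch (nanopb generates a has_ flag for optional fields; the struct and field names here are assumptions, not the real DaqifiPB message):

```c
#include <stdint.h>
#include <stdbool.h>

/* Assumed decoded-message shape; not the actual generated struct. */
typedef struct {
    bool     has_timestamp;
    uint64_t timestamp;
} PbSample;

/* Resolve a sample's effective timestamp, carrying the previous one
   forward when the field was omitted by the encoder. */
uint64_t resolve_timestamp(const PbSample *s, uint64_t *prev)
{
    if (s->has_timestamp)
        *prev = s->timestamp;
    return *prev;
}
```

The same carry-forward logic applies to CSV and JSON clients reconstructing full timestamps from the compact stream.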
Files to Modify
- state/data/AInSample.h - Sample structure (timestamp field handling)
- HAL/ADC.c - Acquisition logic (timestamp comparison)
- services/streaming.c - Streaming engine
- services/csv_encoder.c - CSV output format
- services/JSON_Encoder.c - JSON output format
- services/DaqifiPB/NanoPB_Encoder.c - ProtoBuf output format
Acceptance Criteria
- Samples with matching timestamps are identified at acquisition time
- No additional RAM used for redundant timestamp storage
- CSV encoder outputs compact format
- JSON encoder outputs compact format
- ProtoBuf encoder outputs compact format
- Client-side decoders/parsers can reconstruct full timestamps
- Performance benchmarks show improvement in throughput
- Storage space savings documented
Related
- Evaluate SCPI responsiveness during long SD card streaming sessions #170 - SD streaming task starvation (compact logging may help reduce CPU load)