
Sample Store Component - Configuration Guide

Overview

The Sample Store component provides several configuration options through ESP-IDF's menuconfig system. This guide explains each option in detail and provides recommendations for different use cases.

Accessing Configuration

To configure the component:

idf.py menuconfig

Navigate to: Component config → Sample Store Configuration
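To pin these settings so fresh builds pick them up automatically, they can also be placed in an sdkconfig.defaults file at the project root (standard ESP-IDF behavior; the values below are illustrative):

```
# sdkconfig.defaults -- example values, adjust for your project
CONFIG_SAMPLE_STORE_PARTITION_NAME="sample_store_nvs"
CONFIG_SAMPLE_STORE_MAX_SETS=5
CONFIG_SAMPLE_STORE_MAX_SAMPLES_PER_DAY=1000
```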

Configuration Options

NVS Partition Name

Option: CONFIG_SAMPLE_STORE_PARTITION_NAME
Type: String
Default: "sample_store_nvs"
Range: Any valid partition name

Description

Specifies the name of the NVS partition used to store sample data. Use a dedicated partition rather than the default NVS partition to avoid conflicts with other system data.

Important Requirements

  • The partition name must exist in your partitions.csv file
  • The partition must be of type data with subtype nvs
  • The partition should be large enough for your expected data volume

Example Partition Table Entry

# In partitions.csv
sample_store_nvs,  data,     nvs,         ,            0x600000,

Use Cases

| Use Case | Recommended Name | Notes |
| --- | --- | --- |
| Single application | "sample_store_nvs" | Default, simple setup |
| Multiple components | "sensor_data_nvs" | Descriptive, avoids conflicts |
| Development/testing | "test_samples_nvs" | Easy to identify in partition table |

Maximum Sets

Option: CONFIG_SAMPLE_STORE_MAX_SETS
Type: Integer
Default: 5
Range: 1-255

Description

Controls the maximum number of distinct sets (namespaces) that can exist simultaneously in the store. When writing to a new set would exceed this limit, the oldest set is automatically removed to make room.

Behavior

  • When writing to a new set that would exceed this limit:
    1. SAMPLE_STORE_EVENT_PRE_OVERWRITE_SET event is triggered
    2. Oldest set is completely removed (all samples deleted)
    3. SAMPLE_STORE_EVENT_POST_OVERWRITE_SET event is triggered
    4. New set is created

Memory Impact

  • Each set requires metadata storage (~32 bytes per set)
  • More sets = more namespace overhead in NVS
  • Minimal RAM impact (metadata cached efficiently)

Recommendations

| Scenario | Recommended Value | Rationale |
| --- | --- | --- |
| Single sensor type | 1-3 | Minimal overhead, simple management |
| Multiple sensor types | 5-10 | Good balance of flexibility and efficiency |
| Complex data logging | 10-20 | Maximum flexibility for categorization |
| Memory constrained | 1-5 | Minimize metadata overhead |

Example Use Cases

// CONFIG_SAMPLE_STORE_MAX_SETS = 5
// Suitable for: temperature, humidity, pressure, light, motion
sample_store_write_to_set(store, "temperature", &temp_data, sizeof(temp_data));
sample_store_write_to_set(store, "humidity", &hum_data, sizeof(hum_data));
sample_store_write_to_set(store, "pressure", &press_data, sizeof(press_data));
sample_store_write_to_set(store, "light", &light_data, sizeof(light_data));
sample_store_write_to_set(store, "motion", &motion_data, sizeof(motion_data));

Maximum Samples Per Set

Option: CONFIG_SAMPLE_STORE_MAX_SAMPLES_PER_DAY
Type: Integer
Default: 1000
Range: 1-16,777,215

Description

Controls the maximum number of samples that can be stored in each individual set. When writing a sample would exceed this limit, the oldest sample in that specific set is automatically removed.

Note: Despite the name containing "PER_DAY", this setting applies to each set independently, not to daily limits.

Behavior

  • When writing a sample that would exceed this limit:
    1. SAMPLE_STORE_EVENT_PRE_OVERWRITE_SAMPLE event is triggered
    2. Oldest sample in the set is deleted
    3. SAMPLE_STORE_EVENT_POST_OVERWRITE_SAMPLE event is triggered
    4. New sample is stored

Storage Impact

  • Each sample requires NVS storage space (data size + key overhead)
  • Sample keys are 6-character hex strings (000001 to ffffff)
  • Maximum theoretical limit: 16,777,215 samples per set

Recommendations

| Sampling Rate | Time Period | Recommended Value | Storage Estimate* |
| --- | --- | --- | --- |
| 1/minute | 1 week | 10,080 | ~10MB per set |
| 1/hour | 1 month | 744 | ~744KB per set |
| 1/second | 1 hour | 3,600 | ~3.6MB per set |
| 10/second | 10 minutes | 6,000 | ~6MB per set |

*Estimates assume ~1KB average sample size

Partition Size Planning

Calculate required partition size:

Required Size = (Max Sets) × (Max Samples Per Set) × (Average Sample Size) × 1.5

The 1.5 multiplier accounts for:

  • NVS overhead (keys, metadata, wear leveling)
  • Fragmentation
  • Safety margin

Examples

// High-frequency data logging
// CONFIG_SAMPLE_STORE_MAX_SAMPLES_PER_DAY = 86400  // 1 day at 1 sample/second

// Low-frequency monitoring  
// CONFIG_SAMPLE_STORE_MAX_SAMPLES_PER_DAY = 24     // 1 day at 1 sample/hour

// Burst sampling
// CONFIG_SAMPLE_STORE_MAX_SAMPLES_PER_DAY = 1000   // Keep last 1000 samples

Configuration Scenarios

Scenario 1: IoT Sensor Node

Requirements: Multiple sensors, hourly sampling, 1 week retention

CONFIG_SAMPLE_STORE_PARTITION_NAME="sensor_data"
CONFIG_SAMPLE_STORE_MAX_SETS=5
CONFIG_SAMPLE_STORE_MAX_SAMPLES_PER_DAY=168

Partition Table:

sensor_data,       data,     nvs,         ,            0x100000,  # 1MB

Scenario 2: High-Frequency Data Logger

Requirements: Single sensor, 1 sample/second, 1 hour retention

CONFIG_SAMPLE_STORE_PARTITION_NAME="datalog_nvs"
CONFIG_SAMPLE_STORE_MAX_SETS=1
CONFIG_SAMPLE_STORE_MAX_SAMPLES_PER_DAY=3600

Partition Table:

datalog_nvs,       data,     nvs,         ,            0x500000,  # 5MB

Scenario 3: Multi-Device Gateway

Requirements: Many device types, variable sampling, long retention

CONFIG_SAMPLE_STORE_PARTITION_NAME="gateway_store"
CONFIG_SAMPLE_STORE_MAX_SETS=20
CONFIG_SAMPLE_STORE_MAX_SAMPLES_PER_DAY=10000

Partition Table:

gateway_store,     data,     nvs,         ,            0x2000000, # 32MB

Scenario 4: Memory-Constrained Device

Requirements: Single sensor, minimal storage, basic functionality

CONFIG_SAMPLE_STORE_PARTITION_NAME="minimal_store"
CONFIG_SAMPLE_STORE_MAX_SETS=1
CONFIG_SAMPLE_STORE_MAX_SAMPLES_PER_DAY=100

Partition Table:

minimal_store,     data,     nvs,         ,            0x20000,   # 128KB

Partition Size Calculator

Use this formula to estimate the required partition size:

Base Requirements:
- NVS overhead: ~8KB
- Metadata per set: ~32 bytes
- Sample overhead: ~16 bytes per sample

Total Size = 8KB + (Sets × 32) + (Sets × Samples × (Sample_Size + 16)) × 1.5

Calculator Parameters

| Parameter | Value | Notes |
| --- | --- | --- |
| Number of Sets | CONFIG_SAMPLE_STORE_MAX_SETS | From menuconfig |
| Samples per Set | CONFIG_SAMPLE_STORE_MAX_SAMPLES_PER_DAY | From menuconfig |
| Average Sample Size | User defined | Your data structure size |
| Safety Factor | 1.5 | Recommended for wear leveling |

Example Calculation:

  • Sets: 5
  • Samples per set: 1000
  • Sample size: 32 bytes
  • Total: 8KB + (5×32) + (5×1000×48)×1.5 = ~368KB

Best Practices

1. Partition Sizing

  • Always add 50-100% safety margin for NVS overhead
  • Consider future growth in data requirements
  • Monitor actual usage with nvs_get_stats()

2. Set Organization

  • Use descriptive set names (max 15 characters)
  • Group related data logically
  • Consider data lifecycle (some sets may need different retention)

3. Sample Management

  • Implement event handlers for retention awareness
  • Monitor storage usage in your application
  • Consider data compression for large samples

4. Performance Optimization

  • Avoid very frequent writes (>10Hz) to prevent wear
  • Use appropriate sample sizes (not too small, not too large)
  • Batch related data in single samples when possible

5. Development vs Production

  • Use larger limits during development for testing
  • Optimize for production based on actual usage patterns
  • Consider separate configurations for different build targets

Troubleshooting Configuration Issues

"No space left on device"

Causes:

  • Partition too small for configured limits
  • Sample sizes larger than expected
  • NVS fragmentation

Solutions:

  • Increase partition size
  • Reduce CONFIG_SAMPLE_STORE_MAX_SAMPLES_PER_DAY
  • Reduce CONFIG_SAMPLE_STORE_MAX_SETS
  • Implement manual cleanup in event handlers

"Partition not found"

Causes:

  • Partition name mismatch between menuconfig and partitions.csv
  • Partition table not flashed
  • Wrong partition type/subtype

Solutions:

  • Verify partition names match exactly
  • Re-flash partition table: idf.py partition-table-flash
  • Check partition type is data and subtype is nvs

High Memory Usage

Causes:

  • Too many iterators created simultaneously
  • Large samples loaded in memory
  • Memory leaks in application code

Solutions:

  • Free iterators promptly after use
  • Process samples incrementally
  • Use size queries before allocating buffers

Poor Performance

Causes:

  • Very frequent writes causing NVS wear leveling
  • Large samples causing slow I/O
  • Many sets causing metadata overhead

Solutions:

  • Reduce write frequency
  • Optimize sample data structure
  • Consolidate related sets

Configuration Changes

When changing configuration:

  1. Backup existing data if needed
  2. Erase the flash to avoid stale or corrupted NVS data (note that this wipes the entire flash, not just the sample store partition):
    idf.py erase-flash
  3. Update configuration via menuconfig
  4. Rebuild and flash complete firmware
  5. Verify operation with test data

⚠️ Warning: Configuration changes require erasing existing sample data