The Sample Store component provides several configuration options through ESP-IDF's menuconfig system. This guide explains each option in detail and provides recommendations for different use cases.
To configure the component:
```bash
idf.py menuconfig
```

Navigate to: Component config → Sample Store Configuration
Option: CONFIG_SAMPLE_STORE_PARTITION_NAME
Type: String
Default: "sample_store_nvs"
Range: Any valid partition name
Specifies the name of the NVS partition to use for storing sample data. This must not be the default NVS partition to avoid conflicts with other system data.
- The partition name must exist in your `partitions.csv` file
- The partition must be of type `data` with subtype `nvs`
- The partition should be large enough for your expected data volume
```csv
# In partitions.csv
sample_store_nvs, data, nvs, , 0x600000,
```

| Use Case | Recommended Name | Notes |
|---|---|---|
| Single application | "sample_store_nvs" | Default, simple setup |
| Multiple components | "sensor_data_nvs" | Descriptive, avoids conflicts |
| Development/testing | "test_samples_nvs" | Easy to identify in partition table |
Option: CONFIG_SAMPLE_STORE_MAX_SETS
Type: Integer
Default: 5
Range: 1-255
Controls the maximum number of different sets (namespaces) that can exist simultaneously in the store. When this limit is exceeded, the oldest set is automatically removed to make space for new ones.
When writing to a new set that would exceed this limit:

1. `SAMPLE_STORE_EVENT_PRE_OVERWRITE_SET` event is triggered
2. Oldest set is completely removed (all samples deleted)
3. `SAMPLE_STORE_EVENT_POST_OVERWRITE_SET` event is triggered
4. New set is created
- Each set requires metadata storage (~32 bytes per set)
- More sets = more namespace overhead in NVS
- Minimal RAM impact (metadata cached efficiently)
| Scenario | Recommended Value | Rationale |
|---|---|---|
| Single sensor type | 1-3 | Minimal overhead, simple management |
| Multiple sensor types | 5-10 | Good balance of flexibility and efficiency |
| Complex data logging | 10-20 | Maximum flexibility for categorization |
| Memory constrained | 1-5 | Minimize metadata overhead |
```c
// CONFIG_SAMPLE_STORE_MAX_SETS = 5
// Suitable for: temperature, humidity, pressure, light, motion
sample_store_write_to_set(store, "temperature", &temp_data, sizeof(temp_data));
sample_store_write_to_set(store, "humidity", &hum_data, sizeof(hum_data));
sample_store_write_to_set(store, "pressure", &press_data, sizeof(press_data));
sample_store_write_to_set(store, "light", &light_data, sizeof(light_data));
sample_store_write_to_set(store, "motion", &motion_data, sizeof(motion_data));
```

Option: CONFIG_SAMPLE_STORE_MAX_SAMPLES_PER_DAY
Type: Integer
Default: 1000
Range: 1-16,777,215
Controls the maximum number of samples that can be stored in each individual set. When this limit is exceeded, the oldest sample in that specific set is automatically removed.
Note: Despite the name containing "PER_DAY", this setting applies to each set independently, not to daily limits.
When writing a sample that would exceed this limit:

1. `SAMPLE_STORE_EVENT_PRE_OVERWRITE_SAMPLE` event is triggered
2. Oldest sample in the set is deleted
3. `SAMPLE_STORE_EVENT_POST_OVERWRITE_SAMPLE` event is triggered
4. New sample is stored
- Each sample requires NVS storage space (data size + key overhead)
- Sample keys are 6-character hex strings (`000001` to `ffffff`)
- Maximum theoretical limit: 16,777,215 samples per set
| Sampling Rate | Time Period | Recommended Value | Storage Estimate* |
|---|---|---|---|
| 1/minute | 1 week | 10,080 | ~10MB per set |
| 1/hour | 1 month | 744 | ~744KB per set |
| 1/second | 1 hour | 3,600 | ~3.6MB per set |
| 10/second | 10 minutes | 6,000 | ~6MB per set |
*Estimates assume ~1KB average sample size
Calculate required partition size:
Required Size = (Max Sets) × (Max Samples Per Set) × (Average Sample Size) × 1.3
The 1.3 multiplier accounts for:
- NVS overhead (keys, metadata, wear leveling)
- Fragmentation
- Safety margin
```c
// High-frequency data logging
// CONFIG_SAMPLE_STORE_MAX_SAMPLES_PER_DAY = 86400  // 1 day at 1 sample/second

// Low-frequency monitoring
// CONFIG_SAMPLE_STORE_MAX_SAMPLES_PER_DAY = 24     // 1 day at 1 sample/hour

// Burst sampling
// CONFIG_SAMPLE_STORE_MAX_SAMPLES_PER_DAY = 1000   // Keep last 1000 samples
```

Requirements: Multiple sensors, hourly sampling, 1 week retention
```kconfig
CONFIG_SAMPLE_STORE_PARTITION_NAME="sensor_data"
CONFIG_SAMPLE_STORE_MAX_SETS=5
CONFIG_SAMPLE_STORE_MAX_SAMPLES_PER_DAY=168
```

Partition Table:

```csv
sensor_data, data, nvs, , 0x100000,  # 1MB
```

Requirements: Single sensor, 1 sample/second, 1 hour retention
```kconfig
CONFIG_SAMPLE_STORE_PARTITION_NAME="datalog_nvs"
CONFIG_SAMPLE_STORE_MAX_SETS=1
CONFIG_SAMPLE_STORE_MAX_SAMPLES_PER_DAY=3600
```

Partition Table:

```csv
datalog_nvs, data, nvs, , 0x500000,  # 5MB
```

Requirements: Many device types, variable sampling, long retention
```kconfig
CONFIG_SAMPLE_STORE_PARTITION_NAME="gateway_store"
CONFIG_SAMPLE_STORE_MAX_SETS=20
CONFIG_SAMPLE_STORE_MAX_SAMPLES_PER_DAY=10000
```

Partition Table:

```csv
gateway_store, data, nvs, , 0x2000000,  # 32MB
```

Requirements: Single sensor, minimal storage, basic functionality
```kconfig
CONFIG_SAMPLE_STORE_PARTITION_NAME="minimal_store"
CONFIG_SAMPLE_STORE_MAX_SETS=1
CONFIG_SAMPLE_STORE_MAX_SAMPLES_PER_DAY=100
```

Partition Table:

```csv
minimal_store, data, nvs, , 0x20000,  # 128KB
```

Use this formula to estimate the required partition size:
Base Requirements:
- NVS overhead: ~8KB
- Metadata per set: ~32 bytes
- Sample overhead: ~16 bytes per sample
Total Size = 8KB + (Sets × 32) + (Sets × Samples × (Sample_Size + 16)) × 1.5
| Parameter | Value | Notes |
|---|---|---|
| Number of Sets | CONFIG_SAMPLE_STORE_MAX_SETS | From menuconfig |
| Samples per Set | CONFIG_SAMPLE_STORE_MAX_SAMPLES_PER_DAY | From menuconfig |
| Average Sample Size | User defined | Your data structure size |
| Safety Factor | 1.5 | Recommended for wear leveling |
Example Calculation:
- Sets: 5
- Samples per set: 1000
- Sample size: 32 bytes
- Total: 8KB + (5×32) + (5×1000×48)×1.5 = ~368KB
- Always add 50-100% safety margin for NVS overhead
- Consider future growth in data requirements
- Monitor actual usage with `nvs_get_stats()`
- Use descriptive set names (max 15 characters)
- Group related data logically
- Consider data lifecycle (some sets may need different retention)
- Implement event handlers for retention awareness
- Monitor storage usage in your application
- Consider data compression for large samples
- Avoid very frequent writes (>10Hz) to prevent wear
- Use appropriate sample sizes (not too small, not too large)
- Batch related data in single samples when possible
- Use larger limits during development for testing
- Optimize for production based on actual usage patterns
- Consider separate configurations for different build targets
Causes:
- Partition too small for configured limits
- Sample sizes larger than expected
- NVS fragmentation
Solutions:
- Increase partition size
- Reduce `CONFIG_SAMPLE_STORE_MAX_SAMPLES_PER_DAY`
- Reduce `CONFIG_SAMPLE_STORE_MAX_SETS`
- Implement manual cleanup in event handlers
Causes:
- Partition name mismatch between menuconfig and `partitions.csv`
- Partition table not flashed
- Wrong partition type/subtype
Solutions:
- Verify partition names match exactly
- Re-flash partition table: `idf.py partition-table-flash`
- Check partition type is `data` and subtype is `nvs`
Causes:
- Too many iterators created simultaneously
- Large samples loaded in memory
- Memory leaks in application code
Solutions:
- Free iterators promptly after use
- Process samples incrementally
- Use size queries before allocating buffers
Causes:
- Very frequent writes causing NVS wear leveling
- Large samples causing slow I/O
- Many sets causing metadata overhead
Solutions:
- Reduce write frequency
- Optimize sample data structure
- Consolidate related sets
When changing configuration:
- Backup existing data if needed
- Erase NVS partition to avoid corruption: `idf.py erase-flash` (note: this erases the entire flash)
- Update configuration via menuconfig
- Rebuild and flash complete firmware
- Verify operation with test data
⚠️ Warning: Configuration changes require erasing existing sample data