Add RocksDB Preset System and WAL Directory Support for HDD Archive Nodes (also benefiting regular nodes) #771
Summary
This PR implements a comprehensive solution for running Kaspa archive nodes on HDD storage, addressing Issue #681.
The implementation adds two main features:
- A RocksDB preset system (`--rocksdb-preset`) with tuned configurations for SSD/NVMe and HDD deployments
- WAL directory support (`--rocksdb-wal-dir`) for placing Write-Ahead Logs on separate, faster storage
These features enable efficient archive nodes on HDDs while maintaining the option for hybrid NVMe+HDD configurations.
Features
1. RocksDB Preset System (`--rocksdb-preset`)

Two configuration presets for different deployment scenarios:
Default Preset (SSD/NVMe):
Archive Preset (HDD):
Usage:
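For example, an HDD archive node would start with `kaspad --rocksdb-preset=archive` (this value is confirmed by the migration notes below); omitting the flag keeps the existing default SSD/NVMe behavior.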
2. WAL Directory Support (`--rocksdb-wal-dir`)

Enables hybrid storage configurations by placing Write-Ahead Logs on fast storage (SSD/NVMe, or memory-backed storage such as tmpfs) while keeping database files on HDDs. This enables a faster synchronization process on archival nodes.

On regular nodes, using tmpfs (or ImDisk on Windows) allows small performance improvements while also reducing wear on NVMe/SSD storage devices.

Warning: using tmpfs or other memory-backed storage can lead to database corruption on restart! Use with caution. (A WAL recovery process was tested but would require more extensive work and review; if needed, it could be implemented under a separate issue.)
Features:
Usage:
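For example (paths here are illustrative only): `kaspad --rocksdb-wal-dir=/mnt/nvme/kaspa-wal` places the WAL on an NVMe mount, and on Linux a tmpfs path such as `/dev/shm/kaspa-wal` can be used, subject to the corruption caveat above.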
Benefits:
Implementation Details
Files Modified
Database Layer:
- `database/src/db.rs`: export `RocksDbPreset`
- `database/src/db/conn_builder.rs`: add preset and wal_dir support
- `database/src/db/rocksdb_preset.rs`: NEW, preset configurations
- `database/src/lib.rs`: module exports

Application Layer:
- `kaspad/src/args.rs`: CLI arguments for `--rocksdb-preset` and `--rocksdb-wal-dir`
- `kaspad/src/daemon.rs`: parse and apply configuration
- `consensus/src/consensus/factory.rs`: pass settings to consensus databases

Testing:
- `testing/integration/src/consensus_integration_tests.rs`: updated test parameters
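To show the application-layer flow these files implement in rough strokes, here is a self-contained sketch. Everything in it is a hypothetical stand-in for the PR's actual code; only the flag value `archive` is confirmed by this description, and the `default` variant name is an assumption:

```rust
use std::str::FromStr;

// Hypothetical mirror of the preset enum exported from database/src/db.rs;
// the variant set is an assumption for this sketch.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum RocksDbPreset {
    Default, // SSD/NVMe tuning
    Archive, // HDD tuning
}

impl FromStr for RocksDbPreset {
    type Err = String;

    // args.rs-style parsing of the --rocksdb-preset value.
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s.to_ascii_lowercase().as_str() {
            "default" => Ok(RocksDbPreset::Default),
            "archive" => Ok(RocksDbPreset::Archive),
            other => Err(format!("unknown rocksdb preset: {other}")),
        }
    }
}

fn main() {
    // daemon.rs-style flow: parse the flag, then hand the preset (plus any
    // --rocksdb-wal-dir path) to the connection builder / consensus factory.
    let preset: RocksDbPreset = "archive".parse().unwrap();
    assert_eq!(preset, RocksDbPreset::Archive);
}
```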
Archive Preset Configuration Details

Based on extensive testing and community feedback (Issue #681). A sketch mapping these settings onto RocksDB's options API appears after the list:
Memory & Write Buffers:
- `write_buffer_size`: 256 MB (4x default), set after `optimize_level_style_compaction()` to prevent override

LSM Tree Structure:
- `target_file_size_base`: 256 MB (reduces file count dramatically)
- `target_file_size_multiplier`: 1 (consistent size across levels)
- `max_bytes_for_level_base`: 1 GB
- `level_compaction_dynamic_level_bytes`: true (minimizes space amplification)

Compaction:
- `level_zero_file_num_compaction_trigger`: 1 (minimizes write amplification)
- `compaction_pri`: OldestSmallestSeqFirst
- `compaction_readahead_size`: 4 MB (optimized for sequential HDD reads)

Compression Strategy:
- `zstd_max_train_bytes`: 8 MB (125x the dictionary size)

Block Cache:
BlobDB:
Rate Limiting:
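For illustration, here is a minimal sketch of how the settings above could map onto the `rocksdb` crate's `Options` API. The function name, the memtable budget passed to the optimizer, and the overall structure are assumptions for this sketch; the PR's actual configuration lives in `database/src/db/rocksdb_preset.rs`:

```rust
use std::path::Path;

use rocksdb::{DBCompressionType, Options};

// Sketch of an archive-style preset; values mirror the bullet list above.
fn archive_preset_options(wal_dir: Option<&Path>) -> Options {
    let mut opts = Options::default();

    // Apply the level-style optimizer first: it sets write_buffer_size
    // internally, so our explicit value below must come afterwards.
    // The 512 MB memtable budget here is illustrative.
    opts.optimize_level_style_compaction(512 * 1024 * 1024);

    // Memory & write buffers: 256 MB memtables (4x the default).
    opts.set_write_buffer_size(256 * 1024 * 1024);

    // LSM tree structure: fewer, larger, uniformly sized files.
    opts.set_target_file_size_base(256 * 1024 * 1024);
    opts.set_target_file_size_multiplier(1);
    opts.set_max_bytes_for_level_base(1024 * 1024 * 1024); // 1 GB
    opts.set_level_compaction_dynamic_level_bytes(true);

    // Compaction: compact L0 eagerly; large sequential readahead for HDDs.
    opts.set_level_zero_file_num_compaction_trigger(1);
    opts.set_compaction_readahead_size(4 * 1024 * 1024);
    // compaction_pri = OldestSmallestSeqFirst is also part of the preset;
    // omitted here because binding support varies by crate version.

    // Compression: zstd with a large dictionary-training budget.
    opts.set_compression_type(DBCompressionType::Zstd);
    opts.set_zstd_max_train_bytes(8 * 1024 * 1024);

    // Hybrid setups: place the WAL on fast storage (SSD/NVMe or tmpfs).
    if let Some(dir) = wal_dir {
        opts.set_wal_dir(dir);
    }

    opts
}
```

Ordering is the point of the first bullet above: `optimize_level_style_compaction()` sets `write_buffer_size` internally, so the explicit 256 MB value must be applied after it.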
Testing
Unit Tests
Integration Tests
Production Testing
Archive preset based on real-world deployment:
Backward Compatibility
✅ Fully backward compatible:
Performance Impact
Archive Preset Benefits (HDD):
Hybrid Setup Benefits (NVMe + HDD):
Documentation
User-facing documentation has been kept separate from code and will be added to the wiki/docs repository as appropriate.
Migration Notes
Existing Archive Nodes:
Compression settings cannot be changed retroactively. For optimal results with the archive preset:
- Use `--rocksdb-preset=archive` from the start

Note: Switching presets on an existing database will apply the new settings to new data only. For full benefits, a fresh sync is recommended.
Related Issues
Checklist