Deploy battle-tested Solana RPC nodes with stable, proven configurations and source compilation from GitHub.
Minimum Configuration:
- CPU: AMD Ryzen 9 9950X (or equivalent)
- RAM: 192 GB minimum (256 GB recommended)
- Storage: 2-3x NVMe SSDs (1TB system + 2TB accounts OR combined 2TB+ accounts/ledger)
- OS: Ubuntu 20.04/22.04
- Network: High-bandwidth connection (1 Gbps+)
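To quickly sanity-check a machine against these minimums before installing, a short script like the one below can help; the commands and thresholds are only a suggested check, not part of the repository.

```bash
# Quick hardware sanity check (illustrative; not one of the repository scripts)
echo "CPU threads : $(nproc)"
echo "RAM (GiB)   : $(free -g | awk '/^Mem:/{print $2}')"
echo "NVMe drives :"
lsblk -d -o NAME,SIZE,MODEL | grep -i nvme || echo "  none detected"

# Warn if RAM is below the 192 GB minimum listed above
if [ "$(free -g | awk '/^Mem:/{print $2}')" -lt 192 ]; then
  echo "WARNING: less than 192 GB of RAM detected"
fi
```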
Installation

```bash
# Switch to root user
sudo su -
# Clone repository to /root
cd /root
git clone https://github.com/zydomus/solana-rpc-install.git
cd solana-rpc-install
# Step 1: Mount disks + System optimization (no reboot needed)
bash 1-prepare.sh
# Step 2: Install Solana from source (20-40 minutes)
bash 2-install-solana.sh
# Enter version when prompted (e.g., v3.0.10)
# Step 3: Download snapshot and start node
bash 3-start.sh
```

Why Swap Might Be Needed?
- Memory peaks can exceed 128GB during initial sync (115-130GB)
- Without swap, node may crash with OOM
- Swap provides safety buffer during sync phase
- After sync stabilizes, memory usage drops to 85-105GB
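One simple way to watch for that pressure while the node syncs (not part of the repository's tooling) is to keep an eye on free memory and on kernel OOM messages:

```bash
# Refresh overall memory/swap usage every 10 seconds during the sync phase
watch -n 10 free -h

# Check whether the kernel has already OOM-killed anything recently
journalctl -k --since "1 hour ago" | grep -i "out of memory"
```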
Add Swap (If needed during sync)
```bash
# Only if you see high memory pressure during sync
cd /root/solana-rpc-install
sudo bash add-swap-128g.sh
# Script automatically checks:
# ✅ Only adds swap if system RAM < 160GB
# ✅ Skips if swap already exists
# ✅ Adds 32GB swap with swappiness=10 (minimal usage)
```
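For reference, adding a swap file by hand follows the standard procedure sketched below; this is a simplified illustration of what such a script does, not the repository's add-swap script, and the size and path are just examples.

```bash
# Simplified sketch of adding a 32 GB swap file by hand (illustrative only)
fallocate -l 32G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
sysctl vm.swappiness=10                              # prefer RAM, swap only under real pressure
echo '/swapfile none swap sw 0 0' >> /etc/fstab      # persist across reboots
```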
Remove Swap (After sync completes)

Once synchronization completes, memory usage stabilizes at 85-105GB, and you can remove swap for optimal performance:
```bash
# Check current memory usage
systemctl status sol | grep Memory
# If memory peak < 105GB, safe to remove swap
cd /root/solana-rpc-install
sudo bash remove-swap.sh
```

| Memory Peak | Recommended Action |
|---|---|
| < 105GB | ✅ Can remove swap for optimal performance |
| 105-110GB | ⚠️ Borderline: keep swap and monitor peak usage before removing |
| > 110GB | 🔴 Must keep swap to prevent OOM |
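A rough way to read the service's memory use for this decision (assuming systemd memory accounting is enabled for the sol unit; thresholds mirror the table above):

```bash
# Rough check of current (not peak) memory use of the sol service
bytes=$(systemctl show sol -p MemoryCurrent --value)
case "$bytes" in
  ''|*[!0-9]*) echo "Memory accounting not available for the sol unit"; exit 1 ;;
esac
gb=$((bytes / 1024 / 1024 / 1024))
echo "sol is currently using ~${gb} GB"
if [ "$gb" -lt 105 ]; then
  echo "Below 105GB right now; removing swap is likely safe if this holds at peak load"
else
  echo "At or above 105GB; keep swap for now"
fi
```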
Note: If memory issues occur after removing swap, you can always add it back:
```bash
cd /root/solana-rpc-install
sudo bash add-swap-128g.sh
```

Monitoring

```bash
# Real-time logs
journalctl -u sol -f
# Performance monitoring
bash /root/performance-monitor.sh snapshot
# Health check (available after 30 minutes)
/root/get_health.sh
# Sync progress
/root/catchup.sh
```
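Beyond the helper scripts, the node can also be queried directly over JSON-RPC once port 8899 is up; getHealth and getSlot are standard Solana RPC methods:

```bash
# Ask the local RPC whether it considers itself healthy
curl -s -X POST -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"getHealth"}' http://127.0.0.1:8899

# Current slot seen by the local node
curl -s -X POST -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"getSlot"}' http://127.0.0.1:8899
```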
All configurations are based on proven production deployments with thousands of hours of uptime:
- Conservative Stability > Aggressive Optimization
- Simple Defaults > Complex Customization
- Proven Performance > Theoretical Gains
System optimizations:
- TCP Congestion Control: Westwood (classic, stable algorithm)
- TCP Buffers: 12MB (conservative, low-latency optimized)
- File Descriptors: 1M limit (sufficient for production)
- Memory Management: swappiness=30 (balanced approach)
- VM Settings: Conservative dirty ratios for stability
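The exact values are applied by the repository's scripts; as a rough sketch of how such settings are typically expressed, the fragment below mirrors the bullets above but is illustrative and not copied from the scripts.

```bash
# Illustrative sysctl/limits layout -- the installer applies its own tested values
cat > /etc/sysctl.d/99-solana-rpc.conf <<'EOF'
net.ipv4.tcp_congestion_control = westwood
net.core.rmem_max = 12582912            # ~12MB receive buffer
net.core.wmem_max = 12582912            # ~12MB send buffer
vm.swappiness = 30
vm.dirty_ratio = 10                     # conservative dirty-page thresholds (illustrative values)
vm.dirty_background_ratio = 5
fs.nr_open = 1000000
EOF
sysctl --system

# Raise the open-file limit for login sessions (illustrative)
cat > /etc/security/limits.d/99-solana-nofile.conf <<'EOF'
*    soft    nofile    1000000
*    hard    nofile    1000000
EOF
```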
Yellowstone gRPC configuration:
- Compression Enabled: gzip + zstd (reduces memory copy overhead)
- Conservative Buffers: 50M snapshot, 200K channel (fast processing)
- Proven Defaults: System-managed Tokio, default HTTP/2 settings
- Resource Protection: Strict filter limits prevent abuse
Additional features:
- Source Compilation: Latest Agave version from GitHub
- Automatic Disk Management: Smart disk detection and mounting
- Production Ready: Systemd service with memory limits and OOM protection
- Monitoring Tools: Performance tracking and health checks included
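The installer writes its own sol.service; the drop-in below only illustrates the kind of systemd directives involved (restart policy, file-descriptor limit, memory ceiling, OOM protection) and is not the shipped unit file.

```bash
# Illustrative systemd drop-in (not the unit installed by the scripts)
mkdir -p /etc/systemd/system/sol.service.d
cat > /etc/systemd/system/sol.service.d/example-override.conf <<'EOF'
[Service]
Restart=always
RestartSec=1
LimitNOFILE=1000000
MemoryMax=115G          # hard ceiling so a runaway process cannot take the host down
OOMScoreAdjust=-900     # ask the kernel to prefer killing other processes first
EOF
systemctl daemon-reload
```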
| Port | Protocol | Purpose |
|---|---|---|
| 8899 | HTTP | RPC endpoint |
| 8900 | WebSocket | Real-time subscriptions |
| 10900 | gRPC | High-performance data streaming |
| 8000-8025 | TCP/UDP | Validator communication (dynamic) |
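If the host runs ufw, the table maps to rules like the following; adapt them to your own firewall and only expose the RPC/gRPC ports you actually intend to serve publicly.

```bash
# Example ufw rules for the ports above (adjust to your firewall policy)
ufw allow 8899/tcp          # HTTP RPC
ufw allow 8900/tcp          # WebSocket subscriptions
ufw allow 10900/tcp         # Yellowstone gRPC
ufw allow 8000:8025/tcp     # validator communication
ufw allow 8000:8025/udp
```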
Expected performance:
- Snapshot Download: Network-dependent (typically 200 MB/s to 1 GB/s)
- Memory Usage: 60-110GB during sync, 85-105GB stable (optimized for 128GB systems)
- Sync Time: 1-3 hours (from snapshot)
- CPU Usage: Multi-core optimized (32+ cores recommended)
- Stability: Proven configuration with >99.9% uptime in production
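catchup.sh already reports sync progress; as a raw alternative, comparing the local slot against a public RPC endpoint gives a rough distance estimate (requires jq; the public endpoint below is just an example).

```bash
# Rough sync distance: public cluster slot minus local slot (needs curl + jq)
local_slot=$(curl -s -X POST -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"getSlot"}' http://127.0.0.1:8899 | jq -r '.result')
remote_slot=$(curl -s -X POST -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"getSlot"}' https://api.mainnet-beta.solana.com | jq -r '.result')
echo "local=$local_slot remote=$remote_slot behind=$((remote_slot - local_slot))"
```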
```
┌──────────────────────────────────────────────────────────┐
│                  Solana RPC Node Stack                    │
├──────────────────────────────────────────────────────────┤
│  Agave Validator (Latest v3.0.x from source)             │
│  ├── Yellowstone gRPC Plugin v10.0.1 (Data streaming)    │
│  ├── RPC HTTP/WebSocket (Port 8899/8900)                 │
│  └── Accounts & Ledger (Optimized RocksDB)               │
├──────────────────────────────────────────────────────────┤
│  System Optimizations (Battle-Tested)                    │
│  ├── TCP: 12MB buffers, Westwood congestion control      │
│  ├── Memory: swappiness=30, balanced VM settings         │
│  ├── File Descriptors: 1M limit, sufficient for prod     │
│  └── Stability: Conservative defaults, proven in prod    │
├──────────────────────────────────────────────────────────┤
│  Yellowstone gRPC (Open-Source Tested Config)            │
│  ├── Compression: gzip+zstd enabled (fast processing)    │
│  ├── Buffers: 50M snapshot, 200K channel (low latency)   │
│  ├── Defaults: System-managed, no over-optimization      │
│  └── Protection: Strict filters, resource limits         │
├──────────────────────────────────────────────────────────┤
│  Infrastructure                                          │
│  ├── Systemd Service (Auto-restart, graceful shutdown)   │
│  ├── Multi-disk Setup (System/Accounts/Ledger)           │
│  └── Monitoring Tools (Performance/Health/Catchup)       │
└──────────────────────────────────────────────────────────┘
```
Based on extensive production testing, we discovered:
- Compression Enabled = Lower Latency
  - Even on localhost, compressed data transfers faster in memory
  - CPU overhead is minimal, latency reduction is significant
- Smaller Buffers = Faster Processing
  - 50M snapshot vs 250M: Less queue delay, faster throughput
  - 200K channel vs 1.5M: Reduced "buffer bloat" latency
- System Defaults = Better Stability
  - No custom Tokio threads: Let system auto-manage
  - No custom HTTP/2 settings: Defaults are already optimized
  - Fewer custom parameters = Fewer potential issues
- Proven in Production
  - Thousands of hours of uptime
  - Tested across different hardware configurations
  - Battle-tested under real-world load
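To spot gross latency regressions on your own hardware, a crude end-to-end probe is to time a simple RPC call; this is not a substitute for benchmarking the gRPC stream with a real subscriber.

```bash
# Crude latency probe against the local RPC endpoint
curl -s -o /dev/null -w 'getSlot round-trip: %{time_total}s\n' \
  -X POST -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"getSlot"}' http://127.0.0.1:8899
```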
If you need the aggressive optimization config for specific use cases:
- Extreme config backed up as `yellowstone-config-extreme-backup.json`
- Accessible in repository history (commit 6cc31d9)

Documentation and support:
- Installation Guide: You're reading it!
- Troubleshooting: Check logs with `journalctl -u sol -f`
- Configuration: All optimizations included by default
- Monitoring: Use provided helper scripts
- Optimization Details: See `YELLOWSTONE_OPTIMIZATION.md`
This project is licensed under the MIT License - see the LICENSE file for details.
⭐ If this project helps you, please give us a Star!
Made with ❤️ by Zydomus