A fast, safe NVIDIA GPU control tool written in Rust. Manage fan speeds, power limits, and thermal settings via NVML.
- Fan Control - Manual speed control and automatic fan curves
- Power Management - Set and monitor GPU power limits with constraint validation
- Thermal Monitoring - Real-time temperature and threshold management
- Acoustic Limiting - GPU temperature targets for noise control
- Health Monitoring - Comprehensive GPU health scoring with issue detection
- Process Monitoring - List GPU processes with memory usage and filtering
- Alert System - Configurable alerts for temperature, power, and hardware issues
- Advanced Metrics - ECC errors, PCIe bandwidth, memory temperature, video encoder/decoder
- Multi-GPU Support - Target by index, name, or UUID
- Multiple Output Formats - Table, JSON, and compact output
- Dry-Run Mode - Preview changes before applying
- Daemon Mode - Continuous control loop with custom fan curves
- Configuration Files - TOML-based persistent configuration
- Glossy Glassmorphism Design - Modern, vibrant interface with glass effects
- Real-time Monitoring - Live gauges for temperature, fan speed, power, and utilization
- Advanced Metrics Dashboard - ECC errors, PCIe bandwidth, memory temp, video encoder/decoder
- Health Score Widget - Visual GPU health gauge with component breakdown
- Interactive Fan Curves - Drag-and-drop curve editor with visual feedback
- Multi-GPU Dashboard - Overview of all GPUs with link/unlink control
- Profile System - Save and load configuration profiles
- Multi-Series Graphs - Historical tracking of temperature, power, and performance
- Per-Fan Control - Individual fan speed control with cooler target info
git clone https://github.com/your-repo/nvctl.git
cd nvctl
# Build CLI tool
cargo build --release
sudo cp target/release/nvctl /usr/local/bin/
# Build GUI (optional)
cargo build --release --package nvctl-gui
sudo cp target/release/nvctl-gui /usr/local/bin/

- NVIDIA GPU with proprietary driver installed
- NVIDIA driver version 520+ recommended
- Linux with libnvidia-ml.so available (included with nvidia-utils)
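nvctl talks to the driver through NVML (libnvidia-ml.so) via the nvml-wrapper crate listed in the acknowledgments. As a rough, read-only sketch of what such a query looks like with a recent nvml-wrapper release (illustrative only, not nvctl's actual code):

```rust
use nvml_wrapper::Nvml;
use nvml_wrapper::enum_wrappers::device::TemperatureSensor;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load libnvidia-ml.so and open the first GPU.
    let nvml = Nvml::init()?;
    let gpu = nvml.device_by_index(0)?;
    let name = gpu.name()?;
    let temp = gpu.temperature(TemperatureSensor::Gpu)?;
    let power_mw = gpu.power_usage()?; // NVML reports power in milliwatts
    println!("{name}: {temp}°C, {:.0} W", f64::from(power_mw) / 1000.0);
    Ok(())
}
```

The CLI wraps queries like this behind simple commands: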
# List all GPUs
nvctl list
# Show GPU information
nvctl info --all
# Check GPU health
nvctl health
# List processes using GPU
nvctl processes --top 5
# Check fan status
nvctl fan status
# Set fan to manual mode and 75% speed
sudo nvctl fan policy manual
sudo nvctl fan speed 75
# Set power limit to 250W
sudo nvctl power limit 250
# View alert rules
nvctl alerts rules
# Preview changes without applying
nvctl --dry-run fan speed 100

Launch the graphical interface for visual GPU control:
# Run GUI (release mode for smooth animations)
make gui
# Or directly
nvctl-gui

- Dashboard - Real-time gauges with glassmorphic design
  - Temperature, fan speed, power, and utilization gauges
  - ECC memory error tracking
  - PCIe bandwidth and link status visualization
  - Memory temperature monitoring (GDDR6X)
  - Video encoder/decoder utilization
  - GPU health score with component breakdown
  - Quick stats (clocks, VRAM, P-state)
- Fan Control - Interactive fan curve editor with drag-and-drop points
- Power Control - Slider-based power limit adjustment with constraints display
- Thermal Control - Temperature threshold configuration
- Profiles - Save/load/delete configuration profiles
- Settings - Refresh rate, theme preferences, and startup options
make gui # Run GUI (release mode)
make gui-dev # Run GUI (debug mode)
make gui-check # Check GUI code (fmt + clippy)
make gui-test # Run GUI tests
make gui-build # Build GUI release binary

Real-time monitoring with all GPU metrics at a glance.
Interactive fan curve editor with drag-and-drop temperature/speed points.
Precise power limit control with visual feedback.
Temperature threshold configuration.
Save and load GPU configuration profiles.
Application preferences and refresh rate control.
nvctl list
nvctl list --format json

nvctl info # Basic info
nvctl info --all # All details
nvctl info --fan # Fan info only
nvctl info --power # Power info only
nvctl info --thermal # Thermal info only
nvctl info --ecc # ECC memory errors
nvctl info --pcie # PCIe bandwidth and link status
nvctl info --memory-temp # Memory temperature (GDDR6X)
nvctl info --video # Video encoder/decoder utilization
nvctl --gpu 0 info --all # Specific GPU
nvctl --gpu-name "RTX 5080" info # By name# Check status
nvctl fan status
# Set control policy
sudo nvctl fan policy manual # Enable manual control
sudo nvctl fan policy auto # Return to automatic
# Set fan speed (requires manual policy)
sudo nvctl fan speed 50 # All fans to 50%
sudo nvctl fan speed 80 --fan-index 0 # Specific fan
# Dry run
nvctl --dry-run fan speed 100

# Check current power status
nvctl power status
# Set power limit
sudo nvctl power limit 250 # Set to 250W
# Dry run
nvctl --dry-run power limit 300

Control the acoustic temperature limit. The GPU throttles performance to maintain the target temperature (same as the GeForce Experience temperature target).
# Check thermal status
nvctl thermal status
# Set acoustic temperature limit
sudo nvctl thermal limit 80
# Dry run
nvctl --dry-run thermal limit 75

Note: Not all GPUs support acoustic temperature limits.
Check overall GPU health with component-specific scoring:
# Check GPU health
nvctl health
# JSON output for monitoring
nvctl health --format json

Health scoring covers:
- Thermal Health - Temperature vs. thresholds
- Power Health - Power usage efficiency
- Memory Health - ECC errors and utilization
- Performance Health - Utilization and throttling
- PCIe Health - Link status and errors
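The exact weighting nvctl applies is not documented here; as a loose illustration only, per-component scores could be folded into an overall figure with a weighted average:

```rust
/// Illustrative only: combine per-component scores (0-100) with a
/// weighted average. nvctl's real scoring model may differ.
fn overall_health(components: &[(f64, f64)]) -> f64 {
    // Each entry is (score, weight).
    let weighted: f64 = components.iter().map(|(s, w)| s * w).sum();
    let total_weight: f64 = components.iter().map(|(_, w)| w).sum();
    weighted / total_weight
}

fn main() {
    // Hypothetical scores: thermal, power, memory, performance, PCIe.
    let components = [(92.0, 0.3), (88.0, 0.2), (100.0, 0.2), (95.0, 0.2), (100.0, 0.1)];
    println!("overall health: {:.1}/100", overall_health(&components));
}
```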
List processes running on the GPU with memory usage:
# List all processes
nvctl processes
# Show top 5 by memory
nvctl processes --top 5
# Sort by PID instead of memory
nvctl processes --sort-pid
# Filter by process type
nvctl processes --process-type graphics
nvctl processes --process-type compute
nvctl processes --process-type both
# JSON output
nvctl processes --format json

Process types:
- Graphics - Display/rendering workloads
- Compute - CUDA/OpenCL compute tasks
- Graphics+Compute - Hybrid workloads
Monitor GPU metrics with configurable alerts:
# List configured alert rules
nvctl alerts rules
# List active alerts
nvctl alerts list
# Start alert monitoring daemon
sudo nvctl alerts start
# Stop alert daemon
sudo nvctl alerts stop
# Acknowledge an alert
nvctl alerts ack <alert-id>
# Silence an alert temporarily
nvctl alerts silence <alert-id> --duration 1h
# Clear resolved alerts
nvctl alerts clear
# Test alert configuration
nvctl alerts test

Default alert rules (in ~/.config/nvctl/alerts.toml):
- High GPU temperature (>80°C for 30s)
- Critical temperature (>85°C for 10s)
- Emergency temperature (>90°C, shutdown risk)
- High power usage (>95% for 60s)
- ECC uncorrectable errors detected
- PCIe link errors/replay counter
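The temperature and power rules above pair a threshold with a hold duration (">80°C for 30s"). The generic pattern can be sketched like this; names and structure are illustrative, not nvctl internals:

```rust
use std::time::{Duration, Instant};

/// Hypothetical threshold-with-hold-time rule, e.g. ">80°C for 30s".
struct ThresholdRule {
    limit: f64,
    hold: Duration,
    breached_since: Option<Instant>,
}

impl ThresholdRule {
    /// Fires only once the reading has stayed above the limit for the
    /// whole hold duration.
    fn check(&mut self, reading: f64, now: Instant) -> bool {
        if reading > self.limit {
            let since = *self.breached_since.get_or_insert(now);
            now.duration_since(since) >= self.hold
        } else {
            self.breached_since = None;
            false
        }
    }
}

fn main() {
    let mut rule = ThresholdRule { limit: 80.0, hold: Duration::from_secs(30), breached_since: None };
    let start = Instant::now();
    println!("{}", rule.check(83.0, start));                           // false: just breached
    println!("{}", rule.check(84.0, start + Duration::from_secs(35))); // true: held for 35s
}
```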
Run a continuous control loop with custom fan curves:
sudo nvctl control \
--speed-pair 40:30 \
--speed-pair 50:40 \
--speed-pair 60:50 \
--speed-pair 70:70 \
--speed-pair 80:100 \
--interval 5

This sets:
- 30% fan speed at 40°C
- 40% at 50°C
- 50% at 60°C
- 70% at 70°C
- 100% at 80°C+
Options:
- --interval N - Check temperature every N seconds (default: 5)
- --single-use - Apply once and exit
- --retry - Retry on errors
- --retry-interval N - Retry wait time in seconds (default: 10)
- --default-speed N - Speed below first curve point (default: 30)
- --power-limit N - Also enforce power limit in watts
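To make the curve behavior concrete, here is an illustrative step-style lookup in Rust, including the --default-speed fallback below the first point; the real daemon's interpolation may differ:

```rust
/// Illustrative lookup: take the speed of the highest curve point whose
/// temperature has been reached; below the first point, fall back to the
/// default speed. Not necessarily nvctl's exact algorithm.
fn curve_speed(points: &[(u8, u8)], temp_c: u8, default_speed: u8) -> u8 {
    points
        .iter()
        .filter(|(t, _)| temp_c >= *t)
        .max_by_key(|(t, _)| *t)
        .map(|(_, s)| *s)
        .unwrap_or(default_speed)
}

fn main() {
    // (temperature °C, fan speed %) pairs from the example above.
    let curve = [(40, 30), (50, 40), (60, 50), (70, 70), (80, 100)];
    for temp in [35u8, 45, 72, 90] {
        println!("{temp}°C -> {}%", curve_speed(&curve, temp, 30));
    }
}
```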
With power limit:
sudo nvctl control \
--speed-pair 60:50 \
--speed-pair 80:100 \
--power-limit 280

nvctl [OPTIONS] <COMMAND>
Options:
-v, --verbose Enable verbose output
--format <FORMAT> Output format [table|json|compact]
--gpu <INDEX> Target GPU by index (0-based)
--gpu-name <NAME> Target GPU by name (partial match)
--gpu-uuid <UUID> Target GPU by UUID
--dry-run Preview changes without applying
-c, --config <FILE> Path to config file
-h, --help Print help
-V, --version Print version

Create ~/.config/nvctl/config.toml:
[general]
verbose = false
dry_run = false
interval = 5
[gpu]
index = 0
[fan]
default_speed = 30
[[fan.curve]]
temperature = 40
speed = 30
[[fan.curve]]
temperature = 60
speed = 50
[[fan.curve]]
temperature = 75
speed = 70
[[fan.curve]]
temperature = 85
speed = 100
[power]
limit_watts = 300
[thermal]
acoustic_limit = 83
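For anyone extending the tool, the [[fan.curve]] array-of-tables layout maps naturally onto Rust types with serde and the toml crate. This is a hypothetical mirror of the file above, not nvctl's actual config structs:

```rust
use serde::Deserialize;

/// Hypothetical structs mirroring the config layout above; field names
/// in nvctl itself may differ. Unknown TOML sections are ignored.
#[derive(Debug, Deserialize)]
struct Config {
    fan: FanConfig,
}

#[derive(Debug, Deserialize)]
struct FanConfig {
    default_speed: u8,
    #[serde(default)]
    curve: Vec<CurvePoint>,
}

#[derive(Debug, Deserialize)]
struct CurvePoint {
    temperature: u8,
    speed: u8,
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let path = std::env::var("NVCTL_CONFIG").unwrap_or_else(|_| "config.toml".into());
    let cfg: Config = toml::from_str(&std::fs::read_to_string(path)?)?;
    println!("{} curve points, default speed {}%", cfg.fan.curve.len(), cfg.fan.default_speed);
    Ok(())
}
```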
Use with:
nvctl --config ~/.config/nvctl/config.toml control
# Or set environment variable
export NVCTL_CONFIG=~/.config/nvctl/config.toml
nvctl control

GPU 0: NVIDIA GeForce RTX 4090
Temperature: 45°C
Fan Speed: 35% (Auto)
Power: 85W / 450W
nvctl info --format json

{
"index": 0,
"name": "NVIDIA GeForce RTX 4090",
"temperature": 45,
"fan_speed": 35,
"fan_policy": "auto",
"power_usage": 85,
"power_limit": 450
}

nvctl info --format compact

GPU0: 45°C 35% 85W/450W
Most read operations work without root. Write operations require root:
# Works without sudo
nvctl list
nvctl info
nvctl fan status
nvctl power status
nvctl health
nvctl processes
nvctl alerts list
nvctl alerts rules
# Requires sudo
sudo nvctl fan policy manual
sudo nvctl fan speed 75
sudo nvctl power limit 250
sudo nvctl thermal limit 80
sudo nvctl alerts start
sudo nvctl alerts stop

For non-root access, add a udev rule:
# /etc/udev/rules.d/99-nvidia.rules
KERNEL=="nvidia[0-9]*", MODE="0666"Then reload: sudo udevadm control --reload-rules && sudo udevadm trigger
Ensure NVIDIA drivers are installed:
# Check driver
nvidia-smi
# Library location
ldconfig -p | grep libnvidia-ml
# Arch Linux
sudo pacman -S nvidia-utils
# Ubuntu/Debian
sudo apt install nvidia-utils-xxx # Replace with driver version
# Fedora
sudo dnf install nvidia-driver-libs

Use sudo or configure udev rules (see Permissions section).
Some GPUs (especially mobile/laptop) don't support manual fan control via NVML. Check with nvctl info --fan.
- Set policy to manual first: sudo nvctl fan policy manual
- Then set speed: sudo nvctl fan speed 75
# List available GPUs
nvctl list
# Check NVML directly
nvidia-smi -L
# Verify driver
lsmod | grep nvidia

nvctl uses a Makefile for all build and check operations:
# Build release binary (outputs to bin/)
make build
# Run all quality checks
make check
# Full CI pipeline (clean + checks + build)
make ci-full
# Show all available targets
make help

make release # Build optimized release binary
make debug # Build debug binary
make test # Run tests
make lint # Run clippy
make fmt # Format code
make clean # Remove build artifacts
make doc # Generate and open documentation
make install # Install to /usr/local/bin (requires sudo)

make run ARGS='list --format json'
make run-release ARGS='info --all'

nvctl supports tab-completion for bash, zsh, and fish.
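For context, these completions are the kind a clap-based CLI typically emits through the clap_complete crate; the stub below only illustrates the mechanism with a placeholder command definition, not nvctl's real clap setup:

```rust
use clap::Command;
use clap_complete::{generate, Shell};
use std::io;

fn main() {
    // Placeholder command; nvctl's actual clap definition is richer.
    let mut cmd = Command::new("nvctl").about("NVIDIA GPU control tool");
    // Emit a bash completion script on stdout, as `nvctl completions bash` would.
    generate(Shell::Bash, &mut cmd, "nvctl", &mut io::stdout());
}
```

In normal use, the make targets and the completions subcommand below are all you need: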
# Generate completions for all shells
make completions
# Install system-wide (requires sudo)
sudo make install-completions

# Generate for specific shell
nvctl completions bash > nvctl.bash
nvctl completions zsh > _nvctl
nvctl completions fish > nvctl.fish

Bash:
# System-wide
sudo cp nvctl.bash /usr/share/bash-completion/completions/nvctl
# User only
mkdir -p ~/.local/share/bash-completion/completions
cp nvctl.bash ~/.local/share/bash-completion/completions/nvctl

Zsh:
# System-wide
sudo cp _nvctl /usr/share/zsh/site-functions/_nvctl
# User only (add to fpath in .zshrc)
mkdir -p ~/.zfunc
cp _nvctl ~/.zfunc/_nvctl
# Add to .zshrc: fpath=(~/.zfunc $fpath)

Fish:
# System-wide
sudo cp nvctl.fish /usr/share/fish/vendor_completions.d/nvctl.fish
# User only
cp nvctl.fish ~/.config/fish/completions/nvctl.fish

- Fork the repository
- Create a feature branch
- Write tests for new functionality
- Ensure all tests pass: cargo test
- Ensure no clippy warnings: cargo clippy -- -D warnings
- Submit a pull request
- No .unwrap() or .expect() in library code - use Result
- All public items must have /// documentation
- Domain types validate input on construction
- Tests required for new functionality
- Follow existing patterns in the codebase
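A hypothetical domain type illustrating these rules (the names are invented for the example and not taken from the nvctl codebase):

```rust
/// Fan speed as a percentage, validated on construction.
#[derive(Debug, Clone, Copy)]
pub struct FanSpeed(u8);

/// Error returned when a requested fan speed is outside 0-100%.
#[derive(Debug)]
pub struct InvalidFanSpeed(pub u8);

impl FanSpeed {
    /// Builds a `FanSpeed`, returning an error instead of panicking
    /// when the value is out of range.
    pub fn new(percent: u8) -> Result<Self, InvalidFanSpeed> {
        if percent <= 100 {
            Ok(Self(percent))
        } else {
            Err(InvalidFanSpeed(percent))
        }
    }

    /// Returns the validated percentage.
    pub fn percent(self) -> u8 {
        self.0
    }
}
```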
MIT License - see LICENSE for details.
- nvml-wrapper - Rust bindings for NVML
- clap - CLI argument parsing
- iced - Cross-platform GUI framework for the GUI application
- NVIDIA for the NVML library





