A terminal dashboard for monitoring an Ollama server, built with Rust and Ratatui.
```
ollama-tui   ● Connected   v0.5.1   Refresh: 2s
┌ Running Models (2) ──────────────────────────────────────────┐
│  Name         Size     VRAM   Quant    Ctx    Expires        │
│ ▶llama3:8b    4.7 GB   100%   Q4_K_M   8192   4m 12s         │
│  mistral:7b   4.1 GB   50%    Q4_0     4096   2m 30s         │
└──────────────────────────────────────────────────────────────┘
┌ Available Models (3) ────────────────────────────────────────┐
│  Name         Size     Family   Params   Quant    Modified   │
│  llama3:8b    4.7 GB   llama    8.0B     Q4_K_M   2d ago     │
│  mistral:7b   4.1 GB   llama    7.2B     Q4_0     5d ago     │
│  gemma3:4b    2.5 GB   gemma3   3.9B     Q4_K_M   12d ago    │
└──────────────────────────────────────────────────────────────┘
 q: Quit  r: Refresh  ↑↓/jk: Scroll  Tab: Switch panel  ?: Help
```
- Live monitoring -- automatically polls the Ollama API and updates the display
- Running models -- shows loaded models with VRAM/RAM allocation, quantization level, context length, and expiration countdown
- Model catalog -- lists all locally available models with size, family, parameter count, and modification date
- Connection status -- color-coded indicator (green/yellow/red) with server version display
- VRAM color coding -- green for full GPU, yellow for partial offload, red for CPU-only
- Keyboard-driven -- vim-style navigation with scrollable tables and panel switching
- Graceful degradation -- continues polling and displays status when the server is unreachable
- Help overlay -- press `?` for a quick keybinding reference
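The VRAM color rule above can be sketched as a small helper. This is a hypothetical illustration (the function name, signature, and exact thresholds are assumptions, not the project's actual code); the idea is that Ollama reports how much of a loaded model resides in GPU memory.

```rust
// Hypothetical mapping from GPU-resident fraction to a status color,
// mirroring the rule described in the feature list above.
fn vram_color(gpu_fraction: f64) -> &'static str {
    if gpu_fraction >= 1.0 {
        "green" // model fully offloaded to the GPU
    } else if gpu_fraction > 0.0 {
        "yellow" // partial offload: some layers remain on the CPU
    } else {
        "red" // CPU-only
    }
}

fn main() {
    assert_eq!(vram_color(1.0), "green");
    assert_eq!(vram_color(0.5), "yellow");
    assert_eq!(vram_color(0.0), "red");
}
```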
```
cargo install --path .
```

Or build from source:

```
cargo build --release
```

The binary will be at `target/release/ollama-tui`.
```
ollama-tui [OPTIONS]
```

| Flag | Default | Description |
|---|---|---|
| `--host <HOST>` | `localhost` | Ollama server hostname |
| `--port <PORT>` | `11434` | Ollama server port |
| `--refresh <SECS>` | `2` | Polling interval in seconds (minimum: 1) |
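The "minimum: 1" constraint on `--refresh` could be enforced with a validator along these lines. This is a sketch under assumed names; the actual parsing in `cli.rs` may use a different mechanism entirely.

```rust
// Hypothetical validator for the --refresh flag. The real cli.rs may
// implement this differently (e.g. as part of argument parsing).
fn parse_refresh(s: &str) -> Result<u64, String> {
    match s.parse::<u64>() {
        Ok(n) if n >= 1 => Ok(n),
        Ok(_) => Err("refresh interval must be at least 1 second".to_string()),
        Err(e) => Err(format!("invalid refresh interval: {e}")),
    }
}

fn main() {
    assert_eq!(parse_refresh("5"), Ok(5));
    assert!(parse_refresh("0").is_err());
    assert!(parse_refresh("abc").is_err());
}
```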
```
# Monitor local Ollama with defaults
ollama-tui

# Connect to a remote server
ollama-tui --host 192.168.1.100

# Custom port and slower polling
ollama-tui --port 8080 --refresh 5
```

| Key | Action |
|---|---|
| `q` / `Ctrl+C` | Quit |
| `r` | Refresh immediately |
| `Tab` | Next panel |
| `Shift+Tab` | Previous panel |
| `j` / `Down` | Scroll down |
| `k` / `Up` | Scroll up |
| `g` / `Home` | Go to top |
| `G` / `End` | Go to bottom |
| `?` | Toggle help overlay |
| `Esc` | Close help overlay |
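Dispatching those keys to actions might look like the following sketch. The enum and function names are assumptions; the real handling in `app.rs` operates on terminal key events, which also cover `Tab`, `Esc`, and the arrow keys.

```rust
// Hypothetical key-to-action mapping for the character keys in the
// table above. Modifier and special keys (Tab, Esc, arrows) would be
// matched on the terminal library's key-event type in the real app.
#[derive(Debug, PartialEq)]
enum Action {
    Quit,
    Refresh,
    ScrollDown,
    ScrollUp,
    GoTop,
    GoBottom,
    ToggleHelp,
    Ignore,
}

fn dispatch(key: char) -> Action {
    match key {
        'q' => Action::Quit,
        'r' => Action::Refresh,
        'j' => Action::ScrollDown,
        'k' => Action::ScrollUp,
        'g' => Action::GoTop,
        'G' => Action::GoBottom,
        '?' => Action::ToggleHelp,
        _ => Action::Ignore,
    }
}

fn main() {
    assert_eq!(dispatch('q'), Action::Quit);
    assert_eq!(dispatch('?'), Action::ToggleHelp);
    assert_eq!(dispatch('x'), Action::Ignore);
}
```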
```
src/
├── main.rs     Entry point, terminal setup, async event loop
├── app.rs      Application state and input handling
├── cli.rs      Command-line argument parsing
├── event.rs    Event multiplexing (keyboard, tick, data updates)
├── ollama.rs   HTTP client and API data structures
├── ui.rs       Ratatui rendering (status bar, tables, help overlay)
└── format.rs   Display formatting (bytes, countdowns, relative dates)
```
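For illustration, the display helpers in `format.rs` might look roughly like this. The function names and rounding behavior are assumptions based on the values shown in the screenshot ("4.7 GB", "4m 12s"), not the project's actual implementation.

```rust
// Hypothetical versions of the format.rs helpers: decimal byte units
// and a minutes/seconds expiry countdown.
fn format_bytes(bytes: u64) -> String {
    const UNITS: [&str; 4] = ["B", "KB", "MB", "GB"];
    let mut value = bytes as f64;
    let mut unit = 0;
    while value >= 1000.0 && unit < UNITS.len() - 1 {
        value /= 1000.0;
        unit += 1;
    }
    if unit == 0 {
        format!("{bytes} B")
    } else {
        format!("{value:.1} {}", UNITS[unit])
    }
}

fn format_countdown(secs: u64) -> String {
    format!("{}m {}s", secs / 60, secs % 60)
}

fn main() {
    assert_eq!(format_bytes(4_700_000_000), "4.7 GB");
    assert_eq!(format_countdown(252), "4m 12s");
}
```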
The application uses a background poller task that periodically calls the Ollama API (`/api/ps`, `/api/tags`, `/api/version`) and sends updates through a channel to the main event loop. Terminal input, tick events, and data updates are multiplexed with `tokio::select!`, keeping the UI responsive while polling runs independently.
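The channel-based flow can be sketched with the standard library alone. This is a simplified illustration: it uses a blocking thread and `mpsc::Receiver::recv` where ollama-tui spawns tokio tasks and multiplexes sources with `tokio::select!`, and the event and function names here are assumptions.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Simplified event type; the real loop also carries key events and ticks.
enum Event {
    Data(String), // stands in for a parsed API response from the poller
    Quit,
}

// Spawn a background "poller" that periodically sends updates through a
// channel, standing in for calls to /api/ps, /api/tags, /api/version.
fn run_event_loop() -> Vec<String> {
    let (tx, rx) = mpsc::channel::<Event>();
    thread::spawn(move || {
        for i in 0..3 {
            thread::sleep(Duration::from_millis(5));
            tx.send(Event::Data(format!("update {i}"))).ok();
        }
        tx.send(Event::Quit).ok();
    });

    // Main loop: a single receiver funnels every event source, so
    // rendering never blocks on a slow HTTP request.
    let mut seen = Vec::new();
    loop {
        match rx.recv() {
            Ok(Event::Data(d)) => seen.push(d), // would trigger a redraw
            Ok(Event::Quit) | Err(_) => break,
        }
    }
    seen
}

fn main() {
    for update in run_event_loop() {
        println!("redraw with {update}");
    }
}
```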
- Rust 1.86+ (edition 2024)
- A running Ollama server
MIT