@@ -11,7 +11,7 @@ OneVox uses a model-centric architecture where the backend is automatically sele

 | Feature | whisper.cpp | ONNX Runtime |
 |---------|-------------|--------------|
-| **Build** | Default | `--features onnx` |
+| **Build** | Default | Default (included) |
 | **Selection** | Auto (GGML models) | Auto (ONNX/Parakeet models) |
 | **Stability** | Production-ready | Experimental |
 | **Speed** | 50-200ms | Varies by model |
@@ -104,7 +104,7 @@ let transcription = model.transcribe(&audio_samples, 16000)?;

 **Build:**
 ```bash
-cargo build --release --features onnx
+cargo build --release  # ONNX support included by default
 ```

 **Implementation:** `src/models/onnx_runtime.rs` (571 lines)
@@ -195,11 +195,11 @@ pub trait ModelRuntime: Send + Sync {
 [model]
 # Backend auto-detected from model_path
 # - GGML models (ggml-*) use whisper.cpp
-# - Parakeet/ONNX models use ONNX Runtime (requires --features onnx)
+# - Parakeet/ONNX models use ONNX Runtime (included by default)

 model_path = "ggml-base.en"  # English-only (whisper.cpp)
 # model_path = "ggml-base"  # Multilingual, 99+ languages (whisper.cpp)
-# model_path = "parakeet-ctc-0.6b"  # ONNX model (requires --features onnx)
+# model_path = "parakeet-ctc-0.6b"  # ONNX model (included by default)

 # Device selection
 device = "auto"  # auto, cpu, gpu
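
The auto-detection rule described in the config comments above (names starting with `ggml-` go to whisper.cpp, Parakeet/ONNX names go to ONNX Runtime) can be sketched as follows; `Backend` and `detect_backend` are illustrative names, not OneVox's actual API:

```rust
// Hedged sketch of filename-based backend selection. The real logic
// lives in OneVox's model loader; these names are assumptions.
#[derive(Debug, PartialEq)]
enum Backend {
    WhisperCpp, // GGML models (ggml-*)
    Onnx,       // Parakeet/ONNX models
}

fn detect_backend(model_path: &str) -> Backend {
    // Inspect only the final path component so directories don't matter.
    let name = model_path.rsplit('/').next().unwrap_or(model_path);
    if name.starts_with("ggml-") {
        Backend::WhisperCpp
    } else {
        Backend::Onnx
    }
}

fn main() {
    assert_eq!(detect_backend("ggml-base.en"), Backend::WhisperCpp);
    assert_eq!(detect_backend("parakeet-ctc-0.6b"), Backend::Onnx);
}
```

Keying the choice off the model name keeps the config to a single `model_path` field, which is why this commit can drop the `--features onnx` caveat without adding a new `backend =` setting.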
@@ -225,7 +225,7 @@ preload = true
 - `ggml-large-v3` (2.9GB)
 - `ggml-large-v3-turbo` (1.6GB)

-*ONNX (requires --features onnx):*
+*ONNX (included by default):*
 - `parakeet-ctc-0.6b` - Multilingual, INT8 quantized

 **Switching models:**
@@ -237,11 +237,11 @@ preload = true

 ```toml
 [features]
-default = ["whisper-cpp", "overlay-indicator"]
+default = ["whisper-cpp", "onnx", "overlay-indicator"]

-# Model backends (mutually exclusive in practice, but can coexist)
-whisper-cpp = ["whisper-rs"]  # Native whisper.cpp (recommended)
-onnx = ["ort", "ort-sys", "ndarray"]  # ONNX Runtime (multilingual)
+# Model backends
+whisper-cpp = ["whisper-rs"]  # Native whisper.cpp (default)
+onnx = ["ort", "ort-sys", "ndarray"]  # ONNX Runtime (default)
 candle = ["candle-core", "candle-nn", "candle-transformers"]  # Pure Rust (experimental)

 # GPU acceleration (whisper-cpp only)
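
Since both backends are now compiled in by default, runtime code can still check which Cargo features were enabled at compile time via the `cfg!` macro; a minimal sketch (the helper name is hypothetical, not from the OneVox codebase):

```rust
// Illustrative only: maps a backend name to whether its Cargo feature
// was enabled at compile time. With the default feature set above,
// both "whisper-cpp" and "onnx" would report true; unknown names
// always report false.
fn backend_available(name: &str) -> bool {
    match name {
        "whisper-cpp" => cfg!(feature = "whisper-cpp"),
        "onnx" => cfg!(feature = "onnx"),
        "candle" => cfg!(feature = "candle"),
        _ => false,
    }
}

fn main() {
    for b in ["whisper-cpp", "onnx", "candle"] {
        println!("{b}: {}", backend_available(b));
    }
}
```

Because `cfg!` resolves at compile time, a `--no-default-features --features whisper-cpp` build (shown below) would make `backend_available("onnx")` return false with no runtime cost.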
@@ -257,20 +257,17 @@ overlay-indicator = ["eframe", "winit"] # Visual recording indicator

 **Build examples:**
 ```bash
-# Default (whisper.cpp + overlay)
+# Default (includes both whisper.cpp and ONNX)
 cargo build --release

-# With ONNX support
-cargo build --release --features onnx
-
-# Both backends available (larger binary)
-cargo build --release --features "whisper-cpp,onnx"
+# Whisper.cpp only (minimal build)
+cargo build --release --no-default-features --features whisper-cpp

 # GPU-accelerated whisper.cpp (macOS)
 cargo build --release --features metal

-# ONNX + TUI
-cargo build --release --features "onnx,tui"
+# GPU-accelerated with ONNX
+cargo build --release --features "metal"
 ```

 ## Design Principles