High-performance inference of [OpenAI's Whisper](https://github.com/openai/whisper):
- Mixed F16 / F32 precision
- [4-bit and 5-bit integer quantization support](https://github.com/ggerganov/whisper.cpp#quantization)
- Zero memory allocations at runtime
- Vulkan support
- Support for CPU-only inference
- [Efficient GPU support for NVIDIA](https://github.com/ggerganov/whisper.cpp#nvidia-gpu-support-via-cublas)
- [OpenVINO Support](https://github.com/ggerganov/whisper.cpp#openvino-support)
## NVIDIA GPU support

```
make clean
GGML_CUDA=1 make -j
```

## Vulkan GPU support

Vulkan is a cross-vendor API that lets you offload inference work to GPUs from any vendor. First, make sure your graphics card driver provides support for the Vulkan API.

Now build `whisper.cpp` with Vulkan support:
```
make clean
make GGML_VULKAN=1
```

## BLAS CPU support via OpenBLAS

Encoder processing can be accelerated on the CPU via OpenBLAS.
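By analogy with the CUDA and Vulkan builds above, the OpenBLAS build presumably follows the same `make` pattern. A sketch only; the exact `GGML_OPENBLAS` flag name is an assumption inferred from the `GGML_CUDA`/`GGML_VULKAN` flags shown in this document:

```shell
make clean
GGML_OPENBLAS=1 make -j   # flag name assumed by analogy with GGML_CUDA / GGML_VULKAN
```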