@@ -17,7 +17,7 @@ High-performance inference of [OpenAI's Whisper](https://github.com/openai/whisp
 - Plain C/C++ implementation without dependencies
 - Apple Silicon first-class citizen - optimized via ARM NEON, Accelerate framework, Metal and [Core ML](#core-ml-support)
 - AVX intrinsics support for x86 architectures
-- VSX intrinsics support for POWER architectures
+- [VSX intrinsics support for POWER architectures](#power-vsx-intrinsics)
 - Mixed F16 / F32 precision
 - [Integer quantization support](#quantization)
 - Zero memory allocations at runtime
@@ -139,6 +139,20 @@ make -j large-v3-turbo
 | medium | 1.5 GiB | ~2.1 GB |
 | large  | 2.9 GiB | ~3.9 GB |
 
+## POWER VSX Intrinsics
+
+`whisper.cpp` supports POWER architectures and includes code that
+significantly speeds up operation on Linux running on POWER9/10, making it
+capable of faster-than-realtime transcription on an underclocked Raptor
+Talos II. Ensure you have a BLAS package installed, and replace the
+standard cmake setup with:
+
+```bash
+# build with GGML_BLAS defined
+cmake -B build -DGGML_BLAS=1
+cmake --build build --config Release
+./build/bin/whisper-cli [ .. etc .. ]
+```
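
The section above assumes a BLAS package is already present. As a minimal sketch, on a Debian-based distribution OpenBLAS can provide the backend (the package name is an assumption; use your distribution's equivalent):

```bash
# install OpenBLAS to satisfy the BLAS requirement (Debian/Ubuntu package name)
sudo apt install libopenblas-dev
```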
 ## Quantization
 
 `whisper.cpp` supports integer quantization of the Whisper `ggml` models.
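
As an illustration of the quantization workflow, here is a minimal sketch assuming the project's `quantize` tool has been built alongside `whisper-cli` and a base English model has already been downloaded (the model path and the `q5_0` method are illustrative):

```bash
# quantize an F16 ggml model down to Q5_0 (paths and method are illustrative)
./build/bin/quantize models/ggml-base.en.bin models/ggml-base.en-q5_0.bin q5_0

# run inference as usual, pointing at the quantized model
./build/bin/whisper-cli -m models/ggml-base.en-q5_0.bin samples/jfk.wav
```

Quantized models require less memory and disk space and, depending on the hardware, can be processed more efficiently.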