@@ -14,7 +14,7 @@ High-performance inference of [OpenAI's Whisper](https://github.com/openai/whisp
 - Plain C/C++ implementation without dependencies
 - Apple Silicon first-class citizen - optimized via ARM NEON, Accelerate framework, Metal and [Core ML](#core-ml-support)
 - AVX intrinsics support for x86 architectures
-- VSX intrinsics support for POWER architectures
+- [VSX intrinsics support for POWER architectures](#power-vsx-intrinsics)
 - Mixed F16 / F32 precision
 - [Integer quantization support](#quantization)
 - Zero memory allocations at runtime
@@ -136,6 +136,20 @@ make -j large-v3-turbo
 | medium | 1.5 GiB | ~2.1 GB |
 | large  | 2.9 GiB | ~3.9 GB |
 
+## POWER VSX Intrinsics
+
+`whisper.cpp` supports POWER architectures and includes code which
+significantly speeds operation on Linux running on POWER9/10, making it
+capable of faster-than-realtime transcription on an underclocked Raptor
+Talos II. Ensure you have a BLAS package installed, and replace the
+standard cmake setup with:
+
147+ ``` bash 
148+ #  build with GGML_BLAS defined
149+ cmake -B build -DGGML_BLAS=1
150+ cmake --build build --config Release
151+ ./build/bin/whisper-cli [ .. etc .. ]
152+ 
 ## Quantization
 
 `whisper.cpp` supports integer quantization of the Whisper `ggml` models.
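 
 A minimal sketch of that step (assuming the `quantize` tool built in
 `build/bin` alongside the other binaries, with `q5_0` as the method;
 file names are illustrative):
 
 ```bash
 # produce a Q5_0-quantized copy of the base.en model
 ./build/bin/quantize models/ggml-base.en.bin models/ggml-base.en-q5_0.bin q5_0
 ```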