openai-whisper does not use GPU or CUDA #1694
Replies: 3 comments
-
Let's close this. I'm now using whisper-faster, since you don't offer an install guide for CUDA. Appreciate the project, though. Thanks.
-
In case anyone else stumbles across this: make sure you install torch with the CUDA version specified, not just the default package. Per PyTorch's "Start Locally" page, I used:
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
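A quick way to confirm the CUDA build actually got installed (a minimal sketch, not part of the original reply; the expected version strings are an assumption based on the cu121 wheels mentioned above):

```python
# Hedged check: after installing the cu121 wheels, the torch version string
# should carry a "+cu121" suffix and CUDA should be reported as available.
import torch

print(torch.__version__)          # e.g. "2.1.0+cu121" for a CUDA build, "2.1.0+cpu" for CPU-only
print(torch.version.cuda)         # CUDA toolkit version the wheel was built against, or None
print(torch.cuda.is_available())  # must be True for whisper --device cuda to work
```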
-
According to the official PyTorch site, PyTorch currently only supports Python 3.8-3.11, so use one of those versions. On the official "Start Locally" page, make your selection according to your device, and you will get the suitable command at the bottom under "Run this Command".
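A small sanity check before picking a command (a hedged sketch; the 3.8-3.11 bounds come from the reply above, not from separate verification):

```python
# Minimal sketch: confirm the running interpreter is inside the Python range
# that the reply above mentions before installing the CUDA wheels.
import sys

major, minor = sys.version_info[:2]
if (3, 8) <= (major, minor) <= (3, 11):
    print(f"Python {major}.{minor} looks fine; pick the matching command on pytorch.org")
else:
    print(f"Python {major}.{minor} may not be supported by current PyTorch wheels")
```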
-
Just installed it with
pip install -U openai-whisper
on a Windows environment that has NVIDIA hardware and where SD and h2oai work perfectly. openai-whisper never uses the GPU and errors out when adding
--device cuda
with:
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
How do we activate CUDA? Do we need to compile it from source? It's not in the README. Where is the wiki for Windows?
CUDA version:
NVIDIA-SMI 528.24 Driver Version: 528.24 CUDA Version: 12.0
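For context (not part of the original post): nvidia-smi only reports what the driver supports; the error above typically means the installed torch wheel is a CPU-only build. A minimal, hedged sketch to check which torch build whisper is actually importing, and to load a model on the GPU once a CUDA build is present:

```python
# Hedged diagnostic sketch: a CPU-only torch wheel makes torch.cuda.is_available()
# return False even on a machine with an NVIDIA GPU, which triggers the
# "Attempting to deserialize object on a CUDA device" error inside whisper.
import torch

print("torch build:", torch.__version__)        # a "+cpu" suffix means a CPU-only wheel
print("built for CUDA:", torch.version.cuda)    # None for a CPU-only wheel
print("CUDA available:", torch.cuda.is_available())

if torch.cuda.is_available():
    # whisper.load_model() accepts a device argument, so the model can go straight to the GPU.
    import whisper
    model = whisper.load_model("base", device="cuda")
    print("model loaded on:", next(model.parameters()).device)
```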