WhisperAi: can't use any model higher than Basic. How to fix? #1577
waldemdunn asked this question in Q&A (unanswered; 1 comment, 10 replies)
-
It looks like you may have a GPU with limited memory. With only 2 GB of VRAM, for example, the larger models cannot be used. See also |
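For reference, the model table in the openai/whisper README lists roughly 1 GB of VRAM for tiny and base, ~2 GB for small, ~5 GB for medium, and ~10 GB for large, so a 2 GB card can realistically only hold tiny or base. Below is a minimal sketch of a fallback strategy; the thresholds are rough figures from that table plus some headroom, and `load_whisper_for_device` is just an illustrative helper name, not part of the whisper API:

```python
import torch
import whisper

def load_whisper_for_device(preferred="medium"):
    """Load the preferred Whisper model on the GPU if it plausibly fits,
    otherwise fall back to a smaller model or to the CPU."""
    if torch.cuda.is_available():
        total_gib = torch.cuda.get_device_properties(0).total_memory / 2**30
        if total_gib >= 6:    # medium is listed at ~5 GB VRAM; leave headroom
            return whisper.load_model(preferred, device="cuda")
        if total_gib >= 3:    # small is listed at ~2 GB VRAM; leave headroom
            return whisper.load_model("small", device="cuda")
    # Too little VRAM: run on the CPU instead (slower, but no CUDA OOM).
    return whisper.load_model(preferred, device="cpu")

model = load_whisper_for_device()
result = model.transcribe("Tuesday.mov")
print(result["text"])
```

The CLI exposes the same escape hatch directly: `whisper Tuesday.mov --model medium --device cpu` runs the medium model on the CPU, which is slow but avoids the CUDA out-of-memory error.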
-
I'm trying to use Whisper on my PC, and I can only transcribe with the Basic model. When I try to use Small or Medium, it returns the following error (even for a small, 1-minute file):
_"Microsoft Windows [Version 10.0.19045.3208]
C:\Users\Vladimir\YandexDisk\Video\PRIVATE>whisper Tuesday.mov --model medium
Traceback (most recent call last):
File "C:\Users\Vladimir\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 197, in _run_module_as_main
return run_code(code, main_globals, None,
File "C:\Users\Vladimir\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 87, in run_code
exec(code, run_globals)
File "C:\Users\Vladimir\AppData\Local\Programs\Python\Python39\Scripts\whisper.exe_main.py", line 7, in
File "C:\Users\Vladimir\AppData\Local\Programs\Python\Python39\lib\site-packages\whisper\transcribe.py", line 444, in cli
model = load_model(model_name, device=device, download_root=model_dir)
File "C:\Users\Vladimir\AppData\Local\Programs\Python\Python39\lib\site-packages\whisper_init.py", line 154, in load_model
return model.to(device)
File "C:\Users\Vladimir\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
return self._apply(convert)
File "C:\Users\Vladimir\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "C:\Users\Vladimir\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "C:\Users\Vladimir\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
[Previous line repeated 2 more times]
File "C:\Users\Vladimir\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py", line 820, in apply
param_applied = fn(param)
File "C:\Users\Vladimir\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 2.00 GiB total capacity; 1.72 GiB already allocated; 0 bytes free; 1.72 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF"
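Side note on the last line of that message: `max_split_size_mb` is an option of PyTorch's caching allocator, set through the `PYTORCH_CUDA_ALLOC_CONF` environment variable before CUDA is first used. It only mitigates fragmentation, though; with 1.72 GiB of 2.00 GiB already allocated, it is unlikely to free enough memory for the medium model. A minimal sketch, assuming the variable is set before torch touches the GPU:

```python
import os

# Must be set before the first CUDA allocation; from cmd the equivalent is:
#   set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import whisper

model = whisper.load_model("medium", device="cuda")  # the weights still have to fit
```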
I followed the video https://youtu.be/msj3wuYf3d8 (How to do Free Speech-to-Text Transcription Better Than Google Premium API with OpenAI Whisper Model by SECourses) to set up Whisper, but I couldn't find a solution to my problem there.
How can I fix this?