Replies: 2 comments
-
Check https://github.com/openai/whisper/blob/main/whisper/__init__.py: the *.pt checkpoints in the _MODELS dict are all FP16 models.
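If you want to verify this yourself, here is a minimal sketch, assuming `medium.pt` has already been downloaded (by default Whisper caches checkpoints under `~/.cache/whisper`); the `model_state_dict` key matches the checkpoint layout that `whisper.load_model()` reads:

```python
import torch
import whisper

# _MODELS maps model names ("tiny", "base", ..., "medium", "large")
# to the download URLs of the .pt checkpoints mentioned above.
print(list(whisper._MODELS.keys()))

# Inspect the raw checkpoint tensors directly; the path is a local
# copy of the downloaded file (adjust to your cache location).
ckpt = torch.load("medium.pt", map_location="cpu")
dtypes = {p.dtype for p in ckpt["model_state_dict"].values()}
print(dtypes)  # expected: {torch.float16}
```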
-
Yes, I asked a question about this in #1175 but got no answer.
You can check out https://github.com/guillaumekln/faster-whisper for lower VRAM usage.
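A minimal sketch of the lower-VRAM path with faster-whisper (assuming `pip install faster-whisper`); the quantized `int8_float16` compute type, one of the options CTranslate2 exposes, is what brings the memory footprint down, and `"audio.mp3"` is a placeholder path:

```python
from faster_whisper import WhisperModel

# Quantized weights plus FP16 activations cut VRAM well below what the
# original whisper package needs for the same model size.
model = WhisperModel("medium", device="cuda", compute_type="int8_float16")

# Segments are generated lazily as the audio is transcribed.
segments, info = model.transcribe("audio.mp3")
for segment in segments:
    print(f"[{segment.start:.2f} -> {segment.end:.2f}] {segment.text}")
```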
-
I have a 6GB GPU that barely fits the medium model, and I get the same out-of-memory error when using the FP16 flag. This project running Whisper with FP16 on the CPU, https://github.com/ggerganov/whisper.cpp#memory-usage, seems to require only 1.7GB of RAM. Is there an FP16 model that only requires 2GB of VRAM for medium?
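For reference, this is the setup that runs out of memory (a minimal sketch; `"audio.mp3"` is a placeholder path):

```python
import whisper

# Load the medium model onto the GPU; the upstream README lists roughly
# 5 GB of VRAM for "medium", which is why 6 GB only barely fits.
model = whisper.load_model("medium", device="cuda")

# fp16=True is already the default for transcribe() on GPU, so passing
# the flag explicitly makes no difference to the memory footprint.
result = model.transcribe("audio.mp3", fp16=True)
print(result["text"])
```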