Replies: 2 comments 5 replies
-
You can try to free up GPU memory: …
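One way to sketch that suggestion in PyTorch (the helper name `free_gpu_memory` is mine, not from this thread): drop the reference to the model, force garbage collection, then ask PyTorch to return its cached CUDA blocks to the driver.

```python
import gc

def free_gpu_memory(model=None):
    """Best-effort release of GPU memory held by a loaded model."""
    if model is not None:
        del model  # drop this function's reference to the model
    gc.collect()  # collect any cycles still pointing at GPU tensors
    try:
        import torch
        if torch.cuda.is_available():
            # Return cached, unused blocks to the driver so the next
            # load_model call (or another process) can use them.
            torch.cuda.empty_cache()
            return True
    except ImportError:
        pass
    return False  # no torch / no CUDA device to clean up
```

Note that `del` inside the function only removes the function's local reference; the caller also has to drop its own (`del model` at the call site) before the memory can actually be reclaimed.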
-
Looks like your VRAM (8 GB) is not big enough to keep running the …
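The per-model VRAM figures below are the approximate requirements listed in the openai-whisper README (large needs roughly 10 GB, which is why an 8 GB card runs out); the helper name is mine. A quick sketch for picking the biggest model size that fits a given card:

```python
# Approximate required VRAM per Whisper model size, in GB, as listed
# in the openai-whisper README (rounded figures).
VRAM_GB = {"tiny": 1, "base": 1, "small": 2, "medium": 5, "large": 10}

def largest_model_that_fits(vram_gb: float) -> str:
    """Return the biggest Whisper model whose stated VRAM need fits."""
    fitting = [name for name, need in VRAM_GB.items() if need <= vram_gb]
    return fitting[-1] if fitting else "tiny"  # fall back to the smallest

# An 8 GB card fits "medium" but not "large" / "large-v2".
```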
-
Hello,
I use the Whisper model to transcribe English audio files. The first time I ran Whisper on the GPU it worked properly, but every run after that failed with a memory error:
"CUDA out of memory"
This is my code:
import whisper
model = whisper.load_model('large-v2', 'cpu')  # second argument is the device
result = model.transcribe("1.wav")
How can I fix the error?