Replies: 2 comments 1 reply
-
It looks like you are reloading the model multiple times. In this code:

```python
for audio_file in audio_files_list:
    run_whisper(audio_file)
```

you can instead load the model once, outside the loop:

```python
model = whisper.load_model(model_size)
for audio_file in audio_files_list:
    result = model.transcribe(audio_file, **options)
```
-
Thanks @glangford. You're right, it was by design to load the model inside the loop. The motivation behind that design was to compare the outputs of each run while the model started "afresh" for each file. Is there a way for me to "reset" the model for each run/file without reloading it? Kind regards
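One observation that may help: `model.transcribe` does not modify the model's weights, so the only per-run state that differs between files is the random number generator used for sampling. A "reset" can therefore be achieved by re-seeding before each file rather than reloading the weights (with real Whisper that would be `torch.manual_seed(...)` before each call). Below is a minimal stdlib-only sketch of the pattern; `fake_transcribe`, `transcribe_stub`-style names, and the seed value are hypothetical stand-ins, not Whisper's API:

```python
import random

def fake_transcribe(model, path, rng):
    # Hypothetical stand-in for model.transcribe(); real Whisper draws
    # its sampling decisions from torch's global RNG instead of `rng`.
    return [rng.random() for _ in range(3)]

model = object()  # stand-in for whisper.load_model(model_size)

outputs = []
for path in ["a.wav", "b.wav"]:
    # Re-seed before each file: every run starts from an identical RNG
    # state, which gives the "afresh" behaviour without a reload.
    rng = random.Random(42)
    outputs.append(fake_transcribe(model, path, rng))

# Both runs saw the same initial RNG state, so they sample identically.
assert outputs[0] == outputs[1]
```

With Whisper itself, calling `torch.manual_seed(seed)` at the top of each loop iteration plays the role of `rng = random.Random(42)` here.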
-
Hi, while trying to transcribe several files I run out of RAM. It looks like Whisper does not release RAM after each call. I have tried using del and gc but with no effect. What am I doing wrong?
Below is the code. The environment is Colab Pro and the model runs on a T4 GPU.
Thanks!
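For reference, the `del` + `gc` pattern does reclaim CPython objects as long as no other strong reference survives the loop iteration; in a notebook, the output history (`Out[...]`) or a list that accumulates results is a common hidden reference that defeats it, and on the GPU side `torch.cuda.empty_cache()` is additionally needed to return cached CUDA memory to the driver. The sketch below demonstrates only the CPU-side mechanics with stdlib stand-ins (`FakeResult` and `transcribe_stub` are hypothetical, not Whisper APIs):

```python
import gc
import weakref

class FakeResult:
    """Hypothetical stand-in for a large transcription result."""
    def __init__(self):
        self.payload = bytearray(10_000_000)  # ~10 MB of dummy data

def transcribe_stub(path):
    # Stand-in for model.transcribe(path); returns a big object.
    return FakeResult()

tracker = []  # weak references let us verify reclamation without keeping objects alive
for path in ["a.wav", "b.wav"]:
    result = transcribe_stub(path)
    tracker.append(weakref.ref(result))
    # ... consume `result` here (e.g. write the text to disk) ...
    del result    # drop the last strong reference before the next file
    gc.collect()  # collect any reference cycles; with Whisper on GPU you
                  # would also call torch.cuda.empty_cache() here

# Every result object was actually reclaimed between iterations.
assert all(ref() is None for ref in tracker)
```

If memory still grows with real Whisper, the usual culprits are a list of full `result` dicts kept across iterations, or the notebook retaining the last cell's value.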