Replies: 4 comments
-
I have the exact same issue. I started using multiprocessing to run whisper, and it never actually does the transcription.
-
I actually pinpointed it to the STFT inside the `log_mel_spectrogram` function.
-
@molitoris I think I figured out the problem. PyTorch cannot use CUDA in a forked subprocess, so CUDA-based work fails under multiprocessing's default fork start method. I had to set:

import torch.multiprocessing as mp
mp.set_start_method("spawn", force=True)
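The spawn pattern above can be sketched with the standard library alone (`torch.multiprocessing` is a drop-in wrapper around it). The worker function here is a hypothetical placeholder for the real transcription call; the point is the start-method plumbing, not the whisper API:

```python
import multiprocessing as mp

def transcribe_worker(q):
    # Stand-in for the real whisper call; in the actual program this is
    # where model.transcribe(...) would run (hypothetical placeholder).
    q.put("ok")

def run_in_spawned_process():
    # A spawn context starts a fresh interpreter for the child, so no
    # CUDA state is inherited from the parent -- the condition PyTorch
    # requires for using the GPU in a subprocess.  Using a context
    # composes better than the global set_start_method, though
    # set_start_method("spawn", force=True) works as well.
    ctx = mp.get_context("spawn")
    q = ctx.Queue()
    p = ctx.Process(target=transcribe_worker, args=(q,))
    p.start()
    result = q.get()
    p.join()
    return result

if __name__ == "__main__":
    print(run_in_spawned_process())
```

Note that with spawn the child re-imports the module, so the worker must be defined at module top level (picklable by reference), and the launching code must sit behind an `if __name__ == "__main__":` guard.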
-
My goal was to run whisper in a separate `multiprocessing.Process`. The Process dies silently (without raising an error) when it calls `self.model.transcribe(audio, language='english')`. The call `audio = F.pad(audio, (0, padding))` in `whisper/audio.py` is the trigger.

According to PyTorch it is recommended to replace `multiprocessing` with `torch.multiprocessing`. This did not change the behavior.

How can I run whisper in a `multiprocessing.Process`?

System

Minimal example
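Since the child exits without a Python traceback, one stdlib-only way to confirm that the worker crashed (rather than hung) is to inspect `Process.exitcode` after `join()`. This is a minimal sketch in which `os._exit(1)` merely simulates a native-level failure such as the CUDA crash inside `F.pad`:

```python
import multiprocessing as mp
import os

def crashing_worker():
    # Simulate a crash in native code (e.g. inside CUDA during F.pad):
    # the child terminates without any exception reaching the parent.
    os._exit(1)

def run_and_check():
    ctx = mp.get_context("spawn")
    p = ctx.Process(target=crashing_worker)
    p.start()
    p.join()
    # exitcode is 0 on success, positive after os._exit/sys.exit(n),
    # and negative (-signum) if the child was killed by a signal.
    return p.exitcode

if __name__ == "__main__":
    print(run_and_check())
```

A nonzero or negative `exitcode` distinguishes a silently dead worker from one that is still blocked, which helps narrow the problem to the fork-vs-spawn start method.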