PyTorch 2.1.0 causes a segfault when running Whisper in a container #1696
neilshevlin started this conversation in General
Replies: 1 comment
-
This issue also occurs with the PyTorch 2.6.0 release. I had to roll back to version 2.5.1 to resolve it. Hope this helps!
-
To start: Whisper has no issue running in a container when using the stated torch version 1.10.1. However, the latest version of torch (2.1.0) causes a 139 (SIGSEGV) fault.
The issue is not related to downloading or writing Whisper's .pt models. By default, Whisper downloads and writes the model to ~/.cache/whisper, and improper permissions on that directory can cause a segfault on their own; to avoid this, give the container user proper write access to the model directory (a sketch follows below).
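For anyone hitting the permissions case, here is a minimal sketch (not from the original report) of redirecting the model download to a writable directory; `/app/models` and `audio.wav` are placeholder names:

```python
import os

import whisper  # openai-whisper

# Placeholder path: any directory the container user can write to.
MODEL_DIR = "/app/models"
os.makedirs(MODEL_DIR, exist_ok=True)

# download_root overrides the default ~/.cache/whisper location, so the
# .pt model is written somewhere with proper permissions.
model = whisper.load_model("base", download_root=MODEL_DIR)

# Placeholder input file for illustration.
result = model.transcribe("audio.wav")
print(result["text"])
```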
The issue, however, is that PyTorch 2.1.0 itself causes this fault. It shows up in the encoder forward pass when doing the convolution over the spectrogram tensor. Specifically, PyTorch seems to fail on the conv1d operation when Whisper's Conv1d layer hands off to torch.nn.functional.conv1d(). This becomes an issue with PyTorch 2.1.0; it works fine with 2.0.1. A minimal sketch for isolating that call is below.
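To check whether the crash reproduces in the conv1d hand-off alone, something along these lines can be run inside the container; the shapes mirror Whisper's first encoder convolution for the base model and are assumptions for illustration, not values from the original report:

```python
import torch
import torch.nn.functional as F

print(torch.__version__)

# Shapes roughly matching Whisper's first encoder convolution
# (80 mel bins, 3000 frames, base-model width 512).
mel = torch.randn(1, 80, 3000)      # (batch, n_mels, n_frames)
weight = torch.randn(512, 80, 3)    # (out_channels, in_channels, kernel_size)
bias = torch.randn(512)

# This is the hand-off that appears to segfault under 2.1.0 in the container.
out = F.conv1d(mel, weight, bias, stride=1, padding=1)
print(out.shape)  # torch.Size([1, 512, 3000])
```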