Replies: 1 comment
-
Did you try running it in a Python/IPython console, and from the CLI?
-
Hi everyone,
I recently installed Whisper in a fresh Anaconda environment on my ASUS ROG 16 laptop with the following specs:
- CPU: AMD Ryzen 9
- GPU: NVIDIA RTX 3060 (laptop)
- OS: Windows 11
- CUDA & PyTorch: properly installed; CUDA is recognized
- Whisper / WhisperX: installed via pip in a new conda environment
- Environment: running inside Jupyter Notebook
The installation itself went smoothly, and the model downloads correctly. `torch.cuda.is_available()` returns `True`, and the GPU is properly detected. However, as soon as I start transcription with `device='cuda'`, the Jupyter kernel crashes immediately, with no meaningful error message.
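For reference, even a bare PyTorch matrix multiply, with nothing from Whisper involved, shows whether the CUDA stack itself is at fault. This is just a minimal sketch (the tensor sizes are arbitrary, and it falls back to CPU so it also runs without a GPU):

```python
import torch

# Minimal check of the CUDA stack with no Whisper involved: if this
# already kills the kernel, the problem is the PyTorch/CUDA setup,
# not Whisper itself.
device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(256, 256, device=device)
y = x @ x                      # forces an actual kernel launch on CUDA
if device == "cuda":
    torch.cuda.synchronize()   # surfaces asynchronous CUDA errors here
print(f"matmul OK on {device}")
```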
Details:
- Using `device='cpu'` works without issue.
- The crash happens exactly when Whisper starts running on the GPU (not during import or setup).
- The same crash occurs even with the smaller models (`tiny`, `base`).
- `nvidia-smi` detects the GPU, and there are no memory issues before the task starts.
Things I've already tried:
- Verified that the CUDA and torch versions are compatible (torch 2.1, CUDA 11.8)
- Reinstalled the environment and dependencies
- GPU driver is up to date
- No other heavy GPU processes are running
- The same crash also happens with WhisperX
Question:
Has anyone experienced similar crashes with RTX 30-series GPUs on laptops when using Whisper with CUDA?
Is this related to a driver or Jupyter issue on Windows? Any tips on how to debug this more effectively (e.g. viewing core dumps or detailed logs), or ideas about what the problem could be?
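For the logging side, this is what I'm planning to try next in the notebook before loading the model. It is only a sketch: `faulthandler` is standard library, and `CUDA_LAUNCH_BLOCKING` is a stock CUDA/PyTorch environment variable that makes kernel launches synchronous so an error points at the real call.

```python
import faulthandler
import os

# Make CUDA kernel launches synchronous so a crash points at the actual
# failing call (must be set before any CUDA work happens in the process):
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

# Dump a Python traceback even on hard crashes (e.g. SIGSEGV) that would
# otherwise just kill the Jupyter kernel silently:
faulthandler.enable()
print("crash diagnostics enabled:", faulthandler.is_enabled())
```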
Thanks in advance for any suggestions or ideas!
Best regards