Replies: 7 comments 6 replies
-
What do you get if you run this?
python -m torch.utils.collect_env
-
Here's the dump:
PS C:\WINDOWS\system32> python -m torch.utils.collect_env
Collecting environment information...
PyTorch version: 2.1.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Home
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.10 (tags/v3.10.10:aad5f6a, Feb 7 2023, 17:20:36) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22621-SP0
Is CUDA available: True
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3070 Laptop GPU
Nvidia driver version: 537.42
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=2304
DeviceID=CPU0
Family=198
L2CacheSize=10240
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=2304
Name=11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz
ProcessorType=3
Revision=
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] torch==2.1.0+cu121
[pip3] torchaudio==2.1.0
[pip3] torchvision==0.16.0
[conda] Could not collect
Thanks.
…On Wed, Oct 18, 2023 at 10:27 AM Phan Tuấn Anh wrote:
what do you get if you run this?
python -m torch.utils.collect_env
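Aside: the key fields from a collect_env dump like the one above can also be read programmatically, which is handy for quick re-checks. A minimal sketch, assuming only that PyTorch is installed:

```python
import torch

# Programmatic version of the key collect_env fields.
print("PyTorch:", torch.__version__)            # e.g. 2.1.0+cu121
print("Built with CUDA:", torch.version.cuda)   # None on CPU-only wheels
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```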
-
Hi,
I ran the command:
whisper "mytestvideo.mp4" --language en --model large-v2 --task transcribe --device gpu
Thanks.
…On Thu, Oct 19, 2023 at 12:32 AM Phan Tuấn Anh wrote:
how do you launch whisper?
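A side note on the `--device gpu` flag above: PyTorch itself only understands device strings such as "cpu" and "cuda", so "gpu" is rejected outright. A quick illustration:

```python
import torch

# "gpu" is not a device type PyTorch knows; "cuda" is the one to use.
try:
    torch.device("gpu")
except RuntimeError as err:
    print("rejected:", err)

# The usual pattern: prefer CUDA when present, fall back to CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
print("using:", torch.device(device))
```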
-
OK, so after a lot of troubleshooting, it seems my problem was mismatched versions between the NVIDIA CUDA toolkit and PyTorch's CUDA build. Once those were matched exactly (I had to downgrade my NVIDIA CUDA toolkit), I was able to get it working with the '--device cuda' parameter.
Thanks.
…On Thu, Oct 19, 2023 at 12:43 AM Phan Tuấn Anh wrote:
device cuda
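The fix described above (matching the installed CUDA toolkit to the CUDA version the PyTorch wheel was built against) can be verified with a small smoke test. A sketch:

```python
import torch

# After matching versions, a tensor op on the GPU should just work.
if torch.cuda.is_available():
    x = torch.ones(3, device="cuda")
    print("GPU sum:", (x * 2).sum().item())  # 6.0
else:
    # A False here usually means the wheel's +cuXYZ tag and the
    # installed driver/toolkit disagree.
    print("CUDA unavailable; build CUDA:", torch.version.cuda)
```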
-
Hi, I have a similar issue ("torch.cuda.is_available() is False") and I cannot figure out how to fix it. I ran the command you suggested and this is what I get, though it doesn't help me:
-
I also got this error message. While I wasn't able to fix the error message itself, after installing PyTorch for CUDA 11.8 and matching it with CUDA runtime 11.8, my GPU now gets used while running Whisper. When I installed Whisper initially I was getting around 60% CPU usage and 0% GPU, but now I get around 10% CPU usage and 100% GPU usage, which has substantially sped up the output. No special command is needed to get it to start using the GPU: once you have the proper PyTorch and CUDA versions installed, the basic "whisper english.wav --language English" command works. I also installed cuDNN for 11.x at the same time, but I don't know whether that was actually necessary.
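Besides watching CPU/GPU percentages in Task Manager, allocated GPU memory is another signal that PyTorch is actually using the device; it jumps from zero once tensors are moved onto the GPU. A minimal sketch:

```python
import torch

# torch.cuda.memory_allocated() reports bytes of GPU memory held by
# tensors on the device; it grows once data is placed on the GPU.
if torch.cuda.is_available():
    before = torch.cuda.memory_allocated()
    x = torch.zeros(1024, 1024, device="cuda")  # ~4 MiB of float32
    after = torch.cuda.memory_allocated()
    print(f"allocated grew by {(after - before) / 2**20:.1f} MiB")
else:
    print("no CUDA device; nothing to measure")
```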
-
Hi all! I'm trying to use Whisper and leverage the GPU. Here is the output of python -m torch.utils.collect_env, as requested:
PyTorch version: 2.1.1+cu118
OS: Microsoft Windows 10 Pro
Python version: 3.9.9 (tags/v3.9.9:ccb0e6a, Nov 15 2021, 18:08:50) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
CPU:
Versions of relevant libraries:
But this is the error I get when I try to process an audio file through Whisper with the command whisper audio.mp3 --device CUDA:
-
Hi,
I have a Gigabyte Aorus notebook with an RTX 3070 card, and I'm unable to run Whisper with CUDA/the GPU. I get the same error every time and I'm not sure how to resolve it. Any help would be appreciated.
Here's the error dump:
File "C:\Python\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Python\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "C:\Python\Scripts\whisper.exe\__main__.py", line 7, in <module>
File "C:\Python\lib\site-packages\whisper\transcribe.py", line 444, in cli
model = load_model(model_name, device=device, download_root=model_dir)
File "C:\Python\lib\site-packages\whisper\__init__.py", line 144, in load_model
checkpoint = torch.load(fp, map_location=device)
File "C:\Python\lib\site-packages\torch\serialization.py", line 1014, in load
return _load(opened_zipfile,
File "C:\Python\lib\site-packages\torch\serialization.py", line 1422, in _load
result = unpickler.load()
File "C:\Python\lib\site-packages\torch\serialization.py", line 1392, in persistent_load
typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
File "C:\Python\lib\site-packages\torch\serialization.py", line 1366, in load_tensor
wrap_storage=restore_location(storage, location),
File "C:\Python\lib\site-packages\torch\serialization.py", line 1296, in restore_location
return default_restore_location(storage, map_location)
File "C:\Python\lib\site-packages\torch\serialization.py", line 381, in default_restore_location
result = fn(storage, location)
File "C:\Python\lib\site-packages\torch\serialization.py", line 304, in _hpu_deserialize
assert hpu is not None, "HPU device module is not loaded"
AssertionError: HPU device module is not loaded
I installed the NVIDIA toolkit. torch.cuda.is_available() returns True when I test it.
I even installed the CUDA runtime from NVIDIA:
pip install nvidia-cuda-runtime-cu12
Nothing works. If I don't use the device parameter, it defaults to CPU and works fine. However, the GPU keeps returning the error.
A final note: the way I'm checking whether it's using the GPU is by watching GPU usage in Task Manager, and by checking the GPU load in the AORUS Control Center dashboard.
Thanks for your help.
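For what it's worth, the traceback above shows the failure happening inside torch.load while it maps the checkpoint's storages to the requested device. A sketch of a defensive loading pattern, using a small in-memory checkpoint as a stand-in for the Whisper weights:

```python
import io
import torch

# Stand-in checkpoint: a small tensor saved to an in-memory buffer.
buf = io.BytesIO()
torch.save(torch.arange(4), buf)
buf.seek(0)

# Validate the device before handing it to torch.load; an unknown
# device string would otherwise surface deep inside deserialization.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tensor = torch.load(buf, map_location=device)
print(tensor.device.type, tensor.tolist())
```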