DefaultCPUAllocator / enforcing GPU instead of CPU not working? #1110
Unanswered — thisintheway asked this question in Q&A
Replies: 1 comment
-
Did you get the solution?
-
Not sure if this issue is specific to the whisper package; don't hesitate to tell me if I should ask this elsewhere. I have checked several forum posts and could not find a solution. Sorry if it's silly.
Here is my Python script in a nutshell:
However, it returns the following error on the last line, hinting that it is running on the CPU:
I am using Jupyter, and I checked that the PyTorch version it uses is the CUDA/GPU build and not a CPU-only one:
print(torch.__version__)
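As an aside, `print(torch.__version__)` alone doesn't reliably distinguish the two builds (conda-installed CUDA builds don't always carry a `+cuXXX` suffix); `torch.version.cuda` and `torch.cuda.is_available()` are more direct checks. A small sketch (the helper names here are mine, not from the post):

```python
def is_cuda_build(torch_cuda_version) -> bool:
    # torch.version.cuda is a string like "12.1" on CUDA builds
    # and None on CPU-only builds.
    return torch_cuda_version is not None


def report_gpu_status() -> None:
    # Call this from the same Jupyter kernel that runs whisper,
    # so you inspect the environment that actually matters.
    import torch

    print("torch version:", torch.__version__)
    print("CUDA build:   ", is_cuda_build(torch.version.cuda))
    print("GPU visible:  ", torch.cuda.is_available())
```

If `is_cuda_build(...)` reports True but `torch.cuda.is_available()` is False, the wheel is fine and the problem lies with the driver or with which environment the kernel is running in.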
So I really don't get it. Could there be a conflict between PyTorch libraries? Am I doing something wrong? Is the transcribe() function indeed using the CPU instead of the GPU?
Am I right in assuming torch.cuda.init(), device = "cuda" and result = model.transcribe(etc) should be enough to enforce GPU usage?

I am using anaconda3; here is what conda list returns, in case it helps:
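For reference, the usual way to force GPU inference with the openai-whisper package is to pass a device to whisper.load_model, which places the model weights on that device so transcribe() runs there; torch.cuda.init() is not required. A minimal sketch (the model size "base" and the audio path are placeholders):

```python
def pick_device(cuda_available: bool) -> str:
    # Fall back to the CPU instead of crashing when no GPU is visible.
    return "cuda" if cuda_available else "cpu"


def transcribe_on_gpu(audio_path: str) -> dict:
    import torch
    import whisper  # pip install openai-whisper

    device = pick_device(torch.cuda.is_available())
    # device= loads the weights directly onto the GPU; without it (or an
    # explicit model.to("cuda")) the model stays on the CPU, which is
    # consistent with a DefaultCPUAllocator error in the traceback.
    model = whisper.load_model("base", device=device)
    return model.transcribe(audio_path)
```

Usage would be along the lines of `result = transcribe_on_gpu("audio.mp3")` followed by `print(result["text"])`.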