Replies: 5 comments 15 replies
-
`options = whisper.DecodingOptions(fp16=False)` worked for me.
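For anyone else hitting this on CPU, here's a minimal sketch of where that option fits, assuming the usual decode flow from the README (the audio file name is just a placeholder):

```python
import whisper

model = whisper.load_model("base")

audio = whisper.pad_or_trim(whisper.load_audio("audio.mp3"))
mel = whisper.log_mel_spectrogram(audio).to(model.device)

# fp16=False keeps inference in float32, which avoids the Half-on-CPU error
options = whisper.DecodingOptions(fp16=False)
result = whisper.decode(model, mel, options)
print(result.text)
```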
-
Yes, I am running on CPU on a Raspberry Pi.
…On Fri, Oct 7, 2022, 12:10 PM Batuhanapa ***@***.***> wrote:
how to solve it??
If you want to use the float16 type, you should run on a GPU.
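One way to express that, sketched under the assumption that you pick the dtype from the available device (the device check itself is not from the original reply):

```python
import torch
import whisper

# float16 inference needs a GPU; on CPU (e.g. a Raspberry Pi) fall back to float32
device = "cuda" if torch.cuda.is_available() else "cpu"
model = whisper.load_model("base", device=device)
options = whisper.DecodingOptions(fp16=(device == "cuda"))
```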
-
I ran into the same error. I was able to resolve it after changing this line: … to …
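The exact before/after code is missing from this comment; based on the rest of the thread, a typical version of the change (an assumption, not necessarily what this commenter did) looks like:

```python
# before (hypothetical): options = whisper.DecodingOptions()
# after: disable fp16 so decoding runs in float32 on CPU
options = whisper.DecodingOptions(fp16=False)
```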
-
I added this option (--fp32) and it works for me.
-
Updating ultralytics to 8.0.109 also works, just in case you have changed your ultralytics version.
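If you need the exact command, a plain pip pin should do it (standard pip syntax; the version number is taken from the comment above):

```
pip install ultralytics==8.0.109
```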
-
I took the short code snippet from the README and ran it in my own Colab notebook. I received an error on the final line (`result = whisper.decode(model, mel, options)`).
Here's the error message and traceback:
```
RuntimeError                              Traceback (most recent call last)
<ipython-input> in <module>
     17 # decode the audio
     18 options = whisper.DecodingOptions()
---> 19 result = whisper.decode(model, mel, options)
     20
     21 # print the recognized text

10 frames

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight, bias)
    302                             _single(0), self.dilation, self.groups)
    303         return F.conv1d(input, weight, bias, self.stride,
--> 304                         self.padding, self.dilation, self.groups)
    305
    306     def forward(self, input: Tensor) -> Tensor:

RuntimeError: "slow_conv2d_cpu" not implemented for 'Half'
```