Replies: 4 comments 1 reply
-
It seems like hooks are not fully functioning at the moment (link). The issue is that the
-
Right now you need to do this instead:
Then you should see a whole bunch of logs. There are still tons of small issues (mostly minor source code changes to whisper) preventing this from being performant, but we'll take a look. cc @malfet
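To see those compilation logs yourself, one option (an assumption on my part, using the private `torch._dynamo` config API from PyTorch 2.x rather than anything shown in this thread) is to turn on Dynamo's verbose output before compiling:

```python
import torch
import torch._dynamo as dynamo

# Ask Dynamo for verbose diagnostics (graph breaks, guard failures).
# Assumption: the private torch._dynamo config API in PyTorch 2.x;
# newer releases also support the TORCH_LOGS=dynamo environment variable.
dynamo.config.verbose = True

# backend="eager" skips Inductor codegen, which keeps this runnable
# even without a C++ toolchain; it is the simplest backend for debugging.
@torch.compile(backend="eager")
def scale(x):
    return x * 2.0

print(scale(torch.ones(2)))  # compilation is triggered on the first call
```

This is a debugging sketch, not the fix the commenter had in mind.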
-
Has anyone succeeded in this endeavor? I'm getting multiple "unsupported" errors while compiling transcribe:
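A commonly used workaround for "unsupported" operator errors (my suggestion, not something proposed in this thread) is to let Dynamo fall back to eager execution on graphs it cannot compile, instead of raising:

```python
import torch
import torch._dynamo as dynamo

# Fall back to eager execution for unsupported ops instead of erroring out.
# Assumption: the private torch._dynamo config API in PyTorch 2.x.
dynamo.config.suppress_errors = True

# Quick sanity check that compilation still runs end to end
# (backend="eager" avoids needing a C++ toolchain for Inductor).
@torch.compile(backend="eager")
def f(x):
    return torch.sin(x) + 1.0

y = f(torch.ones(3))
print(y)
```

Note that suppressed graphs simply run uncompiled, so this trades the errors for silent loss of speedup on those sections.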
-
Try the recently announced turbo model.
-
Has anyone experienced any performance gains using `torch.compile()` in PyTorch 2.0? At the moment I'm doing this but not seeing any gains (RTX 3090):
Curious to see if anyone has looked into where else Whisper could be accelerated
Edit: I think the correct way of doing this is:
I'm seeing some errors right now, but I'll see if I can find a backend that works
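For readers landing here: a minimal sketch of the approach being discussed, i.e. compiling the `nn.Module` itself rather than wrapping `transcribe()`. The tiny module below is a hypothetical stand-in for Whisper's encoder (the real model would come from `whisper.load_model(...)`, which is too heavy to run here), and the backend list is a way to explore the "find a backend that works" question:

```python
import torch
import torch.nn as nn
import torch._dynamo

# Hypothetical stand-in for Whisper's audio encoder; shapes are illustrative.
class TinyEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(80, 80)

    def forward(self, x):
        return torch.relu(self.proj(x))

model = TinyEncoder()

# Compile the module itself; calls then route through the compiled graph.
# backend="eager" is the lightest backend (no Inductor codegen), useful
# for checking that tracing works before trying faster backends.
compiled = torch.compile(model, backend="eager")
out = compiled(torch.randn(1, 100, 80))
print(out.shape)  # -> torch.Size([1, 100, 80])

# Inspect which backends this install registers, e.g. 'inductor':
print(torch._dynamo.list_backends())
```

With the real model, the analogous move would be compiling `model.encoder` and `model.decoder` and then calling `transcribe()` as usual, but I'd treat that as an experiment to verify, not a confirmed recipe.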