I am using the Whisper large-v2 model on a single GPU (NVIDIA Tesla V100), and the performance is a bit slow. For example, transcribing a 27-minute audio file takes more than 10 minutes. How do I run it with parallel processing or in a multi-GPU environment? Could someone provide some pointers?
I would like to reduce the execution time as much as possible. How can I improve the model's execution time?
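One idea I had is to split the recording into chunks and transcribe them in parallel, one process per GPU. Below is a rough sketch of what I mean (it assumes the open-source `openai-whisper` Python package, that chunk files were produced beforehand, e.g. with ffmpeg, and that there is one GPU available per chunk; the file names and segment length are placeholders). Would something like this work, or is there a better way?

```python
import multiprocessing as mp

import whisper


def transcribe_chunk(gpu_id, path, results):
    # Each worker loads its own copy of large-v2 on a dedicated GPU.
    model = whisper.load_model("large-v2", device=f"cuda:{gpu_id}")
    # fp16 inference is the default on CUDA; stated explicitly for clarity.
    results[path] = model.transcribe(path, fp16=True)["text"]


if __name__ == "__main__":
    # Placeholder chunk files, e.g. produced beforehand with:
    #   ffmpeg -i long_audio.wav -f segment -segment_time 600 -c copy chunk_%02d.wav
    chunks = ["chunk_00.wav", "chunk_01.wav", "chunk_02.wav"]

    mp.set_start_method("spawn")  # required to use CUDA in child processes
    with mp.Manager() as manager:
        results = manager.dict()
        procs = [
            mp.Process(target=transcribe_chunk, args=(i, path, results))
            for i, path in enumerate(chunks)
        ]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        # Stitch the transcript back together in chunk order.
        print("\n".join(results[path] for path in chunks))
```

One thing I'm unsure about: splitting at fixed 10-minute boundaries could cut a word in half, so splitting on silence instead would probably give cleaner results; the fixed `segment_time` above is just for illustration.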