Is using load_model in_memory supposed to be faster? #522
Unanswered · arabcoders asked this question in Q&A
Replies: 2 comments 1 reply
-
I am seeing the same behavior on a …
1 reply
-
Here is the explanation of the parameter "in_memory" in the GitHub repo: in_memory: bool … Hope it answers …
0 replies
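For what it's worth, a sketch of the relevant signature as it appears in openai/whisper (paraphrased from the repository source; the inline notes are my reading of it, not official documentation):

```python
import whisper

# Signature in openai/whisper (paraphrased):
#   whisper.load_model(name, device=None, download_root=None, in_memory=False)
#
# With in_memory=True the checkpoint file is first read fully into host RAM,
# and torch.load() then deserializes from that in-memory buffer; with
# in_memory=False torch.load() reads from the file on disk directly.
# Either way, it only affects model *loading*, not inference, so it should
# not change transcription time.
model = whisper.load_model("base", device="cpu", in_memory=True)
```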
-
Hi,
I am noticing something off with my setup. Loading and transcribing with the following model parameters:
self.model = whisper.load_model(self.model, self.device, in_memory=True)
is slower than having in_memory=False.
I tested on a 45-minute audio file; the results are as follows.
With in_memory=False:
translate: [/files/test_45min.mp4] is complete. Took [0:14:47.214375]
While with in_memory=True:
translate: [/files/test_45min.mp4] is complete. Took [0:20:02.661830]
PC in question
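For anyone who wants to reproduce this, here is a minimal timing harness (a sketch: the model name, device, and audio path below are placeholders, and it assumes a CUDA GPU). Since in_memory should only affect loading, timing the load and the transcription separately would show where the extra five minutes go:

```python
import time

import torch
import whisper

AUDIO = "/files/test_45min.mp4"   # placeholder path from the timings above
MODEL_NAME = "medium"             # placeholder; use whatever model you test with

for in_memory in (False, True):
    t0 = time.monotonic()
    model = whisper.load_model(MODEL_NAME, device="cuda", in_memory=in_memory)
    t_load = time.monotonic() - t0

    t0 = time.monotonic()
    model.transcribe(AUDIO, task="translate")
    t_run = time.monotonic() - t0

    print(f"in_memory={in_memory}: load={t_load:.1f}s transcribe={t_run:.1f}s")

    # free GPU memory before loading the next copy of the model
    del model
    torch.cuda.empty_cache()
```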