Hey! Did you also find a solution for faster_whisper when this happens, or only for openai/whisper?
Hey @ILG2021, can we do the same thing if we also use a prompt with condition_on_previous_text = True? From my reading of both the paper and the code, 224 tokens are reserved for the prompt and 224 are reserved for the output, but the split can vary if the prompt is shorter than 224 tokens.
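For concreteness, here is a rough sketch of that token budget as I read it from openai/whisper's decoding.py (the prompt trimming lives in `DecodingTask._get_initial_tokens`; the numbers assume the released checkpoints, where `n_text_ctx = 448`, and `prompt_len = 50` is a made-up illustration):

```python
# Sketch of the context split in openai/whisper, assuming n_text_ctx = 448
# (true for the released checkpoints). Just the arithmetic, not a patch.
n_text_ctx = 448

# Prompt side: the prompt is trimmed to the most recent half of the context,
# minus one slot for the <|startofprev|> token.
max_prompt_tokens = n_text_ctx // 2 - 1   # 223

# Output side: sample_len defaults to half the context.
default_sample_len = n_text_ctx // 2      # 224

# A shorter prompt leaves its slots unused, but the default output budget
# stays at 224 unless sample_len is raised explicitly.
prompt_len = 50  # hypothetical prompt length
prompt_used = min(prompt_len, max_prompt_tokens)
print(f"prompt: {prompt_used} tokens, default output budget: {default_sample_len}")
```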
Recently I found that faster whisper outputs incomplete text, cutting off the last characters, and I tried to track down the bug. Inference through transformers works fine, but the problem occurs with both faster whisper and openai whisper. Digging into the code, I found that openai whisper has a line that limits the decoder output length to 224 tokens, on line 529 of decoding.py:

```python
self.sample_len: int = options.sample_len or model.dims.n_text_ctx // 2
```

When I change it to:

```python
self.sample_len: int = options.sample_len or model.dims.n_text_ctx
```

everything works.
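If you'd rather not patch the installed package, `DecodingOptions` already exposes `sample_len`, so the same limit can be raised per call. A minimal sketch, assuming the standard openai/whisper API ("audio.wav" is a placeholder path):

```python
import whisper

model = whisper.load_model("base")

# Load 30 seconds of audio and compute the log-Mel spectrogram.
audio = whisper.pad_or_trim(whisper.load_audio("audio.wav"))
mel = whisper.log_mel_spectrogram(audio).to(model.device)

# Raise the decode budget from the default n_text_ctx // 2 (= 224 tokens)
# to the full text context, without editing decoding.py.
# fp16=False avoids the half-precision warning when running on CPU.
options = whisper.DecodingOptions(sample_len=model.dims.n_text_ctx, fp16=False)
result = whisper.decode(model, mel, options)
print(result.text)
```

Note this covers a single 30-second window via `whisper.decode`; I haven't verified whether `model.transcribe` forwards `sample_len` the same way.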