Replies: 1 comment
Yep, I found this out while fine-tuning Whisper with timestamps but without ever giving it previous tokens. While debugging the transcribe function, it seems that with …
Unless I am missing something, the prompt calculation always includes at least the previous segment's tokens even when condition_on_previous_text = False, because prompt_reset_since is updated before the current segment's tokens are appended. This behavior exists only in the word-level-timestamps branch at the moment; the fix is simply to move that update to the end of the loop, as the main branch already does.
whisper/whisper/transcribe.py, line 259 at commit 8eb29c3
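For illustration, here is a minimal sketch of the ordering issue. The decode loop, the fake token lists, and the all_tokens name are assumptions made for this example rather than code taken from the branch; only prompt_reset_since and condition_on_previous_text come from the report above.

```python
# Sketch of the ordering problem, not the actual transcribe() code.
condition_on_previous_text = False
all_tokens = []          # all tokens decoded so far (assumed name)
prompt_reset_since = 0   # index of the last "prompt reset" point

# Pretend each segment decodes to a fixed list of token ids.
fake_segments = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

for segment_tokens in fake_segments:
    # Prompt fed to the decoder: everything decoded since the last reset.
    prompt = all_tokens[prompt_reset_since:]
    print("prompt:", prompt)  # non-empty after the first segment despite conditioning being off

    # word-level-timestamps branch ordering: the reset point is recorded
    # BEFORE the current segment's tokens are appended, so the next prompt
    # still contains this segment even with condition_on_previous_text = False.
    if not condition_on_previous_text:
        prompt_reset_since = len(all_tokens)
    all_tokens.extend(segment_tokens)

    # main-branch ordering (the proposed fix): extend first, then reset,
    # so the next prompt is empty when conditioning is disabled.
    # all_tokens.extend(segment_tokens)
    # if not condition_on_previous_text:
    #     prompt_reset_since = len(all_tokens)
```

With the ordering shown, the printed prompt carries the previous segment's tokens on every iteration after the first; applying the main-branch ordering (commented out above) leaves it empty whenever condition_on_previous_text is False.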