Replies: 4 comments
-
@jsaluja how did you resolve all these issues?
-
@itaipee This worked for Punjabi.
-
@jsaluja are you open to sharing this dataset?
Context
I fine-tuned whisper-large-v3 following the instructions at https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/fine_tune_whisper.ipynb
My training dataset is 8,000 examples (30 hours) of a native Punjabi speaker reading Punjabi text.
I generated inferences for the training dataset with both the fine-tuned and the base model.
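For reference, the core training setup from that notebook looks roughly like the sketch below. `max_steps` and `eval_steps` are taken from the results table further down; the batch size, learning rate, output directory, and the `data_collator`/`compute_metrics` objects are assumptions standing in for the notebook's own definitions.

```python
# Sketch of the notebook's training setup (assumed values are marked).
from transformers import (
    WhisperProcessor,
    WhisperForConditionalGeneration,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)

processor = WhisperProcessor.from_pretrained(
    "openai/whisper-large-v3", language="Punjabi", task="transcribe"
)
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v3")

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v3-pa",   # assumed checkpoint directory
    per_device_train_batch_size=8,        # assumed
    learning_rate=1e-5,                   # assumed (notebook default)
    max_steps=750,                        # matches the results table below
    evaluation_strategy="steps",
    eval_steps=250,                       # matches the results table below
    save_steps=250,
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    args=training_args,
    model=model,
    train_dataset=train_dataset,          # preprocessed as in the notebook
    eval_dataset=eval_dataset,
    data_collator=data_collator,          # notebook's padding collator
    compute_metrics=compute_metrics,      # notebook's WER metric
    tokenizer=processor.feature_extractor,
)
trainer.train()
trainer.save_model(training_args.output_dir)  # persist the fine-tuned weights
```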
Problem
The inferences are exactly the same for the fine-tuned model and the base model.
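One quick way to narrow this down (a sketch, not from the post): check that the model loaded at inference time actually differs from the base weights, since a common cause of identical outputs is pointing `from_pretrained` at the base model ID instead of the saved checkpoint. The checkpoint path below is a hypothetical placeholder.

```python
# Sanity check: do the fine-tuned weights differ from the base weights?
import torch
from transformers import WhisperForConditionalGeneration

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v3")
tuned = WhisperForConditionalGeneration.from_pretrained(
    "./whisper-large-v3-pa/checkpoint-750"  # hypothetical checkpoint path
)

# If every tensor matches, inference is effectively running the base model.
identical = all(
    torch.equal(p_base, p_tuned)
    for p_base, p_tuned in zip(base.parameters(), tuned.parameters())
)
print("weights identical to base:", identical)
```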
Training hyperparameters
The following hyperparameters were used during training:
Training results
| Training Loss | Epoch | Step | Validation Loss |
|---------------|-------|------|-----------------|
| 0.1744 | 0.26 | 250 | 0.0974 |
| 0.0788 | 0.52 | 500 | 0.0747 |
| 0.0637 | 0.78 | 750 | 0.0582 |
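The validation loss falling from 0.0974 to 0.0582 suggests the weights did change during training, which makes a checkpoint-loading mix-up at inference time a more likely culprit than training itself. A direct way to quantify any improvement is word error rate; below is a minimal sketch with the `evaluate` library, where `predictions` and `references` are assumed lists of hypothesis and ground-truth strings.

```python
# Minimal WER comparison; `predictions`/`references` are assumed inputs.
import evaluate

wer_metric = evaluate.load("wer")
wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {100 * wer:.2f}%")
```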
Questions