There were missing keys in the checkpoint model loaded: ['proj_out.weight']. #2302
Unanswered
dmehta-huro asked this question in Q&A
Replies: 3 comments 4 replies
-
Hi,
2 replies
-
As this looks transformers-related, you might try posting an issue here: https://github.com/huggingface/transformers/issues It might also be worth trying this workaround: https://discuss.huggingface.co/t/unable-to-load-checkpoint-after-finetuning/50693
0 replies
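For what it's worth, this particular missing key is usually harmless for Whisper: proj_out is weight-tied to the decoder's input embeddings, so the checkpoint stores no separate tensor for it, and the loader reports it as missing before re-tying it. A minimal check, assuming a stock checkpoint such as openai/whisper-small:

```python
from transformers import WhisperForConditionalGeneration

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

# proj_out shares its weight tensor with the decoder's input embeddings,
# which is why checkpoints carry no separate 'proj_out.weight' entry.
print(model.proj_out.weight is model.model.decoder.embed_tokens.weight)  # True
```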
-
I got the exact same issue as well. I am not able to find
2 replies
-
Original question from dmehta-huro:
I am trying to fine-tune the model using the technique shown in this tutorial: https://huggingface.co/blog/fine-tune-whisper
My dataset is much larger, and my PC cannot handle it all at once, so I am training in batches. I initialized the training_args and the trainer as shown in the code. I removed the train_dataset and eval_dataset from the trainer and attach them later, since the batches of data keep changing.
Here is my training procedure:
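In outline, it looks roughly like this (the checkpoint name, hyperparameters, data_collator, and dataset_batches below are placeholders standing in for my actual setup; the collator is the DataCollatorSpeechSeq2SeqWithPadding from the tutorial):

```python
from transformers import (
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    WhisperForConditionalGeneration,
    WhisperProcessor,
)

processor = WhisperProcessor.from_pretrained("openai/whisper-small")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

steps_per_round = 500  # optimizer steps to spend on each batch of data

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-finetuned",
    per_device_train_batch_size=16,
    learning_rate=1e-5,
    max_steps=steps_per_round,
    save_steps=steps_per_round,  # checkpoint at the end of each round
    fp16=True,
)

# train_dataset / eval_dataset are left out on purpose and attached
# per batch of data inside the loop below.
trainer = Seq2SeqTrainer(
    args=training_args,
    model=model,
    data_collator=data_collator,           # collator from the tutorial
    tokenizer=processor.feature_extractor,
)

for i, data_batch in enumerate(dataset_batches):  # my pre-split data
    trainer.train_dataset = data_batch["train"]
    trainer.eval_dataset = data_batch["test"]
    # Grow the step budget so the resumed run actually trains on the new
    # data, then pick up from the last checkpoint after the first round.
    trainer.args.max_steps = steps_per_round * (i + 1)
    trainer.train(resume_from_checkpoint=(i > 0))
```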
The training works fine for the first batch, but for every batch after that I get a warning message saying:
There were missing keys in the checkpoint model loaded: ['proj_out.weight'].
This is not an error and does not stop the training, but I still wanted to confirm whether the model is being trained correctly. The warning also made me wonder: is the model being trained on the 2nd batch using the updated weights from the 1st batch, or is it just using a fresh set of default weights?
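One way I thought of to check this directly (a sketch; the checkpoint path is a placeholder for whatever the trainer last saved):

```python
import torch
from transformers import WhisperForConditionalGeneration

fresh = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
resumed = WhisperForConditionalGeneration.from_pretrained(
    "./whisper-finetuned/checkpoint-500"  # placeholder: last saved checkpoint
)

# If later batches resume from the fine-tuned weights, these should differ;
# if training restarted from default weights, they would be identical.
same = torch.equal(
    fresh.model.decoder.embed_tokens.weight,
    resumed.model.decoder.embed_tokens.weight,
)
print("identical to the base model:", same)  # expect False after fine-tuning
```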
Thanks