Strange dev accuracy during dependency parse model fine-tuning #8518
danielvasic
started this conversation in
Help: Best practices
Replies: 1 comment 8 replies
- Are you using the same dependency annotation scheme as in …
Hello everyone,
I have a question about a dependency parsing model. I know some of what I reference here may not make complete sense, but I am seeing strange behaviour while training a model for English. First, we developed a new corpus containing around 8,000 tokens annotated in CoNLL format. I successfully converted the corpus to spaCy's format, then trained and tested the model on it. To clarify, I used the same dataset for both training and development (maybe this is the problem?). The training results are as shown:
The LAS and UAS keep decreasing until the end of training. Since the same dataset is used for development and training, I would expect LAS and UAS to increase, as TAG_ACC does, but as you can see this is not the case.
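For readers unfamiliar with the metrics: UAS counts tokens whose predicted head is correct, and LAS additionally requires the dependency label to match, so LAS can never exceed UAS. A minimal sketch of how they are computed (standalone illustration, not spaCy's actual scorer code):

```python
def attachment_scores(gold, pred):
    """Compute (UAS, LAS) from per-token (head index, dependency label) pairs.

    UAS: fraction of tokens with the correct head.
    LAS: fraction of tokens with the correct head AND label, so LAS <= UAS.
    """
    assert len(gold) == len(pred), "sequences must align token-by-token"
    n = len(gold)
    uas = sum(g[0] == p[0] for g, p in zip(gold, pred)) / n
    las = sum(g == p for g, p in zip(gold, pred)) / n
    return uas, las

# Toy 4-token sentence: one (head, label) pair per token
gold = [(2, "nsubj"), (0, "root"), (2, "dobj"), (2, "punct")]
pred = [(2, "nsubj"), (0, "root"), (2, "nmod"), (3, "punct")]
print(attachment_scores(gold, pred))  # (0.75, 0.5)
```

With train and dev being the same file, both scores should climb toward 1.0 as the parser memorizes the data, which is why the decreasing curve looks so suspicious.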
This is my configuration file:
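For context, the conversion and training steps I described can be sketched with spaCy's v3 CLI like this (file names and paths here are placeholders, not my actual setup):

```shell
# Convert the CoNLL-U annotations to spaCy's binary .spacy format
python -m spacy convert corpus.conllu ./corpus --converter conllu

# Train, deliberately pointing train and dev at the same file
# (the sanity check described above)
python -m spacy train config.cfg --output ./output \
    --paths.train ./corpus/corpus.spacy \
    --paths.dev ./corpus/corpus.spacy
```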