Evaluate command line method returns NER P, R, F equal to null #8488
-
Hi everyone. I don't understand why precision, recall and F-score are all equal to null. Is there something wrong with the creation of my model, or with my evaluationData.spacy file? Thanks for any answers.
Replies: 3 comments
-
Hi, I don't know for sure without more details, but this typically indicates that the docs in the evaluation data don't have any entity annotations. Do you see entities if you just run the model on some of the plain training texts? If you do see predictions, it's probably the eval data. If so, can you show how you created the evaluation docs? If you don't see any predictions, can you provide more information about how you created the training docs and trained the model?
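To run that check, something along these lines works (the model path and the example text are placeholders for your own):

```python
import spacy

# Load the trained pipeline; replace the path with your model directory.
nlp = spacy.load("training/model-best")
doc = nlp("One of your plain training texts here.")

# An empty tuple means the model predicts no entities at all.
print(doc.ents)
```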
-
I have this JSONL file that I transform into a .spacy file with this convert.py script:
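(The JSONL sample and the script itself didn't survive in this transcript; below is a hypothetical reconstruction based on the `doc.cats` line quoted in the reply that follows. The dict keys and filenames are assumptions.)

```python
# Hypothetical reconstruction -- keys and filenames are assumptions.
import spacy
import srsly
from spacy.tokens import DocBin

nlp = spacy.blank("en")
db = DocBin()
for line in srsly.read_jsonl("evaluationData.jsonl"):
    doc = nlp.make_doc(line["text"])
    # This stores the annotations as text categories, leaving doc.ents empty.
    doc.cats = line["annotations"]
    db.add(doc)
db.to_disk("evaluationData.spacy")
```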
-
Hi, this isn't the right conversion script for your data. You're setting `doc.cats` instead of `doc.ents` in the line `doc.cats = line["annotations"]`. See an example for entities here (you'll have to adjust it to read JSONL instead of JSON and to use the correct dict keys for the spans/offsets):
https://github.com/explosion/projects/blob/6e2a4ff98c2cfcda93431ffc9361470795609592/pipelines/ner_demo/scripts/convert.py
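Adapted to JSONL, the conversion might look like the sketch below; the "text" and "annotations" keys and the (start, end, label) span format are assumptions about your data:

```python
import warnings

import spacy
import srsly
from spacy.tokens import DocBin

nlp = spacy.blank("en")  # assumption: an English pipeline
db = DocBin()
for line in srsly.read_jsonl("evaluationData.jsonl"):  # hypothetical filename
    doc = nlp.make_doc(line["text"])
    ents = []
    for start, end, label in line["annotations"]:  # assumed (start, end, label) spans
        span = doc.char_span(start, end, label=label)
        if span is None:
            # char_span returns None if the offsets don't align with token boundaries.
            warnings.warn(f"Skipping misaligned entity ({start}, {end}, {label})")
        else:
            ents.append(span)
    doc.ents = ents  # set gold entities so the scorer has something to score
    db.add(doc)
db.to_disk("evaluationData.spacy")
```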