Why does `nlp.evaluate` not update examples in-place? #10056
-
I'd like to inspect/save the predictions made during evaluation. But, …
-
The reason that `nlp.evaluate` runs on a copy is because many components don't overwrite existing annotation or make different predictions if some annotation is already saved in the predicted docs, so running evaluate twice might result in two different sets of predicted docs and scores, which could get really confusing:

```python
scores1 = nlp.evaluate(examples)
scores2 = nlp.evaluate(examples)
```

You should be able to do something like this, which is just a simplified excerpt of `Language.evaluate`:

```python
from spacy.scorer import Scorer

# Run the pipeline over each example's predicted doc and store the
# annotated results back on the examples in-place.
docs = nlp.pipe(eg.predicted for eg in examples)
for eg, doc in zip(examples, docs):
    eg.predicted = doc

scorer = Scorer(nlp=nlp)
scores = scorer.score(examples)
```
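After that loop, the annotated docs live on each example's `predicted` attribute, so you can inspect or serialize them directly. As a minimal sketch (the `predictions.spacy` path and the use of `DocBin` here are my own illustration, not part of the original answer), you could save the predictions with spaCy's `DocBin`:

```python
from spacy.tokens import DocBin

# Inspect the predictions that were written back onto the examples
for eg in examples:
    print(eg.predicted.text, [(ent.text, ent.label_) for ent in eg.predicted.ents])

# Serialize the predicted docs for later use; reload with
# DocBin().from_disk("predictions.spacy") and .get_docs(nlp.vocab)
doc_bin = DocBin(store_user_data=True)
for eg in examples:
    doc_bin.add(eg.predicted)
doc_bin.to_disk("predictions.spacy")
```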