The reason that nlp.evaluate runs on a copy is that many components either don't overwrite existing annotation or make different predictions when some annotation is already present in the predicted docs. Without the copy, running evaluate twice might result in two different sets of predicted docs and scores, which could get really confusing:

# Without the internal copy, these two calls could return different scores.
scores1 = nlp.evaluate(examples)
scores2 = nlp.evaluate(examples)

You should be able to do something like this, which is just a simplified excerpt of Language.evaluate:

from spacy.scorer import Scorer

# Run the pipeline over the predicted side of each example
# and store the processed docs back on the examples.
docs = nlp.pipe(eg.predicted for eg in examples)
for eg, doc in zip(examples, docs):
    eg.predicted = doc
scorer = Scorer(nlp=nlp)
scores = scorer.score(examples)
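
For completeness, here is one hedged way to build the examples list used above; the pipeline name en_core_web_sm, the sample text, and the toy entity annotation are placeholders rather than anything from the original answer:

import spacy
from spacy.training import Example

# Placeholder pipeline and data: any installed pipeline plus some
# annotated text will do.
nlp = spacy.load("en_core_web_sm")
examples = [
    Example.from_dict(
        nlp.make_doc("Apple is a company."),
        {"entities": [(0, 5, "ORG")]},
    ),
]

scorer.score then returns a flat dict of metrics keyed by score name (for example ents_f when an ner component is in the pipeline), much like the dict nlp.evaluate returns.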
