The lemmatizer, parser and tagger pipes use the tok2vec pipe to get the contextualized word representations. So you also have to add the tok2vec pipe and ensure that it runs before the other pipes. For example, the following works correctly:

import spacy

nlp = spacy.load("en_coreference_web_trf")
source_nlp = spacy.load("en_core_web_sm")

# tok2vec has to come before the pipes that depend on it, so it is first in the list.
for name in ["tok2vec", "tagger", "parser", "lemmatizer", "attribute_ruler"]:
    try:
        nlp.add_pipe(name, source=source_nlp)
    except ValueError:  # component already present in the pipeline; skip it
        continue
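
With the pipes sourced in, a single call to the pipeline gives you both the coreference annotations and the tags/lemmas. A minimal usage sketch, assuming the experimental coref component stores its clusters in doc.spans under keys prefixed with "coref":

doc = nlp("Sarah enjoys a nice cup of tea in the morning. She likes it with milk.")

# Annotations from the sourced tagger/lemmatizer/attribute_ruler pipes
print([(t.text, t.lemma_, t.pos_) for t in doc])

# Coreference clusters (assumption: exposed in doc.spans under keys starting with "coref")
for key, spans in doc.spans.items():
    if key.startswith("coref"):
        print(key, [span.text for span in spans])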
