
I'm a little confused about what you're trying to do here.

```python
nlp = spacy.load("./output_training_11.11")
ner = nlp.get_pipe("ner")  # it looks like you're not using the ner you created here?
config = {"model": DEFAULT_TOK2VEC_MODEL}  # where are you using this, and what is DEFAULT_TOK2VEC_MODEL?
nlp.add_pipe("tok2vec", before="ner")
```

You don't want both a tok2vec and a transformer component in the same pipeline - they're alternative embedding layers, so one replaces the other. (This can be different if you use replace_listeners, but that doesn't look like what you're doing here.)
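(For completeness, a minimal sketch of the replace_listeners case, assuming the pipeline in `./output_training_11.11` has a transformer that the ner listens to: after the call, ner keeps its own copy of the embedding layer and no longer depends on the shared transformer.)

```python
import spacy

nlp = spacy.load("./output_training_11.11")
# Give ner a private copy of the embedding layer instead of the shared
# listener, so it's no longer coupled to the transformer component.
nlp.replace_listeners("transformer", "ner", ["model.tok2vec"])
```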

Also, I see you're using a custom training loop - note that we don't recommend that in v3, since it's much easier to avoid problems by using the training config. Maybe tak…
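(As a rough sketch of the config-driven workflow: the config file and paths below are placeholders, and this assumes a spaCy version where `spacy.cli.train` is importable from Python - the equivalent `python -m spacy train ...` command works the same way.)

```python
# Equivalent to:
#   python -m spacy train config.cfg --output ./output \
#       --paths.train ./train.spacy --paths.dev ./dev.spacy
from spacy.cli.train import train

train(
    "config.cfg",  # generated e.g. with `python -m spacy init config`
    output_path="./output",
    overrides={"paths.train": "./train.spacy", "paths.dev": "./dev.spacy"},
)
```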
