Transformer pipelines use a "transformer" component in the same way that non-transformer pipelines use a "tok2vec", so you should change the replace_listeners call to refer to the transformer layer rather than the tok2vec layer. (We have had some issues with this in the past, but I think this should work at present.)
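For reference, a minimal config sketch of sourcing an NER component from a transformer pipeline might look like this (`my_trf_pipeline` is a placeholder name, not anything from this thread):

```ini
# Hypothetical excerpt from a training config.
# The sourced ner was trained against a "transformer" upstream, so
# replacing its listener copies the transformer layer into the component.

[components.ner]
source = "my_trf_pipeline"
replace_listeners = ["model.tok2vec"]
```

Note that the listener path inside the component model is still called "model.tok2vec" even in a transformer pipeline; the distinction matters for the programmatic API, where the first argument names the upstream component, e.g. `nlp.replace_listeners("transformer", "ner", ["model.tok2vec"])` instead of `nlp.replace_listeners("tok2vec", ...)`.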

You get nonsense labels because, as the warning suggests, the ner component from the other pipeline was trained with different vectors.

As a general note, non-transformer models are not "word vector" models. The tok2vec layer is a CNN, which optionally uses word vectors as input features.

Is it possible to use a word vector NER model in the same pipeline as a transformer NER model?

Yes.
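As a rough sketch, assembling both in one pipeline could be done by sourcing both sets of components in a config. This assumes the two source pipelines gave their NER components different names ("ner" vs. "ner_trf" here; component names must not collide), and all names and paths are placeholders:

```ini
# Hypothetical config excerpt combining a CNN/word-vector NER and a
# transformer NER. Each listener finds its own upstream ("tok2vec" or
# "transformer"), so no replace_listeners is needed when both upstreams
# stay in the pipeline.

[nlp]
lang = "en"
pipeline = ["transformer", "ner_trf", "tok2vec", "ner"]

[components.transformer]
source = "my_trf_pipeline"

[components.ner_trf]
source = "my_trf_pipeline"

[components.tok2vec]
source = "my_cnn_pipeline"

[components.ner]
source = "my_cnn_pipeline"
```

Ordering matters only in that each listening component must come after the component it listens to.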

Is th…
