Include a Tagger/Parser in a NER transformer-based pipeline with no training data #9359
I am training a NER component using a transformer architecture. In the pipeline, I would also like a Tagger and DependencyParser to sentencize the text, but I don't want to train these two components since I have no training data for the tagger and parser.
Is there a way to add a Tagger and Parser to a transformer-based pipeline without having to train them?
Hey oliviercwa ✨,
your first approach of freezing the components is the right choice here. It would also help if you could share your whole config file.
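For reference, freezing sourced components in the training config might look roughly like the sketch below. This assumes spaCy v3 and `en_core_web_sm` as the source of the pretrained tagger and parser; substitute whatever pretrained pipeline matches your language. Sourced components that listened to a shared `tok2vec` in their original pipeline usually need `replace_listeners` so they carry their own copy of the embedding layer:

```ini
[nlp]
lang = "en"
pipeline = ["tagger","parser","ner"]

# reuse the pretrained components as-is
[components.tagger]
source = "en_core_web_sm"
# copy the source's shared tok2vec into the component so it no longer
# depends on a listener that doesn't exist in this pipeline
replace_listeners = ["model.tok2vec"]

[components.parser]
source = "en_core_web_sm"
replace_listeners = ["model.tok2vec"]

[training]
# the sourced components are never updated during NER training
frozen_components = ["tagger","parser"]
```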
The performance decreases if your NER uses the same embedding layer as the other components, because updates from NER training change the embeddings the frozen components rely on. You can add a new embedding layer just for the NER (with slower runtime as a trade-off) to maintain the performance of the frozen components.
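A minimal sketch of that second setup in the config, assuming the `spacy-transformers` `Tok2VecTransformer` architecture so the NER owns its transformer weights outright instead of listening to a shared `Transformer` component (`roberta-base` is just a placeholder model name):

```ini
[components.ner]
factory = "ner"

[components.ner.model]
@architectures = "spacy.TransitionBasedParser.v2"
state_type = "ner"
extra_state_tokens = false
hidden_width = 64
maxout_pieces = 2
use_upper = false

# an embedding layer used only by the NER, so its updates cannot
# disturb the frozen tagger and parser
[components.ner.model.tok2vec]
@architectures = "spacy-transformers.Tok2VecTransformer.v3"
name = "roberta-base"

[components.ner.model.tok2vec.get_spans]
@span_getters = "spacy-transformers.strided_spans.v1"
window = 128
stride = 96

[components.ner.model.tok2vec.pooling]
@layers = "reduce_mean.v1"
```

The runtime trade-off mentioned above shows up here: the NER's inline transformer runs its own forward pass per document on top of whatever embeddings the other components use, so the embedding step roughly doubles in cost.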