Hey, thanks for your question!
Please make sure to format your posts correctly, as that makes them easier for us to read.

You're correct: if you freeze the tok2vec component while training the textcat, the tok2vec will not get updated.
To implement it the way you described, you need two training runs:

  1. Train the tok2vec together with the ner without the textcat
  2. Source the tok2vec and ner from the trained model, add them to frozen_components, and train the textcat on top (see the config sketch below)

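For the second run, the config could look roughly like the excerpt below. This is only a minimal sketch: the paths and the `en` language are placeholders I've picked for illustration, and the textcat's model settings are omitted, so whether the textcat embeds its own token-to-vector layer or listens to the shared, frozen tok2vec depends on your full config.

```ini
# Sketch of the second run's config.cfg (excerpt only, most sections omitted).
# "./ner_model/model-best" is a placeholder path to the output of the first run.

[nlp]
lang = "en"
pipeline = ["tok2vec","ner","textcat"]

# Reuse the already-trained tok2vec and ner instead of creating new ones.
[components.tok2vec]
source = "./ner_model/model-best"

[components.ner]
source = "./ner_model/model-best"

# The textcat is created fresh and is the only component that gets trained.
[components.textcat]
factory = "textcat"

[training]
# Sourced components that should not be updated during this run.
frozen_components = ["tok2vec","ner"]
```

Both runs then use the same `python -m spacy train config.cfg --output ...` command; in the second run the sourced components are loaded from the first run's output directory and left untouched, while only the textcat is updated.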
You can read more about freezing components and about sourcing components from trained pipelines in the spaCy training documentation.

Hope that helps!
