
For statistical components like the transformer and tagger, you can use frozen_components only for components that have been sourced from another trained pipeline. It doesn't make sense to include and then freeze uninitialized/untrained components as in this config, which is why you're running into errors. (Also note: there are currently bugs with freezing transformer components, so for now we recommend training in separate pipelines and then using spacy assemble to generate your final combined pipeline.)
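As an illustrative sketch of the valid pattern (the sourced package and component names here are assumptions, not taken from this thread): only sourced, already-trained components go into frozen_components, while fresh components are declared with a factory and left trainable.

```ini
[nlp]
lang = "en"
pipeline = ["tok2vec", "tagger", "ner"]

[components.tok2vec]
# Sourced together with the tagger that listens to it
source = "en_core_web_sm"

[components.tagger]
# Already trained, so it is safe to freeze
source = "en_core_web_sm"

[components.ner]
# Fresh, untrained component: it must NOT appear in frozen_components
factory = "ner"

[training]
frozen_components = ["tok2vec", "tagger"]
```

Note that a sourced tagger usually listens to its pipeline's tok2vec (or transformer), so the embedding component is sourced and frozen alongside it here.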

To back up a step, are you trying to fine-tune ner from a model like en_core_web_trf without modifying the other components? If that's the case, have a look at this demo project:

https://github.com/explo…
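For the separate-pipelines route mentioned above, an assemble config might look roughly like this. All paths, component names, and the replace_listeners usage are assumptions for illustration; adapt them to the actual pipelines involved.

```ini
[nlp]
lang = "en"
pipeline = ["transformer", "tagger", "ner"]

[components.transformer]
source = "en_core_web_trf"

[components.tagger]
source = "en_core_web_trf"

[components.ner]
# ner trained in its own pipeline; replacing its listener lets it
# carry its own copy of the embedding layer in the merged pipeline
source = "./ner_pipeline/model-best"
replace_listeners = ["model.tok2vec"]
```

Running `python -m spacy assemble assemble.cfg ./combined` then loads and packages the components together without running any training step.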

Answer selected by jefhil
Labels: training (Training and updating models), feat / config (Feature: Training config)
This discussion was converted from issue #11605 on October 11, 2022 07:37.