Yes - I think you should be able to make this work with a little hacking to avoid retraining. First, add the transformer from the other pipeline and give it a new name with nlp.add_pipe(transformer, name="other_transformer", source=...). Then fetch the component from the other pipeline that was trained on other_transformer; let's say it's the parser component.
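
For concreteness, here's a minimal sketch of those two steps (the pipeline names are placeholders, and I'm assuming the sourced component is called "parser" in the other pipeline):

```python
import spacy

nlp = spacy.load("en_core_web_sm")           # the pipeline you're extending (placeholder)
other_nlp = spacy.load("my_other_pipeline")  # the pipeline the components come from (placeholder)

# Source the transformer under a new name so it doesn't clash with
# any transformer already in `nlp`.
nlp.add_pipe("transformer", name="other_transformer", source=other_nlp)

# Source the component that was trained on top of that transformer.
parser = nlp.add_pipe("parser", source=other_nlp)
```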

I think some hack like this should work:

```python
parser.model.get_ref("tok2vec").layers[0].upstream_name = "other_transformer"
```

Because layers[0] should be the TransformerListener if I'm not mistaken.
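
If you want to verify that before patching anything, you can inspect the layer directly (just a sanity check):

```python
tok2vec = parser.model.get_ref("tok2vec")
listener = tok2vec.layers[0]
# Expect a TransformerListener here; upstream_name is what the hack above rewires.
print(type(listener).__name__, getattr(listener, "upstream_name", None))
```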

You might also have to reset the listeners of the corresponding components and call nlp._link_components() again to ensure the listeners are linked to the right upstream component.
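
Something along these lines might do it; note that listener_map and _link_components are internal APIs, so treat this as a sketch that may need adjusting for your spaCy version:

```python
# Clear any stale listener registrations on the sourced transformer ...
transformer = nlp.get_pipe("other_transformer")
transformer.listener_map.clear()

# ... then let spaCy rediscover and relink the listeners in this pipeline.
nlp._link_components()
```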
