What are the benefits of training the dependency parser model together with the pos tagger model? Is it better to have a separate model (e.g. Pointer Generator + CRF) for lemmatization & pos tagging components and a separate one (e.g. statistical transition-based) for the dependency parser component? Or is it better to share a single transformer between these components?

Often basic features that are relevant for POS prediction are also relevant for the dependency parse - for example, nmod usually attaches to an adjective and noun pair. There's no guarantee that's optimal, but we also don't have some other architectures (like pointer generators and CRFs) in spaCy.
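In spaCy this sharing is configured by pointing both components at the same embedding layer through listeners. The fragment below is a minimal sketch of the relevant `config.cfg` sections, assuming a standard `tok2vec` component (a shared `transformer` component with `spacy.TransformerListener.v1` would follow the same pattern); it is illustrative, not a complete training config.

```ini
# Shared embedding component, trained with gradients from both listeners below.
[components.tok2vec]
factory = "tok2vec"

[components.tagger]
factory = "tagger"

# The tagger does not own an encoder; it listens to the shared tok2vec output.
[components.tagger.model.tok2vec]
@architectures = "spacy.Tok2VecListener.v1"
width = ${components.tok2vec.model.encode:width}

[components.parser]
factory = "parser"

# The parser listens to the same shared tok2vec, so POS-relevant features
# learned for tagging are also available to the transition-based parser.
[components.parser.model.tok2vec]
@architectures = "spacy.Tok2VecListener.v1"
width = ${components.tok2vec.model.encode:width}
```

With this layout the shared layer receives gradients from both the tagger and the parser during training; giving each component its own independent `tok2vec` block instead would trade that multi-task signal for fully separate models.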


Answer selected by svlandeg
Labels: lang / ko (Korean language data and models), feat / parser (Feature: Dependency Parser)