I hope you've had a chance to read the docs and found them helpful. To address a few of your questions...

> When a regular English pipeline (sm, md, lg) is used, dependency parsing requires statistical modules to be run previously (tokenizer, tagger)

This is not correct. Some components depend on a shared tok2vec or transformer layer, but unlike classical NLP pipelines, spaCy's parser doesn't depend on the tagger's output, for example. (Also note that the tokenizer is rule-based rather than statistical, and it always runs first.)
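You can see this directly with spaCy v3's pipeline analysis, which reports the attributes each component declares as required and assigned. A minimal sketch (assuming spaCy v3 is installed; the blank pipeline and untrained components here are only for inspecting declarations, not for real parsing):

```python
import spacy

# Build a blank English pipeline and add an (untrained) tagger and parser,
# purely to inspect the components' declared dependencies.
nlp = spacy.blank("en")
nlp.add_pipe("tagger")
nlp.add_pipe("parser")

# analyze_pipes() summarizes which attributes each component
# requires and assigns.
analysis = nlp.analyze_pipes()
print(analysis["summary"]["parser"]["requires"])
# The parser declares no required attributes, so it doesn't
# need the tagger to have run.
```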

The only major difference between the trf and non-trf pipelines we distribute is the use of transformers as a feature source. They are trained on the same data, and the implementations of the individual components are the same. See parts of the docs like this section on sharin…

Answer selected by uodedeoglu