Some of the pretrained pipelines are small because they don't use transformers. The pretrained transformer pipelines, like en_core_web_trf, are over 400MB in size - transformers are just big. If you train a model without transformers, it should be much smaller, though the final size also depends on whether you include word vectors, for example.
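You can check the size difference yourself by measuring the installed packages on disk. A minimal stdlib-only sketch (the `dir_size` and `package_size_mb` helpers are hypothetical, not part of spaCy; the pipeline names are the ones mentioned above, and a pipeline only shows a size if it's actually installed):

```python
import importlib.util
from pathlib import Path
from typing import Optional


def dir_size(path: Path) -> int:
    """Total size in bytes of all files under a directory."""
    return sum(f.stat().st_size for f in path.rglob("*") if f.is_file())


def package_size_mb(package: str) -> Optional[float]:
    """On-disk size of an installed package in MB, or None if not installed."""
    spec = importlib.util.find_spec(package)
    if spec is None or spec.origin is None:
        return None
    return dir_size(Path(spec.origin).parent) / 1e6


# Compare a small pipeline with a transformer pipeline, if installed:
for name in ("en_core_web_sm", "en_core_web_trf"):
    size = package_size_mb(name)
    print(name, f"{size:.1f} MB" if size is not None else "not installed")
```

This works because spaCy's pretrained pipelines install as ordinary Python packages, so their directory size is a good proxy for the model size.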

Answer selected by funghetto
Labels: feat / serialize (Feature: Serialization, saving and loading)