A GPU should always give increased performance (in terms of words per second processed). For the CPU-optimized pipelines the difference is noticeable but not major; for Transformer pipelines it's very significant. For Transformers in particular the gap is most pronounced in training, but it's present even if you're only doing inference.

It should be easy to run the Transformer models on CPU and see whether the speed is acceptable for you. For training I'd assume it's basically unusable on CPU; some people have apparently managed to use it for inference, but I'd assume that's the exception rather than the rule.
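If you want to check the speed yourself, a simple throughput benchmark is enough. Below is a minimal sketch: the helper name `words_per_second` and the stub pipeline are made up for illustration, and in practice you would pass a wrapper around your spaCy pipeline's `nlp.pipe` (after calling `spacy.prefer_gpu()` to try the GPU) in place of the stub.

```python
import time

def words_per_second(process, texts):
    """Benchmark a text-processing callable.

    `process` is any function that consumes a list of texts, e.g. a
    wrapper around a spaCy pipeline's `nlp.pipe`. Returns the number
    of whitespace-separated words processed per second.
    """
    total_words = sum(len(t.split()) for t in texts)
    start = time.perf_counter()
    process(texts)
    elapsed = time.perf_counter() - start
    return total_words / elapsed

# Stub standing in for a real pipeline (hypothetical; substitute
# e.g. `lambda batch: list(nlp.pipe(batch))` for a real measurement):
texts = ["spaCy runs transformer models much faster on a GPU"] * 1000
rate = words_per_second(lambda batch: [t.lower() for t in batch], texts)
print(f"{rate:.0f} words/sec")
```

Running the same benchmark once on CPU and once after `spacy.prefer_gpu()` returns `True` gives you a direct words-per-second comparison for your own hardware and texts.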

Answer selected by svlandeg