How to use Cython to speed up pipeline of model inference #13121
Replies: 1 comment
I suspect that Cython isn't going to help much here, since most of the time is spent in the transformer itself rather than in spaCy's pipeline code.

For text classification, you can often get competitive performance with simpler and faster approaches. There are several recent papers on this topic; to pick one relatively recent example: https://aclanthology.org/2023.acl-short.160.pdf

Especially if you're only doing text classification and not using any other token-level tasks like parsing or NER, I'd definitely recommend evaluating some of the (fast) options outside spaCy.
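To make the "simpler and faster approaches" suggestion concrete, here is a minimal sketch of a fast non-transformer baseline. It assumes scikit-learn (the thread doesn't name a specific library, and the toy texts and labels are purely illustrative): TF-IDF features plus a linear classifier, which is often competitive on text classification and far cheaper to run than a transformer.

```python
# Sketch of a fast text-classification baseline (assumes scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data, purely illustrative.
texts = [
    "great product, works well",
    "terrible, broke immediately",
    "really happy with this",
    "waste of money",
]
labels = ["pos", "neg", "pos", "neg"]

# TF-IDF unigrams/bigrams feeding a linear model.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

# Inference is a cheap sparse matrix-vector product, no GPU needed.
preds = clf.predict(["works really well", "broke after a day"])
```

Whether a baseline like this is good enough depends on your data, so it's worth benchmarking it against the transformer pipeline on a held-out set before switching.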
I have a pipeline of 3 transformer-based `textcat` components. I followed the recommendations here and am using `pipe` after disabling all components except my `textcat` component, but inference is exceedingly slow (especially when the whole pipeline is Dockerized). I am wondering how I might use Cython to speed this up. At the following link, it seems like the Cython API isn't available until after you run text through the pipeline to produce the documents.

Is there another way to improve speed for multiple transformer-based models pipelined in sequence? Thanks!
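The setup described above (batching with `pipe`, all other components disabled) can be sketched as follows. This uses a blank pipeline with a `sentencizer` as a stand-in for the real trained transformer-based `textcat` components, since those require model weights that aren't part of this thread:

```python
# Sketch of batched inference with only selected components enabled.
# The "sentencizer" stands in for a real trained textcat component.
import spacy

nlp = spacy.blank("en")
nlp.add_pipe("sentencizer")  # placeholder for a transformer-based textcat

texts = ["First document.", "Second document.", "Third document."]

# Temporarily run only the components we need, and batch texts through
# the pipeline with nlp.pipe() instead of calling nlp() once per text.
# batch_size can be tuned; larger batches help most for transformer
# components on GPU.
with nlp.select_pipes(enable=["sentencizer"]):
    docs = list(nlp.pipe(texts, batch_size=32))
```

This is just the pattern from the spaCy speed recommendations, not a fix by itself; with three transformer components the forward passes still dominate the runtime.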