Handling long text #10467
Unanswered
nishchay47b asked this question in Help: Coding & Implementations
Replies: 1 comment
-
This is covered at the top of the documentation for Transformers: a token's vector is the sum of the embeddings of the pieces it is aligned to, so predictions from overlapping windows are combined rather than discarded. Please see the docs for a more detailed explanation, including how to change this behaviour.
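To make the idea concrete, here is a toy sketch (not spaCy's actual implementation; `window_spans` and `combine_windows` are hypothetical names) of how strided windows can cover a long sequence and how per-token vectors from overlapping windows can be summed:

```python
import numpy as np

def window_spans(n_tokens, window=4, stride=2):
    """Yield (start, end) spans covering the sequence with overlap."""
    start = 0
    while True:
        end = min(start + window, n_tokens)
        yield start, end
        if end == n_tokens:
            break
        start += stride

def combine_windows(per_window_vectors, spans, n_tokens, dim):
    """Sum the vectors each token received from every window covering it."""
    out = np.zeros((n_tokens, dim))
    for vecs, (start, end) in zip(per_window_vectors, spans):
        out[start:end] += vecs
    return out

# Toy example: 6 tokens, 3-dim vectors, each window emits all-ones vectors.
n, d = 6, 3
spans = list(window_spans(n, window=4, stride=2))  # [(0, 4), (2, 6)]
vecs = [np.ones((end - start, d)) for start, end in spans]
combined = combine_windows(vecs, spans, n, d)
# Tokens 2-3 fall in both windows, so their summed vectors are doubled.
```

In a real pipeline the windows would be wordpiece spans fed through the transformer, but the combination step is the same shape: every token accumulates a contribution from each window that contains it.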
-
How is spaCy able to handle long text with BERT-like models? BERT-like models have a limit of 512 tokens. I understand there is a windowing operation, but with windows and strides running over the text you will get multiple predictions for the same tokens, so how is that handled?