Performance of transformer model with and without NER #7172
rykcode started this conversation in Language Support
Replies: 0 comments
I am running some performance tests with spaCy v3 to right-size my production instances, and I am observing the following.
Observation:
Why is there no significant difference between the with-NER and without-NER scenarios for the transformer model? Is NER just an incremental task after POS tagging in the case of en_core_web_trf?
Test environment: GPU instance
Test code:
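(The original test snippet did not survive the page extraction. As a stand-in, here is a minimal sketch of the kind of timing harness such a comparison could use; the `time_pipeline` helper is hypothetical, and the spaCy calls referenced in the comments assume spaCy v3, where `spacy.load` accepts a `disable` list of pipeline component names.)

```python
import time

def time_pipeline(process, texts, n_runs=3):
    """Return the best wall-clock time over n_runs of processing all texts."""
    best = float("inf")
    for _ in range(n_runs):
        start = time.perf_counter()
        for text in texts:
            process(text)
        best = min(best, time.perf_counter() - start)
    return best

# Trivial stand-in workload so the sketch runs anywhere. For the real
# comparison you would instead pass:
#   nlp = spacy.load("en_core_web_trf")                      # full pipeline
#   nlp_no_ner = spacy.load("en_core_web_trf", disable=["ner"])
# and time nlp(...) against nlp_no_ner(...) on the same texts.
texts = ["The quick brown fox jumps over the lazy dog."] * 200
elapsed = time_pipeline(str.upper, texts)
print(f"best of {3} runs: {elapsed:.4f} s")
```

Note that with a transformer pipeline the forward pass through the shared transformer usually dominates, which is one reason disabling a single downstream component may barely move the total time.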