You can define a `token_match` pattern that is matched against each full candidate token, as described here:

https://spacy.io/usage/linguistic-features#how-tokenizer-works
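A minimal sketch of what that looks like in practice, assuming spaCy is installed and using a hypothetical pattern for hyphenated identifiers (which the default infix rules would otherwise split on the hyphens):

```python
import re

import spacy

nlp = spacy.blank("en")

# Hypothetical pattern: treat hyphenated compounds like "x-ray-machine"
# as single tokens. token_match is checked against each full candidate
# substring; when it matches, the substring is kept as one token instead
# of being split further by the prefix/suffix/infix rules.
nlp.tokenizer.token_match = re.compile(r"^\w+(?:-\w+)+$").match

doc = nlp("The x-ray-machine is ready.")
print([t.text for t in doc])
```

Note that assigning to `nlp.tokenizer.token_match` replaces any default `token_match` on the tokenizer, so if you need to preserve existing behavior (e.g. for URLs, which spaCy handles via a separate `url_match`), combine your pattern with the old one.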

Answer selected by dennymarcels