Commit 25cacb9

Added information about the use of slow tokenizers
Added information about the use of slow tokenizers to generate vocab files in ML.
1 parent bf9335d

File tree

1 file changed (+1, −1)

explore-analyze/machine-learning/nlp/ml-nlp-model-ref.md

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@ products:
 The minimum dedicated ML node size for deploying and using the {{nlp}} models is 16 GB in {{ech}} if [deployment autoscaling](../../../deploy-manage/autoscaling.md) is turned off. Turning on autoscaling is recommended because it allows your deployment to dynamically adjust resources based on demand. Better performance can be achieved by using more allocations or more threads per allocation, which requires bigger ML nodes. Autoscaling provides bigger nodes when required. If autoscaling is turned off, you must provide suitably sized nodes yourself.
 ::::
 
-The {{stack-ml-features}} support transformer models that conform to the standard BERT model interface and use the WordPiece tokenization algorithm.
+The {{stack-ml-features}} support transformer models that conform to the standard BERT model interface and use the WordPiece tokenization algorithm. The {{stack-ml-features}} always use the non-fast ("slow") tokenizer variant for all supported models. This ensures deterministic, stable tokenization results across platforms and avoids behavioral differences between the fast and slow implementations.
 
 The current list of supported architectures is:
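
For context, here is a minimal sketch (not part of the commit, and not the code the {{stack-ml-features}} run internally) of loading the slow tokenizer variant with the Hugging Face transformers library and writing its vocabulary files, as the commit message describes. The model name "bert-base-uncased" and the `vocab` output directory are assumptions for the example:

```python
# Illustrative sketch: load the non-fast ("slow") tokenizer variant and
# write its vocabulary file(s). "bert-base-uncased" is an example model.
import os

from transformers import AutoTokenizer

# use_fast=False selects the pure-Python "slow" implementation instead of
# the Rust-backed "fast" one, giving stable, deterministic tokenization.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=False)

# WordPiece splits words it does not know into subword pieces marked "##".
print(tokenizer.tokenize("Tokenization with WordPiece"))
# e.g. ['token', '##ization', 'with', 'word', '##piece']

# Write the tokenizer's vocab file(s); the directory is an assumption.
os.makedirs("vocab", exist_ok=True)
print(tokenizer.save_vocabulary("vocab"))  # e.g. ('vocab/vocab.txt',)
```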
