
Commit 2de8432

Update explore-analyze/machine-learning/nlp/ml-nlp-model-ref.md
Authored by ppf2 and davidkyle
Co-authored-by: David Kyle <[email protected]>
1 parent 25cacb9 commit 2de8432

File tree

1 file changed, +1 −1 lines changed


explore-analyze/machine-learning/nlp/ml-nlp-model-ref.md

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@ products:
 The minimum dedicated ML node size for deploying and using the {{nlp}} models is 16 GB in {{ech}} if [deployment autoscaling](../../../deploy-manage/autoscaling.md) is turned off. Turning on autoscaling is recommended because it allows your deployment to dynamically adjust resources based on demand. Better performance can be achieved by using more allocations or more threads per allocation, which requires bigger ML nodes. Autoscaling provides bigger nodes when required. If autoscaling is turned off, you must provide suitably sized nodes yourself.
 ::::
 
-The {{stack-ml-features}} support transformer models that conform to the standard BERT model interface and use the WordPiece tokenization algorithm. {{stack-ml-features}} will always use the non-fast ("slow") tokenizer variant for all supported models. This ensures deterministic and stable tokenization results across different platforms and avoids potential differences in handling between fast and slow implementations.
+The {{stack-ml-features}} support transformer models with the following architectures:
 
 The current list of supported architectures is:
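The removed line mentions the WordPiece tokenization algorithm used with BERT-style models. As an illustration only, here is a minimal sketch of WordPiece's greedy longest-match-first lookup; the function name and the toy vocabulary are hypothetical and unrelated to Elastic's implementation:

```python
# Sketch of WordPiece tokenization: repeatedly take the longest vocabulary
# match from the current position; non-initial pieces carry a "##" prefix.
# Toy vocabulary below is hypothetical, for demonstration only.
def wordpiece_tokenize(word, vocab, unk="[UNK]"):
    tokens = []
    start = 0
    while start < len(word):
        end = len(word)
        match = None
        # Shrink the candidate substring until it is found in the vocabulary.
        while start < end:
            piece = word[start:end]
            if start > 0:
                piece = "##" + piece  # continuation-piece marker
            if piece in vocab:
                match = piece
                break
            end -= 1
        if match is None:
            return [unk]  # no piece matched: emit the unknown token
        tokens.append(match)
        start = end
    return tokens

vocab = {"un", "##aff", "##able", "play", "##ing"}
print(wordpiece_tokenize("unaffable", vocab))  # ['un', '##aff', '##able']
print(wordpiece_tokenize("playing", vocab))    # ['play', '##ing']
```

Because the lookup is a deterministic longest-match scan over a fixed vocabulary, the same input always yields the same pieces, which is the kind of stability the removed sentence attributes to the "slow" tokenizer variant.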

0 commit comments

Comments
 (0)