
Commit b8bedb2

ppf2, davidkyle, vishaangelova, shainaraskas, and bmorelli25 authored
Added information about the use of slow tokenizers (#2517)
Added information about the use of slow tokenizers to generate vocab files in ML.

---------

Co-authored-by: David Kyle <[email protected]>
Co-authored-by: Visha Angelova <[email protected]>
Co-authored-by: shainaraskas <[email protected]>
Co-authored-by: Brandon Morelli <[email protected]>
1 parent 1affd66 commit b8bedb2
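
This commit documents using slow tokenizers to generate vocab files for ML. As a rough illustration of what that workflow can look like (not taken from this commit or its docs), here is a minimal Python sketch using the Hugging Face transformers library; the model name and output directory are placeholder assumptions:

```python
# Illustrative sketch, not from this commit: generate vocabulary files
# with a "slow" (pure-Python) Hugging Face tokenizer.
import os

from transformers import AutoTokenizer

out_dir = "./vocab_out"  # placeholder output directory
os.makedirs(out_dir, exist_ok=True)

# use_fast=False selects the slow tokenizer implementation
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=False)

# For BERT-style tokenizers this writes the WordPiece vocab (vocab.txt)
# and returns the paths of the files it wrote.
print(tokenizer.save_vocabulary(out_dir))
```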

File tree

1 file changed (+1, -3 lines)


explore-analyze/machine-learning/nlp/ml-nlp-model-ref.md

Lines changed: 1 addition & 3 deletions
@@ -15,9 +15,7 @@ products:
 The minimum dedicated ML node size for deploying and using the {{nlp}} models is 16 GB in {{ech}} if [deployment autoscaling](../../../deploy-manage/autoscaling.md) is turned off. Turning on autoscaling is recommended because it allows your deployment to dynamically adjust resources based on demand. Better performance can be achieved by using more allocations or more threads per allocation, which requires bigger ML nodes. Autoscaling provides bigger nodes when required. If autoscaling is turned off, you must provide suitably sized nodes yourself.
 ::::
 
-The {{stack-ml-features}} support transformer models that conform to the standard BERT model interface and use the WordPiece tokenization algorithm.
-
-The current list of supported architectures is:
+The {{stack-ml-features}} support transformer models with the following architectures:
 
 * BERT
 * BART
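
A quick way to check whether a Hugging Face model uses one of the architectures listed in the updated text is to inspect its configuration. This is an illustrative sketch, not part of the commit; the model ID is a placeholder:

```python
# Illustrative sketch: inspect a model's config to see its architecture.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("facebook/bart-base")  # placeholder model ID
print(config.model_type)     # e.g. "bart"
print(config.architectures)  # e.g. ["BartModel"]
```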
