diff --git a/explore-analyze/machine-learning/nlp/ml-nlp-import-model.md b/explore-analyze/machine-learning/nlp/ml-nlp-import-model.md
index b45e0fea41..d4b2f4100c 100644
--- a/explore-analyze/machine-learning/nlp/ml-nlp-import-model.md
+++ b/explore-analyze/machine-learning/nlp/ml-nlp-import-model.md
@@ -10,6 +10,12 @@ products:
 
 # Import the trained model and vocabulary [ml-nlp-import-model]
 
+::::{warning}
+PyTorch models can execute code on your {{es}} server, exposing your cluster to potential security vulnerabilities.
+
+**Only use models from trusted sources and never use models from unverified or unknown providers.**
+::::
+
 ::::{important}
 If you want to install a trained model in a restricted or closed network, refer to [these instructions](eland://reference/machine-learning.md#ml-nlp-pytorch-air-gapped).
 ::::
diff --git a/explore-analyze/machine-learning/nlp/ml-nlp-model-ref.md b/explore-analyze/machine-learning/nlp/ml-nlp-model-ref.md
index a049339e72..a239e1f015 100644
--- a/explore-analyze/machine-learning/nlp/ml-nlp-model-ref.md
+++ b/explore-analyze/machine-learning/nlp/ml-nlp-model-ref.md
@@ -11,6 +11,14 @@ products:
 
 # Compatible third party models [ml-nlp-model-ref]
 
+::::{warning}
+PyTorch models can execute code on your {{es}} server, exposing your cluster to potential security vulnerabilities.
+
+**Only use models from trusted sources and never use models from unverified or unknown providers.**
+
+The models listed on this page are all from a trusted source – Hugging Face.
+::::
+
 ::::{note}
 The minimum dedicated ML node size for deploying and using the {{nlp}} models is 16 GB in {{ech}} if [deployment autoscaling](../../../deploy-manage/autoscaling.md) is turned off. Turning on autoscaling is recommended because it allows your deployment to dynamically adjust resources based on demand. Better performance can be achieved by using more allocations or more threads per allocation, which requires bigger ML nodes. Autoscaling provides bigger nodes when required. If autoscaling is turned off, you must provide suitably sized nodes yourself.
 ::::
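
For context on the pages this patch touches (not part of the change itself): the import page walks through uploading a trained model and vocabulary with Eland, and the new warning is about where that model comes from. A minimal sketch of that workflow using Eland's PyTorch helpers is shown below; the model ID, endpoint, and API key are placeholders, and the exact keyword arguments may differ slightly between Eland versions.

```python
# Illustrative sketch only: import a model from a trusted Hugging Face source
# into Elasticsearch with Eland. Endpoint, API key, and model ID are placeholders.
from pathlib import Path

from elasticsearch import Elasticsearch
from eland.ml.pytorch import PyTorchModel
from eland.ml.pytorch.transformers import TransformerModel

# Download the model from Hugging Face and trace it to the TorchScript
# representation that Elasticsearch expects.
tm = TransformerModel(
    model_id="elastic/distilbert-base-cased-finetuned-conll03-english",
    task_type="ner",
)
tmp_path = Path("models")
tmp_path.mkdir(parents=True, exist_ok=True)
model_path, config, vocab_path = tm.save(str(tmp_path))

# Upload the traced model and its vocabulary to the cluster.
es = Elasticsearch("https://localhost:9200", api_key="<api-key>")
ptm = PyTorchModel(es, tm.elasticsearch_model_id())
ptm.import_model(
    model_path=model_path,
    config_path=None,
    vocab_path=vocab_path,
    config=config,
)
```

Because the traced model runs on the {{es}} server once deployed, the source of `model_id` is exactly what the added warning is about: only point this at repositories you trust.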