Merged
Changes from 2 commits
6 changes: 6 additions & 0 deletions explore-analyze/machine-learning/nlp/ml-nlp-import-model.md

@@ -10,6 +10,12 @@ products:

# Import the trained model and vocabulary [ml-nlp-import-model]

::::{warning}
Untrusted models can execute arbitrary code on your {{es}} server, exposing your cluster to remote code execution (RCE) vulnerabilities.

**Only use models from trusted sources and never use models from unverified or unknown providers.**
::::

::::{important}
If you want to install a trained model in a restricted or closed network, refer to [these instructions](eland://reference/machine-learning.md#ml-nlp-pytorch-air-gapped).
::::
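To make the import flow concrete, here is a sketch of uploading a model with the `eland_import_hub_model` CLI. It assumes eland is installed locally with its PyTorch extras; the cloud ID, password, and model choice are placeholders you would replace with your own values, and the model shown is an example of one published by a trusted provider on Hugging Face.

```shell
# Install eland with the PyTorch extras (assumes a pip-managed environment)
python -m pip install 'eland[pytorch]'

# Import a named-entity-recognition model from a trusted Hugging Face
# publisher into the cluster, then start its deployment.
# Replace <cloud-id>, <username>, and <password> with your own values.
eland_import_hub_model \
  --cloud-id <cloud-id> \
  -u <username> -p <password> \
  --hub-model-id elastic/distilbert-base-cased-finetuned-conll03-english \
  --task-type ner \
  --start
```

Because the importer downloads and runs model code, point `--hub-model-id` only at publishers you have verified, per the warning above.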
9 changes: 9 additions & 0 deletions explore-analyze/machine-learning/nlp/ml-nlp-model-ref.md

@@ -11,6 +11,15 @@ products:

# Compatible third party models [ml-nlp-model-ref]

::::{warning}
Uploading and running untrusted models can expose your {{es}} cluster to remote code execution (RCE) vulnerabilities.
NLP models are a mixture of code and data. If a malicious model is uploaded and used, it can execute arbitrary code on the {{es}} server.

**Upload and run models only from providers you trust. Do not upload models from unverified or unknown sources.**

The models listed on this page all come from a trusted source, Hugging Face.
::::

::::{note}
The minimum dedicated ML node size for deploying and using the {{nlp}} models is 16 GB in {{ech}} if [deployment autoscaling](../../../deploy-manage/autoscaling.md) is turned off. Turning on autoscaling is recommended because it allows your deployment to dynamically adjust resources based on demand. Better performance can be achieved by using more allocations or more threads per allocation, which requires bigger ML nodes. Autoscaling provides bigger nodes when required. If autoscaling is turned off, you must provide suitably sized nodes yourself.
::::