diff --git a/explore-analyze/images/machine-learning-ml-nlp-deployment-id-elser-v2.png b/explore-analyze/images/machine-learning-ml-nlp-deployment-id-elser-v2.png
deleted file mode 100644
index d549ea8154..0000000000
Binary files a/explore-analyze/images/machine-learning-ml-nlp-deployment-id-elser-v2.png and /dev/null differ
diff --git a/explore-analyze/images/ml-nlp-deployment-id-elser.png b/explore-analyze/images/ml-nlp-deployment-id-elser.png
new file mode 100644
index 0000000000..92d276cf8f
Binary files /dev/null and b/explore-analyze/images/ml-nlp-deployment-id-elser.png differ
diff --git a/explore-analyze/machine-learning/nlp/ml-nlp-deploy-model.md b/explore-analyze/machine-learning/nlp/ml-nlp-deploy-model.md
index 35129e45d5..f9d149125c 100644
--- a/explore-analyze/machine-learning/nlp/ml-nlp-deploy-model.md
+++ b/explore-analyze/machine-learning/nlp/ml-nlp-deploy-model.md
@@ -16,7 +16,7 @@ You can deploy a model multiple times by assigning a unique deployment ID when s
 You can optimize your deplyoment for typical use cases, such as search and ingest. When you optimize for ingest, the throughput will be higher, which increases the number of {{infer}} requests that can be performed in parallel. When you optimize for search, the latency will be lower during search processes. When you have dedicated deployments for different purposes, you ensure that the search speed remains unaffected by ingest workloads, and vice versa. Having separate deployments for search and ingest mitigates performance issues resulting from interactions between the two, which can be hard to diagnose.
 
-:::{image} /explore-analyze/images/machine-learning-ml-nlp-deployment-id-elser-v2.png
+:::{image} /explore-analyze/images/ml-nlp-deployment-id-elser.png
 :alt: Model deployment on the Trained Models UI.
 :screenshot:
 :::
 
diff --git a/explore-analyze/machine-learning/nlp/ml-nlp-elser.md b/explore-analyze/machine-learning/nlp/ml-nlp-elser.md
index 6013cb80d4..c4898ca846 100644
--- a/explore-analyze/machine-learning/nlp/ml-nlp-elser.md
+++ b/explore-analyze/machine-learning/nlp/ml-nlp-elser.md
@@ -107,7 +107,7 @@ You can also download and deploy ELSER either from **{{ml-app}}** > **Trained Mo
 3. After the download is finished, start the deployment by clicking the **Start deployment** button.
 4. Provide a deployment ID, select the priority, and set the number of allocations and threads per allocation values.
 
-    :::{image} /explore-analyze/images/machine-learning-ml-nlp-deployment-id-elser-v2.png
+    :::{image} /explore-analyze/images/ml-nlp-deployment-id-elser.png
     :alt: Deploying ELSER
     :screenshot:
     :::
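
The screenshots being renamed above illustrate the UI flow for assigning a deployment ID. For reference, the same separate search- and ingest-optimized deployments described in the changed prose can be created with the start trained model deployment API, where the `deployment_id` query parameter keeps the two deployments distinct. This is a sketch, assuming the ELSER v2 model ID `.elser_model_2` and illustrative deployment IDs:

```console
POST _ml/trained_models/.elser_model_2/deployment/_start?deployment_id=elser_for_search&priority=normal&number_of_allocations=1&threads_per_allocation=1

POST _ml/trained_models/.elser_model_2/deployment/_start?deployment_id=elser_for_ingest&priority=normal&number_of_allocations=2&threads_per_allocation=1
```

The allocation counts here are placeholders; in practice, more allocations raise ingest throughput, while threads per allocation reduce search latency, matching the optimization trade-off described in the paragraph above.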