From 39dd76a3291457e61ace1358f3da92b9ca581231 Mon Sep 17 00:00:00 2001
From: kosabogi
Date: Mon, 17 Mar 2025 14:12:05 +0100
Subject: [PATCH] Fixing inference-id and harmonizing model-id

---
 explore-analyze/machine-learning/nlp/ml-nlp-elser.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/explore-analyze/machine-learning/nlp/ml-nlp-elser.md b/explore-analyze/machine-learning/nlp/ml-nlp-elser.md
index 3fde5ba19f..fccca3c7cd 100644
--- a/explore-analyze/machine-learning/nlp/ml-nlp-elser.md
+++ b/explore-analyze/machine-learning/nlp/ml-nlp-elser.md
@@ -57,7 +57,7 @@ The easiest and recommended way to download and deploy ELSER is to use the [{{in
 2. Create an {{infer}} endpoint with the ELSER service by running the following API request:
 
 ```console
-PUT _inference/sparse_embedding/my-elser-model
+PUT _inference/sparse_embedding/my-elser-endpoint
 {
   "service": "elasticsearch",
   "service_settings": {
@@ -67,7 +67,7 @@ PUT _inference/sparse_embedding/my-elser-model
       "max_number_of_allocations": 10
     },
     "num_threads": 1,
-    "model_id": ".elser_model_2_linux-x86_64"
+    "model_id": ".elser_model_2"
  }
 }
 ```
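
With this patch applied, the documented request creates the endpoint under the id `my-elser-endpoint` with the cross-platform `.elser_model_2` model. As a minimal sanity check (not part of the patched page; these are standard inference API calls, and only the endpoint id comes from the diff above, while the sample input text is illustrative), the renamed endpoint can be verified and exercised like this:

```console
# Confirm the endpoint exists under the harmonized id
GET _inference/sparse_embedding/my-elser-endpoint

# Run a sample inference request against the endpoint to confirm the deployment responds
POST _inference/sparse_embedding/my-elser-endpoint
{
  "input": "The sun rises in the east."
}
```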