 PUT _inference/sparse_embedding/my-elser-endpoint <1>
 {
-  "service": "elser", <2>
+  "service": "elasticsearch", <2>
   "service_settings": {
     "adaptive_allocations": { <3>
       "enabled": true,
@@ -35,7 +35,7 @@ PUT _inference/sparse_embedding/my-elser-endpoint <1>
 <1> The task type is `sparse_embedding` in the path as the `elser` service will
 be used and ELSER creates sparse vectors. The `inference_id` is
 `my-elser-endpoint`.
-<2> The `elser` service is used in this example.
+<2> The `elasticsearch` service is used in this example.
 <3> This setting enables and configures adaptive allocations.
 Adaptive allocations make it possible for ELSER to automatically scale up or down resources based on the current load on the process.
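The hunk above switches the ELSER endpoint over to the `elasticsearch` service with adaptive allocations enabled. As a rough sketch of what the full request body looks like after this change (the allocation bounds and `model_id` here are illustrative assumptions, not part of the diff):

```python
# Hypothetical sketch: the inference endpoint request body after the diff
# above. Only "service" and the adaptive_allocations block are taken from
# the hunk; the remaining settings are assumptions for illustration.
endpoint_path = "_inference/sparse_embedding/my-elser-endpoint"

body = {
    "service": "elasticsearch",  # updated from "elser" by this change
    "service_settings": {
        "adaptive_allocations": {  # scale allocations with current load
            "enabled": True,
            # min/max bounds are assumptions, not shown in the diff
            "min_number_of_allocations": 1,
            "max_number_of_allocations": 4,
        },
        "num_threads": 1,
        # with the elasticsearch service, the model is picked via model_id
        "model_id": ".elser_model_2",
    },
}
```

Sending `PUT` with this body to `endpoint_path` would create the endpoint; the client call itself is omitted here.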
@@ -59,7 +59,7 @@ PUT semantic-embeddings
   "mappings": {
     "properties": {
       "semantic_text": { <1>
-        "type": "semantic_text",
+        "type": "semantic_text",
         "inference_id": "my-elser-endpoint" <2>
       },
       "content": { <3>
@@ -99,7 +99,7 @@ Download the file and upload it to your cluster using the {kibana-ref}/connect-t
 ==== Reindex the data for hybrid search

 Reindex the data from the `test-data` index into the `semantic-embeddings` index.
-The data in the `content` field of the source index is copied into the `content` field of the destination index.
+The data in the `content` field of the source index is copied into the `content` field of the destination index.
 The `copy_to` parameter set in the index mapping creation ensures that the content is copied into the `semantic_text` field. The data is processed by the {infer} endpoint at ingest time to generate embeddings.

 [NOTE]
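The reindex step described above can be sketched as a `POST _reindex` body: documents flow from `test-data` to `semantic-embeddings`, and the `copy_to` mapping routes `content` into the `semantic_text` field, where the {infer} endpoint generates embeddings at ingest time. The `size` throttle below is an assumption, not taken from the diff:

```python
# Hypothetical sketch of the POST _reindex request body for the step above.
reindex_body = {
    "source": {
        "index": "test-data",
        # small batch size (assumption): inference at ingest time is slow,
        # so reindexing in small batches keeps the request from timing out
        "size": 10,
    },
    "dest": {
        "index": "semantic-embeddings",
    },
}
```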
@@ -211,7 +211,7 @@ After performing the hybrid search, the query will return the top 10 documents t
     "hits": [
       {
         "_index": "semantic-embeddings",
-        "_id": "wv65epIBEMBRnhfTsOFM",
+        "_id": "wv65epIBEMBRnhfTsOFM",
         "_score": 0.032786883,
         "_rank": 1,
         "_source": {
@@ -237,7 +237,7 @@ After performing the hybrid search, the query will return the top 10 documents t
 PUT _inference/sparse_embedding/elser_embeddings <1>
 {
-  "service": "elser",
+  "service": "elasticsearch",
   "service_settings": {
     "num_allocations": 1,
     "num_threads": 1
@@ -206,7 +206,7 @@ PUT _inference/text_embedding/google_vertex_ai_embeddings <1>
 <2> A valid service account in JSON format for the Google Vertex AI API.
 <3> For the list of the available models, refer to the https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/text-embeddings-api[Text embeddings API] page.
 <4> The name of the location to use for the {infer} task. Refer to https://cloud.google.com/vertex-ai/generative-ai/docs/learn/locations[Generative AI on Vertex AI locations] for available locations.
-<5> The name of the project to use for the {infer} task.
+<5> The name of the project to use for the {infer} task.
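The callouts above describe the Google Vertex AI endpoint's service settings. A rough sketch of the corresponding request body, with the field names matching the callouts (the service name, model, and location values are assumptions; the credential values are placeholders, not working examples):

```python
# Hypothetical sketch of the PUT _inference/text_embedding/... body whose
# callouts are documented above. Values are placeholders/assumptions.
vertex_body = {
    "service": "googlevertexai",  # assumed service name for Vertex AI
    "service_settings": {
        "service_account_json": "<service-account-json>",  # <2> credentials
        "model_id": "text-embedding-004",                  # <3> assumption
        "location": "us-central1",                         # <4> assumption
        "project_id": "<project-id>",                      # <5> GCP project
    },
}
```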