
Commit 25a0602

Use elasticsearch service
1 parent cb35dc9 commit 25a0602

2 files changed: +8 −8 lines changed


docs/reference/search/search-your-data/semantic-text-hybrid-search

Lines changed: 6 additions & 6 deletions
@@ -20,7 +20,7 @@ Create an inference endpoint by using the <<put-inference-api>>:
 ------------------------------------------------------------
 PUT _inference/sparse_embedding/my-elser-endpoint <1>
 {
-  "service": "elser", <2>
+  "service": "elasticsearch", <2>
   "service_settings": {
     "adaptive_allocations": { <3>
      "enabled": true,
@@ -35,7 +35,7 @@ PUT _inference/sparse_embedding/my-elser-endpoint <1>
 <1> The task type is `sparse_embedding` in the path as the `elser` service will
 be used and ELSER creates sparse vectors. The `inference_id` is
 `my-elser-endpoint`.
-<2> The `elser` service is used in this example.
+<2> The `elasticsearch` service is used in this example.
 <3> This setting enables and configures adaptive allocations.
 Adaptive allocations make it possible for ELSER to automatically scale up or down resources based on the current load on the process.

@@ -59,7 +59,7 @@ PUT semantic-embeddings
   "mappings": {
     "properties": {
       "semantic_text": { <1>
-        "type": "semantic_text",
+        "type": "semantic_text",
         "inference_id": "my-elser-endpoint" <2>
       },
       "content": { <3>
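For context, the full mapping request that this hunk touches would look roughly like the sketch below. The `semantic_text` field and its `inference_id` are shown in the hunk; the `content` field's `"type": "text"` and its `copy_to` setting are assumptions inferred from the prose later in this diff ("The `copy_to` parameter set in the index mapping creation ..."):

```console
PUT semantic-embeddings
{
  "mappings": {
    "properties": {
      "semantic_text": {
        "type": "semantic_text",
        "inference_id": "my-elser-endpoint"
      },
      "content": {
        "type": "text",
        "copy_to": "semantic_text"
      }
    }
  }
}
```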
@@ -99,7 +99,7 @@ Download the file and upload it to your cluster using the {kibana-ref}/connect-t
 ==== Reindex the data for hybrid search
 
 Reindex the data from the `test-data` index into the `semantic-embeddings` index.
-The data in the `content` field of the source index is copied into the `content` field of the destination index.
+The data in the `content` field of the source index is copied into the `content` field of the destination index.
 The `copy_to` parameter set in the index mapping creation ensures that the content is copied into the `semantic_text` field. The data is processed by the {infer} endpoint at ingest time to generate embeddings.
 
 [NOTE]
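The reindex step described in this hunk could be issued as something like the following sketch. The source and destination index names come from the diff; `wait_for_completion=false` and the batch `size` are assumptions, not shown in this commit:

```console
POST _reindex?wait_for_completion=false
{
  "source": {
    "index": "test-data",
    "size": 10
  },
  "dest": {
    "index": "semantic-embeddings"
  }
}
```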
@@ -211,7 +211,7 @@ After performing the hybrid search, the query will return the top 10 documents t
     "hits": [
       {
         "_index": "semantic-embeddings",
-        "_id": "wv65epIBEMBRnhfTsOFM",
+        "_id": "wv65epIBEMBRnhfTsOFM",
         "_score": 0.032786883,
         "_rank": 1,
         "_source": {
@@ -237,7 +237,7 @@ After performing the hybrid search, the query will return the top 10 documents t
             "out": 1.0991782,
             "##io": 1.0794281,
             "last": 1.0474665,
-            (...)
+            (...)
           }
         }
       ]

docs/reference/tab-widgets/inference-api/infer-api-task.asciidoc

Lines changed: 2 additions & 2 deletions
@@ -36,7 +36,7 @@ the `cosine` measures are equivalent.
 ------------------------------------------------------------
 PUT _inference/sparse_embedding/elser_embeddings <1>
 {
-  "service": "elser",
+  "service": "elasticsearch",
   "service_settings": {
     "num_allocations": 1,
     "num_threads": 1
@@ -206,7 +206,7 @@ PUT _inference/text_embedding/google_vertex_ai_embeddings <1>
 <2> A valid service account in JSON format for the Google Vertex AI API.
 <3> For the list of the available models, refer to the https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/text-embeddings-api[Text embeddings API] page.
 <4> The name of the location to use for the {infer} task. Refer to https://cloud.google.com/vertex-ai/generative-ai/docs/learn/locations[Generative AI on Vertex AI locations] for available locations.
-<5> The name of the project to use for the {infer} task.
+<5> The name of the project to use for the {infer} task.
 
 // end::google-vertex-ai[]
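For reference, the request that these callouts annotate would have roughly the shape below. Only the endpoint path and the meaning of callouts <2>–<5> appear in this hunk; the `googlevertexai` service name, the `service_settings` field names, and all values are my best recollection of the {infer} API and should be checked against the current reference before use:

```console
PUT _inference/text_embedding/google_vertex_ai_embeddings
{
  "service": "googlevertexai",
  "service_settings": {
    "service_account_json": "<service-account-json>",
    "model_id": "text-embedding-004",
    "location": "us-central1",
    "project_id": "my-project"
  }
}
```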

0 commit comments