@@ -20,7 +20,7 @@ Create an inference endpoint by using the <<put-inference-api>>:
------------------------------------------------------------
PUT _inference/sparse_embedding/my-elser-endpoint <1>
{
"service": "elser", <2>
"service": "elasticsearch", <2>
"service_settings": {
"adaptive_allocations": { <3>
"enabled": true,
@@ -35,7 +35,7 @@ PUT _inference/sparse_embedding/my-elser-endpoint <1>
<1> The task type is `sparse_embedding` in the path as the `elser` service will
be used and ELSER creates sparse vectors. The `inference_id` is
`my-elser-endpoint`.
-<2> The `elser` service is used in this example.
+<2> The `elasticsearch` service is used in this example.
<3> This setting enables and configures adaptive allocations.
Adaptive allocations make it possible for ELSER to automatically scale up or down resources based on the current load on the process.
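The request body above is truncated by the diff. For context, a complete endpoint definition using the updated `elasticsearch` service would look roughly like the sketch below; the allocation bounds, `num_threads`, and the `.elser_model_2` model ID are assumptions for illustration, not part of this change:

[source,console]
------------------------------------------------------------
PUT _inference/sparse_embedding/my-elser-endpoint
{
  "service": "elasticsearch",
  "service_settings": {
    "adaptive_allocations": {
      "enabled": true,
      "min_number_of_allocations": 1,
      "max_number_of_allocations": 10
    },
    "num_threads": 1,
    "model_id": ".elser_model_2"
  }
}
------------------------------------------------------------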

@@ -59,7 +59,7 @@ PUT semantic-embeddings
"mappings": {
"properties": {
"semantic_text": { <1>
"type": "semantic_text",
"type": "semantic_text",
"inference_id": "my-elser-endpoint" <2>
},
"content": { <3>
@@ -99,7 +99,7 @@ Download the file and upload it to your cluster using the {kibana-ref}/connect-t
==== Reindex the data for hybrid search

Reindex the data from the `test-data` index into the `semantic-embeddings` index.
The data in the `content` field of the source index is copied into the `content` field of the destination index.
The `copy_to` parameter set in the index mapping ensures that the content is also copied into the `semantic_text` field, which is processed by the {infer} endpoint at ingest time to generate embeddings.
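A minimal reindex request matching this description might look like the following sketch; the `size` batch setting and the `wait_for_completion=false` flag are assumptions to make long-running reindexes easier to monitor, and are not part of this change:

[source,console]
------------------------------------------------------------
POST _reindex?wait_for_completion=false
{
  "source": {
    "index": "test-data",
    "size": 10
  },
  "dest": {
    "index": "semantic-embeddings"
  }
}
------------------------------------------------------------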

[NOTE]
@@ -211,7 +211,7 @@ After performing the hybrid search, the query will return the top 10 documents t
"hits": [
{
"_index": "semantic-embeddings",
"_id": "wv65epIBEMBRnhfTsOFM",
"_id": "wv65epIBEMBRnhfTsOFM",
"_score": 0.032786883,
"_rank": 1,
"_source": {
@@ -237,7 +237,7 @@ After performing the hybrid search, the query will return the top 10 documents t
"out": 1.0991782,
"##io": 1.0794281,
"last": 1.0474665,
(...)
}
}
]
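The search request that produced this response sits outside the visible hunks. The `_rank` field suggests reciprocal rank fusion; a hedged sketch of such a hybrid search, assuming an `rrf` retriever that combines a lexical `match` query on `content` with a `semantic` query on `semantic_text` (the query string is illustrative):

[source,console]
------------------------------------------------------------
GET semantic-embeddings/_search
{
  "retriever": {
    "rrf": {
      "retrievers": [
        {
          "standard": {
            "query": {
              "match": {
                "content": "How to avoid muscle soreness while running?"
              }
            }
          }
        },
        {
          "standard": {
            "query": {
              "semantic": {
                "field": "semantic_text",
                "query": "How to avoid muscle soreness while running?"
              }
            }
          }
        }
      ]
    }
  }
}
------------------------------------------------------------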
@@ -36,7 +36,7 @@ the `cosine` measures are equivalent.
------------------------------------------------------------
PUT _inference/sparse_embedding/elser_embeddings <1>
{
"service": "elser",
"service": "elasticsearch",
"service_settings": {
"num_allocations": 1,
"num_threads": 1
@@ -206,7 +206,7 @@ PUT _inference/text_embedding/google_vertex_ai_embeddings <1>
<2> A valid service account in JSON format for the Google Vertex AI API.
<3> For the list of the available models, refer to the https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/text-embeddings-api[Text embeddings API] page.
<4> The name of the location to use for the {infer} task. Refer to https://cloud.google.com/vertex-ai/generative-ai/docs/learn/locations[Generative AI on Vertex AI locations] for available locations.
<5> The name of the project to use for the {infer} task.
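The request body for this endpoint is collapsed above. A sketch of what the callouts describe, assuming the `googlevertexai` service name; the placeholder values and the `text-embedding-004` model ID are examples, not part of this change:

[source,console]
------------------------------------------------------------
PUT _inference/text_embedding/google_vertex_ai_embeddings
{
  "service": "googlevertexai",
  "service_settings": {
    "service_account_json": "<service-account-json>",
    "model_id": "text-embedding-004",
    "location": "<location>",
    "project_id": "<project-id>"
  }
}
------------------------------------------------------------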

// end::google-vertex-ai[]
