
Commit 60341e1

szabosteve and Mikep86 committed
[DOCS] Documents that ELSER is the default service for semantic_text (#114615)
Co-authored-by: Mike Pellegrini <[email protected]>
1 parent d67d8ea commit 60341e1

File tree

2 files changed: +33 −50 lines changed

docs/reference/mapping/types/semantic-text.asciidoc

Lines changed: 24 additions & 2 deletions
@@ -13,25 +13,47 @@ Long passages are <<auto-text-chunking, automatically chunked>> to smaller secti
 The `semantic_text` field type specifies an inference endpoint identifier that will be used to generate embeddings.
 You can create the inference endpoint by using the <<put-inference-api>>.
 This field type and the <<query-dsl-semantic-query,`semantic` query>> type make it simpler to perform semantic search on your data.
+If you don't specify an inference endpoint, the <<infer-service-elser,ELSER service>> is used by default.
 
 Using `semantic_text`, you won't need to specify how to generate embeddings for your data, or how to index it.
 The {infer} endpoint automatically determines the embedding generation, indexing, and query to use.
 
+If you use the ELSER service, you can set up `semantic_text` with the following API request:
+
 [source,console]
 ------------------------------------------------------------
 PUT my-index-000001
+{
+  "mappings": {
+    "properties": {
+      "inference_field": {
+        "type": "semantic_text"
+      }
+    }
+  }
+}
+------------------------------------------------------------
+
+NOTE: In Serverless, you must create an {infer} endpoint using the <<put-inference-api>> and reference it when setting up `semantic_text` even if you use the ELSER service.
+
+If you use a service other than ELSER, you must create an {infer} endpoint using the <<put-inference-api>> and reference it when setting up `semantic_text`, as the following example demonstrates:
+
+[source,console]
+------------------------------------------------------------
+PUT my-index-000002
 {
   "mappings": {
     "properties": {
       "inference_field": {
         "type": "semantic_text",
-        "inference_id": "my-elser-endpoint"
+        "inference_id": "my-openai-endpoint" <1>
       }
     }
   }
 }
 ------------------------------------------------------------
 // TEST[skip:Requires inference endpoint]
+<1> The `inference_id` of the {infer} endpoint to use to generate embeddings.
 
 The recommended way to use semantic_text is by having dedicated {infer} endpoints for ingestion and search.
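Once documents are indexed, the field defined above can be searched with the <<query-dsl-semantic-query,`semantic` query>>. A minimal sketch (not part of this commit), assuming the `my-index-000001` mapping above and an example query string:

[source,console]
------------------------------------------------------------
GET my-index-000001/_search
{
  "query": {
    "semantic": {
      "field": "inference_field",
      "query": "Best surfing places"
    }
  }
}
------------------------------------------------------------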
@@ -40,7 +62,7 @@ After creating dedicated {infer} endpoints for both, you can reference them usin
 
 [source,console]
 ------------------------------------------------------------
-PUT my-index-000002
+PUT my-index-000003
 {
   "mappings": {
     "properties": {
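The hunk above is truncated by the diff view. A sketch of how the full `my-index-000003` mapping could reference dedicated ingestion and search endpoints, assuming the `search_inference_id` parameter and hypothetical endpoint names (`my-ingest-endpoint`, `my-search-endpoint`), neither of which appears in this commit:

[source,console]
------------------------------------------------------------
PUT my-index-000003
{
  "mappings": {
    "properties": {
      "inference_field": {
        "type": "semantic_text",
        "inference_id": "my-ingest-endpoint",
        "search_inference_id": "my-search-endpoint"
      }
    }
  }
}
------------------------------------------------------------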

docs/reference/search/search-your-data/semantic-search-semantic-text.asciidoc

Lines changed: 9 additions & 48 deletions
@@ -21,45 +21,11 @@ This tutorial uses the <<inference-example-elser,`elser` service>> for demonstra
 [[semantic-text-requirements]]
 ==== Requirements
 
-To use the `semantic_text` field type, you must have an {infer} endpoint deployed in
-your cluster using the <<put-inference-api>>.
+This tutorial uses the <<infer-service-elser,ELSER service>> for demonstration, which is created automatically as needed.
+To use the `semantic_text` field type with an {infer} service other than ELSER, you must create an inference endpoint using the <<put-inference-api>>.
 
-[discrete]
-[[semantic-text-infer-endpoint]]
-==== Create the {infer} endpoint
-
-Create an inference endpoint by using the <<put-inference-api>>:
+NOTE: In Serverless, you must create an {infer} endpoint using the <<put-inference-api>> and reference it when setting up `semantic_text` even if you use the ELSER service.
 
-[source,console]
-------------------------------------------------------------
-PUT _inference/sparse_embedding/my-elser-endpoint <1>
-{
-  "service": "elser", <2>
-  "service_settings": {
-    "adaptive_allocations": { <3>
-      "enabled": true,
-      "min_number_of_allocations": 3,
-      "max_number_of_allocations": 10
-    },
-    "num_threads": 1
-  }
-}
-------------------------------------------------------------
-// TEST[skip:TBD]
-<1> The task type is `sparse_embedding` in the path as the `elser` service will
-be used and ELSER creates sparse vectors. The `inference_id` is
-`my-elser-endpoint`.
-<2> The `elser` service is used in this example.
-<3> This setting enables and configures {ml-docs}/ml-nlp-auto-scale.html#nlp-model-adaptive-allocations[adaptive allocations].
-Adaptive allocations make it possible for ELSER to automatically scale up or down resources based on the current load on the process.
-
-[NOTE]
-====
-You might see a 502 bad gateway error in the response when using the {kib} Console.
-This error usually just reflects a timeout, while the model downloads in the background.
-You can check the download progress in the {ml-app} UI.
-If using the Python client, you can set the `timeout` parameter to a higher value.
-====
 
 [discrete]
 [[semantic-text-index-mapping]]
@@ -75,8 +41,7 @@ PUT semantic-embeddings
 "mappings": {
   "properties": {
     "content": { <1>
-      "type": "semantic_text", <2>
-      "inference_id": "my-elser-endpoint" <3>
+      "type": "semantic_text" <2>
     }
   }
 }
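With this mapping in place, ingestion needs no extra setup: indexing a document into the `content` field triggers chunking and embedding generation automatically. A minimal sketch (not part of this commit; the document text is an invented example):

[source,console]
------------------------------------------------------------
POST semantic-embeddings/_doc
{
  "content": "A long passage of text that is chunked and embedded at ingest time."
}
------------------------------------------------------------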
@@ -85,18 +50,14 @@ PUT semantic-embeddings
 // TEST[skip:TBD]
 <1> The name of the field to contain the generated embeddings.
 <2> The field to contain the embeddings is a `semantic_text` field.
-<3> The `inference_id` is the inference endpoint you created in the previous step.
-It will be used to generate the embeddings based on the input text.
-Every time you ingest data into the related `semantic_text` field, this endpoint will be used for creating the vector representation of the text.
+Since no `inference_id` is provided, the <<infer-service-elser,ELSER service>> is used by default.
+To use a different {infer} service, you must create an {infer} endpoint first using the <<put-inference-api>> and then specify it in the `semantic_text` field mapping using the `inference_id` parameter.
 
 [NOTE]
 ====
-If you're using web crawlers or connectors to generate indices, you have to
-<<indices-put-mapping,update the index mappings>> for these indices to
-include the `semantic_text` field. Once the mapping is updated, you'll need to run
-a full web crawl or a full connector sync. This ensures that all existing
-documents are reprocessed and updated with the new semantic embeddings,
-enabling semantic search on the updated data.
+If you're using web crawlers or connectors to generate indices, you have to <<indices-put-mapping,update the index mappings>> for these indices to include the `semantic_text` field.
+Once the mapping is updated, you'll need to run a full web crawl or a full connector sync.
+This ensures that all existing documents are reprocessed and updated with the new semantic embeddings, enabling semantic search on the updated data.
 ====
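For an existing crawler- or connector-generated index, the mapping update described in the note could look like the following sketch (hypothetical index and field names; not part of this commit):

[source,console]
------------------------------------------------------------
PUT my-crawler-index/_mapping
{
  "properties": {
    "body_semantic": {
      "type": "semantic_text"
    }
  }
}
------------------------------------------------------------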