docs/reference/inference/inference-apis.asciidoc (18 additions, 0 deletions)
@@ -34,6 +34,24 @@ Elastic –, then create an {infer} endpoint by the <<put-inference-api>>.
 Now use <<semantic-search-semantic-text, semantic text>> to perform
 <<semantic-search, semantic search>> on your data.
 
+
+[discrete]
+[[default-enpoints]]
+=== Default {infer} endpoints
+
+Your {es} deployment contains some preconfigured {infer} endpoints that make it easier for you to use them when defining `semantic_text` fields or {infer} processors.
+The following list contains the default {infer} endpoints listed by `inference_id`:
+
+* `.elser-2-elasticsearch`: uses the {ml-docs}/ml-nlp-elser.html[ELSER] built-in trained model for `sparse_embedding` tasks (recommended for English language texts)
+* `.multilingual-e5-small-elasticsearch`: uses the {ml-docs}/ml-nlp-e5.html[E5] built-in trained model for `text_embedding` tasks (recommended for non-English language texts)
+
+Use the `inference_id` of the endpoint in a <<semantic-text,`semantic_text`>> field definition or when creating an <<inference-processor,{infer} processor>>.
+The API call will automatically download and deploy the model, which might take a couple of minutes.
+Default {infer} endpoints have {ml-docs}/ml-nlp-auto-scale.html#nlp-model-adaptive-allocations[adaptive allocations] enabled.
+For these models, the minimum number of allocations is `0`.
+If there is no {infer} activity that uses the endpoint, the number of allocations will scale down to `0` automatically after 15 minutes.
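
For illustration, here is a minimal sketch of how one of these default endpoints can be referenced from a `semantic_text` field mapping; the index name `my-index` and the field name `content` are hypothetical, and only the `inference_id` value comes from the list above:

[source,console]
----
PUT my-index
{
  "mappings": {
    "properties": {
      "content": {
        "type": "semantic_text",
        "inference_id": ".elser-2-elasticsearch"
      }
    }
  }
}
----

Because the default endpoints use adaptive allocations with a minimum of `0`, no model stays deployed while the field sits idle; allocations scale back down to `0` after 15 minutes without {infer} activity.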
docs/reference/inference/service-elasticsearch.asciidoc (84 additions, 10 deletions)
@@ -1,12 +1,9 @@
 [[infer-service-elasticsearch]]
 === Elasticsearch {infer} service
 
-Creates an {infer} endpoint to perform an {infer} task with the `elasticsearch`
-service.
+Creates an {infer} endpoint to perform an {infer} task with the `elasticsearch` service.
 
-NOTE: If you use the E5 model through the `elasticsearch` service, the API
-request will automatically download and deploy the model if it isn't downloaded
-yet.
+NOTE: If you use the ELSER or the E5 model through the `elasticsearch` service, the API request will automatically download and deploy the model if it isn't downloaded yet.
 
 
 [discrete]
@@ -56,6 +53,11 @@ These settings are specific to the `elasticsearch` service.
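
The remaining additions to this file are not shown above. Independently of that hidden hunk, here is a hedged sketch of what creating an ELSER endpoint through the `elasticsearch` service can look like, based on the publicly documented service settings rather than on this diff; the endpoint name `my-elser-endpoint` and the allocation bounds are illustrative assumptions:

[source,console]
----
PUT _inference/sparse_embedding/my-elser-endpoint
{
  "service": "elasticsearch",
  "service_settings": {
    "adaptive_allocations": {
      "enabled": true,
      "min_number_of_allocations": 1,
      "max_number_of_allocations": 4
    },
    "num_threads": 1,
    "model_id": ".elser_model_2"
  }
}
----

As the updated NOTE states, such a request downloads and deploys the model automatically if it is not available yet.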
docs/reference/inference/service-elser.asciidoc (2 additions, 1 deletion)
@@ -2,6 +2,7 @@
 === ELSER {infer} service
 
 Creates an {infer} endpoint to perform an {infer} task with the `elser` service.
+You can also deploy ELSER by using the <<infer-service-elasticsearch>>.
 
 NOTE: The API request will automatically download and deploy the ELSER model if
 it isn't already downloaded.
@@ -128,7 +129,7 @@ If using the Python client, you can set the `timeout` parameter to a higher valu
 
 [discrete]
 [[inference-example-elser-adaptive-allocation]]
-==== Setting adaptive allocation for the ELSER service
+==== Setting adaptive allocations for the ELSER service
 
 NOTE: For more information on how to optimize your ELSER endpoints, refer to {ml-docs}/ml-nlp-elser.html#elser-recommendations[the ELSER recommendations] section in the model documentation.
 To learn more about model autoscaling, refer to the {ml-docs}/ml-nlp-auto-scale.html[trained model autoscaling] page.
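
The renamed heading above covers adaptive allocations for the `elser` service. As a rough sketch of such a request (the endpoint name `my-elser-model` and the allocation bounds are assumptions for illustration, not values taken from this change):

[source,console]
----
PUT _inference/sparse_embedding/my-elser-model
{
  "service": "elser",
  "service_settings": {
    "adaptive_allocations": {
      "enabled": true,
      "min_number_of_allocations": 3,
      "max_number_of_allocations": 10
    },
    "num_threads": 1
  }
}
----

With adaptive allocations enabled, the number of allocations scales with {infer} load instead of staying fixed, which is what the linked trained model autoscaling page describes in more detail.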