summary: Enable Failure Store for new logs-*-* data streams
area: Data streams
type: feature
issues:
 - 131105
highlight:
  title: Enable Failure Store for new logs data streams
  body: |-
    The [Failure Store](docs-content://manage-data/data-store/data-streams/failure-store.md) is now enabled by default for new logs data streams matching the pattern `logs-*-*`. This means that such data streams will now store invalid documents in a dedicated failure index instead of rejecting them, allowing better visibility and control over data quality issues without losing data. This can be [enabled manually](docs-content://manage-data/data-store/data-streams/failure-store.md#set-up-failure-store-existing) for existing data streams.

    Note: With the failure store enabled, the HTTP response code clients receive when indexing invalid documents will change from `400 Bad Request` to `201 Created`, with an additional response attribute `"failure_store" : "used"`.
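A hypothetical sketch of the new behavior (the data stream name and document are illustrative, not from the changelog): indexing a document that fails mapping validation into a failure-store-enabled `logs-*-*` data stream no longer returns `400 Bad Request`.

```console
POST logs-myapp-default/_doc
{
  "@timestamp": "2025-01-01T00:00:00Z",
  "message": { "unexpected": "an object where text was expected" }
}
```

With the failure store enabled, a request like this would return `201 Created` and the response would carry the additional `"failure_store" : "used"` attribute, indicating the document was routed to the failure index.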
You can either use preconfigured endpoints in your `semantic_text` fields, which are ideal for most use cases, or create custom endpoints and reference them in the field mappings.

### Using the default ELSER endpoint

If you use the preconfigured `.elser-2-elasticsearch` endpoint, you can set up `semantic_text` with the following API request:

@@ -53,6 +61,8 @@ PUT my-index-000001
}
```
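Documents can then be ingested as usual; the `semantic_text` field runs inference at ingest time using the configured endpoint. A minimal, hypothetical example (the document text is illustrative):

```console
POST my-index-000001/_doc
{
  "inference_field": "Elasticsearch provides distributed, near real-time search and analytics."
}
```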

### Using a custom endpoint

To use a custom {{infer}} endpoint instead of the default `.elser-2-elasticsearch`, you must use the [Create {{infer}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put)
@@ -96,6 +106,35 @@ PUT my-index-000003
}
```
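The custom endpoint referenced in the mapping must exist before documents are indexed. As an illustrative sketch (the endpoint name and settings are hypothetical), an ELSER endpoint could be created with the Create {{infer}} API like this:

```console
PUT _inference/sparse_embedding/my-elser-endpoint
{
  "service": "elasticsearch",
  "service_settings": {
    "adaptive_allocations": {
      "enabled": true
    },
    "num_threads": 1,
    "model_id": ".elser_model_2"
  }
}
```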

### Using ELSER on EIS

```{applies_to}
stack: preview 9.1
serverless: preview
```

If you use the preconfigured `.elser-2-elastic` endpoint that utilizes the ELSER model as a service through the Elastic Inference Service ([ELSER on EIS](docs-content://explore-analyze/elastic-inference/eis.md#elser-on-eis)), you can set up `semantic_text` with the following API request:

```console
PUT my-index-000001
{
  "mappings": {
    "properties": {
      "inference_field": {
        "type": "semantic_text",
        "inference_id": ".elser-2-elastic"
      }
    }
  }
}
```

::::{note}
While we do encourage experimentation, we do not recommend implementing production use cases on top of this feature while it is in Technical Preview.
::::
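Once documents are indexed, a `semantic_text` field can be searched with a `semantic` query. A minimal sketch, assuming the `my-index-000001` index above (the query text is illustrative):

```console
GET my-index-000001/_search
{
  "query": {
    "semantic": {
      "field": "inference_field",
      "query": "How does semantic search work?"
    }
  }
}
```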
## Parameters for `semantic_text` fields [semantic-text-params]

docs/reference/query-languages/esql/_snippets/commands/layout/completion.md
@@ -9,10 +9,26 @@ The `COMPLETION` command allows you to send prompts and context to a Large Langu

**Syntax**

::::{tab-set}

:::{tab-item} 9.2.0+

```esql
COMPLETION [column =] prompt WITH { "inference_id" : "my_inference_endpoint" }
```

:::

:::{tab-item} 9.1.x only

```esql
COMPLETION [column =] prompt WITH my_inference_endpoint
```

:::

::::

**Parameters**

`column`
@@ -24,7 +40,7 @@ COMPLETION [column =] prompt WITH inference_id
: The input text or expression used to prompt the LLM.
  This can be a string literal or a reference to a column containing text.

`my_inference_endpoint`
: The ID of the [inference endpoint](docs-content://explore-analyze/elastic-inference/inference-api.md) to use for the task.
  The inference endpoint must be configured with the `completion` task type.

@@ -46,16 +62,46 @@ including:
**Requirements**

To use this command, you must deploy your LLM model in Elasticsearch as an [inference endpoint](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put) with the task type `completion`.

#### Handling timeouts

`COMPLETION` commands may time out when processing large datasets or complex prompts. The default timeout is 10 minutes, but you can increase this limit if necessary.

How you increase the timeout depends on your deployment type:

::::{tab-set}
:::{tab-item} {{ech}}
* You can adjust {{es}} settings in the [Elastic Cloud Console](docs-content://deploy-manage/deploy/elastic-cloud/edit-stack-settings.md)
* You can also adjust the `search.default_search_timeout` cluster setting using [Kibana's Advanced settings](kibana://reference/advanced-settings.md#kibana-search-settings)
:::

:::{tab-item} Self-managed
* You can configure the timeout at the cluster level by setting `search.default_search_timeout` in `elasticsearch.yml`, or update it via the [Cluster Settings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings)
* You can also adjust the `search:timeout` setting using [Kibana's Advanced settings](kibana://reference/advanced-settings.md#kibana-search-settings)
* Alternatively, you can add timeout parameters to individual queries
:::

:::{tab-item} {{serverless-full}}
* Requires a manual override from Elastic Support because you cannot modify timeout settings directly
:::
::::
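For example, on a self-managed cluster the default search timeout could be raised through the Cluster Settings API (the `30m` value is illustrative, not a recommendation):

```console
PUT _cluster/settings
{
  "persistent": {
    "search.default_search_timeout": "30m"
  }
}
```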

If you don't want to increase the timeout limit, try the following:

* Reduce data volume with `LIMIT` or more selective filters before the `COMPLETION` command
* Split complex operations into multiple simpler queries
* Configure your HTTP client's response timeout (refer to [HTTP client configuration](/reference/elasticsearch/configuration-reference/networking-settings.md#_http_client_configuration))
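The first suggestion can be sketched as a query that narrows the data before prompting the LLM (the index, field, and endpoint names are hypothetical):

```esql
FROM sample-logs
| WHERE event.severity == "error"
| SORT @timestamp DESC
| LIMIT 100
| COMPLETION summary = message WITH { "inference_id" : "my_inference_endpoint" }
| KEEP @timestamp, message, summary
```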

**Examples**

Use the default column name (results stored in `completion` column):

```esql
ROW question = "What is Elasticsearch?"
| COMPLETION question WITH { "inference_id" : "my_inference_endpoint" }
| KEEP question, completion
```

@@ -67,7 +113,7 @@ Specify the output column (results stored in `answer` column):

```esql
ROW question = "What is Elasticsearch?"
| COMPLETION answer = question WITH { "inference_id" : "my_inference_endpoint" }
```