**deploy-manage/monitor/orchestrators/ece-monitoring-ece-set-retention.md** (3 additions, 3 deletions)

@@ -20,7 +20,7 @@ You might need to adjust the retention period for one of the following reasons:
 To customize the retention period, set up a custom lifecycle policy for logs and metrics indices:
 
 1. [Create a new index lifecycle management (ILM) policy](../../../manage-data/lifecycle/index-lifecycle-management/configure-lifecycle-policy.md) in the logging and metrics cluster.
-2. Create a new, legacy-style, index template that matches the data view (formerly *index pattern*) that you want to customize the lifecycle for.
-3. Specify a lifecycle policy in the index template settings.
-4. Choose a higher `order` for the template so the specified lifecycle policy will be used instead of the default.
+2. Create a new composable index template that matches the data view (formerly *index pattern*) for the data stream you want to customize the lifecycle for.
+3. Specify a custom lifecycle policy in the index template settings.
+4. Choose a higher `priority` for the template so the specified lifecycle policy will be used instead of the default.
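
The steps above can be sketched against the {{es}} API. This is a minimal, hypothetical example: the policy name `custom-logging-policy`, the `logging-*` pattern, the `30d` retention, and the `priority` value are placeholders, not defaults shipped with ECE.

```console
PUT _ilm/policy/custom-logging-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {}
      },
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}

PUT _index_template/custom-logging-template
{
  "index_patterns": ["logging-*"],
  "priority": 200,
  "template": {
    "settings": {
      "index.lifecycle.name": "custom-logging-policy"
    }
  }
}
```

Because the template's `priority` (200 here) is higher than the default template's, indices matching `logging-*` pick up `custom-logging-policy` instead of the default lifecycle.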
**deploy-manage/tools/snapshot-and-restore/ece-restore-snapshots-into-new-deployment.md** (4 additions)

@@ -16,6 +16,10 @@ products:
 1. First, [create a new deployment](../../deploy/cloud-enterprise/create-deployment.md) and select **Restore snapshot data**. Select the deployment that you want to restore a snapshot *from*. If you don’t know the exact name, you can enter a few characters and then select from the list of matching deployments.
 2. Select the snapshot that you want to restore from. If none is chosen, the latest successful snapshot from the cluster you selected is restored on the new cluster when you create it.
 
+:::{important}
+Only snapshots from the `found-snapshots` repository are accepted. Snapshots from a custom repository are not allowed.
+:::
+
 3. Manually recreate users using the X-Pack security features or using Shield on the new cluster. User information is not included when you restore across clusters.
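
Before picking a snapshot in step 2, you can confirm which snapshots the `found-snapshots` repository holds by querying the source cluster. A sketch using the standard snapshot API (`found-snapshots` is the ECE-managed default repository named in the note above):

```console
GET _snapshot/found-snapshots/_all
```

The response lists each snapshot's name, state, and indices, which helps you choose the snapshot to restore instead of silently falling back to the latest one.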
**manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-nodejs-web-application-using-filebeat.md**
**reference/fleet/migrate-auditbeat-to-agent.md** (1 addition, 1 deletion)

@@ -34,7 +34,7 @@ The following table describes the integrations you can use instead of {{auditbea
 | {{fleet}} [system](integration-docs://reference/system/index.md) integration | Collect login events for Windows through the [Security event log](integration-docs://reference/system/index.md#security). |
 | [System.package](beats://reference/auditbeat/auditbeat-dataset-system-package.md) dataset | [System Audit](integration-docs://reference/system_audit/index.md) integration | This integration is a direct replacement of the System Package dataset. Starting in {{stack}} 8.7, you can port rules and configuration settings to this integration. This integration currently schedules collection of information such as:<br><br>* [rpm_packages](https://www.osquery.io/schema/5.1.0/#rpm_packages)<br>* [deb_packages](https://www.osquery.io/schema/5.1.0/#deb_packages)<br>* [homebrew_packages](https://www.osquery.io/schema/5.1.0/#homebrew_packages)<br> |
 | [Osquery](integration-docs://reference/osquery/index.md) or [Osquery Manager](integration-docs://reference/osquery_manager/index.md) integration | Schedule collection of information like:<br><br>* [rpm_packages](https://www.osquery.io/schema/5.1.0/#rpm_packages)<br>* [deb_packages](https://www.osquery.io/schema/5.1.0/#deb_packages)<br>* [homebrew_packages](https://www.osquery.io/schema/5.1.0/#homebrew_packages)<br>* [apps](https://www.osquery.io/schema/5.1.0/#apps) (MacOS)<br>* [programs](https://www.osquery.io/schema/5.1.0/#programs) (Windows)<br>* [npm_packages](https://www.osquery.io/schema/5.1.0/#npm_packages)<br>* [atom_packages](https://www.osquery.io/schema/5.1.0/#atom_packages)<br>* [chocolatey_packages](https://www.osquery.io/schema/5.1.0/#chocolatey_packages)<br>* [portage_packages](https://www.osquery.io/schema/5.1.0/#portage_packages)<br>* [python_packages](https://www.osquery.io/schema/5.1.0/#python_packages)<br> |
-| [System.process](beats://reference/auditbeat/auditbeat-dataset-system-process.md) dataset | [Endpoint](/solutions/security/configure-elastic-defend/install-elastic-defend.md) | Best replacement because out of the box it reports events forevery process in [ECS](integration-docs://reference/index.md) format and has excellentintegration in [Kibana](/get-started/the-stack.md). |
+| [System.process](beats://reference/auditbeat/auditbeat-dataset-system-process.md) dataset | [Endpoint](/solutions/security/configure-elastic-defend/install-elastic-defend.md) | Best replacement because out of the box it reports events for every process in [ECS](integration-docs://reference/index.md) format and has excellent integration in {{kib}}. |
 | [Custom Windows event log](integration-docs://reference/winlog/index.md) and [Sysmon](integration-docs://reference/sysmon_linux/index.md) integrations | Provide process data. |
 | [Osquery](integration-docs://reference/osquery/index.md) or [Osquery Manager](integration-docs://reference/osquery_manager/index.md) integration | Collect data from the [process](https://www.osquery.io/schema/5.1.0/#process) table on some OSes without polling. |
 | [System.socket](beats://reference/auditbeat/auditbeat-dataset-system-socket.md) dataset | [Endpoint](/solutions/security/configure-elastic-defend/install-elastic-defend.md) | Best replacement because it supports monitoring network connections on Linux, Windows, and MacOS. Includes process and user metadata. Currently does not do flow accounting (byte and packet counts) or domain name enrichment (but does collect DNS queries separately). |
**solutions/observability/apm/apm-server/binary.md** (4 additions, 2 deletions)

@@ -21,7 +21,8 @@ You’ll need:
 * **{{es}}** for storing and indexing data.
 * **{{kib}}** for visualizing with the Applications UI.
 
-We recommend you use the same version of {{es}}, {{kib}}, and APM Server. See [Installing the {{stack}}](/get-started/the-stack.md) for more information about installing these products.
+We recommend you use the same version of {{es}}, {{kib}}, and APM Server.
+For more information about installing these products, refer to [](/deploy-manage/deploy.md).

@@ -30,7 +31,8 @@ We recommend you use the same version of {{es}}, {{kib}}, and APM Server. See [I
 ## Step 1: Install [apm-installing]
 
 ::::{note}
-**Before you begin**: If you haven’t installed the {{stack}}, do that now. See [Learn how to install the {{stack}} on your own hardware](/get-started/the-stack.md).
+**Before you begin**: If you haven’t installed the {{stack}}, do that now.
+Refer to [](/deploy-manage/deploy.md).
 ::::
 
 To download and install APM Server, use the commands below that work with your system. If you use `apt` or `yum`, you can [install APM Server from our repositories](#apm-setup-repositories) to update to the newest version more easily.
**solutions/observability/apm/apm-server/fleet-managed.md** (1 addition, 1 deletion)

@@ -22,7 +22,7 @@ This guide will explain how to set up and configure a Fleet-managed APM Server.
 You need {{es}} for storing and searching your data, and {{kib}} for visualizing and managing it. When setting these components up, you need:
 
-* {{es}} cluster and {{kib}} (version 9.0) with a basic license or higher. [Learn how to install the {{stack}} on your own hardware](/get-started/the-stack.md).
+* {{es}} cluster and {{kib}} (version 9.0) with a basic license or higher. Refer to [](/deploy-manage/deploy.md).
 * Secure, encrypted connection between {{kib}} and {{es}}. For more information, refer to [](/deploy-manage/security/self-setup.md).
 * Internet connection for {{kib}} to download integration packages from the {{package-registry}}. Make sure the {{kib}} server can connect to `https://epr.elastic.co` on port `443`. If your environment has network traffic restrictions, there are ways to work around this requirement. See [Air-gapped environments](/reference/fleet/air-gapped.md) for more information.
 * {{kib}} user with `All` privileges on {{fleet}} and {{integrations}}. Since many Integrations assets are shared across spaces, users need the {{kib}} privileges in all spaces.
**solutions/search/agent-builder/mcp-server.md** (61 additions, 2 deletions)

@@ -40,12 +40,13 @@ Most MCP clients (such as Claude Desktop, Cursor, VS Code, etc.) have similar co
       ],
       "env": {
         "KIBANA_URL": "${KIBANA_URL}",
-        "AUTH_HEADER": "ApiKey ${API_KEY}"
+        "AUTH_HEADER": "ApiKey ${API_KEY}" <1>
       }
     }
   }
 }
+1. Refer to [](#api-key-application-privileges)
 
 :::{note}
 Set the following environment variables:

@@ -57,5 +58,63 @@ export API_KEY="your-api-key"
 
 For information on generating API keys, refer to [API keys](https://www.elastic.co/docs/solutions/search/search-connection-details).
 
-Tools execute with the scope assigned to the API key. Make sure your API key has the appropriate permissions to only access the indices and data that you want to expose through the MCP server.
+Tools execute with the scope assigned to the API key. Make sure your API key has the appropriate permissions to only access the indices and data that you want to expose through the MCP server. To learn more, refer to [](#api-key-application-privileges).
 :::
+
+## API key application privileges
+
+To access the MCP server endpoint, your API key must include {{kib}} application privileges.
+
+### Development and testing
+
+For development and testing purposes, you can create an unrestricted API key with full access:
+
+```console
+POST /_security/api_key
+{
+  "name": "my-mcp-api-key",
+  "expiration": "1d",
+  "role_descriptors": {
+    "unrestricted": {
+      "cluster": ["all"],
+      "indices": [
+        {
+          "names": ["*"],
+          "privileges": ["all"]
+        }
+      ]
+    }
+  }
+}
+```
+
+### Production
+
+For production environments, use a restricted API key with specific application privileges:
+
+```console
+POST /_security/api_key
+{
+  "name": "my-mcp-api-key",
+  "expiration": "1d",
+  "role_descriptors": {
+    "mcp-access": {
+      "cluster": ["all"],
+      "indices": [
+        {
+          "names": ["*"],
+          "privileges": ["read", "view_index_metadata"]
+        }
+      ],
+      "applications": [
+        {
+          "application": "kibana-.kibana",
+          "privileges": ["read_onechat", "space_read"], <1>
+          "resources": ["space:default"]
+        }
+      ]
+    }
+  }
+}
+```
+
+1. The `read_onechat` and `space_read` application privileges are required to authorize access to the MCP endpoint. Without these privileges, you'll receive a 403 Forbidden error.
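
Once a key is created, you can inspect what it grants before wiring it into an MCP client. A sketch using the standard get API key API (`my-mcp-api-key` is the name used in the examples above):

```console
GET /_security/api_key?name=my-mcp-api-key
```

The response echoes each key's `role_descriptors` and expiration, so you can confirm the `applications` section grants `read_onechat` and `space_read` before the client ever connects.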
**solutions/search/semantic-search/semantic-search-semantic-text.md** (65 additions, 7 deletions)

@@ -27,6 +27,16 @@ This tutorial uses the `elasticsearch` service for demonstration, which is creat
 The mapping of the destination index - the index that contains the embeddings that the inference endpoint will generate based on your input text - must be created. The destination index must have a field with the [`semantic_text`](elasticsearch://reference/elasticsearch/mapping-reference/semantic-text.md) field type to index the output of the used inference endpoint.
 
+You can run {{infer}} either using the [Elastic {{infer-cap}} Service](/explore-analyze/elastic-inference/eis.md) or on your own ML nodes. The following examples show both scenarios.
+
+:::::::{tab-set}
+
+::::::{tab-item} Using EIS on Serverless
+
+```{applies_to}
+serverless: ga
+```
+
 ```console
 PUT semantic-embeddings
 {

@@ -41,7 +51,61 @@ PUT semantic-embeddings
 ```
 
 1. The name of the field to contain the generated embeddings.
-2. The field to contain the embeddings is a `semantic_text` field. Since no `inference_id` is provided, the default endpoint `.elser-2-elasticsearch` for the `elasticsearch` service is used. To use a different {{infer}} service, you must create an {{infer}} endpoint first using the [Create {{infer}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put) and then specify it in the `semantic_text` field mapping using the `inference_id` parameter.
+2. The field to contain the embeddings is a `semantic_text` field. Since no `inference_id` is provided, the default endpoint `.elser-v2-elastic` for the `elasticsearch` service is used. This {{infer}} endpoint uses the [Elastic {{infer-cap}} Service (EIS)](/explore-analyze/elastic-inference/eis.md).
+
+::::::
+
+::::::{tab-item} Using EIS in Cloud
+
+```{applies_to}
+stack: ga
+deployment:
+  self: unavailable
+```
+
+```console
+PUT semantic-embeddings
+{
+  "mappings": {
+    "properties": {
+      "content": { <1>
+        "type": "semantic_text", <2>
+        "inference_id": ".elser-v2-elastic" <3>
+      }
+    }
+  }
+}
+```
+
+1. The name of the field to contain the generated embeddings.
+2. The field to contain the embeddings is a `semantic_text` field.
+3. The `.elser-v2-elastic` preconfigured {{infer}} endpoint for the `elasticsearch` service is used. This {{infer}} endpoint uses the [Elastic {{infer-cap}} Service (EIS)](/explore-analyze/elastic-inference/eis.md).
+
+::::::
+
+::::::{tab-item} Using ML nodes
+
+```console
+PUT semantic-embeddings
+{
+  "mappings": {
+    "properties": {
+      "content": { <1>
+        "type": "semantic_text", <2>
+        "inference_id": ".elser-2-elasticsearch" <3>
+      }
+    }
+  }
+}
+```
+
+1. The name of the field to contain the generated embeddings.
+2. The field to contain the embeddings is a `semantic_text` field.
+3. The `.elser-2-elasticsearch` preconfigured {{infer}} endpoint for the `elasticsearch` service is used. To use a different {{infer}} service, you must create an {{infer}} endpoint first using the [Create {{infer}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put) and then specify it in the `semantic_text` field mapping using the `inference_id` parameter.
+
+::::::
+
+:::::::
 
 To try the ELSER model on the Elastic Inference Service, explicitly set the `inference_id` to `.elser-2-elastic`. For instructions, refer to [Using `semantic_text` with ELSER on EIS](https://www.elastic.co/docs/reference/elasticsearch/mapping-reference/semantic-text#using-elser-on-eis).
@@ -50,8 +114,6 @@ If you’re using web crawlers or connectors to generate indices, you have to [u
 ::::
-
-
 ## Load data [semantic-text-load-data]
 
 In this step, you load the data that you later use to create embeddings from.

@@ -60,7 +122,6 @@ Use the `msmarco-passagetest2019-top1000` data set, which is a subset of the MS
 Download the file and upload it to your cluster using the [Data Visualizer](../../../manage-data/ingest/upload-data-files.md) in the {{ml-app}} UI. After your data is analyzed, click **Override settings**. Under **Edit field names**, assign `id` to the first column and `content` to the second. Click **Apply**, then **Import**. Name the index `test-data`, and click **Import**. After the upload is complete, you will see an index named `test-data` with 182,469 documents.
-
 ## Reindex the data [semantic-text-reindex-data]
 
 Create the embeddings from the text by reindexing the data from the `test-data` index to the `semantic-embeddings` index. The data in the `content` field will be reindexed into the `content` semantic text field of the destination index. The reindexed data will be processed by the {{infer}} endpoint associated with the `content` semantic text field.

@@ -70,7 +131,6 @@ This step uses the reindex API to simulate data ingestion. If you are working wi
 ::::
-

@@ -86,7 +146,6 @@
 1. The default batch size for reindexing is 1000. Reducing `size` to a smaller number makes updates of the reindexing process quicker, which enables you to follow the progress closely and detect errors early.
-
 The call returns a task ID to monitor the progress:

@@ -99,7 +158,6 @@ Reindexing large datasets can take a long time. You can test this workflow using
 After the data has been indexed with the embeddings, you can query the data using semantic search. Choose between [Query DSL](/explore-analyze/query-filter/languages/querydsl.md) or [{{esql}}](elasticsearch://reference/query-languages/esql.md) syntax to execute the query.
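
As a quick sketch of the query step described above, a Query DSL `semantic` query against the `content` field of the `semantic-embeddings` index built in this tutorial could look like the following (the query text is a placeholder):

```console
GET semantic-embeddings/_search
{
  "query": {
    "semantic": {
      "field": "content",
      "query": "How to avoid muscle soreness while running?"
    }
  }
}
```

The `semantic` query uses the same {{infer}} endpoint associated with the `content` field to embed the query text, so no separate embedding call is needed.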