Commit 960bdae

Merge branch 'main' into leemthompo/move-syntax-guide

2 parents: 894b034 + d90d435

File tree: 11 files changed (+419 −361 lines)

deploy-manage/monitor/orchestrators/ece-monitoring-ece-set-retention.md

Lines changed: 3 additions & 3 deletions

@@ -20,7 +20,7 @@ You might need to adjust the retention period for one of the following reasons:
  To customize the retention period, set up a custom lifecycle policy for logs and metrics indices:

  1. [Create a new index lifecycle management (ILM) policy](../../../manage-data/lifecycle/index-lifecycle-management/configure-lifecycle-policy.md) in the logging and metrics cluster.
- 2. Create a new, legacy-style, index template that matches the data view (formerly *index pattern*) that you want to customize the lifecycle for.
- 3. Specify a lifecycle policy in the index template settings.
- 4. Choose a higher `order` for the template so the specified lifecycle policy will be used instead of the default.
+ 2. Create a new composable index template that matches the data view (formerly *index pattern*) for the data stream you want to customize the lifecycle for.
+ 3. Specify a custom lifecycle policy in the index template settings.
+ 4. Choose a higher `priority` for the template so the specified lifecycle policy will be used instead of the default.
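Taken together, the updated steps might translate into a request like the following. This is an editorial sketch, not part of the change itself: the template name, index pattern, and policy name are illustrative placeholders.

```console
PUT _index_template/custom-logs-metrics-template
{
  "index_patterns": ["logging-and-metrics-*"], <1>
  "priority": 200, <2>
  "template": {
    "settings": {
      "index.lifecycle.name": "custom-logs-metrics-policy" <3>
    }
  }
}
```

1. A hypothetical pattern matching the data streams whose lifecycle you want to customize.
2. A priority higher than the default template's, so this template is used instead.
3. The custom ILM policy created in step 1.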

deploy-manage/tools/snapshot-and-restore/ece-restore-snapshots-into-new-deployment.md

Lines changed: 4 additions & 0 deletions

@@ -16,6 +16,10 @@ products:
  1. First, [create a new deployment](../../deploy/cloud-enterprise/create-deployment.md) and select **Restore snapshot data**. Select the deployment that you want to restore a snapshot *from*. If you don’t know the exact name, you can enter a few characters and then select from the list of matching deployments.

  2. Select the snapshot that you want to restore from. If none is chosen, the latest successful snapshot from the cluster you selected is restored on the new cluster when you create it.

+ :::{important}
+ Only snapshots from the `found-snapshots` repository are accepted. Snapshots from a custom repository are not allowed.
+ :::

  ![Restoring from a snapshot](/deploy-manage/images/cloud-enterprise-restore-from-snapshot.png "")

  3. Manually recreate users using the X-Pack security features or using Shield on the new cluster. User information is not included when you restore across clusters.

manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-nodejs-web-application-using-filebeat.md

Lines changed: 277 additions & 343 deletions
Large diffs are not rendered by default.

reference/fleet/migrate-auditbeat-to-agent.md

Lines changed: 1 addition & 1 deletion

@@ -34,7 +34,7 @@ The following table describes the integrations you can use instead of {{auditbea
  | {{fleet}} [system](integration-docs://reference/system/index.md) integration | Collect login events for Windows through the [Security event log](integration-docs://reference/system/index.md#security). |
  | [System.package](beats://reference/auditbeat/auditbeat-dataset-system-package.md) dataset | [System Audit](integration-docs://reference/system_audit/index.md) integration | This integration is a direct replacement of the System Package dataset. Starting in {{stack}} 8.7, you can port rules and configuration settings to this integration. This integration currently schedules collection of information such as:<br><br>* [rpm_packages](https://www.osquery.io/schema/5.1.0/#rpm_packages)<br>* [deb_packages](https://www.osquery.io/schema/5.1.0/#deb_packages)<br>* [homebrew_packages](https://www.osquery.io/schema/5.1.0/#homebrew_packages)<br> |
  | [Osquery](integration-docs://reference/osquery/index.md) or [Osquery Manager](integration-docs://reference/osquery_manager/index.md) integration | Schedule collection of information like:<br><br>* [rpm_packages](https://www.osquery.io/schema/5.1.0/#rpm_packages)<br>* [deb_packages](https://www.osquery.io/schema/5.1.0/#deb_packages)<br>* [homebrew_packages](https://www.osquery.io/schema/5.1.0/#homebrew_packages)<br>* [apps](https://www.osquery.io/schema/5.1.0/#apps) (MacOS)<br>* [programs](https://www.osquery.io/schema/5.1.0/#programs) (Windows)<br>* [npm_packages](https://www.osquery.io/schema/5.1.0/#npm_packages)<br>* [atom_packages](https://www.osquery.io/schema/5.1.0/#atom_packages)<br>* [chocolatey_packages](https://www.osquery.io/schema/5.1.0/#chocolatey_packages)<br>* [portage_packages](https://www.osquery.io/schema/5.1.0/#portage_packages)<br>* [python_packages](https://www.osquery.io/schema/5.1.0/#python_packages)<br> |
- | [System.process](beats://reference/auditbeat/auditbeat-dataset-system-process.md) dataset | [Endpoint](/solutions/security/configure-elastic-defend/install-elastic-defend.md) | Best replacement because out of the box it reports events for every process in [ECS](integration-docs://reference/index.md) format and has excellentintegration in [Kibana](/get-started/the-stack.md). |
+ | [System.process](beats://reference/auditbeat/auditbeat-dataset-system-process.md) dataset | [Endpoint](/solutions/security/configure-elastic-defend/install-elastic-defend.md) | Best replacement because out of the box it reports events for every process in [ECS](integration-docs://reference/index.md) format and has excellent integration in {{kib}}. |
  | [Custom Windows event log](integration-docs://reference/winlog/index.md) and [Sysmon](integration-docs://reference/sysmon_linux/index.md) integrations | Provide process data. |
  | [Osquery](integration-docs://reference/osquery/index.md) or [Osquery Manager](integration-docs://reference/osquery_manager/index.md) integration | Collect data from the [process](https://www.osquery.io/schema/5.1.0/#process) table on some OSes without polling. |
  | [System.socket](beats://reference/auditbeat/auditbeat-dataset-system-socket.md) dataset | [Endpoint](/solutions/security/configure-elastic-defend/install-elastic-defend.md) | Best replacement because it supports monitoring network connections on Linux, Windows, and MacOS. Includes process and user metadata. Currently does not do flow accounting (byte and packet counts) or domain name enrichment (but does collect DNS queries separately). |

solutions/observability/apm/apm-server/binary.md

Lines changed: 4 additions & 2 deletions

@@ -21,7 +21,8 @@ You’ll need:
  * **{{es}}** for storing and indexing data.
  * **{{kib}}** for visualizing with the Applications UI.

- We recommend you use the same version of {{es}}, {{kib}}, and APM Server. See [Installing the {{stack}}](/get-started/the-stack.md) for more information about installing these products.
+ We recommend you use the same version of {{es}}, {{kib}}, and APM Server.
+ For more information about installing these products, refer to [](/deploy-manage/deploy.md).

  :::{image} /solutions/images/observability-apm-architecture-diy.png
  :alt: Install Elastic APM yourself

@@ -30,7 +31,8 @@ We recommend you use the same version of {{es}}, {{kib}}, and APM Server. See [I
  ## Step 1: Install [apm-installing]

  ::::{note}
- **Before you begin**: If you haven’t installed the {{stack}}, do that now. See [Learn how to install the {{stack}} on your own hardware](/get-started/the-stack.md).
+ **Before you begin**: If you haven’t installed the {{stack}}, do that now.
+ Refer to [](/deploy-manage/deploy.md).
  ::::

  To download and install APM Server, use the commands below that work with your system. If you use `apt` or `yum`, you can [install APM Server from our repositories](#apm-setup-repositories) to update to the newest version more easily.

solutions/observability/apm/apm-server/fleet-managed.md

Lines changed: 1 addition & 1 deletion

@@ -22,7 +22,7 @@ This guide will explain how to set up and configure a Fleet-managed APM Server.
  You need {{es}} for storing and searching your data, and {{kib}} for visualizing and managing it. When setting these components up, you need:

- * {{es}} cluster and {{kib}} (version 9.0) with a basic license or higher. [Learn how to install the {{stack}} on your own hardware](/get-started/the-stack.md).
+ * {{es}} cluster and {{kib}} (version 9.0) with a basic license or higher. Refer to [](/deploy-manage/deploy.md).
  * Secure, encrypted connection between {{kib}} and {{es}}. For more information, refer to [](/deploy-manage/security/self-setup.md).
  * Internet connection for {{kib}} to download integration packages from the {{package-registry}}. Make sure the {{kib}} server can connect to `https://epr.elastic.co` on port `443`. If your environment has network traffic restrictions, there are ways to work around this requirement. See [Air-gapped environments](/reference/fleet/air-gapped.md) for more information.
  * {{kib}} user with `All` privileges on {{fleet}} and {{integrations}}. Since many Integrations assets are shared across spaces, users need the {{kib}} privileges in all spaces.

solutions/observability/llm-performance-matrix.md

Lines changed: 2 additions & 1 deletion

@@ -33,10 +33,10 @@ Models from third-party LLM providers.
  | Amazon Bedrock | **Claude Sonnet 3.7** | Excellent | Excellent | Excellent | Excellent | Excellent | Excellent | Great | Excellent |
  | Amazon Bedrock | **Claude Sonnet 4** | Excellent | Excellent | Excellent | Excellent | Excellent | Excellent | Great | Excellent |
  | Amazon Bedrock | **Claude Sonnet 4.5** | Excellent | Excellent | Excellent | Excellent | Excellent | Excellent | Good | Excellent |
- | OpenAI | **GPT-4.1** | Excellent | Excellent | Excellent | Excellent | Excellent | Great | Good | Excellent |
  | Google Gemini | **Gemini 2.0 Flash** | Excellent | Good | Excellent | Excellent | Excellent | Good | Good | Excellent |
  | Google Gemini | **Gemini 2.5 Flash** | Excellent | Good | Excellent | Excellent | Excellent | Great | Good | Excellent |
  | Google Gemini | **Gemini 2.5 Pro** | Excellent | Great | Excellent | Excellent | Excellent | Great | Good | Excellent |
+ | OpenAI | **GPT-4.1** | Excellent | Excellent | Excellent | Excellent | Excellent | Great | Good | Excellent |

  ## Open-source models [_open_source_models]

@@ -50,6 +50,7 @@ Models you can [deploy and manage yourself](/solutions/observability/connect-to-
  | Provider | Model | **Alert questions** | **APM questions** | **Contextual insights** | **Documentation retrieval** | **Elasticsearch operations** | **{{esql}} generation** | **Execute connector** | **Knowledge retrieval** |
  | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+ | DeepSeek | **DeepSeek-V3.1** | Excellent | Excellent | Excellent | Excellent | Excellent | Great | Great | Excellent |
  | Meta | **Llama-3.3-70B-Instruct** | Excellent | Good | Great | Excellent | Excellent | Good | Good | Excellent |
  | Mistral | **Mistral-Small-3.2-24B-Instruct-2506** | Excellent | Poor | Great | Great | Excellent | Good | Good | Excellent |
  | Alibaba Cloud | **Qwen2.5-72b-Instruct** | Excellent | Good | Great | Excellent | Excellent | Good | Good | Excellent |

solutions/search/agent-builder/mcp-server.md

Lines changed: 61 additions & 2 deletions

@@ -40,12 +40,13 @@ Most MCP clients (such as Claude Desktop, Cursor, VS Code, etc.) have similar co
        ],
        "env": {
          "KIBANA_URL": "${KIBANA_URL}",
-         "AUTH_HEADER": "ApiKey ${API_KEY}"
+         "AUTH_HEADER": "ApiKey ${API_KEY}" <1>
        }
      }
    }
  }
  ```
+ 1. Refer to [](#api-key-application-privileges)

  :::{note}
  Set the following environment variables:

@@ -57,5 +58,63 @@ export API_KEY="your-api-key"

  For information on generating API keys, refer to [API keys](https://www.elastic.co/docs/solutions/search/search-connection-details).

- Tools execute with the scope assigned to the API key. Make sure your API key has the appropriate permissions to only access the indices and data that you want to expose through the MCP server.
+ Tools execute with the scope assigned to the API key. Make sure your API key has the appropriate permissions to access only the indices and data that you want to expose through the MCP server. To learn more, refer to [](#api-key-application-privileges).
  :::
+
+ ## API key application privileges
+
+ To access the MCP server endpoint, your API key must include {{kib}} application privileges.
+
+ ### Development and testing
+
+ For development and testing purposes, you can create an unrestricted API key with full access:
+
+ ```json
+ POST /_security/api_key
+ {
+   "name": "my-mcp-api-key",
+   "expiration": "1d",
+   "role_descriptors": {
+     "unrestricted": {
+       "cluster": ["all"],
+       "indices": [
+         {
+           "names": ["*"],
+           "privileges": ["all"]
+         }
+       ]
+     }
+   }
+ }
+ ```
+
+ ### Production
+
+ For production environments, use a restricted API key with specific application privileges:
+
+ ```json
+ POST /_security/api_key
+ {
+   "name": "my-mcp-api-key",
+   "expiration": "1d",
+   "role_descriptors": {
+     "mcp-access": {
+       "cluster": ["all"],
+       "indices": [
+         {
+           "names": ["*"],
+           "privileges": ["read", "view_index_metadata"]
+         }
+       ],
+       "applications": [
+         {
+           "application": "kibana-.kibana",
+           "privileges": ["read_onechat", "space_read"], <1>
+           "resources": ["space:default"]
+         }
+       ]
+     }
+   }
+ }
+ ```
+ 1. The `read_onechat` and `space_read` application privileges are required to authorize access to the MCP endpoint. Without these privileges, you'll receive a 403 Forbidden error.
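As a quick sanity check after creating either key, you can look it up by name to confirm that it exists and has not expired. This editorial sketch uses the standard {{es}} Get API key API; the name matches the examples above:

```console
GET /_security/api_key?name=my-mcp-api-key
```

The response lists the key's role descriptors, so you can verify the application privileges before wiring the key into an MCP client.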

solutions/search/semantic-search/semantic-search-semantic-text.md

Lines changed: 65 additions & 7 deletions

@@ -27,6 +27,16 @@ This tutorial uses the `elasticsearch` service for demonstration, which is creat
  The mapping of the destination index - the index that contains the embeddings that the inference endpoint will generate based on your input text - must be created. The destination index must have a field with the [`semantic_text`](elasticsearch://reference/elasticsearch/mapping-reference/semantic-text.md) field type to index the output of the used inference endpoint.

+ You can run {{infer}} either using the [Elastic {{infer-cap}} Service](/explore-analyze/elastic-inference/eis.md) or on your own ML nodes. The following examples show both scenarios.
+
+ :::::::{tab-set}
+
+ ::::::{tab-item} Using EIS on Serverless
+
+ ```{applies_to}
+ serverless: ga
+ ```
+
  ```console
  PUT semantic-embeddings
  {

@@ -41,7 +51,61 @@
  ```

  1. The name of the field to contain the generated embeddings.
- 2. The field to contain the embeddings is a `semantic_text` field. Since no `inference_id` is provided, the default endpoint `.elser-2-elasticsearch` for the `elasticsearch` service is used. To use a different {{infer}} service, you must create an {{infer}} endpoint first using the [Create {{infer}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put) and then specify it in the `semantic_text` field mapping using the `inference_id` parameter.
+ 2. The field to contain the embeddings is a `semantic_text` field. Since no `inference_id` is provided, the default endpoint `.elser-2-elastic` for the `elasticsearch` service is used. This {{infer}} endpoint uses the [Elastic {{infer-cap}} Service (EIS)](/explore-analyze/elastic-inference/eis.md).
+
+ ::::::
+
+ ::::::{tab-item} Using EIS in Cloud
+
+ ```{applies_to}
+ stack: ga
+ deployment:
+   self: unavailable
+ ```
+
+ ```console
+ PUT semantic-embeddings
+ {
+   "mappings": {
+     "properties": {
+       "content": { <1>
+         "type": "semantic_text", <2>
+         "inference_id": ".elser-2-elastic" <3>
+       }
+     }
+   }
+ }
+ ```
+
+ 1. The name of the field to contain the generated embeddings.
+ 2. The field to contain the embeddings is a `semantic_text` field.
+ 3. The `.elser-2-elastic` preconfigured {{infer}} endpoint for the `elasticsearch` service is used. This {{infer}} endpoint uses the [Elastic {{infer-cap}} Service (EIS)](/explore-analyze/elastic-inference/eis.md).
+
+ ::::::
+
+ ::::::{tab-item} Using ML nodes
+
+ ```console
+ PUT semantic-embeddings
+ {
+   "mappings": {
+     "properties": {
+       "content": { <1>
+         "type": "semantic_text", <2>
+         "inference_id": ".elser-2-elasticsearch" <3>
+       }
+     }
+   }
+ }
+ ```
+
+ 1. The name of the field to contain the generated embeddings.
+ 2. The field to contain the embeddings is a `semantic_text` field.
+ 3. The `.elser-2-elasticsearch` preconfigured {{infer}} endpoint for the `elasticsearch` service is used. To use a different {{infer}} service, you must create an {{infer}} endpoint first using the [Create {{infer}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put) and then specify it in the `semantic_text` field mapping using the `inference_id` parameter.
+
+ ::::::
+
+ :::::::

  To try the ELSER model on the Elastic Inference Service, explicitly set the `inference_id` to `.elser-2-elastic`. For instructions, refer to [Using `semantic_text` with ELSER on EIS](https://www.elastic.co/docs/reference/elasticsearch/mapping-reference/semantic-text#using-elser-on-eis).

@@ -50,8 +114,6 @@ If you’re using web crawlers or connectors to generate indices, you have to [u
  ::::

-
-
  ## Load data [semantic-text-load-data]

  In this step, you load the data that you later use to create embeddings from it.

@@ -60,7 +122,6 @@ Use the `msmarco-passagetest2019-top1000` data set, which is a subset of the MS
  Download the file and upload it to your cluster using the [Data Visualizer](../../../manage-data/ingest/upload-data-files.md) in the {{ml-app}} UI. After your data is analyzed, click **Override settings**. Under **Edit field names**, assign `id` to the first column and `content` to the second. Click **Apply**, then **Import**. Name the index `test-data`, and click **Import**. After the upload is complete, you will see an index named `test-data` with 182,469 documents.

-
  ## Reindex the data [semantic-text-reindex-data]

  Create the embeddings from the text by reindexing the data from the `test-data` index to the `semantic-embeddings` index. The data in the `content` field will be reindexed into the `content` semantic text field of the destination index. The reindexed data will be processed by the {{infer}} endpoint associated with the `content` semantic text field.

@@ -70,7 +131,6 @@ This step uses the reindex API to simulate data ingestion. If you are working wi
  ::::

-
  ```console
  POST _reindex?wait_for_completion=false
  {

@@ -86,7 +146,6 @@
  1. The default batch size for reindexing is 1000. Using a smaller batch size makes the reindexing progress update more frequently, which enables you to follow it closely and detect errors early.

-
  The call returns a task ID to monitor the progress:

  ```console

@@ -99,7 +158,6 @@ Reindexing large datasets can take a long time. You can test this workflow using
  POST _tasks/<task_id>/_cancel
  ```

-
  ## Semantic search [semantic-text-semantic-search]

  After the data has been indexed with the embeddings, you can query the data using semantic search. Choose between [Query DSL](/explore-analyze/query-filter/languages/querydsl.md) or [{{esql}}](elasticsearch://reference/query-languages/esql.md) syntax to execute the query.
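For illustration, a minimal Query DSL request against the `semantic-embeddings` index built in this tutorial could use the `semantic` query. This is an editorial sketch; the question text is an arbitrary example:

```console
GET semantic-embeddings/_search
{
  "query": {
    "semantic": {
      "field": "content",
      "query": "How to avoid muscle soreness after running?"
    }
  }
}
```

The `semantic` query targets the `content` field and uses the same {{infer}} endpoint configured in the mapping to embed the query text, so no separate embedding step is needed at search time.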

0 commit comments
