
Commit a83706f

Merge branch 'main' into patch-1
2 parents 1935a87 + 69b258f commit a83706f

37 files changed: +594 −243 lines changed

deploy-manage/deploy/cloud-enterprise/configure-host-ubuntu.md

Lines changed: 1 addition & 1 deletion
@@ -53,7 +53,7 @@ Make sure to use a supported combination of Linux distribution and container eng
4. Install the correct version of the `docker-ce` package. The following is an example of installing Docker 27.0. If you decide to install a different Docker version, make sure to replace with the desired version in the commands below.

```sh
-sudo apt install -y docker-ce=5:27.0.* docker-ce-cli=5:27.0.* containerd.io
+sudo apt update && sudo apt install -y docker-ce=5:27.0.* docker-ce-cli=5:27.0.* containerd.io
```

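Note (not part of this commit): a quick way to sanity-check this step is to list the `docker-ce` versions the configured apt repository offers and confirm what ended up installed. A sketch, assuming the Docker apt repository has already been set up as described in the guide:

```sh
# List the docker-ce package versions available from the configured repository
apt-cache madison docker-ce

# Confirm the installed Docker version
docker --version
```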
deploy-manage/deploy/cloud-enterprise/ece-install-offline-no-registry.md

Lines changed: 4 additions & 4 deletions
@@ -17,8 +17,8 @@ To perform an offline installation without a private Docker registry, you have t

```sh subs=true
docker pull docker.elastic.co/cloud-enterprise/elastic-cloud-enterprise:{{version.ece}}
-docker pull docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.18.0
-docker pull docker.elastic.co/cloud-release/kibana-cloud:8.18.0
+docker pull docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.18.2
+docker pull docker.elastic.co/cloud-release/kibana-cloud:8.18.2
docker pull docker.elastic.co/cloud-release/elastic-agent-cloud:8.18.0
docker pull docker.elastic.co/cloud-release/enterprise-search-cloud:8.18.0
docker pull docker.elastic.co/cloud-release/elasticsearch-cloud-ess:9.0.0
@@ -35,8 +35,8 @@ To perform an offline installation without a private Docker registry, you have t

```sh subs=true
docker save -o ece.{{version.ece}}.tar docker.elastic.co/cloud-enterprise/elastic-cloud-enterprise:{{version.ece}}
-docker save -o es.8.18.0.tar docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.18.0
-docker save -o kibana.8.18.0.tar docker.elastic.co/cloud-release/kibana-cloud:8.18.0
+docker save -o es.8.18.0.tar docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.18.2
+docker save -o kibana.8.18.0.tar docker.elastic.co/cloud-release/kibana-cloud:8.18.2
docker save -o apm.8.18.0.tar docker.elastic.co/cloud-release/elastic-agent-cloud:8.18.0
docker save -o enterprise-search.8.18.0.tar docker.elastic.co/cloud-release/enterprise-search-cloud:8.18.0
docker save -o es.9.0.0.tar docker.elastic.co/cloud-release/elasticsearch-cloud-ess:9.0.0
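Note (not part of this commit): on the air-gapped host, archives produced by `docker save` are typically restored with `docker load`. A sketch using the file names from the snippet above:

```sh
# Load the saved images into the local Docker image store on the offline host
docker load -i ece.{{version.ece}}.tar
docker load -i es.8.18.0.tar
docker load -i kibana.8.18.0.tar
```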

deploy-manage/deploy/elastic-cloud/create-an-organization.md

Lines changed: 2 additions & 0 deletions
@@ -64,6 +64,7 @@ During the free 14 day trial, Elastic provides access to one hosted deployment a
* The deployment size is limited to 8GB RAM and approximately 360GB of storage, depending on the specified hardware profile
* Machine learning nodes are available up to 4GB RAM, or up to 8GB when using Reranker
* Custom {{es}} plugins are not enabled
+* We monitor token usage per account for the Elastic Managed LLM. If an account uses over one million tokens in 24 hours, we will inform you and then disable access to the LLM. This is in accordance with our fair use policy for trials.

For more information, check the [{{ech}} documentation](cloud-hosted.md).

@@ -73,6 +74,7 @@ For more information, check the [{{ech}} documentation](cloud-hosted.md).
* Search Power is limited to 100. This setting only exists in {{es-serverless}} projects
* Search Boost Window is limited to 7 days. This setting only exists in {{es-serverless}} projects
* Scaling is limited for serverless projects in trials. Failures might occur if the workload requires memory or compute beyond what the above search power and search boost window setting limits can provide.
+* We monitor token usage per account for the Elastic Managed LLM. If an account uses over one million tokens in 24 hours, we will inform you and then disable access to the LLM. This is in accordance with our fair use policy for trials.

**Remove limitations**

deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md

Lines changed: 7 additions & 1 deletion
@@ -151,7 +151,13 @@ After uploading your files, you can select to enable them when creating a new {{

While you can update the ZIP file for any plugin or bundle, these are downloaded and made available only when a node is started.

-You should be careful when updating an extension. If you update an existing extension with a new file, and if the file is broken for some reason, all the nodes could be in trouble, as a restart or move node could make even HA clusters non-available.
+:::{important}
+Be careful when updating an extension. If you update an existing extension with a new file, and the file is broken for any reason, all the nodes could be impacted: a restart or a node move could make even HA clusters unavailable. Also, shards of your indices may become unassigned if there's anything wrong with the bundle, for example if a file referenced by an index is missing due to the update.
+:::
+
+:::{tip}
+If you need to update your extension, instead of updating the existing extension with a new file directly, we recommend that you create a new extension to test the behavior first, verify its validity, and then apply it to your deployment.
+:::

If the extension is not in use by any deployments, then you are free to update the files or extension details as much as you like. However, if the extension is in use, and if you need to update it with a new file, it is recommended to [create a new extension](../../../deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md#ec-add-your-plugin) rather than updating the existing one that is in use.

manage-data/lifecycle/index-lifecycle-management/manage-existing-indices.md

Lines changed: 2 additions & 3 deletions
@@ -15,7 +15,7 @@ If you’ve been using Curator or some other mechanism to manage periodic indice
* Reindex into an {{ilm-init}}-managed index.

::::{note}
-Starting in Curator version 5.7, Curator ignores {{ilm-init}} managed indices.
+Starting in Curator version 5.7, Curator ignores {{ilm-init}}-managed indices.
::::

@@ -103,5 +103,4 @@ To reindex into the managed index:

Querying using this alias will now search your new data and all of the reindexed data.

-6. Once you have verified that all of the reindexed data is available in the new managed indices, you can safely remove the old indices.
-
+6. Once you have verified that all of the reindexed data is available in the new managed indices, you can safely remove the old indices.

manage-data/lifecycle/index-lifecycle-management/tutorial-automate-rollover.md

Lines changed: 152 additions & 3 deletions
@@ -11,9 +11,12 @@ products:

When you continuously index timestamped documents into {{es}}, you typically use a [data stream](../../data-store/data-streams.md) so you can periodically [roll over](rollover.md) to a new index. This enables you to implement a [hot-warm-cold architecture](../data-tiers.md) to meet your performance requirements for your newest data, control costs over time, enforce retention policies, and still get the most out of your data.

-::::{tip}
-[Data streams](../../data-store/data-streams.md) are best suited for [append-only](../../data-store/data-streams.md#data-streams-append-only) use cases. If you need to update or delete existing time series data, you can perform update or delete operations directly on the data stream backing index. If you frequently send multiple documents using the same `_id` expecting last-write-wins, you may want to use an index alias with a write index instead. You can still use [ILM](/manage-data/lifecycle/index-lifecycle-management/tutorial-automate-rollover.md) to manage and [roll over](rollover.md) the alias’s indices. Skip to [Manage time series data without data streams](/manage-data/lifecycle/index-lifecycle-management/tutorial-automate-rollover.md#manage-time-series-data-without-data-streams).
-::::
+To simplify index management and automate rollover, select one of the scenarios that best applies to your situation:
+
+* **Roll over data streams with ILM.** When ingesting write-once, timestamped data that doesn't change, follow the steps in [Manage time series data with data streams](/manage-data/lifecycle/index-lifecycle-management/tutorial-automate-rollover.md#manage-time-series-data-with-data-streams) for simple, automated data stream rollover. ILM-managed backing indices are automatically created under a single data stream alias. ILM also tracks and transitions the backing indices through the lifecycle automatically.
+* **Roll over time series indices with ILM.** Data streams are best suited for [append-only](../../data-store/data-streams.md#data-streams-append-only) use cases. If you need to update or delete existing time series data, you can perform update or delete operations directly on the data stream backing index. If you frequently send multiple documents using the same `_id` expecting last-write-wins, you may want to use an index alias with a write index instead. You can still use [ILM](/manage-data/lifecycle/index-lifecycle-management/tutorial-automate-rollover.md) to manage and roll over the alias’s indices. Follow the steps in [Manage time series data without data streams](/manage-data/lifecycle/index-lifecycle-management/tutorial-automate-rollover.md#manage-time-series-data-without-data-streams) for more information.
+* **Roll over general content as data streams with ILM.** If some of your indices store data that isn't timestamped, but you would like to get the benefits of automatic rotation when the index reaches a certain size or age, or delete already rotated indices after a certain amount of time, follow the steps in [Manage general content with data streams](/manage-data/lifecycle/index-lifecycle-management/tutorial-automate-rollover.md#manage-general-content-with-data-streams). These steps include injecting a timestamp field during indexing time to mimic time series data.
+

## Manage time series data with data streams [manage-time-series-data-with-data-streams]

@@ -295,3 +298,149 @@ Retrieving the status information for managed indices is very similar to the dat
GET timeseries-*/_ilm/explain
```

+## Manage general content with data streams [manage-general-content-with-data-streams]
+
+Data streams are specifically designed for time series data.
+If you want to manage general content (data without timestamps) with data streams, you can set up [ingest pipelines](/manage-data/ingest/transform-enrich/ingest-pipelines.md) to transform and enrich your general content by adding a timestamp field at [ingest](/manage-data/ingest.md) time and get the benefits of time-based data management.
+
+For example, search use cases such as knowledge base, website content, e-commerce, or product catalog search, might require you to frequently index general content (data without timestamps). As a result, your index can grow significantly over time, which might impact storage requirements, query performance, and cluster health. Following the steps in this procedure (including a timestamp field and moving to ILM-managed data streams) can help you rotate your indices in a simpler way, based on their size or lifecycle phase.
+
+To roll over your general content from indices to a data stream, you:
+
+1. [Create an ingest pipeline](/manage-data/lifecycle/index-lifecycle-management/tutorial-automate-rollover.md#manage-general-content-with-data-streams-ingest) to process your general content and add a `@timestamp` field.
+
+1. [Create a lifecycle policy](/manage-data/lifecycle/index-lifecycle-management/tutorial-automate-rollover.md#manage-general-content-with-data-streams-policy) that meets your requirements.
+
+1. [Create an index template](/manage-data/lifecycle/index-lifecycle-management/tutorial-automate-rollover.md#manage-general-content-with-data-streams-template) that uses the created ingest pipeline and lifecycle policy.
+
+1. [Create a data stream](/manage-data/lifecycle/index-lifecycle-management/tutorial-automate-rollover.md#manage-general-content-with-data-streams-create-stream).
+
+1. *Optional:* If you have an existing, non-managed index and want to migrate your data to the data stream you created, [reindex with a data stream](/manage-data/lifecycle/index-lifecycle-management/tutorial-automate-rollover.md#manage-general-content-with-data-streams-reindex).
+
+1. [Update your ingest endpoint](/manage-data/lifecycle/index-lifecycle-management/tutorial-automate-rollover.md#manage-general-content-with-data-streams-endpoint) to target the created data stream.
+
+1. *Optional:* You can use the [ILM explain API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-explain-lifecycle) to get status information for your managed indices.
+For more information, refer to [Check lifecycle progress](/manage-data/lifecycle/index-lifecycle-management/tutorial-automate-rollover.md#ilm-gs-check-progress).
+
+
+### Create an ingest pipeline to transform your general content [manage-general-content-with-data-streams-ingest]
+
+Create an ingest pipeline that uses the [`set` enrich processor](elasticsearch://reference/enrich-processor/set-processor.md) to add a `@timestamp` field:
+
+```console
+PUT _ingest/pipeline/ingest_time_1
+{
+  "description": "Add an ingest timestamp",
+  "processors": [
+    {
+      "set": {
+        "field": "@timestamp",
+        "value": "{{_ingest.timestamp}}"
+      }
+    }]
+}
+```
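Note (not part of this commit): one way to sanity-check a pipeline like the one above before wiring it into a template is the simulate pipeline API. A sketch; the sample document field is made up for illustration:

```console
POST _ingest/pipeline/ingest_time_1/_simulate
{
  "docs": [
    {
      "_source": {
        "title": "Sample product page"
      }
    }
  ]
}
```

The simulated result should show the document with the `@timestamp` field added by the `set` processor.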
+
+### Create a lifecycle policy [manage-general-content-with-data-streams-policy]
+
+In this example, the policy is configured to roll over when the shard size reaches 10 GB:
+
+```console
+PUT _ilm/policy/indextods
+{
+  "policy": {
+    "phases": {
+      "hot": {
+        "min_age": "0ms",
+        "actions": {
+          "set_priority": {
+            "priority": 100
+          },
+          "rollover": {
+            "max_primary_shard_size": "10gb"
+          }
+        }
+      }
+    }
+  }
+}
+```
+
+For more information about lifecycle phases and available actions, check [Create a lifecycle policy](configure-lifecycle-policy.md#ilm-create-policy).
+
+
+### Create an index template to apply the ingest pipeline and lifecycle policy [manage-general-content-with-data-streams-template]
+
+Create an index template that uses the created ingest pipeline and lifecycle policy:
+
+```console
+PUT _index_template/index_to_dot
+{
+  "template": {
+    "settings": {
+      "index": {
+        "lifecycle": {
+          "name": "indextods"
+        },
+        "default_pipeline": "ingest_time_1"
+      }
+    },
+    "mappings": {
+      "_source": {
+        "excludes": [],
+        "includes": [],
+        "enabled": true
+      },
+      "_routing": {
+        "required": false
+      },
+      "dynamic": true,
+      "numeric_detection": false,
+      "date_detection": true,
+      "dynamic_date_formats": [
+        "strict_date_optional_time",
+        "yyyy/MM/dd HH:mm:ss Z||yyyy/MM/dd Z"
+      ]
+    }
+  },
+  "index_patterns": [
+    "movetods"
+  ],
+  "data_stream": {
+    "hidden": false,
+    "allow_custom_routing": false
+  }
+}
+```
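Note (not part of this commit): to confirm which template, settings, and mappings would apply to an index or data stream named `movetods`, the simulate index API can be used. A sketch:

```console
POST _index_template/_simulate_index/movetods
```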
+
+### Create a data stream [manage-general-content-with-data-streams-create-stream]
+
+Create a data stream using the [_data_stream API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-create-data-stream):
+
+```console
+PUT /_data_stream/movetods
+```
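Note (not part of this commit): after this step you can confirm that the data stream exists and inspect its first backing index with:

```console
GET _data_stream/movetods
```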
+
+### Optional: Reindex your data with a data stream [manage-general-content-with-data-streams-reindex]
+
+If you want to copy your documents from an existing index to the data stream you created, reindex with a data stream using the [_reindex API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-reindex):
+
+```console
+POST /_reindex
+{
+  "source": {
+    "index": "indextods"
+  },
+  "dest": {
+    "index": "movetods",
+    "op_type": "create"
+  }
+}
+```
+
+For more information, check [Reindex with a data stream](../../data-store/data-streams/use-data-stream.md#reindex-with-a-data-stream).
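Note (not part of this commit): one additional check after the reindex completes is to compare document counts between the source index and the data stream, for example:

```console
GET indextods/_count
GET movetods/_count
```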
+
+### Update your ingest endpoint to target the created data stream [manage-general-content-with-data-streams-endpoint]
+
+If you use Elastic clients, scripts, or any other third party tool to ingest data to {{es}}, make sure you update these to use the created data stream.
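Note (not part of this commit): as an illustration of targeting the data stream, new documents are simply indexed against the stream name, and the template's default pipeline adds `@timestamp` automatically; lifecycle progress can then be checked with `GET movetods/_ilm/explain`. A sketch with made-up fields:

```console
POST movetods/_doc
{
  "title": "Sample product page",
  "category": "catalog"
}
```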

redirects.yml

Lines changed: 5 additions & 1 deletion
@@ -299,4 +299,8 @@ redirects:
# Related to
'solutions/observability/apm/get-started-serverless.md': 'solutions/observability/apm/get-started.md'
'solutions/observability/apm/get-started-fleet-managed-apm-server.md': 'reference/fleet/get-started-managed-apm-server.md'
-'solutions/observability/apm/get-started-apm-server-binary.md': 'reference/fleet/get-started-apm-server-binary.md'
+'solutions/observability/apm/get-started-apm-server-binary.md': 'reference/fleet/get-started-apm-server-binary.md'
+
+# Related to https://github.com/elastic/docs-content/pull/2396
+'solutions/security/configure-elastic-defend/enable-access-for-macos-monterey.md': 'solutions/security/configure-elastic-defend/enable-access-for-macos.md'
+'solutions/security/configure-elastic-defend/enable-access-for-macos-ventura-higher.md': 'solutions/security/configure-elastic-defend/enable-access-for-macos.md'

reference/fleet/fleet-agent-serverless-restrictions.md

Lines changed: 4 additions & 4 deletions
@@ -15,7 +15,7 @@ products:

If you are using {{agent}} with [{{serverless-full}}](/deploy-manage/deploy/elastic-cloud/serverless.md), note these differences from use with {{ech}} and self-managed {{es}}:

-* The number of {{agents}} that may be connected to an {{serverless-full}} project is limited to 10 thousand.
+* A maximum of 10,000 {{agents}} may be connected to an {{serverless-full}} project.
* The minimum supported version of {{agent}} supported for use with {{serverless-full}} is 8.11.0.

### Outputs
@@ -33,9 +33,9 @@ For more information, see [](upgrade-elastic-agent.md) and [](upgrade-standalone

The path to get to the {{fleet}} application in {{kib}} differs across projects:

-* In {{ech}} deployments, navigate to **Management > Fleet**.
-* In {{serverless-short}} {{observability}} projects, navigate to **Project settings > Fleet**.
-* In {{serverless-short}} Security projects, navigate to **Assets > Fleet**.
+* In {{ech}} deployments, navigate to **Management** → **Fleet**.
+* In {{serverless-short}} {{observability}} projects, navigate to **Project settings** → **Fleet**.
+* In {{serverless-short}} Security projects, navigate to **Assets** → **Fleet**.


## {{fleet-server}} [fleet-server-serverless-restrictions]

reference/fleet/running-on-kubernetes-managed-by-fleet.md

Lines changed: 7 additions & 1 deletion
@@ -57,7 +57,13 @@ You can find {{agent}} Docker images [here](https://www.docker.elastic.co/r/elas
Download the manifest file, substituting `{agent_version}` with the version number:

```sh
-curl -L -O https://github.com/elastic/elastic-agent/blob/v{agent_version}/deploy/kubernetes/elastic-agent-managed-kubernetes.yaml
+curl -L -O https://raw.githubusercontent.com/elastic/elastic-agent/refs/tags/v{agent_version}/deploy/kubernetes/elastic-agent-managed-kubernetes.yaml
+```
+
+For example, to download the manifest of the latest {{version.stack}} release:
+
+```sh subs=true
+curl -L -O https://raw.githubusercontent.com/elastic/elastic-agent/refs/tags/v{{version.stack}}/deploy/kubernetes/elastic-agent-managed-kubernetes.yaml
```
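Note (not part of this commit): once downloaded and edited as needed, a manifest of this kind is typically applied to the cluster with kubectl. A sketch, assuming the file name from the commands above:

```sh
kubectl apply -f elastic-agent-managed-kubernetes.yaml
```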

::::{note}

release-notes/fleet-elastic-agent/known-issues.md

Lines changed: 13 additions & 0 deletions
@@ -17,6 +17,19 @@ Known issues are significant defects or limitations that may impact your impleme

% :::

+:::{dropdown} [Windows] {{agent}} does not process Windows security events
+
+**Applies to: {{agent}} 8.19.0, 9.1.0 (Windows only)**
+
+On August 1, 2025, a known issue was discovered where {{agent}} does not process Windows security events on hosts running Windows 10, Windows 11, and Windows Server 2022.
+
+For more information, check [Issue #45693](https://github.com/elastic/beats/issues/45693).
+
+**Workaround**
+
+No workaround is available at the moment, but a fix is expected to be available in {{agent}} 8.19.1 and 9.1.1.
+:::
+
:::{dropdown} {{agents}} remain in an "Upgrade scheduled" state

**Applies to: {{agent}} 8.18.0, 8.18.1, 8.18.2, 8.18.3, 8.18.4, 8.19.0, 9.0.0, 9.0.1, 9.0.2, 9.0.3, 9.1.0**
