Commit 3a2c055

[ON Week] Fix errors in code blocks in Fleet docs (#3982)
This PR fixes errors in code blocks in the Fleet docs, using an AI assistant (Cursor) and semantic code search in Elastic repos to identify and confirm the errors. It also fixes a couple of smaller issues:

- removes empty columns
- removes a reference to the apm-data plugin (unrelated to the specific doc's other content)
- makes some descriptions (in tables) a bit more accurate

AI-assisted by Cursor using the claude-4.5-sonnet (thinking) model; the necessary changes were identified in multiple iterations.
1 parent 1b014d5 commit 3a2c055

17 files changed: +40 −37 lines

reference/fleet/add_docker_metadata-processor.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -52,7 +52,7 @@ If the Docker daemon is restarted, the mounted socket will become invalid, and m
   #match_source: true
   #match_source_index: 4
   #match_short_id: true
-  #cleanup_timeout: 60
+  #cleanup_timeout: 60s
   #labels.dedot: false
   # To connect to Docker over TLS you must specify a client and CA certificate.
   #ssl:
````
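For reference, a minimal `add_docker_metadata` processor configuration using the corrected duration string; the socket path shown is the conventional default and is illustrative only:

```yaml
processors:
  - add_docker_metadata:
      host: "unix:///var/run/docker.sock"
      # Duration values take a unit suffix; a bare number like 60 is ambiguous.
      cleanup_timeout: 60s
```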

reference/fleet/add_host_metadata-processor.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -95,5 +95,5 @@ If `host.*` fields already exist in the event, they are overwritten by default u
 | `geo.city_name` | No | | Name of the city. |
 | `geo.country_iso_code` | No | | ISO country code. |
 | `geo.region_iso_code` | No | | ISO region code. |
-| `replace_fields` | No | `true` | Whether to replace original host fields from the event. If set `false`, original host fields from the event are not replaced by host fields from `add_host_metadata`. |
+| `replace_fields` | No | `true` | Whether to replace existing host fields in the event. If `true` (default), the processor always runs and overwrites any existing `host.*` fields with metadata from `add_host_metadata`. If `false`, the processor only adds metadata when no `host.*` fields exist in the event or when only `host.name` is present. If other host fields exist, the processor is skipped entirely. |
````
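A sketch of how the documented option might be set, assuming the standard processor syntax:

```yaml
processors:
  - add_host_metadata:
      # Keep any host.* fields already present in the event.
      replace_fields: false
```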

reference/fleet/add_kubernetes_metadata-processor.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -63,7 +63,7 @@ This configuration enables the processor on an {{agent}} running as a process on
       host: <hostname>
       # If kube_config is not set, KUBECONFIG environment variable will be checked
       # and if not present it will fall back to InCluster
-      kube_config: ${fleet} and {agent} Guide/.kube/config
+      kube_config: ~/.kube/config
       # Defining indexers and matchers manually is required for {beatname_lc}, for instance:
       #indexers:
       #  - ip_port:
````
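The corrected `kube_config` value in a minimal, illustrative processor block:

```yaml
processors:
  - add_kubernetes_metadata:
      host: <hostname>
      # Falls back to $KUBECONFIG, then to in-cluster config, if unset.
      kube_config: ~/.kube/config
```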

reference/fleet/configure-standalone-elastic-agents.md

Lines changed: 2 additions & 1 deletion
````diff
@@ -39,7 +39,8 @@ inputs:
     data_stream.namespace: default
     use_output: default
     streams:
-      - metricset: cpu
+      - metricsets:
+          - cpu
       data_stream.dataset: system.cpu
 ```
````
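Assembled for readability, the corrected stream definition uses the plural `metricsets` key with a list value (the input ID and namespace are illustrative):

```yaml
inputs:
  - id: unique-system-metrics-id
    type: system/metrics
    data_stream.namespace: default
    use_output: default
    streams:
      - metricsets:
          - cpu
        data_stream.dataset: system.cpu
```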

reference/fleet/data-streams-scenario2.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -10,7 +10,7 @@ products:
 # Scenario 2: Apply an ILM policy to specific data streams generated from Fleet integrations across all namespaces [data-streams-scenario2]
 
 
-Mappings and settings for data streams can be customized through the creation of `*@custom` component templates, which are referenced by the index templates created by the {{es}} apm-data plugin. The easiest way to configure a custom index lifecycle policy per data stream is to edit this template.
+Mappings and settings for data streams can be customized through the creation of `*@custom` component templates, which are referenced by the index templates created by each integration. The easiest way to configure a custom index lifecycle policy per data stream is to edit this template.
 
 This tutorial explains how to apply a custom index lifecycle policy to the `logs-system.auth` data stream.
````
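As a hedged sketch of the approach the corrected sentence describes (the policy name is hypothetical), a `*@custom` component template can point a specific data stream at a custom ILM policy:

```console
PUT _component_template/logs-system.auth@custom
{
  "template": {
    "settings": {
      "index.lifecycle.name": "my-custom-ilm-policy"
    }
  }
}
```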

reference/fleet/decode_duration-processor.md

Lines changed: 4 additions & 4 deletions
````diff
@@ -27,8 +27,8 @@ processors:
 
 ## Configuration settings [_configuration_settings_21]
 
-| Name | Required | Default | Description | |
-| --- | --- | --- | --- | --- |
-| `field` | yes | | Which field of event needs to be decoded as `time.Duration` | |
-| `format` | yes | `milliseconds` | Supported formats: `milliseconds`/`seconds`/`minutes`/`hours` | |
+| Name | Required | Default | Description |
+| --- | --- | --- | --- |
+| `field` | yes | | Which field of event needs to be decoded as `time.Duration` |
+| `format` | yes | `milliseconds` | Supported formats: `milliseconds`/`seconds`/`minutes`/`hours` |
````
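A minimal `decode_duration` example consistent with the corrected table; the field name is made up for illustration:

```yaml
processors:
  - decode_duration:
      field: response.latency
      format: milliseconds
```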

reference/fleet/decode_xml_wineventlog-processor.md

Lines changed: 3 additions & 3 deletions
````diff
@@ -149,10 +149,10 @@ If `map_ecs_fields` is enabled then the following field mappings are also perfor
 | --- | --- | --- |
 | `event.code` | `winlog.event_id` | |
 | `event.kind` | `"event"` | |
-| `event.provider` | `<Event><System><Provider>` | `Name` attribute |
-| `event.action` | `<Event><RenderingInfo><Task>` | |
-| `event.host.name` | `<Event><System><Computer>` | |
+| `event.provider` | `winlog.provider_name` | `Name` attribute |
+| `event.action` | `winlog.task` | |
 | `event.outcome` | `winlog.outcome` | |
+| `host.name` | `winlog.computer_name` | |
 | `log.level` | `winlog.level` | |
 | `message` | `winlog.message` | |
 | `error.code` | `winlog.error.code` | |
````
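For context, a sketch of the processor whose mapping table is corrected above; treat the option values as illustrative:

```yaml
processors:
  - decode_xml_wineventlog:
      field: message
      target_field: winlog
      map_ecs_fields: true
```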

reference/fleet/dynamic-input-configuration.md

Lines changed: 4 additions & 2 deletions
````diff
@@ -184,7 +184,8 @@ inputs:
   - id: unique-system-metrics-id
     type: system/metrics
     streams:
-      - metricset: load
+      - metricsets:
+          - load
       data_stream.dataset: system.cpu
       condition: ${host.platform} != 'windows'
 ```
@@ -196,7 +197,8 @@ inputs:
   - id: unique-system-metrics-id
     type: system/metrics
     streams:
-      - metricset: load
+      - metricsets:
+          - load
       data_stream.dataset: system.cpu
       processors:
         - add_fields:
````
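Combined, the corrected conditional stream might read as follows (indentation is a plausible reconstruction):

```yaml
inputs:
  - id: unique-system-metrics-id
    type: system/metrics
    streams:
      - metricsets:
          - load
        data_stream.dataset: system.cpu
        condition: ${host.platform} != 'windows'
```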

reference/fleet/elastic-agent-container.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -189,10 +189,10 @@ If you’d like to run {{agent}} in a Docker container on a read-only file syste
 For example:
 
 ```bash subs=true
-docker run --rm --mount source=$(pwd)/state,destination=/state -e {STATE_PATH}=/state --read-only docker.elastic.co/elastic-agent/elastic-agent:{{version.stack}} <1>
+docker run --rm --mount source=$(pwd)/state,destination=/state -e STATE_PATH=/state --read-only docker.elastic.co/elastic-agent/elastic-agent:{{version.stack}} <1>
 ```
 
-1. Where `{STATE_PATH}` is the path to a stateful directory to mount where {{agent}} application data can be stored.
+1. Where `STATE_PATH` is the path to a stateful directory to mount where {{agent}} application data can be stored.
 
 You can also add `type=tmpfs` to the mount parameter (`--mount type=tmpfs,destination=/state...`) to specify a temporary file storage location. This should be done with caution as it can cause data duplication, particularly for logs, when the container is restarted, as no state data is persisted.
````
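A hedged illustration of the tmpfs variant mentioned in that doc; `<version>` is a placeholder for the image tag:

```shell
# Ephemeral /state: no state survives a restart, which can duplicate log data.
docker run --rm \
  --mount type=tmpfs,destination=/state \
  -e STATE_PATH=/state \
  --read-only \
  docker.elastic.co/elastic-agent/elastic-agent:<version>
```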

reference/fleet/elasticsearch-output.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -248,7 +248,7 @@ Settings used to parse, filter, and transform data.
 ```yaml
 outputs:
   default:
-    type: elasticsearchoutput.elasticsearch:
+    type: elasticsearch
     hosts: ["http://localhost:9200"]
     pipeline: my_pipeline_id
 ```
````
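The corrected output block in full, for readability:

```yaml
outputs:
  default:
    type: elasticsearch
    hosts: ["http://localhost:9200"]
    pipeline: my_pipeline_id
```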
