Commit dad2518
[OnWeek] Fix Vale rule warnings in reference/fleet (pt2) (#3917)
This PR is part of the OnWeek project about fixing Vale rule violations in docs-content. It fixes warnings in part of the `reference/fleet` folder. Assisted by Cursor.

Co-authored-by: Karen Metts <[email protected]>
1 parent ea41b2c commit dad2518

20 files changed (+36 −36 lines)

reference/fleet/install-standalone-elastic-agent.md

Lines changed: 2 additions & 2 deletions
@@ -84,7 +84,7 @@ To install and run {{agent}} standalone:
 ```

 :::::{note}
-If you need to uninstall an {{agent}} package on Debian Linux, note that the `dpkg -r` command to remove a package leaves the flavor file in place. Instead, `dpkg -P` must to be used to purge all package content and reset the flavor.
+If you need to uninstall an {{agent}} package on Debian Linux, the `dpkg -r` command to remove a package leaves the flavor file in place. Instead, `dpkg -P` must be used to purge all package content and reset the flavor.
 :::::

 ::::::
@@ -148,7 +148,7 @@ To install and run {{agent}} standalone:
 5. From the agent directory, run the following commands to install {{agent}} and start it as a service.

 ::::{note}
-On macOS, Linux (tar package), and Windows, run the `install` command to install {{agent}} as a managed service and start the service. The DEB and RPM packages include a service unit for Linux systems with systemd, so just enable then start the service.
+On macOS, Linux (tar package), and Windows, run the `install` command to install {{agent}} as a managed service and start the service. The DEB and RPM packages include a service unit for Linux systems with systemd, so enable then start the service.
 ::::

 :::::{tab-set}
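The Debian purge and systemd steps described in the notes above can be sketched as shell commands. This is an illustrative sketch, not Elastic's documented procedure: the `elastic-agent` package and unit names are assumptions, and the commands are only printed, not executed, so the sketch is safe to run anywhere without root.

```shell
# Hypothetical sketch; the package/unit name "elastic-agent" is assumed.
PKG=elastic-agent

# Debian: `dpkg -r` removes the package but leaves the flavor file behind;
# `dpkg -P` purges all package content and resets the flavor.
# DEB/RPM installs on systemd hosts ship a service unit: enable, then start.
CMDS="dpkg -P $PKG
systemctl enable $PKG
systemctl start $PKG"

# Print the commands rather than running them (no root required).
echo "$CMDS"
```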

reference/fleet/integration-level-outputs.md

Lines changed: 2 additions & 2 deletions
@@ -8,9 +8,9 @@ products:

 # Set integration-level outputs [integration-level-outputs]

-If your [Elastic subscription level](https://www.elastic.co/subscriptions) supports **per integration output assignment**, you can configure {{agent}} data to be sent to different outputs for different integration policies. Note that the output clusters that you send data to must also be on the same subscription level.
+If your [Elastic subscription level](https://www.elastic.co/subscriptions) supports **per integration output assignment**, you can configure {{agent}} data to be sent to different outputs for different integration policies. The output clusters that you send data to must also be on the same subscription level.

-Integration-level outputs are very useful for certain scenarios. For example:
+Integration-level outputs are useful for certain scenarios. For example:

 * You may want to send security logs monitored by an {{agent}} to one {{ls}} output, while informational logs are sent to another {{ls}} output.
 * If you operate multiple {{beats}} on a system and want to migrate these to {{agent}}, integration-level outputs enable you to maintain the distinct outputs that are currently used by each Beat.

reference/fleet/kubernetes-provider.md

Lines changed: 2 additions & 2 deletions
@@ -141,7 +141,7 @@ These are the fields available within config templating. The `kubernetes.*` fiel
 ::::


-Note that not all of these fields are available by default and special configuration options are needed in order to include them.
+Not all of these fields are available by default and special configuration options are needed in order to include them.

 For example, if the Kubernetes provider provides the following inventory:

@@ -212,7 +212,7 @@ providers.kubernetes:
   enabled: true
 ```

-Note that this resource is only available with `scope: cluster` setting and `node` cannot be used as scope.
+This resource is only available with `scope: cluster` setting and `node` cannot be used as scope.

 The available keys are:
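The `scope: cluster` requirement noted in the hunk above can be illustrated with a minimal standalone {{agent}} provider snippet. This is a sketch only: the `service` resource key and surrounding layout are assumptions to verify against the Kubernetes provider reference for your version. The YAML is written to a temp file so its shape can be inspected safely.

```shell
# Illustrative sketch of a standalone agent provider config. The exact
# keys are assumptions; the point is that this resource requires
# `scope: cluster` (`node` cannot be used as the scope).
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
providers.kubernetes:
  scope: cluster
  resources:
    service:
      enabled: true
EOF
cat "$CFG"
```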

reference/fleet/kubernetes_secrets-provider.md

Lines changed: 1 addition & 1 deletion
@@ -35,7 +35,7 @@ providers.kubernetes_secrets:
 : (Optional) Configure additional options for the Kubernetes client. Supported options are `qps` and `burst`. If not set, the Kubernetes client’s default QPS and burst settings are used.

 `cache_disable`
-: (Optional) Disables the cache for the secrets. When disabled, thus is set to `true`, code makes a request to the API Server to obtain the value. To continue using the cache, set the variable to `false`. Default is `false`.
+: (Optional) Disables the cache for the secrets. When disabled (set to `true`), code makes a request to the API Server to obtain the value. To continue using the cache, set the variable to `false`. Default is `false`.

 `cache_refresh_interval`
 : (Optional) Defines the period to update all secret values kept in the cache. Defaults to `60s`.
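The two cache settings described above can be combined in a small provider snippet. A sketch under stated assumptions: the values shown are examples, not recommendations, and any other required settings for your deployment are omitted.

```shell
# Illustrative sketch combining the kubernetes_secrets cache options.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
providers.kubernetes_secrets:
  cache_disable: false          # keep the cache (the default)
  cache_refresh_interval: 120s  # refresh cached secret values every 2 minutes
EOF
cat "$CFG"
```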

reference/fleet/ls-output-settings.md

Lines changed: 1 addition & 1 deletion
@@ -57,7 +57,7 @@ output {
 | $$$ls-logstash-hosts$$$<br>**{{ls}} hosts**<br> | The addresses your {{agent}}s will use to connect to {{ls}}. Use the format `host:port`. Click **add** row to specify additional {{ls}} addresses.<br><br>**Examples:**<br><br>* `192.0.2.0:5044`<br>* `mylogstashhost:5044`<br><br>Refer to the [{{fleet-server}}](/reference/fleet/fleet-server.md) documentation for default ports and other configuration details.<br> |
 | $$$ls-server-ssl-certificate-authorities-setting$$$<br>**Server SSL certificate authorities**<br> | The CA certificate to use to connect to {{ls}}. This is the CA used to generate the certificate and key for {{ls}}. Copy and paste in the full contents for the CA certificate.<br><br>This setting is optional.<br> |
 | $$$ls-client-ssl-certificate-setting$$$<br>**Client SSL certificate**<br> | The certificate generated for the client. Copy and paste in the full contents of the certificate. This is the certificate that all the agents will use to connect to {{ls}}.<br><br>In cases where each client has a unique certificate, the local path to that certificate can be placed here. The agents will pick the certificate in that location when establishing a connection to {{ls}}.<br> |
-| $$$ls-client-ssl-certificate-key-setting$$$<br>**Client SSL certificate key**<br> | The private key generated for the client. This must be in PKCS 8 key. Copy and paste in the full contents of the certificate key. This is the certificate key that all the agents will use to connect to {{ls}}.<br><br>In cases where each client has a unique certificate key, the local path to that certificate key can be placed here. The agents will pick the certificate key in that location when establishing a connection to {{ls}}.<br><br>To prevent unauthorized access the certificate key is stored as a secret value. While secret storage is recommended, you can choose to override this setting and store the key as plain text in the agent policy definition. Secret storage requires {{fleet-server}} version 8.12 or higher.<br><br>Note that this setting can also be stored as a secret value or as plain text for preconfigured outputs. See [Preconfiguration settings](kibana://reference/configuration-reference/fleet-settings.md#_preconfiguration_settings_for_advanced_use_cases) in the {{kib}} Guide to learn more.<br> |
+| $$$ls-client-ssl-certificate-key-setting$$$<br>**Client SSL certificate key**<br> | The private key generated for the client. This must be a PKCS 8 key. Copy and paste in the full contents of the certificate key. This is the certificate key that all the agents will use to connect to {{ls}}.<br><br>In cases where each client has a unique certificate key, the local path to that certificate key can be placed here. The agents will pick the certificate key in that location when establishing a connection to {{ls}}.<br><br>To prevent unauthorized access the certificate key is stored as a secret value. While secret storage is recommended, you can choose to override this setting and store the key as plain text in the agent policy definition. Secret storage requires {{fleet-server}} version 8.12 or higher.<br><br>This setting can also be stored as a secret value or as plain text for preconfigured outputs. See [Preconfiguration settings](kibana://reference/configuration-reference/fleet-settings.md#_preconfiguration_settings_for_advanced_use_cases) in the {{kib}} Guide to learn more.<br> |
 | $$$ls-agent-proxy-output$$$<br>**Proxy**<br> | Select a proxy URL for {{agent}} to connect to {{ls}}. To learn about proxy configuration, refer to [Using a proxy server with {{agent}} and {{fleet}}](/reference/fleet/fleet-agent-proxy-support.md).<br> |
 | $$$ls-output-advanced-yaml-setting$$$<br>**Advanced YAML configuration**<br> | YAML settings that will be added to the {{ls}} output section of each policy that uses this output. Make sure you specify valid YAML. The UI does not currently provide validation.<br><br>See [Advanced YAML configuration](#ls-output-settings-yaml-config) for descriptions of the available settings.<br> |
 | $$$ls-agent-integrations-output$$$<br>**Make this output the default for agent integrations**<br> | When this setting is on, {{agent}}s use this output to send data if no other output is set in the [agent policy](/reference/fleet/agent-policy.md).<br><br>Output to {{ls}} is not supported for agent integrations in a policy used by {{fleet-server}} or APM.<br> |
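The client key setting above requires PKCS #8 format. If a key is in the traditional OpenSSL format, one way to convert it is with `openssl pkcs8`. This is a sketch, not part of the Elastic docs being changed: the file names are placeholders, and a throwaway key is generated here only so the conversion can be demonstrated end to end.

```shell
# Generate a throwaway RSA key in the traditional format (placeholder
# names; in practice you would start from your existing client key).
openssl genrsa -out client.key 2048 2>/dev/null

# Convert to unencrypted PKCS#8, the format expected for the
# "Client SSL certificate key" setting.
openssl pkcs8 -topk8 -nocrypt -in client.key -out client-pkcs8.key

# PKCS#8 keys use the generic "BEGIN PRIVATE KEY" header.
head -1 client-pkcs8.key
```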

reference/fleet/manage-integrations.md

Lines changed: 1 addition & 1 deletion
@@ -25,7 +25,7 @@ Integrations are available for a wide array of popular services and platforms. T

 {applies_to}`stack: preview 9.2.0` {{fleet}} also supports installing {{agent}} integration packages for collecting and visualizing OpenTelemetry data. For more information, refer to [Collect OpenTelemetry data with {{agent}} integrations](/reference/fleet/otel-integrations.md).

-Note that the **Integrations** app in {{kib}} needs access to the public {{package-registry}} to discover integrations. If your deployment has network restrictions, you can [deploy your own self-managed {{package-registry}}](/reference/fleet/air-gapped.md#air-gapped-diy-epr).
+The **Integrations** app in {{kib}} needs access to the public {{package-registry}} to discover integrations. If your deployment has network restrictions, you can [deploy your own self-managed {{package-registry}}](/reference/fleet/air-gapped.md#air-gapped-diy-epr).

 ::::{note}
 Some integrations may function differently across different spaces, with some working only in the default space. Review the documentation specific to your integration for any space-related considerations.

reference/fleet/migrate-auditbeat-to-agent.md

Lines changed: 1 addition & 1 deletion
@@ -25,7 +25,7 @@ The following table describes the integrations you can use instead of {{auditbea
 | If you use… | You can use this instead… | Notes |
 | --- | --- | --- |
 | [Auditd](beats://reference/auditbeat/auditbeat-module-auditd.md) module | [Auditd Manager](integration-docs://reference/auditd_manager/index.md) integration | This integration is a direct replacement of the module. You can port rules and configuration to this integration. Starting in {{stack}} 8.4, you can also set the`immutable` flag in the audit configuration. |
-| [Auditd Logs](integration-docs://reference/auditd/index.md) integration | Use this integration if you dont need to manage rules. It only parses logs from the audit daemon `auditd`. Note that the events created by this integration are different than the ones created by [Auditd Manager](integration-docs://reference/auditd_manager/index.md), since the latter merges all related messages in a single event while [Auditd Logs](integration-docs://reference/auditd/index.md) creates one event per message. |
+| [Auditd Logs](integration-docs://reference/auditd/index.md) integration | Use this integration if you don't need to manage rules. It only parses logs from the audit daemon `auditd`. The events created by this integration are different than the ones created by [Auditd Manager](integration-docs://reference/auditd_manager/index.md), since the latter merges all related messages in a single event while [Auditd Logs](integration-docs://reference/auditd/index.md) creates one event per message. |
 | [File Integrity](beats://reference/auditbeat/auditbeat-module-file_integrity.md) module | [File Integrity Monitoring](integration-docs://reference/fim/index.md) integration | This integration is a direct replacement of the module. It reports real-time events, but cannot report who made the changes. If you need to track this information, use [{{elastic-defend}}](/solutions/security/configure-elastic-defend/install-elastic-defend.md) instead. |
 | [System](beats://reference/auditbeat/auditbeat-module-system.md) module | It depends… | There is not a single integration that collects all this information. |
 | [System.host](beats://reference/auditbeat/auditbeat-dataset-system-host.md) dataset | [Osquery](integration-docs://reference/osquery/index.md) or [Osquery Manager](integration-docs://reference/osquery_manager/index.md) integration | Schedule collection of information like:<br><br>* [system_info](https://www.osquery.io/schema/5.1.0/#system_info) for hostname, unique ID, and architecture<br>* [os_version](https://www.osquery.io/schema/5.1.0/#os_version)<br>* [interface_addresses](https://www.osquery.io/schema/5.1.0/#interface_addresses) for IPs and MACs<br> |

reference/fleet/migrate-elastic-agent.md

Lines changed: 3 additions & 3 deletions
@@ -62,7 +62,7 @@ when the target cluster is available you’ll need to adjust a few settings. Tak
 :::

 3. Open the {{fleet}} **Settings** tab.
-4. Examine the configurations captured there for {{fleet}}. Note that these settings are scopied from the snapshot of the source cluster and may not have a meaning in the target cluster, so they need to be modified accordingly.
+4. Examine the configurations captured there for {{fleet}}. These settings are copied from the snapshot of the source cluster and may not have a meaning in the target cluster, so they need to be modified accordingly.

 In the following example, both the **Fleet Server hosts** and the **Outputs** settings are copied over from the source cluster:

@@ -102,7 +102,7 @@ You have now created an {{es}} output that agents can use to write data to the n

 ### Modify the {{fleet-server}} host [migrate-elastic-agent-fleet-host]

-Like the {{es}} host, the {{fleet-server}} host has also changed with the new target cluster. Note that if youre deploying {{fleet-server}} on premise, the host has probably not changed address and this setting does not need to be modified. We still recommend that you ensure the agents are able to reach the the on-premise {{fleet-server}} host (which they should be able to as they were able to connect to it prior to the migration).
+Like the {{es}} host, the {{fleet-server}} host has also changed with the new target cluster. If you're deploying {{fleet-server}} on premise, the host has probably not changed address and this setting does not need to be modified. We still recommend that you ensure the agents are able to reach the on-premise {{fleet-server}} host (which they should be able to as they were able to connect to it prior to the migration).

 The {{ecloud}} {{fleet-server}} host has a similar format to the {{es}} output:

@@ -137,7 +137,7 @@ The easiest way to find the `deployment-id` is from the deployment URL:

 ### Reset the {{ecloud}} policy [migrate-elastic-agent-reset-policy]

-On your target cluster, certain settings from the original {{ecloud}} {{agent}} policiy may still be retained, and need to be updated to reference the new cluster. For example, in the APM policy installed to the {{ecloud}} {{agent}} policy, the original and outdated APM URL is preserved. This can be fixed by running the `reset_preconfigured_agent_policies` API request. Note that when you reset the policy, all APM Integration settings are reset, including the secret key or any tail-based sampling.
+On your target cluster, certain settings from the original {{ecloud}} {{agent}} policy may still be retained, and need to be updated to reference the new cluster. For example, in the APM policy installed to the {{ecloud}} {{agent}} policy, the original and outdated APM URL is preserved. This can be fixed by running the `reset_preconfigured_agent_policies` API request. When you reset the policy, all APM Integration settings are reset, including the secret key or any tail-based sampling.

 To reset the {{ecloud}} {{agent}} policy:
reference/fleet/migrate-from-beats-to-elastic-agent.md

Lines changed: 2 additions & 2 deletions
@@ -106,7 +106,7 @@ After deploying an {{agent}} to a host, view details about the agent and inspect
 :screenshot:
 :::

-4. Go to **Analytics > Discover** and examine the data streams. Note that documents indexed by {{agent}} match these patterns:
+4. Go to **Analytics > Discover** and examine the data streams. Documents indexed by {{agent}} match these patterns:

 * `logs-*`
 * `metrics-*`

@@ -284,7 +284,7 @@ These aliases must be added to both the index template and existing indices.
 ::::


-Note that custom dashboards will show duplicated data until you remove {{beats}} from your hosts.
+Custom dashboards will show duplicated data until you remove {{beats}} from your hosts.

 For more information, see the [Aliases documentation](/manage-data/data-store/aliases.md).
reference/fleet/monitor-elastic-agent.md

Lines changed: 2 additions & 2 deletions
@@ -50,7 +50,7 @@ The **Agents** tab in **{{fleet}}** displays a maximum of 10,000 agents, shown o
 | --- | --- |
 | **Healthy** | {{agent}}s are enrolled and checked in. There are no agent policy updates or automatic agent binary updates in progress, but the agent binary may still be out of date. {{agent}}s continuously check in to the {{fleet-server}} for required updates. |
 | **Unhealthy** | {{agent}}s have errors or are running in a degraded state. An agent will be reported as `unhealthy` as a result of a configuration problem on the host system. For example, an {{agent}} may not have the correct permissions required to run an integration that has been added to the {{agent}} policy. In this case, you may need to investigate and address the situation. |
-| **Orphaned** | For {{agent}}s enrolled in {{elastic-defend}}, the `orphaned` status indicates an error in the communication between the {{agent}} service on the host system and the endpoint security service provided by {{elastic-defend}}. Note that on agents reported as `orphaned`, the {{elastic-defend}} integration is still running and protecting the host. |
+| **Orphaned** | For {{agent}}s enrolled in {{elastic-defend}}, the `orphaned` status indicates an error in the communication between the {{agent}} service on the host system and the endpoint security service provided by {{elastic-defend}}. On agents reported as `orphaned`, the {{elastic-defend}} integration is still running and protecting the host. |
 | **Updating** | {{agent}}s are updating the agent policy, updating the binary, or enrolling or unenrolling from {{fleet}}. |
 | **Offline** | {{agent}}s have stayed in an unhealthy status for a period of time. Offline agent’s API keys remain valid. You can still see these {{agent}}s in the {{fleet}} UI and investigate them for further diagnosis if required. |
 | **Inactive** | {{agent}}s have been offline for longer than the time set in your [inactivity timeout](/reference/fleet/set-inactivity-timeout.md). These {{agent}}s are valid, but have been removed from the main {{fleet}} UI. |

@@ -191,7 +191,7 @@ When you select a new setting the change is saved automatically.

 When available, the new diagnostic bundle will be listed on this page, as well as any in-progress or previously collected bundles for the {{agent}}.

-Note that the bundles are stored in {{es}} and are removed automatically after 7 days. You can also delete any previously created bundle by clicking the `trash can` icon.
+The bundles are stored in {{es}} and are removed automatically after 7 days. You can also delete any previously created bundle by clicking the `trash can` icon.


 ## View the {{agent}} metrics dashboard [view-agent-metrics]
