
Commit 5f874ab

apm: update discard_on_write_failure descriptions to match writing guidelines
1 parent ad5d6c0 commit 5f874ab

File tree

4 files changed: +6 lines, -4 lines

reference/apm/cloud/apm-settings.md

Lines changed: 2 additions & 1 deletion

@@ -54,9 +54,10 @@ Some settings that could break your cluster if set incorrectly are blocklisted.
 ::::

 ### Version 9.1+ [ec_version_9_1]
+This {{stack}} version adds support for the following settings:

 `apm-server.sampling.tail.discard_on_write_failure`
-: Defines the indexing behavior when trace events fail to be written to storage (e.g. when the storage limit is reached). When set to `false`, traces will bypass sampling and always be indexed, significantly increasing the indexing load. When set to `true`, traces will be discarded, there will be data loss potentially resulting in broken traces. The default is `false`.
+: Defines the indexing behavior when trace events fail to be written to storage (for example, when the storage limit is reached). When set to `false`, traces bypass sampling and are always indexed, which significantly increases the indexing load. When set to `true`, traces are discarded, causing data loss which can result in broken traces. The default is `false`.

 ### Version 8.0+ [ec_version_8_0_3]
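
For context, the setting changed in this hunk is typically supplied as YAML, either through the APM user settings in the Elastic Cloud console or in `apm-server.yml` for self-managed deployments. The snippet below is a minimal sketch and is not part of this commit; it assumes tail-based sampling is already enabled and uses the flattened key form that these docs use.

```yaml
# Minimal sketch of an APM user settings override (illustrative, not part of this commit).
# The discard_on_write_failure setting is available from stack version 9.1+, per the docs change above.
apm-server.sampling.tail.enabled: true
apm-server.sampling.tail.discard_on_write_failure: false  # default: on write failure, events bypass sampling and are indexed
```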

solutions/observability/apm/configure-apm-server.md

Lines changed: 2 additions & 1 deletion

@@ -78,9 +78,10 @@ Some settings that could break your cluster if set incorrectly are blocklisted.
 ::::

 ### Version 9.1+ [ec_version_9_1]
+This {{stack}} version adds support for the following settings:

 `apm-server.sampling.tail.discard_on_write_failure`
-: Defines the indexing behavior when trace events fail to be written to storage (e.g. when the storage limit is reached). When set to `false`, traces will bypass sampling and always be indexed, significantly increasing the indexing load. When set to `true`, traces will be discarded, there will be data loss potentially resulting in broken traces. The default is `false`.
+: Defines the indexing behavior when trace events fail to be written to storage (for example, when the storage limit is reached). When set to `false`, traces bypass sampling and are always indexed, which significantly increases the indexing load. When set to `true`, traces are discarded, causing data loss which can result in broken traces. The default is `false`.

 ### Version 8.0+ [ec_version_8_0_3]

solutions/observability/apm/tail-based-sampling.md

Lines changed: 1 addition & 1 deletion

@@ -87,7 +87,7 @@ Policies map trace events to a sample rate. Each policy must specify a sample ra

 ### Discard On Write Failure [sampling-tail-discard-on-write-failure-ref]

-Defines the indexing behavior when trace events fail to be written to storage (e.g. when the storage limit is reached). When set to `false`, traces will bypass sampling and always be indexed, significantly increasing the indexing load. When set to `true`, traces will be discarded, there will be data loss potentially resulting in broken traces. The default is `false`.
+Defines the indexing behavior when trace events fail to be written to storage (for example, when the storage limit is reached). When set to `false`, traces bypass sampling and are always indexed, which significantly increases the indexing load. When set to `true`, traces are discarded, causing data loss which can result in broken traces. The default is `false`.

 Default: `false`. (bool)

solutions/observability/apm/transaction-sampling.md

Lines changed: 1 addition & 1 deletion

@@ -146,7 +146,7 @@ Due to [OpenTelemetry tail-based sampling limitations](/solutions/observability/

 Tail-based sampling (TBS), by definition, requires storing events locally temporarily, such that they can be retrieved and forwarded when a sampling decision is made.

-In an APM Server implementation, the events are stored temporarily on disk instead of in memory for better scalability. Therefore, it requires local disk storage proportional to the APM event ingestion rate and additional memory to facilitate disk reads and writes. If the [storage limit](/solutions/observability/apm/tail-based-sampling.md#sampling-tail-storage_limit-ref) is insufficient, trace events will be indexed or discarded based on the [discard on write failure](/solutions/observability/apm/tail-based-sampling.md#sampling-tail-discard-on-write-failure-ref) configuration.
+In an APM Server implementation, the events are stored temporarily on disk instead of in memory for better scalability. Therefore, it requires local disk storage proportional to the APM event ingestion rate and additional memory to facilitate disk reads and writes. If the [storage limit](/solutions/observability/apm/tail-based-sampling.md#sampling-tail-storage_limit-ref) is insufficient, trace events are indexed or discarded based on the [discard on write failure](/solutions/observability/apm/tail-based-sampling.md#sampling-tail-discard-on-write-failure-ref) configuration.

 It is recommended to use fast disks, ideally Solid State Drives (SSD) with high I/O per second (IOPS), when enabling tail-based sampling. Disk throughput and I/O may become performance bottlenecks for tail-based sampling and APM event ingestion overall. Disk writes are proportional to the event ingest rate, while disk reads are proportional to both the event ingest rate and the sampling rate.