solutions/observability/apps/transaction-sampling.md
### Tail-based sampling performance and requirements [_tail_based_sampling_performance_and_requirements]
Tail-based sampling (TBS), by definition, requires storing events locally and temporarily, so that they can be retrieved and forwarded once a sampling decision is made.
In the APM Server implementation, events are stored temporarily on disk instead of in memory for better scalability. Therefore, tail-based sampling requires local disk storage proportional to the APM event ingestion rate, plus additional memory to facilitate disk reads and writes. An insufficient [storage limit](../../../solutions/observability/apps/transaction-sampling.md#sampling-tail-storage_limit) causes sampling to be bypassed.
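As a rough illustration only, a standalone APM Server configuration that enables tail-based sampling with an explicit storage limit might look like the following sketch. The specific values (interval, sample rate, and storage limit) are assumptions for this example and should be tuned to your ingest rate and available disk.

```yaml
# apm-server.yml — minimal sketch of tail-based sampling settings
# (illustrative values; adjust interval, sample_rate, and storage_limit
# to your ingest rate and available disk)
apm-server:
  sampling:
    tail:
      enabled: true
      interval: 1m            # how often sampling decisions are made
      storage_limit: 3GB      # local disk budget; too small a limit bypasses sampling
      policies:
        - sample_rate: 0.1    # keep roughly 10% of traces
```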
It is recommended to use fast disks, such as NVMe SSDs, when enabling tail-based sampling. Disk throughput and I/O may become performance bottlenecks for tail-based sampling and APM event ingestion overall. Disk writes are proportional to the event ingest rate, while disk reads are proportional to both the event ingest rate and the sampling rate.
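As a back-of-envelope illustration (the numbers here are assumptions, not measurements): with an ingest rate of 10,000 events/s, an average event size of 1 KB, and a 10% sample rate, sustained disk writes would be on the order of 10,000 × 1 KB ≈ 10 MB/s, while disk reads for forwarding sampled traces would be on the order of 10,000 × 0.1 × 1 KB ≈ 1 MB/s, plus any read and write amplification from the underlying storage layer.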
To demonstrate the performance overhead and requirements, here are some reference numbers from a standalone APM Server deployed on AWS EC2 under full load, receiving APM events containing only traces. These numbers assume no backpressure from Elasticsearch and a 10% sample rate in the tail sampling policy. The figures are for reference only and may vary depending on factors such as sampling rate, average event size, and the average number of events per distributed trace.
Terminology:
* Event Ingestion Rate: The throughput from the APM agent to the APM Server using the Intake v2 protocol (the protocol used by Elastic APM agents), measured in events per second.
* Event Indexing Rate: The throughput from the APM Server to Elasticsearch, measured in events per second or documents per second.
* Memory Usage: The maximum Resident Set Size (RSS) of the APM Server process observed throughout the benchmark.
| APM Server version | EC2 instance size | TBS and disk configuration | Event ingestion rate (events/s) | Event indexing rate (events/s) | Memory usage (GB) | Disk usage (GB) |
The tail-based sampling implementation in version 9.0 offers significantly better performance compared to version 8.18, primarily due to a rewritten storage layer. This new implementation cleans up expired data more reliably, resulting in reduced load on disk, memory, and compute resources. This improvement is particularly evident in the event indexing rate on slower disks.