Commit cf1b9e8: Apply initial comments (parent 08f41b4)

1 file changed: 29 additions, 47 deletions
---
navigation_title: Export errors from the EDOT Collector
description: Learn how to resolve export failures caused by `sending_queue` overflow and Elasticsearch exporter timeouts in the EDOT Collector.
applies_to:
  serverless: all
  product:
products:
  - id: edot-collector
---

# Export failures when sending telemetry data from the EDOT Collector

During high traffic or load testing scenarios, the EDOT Collector might fail to export telemetry data (traces, metrics, or logs) to {{es}}. This typically happens when the internal queue for outgoing data fills up faster than it can be drained, resulting in timeouts and dropped data.

## Symptoms

You might see one or more of the following messages in the EDOT Collector logs:

* `bulk indexer flush error: failed to execute the request: context deadline exceeded`
* `Exporting failed. Rejecting data. sending queue is full`

These errors indicate the Collector is overwhelmed and unable to export data fast enough, leading to queue overflows and data loss.

## Causes

This issue typically occurs when the `sending_queue` configuration is misaligned with the incoming telemetry volume.

:::{important}
The sending queue is disabled by default in versions earlier than **v0.138.0** and enabled by default from **v0.138.0** onward. If you're using an earlier version, verify that `enabled: true` is explicitly set; otherwise any queue configuration is ignored.
:::

Common contributing factors include:

* `sending_queue.block_on_overflow` is not enabled (it defaults to `false`), so data is dropped when the queue is full.
* `num_consumers` is too low to keep up with the incoming data volume.
* The queue size (`queue_size`) is too small for the traffic load.
* Export batching is disabled, increasing processing overhead.
* EDOT Collector resources (CPU, memory) are not sufficient for the traffic volume.

Increasing the `timeout` value (for example, from 30s to 90s) doesn't help if the queue fills faster than it can be drained.
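The contributing factors above all live under the exporter's `sending_queue` block. As a point of reference, here is a minimal sketch of where that block sits in a Collector configuration. The exporter key and the endpoint value are illustrative placeholders, not values from this guide:

```yaml
exporters:
  elasticsearch:
    endpoint: https://my-deployment.example.com:443  # placeholder endpoint
    timeout: 30s              # raising this alone doesn't fix queue overflow
    sending_queue:
      enabled: true           # must be set explicitly before v0.138.0
      queue_size: 1000        # too small for the load -> overflow
      num_consumers: 10       # too few -> queue drains slowly
      block_on_overflow: false  # default: data is dropped when the queue is full

service:
  pipelines:
    traces:
      exporters: [elasticsearch]
```

Seeing the queue alongside `timeout` makes the relationship clear: the timeout governs a single export attempt, while the queue settings govern how much data can wait for one.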
4045

4146
## Resolution
4247

43-
Update the EDOT Collector configuration as follows:
48+
The resolution approach depends on which EDOT Collector version you're using.
4449

45-
:::::{stepper}
50+
### For EDOT Collector versions earlier than v0.138.0
4651

47-
::::{step} Enable `block_on_overflow`
48-
49-
Prevent silent trace drops by enabling blocking behavior when the queue is full:
52+
Enable the sending queue and block on overflow to prevent silent data drops:
5053

5154
```yaml
5255
sending_queue:
53-
enabled: true
54-
queue_size: 1000
55-
num_consumers: 10
56-
block_on_overflow: true
56+
enabled: true
57+
queue_size: 1000
58+
num_consumers: 10
59+
block_on_overflow: true
5760
```
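If the baseline values still overflow under sustained load, you can raise the queue capacity and consumer count. The numbers below are illustrative starting points, not recommendations from this guide; a larger queue requires correspondingly more memory:

```yaml
sending_queue:
  enabled: true
  queue_size: 5000          # larger buffer absorbs traffic spikes; needs more memory
  num_consumers: 25         # more parallel workers drain the queue faster
  block_on_overflow: true   # apply backpressure instead of dropping data
```

Adjust `num_consumers` based on observed throughput and CPU usage, and verify memory headroom before increasing `queue_size`.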
### For EDOT Collector v0.138.0 and later

The `sending_queue` behavior is managed internally by the exporter. Adjusting its parameters has a limited effect on throughput. In these versions, the most effective optimizations are:

* Increase Collector resources by ensuring the EDOT Collector pod has enough CPU and memory. Scale vertically (more resources) or horizontally (more replicas) if you experience backpressure.
* Optimize Elasticsearch performance by checking for indexing delays, rejected bulk requests, or cluster resource limits. Bottlenecks in {{es}} often manifest as Collector export timeouts.

:::{tip}
For v0.138.0 and later, focus tuning efforts on the Collector’s resource allocation and the downstream {{es}} cluster rather than queue parameters.
:::
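For the resource-scaling advice above, a minimal Kubernetes sketch is shown below. The deployment name, replica count, and resource values are hypothetical examples, assuming the Collector runs as a standard Deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edot-collector        # hypothetical deployment name
spec:
  replicas: 3                 # scale horizontally under sustained backpressure
  template:
    spec:
      containers:
        - name: collector
          resources:
            requests:
              cpu: "1"
              memory: 2Gi
            limits:
              cpu: "2"
              memory: 4Gi     # queue buffers live in memory; size limits accordingly
```

Whether to scale vertically or horizontally depends on the bottleneck: CPU saturation in a single pod favors more replicas, while memory pressure from queue buffers favors larger limits.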

## Resources

* [Upstream documentation - OpenTelemetry Collector configuration](https://opentelemetry.io/docs/collector/configuration)
* [Elasticsearch exporter configuration reference](elastic-agent://reference/edot-collector/components/elasticsearchexporter.md)
