
### Internal batching (default)

By default, the exporter performs its own buffering and batching, as configured through the `flush` setting, unless the `sending_queue` or `batcher` settings are defined.
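As a sketch of how internal batching might be tuned (assuming the upstream `flush::bytes` and `flush::interval` settings; the values are illustrative, not recommendations):

```yaml
exporters:
  elasticsearch:
    endpoint: https://elasticsearch:9200
    flush:
      bytes: 5000000   # flush the internal buffer once it reaches roughly 5 MB
      interval: 10s    # or after 10 seconds, whichever comes first
```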

### Using sending queue

```{applies_to}
stack: ga 9.0, ga 9.1
```

In versions 9.0 and 9.1, queueing and batching are configured separately. The `sending_queue` setting controls queueing and is deactivated by default; turn it on by setting `sending_queue::enabled` to true. Batching is controlled by the `batcher` section, described below. To keep the exporter interaction asynchronous after enabling `batcher`, also enable the sending queue.

Batching can be enabled and configured with the `batcher` section, using [common `batcher` settings](https://github.com/open-telemetry/opentelemetry-collector/blob/main/exporter/exporterhelper/internal/queue_sender.go).
- `batcher`:
  - `enabled` (default=unset): Enables batching of requests into one or more bulk requests. On a batcher flush, a single batched request can translate into more than one bulk request because of `flush::bytes`.
  - `sizer` (default=items): Unit for `min_size` and `max_size`. Currently only "items" is supported; "bytes" will be supported in the future.
  - `min_size` (default=5000): Minimum batch size exported to Elasticsearch, measured in the unit set by `batcher::sizer`.
  - `max_size` (default=0): Maximum batch size exported to Elasticsearch, measured in the unit set by `batcher::sizer`. To limit bulk request size, configure `flush::bytes` instead. :warning: Keep `max_size` at 0: a non-zero value can break metrics grouping and lead to indexing rejections.
  - `flush_timeout` (default=10s): Maximum time the oldest item can spend in the batcher buffer. When this timeout elapses, a flush happens regardless of how much the buffer holds.

For example:

```yaml subs=true
exporters:
  elasticsearch:
    endpoint: https://elasticsearch:9200
    batcher:
      enabled: true
      min_size: 1000
      max_size: 10000
      flush_timeout: 5s
```
```{applies_to}
stack: ga 9.2
```
Starting with version 9.2, the `sending_queue` setting also supports batching, in addition to queueing.
The sending queue is deactivated by default; turn it on by setting `sending_queue::enabled` to true. Batching within the sending queue is likewise deactivated by default; turn it on by defining `sending_queue::batch`. For example:

```yaml subs=true
exporters:
  elasticsearch:
    endpoint: https://elasticsearch:9200
    sending_queue:
      enabled: true
      batch:
        min_size: 1000
        max_size: 10000
        flush_timeout: 5s
```