Original file line number Diff line number Diff line change
@@ -12,4 +12,5 @@ products:
This section contains information about advanced concepts and operations for [time series data streams](/manage-data/data-store/data-streams/time-series-data-stream-tsds.md):

- [](/manage-data/data-store/data-streams/time-bound-tsds.md)
- [](/manage-data/data-store/data-streams/reindex-tsds.md)
- [](/manage-data/data-store/data-streams/tsds-ingest-otlp.md)
8 changes: 7 additions & 1 deletion manage-data/data-store/data-streams/quickstart-tsds.md
@@ -13,6 +13,12 @@ Use this quickstart to set up a time series data stream (TSDS), ingest a few doc

A _time series_ is a sequence of data points collected at regular time intervals. For example, you might track CPU usage or stock price over time. This quickstart uses simplified weather sensor readings to show how a TSDS helps you analyze metrics data over time.

::::{note}
To ingest OpenTelemetry metrics, follow the [OpenTelemetry quickstarts](/solutions/observability/get-started/opentelemetry/quickstart/index.md).
You can start sending data to a TSDS without manually setting up data streams or configuring mappings.
::::
> **Contributor:** On the fence about having it here as well.. will defer to @marciw

> **Member Author:** IMHO, linking from the TSDS quickstart to the OTel quickstart kinda makes sense.

> **Contributor:** I'm ok with the addition, except it kind of takes over the top of the page. But I can maybe work on making it a bit smaller -- OK as is for now.

## Prerequisites

* Access to [{{dev-tools-app}} Console](/explore-analyze/query-filter/tools/console.md) in {{kib}}, or another way to make {{es}} API requests
@@ -320,4 +326,4 @@ For more information about the APIs used in this quickstart, review the {{es}} A

* [Bulk API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-bulk)
* [Index template API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-index-template)
* [Search API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-search)
10 changes: 8 additions & 2 deletions manage-data/data-store/data-streams/set-up-tsds.md
@@ -11,7 +11,13 @@ products:

# Set up a time series data stream [set-up-tsds]

This page shows you how to manually set up a [time series data stream](/manage-data/data-store/data-streams/time-series-data-stream-tsds.md) (TSDS).

::::{note}
To ingest OpenTelemetry metrics, follow the [OpenTelemetry quickstarts](/solutions/observability/get-started/opentelemetry/quickstart/index.md).
You can start sending data to a TSDS without manually setting up data streams or configuring mappings.
::::

> **Contributor:** Maybe: "For more details on how this is handled in Elasticsearch, check out [OTLP/HTTP endpoint]{...}"

> **Contributor:** After this is merged into my branch, I'll make sure everything matches Elastic docs style (this note can be shorter and more direct)

## Before you begin [tsds-prereqs]

@@ -188,4 +194,4 @@ Now that you've set up a time series data stream, you can manage and use it like
* [Use a data stream](use-data-stream.md) for indexing and searching
* [Change data stream settings](modify-data-stream.md#data-streams-change-mappings-and-settings) as needed
* Query time series data using the {{esql}} [`TS` command](elasticsearch://reference/query-languages/esql/commands/ts.md)
* Use [data stream APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-data-stream)
@@ -117,6 +117,7 @@ You can use the {{esql}} [`TS` command](elasticsearch://reference/query-language

* Try the [quickstart](/manage-data/data-store/data-streams/quickstart-tsds.md) for a hands-on introduction
* [Set up a time series data stream](/manage-data/data-store/data-streams/set-up-tsds.md)
* [Ingest data using the OpenTelemetry Protocol (OTLP)](/manage-data/data-store/data-streams/tsds-ingest-otlp.md)
* Learn about [downsampling](/manage-data/data-store/data-streams/downsampling-time-series-data-stream.md) to reduce storage footprint

% suppress anchor warnings until chicken-and-egg resolved
@@ -125,4 +126,4 @@ You can use the {{esql}} [`TS` command](elasticsearch://reference/query-language
##### [tsds-look-back-time]
##### [time-bound-indices]
##### [dimension-based-routing]
##### [tsds-accepted-time-range]
103 changes: 103 additions & 0 deletions manage-data/data-store/data-streams/tsds-ingest-otlp.md
@@ -0,0 +1,103 @@
---
navigation_title: "OTLP/HTTP endpoint"
applies_to:
stack: preview 9.2
deployment:
self:
products:
- id: elasticsearch
---

# OTLP/HTTP endpoint

:::{important}
The recommended way to send OTLP data for most use cases is through an OpenTelemetry Collector in [Gateway mode](elastic-agent://reference/edot-collector/config/default-config-standalone.md#gateway-mode) or, if you're on {{ecloud}}, directly to the [{{motlp}}](opentelemetry:/reference/motlp.md). Refer to [Best practices](#best-practices) for more details.
:::

In addition to ingesting metrics data through the bulk API,
{{es}} can ingest data through the [OpenTelemetry Protocol (OTLP)](https://opentelemetry.io/docs/specs/otlp).

The endpoint is available under `/_otlp/v1/metrics`.

Ingesting metrics data using the OTLP endpoint has the following advantages:

* Improved ingestion performance, especially if the data contains many resource attributes.
* Simplified index mapping:
there's no need to manually create data streams, index templates, or define dimensions and metrics.
Metrics are dynamically mapped using the metadata included in the OTLP requests.

:::{note}
{{es}} only supports [OTLP/HTTP](https://opentelemetry.io/docs/specs/otlp/#otlphttp),
not [OTLP/gRPC](https://opentelemetry.io/docs/specs/otlp/#otlpgrpc).
:::

## How to send data to the OTLP endpoint

To send data from an OpenTelemetry Collector to the {{es}} OTLP endpoint,
use the [`OTLP/HTTP` exporter](https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter/otlphttpexporter).
This is an example configuration:

```yaml
extensions:
basicauth/elasticsearch:
client_auth:
username: <user>
password: <password>
exporters:
otlphttp/elasticsearch-metrics:
endpoint: <es_endpoint>/_otlp
sending_queue:
enabled: true
sizer: bytes
queue_size: 50_000_000 # 50MB uncompressed
block_on_overflow: true
batch:
flush_timeout: 1s
min_size: 1_000_000 # 1MB uncompressed
max_size: 4_000_000 # 4MB uncompressed
auth:
authenticator: basicauth/elasticsearch
service:
extensions: [basicauth/elasticsearch]
pipelines:
metrics:
exporters: [otlphttp/elasticsearch-metrics]
receivers: ...
```

The supported options for `compression` are `gzip` (default value of the `OTLP/HTTP` exporter) and `none`.
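For example, this minimal exporter fragment (using the same names as the configuration above) sets the compression explicitly:

```yaml
exporters:
  otlphttp/elasticsearch-metrics:
    endpoint: <es_endpoint>/_otlp
    compression: gzip # the exporter default; `none` is the only other supported value
```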

% TODO we might actually also support snappy and zstd; test and update accordingly

To track metrics in your custom application,
use the [OpenTelemetry language SDK](https://opentelemetry.io/docs/getting-started/dev/) of your choice.

:::{note}
Only `encoding: proto` is supported, which the `OTLP/HTTP` exporter uses by default.
:::

## Best practices

Don't send metrics from applications directly to the {{es}} OTLP endpoint, especially if many individual applications each periodically send a small amount of metrics. Instead, send data to an OpenTelemetry Collector first. The Collector can handle many connections and create bigger batches, which improves ingestion performance.

On {{ecloud}}, use the [{{motlp}}](opentelemetry:/reference/motlp.md). For self-managed deployments, use the [Elastic Distribution of OpenTelemetry Collector](elastic-agent:/reference/edot-collector/index.md).

For more details on the recommended way to set up OpenTelemetry-based data ingestion, refer to the [EDOT reference architecture](opentelemetry:/reference/architecture/index.md).

## Send data to different data streams

By default, metrics are ingested into the `metrics-generic.otel-default` data stream. You can influence the target data stream by setting specific attributes on your data:

- If `data_stream.dataset` or `data_stream.namespace` are set in attributes, they're used, with the following order of precedence: data point attribute -> scope attribute -> resource attribute.
- Otherwise, if the scope name contains `/receiver/<somereceiver>`, `data_stream.dataset` is set to the receiver name.
- Otherwise, `data_stream.dataset` falls back to `generic` and `data_stream.namespace` falls back to `default`.

The target data stream name is constructed as `metrics-${data_stream.dataset}.otel-${data_stream.namespace}`.
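The routing rules above can be sketched as follows. This is an illustrative Python sketch, not the actual {{es}} implementation; the function name and signature are hypothetical:

```python
import re


def target_data_stream(point_attrs, scope_attrs, resource_attrs, scope_name=""):
    """Sketch of OTLP metrics routing: returns the target data stream name."""

    def lookup(key):
        # Precedence: data point -> scope -> resource attributes
        for attrs in (point_attrs, scope_attrs, resource_attrs):
            if key in attrs:
                return attrs[key]
        return None

    dataset = lookup("data_stream.dataset")
    namespace = lookup("data_stream.namespace")
    if dataset is None:
        # Fall back to the receiver name embedded in the scope name
        match = re.search(r"/receiver/([^/]+)", scope_name)
        if match:
            dataset = match.group(1)
    # Final fallbacks: generic / default
    return f"metrics-{dataset or 'generic'}.otel-{namespace or 'default'}"
```

For example, data with no routing attributes lands in `metrics-generic.otel-default`, while a data point attribute `data_stream.dataset: weather` routes to `metrics-weather.otel-default`.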

## Limitations

* Only the OTLP metrics endpoint (`/_otlp/v1/metrics`) is supported.
To ingest logs, traces, and profiles, use a distribution of the OpenTelemetry Collector that includes the [{{es}} exporter](opentelemetry:/reference/edot-collector/components/elasticsearchexporter.md),
such as the [Elastic Distribution of OpenTelemetry (EDOT) Collector](opentelemetry:/reference/edot-collector/index.md).
* Histograms are only supported in delta temporality. Set the temporality preference to delta in your SDKs, or use the [`cumulativetodelta` processor](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/cumulativetodeltaprocessor) to prevent cumulative histograms from being dropped.
* Exemplars are not supported.
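To illustrate the temporality conversion mentioned above, here is a minimal Python sketch of what the `cumulativetodelta` processor does for a single series. Real implementations also handle counter resets and per-series identity; the function name is hypothetical:

```python
def cumulative_to_delta(cumulative):
    """Convert cumulative counter values to delta temporality.

    Each output value is the difference between consecutive cumulative
    readings; the first reading has no predecessor, so it produces no delta.
    """
    deltas = []
    previous = None
    for value in cumulative:
        if previous is not None:
            deltas.append(value - previous)
        previous = value
    return deltas
```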
7 changes: 4 additions & 3 deletions manage-data/toc.yml
@@ -21,9 +21,10 @@ toc:
- file: data-store/data-streams/run-downsampling.md
- file: data-store/data-streams/query-downsampled-data.md
- file: data-store/data-streams/advanced-topics-tsds.md
children:
- file: data-store/data-streams/time-bound-tsds.md
- file: data-store/data-streams/reindex-tsds.md
- file: data-store/data-streams/tsds-ingest-otlp.md
- file: data-store/data-streams/logs-data-stream.md
- file: data-store/data-streams/failure-store.md
children:
@@ -167,4 +168,4 @@ toc:
- file: migrate/migrate-from-a-self-managed-cluster-with-a-self-signed-certificate-using-remote-reindex.md
- file: migrate/migrate-internal-indices.md
- file: migrate/migrate-data-between-elasticsearch-clusters-with-minimal-downtime.md
- file: use-case-use-elasticsearch-to-manage-time-series-data.md