diff --git a/manage-data/data-store/data-streams/advanced-topics-tsds.md b/manage-data/data-store/data-streams/advanced-topics-tsds.md
index 36ae51226b..6d08a3eb32 100644
--- a/manage-data/data-store/data-streams/advanced-topics-tsds.md
+++ b/manage-data/data-store/data-streams/advanced-topics-tsds.md
@@ -12,4 +12,5 @@ products:
 This section contains information about advanced concepts and operations for [time series data streams](/manage-data/data-store/data-streams/time-series-data-stream-tsds.md):
 - [](/manage-data/data-store/data-streams/time-bound-tsds.md)
-- [](/manage-data/data-store/data-streams/reindex-tsds.md)
\ No newline at end of file
+- [](/manage-data/data-store/data-streams/reindex-tsds.md)
+- [](/manage-data/data-store/data-streams/tsds-ingest-otlp.md)
diff --git a/manage-data/data-store/data-streams/quickstart-tsds.md b/manage-data/data-store/data-streams/quickstart-tsds.md
index 086d49605f..09ddd18a63 100644
--- a/manage-data/data-store/data-streams/quickstart-tsds.md
+++ b/manage-data/data-store/data-streams/quickstart-tsds.md
@@ -13,6 +13,12 @@ Use this quickstart to set up a time series data stream (TSDS), ingest a few doc
 A _time series_ is a sequence of data points collected at regular time intervals. For example, you might track CPU usage or stock price over time. This quickstart uses simplified weather sensor readings to show how a TSDS helps you analyze metrics data over time.
 
+::::{note}
+If you want to ingest OpenTelemetry metrics,
+follow the [OpenTelemetry quickstarts](/solutions/observability/get-started/opentelemetry/quickstart/index.md).
+The quickstarts let you start sending data into a TSDS without manually setting up data streams or configuring mappings.
+::::
+
 ## Prerequisites
 * Access to [{{dev-tools-app}} Console](/explore-analyze/query-filter/tools/console.md) in {{kib}}, or another way to make {{es}} API requests
@@ -320,4 +326,4 @@ For more information about the APIs used in this quickstart, review the {{es}} A
 * [Bulk API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-bulk)
 * [Index template API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-index-template)
-* [Search API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-search)
\ No newline at end of file
+* [Search API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-search)
diff --git a/manage-data/data-store/data-streams/set-up-tsds.md b/manage-data/data-store/data-streams/set-up-tsds.md
index 7abf1ae833..a97de12ec7 100644
--- a/manage-data/data-store/data-streams/set-up-tsds.md
+++ b/manage-data/data-store/data-streams/set-up-tsds.md
@@ -11,7 +11,13 @@ products:
 # Set up a time series data stream [set-up-tsds]
-This page shows you how to set up a [time series data stream](/manage-data/data-store/data-streams/time-series-data-stream-tsds.md) (TSDS).
+This page shows you how to manually set up a [time series data stream](/manage-data/data-store/data-streams/time-series-data-stream-tsds.md) (TSDS).
+
+::::{note}
+If you want to ingest OpenTelemetry metrics,
+follow the [OpenTelemetry quickstarts](/solutions/observability/get-started/opentelemetry/quickstart/index.md).
+The quickstarts let you start sending data into a TSDS without manually setting up data streams or configuring mappings.
+::::
 ## Before you begin [tsds-prereqs]
@@ -188,4 +194,4 @@ Now that you've set up a time series data stream, you can manage and use it like
 * [Use a data stream](use-data-stream.md) for indexing and searching
 * [Change data stream settings](modify-data-stream.md#data-streams-change-mappings-and-settings) as needed
 * Query time series data using the {{esql}} [`TS` command](elasticsearch://reference/query-languages/esql/commands/ts.md)
-* Use [data stream APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-data-stream)
\ No newline at end of file
+* Use [data stream APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-data-stream)
diff --git a/manage-data/data-store/data-streams/time-series-data-stream-tsds.md b/manage-data/data-store/data-streams/time-series-data-stream-tsds.md
index 2366977f4f..f0466968b2 100644
--- a/manage-data/data-store/data-streams/time-series-data-stream-tsds.md
+++ b/manage-data/data-store/data-streams/time-series-data-stream-tsds.md
@@ -117,6 +117,7 @@ You can use the {{esql}} [`TS` command](elasticsearch://reference/query-language
 * Try the [quickstart](/manage-data/data-store/data-streams/quickstart-tsds.md) for a hands-on introduction
 * [Set up a time series data stream](/manage-data/data-store/data-streams/set-up-tsds.md)
+* [Ingest data using the OpenTelemetry Protocol (OTLP)](/manage-data/data-store/data-streams/tsds-ingest-otlp.md)
 * Learn about [downsampling](/manage-data/data-store/data-streams/downsampling-time-series-data-stream.md) to reduce storage footprint
 % suppress anchor warnings until chicken-and-egg resolved
 ##### [tsds-downsampling-concepts]
 ##### [tsds-dimensions-metrics]
 ##### [tsds-look-back-time]
 ##### [time-bound-indices]
 ##### [dimension-based-routing]
-##### [tsds-accepted-time-range]
\ No newline at end of file
+##### [tsds-accepted-time-range]
diff --git a/manage-data/data-store/data-streams/tsds-ingest-otlp.md b/manage-data/data-store/data-streams/tsds-ingest-otlp.md
new file mode 100644
index 0000000000..6f7a497451
--- /dev/null
+++ b/manage-data/data-store/data-streams/tsds-ingest-otlp.md
@@ -0,0 +1,103 @@
+---
+navigation_title: "OTLP/HTTP endpoint"
+applies_to:
+  stack: preview 9.2
+  deployment:
+    self:
+products:
+  - id: elasticsearch
+---
+
+# OTLP/HTTP endpoint
+
+:::{important}
+The recommended way to send OTLP data for most use cases is through an OpenTelemetry Collector in [Gateway mode](elastic-agent://reference/edot-collector/config/default-config-standalone.md#gateway-mode) or, if you're on {{ecloud}}, directly to the [{{motlp}}](opentelemetry:/reference/motlp.md). Refer to [Best practices](#best-practices) for more details.
+:::
+
+In addition to ingesting metrics data through the bulk API,
+{{es}} offers an alternative way to ingest data through the [OpenTelemetry Protocol (OTLP)](https://opentelemetry.io/docs/specs/otlp).
+
+The endpoint is available under `/_otlp/v1/metrics`.
+
+Ingesting metrics data through the OTLP endpoint has the following advantages:
+
+* Improved ingestion performance, especially if the data contains many resource attributes.
+* Simplified index mapping:
+  there's no need to manually create data streams or index templates, or to define dimensions and metrics.
+  Metrics are dynamically mapped using the metadata included in the OTLP requests.
+
+:::{note}
+{{es}} only supports [OTLP/HTTP](https://opentelemetry.io/docs/specs/otlp/#otlphttp),
+not [OTLP/gRPC](https://opentelemetry.io/docs/specs/otlp/#otlpgrpc).
+:::
+
+## How to send data to the OTLP endpoint
+
+To send data from an OpenTelemetry Collector to the {{es}} OTLP endpoint,
+use the [`OTLP/HTTP` exporter](https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter/otlphttpexporter).
+This is an example configuration:
+
+```yaml
+extensions:
+  basicauth/elasticsearch:
+    client_auth:
+      username:
+      password:
+exporters:
+  otlphttp/elasticsearch-metrics:
+    endpoint: /_otlp
+    sending_queue:
+      enabled: true
+      sizer: bytes
+      queue_size: 50_000_000 # 50MB uncompressed
+      block_on_overflow: true
+      batch:
+        flush_timeout: 1s
+        min_size: 1_000_000 # 1MB uncompressed
+        max_size: 4_000_000 # 4MB uncompressed
+    auth:
+      authenticator: basicauth/elasticsearch
+service:
+  extensions: [basicauth/elasticsearch]
+  pipelines:
+    metrics:
+      exporters: [otlphttp/elasticsearch-metrics]
+      receivers: ...
+```
+
+The supported options for the exporter's `compression` setting are `gzip` (the `OTLP/HTTP` exporter's default) and `none`.
+
+% TODO we might actually also support snappy and zstd, test and update accordingly
+
+To track metrics in your custom application,
+use the [OpenTelemetry language SDK](https://opentelemetry.io/docs/getting-started/dev/) of your choice.
+
+:::{note}
+Only `encoding: proto` is supported, which the `OTLP/HTTP` exporter uses by default.
+:::
+
+## Best practices
+
+Don't send metrics from applications directly to the {{es}} OTLP endpoint, especially if many individual applications each periodically send a small amount of metrics. Instead, send data to an OpenTelemetry Collector first. A collector can handle many client connections and create bigger batches, which improves ingestion performance.
+
+On {{ecloud}}, use the [{{motlp}}](opentelemetry:/reference/motlp.md). For self-managed use cases, you can use the [Elastic Distribution of OpenTelemetry Collector](elastic-agent:/reference/edot-collector/index.md).
+
+For more details on the recommended way to set up OpenTelemetry-based data ingestion, refer to the [EDOT reference architecture](opentelemetry:/reference/architecture/index.md).
+
+## Send data to different data streams
+
+By default, metrics are ingested into the `metrics-generic.otel-default` data stream.
+You can influence the target data stream by setting specific attributes on your data:
+
+- If `data_stream.dataset` or `data_stream.namespace` are set as attributes, they're used directly, with the following order of precedence: data point attribute -> scope attribute -> resource attribute.
+- Otherwise, if the scope name contains `/receiver/`, `data_stream.dataset` is set to the receiver name.
+- Otherwise, `data_stream.dataset` falls back to `generic` and `data_stream.namespace` falls back to `default`.
+
+The target data stream name is constructed as `metrics-${data_stream.dataset}.otel-${data_stream.namespace}`.
+
+## Limitations
+
+* Only the OTLP metrics endpoint (`/_otlp/v1/metrics`) is supported.
+  To ingest logs, traces, and profiles, use a distribution of the OpenTelemetry Collector that includes the [{{es}} exporter](opentelemetry:/reference/edot-collector/components/elasticsearchexporter.md),
+  such as the [Elastic Distribution of OpenTelemetry (EDOT) Collector](opentelemetry:/reference/edot-collector/index.md).
+* Histograms are only supported in delta temporality. Set the temporality preference to delta in your SDKs, or use the [`cumulativetodelta` processor](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/cumulativetodeltaprocessor) to prevent cumulative histograms from being dropped.
+* Exemplars are not supported.
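The routing rules described in the new page can be sketched in a few lines. This is an illustration of the documented precedence (data point attribute over scope attribute over resource attribute) and fallbacks, not {{es}}'s actual implementation; in particular, how the receiver name is extracted from the scope name is an assumption about the scope-name convention.

```python
def target_data_stream(point_attrs, scope_attrs, resource_attrs, scope_name=""):
    """Sketch of the documented data stream routing rules (illustrative only)."""
    def lookup(key):
        # Data point attributes win over scope attributes,
        # which win over resource attributes.
        for attrs in (point_attrs, scope_attrs, resource_attrs):
            if key in attrs:
                return attrs[key]
        return None

    dataset = lookup("data_stream.dataset")
    if dataset is None and "/receiver/" in scope_name:
        # Assumed convention, e.g. ".../receiver/hostmetricsreceiver"
        dataset = scope_name.split("/receiver/")[-1].split("/")[0]
    dataset = dataset or "generic"
    namespace = lookup("data_stream.namespace") or "default"
    return f"metrics-{dataset}.otel-{namespace}"
```

For example, with no attributes set anywhere, this yields the default `metrics-generic.otel-default`, while a `data_stream.dataset` data point attribute of `nginx` and a `data_stream.namespace` resource attribute of `prod` yield `metrics-nginx.otel-prod`.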
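Tying together the pieces the new page documents (the `/_otlp/v1/metrics` path, Basic authentication, and protobuf-only encoding), here is a minimal client-side sketch of what a request to the endpoint looks like. The host and credentials are placeholders; only the path and header conventions come from the text above, and `application/x-protobuf` is the OTLP/HTTP content type for binary protobuf payloads.

```python
import base64

def otlp_metrics_request(es_url: str, username: str, password: str) -> dict:
    """Build the URL and headers for an OTLP/HTTP metrics export request
    to Elasticsearch (sketch; host and credentials are placeholders)."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {
        # The only OTLP path Elasticsearch exposes is the metrics endpoint.
        "url": f"{es_url}/_otlp/v1/metrics",
        "headers": {
            "Content-Type": "application/x-protobuf",  # only encoding: proto is supported
            "Content-Encoding": "gzip",                # gzip or none
            "Authorization": f"Basic {token}",
        },
    }

req = otlp_metrics_request("https://elasticsearch.example:9200", "elastic", "changeme")
```

The request body itself would be a gzip-compressed, protobuf-encoded `ExportMetricsServiceRequest`, which is what the `OTLP/HTTP` exporter produces for you.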
diff --git a/manage-data/toc.yml b/manage-data/toc.yml
index f6604aad98..fe7be01008 100644
--- a/manage-data/toc.yml
+++ b/manage-data/toc.yml
@@ -21,9 +21,10 @@ toc:
           - file: data-store/data-streams/run-downsampling.md
           - file: data-store/data-streams/query-downsampled-data.md
           - file: data-store/data-streams/advanced-topics-tsds.md
-            children:
-              - file: data-store/data-streams/time-bound-tsds.md
+            children:
+              - file: data-store/data-streams/time-bound-tsds.md
               - file: data-store/data-streams/reindex-tsds.md
+              - file: data-store/data-streams/tsds-ingest-otlp.md
           - file: data-store/data-streams/logs-data-stream.md
           - file: data-store/data-streams/failure-store.md
             children:
@@ -167,4 +168,4 @@ toc:
       - file: migrate/migrate-from-a-self-managed-cluster-with-a-self-signed-certificate-using-remote-reindex.md
       - file: migrate/migrate-internal-indices.md
      - file: migrate/migrate-data-between-elasticsearch-clusters-with-minimal-downtime.md
-      - file: use-case-use-elasticsearch-to-manage-time-series-data.md
\ No newline at end of file
+      - file: use-case-use-elasticsearch-to-manage-time-series-data.md