8 changes: 6 additions & 2 deletions solutions/observability/apm/limitations.md
@@ -21,9 +21,13 @@ products:

## OpenTelemetry logs [apm-open-telemetry-logs-intake]

* [preview] The OpenTelemetry logs intake via Elastic is in technical preview.
* The OpenTelemetry logs intake through the [APM Server](/solutions/observability/apm/upstream-opentelemetry-collectors-language-sdks.md#apm-open-telemetry-proxy-apm) is in technical preview. {applies_to}`product: preview`
* The application logs data stream (`app_logs`) has dynamic mapping disabled, which means new fields are not automatically detected and mapped (see issue [#9093](https://github.com/elastic/apm-server/issues/9093)).

::::{note}
Log ingestion through the [EDOT Collector](opentelemetry://reference/edot-collector/index.md) is fully supported.
::::

## OpenTelemetry Line Protocol (OTLP) [apm-open-telemetry-otlp-limitations]

Elastic supports both the [OTLP/gRPC](https://opentelemetry.io/docs/specs/otlp/#otlpgrpc) and [OTLP/HTTP](https://opentelemetry.io/docs/specs/otlp/#otlphttp) protocols with ProtoBuf payloads. Elastic does not yet support JSON encoding for OTLP/HTTP.
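
For illustration, a minimal sketch of the two exporter flavors in an upstream Collector configuration; the endpoint is a placeholder, and the ProtoBuf encoding shown is the exporter default:

```yaml
exporters:
  otlp/elastic:                                  # OTLP/gRPC, ProtoBuf payload
    endpoint: "elastic-apm-server:8200"          # placeholder host and port
  otlphttp/elastic:                              # OTLP/HTTP, ProtoBuf payload
    endpoint: "https://elastic-apm-server:8200"
    encoding: proto                              # JSON encoding is not supported by Elastic
```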
@@ -50,4 +54,4 @@ When using OpenTelemetry with Elastic APM, there are two different implementatio

Using the [tailsamplingprocessor](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/tailsamplingprocessor) in the OpenTelemetry Collector comes with an important limitation. Elastic’s APM backend calculates span and transaction metrics based on the incoming span events. These metrics are accurate for 100% sampling scenarios. In scenarios with probabilistic sampling, Elastic’s APM backend is informed about the sampling rate of spans and can extrapolate throughput metrics from the incoming, partial data. However, with tail-based sampling there is no fixed sampling probability: the rules can be arbitrarily complex, and the OpenTelemetry Collector does not provide sampling probability information to the Elastic backend that could be used to extrapolate the data. Therefore, Elastic APM cannot properly extrapolate throughput and count metrics derived from span events that have been tail-based sampled in the OpenTelemetry Collector, and these derived metrics are likely to be inaccurate.
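
For illustration, a hypothetical `tailsamplingprocessor` configuration shows why: sampling decisions follow rules like these rather than a fixed probability, so there is no rate the Elastic backend could use for extrapolation.

```yaml
processors:
  tail_sampling:
    decision_wait: 10s          # wait for the full trace before deciding
    policies:
      - name: keep-errors       # rule-based, not probability-based
        type: status_code
        status_code:
          status_codes: [ERROR]
      - name: keep-slow-traces
        type: latency
        latency:
          threshold_ms: 500
```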

Therefore, we recommend using Elastic’s native tail-based sampling when integrating with OpenTelemetry.
@@ -13,13 +13,15 @@ products:

# Upstream OpenTelemetry Collectors and language SDKs [apm-open-telemetry-direct]

The {{stack}} natively supports the OpenTelemetry protocol (OTLP). This means trace data and metrics collected from your applications and infrastructure can be sent directly to the {{stack}}.
The {{stack}} natively supports the OpenTelemetry protocol (OTLP). This means logs, metrics, and trace data collected from your applications and infrastructure can be sent directly to the {{stack}}.

* Send data to Elastic from an upstream [OpenTelemetry Collector](/solutions/observability/apm/upstream-opentelemetry-collectors-language-sdks.md#apm-connect-open-telemetry-collector)
* Send data to Elastic from an upstream [OpenTelemetry language SDK](/solutions/observability/apm/upstream-opentelemetry-collectors-language-sdks.md#apm-instrument-apps-otel)

::::{note}
This is one of several approaches you can use to integrate Elastic with OpenTelemetry. To compare approaches and choose the best approach for your use case, refer to [OpenTelemetry](/solutions/observability/apm/use-opentelemetry-with-apm.md).
To compare approaches and choose the best one for your use case, refer to [OpenTelemetry](/solutions/observability/apm/use-opentelemetry-with-apm.md).
::::

::::{important}
The Elastic Distribution of OpenTelemetry Collector (EDOT Collector) includes additional features and configurations to seamlessly integrate with Elastic. Refer to [EDOT compared to upstream OpenTelemetry](opentelemetry://reference/compatibility/edot-vs-upstream.md) for a comparison.
::::

## Send data from an upstream OpenTelemetry Collector [apm-connect-open-telemetry-collector]
@@ -75,13 +77,13 @@ service:
```

1. The receivers, such as the [OTLP receiver](https://github.com/open-telemetry/opentelemetry-collector/tree/main/receiver/otlpreceiver) that forwards data emitted by APM agents, or the [host metrics receiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/hostmetricsreceiver).
2. We recommend using the [Batch processor](https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/batchprocessor/README.md) and the [memory limiter processor](https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/memorylimiterprocessor/README.md). For more information, see [recommended processors](https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/README.md#recommended-processors).
2. Use the [Batch processor](https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/batchprocessor/README.md) and the [memory limiter processor](https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/memorylimiterprocessor/README.md). For more information, see [recommended processors](https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/README.md#recommended-processors).
3. The [debug exporter](https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter/debugexporter) is helpful for troubleshooting, and supports configurable verbosity levels: `basic` (default), `normal`, and `detailed`.
4. Elastic {{observability}} endpoint configuration. APM Server supports a ProtoBuf payload via both the OTLP protocol over gRPC transport [(OTLP/gRPC)](https://opentelemetry.io/docs/specs/otlp/#otlpgrpc) and the OTLP protocol over HTTP transport [(OTLP/HTTP)](https://opentelemetry.io/docs/specs/otlp/#otlphttp). To learn more about these exporters, see the OpenTelemetry Collector documentation: [OTLP/HTTP Exporter](https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter/otlphttpexporter) or [OTLP/gRPC exporter](https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter/otlpexporter). When adding an endpoint to an existing configuration, an optional name component can be added, like `otlp/elastic`, to distinguish endpoints as described in the [OpenTelemetry Collector Configuration Basics](https://opentelemetry.io/docs/collector/configuration/#basics).
5. Hostname and port of the APM Server endpoint. For example, `elastic-apm-server:8200`.
6. Credential for Elastic APM [secret token authorization](/solutions/observability/apm/secret-token.md) (`Authorization: "Bearer a_secret_token"`) or [API key authorization](/solutions/observability/apm/api-keys.md) (`Authorization: "ApiKey an_api_key"`).
7. Environment-specific configuration parameters can be passed in as [environment variables](https://opentelemetry.io/docs/collector/configuration/#environment-variables) (for example, `ELASTIC_APM_SERVER_ENDPOINT` and `ELASTIC_APM_SECRET_TOKEN`).
8. [preview] To send OpenTelemetry logs to {{stack}} version 8.0+, declare a `logs` pipeline.
8. To send OpenTelemetry logs to {{stack}} version 8.0+, declare a `logs` pipeline. {applies_to}`product: preview`

:::
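
Putting these callouts together, a minimal sketch of an upstream Collector configuration for the {{stack}} might look like the following; the endpoint and secret token are placeholders supplied through environment variables:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  memory_limiter:
    check_interval: 1s
    limit_percentage: 75
    spike_limit_percentage: 15
  batch:

exporters:
  debug:
    verbosity: basic
  otlp/elastic:
    endpoint: "${env:ELASTIC_APM_SERVER_ENDPOINT}"   # for example, elastic-apm-server:8200
    headers:
      Authorization: "Bearer ${env:ELASTIC_APM_SECRET_TOKEN}"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [debug, otlp/elastic]
    metrics:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [debug, otlp/elastic]
    logs:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [debug, otlp/elastic]
```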

@@ -133,23 +135,22 @@
5. Hostname and port of the Elastic endpoint. For example, `elastic-apm-server:8200`.
6. Credential for Elastic APM API key authorization (`Authorization: "ApiKey an_api_key"`).
7. Environment-specific configuration parameters can be passed in as [environment variables](https://opentelemetry.io/docs/collector/configuration/#configuration-environment-variables) (for example, `ELASTIC_APM_SERVER_ENDPOINT` and `ELASTIC_APM_API_KEY`).
8. [preview] To send OpenTelemetry logs to your project, declare a `logs` pipeline.
8. To send OpenTelemetry logs to your project, declare a `logs` pipeline. {applies_to}`product: preview`

:::
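
For this variant, a minimal exporter sketch with a hypothetical endpoint and API key passed in as environment variables might look like:

```yaml
exporters:
  otlp/elastic:
    endpoint: "${env:ELASTIC_APM_SERVER_ENDPOINT}"   # placeholder Elastic endpoint
    headers:
      Authorization: "ApiKey ${env:ELASTIC_APM_API_KEY}"
```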

::::

You’re now ready to export traces and metrics from your services and applications.

::::{tip}
When using the OpenTelemetry Collector, you should always prefer sending data via the [`OTLP` exporter](https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter/otlphttpexporter). Using other methods, like the [`elasticsearch` exporter](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/elasticsearchexporter), will bypass all of the validation and data processing that Elastic performs. In addition, your data will not be viewable in your Observability project if you use the `elasticsearch` exporter.
::::{important}
When using the OpenTelemetry Collector, send data through the [`OTLP` exporter](https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter/otlphttpexporter). Using other methods, like the [`elasticsearch` exporter](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/elasticsearchexporter), bypasses all of the validation and data processing that Elastic performs. In addition, your data will not be viewable in your Observability project if you use the `elasticsearch` exporter.
::::

## Send data from an upstream OpenTelemetry SDK [apm-instrument-apps-otel]

::::{note}
This document outlines how to send data directly from an upstream OpenTelemetry SDK to Elastic, which is appropriate when getting started. However, in many cases you should use the OpenTelemetry SDK to send data to an OpenTelemetry Collector that processes and exports data to Elastic. Read more about when and how to use a collector in the [OpenTelemetry documentation](https://opentelemetry.io/docs/collector/#when-to-use-a-collector).

The following instructions show how to send data directly from an upstream OpenTelemetry SDK to Elastic, which is appropriate when getting started. However, sending data from an OpenTelemetry SDK to the OpenTelemetry Collector is preferred, as the Collector processes and exports data to Elastic. Read more about when and how to use a collector in the [OpenTelemetry documentation](https://opentelemetry.io/docs/collector/#when-to-use-a-collector).
::::

To export traces and metrics to Elastic, instrument your services and applications with the OpenTelemetry API, SDK, or both. For example, if you are a Java developer, you need to instrument your Java app with the [OpenTelemetry agent for Java](https://github.com/open-telemetry/opentelemetry-java-instrumentation). See the [OpenTelemetry Instrumentation guides](https://opentelemetry.io/docs/instrumentation/) to download the OpenTelemetry agent or SDK for your language.
@@ -173,7 +174,7 @@ java -javaagent:/path/to/opentelemetry-javaagent-all.jar \
com.mycompany.checkout.CheckoutServiceServer
```

1. [preview] The OpenTelemetry logs intake via APM Server is currently in technical preview.
1. The OpenTelemetry logs intake via APM Server is currently in technical preview. {applies_to}`product: preview`

`OTEL_RESOURCE_ATTRIBUTES`
: Fields that describe the service and the environment that the service runs in. See [attributes](/solutions/observability/apm/attributes.md) for more information.
@@ -214,7 +215,7 @@ java -javaagent:/path/to/opentelemetry-javaagent-all.jar \
com.mycompany.checkout.CheckoutServiceServer
```

1. [preview] The OpenTelemetry logs intake via Elastic is currently in technical preview.
1. The OpenTelemetry logs intake via Elastic is currently in technical preview. {applies_to}`product: preview`

`OTEL_RESOURCE_ATTRIBUTES`
: Fields that describe the service and the environment that the service runs in. See [attributes](/solutions/observability/apm/attributes.md) for more information.
@@ -246,11 +247,15 @@ You are now ready to collect traces and [metrics](/solutions/observability/apm/c

## Proxy requests to APM Server [apm-open-telemetry-proxy-apm]

APM Server supports both the [OTLP/gRPC](https://opentelemetry.io/docs/specs/otlp/#otlpgrpc) and [OTLP/HTTP](https://opentelemetry.io/docs/specs/otlp/#otlphttp) protocol on the same port as Elastic APM agent requests. For ease of setup, we recommend using OTLP/HTTP when proxying or load balancing requests to Elastic.
```{applies_to}
product: preview
```

APM Server supports both the [OTLP/gRPC](https://opentelemetry.io/docs/specs/otlp/#otlpgrpc) and [OTLP/HTTP](https://opentelemetry.io/docs/specs/otlp/#otlphttp) protocols on the same port as Elastic APM agent requests. For ease of setup, use OTLP/HTTP when proxying or load balancing requests to Elastic.
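
For example, an upstream Collector that exports over OTLP/HTTP through a proxy or load balancer might use a configuration like this sketch; the proxy hostname and token are placeholders:

```yaml
exporters:
  otlphttp/elastic:
    endpoint: "https://my-proxy.example.com"   # placeholder proxy or load balancer address
    headers:
      Authorization: "Bearer ${env:ELASTIC_APM_SECRET_TOKEN}"
```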

If you use the OTLP/gRPC protocol, requests to Elastic must use either HTTP/2 over TLS or HTTP/2 Cleartext (H2C). No matter which protocol is used, OTLP/gRPC requests will have the header: `"Content-Type: application/grpc"`.

When using a layer 7 (L7) proxy like AWS ALB, requests must be proxied in a way that ensures requests to Elastic follow the rules outlined above. For example, with ALB you can create rules to select an alternative backend protocol based on the headers of requests coming into ALB. In this example, you’d select the gRPC protocol when the `"Content-Type: application/grpc"` header exists on a request.
When using a layer 7 (L7) proxy like AWS ALB, proxy the requests in a way that ensures they follow the rules outlined previously. For example, with ALB you can create rules to select an alternative backend protocol based on the headers of requests coming into ALB. In this example, you’d select the gRPC protocol when the `"Content-Type: application/grpc"` header exists on a request.

Many L7 load balancers handle HTTP and gRPC traffic separately and rely on explicitly defined routes and service configurations to correctly proxy requests. Since APM Server serves both protocols on the same port, it may not be compatible with some L7 load balancers. For example, to work around this issue in [Ingress NGINX Controller for Kubernetes](https://github.com/kubernetes/ingress-nginx), either:

@@ -263,7 +268,6 @@ For more information on how to configure an AWS ALB to support gRPC, see this AW

For more information on how APM Server services gRPC requests, see [Muxing gRPC and HTTP/1.1](https://github.com/elastic/apm-server/blob/main/dev_docs/otel.md#muxing-grpc-and-http11).


:::{include} _snippets/apm-server-vs-mis.md
:::
