Update docs/open-telemetry.asciidoc
Co-authored-by: István Zoltán Szabó <[email protected]>
Minor change to docs
[[opentelemetry]]
=== Using OpenTelemetry

You can use https://opentelemetry.io/[OpenTelemetry] to monitor the performance and behavior of your {es} requests through the Ruby Client.
The Ruby Client comes with built-in OpenTelemetry instrumentation that emits https://www.elastic.co/guide/en/apm/guide/current/apm-distributed-tracing.html[distributed tracing spans] by default.
With that, applications https://opentelemetry.io/docs/instrumentation/ruby/manual/[instrumented with OpenTelemetry] or using the https://opentelemetry.io/docs/instrumentation/ruby/automatic/[OpenTelemetry Ruby SDK] are inherently enriched with additional spans that contain insightful information about the execution of the {es} requests.
The native instrumentation in the Ruby Client follows the https://opentelemetry.io/docs/specs/semconv/database/elasticsearch/[OpenTelemetry Semantic Conventions for {es}]. In particular, the instrumentation in the client covers the logical layer of {es} requests. A single span per request is created that is processed by the service through the Ruby Client. The following image shows a trace that records the handling of two different {es} requests: a `ping` request and a `search` request.

[role="screenshot"]
image::images/otel-waterfall-without-http.png[alt="Distributed trace with Elasticsearch spans",align="center"]
Usually, OpenTelemetry auto-instrumentation modules come with instrumentation support for HTTP-level communication. In this case, in addition to the logical {es} client requests, spans will be captured for the physical HTTP requests emitted by the client. The following image shows a trace with both, {es} spans (in blue) and the corresponding HTTP-level spans (in red):

[role="screenshot"]
image::images/otel-waterfall-with-http.png[alt="Distributed trace with Elasticsearch spans",align="center"]
Advanced Ruby Client behavior such as nodes round-robin and request retries are revealed through the combination of logical {es} spans and the physical HTTP spans. The following example shows a `search` request in a scenario with two nodes:

[role="screenshot"]
image::images/otel-waterfall-retry.png[alt="Distributed trace with Elasticsearch spans",align="center"]
The first node is unavailable and results in an HTTP error, while the retry to the second node succeeds. Both HTTP requests are subsumed by the logical {es} request span (in blue).

[discrete]
==== Set up the OpenTelemetry instrumentation
When using the https://opentelemetry.io/docs/instrumentation/ruby/manual[OpenTelemetry Ruby SDK manually] or using the https://opentelemetry.io/docs/instrumentation/ruby/automatic/[OpenTelemetry Ruby Auto-Instrumentations], the Ruby Client's OpenTelemetry instrumentation is enabled by default and uses the global OpenTelemetry SDK with the global tracer provider. You can provide a tracer provider via the Ruby Client configuration option `opentelemetry_tracer_provider` when instantiating the client. This is sometimes useful for testing or other specific use cases.

...

[discrete]
===== Capture search request bodies
By default, the built-in OpenTelemetry instrumentation does not capture request bodies for data privacy reasons. You can use this option to enable capturing of search queries from the request bodies of {es} search requests if you want to gather this information regardless. The options are to capture the raw search query, sanitize the query with a default list of sensitive keys, or not capture it at all.
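
The sanitization option can be pictured with a short sketch — this is not the client's actual implementation, and the key list and placeholder below are assumptions for illustration:

```ruby
# Illustration only: recursively redact values of sensitive-looking keys
# in a search request body (the key pattern and placeholder are assumed,
# not the client's actual defaults).
SENSITIVE_KEYS = /password|secret|token|session|key/i

def sanitize(value)
  case value
  when Hash
    value.to_h { |k, v| [k, k.to_s.match?(SENSITIVE_KEYS) ? 'REDACTED' : sanitize(v)] }
  when Array
    value.map { |v| sanitize(v) }
  else
    value
  end
end

body = { query: { match: { password: 'hunter2', title: 'ruby' } } }
sanitize(body) # the password value becomes "REDACTED"; title is preserved
```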

...

The OpenTelemetry instrumentation (as any other monitoring approach) may come with a slight overhead on CPU, memory, and/or latency. The overhead may only occur when the instrumentation is enabled (default) and an OpenTelemetry SDK is active in the target application. When the instrumentation is disabled or no OpenTelemetry SDK is active within the target application, monitoring overhead is not expected when using the client.

Even in cases where the instrumentation is enabled and is actively used (by an OpenTelemetry SDK), the overhead is minimal and negligible in the vast majority of cases. In edge cases where there is a noticeable overhead, the <<opentelemetry-config-enable,instrumentation can be explicitly disabled>> to eliminate any potential impact on performance.