
Conversation

@kamilkisiela kamilkisiela commented Dec 2, 2025

This pull request adds comprehensive OpenTelemetry-based tracing.
It introduces a set of GraphQL-specific spans and connects the Router to Hive Console's Tracing as well as other OpenTelemetry-compatible backends (via OTLP exporters).

Goals of this Pull Request:

  • Introduce OpenTelemetry tracing
  • Integrate with Hive Console's Tracing
  • Support OTLP exporters (gRPC and HTTP)
  • Trace propagation (trace context, baggage, Jaeger, B3/Zipkin)

Things out of scope for this Pull Request:

  • No debug-level spans (we can add them when needed)
  • Customization and filtering of spans (future Pull Request)
  • Metrics (after the point above is done)
  • Logs (after the work on polishing the logs is done)

Code changes

Almost all code responsible for tracing and setting up OpenTelemetry lives in the internal crate's telemetry directory.
There you will find span definitions, exporter setup, the Hive Console Tracing integration and related pieces.

Every span created by Hive Router has a hive.kind attribute that describes what kind of action the span represents. This is useful for filtering spans without relying on span names, which can change over time.

Performance costs?

To minimize the performance impact of tracing, spans are only created when tracing is enabled. All the computation needed to create attributes is likewise only done when tracing is enabled. I spent some time making sure the overhead is zero or near-zero when tracing is disabled.

With tracing enabled, the overhead is still as low as I could make it: allocations are avoided, attribute names are static strings, and no complex computation happens when creating spans and attributes. There's of course some cost (TODO: measure it once again and get the numbers).
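As a rough illustration of that pattern (the span and field names below are hypothetical, not the Router's actual API), attribute computation is gated on whether the span is actually recorded:

```rust
use tracing::{info_span, Span};

// Hypothetical sketch: expensive attribute computation is only done
// when tracing is enabled, so a disabled tracer skips the work.
fn start_parse_span(query: &str) -> Span {
    // Creating the span itself is cheap when no subscriber is interested.
    let span = info_span!("graphql.parse", graphql.document.size = tracing::field::Empty);
    if !span.is_disabled() {
        // Only computed when a subscriber actually records this span.
        span.record("graphql.document.size", query.len() as u64);
    }
    span
}
```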

How are spans created and used?

Spans are created using tracing::info_span!. Every "phase" has its own span and a struct that helps with creating and managing it. This is intentional: it keeps all span-related logic in one place, makes it easy to see what attributes are set on each span and what methods are available to manipulate it, and avoids polluting the rest of the codebase with span-related logic. Imagine hunting for the parsing span and its attributes if they were not centralized in one place.

For example, the parsing phase has a GraphQLParseSpan struct that creates the span and exposes a record_cache_hit method to record whether parsing was a cache hit or miss.

The tracing crate requires attribute names to be static strings, predefined at span creation, for performance reasons.
That's why all the standard attributes are declared up front (initially empty) and filled in with data later on, for example via the previously mentioned record_cache_hit method that populates the cache.hit attribute.
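A minimal sketch of how such a span struct could look, assuming the names mentioned above (GraphQLParseSpan, hive.kind, cache.hit); the real Router implementation may differ:

```rust
use tracing::{info_span, Span};

// A minimal sketch of a per-phase span wrapper; not the exact Router code.
pub struct GraphQLParseSpan {
    span: Span,
}

impl GraphQLParseSpan {
    pub fn new() -> Self {
        // All attributes are declared up front (empty where needed),
        // because the tracing crate expects the field set at creation time.
        let span = info_span!(
            "graphql.parse",
            hive.kind = "graphql.parse",
            cache.hit = tracing::field::Empty,
        );
        Self { span }
    }

    /// Fill in the pre-declared cache.hit attribute later on.
    pub fn record_cache_hit(&self, hit: bool) {
        self.span.record("cache.hit", hit);
    }

    pub fn span(&self) -> &Span {
        &self.span
    }
}
```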

You will notice that attribute names are defined as constants in the telemetry::attributes module, but are not used when the spans are created. The reason is that the tracing crate requires attribute names to be static strings, so we can't use variables there.
I stored them as constants to avoid typos and to have a single source of truth for attribute names whenever we need to refer to them later (for example when filling them in with data, or transforming them). This gives us some degree of type safety.
To bridge the gap between the static strings used at span creation and the constants defined in the attributes module, I added a set of tests that verify that every attribute defined in the attributes module is actually used when creating spans. This way we can be sure there are no typos and that all attributes are defined and match.
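For illustration only, a hedged sketch of such a constant and a matching check (assuming tracing_subscriber as a dev-dependency; the Router's actual module layout and tests may differ):

```rust
pub mod attributes {
    /// Single source of truth for the attribute name.
    pub const CACHE_HIT: &str = "cache.hit";
}

#[cfg(test)]
mod tests {
    use super::attributes;

    #[test]
    fn parse_span_declares_cache_hit() {
        // A subscriber must be installed so the span is enabled and
        // exposes its metadata for inspection.
        let subscriber = tracing_subscriber::fmt().finish();
        tracing::subscriber::with_default(subscriber, || {
            let span = tracing::info_span!("graphql.parse", cache.hit = tracing::field::Empty);
            let declared = span
                .metadata()
                .expect("span should be enabled")
                .fields()
                .iter()
                .any(|field| field.name() == attributes::CACHE_HIT);
            assert!(declared, "cache.hit must be declared on the parse span");
        });
    }
}
```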

Spans are started and stopped automatically, either via the guard pattern (calling span.enter() returns a guard that exits the span when dropped) or by instrumenting futures with the span (using the instrument method).
Spans are created by calling the ::new() method on the span struct, which returns the struct instance. The span itself is stored inside the struct.
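Putting it together, usage looks roughly like this (a sketch building on the hypothetical GraphQLParseSpan above, not the Router's actual call sites):

```rust
use tracing::Instrument;

async fn handle_request() {
    // Synchronous work: guard pattern. The span is exited when _guard
    // is dropped at the end of the block. The guard must not be held
    // across an .await point, which is why async work is instrumented
    // instead.
    let parse_span = GraphQLParseSpan::new();
    {
        let _guard = parse_span.span().enter();
        // ... parse the document here ...
        parse_span.record_cache_hit(false);
    }

    // Asynchronous work: instrument the future with the span so it is
    // entered on every poll.
    let subgraph_span = tracing::info_span!("subgraph.execute");
    async {
        // ... call the subgraph here ...
    }
    .instrument(subgraph_span)
    .await;
}
```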

What exporters are supported?

Currently, two exporters are supported:

  • Hive Console Tracing exporter
  • OTLP exporter (gRPC and HTTP)

The OTLP exporter can be configured to send traces to any OTLP-compatible backend, like Jaeger, Zipkin, Honeycomb, New Relic, Datadog and many others.

I tried to make everything configurable, so you can enable/disable specific exporters and set endpoints, headers and other options via environment variables or expressions.
Even the collection of spans can be configured: batching options can be set so users can tune tracing performance to their needs.
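As a rough sketch of the shape of that configuration surface (all field names here are illustrative only, not the Router's actual config schema):

```rust
use std::collections::HashMap;
use std::time::Duration;

// Illustrative-only configuration types for tracing; the real Router
// config keys and defaults may differ.
struct TracingConfig {
    enabled: bool,
    hive_console_exporter: Option<HiveConsoleExporterConfig>,
    otlp_exporters: Vec<OtlpExporterConfig>,
    batching: BatchingConfig,
}

struct HiveConsoleExporterConfig {
    // e.g. resolved from an environment variable or expression
    access_token: String,
}

struct OtlpExporterConfig {
    endpoint: String,                 // e.g. "http://localhost:4317"
    protocol: OtlpProtocol,           // gRPC or HTTP
    headers: HashMap<String, String>,
}

enum OtlpProtocol {
    Grpc,
    Http,
}

struct BatchingConfig {
    max_queue_size: usize,
    max_export_batch_size: usize,
    scheduled_delay: Duration,
}
```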

Semantic conventions for HTTP

There's a standard set of HTTP semantic conventions defined by OpenTelemetry for attributes, but not every tool supports it yet.
In Hive Router, we support both the deprecated attributes and the stable ones, so that we can interoperate with more tools.

This is also configurable, so users can choose which set of attributes they want to use. We have three modes:

  • spec_compliant: use only stable attributes
  • deprecated: use only deprecated attributes
  • spec_and_deprecated: use both sets of attributes
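For context, here are a few of the attribute pairs involved and how the modes map onto them, sketched in Rust (the pairs follow the OpenTelemetry HTTP semantic conventions; the Router's actual attribute set may be broader):

```rust
/// Deprecated HTTP attribute names and their stable counterparts, per
/// OpenTelemetry semantic conventions (illustrative subset).
const HTTP_ATTRIBUTE_PAIRS: &[(&str, &str)] = &[
    // (deprecated, stable)
    ("http.method", "http.request.method"),
    ("http.status_code", "http.response.status_code"),
    ("http.url", "url.full"),
];

enum SemconvMode {
    SpecCompliant,
    Deprecated,
    SpecAndDeprecated,
}

/// Which attribute names to emit for a given pair, depending on the mode.
fn names_to_emit(mode: &SemconvMode, pair: (&'static str, &'static str)) -> Vec<&'static str> {
    match mode {
        SemconvMode::SpecCompliant => vec![pair.1],
        SemconvMode::Deprecated => vec![pair.0],
        SemconvMode::SpecAndDeprecated => vec![pair.0, pair.1],
    }
}
```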

How are deduplicated requests traced?

The http.inflight span is created for each outgoing HTTP request to subgraphs when request deduplication is enabled. Multiple concurrent identical requests will create separate http.inflight spans in different traces, but only the leader executes the HTTP call.

Leader role (hive.inflight.role="leader"):

  • First request to win the deduplication race
  • Creates http.client as a child span to execute the actual HTTP request
  • All joiners wait for this leader's result

Joiner role (hive.inflight.role="joiner"):

  • Subsequent identical concurrent requests
  • Does not create http.client (that span exists in the leader's trace)
  • Waits for the leader's HTTP response
  • References leader via hive.inflight.key (same fingerprint value)

Cross-trace correlation: All inflight spans (leader + joiners) for the same deduplicated request share the same hive.inflight.key value, allowing observability tools to correlate them across different traces. In the future we may introduce a span link.

Leader trace (first/winning request):

GraphQL Subgraph Operation
  └─ http.inflight (role="leader", key=12345)
      └─ http.client  <- executes the actual HTTP request

Joiner trace (deduplicated request):

GraphQL Subgraph Operation
  └─ http.inflight (role="joiner", key=12345)
      └─ [no http.client - it's in the leader's trace]

Changes in Hive SDK

Hive Console's Tracing requires the operation hash to be sent as an attribute, and it has to be the same hash as the one computed by the Hive SDK (Usage Reporting). I made changes to the Hive SDK to expose a function that computes the operation hash, and used it in Hive Router to set the hive.operation.hash attribute on the graphql.operation span.
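Conceptually it boils down to something like the following sketch (compute_operation_hash is a stand-in for the function exposed from the Hive SDK; the real name and signature may differ):

```rust
// Hypothetical placeholder for the Hive SDK export; the real hashing
// logic lives in the SDK.
fn compute_operation_hash(_document: &str, _operation_name: Option<&str>) -> String {
    unimplemented!("provided by the Hive SDK")
}

fn record_operation_hash(
    operation_span: &tracing::Span,
    document: &str,
    operation_name: Option<&str>,
) {
    let hash = compute_operation_hash(document, operation_name);
    // Fill the pre-declared hive.operation.hash attribute on the
    // graphql.operation span.
    operation_span.record("hive.operation.hash", hash.as_str());
}
```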

I noticed that the Hive SDK relies on graphql_parser to minify and normalize the query before hashing it, but this is quite inefficient (we do Document-to-String conversions, etc.). I hope to reimplement it in a more efficient way in the near future, but that requires forking and owning the graphql_parser crate.

@kamilkisiela kamilkisiela force-pushed the kamil-tracing branch 2 times, most recently from d35c609 to 761d5d5 on December 11, 2025 11:36
github-actions bot commented Dec 11, 2025

k6-benchmark results

     ✓ response code was 200
     ✓ no graphql errors
     ✓ valid response structure

     █ setup

     checks.........................: 100.00% ✓ 215262      ✗ 0    
     data_received..................: 6.3 GB  209 MB/s
     data_sent......................: 84 MB   2.8 MB/s
     http_req_blocked...............: avg=3.49µs   min=661ns  med=1.7µs   max=5.71ms   p(90)=2.34µs  p(95)=2.66µs  
     http_req_connecting............: avg=1.1µs    min=0s     med=0s      max=2.99ms   p(90)=0s      p(95)=0s      
     http_req_duration..............: avg=20.42ms  min=2.34ms med=19.52ms max=228.35ms p(90)=27.65ms p(95)=30.73ms 
       { expected_response:true }...: avg=20.42ms  min=2.34ms med=19.52ms max=228.35ms p(90)=27.65ms p(95)=30.73ms 
     http_req_failed................: 0.00%   ✓ 0           ✗ 71774
     http_req_receiving.............: avg=144.81µs min=25.1µs med=38.29µs max=165.6ms  p(90)=82.99µs p(95)=371.74µs
     http_req_sending...............: avg=23.29µs  min=5.14µs med=10.42µs max=25.21ms  p(90)=15µs    p(95)=25.42µs 
     http_req_tls_handshaking.......: avg=0s       min=0s     med=0s      max=0s       p(90)=0s      p(95)=0s      
     http_req_waiting...............: avg=20.25ms  min=2.29ms med=19.39ms max=66.69ms  p(90)=27.37ms p(95)=30.45ms 
     http_reqs......................: 71774   2386.732845/s
     iteration_duration.............: avg=20.9ms   min=5.4ms  med=19.87ms max=274.37ms p(90)=28.09ms p(95)=31.17ms 
     iterations.....................: 71754   2386.067776/s
     vus............................: 50      min=50        max=50 
     vus_max........................: 50      min=50        max=50 

github-actions bot commented Dec 11, 2025

🐋 This PR was built and pushed to the following Docker images:

Image Names: ghcr.io/graphql-hive/router

Platforms: linux/amd64,linux/arm64

Image Tags: ghcr.io/graphql-hive/router:pr-598 ghcr.io/graphql-hive/router:sha-ecaa441

Docker metadata
{
"buildx.build.ref": "builder-da6e5150-edcb-4fd3-a096-a2e287411dc4/builder-da6e5150-edcb-4fd3-a096-a2e287411dc40/j98wsrj2j2ovzux3mmp3mx360",
"containerimage.descriptor": {
  "mediaType": "application/vnd.oci.image.index.v1+json",
  "digest": "sha256:addd9dd42c3df1e28e933374b8bc928c88c0777acc6b0c8013a8c1db06f30479",
  "size": 1609
},
"containerimage.digest": "sha256:addd9dd42c3df1e28e933374b8bc928c88c0777acc6b0c8013a8c1db06f30479",
"image.name": "ghcr.io/graphql-hive/router:pr-598,ghcr.io/graphql-hive/router:sha-ecaa441"
}

Use `CARGO_PKG_VERSION` for the instrumentation scope version instead of
a hardcoded "v0". Also, rename the scope to "graphql-hive.router" for
better clarity.

This change adds the `#[inline]` attribute to several functions within
the tracing module. This is a micro-optimization that may improve
performance by allowing the compiler to reduce function call overhead.
@kamilkisiela kamilkisiela marked this pull request as ready for review January 8, 2026 15:31
@kamilkisiela kamilkisiela changed the title from Telemetry to Tracing with OpenTelemetry Jan 8, 2026
@kamilkisiela
Contributor Author

/gemini review


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This is an impressive and comprehensive pull request that introduces OpenTelemetry-based tracing to the router. The changes are well-structured, with clear separation of concerns in the new telemetry module. The addition of custom spans for GraphQL pipeline stages, HTTP requests, and the thoughtful implementation of context propagation and semantic convention compatibility are excellent.

My main feedback is a high-severity issue regarding the use of info_span! for instrumentation on hot paths, which goes against the repository's performance-first style guide. I've left a detailed comment with a suggestion to address this. Overall, this is a fantastic contribution that will significantly improve the observability of the router.

