Replies: 1 comment
-
I'm currently running into the same problem for long-lasting streaming RPCs in a manually instrumented system, so I had a look at the interceptor implementation. However, I did not check whether this also applies to the Java agent.
TL;DR: it's likely impossible to change this behaviour; I suggest potential workarounds as part of the answer to the second question. From how the interceptor is designed, I would say the behaviour is at least partially expected, and it might be impossible to implement what you're likely looking for: the interceptor creates a single span for the entire streaming RPC and propagates its context once via the request headers, so every message on the stream ends up attributed to that one trace.
This implementation seems fine for short-lived streaming RPCs where the individual messages don't represent unrelated events/updates, so for that type of streaming RPC I'd say the current implementation works as expected. However, it is far from ideal for any RPC where individual messages represent separate sub-calls (e.g. your example). In my opinion, any use case where the streaming RPC roughly represents (reversed) sub-RPCs would profit from a separate span per message. If the messages are caused by an external event, I would even suggest not making the per-message spans children of the streaming RPC span, as it's more helpful to associate them with the span of the external event that caused the update.

However, it might be impossible to implement per-message spans in a general-purpose instrumentation library, because gRPC over HTTP/2 only sends the headers once: each request/response starts with the headers, followed by an arbitrary number of length-prefixed messages.
In other words, header-based context propagation can't be applied to individual messages. To circumvent this limitation, the instrumentation would need to replace the actual message with a wrapper message that carries the required context information. However, this can't be done by a general-purpose instrumentation library, as it actively changes the public API exposed by the gRPC service.

Even if there were a way to transfer per-message context information, one big problem with per-message spans is determining when to end them: without a way to associate a response with the original message (in bidirectional streaming RPCs serving as a fancy "Protobuf over TCP" implementation), or if no response is sent at all (e.g. in server-side streaming RPCs), there's no clear definition of an end that the implementation could use. In your use case, the per-message span created by the server would end immediately after the message was sent. In bidirectional streaming RPCs where a response is expected, I can't think of a good definition of a span end at all that doesn't involve manually instrumenting the entire RPC.
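To make the wrapper-message idea above more concrete, here is a minimal sketch using the OpenTelemetry Java API. It assumes the service owner adds something like a `map<string, string> trace_context` field to the streamed message; the field name and the `captureContext`/`restoreContext` helpers are hypothetical, not part of any existing instrumentation:

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.context.Context;
import io.opentelemetry.context.propagation.TextMapGetter;
import io.opentelemetry.context.propagation.TextMapSetter;

import java.util.HashMap;
import java.util.Map;

public final class MessageContextPropagation {

  // Server side: capture the current trace context into a plain map. The result would be
  // stored in the (hypothetical) trace_context field of the streamed Protobuf message.
  static Map<String, String> captureContext() {
    Map<String, String> carrier = new HashMap<>();
    TextMapSetter<Map<String, String>> setter = Map::put;
    GlobalOpenTelemetry.getPropagators()
        .getTextMapPropagator()
        .inject(Context.current(), carrier, setter);
    return carrier;
  }

  // Client side: rebuild a Context from the map received with the message; a per-message
  // span can then be started with this context as its parent.
  static Context restoreContext(Map<String, String> carrier) {
    TextMapGetter<Map<String, String>> getter =
        new TextMapGetter<Map<String, String>>() {
          @Override
          public Iterable<String> keys(Map<String, String> map) {
            return map.keySet();
          }

          @Override
          public String get(Map<String, String> map, String key) {
            return map == null ? null : map.get(key);
          }
        };
    return GlobalOpenTelemetry.getPropagators()
        .getTextMapPropagator()
        .extract(Context.root(), carrier, getter);
  }
}
```

Which keys end up in the map depends on the configured propagators; with the default W3C trace context propagator it would essentially carry `traceparent`/`tracestate` entries.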
If I interpret the interceptor implementation correctly, no, there's no way to change this. For my use case, I think I'll change the protocol for my streaming RPCs to include OTel context information. However, this might not be an option when using the Java agent or if you don't own the gRPC service definition. As a temporary workaround, you could consider explicitly creating a new (root) span in the client for each received message. Assuming such a span wouldn't last multiple days, it could improve your application's observability at least a little.
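A rough sketch of that workaround, assuming you process each received message in a `StreamObserver.onNext` callback; the `Update` type, the span name, and `handleUpdate` are placeholders for your own message type and processing code:

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;

public final class PerMessageRootSpan {

  private static final Tracer tracer =
      GlobalOpenTelemetry.getTracer("manual-stream-instrumentation");

  /** Placeholder for the message type streamed by the server. */
  static final class Update {}

  static void onNext(Update update) {
    Span span = tracer.spanBuilder("process-update")
        // Start a fresh root span per message instead of joining the multi-day RPC trace...
        .setNoParent()
        // ...but keep a link to whatever span is current here (e.g. the streaming RPC span
        // created by the agent) so the traces stay loosely connected.
        .addLink(Span.current().getSpanContext())
        .startSpan();
    try (Scope ignored = span.makeCurrent()) {
      handleUpdate(update);
    } finally {
      span.end();
    }
  }

  static void handleUpdate(Update update) {
    // existing message handling goes here
  }
}
```

Whether the agent's streaming RPC span is actually current inside `onNext` depends on how the client is instrumented, so treat the link as optional.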
-
Hi,
I have more of a theoretical question. Let's say I have a server and a client application. The server provides a gRPC service which has a method that uses a streaming response. In our case the response can go on for many days or even weeks; it's like a push communication for changes. Both applications are instrumented via the latest OTel Java agent.
What we see is that, from the start of the application onwards, the traceId on the client is the same for all received stream messages. It doesn't matter whether it's 1 minute after boot or 2 days. There is no custom instrumentation in place (in either client or server).
Two questions:
- Is this the expected behaviour?
- Is there a way to change it so that the stream messages don't all end up in the same trace?
Used versions: