
Commit c88a62d

fix(server/mcp): scope defer span.End inside loop iteration (googleapis#2558)
## Description

In `readInputStream()`, `defer span.End()` is called inside a `for` loop. Since `defer` schedules execution for when the enclosing **function** returns (not when the current loop iteration ends), every span created per message accumulates without ever being ended for the lifetime of the stdio session. For a long-lived stdio server, this means:

- **Memory leak**: span objects accumulate indefinitely
- **Incorrect traces**: spans are never properly closed, producing misleading telemetry data

## Changes

Wrap the per-message processing logic in an immediately invoked function expression (IIFE), so `defer span.End()` correctly fires at the end of each iteration. This follows the same pattern already used in `InitializeConfigs` (`server.go` lines 85-98) for scoping spans inside loops.

## Test

All existing tests pass. The `TestStdioSession` test exercises `readLine` and `write` on the stdio session. The `readInputStream` loop is covered by the existing MCP endpoint tests that exercise the full message-processing pipeline.

Fixes googleapis#2549
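The `defer`-in-loop behavior described above can be demonstrated with a minimal, self-contained sketch. The names `leaky`, `scoped`, `end`, and the `ended` counter are illustrative stand-ins (they are not from the repository); `end` plays the role of `span.End()`:

```go
package main

import "fmt"

// ended counts how many simulated "spans" have been closed.
var ended int

// end stands in for span.End().
func end() { ended++ }

// leaky mirrors the bug: each defer is scheduled for when leaky
// returns, so nothing is closed while the loop is still running.
func leaky(n int) int {
	for i := 0; i < n; i++ {
		defer end() // fires only after leaky returns, not per iteration
	}
	return ended // still 0: no deferred call has run yet
}

// scoped mirrors the fix: the loop body runs inside an immediately
// invoked function, so each defer fires at the end of its iteration.
func scoped(n int) int {
	for i := 0; i < n; i++ {
		func() {
			defer end() // fires when this anonymous function returns
		}()
	}
	return ended
}

func main() {
	ended = 0
	fmt.Println(leaky(3)) // 0: all three defers are still pending
	ended = 0
	fmt.Println(scoped(3)) // 3: each iteration closed its own "span"
}
```

In the real server the pending defers also pin each span (and the message context it references) in memory, which is why a long-lived stdio session grows without bound before this fix.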
1 parent 81253a0 commit c88a62d

File tree

1 file changed: +27 −21 lines


internal/server/mcp.go

Lines changed: 27 additions & 21 deletions
```diff
@@ -195,31 +195,37 @@ func (s *stdioSession) readInputStream(ctx context.Context) error {
 			}
 			return err
 		}
-		// This ensures the transport span becomes a child of the client span
-		msgCtx := extractTraceContext(ctx, []byte(line))
 
-		// Create span for STDIO transport
-		msgCtx, span := s.server.instrumentation.Tracer.Start(msgCtx, "toolbox/server/mcp/stdio",
-			trace.WithSpanKind(trace.SpanKindServer),
-		)
-		defer span.End()
+		if err := func() error {
+			// This ensures the transport span becomes a child of the client span
+			msgCtx := extractTraceContext(ctx, []byte(line))
 
-		v, res, err := processMcpMessage(msgCtx, []byte(line), s.server, s.protocol, "", "", nil, "")
-		if err != nil {
-			// errors during the processing of message will generate a valid MCP Error response.
-			// server can continue to run.
-			s.server.logger.ErrorContext(msgCtx, err.Error())
-			span.SetStatus(codes.Error, err.Error())
-		}
+			// Create span for STDIO transport
+			msgCtx, span := s.server.instrumentation.Tracer.Start(msgCtx, "toolbox/server/mcp/stdio",
+				trace.WithSpanKind(trace.SpanKindServer),
+			)
+			defer span.End()
 
-		if v != "" {
-			s.protocol = v
-		}
-		// no responses for notifications
-		if res != nil {
-			if err = s.write(msgCtx, res); err != nil {
-				return err
+			v, res, err := processMcpMessage(msgCtx, []byte(line), s.server, s.protocol, "", "", nil, "")
+			if err != nil {
+				// errors during the processing of message will generate a valid MCP Error response.
+				// server can continue to run.
+				s.server.logger.ErrorContext(msgCtx, err.Error())
+				span.SetStatus(codes.Error, err.Error())
 			}
+
+			if v != "" {
+				s.protocol = v
+			}
+			// no responses for notifications
+			if res != nil {
+				if err = s.write(msgCtx, res); err != nil {
+					return err
+				}
+			}
+			return nil
+		}(); err != nil {
+			return err
 		}
 	}
 }
```

0 commit comments
