ref(langchain): Remove agent name from on_llm_start and clean up tests

30e22ba
feat(langchain): Change LLM span operation to generate_text #5705

@sentry/warden / warden: code-review completed Mar 20, 2026 in 1m 6s

1 issue

code-review: Found 1 issue (1 medium)

Medium

GEN_AI_AGENT_NAME not captured in on_llm_start despite PR description - `sentry_sdk/integrations/langchain.py:381`

The PR description states that "when an LLM is invoked within an agent context, the agent name is now captured on the span via GEN_AI_AGENT_NAME", but the `on_llm_start` method does not call `_get_current_agent()` or set `SPANDATA.GEN_AI_AGENT_NAME`. The parallel method `on_chat_model_start` (lines 463-465) does capture the agent name correctly. This means LLM calls via `on_llm_start` will not have agent context attached, reducing the observability the PR description promises.
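The shape of the likely fix is the pattern the review says `on_chat_model_start` already follows: look up the current agent and, if one is active, attach its name to the span. The sketch below models that pattern with stand-in classes; the helper `_get_current_agent()` and the constant `SPANDATA.GEN_AI_AGENT_NAME` are taken from the identifiers quoted in the review, while the simplified `on_llm_start` signature, the `FakeSpan` object, and the module-level agent variable are illustrative assumptions, not the actual sentry_sdk implementation.

```python
# Minimal sketch of the missing agent-name capture in on_llm_start,
# mirroring the on_chat_model_start behavior described in the review.
# FakeSpan, the simplified signature, and _current_agent are assumptions
# for illustration only.

class SPANDATA:
    # Assumed attribute key; the real constant lives in sentry_sdk.consts.
    GEN_AI_AGENT_NAME = "gen_ai.agent.name"

class FakeSpan:
    """Stand-in for a Sentry span that records key/value data."""
    def __init__(self):
        self.data = {}

    def set_data(self, key, value):
        self.data[key] = value

# Stand-in for the callback handler's notion of the active agent.
_current_agent = None

def _get_current_agent():
    return _current_agent

def on_llm_start(span):
    # ... existing span setup (op, model name, prompts) elided ...
    agent_name = _get_current_agent()
    if agent_name:
        # The line the review says is missing: attach the agent context.
        span.set_data(SPANDATA.GEN_AI_AGENT_NAME, agent_name)
    return span

# With an agent active, the LLM span now carries the agent name.
_current_agent = "research-agent"
span = on_llm_start(FakeSpan())
print(span.data)  # {'gen_ai.agent.name': 'research-agent'}
```

With this in place, `on_llm_start` and `on_chat_model_start` would attach agent context symmetrically, so plain LLM invocations inside an agent are no longer missing the attribute.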


Duration: 1m 4s · Tokens: 248.5k in / 3.3k out · Cost: $0.67 (+extraction: $0.00, +fix_gate: $0.00)
