feat(langchain): Change LLM span operation to generate_text #5705
@sentry/warden / warden
completed
Mar 20, 2026 in 1m 37s
1 issue
Medium
GEN_AI_AGENT_NAME not captured in on_llm_start despite PR description - `sentry_sdk/integrations/langchain.py:381`
The PR description states that "when an LLM is invoked within an agent context, the agent name is now captured on the span via GEN_AI_AGENT_NAME", but the on_llm_start method does not call _get_current_agent() or set SPANDATA.GEN_AI_AGENT_NAME. The parallel method on_chat_model_start (lines 463-465) does capture the agent name correctly. As a result, LLM calls routed through on_llm_start will have no agent context attached to their spans, falling short of the observability the PR description promises.
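The fix the finding implies can be sketched as follows. This is a minimal stand-alone illustration, not the real integration code: `Span`, the attribute key, and `capture_agent_name` are simplified stand-ins for sentry_sdk's span object, `SPANDATA.GEN_AI_AGENT_NAME`, and the agent lookup done via `_get_current_agent()` in on_chat_model_start.

```python
# Sketch of the missing agent-name capture in on_llm_start, mirroring the
# logic that on_chat_model_start already has. All names here are simplified
# stand-ins for the real sentry_sdk objects.

GEN_AI_AGENT_NAME = "gen_ai.agent.name"  # stand-in for SPANDATA.GEN_AI_AGENT_NAME

class Span:
    """Toy span that just records data attributes, like a Sentry span's data."""
    def __init__(self):
        self.data = {}

    def set_data(self, key, value):
        self.data[key] = value

def capture_agent_name(span, current_agent_name):
    """Attach the active agent's name to the span, if an agent is running.

    In the real integration, current_agent_name would come from
    _get_current_agent(); here it is passed in directly.
    """
    if current_agent_name is not None:
        span.set_data(GEN_AI_AGENT_NAME, current_agent_name)

span = Span()
capture_agent_name(span, "research-agent")
print(span.data)  # -> {'gen_ai.agent.name': 'research-agent'}
```

Because on_llm_start currently skips this step entirely, any span it creates will be missing the `gen_ai.agent.name` attribute even when an agent is active.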
4 skills analyzed
| Skill | Findings | Duration | Cost |
|---|---|---|---|
| code-review | 1 | 1m 4s | $0.66 |
| find-bugs | 0 | 1m 34s | $1.48 |
| skill-scanner | 0 | 1m 12s | $0.35 |
| security-review | 0 | 1m 17s | $0.26 |
Duration: 5m 6s · Tokens: 1.1M in / 12.2k out · Cost: $2.76 (+extraction: $0.01, +fix_gate: $0.00, +dedup: $0.00)