feat(langchain): Change LLM span operation to generate_text#5705
ericapisani wants to merge 3 commits into master
Conversation
…re agent name

Update the LangChain integration to track LLM operations as `gen_ai.generate_text` instead of `gen_ai.pipeline`. This provides more specific operation semantics for LLM invocations. Additionally, capture the agent name when an LLM is invoked within an agent context, using `_push_agent`/`_pop_agent` tracking.

- Change span operation from `OP.GEN_AI_PIPELINE` to `OP.GEN_AI_GENERATE_TEXT`
- Update span name to include model information
- Add `GEN_AI_OPERATION_NAME` span data
- Add `GEN_AI_AGENT_NAME` span data when agent context is active
- Add `GEN_AI_PIPELINE_NAME` span data when pipeline name is provided
- Update tests to check for new operation type and span data fields

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Codecov Results 📊

✅ 13 passed | Total: 13 | Pass Rate: 100% | Execution Time: 6.91s

📊 Comparison with Base Branch
✨ No test changes detected. All tests are passing successfully.
❌ Patch coverage is 0.00%. Project has 14382 uncovered lines. Files with missing lines (1)

Coverage diff

@@ Coverage Diff @@
## main #PR +/-##
==========================================
+ Coverage 25.33% 30.24% +4.91%
==========================================
Files 189 189 —
Lines 20613 20617 +4
Branches 6738 6740 +2
==========================================
+ Hits 5222 6235 +1013
- Misses 15391 14382 -1009
- Partials 429 474 +45

Generated by Codecov Action
Codecov Results 📊

✅ 8 passed | Total: 8 | Pass Rate: 100% | Execution Time: 2.05s

📊 Comparison with Base Branch
All tests are passing successfully.
❌ Patch coverage is 0.00%. Project has 14877 uncovered lines. Files with missing lines (1)

Coverage diff

@@ Coverage Diff @@
## main #PR +/-##
==========================================
- Coverage 28.33% 27.81% -0.52%
==========================================
Files 189 189 —
Lines 20600 20607 +7
Branches 6732 6736 +4
==========================================
+ Hits 5836 5730 -106
- Misses 14764 14877 +113
- Partials 545 525 -20

Generated by Codecov Action
sentry_sdk/integrations/langchain.py (Outdated)

    agent_name = _get_current_agent()
    if agent_name:
        span.set_data(SPANDATA.GEN_AI_AGENT_NAME, agent_name)
on_llm_start can be triggered by an agent, so this adds extra information that we were previously missing (mirrors what we do for chat spans)
Looks good in general, please see my Slack message regarding the agent name 🙏
Here's the comment by the way, so it doesn't get lost
I feel like I'm missing something, but currently langchain has no agent name parameter.
We extract the run_name to set the agent name, but this is semantically wrong. The agent name is an attribute of the agent that is constant across invocations. The run_name is just the name of the specific invocation, not the name of the agent.
We should consider removing the gen_ai.agent.name attribute from all langchain spans. Sending the wrong data is worse than not sending the attribute at all.
Move agent name capture out of on_llm_start, to be handled in follow-up PRs. Remove associated tests and unused imports.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Semver Impact of This PR: 🟡 Minor (new features)

📋 Changelog Preview
This is how your changes will appear in the changelog.

New Features ✨
🤖 This preview updates automatically when you update the PR.
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Bugbot Autofix is OFF. To automatically fix reported issues with cloud agents, enable autofix in the Cursor dashboard.

Update the LangChain integration to use `gen_ai.generate_text` as the span operation for LLM calls instead of `gen_ai.pipeline`. This aligns with the more specific semantics of what's happening when an LLM is invoked directly, as opposed to a broader pipeline execution.

Changes:

- Change the span operation from `OP.GEN_AI_PIPELINE` to `OP.GEN_AI_GENERATE_TEXT`
- Update the span name to `generate_text {model}` to include model info
- Add `GEN_AI_OPERATION_NAME` span data set to `"generate_text"`
- Add `GEN_AI_PIPELINE_NAME` span data set when a pipeline name is provided in kwargs
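Taken together, the changes above amount to something like the following sketch. The `Span` class here is a minimal stand-in, not sentry_sdk's real span; the string values for the op and data keys, and the `pipeline_name` kwarg, are assumptions inferred from the constant names in the PR:

```python
# Hypothetical sketch of the new LLM-span setup described in this PR.
# The string constants mirror (but are guesses at) the OP/SPANDATA
# values; Span is a toy stand-in for sentry_sdk's span object.
OP_GEN_AI_GENERATE_TEXT = "gen_ai.generate_text"
GEN_AI_OPERATION_NAME = "gen_ai.operation.name"
GEN_AI_PIPELINE_NAME = "gen_ai.pipeline.name"


class Span:
    def __init__(self, op, name):
        self.op = op
        self.name = name
        self.data = {}

    def set_data(self, key, value):
        self.data[key] = value


def start_llm_span(model, **kwargs):
    # Span op is now generate_text rather than pipeline, and the span
    # name includes the model.
    span = Span(op=OP_GEN_AI_GENERATE_TEXT, name=f"generate_text {model}")
    span.set_data(GEN_AI_OPERATION_NAME, "generate_text")
    # Pipeline name is still recorded when one is provided via kwargs
    # ("pipeline_name" is a hypothetical key for this sketch).
    pipeline_name = kwargs.get("pipeline_name")
    if pipeline_name:
        span.set_data(GEN_AI_PIPELINE_NAME, pipeline_name)
    return span
```

The design point is that the specific `generate_text` op replaces the broader `pipeline` op on the LLM span itself, while the pipeline context survives as span data rather than as the operation.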