Commit ed9af1a

authored
feat(langchain): add docs for langchain integration (#14669)
1 parent 64f7ec2 commit ed9af1a

File tree

2 files changed (+201, -0 lines)

docs/platforms/python/integrations/index.mdx

Lines changed: 1 addition & 0 deletions

@@ -43,6 +43,7 @@ The Sentry SDK uses integrations to hook into the functionality of popular libra
 | <LinkWithPlatformIcon platform="anthropic" label="Anthropic" url="/platforms/python/integrations/anthropic" /> ||
 | <LinkWithPlatformIcon platform="openai" label="OpenAI" url="/platforms/python/integrations/openai" /> ||
 | <LinkWithPlatformIcon platform="openai-agents" label="OpenAI Agents SDK" url="/platforms/python/integrations/openai-agents" /> | |
+| <LinkWithPlatformIcon platform="langchain" label="LangChain" url="/platforms/python/integrations/langchain" /> | |

 ### Data Processing

Lines changed: 200 additions & 0 deletions

@@ -0,0 +1,200 @@
---
title: LangChain
description: "Learn about using Sentry for LangChain."
---

This integration connects Sentry with [LangChain](https://github.com/langchain-ai/langchain) in Python.

Once you've installed this SDK, you can use Sentry AI Agents Monitoring, a Sentry dashboard that helps you understand what's going on with your AI requests. Sentry AI Agents Monitoring automatically collects information about prompts, tools, tokens, and models. Learn more about the [AI Agents Dashboard](/product/insights/ai/agents).
## Install

Install `sentry-sdk` from PyPI with the `langchain` extra:

```bash {tabTitle:pip}
pip install "sentry-sdk[langchain]"
```

```bash {tabTitle:uv}
uv add "sentry-sdk[langchain]"
```
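For a quick sanity check that the packages pulled in by the `langchain` extra are importable, a small stdlib-only probe like the following works (the two package names are the only assumption here):

```python
# Hedged sanity check: report whether the SDK and LangChain are importable.
# Uses only the standard library, so it runs regardless of what's installed.
import importlib.util

for pkg in ("sentry_sdk", "langchain"):
    found = importlib.util.find_spec(pkg) is not None
    print(f"{pkg}: {'installed' if found else 'missing'}")
```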
## Configure

If you have the `langchain` package in your dependencies, the LangChain integration is enabled automatically when you initialize the Sentry SDK. For correct token accounting, disable the standalone integration for the model provider you are using (for example, OpenAI or Anthropic) so that token usage isn't recorded twice.

```python {tabTitle:OpenAI}
import sentry_sdk
from sentry_sdk.integrations.langchain import LangchainIntegration
from sentry_sdk.integrations.openai import OpenAIIntegration

sentry_sdk.init(
    dsn="___PUBLIC_DSN___",
    environment="local",
    traces_sample_rate=1.0,
    send_default_pii=True,
    debug=True,
    integrations=[
        LangchainIntegration(),
    ],
    disabled_integrations=[OpenAIIntegration()],
)
```

```python {tabTitle:Anthropic}
import sentry_sdk
from sentry_sdk.integrations.langchain import LangchainIntegration
from sentry_sdk.integrations.anthropic import AnthropicIntegration

sentry_sdk.init(
    dsn="___PUBLIC_DSN___",
    environment="local",
    traces_sample_rate=1.0,
    send_default_pii=True,
    debug=True,
    integrations=[
        LangchainIntegration(),
    ],
    disabled_integrations=[AnthropicIntegration()],
)
```
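To see why disabling the provider integration matters, here is a toy, dependency-free illustration of the double counting that occurs when two instrumentation layers both observe the same LLM call; the names are purely illustrative, not Sentry APIs.

```python
# Toy illustration (not Sentry code): two hooks each record the token
# usage of the same LLM call, doubling the reported total.
recorded = []

def record_usage(source: str, tokens: int) -> None:
    recorded.append((source, tokens))

# One real LLM call that consumed 100 tokens...
record_usage("langchain-integration", 100)  # seen by the LangChain hook
record_usage("openai-integration", 100)     # same call, provider-level hook

total = sum(tokens for _, tokens in recorded)
print(total)  # 200, i.e. twice the real usage
```

Disabling the provider integration leaves a single recorder, so the reported total matches actual usage.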
## Verify

Verify that the integration works by starting a transaction and invoking an agent. In these examples, we provide the agent with a function tool that rolls a die.

```python {tabTitle:OpenAI}
import random

import sentry_sdk
from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain.chat_models import init_chat_model
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.tools import tool


@tool
def roll_die(sides: int = 6) -> str:
    """Roll a die with a given number of sides."""
    return f"Rolled a {random.randint(1, sides)} on a {sides}-sided die."


with sentry_sdk.start_transaction(name="langchain-openai"):
    model = init_chat_model(
        "gpt-4o-mini",
        model_provider="openai",
        model_kwargs={"stream_options": {"include_usage": True}},
    )
    tools = [roll_die]
    prompt = ChatPromptTemplate.from_messages(
        [
            SystemMessage(
                content="Greet the user and use the die roll tool. Do not terminate before using the tool."
            ),
            HumanMessage(content="{input}"),
            MessagesPlaceholder("agent_scratchpad"),
        ]
    )

    agent = create_openai_functions_agent(model, tools, prompt)
    agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

    result = agent_executor.invoke(
        {
            "input": "Hello, my name is Alice! Please roll a six-sided die.",
            "chat_history": [],
        }
    )
    print(result)
```
```python {tabTitle:Anthropic}
import random

import sentry_sdk
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain.chat_models import init_chat_model
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.tools import tool


@tool
def roll_die(sides: int = 6) -> str:
    """Roll a die with a given number of sides."""
    return f"Rolled a {random.randint(1, sides)} on a {sides}-sided die."


with sentry_sdk.start_transaction(name="langchain-anthropic"):
    model = init_chat_model(
        "claude-3-5-sonnet-20241022",
        model_provider="anthropic",
    )
    tools = [roll_die]
    prompt = ChatPromptTemplate.from_messages(
        [
            SystemMessage(
                content="Greet the user and use the die roll tool. Do not terminate before using the tool."
            ),
            HumanMessage(content="{input}"),
            MessagesPlaceholder("agent_scratchpad"),
        ]
    )

    agent = create_tool_calling_agent(model, tools, prompt)
    agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

    result = agent_executor.invoke(
        {
            "input": "Hello, my name is Alice! Please roll a six-sided die.",
            "chat_history": [],
        }
    )
    print(result)
```
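The die-rolling logic in the examples above is ordinary Python; stripped of the `@tool` decorator, it can be exercised on its own (a minimal sketch, no LangChain or Sentry required):

```python
# The tool body from the examples above, as a plain function.
import random

def roll_die(sides: int = 6) -> str:
    """Roll a die with a given number of sides."""
    return f"Rolled a {random.randint(1, sides)} on a {sides}-sided die."

result = roll_die(6)
print(result)
```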
After running this script, the resulting data should show up in the "AI Spans" tab on the "Explore" > "Traces" page on Sentry.io, and in the [AI Agents Dashboard](/product/insights/ai/agents).

It may take a couple of moments for the data to appear in [sentry.io](https://sentry.io).

## Behavior

- The LangChain integration automatically connects Sentry with all supported LangChain methods.

- All exceptions are reported.

- Sentry considers LLM and tokenizer inputs/outputs personally identifiable information (PII) and doesn't include PII data by default. If you want to include this data, set `send_default_pii=True` in the `sentry_sdk.init()` call. To explicitly exclude prompts and outputs despite `send_default_pii=True`, configure the integration with `include_prompts=False` as shown in the [Options section](#options) below.
## Options

By adding `LangchainIntegration` to your `sentry_sdk.init()` call explicitly, you can set options for `LangchainIntegration` to change its behavior:

```python
import sentry_sdk
from sentry_sdk.integrations.langchain import LangchainIntegration

sentry_sdk.init(
    # ...
    # Add data like inputs and responses;
    # see https://docs.sentry.io/platforms/python/data-management/data-collected/ for more info
    send_default_pii=True,
    integrations=[
        LangchainIntegration(
            include_prompts=False,  # LLM inputs/outputs will not be sent to Sentry, despite send_default_pii=True
        ),
    ],
)
```

You can pass the following keyword arguments to `LangchainIntegration()`:

- `include_prompts`

  Whether LLM and tokenizer inputs and outputs should be sent to Sentry. Sentry considers this data personally identifiable information (PII) by default. If you want to include this data, set `send_default_pii=True` in the `sentry_sdk.init()` call. To explicitly exclude prompts and outputs despite `send_default_pii=True`, configure the integration with `include_prompts=False`.

  The default is `True`.
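The interaction between `send_default_pii` and `include_prompts` reduces to a simple conjunction. This dependency-free sketch mirrors the documented rule; the function name is illustrative, not a Sentry API:

```python
# Illustrative only: per the docs, prompts are sent iff both flags allow it.
def prompts_are_sent(send_default_pii: bool, include_prompts: bool = True) -> bool:
    return send_default_pii and include_prompts

print(prompts_are_sent(True))          # True: PII on, prompts included by default
print(prompts_are_sent(True, False))   # False: include_prompts=False overrides
print(prompts_are_sent(False))         # False: no PII means no prompts
```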
## Supported Versions

- OpenAI: 1.0+
- Python: 3.9+
- langchain: 0.1.11+
