Description
Tracer Version(s)
3.16.0
Python Version(s)
3.11.2
Pip Version(s)
24.0
Bug Report
Using structured outputs with LangChain appears to break LLMObs's annotation_context context manager, such that only the first batch call is tagged. See the reproduction code below.
Reproduction Code
Both batch calls get tagged correctly:
from ddtrace.llmobs import LLMObs
from langchain_openai import AzureChatOpenAI

with LLMObs.annotation_context(
    tags={
        "sample tag": "works!",
    },
):
    gpt_4 = AzureChatOpenAI(
        openai_api_key=...,
        temperature=0.0,
        model_name="gpt-4.1",
        deployment_name=...,
        azure_endpoint=...,
        openai_api_version=...,
        openai_api_type=...,
    )
    outputs = gpt_4.batch(["Hi how are you? say something nice about the weather"] * 5)
    outputs2 = gpt_4.batch(["Hi how are you? say something nice about the sun"] * 5)
Only the first batch call is tagged:
from pydantic import BaseModel, Field


class Schema(BaseModel):
    user_response: str = Field(..., description="Respond to the User nicely")


with LLMObs.annotation_context(
    tags={
        "sample tag": "doesn't work :(",
    },
):
    gpt_4 = AzureChatOpenAI(
        openai_api_key=...,
        temperature=0.0,
        model_name="gpt-4.1",
        deployment_name=...,
        azure_endpoint=...,
        openai_api_version=...,
        openai_api_type=...,
    ).with_structured_output(Schema)
    outputs = gpt_4.batch(["Hi how are you? say something nice about the weather"] * 5)
    outputs2 = gpt_4.batch(["Hi how are you? say something nice about the sun"] * 5)
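For what it's worth, a possible per-call workaround is sketched below. This is untested and only an assumption: it supposes the tags stop being applied to spans created after the first traced call when the structured-output runnable is involved, so re-entering annotation_context around each batch call might re-apply them. The Azure credential placeholders (...) and the tag value are the same style as the reproduction above.

from ddtrace.llmobs import LLMObs
from langchain_openai import AzureChatOpenAI
from pydantic import BaseModel, Field


class Schema(BaseModel):
    user_response: str = Field(..., description="Respond to the User nicely")


# Same structured-output model as in the reproduction.
gpt_4 = AzureChatOpenAI(
    openai_api_key=...,
    temperature=0.0,
    model_name="gpt-4.1",
    deployment_name=...,
    azure_endpoint=...,
    openai_api_version=...,
    openai_api_type=...,
).with_structured_output(Schema)

# Hypothetical workaround: one annotation context per batch call instead of
# one shared context for both calls.
with LLMObs.annotation_context(tags={"sample tag": "per-call context"}):
    outputs = gpt_4.batch(["Hi how are you? say something nice about the weather"] * 5)

with LLMObs.annotation_context(tags={"sample tag": "per-call context"}):
    outputs2 = gpt_4.batch(["Hi how are you? say something nice about the sun"] * 5)

If the second batch is still untagged even with separate contexts, that would suggest the structured-output wrapper loses the annotation context entirely rather than only on the second call within one context.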
Error Logs
None
Libraries in Use
pydantic == "2.11.9"
langchain_openai == "0.3.8"
Operating System
No response