Support Responses API in create_react_agent #4764
humungasaurus started this conversation in Ideas

Replies: 2 comments
-
Hello, thanks for posting this. Can you clarify some of the errors you're running into? Code, stack traces, and error messages would be helpful. As far as I know, the Responses API (including reasoning summaries) is supported. Here is an example:

```python
from langchain.chat_models import init_chat_model
from langgraph.prebuilt import create_react_agent


def get_weather(location: str) -> str:
    """Get the weather at a location."""
    return "It's sunny."


tools = [get_weather]

reasoning = {"effort": "medium", "summary": "auto"}
llm = init_chat_model(
    "openai:o4-mini",
    use_responses_api=True,
    model_kwargs={"reasoning": reasoning},
)

agent = create_react_agent(llm, tools)

input_message = {
    "role": "user",
    "content": (
        "What is the weather now at the city that had the third "
        "highest population in the world in the year 2000?"
    ),
}

for step in agent.stream(
    {"messages": [input_message]},
    stream_mode="values",
):
    message = step["messages"][-1]
    message.pretty_print()
    if reasoning := message.additional_kwargs.get("reasoning"):
        print(f"Reasoning: {reasoning}")
```

(This is with the latest langchain-openai and langgraph.)
-
Looking at this now, I'm thinking that maybe the issue is actually the callback handler and not the create_react_agent implementation? So, this works:

But this fails (same create_simple_agent):

Here's the stack trace:
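To make the shape of this concrete: the snippets and trace above aren't reproduced here, so the following is only a rough, hypothetical sketch of attaching a callback handler to a create_react_agent invocation. The model name, handler, and messages are placeholders, not the actual code from this comment.

```python
# Hypothetical sketch only -- not the actual code or stack trace from this comment.
from langchain_core.callbacks import BaseCallbackHandler
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent


class LoggingHandler(BaseCallbackHandler):
    """Placeholder callback handler that logs when an LLM call finishes."""

    def on_llm_end(self, response, **kwargs):
        print("LLM call finished")


llm = ChatOpenAI(model="o4-mini", use_responses_api=True)
agent = create_react_agent(llm, tools=[])

user_input = {"messages": [{"role": "user", "content": "Hello"}]}

# Plain invocation (reported to work):
agent.invoke(user_input)

# Same invocation with a callback handler attached (reported to fail):
agent.invoke(user_input, config={"callbacks": [LoggingHandler()]})
```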
-
The create_react_agent prebuilt is very useful and provides most of the functionality folks need out of the box, but it doesn't support OpenAI's Responses API. When you pass in an LLM like this:

```python
ChatOpenAI(model=LLM_MODEL_NAME, use_responses_api=True)
```

graph executions start failing.

The biggest issue for me is that this means the reasoning output available in newer models cannot be accessed when using the create_react_agent prebuilt.
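For context, this is the kind of reasoning output I'd like to get through the prebuilt. A rough sketch of requesting and reading reasoning summaries from the model directly (the model name and prompt are placeholders, and it assumes the same model_kwargs pass-through shown in the reply above):

```python
# Sketch: reasoning summaries from a Responses API model, outside the agent.
# Model name and prompt are placeholders; reasoning kwargs mirror the reply above.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="o4-mini",  # placeholder for LLM_MODEL_NAME
    use_responses_api=True,
    model_kwargs={"reasoning": {"effort": "medium", "summary": "auto"}},
)

response = llm.invoke("Why is the sky blue?")
# Reasoning summaries surface in additional_kwargs, as in the reply above.
print(response.additional_kwargs.get("reasoning"))
```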