It's not a bug, but I can see how this can be confusing: you can pass a dedicated prompt for the structured-output LLM call by passing a tuple `(prompt, Schema)` to `response_format` -- see the API reference: https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.chat_agent_executor.create_react_agent
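A minimal sketch of what that tuple looks like. The schema and prompt below are hypothetical examples, and the `create_react_agent` call is shown commented out since it needs `langgraph` plus a configured model:

```python
from typing import TypedDict

# Hypothetical schema for illustration -- a Pydantic model works here too.
class WeatherResponse(TypedDict):
    conditions: str
    temperature_c: float

# Dedicated prompt used only for the structured-output LLM call,
# separate from the agent's main prompt.
STRUCTURED_OUTPUT_PROMPT = (
    "Summarize the conversation as a weather report. "
    "Respond only with the requested fields."
)

# The (prompt, Schema) tuple accepted by response_format.
response_format = (STRUCTURED_OUTPUT_PROMPT, WeatherResponse)

# With langgraph installed and a model/tools defined, it would be passed as:
# from langgraph.prebuilt import create_react_agent
# agent = create_react_agent(model, tools, response_format=response_format)
# result = agent.invoke({"messages": [("user", "What's the weather in SF?")]})
# result["structured_response"]  # conforms to WeatherResponse
```

Without the tuple form (passing just the schema), the structured-output call reuses no extra instructions, which is the confusion the answer addresses.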
