Structured outputs and provider-agnostic LLMs in LangGraph (without LangChain) #5821
Unanswered
VLazarevic asked this question in Q&A
Hi all,
I'm working on a project using LangGraph to orchestrate LLM workflows. We're aiming for two core goals: getting structured, schema-validated outputs from LLM calls, and keeping the underlying LLM provider swappable rather than tied to a single vendor.
Most structured output examples I've found rely heavily on LangChain, especially components like PydanticOutputParser, prompt templates, and LLM wrappers like ChatOpenAI. From what I've seen, LangGraph doesn't currently provide built-in equivalents, or maybe I've missed something. For reference:
🔗 https://python.langchain.com/docs/concepts/structured_outputs/
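For concreteness, this is roughly the pattern those docs show, and the dependency we'd like to avoid. The schema, prompt, and model name below are just illustrative placeholders:

```python
# Roughly the LangChain-based pattern from the linked docs (schema, prompt, and
# model name are placeholders); every import here pulls in LangChain packages.
from pydantic import BaseModel
from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

class Ticket(BaseModel):
    title: str
    priority: str

parser = PydanticOutputParser(pydantic_object=Ticket)

prompt = ChatPromptTemplate.from_messages([
    ("system", "Extract a ticket from the user's message.\n{format_instructions}"),
    ("human", "{text}"),
]).partial(format_instructions=parser.get_format_instructions())

# Prompt formatting, the OpenAI wrapper, and response parsing are all LangChain components.
chain = prompt | ChatOpenAI(model="gpt-4o-mini") | parser
ticket = chain.invoke({"text": "Login page crashes on mobile, needs fixing ASAP."})
```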
My main question is: what's the recommended way to get structured, schema-validated outputs from LLM calls inside LangGraph nodes while keeping the LLM provider swappable?
Ideally, we’d like to stay within LangGraph and avoid adding LangChain just for prompt formatting and response parsing.
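To make that concrete, here's a minimal sketch of what we're imagining: a plain LangGraph node that calls the provider SDK directly and validates the reply with Pydantic. The model name, schema, prompt, and error handling are all placeholders, and swapping out the client call is how we'd hope to stay provider-agnostic:

```python
# A minimal sketch (not a working solution we're proposing): a LangGraph node that
# calls the OpenAI SDK directly and validates the JSON reply with Pydantic, with no
# LangChain parsers or prompt templates involved. Model, schema, and prompt are placeholders.
from typing import Optional, TypedDict

from openai import OpenAI
from pydantic import BaseModel, ValidationError
from langgraph.graph import StateGraph, START, END

class Ticket(BaseModel):
    title: str
    priority: str

class State(TypedDict):
    text: str
    ticket: Optional[Ticket]

client = OpenAI()

def extract_ticket(state: State) -> dict:
    # Ask for JSON matching the schema and validate it ourselves.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": (
                "Extract a ticket as JSON with keys 'title' and 'priority'. "
                f"Schema: {Ticket.model_json_schema()}"
            )},
            {"role": "user", "content": state["text"]},
        ],
        response_format={"type": "json_object"},
    )
    try:
        ticket = Ticket.model_validate_json(response.choices[0].message.content)
    except ValidationError:
        ticket = None  # real code would retry or route to an error-handling node
    return {"ticket": ticket}

graph = StateGraph(State)
graph.add_node("extract_ticket", extract_ticket)
graph.add_edge(START, "extract_ticket")
graph.add_edge("extract_ticket", END)
app = graph.compile()
```

We're not sure whether this is idiomatic, or whether LangGraph has a better-supported path for it.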
Any guidance or examples would be really appreciated!