LangGraph Runnable LLM Tool Execution: High Latency + Message Role Inconsistency vs. AgentExecutor/Direct OpenAI Call #4595
I’m running into two core issues when using LangGraph with ChatOpenAI (o3-mini) and tool calls in a Runnable-style assistant setup. Here’s the situation and my findings.

I’ve implemented a LangGraph assistant that:
• uses ChatOpenAI (o3-mini)
• uses LangChain-style tool calling (@tool)
• routes as expected, with tools invoked correctly

As baselines, I also have:
• a chain-of-thought agent built with initialize_agent (zero-shot ReAct)
• a fully native OpenAI SDK version
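For reference, the LangGraph setup boils down to roughly the following (tool body stubbed out and all names simplified for illustration; the real graph is larger):

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, MessagesState, START
from langgraph.prebuilt import ToolNode, tools_condition

@tool
def get_weather(city: str) -> str:
    """Return the current weather for a city."""
    return f"Sunny in {city}"  # stubbed for illustration

llm_with_tools = ChatOpenAI(model="o3-mini").bind_tools([get_weather])

def assistant(state: MessagesState):
    # Append the model's reply (an AIMessage) to the running message list.
    return {"messages": [llm_with_tools.invoke(state["messages"])]}

builder = StateGraph(MessagesState)
builder.add_node("assistant", assistant)
builder.add_node("tools", ToolNode([get_weather]))
builder.add_edge(START, "assistant")
# Route to the tool node when the model emitted tool calls, otherwise end.
builder.add_conditional_edges("assistant", tools_condition)
builder.add_edge("tools", "assistant")
graph = builder.compile()
```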
❗Problem 1: High Latency in LangGraph
For identical user queries, LangGraph executions take significantly longer than a direct OpenAI call (and the baseline agent is slower still):
• Chain-of-thought agent: ~18 s total (with tool)
• LangGraph assistant: ~15 s total (with tool)
• Direct OpenAI call: ~5 s (with tool)
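The timings above are simple wall-clock measurements around a single invocation, along these lines (the query is a hypothetical example; the agent and direct SDK calls were timed with the same wrapper):

```python
import time

def timed(label: str, fn, *args, **kwargs):
    """Run fn once and print its wall-clock latency."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    print(f"{label}: {time.perf_counter() - start:.1f} s")
    return result

query = "What's the weather in Berlin?"  # hypothetical test query
timed("LangGraph", graph.invoke, {"messages": [("user", query)]})
```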
❗Problem 2: Role Mismatch in LangSmith Trace
All tool and model messages show up in the trace as human messages.

Questions:
1. Why are tool responses and LLM replies not typed correctly (role=tool, role=assistant) in LangGraph’s trace output?
2. Is there a LangGraph-specific RunStep wrapper that overrides or bypasses standard role tracking?
3. Should we manually enforce role tagging when passing state across nodes? (See the sketch after this list.)
4. Are there best practices or examples for propagating roles through LangGraph so traces render correctly in the LangSmith waterfall view?
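On question 3: by role tagging I mean returning typed message objects from every node. As far as I understand langchain_core's coercion rules, a bare string appended to a messages list is coerced to a HumanMessage, which would produce exactly the mislabeling above; I'm not certain this is what's happening in my graph, but here is the pattern I mean (reusing llm_with_tools from the sketch above):

```python
from langchain_core.messages import ToolMessage

# Loses role information: a bare string appended to "messages" is
# coerced to a HumanMessage by the add_messages reducer, so it is
# traced as "human" in LangSmith.
def assistant_lossy(state):
    response = llm_with_tools.invoke(state["messages"])
    return {"messages": [response.content]}

# Preserves roles: return the AIMessage itself.
def assistant_typed(state):
    response = llm_with_tools.invoke(state["messages"])
    return {"messages": [response]}

# Tool results likewise need an explicit ToolMessage carrying the
# matching tool_call_id, not a raw string.
def tool_result_message(call_id: str, output: str) -> ToolMessage:
    return ToolMessage(content=output, tool_call_id=call_id)
```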
Would appreciate any insight, suggestions, or patterns for:
• Fixing role mislabeling
• Reducing execution latency
• Improving LangSmith visibility across LangGraph-based executions