Not routing to correct agent in langgraph #1572
Unanswered
palaklanger
asked this question in Q&A
Replies: 1 comment
Hi @palaklanger, I may be wrong, but from the description this seems like it could primarily be a prompting issue. I'd recommend looking at a LangSmith trace to see exactly what your graph provides to the supervisor LLM, reviewing the descriptions of the different agents, and then thinking about how you can better describe the tasks and scenarios in which the supervisor should rely on each agent, so that the content you provide it is less ambiguous.
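For example (a rough sketch only, not tested against this graph; the one-line descriptions for Jira_userstory_analyst and General_agent are assumptions, since the original prompt only describes Product Quality Engineer), the supervisor system prompt could spell out which kinds of questions each worker handles:
# Hypothetical rewording of the supervisor prompt; adapt the per-worker
# descriptions to what your agents actually do.
system_prompt = (
    "You are a supervisor tasked with managing the following workers: {members}.\n"
    "- Jira_userstory_analyst: questions about Jira user stories, tickets and backlog items.\n"
    "- Product Quality Engineer: questions about failure cases, affected serial numbers,\n"
    "  and issue classifications.\n"
    "- General_agent: anything that does not fit the workers above.\n"
    "Given the user request, respond with the worker to act next. Each worker will\n"
    "perform a task and respond with their results and status. When finished,\n"
    "respond with FINISH."
)
Being explicit that serial-number and issue-classification questions belong to Product Quality Engineer tends to reduce ambiguity for the router.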
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.output_parsers.openai_functions import JsonOutputFunctionsParser

llm = load_llm()  # load_llm() is our own helper that returns the chat model
members = ["Jira_userstory_analyst","Product Quality Engineer","General_agent"]
system_prompt = (
"You are a supervisor tasked with managing"
" following workers: {members}. Product Quality Engineer worker has details about failure cases, serial numbers affected and their issue classifications. Given the following user request,"
" respond with the worker to act next. Each worker will perform a"
" task and respond with their results and status. When finished,"
" respond with FINISH."
)
# Our team supervisor is an LLM node. It just picks the next agent to process
# and decides when the work is completed.
options = ["FINISH"] + members
# Using OpenAI function calling can make output parsing easier for us.
function_def = {
"name": "route",
"description": "Select the next role.",
"parameters": {
"title": "routeSchema",
"type": "object",
"properties": {
"next": {
"title": "Next",
"anyOf": [
{"enum": options},
],
}
},
"required": ["next"],
},
}
prompt = ChatPromptTemplate.from_messages(
[
("system", system_prompt),
MessagesPlaceholder(variable_name="messages"),
(
"system",
"Given the conversation above, who should act next?"
" Or should we FINISH? Select one of: {options}",
),
]
).partial(options=str(options), members=", ".join(members))
supervisor_chain = (
prompt
| llm.bind_functions(functions=[function_def], function_call="route")
| JsonOutputFunctionsParser()
)
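As a side note, a quick way to check the routing decision in isolation (a minimal sketch, assuming the supervisor_chain above is built and that langchain_core message objects are used for state):
from langchain_core.messages import HumanMessage

# Send only the user question through the supervisor and inspect which worker it picks.
result = supervisor_chain.invoke(
    {"messages": [HumanMessage(content="What are serial numbers affected by issue classification x in y?")]}
)
print(result)  # e.g. {'next': 'Jira_userstory_analyst'} in the failing case; the goal is 'Product Quality Engineer'
This makes it easy to iterate on the prompt wording and immediately see whether the router's choice changes.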
There are three members that we are using in LangGraph: ["Jira_userstory_analyst", "Product Quality Engineer", "General_agent"].
Supervisor prompt: You are a supervisor tasked with managing following workers: Jira_userstory_analyst, Product Quality Engineer, General_agent. Product Quality Engineer worker has details about failure cases, serial numbers affected and their issue classifications. Given the following user request, respond with the worker to act next. Each worker will perform a task and respond with their results and status. When finished, respond with FINISH. System: Given the conversation above, who should act next? Or should we FINISH? Select one of: ['FINISH', 'Jira_userstory_analyst', 'Product Quality Engineer', 'General_agent']
Question prompt: What are serial numbers affected by issue classification x in y?
In this scenario, it should go to Product Quality Engineer, but it goes to Jira_userstory_analyst.
Please advise
System Info
openai==1.35.3
langchain==0.2.5
langchain-openai==0.1.9
langchain-community==0.2.5
langchain_experimental==0.0.60
tabulate==0.9.0
snowflake-sqlalchemy==1.5.1