Replies: 5 comments 1 reply
-
Hey @ayuranjan! I'm here to help with any questions or bugs you might have. Let's crack this code conundrum together! 🤖 To use multi-agent LangGraph with Streamlit and stream the contents as soon as they are generated, you can follow the example provided for using an agent with tools. Here is a sample code snippet that demonstrates how to set this up:

```python
import streamlit as st
from langchain.agents import AgentExecutor, create_tool_calling_agent, load_tools
from langchain_openai import ChatOpenAI
from langchain_community.callbacks.streamlit import StreamlitCallbackHandler
from langchain_core.prompts import ChatPromptTemplate

# Initialize a chat model with streaming enabled (tool calling requires a chat model)
llm = ChatOpenAI(temperature=0, streaming=True)

# Load the necessary tools
tools = load_tools(["ddg-search"])

# Define the prompt
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant."),
        ("human", "{input}"),
        ("placeholder", "{agent_scratchpad}"),
    ]
)

# Create the agent
agent = create_tool_calling_agent(llm, tools, prompt)

# Create the agent executor
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# Streamlit app setup
if user_input := st.chat_input():
    st.chat_message("user").write(user_input)
    with st.chat_message("assistant"):
        # Initialize the Streamlit callback handler
        st_callback = StreamlitCallbackHandler(st.container())
        # Invoke the agent executor with the callback handler
        response = agent_executor.stream(
            {"input": user_input}, {"callbacks": [st_callback]}
        )
        # Display the response chunks as they arrive
        for chunk in response:
            if "output" in chunk:
                st.write(chunk["output"])
```

Make sure to set streaming=True when initializing the language model. Additionally, LangChain provides streaming methods such as stream and astream, and LangChain Expression Language (LCEL) can also be used to build chains that support streaming. For example:

```python
import asyncio

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI  # any chat model that supports streaming works here

model = ChatOpenAI(temperature=0)
prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
parser = StrOutputParser()
chain = prompt | model | parser


async def main():
    # Tokens are printed as they are produced, even with a parser at the end
    async for chunk in chain.astream({"topic": "parrot"}):
        print(chunk, end="|", flush=True)

asyncio.run(main())
```

This setup ensures that streaming works even when using parsers at the end of the chain. LCEL chains benefit from an automatic implementation of stream and astream.
-
So you mentioned this
Which is equivalent to
But this gives the results of the nodes of the graph, i.e., first it returns the final result from the supervisor, then it returns the full final result from the language agent. Since I asked who Cristiano Ronaldo is, the language agent took some time to return a result. Once it had completely returned the answer, it was displayed all at once in Streamlit. Ideally it should write in chunks so that it looks like someone is typing, but currently it waits and returns the full answer at once from each agent. I believe this is because the child agents are called using invoke, so each one returns its final answer all at once.
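A minimal sketch of how the typing effect could be achieved, assuming a recent langgraph release that supports stream_mode="messages" (which surfaces LLM token chunks from inside nodes via callbacks, even when a node calls a child agent with invoke). research_chain and config are the objects from the original post; everything else here is hypothetical:

```python
# Hypothetical sketch: token-level streaming into Streamlit via
# stream_mode="messages" (available in recent langgraph releases).
# research_chain and config are taken from the code in this thread.
import streamlit as st


def token_generator(user_input):
    # stream_mode="messages" yields (message_chunk, metadata) tuples for every
    # token emitted by any chat model call inside the graph, including calls
    # made by child agents; metadata["langgraph_node"] names the producing node,
    # so you could filter out supervisor chatter if you only want the answer.
    for message_chunk, metadata in research_chain.stream(
        {"messages": [user_input]},
        config,
        stream_mode="messages",
    ):
        if message_chunk.content:
            yield message_chunk.content


if user_input := st.chat_input():
    st.chat_message("user").write(user_input)
    with st.chat_message("assistant"):
        # st.write_stream consumes the generator and renders text as it arrives
        st.write_stream(token_generator(user_input))
```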
-
I have some working code for streaming LangGraph results with Streamlit here, in case it's a useful reference: https://github.com/JoshuaC215/agent-service-toolkit (I was looking at this discussion during research for building it :) )
-
@ayuranjan if you're still looking, I'd check out this group of cookbooks of common UI/UX implementations for LangGraph with Streamlit: github.com/shiv248/Streamlit-x-LangGraph-Cookbooks.
-
Hi guys, I'm new to using LangGraph and I'm having problems with Streamlit and LangGraph when it comes to handling user feedback. I would like some help on how I can make the AI wait, at least until the user has read its response and provided some feedback, before continuing with the graph. Here is my LangGraph code and my Streamlit UI implementation. Guys, please help me.

langgragh.py:

```python
import os

os.environ["OPENAI_API_KEY"] = os.environ.get("OPENAI_API_KEY")
user_id = st.session_state.get("user_id")[0]

def get_stage_members(user_id):
llm = ChatOpenAI(model_name="gpt-4", temperature=0.7, max_tokens=1000)
class State(MessagesState):
def create_agent(llm, tools, prompt):
def create_agent_nodes(flow_steps):
def process_flow_steps(stage_info):
def process_node(agent, node_name, state: State) -> Command[Literal["human"]]:
def get_human_feedback() -> str:
def human_node(state: State):  # -> Command[Literal["supervisor"]]:
def setup_graph(stage_info):
```

My Streamlit implementation:

```python
if not brief_output:
```
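For the "wait for user feedback" part, one common pattern is LangGraph's human-in-the-loop interrupt: pause the graph inside a human node, let Streamlit collect the feedback on the next rerun, then resume the same thread. The snippet below is only a minimal sketch under assumptions, not the poster's graph: it assumes a langgraph version that provides interrupt() and Command(resume=...), uses an in-memory checkpointer, and invents the node names draft, human, and revise.

```python
# Minimal human-in-the-loop sketch (hypothetical node names, not the poster's graph).
# Assumes langgraph with interrupt()/Command(resume=...) and a MemorySaver checkpointer.
import streamlit as st
from langgraph.graph import StateGraph, MessagesState, START, END
from langgraph.checkpoint.memory import MemorySaver
from langgraph.types import Command, interrupt


def draft_node(state: MessagesState):
    # Placeholder for the real LLM call that produces a first answer
    return {"messages": [("ai", "Here is my draft answer - any feedback?")]}


def human_node(state: MessagesState):
    # interrupt() pauses the run here; whatever is later passed to
    # Command(resume=...) becomes its return value
    feedback = interrupt({"prompt": "Please review the draft above."})
    return {"messages": [("human", feedback)]}


def revise_node(state: MessagesState):
    # Placeholder for the LLM call that incorporates the feedback
    return {"messages": [("ai", "Revised answer that takes your feedback into account.")]}


@st.cache_resource
def build_graph():
    builder = StateGraph(MessagesState)
    builder.add_node("draft", draft_node)
    builder.add_node("human", human_node)
    builder.add_node("revise", revise_node)
    builder.add_edge(START, "draft")
    builder.add_edge("draft", "human")
    builder.add_edge("human", "revise")
    builder.add_edge("revise", END)
    # A checkpointer is required so a paused run can be resumed on a later rerun;
    # caching the compiled graph keeps the checkpoints alive across Streamlit reruns
    return builder.compile(checkpointer=MemorySaver())


graph = build_graph()

# Reuse one thread_id across reruns so we keep talking to the same run
if "thread_id" not in st.session_state:
    st.session_state["thread_id"] = "1"
config = {"configurable": {"thread_id": st.session_state["thread_id"]}}


def run_and_render(graph_input):
    result = graph.invoke(graph_input, config)
    st.chat_message("assistant").write(result["messages"][-1].content)
    # If the graph stopped inside human_node, it is waiting for a resume value
    st.session_state["awaiting_feedback"] = bool(graph.get_state(config).next)


if user_msg := st.chat_input("Your message or feedback"):
    st.chat_message("user").write(user_msg)
    if st.session_state.get("awaiting_feedback"):
        run_and_render(Command(resume=user_msg))  # feed the feedback back into human_node
    else:
        run_and_render({"messages": [("human", user_msg)]})
```

The two design points that matter here are compiling with a checkpointer that survives Streamlit reruns (hence the cached resource) and reusing the same thread_id, so the run paused at the interrupt can be resumed with the user's reply.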
-
Checked other resources
Commit to Help
Example Code
Description
I am trying to build a multi-agent system with LangGraph (similar to the Research Team in the documentation example here - https://github.com/langchain-ai/langgraph/blob/main/examples/multi_agent/hierarchical_agent_teams.ipynb -- create_agent and create_team_supervisor are taken from that example).
manager.research_chain returns the compiled graph
My compiled graph looks like -

I read the streaming documentation as well as the LangChain documentation on using Streamlit.
I am not able to stream the output (i.e., I want to print the chunks as soon as they are generated, but currently it waits for the final result and then gives the whole result at once).
I tried this
```python
for s in research_chain.stream({"messages": [prompt]}, config):
    if "end" not in s:
        st.write_stream(s)
```
But this prints the agent name only.
I tried this -
```python
for s in research_chain.stream({"messages": [prompt]}, config):
    if "end" not in s:
        st.write_stream(s)
```
But this also waits for each agent's output to complete and then displays that agent's output all at once.
Ideally I would want it to display each agent's final content incrementally, as a stream.
I also thought this could be because the child agent is run using .invoke() in def agent_node, but changing that to stream doesn't make sense, since these are internal calls and we need the full result to decide what to do next. Still, it would be better if at least the final result of each agent could be streamed, so the user doesn't feel the lag; for some use cases these calls take time to complete, and if the user can see what is going on, the experience is much better. Any suggestion to improve this user experience is welcome.
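One way to get that effect without changing the internal .invoke() calls is the events API. The sketch below is only an illustration under assumptions: it relies on recent langchain-core/langgraph versions where astream_events(version="v2") surfaces on_chat_model_stream events from chat models called anywhere inside the graph; research_chain, prompt, and config are the objects from the snippets above, and everything else is hypothetical.

```python
# Hypothetical sketch: surfacing tokens from inside the graph with the events
# API, without changing the child agents' .invoke() calls. Assumes a recent
# langchain-core/langgraph; research_chain, prompt, and config are from above.
import asyncio
import streamlit as st


async def render_stream(placeholder):
    text = ""
    async for event in research_chain.astream_events(
        {"messages": [prompt]}, config, version="v2"
    ):
        # Every token emitted by any chat model in the graph arrives as an
        # on_chat_model_stream event; append it and update the placeholder
        if event["event"] == "on_chat_model_stream":
            chunk = event["data"]["chunk"]
            if chunk.content:
                text += chunk.content
                placeholder.markdown(text)


with st.chat_message("assistant"):
    asyncio.run(render_stream(st.empty()))
```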
System Info
System Information
Package Information