When I try to use stream and history to create a chat bot, it logs "Error in RootListenersTracer.on_chain_end callback: ValueError()" #25291
Replies: 1 comment
Hey there, @future-mike! I'm here to assist you with any bugs, questions, or contributions while you wait for a human maintainer. Let's tackle this issue together!

To resolve the "Error in RootListenersTracer.on_chain_end callback: ValueError()" when using stream and history to create a chat bot in Python, first make sure your environment variables for LangSmith tracing are set correctly. Here is how you can set them in a Jupyter notebook:

```python
import getpass
import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = getpass.getpass()
```

Make sure to enter your actual LangSmith API key at the `getpass` prompt.

Additionally, ensure that you have installed the necessary packages and set up your message history and streaming as shown in the tutorial. Here is a snippet to set up the streaming part correctly:

```python
config = {"configurable": {"session_id": "abc15"}}

for r in with_message_history.stream(
    {
        "messages": [HumanMessage(content="hi! I'm todd. tell me a joke")],
        "language": "English",
    },
    config=config,
):
    print(r.content, end="|")
```

If you are using the Coze API for your chatbot, ensure that you have correctly configured the `ChatCoze` model:

```python
from langchain_core.messages import HumanMessage
from langchain_community.chat_models.coze import ChatCoze

chat_model = ChatCoze(
    coze_api_key="your_coze_api_key",
    bot_id="your_bot_id",
    conversation_id="your_conversation_id",
    user="your_user_id",
    streaming=True,
)

messages = [HumanMessage(content="hi! I'm todd. tell me a joke")]
# Prefer the public `stream` method over the private `_stream`;
# it yields message chunks whose text is available on `.content`.
for chunk in chat_model.stream(messages):
    print(chunk.content, end="|")
```

To further ensure that your environment is correctly set up, you can follow the guidelines from the LangChain chat-bot feedback template. Make sure to set the necessary environment variables:

```shell
export OPENAI_API_KEY=sk-...
export LANGSMITH_API_KEY=...
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_PROJECT=my-project  # Set to the project you want to save to
```

Additionally, you can use a helper like the following to stream responses from a LangServe endpoint while capturing the run id:

```python
from functools import partial
from typing import Callable, List, Optional

from langserve import RemoteRunnable
from langchain.callbacks.manager import tracing_v2_enabled
from langchain_core.messages import AIMessage, BaseMessage, HumanMessage

# Update with the URL provided by your LangServe server
chain = RemoteRunnable("http://127.0.0.1:8031/chat-bot-feedback")


def stream_content(
    text: str,
    chat_history: Optional[List[BaseMessage]] = None,
    last_run_id: Optional[str] = None,
    on_chunk: Optional[Callable] = None,
):
    results = []
    with tracing_v2_enabled() as cb:
        for chunk in chain.stream(
            {"text": text, "chat_history": chat_history, "last_run_id": last_run_id},
        ):
            if on_chunk is not None:
                on_chunk(chunk)
            results.append(chunk)
        last_run_id = cb.latest_run.id if cb.latest_run else None
    return last_run_id, "".join(results)


chat_history = []
text = "Where are my keys?"
last_run_id, response_message = stream_content(text, on_chunk=partial(print, end=""))
print()
chat_history.extend([HumanMessage(content=text), AIMessage(content=response_message)])

text = "I CAN'T FIND THEM ANYWHERE"
last_run_id, response_message = stream_content(
    text,
    chat_history=chat_history,
    last_run_id=str(last_run_id),
    on_chunk=partial(print, end=""),
)
print()
chat_history.extend([HumanMessage(content=text), AIMessage(content=response_message)])
```

If you are using the DeepInfra API for your chatbot, ensure that you have correctly configured the `ChatDeepInfra` model:

```python
from langchain_core.messages import HumanMessage
from langchain_community.chat_models.deepinfra import ChatDeepInfra

chat_model = ChatDeepInfra(
    deepinfra_api_token="your_deepinfra_api_token",
    model_name="meta-llama/Llama-2-70b-chat-hf",
    streaming=True,
)

messages = [HumanMessage(content="hi! I'm todd. tell me a joke")]
for chunk in chat_model.stream(messages):
    print(chunk.content, end="|")
```

If you are using the MiniMax API for your chatbot, ensure that you have correctly configured the `MiniMaxChat` model:

```python
from langchain_core.messages import HumanMessage
from langchain_community.chat_models.minimax import MiniMaxChat

chat_model = MiniMaxChat(
    api_key="your_minimax_api_key",
    model="abab6.5-chat",
    streaming=True,
)

messages = [HumanMessage(content="hi! I'm todd. tell me a joke")]
for chunk in chat_model.stream(messages):
    print(chunk.content, end="|")
```

By following these steps and ensuring that your environment variables and API configurations are correctly set, you should be able to resolve the error and have a functioning chatbot with streaming capabilities [1][2][3][4][5].
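One more note on the history side: the streaming snippet assumes a `with_message_history` runnable built from your chain, and the `RootListenersTracer.on_chain_end` callback is what persists messages after each run. That persistence only works if the session-history factory you supply returns the same history object every time it is called with the same `session_id`. Here is a minimal, library-free sketch of that contract (the `InMemoryHistory` class and `get_session_history` names are illustrative, not LangChain APIs):

```python
from typing import Dict, List, Tuple


class InMemoryHistory:
    """Toy stand-in for a chat message history object."""

    def __init__(self) -> None:
        self.messages: List[Tuple[str, str]] = []

    def add(self, role: str, content: str) -> None:
        self.messages.append((role, content))


_store: Dict[str, InMemoryHistory] = {}


def get_session_history(session_id: str) -> InMemoryHistory:
    # Must return the SAME object for repeated calls with one
    # session_id, otherwise earlier turns are silently lost.
    if session_id not in _store:
        _store[session_id] = InMemoryHistory()
    return _store[session_id]


h = get_session_history("abc15")
h.add("human", "hi! I'm todd. tell me a joke")
same = get_session_history("abc15")  # identical object, history intact
```

If each call constructed a fresh history, every turn would start from an empty conversation even though no exception is raised, which makes this an easy mistake to miss.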
Description
The error occurs at:

```python
for response in responses:
    result += response
```
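If `responses` is a stream of message chunks rather than plain strings, accumulating them requires taking each chunk's text instead of concatenating the chunk object itself. A minimal sketch using a stub chunk class (`FakeChunk` is illustrative, not a LangChain type; real streamed chunks similarly expose `.content`):

```python
class FakeChunk:
    """Stub for a streamed message chunk; real chunks expose `.content`."""

    def __init__(self, content: str) -> None:
        self.content = content


responses = [FakeChunk("Why did the chicken "), FakeChunk("cross the road?")]

result = ""
for response in responses:
    # Concatenate the chunk's text; `result += response` would fail
    # because a chunk object is not a str.
    result += response.content
```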
System Info
Windows, Python 3.9