Example Code

```python
from pydantic.v1 import BaseModel
from langchain_openai import AzureChatOpenAI
from typing import Optional
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_mongodb import MongoDBChatMessageHistory


class MyStructuredOutput(BaseModel):
    long_explanation: str
    short_explanation: Optional[str] = None


def interact_with_llm(user_question: str) -> MyStructuredOutput:
    chat_llm_with_history = _get_chat_llm_with_history()
    config = {"configurable": {"session_id": "some-test-session-id"}}
    return chat_llm_with_history.invoke({"question": user_question}, config=config)


def _get_chat_llm_with_history():
    chat_llm = AzureChatOpenAI().with_structured_output(MyStructuredOutput)
    prompt_template: ChatPromptTemplate = _get_prompt_template()
    runnable = prompt_template | chat_llm
    return RunnableWithMessageHistory(
        runnable,
        _get_message_history,
        input_messages_key="question",
        history_messages_key="history",
    )


def _get_message_history(session_id: str):
    return MongoDBChatMessageHistory(
        session_id=session_id,
        connection_string="my-mongo-connection-string",
        database_name="my-db-name",
        collection_name="chat_histories",
    )


def _get_prompt_template() -> ChatPromptTemplate:
    return ChatPromptTemplate.from_messages(
        [
            ("system", "You are a knowledgeable AI assistant good at math."),
            MessagesPlaceholder(variable_name="history"),
            ("human", "{question}"),
        ]
    )


if __name__ == "__main__":
interact_with_llm("Explain to me theory of relativity") DescriptionI'm trying to use the langchain library to implement chat with history support using structured output (with I expect langchain to persist in MongoDB the human messages and the AI responses, but no history is being saved and this error is observed in the logs:
I tried using one of the following keys as So is it possible to use at all structured output with System Infolangchain==0.1.14 |
Hello, @segoranov! I'm here to assist you with any bugs, questions, or contributions you might have. Let's work together to resolve the issue at hand.

Yes, it is possible to use structured output with `RunnableWithMessageHistory`. Here is an example of how you can do it:

```python
from pydantic.v1 import BaseModel
from langchain_openai import AzureChatOpenAI
from typing import Optional
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_mongodb import MongoDBChatMessageHistory


class MyStructuredOutput(BaseModel):
    long_explanation: str
    short_explanation: Optional[str] = None


def interact_with_llm(user_question: str) -> MyStructuredOutput:
    chat_llm_with_history = _get_chat_llm_with_history()
    config = {"configurable": {"session_id": "some-test-session-id"}}
    return chat_llm_with_history.invoke({"question": user_question}, config=config)


def _get_chat_llm_with_history():
    chat_llm = AzureChatOpenAI().with_structured_output(MyStructuredOutput)
    prompt_template: ChatPromptTemplate = _get_prompt_template()
    runnable = prompt_template | chat_llm
    return RunnableWithMessageHistory(
        runnable,
        _get_message_history,
        input_messages_key="question",
        history_messages_key="history",
        output_messages_key=None,  # Ensure the whole structured output is stored
    )


def _get_message_history(session_id: str):
    return MongoDBChatMessageHistory(
        session_id=session_id,
        connection_string="my-mongo-connection-string",
        database_name="my-db-name",
        collection_name="chat_histories",
    )


def _get_prompt_template() -> ChatPromptTemplate:
    return ChatPromptTemplate.from_messages(
        [
            ("system", "You are a knowledgeable AI assistant good at math."),
            MessagesPlaceholder(variable_name="history"),
            ("human", "{question}"),
        ]
    )


if __name__ == "__main__":
interact_with_llm("Explain to me theory of relativity") In this example, the Additionally, ensure that your MongoDB setup is correctly configured and that the connection string, database name, and collection name are accurate. This will help in persisting the chat history as expected. By following this approach, you can effectively use structured output with |
I'm facing the same problem. Did you find a solution for this, or did you have to move away from `RunnableWithMessageHistory`?
You can't use it with `with_structured_output`, unfortunately. `RunnableWithMessageHistory` expects messages on the output of the wrapped runnable. When you use `with_structured_output`, the output is not an `AIMessage`, but either a dict or a pydantic object.

Your options are:

In both cases, you may need to restructure the LCEL chain a bit to make sure that `RunnableWithMessageHistory` is wrapping a chain that returns an `AIMessage` rather than any other type. This might be challenging for folks not as comfortable with LCEL, so I'd recommend swapping to langgraph for adding chat history to the conversation; it'll be more obvious how to do this.
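For what it's worth, here is a minimal sketch of one way such a restructuring could look (my own assumption, not a recipe confirmed in this thread). It relies on the `include_raw=True` flag of `with_structured_output`, which makes the model step return a dict holding the raw `AIMessage` next to the parsed object, so `output_messages_key="raw"` gives `RunnableWithMessageHistory` a real message to persist while the caller still gets the structured result:

```python
from typing import Optional

from pydantic.v1 import BaseModel
from langchain_openai import AzureChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_mongodb import MongoDBChatMessageHistory


class MyStructuredOutput(BaseModel):
    long_explanation: str
    short_explanation: Optional[str] = None


def _get_message_history(session_id: str):
    return MongoDBChatMessageHistory(
        session_id=session_id,
        connection_string="my-mongo-connection-string",  # placeholder
        database_name="my-db-name",
        collection_name="chat_histories",
    )


prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a knowledgeable AI assistant good at math."),
        MessagesPlaceholder(variable_name="history"),
        ("human", "{question}"),
    ]
)

# include_raw=True makes the structured-output step return a dict:
#   {"raw": AIMessage, "parsed": MyStructuredOutput, "parsing_error": ...}
chat_llm = AzureChatOpenAI().with_structured_output(
    MyStructuredOutput, include_raw=True
)

chain_with_history = RunnableWithMessageHistory(
    prompt | chat_llm,
    _get_message_history,
    input_messages_key="question",
    history_messages_key="history",
    # "raw" points at the AIMessage, which is what the history
    # wrapper knows how to persist.
    output_messages_key="raw",
)

result = chain_with_history.invoke(
    {"question": "Explain to me theory of relativity"},
    config={"configurable": {"session_id": "some-test-session-id"}},
)
structured_answer: MyStructuredOutput = result["parsed"]
```

The trade-off is that callers now unwrap `result["parsed"]` themselves instead of receiving the pydantic object directly.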
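And for the langgraph route the reply recommends, here is a rough sketch of what a structured-output chat with persisted history could look like there. This assumes a reasonably recent `langgraph` package; `MemorySaver` is an in-memory stand-in, and a MongoDB-backed checkpointer would be needed for real persistence:

```python
from typing import Optional

from pydantic.v1 import BaseModel
from langchain_openai import AzureChatOpenAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import END, START, MessagesState, StateGraph


class MyStructuredOutput(BaseModel):
    long_explanation: str
    short_explanation: Optional[str] = None


structured_llm = AzureChatOpenAI().with_structured_output(
    MyStructuredOutput, include_raw=True
)


def call_model(state: MessagesState):
    result = structured_llm.invoke(state["messages"])
    # Keep the raw AIMessage in the conversation state so the full
    # history is replayed on the next turn; the parsed object is
    # available in result["parsed"] if the caller needs it.
    return {"messages": [result["raw"]]}


builder = StateGraph(MessagesState)
builder.add_node("model", call_model)
builder.add_edge(START, "model")
builder.add_edge("model", END)
app = builder.compile(checkpointer=MemorySaver())

config = {"configurable": {"thread_id": "some-test-session-id"}}
app.invoke(
    {"messages": [("human", "Explain to me theory of relativity")]},
    config,
)
```

Because the checkpointer replays the whole thread state on every turn, there is no `RunnableWithMessageHistory` wrapper to satisfy, which is why the output type of the model step matters much less here.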