Replies: 3 comments 5 replies
-
From what I can tell this is not readily supported, so `create_pandas_dataframe_agent` could stand to be more flexible. But it is not too hard to work around if you're looking for a solution in the short term. Here is a simple hack:

```python
from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.prompts import MessagesPlaceholder
from langchain_experimental.agents.agent_toolkits.pandas.base import _get_functions_single_prompt
from langchain_experimental.tools.python.tool import PythonAstREPLTool
from langchain_openai import ChatOpenAI
import pandas as pd

df = pd.DataFrame(
    [
        {"name": "apple", "color": "red"},
        {"name": "grape", "color": "purple"},
        {"name": "orange", "color": "orange"},
    ]
)

# This is a hack: _get_functions_single_prompt is a private helper, so it
# could change without warning.
prompt = _get_functions_single_prompt(df)
prompt.input_variables.append("chat_history")
prompt.messages.insert(1, MessagesPlaceholder(variable_name="chat_history"))

tools = [PythonAstREPLTool(locals={"df": df})]
chat_model = ChatOpenAI(model="gpt-3.5-turbo-1106")
agent = create_openai_functions_agent(chat_model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
```

Here you would manage the memory yourself:

```python
chat_history = []

query = "how many rows are in the dataframe?"
response = agent_executor.invoke({"input": query, "chat_history": chat_history})
print(response["output"])

chat_history.extend(
    [
        HumanMessage(content=query),
        AIMessage(content=response["output"]),
    ]
)

query = "What were we talking about?"
response = agent_executor.invoke({"input": query, "chat_history": chat_history})
print(response["output"])
```
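The data flow of that loop can be sketched without any LLM calls. In the snippet below, `fake_agent` is a hypothetical stand-in for `agent_executor` (it only reports how much history it received); the point is that the caller owns `chat_history` and threads it through every call:

```python
# Pure-Python sketch of the manual-memory pattern above (no LangChain needed).
# fake_agent is a hypothetical stand-in for agent_executor.

def fake_agent(inputs):
    # Stand-in "agent": just reports how much history it was given.
    n = len(inputs["chat_history"])
    return {"output": f"saw {n} prior messages"}

chat_history = []

query = "how many rows are in the dataframe?"
response = fake_agent({"input": query, "chat_history": chat_history})

# The caller appends both sides of the turn before the next call.
chat_history.extend([("human", query), ("ai", response["output"])])

query = "What were we talking about?"
response = fake_agent({"input": query, "chat_history": chat_history})
print(response["output"])  # saw 2 prior messages
```

If you forget the `extend` step, every call sees an empty history, which is exactly the "agent does not remember" symptom.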
You could also use `RunnableWithMessageHistory`:

```python
from typing import List

from langchain_core.chat_history import BaseChatMessageHistory
from langchain_core.messages import BaseMessage
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.runnables.history import RunnableWithMessageHistory


class InMemoryHistory(BaseChatMessageHistory, BaseModel):
    """In-memory implementation of chat message history."""

    messages: List[BaseMessage] = Field(default_factory=list)

    def add_message(self, message: BaseMessage) -> None:
        """Add a self-created message to the store."""
        self.messages.append(message)

    def clear(self) -> None:
        self.messages = []


store = {}


def get_session_history(session_id: str) -> BaseChatMessageHistory:
    if session_id not in store:
        store[session_id] = InMemoryHistory()
    return store[session_id]


chain = RunnableWithMessageHistory(
    agent_executor,
    get_session_history=get_session_history,
    input_messages_key="input",
    history_messages_key="chat_history",
)

chain.invoke(
    {"input": "How many rows are in the dataframe?"},
    config={"configurable": {"session_id": "abc123"}},
)

# remembers
chain.invoke(
    {"input": "What were we talking about?"},
    config={"configurable": {"session_id": "abc123"}},
)

# different session_id --> does not remember the conversation
chain.invoke(
    {"input": "What were we talking about?"},
    config={"configurable": {"session_id": "def234"}},
)
```
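The session behavior in those last two calls comes entirely from the `store` dict: one history object per `session_id`, created lazily on first use. Here is a dependency-free sketch of that lookup, with a plain list standing in for `InMemoryHistory`:

```python
# Dependency-free sketch of the per-session lookup pattern: one history per
# session_id, created on first access. A plain list stands in for
# InMemoryHistory here.

store = {}

def get_session_history(session_id):
    if session_id not in store:
        store[session_id] = []  # stands in for InMemoryHistory()
    return store[session_id]

# Two messages recorded under one session id...
get_session_history("abc123").append("How many rows are in the dataframe?")
get_session_history("abc123").append("There are 3 rows.")

print(len(get_session_history("abc123")))  # 2: same id, same history object
print(len(get_session_history("def234")))  # 0: a new id starts empty
```

This is why `"def234"` does not remember the conversation: it gets a freshly created, empty history.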
-
It's a bit janky, but you can actually support memory with `create_pandas_dataframe_agent` itself:

```python
from langchain.agents.agent_types import AgentType
from langchain.memory import ConversationBufferMemory
from langchain_experimental.agents.agent_toolkits import create_pandas_dataframe_agent
from langchain_openai import OpenAI

llm = OpenAI(temperature=0)

# Adapted from langchain_experimental.agents.agent_toolkits.pandas.prompt.SUFFIX_WITH_DF
suffix = """
This is the result of `print(df.head())`:
{df_head}
Conversation history:
{history}
Begin!
Question: {input}
{agent_scratchpad}"""

agent_executor = create_pandas_dataframe_agent(
    llm,
    df,
    verbose=True,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    suffix=suffix,
    include_df_in_prompt=None,
)

memory = ConversationBufferMemory()

query = "how many rows are in the dataframe?"
response = agent_executor.invoke({"input": query, "history": memory.buffer})
print(response["output"])

memory.save_context({"input": query}, {"output": response["output"]})

query = "Now multiply that by 2"
response = agent_executor.invoke({"input": query, "history": memory.buffer})
print(response["output"])
```
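This works because the memory's `buffer` renders the saved turns as a plain `Human:`/`AI:` transcript string, which fills the `{history}` slot in the suffix. The class below is a minimal hypothetical stand-in for `ConversationBufferMemory`, not LangChain's implementation, just to show the shape of what gets injected:

```python
# Hypothetical stand-in for ConversationBufferMemory: save_context records
# (input, output) turns, and .buffer renders them as the transcript string
# that fills the {history} slot in the prompt suffix.

class BufferMemorySketch:
    def __init__(self):
        self.turns = []  # list of (human_input, ai_output) pairs

    def save_context(self, inputs, outputs):
        self.turns.append((inputs["input"], outputs["output"]))

    @property
    def buffer(self):
        return "\n".join(f"Human: {i}\nAI: {o}" for i, o in self.turns)

memory = BufferMemorySketch()
memory.save_context(
    {"input": "how many rows are in the dataframe?"},
    {"output": "There are 3 rows."},
)
print(memory.buffer)
# Human: how many rows are in the dataframe?
# AI: There are 3 rows.
```

Since `{history}` is just interpolated text here, the LLM sees the prior turns inline in its prompt, which is what lets "Now multiply that by 2" resolve "that".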
-
Hey, how difficult of an add would this be to the agent function itself? I find the short-term solution not ideal (mainly because it relies on underlying private functions which shouldn't be used). I was experimenting with adding this.
Probably naive (I've not looked at the source code much), but I wonder what you think 🤔 @ccurme
-
Feature request
Enable memory implementation in pandas dataframe agent
Motivation
I have researched thoroughly and have not found any solid solution for implementing memory in the pandas dataframe agent. I have also tried everything, but the agent does not remember the conversation.
Proposal (If applicable)
No response