Answer generated by a 🤖

From your description, it seems like you're having an issue with the chatbot losing context during the conversation. The proposed solution is to switch to a ConversationChain whose prompt injects the running conversation history through a MessagesPlaceholder, backed by a ConversationBufferMemory. Here's how you can modify your code:

from langchain.prompts import (
    ChatPromptTemplate,
    MessagesPlaceholder,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate
)
from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory, ReadOnlySharedMemory

# Prompt with a placeholder that injects the conversation history on every turn
prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template(
        "The following is a friendly conversation between a human and an AI. The AI is talkative and "
        "provides lots of specific details from its context. If the AI does not know the answer to a "
        "question, it truthfully says it does not know."
    ),
    MessagesPlaceholder(variable_name="history"),
    HumanMessagePromptTemplate.from_template("{input}")
])

llm = ChatOpenAI(temperature=0)

# return_messages=True so the history is passed as chat messages rather than a single string
memory = ConversationBufferMemory(return_messages=True)
readonlymemory = ReadOnlySharedMemory(memory=memory)

pdf_qa = ConversationChain(memory=readonlymemory, prompt=prompt, llm=llm)

After this modification, you can generate the initial and follow-up answers as you did before. This change should help the AI maintain the context of the conversation and provide more relevant responses.

Please note that this is one potential solution based on the information provided. If this doesn't solve your issue, there might be other factors at play that we would need to investigate further.

If you find this solution helpful and believe it could benefit other users, I encourage you to make a pull request to update the documentation. This way, others facing a similar issue can also benefit from your experience.
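As a minimal usage sketch of the chain above (the dishwasher strings are placeholders, not from the original question): ConversationChain takes the user turn under the input key rather than question, and since ReadOnlySharedMemory is read-only it won't write new turns back into the wrapped buffer, so you would either pass memory=memory to the chain directly or save each turn yourself:

first = pdf_qa.predict(input="My dishwasher is not draining. What can I do?")

# ReadOnlySharedMemory only reads from the wrapped memory and does not save new turns,
# so record the exchange on the underlying buffer (or use memory=memory in the chain)
memory.save_context(
    {"input": "My dishwasher is not draining. What can I do?"},
    {"output": first},
)

follow_up = pdf_qa.predict(input="I tried those steps and it still doesn't drain.")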
This response is meant to be useful, save you time, and share context. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
Hi all,
I'm writing a script that takes as input an appliance manual (e.g. from a dishwasher) and a certain issue (e.g. "Dishwasher is not draining") and should output resolution steps based on the manual in a conversational manner. I am using few-shot learning in the prompt. This is how I generate the answer:
result = pdf_qa({"question": prompt})
result["answer"]
The answer is relevant:
"This could be due to a clog or blockage. Here's what you can do:\n\n- Check the bottom of the dishwasher for any food particles or small objects that might be blocking the drain.\n\n- Inspect the drain hose under the sink. It should be high looped. If it's not, water from the sink could be going into the dishwasher.\n\n- Check the filter at the bottom of the dishwasher. If it's clogged, clean it out.\n\nIf the issue persists, I will go ahead and schedule a technician for you. Try these solutions and tell me... did it work?"
Then, when generating the follow-up answer...
... the chatbot suddenly seems to be lost:
"As an AI, I don't have personal experiences. However, I can help you troubleshoot further if the proposed solutions for your dishwasher issue didn't work. Could you please specify what the issue is?"
Is this an issue because of the memory object I'm using? Or is it a matter of prompt engineering? Would appreciate any help!
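If it helps to narrow this down, the memory contents between the two calls can be inspected like this (assuming the chain's memory is a ConversationBufferMemory named memory):

# What the chain will see as history on the next turn
print(memory.load_memory_variables({}))
# Raw list of HumanMessage/AIMessage objects stored so far
print(memory.chat_memory.messages)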