Replies: 1 comment
-
🤖 Hello! Thank you for your detailed question. It seems like you're experiencing an issue where the chat history is influencing the answers to questions on a new topic. One potential solution is to provide a custom prompt that tells the model to fall back on its own knowledge when the retrieved context is not relevant, for example:
prompt_template = """If the context is not relevant,
please answer the question by using your own knowledge about the topic
{context}
Question: {question}
"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context", "question"]
)
chain_type_kwargs = {"prompt": PROMPT}
# Initialise Langchain - QA chain
qa = RetrievalQA.from_chain_type(llm=self.llm,
chain_type="stuff",
retriever=vectorstore.as_retriever(),
chain_type_kwargs=chain_type_kwargs)
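A minimal usage sketch, assuming the chain above has been constructed (the query string here is just an illustration):

# Run a single query through the chain; the retriever fetches relevant
# documents and the LLM answers using them as {context}.
result = qa.run("What does the documentation say about chat history?")
print(result)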
I hope this helps! If you have any further questions, please don't hesitate to ask.
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
-
I have a chat working using ConversationalRetrievalChain that answers questions about my vector database content. I maintain the chat_history and pass it, along with the question, through to the chain to get a response. All works as expected: I can ask a series of questions on a related topic and it returns sensible answers.
The question I have (and excuse my limited knowledge in this area) is that if I ask a question on another topic, it won't provide a proper answer even though the topic is covered in my content. If I refresh the chat and ask the same question, I get a perfect answer. It's as though the chat_history is playing some part in developing the answer, over and above just maintaining the conversation.
I am not providing a prompt at all and am using the default. Is there a default? I've been reading up on prompting and looked at the source, and it looks like there are three pieces of information that come into play: context, chat_history and the question. Can anyone explain how these pieces of information come into play and how they are impacting my chat? I'm wondering whether this is expected behaviour or whether I've got an issue with my code.
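For context on how those three pieces interact: in the classic LangChain API, ConversationalRetrievalChain first uses the LLM to condense chat_history plus the new question into a standalone question (via a default CONDENSE_QUESTION_PROMPT), then retrieves documents with that standalone question, and finally answers using the retrieved documents as context. A topic switch can therefore go wrong at the condensing step, because the rewritten question may be skewed toward the earlier topic. A minimal sketch of that flow, assuming llm and vectorstore are already set up (the question strings are just illustrations):

from langchain.chains import ConversationalRetrievalChain

qa = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=vectorstore.as_retriever(),
)

chat_history = []

# First question: no history yet, so it is retrieved against as-is.
result = qa({"question": "How do I configure the retriever?",
             "chat_history": chat_history})
chat_history.append(("How do I configure the retriever?", result["answer"]))

# Second question on a different topic: before retrieval, the LLM rewrites
# it into a standalone question using the chat history, and that rewritten
# question (not the original) is what gets embedded and searched.
result = qa({"question": "What about authentication?",
             "chat_history": chat_history})
print(result["answer"])

If your version of the library supports it, setting return_generated_question=True when constructing the chain lets you inspect the rewritten question and confirm whether it is what's being retrieved against.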