Replies: 2 comments 2 replies
-
🤖 Based on your description, it seems like you want the language model to be able to answer general questions based on all the embeddings in the VectorStore, not just the two documents it retrieves for each query. There are two main ways to approach this.

The first approach is to modify the prompt so that the model falls back on its own knowledge when the retrieved context is not relevant:
prompt_template = """If the context is not relevant,
please answer the question by using your own knowledge about the topic
{context}
Question: {question}
"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context", "question"]
)
chain_type_kwargs = {"prompt": PROMPT}
# Initialise Langchain - QA chain
qa = RetrievalQA.from_chain_type(llm=self.llm,
chain_type="stuff",
retriever=vectorstore.as_retriever(),
chain_type_kwargs=chain_type_kwargs)
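For reference, here is a minimal sketch of invoking the chain built above (the example question is hypothetical, and the exact call signature depends on your LangChain version):

```python
# Ask a general question; the retrieved documents fill {context},
# and the prompt lets the model fall back on its own knowledge
# whenever that context is not relevant.
result = qa({"query": "Which software do you know about?"})
print(result["result"])
```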
The second approach is to subclass the chain and override its `_get_docs` method so that it returns every document in the vector store instead of only the top matches:

```python
from typing import Any, Dict, List

from langchain.callbacks.manager import CallbackManagerForChainRun
from langchain.schema import Document

def _get_docs(
    self,
    question: str,
    inputs: Dict[str, Any],
    *,
    run_manager: CallbackManagerForChainRun,
) -> List[Document]:
    """Get all docs from the vector store."""
    # Retrieve all documents from the VectorStore
    # (assumes your vector store exposes a method such as
    # get_all_documents; not every implementation does)
    docs = self.vectorstore.get_all_documents()
    return self._reduce_tokens_below_limit(docs)
```

Please note that this modification will significantly increase computational cost and time, as the model will have to consider all documents in the VectorStore for each question. It may also decrease the relevance of the responses, as the model will be given a lot of potentially irrelevant information. I hope this helps! Let me know if you have any other questions.
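A lighter-weight middle ground, not covered in the reply above but worth noting, is to simply raise the number of documents the retriever returns instead of fetching everything (a sketch assuming a standard LangChain vector store; `k` is typically 4 by default):

```python
# Retrieve more documents per query rather than all of them;
# larger k trades answer breadth against token cost and relevance.
retriever = vectorstore.as_retriever(search_kwargs={"k": 10})
qa = RetrievalQA.from_chain_type(
    llm=self.llm,
    chain_type="stuff",
    retriever=retriever,
    chain_type_kwargs=chain_type_kwargs,
)
```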
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
-
So, do you have any solution for this situation? I ask because I also use ConversationalRetrievalChain, and it also sometimes gives its own knowledge in the response.
-
I am using ConversationalRetrievalChain and passing it the complete vector store from Pinecone, as below. My LLM always gets 2 documents from the vector store, selected according to the query, and those are passed as context.
Now, my problem is that I have hundreds of embeddings, and each embedding covers a different task in a different piece of software (e.g. how to register on Facebook, how to use Excel tables).
When I ask the LLM general questions about its knowledge base, such as "What can you help me with?" or "Which software do you know about?", it either responds with "I don't know" (as I have asked it to say "I don't know" if the question is not related to the provided context), or it says "I can help you with ABC or XYZ", where ABC and XYZ are just the two embeddings that were passed in as context.
Is there a way for it to have knowledge of all the embeddings, so it can answer such questions based on all of that knowledge?
Below is my prompt