Replies: 1 comment
Answer generated by a 🤖

I understand that you're trying to integrate Pinecone and Max Marginal Relevance (MMR) in your code and you're unsure where to set the parameters. In your code, you can set them like this:

```python
vectorstore = Pinecone.from_documents(docs, embeddings, index_name=index_name)
retriever = vectorstore.as_retriever(search_type="mmr")

# Set the fetch_k and k parameters in the get_relevant_documents call
matched_docs = retriever.get_relevant_documents(query, k=10, fetch_k=100)
for i, d in enumerate(matched_docs):
    print(f"\n## Document {i}\n")
    print(d.page_content)

model = ChatOpenAI(model_name="gpt-3.5-turbo")  # e.g. 'gpt-3.5-turbo' or 'gpt-4'
qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)
```

In this code, the `k` and `fetch_k` parameters are passed directly to `get_relevant_documents`: `fetch_k` is the number of candidates fetched from the vector store, and `k` is the number of documents MMR ultimately returns. It's generally recommended to set `fetch_k` larger than `k` so that MMR has a broad candidate pool to diversify from.

Please note that this is one potential solution based on the information provided. If you find this solution helpful and believe it could benefit others, you might consider making a pull request to update the documentation. This way, other users facing the same issue can also benefit from your findings.
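To make the two knobs concrete, here is a minimal, self-contained sketch of the MMR selection step itself (plain Python, no Pinecone or LangChain required; the toy vectors and the `lambda_mult` weighting are made up for illustration). The retriever first fetches the `fetch_k` most query-similar candidates, then greedily keeps `k` of them, trading relevance to the query against redundancy with what was already picked:

```python
def dot(a, b):
    # Similarity via dot product (a stand-in for cosine similarity
    # on roughly unit-length embedding vectors).
    return sum(x * y for x, y in zip(a, b))

def mmr_select(query, candidates, k, lambda_mult=0.5):
    """Greedy max-marginal-relevance selection.

    `candidates` plays the role of the fetch_k nearest neighbors already
    pulled from the vector store; the function returns the indices of the
    k picks, balancing query relevance against redundancy.
    """
    selected = []
    remaining = list(range(len(candidates)))
    while remaining and len(selected) < k:
        best, best_score = None, float("-inf")
        for i in remaining:
            relevance = dot(query, candidates[i])
            # Worst-case redundancy with anything already selected.
            redundancy = max(
                (dot(candidates[i], candidates[j]) for j in selected),
                default=0.0,
            )
            score = lambda_mult * relevance - (1 - lambda_mult) * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        remaining.remove(best)
    return selected

# Here "fetch_k = 3" candidates came back from the store; MMR keeps k = 2.
# Index 1 is a near-duplicate of index 0, so a diversity-leaning weighting
# skips it in favor of the more distinct index 2.
query = [1.0, 0.0]
candidates = [[0.9, 0.1], [0.85, 0.2], [0.1, 0.9]]
print(mmr_select(query, candidates, k=2, lambda_mult=0.3))  # → [0, 2]
```

Because MMR can only diversify among the candidates it actually sees, `fetch_k` should comfortably exceed `k` (e.g. 100 vs. 10 as above). Depending on your LangChain version, you should also be able to bake both values into the retriever at construction time with `vectorstore.as_retriever(search_type="mmr", search_kwargs={"k": 10, "fetch_k": 100})`; check the `as_retriever` signature of your installed version.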
This response is meant to be useful, save you time, and share context. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
Here I create code for combining Pinecone and MMR (max marginal relevance). I want to set fetch_k = 100 and k = 10. Where can I set the parameters {"fetch_k": 100, "k": 10} in this code?