Custom Prompt from PromptTemplate and pass it to the RetrievalQA #26954
Hello, @mrutunjay-kinagi! I'm here to assist you with any bugs, questions, or contributions you may have. To pass the actual query asked by the end user and the retrieved chunks as context to the PromptTemplate, define a custom prompt and hand it to RetrievalQA through chain_type_kwargs:

```python
from langchain.chains import RetrievalQA
from langchain_core.prompts import PromptTemplate
from langchain_community.chat_models import BedrockChat  # BedrockChat lives in chat_models, not llms
from langchain_core.callbacks import StdOutCallbackHandler


def get_retrieval_chain(data, source):
    # RetrievalQA's internal combine-documents chain fills the prompt with
    # the variables "question" and "context", even though the chain itself
    # is invoked with the input key "query".
    prompt_template = """
    Role: You are an assistant for question-answering tasks.
    Use the following pieces of retrieved context to answer the question.
    If you don't know the answer, just say that you don't know or NA.
    Keep the answer to the point and follow the guidelines below.

    Guidelines:
    - If the query asks for a numerical value (e.g., amounts, percentages, dates), return only the relevant value.
    - For yes/no questions, answer with either 'Yes' or 'No' based on the context.
    - If no relevant information is available, return 'NA'.
    - Do not provide any additional commentary or filler text. Focus on precision and brevity.

    Question: {question}
    Context: {context}
    """
    prompt = PromptTemplate(
        template=prompt_template, input_variables=["question", "context"]
    )

    llm = BedrockChat(
        model_id="anthropic.claude-3-haiku-20240307-v1:0", region_name="us-east-1"
    )
    handler = StdOutCallbackHandler()

    vectorstore = get_opensearch_vectorstore(data, source)  # your own helper

    # Create the retriever; "k" (not "top_k") controls how many documents
    # OpenSearch returns.
    search_kwargs = {
        "filter": (
            {"term": {"metadata.source.keyword": source}} if source else {}
        ),
        "k": 7,  # retrieve the 7 most relevant documents
    }
    retriever = vectorstore.as_retriever(search_kwargs=search_kwargs)

    # Create the retrieval chain with the custom prompt.
    qa_chain = RetrievalQA.from_chain_type(
        llm=llm,
        retriever=retriever,
        callbacks=[handler],
        return_source_documents=True,
        verbose=True,
        chain_type_kwargs={"prompt": prompt},
    )
    return qa_chain


qa_chain = get_retrieval_chain(data, source)
response = qa_chain.invoke({"query": "Your actual query here"})
print(response)
```

In this example, the user's question reaches the prompt as {question} and the retrieved chunks are stuffed into {context}, so you don't need to wire them up yourself.
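Because return_source_documents=True is set, the response is a dict with the answer under "result" and the retrieved chunks under "source_documents". A minimal sketch of pulling both out (the query string here is just a placeholder):

```python
# Placeholder query for illustration; swap in the end user's real question.
response = qa_chain.invoke({"query": "What is the total contract value?"})

print(response["result"])  # the model's answer

# The documents that were stuffed into {context}.
for doc in response["source_documents"]:
    print(doc.metadata.get("source"), doc.page_content[:100])
```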
Description
I'm currently using langchain version 0.1.16 in my project. I'm not able to figure out how to pass the actual query asked by the end user, and the retrieved chunks as context, to the PromptTemplate.
System Info
langchain==0.1.16