🤖

Hi,

The behavior you're observing is not a memory issue but a consequence of the prompt template LangChain uses. The chain builds its prompt from the PROMPT variable defined in the stuff_prompt.py file, and that template guides the language model to generate a final answer with sources based on the provided documents and question, which is why a Q&A structure can appear inside the response.

Here is the relevant code:

template = """Given the following extracted parts of a long document and a question, create a final answer with references ("SOURCES").
If you don't know the answer, just say that you don't know. Don't try to make up an answer.
ALWAYS return a "SOURCES" part in your answer.

QUESTION: {question}
=========
{summaries}
=========
FINAL ANSWER:"""

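To see how this template produces the Q&A structure, here is a minimal sketch of how the final prompt is assembled before it is sent to the language model. The build_prompt helper and the (content, source) pairing are illustrative assumptions, not LangChain API; in the actual chain, the retrieved documents are formatted into the {summaries} slot for you.

```python
# Template text mirroring stuff_prompt.py; {question} and {summaries}
# are the two input variables the chain fills in.
template = """Given the following extracted parts of a long document and a question, create a final answer with references ("SOURCES").
If you don't know the answer, just say that you don't know. Don't try to make up an answer.
ALWAYS return a "SOURCES" part in your answer.

QUESTION: {question}
=========
{summaries}
=========
FINAL ANSWER:"""


def build_prompt(question: str, docs: list[tuple[str, str]]) -> str:
    """Illustrative helper: join (content, source) pairs into the
    {summaries} slot, the way the stuff chain concatenates documents."""
    summaries = "\n".join(
        f"Content: {content}\nSource: {source}" for content, source in docs
    )
    return template.format(question=question, summaries=summaries)


prompt = build_prompt(
    "What is the capital of France?",
    [("Paris is the capital of France.", "doc1.txt")],
)
```

Because the instructions "create a final answer with references" and "ALWAYS return a 'SOURCES' part" are baked into the prompt, the model is being explicitly told to produce this answer-plus-sources shape. If you want a different response format, pass a custom prompt to the chain instead of the default PROMPT.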
Answer selected by TheJokerBG