"Continue Generating" feature using ConversationalRetrievalChain #11170
Replies: 1 comment
-
🤖 Hello,

Based on your description, it seems like you want the Language Learning Model (LLM) to generate a continuation of its previous answer, rather than regenerating the previous answer in a different form. This can be achieved by modifying the way the chat history is used in the question generation process.

In `ConversationalRetrievalChain`, the chat history and the follow-up question are condensed into a single standalone question before retrieval. If you want the LLM to generate a continuation of its previous answer, you need to modify the prompt that generates this standalone question (the `condense_question_prompt`) so that it also receives the previous answer. Here is an example of how you can modify the prompt:

```python
from langchain.prompts import PromptTemplate

# This controls how the standalone question is generated.
# Should take `chat_history`, `question`, and `previous_answer` as input variables.
template = (
    "Combine the chat history, previous answer, and follow up question into "
    "a standalone question. Chat History: {chat_history} "
    "Previous answer: {previous_answer} "
    "Follow up question: {question}"
)
prompt = PromptTemplate.from_template(template)
```

In this template, the previous answer is passed in through the `{previous_answer}` variable alongside the chat history and the follow-up question. Please note that this modification requires you to keep track of the previous answer yourself and pass it as an input to the chain on each call.

I hope this helps! If you have any further questions, feel free to ask.

Best,
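The prompt above only solves half of the problem: the chain does not remember its own last answer, so you have to carry it between calls yourself. The sketch below is a minimal, library-free illustration of that bookkeeping (plain Python, with a stubbed `fake_llm` function standing in for a real model call — the function names are assumptions for the example, not part of LangChain): after each turn, store the answer and feed it back through the `previous_answer` slot of the template.

```python
# Minimal sketch of carrying the previous answer between calls.
# The template mirrors the condense-question prompt shown above.
template = (
    "Combine the chat history, previous answer, and follow up question into "
    "a standalone question. Chat History: {chat_history} "
    "Previous answer: {previous_answer} "
    "Follow up question: {question}"
)

def fake_llm(prompt: str) -> str:
    # Stand-in for a real LLM call; just echoes part of the prompt.
    return "ANSWER(" + prompt[:40] + "...)"

def ask(question, chat_history, previous_answer):
    # Fill the condense-question template with all three variables.
    standalone = template.format(
        chat_history="\n".join(chat_history),
        previous_answer=previous_answer,
        question=question,
    )
    answer = fake_llm(standalone)
    # Record the turn so the next call sees it in the history.
    chat_history.append(f"Q: {question}")
    chat_history.append(f"A: {answer}")
    return answer

history = []
prev = ""
prev = ask("What is LangChain?", history, prev)
# The follow-up "continue" now carries the previous answer explicitly.
prev = ask("continue", history, prev)
```

With a real chain you would do the same thing around each call: read the answer out of the chain's output, keep it in a variable, and supply it as the `previous_answer` input on the next invocation.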
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
-
I'm building a chatbot that answers questions based on context, and I have set the LLM's token limit to 100. What should I do if I want the LLM's answer to be literally the continuation of its previous answer? The first thing that came to mind was typing "continue" or "continue generating" as a prompt while using a `chat_history` variable that includes the previous question and the LLM's previous answer, but the output was not the one I wanted (it just regenerates the previous answer in another form).
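For concreteness, that setup can be sketched as an OpenAI-style message list (a plain-Python illustration; `build_messages` and the message shapes are assumptions for the example, not a specific library's API): the previous question and answer sit in the history, and "continue" arrives as just another user turn, so nothing in the structure tells the model to resume mid-answer.

```python
# Sketch of the naive "continue" attempt: the prior Q/A lives in the
# history, and "continue generating" is an ordinary user message.
def build_messages(chat_history, user_input):
    messages = [{"role": "system", "content": "Answer questions from context."}]
    for question, answer in chat_history:
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": user_input})
    return messages

# The previous answer was cut off mid-sentence by the 100-token limit.
chat_history = [("What is RAG?", "RAG combines retrieval with generation, so")]
messages = build_messages(chat_history, "continue generating")
# The truncated answer is only an ordinary assistant turn here; the model
# sees no explicit instruction to pick up exactly where it stopped, which
# is why it tends to restart the answer in a different form.
```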