Chaining two prompts while keeping memory #9040
-
Answer generated by a 🤖

Hello,

From your description, it seems like you're trying to chain two prompts where the second chain's input depends on the first chain's output. The error message you're encountering suggests that one of your chains is expecting more than one input. In LangChain, a SimpleSequentialChain requires every chain in the pipeline to take exactly one input and produce exactly one output, so that each step's output can be fed directly into the next step.

To resolve this issue, you need to modify your second chain so that it only expects one input, which should be the output of the first chain. Here's an example of how you can modify your second chain:

```python
from langchain.prompts import PromptTemplate

combine_prompt_template = """Given the following extracted parts of a long document and a question, create a final answer with references ("SOURCES").
If you don't know the answer, just say that you don't know. Don't try to make up an answer.
ALWAYS return a "SOURCES" part in your answer.

{input}
=========
FINAL ANSWER:"""

COMBINE_PROMPT = PromptTemplate(
    template=combine_prompt_template, input_variables=["input"]
)
```

With this modification, the second chain now expects a single input, which should be the output of the first chain. This way, you can chain the two prompts together without encountering the error.

As for maintaining context between chains, the SimpleSequentialChain passes each chain's output forward as the next chain's input, so the relevant context from the first answer reaches the second prompt automatically.

I hope this helps! If you have any other questions or need further clarification, feel free to ask.

Best,
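To make the full pipeline concrete, here is a minimal sketch of how two single-input chains could be wired together with a SimpleSequentialChain. The model and the sport-related prompts are placeholders for illustration, not the exact prompts from the question:

```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain

llm = OpenAI(temperature=0)  # placeholder model; use whichever LLM you already have

# First chain: takes a single input ("sport") and produces an answer.
first_prompt = PromptTemplate.from_template(
    "List the three most important rules of {sport}."
)
first_chain = LLMChain(llm=llm, prompt=first_prompt)

# Second chain: also takes exactly one input, which will be the first chain's output.
second_prompt = PromptTemplate.from_template(
    "Given the following rules, explain which one is hardest to follow and why:\n\n{input}"
)
second_chain = LLMChain(llm=llm, prompt=second_prompt)

# SimpleSequentialChain requires each step to have one input and one output;
# the first chain's output is handed straight to the second chain.
overall_chain = SimpleSequentialChain(chains=[first_chain, second_chain], verbose=True)
print(overall_chain.run("tennis"))
```

Because each step has exactly one input, the sequential chain can pass the first answer straight into the second prompt without raising the error above.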
-
Answer generated by a 🤖

Hello,

From your description, it seems like you want to make two separate calls to the GPT model, one after each prompt. However, the current implementation combines the prompts and sends them to the model in a single call.

To achieve what you want, you would need to modify your code to format each prompt and call the model once per prompt, for example:

```python
from langchain import PromptTemplate

# Create two separate PromptTemplates
prompt1 = PromptTemplate.from_template("Say {foo}")
prompt2 = PromptTemplate.from_template("Then say {bar}")

# Format each prompt separately
formatted_prompt1 = prompt1.format(foo="Hello")
formatted_prompt2 = prompt2.format(bar="World")

# Pass each formatted prompt to GPT separately
# ('gpt_model' stands in for whatever model client you are using)
gpt_response1 = gpt_model.generate(formatted_prompt1)
gpt_response2 = gpt_model.generate(formatted_prompt2)
```

In this example, each prompt is formatted on its own and sent to the model in its own call, so you get one response per prompt.

Please note that this approach will require you to manage the conversation history manually if you want the second prompt to be aware of the context from the first prompt's output.

I hope this helps! If you have any further questions, feel free to ask.
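If the second call does need to see the first answer, one simple way to manage that history by hand is to interpolate the first response into the second prompt. This is only a sketch: it reuses the placeholder gpt_model from above, and the {previous} variable is a hypothetical name introduced here for illustration.

```python
from langchain import PromptTemplate

prompt1 = PromptTemplate.from_template("Say {foo}")
# The second prompt explicitly takes the first response as extra context.
prompt2 = PromptTemplate.from_template("Earlier you said: {previous}\nNow say {bar}")

formatted_prompt1 = prompt1.format(foo="Hello")
gpt_response1 = gpt_model.generate(formatted_prompt1)  # 'gpt_model' is a placeholder, as above

# Carry the first answer into the second prompt manually.
formatted_prompt2 = prompt2.format(previous=gpt_response1, bar="World")
gpt_response2 = gpt_model.generate(formatted_prompt2)
```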
-
I want to chain two prompts of this kind:

X being the name of a sport.

I've implemented this with two independent chains and linked them with a SimpleSequentialChain. Is that the right approach? When I create the overall chain, I get the following error message:

Chains used in SimplePipeline should all have one input

The thing is, my second chain has no input variables in its prompt, since its input is supposed to come from the AI's answer to the first prompt. How can I handle such a chain? Do I have to use the ConversationBufferMemory class? I don't find the examples given in the documentation very insightful...