Multiple chains fail when there is more than one input #4433
Replies: 2 comments 1 reply
-
🤖 Hey @vijayraghavanv! Great to see you back here with another intriguing puzzle. How's it going?

Based on your code and the issue you're facing, it seems like the problem arises when the combined input object reaches `embedQuery`, which expects a plain string rather than an object. To resolve this, you can modify your chain by adding a transformation step that extracts the text before it is passed on.

Here's an example of how you can do this:

```typescript
const transformInput = new Transform({
  transform: (input) => {
    return input.formattedQuestion; // or any other property that contains the text
  },
});

const answerChain = RunnableSequence.from([
  {
    context: retriever.pipe(formatDocumentsAsString),
  },
  answerTemplate,
  transformInput, // add this before llm
  llm,
  new StringOutputParser(),
]);
```

In this code, `transformInput` pulls the text out of the combined input object before it is passed to the next step. Please note that you need to replace `input.formattedQuestion` with whichever property actually contains the text in your setup.

I hope this helps! If you have any other questions or need further clarification, feel free to ask.

Sources
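The idea behind `transformInput` can be illustrated without the library. Below is a minimal self-contained sketch (all names here are hypothetical stand-ins, not LangChainJS APIs) of a pipeline in which one step selects the text field from the combined object before a string-only stage, such as `embedQuery`, runs:

```typescript
// Hypothetical stand-in for a chain: a list of steps applied in order.
type Step = (input: unknown) => unknown;

// The key step: pick the text field out of the combined input object
// before anything that expects a plain string runs.
const extractText: Step = (input) => {
  const obj = input as { formattedQuestion: string };
  return obj.formattedQuestion;
};

// A fake "embed" stage that, like embedQuery, accepts only a string.
const embed: Step = (input) => {
  if (typeof input !== "string") {
    throw new TypeError("embed expects a string, got " + typeof input);
  }
  return `embedded(${input})`;
};

// Apply each step to the output of the previous one.
const runSequence = (steps: Step[], input: unknown): unknown =>
  steps.reduce((acc, step) => step(acc), input);

// Without extractText the object would reach embed and throw;
// with it, the pipeline succeeds.
const result = runSequence([extractText, embed], {
  question: "My question",
  type: "My type",
  formattedQuestion: "My formatted question",
});
console.log(result); // embedded(My formatted question)
```

The design point is only that the extraction must sit immediately before the string-only stage, which is what the suggested `transformInput` step does inside the chain.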
-
🤖 In the LangChainJS framework, you can add a transformation step by creating a custom class that implements the `Runnable` interface:

```typescript
import { ChatOpenAI, EmbedQuery } from "@langchain/openai";
import { StringOutputParser, Runnable, Stream } from "@langchain/core";

class TextExtractor implements Runnable {
  async run(input: any, stream: Stream): Promise<void> {
    const text = input.message; // replace this with your actual extraction logic
    await stream.push(new EmbedQuery(text));
  }
}

const chain = RunnableSequence.from([
  new ChatOpenAI({ temperature: 0 }),
  new TextExtractor(),
  new StringOutputParser(),
]);

const stream = await chain.stream({ message: "Hello there!" });
for await (const chunk of stream) {
  console.log(chunk);
}
```

In this code, `TextExtractor` extracts the text from the input object and pushes it downstream before the next step runs.

Regarding the structure of the input object that is causing the issue, the relevant input schema is:

```typescript
const queryInputSchema = /* #__PURE__ */ z.object({
  query: /* #__PURE__ */ z
    .string()
    .describe("text string to compare to document contents"),
  filter: /* #__PURE__ */ z
    .string()
    .optional()
    .describe("logical condition statement for filtering documents"),
});
```

This schema suggests that the input object should have the following structure:

```typescript
{
  query: string; // A text string to compare to document contents
  filter?: string; // An optional logical condition statement for filtering documents
}
```

This object is used in the `outputProcessor` method:

```typescript
async outputProcessor({
  query,
  filter,
}: z.infer<typeof queryInputSchema>): Promise<StructuredQuery> {
  let myQuery = query;
  if (myQuery.length === 0) {
    myQuery = " ";
  }
  if (filter === "NO_FILTER" || filter === undefined) {
    return new StructuredQuery(myQuery);
  } else {
    const parsedFilter = await this.queryTransformer.parse(filter);
    return new StructuredQuery(myQuery, parsedFilter);
  }
}
```

If the object you pass does not match this shape, the chain will fail at this point.

Sources
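The branching in `outputProcessor` can be sketched in plain TypeScript, with `StructuredQuery` and the filter parser replaced by hypothetical stand-ins, to show exactly when a filter is attached:

```typescript
// Hypothetical stand-ins for StructuredQuery and the filter parser.
class StructuredQuery {
  constructor(
    public query: string,
    public filter?: string,
  ) {}
}

const parseFilter = async (filter: string): Promise<string> =>
  `parsed(${filter})`;

// Mirrors the branching shown above: an empty query becomes a single
// space, and "NO_FILTER" (or a missing filter) yields a query with no
// filter attached; anything else is parsed and attached.
async function outputProcessor(input: {
  query: string;
  filter?: string;
}): Promise<StructuredQuery> {
  let myQuery = input.query;
  if (myQuery.length === 0) {
    myQuery = " ";
  }
  if (input.filter === "NO_FILTER" || input.filter === undefined) {
    return new StructuredQuery(myQuery);
  }
  return new StructuredQuery(myQuery, await parseFilter(input.filter));
}
```

For example, `{ query: "", filter: "NO_FILTER" }` produces a `StructuredQuery` whose query is a single space and whose filter is undefined.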
-
In the above code, I'm trying to combine two chains. The first chain is the questionGeneratorChain, which produces a formattedQuestion. This should serve as input to the second chain (answerChain), along with the "type" that was provided when the chain was invoked.
However, the issue is that the input to embedQuery (which is called internally) is an object of the shape `{question: 'My question', type: 'My type', formattedQuestion: 'The formatted question returned by the question generator'}`. embedQuery expects a string, not an object, so this fails.
How do I overcome this? I would prefer using RunnableSequences and not ConversationalQAChain.
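The two-chain flow described in the question can be sketched in plain TypeScript, with hypothetical stand-ins for the chains and for `embedQuery`, to show why selecting the string field before the embedding call avoids the error:

```typescript
// Hypothetical input shape for the first chain.
type ChainInput = { question: string; type: string };

// Stand-in for questionGeneratorChain: produces the formatted question
// alongside the original fields.
const questionGenerator = (input: ChainInput) => ({
  ...input,
  formattedQuestion: `[${input.type}] ${input.question}`,
});

// Stand-in for embedQuery: accepts only a string, never an object.
const embedQuery = (text: string): number[] =>
  Array.from(text).map((c) => c.charCodeAt(0));

// Stand-in for answerChain: selects the string field first, so the
// combined object never reaches the string-only function.
const answerChain = (input: ReturnType<typeof questionGenerator>) =>
  embedQuery(input.formattedQuestion);

const out = answerChain(questionGenerator({ question: "Hi", type: "faq" }));
console.log(out.length > 0); // true
```

In a RunnableSequence this selection step would sit between the question generator and the retrieval stage, so that only the string reaches the embedding call.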