To address your issues with creating a RAG (Retrieval-Augmented Generation) setup with a prompt, chat history, and tools, here are some solutions and explanations:

1. Ensuring Tools Are Called Correctly

To let the model call tools without enforcing them, bind the tools with `tool_choice: "auto"` so the model decides for itself when a tool is needed. Here's how you can modify your code:

```typescript
const llmWithTools = streamingModel.bindTools(tools, {
  tool_choice: "auto",
});
```

2. Understanding Why Tools Are Not Called Unless Enforced

The issue you're facing, where tools are not called unless enforced, may come down to how the model interprets the input and decides whether a tool is needed. With `tool_choice: "auto"`, the model only emits a tool call when it judges the prompt to require one; if it can plausibly answer from the retrieved context alone, it may respond directly instead.

3. Handling Callbacks for Tool Invocation

To ensure that your callback handlers capture all relevant events during tool invocation, implement each of the relevant handler methods (such as handleToolStart, handleToolEnd, and handleLLMEnd).
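One practical way to see which case you are in is to inspect the final message for tool calls. The snippet below is a minimal, library-free TypeScript sketch of reading `additional_kwargs.tool_calls` off an OpenAI-style chat message; the `ChatMessageLike`, `ToolCall`, and `extractToolCalls` names are illustrative, not LangChain APIs. An empty result means the model chose to answer directly.

```typescript
// Hypothetical shapes mirroring what an OpenAI-style chat message carries.
interface ToolCall {
  function: { name: string; arguments: string };
}

interface ChatMessageLike {
  additional_kwargs?: { tool_calls?: ToolCall[] };
}

// Safely read tool calls off a message; returns [] when the model
// answered directly (a common outcome with tool_choice: "auto" and
// a prompt that does not obviously need a tool).
function extractToolCalls(message: ChatMessageLike): ToolCall[] {
  return message.additional_kwargs?.tool_calls ?? [];
}
```

Checking the length of this array after each invocation tells you whether the difference you observe is in the model's decision, rather than in your callback wiring.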
Here is an example of how to set up a comprehensive callback handler:

```typescript
class MyCallbackHandler {
  handleToolStart(toolName, input) {
    console.log(`Tool ${toolName} started with input:`, input);
  }

  handleToolEnd(toolName, output) {
    console.log(`Tool ${toolName} ended with output:`, output);
  }

  handleLLMEnd(llmName, output) {
    console.log(`LLM ${llmName} ended with output:`, output);
  }

  // Implement other methods as needed
}

const handler = new MyCallbackHandler();
const response = await ragChain.invoke({
  input,
  chat_history: chatHistory.chat_history,
}, {
  callbacks: [handler],
});
```

Example Code with Adjustments

Here is your adjusted code with the changes applied:

```typescript
const streamingModel = new ChatOpenAI({
  streaming: true,
  modelName: MODEL_NAME,
  temperature: TEMPRATURE,
  verbose: false,
  modelKwargs: {
    seed: 1,
  },
});

const addTool = new DynamicStructuredTool({
  name: "add",
  description: "Adds a and b.",
  schema: z.object({
    a: z.number(),
    b: z.number(),
  }),
  func: async ({ a, b }) => {
    return `${a + b}`;
  },
});

const multiplyTool = new DynamicStructuredTool({
  name: "multiply",
  description: "Can perform multiplication operations.",
  schema: z.object({
    a: z.number(),
    b: z.number(),
  }),
  func: async ({ a, b }) => {
    console.log(`Multiply tool called with input: { a: ${a}, b: ${b} }`);
    const result = a * b * a;
    return result.toString();
  },
});

const tools = [addTool, multiplyTool];
const llmWithTools = streamingModel.bindTools(tools, {
  tool_choice: "auto",
});

const ragChain = await createRetrievalChain({
  retriever: await createHistoryAwareRetriever({
    llm: nonStreamingModel,
    retriever: customRetriever,
    rephrasePrompt: contextualizeQPrompt,
  }),
  combineDocsChain: await createStuffDocumentsChain({
    llm: llmWithTools,
    prompt: qaPrompt,
  }),
});

const chatHistory = await memory.loadMemoryVariables({});
const response = await ragChain.invoke({
  input,
  chat_history: chatHistory.chat_history,
}, {
  callbacks: [
    {
      handleLLMNewToken: (token) => {
        console.log(`Received token: ${token}`);
        onTokenStream(token);
      },
      handleLLMEnd: async (output) => {
        console.log('LLM End Output:', JSON.stringify(output, null, 2));
        const generation = output.generations[0][0];
        if ('message' in generation) {
          const chatGeneration = generation as ChatGeneration;
          if (chatGeneration.message.additional_kwargs?.tool_calls) {
            for (const toolCall of chatGeneration.message.additional_kwargs.tool_calls) {
              console.log('Tool Call:', JSON.stringify(toolCall, null, 2));
              const functionName = toolCall.function.name;
              const args = JSON.parse(toolCall.function.arguments);
              console.log(`Function called: ${functionName}`);
              console.log('Arguments:', args);
              // Execute the tool and stream only its result
              const tool = tools.find(t => t.name === functionName);
              if (tool) {
                try {
                  const result = await tool.invoke(args);
                  // Stream only the result
                  onTokenStream(result.toString());
                } catch (error) {
                  console.error(`Error executing tool ${functionName}:`, error);
                }
              } else {
                console.warn(`Tool ${functionName} not found`);
              }
            }
          }
        }
      },
      handleToolStart: (toolStart) => {
        console.log(`Tool Start: ${toolStart.name}`);
        console.log(`Tool Input: ${JSON.stringify(toolStart)}`);
        // Add any additional logging or processing for tool start here
      },
    },
  ],
});
```

This setup should help ensure that tools are called correctly and that all relevant events are captured during tool invocation [1][2][3].
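The dispatch step inside handleLLMEnd above can be isolated and tested on its own. The sketch below is a library-free TypeScript version of that loop: match each tool call by name, JSON-parse its string arguments, and invoke the matching tool. The `ToolLike`, `RawToolCall`, and `dispatchToolCalls` names are mine for illustration, not LangChain APIs.

```typescript
// Minimal stand-ins for a structured tool and an OpenAI-style tool call.
interface ToolLike {
  name: string;
  invoke: (args: Record<string, number>) => Promise<string>;
}

interface RawToolCall {
  function: { name: string; arguments: string };
}

// Run each requested tool call against a registry of tools,
// skipping (and warning about) names that have no matching tool.
async function dispatchToolCalls(
  calls: RawToolCall[],
  tools: ToolLike[],
): Promise<string[]> {
  const results: string[] = [];
  for (const call of calls) {
    const tool = tools.find((t) => t.name === call.function.name);
    if (!tool) {
      console.warn(`Tool ${call.function.name} not found`);
      continue;
    }
    // OpenAI-style tool calls carry arguments as a JSON string.
    const args = JSON.parse(call.function.arguments);
    results.push(await tool.invoke(args));
  }
  return results;
}
```

Keeping this logic in a plain function makes it easy to unit-test with mock tools, independent of the model and of where the callback fires.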
Description
I am trying to incorporate tools into my RagChain and I have set up the code above.
It works, but I have a few issues:
I must enforce the tools. If I don't do it and pass something like
the LLM model will not call the multiply tool when I ask how much 12 times 12 is.
and
I have set up a handler to catch the different callbacks, but what I see (when tools are enforced) is
without any tool start event, even though I have added handleToolStart everywhere possible in my code.
So my only option is to catch the tool call in handleLLMEnd, since it also does not appear in handleLLMStart.
So I ask for help in:
knowing whether there is another way to call tools while using createRetrievalChain;
understanding why llmWithTools will not call the tools unless it is enforced.
Regarding question 2, I have a hint in the callback logs: with tools not enforced I get
and with the tools enforced I get
so the presence of
changes whether the tool is called or not, but I might be mistaken.
System Info
[email protected]
"@langchain/core": "^0.2.18",
"@langchain/langgraph": "^0.0.31",
"@langchain/openai": "^0.2.5",