Replies: 1 comment 1 reply
-
🤖 To support a 'sub-agent' or a tool with its own language model call and context, you can create a new `LLMChain` with its own prompt and wrap it in its own agent and executor. Here is an example of how you can do it:

```typescript
import { AgentExecutor, ZeroShotAgent } from "langchain/agents";
import { LLMChain } from "langchain/chains";
import { OpenAI } from "langchain/llms/openai";
import { SerpAPI } from "langchain/tools";
import { Calculator } from "langchain/tools/calculator";

const model = new OpenAI({ temperature: 0 });
const tools = [
  new SerpAPI(process.env.SERPAPI_API_KEY, {
    location: "Austin,Texas,United States",
    hl: "en",
    gl: "us",
  }),
  new Calculator(),
];

// The sub-agent gets its own prompt, independent of any other agent's system prompt.
const prefix = `Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools:`;
const suffix = `Begin! Remember to speak as a pirate when giving your final answer. Use lots of "Args"

Question: {input}
{agent_scratchpad}`;

const createPromptArgs = {
  suffix,
  prefix,
  inputVariables: ["input", "agent_scratchpad"],
};

const prompt = ZeroShotAgent.createPrompt(tools, createPromptArgs);
const llmChain = new LLMChain({ llm: model, prompt });
const agent = new ZeroShotAgent({
  llmChain,
  allowedTools: ["search", "calculator"],
});
const agentExecutor = AgentExecutor.fromAgentAndTools({ agent, tools });

const input = `Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?`;
const result = await agentExecutor.call({ input });
console.log(`Got output ${result.output}`);
```

The output of the 'sub-agent' can be utilized by the main agent through the `output` field of the result returned by `agentExecutor.call`.
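If you then want a main agent to consume that output, a minimal sketch (assuming `DynamicTool` from `langchain/tools` and `initializeAgentExecutorWithOptions`; the tool name, description, and input below are illustrative) is to wrap the sub-agent's executor as a tool:

```typescript
import { DynamicTool } from "langchain/tools";
import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { OpenAI } from "langchain/llms/openai";

// Wrap the sub-agent's executor (agentExecutor from the example above) as a tool.
// The main agent only sees the string returned from `func`, i.e. the sub-agent's final output.
const pirateSubAgentTool = new DynamicTool({
  name: "pirate-researcher",
  description:
    "Answers research questions in pirate speak. Input should be a fully formed question.",
  func: async (input: string) => {
    const result = await agentExecutor.call({ input });
    return result.output;
  },
});

// The main agent has its own model and prompt, and treats the sub-agent
// as one more tool whose output it can reason over.
const mainModel = new OpenAI({ temperature: 0 });
const mainAgent = await initializeAgentExecutorWithOptions(
  [pirateSubAgentTool],
  mainModel,
  { agentType: "zero-shot-react-description" }
);

const answer = await mainAgent.call({
  input: "Ask the pirate researcher who Olivia Wilde's boyfriend is.",
});
console.log(answer.output);
```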
I hope this helps! If you have any other questions, feel free to ask.
-
I have a main agent that can invoke a lot of different tools.
I want to know the best practice for setting up a tool that makes an LLM call within it.
This tool needs a context that's different from the main agent's system prompt. What's the best way to support this so that the main agent can also take the output of the sub-agent (or tool that has an LLM call)?
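For reference, a minimal sketch of the kind of tool being described: an `LLMChain` with its own prompt, wrapped in a `DynamicTool` so the main agent receives its text output as the tool observation (this assumes the LangChain.js `DynamicTool`, `LLMChain`, and `PromptTemplate` APIs; the names and prompt below are illustrative):

```typescript
import { DynamicTool } from "langchain/tools";
import { LLMChain } from "langchain/chains";
import { PromptTemplate } from "langchain/prompts";
import { OpenAI } from "langchain/llms/openai";

// A tool that makes its own LLM call with a prompt that is independent
// of whatever system prompt the main agent uses.
const summarizerModel = new OpenAI({ temperature: 0 });
const summarizerPrompt = PromptTemplate.fromTemplate(
  `You are a terse summarizer. Summarize the following text in two sentences:

{text}`
);
const summarizerChain = new LLMChain({
  llm: summarizerModel,
  prompt: summarizerPrompt,
});

const summarizeTool = new DynamicTool({
  name: "summarizer",
  description: "Summarizes text. Input should be the raw text to summarize.",
  func: async (text: string) => {
    const res = await summarizerChain.call({ text });
    // The main agent receives this string as the tool's observation.
    return res.text;
  },
});
```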