Replies: 1 comment
Answer generated by a 🤖

I understand that you're trying to access the original user input from within a custom tool in LangChain. You're correct that the Agent takes the original user input, processes it, and then generates a call to the appropriate tool(s) with a potentially modified prompt. You're looking for a way to retrieve the original user input from within the tool itself.

You can indeed access the original user input from inside the tool. Here's a snippet of the relevant `createPrompt` code from `StructuredChatAgent`:

```typescript
static createPrompt(
  tools: StructuredTool[],
  args?: StructuredChatCreatePromptArgs
) {
  const {
    prefix = PREFIX,
    suffix = SUFFIX,
    inputVariables = ["input", "agent_scratchpad"],
    memoryPrompts = [],
  } = args ?? {};
  const template = [prefix, FORMAT_INSTRUCTIONS, suffix].join("\n\n");
  const humanMessageTemplate = "{input}\n\n{agent_scratchpad}";
  const messages = [
    new SystemMessagePromptTemplate(
      new PromptTemplate({
        template,
        inputVariables,
        partialVariables: {
          tool_schemas: StructuredChatAgent.createToolSchemasString(tools),
          tool_names: tools.map((tool) => tool.name).join(", "),
        },
      })
    ),
    ...memoryPrompts,
    new HumanMessagePromptTemplate(
      new PromptTemplate({
        template: humanMessageTemplate,
        inputVariables,
      })
    ),
  ];
  return ChatPromptTemplate.fromPromptMessages(messages);
}
```

So, within your custom tool, you should be able to access the original user input via the `input` variable listed in `inputVariables` above.

Please note that this is one potential solution based on my understanding of your issue. If you find this solution helpful and believe it might be useful to others, I encourage you to make a pull request to update the LangChain documentation. This way, other developers facing the same issue can benefit from your experience.

This response is meant to be useful, save you time, and share context. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
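To make the flow above concrete, here is a minimal, dependency-free TypeScript sketch (not LangChain's actual implementation) of how a human-message template like `{input}\n\n{agent_scratchpad}` gets its variables substituted. The key point is that the `input` variable carries the original, unmodified user message into the prompt:

```typescript
// Minimal stand-in for a prompt template: replaces {name} placeholders
// with values from a variables map. LangChain's PromptTemplate does this
// (plus validation) for variables like "input" and "agent_scratchpad".
function renderTemplate(
  template: string,
  variables: Record<string, string>
): string {
  return template.replace(/\{(\w+)\}/g, (match: string, name: string) =>
    name in variables ? variables[name] : match
  );
}

// The human-message template from the snippet above.
const humanMessageTemplate = "{input}\n\n{agent_scratchpad}";

// "input" is the original user message; "agent_scratchpad" is the
// agent's intermediate reasoning, which is a separate variable.
const rendered = renderTemplate(humanMessageTemplate, {
  input: "What is the weather in Paris?",
  agent_scratchpad: "Thought: I should call the weather tool.",
});

console.log(rendered);
```

This is only meant to illustrate why the original input and the scratchpad travel as distinct variables through the prompt pipeline.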
As I understand it, the Agent takes the original user input, looks at what tools it has available, picks the best tool(s), and generates a call to those tools with a prompt that's often different than what the user entered.
What would be the best way to retrieve the original user input from inside the tool? I see some mention of agent_scratchpad, though that seems more like the agent/model's thought process.
I've tried adding agent_scratchpad and chat_history properties to the zod schema of the tool, and adding an agent_scratchpad input variable to agentArgs. I also looked through a bit of the langchainjs source code, but found nothing immediately obvious there.
This isn't a long-running agent instance: I create an instance of the agent for each incoming user message, process the message, and throw away the agent at the end of generating the response. So I could just pass the context/input I need to the tool constructor as I build up my array of tools, but that feels limiting/wrong as I start to build more complex chains.
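The per-message construction described above can be sketched in plain TypeScript. Note that `SimpleTool` and `buildTools` here are hypothetical names, not LangChain's API; the point is only that the original input is captured in a closure when the tool array is built, so the tool can read it regardless of what prompt the agent later generates:

```typescript
// Hypothetical minimal tool shape; LangChain's real tools have richer
// interfaces (name, description, a zod schema, async call, etc.).
interface SimpleTool {
  name: string;
  call: (toolInput: string) => string;
}

// Build the tool array per incoming message, closing over the original
// user input so the tool can see it even if the agent rewrites the prompt.
function buildTools(originalUserInput: string): SimpleTool[] {
  return [
    {
      name: "echo-context",
      call: (toolInput: string) =>
        `tool input: ${toolInput}; original input: ${originalUserInput}`,
    },
  ];
}

const tools = buildTools("What is the weather in Paris?");
console.log(tools[0].call("weather Paris"));
```

Since the agent is discarded after each message, the closure's lifetime matches the agent's, which is why this works despite feeling a bit manual as the chains grow.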