-
🤖 Hey @scottsuhy! Good to see you back, hope you've been coding away! Let's dive into this intriguing issue you're encountering.

It seems like the issue you're experiencing is related to the way LangChain's agent handles function calls. In your case, the agent is designed to fetch the most accurate and up-to-date answer for each question, which is why it calls the function again for every new question. To avoid this, you could implement a caching mechanism: before calling the agent, check whether the answer for the current question is already in the cache. If it is, return the cached answer; if not, call the agent, store the answer in the cache, and then return it.

Here's a simplified example of how you could implement this:

```js
// Create a cache object
const cache = {};

// ...

app.post('/checkText', async (req, res) => {
  const input = req.body.content;
  const user_email = req.body.user_email;
  try {
    const thisUsersMemories = getThisUsersMemories(user_email);
    // Check if the answer for the current question is in the cache
    if (cache[input]) {
      res.json({ output: cache[input] });
    } else {
      // If the answer is not in the cache, call the agent
      const runnableAgent = RunnableSequence.from([...]);
      const executor = AgentExecutor.fromAgentAndTools({...});
      const input_with_phrase = "This question is from user: " + user_email + ": " + input;
      const result = await executor.call({ input: input_with_phrase, user_email });
      await thisUsersMemories.saveContext({ question: input_with_phrase }, { answer: result.output });
      // Store the answer in the cache
      cache[input] = result.output;
      res.json({ output: result.output });
    }
  } catch (error) {
    res.status(500).send("An error occurred while processing the query: " + error.message);
  }
});
```

Please note that this is a simple caching mechanism and it might not be suitable for all use cases. For example, if the answers to the questions can change over time, this mechanism might return outdated answers. Also, if you have a lot of unique questions, this mechanism might use a lot of memory.

I hope this helps! If you have any other questions, feel free to ask.

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
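One way to address the staleness caveat above is to give cached entries a time-to-live (TTL), so old answers expire instead of being served forever. The sketch below is illustrative only: the class name `TTLCache`, the `ttlMs` parameter, and the eviction policy are my assumptions, not part of the original code.

```js
// Minimal TTL cache sketch (hypothetical names, not from any library).
class TTLCache {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;       // how long an entry stays valid, in milliseconds
    this.entries = new Map(); // question -> { value, storedAt }
  }

  get(key) {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    // Evict entries older than the TTL so stale answers are not returned
    if (Date.now() - entry.storedAt > this.ttlMs) {
      this.entries.delete(key);
      return undefined;
    }
    return entry.value;
  }

  set(key, value) {
    this.entries.set(key, { value, storedAt: Date.now() });
  }
}
```

In the handler, `const cache = {}` would become something like `const cache = new TTLCache(5 * 60 * 1000)`, with `cache.get(input)` and `cache.set(input, result.output)` replacing the plain-object reads and writes.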
-
Closing in favor of your newer issue
-
I turned this example into a Node.js server.
The functionality of the model works, but the memory management is acting strange (it calls the functions used to get previous answers every time a new question is asked).
I did change the sample code to manage the individual memories of each user on the server.
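For reference, the per-user memory bookkeeping can be sketched roughly like this. This is a minimal sketch, not the actual server code: `getThisUsersMemories` is the name used in this thread, but the `createMemory` factory argument is my assumption.

```js
// Hypothetical sketch: one memory object per user, keyed by email.
const memoriesByUser = new Map();

function getThisUsersMemories(user_email, createMemory) {
  // Lazily create a memory object for each new user and
  // reuse the same object on every later request from that user
  if (!memoriesByUser.has(user_email)) {
    memoriesByUser.set(user_email, createMemory());
  }
  return memoriesByUser.get(user_email);
}
```

In the real server, `createMemory` would return a LangChain memory instance rather than a plain object.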
Explaining the problem:
When a user asks a question such as "get the stock price of Apple", the agent returns the answer fine, using the GETSTOCKPRICE() function call to get the data it needs.
However, when the user asks a second question such as "get the earnings call transcript for Apple for Q3 of 2023", for some reason the agent calls the GETSTOCKPRICE() function again before calling the GETTRANSCRIPT() function. This is not what I want.
What I would prefer is to have the previous answer in memory, so that a second question does not cause the agent to call the first function again.
Any guidance on what I did wrong would be appreciated.