Description
Expected Behavior
I want to be able to provide Spring AI with one or more functions that can supply additional information or functionality. (See https://platform.openai.com/docs/guides/gpt/function-calling). Moreover, when the response to a prompt indicates that a function should be called, I'd like that decision, the call to the function, and the reissuing of a new prompt to all be handled internally by Spring AI. This behavior is what I believe is commonly referred to as an "agent".
In short, I'd like to be able to configure Spring AI with one or more functions (perhaps as lambdas or method references) that could be used later. Then, if I ask a question that the LLM needs more information to answer, Spring AI would call those functions for me and resubmit the prompt. I don't want to have to do what is shown as "step 2" in the example at https://platform.openai.com/docs/guides/gpt/function-calling. That decision making should be handled internally by Spring AI.
My primary desire is to have function support, but almost as importantly, I don't want to have to deal with the back-and-forth interactions in my own code. I want Spring AI to handle that internally. A sketch of what such a registration might look like follows.
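To make the request concrete, here is a minimal sketch of one possible registration style. The `Function` bean and the records are plain Java/Spring; everything about how Spring AI would discover this function, describe it to the LLM, and invoke it is hypothetical, since that is exactly what this issue is asking for:

```java
import java.util.function.Function;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Hypothetical sketch only; no such Spring AI integration exists yet.
@Configuration
public class WeatherFunctionConfig {

    record WeatherRequest(String city) {}

    record WeatherResponse(double temperature, String unit) {}

    // The idea: register a plain Function (or a lambda/method reference) as
    // a bean. Spring AI would advertise it to the LLM and, when the model
    // asks for it, call it and resubmit the prompt with the result, all
    // without the application writing any orchestration code.
    @Bean
    public Function<WeatherRequest, WeatherResponse> currentWeather() {
        return request -> new WeatherResponse(22.0, "celsius"); // stub lookup
    }
}
```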
Current Behavior
Spring AI does not yet support functions, so at this point there is no need for agent behavior. Currently, Spring AI only handles single-turn Q&A-style interactions.
Context
As an example (drawn from the aforementioned documentation), if I were to ask what the weather is in Boston, most (if not all) LLMs wouldn't know the answer because they aren't trained on real-time or even relatively current data. But if I provide a function that can look up the weather in a city, then instead of the LLM responding that it doesn't know what the weather is, it could respond with an instruction to call the function. The application (or the agent on behalf of the application) would call that function as instructed, then submit a new prompt containing the weather data, from which the LLM would be able to generate the desired answer.
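For concreteness, here is a rough sketch of the loop the application would otherwise have to write by hand, i.e., the "step 2" logic this issue asks Spring AI to absorb. The `LlmClient` interface and `Response` record are stand-ins invented for illustration; nothing here is a real Spring AI or OpenAI API:

```java
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch of the decision/call/resubmit loop an application
// would need today, and which this issue proposes Spring AI handle itself.
public class ManualFunctionCallLoop {

    // Invented stand-in for a chat client; not a real API.
    interface LlmClient {
        Response send(String prompt);
    }

    // Invented stand-in for a model response that may either answer
    // directly or instruct the caller to invoke a named function.
    record Response(boolean wantsFunctionCall, String functionName,
                    String argument, String answer) {}

    static String ask(LlmClient llm, String prompt,
                      Map<String, Function<String, String>> functions) {
        Response response = llm.send(prompt);
        // The model may answer directly, or ask for a function call.
        while (response.wantsFunctionCall()) {
            String result = functions.get(response.functionName())
                                     .apply(response.argument());
            // Resubmit with the function result appended so the model can
            // produce the final answer (e.g., the weather in Boston).
            response = llm.send(prompt + "\nFunction result: " + result);
        }
        return response.answer();
    }
}
```

With the requested feature, this entire loop would live inside Spring AI, and the application would simply ask its question and receive the final answer.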