
Conversation


@ddobrin ddobrin commented Oct 8, 2025

The ADK Spring AI Integration Library provides a bridge between the Agent Development Kit (ADK) and Spring AI, enabling developers to use Spring AI models within the ADK framework.

This library supports multiple AI providers, streaming responses, function calling, and comprehensive observability.


Summary of Changes

Hello @ddobrin, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a new google-adk-spring-ai module, providing a robust integration layer between the Agent Development Kit (ADK) for Java and Spring AI 1.1.0. It enables ADK developers to seamlessly utilize various Spring AI-compatible LLM providers, supporting core features like chat generation (blocking and streaming), function calling, and text embeddings. The integration includes Spring Boot auto-configuration, comprehensive observability, and standardized error handling, all backed by extensive testing.

Highlights

  • Intent: This pull request introduces a new module to integrate Spring AI 1.1.0 into the Agent Development Kit (ADK) for Java. The primary goal is to provide a robust bridge that allows ADK developers to leverage various Spring AI models, supporting features like streaming responses, function calling, and comprehensive observability.
  • Key Changes: A new Maven module google-adk-spring-ai has been added under contrib/spring-ai. This module includes core adapter classes such as SpringAI (for chat models) and SpringAIEmbedding (for embedding models), which wrap Spring AI's ChatModel, StreamingChatModel, and EmbeddingModel interfaces. Key conversion utilities like MessageConverter, ConfigMapper, ToolConverter, and EmbeddingConverter are introduced to facilitate seamless translation between ADK and Spring AI data formats. Error handling is standardized with SpringAIErrorMapper, and observability features are managed by SpringAIObservabilityHandler. The integration also provides Spring Boot auto-configuration (SpringAIAutoConfiguration) and configuration properties (SpringAIProperties) for easy setup and customization. Extensive unit and integration tests have been added to validate functionality across various providers and scenarios.
  • New Functionality: Users can now integrate any Spring AI-compatible LLM provider (e.g., OpenAI, Anthropic, Google Gemini, Vertex AI, Azure OpenAI, Ollama) into their ADK agents with minimal configuration. The integration supports both blocking and streaming chat interactions, function calling (tools), and text embedding generation. Auto-configuration simplifies the setup process for Spring Boot applications, automatically wiring up ADK's SpringAI and SpringAIEmbedding beans based on available Spring AI models. Comprehensive observability (metrics, logging) and error mapping are built-in to enhance monitoring and debugging.
  • Configuration & Usage: The new module can be included via Maven dependencies. Users can configure Spring AI providers using standard Spring Boot application.yml or application.properties. The ADK Spring AI integration offers specific configuration properties (e.g., adk.spring-ai.default-model, adk.spring-ai.temperature, adk.spring-ai.validation.enabled) to fine-tune behavior. Quick start examples are provided for auto-configuration, manual setup, and using multiple providers within an ADK application.
  • Testing: The pull request includes a comprehensive suite of tests, including unit tests for ConfigMapper, MessageConverter, ToolConverter, StreamingResponseAggregator, and SpringAIErrorMapper. Integration tests (SpringAIIntegrationTest, SpringAIRealIntegrationTest) demonstrate the functionality with dummy models and real API calls (for Anthropic, Gemini, OpenAI) when API keys are provided. A dedicated OllamaTestContainer is used for local model integration testing.
  • Reviewer Activity: No specific reviewer activity was provided in the context.
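The configuration properties named in the summary above can be illustrated with a sketch of a Spring Boot application.yml. Only the three adk.spring-ai keys mentioned in this summary (default-model, temperature, validation.enabled) come from the PR description; the provider section follows standard Spring AI conventions, and the exact schema should be verified against SpringAIProperties:

```yaml
# Hypothetical sketch — verify key names against SpringAIProperties.
spring:
  ai:
    openai:
      api-key: ${OPENAI_API_KEY}    # standard Spring AI provider config
adk:
  spring-ai:
    default-model: gpt-4o-mini      # adk.spring-ai.default-model
    temperature: 0.7                # adk.spring-ai.temperature
    validation:
      enabled: true                 # adk.spring-ai.validation.enabled
```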
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the root of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This is an impressive pull request that adds a comprehensive Spring AI integration to the ADK. The architecture is well-structured, covering configuration, message conversion, observability, and error handling. The inclusion of auto-configuration and support for multiple providers is excellent.

I've identified a few areas for improvement, mainly concerning error handling, logging, and testing practices. Addressing these will enhance the robustness and maintainability of this new module. My key findings are:

  • Correctness: There's a potential bug in MessageConverter where chat options might be lost when using tools.
  • Maintainability: I've noted some opportunities to reduce code duplication and improve logging by using a standard logger instead of System.out.
  • Design: The custom observability handler is functional but could be improved by integrating with Micrometer for better ecosystem compatibility.
  • Testing: Some integration tests lack specific assertions, which reduces their effectiveness.

Overall, this is a great addition. My comments are intended to help polish the implementation.

Comment on lines +188 to +193
  } catch (Exception e) {
    observabilityHandler.recordError(context, e);
    SpringAIErrorMapper.MappedError mappedError = SpringAIErrorMapper.mapError(e);

    return Flowable.error(new RuntimeException(mappedError.getNormalizedMessage(), e));
  }
Severity: medium

The error handling logic within the try-catch block is duplicated in several places, such as in generateContent and the onError handlers of generateStreamingContent. To improve maintainability and reduce code duplication, consider extracting this logic into a private helper method.

For example:

private <T> Flowable<T> handleError(SpringAIObservabilityHandler.RequestContext context, Throwable e) {
    observabilityHandler.recordError(context, e);
    SpringAIErrorMapper.MappedError mappedError = SpringAIErrorMapper.mapError(e);
    return Flowable.error(new RuntimeException(mappedError.getNormalizedMessage(), e));
}

TestUtils.askAgent(agent, false, "What's the weather like in San Francisco?");

// Should have multiple events: function call, function response, final answer
assertThat(events).hasSizeGreaterThanOrEqualTo(1);
Severity: medium

The assertion assertThat(events).hasSizeGreaterThanOrEqualTo(1); is too weak for an integration test. It only confirms that at least one event was produced. To make this test more meaningful, you should add more specific assertions. For example, you could verify that a function call event was generated for getWeatherInfo and that the final response contains content from the tool's execution.
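To make the suggestion concrete, here is a minimal, self-contained sketch of what stronger assertions could look like — the Event record and its kind/text accessors are stand-ins, since the actual ADK Event API differs:

```java
import java.util.List;

// Minimal sketch with stand-in event records; the real ADK Event API differs,
// so the kind/text accessors here are hypothetical.
public class StrongerAssertions {
    record Event(String kind, String text) {}

    static void check(boolean cond, String msg) {
        if (!cond) throw new AssertionError(msg);
    }

    public static void main(String[] args) {
        // Stand-in for the events produced by TestUtils.askAgent(...).
        List<Event> events = List.of(
            new Event("FUNCTION_CALL", "getWeatherInfo"),
            new Event("FUNCTION_RESPONSE", "{\"temperature\":\"18C\"}"),
            new Event("TEXT", "It is 18C in San Francisco."));

        // Instead of only asserting size >= 1, pin down the expected shape:
        check(events.stream().anyMatch(e -> e.kind().equals("FUNCTION_CALL")
                && e.text().contains("getWeatherInfo")),
            "expected a function call to getWeatherInfo");
        check(events.stream().anyMatch(e -> e.kind().equals("FUNCTION_RESPONSE")),
            "expected a function response event");
        check(events.get(events.size() - 1).text().contains("San Francisco"),
            "final answer should mention the requested city");
        System.out.println("all checks passed");
    }
}
```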


ddobrin commented Oct 9, 2025

/gemini review


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a comprehensive integration for Spring AI into the ADK, providing a solid bridge to leverage Spring AI's multi-provider capabilities. The implementation is well-structured, with clear separation of concerns for message conversion, configuration mapping, error handling, and observability. The inclusion of extensive tests, including integration tests with real APIs and local models via Testcontainers, is commendable and ensures the robustness of the new module. I've provided several suggestions to improve code clarity, maintainability, and robustness, such as simplifying verbose logic, handling potential nulls more safely, and improving error logging. Additionally, I've pointed out a couple of inconsistencies in the documentation and build configuration that should be addressed.

new AssistantMessage.ToolCall(
    functionCall.id().orElse(""),
    "function",
    functionCall.name().orElse(""),
Severity: high

A function call must have a name to be valid. Using orElse("") can hide an invalid state where the name is missing, potentially leading to runtime errors downstream. It would be safer to throw an exception if the name is not present, ensuring consistency with how function responses are handled.

                functionCall.name().orElseThrow(() -> new IllegalStateException("Function call name is missing")),


<properties>
  <java.version>17</java.version>
  <spring-ai.version>1.1.0-M2</spring-ai.version>
Severity: medium

The spring-ai.version in the "Complete Example pom.xml" is 1.1.0-M2, which is inconsistent with the version 1.1.0-M3 used in the "Basic Setup" example and the module's actual pom.xml. Please update this example to use 1.1.0-M3 for consistency.

Suggested change:
- <spring-ai.version>1.1.0-M2</spring-ai.version>
+ <spring-ai.version>1.1.0-M3</spring-ai.version>

Comment on lines 60 to 66
if (contentConfig.stopSequences().isPresent()) {
  List<String> stopSequences = new ArrayList<>(contentConfig.stopSequences().get());
  if (!stopSequences.isEmpty()) {
    // Spring AI ChatOptions uses stop strings array, not a list
    optionsBuilder.stopSequences(stopSequences);
  }
}
Severity: medium

This block for handling stopSequences can be simplified and made more idiomatic by using Optional's filter and ifPresent. This improves readability and conciseness.

    contentConfig.stopSequences()
        .filter(sequences -> !sequences.isEmpty())
        .ifPresent(optionsBuilder::stopSequences);

Comment on lines +115 to +153
// Create new ChatOptions with tools included
ToolCallingChatOptions.Builder optionsBuilder = ToolCallingChatOptions.builder();

// Always set tool callbacks
optionsBuilder.toolCallbacks(toolCallbacks);

// Copy existing chat options properties if present
if (chatOptions != null) {
  // Copy all relevant properties from existing ChatOptions
  if (chatOptions.getTemperature() != null) {
    optionsBuilder.temperature(chatOptions.getTemperature());
  }
  if (chatOptions.getMaxTokens() != null) {
    optionsBuilder.maxTokens(chatOptions.getMaxTokens());
  }
  if (chatOptions.getTopP() != null) {
    optionsBuilder.topP(chatOptions.getTopP());
  }
  if (chatOptions.getTopK() != null) {
    optionsBuilder.topK(chatOptions.getTopK());
  }
  if (chatOptions.getStopSequences() != null) {
    optionsBuilder.stopSequences(chatOptions.getStopSequences());
  }
  // Copy model name if present
  if (chatOptions.getModel() != null) {
    optionsBuilder.model(chatOptions.getModel());
  }
  // Copy frequency penalty if present
  if (chatOptions.getFrequencyPenalty() != null) {
    optionsBuilder.frequencyPenalty(chatOptions.getFrequencyPenalty());
  }
  // Copy presence penalty if present
  if (chatOptions.getPresencePenalty() != null) {
    optionsBuilder.presencePenalty(chatOptions.getPresencePenalty());
  }
}

chatOptions = optionsBuilder.build();
Severity: medium

The logic for creating ToolCallingChatOptions by manually copying properties is verbose and can be simplified. Using ToolCallingChatOptions.from(chatOptions) provides a much cleaner and more maintainable way to achieve the same result.

Suggested change (replacing the property-copying block above):

    // Create new ChatOptions with tools included
    var builder = (chatOptions != null)
        ? ToolCallingChatOptions.from(chatOptions).toBuilder()
        : ToolCallingChatOptions.builder();
    chatOptions = builder.toolCallbacks(toolCallbacks).build();

Comment on lines +200 to +201
    functionResponse.name().orElseThrow(),
    toJson(functionResponse.response().orElseThrow())));
Severity: medium

Using orElseThrow() without a supplier throws a generic NoSuchElementException that gives no hint about which field was missing. It's better to use the orElseThrow(Supplier) variant to throw a descriptive, domain-specific exception. (Note that neither variant creates the exception eagerly; the benefit is diagnosability, not performance.)

                    functionResponse.name().orElseThrow(() -> new IllegalStateException("Function response name is missing")),
                    toJson(functionResponse.response().orElseThrow(() -> new IllegalStateException("Function response body is missing"))));
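For context on the two orElseThrow variants, here is a small self-contained demonstration (plain JDK, no ADK types) of why the supplier form produces a more diagnosable failure:

```java
import java.util.Optional;

// Both variants create the exception only when the Optional is empty; the
// difference is that the no-arg form throws a context-free NoSuchElementException.
public class OrElseThrowDemo {
    public static void main(String[] args) {
        Optional<String> name = Optional.empty();
        try {
            name.orElseThrow(); // generic, no domain context
        } catch (Exception e) {
            System.out.println(e.getClass().getSimpleName() + ": " + e.getMessage());
        }
        try {
            name.orElseThrow(() -> new IllegalStateException("Function response name is missing"));
        } catch (Exception e) {
            System.out.println(e.getClass().getSimpleName() + ": " + e.getMessage());
        }
    }
}
```

Running this prints "NoSuchElementException: No value present" followed by "IllegalStateException: Function response name is missing" — the second message points straight at the broken invariant.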

Comment on lines +281 to +283
} catch (Exception e) {
  // Ignore errors in token extraction
}
Severity: medium

The exceptions in extractTokenCount, extractInputTokenCount, and extractOutputTokenCount are silently ignored. This can make debugging difficult if there's an issue with parsing token usage from the model's response, as the error would be completely hidden. It's better to log these exceptions at a WARN or DEBUG level. You'll need to add a Logger field to the class to implement this.
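As a sketch of the suggested fix — using java.util.logging and a stand-in Map for the response metadata, since the real Spring AI types aren't shown here — the extraction could log the failure instead of swallowing it:

```java
import java.util.Map;
import java.util.logging.Level;
import java.util.logging.Logger;

// Stand-in sketch: the metadata shape is a Map here, not the real Spring AI type.
public class TokenExtraction {
    private static final Logger logger = Logger.getLogger(TokenExtraction.class.getName());

    static int extractTokenCount(Map<String, Object> metadata) {
        try {
            return (Integer) ((Map<?, ?>) metadata.get("usage")).get("totalTokens");
        } catch (Exception e) {
            // Surface the problem instead of hiding it; FINE/DEBUG keeps it quiet in production.
            logger.log(Level.FINE, "Could not extract token count from response metadata", e);
            return 0;
        }
    }

    public static void main(String[] args) {
        System.out.println(extractTokenCount(Map.of("usage", Map.of("totalTokens", 42))));
        System.out.println(extractTokenCount(Map.of())); // missing usage -> logged, returns 0
    }
}
```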

Comment on lines +151 to +158
response -> {
  // Extract token usage if available
  int totalTokens = 0;
  if (response.getMetadata() != null && response.getMetadata().getUsage() != null) {
    totalTokens = response.getMetadata().getUsage().getTotalTokens();
  }
  observabilityHandler.recordSuccess(context, totalTokens, totalTokens, 0);
})
Severity: medium

The logic for extracting token usage could fail if the response metadata is missing or has an unexpected format. For improved robustness and consistency with the SpringAI class, it's advisable to wrap this token extraction logic in a try-catch block and log any exceptions that occur.

        .doOnSuccess(
            response -> {
              // Extract token usage if available
              int totalTokens = 0;
              try {
                if (response.getMetadata() != null && response.getMetadata().getUsage() != null) {
                  totalTokens = response.getMetadata().getUsage().getTotalTokens();
                }
              } catch (Exception e) {
                // Consider adding a logger to the class and logging this exception, e.g.:
                // logger.warn("Could not extract token count from embedding response metadata.", e);
              }
              observabilityHandler.recordSuccess(context, totalTokens, totalTokens, 0);
            })

 */
public class StreamingResponseAggregator {

  private final StringBuffer textAccumulator = new StringBuffer();
Severity: medium

StringBuffer is thread-safe, but its use here is redundant because all access to textAccumulator is already protected by a ReadWriteLock. Using a non-synchronized StringBuilder would be more performant in this context.

Suggested change:
- private final StringBuffer textAccumulator = new StringBuffer();
+ private final StringBuilder textAccumulator = new StringBuilder();
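To illustrate why the swap is safe, here is a self-contained sketch of the aggregator pattern described above: a plain StringBuilder guarded by a ReentrantReadWriteLock, so the buffer itself needs no internal synchronization:

```java
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Minimal sketch of the aggregator pattern, not the actual ADK class: every
// access to the StringBuilder goes through the lock, so StringBuffer's
// per-method synchronization would be redundant overhead.
public class AggregatorSketch {
    private final StringBuilder textAccumulator = new StringBuilder();
    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    public void append(String chunk) {
        lock.writeLock().lock();
        try {
            textAccumulator.append(chunk);
        } finally {
            lock.writeLock().unlock();
        }
    }

    public String snapshot() {
        lock.readLock().lock();
        try {
            return textAccumulator.toString();
        } finally {
            lock.readLock().unlock();
        }
    }

    public static void main(String[] args) {
        AggregatorSketch agg = new AggregatorSketch();
        agg.append("Hello, ");
        agg.append("world");
        System.out.println(agg.snapshot()); // Hello, world
    }
}
```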


ddobrin commented Oct 10, 2025

/gemini review


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a comprehensive integration for Spring AI into the ADK, providing a solid bridge between the two frameworks. The implementation is well-structured, featuring robust components for message conversion, tool handling, auto-configuration, observability, and error mapping. The code is also extensively tested, including unit, integration, and even real API tests, which is commendable. My feedback focuses on a few areas to enhance correctness, performance, and maintainability, such as ensuring tool call IDs are preserved, improving object instantiation efficiency, and promoting consistency between manual and auto-configured setups.

Comment on lines 339 to 341
Map<String, Object> args =
    objectMapper.readValue(toolCall.arguments(), MAP_TYPE_REFERENCE);
parts.add(Part.fromFunctionCall(toolCall.name(), args));
Severity: high

The current implementation drops the tool call ID when converting from a Spring AI AssistantMessage.ToolCall to an ADK Part. The ID is crucial for correctly associating tool responses with their corresponding calls, especially in multi-turn scenarios. Losing it can lead to incorrect behavior. The Part should be constructed with a FunctionCall object that includes the ID to ensure proper tracking.

          Map<String, Object> args =
              objectMapper.readValue(toolCall.arguments(), MAP_TYPE_REFERENCE);
          FunctionCall adkFunctionCall = FunctionCall.builder()
              .name(toolCall.name())
              .args(args)
              .id(toolCall.id())
              .build();
          parts.add(Part.fromFunctionCall(adkFunctionCall));

<description>Spring AI integration for the Agent Development Kit.</description>

<properties>
  <spring-ai.version>1.1.0-M3</spring-ai.version>
Severity: medium

The project uses a milestone version (1.1.0-M3) of Spring AI. While this might be necessary for new features, it introduces a potential risk for consumers of this library, as milestone releases can have breaking changes and are not considered stable. It would be beneficial to either switch to a stable release if one that meets the requirements is available, or to add a comment here clarifying why this specific milestone version is necessary.

List<ToolResponseMessage.ToolResponse> responses =
    List.of(
        new ToolResponseMessage.ToolResponse(
            functionResponse.id().orElse(""),
Severity: medium

Using orElse("") for a missing FunctionResponse ID can hide potential issues and lead to silent failures, as the ID is essential for matching a tool's response to its original call. To improve robustness and align with the stricter handling of FunctionCall IDs, consider using orElseThrow() to ensure that a tool response always includes an ID.

Suggested change:
- functionResponse.id().orElse(""),
+ functionResponse.id().orElseThrow(() -> new IllegalStateException("Function response ID is missing")),

Comment on lines +259 to +264
private static String extractModelName(Object model) {
  // Spring AI models may not always have a straightforward way to get model name
  // This is a fallback that can be overridden by providing explicit model name
  String className = model.getClass().getSimpleName();
  return className.toLowerCase().replace("chatmodel", "").replace("model", "");
}
Severity: medium

The extractModelName method here provides a very basic fallback for determining the model name, which is inconsistent with the more robust, reflection-based approach used in SpringAIAutoConfiguration. To ensure consistent behavior for both auto-configured and manually instantiated SpringAI clients, the more advanced logic from SpringAIAutoConfiguration.extractModelNameFromInstance() should be centralized here or in a shared utility. This would make manual instantiation more reliable.
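One possible shape for such a shared utility — the getter names probed here are assumptions for illustration, not the actual SpringAIAutoConfiguration logic — is a reflection-based lookup with the class-name munging as a fallback:

```java
import java.lang.reflect.Method;

// Hypothetical shared utility: probe conventional accessors reflectively,
// then fall back to deriving a name from the class, as the manual path does today.
public class ModelNames {
    static String extractModelName(Object model) {
        for (String getter : new String[] {"getModel", "getModelName"}) {
            try {
                Method m = model.getClass().getMethod(getter);
                Object value = m.invoke(model);
                if (value instanceof String s && !s.isBlank()) {
                    return s;
                }
            } catch (ReflectiveOperationException ignored) {
                // No such accessor on this model; try the next candidate.
            }
        }
        // Fallback: derive a provider name from the class name.
        return model.getClass().getSimpleName()
            .toLowerCase()
            .replace("chatmodel", "")
            .replace("model", "");
    }

    // Stand-in model classes for demonstration only.
    static class OpenAiChatModel {
        public String getModel() { return "gpt-4o-mini"; }
    }
    static class OllamaChatModel {}

    public static void main(String[] args) {
        System.out.println(extractModelName(new OpenAiChatModel())); // gpt-4o-mini
        System.out.println(extractModelName(new OllamaChatModel())); // ollama
    }
}
```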

 */
public class StreamingResponseAggregator {

  private final StringBuffer textAccumulator = new StringBuffer();
Severity: medium

StringBuffer is used for textAccumulator, which is a legacy thread-safe class. Since all access to this field is already synchronized using a ReadWriteLock, you can replace it with the more modern and generally more performant StringBuilder without compromising thread safety.

Suggested change:
- private final StringBuffer textAccumulator = new StringBuffer();
+ private final StringBuilder textAccumulator = new StringBuilder();

// Call the ADK tool and wait for the result
Map<String, Object> result = tool.runAsync(processedArgs, null).blockingGet();
// Convert result back to JSON string
return new com.fasterxml.jackson.databind.ObjectMapper().writeValueAsString(result);
Severity: medium

A new ObjectMapper is instantiated within the lambda for every tool execution. This is inefficient and can impact performance, especially with frequent tool calls. It's better to create a single ObjectMapper instance and reuse it. Consider passing the ObjectMapper from MessageConverter into ToolConverter's constructor and storing it as a field.
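The effect of reuse can be shown with a pure-JDK sketch; JsonWriter below is a stand-in for Jackson's ObjectMapper, and the construction counter makes the difference visible:

```java
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Collectors;

// Sketch only: JsonWriter stands in for an expensive-to-construct serializer
// such as Jackson's ObjectMapper.
public class ReuseDemo {
    static final AtomicInteger CONSTRUCTIONS = new AtomicInteger();

    static class JsonWriter {
        JsonWriter() { CONSTRUCTIONS.incrementAndGet(); }
        String write(Map<String, Object> m) {
            return m.entrySet().stream()
                .map(e -> "\"" + e.getKey() + "\":\"" + e.getValue() + "\"")
                .collect(Collectors.joining(",", "{", "}"));
        }
    }

    // Anti-pattern: a fresh writer on every tool execution.
    static String perCall(Map<String, Object> result) {
        return new JsonWriter().write(result);
    }

    // Preferred: one shared instance, e.g. passed into ToolConverter's constructor.
    static final JsonWriter SHARED = new JsonWriter();
    static String shared(Map<String, Object> result) {
        return SHARED.write(result);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) perCall(Map.of("k", i));
        for (int i = 0; i < 3; i++) shared(Map.of("k", i));
        // 3 per-call writers + 1 shared writer were constructed.
        System.out.println("constructions=" + CONSTRUCTIONS.get()); // constructions=4
    }
}
```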


ddobrin commented Oct 10, 2025

Fixed all high-priority and several medium-priority items:

  • Replace System.out/err logging with SLF4J in ToolConverter.java
  • Add test assertions to GeminiApiIntegrationTest
  • Add test assertions to AnthropicApiIntegrationTest
  • Preserve ChatOptions properties in MessageConverter.java
  • Integrate Micrometer metrics in SpringAIObservabilityHandler.java
  • Fix function name validation in MessageConverter.java
  • Fix README version inconsistency
  • Simplify stop sequences handling in ConfigMapper.java
  • Fix tool call ID loss in MessageConverter.java
