
Conversation

@mutianf mutianf commented Dec 30, 2025

…ol resizing and refresh

Before this change, ChannelPool creates a new SingleThreadScheduledExecutor every time getTransportChannel is called. Reusing the background executor from the client settings reduces the number of threads when a user creates many instances of the client.
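As a minimal sketch of the idea (the `Pool` class below is a hypothetical stand-in for gax's ChannelPool, not its real implementation), many pools can schedule their maintenance work on one caller-supplied ScheduledExecutorService instead of each creating its own thread:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class SharedExecutorSketch {
  // Hypothetical stand-in for ChannelPool's maintenance scheduling;
  // the real class in gax-grpc does much more.
  static final class Pool {
    final ScheduledExecutorService backgroundExecutor;

    Pool(ScheduledExecutorService backgroundExecutor) {
      this.backgroundExecutor = backgroundExecutor;
    }

    void scheduleRefresh(Runnable task) {
      backgroundExecutor.schedule(task, 0, TimeUnit.MILLISECONDS);
    }
  }

  // Two pools share one scheduler thread; returns how many refresh tasks ran.
  static int runDemo() {
    ScheduledExecutorService shared = Executors.newSingleThreadScheduledExecutor();
    AtomicInteger refreshes = new AtomicInteger();
    new Pool(shared).scheduleRefresh(refreshes::incrementAndGet);
    new Pool(shared).scheduleRefresh(refreshes::incrementAndGet);
    shared.shutdown();
    try {
      shared.awaitTermination(5, TimeUnit.SECONDS);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
    return refreshes.get();
  }

  public static void main(String[] args) {
    System.out.println(runDemo()); // prints 2
  }
}
```

With the pre-change behavior, each `Pool` would instead call `Executors.newSingleThreadScheduledExecutor()` in its own constructor, costing one idle thread per client instance.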

@product-auto-label product-auto-label bot added the size: s Pull request size is small. label Dec 30, 2025
@gemini-code-assist

Summary of Changes

Hello @mutianf, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers get up to speed quickly!

This pull request introduces a significant optimization by allowing the ChannelPool to reuse a shared ScheduledExecutorService for its background tasks, such as channel resizing and refresh. Previously, each ChannelPool instance would create its own single-threaded executor, leading to an unnecessary proliferation of threads when many client instances were initialized. By providing an API to share a common background executor, this change aims to reduce resource consumption and enhance the overall performance and scalability of applications using GAX gRPC clients.

Highlights

  • Executor Sharing: The ChannelPool now accepts a ScheduledExecutorService from the InstantiatingGrpcChannelProvider instead of creating its own, allowing for a shared background executor.
  • Resource Optimization: This change reduces the number of threads created when multiple client instances are used, preventing resource exhaustion and improving efficiency.
  • API Enhancement: A new setBackgroundExecutor method has been added to InstantiatingGrpcChannelProvider.Builder to allow users to configure the shared background executor.
  • Code Clarity: The internal executor field in ChannelPool has been renamed to backgroundExecutor for better readability and consistency.
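A rough sketch of the builder pattern the highlights describe; the `Builder` and `Provider` classes below are illustrative stand-ins, not the real InstantiatingGrpcChannelProvider surface — only the `setBackgroundExecutor` name comes from this PR:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

public class ProviderBuilderSketch {
  // Minimal stand-in for the channel provider; the real class is
  // InstantiatingGrpcChannelProvider in gax-grpc.
  static final class Provider {
    final ScheduledExecutorService backgroundExecutor;

    Provider(ScheduledExecutorService backgroundExecutor) {
      this.backgroundExecutor = backgroundExecutor;
    }
  }

  static final class Builder {
    private ScheduledExecutorService backgroundExecutor;

    // Mirrors the new setter described above: the caller supplies one
    // executor shared by every pool built from this provider.
    Builder setBackgroundExecutor(ScheduledExecutorService executor) {
      this.backgroundExecutor = executor;
      return this;
    }

    Provider build() {
      // Fall back to a fresh single-threaded scheduler only when the
      // caller did not supply one.
      return new Provider(
          backgroundExecutor != null
              ? backgroundExecutor
              : Executors.newSingleThreadScheduledExecutor());
    }
  }

  static boolean reusesSuppliedExecutor() {
    ScheduledExecutorService shared = Executors.newSingleThreadScheduledExecutor();
    Provider provider = new Builder().setBackgroundExecutor(shared).build();
    boolean same = provider.backgroundExecutor == shared;
    shared.shutdownNow();
    return same;
  }

  public static void main(String[] args) {
    System.out.println(reusesSuppliedExecutor()); // prints true
  }
}
```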



@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request aims to reduce thread creation by allowing a shared ScheduledExecutorService to be used for ChannelPool maintenance tasks. While the goal is good, the current implementation introduces critical lifecycle management issues. Specifically, the ChannelPool now shuts down the provided executor, which is incorrect for a shared resource. Additionally, the InstantiatingGrpcChannelProvider creates a default executor that can be leaked if not used. My review includes comments detailing these issues and suggesting a safer approach to manage the executor's lifecycle.
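One common way to address the lifecycle concern the review raises (a sketch under the assumption that an ownership flag is acceptable — not necessarily what the PR ended up doing) is to record whether the pool created its executor and shut it down only in that case:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

public class ExecutorOwnershipSketch {
  // Hypothetical stand-in for ChannelPool; illustrates only executor lifecycle.
  static final class Pool {
    final ScheduledExecutorService backgroundExecutor;
    final boolean ownsExecutor; // true only when the pool created the executor itself

    Pool(ScheduledExecutorService shared) {
      if (shared != null) {
        this.backgroundExecutor = shared;
        this.ownsExecutor = false;
      } else {
        this.backgroundExecutor = Executors.newSingleThreadScheduledExecutor();
        this.ownsExecutor = true;
      }
    }

    void shutdown() {
      // Only tear down an executor this pool owns; a shared executor
      // belongs to the caller and must be left running.
      if (ownsExecutor) {
        backgroundExecutor.shutdown();
      }
    }
  }

  static boolean sharedSurvivesPoolShutdown() {
    ScheduledExecutorService shared = Executors.newSingleThreadScheduledExecutor();
    new Pool(shared).shutdown();
    boolean stillRunning = !shared.isShutdown();
    shared.shutdownNow();
    return stillRunning;
  }

  static boolean ownedIsStopped() {
    Pool pool = new Pool(null);
    pool.shutdown();
    return pool.backgroundExecutor.isShutdown();
  }

  public static void main(String[] args) {
    System.out.println(sharedSurvivesPoolShutdown()); // prints true
    System.out.println(ownedIsStopped()); // prints true
  }
}
```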

@product-auto-label product-auto-label bot added size: m Pull request size is medium. and removed size: s Pull request size is small. labels Dec 30, 2025

blakeli0 commented Jan 5, 2026

I think this is generally a good change. However, what are the use cases where customers have to create many client instances?

In general we recommend using only one client. Creating many clients has other implications; for example, the effort of loading credentials and creating channels is duplicated. Customers can always pass their own CredentialsProvider and ChannelProvider, but that is easy to misconfigure.


mutianf commented Jan 6, 2026

@blakeli0 They have an application that targets multiple bigtable instances with different retry settings.


blakeli0 commented Jan 6, 2026

> @blakeli0 They have an application that targets multiple bigtable instances with different retry settings.

I see, would having a request-level retry setting help in this case?


blakeli0 commented Jan 6, 2026

/gcbrun

    }

    @Override
    public TransportChannelProvider withBackgroundExecutor(ScheduledExecutorService executor) {

@blakeli0 blakeli0 Jan 6, 2026


How do we plan to use it? Do we expect customers to create an InstantiatingGrpcChannelProvider with their own executor?
I see that the same executor is being passed to both executor and backgroundExecutor in ClientContext; if that's the expected use case, we don't need to create new setter methods in TransportChannelProvider.


@mutianf mutianf Jan 7, 2026


ChannelProvider can have two executors: a background executor and an executor. The executor is used for handling RPC callbacks; the background executor is used for all other background tasks. In the GrpcChannelProvider case, it doesn't need an executor because gRPC's ManagedChannel provides a default executor that's optimized for performance, and the background executor is used for managing channels in the channel pool. Normally we don't want to use the same executor for callbacks and background tasks because it impacts performance. The setting you see in ClientContext is a bit confusing; it comes from an old fix where we deprecated overriding the default gRPC executor on the managed channel: 95c4c7b#diff-4e2f6e463b9b7d89de68e3f1a87765080045880c8dad018750f269e311f2471f. I don't think we actually go into the if statement by default

    if (transportChannelProvider.needsExecutor() && settings.getExecutorProvider() != null) {
      transportChannelProvider = transportChannelProvider.withExecutor(backgroundExecutor);
    }

because settings.getExecutorProvider() is null. Maybe it would be easier to understand if the code were transportChannelProvider.withExecutor(settings.getExecutorProvider().getExecutor());? I made the change.
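The callback-executor vs. background-executor split described above can be illustrated with plain JDK executors (the thread names here are made up for the demo); keeping the two separate means slow pool-maintenance work never blocks RPC callback delivery:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

public class TwoExecutorsSketch {
  // Returns true when callback work and maintenance work run on
  // different threads.
  static boolean separateThreads() {
    // One executor for RPC callbacks, one for channel-pool maintenance.
    ExecutorService callbackExecutor =
        Executors.newSingleThreadExecutor(r -> new Thread(r, "rpc-callbacks"));
    ScheduledExecutorService backgroundExecutor =
        Executors.newSingleThreadScheduledExecutor(r -> new Thread(r, "pool-maintenance"));
    try {
      String callbackThread =
          callbackExecutor.submit(() -> Thread.currentThread().getName()).get();
      String maintenanceThread =
          backgroundExecutor.submit(() -> Thread.currentThread().getName()).get();
      return !callbackThread.equals(maintenanceThread);
    } catch (Exception e) {
      throw new RuntimeException(e);
    } finally {
      callbackExecutor.shutdown();
      backgroundExecutor.shutdown();
    }
  }

  public static void main(String[] args) {
    System.out.println(separateThreads()); // prints true
  }
}
```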


Awesome, that's much easier to understand. Thanks!


blakeli0 commented Jan 7, 2026

/gcbrun

@blakeli0 blakeli0 merged commit 178182c into googleapis:main Jan 8, 2026
50 of 55 checks passed
