Configurable limit on concurrent shard closing #121267
Conversation
Today we limit the number of shards concurrently closed by the `IndicesClusterStateService`, but this limit is currently a function of the CPU count of the node. On nodes with plentiful CPU but poor IO performance we may want to restrict this limit further. This commit exposes the throttling limit as a setting.
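The throttling pattern under discussion can be sketched in plain Java. This is an illustrative sketch with hypothetical names, not the actual `IndicesClusterStateService` code: at most `limit` tasks are forwarded to a delegate executor, and any excess waits in a queue until a forwarded task completes.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.Executor;

// Hypothetical sketch of limit-based task throttling.
class ThrottlingExecutor implements Executor {
    private final Executor delegate;
    private final int limit;
    private final Queue<Runnable> pending = new ArrayDeque<>();
    private int active = 0;

    ThrottlingExecutor(Executor delegate, int limit) {
        this.delegate = delegate;
        this.limit = limit;
    }

    @Override
    public synchronized void execute(Runnable task) {
        if (active < limit) {
            active++;
            delegate.execute(wrap(task));
        } else {
            pending.add(task); // over the limit: hold back until a slot frees up
        }
    }

    private Runnable wrap(Runnable task) {
        return () -> {
            try {
                task.run();
            } finally {
                releaseSlot();
            }
        };
    }

    private synchronized void releaseSlot() {
        Runnable next = pending.poll();
        if (next == null) {
            active--; // nothing queued: give the slot back
        } else {
            delegate.execute(wrap(next)); // keep the slot, run the next queued task
        }
    }
}
```

A test can observe the throttling by using a delegate that merely records tasks, which appears to be the pattern used in `ShardCloseExecutorTests` below: only `limit` tasks reach the delegate until the recorded tasks are actually run.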
Pinging @elastic/es-distributed-coordination (Team:Distributed Coordination)
DiannaHohensee left a comment
Code change looks good, I'm having trouble understanding the test though.
server/src/main/java/org/elasticsearch/indices/cluster/IndicesClusterStateService.java
public class ShardCloseExecutorTests extends ESTestCase {

    public void testThrottling() {
        final var defaultProcessors = EsExecutors.NODE_PROCESSORS_SETTING.get(Settings.EMPTY).roundUp();
What are the expectations around the value of defaultProcessors for tests? You have if-statements later, and I'm wondering what runs.
It is the number of CPUs of the machine on which the tests are running, so it can be more or less than 10. And it's not permitted to increase node.processors beyond the default, which is why we have to skip some tests on low-CPU machines.
assertEquals(expectedLimit, tasksToRun.size()); // didn't enqueue the final task yet
for (int i = 0; i < tasksToRun.size(); i++) {
I'm struggling to understand this method. Is there any way you could refactor or document it to make it easier to understand?
I added some comments in d1fd519, does that help?
DiannaHohensee left a comment
Thanks, that's made it easier for me to understand. LGTM!
💚 Backport successful
 */
public static final Setting<Integer> CONCURRENT_SHARD_CLOSE_LIMIT = Setting.intSetting(
    "indices.store.max_concurrent_closing_shards",
    settings -> Integer.toString(Math.min(10, EsExecutors.NODE_PROCESSORS_SETTING.get(settings).roundUp())),
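With the limit exposed as the `indices.store.max_concurrent_closing_shards` setting shown above, an operator could lower it on an IO-bound node. The value here is chosen purely for illustration:

```yaml
# elasticsearch.yml: cap concurrent shard closing below the CPU-based default
indices.store.max_concurrent_closing_shards: 2
```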
Previously the default max was
final var maxThreads = Math.max(EsExecutors.NODE_PROCESSORS_SETTING.get(settings).roundUp(), 10);
Note Math.max instead of Math.min. Is this change intentional?
Yes, it was a (my) mistake to use max here in the first place. I noticed the issue when I saw a small (IO-bound) node struggling to close lots of shards at once.
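The effect of the max-to-min fix is easy to see with concrete CPU counts. A hypothetical helper (the real code inlines these expressions in the setting default) makes the comparison explicit:

```java
// Hypothetical helper contrasting the old and new default limits.
class ShardCloseDefaults {
    // Old (mistaken) default: Math.max let even small nodes close up to 10 shards at once.
    static int oldDefault(int processors) {
        return Math.max(processors, 10);
    }

    // New default: Math.min caps small nodes at their CPU count and large nodes at 10.
    static int newDefault(int processors) {
        return Math.min(10, processors);
    }
}
```

On a 4-CPU, IO-bound node the old default permitted 10 concurrent closes while the new one permits only 4; on a 64-CPU node the old default permitted 64 while the new one caps at 10.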