
Commit 8c3e853

fix replicaset preference in leftover calculation in PodReplacementPolicy KEP
1 parent 382fe4b commit 8c3e853

File tree

1 file changed: +2 −2 lines changed
  • keps/sig-apps/3973-consider-terminating-pods-deployment

keps/sig-apps/3973-consider-terminating-pods-deployment/README.md

Lines changed: 2 additions & 2 deletions
@@ -299,7 +299,7 @@ newReplicaSetReplicas = replicasBeforeScale * \frac{deploymentMaxReplicas}{deplo
 $$

 This is currently done in the [getReplicaSetFraction](https://github.com/kubernetes/kubernetes/blob/1cfaa95cab0f69ecc62ad9923eec2ba15f01fc2a/pkg/controller/deployment/util/deployment_util.go#L492-L512)
-function. The leftover pods are added to the newest ReplicaSet.
+function. The leftover pods are added to the largest ReplicaSet (or newest if more than one ReplicaSet has the largest number of pods).

 This results in the following scaling behavior.

@@ -364,7 +364,7 @@ As we can see, we will get a slightly different result when compared to the firs
 due to the consecutive scales and the fact that the last scale is not yet fully completed.

 The consecutive partial scaling behavior is a best effort. We still adhere to all deployment
-constraints and have a bias toward scaling the newest ReplicaSet. To implement this properly we
+constraints and have a bias toward scaling the largest ReplicaSet. To implement this properly we
 would have to introduce a full scaling history, which is probably not worth the added complexity.

 ### kubectl Changes
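
For illustration, here is a minimal Go sketch of the rule this commit describes: replicas are first distributed proportionally, and the leftover pods then go to the largest ReplicaSet, with the newest one winning ties. The types and the `scaleProportionally` helper are hypothetical and simplified for the example; the actual logic lives in the deployment controller around `getReplicaSetFraction` and may round differently.

```go
package main

import (
	"fmt"
	"sort"
)

// rs is a simplified stand-in for a ReplicaSet, holding only what the
// example needs: a name, a replica count, and a creation ordering value.
type rs struct {
	name     string
	replicas int32
	created  int64 // larger means newer
}

// scaleProportionally distributes newTotal replicas across the ReplicaSets
// in proportion to their current sizes, then assigns the leftover pods to
// the largest ReplicaSet, preferring the newest one on ties. It assumes the
// current total replica count is greater than zero. Illustrative only.
func scaleProportionally(sets []rs, newTotal int32) []rs {
	var oldTotal int32
	for _, s := range sets {
		oldTotal += s.replicas
	}

	out := make([]rs, len(sets))
	copy(out, sets)

	var assigned int32
	for i := range out {
		// Integer division rounds down; leftovers are handled below.
		out[i].replicas = out[i].replicas * newTotal / oldTotal
		assigned += out[i].replicas
	}

	// Order by size descending, newest first on ties, so out[0] is the
	// ReplicaSet that receives the leftover pods.
	sort.SliceStable(out, func(i, j int) bool {
		if out[i].replicas != out[j].replicas {
			return out[i].replicas > out[j].replicas
		}
		return out[i].created > out[j].created
	})
	out[0].replicas += newTotal - assigned
	return out
}

func main() {
	sets := []rs{
		{name: "rs-old", replicas: 3, created: 1},
		{name: "rs-new", replicas: 7, created: 2},
	}
	// Scaling the deployment from 10 to 15 replicas:
	// 3*15/10 = 4 and 7*15/10 = 10 after rounding down, so 1 leftover pod
	// remains and is added to rs-new, the largest ReplicaSet.
	fmt.Println(scaleProportionally(sets, 15))
}
```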
