
Commit b774e57

Merge pull request kubernetes#3570 from towca/jtuznik/scale-down-after-delete-fix

Remove ScaleDownNodeDeleted status since we no longer delete nodes synchronously

2 parents: 7a264f5 + bf18d57

File tree: 2 files changed (+1 −3 lines)


cluster-autoscaler/core/static_autoscaler.go (1 addition, 1 deletion)

@@ -529,7 +529,7 @@ func (a *StaticAutoscaler) RunOnce(currentTime time.Time) errors.AutoscalerError
 
 	scaleDownStatus.RemovedNodeGroups = removedNodeGroups
 
-	if scaleDownStatus.Result == status.ScaleDownNodeDeleted {
+	if scaleDownStatus.Result == status.ScaleDownNodeDeleteStarted {
 		a.lastScaleDownDeleteTime = currentTime
 		a.clusterStateRegistry.Recalculate()
 	}

cluster-autoscaler/processors/status/scale_down_status_processor.go (0 additions, 2 deletions)

@@ -88,8 +88,6 @@ const (
 	ScaleDownNoUnneeded
 	// ScaleDownNoNodeDeleted - unneeded nodes present but not available for deletion.
 	ScaleDownNoNodeDeleted
-	// ScaleDownNodeDeleted - a node was deleted.
-	ScaleDownNodeDeleted
 	// ScaleDownNodeDeleteStarted - a node deletion process was started.
 	ScaleDownNodeDeleteStarted
 	// ScaleDownNotTried - the scale down wasn't even attempted, e.g. an autoscaling iteration was skipped, or
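A note on the second hunk: these status values are iota-generated constants, so deleting ScaleDownNodeDeleted shifts every later value down by one. Code that compares against the named constants, as the RunOnce change above does, is unaffected; anything that had persisted the raw integer values would not be. A minimal, hypothetical sketch (a trimmed mirror of the const block in the diff; the real file has additional members):

```go
package main

import "fmt"

// ScaleDownResult is a hypothetical, trimmed mirror of the enum in
// scale_down_status_processor.go after this commit; names follow the diff.
type ScaleDownResult int

const (
	// ScaleDownNoUnneeded - no unneeded nodes found.
	ScaleDownNoUnneeded ScaleDownResult = iota
	// ScaleDownNoNodeDeleted - unneeded nodes present but not available for deletion.
	ScaleDownNoNodeDeleted
	// ScaleDownNodeDeleteStarted - a node deletion process was started.
	ScaleDownNodeDeleteStarted
	// ScaleDownNotTried - the scale down wasn't even attempted.
	ScaleDownNotTried
)

func main() {
	// With ScaleDownNodeDeleted removed from the iota block, the
	// constants after the removal point each take the next lower value.
	fmt.Println(ScaleDownNodeDeleteStarted) // prints 2 in this trimmed sketch
}
```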
