
Commit 9ae1fc3

Store nodes before calling EnsureLoadBalancer
I am having difficulty convincing myself whether this is better or worse. I didn't implement it this way originally because I didn't want to store nodes that we weren't sure we had configured. However: if EnsureLoadBalancer fails, we retry the call from the service controller anyway. Storing the nodes first might save us one update call from the node controller side for syncs that have already started executing on the service controller's side... though is that update really that expensive at this point? And is it really that dangerous not to do either, given that we retry failed calls? Ahhhhh!!! Opinions, please! Help, please!
Parent: 60338c7

2 files changed: +9 −10 lines

staging/src/k8s.io/cloud-provider/controllers/service/controller.go

Lines changed: 1 addition & 1 deletion
```diff
@@ -443,13 +443,13 @@ func (c *Controller) ensureLoadBalancer(ctx context.Context, service *v1.Service
 	if len(nodes) == 0 {
 		c.eventRecorder.Event(service, v1.EventTypeWarning, "UnAvailableLoadBalancer", "There are no available nodes for LoadBalancer")
 	}
+	c.storeLastSyncedNodes(service, nodes)
 	// - Not all cloud providers support all protocols and the next step is expected to return
 	// an error for unsupported protocols
 	status, err := c.balancer.EnsureLoadBalancer(ctx, c.clusterName, service, nodes)
 	if err != nil {
 		return nil, err
 	}
-	c.storeLastSyncedNodes(service, nodes)
 	return status, nil
 }
```
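
Condensed from the hunk above, the resulting flow looks roughly like the sketch below. The function's full signature and the code that produces `nodes` sit outside the hunk, so those parts (the `eligibleNodes` helper in particular) are assumptions for illustration, not the verbatim source:

```go
// Sketch of the resulting flow in ensureLoadBalancer after this commit,
// condensed from the diff above; code outside the hunk is approximated.
func (c *Controller) ensureLoadBalancer(ctx context.Context, service *v1.Service) (*v1.LoadBalancerStatus, error) {
	nodes := c.eligibleNodes(service) // hypothetical stand-in for code outside the hunk
	if len(nodes) == 0 {
		c.eventRecorder.Event(service, v1.EventTypeWarning, "UnAvailableLoadBalancer", "There are no available nodes for LoadBalancer")
	}
	// Moved up: record the node set before the potentially slow cloud call,
	// so a concurrent node sync can see it as already in flight.
	c.storeLastSyncedNodes(service, nodes)
	// Not all cloud providers support all protocols; the next step is
	// expected to return an error for unsupported protocols.
	status, err := c.balancer.EnsureLoadBalancer(ctx, c.clusterName, service, nodes)
	if err != nil {
		// Failed calls are retried by the service controller, so the early
		// store is reconciled on the next attempt.
		return nil, err
	}
	return status, nil
}
```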

staging/src/k8s.io/cloud-provider/controllers/service/controller_test.go

Lines changed: 8 additions & 9 deletions
```diff
@@ -1413,13 +1413,14 @@ func TestSlowNodeSync(t *testing.T) {
 		updateCallCh <- update
 	}
 
-	// Three update calls are expected. This is because this test calls
-	// controller.syncNodes once with two existing services, so we will have an
-	// update call for each service, and controller.syncService once. The end
-	// result is therefore three update calls. Each update call takes
-	// cloudProvider.RequestDelay to process. The test asserts that the order of
-	// the Hosts defined by the update calls is respected, but doesn't
-	// necessarily assert the order of the Service. This is because the
+	// Two update calls are expected. This is because this test calls
+	// controller.syncNodes once with two existing services, but with one
+	// controller.syncService while that is happening. The end result is
+	// therefore two update calls - since the second controller.syncNodes won't
+	// trigger an update call because the syncService already did. Each update
+	// call takes cloudProvider.RequestDelay to process. The test asserts that
+	// the order of the Hosts defined by the update calls is respected, but
+	// doesn't necessarily assert the order of the Service. This is because the
 	// controller implementation doesn't use a deterministic order when syncing
 	// services. The test therefor works out which service is impacted by the
 	// slow node sync (which will be whatever service is not synced first) and
@@ -1429,8 +1430,6 @@
 		{Service: service1, Hosts: []*v1.Node{node1, node2}},
 		// Second update call for impacted service from controller.syncService
 		{Service: service2, Hosts: []*v1.Node{node1, node2, node3}},
-		// Third update call for second service from controller.syncNodes.
-		{Service: service2, Hosts: []*v1.Node{node1, node2, node3}},
 	}
 
 	wg := sync.WaitGroup{}
```
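
To make the saved update call concrete, here is a small self-contained toy model of the race the test exercises. All names (`tracker`, `needsUpdate`, the timings) are illustrative inventions, not the controller's actual types; the point is only that a node sync arriving while EnsureLoadBalancer is still running finds the node set already recorded and can skip its update:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// tracker is a toy stand-in for the controller's last-synced-nodes bookkeeping.
type tracker struct {
	mu         sync.Mutex
	lastSynced map[string][]string // service name -> node names
}

func (t *tracker) store(svc string, nodes []string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.lastSynced[svc] = nodes
}

// needsUpdate reports whether the node set differs from the stored one.
func (t *tracker) needsUpdate(svc string, nodes []string) bool {
	t.mu.Lock()
	defer t.mu.Unlock()
	last := t.lastSynced[svc]
	if len(last) != len(nodes) {
		return true
	}
	for i := range nodes {
		if last[i] != nodes[i] {
			return true
		}
	}
	return false
}

func main() {
	t := &tracker{lastSynced: map[string][]string{}}
	nodes := []string{"node1", "node2", "node3"}

	var wg sync.WaitGroup
	wg.Add(1)
	go func() { // "service controller": the EnsureLoadBalancer path
		defer wg.Done()
		t.store("service2", nodes)        // stored *before* the slow cloud call
		time.Sleep(50 * time.Millisecond) // stand-in for a slow EnsureLoadBalancer
	}()

	time.Sleep(10 * time.Millisecond) // "node controller" wakes up mid-call
	// Prints false: the stored set already matches, so the extra update call
	// from the node sync (the old third call in the test) is skipped.
	fmt.Println("node sync needs update:", t.needsUpdate("service2", nodes))
	wg.Wait()
}
```

The cost the commit message worries about is the opposite case: if EnsureLoadBalancer then fails, the stored set is momentarily ahead of reality until the service controller's retry reconciles it.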
