Commit e8dc400

fix magnum master references (kubernetes#1875)
1 parent 8a6d7e7 commit e8dc400

File tree: 3 files changed, +15 -7 lines changed

docs/magnum-auto-healer/using-magnum-auto-healer.md

Lines changed: 4 additions & 4 deletions

````diff
@@ -28,7 +28,7 @@ Like cluster-autoscaler, magnum-auto-healer is implemented to use together with
 There are some considerations when we were designing the magnum-auto-healer service:
 
 - We want to have a single component for the cluster autohealing purpose. There are already some other components out there in the community to deal with some specific tasks separately, combining them together with some customization may work, but will lead to much complexity and maintenance overhead.
-- Support both master nodes and worker nodes.
+- Support both control-plane nodes and worker nodes.
 - Cluster administrator is able to disable the autohealing feature on the fly, which is very important for the cluster operations like upgrade or scheduled maintenance.
 - The Kubernetes cluster is not necessary to be exposed to either the public or the OpenStack control plane. For example, In Magnum, the end user may create a private cluster which is not accessible even from Magnum control services.
 - The health check should be pluggable. Deployers should be able to write their own health check plugin with customized health check parameters.
@@ -38,7 +38,7 @@ There are some considerations when we were designing the magnum-auto-healer serv
 
 ### Prerequisites
 
-1. A multi-node cluster(3 masters and 3 workers) is created in Magnum.
+1. A multi-node cluster(3 control-planes and 3 workers) is created in Magnum.
 
 ```
 $ openstack coe cluster list
@@ -64,7 +64,7 @@ There are some considerations when we were designing the magnum-auto-healer serv
 
 ### Deploy magnum-auto-healer
 
-It's recommended to run magnum-auto-healer service as a DaemonSet on the master nodes, the service is running in active-passive mode using leader election mechanism. There is a sample manifest file in `manifests/magnum-auto-healer/magnum-auto-healer.yaml`, you need to change some variables as needed before actually running `kubectl apply` command. The following commands are just examples:
+It's recommended to run magnum-auto-healer service as a DaemonSet on the control-plane nodes, the service is running in active-passive mode using leader election mechanism. There is a sample manifest file in `manifests/magnum-auto-healer/magnum-auto-healer.yaml`, you need to change some variables as needed before actually running `kubectl apply` command. The following commands are just examples:
 
 ```shell
 magnum_cluster_uuid=c418c335-0e52-42fc-bd68-baa8d264e072
@@ -163,7 +163,7 @@ spec:
       - effect: NoExecute
         operator: Exists
       nodeSelector:
-        node-role.kubernetes.io/master: ""
+        node-role.kubernetes.io/control-plane: ""
       containers:
       - name: magnum-auto-healer
         image: ${image}
````

manifests/magnum-auto-healer/magnum-auto-healer.yaml

Lines changed: 1 addition & 1 deletion

```diff
@@ -85,7 +85,7 @@ spec:
       - effect: NoExecute
         operator: Exists
       nodeSelector:
-        node-role.kubernetes.io/master: ""
+        node-role.kubernetes.io/control-plane: ""
       containers:
       - name: magnum-auto-healer
         image: docker.io/k8scloudprovider/magnum-auto-healer:latest
```

pkg/autohealing/controller/controller.go

Lines changed: 10 additions & 2 deletions

```diff
@@ -62,8 +62,12 @@ const (
 
 	// LabelNodeRoleMaster specifies that a node is a master
 	// Related discussion: https://github.com/kubernetes/kubernetes/pull/39112
+	// TODO: remove >= k8s 1.25
 	LabelNodeRoleMaster = "node-role.kubernetes.io/master"
 
+	// LabelNodeRoleControlPlane specifies that a node is control-plane
+	LabelNodeRoleControlPlane = "node-role.kubernetes.io/control-plane"
+
 	leaderElectionResourceLockNamespace = "kube-system"
 	leaderElectionResourceLockName      = "magnum-auto-healer"
 )
@@ -248,7 +252,9 @@ func (c *Controller) getUnhealthyMasterNodes() ([]healthcheck.NodeInfo, error) {
 		return nil, err
 	}
 	for _, node := range nodeList.Items {
-		if _, hasMasterRoleLabel := node.Labels[LabelNodeRoleMaster]; hasMasterRoleLabel {
+		_, masterLabel := node.Labels[LabelNodeRoleMaster]
+		_, controlPlaneLabel := node.Labels[LabelNodeRoleControlPlane]
+		if masterLabel || controlPlaneLabel {
 			if time.Now().Before(node.ObjectMeta.GetCreationTimestamp().Add(c.config.CheckDelayAfterAdd)) {
 				log.V(4).Infof("The node %s is created less than the configured check delay, skip", node.Name)
 				continue
@@ -280,7 +286,9 @@ func (c *Controller) getUnhealthyWorkerNodes() ([]healthcheck.NodeInfo, error) {
 		return nil, err
 	}
 	for _, node := range nodeList.Items {
-		if _, hasMasterRoleLabel := node.Labels[LabelNodeRoleMaster]; hasMasterRoleLabel {
+		_, masterLabel := node.Labels[LabelNodeRoleMaster]
+		_, controlPlaneLabel := node.Labels[LabelNodeRoleControlPlane]
+		if masterLabel || controlPlaneLabel {
 			continue
 		}
 		if len(node.Status.Conditions) == 0 {
```
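The controller change boils down to a single predicate: a node counts as control-plane if it carries either the legacy `node-role.kubernetes.io/master` label or the newer `node-role.kubernetes.io/control-plane` label. A minimal standalone sketch of that check (the `isControlPlane` helper name is ours, not from the patch, which inlines the two lookups):

```go
package main

import "fmt"

const (
	// Legacy label; newer kubeadm releases no longer apply it to
	// control-plane nodes (hence the TODO in the patch above).
	labelNodeRoleMaster = "node-role.kubernetes.io/master"
	// Replacement label used on current clusters.
	labelNodeRoleControlPlane = "node-role.kubernetes.io/control-plane"
)

// isControlPlane reports whether a node's labels mark it as control-plane,
// accepting either the legacy or the new role label, mirroring the check
// added to getUnhealthyMasterNodes and getUnhealthyWorkerNodes.
func isControlPlane(labels map[string]string) bool {
	_, master := labels[labelNodeRoleMaster]
	_, controlPlane := labels[labelNodeRoleControlPlane]
	return master || controlPlane
}

func main() {
	legacy := map[string]string{labelNodeRoleMaster: ""}
	current := map[string]string{labelNodeRoleControlPlane: ""}
	worker := map[string]string{"kubernetes.io/hostname": "worker-0"}
	fmt.Println(isControlPlane(legacy), isControlPlane(current), isControlPlane(worker))
	// Output: true true false
}
```

Accepting both labels keeps the auto-healer working across the migration window, when a cluster may contain nodes labeled either way.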
