Commit cd071fc

K8s: node selection page edits (#1968)
* use yaml embeds * make RZA info consistent * copy edit
1 parent fd2ebb3 commit cd071fc

2 files changed: +67, −112 lines changed

content/operate/kubernetes/recommendations/node-selection.md

Lines changed: 60 additions & 110 deletions
@@ -11,28 +11,16 @@ linkTitle: Node selection
 weight: 80
 ---
 
-Many Kubernetes cluster deployments have different kinds of nodes that have
-different CPU and memory resources available for scheduling cluster workloads.
-Redis Enterprise for Kubernetes has various abilities to control the scheduling
-Redis Enterprise cluster node pods through properties specified in the
-Redis Enterprise cluster custom resource definition (CRD).
+Kubernetes clusters often include nodes with different CPU and memory profiles. You control where Redis Enterprise cluster (REC) pods run by setting fields in the REC custom resource (CRD).
 
-A Redis Enterprise cluster (REC) is deployed as a StatefulSet which manages the Redis Enterprise cluster node pods.
-The scheduler chooses a node to deploy a new Redis Enterprise cluster node pod on when:
+A Redis Enterprise cluster (REC) runs as a StatefulSet. The Kubernetes scheduler assigns nodes when you create or resize the cluster, or when a pod restarts.
 
-- The cluster is created
-- The cluster is resized
-- A pod fails
+Use these options to control pod placement:
 
-Here are the ways that you can control the pod scheduling:
+## Use node selectors
 
-## Using node selectors
-
-The [`nodeSelector`]({{<relref "/operate/kubernetes/reference/api/redis_enterprise_cluster_api#spec">}})
-property of the cluster specification uses the same values and structures as
-the [Kubernetes `nodeSelector`](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector).
-In general, node labels are a simple way to make sure that specific nodes are used for Redis Enterprise pods.
-For example, if nodes 'n1' and 'n2' are labeled as "high memory":
+The [`nodeSelector`]({{<relref "/operate/kubernetes/reference/api/redis_enterprise_cluster_api#spec">}}) field matches the Kubernetes [`nodeSelector`](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) syntax.
+Label the nodes you want to target. For example, if nodes 'n1' and 'n2' are labeled with `memory=high`:
 
 ```sh
 kubectl label nodes n1 memory=high
@@ -52,21 +40,13 @@ spec:
     memory: high
 ```
 
-Then, when the operator creates the StatefulSet associated with the pod, the nodeSelector
-section is part of the pod specification. When the scheduler attempts to
-create new pods, it needs to satisfy the node selection constraints.
-
+The operator copies [`nodeSelector`]({{< relref "/operate/kubernetes/reference/api/redis_enterprise_cluster_api#spec" >}}) into the pod spec. The scheduler places pods only on nodes that match the selector.
 
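The copy the new text describes can be illustrated with a hypothetical fragment of the generated StatefulSet pod template. This is a sketch following the standard Kubernetes pod-template structure, not actual operator output:

```yaml
# Sketch: how the REC's nodeSelector could surface in the StatefulSet pod template
spec:
  template:
    spec:
      nodeSelector:
        memory: high   # copied from the REC spec
```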
-## Using node pools
+## Use node pools
 
-A node pool is a common part of the underlying infrastructure of the Kubernetes cluster deployment and provider.
-Often, node pools are similarly-configured classes of nodes such as nodes with the same allocated amount of memory and CPU.
-Implementors often label these nodes with a consistent set of labels.
+Node pools group similar nodes. Providers label nodes by pool.
 
-On Google Kubernetes Engine (GKE), all node pools have the label `cloud.google.com/gke-nodepool` with a value of the name used during configuration.
-On Microsoft Azure Kubernetes System (AKS), you can create node pools with a specific set of labels. Other managed cluster services may have similar labeling schemes.
-
-You can use the `nodeSelector` section to request a specific node pool by label values. For example, on GKE:
+Use [`nodeSelector`]({{< relref "/operate/kubernetes/reference/api/redis_enterprise_cluster_api#spec" >}}) to target a pool by label. For example, on GKE:
 
 ```yaml
 apiVersion: app.redislabs.com/v1
@@ -79,24 +59,36 @@ spec:
     cloud.google.com/gke-nodepool: 'high-memory'
 ```
 
-## Using node taints
+### Provider resources
+
+Cloud providers label nodes by pool. See links below for specific documentation.
+
+- GKE:
+  - [Create and manage cluster and node pool labels](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-managing-labels)
+  - [Update node labels and taints for existing node pools](https://cloud.google.com/kubernetes-engine/docs/how-to/update-existing-nodepools)
+- AKS:
+  - [Use labels in an AKS cluster](https://learn.microsoft.com/en-us/azure/aks/use-labels)
+  - [Manage node pools in AKS](https://learn.microsoft.com/en-us/azure/aks/manage-node-pools)
+- EKS:
+  - [Create a managed node group with labels (AWS CLI)](https://docs.aws.amazon.com/cli/latest/reference/eks/create-nodegroup.html)
+  - [Update a managed node group to add labels (AWS CLI)](https://docs.aws.amazon.com/cli/latest/reference/eks/update-nodegroup-config.html)
+
+## Use node taints
 
-You can use multiple node taints with a set of tolerations to control Redis Enterprise cluster node pod scheduling.
-The `podTolerations` property of the cluster specification specifies a list of pod tolerations to use.
-The value is a list of [Kubernetes tolerations](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/#concepts).
+Use node taints and pod tolerations to control REC pod scheduling. Set tolerations with [`spec.podTolerations`]({{< relref "/operate/kubernetes/reference/api/redis_enterprise_cluster_api#specpodtolerations" >}}) (standard [Kubernetes tolerations](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/#concepts)).
 
-For example, if the cluster has a single node pool, the node taints can control the allowed workloads for a node.
-You can add taints to the node, for example nodes n1, n2, and n3, reserve a set of nodes for the Redis Enterprise cluster:
+Example: on a single node pool, reserve nodes n1–n3 for REC by adding taints:
 
 ```sh
 kubectl taint nodes n1 db=rec:NoSchedule
 kubectl taint nodes n2 db=rec:NoSchedule
 kubectl taint nodes n3 db=rec:NoSchedule
 ```
 
-This prevents any pods from being scheduled onto the nodes unless the pods can tolerate the taint `db=rec`.
+This blocks pods unless they tolerate the `db=rec` taint.
 
-You can then add the toleration for this taint to the cluster specification:
+Then add a matching toleration to the REC:
 
 ```yaml
 apiVersion: app.redislabs.com/v1
@@ -107,17 +99,17 @@ spec:
   nodes: 3
   podTolerations:
   - key: db
-   operator: Equal
+    operator: Equal
     value: rec
     effect: NoSchedule
 ```
 
 A set of taints can also handle more complex use cases.
 For example, a `role=test` or `role=dev` taint can be used to designate a node as dedicated for testing or development workloads via pod tolerations.
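The `role=test`/`role=dev` pattern mentioned above could look like the following sketch. Only the taint key and value come from the paragraph; the cluster name and node count are illustrative assumptions:

```yaml
# Sketch: tolerate a hypothetical role=dev taint so this cluster runs on dev nodes
apiVersion: app.redislabs.com/v1
kind: RedisEnterpriseCluster
metadata:
  name: rec-dev        # illustrative name
spec:
  nodes: 3
  podTolerations:
  - key: role
    operator: Equal
    value: dev
    effect: NoSchedule
```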
 
-## Using pod anti-affinity
+## Use pod anti-affinity
 
-By default, the Redis Enterprise node pods are not allowed to be placed on the same node for the same cluster:
+By default, REC node pods are not scheduled on the same node within the same cluster:
 
 ```yaml
 podAntiAffinity:
@@ -130,10 +122,9 @@ podAntiAffinity:
     topologyKey: kubernetes.io/hostname
 ```
 
-Each pod has the three labels above where `redis.io/cluster` is the label for the name of your cluster.
+Each pod has these labels. `redis.io/cluster` is your cluster name.
 
-You can change this rule to restrict or include nodes that the Redis Enterprise cluster node pods can run on.
-For example, you can delete the `redis.io/cluster` label so that even Redis Enterprise node pods from different clusters cannot be scheduled on the same Kubernetes node:
+Modify this rule to widen or narrow placement. For example, remove the `redis.io/cluster` label to prevent pods from different clusters from sharing a node:
 
 ```yaml
 apiVersion: app.redislabs.com/v1
@@ -151,9 +142,7 @@ spec:
       topologyKey: kubernetes.io/hostname
 ```
 
-or you can prevent Redis Enterprise nodes from being schedule with other workloads.
-For example, if all database workloads have the label 'local/role: database', you
-can use this label to avoid scheduling two databases on the same node:
+To avoid co-locating with other database workloads, label those pods `local/role: database` and add anti-affinity to keep one database per node:
 
 ```yaml
 apiVersion: app.redislabs.com/v1
@@ -175,39 +164,36 @@ spec:
       topologyKey: kubernetes.io/hostname
 ```
 
-In this case, any pods that are deployed with the label `local/role: database` cannot be scheduled on the same node.
+Kubernetes will not schedule two pods with label `local/role: database` on the same node.
 
+## Enable rack awareness
 
-## Using rack awareness
-
-You can configure Redis Enterprise with rack-zone awareness to increase availability
-during partitions or other rack (or region) related failures.
+Enable rack-zone awareness to improve availability during rack or zone failures.
 
 {{%note%}}When creating your rack-zone ID, there are some constraints to consider; see [rack-zone awareness]({{< relref "/operate/rs/clusters/configure/rack-zone-awareness#rack-zone-id-rules" >}}) for more info. {{%/note%}}
 
-Rack-zone awareness is a single property in the Redis Enterprise cluster CRD named `rackAwarenessNodeLabel`.
+Configure it with [`spec.rackAwarenessNodeLabel`]({{< relref "/operate/kubernetes/reference/api/redis_enterprise_cluster_api#spec" >}}) in the REC.
 
 ### Choose a node label
 
 The most common label used for rack-zone awareness is topology.kubernetes.io/zone, a standard Kubernetes label that shows the zone a node runs in. Many Kubernetes platforms add this label to nodes by default, as noted in the [Kubernetes documentation](https://kubernetes.io/docs/setup/best-practices/multiple-zones/#nodes-are-labeled).
 
 If your platform doesn’t set this label automatically, you can use any custom label that describes the node’s topology (such as rack, zone, or region).
 
-### Node labeling requirements
+### Label all eligible nodes
 
 {{< warning >}}
-
-**All eligible nodes must have the label for rack-awareness to work. The operator requires every node that might run Redis Enterprise pods to be labeled. If any are missing the label, reconciliation will fail.
+All eligible nodes **must** have the label for rack awareness to work. The operator requires every node that might run Redis Enterprise pods to be labeled. If any nodes are missing the label, reconciliation fails.
 {{< /warning >}}
 
-Eligible nodes are all nodes where Redis Enterprise pods can be scheduled. By default, these are all worker nodes in the cluster, but you can limit them using `spec.nodeSelector` in the Redis Enterprise cluster (REC) configuration.
+Eligible nodes are nodes where REC pods can run. By default, this means all worker nodes. You can limit eligibility with [`spec.nodeSelector`]({{< relref "/operate/kubernetes/reference/api/redis_enterprise_cluster_api#spec" >}}).
 
-The value for the chosen label must indicate the topology information (rack, zone, region, etc.) for each node.
+Give each eligible node a label value that reflects its rack, zone, or region.
 
-You can check the value for this label in your nodes with the command:
+Check node label values:
 
 ```sh
-$ kubectl get nodes -o custom-columns="name:metadata.name","rack\\zone:metadata.labels.topology\.kubernetes\.io/zone"
+kubectl get nodes -o custom-columns="name:metadata.name","rack\\zone:metadata.labels.topology\.kubernetes\.io/zone"
 
 name                                        rack\zone
 ip-10-0-x-a.eu-central-1.compute.internal   eu-central-1a
@@ -216,71 +202,35 @@ ip-10-0-x-c.eu-central-1.compute.internal eu-central-1b
 ip-10-0-x-d.eu-central-1.compute.internal   eu-central-1b
 ```
 
-### Enabling the cluster role
-
-For the operator to read the cluster node information, you must create a cluster role for the operator and then bind the role to the service account.
+### Enable the cluster role
 
-Here's a cluster role:
-
-```yaml
-kind: ClusterRole
-apiVersion: rbac.authorization.k8s.io/v1
-metadata:
-  name: redis-enterprise-operator
-rules:
-  # needed for rack awareness
-  - apiGroups: [""]
-    resources: ["nodes"]
-    verbs: ["list", "get", "watch"]
-```
+Grant the operator read access to node labels with a ClusterRole and ClusterRoleBinding.
 
-And here's how to apply the role:
+ClusterRole:
 
-```sh
-kubectl apply -f https://raw.githubusercontent.com/RedisLabs/redis-enterprise-k8s-docs/master/rack_awareness/rack_aware_cluster_role.yaml
-```
+{{<embed-yaml "k8s/rack_aware_cluster_role.md" "rack-aware-cluster-role.yaml">}}
 
-The binding is typically to the `redis-enterprise-operator` service account:
+Bind to the `redis-enterprise-operator` service account:
 
-```yaml
-kind: ClusterRoleBinding
-apiVersion: rbac.authorization.k8s.io/v1
-metadata:
-  name: redis-enterprise-operator
-subjects:
-- kind: ServiceAccount
-  namespace: OPERATOR_NAMESPACE
-  name: redis-enterprise-operator
-roleRef:
-  kind: ClusterRole
-  name: redis-enterprise-operator
-  apiGroup: rbac.authorization.k8s.io
-```
+{{<embed-yaml "k8s/rack_aware_cluster_role_binding.md" "rack-aware-cluster-role-binding.yaml">}}
 
-and it can be applied by running:
+Apply these files with `kubectl apply`. For example:
 
 ```sh
-kubectl apply -f https://raw.githubusercontent.com/RedisLabs/redis-enterprise-k8s-docs/master/rack_awareness/rack_aware_cluster_role_binding.yaml
+kubectl apply -f rack-aware-cluster-role.yaml
+kubectl apply -f rack-aware-cluster-role-binding.yaml
 ```
 
-Once the cluster role and the binding have been applied, you can configure Redis Enterprise clusters to use rack awareness labels.
+After you apply the role and binding, you can configure rack awareness.
 
-### Configuring rack awareness
+### Configure rack awareness
 
-You can configure the node label to read for the rack zone by setting the `rackAwarenessNodeLabel` property:
+Set [`spec.rackAwarenessNodeLabel`]({{< relref "/operate/kubernetes/reference/api/redis_enterprise_cluster_api#spec" >}}) to the node label to use:
 
-```yaml
-apiVersion: app.redislabs.com/v1
-kind: RedisEnterpriseCluster
-metadata:
-  name: example-redisenterprisecluster
-spec:
-  nodes: 3
-  rackAwarenessNodeLabel: topology.kubernetes.io/zone
-```
+{{<embed-yaml "k8s/rack_aware_rec.md" "rack-aware-cluster.yaml">}}
 
 {{< note >}}
-When you use the `rackAwarenessNodeLabel` property, the operator will change the topologyKey for the anti-affinity rule to the label name used unless you have specified the `podAntiAffinity` property as well. If you use `rackAwarenessNodeLabel` and `podAntiAffinity` together, you must make sure that the `topologyKey` in your pod anti-affinity rule is set to the node label name.
+When you set [`spec.rackAwarenessNodeLabel`]({{< relref "/operate/kubernetes/reference/api/redis_enterprise_cluster_api#spec" >}}), the operator sets the anti-affinity `topologyKey` to that label unless you define [`spec.podAntiAffinity`]({{< relref "/operate/kubernetes/reference/api/redis_enterprise_cluster_api#specpodantiaffinity" >}}). If you define both, make sure `topologyKey` matches your node label.
 {{< /note >}}
 
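The note above (keep `topologyKey` in sync with the rack-awareness label) can be sketched as follows. The cluster name and the label selector are illustrative assumptions, not verbatim operator defaults; the anti-affinity structure mirrors the examples earlier on this page:

```yaml
# Sketch: rackAwarenessNodeLabel with a custom podAntiAffinity;
# topologyKey must match the rack-awareness label
apiVersion: app.redislabs.com/v1
kind: RedisEnterpriseCluster
metadata:
  name: rec            # illustrative name
spec:
  nodes: 3
  rackAwarenessNodeLabel: topology.kubernetes.io/zone
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          redis.io/cluster: rec              # assumed pod label, per the anti-affinity section
      topologyKey: topology.kubernetes.io/zone   # matches rackAwarenessNodeLabel
```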

 ### Rack awareness limitations

content/operate/kubernetes/reference/yaml/rack-awareness.md

Lines changed: 7 additions & 2 deletions
@@ -16,7 +16,7 @@ This page provides YAML examples for deploying Redis Enterprise with [rack aware
 
 - Label [Kubernetes nodes](https://kubernetes.io/docs/concepts/architecture/nodes/) with zone information
 - Typically uses the standard label `topology.kubernetes.io/zone`
-- Verify node labels: `kubectl get nodes --show-labels`
+- Verify node labels: `kubectl get nodes -o custom-columns="name:metadata.name","rack\\zone:metadata.labels.topology\.kubernetes\.io/zone"`
 - Install the [Redis Enterprise operator]({{< relref "/operate/kubernetes/deployment" >}})
 
 For complete deployment instructions, see [Deploy on Kubernetes]({{< relref "/operate/kubernetes/deployment" >}}).
@@ -34,11 +34,13 @@ Rack awareness requires additional permissions to read [node labels](https://kub
 {{<embed-yaml "k8s/rack_aware_cluster_role.md" "rack-aware-cluster-role.yaml">}}
 
 Cluster role configuration:
+
 - `name`: ClusterRole name for rack awareness permissions
 - `rules`: Permissions to read nodes and their labels cluster-wide
 - `resources`: Access to `nodes` resource for zone label discovery
 
 Key permissions:
+
 - `nodes`: Read access to discover node zone labels
 - `get, list, watch`: Monitor node changes and zone assignments
 
@@ -49,6 +51,7 @@ The [ClusterRoleBinding](https://kubernetes.io/docs/reference/access-authn-authz
 {{<embed-yaml "k8s/rack_aware_cluster_role_binding.md" "rack-aware-cluster-role-binding.yaml">}}
 
 Cluster role binding configuration:
+
 - `subjects.name`: Must match the service account name
 - `subjects.namespace`: Namespace where the operator is deployed
 - `roleRef.name`: Must match the cluster role name
@@ -60,6 +63,7 @@ The rack-aware [REC configuration]({{< relref "/operate/kubernetes/reference/api
 {{<embed-yaml "k8s/rack_aware_rec.md" "rack-aware-cluster.yaml">}}
 
 Rack-aware cluster configuration:
+
 - `metadata.name`: Cluster name (cannot be changed after creation)
 - `spec.rackAwarenessNodeLabel`: Node label used for zone identification
 - `spec.nodes`: Minimum 3 nodes, ideally distributed across zones
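For reference, the inline example this commit removes from node-selection.md showed the shape the embedded file describes (taken from the deleted lines of this same commit):

```yaml
# Shape of a rack-aware REC, per the inline example replaced by the embed
apiVersion: app.redislabs.com/v1
kind: RedisEnterpriseCluster
metadata:
  name: example-redisenterprisecluster
spec:
  nodes: 3
  rackAwarenessNodeLabel: topology.kubernetes.io/zone
```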
@@ -69,14 +73,15 @@ Edit the values in the downloaded YAML file based on your environment, such as i
 ### Common zone labels
 
 Different Kubernetes distributions use different zone labels:
+
 - `Standard`: `topology.kubernetes.io/zone`
 - `Legacy`: `failure-domain.beta.kubernetes.io/zone`
 - `Custom`: Your organization's specific labeling scheme
 
 Verify the correct label on your nodes:
 
 ```bash
-kubectl get nodes -o custom-columns=NAME:.metadata.name,ZONE:.metadata.labels.'topology\.kubernetes\.io/zone'
+kubectl get nodes -o custom-columns="name:metadata.name","rack\\zone:metadata.labels.topology\.kubernetes\.io/zone"
 ```
 
 ## Redis Enterprise database
