Commit 4f77c1a

Clarify controller and worker use cases
1 parent 7000f7b commit 4f77c1a

1 file changed: +15 −5 lines changed

docs/reference/embedded-config.mdx

@@ -79,9 +79,13 @@ Roles are not updated or changed after a node is added. If you need to change a
 
 ### controller
 
-The controller role is required in any cluster. Nodes with this role are “controller workers” because they run the control plane and can run other workloads too. The first node in a cluster will always have the controller role because a cluster needs a control plane. Any node that doesn't have the controller role is a worker node.
+Controller nodes run the Kubernetes control plane and can also run other workloads, such as application or Replicated workloads. For this reason, nodes with the controller role are considered "controller workers".
 
-By default, the controller role is called “controller.” You can customize the name of the controller role with the `spec.roles.controller.name` field, like this:
+All clusters require at least one node with the controller role because a cluster needs a control plane. The first node in a cluster always has the controller role.
+
+Multi-node clusters with high availability require at least three controller nodes. If you use more than three controller nodes, it is recommended to use an odd number. An odd number of controller nodes ensures that the cluster can reach quorum and avoids split-brain scenarios. For more information about highly available clusters, see [Options for Highly Available Topology](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/) in the Kubernetes documentation.
+
+By default, the controller role is named "controller." You can customize the name of the controller role with the `spec.roles.controller.name` field, as shown below:
 
 ```yaml
 apiVersion: embeddedcluster.replicated.com/v1beta1
@@ -94,7 +98,13 @@ spec:
 
 ### custom
 
-You can define custom roles for other purposes in the cluster. This is particularly useful when combined with labels.
+You can optionally define custom roles for any non-controller, or _worker_, nodes in the cluster. Worker nodes can run any workloads that are deployed to the cluster by Embedded Cluster, such as application or Replicated workloads. Unlike controller nodes, worker nodes cannot run the Kubernetes control plane.
+
+Custom roles are particularly useful when combined with [labels](#labels) for assigning specific workloads to nodes.
+
+Some example use cases for defining custom roles include:
+* Your application has a workload that must run on a GPU, so you assign a custom role to the worker node that runs the GPU workload.
+* Your application has a database that requires significant resources, so you assign a custom role to a worker node that is dedicated to running only that workload.
 
 Custom roles are defined with the `spec.roles.custom` array, as shown in the example below:
 
@@ -109,9 +119,9 @@ spec:
 
 ### labels
 
-Roles can have associated Kubernetes labels that are applied to any node in the cluster that is assigned that role. This is useful for things like assigning workloads to nodes.
+Roles can have associated Kubernetes labels that are applied to any node in the cluster that is assigned that role. Labels are useful for tasks like assigning workloads to nodes.
 
-Labels are defined for the controller role and custom roles, as shown in the example below:
+Labels can be defined for the controller role and any custom roles, as shown in the example below:
 
 ```yaml
 apiVersion: embeddedcluster.replicated.com/v1beta1