Commit 3901462

Merge pull request #44426 from EricPonvelle/OSDOCS-2924_No-More-Masters
OSDOCS-2924: Removed instances of Master from OSD/ROSA
2 parents 79c4fdf + 52184c2 commit 3901462

14 files changed (+22 / -22 lines)

modules/aws-limits.adoc

Lines changed: 3 additions & 3 deletions
@@ -21,13 +21,13 @@ The following table summarizes the AWS components whose limits can impact your a
 |At a minimum, each cluster creates the following instances:
 
 * One bootstrap machine, which is removed after installation
-* Three master nodes
+* Three control plane nodes
 * Two infrastructure nodes for a single availability zone; three infrastructure nodes for multi-availability zones
 * Two worker nodes for a single availability zone; three worker nodes for multi-availability zones
 
 These instance type counts are within a new account's default limit. To deploy more worker nodes, deploy large workloads, or use a different instance type, review your account limits to ensure that your cluster can deploy the machines that you need.
 
-In most regions, the worker machines use an `m6i.large` machine and the bootstrap and master machines use `m6i.xlarge` instances. In some regions, including all regions that do not support these instance types, `m5.large` and `m5.xlarge` instances are used instead.
+In most regions, the bootstrap and worker machines use `m4.large` machines and the control plane machines use `m4.xlarge` instances. In some regions, including all regions that do not support these instance types, `m5.large` and `m5.xlarge` instances are used instead.
 
 |Elastic IPs (EIPs)
 |0 to 1
@@ -50,7 +50,7 @@ To use the `us-east-1` region, you must increase the EIP limit for your account.
 |Elastic Load Balancing (ELB/NLB)
 |3
 |20 per region
-|By default, each cluster creates internal and external network load balancers for the master API server and a single classic elastic load balancer for the router. Deploying more Kubernetes LoadBalancer Service objects will create additional link:https://aws.amazon.com/elasticloadbalancing/[load balancers].
+|By default, each cluster creates internal and external network load balancers for the primary API server and a single classic elastic load balancer for the router. Deploying more Kubernetes LoadBalancer Service objects will create additional link:https://aws.amazon.com/elasticloadbalancing/[load balancers].
 
 
 |NAT Gateways
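
Reviewing the account limits mentioned in this hunk can be done from the AWS CLI. A minimal sketch using the Service Quotas API; the quota-name filter string is illustrative and exact quota names may vary by account:

[source,terminal]
----
$ aws service-quotas list-service-quotas \
    --service-code ec2 \
    --query "Quotas[?contains(QuotaName, 'Running On-Demand Standard')].[QuotaName,Value]" \
    --output table
----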

modules/osd-monitoring-assigning-tolerations-to-monitoring-components.adoc

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@
 [id="assigning-tolerations-to-monitoring-components_{context}"]
 = Assigning tolerations to components that monitor user-defined projects
 
-You can assign tolerations to the components that monitor user-defined projects, to enable moving them to tainted worker nodes. Scheduling is not permitted on master or infrastructure nodes.
+You can assign tolerations to the components that monitor user-defined projects, to enable moving them to tainted worker nodes. Scheduling is not permitted on control plane or infrastructure nodes.
 
 .Prerequisites
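
These tolerations are set through the `config.yaml` key of the `user-workload-monitoring-config` ConfigMap. A minimal sketch, assuming worker nodes tainted with an illustrative `key1=value1:NoSchedule` taint:

[source,terminal]
----
$ oc apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    # tolerations must match the taint applied to the target worker nodes
    thanosRuler:
      tolerations:
      - key: "key1"
        operator: "Equal"
        value: "value1"
        effect: "NoSchedule"
EOF
----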

modules/osd-monitoring-moving-monitoring-components-to-different-nodes.adoc

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@
 [id="moving-monitoring-components-to-different-nodes_{context}"]
 = Moving monitoring components to different nodes
 
-You can move any of the components that monitor workloads for user-defined projects to specific worker nodes. It is not permitted to move components to master or infrastructure nodes.
+You can move any of the components that monitor workloads for user-defined projects to specific worker nodes. It is not permitted to move components to control plane or infrastructure nodes.
 
 .Prerequisites
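
Moving a component works through the same ConfigMap, with a `nodeSelector` instead of tolerations; a sketch assuming the target worker nodes carry an illustrative `monitoring=true` label:

[source,terminal]
----
$ oc apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    # schedule the Prometheus pods onto the labeled worker nodes
    prometheus:
      nodeSelector:
        monitoring: "true"
EOF
----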

modules/policy-disaster-recovery.adoc

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@
 = Disaster recovery
 
 
-{product-title} provides disaster recovery for failures that occur at the pod, worker node, infrastructure node, master node, and availability zone levels.
+{product-title} provides disaster recovery for failures that occur at the pod, worker node, infrastructure node, control plane node, and availability zone levels.
 
 All disaster recovery requires that the customer use best practices for deploying highly available applications, storage, and cluster architecture (for example, single-zone deployment vs. multi-zone deployment) to account for the level of desired availability.

modules/policy-failure-points.adoc

Lines changed: 2 additions & 2 deletions
@@ -35,9 +35,9 @@ When accounting for possible node failures, it is also important to understand h
 
 [id="cluster-failure_{context}"]
 == Cluster failure
-{product-title} clusters have at least three master nodes and three infrastructure nodes that are preconfigured for high availability, either in a single zone or across multiple zones depending on the type of cluster you have selected. This means that master and infrastructure nodes have the same resiliency of worker nodes, with the added benefit of being managed completely by Red Hat.
+{product-title} clusters have at least three control plane nodes and three infrastructure nodes that are preconfigured for high availability, either in a single zone or across multiple zones depending on the type of cluster you have selected. This means that control plane and infrastructure nodes have the same resiliency as worker nodes, with the added benefit of being managed completely by Red Hat.
 
-In the event of a complete master outage, the OpenShift APIs will not function, and existing worker node pods will be unaffected. However, if there is also a pod or node outage at the same time, the masters will have to recover before new pods or nodes can be added or scheduled.
+In the event of a complete control plane node outage, the OpenShift APIs will not function, and existing worker node pods will be unaffected. However, if there is also a pod or node outage at the same time, the control plane nodes will have to recover before new pods or nodes can be added or scheduled.
 
 All services running on infrastructure nodes are configured by Red Hat to be highly available and distributed across infrastructure nodes. In the event of a complete infrastructure outage, these services will be unavailable until these nodes have been recovered.

modules/policy-incident.adoc

Lines changed: 1 addition & 1 deletion
@@ -84,7 +84,7 @@ All {product-title} clusters are backed up using cloud provider snapshots. Notab
 
 [id="cluster-capacity_{context}"]
 == Cluster capacity
-Evaluating and managing cluster capacity is a responsibility that is shared between Red Hat and the customer. Red Hat SRE is responsible for the capacity of all master and infrastructure nodes on the cluster.
+Evaluating and managing cluster capacity is a responsibility that is shared between Red Hat and the customer. Red Hat SRE is responsible for the capacity of all control plane and infrastructure nodes on the cluster.
 
 Red Hat SRE also evaluates cluster capacity during upgrades and in response to cluster alerts. The impact of a cluster upgrade on capacity is evaluated as part of the upgrade testing process to ensure that capacity is not negatively impacted by new additions to the cluster. During a cluster upgrade, additional worker nodes are added to make sure that total cluster capacity is maintained during the upgrade process.

modules/policy-responsibilities.adoc

Lines changed: 1 addition & 1 deletion
@@ -41,7 +41,7 @@ If the `cluster-admin` role is enabled on a cluster, see the responsibilities an
 
 |Virtual networking |Shared |Shared |Shared |Shared |Shared
 
-|Master and infrastructure nodes |Red Hat |Red Hat |Red Hat |Red Hat |Red Hat
+|Control plane and infrastructure nodes |Red Hat |Red Hat |Red Hat |Red Hat |Red Hat
 
 |Worker nodes |Red Hat |Red Hat |Red Hat |Red Hat |Red Hat

modules/policy-shared-responsibility.adoc

Lines changed: 3 additions & 3 deletions
@@ -34,7 +34,7 @@ The customer is responsible for incident and operations management of customer a
 
 [id="change-management_{context}"]
 == Change management
-Red Hat is responsible for enabling changes to the cluster infrastructure and services that the customer will control, as well as maintaining versions for the master nodes, infrastructure nodes and services, and worker nodes. The customer is responsible for initiating infrastructure change requests and installing and maintaining optional services and networking configurations on the cluster, as well as all changes to customer data and customer applications.
+Red Hat is responsible for enabling changes to the cluster infrastructure and services that the customer will control, as well as maintaining versions for the control plane nodes, infrastructure nodes and services, and worker nodes. The customer is responsible for initiating infrastructure change requests and installing and maintaining optional services and networking configurations on the cluster, as well as all changes to customer data and customer applications.
 
 [cols="2a,3a,3a",options="header"]
 |===
@@ -66,7 +66,7 @@ Red Hat is responsible for enabling changes to the cluster infrastructure and se
 
 |Cluster networking
 |* Set up cluster management components, such as public or private service endpoints and necessary integration with virtual networking components.
-* Set up internal networking components required for internal cluster communication between worker, infrastructure, and master nodes.
+* Set up internal networking components required for internal cluster communication between worker, infrastructure, and control plane nodes.
 |* Provide optional non-default IP address ranges for machine CIDR, service CIDR, and pod CIDR if needed through {cluster-manager} when the cluster is provisioned.
 * Request that the API service endpoint be made public or private on cluster creation or after cluster creation through {cluster-manager}.
 
@@ -84,7 +84,7 @@ Red Hat is responsible for enabling changes to the cluster infrastructure and se
 * Test customer applications on minor and maintenance versions to ensure compatibility.
 
 |Capacity management
-|* Monitor utilization of control plane (master nodes and infrastructure nodes).
+|* Monitor utilization of control plane (control plane nodes and infrastructure nodes).
 * Scale or resize control plane nodes to maintain quality of service.
 * Monitor utilization of customer resources including Network, Storage and Compute capacity. Where autoscaling features are not enabled, alert the customer to any changes required to cluster resources (for example, new compute nodes to scale, additional storage, etc.).
|* Use the provided {cluster-manager} controls to add or remove additional worker nodes as required.

modules/rosa-create-objects.adoc

Lines changed: 2 additions & 2 deletions
@@ -114,7 +114,7 @@ $ rosa create cluster --cluster=<cluster_name> | <cluster_id> [arguments]
 |Block of IP addresses (ipNet) from which pod IP addresses are allocated. Example: `10.128.0.0/14`
 
 |--private
-|Restricts master API endpoint and application routes to direct, private connectivity.
+|Restricts primary API endpoint and application routes to direct, private connectivity.
 
 |--private-link
 | Specifies to use AWS PrivateLink to provide private connectivity between VPCs and services. The `--subnet-ids` argument is required when using `--private-link`.
@@ -126,7 +126,7 @@ $ rosa create cluster --cluster=<cluster_name> | <cluster_id> [arguments]
 |Block of IP addresses (ipNet) for services. Example: `172.30.0.0/16`
 
 |--subnet-ids
-|The subnet IDs (string) to use when installing the cluster. Subnet IDs must be in pairs with one private subnet ID and one public subnet ID per availability zone. Subnets are comma-delimited. Example: `--subnet-ids=subnet-1,subnet-2`. Leave the value empty for installer-provisioned subnet IDs.
+|The subnet IDs (string) to use when installing the cluster. Subnet IDs must be in pairs with one private subnet ID and one public subnet ID per availability zone. Subnets are comma-delimited. Example: `--subnet-ids=subnet-1,subnet-2`. Leave the value empty for installer-provisioned subnet IDs.
 
 
When using `--private-link`, the `--subnet-ids` argument is required and only one private subnet is allowed per zone.
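
A sketch of how these flags combine on the command line, following the synopsis in the hunk header; the cluster name and subnet ID are hypothetical, and `--private-link` pairs with a single private subnet per zone as noted above:

[source,terminal]
----
$ rosa create cluster --cluster=mycluster \
    --private-link \
    --subnet-ids=subnet-0a1b2c3d4e5f67890
----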

modules/rosa-edit-objects.adoc

Lines changed: 1 addition & 1 deletion
@@ -29,7 +29,7 @@ $ rosa edit cluster --cluster=<cluster_name> | <cluster_id> [arguments]
 |Required: The name or ID (string) of the cluster to edit.
 
 |--private
-|Restricts a master API endpoint to direct, private connectivity.
+|Restricts a primary API endpoint to direct, private connectivity.
 
 |===
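
A minimal usage sketch for the edited flag, with a hypothetical cluster name:

[source,terminal]
----
$ rosa edit cluster --cluster=mycluster --private
----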
