Merged
2 changes: 1 addition & 1 deletion hosted_control_planes/hcp-deploy/hcp-deploy-ibm-power.adoc
Original file line number Diff line number Diff line change
@@ -6,7 +6,7 @@ include::_attributes/common-attributes.adoc[]

toc::[]

You can deploy hosted control planes by configuring a cluster to function as a hosting cluster. The hosting cluster is an {product-title} cluster where the control planes are hosted. The hosting cluster is also known as the _management_ cluster.
You can deploy {hcp} by configuring a cluster to function as a hosting cluster. The hosting cluster is an {product-title} cluster where the control planes are hosted. The hosting cluster is also known as the _management_ cluster.

[NOTE]
====
4 changes: 2 additions & 2 deletions hosted_control_planes/hcp-disconnected/hcp-deploy-dc-bm.adoc
@@ -6,7 +6,7 @@ include::_attributes/common-attributes.adoc[]

toc::[]

When you provision hosted control planes on bare metal, you use the Agent platform. The Agent platform and {mce} work together to enable disconnected deployments. The Agent platform uses the central infrastructure management service to add worker nodes to a hosted cluster. For an introduction to the central infrastructure management service, see link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html/clusters/cluster_mce_overview#enable-cim[Enabling the central infrastructure management service].
When you provision {hcp} on bare metal, you use the Agent platform. The Agent platform and {mce} work together to enable disconnected deployments. The Agent platform uses the central infrastructure management service to add worker nodes to a hosted cluster. For an introduction to the central infrastructure management service, see link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html/clusters/cluster_mce_overview#enable-cim[Enabling the central infrastructure management service].

include::modules/hcp-dc-bm-arch.adoc[leveloffset=+1]
include::modules/hcp-dc-bm-reqs.adoc[leveloffset=+1]
@@ -51,4 +51,4 @@ include::modules/hcp-nodepool-hc.adoc[leveloffset=+2]
include::modules/hcp-dc-infraenv.adoc[leveloffset=+2]
include::modules/hcp-worker-hc.adoc[leveloffset=+2]
include::modules/hcp-bm-hosts.adoc[leveloffset=+2]
include::modules/hcp-dc-scale-np.adoc[leveloffset=+2]
include::modules/hcp-dc-scale-np.adoc[leveloffset=+2]
@@ -6,15 +6,15 @@ include::_attributes/common-attributes.adoc[]

toc::[]

Before you get started with hosted control planes for {product-title}, you must properly label nodes so that the pods of hosted clusters can be scheduled into infrastructure nodes. Node labeling is also important for the following reasons:
Before you get started with {hcp} for {product-title}, you must properly label nodes so that the pods of hosted clusters can be scheduled into infrastructure nodes. Node labeling is also important for the following reasons:

* To ensure high availability and proper workload deployment. For example, you can set the `node-role.kubernetes.io/infra` label to avoid having the control-plane workload count toward your {product-title} subscription.
* To ensure that control plane workloads are separate from other workloads in the management cluster.
//lahinson - sept. 2023 - commenting out the following lines until those levels are supported for self-managed hypershift
//* To ensure that control plane workloads are configured at one of the following multi-tenancy distribution levels:
//** Everything shared: Control planes for hosted clusters can run on any node that is designated for control planes.
//** Request serving isolation: Serving pods are requested in their own dedicated nodes.
//** Nothing shared: Every control plane has its own dedicated nodes.
//** Nothing shared: Every control plane has its own dedicated nodes.
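As a minimal sketch, a management cluster node that is reserved for infrastructure workloads might carry the label as follows. The node name is a hypothetical example:
+
[source,yaml]
----
# Hypothetical node fragment: only the node-role.kubernetes.io/infra
# label is the point of this sketch.
apiVersion: v1
kind: Node
metadata:
  name: worker-infra-1
  labels:
    node-role.kubernetes.io/infra: ""
----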

[IMPORTANT]
====
@@ -23,4 +23,4 @@ Do not use the management cluster for your workload. Workloads must not run on n

include::modules/hcp-labels-taints.adoc[leveloffset=+1]
include::modules/hcp-priority-classes.adoc[leveloffset=+1]
include::modules/hcp-virt-taints-tolerations.adoc[leveloffset=+1]
include::modules/hcp-virt-taints-tolerations.adoc[leveloffset=+1]
4 changes: 2 additions & 2 deletions hosted_control_planes/hcp-prepare/hcp-sizing-guidance.adoc
@@ -6,7 +6,7 @@ include::_attributes/common-attributes.adoc[]

toc::[]

Many factors, including hosted cluster workload and worker node count, affect how many hosted control planes can fit within a certain number of worker nodes. Use this sizing guide to help with hosted cluster capacity planning. This guidance assumes a highly available {hcp} topology. The load-based sizing examples were measured on a bare-metal cluster. Cloud-based instances might have different limiting factors, such as memory size.
Many factors, including hosted cluster workload and worker node count, affect how many {hcp} can fit within a certain number of worker nodes. Use this sizing guide to help with hosted cluster capacity planning. This guidance assumes a highly available {hcp} topology. The load-based sizing examples were measured on a bare-metal cluster. Cloud-based instances might have different limiting factors, such as memory size.

You can override the following resource utilization sizing measurements and disable the metric service monitoring.

@@ -38,4 +38,4 @@ include::modules/hcp-shared-infra.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources

* xref:../../hosted_control_planes/hcp-prepare/hcp-sizing-guidance.adoc[Sizing guidance for {hcp}]
* xref:../../hosted_control_planes/hcp-prepare/hcp-sizing-guidance.adoc[Sizing guidance for {hcp}]
4 changes: 2 additions & 2 deletions hosted_control_planes/hcp_high_availability/about-hcp-ha.adoc
@@ -1,12 +1,12 @@
:_mod-docs-content-type: ASSEMBLY
[id="about-hcp-ha"]
= About high availability for hosted control planes
include::_attributes/common-attributes.adoc[]
= About high availability for {hcp}
:context: about-hcp-ha

toc::[]

You can maintain high availability (HA) of hosted control planes by implementing the following actions:
You can maintain high availability (HA) of {hcp} by implementing the following actions:

* Recover etcd members for a hosted cluster.
* Back up and restore etcd for a hosted cluster.
@@ -26,7 +26,7 @@ You must meet the following prerequisites on the management cluster:
* You have access to the {oadp-short} subscription through a catalog source.
* You have access to a cloud storage provider that is compatible with {oadp-short}, such as S3, {azure-full}, {gcp-full}, or MinIO.
* In a disconnected environment, you have access to a self-hosted storage provider, for example link:https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/[{odf-full}] or link:https://min.io/[MinIO], that is compatible with {oadp-short}.
* Your hosted control planes pods are up and running.
* Your {hcp} pods are up and running.

[id="prepare-aws-oadp_{context}"]
== Preparing {aws-short} to use {oadp-short}
12 changes: 6 additions & 6 deletions modules/hcp-enable-manual-addon.adoc
@@ -5,19 +5,19 @@
[id="hcp-enable-manual-addon_{context}"]
= Manually enabling the hypershift-addon managed cluster add-on for local-cluster

Enabling the hosted control planes feature automatically enables the `hypershift-addon` managed cluster add-on. If you need to enable the `hypershift-addon` managed cluster add-on manually, complete the following steps to use the `hypershift-addon` to install the HyperShift Operator on `local-cluster`.
Enabling the {hcp} feature automatically enables the `hypershift-addon` managed cluster add-on. If you need to enable the `hypershift-addon` managed cluster add-on manually, complete the following steps to use the `hypershift-addon` to install the HyperShift Operator on `local-cluster`.

.Procedure

. Create the `ManagedClusterAddon` HyperShift add-on by creating a file that resembles the following example:
. Create the `ManagedClusterAddOn` add-on named `hypershift-addon` by creating a file that resembles the following example:
+
[source,yaml]
----
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
name: hypershift-addon
namespace: local-cluster
namespace: local-cluster
spec:
installNamespace: open-cluster-management-agent-addon
----
@@ -29,9 +29,9 @@ spec:
$ oc apply -f <filename>
----
+
Replace `filename` with the name of the file that you created.
Replace `<filename>` with the name of the file that you created.

. Confirm that the `hypershift-addon` is installed by running the following command:
. Confirm that the `hypershift-addon` managed cluster add-on is installed by running the following command:
+
[source,terminal]
----
@@ -46,4 +46,4 @@ NAME AVAILABLE DEGRADED PROGRESSING
hypershift-addon True
----

Your HyperShift add-on is installed and the hosting cluster is available to create and manage hosted clusters.
Your `hypershift-addon` managed cluster add-on is installed and the hosting cluster is available to create and manage hosted clusters.
4 changes: 2 additions & 2 deletions modules/hcp-ibm-z-dns.adoc
@@ -37,12 +37,12 @@ api-int IN A 1xx.2x.2xx.1xx
;
;EOF
----
<1> The record refers to the IP address of the API load balancer that handles ingress and egress traffic for hosted control planes.
<1> The record refers to the IP address of the API load balancer that handles ingress and egress traffic for {hcp}.

For {ibm-title} z/VM, add IP addresses that correspond to the IP address of the agent.

[source,terminal]
----
compute-0 IN A 1xx.2x.2xx.1yy
compute-1 IN A 1xx.2x.2xx.1yy
----
----
6 changes: 3 additions & 3 deletions modules/hcp-labels-taints.adoc
@@ -6,7 +6,7 @@
[id="hcp-labels-taints_{context}"]
= Labeling management cluster nodes

Proper node labeling is a prerequisite to deploying hosted control planes.
Proper node labeling is a prerequisite to deploying {hcp}.

As a management cluster administrator, you use the following labels and taints in management cluster nodes to schedule a control plane workload:

@@ -30,7 +30,7 @@ $ oc label node/worker-2a node/worker-2b topology.kubernetes.io/zone=rack2
Pods for a hosted cluster have tolerations, and the scheduler uses affinity rules to schedule them. Pods tolerate the `control-plane` and `cluster` taints. The scheduler prioritizes scheduling pods onto nodes that are labeled with `hypershift.openshift.io/control-plane` and `hypershift.openshift.io/cluster: ${HostedControlPlane Namespace}`.

For the `ControllerAvailabilityPolicy` option, use `HighlyAvailable`, which is the default value that the hosted control planes command line interface, `hcp`, deploys. When you use that option, you can schedule pods for each deployment within a hosted cluster across different failure domains by setting `topology.kubernetes.io/zone` as the topology key. Control planes that are not highly available are not supported.
For the `ControllerAvailabilityPolicy` option, use `HighlyAvailable`, which is the default value that the {hcp} command-line interface, `hcp`, deploys. When you use that option, you can schedule pods for each deployment within a hosted cluster across different failure domains by setting `topology.kubernetes.io/zone` as the topology key. Control planes that are not highly available are not supported.
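As a sketch, and assuming the `hypershift.openshift.io/v1beta1` API version, the policy appears in the `HostedCluster` specification as follows. The cluster name and namespace are hypothetical:

[source,yaml]
----
# Hypothetical HostedCluster fragment: only the
# controllerAvailabilityPolicy field is the point of this sketch.
apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
metadata:
  name: example-hosted
  namespace: clusters
spec:
  controllerAvailabilityPolicy: HighlyAvailable
----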

.Procedure

@@ -43,4 +43,4 @@ To enable a hosted cluster to require its pods to be scheduled into infrastructu
role.kubernetes.io/infra: ""
----

This way, hosted control planes for each hosted cluster are eligible infrastructure node workloads, and you do not need to entitle the underlying {product-title} nodes.
This way, {hcp} for each hosted cluster are eligible to run as infrastructure node workloads, and you do not need to entitle the underlying {product-title} nodes.
2 changes: 1 addition & 1 deletion modules/hcp-managed-aws-infra-mgmt.adoc
@@ -6,7 +6,7 @@
[id="hcp-managed-aws-infra-mgmt_{context}"]
= Infrastructure requirements for a management {aws-short} account

When your infrastructure is managed by hosted control planes in a management AWS account, the infrastructure requirements differ depending on whether your clusters are public, private, or a combination.
When your infrastructure is managed by {hcp} in a management AWS account, the infrastructure requirements differ depending on whether your clusters are public, private, or a combination.

For accounts with public clusters, the infrastructure requirements are as follows:

2 changes: 1 addition & 1 deletion modules/hcp-mce-acm-relationship-intro.adoc
@@ -20,7 +20,7 @@ You can use the {mce-short} with {product-title} as a standalone cluster manager
A management cluster is also known as the hosting cluster.
====

You can deploy {product-title} clusters by using two different control plane configurations: standalone or hosted control planes. The standalone configuration uses dedicated virtual machines or physical machines to host the control plane. With {hcp} for {product-title}, you create control planes as pods on a management cluster without the need for dedicated virtual or physical machines for each control plane.
You can deploy {product-title} clusters by using two different control plane configurations: standalone or {hcp}. The standalone configuration uses dedicated virtual machines or physical machines to host the control plane. With {hcp} for {product-title}, you create control planes as pods on a management cluster without the need for dedicated virtual or physical machines for each control plane.

.{rh-rhacm} and the {mce-short} introduction diagram
image::rhacm-flow.png[{rh-rhacm} and the {mce-short} introduction diagram]
2 changes: 1 addition & 1 deletion modules/hcp-mgmt-component-loss-impact.adoc
@@ -10,7 +10,7 @@ If the management cluster component fails, your workload remains unaffected. In

The following table covers the impact of a failed management cluster component on the control plane and the data plane. However, the table does not cover all scenarios for the management cluster component failures.

.Impact of the failed component on hosted control planes
.Impact of the failed component on {hcp}
[cols="1,1,1",options="header"]
|===
|Name of the failed component |Hosted control plane API status |Hosted cluster data plane status
2 changes: 1 addition & 1 deletion modules/hcp-pod-limits.adoc
@@ -7,4 +7,4 @@

The `maxPods` setting for each node affects how many hosted clusters can fit in a control-plane node. It is important to note the `maxPods` value on all control-plane nodes. Plan for about 75 pods for each highly available hosted control plane.

For bare-metal nodes, the default `maxPods` setting of 250 is likely to be a limiting factor because roughly three hosted control planes fit for each node given the pod requirements, even if the machine has plenty of resources to spare. Setting the `maxPods` value to 500 by configuring the `KubeletConfig` value allows for greater hosted control plane density, which can help you take advantage of additional compute resources.
For bare-metal nodes, the default `maxPods` setting of 250 is likely to be a limiting factor because roughly three {hcp} fit for each node given the pod requirements, even if the machine has plenty of resources to spare. Setting the `maxPods` value to 500 by configuring the `KubeletConfig` value allows for greater hosted control plane density, which can help you take advantage of additional compute resources.
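
A minimal sketch of such a `KubeletConfig` follows. The `machineConfigPoolSelector` label is an assumption; it must match the machine config pool that contains your hosting nodes:

[source,yaml]
----
# Hypothetical KubeletConfig that raises maxPods to 500.
# The machineConfigPoolSelector label is an assumption; adjust it to
# match the pool for your control-plane (hosting) nodes.
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
  kubeletConfig:
    maxPods: 500
----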
4 changes: 2 additions & 2 deletions modules/hcp-resource-limit.adoc
@@ -5,6 +5,6 @@
[id="hcp-resource-limit_{context}"]
= Request-based resource limit

The maximum number of hosted control planes that the cluster can host is calculated based on the hosted control plane CPU and memory requests from the pods.
The maximum number of {hcp} that the cluster can host is calculated based on the hosted control plane CPU and memory requests from the pods.

A highly available hosted control plane consists of 78 pods that request 5 vCPUs and 18 GB memory. These baseline numbers are compared to the cluster worker node resource capacities to estimate the maximum number of hosted control planes.
A highly available hosted control plane consists of 78 pods that request 5 vCPUs and 18 GB memory. These baseline numbers are compared to the cluster worker node resource capacities to estimate the maximum number of {hcp}.
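
A rough sketch of the estimate under those baseline numbers, ignoring system reservations and per-node bin-packing, is:

[latexmath]
++++
N_{\max} = \min\!\left( \left\lfloor \frac{C_{\mathrm{cpu}}}{5} \right\rfloor,\ \left\lfloor \frac{C_{\mathrm{mem}}}{18} \right\rfloor \right)
++++

where latexmath:[C_{\mathrm{cpu}}] is the total schedulable worker vCPU capacity and latexmath:[C_{\mathrm{mem}}] is the total schedulable worker memory in GB. For example, three workers with 16 vCPUs and 64 GB of memory each give latexmath:[\min(\lfloor 48/5 \rfloor, \lfloor 192/18 \rfloor) = \min(9, 10) = 9] hosted control planes.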
4 changes: 2 additions & 2 deletions modules/hosted-control-planes-version-support.adoc
@@ -64,7 +64,7 @@ You can use the `hypershift.openshift.io` API resources, such as, `HostedCluster

The API version policy generally aligns with the policy for link:https://kubernetes.io/docs/reference/using-api/#api-versioning[Kubernetes API versioning].

Updates for {hcp} involve updating the hosted cluster and the node pools. For more information, see "Updates for hosted control planes".
Updates for {hcp} involve updating the hosted cluster and the node pools. For more information, see "Updates for {hcp}".

[id="hcp-versioning-cpo_{context}"]
== Control Plane Operator
@@ -73,4 +73,4 @@ The Control Plane Operator is released as part of each {product-title} payload r

* amd64
* arm64
* multi-arch
* multi-arch
4 changes: 2 additions & 2 deletions modules/hosted-restart-hcp-components.adoc
@@ -6,7 +6,7 @@
[id="hosted-restart-hcp-components_{context}"]
= Restarting hosted control plane components

If you are an administrator for hosted control planes, you can use the `hypershift.openshift.io/restart-date` annotation to restart all control plane components for a particular `HostedCluster` resource. For example, you might need to restart control plane components for certificate rotation.
If you are an administrator for {hcp}, you can use the `hypershift.openshift.io/restart-date` annotation to restart all control plane components for a particular `HostedCluster` resource. For example, you might need to restart control plane components for certificate rotation.
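
For example, the annotation might be set on the `HostedCluster` resource as in the following sketch. The name, namespace, and timestamp are placeholders; setting the value to any new string triggers the restart:

[source,yaml]
----
# Hypothetical HostedCluster fragment: updating the restart-date
# annotation value restarts all control plane components.
apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
metadata:
  name: example-hosted
  namespace: clusters
  annotations:
    hypershift.openshift.io/restart-date: "2025-01-01T00:00:00Z"
----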

.Procedure

@@ -50,4 +50,4 @@ The following components are restarted:
* openshift-oauth-apiserver
* packageserver
* redhat-marketplace-catalog
* redhat-operators-catalog
* redhat-operators-catalog