== Creating infrastructure machine sets for production environments
In a production deployment, it is recommended that you deploy at least three machine sets to hold infrastructure components. Both OpenShift Logging and {ProductName} deploy Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. A configuration like this requires three different machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability.
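For reference, a machine set for a single availability zone follows the general shape below. This is a trimmed sketch: `<infrastructure_id>` and `<zone>` are placeholders, and the cloud-specific `providerSpec` stanza is elided because its fields differ for each provider.

[source,yaml]
----
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <infrastructure_id>-infra-<zone> # one machine set for each availability zone
  namespace: openshift-machine-api
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id>
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone>
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: infra
        machine.openshift.io/cluster-api-machine-type: infra
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone>
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/infra: "" # labels the resulting nodes as infrastructure nodes
      providerSpec: {} # cloud-specific settings (zone, instance type, image) go here
----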
modules/machine-api-overview.adoc
Cluster autoscaler:: This resource is based on the upstream cluster autoscaler project.
Machine health check:: The `MachineHealthCheck` resource detects when a machine is unhealthy, deletes it, and, on supported platforms, makes a new machine.
In {product-title} version 3.11, you could not roll out a multi-zone architecture easily because the cluster did not manage machine provisioning. Beginning with {product-title} version 4.1, this process is easier. Each machine set is scoped to a single zone, so the installation program distributes machine sets across availability zones on your behalf. Because your compute is dynamic, if a zone fails, you always have another zone into which you can rebalance your machines. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. The autoscaler provides best-effort balancing over the life of a cluster.
modules/machine-health-checks-resource.adoc
The `matchLabels` are examples only; you must map your machine groups based on your specific needs.
Short-circuiting ensures that machine health checks remediate machines only when the cluster is healthy.
Short-circuiting is configured through the `maxUnhealthy` field in the `MachineHealthCheck` resource.
If the user defines a value for the `maxUnhealthy` field, the `MachineHealthCheck` compares that value with the number of machines within its target pool that it has determined to be unhealthy before it remediates any machines. Remediation is not performed if the number of unhealthy machines exceeds the `maxUnhealthy` limit.
[IMPORTANT]
====
If `maxUnhealthy` is not set, the value defaults to `100%` and the machines are remediated regardless of the state of the cluster.
====
The appropriate `maxUnhealthy` value depends on the scale of the cluster you deploy and how many machines the `MachineHealthCheck` covers. For example, you can use the `maxUnhealthy` value to cover multiple machine sets across multiple availability zones so that if you lose an entire zone, your `maxUnhealthy` setting prevents further remediation within the cluster. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability.
The `maxUnhealthy` field can be set as either an integer or percentage.
There are different remediation implementations depending on the `maxUnhealthy` value.
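For example, a `MachineHealthCheck` resource with a percentage-based `maxUnhealthy` value follows this general shape. This is a sketch: the selector label is a placeholder that you must map to your own machine groups, and the condition timeouts are illustrative values.

[source,yaml]
----
apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: example
  namespace: openshift-machine-api
spec:
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> # placeholder
  unhealthyConditions: # node conditions that mark a machine as unhealthy
  - type: "Ready"
    status: "False"
    timeout: "300s"
  - type: "Ready"
    status: "Unknown"
    timeout: "300s"
  maxUnhealthy: "40%" # can also be an integer, for example 2
  nodeStartupTimeout: "10m"
----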
=== Setting maxUnhealthy by using an absolute value
If `maxUnhealthy` is set to `2`:

* Remediation is performed if 2 or fewer nodes are unhealthy.
* Remediation is not performed if 3 or more nodes are unhealthy.
These values are independent of how many machines are being checked by the machine health check.
=== Setting maxUnhealthy by using percentages
If `maxUnhealthy` is set to `40%` and there are 25 machines being checked:

* Remediation is performed if 10 or fewer nodes are unhealthy.
* Remediation is not performed if 11 or more nodes are unhealthy.
Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
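[source,terminal]
----
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
----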
modules/migration-known-issues.adoc
* If you perform direct volume migration with nodes that are in different availability zones or availability sets, the migration might fail because the migrated pods cannot access the PVC. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1947487[*BZ#1947487*])
_Pod affinity_ and _pod anti-affinity_ allow you to constrain which nodes your pod is eligible to be scheduled on based on the key/value labels on other pods.
* Pod affinity can tell the scheduler to locate a new pod on the same node as other pods if the label selector on the new pod matches the label on the current pod.
* Pod anti-affinity can prevent the scheduler from locating a new pod on the same node as pods with the same labels if the label selector on the new pod matches the label on the current pod.
For example, using affinity rules, you could spread or pack pods within a service or relative to pods in other services. Anti-affinity rules allow you to prevent pods of a particular service from scheduling on the same nodes as pods of another service that are known to interfere with the performance of the pods of the first service. Or, you could spread the pods of a service across nodes, availability zones, or availability sets to reduce correlated failures.
There are two types of pod affinity rules: _required_ and _preferred_.
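For example, a required anti-affinity rule that spreads pods of one service across availability zones follows this general shape; the `app: web` label, pod name, and image are placeholders.

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: web-1
  labels:
    app: web # placeholder label shared by the service's pods
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution: # hard requirement
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - web
        topologyKey: topology.kubernetes.io/zone # keep matching pods in different zones
  containers:
  - name: web
    image: <image> # placeholder
----

Changing `requiredDuringSchedulingIgnoredDuringExecution` to `preferredDuringSchedulingIgnoredDuringExecution`, with a `weight` on each term, makes the rule a preference instead of a hard requirement.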
You can create a machine set whose machines host only infrastructure components, such as the default router, the integrated container image registry, and components for cluster metrics and monitoring. These infrastructure machines are not counted toward the total number of subscriptions that are required to run the environment.
In a production deployment, it is recommended that you deploy at least three machine sets to hold infrastructure components. Both OpenShift Logging and {ProductName} deploy Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. A configuration like this requires three different machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability.
For information on infrastructure nodes and which components can run on infrastructure nodes, see xref:../machine_management/creating-infrastructure-machinesets.adoc#creating-infrastructure-machinesets[Creating infrastructure machine sets].