You can use infrastructure machine sets to create machines that host only infrastructure components, such as the default router, the integrated container image registry, and the components for cluster metrics and monitoring. These infrastructure machines are not counted toward the total number of subscriptions that are required to run the environment.
In a production deployment, it is recommended that you deploy at least three machine sets to hold infrastructure components. Both OpenShift Logging and {SMProductName} deploy Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. This configuration requires three different machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability.
For information about infrastructure nodes and which components can run on infrastructure nodes, see the "Red Hat OpenShift control plane and infrastructure nodes" section in the link:https://www.redhat.com/en/resources/openshift-subscription-sizing-guide[OpenShift sizing and subscription guide for enterprise Kubernetes] document.
To create an infrastructure node, you can xref:../machine_management/creating-infrastructure-machinesets.adoc#machineset-creating_creating-infrastructure-machinesets[use a machine set], xref:../machine_management/creating-infrastructure-machinesets.adoc#creating-an-infra-node_creating-infrastructure-machinesets[label the node], or xref:../machine_management/creating-infrastructure-machinesets.adoc#creating-infra-machines_creating-infrastructure-machinesets[use a machine config pool].
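For example, labeling an existing node as an infrastructure node directly might look like the following sketch, where `<node_name>` is a placeholder for one of your worker nodes:

[source,terminal]
----
$ oc label node <node_name> node-role.kubernetes.io/infra=
----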
== Moving resources to infrastructure machine sets
Some of the infrastructure resources are deployed in your cluster by default. You can move them to the infrastructure machine sets that you created by adding the infrastructure node selector, as shown:

[source,yaml]
----
spec:
  nodePlacement: <1>
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/infra: ""
    tolerations:
    - effect: NoSchedule
      key: node-role.kubernetes.io/infra
      value: reserved
    - effect: NoExecute
      key: node-role.kubernetes.io/infra
      value: reserved
----
<1> Add a `nodeSelector` parameter with the appropriate value to the component you want to move. You can use a `nodeSelector` in the format shown or use `<key>: <value>` pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration.
Applying a specific node selector to all infrastructure components causes {product-title} to xref:../machine_management/creating-infrastructure-machinesets.adoc#moving-resources-to-infrastructure-machinesets[schedule those workloads on nodes with that label].
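If you also want to keep regular workloads off the infrastructure nodes, you can taint them so that only components carrying the matching toleration are scheduled there. A sketch using the `reserved` value from the tolerations shown above:

[source,terminal]
----
$ oc adm taint nodes <node_name> node-role.kubernetes.io/infra=reserved:NoSchedule
----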

// Module: modules/infrastructure-moving-logging.adoc

[source,yaml]
----
spec:
...
      nodeCount: 3
      nodeSelector: <1>
        node-role.kubernetes.io/infra: ''
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        value: reserved
      - effect: NoExecute
        key: node-role.kubernetes.io/infra
        value: reserved
      redundancyPolicy: SingleRedundancy
      resources:
        limits:
...
    kibana:
      nodeSelector: <1>
        node-role.kubernetes.io/infra: ''
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        value: reserved
      - effect: NoExecute
        key: node-role.kubernetes.io/infra
        value: reserved
      proxy:
        resources: null
      replicas: 1
...
----
<1> Add a `nodeSelector` parameter with the appropriate value to the component you want to move. You can use a `nodeSelector` in the format shown or use `<key>: <value>` pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration.
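To apply a change like this, you typically edit the `ClusterLogging` custom resource in place. A sketch, assuming the default resource name `instance` in the `openshift-logging` namespace:

[source,terminal]
----
$ oc edit ClusterLogging instance -n openshift-logging
----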

// Module: modules/infrastructure-moving-monitoring.adoc

= Moving the monitoring solution

The monitoring stack includes multiple components, including Prometheus, Thanos Querier, and Alertmanager. The Cluster Monitoring Operator manages this stack. To redeploy the monitoring stack to infrastructure nodes, you can create and apply a custom config map.

.Procedure

. Edit the `cluster-monitoring-config` config map and change the `nodeSelector` to use the `infra` label:
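+
A minimal sketch of what that edit might look like, assuming you move only the Prometheus Operator; the other monitoring components take the same `nodeSelector` stanza under their own top-level keys in `config.yaml`:
+
[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusOperator:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
----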
Applying this config map forces the components of the monitoring stack to redeploy to infrastructure nodes.

<1> Add a `nodeSelector` parameter with the appropriate value to the component you want to move. You can use a `nodeSelector` in the format shown or use `<key>: <value>` pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration.

. Watch the monitoring pods move to the new machines:
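+
A sketch, assuming the monitoring stack runs in the `openshift-monitoring` namespace:
+
[source,terminal]
----
$ watch 'oc get pod -n openshift-monitoring -o wide'
----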
. Modify the `spec` section of the object to resemble the following YAML:
+
[source,yaml]
----
spec:
...
        weight: 100
  logLevel: Normal
  managementState: Managed
  nodeSelector: <1>
    node-role.kubernetes.io/infra: ""
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/infra
    value: reserved
  - effect: NoExecute
    key: node-role.kubernetes.io/infra
    value: reserved
----
<1> Add a `nodeSelector` parameter with the appropriate value to the component you want to move. You can use a `nodeSelector` in the format shown or use `<key>: <value>` pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration.
. Verify the registry pod has been moved to the infrastructure node.
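+
A sketch of one way to check, assuming the registry runs in the `openshift-image-registry` namespace:
+
[source,terminal]
----
$ oc get pods -o wide -n openshift-image-registry
----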
Add the `nodeSelector` stanza that references the `infra` label to the `spec` section, as shown:

[source,yaml]
----
spec:
  nodePlacement: <1>
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/infra: ""
    tolerations:
    - effect: NoSchedule
      key: node-role.kubernetes.io/infra
      value: reserved
    - effect: NoExecute
      key: node-role.kubernetes.io/infra
      value: reserved
----
<1> Add a `nodeSelector` parameter with the appropriate value to the component you want to move. You can use a `nodeSelector` in the format shown or use `<key>: <value>` pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration.
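The stanza above typically goes into the `IngressController` custom resource. A sketch of opening it for editing, assuming the `default` ingress controller:

[source,terminal]
----
$ oc edit ingresscontroller default -n openshift-ingress-operator
----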
. Confirm that the router pod is running on the `infra` node.
.. View the list of router pods and note the node name of the running pod:
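+
A sketch, assuming the default router deploys to the `openshift-ingress` namespace:
+
[source,terminal]
----
$ oc get pod -n openshift-ingress -o wide
----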
// File: post_installation_configuration/cluster-tasks.adoc

For information on infrastructure nodes and which components can run on infrastructure nodes, see xref:../machine_management/creating-infrastructure-machinesets.adoc#creating-infrastructure-machinesets[Creating infrastructure machine sets].

To create an infrastructure node, you can xref:../post_installation_configuration/cluster-tasks.adoc#machineset-creating_post-install-cluster-tasks[use a machine set], xref:../post_installation_configuration/cluster-tasks.adoc#creating-an-infra-node_post-install-cluster-tasks[assign a label to the nodes], or xref:../post_installation_configuration/cluster-tasks.adoc#creating-infra-machines_post-install-cluster-tasks[use a machine config pool].

For sample machine sets that you can use with these procedures, see xref:../machine_management/creating-infrastructure-machinesets.adoc#creating-infrastructure-machinesets-clouds[Creating machine sets for different clouds].

Applying a specific node selector to all infrastructure components causes {product-title} to xref:../post_installation_configuration/cluster-tasks.adoc#moving-resources-to-infrastructure-machinesets[schedule those workloads on nodes with that label].