
Commit fd4e152

Hosted control planes: Review user replaceable values
1 parent 485f36f commit fd4e152

7 files changed (+56 −98 lines)

modules/backup-etcd-hosted-cluster.adoc

Lines changed: 11 additions & 5 deletions
@@ -19,22 +19,28 @@ This procedure requires API downtime.
 +
 [source,terminal]
 ----
-$ oc patch -n clusters hostedclusters/${CLUSTER_NAME} -p '{"spec":{"pausedUntil":"'${PAUSED_UNTIL}'"}}' --type=merge
+$ oc patch -n clusters hostedclusters/<hosted_cluster_name> -p '{"spec":{"pausedUntil":"true"}}' --type=merge
 ----
 
 . Stop all etcd-writer deployments by entering this command:
 +
 [source,terminal]
 ----
-$ oc scale deployment -n ${HOSTED_CLUSTER_NAMESPACE} --replicas=0 kube-apiserver openshift-apiserver openshift-oauth-apiserver
+$ oc scale deployment -n <hosted_cluster_namespace> --replicas=0 kube-apiserver openshift-apiserver openshift-oauth-apiserver
 ----
 
-. Take an etcd snapshot by using the `exec` command in each etcd container:
+. To take an etcd snapshot, use the `exec` command in each etcd container by running the following command:
 +
 [source,terminal]
 ----
-$ oc exec -it etcd-0 -n ${HOSTED_CLUSTER_NAMESPACE} -- env ETCDCTL_API=3 /usr/bin/etcdctl --cacert /etc/etcd/tls/client/etcd-client-ca.crt --cert /etc/etcd/tls/client/etcd-client.crt --key /etc/etcd/tls/client/etcd-client.key --endpoints=localhost:2379 snapshot save /var/lib/data/snapshot.db
-$ oc exec -it etcd-0 -n ${HOSTED_CLUSTER_NAMESPACE} -- env ETCDCTL_API=3 /usr/bin/etcdctl -w table snapshot status /var/lib/data/snapshot.db
+$ oc exec -it <etcd_pod_name> -n <hosted_cluster_namespace> -- env ETCDCTL_API=3 /usr/bin/etcdctl --cacert /etc/etcd/tls/client/etcd-client-ca.crt --cert /etc/etcd/tls/client/etcd-client.crt --key /etc/etcd/tls/client/etcd-client.key --endpoints=localhost:2379 snapshot save /var/lib/data/snapshot.db
+----
+
+. To check the snapshot status, use the `exec` command in each etcd container by running the following command:
++
+[source,terminal]
+----
+$ oc exec -it <etcd_pod_name> -n <hosted_cluster_namespace> -- env ETCDCTL_API=3 /usr/bin/etcdctl -w table snapshot status /var/lib/data/snapshot.db
 ----
 
 . Copy the snapshot data to a location where you can retrieve it later, such as an S3 bucket, as shown in the following example.

modules/hosted-cluster-etcd-status.adoc

Lines changed: 2 additions & 2 deletions
@@ -14,7 +14,7 @@ To check the status of your hosted cluster, complete the following steps.
 +
 [source,terminal]
 ----
-$ oc rsh -n <control_plane_namespace> -c etcd etcd-0
+$ oc rsh -n <control_plane_namespace> -c etcd <etcd_pod_name>
 ----
 
 . Set up the etcdctl environment by entering the following commands:
@@ -49,4 +49,4 @@ sh-4.4$ export ETCDCTL_ENDPOINTS=https://etcd-client:2379
 [source,terminal]
 ----
 sh-4.4$ etcdctl endpoint health --cluster -w table
-----
+----
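
For reference, a filled-in version of the commands above might look like the following. The control plane namespace `clusters-example` and the pod name `etcd-0` are assumptions for illustration, and the etcdctl environment variables are exported as shown earlier in the module:

[source,terminal]
----
$ oc rsh -n clusters-example -c etcd etcd-0
sh-4.4$ etcdctl endpoint health --cluster -w table
----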

modules/hosted-cluster-single-node-recovery.adoc

Lines changed: 3 additions & 3 deletions
@@ -32,14 +32,14 @@ etcd-2 1/2 CrashLoopBackOff 1 (5s ago) 64m
 +
 [source,terminal]
 ----
-$ oc delete pvc/data-etcd-2 pod/etcd-2 --wait=false
+$ oc delete pvc/<pvc_name> pod/<etcd_pod_name> --wait=false
 ----
 
 . When the pod restarts, verify that the etcd member is added back to the etcd cluster and is correctly functioning by entering the following command:
 +
 [source,terminal]
 ----
-$ oc get pods -l app=etcd -n $CONTROL_PLANE_NAMESPACE
+$ oc get pods -l app=etcd -n <control_plane_namespace>
 ----
 +
 .Example output
@@ -49,4 +49,4 @@ NAME READY STATUS RESTARTS AGE
 etcd-0 2/2 Running 0 67m
 etcd-1 2/2 Running 0 48m
 etcd-2 2/2 Running 0 2m2s
-----
+----
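
As an illustration, using the failing member from the example output (`etcd-2`, backed by the `data-etcd-2` persistent volume claim) and an assumed control plane namespace of `clusters-example`, the recovery steps might look like this:

[source,terminal]
----
$ oc delete pvc/data-etcd-2 pod/etcd-2 --wait=false -n clusters-example
$ oc get pods -l app=etcd -n clusters-example
----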

modules/hosted-control-planes-monitoring-dashboard.adoc

Lines changed: 2 additions & 2 deletions
@@ -35,7 +35,7 @@ data:
 ----
 
 +
-When monitoring dashboards are enabled, for each hosted cluster that the HyperShift Operator manages, the Operator creates a config map named `cp-[NAMESPACE]-[NAME]` in the `openshift-config-managed` namespace, where `NAMESPACE` is the namespace of the hosted cluster and `NAME` is the name of the hosted cluster. As a result, a new dashboard is added in the administrative console of the management cluster.
+When monitoring dashboards are enabled, for each hosted cluster that the HyperShift Operator manages, the Operator creates a config map named `cp-<hosted_cluster_namespace>-<hosted_cluster_name>` in the `openshift-config-managed` namespace, where `<hosted_cluster_namespace>` is the namespace of the hosted cluster and `<hosted_cluster_name>` is the name of the hosted cluster. As a result, a new dashboard is added in the administrative console of the management cluster.
 
 . To view the dashboard, log in to the management cluster's console and go to the dashboard for the hosted cluster by clicking *Observe -> Dashboards*.
 
@@ -44,7 +44,7 @@ When monitoring dashboards are enabled, for each hosted cluster that the HyperSh
 [#hosted-control-planes-customize-dashboards]
 == Dashboard customization
 
-To generate dashboards for each hosted cluster, the HyperShift Operator uses a template that is stored in the `monitoring-dashboard-template` config map in the operator namespace (`hypershift`). This template contains a set of Grafana panels that contain the metrics for the dashboard. You can edit the content of the config map to customize the dashboards.
+To generate dashboards for each hosted cluster, the HyperShift Operator uses a template that is stored in the `monitoring-dashboard-template` config map in the Operator namespace (`hypershift`). This template contains a set of Grafana panels that contain the metrics for the dashboard. You can edit the content of the config map to customize the dashboards.
 
 When a dashboard is generated, the following strings are replaced with values that correspond to a specific hosted cluster:
 
modules/hosted-control-planes-pause-reconciliation.adoc

Lines changed: 7 additions & 9 deletions
@@ -10,24 +10,22 @@ If you are a cluster instance administrator, you can pause the reconciliation of
 
 .Procedure
 
-. To pause reconciliation for a hosted cluster and hosted control plane, populate the `pausedUntil` field of the `HostedCluster` resource, as shown in the following examples. In the examples, the value for `pausedUntil` is defined in an environment variable prior to the command.
+. To pause reconciliation for a hosted cluster and hosted control plane, populate the `pausedUntil` field of the `HostedCluster` resource.
 +
-** To pause the reconciliation until a specific time, specify an RFC339 timestamp:
+** To pause the reconciliation until a specific time, enter the following command:
 +
 [source,terminal]
 ----
-PAUSED_UNTIL="2022-03-03T03:28:48Z"
-oc patch -n <hosted-cluster-namespace> hostedclusters/<hosted-cluster-name> -p '{"spec":{"pausedUntil":"'${PAUSED_UNTIL}'"}}' --type=merge
+$ oc patch -n <hosted_cluster_namespace> hostedclusters/<hosted_cluster_name> -p '{"spec":{"pausedUntil":"<timestamp>"}}' --type=merge <1>
 ----
 +
-The reconciliation is paused until the specified time is passed.
+<1> Specify a timestamp in the RFC339 format, for example, `2024-03-03T03:28:48Z`. The reconciliation is paused until the specified time is passed.
 +
-** To pause the reconciliation indefinitely, pass a Boolean value of `true`:
+** To pause the reconciliation indefinitely, enter the following command:
 +
 [source,terminal]
 ----
-PAUSED_UNTIL="true"
-oc patch -n <hosted-cluster-namespace> hostedclusters/<hosted-cluster-name> -p '{"spec":{"pausedUntil":"'${PAUSED_UNTIL}'"}}' --type=merge
+$ oc patch -n <hosted_cluster_namespace> hostedclusters/<hosted_cluster_name> -p '{"spec":{"pausedUntil":"true"}}' --type=merge
 ----
 +
 The reconciliation is paused until you remove the field from the `HostedCluster` resource.
@@ -38,5 +36,5 @@ When the pause reconciliation field is populated for the `HostedCluster` resourc
 +
 [source,terminal]
 ----
-oc patch -n <hosted-cluster-namespace> hostedclusters/<hosted-cluster-name> -p '{"spec":{"pausedUntil":null}}' --type=merge
+$ oc patch -n <hosted_cluster_namespace> hostedclusters/<hosted_cluster_name> -p '{"spec":{"pausedUntil":null}}' --type=merge
 ----
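
For reference, with an assumed hosted cluster named `example` in the `clusters` namespace, pausing reconciliation until a fixed time and later resuming it might look like this:

[source,terminal]
----
$ oc patch -n clusters hostedclusters/example -p '{"spec":{"pausedUntil":"2024-03-03T03:28:48Z"}}' --type=merge
$ oc patch -n clusters hostedclusters/example -p '{"spec":{"pausedUntil":null}}' --type=merge
----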

modules/hosted-control-planes-troubleshooting.adoc

Lines changed: 24 additions & 75 deletions
@@ -37,32 +37,20 @@ Although the output does not contain any secret objects from the cluster, it can
 
 .Procedure
 
-* To gather output for troubleshooting, enter the following commands:
-+
-[source,terminal]
-----
-$ CLUSTERNAME="samplecluster"
-----
-+
-[source,terminal]
-----
-$ CLUSTERNS="clusters"
-----
-+
-[source,terminal]
-----
-$ mkdir clusterDump-${CLUSTERNS}-${CLUSTERNAME}
-----
+* To gather the output for troubleshooting, enter the following command:
 +
 [source,terminal]
 ----
 $ hypershift dump cluster \
---name ${CLUSTERNAME} \
---namespace ${CLUSTERNS} \
+--name <hosted_cluster_name> \// <1>
+--namespace <hosted_cluster_namespace> \ <2>
 --dump-guest-cluster \
---artifact-dir clusterDump-${CLUSTERNS}-${CLUSTERNAME}
+--artifact-dir clusterDump-<hosted_cluster_namespace>-<hosted_cluster_name>
 ----
 +
+<1> Specify your hosted cluster name.
+<2> Specify your hosted cluster namespace, for example, `clusters`.
++
 .Example output
 +
 [source,terminal]
@@ -77,71 +65,32 @@ The service account must have enough permissions to query all of the objects fro
 +
 If your username or service account does not have enough permissions, the output contains only the objects that you have permissions to access. During that process, you might see `forbidden` errors.
 +
-** To use impersonation by using a service account, enter the following commands. Replace values as necessary:
-+
-[source,terminal]
-----
-$ CLUSTERNAME="samplecluster"
-----
-+
-[source,terminal]
-----
-$ CLUSTERNS="clusters"
-----
-+
-[source,terminal]
-----
-$ SA="samplesa"
-----
-+
-[source,terminal]
-----
-$ SA_NAMESPACE="default"
-----
-+
-[source,terminal]
-----
-$ mkdir clusterDump-${CLUSTERNS}-${CLUSTERNAME}
-----
+** To use impersonation by using a service account, enter the following command:
 +
 [source,terminal]
 ----
 $ hypershift dump cluster \
---name ${CLUSTERNAME} \
---namespace ${CLUSTERNS} \
+--name <hosted_cluster_name> \// <1>
+--namespace <hosted_cluster_namespace> \// <2>
 --dump-guest-cluster \
---as "system:serviceaccount:${SA_NAMESPACE}:${SA}" \
---artifact-dir clusterDump-${CLUSTERNS}-${CLUSTERNAME}
+--as "system:serviceaccount:<service_account_namespace>:<service_account_name>" \ <3>
+--artifact-dir clusterDump-<hosted_cluster_namespace>-<hosted_cluster_name>
 ----
+<1> Specify your hosted cluster name.
+<2> Specify your hosted cluster namespace, for example, `clusters`.
+<3> Specify the `default` namespace and name, for example, `"system:serviceaccount:default:samplesa"`.
 
-** To use impersonation by using a username, enter the following commands. Replace values as necessary:
-+
-[source,terminal]
-----
-$ CLUSTERNAME="samplecluster"
-----
-+
-[source,terminal]
-----
-$ CLUSTERNS="clusters"
-----
-+
-[source,terminal]
-----
-$ CLUSTERUSER="cloud-admin"
-----
-+
-[source,terminal]
-----
-$ mkdir clusterDump-${CLUSTERNS}-${CLUSTERNAME}
-----
+** To use impersonation by using a username, enter the following command:
 +
 [source,terminal]
 ----
 $ hypershift dump cluster \
---name ${CLUSTERNAME} \
---namespace ${CLUSTERNS} \
+--name <hosted_cluster_name> \// <1>
+--namespace <hosted_cluster_namespace> \// <2>
 --dump-guest-cluster \
---as "${CLUSTERUSER}" \
---artifact-dir clusterDump-${CLUSTERNS}-${CLUSTERNAME}
-----
+--as "<cluster_user_name>" \ <3>
+--artifact-dir clusterDump-<hosted_cluster_namespace>-<hosted_cluster_name>
+----
+<1> Specify your hosted cluster name.
+<2> Specify your hosted cluster namespace, for example, `clusters`.
+<3> Specify your cluster user name, for example, `cloud-admin`.
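
For reference, a filled-in invocation that uses the sample values removed by this commit (`samplecluster` in the `clusters` namespace) might look like this:

[source,terminal]
----
$ hypershift dump cluster \
  --name samplecluster \
  --namespace clusters \
  --dump-guest-cluster \
  --artifact-dir clusterDump-clusters-samplecluster
----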

modules/updating-node-pools-for-hcp.adoc

Lines changed: 7 additions & 2 deletions
@@ -15,9 +15,14 @@ On hosted control planes, you update your version of {product-title} by updating
 +
 [source,terminal]
 ----
-$ oc -n NAMESPACE patch HC HCNAME --patch '{"spec":{"release":{"image": "example"}}}' --type=merge
+$ oc -n <hosted_cluster_namespace> patch hostedcluster <hosted_cluster_name> --patch '{"spec":{"release":{"image": "<image_name>"}}}' --type=merge
 ----
 
 .Verification
 
-* To verify that the new version was rolled out, check the `.status.version` value and the status conditions.
+* To verify that the new version was rolled out, check the `.status.version` and `.status.conditions` values in the `HostedCluster` custom resource (CR) by running the following command:
++
+[source,terminal]
+----
+$ oc get hostedcluster <hosted_cluster_name> -o yaml
+----
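
As an illustration, updating an assumed hosted cluster named `example` in the `clusters` namespace and then checking the rollout might look like the following. The release image pullspec here is an assumption for the example, not part of the commit:

[source,terminal]
----
$ oc -n clusters patch hostedcluster example --patch '{"spec":{"release":{"image": "quay.io/openshift-release-dev/ocp-release:4.14.10-x86_64"}}}' --type=merge
$ oc get hostedcluster example -n clusters -o yaml
----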
