Commit f9a3402

Michael Burke (mburke5678) authored and committed
Replace machine-os-content with new format base image
1 parent a2a3c1e commit f9a3402

File tree: 5 files changed (+440 -0 lines)


_topic_maps/_topic_map.yml

Lines changed: 6 additions & 0 deletions
@@ -530,6 +530,12 @@ Topics:
      File: cluster-capabilities
    - Name: Configuring additional devices in an IBM Z or LinuxONE environment
      File: ibmz-post-install
+   - Name: Red Hat Enterprise Linux CoreOS image layering
+     File: coreos-layering
+     Distros: openshift-enterprise
+   - Name: Fedora CoreOS (FCOS) image layering
+     File: coreos-layering
+     Distros: openshift-origin
  ---
  - Name: Updating clusters
    Dir: updating
modules/coreos-layering-configuring.adoc

Lines changed: 203 additions & 0 deletions

@@ -0,0 +1,203 @@
// Module included in the following assemblies:
//
// * post-installation_configuration/coreos-layering.adoc

:_content-type: PROCEDURE
[id="coreos-layering-configuring_{context}"]
= Applying a {op-system} custom layered image

You can configure {op-system-first} image layering on the nodes in specific machine config pools. The Machine Config Operator (MCO) reboots those nodes with the new custom layered image, overriding the base {op-system} image.

To apply a custom layered image to your cluster, you must have the custom layered image in a repository that your cluster can access. Then, create a `MachineConfig` object that points to the custom layered image. You need a separate `MachineConfig` object for each machine config pool that you want to configure.

[IMPORTANT]
====
When you configure a custom layered image, {product-title} no longer automatically updates any node that uses the custom layered image. You become responsible for manually updating your nodes as appropriate. If you roll back the custom layer, {product-title} again automatically updates the node. See the Additional resources section that follows for important information about updating nodes that use a custom layered image.
====
.Prerequisites

* You must create a custom layered image that is based on an {product-title} image digest, not a tag.
+
[NOTE]
====
You should use the same base {op-system} image that is installed on the rest of your cluster. Use the `oc adm release info --image-for rhel-coreos-8` command to obtain the base image that is used in your cluster.
====
+
For example, the following Containerfile creates a custom layered image from an {product-title} 4.12 image and a Hotfix package:
+
.Example Containerfile for a custom layered image
[source,dockerfile]
----
# Using a 4.12.0 image
FROM quay.io/openshift-release/ocp-release@sha256:6499bc69a0707fcad481c3cb73225b867d <1>
# Install hotfix rpm
RUN rpm-ostree override replace https://example.com/hotfixes/haproxy-1.0.16-5.el8.src.rpm && \ <2>
    rpm-ostree cleanup -m && \
    ostree container commit
----
<1> Specifies an {product-title} release image.
<2> Specifies the path to the Hotfix package.
+
[NOTE]
====
Instructions on how to create a Containerfile are beyond the scope of this documentation.
====

* You must push the custom layered image to a repository that your cluster can access.
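The digest-pinning prerequisite above lends itself to a quick automated check before you build and push. The following is a minimal sketch, not part of the documented procedure; the helper name and image values are illustrative:

```shell
#!/bin/sh
# Succeed only when an image reference is pinned by digest, not by tag.
is_digest_pinned() {
  case "$1" in
    *@sha256:*) return 0 ;;
    *)          return 1 ;;
  esac
}

# Validate the base image reference before building (illustrative value):
base="quay.io/openshift-release/ocp-release@sha256:6499bc69a0707fcad481c3cb73225b867d"
if is_digest_pinned "$base"; then
  echo "base image is digest-pinned"
else
  echo "error: use an image digest, not a tag" >&2
  exit 1
fi

# One common way to build and push the custom layered image, for example:
#   podman build -t quay.io/my-registry/custom-image .
#   podman push quay.io/my-registry/custom-image
```

Any OCI-compatible build tool works for the build and push; `podman` is shown only as an example.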
.Procedure

. Create a machine config file.

.. Create a YAML file similar to the following:
+
[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker <1>
  name: os-layer-hotfix
spec:
  osImageURL: quay.io/my-registry/custom-image@sha256:306b606615dcf8f0e5e7d87fee3 <2>
----
<1> Specifies the machine config pool to apply the custom layered image to.
<2> Specifies the path to the custom layered image in the repository.

.. Create the `MachineConfig` object:
+
[source,terminal]
----
$ oc create -f <file_name>.yaml
----
+
[IMPORTANT]
====
It is strongly recommended that you test your images outside of your production environment before rolling them out to your cluster.
====
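When you manage several machine config pools, the manifest in the step above can also be generated from a small shell template. A sketch, with an illustrative function name; the pool, name, and image values mirror the example manifest:

```shell
#!/bin/sh
# Emit a MachineConfig manifest that points a machine config pool
# at a custom layered image (sketch; names are illustrative).
layered_image_mc() {
  pool="$1"   # e.g. worker
  name="$2"   # e.g. os-layer-hotfix
  image="$3"  # digest-pinned image reference
  cat <<EOF
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: ${pool}
  name: ${name}
spec:
  osImageURL: ${image}
EOF
}

layered_image_mc worker os-layer-hotfix \
  "quay.io/my-registry/custom-image@sha256:306b606615dcf8f0e5e7d87fee3"
# Apply directly with: layered_image_mc ... | oc create -f -
```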
.Verification

You can verify that the custom layered image is applied by performing any of the following checks:

. Check that the worker machine config pool has rolled out with the new machine config:

.. Check that the new machine config is created:
+
[source,terminal]
----
$ oc get mc
----
+
.Sample output
[source,terminal]
----
NAME                                               GENERATEDBYCONTROLLER                      IGNITIONVERSION   AGE
00-master                                          5bdb57489b720096ef912f738b46330a8f577803   3.2.0             95m
00-worker                                          5bdb57489b720096ef912f738b46330a8f577803   3.2.0             95m
01-master-container-runtime                        5bdb57489b720096ef912f738b46330a8f577803   3.2.0             95m
01-master-kubelet                                  5bdb57489b720096ef912f738b46330a8f577803   3.2.0             95m
01-worker-container-runtime                        5bdb57489b720096ef912f738b46330a8f577803   3.2.0             95m
01-worker-kubelet                                  5bdb57489b720096ef912f738b46330a8f577803   3.2.0             95m
99-master-generated-registries                     5bdb57489b720096ef912f738b46330a8f577803   3.2.0             95m
99-master-ssh                                                                                 3.2.0             98m
99-worker-generated-registries                     5bdb57489b720096ef912f738b46330a8f577803   3.2.0             95m
99-worker-ssh                                                                                 3.2.0             98m
os-layer-hotfix                                                                                                 10s <1>
rendered-master-15961f1da260f7be141006404d17d39b   5bdb57489b720096ef912f738b46330a8f577803   3.2.0             95m
rendered-worker-5aff604cb1381a4fe07feaf1595a797e   5bdb57489b720096ef912f738b46330a8f577803   3.2.0             95m
rendered-worker-5de4837625b1cbc237de6b22bc0bc873   5bdb57489b720096ef912f738b46330a8f577803   3.2.0             4s <2>
----
<1> New machine config
<2> New rendered machine config
.. Check that the `osImageURL` value in the new rendered machine config points to the expected image:
+
[source,terminal]
----
$ oc describe mc rendered-worker-5de4837625b1cbc237de6b22bc0bc873
----
+
.Example output
[source,terminal]
----
Name:         rendered-worker-5de4837625b1cbc237de6b22bc0bc873
Namespace:
Labels:       <none>
Annotations:  machineconfiguration.openshift.io/generated-by-controller-version: 8276d9c1f574481043d3661a1ace1f36cd8c3b62
              machineconfiguration.openshift.io/release-image-version: 4.12.0-ec.3
API Version:  machineconfiguration.openshift.io/v1
Kind:         MachineConfig
...
Os Image URL: quay.io/my-registry/custom-image@sha256:306b606615dcf8f0e5e7d87fee3
----
.. Check that the associated machine config pool is updating with the new machine config:
+
[source,terminal]
----
$ oc get mcp
----
+
.Sample output
[source,terminal]
----
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-6faecdfa1b25c114a58cf178fbaa45e2   True      False      False      3              3                   3                     0                      39m
worker   rendered-worker-6b000dbc31aaee63c6a2d56d04cd4c1b   False     True       False      3              0                   0                     0                      39m <1>
----
<1> When the `UPDATING` field is `True`, the machine config pool is updating with the new machine config. When the field becomes `False`, the worker machine config pool has rolled out to the new machine config.
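The rollout condition described in the callout can also be checked from a script. The following sketch parses a single data line of `oc get mcp` output; the function name is illustrative and the column order is assumed to match the sample output above:

```shell
#!/bin/sh
# Given one data line of `oc get mcp` output, succeed only when the pool
# has finished rolling out (UPDATED=True and UPDATING=False).
# Columns: NAME CONFIG UPDATED UPDATING DEGRADED ...
mcp_rolled_out() {
  # Word-split the line into the function's positional parameters.
  set -- $1
  [ "$3" = "True" ] && [ "$4" = "False" ]
}

line="worker   rendered-worker-6b000dbc31aaee63c6a2d56d04cd4c1b   False   True   False   3   0   0   0   39m"
if mcp_rolled_out "$line"; then
  echo "worker pool rolled out"
else
  echo "worker pool still updating"
fi
```

On a live cluster you would feed it, for example, `oc get mcp worker --no-headers`.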
.. Check the nodes to see that scheduling on the nodes is disabled. This indicates that the change is being applied:
+
[source,terminal]
----
$ oc get nodes
----
+
.Example output
[source,terminal]
----
NAME                                         STATUS                     ROLES                  AGE   VERSION
ip-10-0-148-79.us-west-1.compute.internal    Ready                      worker                 32m   v1.25.0+3ef6ef3
ip-10-0-155-125.us-west-1.compute.internal   Ready,SchedulingDisabled   worker                 35m   v1.25.0+3ef6ef3
ip-10-0-170-47.us-west-1.compute.internal    Ready                      control-plane,master   42m   v1.25.0+3ef6ef3
ip-10-0-174-77.us-west-1.compute.internal    Ready                      control-plane,master   42m   v1.25.0+3ef6ef3
ip-10-0-211-49.us-west-1.compute.internal    Ready                      control-plane,master   42m   v1.25.0+3ef6ef3
ip-10-0-218-151.us-west-1.compute.internal   Ready                      worker                 31m   v1.25.0+3ef6ef3
----
. When the node is back in the `Ready` state, check that the node is using the custom layered image:

.. Open an `oc debug` session to the node. For example:
+
[source,terminal]
----
$ oc debug node/ip-10-0-155-125.us-west-1.compute.internal
----

.. Set `/host` as the root directory within the debug shell:
+
[source,terminal]
----
sh-4.4# chroot /host
----

.. Run the `rpm-ostree status` command to view that the custom layered image is in use:
+
[source,terminal]
----
sh-4.4# rpm-ostree status
----
+
.Example output
[source,terminal]
----
State: idle
Deployments:
* ostree-unverified-registry:quay.io/my-registry/custom-image@sha256:306b606615dcf8f0e5e7d87fee3
              Digest: sha256:306b606615dcf8f0e5e7d87fee3
----
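If you want to script this last check, you can match the deployment line of the `rpm-ostree status` output against the expected image reference. A minimal sketch; the helper name is illustrative and the output format is assumed to match the example above:

```shell
#!/bin/sh
# Succeed when `rpm-ostree status` output lists the expected image as a
# deployment (assumes the "ostree-unverified-registry:<image>" line format).
uses_image() {
  printf '%s\n' "$1" | grep -qF "ostree-unverified-registry:$2"
}

# Sample output taken from the verification step above:
status='State: idle
Deployments:
* ostree-unverified-registry:quay.io/my-registry/custom-image@sha256:306b606615dcf8f0e5e7d87fee3'

if uses_image "$status" "quay.io/my-registry/custom-image@sha256:306b606615dcf8f0e5e7d87fee3"; then
  echo "custom layered image in use"
fi
# On a live node you could capture the output first, for example:
#   status=$(oc debug node/<node> -- chroot /host rpm-ostree status 2>/dev/null)
```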

modules/coreos-layering-removing.adoc

Lines changed: 93 additions & 0 deletions
@@ -0,0 +1,93 @@
// Module included in the following assemblies:
//
// * post-installation_configuration/coreos-layering.adoc

:_content-type: PROCEDURE
[id="coreos-layering-removing_{context}"]
= Removing a {op-system} custom layered image

You can revert {op-system-first} image layering from the nodes in specific machine config pools. The Machine Config Operator (MCO) reboots those nodes with the cluster base {op-system} image, overriding the custom layered image.

To remove a {op-system} custom layered image from your cluster, you need to delete the machine config that applied the image.

.Procedure

. Delete the machine config that applied the custom layered image:
+
[source,terminal]
----
$ oc delete mc os-layer-hotfix
----
+
After deleting the machine config, the nodes reboot.

.Verification

You can verify that the custom layered image is removed by performing any of the following checks:
. Check that the worker machine config pool is updating with the previous machine config:
+
[source,terminal]
----
$ oc get mcp
----
+
.Sample output
[source,terminal]
----
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-6faecdfa1b25c114a58cf178fbaa45e2   True      False      False      3              3                   3                     0                      39m
worker   rendered-worker-6b000dbc31aaee63c6a2d56d04cd4c1b   False     True       False      3              0                   0                     0                      39m <1>
----
<1> When the `UPDATING` field is `True`, the machine config pool is updating with the previous machine config. When the field becomes `False`, the worker machine config pool has rolled out to the previous machine config.
. Check the nodes to see that scheduling on the nodes is disabled. This indicates that the change is being applied:
+
[source,terminal]
----
$ oc get nodes
----
+
.Example output
[source,terminal]
----
NAME                                         STATUS                     ROLES                  AGE   VERSION
ip-10-0-148-79.us-west-1.compute.internal    Ready                      worker                 32m   v1.25.0+3ef6ef3
ip-10-0-155-125.us-west-1.compute.internal   Ready,SchedulingDisabled   worker                 35m   v1.25.0+3ef6ef3
ip-10-0-170-47.us-west-1.compute.internal    Ready                      control-plane,master   42m   v1.25.0+3ef6ef3
ip-10-0-174-77.us-west-1.compute.internal    Ready                      control-plane,master   42m   v1.25.0+3ef6ef3
ip-10-0-211-49.us-west-1.compute.internal    Ready                      control-plane,master   42m   v1.25.0+3ef6ef3
ip-10-0-218-151.us-west-1.compute.internal   Ready                      worker                 31m   v1.25.0+3ef6ef3
----
. When the node is back in the `Ready` state, check that the node is using the base image:

.. Open an `oc debug` session to the node. For example:
+
[source,terminal]
----
$ oc debug node/ip-10-0-155-125.us-west-1.compute.internal
----

.. Set `/host` as the root directory within the debug shell:
+
[source,terminal]
----
sh-4.4# chroot /host
----

.. Run the `rpm-ostree status` command to view that the base image is in use:
+
[source,terminal]
----
sh-4.4# rpm-ostree status
----
+
.Example output
[source,terminal]
----
State: idle
Deployments:
* ostree-unverified-registry:quay.io/openshift-release-dev/ocp-release@sha256:e2044c3cfebe0ff3a99fc207ac5efe6e07878ad59fd4ad5e41f88cb016dacd73
              Digest: sha256:e2044c3cfebe0ff3a99fc207ac5efe6e07878ad59fd4ad5e41f88cb016dacd73
----

modules/coreos-layering-updating.adoc

Lines changed: 20 additions & 0 deletions
@@ -0,0 +1,20 @@
// Module included in the following assemblies:
//
// * post-installation_configuration/coreos-layering.adoc

:_content-type: REFERENCE
[id="coreos-layering-updating_{context}"]
= Updating with a {op-system} custom layered image

When you configure {op-system-first} image layering, {product-title} no longer automatically updates the node pool that uses the custom layered image. You become responsible for manually updating your nodes as appropriate.

To update a node that uses a custom layered image, follow these general steps:

. The cluster automatically upgrades to version x.y.z+1, except for the nodes that use the custom layered image.

. Create a new Containerfile that references the updated {product-title} image and the RPM that you previously applied.

. Create a new machine config that points to the updated custom layered image.

Updating a node with a custom layered image is not required. However, if that node falls too far behind the current {product-title} version, you could experience unexpected results.
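The second step above amounts to rebasing the Containerfile on the updated release digest. A minimal sketch of that edit; the helper name, file name, and digests are illustrative:

```shell
#!/bin/sh
# Rewrite the FROM line of a Containerfile to a new release image digest
# (sketch; the file name and digest values are illustrative).
update_base_image() {
  containerfile="$1"
  new_image="$2"
  sed -i "s|^FROM .*|FROM ${new_image}|" "$containerfile"
}

# Example usage against a copy of the earlier example Containerfile:
cat > Containerfile.example <<'EOF'
FROM quay.io/openshift-release/ocp-release@sha256:6499bc69a0707fcad481c3cb73225b867d
RUN rpm-ostree override replace https://example.com/hotfixes/haproxy-1.0.16-5.el8.src.rpm && \
    rpm-ostree cleanup -m && \
    ostree container commit
EOF
update_base_image Containerfile.example \
  "quay.io/openshift-release/ocp-release@sha256:0000000000000000000000000000000000"
head -n 1 Containerfile.example
```

After rebuilding and pushing the image, create the new machine config that points to it, as in step 3.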
