
Commit 4bc7215

Merge pull request #49291 from amolnar-rh/TELCODOCS-676
TELCODOCS-676: Update variables in RAN docs
2 parents dd22ff7 + 88eeb53 commit 4bc7215

14 files changed: +55 -52 lines
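The common thread across these files is replacing hard-coded versions (4.9, 4.10) with the `{product-version}` AsciiDoc attribute and adding `subs="attributes+"` to source blocks so the attribute is expanded inside code listings. A minimal sketch of that substitution step, assuming an illustrative attribute value (in the real docs build, Asciidoctor supplies `{product-version}` from the repository's attribute definitions):

```python
import re

# Hypothetical attribute table; Asciidoctor resolves these at build time.
attributes = {"product-version": "4.11"}

def expand_attributes(text: str, attrs: dict) -> str:
    """Replace AsciiDoc-style {name} references with attribute values,
    leaving unknown references untouched (a simplification of Asciidoctor)."""
    return re.sub(
        r"\{([A-Za-z0-9_][A-Za-z0-9_-]*)\}",
        lambda m: str(attrs.get(m.group(1), m.group(0))),
        text,
    )

print(expand_attributes("channel: stable-{product-version}", attributes))
```

Without `subs="attributes+"` on a `[source]` block, listings are rendered verbatim and the reader would see the literal text `{product-version}`, which is why each changed listing in this commit also gains the `subs` option.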

modules/cnf-topology-aware-lifecycle-manager-installation-cli.adoc

Lines changed: 4 additions & 4 deletions
@@ -51,10 +51,10 @@ $ oc get csv -n openshift-operators
 ----
 +
 .Example output
-[source,terminal]
+[source,terminal,subs="attributes+"]
 ----
-NAME DISPLAY VERSION REPLACES PHASE
-topology-aware-lifecycle-manager.4.10.0-202206301927 Topology Aware Lifecycle Manager 4.10.0-202206301927 Succeeded
+NAME DISPLAY VERSION REPLACES PHASE
+topology-aware-lifecycle-manager.{product-version}.x Topology Aware Lifecycle Manager {product-version}.x Succeeded
 ----
 
 . Verify that the {cgu-operator} is up and running:
@@ -69,4 +69,4 @@ $ oc get deploy -n openshift-operators
 ----
 NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
 openshift-operators cluster-group-upgrades-controller-manager 1/1 1 1 14s
-----
+----

modules/cnf-topology-aware-lifecycle-manager-operator-update.adoc

Lines changed: 8 additions & 3 deletions
@@ -22,7 +22,7 @@ You can perform an Operator update with the {cgu-operator}.
 . Update the `PolicyGenTemplate` CR for the Operator update.
 .. Update the `du-upgrade` `PolicyGenTemplate` CR with the following additional contents in the `du-upgrade.yaml` file:
 +
-[source,yaml]
+[source,yaml,subs="attributes+"]
 ----
 apiVersion: ran.openshift.io/v1
 kind: PolicyGenTemplate
@@ -42,7 +42,7 @@ spec:
 name: redhat-operators
 spec:
 displayName: Red Hat Operators Catalog
-image: registry.example.com:5000/olm/redhat-operators:v4.10 <1>
+image: registry.example.com:5000/olm/redhat-operators:v{product-version} <1>
 updateStrategy: <2>
 registryPoll:
 interval: 1h
@@ -96,6 +96,11 @@ spec:
 ----
 
 .. Remove the specified subscriptions channels in the common `PolicyGenTemplate` CR, if they exist. The default subscriptions channels from the ZTP image are used for the update.
++
+[NOTE]
+====
+The default channel for the Operators applied through ZTP {product-version} is `stable`, except for the `performance-addon-operator`. As of {product-title} 4.11, the `performance-addon-operator` functionality was moved to the `node-tuning-operator`. For the 4.10 release, the default channel for PAO is `v4.10`. You can also specify the default channels in the common `PolicyGenTemplate` CR.
+====
 
 .. Push the `PolicyGenTemplate` CRs updates to the ZTP Git repository.
 +
@@ -165,7 +170,7 @@ spec:
 enable: false
 ----
 <1> The policy is needed by the image pre-caching feature to retrieve the operator images from the catalog source.
-<2> The policy contains Operator subscriptions. If you have upgraded ZTP from 4.9 to 4.10 by following "Upgrade ZTP from 4.9 to 4.10", all Operator subscriptions are grouped into the `common-subscriptions-policy` policy.
+<2> The policy contains Operator subscriptions. If you have followed the structure and content of the reference `PolicyGenTemplates`, all Operator subscriptions are grouped into the `common-subscriptions-policy` policy.
 +
 [NOTE]
 ====

modules/cnf-topology-aware-lifecycle-manager-platform-update.adoc

Lines changed: 9 additions & 9 deletions
@@ -22,9 +22,9 @@ You can perform a platform update with the {cgu-operator}.
 . Create a `PolicyGenTemplate` CR for the platform update:
 .. Save the following contents of the `PolicyGenTemplate` CR in the `du-upgrade.yaml` file.
 +
-.Example of `PolicyGenTemplate` for platform update
+.Example of `PolicyGenTemplate` for platform update
 +
-[source,yaml]
+[source,yaml,subs="attributes+"]
 ----
 apiVersion: ran.openshift.io/v1
 kind: PolicyGenTemplate
@@ -36,7 +36,7 @@ spec:
 group-du-sno: ""
 mcp: "master"
 remediationAction: inform
-sourceFiles:
+sourceFiles:
 - fileName: ImageSignature.yaml <1>
 policyName: "platform-upgrade-prep"
 binaryData:
@@ -60,20 +60,20 @@ spec:
 annotations:
 ran.openshift.io/ztp-deploy-wave: "1"
 spec:
-channel: "stable-4.10"
-upstream: http://upgrade.example.com/images/upgrade-graph_stable-4.10
+channel: "stable-{product-version}"
+upstream: http://upgrade.example.com/images/upgrade-graph_stable-{product-version}
 - fileName: ClusterVersion.yaml <5>
 policyName: "platform-upgrade"
 metadata:
 name: version
 spec:
-channel: "stable-4.10"
-upstream: http://upgrade.example.com/images/upgrade-graph_stable-4.10
+channel: "stable-{product-version}"
+upstream: http://upgrade.example.com/images/upgrade-graph_stable-{product-version}
 desiredUpdate:
-version: 4.10.4
+version: {product-version}.4
 status:
 history:
-- version: 4.10.4
+- version: {product-version}.4
 state: "Completed"
 ----
 <1> The `ConfigMap` CR contains the signature of the desired release image to update to.

modules/cnf-topology-aware-lifecycle-manager-preparing-for-updates.adoc

Lines changed: 3 additions & 3 deletions
@@ -11,7 +11,7 @@ If you have deployed spoke clusters with distributed unit (DU) profiles using th
 [id="talo-platform-prepare-for-update_{context}"]
 == Preparing for the updates
 
-If both the hub and the spoke clusters are running {product-title} 4.9, you must update ZTP from version 4.9 to 4.10. If {product-title} 4.10 is used, you can set up the environment.
+This procedure uses the Topology Aware Lifecycle Manager (TALM), which requires version 4.10 or later of the ZTP container for compatibility.
 
 [id="talo-platform-prepare-for-update-env-setup_{context}"]
 == Setting up the environment
@@ -97,9 +97,9 @@ For more information about how to set up the graph on the hub cluster, see link:
 
 .. Make a local copy of the upstream graph. Host the update graph on an `http` or `https` server in the disconnected environment that has access to the spoke cluster. To download the update graph, use the following command:
 +
-[source,terminal]
+[source,terminal,subs="attributes+"]
 ----
-$ curl -s https://api.openshift.com/api/upgrades_info/v1/graph?channel=stable-4.10 -o ~/upgrade-graph_stable-4.10
+$ curl -s https://api.openshift.com/api/upgrades_info/v1/graph?channel=stable-{product-version} -o ~/upgrade-graph_stable-{product-version}
 ----
 
 * For Operator updates, you must perform the following task:
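The curl step above pairs one channel name with both the upgrade-graph API query and the local file name. A small sketch of how those strings derive from a single channel value (the concrete version is an assumed example in place of `{product-version}`):

```python
# Assumed concrete channel for illustration; the docs use stable-{product-version}.
channel = "stable-4.11"

# The Cincinnati upgrade-graph endpoint queried by the documented command.
graph_url = (
    "https://api.openshift.com/api/upgrades_info/v1/graph"
    f"?channel={channel}"
)

# Local copy served from the disconnected environment's http/https server.
local_file = f"~/upgrade-graph_{channel}"

# Equivalent of the documented command line:
print(f"curl -s {graph_url} -o {local_file}")
```

Keeping the channel string in one place avoids the mismatch between the downloaded graph and the `upstream` URL referenced in the `PolicyGenTemplate` CR.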

modules/nw-rfhe-installing-operator-cli.adoc

Lines changed: 3 additions & 3 deletions
@@ -96,8 +96,8 @@ $ oc get csv -n openshift-bare-metal-events -o custom-columns=Name:.metadata.nam
 ----
 
 .Example output
-[source,terminal]
+[source,terminal,subs="attributes+"]
 ----
-Name Phase
-bare-metal-event-relay.4.10.0-202206301927 Succeeded
+Name Phase
+bare-metal-event-relay.{product-version}.0-xxxxxxxxxxxx Succeeded
 ----

modules/ztp-adding-new-content-to-gitops-ztp.adoc

Lines changed: 8 additions & 8 deletions
@@ -24,33 +24,33 @@ ztp-update/
 
 . Add the following content to the `ztp-update.in` Containerfile:
 +
-[source,text]
+[source,text,subs="attributes+"]
 ----
-FROM registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.10
+FROM registry.redhat.io/openshift4/ztp-site-generate-rhel8:v{product-version}
 
 ADD example-cr2.yaml /kustomize/plugin/ran.openshift.io/v1/policygentemplate/source-crs/
 ADD example-cr1.yaml /kustomize/plugin/ran.openshift.io/v1/policygentemplate/source-crs/
 ----
 
 . Open a terminal at the `ztp-update/` folder and rebuild the container:
 +
-[source,terminal]
+[source,terminal,subs="attributes+"]
 ----
-$ podman build -t ztp-site-generate-rhel8-custom:v4.10-custom-1
+$ podman build -t ztp-site-generate-rhel8-custom:v{product-version}-custom-1 .
 ----
 
 . Push the built container image to your disconnected registry, for example:
 +
-[source,terminal]
+[source,terminal,subs="attributes+"]
 ----
-$ podman push localhost/ztp-site-generate-rhel8-custom:v4.10-custom-1 registry.example.com:5000/ztp-site-generate-rhel8-custom:v4.10-custom-1
+$ podman push localhost/ztp-site-generate-rhel8-custom:v{product-version}-custom-1 registry.example.com:5000/ztp-site-generate-rhel8-custom:v{product-version}-custom-1
 ----
 
 . Patch the Argo CD instance on the hub cluster to point to the newly built container image:
 +
-[source,terminal]
+[source,terminal,subs="attributes+"]
 ----
-$ oc patch -n openshift-gitops argocd openshift-gitops --type=json -p '[{"op": "replace", "path":"/spec/repo/initContainers/0/image", "value": "registry.example.com:5000/ztp-site-generate-rhel8-custom:v4.10-custom-1"} ]'
+$ oc patch -n openshift-gitops argocd openshift-gitops --type=json -p '[{"op": "replace", "path":"/spec/repo/initContainers/0/image", "value": "registry.example.com:5000/ztp-site-generate-rhel8-custom:v{product-version}-custom-1"} ]'
 ----
 +
 When the Argo CD instance is patched, the `openshift-gitops-repo-server` pod automatically restarts.
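The `oc patch --type=json` command above passes a JSON Patch (RFC 6902) document through `-p`. A sketch of constructing that payload programmatically, for example when scripting the patch; the image tag here is a placeholder assumption:

```python
import json

# Hypothetical image reference; substitute your registry and custom tag.
image = "registry.example.com:5000/ztp-site-generate-rhel8-custom:v4.11-custom-1"

# JSON Patch: replace the first init container image of the Argo CD repo server,
# matching the path used by the documented oc patch command.
patch = [
    {
        "op": "replace",
        "path": "/spec/repo/initContainers/0/image",
        "value": image,
    }
]

# This string is what would follow -p on the oc command line.
print(json.dumps(patch))
```

Building the document with `json.dumps` avoids the shell-quoting mistakes that hand-written single-quoted JSON invites.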

modules/ztp-deploying-a-site.adoc

Lines changed: 2 additions & 2 deletions
@@ -73,7 +73,7 @@ The folder includes example files for single node, three-node, and standard clus
 
 .. Change the cluster and host details in the example file to match the type of cluster you want. The following file is a composite of the three files that explains the configuration of each cluster type:
 +
-[source,yaml]
+[source,yaml,subs="attributes+"]
 ----
 # example-node1-bmh-secret & assisted-deployment-pull-secret need to be created under same namespace example-sno
 ---
@@ -86,7 +86,7 @@ spec:
 baseDomain: "example.com"
 pullSecretRef:
 name: "assisted-deployment-pull-secret"
-clusterImageSetNameRef: "openshift-4.10" <1>
+clusterImageSetNameRef: "openshift-{product-version}" <1>
 sshPublicKey: "ssh-rsa AAAA..."
 clusters:
 - clusterName: "example-sno"

modules/ztp-manually-install-a-single-managed-cluster.adoc

Lines changed: 4 additions & 6 deletions
@@ -51,22 +51,20 @@ provisioner_cluster_registry }}/ocp4:{{ mirror_version_spoke_release }}
 
 * You mirrored the ISO and `rootfs` used to generate the spoke cluster ISO to an HTTP server and configured the settings to pull images from there.
 +
-The images must match the version of the `ClusterImageSet`. To deploy a 4.9.0 version, the `rootfs` and
-ISO must be set at 4.9.0.
-
+The images must match the version of the `ClusterImageSet`. For example, to deploy a {product-version}.0 version, the `rootfs` and ISO must be set to `{product-version}.0`.
 
 .Procedure
 
 . Create a `ClusterImageSet` for each specific cluster version that needs to be deployed. A `ClusterImageSet` has the following format:
 +
-[source,yaml]
+[source,yaml,subs="attributes+"]
 ----
 apiVersion: hive.openshift.io/v1
 kind: ClusterImageSet
 metadata:
-name: openshift-4.9.0-rc.0 <1>
+name: openshift-{product-version}.0 <1>
 spec:
-releaseImage: quay.io/openshift-release-dev/ocp-release:4.9.0-x86_64 <2>
+releaseImage: quay.io/openshift-release-dev/ocp-release:{product-version}.0-x86_64 <2>
 ----
 <1> The descriptive version that you want to deploy.
 <2> Specifies the `releaseImage` to deploy and determines the OS Image version. The discovery ISO is based on an OS image version as the `releaseImage`, or latest if the exact version is unavailable.
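Because both the `metadata.name` and the `releaseImage` tag embed the same version string, generating one `ClusterImageSet` per deployed version is mechanical. A sketch, with an assumed concrete version standing in for `{product-version}`:

```python
# Assumed example version; one ClusterImageSet is needed per version to deploy.
version = "4.11.0"

# Render the manifest from the version string, keeping name and releaseImage
# tag in sync, as the documented format requires.
cluster_image_set = f"""\
apiVersion: hive.openshift.io/v1
kind: ClusterImageSet
metadata:
  name: openshift-{version}
spec:
  releaseImage: quay.io/openshift-release-dev/ocp-release:{version}-x86_64
"""
print(cluster_image_set)
```

The generated `name` is what the `SiteConfig` field `clusterImageSetNameRef` (shown in the ztp-deploying-a-site diff) must reference.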

modules/ztp-preparing-for-the-gitops-ztp-upgrade.adoc

Lines changed: 2 additions & 2 deletions
@@ -19,9 +19,9 @@ Use the following procedure to prepare your site for the GitOps zero touch provi
 $ mkdir -p ./out
 ----
 +
-[source,terminal]
+[source,terminal,subs="attributes+"]
 ----
-$ podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.10 extract /home/ztp --tar | tar x -C ./out
+$ podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v{product-version} extract /home/ztp --tar | tar x -C ./out
 ----
 +
 The `/out` directory contains the following subdirectories:

modules/ztp-preparing-the-hub-cluster-for-ztp.adoc

Lines changed: 3 additions & 3 deletions
@@ -10,9 +10,9 @@ You can configure your hub cluster with a set of ArgoCD applications that genera
 
 .Prerequisites
 
-* Openshift Cluster 4.8 or 4.9 as the hub cluster
-* {rh-rhacm-first} Operator 2.3 or 2.4 installed on the hub cluster
-* Red Hat OpenShift GitOps Operator 1.3 on the hub cluster
+* OpenShift cluster 4.11 as the hub cluster
+* {rh-rhacm-first} Operator 2.5 installed on the hub cluster
+* Red Hat OpenShift GitOps Operator 1.5 on the hub cluster
 
 .Procedure
