Commit 1660970

Merge pull request #97222 from dfitzmau/OSDOCS-15470-4-19
[enterprise-4.19] OSDOCS-15470: Removed low-level example output blocks from Network Op…
2 parents 0a868df + c5cbe91 commit 1660970

23 files changed, +33 -216 lines

modules/configuring-egress-proxy-edns-operator.adoc

Lines changed: 2 additions & 7 deletions
@@ -33,15 +33,10 @@ $ oc -n external-dns-operator patch subscription external-dns-operator --type='j
 
 .Verification
 
-* After the deployment of the External DNS Operator is completed, verify that the trusted CA environment variable is added to the `external-dns-operator` deployment by running the following command:
+* After the deployment of the External DNS Operator is completed, verify that the trusted CA environment variable is added, outputted as `trusted-ca`, to the `external-dns-operator` deployment by running the following command:
 +
 [source,terminal]
 ----
 $ oc -n external-dns-operator exec deploy/external-dns-operator -c external-dns-operator -- printenv TRUSTED_CA_CONFIGMAP_NAME
 ----
-+
-.Example output
-[source,terminal]
-----
-trusted-ca
-----
+
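The verification step in this hunk reads an environment variable from the Operator's container with `printenv`. A minimal local sketch of the same check, with the variable set by hand rather than by a live deployment (the value `trusted-ca` is the config map name the docs expect):

```shell
# Sketch: emulate the in-cluster check that
# `oc ... exec ... printenv TRUSTED_CA_CONFIGMAP_NAME` performs.
# Setting the variable locally stands in for the deployment's env injection.
export TRUSTED_CA_CONFIGMAP_NAME=trusted-ca

# printenv exits non-zero if the variable is unset, so this doubles as a check.
value=$(printenv TRUSTED_CA_CONFIGMAP_NAME)
echo "$value"
```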

modules/k8s-nmstate-deploying-nmstate-CLI.adoc

Lines changed: 1 addition & 17 deletions
@@ -72,13 +72,6 @@ EOF
 $ oc get clusterserviceversion -n openshift-nmstate \
 -o custom-columns=Name:.metadata.name,Phase:.status.phase
 ----
-+
-.Example output
-[source,terminal,subs="attributes+"]
-----
-Name                                                           Phase
-kubernetes-nmstate-operator.{product-version}.0-202210210157   Succeeded
-----
 
 . Create an instance of the `nmstate` Operator:
 +
@@ -116,19 +109,10 @@ $ oc apply -f <filename>.yaml
 
 .Verification
 
-. Verify that all pods for the NMState Operator are in a `Running` state:
+* Verify that all pods for the NMState Operator have the `Running` status by entering the following command:
 +
 [source,terminal]
 ----
 $ oc get pod -n openshift-nmstate
 ----
-+
-.Example output
-[source,terminal,subs="attributes+"]
-----
-Name                                Ready   Status    Restarts   Age
-pod/nmstate-handler-wn55p               1/1     Running   0          77s
-pod/nmstate-operator-f6bb869b6-v5m92    1/1     Running   0          4m51s
-...
-----
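The `-o custom-columns=Name:...,Phase:...` query in this hunk prints a two-column table; a script can pull the `Phase` column out with `awk`. A minimal sketch against a hard-coded stand-in table (the CSV name and `Succeeded` value are illustrative sample data, not live cluster output):

```shell
# Sketch: parse the Phase column from custom-columns-style output, as a
# script consuming `oc get clusterserviceversion -o custom-columns=...`
# might. The table below is a fabricated stand-in for a cluster response.
csv_table='Name                                              Phase
kubernetes-nmstate-operator.4.19.0-202210210157   Succeeded'

# Row 1 is the header; the phase is the second field of row 2.
phase=$(echo "$csv_table" | awk 'NR==2 {print $2}')
echo "$phase"
```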

modules/k8s-nmstate-installing-the-kubernetes-nmstate-operator.adoc

Lines changed: 1 addition & 4 deletions
@@ -7,7 +7,7 @@
 [id="installing-the-kubernetes-nmstate-operator-web-console_{context}"]
 = Installing the Kubernetes NMState Operator by using the web console
 
-You can install the Kubernetes NMState Operator by using the web console. After it is installed, the Operator can deploy the NMState State Controller as a daemon set across all of the cluster nodes.
+You can install the Kubernetes NMState Operator by using the web console. After you install the Kubernetes NMState Operator, the Operator has deployed the NMState State Controller as a daemon set across all of the cluster nodes.
 
 .Prerequisites
 
@@ -38,6 +38,3 @@ The name restriction is a known issue. The instance is a singleton for the entir
 
 . Accept the default settings and click *Create* to create the instance.
 
-.Summary
-
-After you install the Kubernetes NMState Operator, the Operator has deployed the NMState State Controller as a daemon set across all of the cluster nodes.

modules/nw-autoscaling-ingress-controller.adoc

Lines changed: 1 addition & 7 deletions
@@ -193,18 +193,12 @@ $ oc apply -f ingress-autoscaler.yaml
 .Verification
 * Verify that the default Ingress Controller is scaled out to match the value returned by the `kube-state-metrics` query by running the following commands:
 
-** Use the `grep` command to search the Ingress Controller YAML file for replicas:
+** Use the `grep` command to search the Ingress Controller YAML file for the number of replicas:
 +
 [source,terminal]
 ----
 $ oc get -n openshift-ingress-operator ingresscontroller/default -o yaml | grep replicas:
 ----
-+
-.Example output
-[source,terminal]
-----
-replicas: 3
-----
 
 ** Get the pods in the `openshift-ingress` project:
 +
modules/nw-aws-load-balancer-operator.adoc

Lines changed: 1 addition & 13 deletions
@@ -15,31 +15,19 @@ The AWS Load Balancer Operator supports the Kubernetes service resource of type
 
 .Procedure
 
-. You can deploy the AWS Load Balancer Operator on demand from OperatorHub, by creating a `Subscription` object by running the following command:
+. To deploy the AWS Load Balancer Operator on-demand from OperatorHub, create a `Subscription` object by running the following command:
 +
 [source,terminal]
 ----
 $ oc -n aws-load-balancer-operator get sub aws-load-balancer-operator --template='{{.status.installplan.name}}{{"\n"}}'
 ----
-+
-.Example output
-[source,terminal]
-----
-install-zlfbt
-----
 
 . Check if the status of an install plan is `Complete` by running the following command:
 +
 [source,terminal]
 ----
 $ oc -n aws-load-balancer-operator get ip <install_plan_name> --template='{{.status.phase}}{{"\n"}}'
 ----
-+
-.Example output
-[source,terminal]
-----
-Complete
-----
 
 . View the status of the `aws-load-balancer-operator-controller-manager` deployment by running the following command:
 +

modules/nw-bpfman-operator-deploy.adoc

Lines changed: 1 addition & 15 deletions
@@ -89,19 +89,5 @@ Replace `<pod_name>` with the name of an XDP program pod, such as `go-xdp-counte
 ----
 2024/08/13 15:20:06 15016 packets received
 2024/08/13 15:20:06 93581579 bytes received
-
-2024/08/13 15:20:09 19284 packets received
-2024/08/13 15:20:09 99638680 bytes received
-
-2024/08/13 15:20:12 23522 packets received
-2024/08/13 15:20:12 105666062 bytes received
-
-2024/08/13 15:20:15 27276 packets received
-2024/08/13 15:20:15 112028608 bytes received
-
-2024/08/13 15:20:18 29470 packets received
-2024/08/13 15:20:18 112732299 bytes received
-
-2024/08/13 15:20:21 32588 packets received
-2024/08/13 15:20:21 113813781 bytes received
+...
 ----

modules/nw-control-dns-records-public-hosted-zone-aws.adoc

Lines changed: 5 additions & 7 deletions
@@ -10,24 +10,22 @@ You can create DNS records on a public hosted zone for AWS by using the Red Hat
 
 .Procedure
 
-. Check the user. The user must have access to the `kube-system` namespace. If you don’t have the credentials, as you can fetch the credentials from the `kube-system` namespace to use the cloud provider client:
+. Check the user profile, such as `system:admin`, by running the following command. The user profile must have access to the `kube-system` namespace. If you do not have the credentials, you can fetch the credentials from the `kube-system` namespace to use the cloud provider client by running the following command:
 +
 [source,terminal]
 ----
 $ oc whoami
 ----
+
+. Fetch the values from aws-creds secret present in `kube-system` namespace.
 +
-.Example output
 [source,terminal]
 ----
-system:admin
+$ export AWS_ACCESS_KEY_ID=$(oc get secrets aws-creds -n kube-system --template={{.data.aws_access_key_id}} | base64 -d)
 ----
-
-. Fetch the values from aws-creds secret present in `kube-system` namespace.
 +
 [source,terminal]
 ----
-$ export AWS_ACCESS_KEY_ID=$(oc get secrets aws-creds -n kube-system --template={{.data.aws_access_key_id}} | base64 -d)
 $ export AWS_SECRET_ACCESS_KEY=$(oc get secrets aws-creds -n kube-system --template={{.data.aws_secret_access_key}} | base64 -d)
 ----
 
@@ -45,7 +43,7 @@ openshift-console console console-openshift-console.apps.te
 openshift-console downloads downloads-openshift-console.apps.testextdnsoperator.apacshift.support downloads http edge/Redirect None
 ----
 
-. Get the list of dns zones to find the one which corresponds to the previously found route's domain:
+. Get the list of DNS zones and find the DNS zone that corresponds to the domain of the route that you previously queried:
 +
 [source,terminal]
 ----
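The `export` commands in this module decode base64-encoded secret data with `base64 -d` (the `--template={{.data.aws_access_key_id}}` part of the `oc` call returns base64 text). A self-contained sketch of that decode step, using a made-up placeholder value rather than a real `aws-creds` secret:

```shell
# Sketch: round-trip a value the way the aws-creds step does. In-cluster,
# `oc get secrets --template={{.data.aws_access_key_id}}` emits base64 text;
# here we encode a fabricated placeholder ("AKIAEXAMPLE") to stand in for it.
encoded=$(printf '%s' 'AKIAEXAMPLE' | base64)

# `base64 -d` recovers the plaintext, exactly as in the export pipeline.
AWS_ACCESS_KEY_ID=$(printf '%s' "$encoded" | base64 -d)
echo "$AWS_ACCESS_KEY_ID"
```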

modules/nw-control-dns-records-public-managed-zone-gcp.adoc

Lines changed: 1 addition & 7 deletions
@@ -57,18 +57,12 @@ openshift-console console console-openshift-console.apps.te
 openshift-console downloads downloads-openshift-console.apps.test.gcp.example.com downloads http edge/Redirect None
 ----
 
-. Get a list of managed zones by running the following command:
+. Get a list of managed zones, such as `qe-cvs4g-private-zone test.gcp.example.com`, by running the following command:
 +
 [source,terminal]
 ----
 $ gcloud dns managed-zones list | grep test.gcp.example.com
 ----
-+
-.Example output
-[source,terminal]
-----
-qe-cvs4g-private-zone test.gcp.example.com
-----
 
 . Create a YAML file, for example, `external-dns-sample-gcp.yaml`, that defines the `ExternalDNS` object:
 +

modules/nw-dns-cache-tuning.adoc

Lines changed: 1 addition & 1 deletion
@@ -45,7 +45,7 @@ spec:
 +
 [source,terminal]
 ----
-oc get configmap/dns-default -n openshift-dns -o yaml
+$ oc get configmap/dns-default -n openshift-dns -o yaml
 ----
 
 . Verify that you see entries that look like the following example:

modules/nw-dns-forward.adoc

Lines changed: 1 addition & 1 deletion
@@ -81,7 +81,7 @@ spec:
   clusterDomain: cluster.local
   clusterIP: x.y.z.10
   conditions:
-...
+  ...
 ----
 <1> Must comply with the `rfc6335` service name syntax.
 <2> Must conform to the definition of a subdomain in the `rfc1123` service name syntax. The cluster domain, `cluster.local`, is an invalid subdomain for the `zones` field.
