
Commit d755e7d

Merge pull request #28383 from bergerhoffer/openshift-typos
Fixing some typos
2 parents 8e6780f + 0f59625 commit d755e7d

10 files changed: +21 −22 lines changed

modules/cluster-logging-visualizer-indices.adoc

Lines changed: 1 addition & 1 deletion
@@ -11,7 +11,7 @@ An index pattern defines the Elasticsearch indices that you want to visualize. T
 
 * A user must have the `cluster-admin` role, the `cluster-reader` role, or both roles to view the *infra* and *audit* indices in Kibana. The default `kubeadmin` user has proper permissions to view these indices.
 +
-If you can view the pods and logs in the `default`, `kube-` and `openshift-` projects, you should be able to access the these indices. You can use the following command to check if the current user has appropriate permissions:
+If you can view the pods and logs in the `default`, `kube-` and `openshift-` projects, you should be able to access these indices. You can use the following command to check if the current user has appropriate permissions:
 +
 [source,terminal]
 ----
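
The permission-check command itself falls outside this hunk, here and in the two Kibana modules that follow. A plausible sketch of such a check using the standard `oc auth can-i` verb; the subresource and `<project>` placeholder are assumptions, since the module's actual command is not shown:

[source,terminal]
----
# Assumed form of the check; the module's actual command is outside the hunk.
$ oc auth can-i get pods --subresource=log -n <project>
----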

modules/cluster-logging-visualizer-kibana.adoc

Lines changed: 2 additions & 2 deletions
@@ -13,9 +13,9 @@ You view cluster logs in the Kibana web console. The methods for viewing and vis
 
 * Kibana index patterns must exist.
 
-* A user must have the `cluster-admin` role, the `cluster-reader` role, or both roles to view the *infra* and *audit* indices in Kibana. The default `kubeadmin` user has proper permissions to view these indices. 
+* A user must have the `cluster-admin` role, the `cluster-reader` role, or both roles to view the *infra* and *audit* indices in Kibana. The default `kubeadmin` user has proper permissions to view these indices.
 +
-If you can view the pods and logs in the `default`, `kube-` and `openshift-` projects, you should be able to access the these indices. You can use the following command to check if the current user has appropriate permissions:
+If you can view the pods and logs in the `default`, `kube-` and `openshift-` projects, you should be able to access these indices. You can use the following command to check if the current user has appropriate permissions:
 +
 [source,terminal]
 ----

modules/cluster-logging-visualizer-launch.adoc

Lines changed: 2 additions & 3 deletions
@@ -10,9 +10,9 @@ pie charts, heat maps, built-in geospatial support, and other visualizations.
 
 .Prerequisites
 
-* To list the *infra* and *audit* indices in Kibana, a user must have the `cluster-admin` role, the `cluster-reader` role, or both roles. The default `kubeadmin` user has proper permissions to list these indices. 
+* To list the *infra* and *audit* indices in Kibana, a user must have the `cluster-admin` role, the `cluster-reader` role, or both roles. The default `kubeadmin` user has proper permissions to list these indices.
 +
-If you can view the pods and logs in the `default`, `kube-*` and `openshift-*` projects, you should be able to access the these indices. You can use the following command to check if the current user has proper permissions:
+If you can view the pods and logs in the `default`, `kube-*` and `openshift-*` projects, you should be able to access these indices. You can use the following command to check if the current user has proper permissions:
 +
 [source,terminal]
 ----
@@ -44,4 +44,3 @@ The Kibana interface launches.
 ====
 If you get a *security_exception* error in the Kibana console and cannot access your Kibana indices, you might have an expired OAuth token. If you see this error, log out of the Kibana console, and then log back in. This refreshes your OAuth tokens and you should be able to access your indices.
 ====
-

modules/ipi-install-additional-install-config-parameters.adoc

Lines changed: 1 addition & 1 deletion
@@ -143,7 +143,7 @@ a|`provisioningNetworkCIDR`
 
 |`bootstrapProvisioningIP`
 |The second IP address of the `provisioningNetworkCIDR`.
-|The IP on the bootstrap VM where the provisioning services run while the the installer is deploying the control plane (master) nodes. Defaults to the second IP of the `provisioning` subnet. For example, `172.22.0.2`
+|The IP on the bootstrap VM where the provisioning services run while the installer is deploying the control plane (master) nodes. Defaults to the second IP of the `provisioning` subnet. For example, `172.22.0.2`
 ifdef::upstream[]
 ifeval::[{release} >= 4.5]
 or `2620:52:0:1307::2`

modules/nodes-scheduler-default-creating.adoc

Lines changed: 1 addition & 1 deletion
@@ -89,7 +89,7 @@ For example:
 $ oc patch Scheduler cluster --type='merge' -p '{"spec":{"policy":{"name":"scheduler-policy"}}}' --type=merge
 ----
 +
-After making the change to the `Scheduler` config resource, wait for the `opensift-kube-apiserver` pods to redeploy. This can take several minutes. Until the pods redeploy, new scheduler does not take effect.
+After making the change to the `Scheduler` config resource, wait for the `openshift-kube-apiserver` pods to redeploy. This can take several minutes. Until the pods redeploy, new scheduler does not take effect.
 
 . Verify the scheduler policy is configured by viewing the log of a scheduler pod in the `openshift-kube-scheduler` namespace. The following command checks for the predicates and priorities that are being registered by the scheduler:
 +
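
The verification command itself is outside the hunk; one plausible way to run that check, where the pod-name placeholder and the grep pattern are assumptions rather than the module's actual command:

[source,terminal]
----
# Sketch only; the module's actual command is outside the hunk.
$ oc logs pod/openshift-kube-scheduler-<node-name> -n openshift-kube-scheduler | grep -E 'predicates|priorities'
----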

modules/nodes-scheduler-node-selectors-cluster.adoc

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ the pod on nodes with matching labels.
 
 You configure cluster-wide node selectors by creating a Scheduler Operator custom resource (CR). You add labels to a node by editing a `Node` object, a `MachineSet` object, or a `MachineConfig` object. Adding the label to the machine set ensures that if the node or machine goes down, new nodes have the label. Labels added to a node or machine config do not persist if the node or machine goes down.
 
-For example, the the Scheduler configures the cluster-wide `region=east` node selector:
+For example, the Scheduler configures the cluster-wide `region=east` node selector:
 
 .Example Scheduler Operator custom resource
 [source,yaml]
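
The example body itself lies beyond the hunk; a minimal sketch of such a Scheduler CR, assuming the standard `spec.defaultNodeSelector` field:

[source,yaml]
----
# Sketch; the module's actual example is outside the hunk.
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  defaultNodeSelector: region=east
----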

modules/serverless-config-replicas.adoc

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@
 [id="serverless-config-replicas_{context}"]
 = Configuring high availability replicas on {ServerlessProductName}
 
-High availability (HA) functionality is available by default on {ServerlessProductName} for the `autoscaler-hpa`, `controller`, `activator` , `kourier-control`, and `kourier-gateway` controllers. These components are configured with two replicas by default.
+High availability (HA) functionality is available by default on {ServerlessProductName} for the `autoscaler-hpa`, `controller`, `activator`, `kourier-control`, and `kourier-gateway` controllers. These components are configured with two replicas by default.
 
 You modify the number of replicas that are created per controller by changing the configuration of `KnativeServing.spec.highAvailability` in the KnativeServing custom resource definition.
 // This field also specifies the minimum number of _activators_ if you are using the horizontal pod autoscaler (HPA). For more information about HPA, see
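
As a rough illustration of that `KnativeServing.spec.highAvailability` setting, a sketch where the key spelling and replica count are assumptions inferred from the prose, not shown in the diff:

[source,yaml]
----
# Assumed shape, inferred from the field name in the text above.
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  highAvailability:
    replicas: 3
----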

modules/troubleshooting-disabling-autoreboot-mco.adoc

Lines changed: 4 additions & 4 deletions
@@ -3,7 +3,7 @@
 // * support/troubleshooting/troubleshooting-operator-issues.adoc
 
 [id="troubleshooting-disabling-autoreboot-mco_{context}"]
-= Disabling Machine Config Operator from automatically rebooting 
+= Disabling Machine Config Operator from automatically rebooting
 
 When configuration changes are made by the Machine Config Operator (MCO), {op-system-first} must reboot for the changes to take effect. Whether the configuration change is automatic, such as when a `kube-apiserver-to-kubelet-signer` certificate authority (CA) is rotated, or manual, an {op-system} node reboots automatically unless it is paused.
 
@@ -15,7 +15,7 @@ The following modifications do not trigger a node reboot:
 * changes to the global pull secret or pull secret in the `openshift-config` namespace
 * changes to the `/etc/containers/registries.conf` file, such as adding or editing an `ImageContentSourcePolicy` object
 
-When the MCO detects any of these changes, it drains the corresponding nodes, applies the changes, and uncordons the nodes. 
+When the MCO detects any of these changes, it drains the corresponding nodes, applies the changes, and uncordons the nodes.
 ====
 
 To avoid unwanted disruptions, you can modify the machine config pool to prevent automatic rebooting after the Operator makes changes to the machine config.
@@ -62,7 +62,7 @@ Pausing a machine config pool stops all system reboot processes and all configur
 # oc get machineconfigpool/worker --template='{{.spec.paused}}'
 ----
 +
-The `spec.paused` field is `true` and the the machine config pool is paused.
+The `spec.paused` field is `true` and the machine config pool is paused.
 
 . Alternatively, to unpause the autoreboot process:
 
@@ -99,7 +99,7 @@ By unpausing a machine config pool, all paused changes are applied at reboot.
 # oc get machineconfigpool/worker --template='{{.spec.paused}}'
 ----
 +
-The `spec.paused` field is `false` and the the machine config pool is unpaused.
+The `spec.paused` field is `false` and the machine config pool is unpaused.
 
 . To see if the machine config pool has pending changes:
 +
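
For context, the pause and unpause steps these hunks verify work by toggling the pool's `spec.paused` field. A sketch of the patch commands (the module's exact commands sit outside the hunks):

[source,terminal]
----
# Sketch; pause the worker pool, then verify with the template query shown above.
$ oc patch --type=merge --patch='{"spec":{"paused":true}}' machineconfigpool/worker

# Set the field back to false to unpause; paused changes are applied at reboot.
$ oc patch --type=merge --patch='{"spec":{"paused":false}}' machineconfigpool/worker
----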

modules/virt-importing-vm-datavolume.adoc

Lines changed: 4 additions & 4 deletions
@@ -5,7 +5,7 @@
 [id="virt-importing-vm-datavolume_{context}"]
 = Importing a virtual machine image into a persistent volume claim by using a data volume
 
-You can import a virtual machine image into a persistent volume claim (PVC) by using a data volume. 
+You can import a virtual machine image into a persistent volume claim (PVC) by using a data volume.
 
 The virtual machine image can be hosted at an HTTP or HTTPS endpoint, or the image can be built into a container disk and stored in a container registry.
 
@@ -75,11 +75,11 @@ spec:
             storage: 10Gi
         storageClassName: local
       source:
-        http: <1> 
+        http: <1>
          url: "https://download.fedoraproject.org/pub/fedora/linux/releases/33/Cloud/x86_64/images/Fedora-Cloud-Base-33-1.2.x86_64.qcow2" <2>
          secretRef: "" <3>
          certConfigMap: "" <4>
-    status: {} 
+    status: {}
   running: true
   template:
     metadata:
@@ -105,7 +105,7 @@ spec:
         name: datavolumedisk1
   status: {}
 ----
-<1> The source type to import the image from. This example uses a HTTP endpoint. To import a container disk from a registry, replace `http` with `registry`.
+<1> The source type to import the image from. This example uses an HTTP endpoint. To import a container disk from a registry, replace `http` with `registry`.
 <2> The source of the virtual machine image you want to import. This example references a virtual machine image at an HTTP endpoint. An example of a container registry endpoint is `url: "docker://kubevirt/fedora-cloud-container-disk-demo:latest"`.
 <3> The `secretRef` parameter is optional.
 <4> The `certConfigMap` is required for communicating with servers that use self-signed certificates or certificates not signed by the system CA bundle. The referenced config map must be in the same namespace as the data volume.
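
To illustrate the registry alternative from callouts <1> and <2>, the `source` stanza would swap `http` for `registry`, roughly as follows; this is a sketch assembled from the callout text, not the module's own example:

[source,yaml]
----
# Sketch of the registry-based source; only this stanza changes.
source:
  registry:
    url: "docker://kubevirt/fedora-cloud-container-disk-demo:latest"
----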

post_installation_configuration/machine-configuration-tasks.adoc

Lines changed: 4 additions & 4 deletions
@@ -27,14 +27,14 @@ include::modules/checking-mco-status.adoc[leveloffset=+2]
 
 [id="using-machineconfigs-to-change-machines"]
 == Using MachineConfigs to configure nodes
-Tasks in this section let you create MachineConfig objects to modify files, systemd unit files, and other operating system features running on {product-title} nodes. For more ideas on working with MachineConfigs, see 
+Tasks in this section let you create MachineConfig objects to modify files, systemd unit files, and other operating system features running on {product-title} nodes. For more ideas on working with MachineConfigs, see
 content related to link:https://access.redhat.com/solutions/5307301[changing MTU network settings], link:https://access.redhat.com/solutions/5096731[adding] or
-link:https://access.redhat.com/solutions/4510281[updating] SSH authorized keys, , link:https://access.redhat.com/solutions/4518671[replacing DNS nameservers], link:https://access.redhat.com/verify-images-ocp4[verifying image signatures], link:https://access.redhat.com/solutions/4727321[enabling SCTP], and link:https://access.redhat.com/solutions/5170251[configuring iSCSI initiatornames] for {product-title}.
+link:https://access.redhat.com/solutions/4510281[updating] SSH authorized keys, link:https://access.redhat.com/solutions/4518671[replacing DNS nameservers], link:https://access.redhat.com/verify-images-ocp4[verifying image signatures], link:https://access.redhat.com/solutions/4727321[enabling SCTP], and link:https://access.redhat.com/solutions/5170251[configuring iSCSI initiatornames] for {product-title}.
 
-MachineConfigs 
+MachineConfigs
 
 {product-title} version 4.6 supports
-link:https://github.com/coreos/ignition/blob/master/docs/configuration-v3_1.md[Ignition specification version 3.1]. 
+link:https://github.com/coreos/ignition/blob/master/docs/configuration-v3_1.md[Ignition specification version 3.1].
 All new MachineConfigs you create going forward should be based on
 Ignition specification version 3.1. If you are upgrading your {product-title} cluster,
 any existing Ignition specification version 2.x MachineConfigs will be translated automatically to
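
The hunk's context ends mid-sentence here. As a rough sketch of what an Ignition specification version 3.1-based MachineConfig looks like, where the name, file path, and contents are placeholders rather than anything from this page:

[source,yaml]
----
# Minimal sketch of an Ignition 3.1 MachineConfig; names are placeholders.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-example-file
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      files:
      - path: /etc/example.conf
        mode: 420 # decimal for 0644
        contents:
          source: data:,example%20contents
----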
