
Commit 913c3dc

Typo fixes
1 parent 2a4d65d commit 913c3dc

21 files changed, +42 -42 lines changed

installing/install_config/enabling-cgroup-v2.adoc

Lines changed: 2 additions & 2 deletions
@@ -10,9 +10,9 @@ toc::[]
  ifndef::openshift-origin[]
  By default, {product-title} uses link:https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v1.html[Linux control group version 1] (cgroup v1) in your cluster. You can enable link:https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html[Linux control group version 2] (cgroup v2) upon installation. Enabling cgroup v2 in {product-title} disables all cgroup version 1 controllers and hierarchies in your cluster.
- cgroup v2 is the next version of the Linux cgroup API. cgroup v2 offers several improvements over cgroup v1, including a unified hierarchy, safer sub-tree delegation, new features such as link:https://www.kernel.org/doc/html/latest/accounting/psi.html[Pressure Stall Information], and enhanced resource management and isolation.
+ cgroup v2 is the next version of the Linux cgroup API. cgroup v2 offers several improvements over cgroup v1, including a unified hierarchy, safer sub-tree delegation, new features such as link:https://www.kernel.org/doc/html/latest/accounting/psi.html[Pressure Stall Information], and enhanced resource management and isolation.
- You can switch between cgroup v1 and cgroup v2, as needed, by editing the the `node.config` object. For more information, see "Configuring the Linux cgroup on your nodes" in the "Additional resources" of this section.
+ You can switch between cgroup v1 and cgroup v2, as needed, by editing the `node.config` object. For more information, see "Configuring the Linux cgroup on your nodes" in the "Additional resources" of this section.
  endif::openshift-origin[]
  ifdef::openshift-origin[]
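
The `node.config` object named in the corrected line is the cluster-scoped node configuration. A minimal sketch of the switch, assuming a `Node` resource named `cluster` that exposes a `spec.cgroupMode` field accepting `v1` or `v2`:

[source,yaml]
----
apiVersion: config.openshift.io/v1
kind: Node
metadata:
  name: cluster
spec:
  cgroupMode: "v2" # set to "v1" to return to cgroup v1
----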

modules/cnf-configuring-node-groups-for-the-numaresourcesoperator.adoc

Lines changed: 4 additions & 4 deletions
@@ -7,11 +7,11 @@
  [id="cnf-configuring-node-groups-for-the-numaresourcesoperator_{context}"]
  = Optional: Configuring polling operations for NUMA resources updates
- The daemons controlled by the NUMA Resources Operator in their `nodeGroup` poll resources to retrieve updates about available NUMA resources. You can fine-tune polling operations for these daemons by configuring the `spec.nodeGroups` specification in the `NUMAResourcesOperator` custom resource (CR). This provides advanced control of polling operations. Configure these specifications to improve scheduling behaviour and troubleshoot suboptimal scheduling decisions.
+ The daemons controlled by the NUMA Resources Operator in their `nodeGroup` poll resources to retrieve updates about available NUMA resources. You can fine-tune polling operations for these daemons by configuring the `spec.nodeGroups` specification in the `NUMAResourcesOperator` custom resource (CR). This provides advanced control of polling operations. Configure these specifications to improve scheduling behaviour and troubleshoot suboptimal scheduling decisions.
  The configuration options are the following:
- * `infoRefreshMode`: Determines the trigger condition for polling the kublet. The NUMA Resources Operator reports the resulting information to the API server.
+ * `infoRefreshMode`: Determines the trigger condition for polling the kubelet. The NUMA Resources Operator reports the resulting information to the API server.
  * `infoRefreshPeriod`: Determines the duration between polling updates.
  * `podsFingerprinting`: Determines if point-in-time information for the current set of pods running on a node is exposed in polling updates.
  +
@@ -44,7 +44,7 @@ spec:
  podsFingerprinting: Enabled <3>
  name: worker
  ----
- <1> Valid values are `Periodic`, `Events`, `PeriodicAndEvents`. Use `Periodic` to poll the kublet at intervals that you define in `infoRefreshPeriod`. Use `Events` to poll the kublet at every pod lifecycle event. Use `PeriodicAndEvents` to enable both methods.
+ <1> Valid values are `Periodic`, `Events`, `PeriodicAndEvents`. Use `Periodic` to poll the kubelet at intervals that you define in `infoRefreshPeriod`. Use `Events` to poll the kubelet at every pod lifecycle event. Use `PeriodicAndEvents` to enable both methods.
  <2> Define the polling interval for `Periodic` or `PeriodicAndEvents` refresh modes. The field is ignored if the refresh mode is `Events`.
  <3> Valid values are `Enabled` or `Disabled`. Setting to `Enabled` is a requirement for the `cacheResyncPeriod` specification in the `NUMAResourcesScheduler`.
@@ -70,4 +70,4 @@ $ oc get numaresop numaresourcesoperator -o json | jq '.status'
  "name": "worker"
  ...
- ----
+ ----
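
A sketch of how the three polling options fit together in the `NUMAResourcesOperator` CR, with the nesting inferred from the fragment above and the group/version assumed to be `nodetopology.openshift.io/v1`:

[source,yaml]
----
apiVersion: nodetopology.openshift.io/v1
kind: NUMAResourcesOperator
metadata:
  name: numaresourcesoperator
spec:
  nodeGroups:
  - config:
      infoRefreshMode: Periodic     # or Events, PeriodicAndEvents
      infoRefreshPeriod: 10s        # used by Periodic and PeriodicAndEvents
      podsFingerprinting: Enabled   # required for cacheResyncPeriod in NUMAResourcesScheduler
    name: worker
----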

modules/cnf-scheduling-numa-aware-workloads-overview-with-manual-performance-settings.adoc

Lines changed: 1 addition & 1 deletion
@@ -5,4 +5,4 @@
  [id="cnf-scheduling-numa-aware-workloads-with-manual-perofrmance-settings_{context}"]
  = Scheduling NUMA-aware workloads with manual performance settings
- Clusters running latency-sensitive workloads typically feature performance profiles that help to minimize workload latency and optimize performance. However, you can schedule NUMA-aware workloads in a pristine cluster that does not feature a performance profile. The following workflow features a pristine cluster that you can manually configure for performance by using the `KubletConfig` resource. This is not the typical environment for scheduling NUMA-aware workloads.
+ Clusters running latency-sensitive workloads typically feature performance profiles that help to minimize workload latency and optimize performance. However, you can schedule NUMA-aware workloads in a pristine cluster that does not feature a performance profile. The following workflow features a pristine cluster that you can manually configure for performance by using the `KubeletConfig` resource. This is not the typical environment for scheduling NUMA-aware workloads.
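
A minimal sketch of the manual `KubeletConfig` tuning that this sentence refers to, assuming a worker machine config pool and the standard CPU and Topology Manager kubelet fields; the resource name and reserved CPUs are hypothetical:

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: worker-numa-tuning                # hypothetical name
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
  kubeletConfig:
    cpuManagerPolicy: "static"            # pin guaranteed pods to exclusive CPUs
    cpuManagerReconcilePeriod: "5s"
    reservedSystemCPUs: "0,1"             # hypothetical reserved CPUs
    topologyManagerPolicy: "single-numa-node"
----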

modules/deployment-plug-in-cluster.adoc

Lines changed: 1 addition & 1 deletion
@@ -91,5 +91,5 @@ You can see the list of the enabled plugins on the *Overview* page or by navigat
  [NOTE]
  ====
- It can take a few minutes for the new plugin configuration to appear. If you do not see your plugin, you might need to refresh your browser if the plugin was recently enabled. If you recieve any errors at runtime, check the JS console in browser developer tools to look for any errors in your plugin code.
+ It can take a few minutes for the new plugin configuration to appear. If you do not see your plugin, you might need to refresh your browser if the plugin was recently enabled. If you receive any errors at runtime, check the JS console in browser developer tools to look for any errors in your plugin code.
  ====
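
For context, console plugins are typically enabled through the Console operator configuration; a hedged sketch, assuming a plugin deployed under the name `my-plugin`:

[source,yaml]
----
apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  plugins:
  - my-plugin   # hypothetical plugin name
----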

modules/ipi-install-configuring-host-dual-network-interfaces-in-the-install-config.yaml-file.adoc

Lines changed: 1 addition & 1 deletion
@@ -112,7 +112,7 @@ hosts:
  next-hop-interface: bond0 <14>
  table-id: 254
  ----
- <1> The `networkConfig`` field contains information about the network configuration of the host, with subfields including `interfaces``,`dns-resolver`, and `routes`.
+ <1> The `networkConfig` field contains information about the network configuration of the host, with subfields including `interfaces`, `dns-resolver`, and `routes`.
  <2> The `interfaces` field is an array of network interfaces defined for the host.
  <3> The name of the interface.
  <4> The type of interface. This example creates a ethernet interface.
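
A trimmed sketch of the `networkConfig` structure that the corrected callout describes; the host name, DNS server, and route addresses are hypothetical:

[source,yaml]
----
hosts:
- name: openshift-worker-0              # hypothetical host name
  networkConfig:
    interfaces:
    - name: bond0
      type: bond
      state: up
    dns-resolver:
      config:
        server:
        - 192.168.1.1                   # hypothetical DNS server
    routes:
      config:
      - destination: 0.0.0.0/0
        next-hop-address: 192.168.1.254 # hypothetical gateway
        next-hop-interface: bond0
        table-id: 254
----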

modules/lvms-scaling-storage-of-single-node-openshift-cluster-using-rhacm.adoc

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@
  [id="lvms-scaling-storage-of-single-node-openshift-cluster-using-rhacm_{context}"]
  = Scaling up storage by adding capacity to your {sno} cluster using {rh-rhacm}
- You can scale the the storage capacity of your configured worker nodes on a {sno} cluster using {rh-rhacm}.
+ You can scale the storage capacity of your configured worker nodes on a {sno} cluster using {rh-rhacm}.
  .Prerequisites
modules/machine-lifecycle-hook-deletion-format.adoc

Lines changed: 1 addition & 1 deletion
@@ -40,7 +40,7 @@ spec:
  ...
  ----
  <1> The name of the `preTerminate` lifecycle hook.
- <2> The hook-implementing controller that that manages the `preTerminate` lifecycle hook.
+ <2> The hook-implementing controller that manages the `preTerminate` lifecycle hook.
  [discrete]
  [id="machine-lifecycle-hook-deletion-example_{context}"]

modules/machineset-upi-reqs-ignition-config.adoc

Lines changed: 1 addition & 1 deletion
@@ -14,7 +14,7 @@ endif::[]
  Provisioning virtual machines (VMs) requires a valid Ignition configuration. The Ignition configuration contains the `machine-config-server` address and a system trust bundle for obtaining further Ignition configurations from the Machine Config Operator.
- By default, this configuration is stored in the `worker-user-data` secret in the the `machine-api-operator` namespace. Compute machine sets reference the secret during the machine creation process.
+ By default, this configuration is stored in the `worker-user-data` secret in the `machine-api-operator` namespace. Compute machine sets reference the secret during the machine creation process.
  .Procedure
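
A sketch of how a compute machine set references the `worker-user-data` secret mentioned above, assuming the common `userDataSecret` field in the provider specification; the machine set name is hypothetical:

[source,yaml]
----
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: example-machineset         # hypothetical machine set name
spec:
  template:
    spec:
      providerSpec:
        value:
          userDataSecret:
            name: worker-user-data
----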

modules/microshift-accessing-cluster-open-firewall.adoc

Lines changed: 2 additions & 2 deletions
@@ -14,7 +14,7 @@ Use the following procedure to open the firewall so that a remote user can acces
  .Prerequisites
- * You have installed the the `oc` binary.
+ * You have installed the `oc` binary.
  .Procedure
@@ -32,4 +32,4 @@ Use the following procedure to open the firewall so that a remote user can acces
  [source,terminal]
  ----
  [user@microshift]$ oc get all -A
- ----
+ ----

modules/microshift-accessing-cluster-remotely.adoc

Lines changed: 1 addition & 1 deletion
@@ -14,7 +14,7 @@ Use the following procedure to access the {product-title} cluster from a remote
  .Prerequisites
- * You have installed the the `oc` binary.
+ * You have installed the `oc` binary.
  * The `@user@microshift` has opened the firewall from the local host.