
Commit b3fce82

adding sno variables and updating existing incorrect references
1 parent: 409acff

File tree

33 files changed: 58 additions & 57 deletions


_attributes/common-attributes.adoc

Lines changed: 2 additions & 0 deletions

@@ -132,3 +132,5 @@ endif::[]
 :ibmzProductName: IBM Z
 // Red Hat Quay Container Security Operator
 :rhq-cso: Red Hat Quay Container Security Operator
+:sno: single-node OpenShift
+:sno-caps: Single-node OpenShift
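
Once defined, these attributes let modules reference the product phrase consistently. A minimal AsciiDoc sketch of how the substitution works (the sentence shown is taken from a module changed later in this commit; the rendering comment is illustrative):

```asciidoc
// Attribute definitions, as added to _attributes/common-attributes.adoc
:sno: single-node OpenShift
:sno-caps: Single-node OpenShift

// In a module, {sno} expands to "single-node OpenShift" when rendered:
This feature is only available on {sno} in this release.
```

`{sno-caps}` provides the capitalized form for sentence-initial use.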

_topic_maps/_topic_map.yml

Lines changed: 3 additions & 3 deletions

@@ -1511,7 +1511,7 @@ Topics:
   File: osdk-working-bundle-images
 - Name: Validating Operators using the scorecard
   File: osdk-scorecard
-- Name: High-availability or single node cluster detection and support
+- Name: High-availability or single-node cluster detection and support
   File: osdk-ha-sno
 - Name: Configuring built-in monitoring with Prometheus
   File: osdk-monitoring-prometheus

@@ -2233,13 +2233,13 @@ Topics:
 - Name: Creating a performance profile
   File: cnf-create-performance-profiles
   Distros: openshift-origin,openshift-enterprise
-- Name: Deploying distributed units manually on single node OpenShift
+- Name: Deploying distributed units manually on single-node OpenShift
   File: ztp-configuring-single-node-cluster-deployment-during-installation
   Distros: openshift-origin,openshift-enterprise
 - Name: Provisioning and deploying a distributed unit (DU)
   File: cnf-provisioning-and-deploying-a-distributed-unit
   Distros: openshift-webscale
-- Name: Workload partitioning on single node OpenShift
+- Name: Workload partitioning on single-node OpenShift
   File: sno-du-enabling-workload-partitioning-on-single-node-openshift
   Distros: openshift-origin,openshift-enterprise
 - Name: Deploying distributed units at scale in a disconnected environment

installing/installing-preparing.adoc

Lines changed: 1 addition & 1 deletion

@@ -89,7 +89,7 @@ endif::[]

 ////
 [id="installing-preparing-single-node"]
-=== Are you installing single node clusters at the edge?
+=== Are you installing single-node clusters at the edge?

 You can use the assisted installer to deploy xref:../installing/installing_sno/install-sno-installing-sno.adoc#installing-sno[single node] clusters for edge workloads.
 ////

modules/cnf-du-configuring-workload-partitioning.adoc

Lines changed: 1 addition & 1 deletion

@@ -17,4 +17,4 @@ are correctly scheduled to run on the management CPU partition.

 . For pods and namespaces that are correctly annotated, the CPU request values are zeroed out and converted to `<workload-type>.workload.openshift.io/cores`. This modified resource allows the pods to be constrained to the restricted CPUs.

-. The single node cluster starts with management components constrained to a subset of available CPUs.
+. The single-node cluster starts with management components constrained to a subset of available CPUs.
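
The annotation-to-resource conversion in the step above can be pictured with a hypothetical pod manifest. The annotation key and the `management` workload type shown here are assumptions based on the `<workload-type>.workload.openshift.io/cores` pattern named in the module; this is a sketch, not the exact mutation performed by the platform:

```yaml
# Hypothetical management pod as authored: annotated for workload partitioning
apiVersion: v1
kind: Pod
metadata:
  name: example-controller                 # illustrative name
  annotations:
    # Assumed annotation key opting the pod into the management partition
    target.workload.openshift.io/management: '{"effect": "PreferredDuringScheduling"}'
spec:
  containers:
  - name: controller
    image: example.com/controller:latest   # illustrative image
    resources:
      requests:
        cpu: 100m                          # zeroed out by the mutation
# After mutation, the request is expressed as the workload-scoped resource:
#   requests:
#     management.workload.openshift.io/cores: "100"   # <workload-type> = management
```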

modules/cnf-du-management-pods.adoc

Lines changed: 1 addition & 1 deletion

@@ -6,7 +6,7 @@

 = Cluster Management pods

-For the purposes of achieving 2-core (4 HT CPU) installation of single node clusters, the set of pods that are considered _management_ are limited to:
+For the purposes of achieving 2-core (4 HT CPU) installation of single-node clusters, the set of pods that are considered _management_ are limited to:

 * Core Operators
 * Day 2 Operators

modules/cnf-du-partitioning-management-workloads.adoc

Lines changed: 3 additions & 3 deletions

@@ -14,18 +14,18 @@ number of CPUs within the host.

 Server resources installed at the edge, such as cores, are expensive and limited. Application workloads require nearly all cores and the resources consumed by infrastructure is a key reason for the selection of a vRAN infrastructure. A hypothetical distributed unit (DU) example is an unusually resource-intensive workload, typically requiring 20 dedicated cores. Partitioning management workloads mitigates much of this activity by separating management tasks from normal workloads.

-When you use workload partitioning, the CPU resources used by {product-title} for cluster management are isolated to a partitioned set of CPU resources on a single node cluster with a DU profile applied. This falls broadly into two categories:
+When you use workload partitioning, the CPU resources used by {product-title} for cluster management are isolated to a partitioned set of CPU resources on a single-node cluster with a DU profile applied. This falls broadly into two categories:

 * Isolates cluster management functions to the defined number of CPUs. All cluster management functions operate solely on that `cpuset`.

 * Tunes the cluster configuration (with the applied DU profile) so the actual CPU usage fits within the assigned `cpuset`.

 [NOTE]
 ====
-This feature is only available on single node cluster in this release.
+This feature is only available on {sno} in this release.
 ====

-The minimum number of reserved CPUs required for the management partition for a single node cluster is four CPU HTs. Inclusion of Operators or workloads outside of the set of accepted management pods requires additional CPU HTs.
+The minimum number of reserved CPUs required for the management partition for a single-node cluster is four CPU HTs. Inclusion of Operators or workloads outside of the set of accepted management pods requires additional CPU HTs.

 Workload partitioning isolates the workloads away from the non-management workloads using the normal scheduling capabilities of Kubernetes to manage the number of pods that can be placed onto those cores, and avoids mixing cluster management workloads and user workloads.

modules/cnf-performing-end-to-end-tests-for-platform-verification.adoc

Lines changed: 2 additions & 2 deletions

@@ -253,9 +253,9 @@ $ docker run -v $(pwd)/:/kubeconfig -e KUBECONFIG=/kubeconfig/kubeconfig registr
 ----

 [id="cnf-performing-end-to-end-tests-running-in-single-node-cluster_{context}"]
-== Running in a single node cluster
+== Running in a single-node cluster

-Running tests on a single node cluster causes the following limitations to be imposed:
+Running tests on a single-node cluster causes the following limitations to be imposed:

 * Longer timeouts for certain tests, including SR-IOV and SCTP tests
 * Tests requiring master and worker nodes are skipped

modules/cnf-running-the-performance-creator-profile-offline.adoc

Lines changed: 1 addition & 1 deletion

@@ -219,7 +219,7 @@ The Performance Profile Creator arguments are shown in the Performance Profile C
 * `mcp-name`
 * `rt-kernel`

-The `mcp-name` argument in this example is set to `worker-cnf` based on the output of the command `oc get mcp`. For Single Node OpenShift (SNO) use `--mcp-name=master`.
+The `mcp-name` argument in this example is set to `worker-cnf` based on the output of the command `oc get mcp`. For {sno} use `--mcp-name=master`.
 ====

 . Review the created YAML file:

modules/cnf-running-the-performance-creator-profile.adoc

Lines changed: 1 addition & 1 deletion

@@ -120,7 +120,7 @@ The Performance Profile Creator arguments are shown in the Performance Profile C
 * `mcp-name`
 * `rt-kernel`

-The `mcp-name` argument in this example is set to `worker-cnf` based on the output of the command `oc get mcp`. For Single Node OpenShift (SNO) use `--mcp-name=master`.
+The `mcp-name` argument in this example is set to `worker-cnf` based on the output of the command `oc get mcp`. For {sno} use `--mcp-name=master`.
 ====

 . Review the created YAML file:

modules/install-sno-about-installing-on-a-single-node.adoc

Lines changed: 3 additions & 3 deletions

@@ -6,9 +6,9 @@
 [id="install-sno-about-installing-on-a-single-node_{context}"]
 = About OpenShift on a single node

-You can create a single node cluster with standard installation methods. {product-title} on a single node is a specialized installation that requires the creation of a special ignition configuration ISO. The primary use case is for edge computing workloads, including intermittent connectivity, portable clouds, and 5G radio access networks (RAN) close to a base station. The major tradeoff with an installation on a single node is the lack of high availability.
+You can create a single-node cluster with standard installation methods. {product-title} on a single node is a specialized installation that requires the creation of a special ignition configuration ISO. The primary use case is for edge computing workloads, including intermittent connectivity, portable clouds, and 5G radio access networks (RAN) close to a base station. The major tradeoff with an installation on a single node is the lack of high availability.

 [IMPORTANT]
 ====
-The use of OpenShiftSDN with single-node OpenShift is deprecated. OVN-Kubernetes is the default networking solution for single-node OpenShift deployments.
-====
+The use of OpenShiftSDN with {sno} is deprecated. OVN-Kubernetes is the default networking solution for {sno} deployments.
+====
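
Since OVN-Kubernetes is the default for single-node OpenShift deployments, a minimal `install-config.yaml` sketch for a single-node install might look like the following. The base domain, cluster name, and omitted fields are illustrative assumptions, not taken from this commit:

```yaml
# Illustrative install-config.yaml fragment for a single-node cluster
apiVersion: v1
baseDomain: example.com            # assumed domain
metadata:
  name: sno-cluster                # assumed cluster name
networking:
  networkType: OVNKubernetes       # default for single-node OpenShift deployments
controlPlane:
  name: master
  replicas: 1                      # the single node hosts the control plane
compute:
- name: worker
  replicas: 0                      # no separate worker nodes
```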
