Commit 1d2c767

TELCODOCS-325: Renamed the low latency tuning section to omit Performance Addon Operator.

1 parent: ad166ba

13 files changed: +121 -347 lines

_topic_maps/_topic_map.yml

Lines changed: 3 additions & 3 deletions
@@ -59,7 +59,7 @@ Topics:
 Distros: openshift-enterprise
 Topics:
 - Name: Kubernetes overview
-File: kubernetes-overview
+File: kubernetes-overview
 - Name: OpenShift Container Platform overview
 File: openshift-overview
 - Name: Web console walkthrough
@@ -2237,8 +2237,8 @@ Topics:
 - Name: What huge pages do and how they are consumed by apps
 File: what-huge-pages-do-and-how-they-are-consumed-by-apps
 Distros: openshift-origin,openshift-enterprise
-- Name: Performance Addon Operator for low latency nodes
-File: cnf-performance-addon-operator-for-low-latency-nodes
+- Name: Low latency tuning
+File: cnf-low-latency-tuning
 Distros: openshift-origin,openshift-enterprise
 - Name: Creating a performance profile
 File: cnf-create-performance-profiles
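
For reference, a hedged sketch of how the renamed entry might read in `_topic_maps/_topic_map.yml` after this commit. Only the indentation is assumed here (typical for topic-map entries nested under `Topics:`); it is not preserved in this diff view, and the surrounding nesting depends on where the entry sits in the map.

[source,yaml]
----
- Name: Low latency tuning
  File: cnf-low-latency-tuning
  Distros: openshift-origin,openshift-enterprise
----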

installing/installing-preparing.adoc

Lines changed: 1 addition & 1 deletion
@@ -109,7 +109,7 @@ For a production cluster, you must configure the following integrations:
 [id="installing-preparing-cluster-for-workloads"]
 == Preparing your cluster for workloads

-Depending on your workload needs, you might need to take extra steps before you begin deploying applications. For example, after you prepare infrastructure to support your application xref:../cicd/builds/build-strategies.adoc#build-strategies[build strategy], you might need to make provisions for xref:../scalability_and_performance/cnf-performance-addon-operator-for-low-latency-nodes.adoc#cnf-performance-addon-operator-for-low-latency-nodes[low-latency] workloads or to xref:../nodes/pods/nodes-pods-secrets.adoc#nodes-pods-secrets[protect sensitive workloads]. You can also configure xref:../monitoring/enabling-monitoring-for-user-defined-projects.adoc#enabling-monitoring-for-user-defined-projects[monitoring] for application workloads.
+Depending on your workload needs, you might need to take extra steps before you begin deploying applications. For example, after you prepare infrastructure to support your application xref:../cicd/builds/build-strategies.adoc#build-strategies[build strategy], you might need to make provisions for xref:../scalability_and_performance/cnf-low-latency-tuning.adoc#cnf-low-latency-tuning[low-latency] workloads or to xref:../nodes/pods/nodes-pods-secrets.adoc#nodes-pods-secrets[protect sensitive workloads]. You can also configure xref:../monitoring/enabling-monitoring-for-user-defined-projects.adoc#enabling-monitoring-for-user-defined-projects[monitoring] for application workloads.
 //If you plan to run ../windows_containers/enabling-windows-container-workloads.adoc#enabling-windows-container-workloads[Windows workloads], you must enable xref:../networking/ovn_kubernetes_network_provider/configuring-hybrid-networking.adoc#configuring-hybrid-networking[hybrid networking with OVN-Kubernetes] during the installation process; hybrid networking cannot be enabled after your cluster is installed.

 [id="supported-installation-methods-for-different-platforms"]

installing/installing_openstack/installing-openstack-installer-ovs-dpdk.adoc

Lines changed: 3 additions & 3 deletions
@@ -6,7 +6,7 @@ include::_attributes/common-attributes.adoc[]

 toc::[]

-If your {rh-openstack-first} deployment has Open vSwitch with the Data Plane Development Kit (OVS-DPDK) enabled, you can install an {product-title} cluster on it. Clusters that run on such {rh-openstack} deployments use OVS-DPDK features by providing access to link:https://doc.dpdk.org/guides/prog_guide/poll_mode_drv.html[poll mode drivers].
+If your {rh-openstack-first} deployment has Open vSwitch with the Data Plane Development Kit (OVS-DPDK) enabled, you can install an {product-title} cluster on it. Clusters that run on such {rh-openstack} deployments use OVS-DPDK features by providing access to link:https://doc.dpdk.org/guides/prog_guide/poll_mode_drv.html[poll mode drivers].

 == Prerequisites

@@ -23,7 +23,7 @@ processes.

 * Configure your {rh-openstack} OVS-DPDK deployment according to link:https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.2/html/network_functions_virtualization_planning_and_configuration_guide/part-dpdk-configure[Configuring an OVS-DPDK deployment] in the Network Functions Virtualization Planning and Configuration Guide.

-** You must complete link:https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.2/html/network_functions_virtualization_planning_and_configuration_guide/part-dpdk-configure#p-ovs-dpdk-flavor-deploy-instance[Creating a flavor and deploying an instance for OVS-DPDK] before you install a cluster on {rh-openstack}.
+** You must complete link:https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.2/html/network_functions_virtualization_planning_and_configuration_guide/part-dpdk-configure#p-ovs-dpdk-flavor-deploy-instance[Creating a flavor and deploying an instance for OVS-DPDK] before you install a cluster on {rh-openstack}.

 include::modules/installation-osp-default-deployment.adoc[leveloffset=+1]
 include::modules/installation-osp-control-compute-machines.adoc[leveloffset=+2]
@@ -71,7 +71,7 @@ include::modules/cluster-telemetry.adoc[leveloffset=+1]
 [role="_additional-resources"]
 [id="additional-resources_installing-openstack-installer-ovs-dpdk"]
 == Additional resources
-* xref:../../scalability_and_performance/cnf-performance-addon-operator-for-low-latency-nodes.adoc#cnf-understanding-low-latency_cnf-master[Low latency tuning of OpenShift Container Platform nodes]
+* xref:../../scalability_and_performance/cnf-low-latency-tuning.adoc#cnf-understanding-low-latency_cnf-master[Low latency tuning of OpenShift Container Platform nodes]

 [id="next-steps_installing-openstack-installer-ovs-dpdk"]
 == Next steps

installing/installing_openstack/installing-openstack-user-sr-iov.adoc

Lines changed: 1 addition & 1 deletion
@@ -84,7 +84,7 @@ include::modules/cluster-telemetry.adoc[leveloffset=+1]
 [role="_additional-resources"]
 [id="additional-resources_installing-openstack-user-sr-iov"]
 == Additional resources
-* xref:../../scalability_and_performance/cnf-performance-addon-operator-for-low-latency-nodes.html#cnf-understanding-low-latency_cnf-master[Low latency tuning of OpenShift Container Platform nodes]
+* xref:../../scalability_and_performance/cnf-low-latency-tuning.html#cnf-understanding-low-latency_cnf-master[Low latency tuning of OpenShift Container Platform nodes]

 [id="next-steps_installing-user-sr-iov"]
 == Next steps

modules/accessing-an-example-cluster-node-tuning-operator-specification.adoc

Lines changed: 2 additions & 2 deletions
@@ -11,7 +11,7 @@ Use this process to access an example Node Tuning Operator specification.

 .Procedure

-. Run:
+* Run the following command to access an example Node Tuning Operator specification:
 +
 [source,terminal]
 ----
@@ -22,5 +22,5 @@ The default CR is meant for delivering standard node-level tuning for the {produ

 [WARNING]
 ====
-While in certain situations the support for pod labels can be a convenient way of automatically delivering required tuning, this practice is discouraged and strongly advised against, especially in large-scale clusters. The default Tuned CR ships without pod label matching. If a custom profile is created with pod label matching, then the functionality will be enabled at that time. The pod label functionality might be deprecated in future versions of the Node Tuning Operator.
+While in certain situations the support for pod labels can be a convenient way of automatically delivering required tuning, this practice is discouraged and strongly advised against, especially in large-scale clusters. The default Tuned CR ships without pod label matching. If a custom profile is created with pod label matching, then the functionality will be enabled at that time. The pod label functionality will be deprecated in future versions of the Node Tuning Operator.
 ====
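
The command inside the module's `[source,terminal]` listing falls outside these hunks and is not shown in this commit. As a sketch only, assuming the module queries the default Tuned custom resource in the Node Tuning Operator namespace, the renamed step might read:

----
* Run the following command to access an example Node Tuning Operator specification:
+
[source,terminal]
$ oc get Tuned/default -o yaml -n openshift-cluster-node-tuning-operator
----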

modules/cnf-installing-the-performance-addon-operator.adoc

Lines changed: 0 additions & 150 deletions
This file was deleted.

modules/cnf-understanding-low-latency.adoc

Lines changed: 14 additions & 41 deletions
@@ -9,44 +9,17 @@
 The emergence of Edge computing in the area of Telco / 5G plays a key role in
 reducing latency and congestion problems and improving application performance.

-Simply put, latency determines how fast data (packets) moves from the sender to
-receiver and returns to the sender after processing by the receiver. Obviously,
-maintaining a network architecture with the lowest possible delay of latency
-speeds is key for meeting the network performance requirements of 5G. Compared
-to 4G technology, with an average latency of 50ms, 5G is targeted to reach
-latency numbers of 1ms or less. This reduction in latency boosts wireless
-throughput by a factor of 10.
-
-Many of the deployed applications in the Telco space require low latency that
-can only tolerate zero packet loss. Tuning for zero packet loss helps mitigate
-the inherent issues that degrade network performance. For more information, see
-link:https://www.redhat.com/en/blog/tuning-zero-packet-loss-red-hat-openstack-platform-part-1[Tuning
-for Zero Packet Loss in {rh-openstack-first}].
-
-The Edge computing initiative also comes in to play for reducing latency rates.
-Think of it as literally being on the edge of the cloud and closer to the user.
-This greatly reduces the distance between the user and distant data centers,
-resulting in reduced application response times and performance latency.
-
-Administrators must be able to manage their many Edge sites and local services
-in a centralized way so that all of the deployments can run at the lowest
-possible management cost. They also need an easy way to deploy and configure
-certain nodes of their cluster for real-time low latency and high-performance
-purposes. Low latency nodes are useful for applications such as Cloud-native
-Network Functions (CNF) and Data Plane Development Kit (DPDK).
-
-{product-title} currently provides mechanisms to tune software on an
-{product-title} cluster for real-time running and low latency (around <20
-microseconds reaction time). This includes tuning the kernel and {product-title}
-set values, installing a kernel, and reconfiguring the machine. But this method
-requires setting up four different Operators and performing many configurations
-that, when done manually, is complex and could be prone to mistakes.
-
-{product-title} uses the Node Tuning Operator to implement automatic
-tuning to achieve low latency performance for OpenShift applications.
-The cluster administrator uses this performance profile configuration that makes
-it easier to make these changes in a more reliable way. The administrator can
-specify whether to update the kernel to kernel-rt, reserve CPUs for cluster and operating system housekeeping duties, including pod infra containers, and isolate CPUs for application containers to run the workloads.
+Simply put, latency determines how fast data (packets) moves from the sender to receiver and returns to the sender after processing by the receiver. Maintaining a network architecture with the lowest possible delay of latency speeds is key for meeting the network performance requirements of 5G. Compared to 4G technology, with an average latency of 50 ms, 5G is targeted to reach latency numbers of 1 ms or less. This reduction in latency boosts wireless throughput by a factor of 10.
+
+Many of the deployed applications in the Telco space require low latency that can only tolerate zero packet loss. Tuning for zero packet loss helps mitigate the inherent issues that degrade network performance. For more information, see link:https://www.redhat.com/en/blog/tuning-zero-packet-loss-red-hat-openstack-platform-part-1[Tuning for Zero Packet Loss in {rh-openstack-first}].
+
+The Edge computing initiative also comes in to play for reducing latency rates. Think of it as being on the edge of the cloud and closer to the user. This greatly reduces the distance between the user and distant data centers, resulting in reduced application response times and performance latency.
+
+Administrators must be able to manage their many Edge sites and local services in a centralized way so that all of the deployments can run at the lowest possible management cost. They also need an easy way to deploy and configure certain nodes of their cluster for real-time low latency and high-performance purposes. Low latency nodes are useful for applications such as Cloud-native Network Functions (CNF) and Data Plane Development Kit (DPDK).
+
+{product-title} currently provides mechanisms to tune software on an {product-title} cluster for real-time running and low latency (around <20 microseconds reaction time). This includes tuning the kernel and {product-title} set values, installing a kernel, and reconfiguring the machine. But this method requires setting up four different Operators and performing many configurations that, when done manually, is complex and could be prone to mistakes.
+
+{product-title} uses the Node Tuning Operator to implement automatic tuning to achieve low latency performance for {product-title} applications. The cluster administrator uses this performance profile configuration that makes it easier to make these changes in a more reliable way. The administrator can specify whether to update the kernel to kernel-rt, reserve CPUs for cluster and operating system housekeeping duties, including pod infra containers, and isolate CPUs for application containers to run the workloads.

 {product-title} also supports workload hints for the Node Tuning Operator that can tune the `PerformanceProfile` to meet the demands of different industry environments. Workload hints are available for `highPowerConsumption` (very low latency at the cost of increased power consumption) and `realtime` (priority given to optimum latency). A combination of `true/false` settings for these hints can be used to deal with application-specific workload profiles and requirements.

@@ -58,6 +31,6 @@ Workload hints simplify the fine-tuning of performance to industry sector settin

 In an ideal world, all of those would be prioritized: in real life, some come at the expense of others. The Node Tuning Operator is now aware of the workload expectations and better able to meet the demands of the workload. The cluster admin can now specify into which use case that workload falls. The Node Tuning Operator uses the `PerformanceProfile` to fine tune the performance settings for the workload.

-The environment in which an application is operating influences its behavior. For a typical data center with no strict latency requirements, only minimal default tuning is needed that enables CPU partitioning for some high performance workload pods.
-For data centers and workloads where latency is a higher priority, measures are still taken to optimize power consumption.
-The most complicated cases are clusters close to latency-sensitive equipment such as manufacturing machinery and software-defined radios. This last class of deployment is often referred to as Far edge. For Far edge deployments, ultra-low latency is the ultimate priority, and is achieved at the expense of power management.
+The environment in which an application is operating influences its behavior. For a typical data center with no strict latency requirements, only minimal default tuning is needed that enables CPU partitioning for some high performance workload pods. For data centers and workloads where latency is a higher priority, measures are still taken to optimize power consumption. The most complicated cases are clusters close to latency-sensitive equipment such as manufacturing machinery and software-defined radios. This last class of deployment is often referred to as Far edge. For Far edge deployments, ultra-low latency is the ultimate priority, and is achieved at the expense of power management.
+
+In {product-title} version 4.10 and previous versions, the Performance Addon Operator was used to implement automatic tuning to achieve low latency performance. Now this functionality is part of the Node Tuning Operator.
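
The rewritten module describes the performance profile settings (kernel-rt, reserved and isolated CPUs) and the `highPowerConsumption`/`realtime` workload hints in prose only. Below is a minimal, hypothetical `PerformanceProfile` sketch illustrating those options; it is not part of this commit. The field names follow the `performance.openshift.io/v2` API, and the exact schema and values should be verified against the cluster's CRD.

[source,yaml]
----
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: example-performanceprofile   # hypothetical name, not part of this commit
spec:
  cpu:
    reserved: "0-3"    # CPUs kept for cluster and OS housekeeping, including pod infra containers
    isolated: "4-15"   # CPUs dedicated to application containers running the workloads
  realTimeKernel:
    enabled: true      # update the node kernel to kernel-rt
  workloadHints:
    highPowerConsumption: false   # true trades increased power consumption for very low latency
    realTime: true                # priority given to optimum latency
  nodeSelector:
    node-role.kubernetes.io/worker-cnf: ""   # hypothetical node role label
----

In this sketch, `realTime: true` with `highPowerConsumption: false` corresponds to the "priority given to optimum latency" case described in the module, while setting both to `true` would match the Far edge, ultra-low-latency case achieved at the expense of power management.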
