modules/cnf-cpu-infra-container.adoc (+4 -4)
@@ -8,7 +8,7 @@
Generic housekeeping and workload tasks use CPUs in a way that may impact latency-sensitive processes. By default, the container runtime uses all online CPUs to run all containers together, which can result in context switches and spikes in latency. Partitioning the CPUs prevents noisy processes from interfering with latency-sensitive processes by separating them from each other. The following table describes how processes run on a CPU after you have tuned the node using the Performance Add-On Operator:

.Process' CPU assignments
-[%header,cols=2*]
+[%header,cols=2*]
|===
|Process type
|Details
@@ -34,7 +34,7 @@ Generic housekeeping and workload tasks use CPUs in a way that may impact latenc
The exact partitioning pattern to use depends on many factors like hardware, workload characteristics, and the expected system load. Some sample use cases are as follows:

-* If the latency-sensitive workload uses specific hardware, such as a network interface card (NIC), ensure that the CPUs in the isolated pool are as close as possible to this hardware. At a minimum, you should place the workload in the same Non-Uniform Memory Access (NUMA) node.
+* If the latency-sensitive workload uses specific hardware, such as a network interface controller (NIC), ensure that the CPUs in the isolated pool are as close as possible to this hardware. At a minimum, you should place the workload in the same Non-Uniform Memory Access (NUMA) node.

* The reserved pool is used for handling all interrupts. If your workload depends on system networking, allocate a sufficiently sized reserved pool to handle all the incoming packet interrupts. In {product-version} and later versions, workloads can optionally be labeled as sensitive.
@@ -45,7 +45,7 @@ The decision regarding which specific CPUs should be used for reserved and isola
The reserved and isolated CPU pools must not overlap and together must span all available cores in the worker node.
====

-To ensure that housekeeping tasks and workloads do not interfere with each other, specify two groups of CPUs in the `spec` section of the performance profile.
+To ensure that housekeeping tasks and workloads do not interfere with each other, specify two groups of CPUs in the `spec` section of the performance profile.

* `isolated` - Specifies the CPUs for the application container workloads. These CPUs have the lowest latency. Processes in this group have no interruptions and can, for example, reach much higher DPDK zero packet loss bandwidth.
@@ -71,5 +71,5 @@ spec:
node-role.kubernetes.io/worker: ""
----
<1> Specify which CPUs are for infra containers to perform cluster and operating system housekeeping duties.
-<2> Specify which CPUs are for application containers to run workloads.
+<2> Specify which CPUs are for application containers to run workloads.
<3> Optional: Specify a node selector to apply the performance profile to specific nodes.
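For orientation, the callouts above annotate the `cpu` stanza of a `PerformanceProfile` resource. The following is a sketch only: the profile name and CPU ranges are placeholders, not values from this diff, and the `apiVersion` can differ by release.

[source,yaml]
----
apiVersion: performance.openshift.io/v2  # version may differ by release
kind: PerformanceProfile
metadata:
  name: cpu-partitioning        # placeholder name
spec:
  cpu:
    reserved: "0-3"             # callout 1: housekeeping CPUs (placeholder range)
    isolated: "4-15"            # callout 2: workload CPUs (placeholder range)
  nodeSelector:                 # callout 3: optional node selector
    node-role.kubernetes.io/worker: ""
----

The two pools together must cover every core on the node, so the placeholder ranges assume a 16-core worker.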
= Reducing NIC queues using the Performance Addon Operator
-The Performance Addon Operator allows you to adjust the Network Interface Card (NIC) queue count for each network device by configuring the performance profile. Device network queues allow the distribution of packets among different physical queues, and each queue gets a separate thread for packet processing.
+The Performance Addon Operator allows you to adjust the network interface controller (NIC) queue count for each network device by configuring the performance profile. Device network queues allow the distribution of packets among different physical queues, and each queue gets a separate thread for packet processing.
In real-time or low latency systems, all the unnecessary interrupt request lines (IRQs) pinned to the isolated CPUs must be moved to reserved or housekeeping CPUs.
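A queue-count adjustment of this kind is typically expressed in the profile's `net` stanza. This is a hedged sketch under that assumption; the interface name and CPU ranges are placeholders:

[source,yaml]
----
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: manual                  # placeholder name
spec:
  cpu:
    reserved: "0-3"             # placeholder range
    isolated: "4-15"            # placeholder range
  net:
    userLevelNetworking: true   # reduces device queue count to the reserved CPU count
    devices:
    - interfaceName: "eth0"     # placeholder; select the target NIC
----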
modules/nw-egress-router-about.adoc (+1 -1)
@@ -100,7 +100,7 @@ $ openstack port set --allowed-address \
{rh-virtualization-first}::

-If you are using link:https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html/administration_guide/sect-virtual_network_interface_cards#Explanation_of_Settings_in_the_VM_Interface_Profile_Window[{rh-virtualization}], you must select *No Network Filter* for the Virtual Network Interface Card (vNIC).
+If you are using link:https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html/administration_guide/sect-virtual_network_interface_cards#Explanation_of_Settings_in_the_VM_Interface_Profile_Window[{rh-virtualization}], you must select *No Network Filter* for the virtual network interface controller (vNIC).
modules/nw-ptp-introduction.adoc (+2 -2)
@@ -7,7 +7,7 @@
The Precision Time Protocol (PTP) is used to synchronize clocks in a network. When used in conjunction with hardware support, PTP is capable of sub-microsecond accuracy, and is more accurate than Network Time Protocol (NTP).

-The `linuxptp` package includes the `ptp4l` and `phc2sys` programs for clock synchronization. `ptp4l` implements the PTP boundary clock and ordinary clock. `ptp4l` synchronizes the PTP hardware clock to the source clock with hardware time stamping and synchronizes the system clock to the source clock with software time stamping. `phc2sys` is used for hardware time stamping to synchronize the system clock to the PTP hardware clock on the network interface card (NIC).
+The `linuxptp` package includes the `ptp4l` and `phc2sys` programs for clock synchronization. `ptp4l` implements the PTP boundary clock and ordinary clock. `ptp4l` synchronizes the PTP hardware clock to the source clock with hardware time stamping and synchronizes the system clock to the source clock with software time stamping. `phc2sys` is used for hardware time stamping to synchronize the system clock to the PTP hardware clock on the network interface controller (NIC).

[id="ptp-elements_{context}"]
== Elements of a PTP domain
@@ -18,7 +18,7 @@ Grandmaster clock:: The grandmaster clock provides standard time information to
Ordinary clock:: The ordinary clock has a single port connection that can play the role of source or destination clock, depending on its position in the network. The ordinary clock can read and write time stamps.

-Boundary clock:: The boundary clock has ports in two or more communication paths and can be a source and a destination to other destination clocks at the same time. The boundary clock works as a destination clock upstream. The destination clock receives the timing message, adjusts for delay, and then creates a new source time signal to pass down the network. The boundary clock produces a new timing packet that is still correctly synced with the source clock and can reduce the number of connected devices reporting directly to the source clock.
+Boundary clock:: The boundary clock has ports in two or more communication paths and can be a source and a destination to other destination clocks at the same time. The boundary clock works as a destination clock upstream. The destination clock receives the timing message, adjusts for delay, and then creates a new source time signal to pass down the network. The boundary clock produces a new timing packet that is still correctly synced with the source clock and can reduce the number of connected devices reporting directly to the source clock.
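When the PTP Operator manages `linuxptp`, the `ptp4l` and `phc2sys` options described above are supplied through a `PtpConfig` resource. A rough sketch under that assumption; the profile name, interface, option strings, and priority are placeholders:

[source,yaml]
----
apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
  name: ordinary-clock          # placeholder name
  namespace: openshift-ptp
spec:
  profile:
  - name: ordinary-clock
    interface: ens5f0           # placeholder NIC
    ptp4lOpts: "-2 -s"          # run as an ordinary (destination) clock
    phc2sysOpts: "-a -r"        # sync the system clock to the PTP hardware clock
  recommend:
  - profile: ordinary-clock
    priority: 10
    match:
    - nodeLabel: node-role.kubernetes.io/worker
----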
modules/nw-sriov-configuring-device.adoc (+1 -1)
@@ -68,7 +68,7 @@ Only SR-IOV network devices on selected nodes are configured. The SR-IOV
Container Network Interface (CNI) plug-in and device plug-in are deployed only on selected nodes.
<5> Optional: Specify an integer value between `0` and `99`. A smaller number gets higher priority, so a priority of `10` is higher than a priority of `99`. The default value is `99`.
<6> Optional: Specify a value for the maximum transmission unit (MTU) of the virtual function. The maximum MTU value can vary for different NIC models.
-<7> Specify the number of virtual functions (VFs) to create for the SR-IOV physical network device. For an Intel Network Interface Card (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than `128`.
+<7> Specify the number of virtual functions (VFs) to create for the SR-IOV physical network device. For an Intel network interface controller (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than `128`.
<8> The `nicSelector` mapping selects the Ethernet device for the Operator to configure. You do not need to specify values for all the parameters. It is recommended to identify the Ethernet adapter with enough precision to minimize the possibility of selecting an Ethernet device unintentionally.
If you specify `rootDevices`, you must also specify a value for `vendor`, `deviceID`, or `pfNames`.
If you specify both `pfNames` and `rootDevices` at the same time, ensure that they point to an identical device.
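The callouts above describe fields of an `SriovNetworkNodePolicy` resource. A minimal sketch for orientation; the resource name, node selector label, interface name, and counts are placeholders rather than values from this diff:

[source,yaml]
----
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: policy-1                # placeholder name
  namespace: openshift-sriov-network-operator
spec:
  resourceName: intelnics       # placeholder resource name
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  priority: 99                  # callout 5: lower number = higher priority
  mtu: 9000                     # callout 6: optional VF MTU
  numVfs: 8                     # callout 7: must not exceed the device's VF limit
  nicSelector:                  # callout 8: identify the device precisely
    pfNames: ["ens2f0"]         # placeholder interface
  deviceType: netdevice
----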
modules/optimizing-mtu-networking.adoc (+1 -1)
@@ -5,7 +5,7 @@
[id="optimizing-mtu_{context}"]
= Optimizing the MTU for your network

-There are two important maximum transmission units (MTUs): the network interface card (NIC) MTU and the cluster network MTU.
+There are two important maximum transmission units (MTUs): the network interface controller (NIC) MTU and the cluster network MTU.
The NIC MTU is only configured at the time of {product-title} installation. The MTU must be less than or equal to the maximum supported value of the NIC of your network. If you are optimizing for throughput, choose the largest possible value. If you are optimizing for lowest latency, choose a lower value.
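The cluster network MTU is typically set through the cluster `Network` operator configuration. As a sketch only: the overlay type and the `mtu` value here are assumptions, and the cluster MTU must leave headroom below the NIC MTU for encapsulation overhead.

[source,yaml]
----
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes          # assumed overlay type
    ovnKubernetesConfig:
      mtu: 8900                  # placeholder: a NIC MTU of 9000 minus overlay overhead
----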
modules/virt-add-boot-order-web.adoc (+1 -1)
@@ -20,7 +20,7 @@ Add items to a boot order list by using the web console.
. Click the pencil icon that is located on the right side of *Boot Order*. If a YAML configuration does not exist, or if this is the first time that you are creating a boot order list, the following message displays: *No resource selected. VM will attempt to boot from disks by order of appearance in YAML file.*

-. Click *Add Source* and select a bootable disk or network interface card (NIC) for the virtual machine.
+. Click *Add Source* and select a bootable disk or network interface controller (NIC) for the virtual machine.
. Add any additional disks or NICs to the boot order list.
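Behind the web console, these selections map to `bootOrder` fields on the virtual machine's devices. A sketch trimmed to the boot-relevant fields, assuming a KubeVirt-style `VirtualMachine` spec with placeholder device names:

[source,yaml]
----
apiVersion: kubevirt.io/v1       # version may differ by release
kind: VirtualMachine
metadata:
  name: example-vm               # placeholder name
spec:
  template:
    spec:
      domain:
        devices:
          disks:
          - name: rootdisk       # placeholder disk; tried first
            bootOrder: 1
            disk:
              bus: virtio
          interfaces:
          - name: default        # placeholder NIC; tried second
            bootOrder: 2
            masquerade: {}
----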