
Commit d11a586

Author: Christopher Tauchen
Commit message: Global terminology fix: network interface card and vNIC
Parent: 886a92f

16 files changed (+24, −24 lines)

modules/cnf-cpu-infra-container.adoc

Lines changed: 4 additions & 4 deletions
@@ -8,7 +8,7 @@
 Generic housekeeping and workload tasks use CPUs in a way that may impact latency-sensitive processes. By default, the container runtime uses all online CPUs to run all containers together, which can result in context switches and spikes in latency. Partitioning the CPUs prevents noisy processes from interfering with latency-sensitive processes by separating them from each other. The following table describes how processes run on a CPU after you have tuned the node using the Performance Add-On Operator:

 .Process' CPU assignments
-[%header,cols=2*]
+[%header,cols=2*]
 |===
 |Process type
 |Details
@@ -34,7 +34,7 @@ Generic housekeeping and workload tasks use CPUs in a way that may impact latenc

 The exact partitioning pattern to use depends on many factors like hardware, workload characteristics and the expected system load. Some sample use cases are as follows:

-* If the latency-sensitive workload uses specific hardware, such as a network interface card (NIC), ensure that the CPUs in the isolated pool are as close as possible to this hardware. At a minimum, you should place the workload in the same Non-Uniform Memory Access (NUMA) node.
+* If the latency-sensitive workload uses specific hardware, such as a network interface controller (NIC), ensure that the CPUs in the isolated pool are as close as possible to this hardware. At a minimum, you should place the workload in the same Non-Uniform Memory Access (NUMA) node.

 * The reserved pool is used for handling all interrupts. When depending on system networking, allocate a sufficiently-sized reserve pool to handle all the incoming packet interrupts. In {product-version} and later versions, workloads can optionally be labeled as sensitive.

@@ -45,7 +45,7 @@ The decision regarding which specific CPUs should be used for reserved and isola
 The reserved and isolated CPU pools must not overlap and together must span all available cores in the worker node.
 ====

-To ensure that housekeeping tasks and workloads do not interfere with each other, specify two groups of CPUs in the `spec` section of the performance profile.
+To ensure that housekeeping tasks and workloads do not interfere with each other, specify two groups of CPUs in the `spec` section of the performance profile.

 * `isolated` - Specifies the CPUs for the application container workloads. These CPUs have the lowest latency. Processes in this group have no interruptions and can, for example, reach much higher DPDK zero packet loss bandwidth.

@@ -71,5 +71,5 @@ spec:
   node-role.kubernetes.io/worker: ""
 ----
 <1> Specify which CPUs are for infra containers to perform cluster and operating system housekeeping duties.
-<2> Specify which CPUs are for application containers to run workloads.
+<2> Specify which CPUs are for application containers to run workloads.
 <3> Optional: Specify a node selector to apply the performance profile to specific nodes.
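
To make the two CPU groups concrete, here is a minimal sketch of such a performance profile, assuming the Performance Addon Operator's `PerformanceProfile` API; the API version, profile name, and CPU ranges are illustrative and depend on your release and hardware layout:

[source,yaml]
----
apiVersion: performance.openshift.io/v2   # API version varies by release
kind: PerformanceProfile
metadata:
  name: example-profile                   # hypothetical name
spec:
  cpu:
    reserved: "0-3"    # infra/housekeeping CPUs, sized to absorb interrupts
    isolated: "4-15"   # workload CPUs; together the two pools span all cores
  nodeSelector:
    node-role.kubernetes.io/worker: ""    # optional: target specific nodes
----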

modules/cnf-provisioning-deploying-a-distributed-unit-(du)-manually.adoc

Lines changed: 1 addition & 1 deletion
@@ -33,7 +33,7 @@ Running cell application on COTS hardware requires the following features to be
 * AVX-512 instruction set (for Flexran and / or FPGA implementation)
 * Additional features depending on the RAN Operator requirements

-Accessing hardware acceleration devices and high throughput network interface cards by virtualized software applications
+Accessing hardware acceleration devices and high throughput network interface controllers by virtualized software applications
 requires use of SR-IOV and Passthrough PCI device virtualization.

 In addition to the compute and acceleration requirements, DUs operate on multiple internal and external networks.

modules/cnf-provisioning-deploying-a-distributed-unit-manually.adoc

Lines changed: 1 addition & 1 deletion
@@ -33,7 +33,7 @@ Running cell application on COTS hardware requires the following features to be
 * AVX-512 instruction set (for Flexran and / or FPGA implementation)
 * Additional features depending on the RAN Operator requirements

-Accessing hardware acceleration devices and high throughput network interface cards by virtualized software applications
+Accessing hardware acceleration devices and high throughput network interface controllers by virtualized software applications
 requires use of SR-IOV and Passthrough PCI device virtualization.

 In addition to the compute and acceleration requirements, DUs operate on multiple internal and external networks.

modules/cnf-reducing-netqueues-using-pao.adoc

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@
 [id="reducing-nic-queues-using-the-performance-addon-operator_{context}"]
 = Reducing NIC queues using the Performance Addon Operator

-The Performance Addon Operator allows you to adjust the Network Interface Card (NIC) queue count for each network device by configuring the performance profile. Device network queues allows the distribution of packets among different physical queues and each queue gets a separate thread for packet processing.
+The Performance Addon Operator allows you to adjust the network interface controller (NIC) queue count for each network device by configuring the performance profile. Device network queues allows the distribution of packets among different physical queues and each queue gets a separate thread for packet processing.

 In real-time or low latency systems, all the unnecessary interrupt request lines (IRQs) pinned to the isolated CPUs must be moved to reserved or housekeeping CPUs.
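
As a rough sketch of what this looks like in practice, the performance profile's `net` section can pin the device queue count to the reserved CPU count; the field names follow the Performance Addon Operator API as we understand it, and the profile name and CPU ranges are illustrative:

[source,yaml]
----
apiVersion: performance.openshift.io/v2   # API version varies by release
kind: PerformanceProfile
metadata:
  name: example-profile                   # hypothetical name
spec:
  cpu:
    reserved: "0-1"   # illustrative range
    isolated: "2-7"   # illustrative range
  net:
    userLevelNetworking: true   # reduce NIC queues to the reserved CPU count
----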

modules/installation-requirements-user-infra.adoc

Lines changed: 2 additions & 2 deletions
@@ -326,7 +326,7 @@ On your IBM Power instance, set up:
 === Network for the PowerVM guest virtual machines

 * Virtualized by the Virtual I/O Server using Shared Ethernet Adapter
-* Virtualized by the Virtual I/O Server using IBM VNIC
+* Virtualized by the Virtual I/O Server using IBM vNIC

 [discrete]
 === Storage / main memory
@@ -363,7 +363,7 @@ On your IBM Power instance, set up:
 === Network for the PowerVM guest virtual machines

 * Virtualized by the Virtual I/O Server using Shared Ethernet Adapter
-* Virtualized by the Virtual I/O Server using IBM VNIC
+* Virtualized by the Virtual I/O Server using IBM vNIC

 [discrete]
 === Storage / main memory

modules/nw-egress-router-about.adoc

Lines changed: 1 addition & 1 deletion
@@ -100,7 +100,7 @@ $ openstack port set --allowed-address \

 {rh-virtualization-first}::

-If you are using link:https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html/administration_guide/sect-virtual_network_interface_cards#Explanation_of_Settings_in_the_VM_Interface_Profile_Window[{rh-virtualization}], you must select *No Network Filter* for the Virtual Network Interface Card (vNIC).
+If you are using link:https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html/administration_guide/sect-virtual_network_interface_cards#Explanation_of_Settings_in_the_VM_Interface_Profile_Window[{rh-virtualization}], you must select *No Network Filter* for the Virtual network interface controller (vNIC).

 VMware vSphere::
modules/nw-ptp-introduction.adoc

Lines changed: 2 additions & 2 deletions
@@ -7,7 +7,7 @@

 The Precision Time Protocol (PTP) is used to synchronize clocks in a network. When used in conjunction with hardware support, PTP is capable of sub-microsecond accuracy, and is more accurate than Network Time Protocol (NTP).

-The `linuxptp` package includes the `ptp4l` and `phc2sys` programs for clock synchronization. `ptp4l` implements the PTP boundary clock and ordinary clock. `ptp4l` synchronizes the PTP hardware clock to the source clock with hardware time stamping and synchronizes the system clock to the source clock with software time stamping. `phc2sys` is used for hardware time stamping to synchronize the system clock to the PTP hardware clock on the network interface card (NIC).
+The `linuxptp` package includes the `ptp4l` and `phc2sys` programs for clock synchronization. `ptp4l` implements the PTP boundary clock and ordinary clock. `ptp4l` synchronizes the PTP hardware clock to the source clock with hardware time stamping and synchronizes the system clock to the source clock with software time stamping. `phc2sys` is used for hardware time stamping to synchronize the system clock to the PTP hardware clock on the network interface controller (NIC).

 [id="ptp-elements_{context}"]
 == Elements of a PTP domain
@@ -18,7 +18,7 @@ Grandmaster clock:: The grandmaster clock provides standard time information to

 Ordinary clock:: The ordinary clock has a single port connection that can play the role of source or destination clock, depending on its position in the network. The ordinary clock can read and write time stamps.

-Boundary clock:: The boundary clock has ports in two or more communication paths and can be a source and a destination to other destination clocks at the same time. The boundary clock works as a destination clock upstream. The destination clock receives the timing message, adjusts for delay, and then creates a new source time signal to pass down the network. The boundary clock produces a new timing packet that is still correctly synced with the source clock and can reduce the number of connected devices reporting directly to the source clock.
+Boundary clock:: The boundary clock has ports in two or more communication paths and can be a source and a destination to other destination clocks at the same time. The boundary clock works as a destination clock upstream. The destination clock receives the timing message, adjusts for delay, and then creates a new source time signal to pass down the network. The boundary clock produces a new timing packet that is still correctly synced with the source clock and can reduce the number of connected devices reporting directly to the source clock.

 [id="ptp-advantages-over-ntp_{context}"]
 == Advantages of PTP over NTP
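
For orientation, a minimal manual run of the two `linuxptp` programs might look like the following; the interface name is an assumption, and on a cluster these processes are typically managed by an Operator rather than started by hand:

[source,terminal]
----
# Start an ordinary clock on one interface, printing messages to stdout (-m).
$ sudo ptp4l -i ens1f0 -m

# Synchronize the system clock to the PTP hardware clock on that NIC;
# -s names the source device and -w waits until ptp4l is synchronized.
$ sudo phc2sys -s ens1f0 -w -m
----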

modules/nw-sriov-configuring-device.adoc

Lines changed: 1 addition & 1 deletion
@@ -68,7 +68,7 @@ Only SR-IOV network devices on selected nodes are configured. The SR-IOV
 Container Network Interface (CNI) plug-in and device plug-in are deployed only on selected nodes.
 <5> Optional: Specify an integer value between `0` and `99`. A smaller number gets higher priority, so a priority of `10` is higher than a priority of `99`. The default value is `99`.
 <6> Optional: Specify a value for the maximum transmission unit (MTU) of the virtual function. The maximum MTU value can vary for different NIC models.
-<7> Specify the number of the virtual functions (VF) to create for the SR-IOV physical network device. For an Intel Network Interface Card (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than `128`.
+<7> Specify the number of the virtual functions (VF) to create for the SR-IOV physical network device. For an Intel network interface controller (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than `128`.
 <8> The `nicSelector` mapping selects the Ethernet device for the Operator to configure. You do not need to specify values for all the parameters. It is recommended to identify the Ethernet adapter with enough precision to minimize the possibility of selecting an Ethernet device unintentionally.
 If you specify `rootDevices`, you must also specify a value for `vendor`, `deviceID`, or `pfNames`.
 If you specify both `pfNames` and `rootDevices` at the same time, ensure that they point to an identical device.
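
Pulling these callouts together, a minimal sketch of the policy they annotate, assuming the SR-IOV Network Operator's `SriovNetworkNodePolicy` API; the policy name, resource name, and interface name are illustrative:

[source,yaml]
----
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: policy-example                    # hypothetical name
  namespace: openshift-sriov-network-operator
spec:
  resourceName: example_vfs               # illustrative resource name
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  priority: 10      # lower number means higher priority (see <5>)
  mtu: 9000         # maximum varies by NIC model (see <6>)
  numVfs: 8         # must not exceed what the device supports (see <7>)
  nicSelector:      # identify the device precisely (see <8>)
    pfNames: ["ens1f0"]                   # illustrative interface name
----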

modules/optimizing-mtu-networking.adoc

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@
 [id="optimizing-mtu_{context}"]
 = Optimizing the MTU for your network

-There are two important maximum transmission units (MTUs): the network interface card (NIC) MTU and the cluster network MTU.
+There are two important maximum transmission units (MTUs): the network interface controller (NIC) MTU and the cluster network MTU.

 The NIC MTU is only configured at the time of {product-title} installation. The MTU must be less than or equal to the maximum supported value of the NIC of your network. If you are optimizing for throughput, choose the largest possible value. If you are optimizing for lowest latency, choose a lower value.
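
As an illustration of where the cluster network MTU lives, a sketch of a cluster network configuration manifest; this assumes the OpenShift SDN plug-in, whose VXLAN overlay adds roughly 50 bytes of overhead, so the value shown is a 1500-byte NIC MTU minus that allowance:

[source,yaml]
----
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OpenShiftSDN
    openshiftSDNConfig:
      mtu: 1450   # illustrative: 1500-byte NIC MTU minus ~50 bytes VXLAN overhead
----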

modules/virt-add-boot-order-web.adoc

Lines changed: 1 addition & 1 deletion
@@ -20,7 +20,7 @@ Add items to a boot order list by using the web console.

 . Click the pencil icon that is located on the right side of *Boot Order*. If a YAML configuration does not exist, or if this is the first time that you are creating a boot order list, the following message displays: *No resource selected. VM will attempt to boot from disks by order of appearance in YAML file.*

-. Click *Add Source* and select a bootable disk or network interface card (NIC) for the virtual machine.
+. Click *Add Source* and select a bootable disk or network interface controller (NIC) for the virtual machine.

 . Add any additional disks or NICs to the boot order list.
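
The same ordering the console edits can be expressed directly in the virtual machine's YAML; a minimal sketch, assuming the KubeVirt `VirtualMachine` API, with the VM name and disk image as placeholders:

[source,yaml]
----
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm               # hypothetical name
spec:
  running: false
  template:
    spec:
      domain:
        devices:
          disks:
          - name: rootdisk
            bootOrder: 1         # disk tried first
            disk:
              bus: virtio
          interfaces:
          - name: default
            bootOrder: 2         # NIC tried second
            masquerade: {}
        resources:
          requests:
            memory: 1Gi
      networks:
      - name: default
        pod: {}
      volumes:
      - name: rootdisk
        containerDisk:
          image: quay.io/kubevirt/cirros-container-disk-demo   # illustrative image
----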
