Commit eb673db

Merge pull request #41484 from maxwelldb/osp-ovs-dpdk-worker-osdocs-3059

Authored by Bob Furu
2 parents c6f845e + a785d5b commit eb673db

5 files changed: +318 -0 lines changed

_topic_maps/_topic_map.yml

Lines changed: 2 additions & 0 deletions

@@ -343,6 +343,8 @@ Topics:
     File: installing-openstack-installer-kuryr
 - Name: Installing a cluster that supports SR-IOV compute machines on OpenStack
   File: installing-openstack-installer-sr-iov
+- Name: Installing a cluster on OpenStack that supports DPDK-connected compute machines
+  File: installing-openstack-installer-ovs-dpdk
 - Name: Installing a cluster on OpenStack on your own infrastructure
   File: installing-openstack-user
 - Name: Installing a cluster on OpenStack with Kuryr on your own infrastructure
Lines changed: 88 additions & 0 deletions

@@ -0,0 +1,88 @@
:_content-type: ASSEMBLY
[id="installing-openstack-installer-ovs-dpdk"]
= Installing a cluster on OpenStack that supports DPDK-connected compute machines
include::modules/common-attributes.adoc[]
:context: installing-openstack-installer-ovs-dpdk

toc::[]

:FeatureName: Installing a cluster on {rh-openstack} that supports DPDK-connected compute machines

include::snippets/technology-preview.adoc[]

If your {rh-openstack-first} deployment has Open vSwitch with the Data Plane Development Kit (OVS-DPDK) enabled, you can install an {product-title} cluster on it. Clusters that run on such {rh-openstack} deployments use OVS-DPDK features by providing access to link:https://doc.dpdk.org/guides/prog_guide/poll_mode_drv.html[poll mode drivers].

== Prerequisites

* Review details about the xref:../../architecture/architecture-installation.adoc#architecture-installation[{product-title} installation and update] processes.
** Verify that {product-title} {product-version} is compatible with your {rh-openstack} version by using the "Supported platforms for OpenShift clusters" section. You can also compare platform support across different versions by viewing the link:https://access.redhat.com/articles/4679401[{product-title} on {rh-openstack} support matrix].

* Have a storage service installed in {rh-openstack}, such as block storage (Cinder) or object storage (Swift). Object storage is the recommended storage technology for {product-title} registry cluster deployment. For more information, see xref:../../scalability_and_performance/optimizing-storage.adoc#optimizing-storage[Optimizing storage].

* Have the metadata service enabled in {rh-openstack}.

* Plan your {rh-openstack} OVS-DPDK deployment by referring to link:https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.2/html/network_functions_virtualization_planning_and_configuration_guide/assembly_ovsdpdk_parameters[Planning your OVS-DPDK deployment] in the Network Functions Virtualization Planning and Configuration Guide.

* Configure your {rh-openstack} OVS-DPDK deployment according to link:https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.2/html/network_functions_virtualization_planning_and_configuration_guide/part-dpdk-configure[Configuring an OVS-DPDK deployment] in the Network Functions Virtualization Planning and Configuration Guide.

** You must complete link:https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.2/html/network_functions_virtualization_planning_and_configuration_guide/part-dpdk-configure#p-ovs-dpdk-flavor-deploy-instance[Creating a flavor and deploying an instance for OVS-DPDK] before you install a cluster on {rh-openstack}.

include::modules/installation-osp-default-deployment.adoc[leveloffset=+1]
include::modules/installation-osp-control-compute-machines.adoc[leveloffset=+2]
include::modules/installation-osp-bootstrap-machine.adoc[leveloffset=+2]
include::modules/cluster-entitlements.adoc[leveloffset=+1]
include::modules/installation-osp-enabling-swift.adoc[leveloffset=+1]
include::modules/installation-osp-verifying-external-network.adoc[leveloffset=+1]
include::modules/installation-osp-describing-cloud-parameters.adoc[leveloffset=+1]
include::modules/installation-obtaining-installer.adoc[leveloffset=+1]
include::modules/installation-initializing.adoc[leveloffset=+1]
include::modules/installation-configure-proxy.adoc[leveloffset=+2]
include::modules/installation-configuration-parameters.adoc[leveloffset=+1]
include::modules/installation-osp-custom-subnet.adoc[leveloffset=+2]
include::modules/installation-osp-deploying-bare-metal-machines.adoc[leveloffset=+2]
include::modules/installation-osp-config-yaml.adoc[leveloffset=+2]
include::modules/ssh-agent-using.adoc[leveloffset=+1]
include::modules/installation-osp-accessing-api.adoc[leveloffset=+1]
include::modules/installation-osp-accessing-api-floating.adoc[leveloffset=+2]
include::modules/installation-osp-accessing-api-no-floating.adoc[leveloffset=+2]
include::modules/installation-osp-configuring-sr-iov.adoc[leveloffset=+1]
include::modules/installation-launching-installer.adoc[leveloffset=+1]
include::modules/installation-osp-verifying-cluster-status.adoc[leveloffset=+1]
include::modules/cli-logging-in-kubeadmin.adoc[leveloffset=+1]

The cluster is operational. Before you can add OVS-DPDK compute machines, though, you must perform additional tasks.

include::modules/networking-osp-enabling-metadata.adoc[leveloffset=+1]
include::modules/networking-osp-enabling-vfio-noiommu.adoc[leveloffset=+1]
include::modules/installation-osp-dpdk-binding-vfio-pci.adoc[leveloffset=+1]
include::modules/installation-osp-dpdk-exposing-host-interface.adoc[leveloffset=+1]

.Additional resources

* xref:../../networking/multiple_networks/configuring-additional-network.adoc#nw-multus-host-device-object_configuring-additional-network[Creating an additional network attachment with the Cluster Network Operator]

The cluster is installed and prepared for configuration. You must now perform the OVS-DPDK configuration tasks in <<next-steps_installing-openstack-installer-ovs-dpdk, Next steps>>.

include::modules/cluster-telemetry.adoc[leveloffset=+1]

.Additional resources

* See xref:../../support/remote_health_monitoring/about-remote-health-monitoring.adoc#about-remote-health-monitoring[About remote health monitoring] for more information about the Telemetry service.

[id="additional-resources_installing-openstack-installer-ovs-dpdk"]
== Additional resources

* See xref:../../scalability_and_performance/cnf-performance-addon-operator-for-low-latency-nodes.adoc#cnf-understanding-low-latency_cnf-master[Performance Addon Operator for low latency nodes] for information about configuring your deployment for real-time running and low latency.

[id="next-steps_installing-openstack-installer-ovs-dpdk"]
== Next steps

* To complete OVS-DPDK configuration for your cluster:
** xref:../../scalability_and_performance/cnf-performance-addon-operator-for-low-latency-nodes.adoc#installing-the-performance-addon-operator_cnf-master[Install the Performance Addon Operator].
** xref:../../scalability_and_performance/what-huge-pages-do-and-how-they-are-consumed-by-apps.adoc#what-huge-pages-do_huge-pages[Configure the Performance Addon Operator with huge pages support].
* xref:../../post_installation_configuration/cluster-tasks.adoc#available_cluster_customizations[Customize your cluster].
* If necessary, you can xref:../../support/remote_health_monitoring/opting-out-of-remote-health-reporting.adoc#opting-out-remote-health-reporting_opting-out-remote-health-reporting[opt out of remote health reporting].
* If you need to enable external access to node ports, xref:../../networking/configuring_ingress_cluster_traffic/configuring-ingress-cluster-traffic-nodeport.adoc#nw-using-nodeport_configuring-ingress-cluster-traffic-nodeport[configure ingress cluster traffic by using a node port].
* If you did not configure {rh-openstack} to accept application traffic over floating IP addresses, xref:../../post_installation_configuration/network-configuration.adoc#installation-osp-configuring-api-floating-ip_post-install-network-configuration[configure {rh-openstack} access with floating IP addresses].

modules/installation-osp-control-compute-machines.adoc

Lines changed: 6 additions & 0 deletions

@@ -10,6 +10,9 @@ endif::[]
 ifeval::["{context}" == "installing-openstack-installer-sr-iov"]
 :osp-sr-iov:
 endif::[]
+ifeval::["{context}" == "installing-openstack-installer-ovs-dpdk"]
+:osp-sr-iov:
+endif::[]

 [id="installation-osp-control-compute-machines_{context}"]
 = Control plane and compute machines
@@ -49,3 +52,6 @@ endif::[]
 ifeval::["{context}" == "installing-openstack-installer-sr-iov"]
 :!osp-sr-iov:
 endif::[]
+ifeval::["{context}" == "installing-openstack-installer-ovs-dpdk"]
+:!osp-sr-iov:
+endif::[]
Lines changed: 193 additions & 0 deletions

@@ -0,0 +1,193 @@
:_content-type: PROCEDURE
[id="installation-osp-dpdk-binding-vfio-pci_{context}"]
= Binding the vfio-pci kernel driver to NICs

Compute machines that connect to a virtual function I/O (VFIO) network require the `vfio-pci` kernel driver to be bound to the ports that are attached to a configured network. Create a machine config for workers that attach to this VFIO network.

.Procedure

. From a command line, retrieve VFIO network UUIDs:
+
[source,terminal]
----
$ openstack network show <VFIO_network_name> -f value -c id
----

. Create a machine config on your cluster from the following template:
+
[%collapsible]
====
[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-vhostuser-bind
spec:
  config:
    ignition:
      version: 2.2.0
    systemd:
      units:
      - name: vhostuser-bind.service
        enabled: true
        contents: |
          [Unit]
          Description=Vhostuser Interface vfio-pci Bind
          Wants=network-online.target
          After=network-online.target ignition-firstboot-complete.service
          [Service]
          Type=oneshot
          EnvironmentFile=/etc/vhostuser-bind.conf
          ExecStart=/usr/local/bin/vhostuser $ARG
          [Install]
          WantedBy=multi-user.target
    storage:
      files:
      - contents:
          inline: vfio-pci
        filesystem: root
        mode: 0644
        path: /etc/modules-load.d/vfio-pci.conf
      - contents:
          inline: |
            #!/bin/bash
            set -e
            if [[ "$#" -lt 1 ]]; then
              echo "Network ID not provided, nothing to do"
              exit
            fi

            source /etc/vhostuser-bind.conf

            NW_DATA="/var/config/openstack/latest/network_data.json"
            if [ ! -f ${NW_DATA} ]; then
              echo "Network data file not found, trying to download it from nova metadata"
              if ! curl http://169.254.169.254/openstack/latest/network_data.json > /tmp/network_data.json; then
                echo "Failed to download network data file"
                exit 1
              fi
              NW_DATA="/tmp/network_data.json"
            fi
            function parseNetwork() {
              local nwid=$1
              local pcis=()
              echo "Network ID is $nwid"
              links=$(jq '.networks[] | select(.network_id == "'$nwid'") | .link' $NW_DATA)
              if [ ${#links} -gt 0 ]; then
                for link in $links; do
                  echo "Link Name: $link"
                  mac=$(jq -r '.links[] | select(.id == '$link') | .ethernet_mac_address' $NW_DATA)
                  if [ -n "$mac" ]; then
                    pci=$(bindDriver $mac)
                    pci_ret=$?
                    if [[ "$pci_ret" -eq 0 ]]; then
                      echo "$pci bind successful"
                    fi
                  fi
                done
              fi
            }

            function bindDriver() {
              local mac=$1
              for file in /sys/class/net/*; do
                dev_mac=$(cat $file/address)
                if [[ "$mac" == "$dev_mac" ]]; then
                  name=${file##*\/}
                  bus_str=$(ethtool -i $name | grep bus)
                  dev_t=${bus_str#*:}
                  dev=${dev_t#[[:space:]]}

                  echo $dev

                  devlink="/sys/bus/pci/devices/$dev"
                  syspath=$(realpath "$devlink")
                  if [ ! -f "$syspath/driver/unbind" ]; then
                    echo "File $syspath/driver/unbind not found"
                    return 1
                  fi
                  if ! echo "$dev">"$syspath/driver/unbind"; then
                    return 1
                  fi

                  if [ ! -f "$syspath/driver_override" ]; then
                    echo "File $syspath/driver_override not found"
                    return 1
                  fi
                  if ! echo "vfio-pci">"$syspath/driver_override"; then
                    return 1
                  fi

                  if [ ! -f "/sys/bus/pci/drivers/vfio-pci/bind" ]; then
                    echo "File /sys/bus/pci/drivers/vfio-pci/bind not found"
                    return 1
                  fi
                  if ! echo "$dev">"/sys/bus/pci/drivers/vfio-pci/bind"; then
                    return 1
                  fi
                  return 0
                fi
              done
              return 1
            }

            for nwid in "$@"; do
              parseNetwork $nwid
            done
        filesystem: root
        mode: 0744
        path: /usr/local/bin/vhostuser
      - contents:
          inline: |
            ARG="be22563c-041e-44a0-9cbd-aa391b439a39,ec200105-fb85-4181-a6af-35816da6baf7" <1>
        filesystem: root
        mode: 0644
        path: /etc/vhostuser-bind.conf
----
<1> Replace this value with a comma-separated list of VFIO network UUIDs.
====
+
On boot, for machines that are part of this set, the MAC addresses of ports are translated into PCI bus IDs, and the `vfio-pci` module is bound to any port that is associated with a network that is identified by an {rh-openstack} network ID.
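The first half of that translation, resolving a {rh-openstack} network ID to the MAC addresses of its ports from `network_data.json`, can be exercised outside the cluster. The following sketch uses illustrative file contents (the link names and MAC addresses are made up; the UUID is the first sample value from the template above), not real Nova metadata:

```shell
# Illustrative network_data.json in the shape that the vhostuser script expects.
cat > /tmp/network_data.json <<'EOF'
{
  "links": [
    { "id": "tap1", "ethernet_mac_address": "fa:16:3e:00:00:01" },
    { "id": "tap2", "ethernet_mac_address": "fa:16:3e:00:00:02" }
  ],
  "networks": [
    { "network_id": "be22563c-041e-44a0-9cbd-aa391b439a39", "link": "tap1" },
    { "network_id": "ec200105-fb85-4181-a6af-35816da6baf7", "link": "tap2" }
  ]
}
EOF

# Resolve the MAC addresses of all ports on one network
nwid="be22563c-041e-44a0-9cbd-aa391b439a39"
jq -r --arg nwid "$nwid" '
  (.networks[] | select(.network_id == $nwid) | .link) as $link
  | .links[] | select(.id == $link) | .ethernet_mac_address
' /tmp/network_data.json
# prints fa:16:3e:00:00:01
```

The script then maps each MAC address to a device under `/sys/class/net` before rebinding its driver.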
.Verification

. On a compute node, from a command line, retrieve the name of the node by entering:
+
[source,terminal]
----
$ oc get nodes
----

. Create a shell to debug the node:
+
[source,terminal]
----
$ oc debug node/<node_name>
----

. Change the root directory for the current running process:
+
[source,terminal]
----
$ chroot /host
----

. Enter the following command to list the kernel drivers that are handling each device on your machine:
+
[source,terminal]
----
$ lspci -k
----
+
.Example output
[source,terminal]
----
00:07.0 Ethernet controller: Red Hat, Inc. Virtio network device
        Subsystem: Red Hat, Inc. Device 0001
        Kernel driver in use: vfio-pci
----
+
In the output of the command, VFIO Ethernet controllers use the `vfio-pci` kernel driver.
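The `bindDriver` function in the machine config above derives each PCI bus ID from `ethtool -i` output with two shell parameter expansions. That string handling can be checked in isolation; the `bus-info` line below is a made-up sample in the format `ethtool -i <name> | grep bus` returns, not output captured from real hardware:

```shell
# Sample "bus-info" line as emitted by `ethtool -i <name> | grep bus`
bus_str="bus-info: 0000:00:07.0"

dev_t=${bus_str#*:}        # strip up to and including the first colon: " 0000:00:07.0"
dev=${dev_t#[[:space:]]}   # strip the single leading space: "0000:00:07.0"

echo "$dev"                # prints 0000:00:07.0
```

The resulting PCI address is what the script writes to the sysfs `unbind`, `driver_override`, and `bind` files.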
Lines changed: 29 additions & 0 deletions

@@ -0,0 +1,29 @@
:_content-type: PROCEDURE
[id="installation-osp-dpdk-exposing-host-interface_{context}"]
= Exposing the host-device interface to the pod

You can use the Container Network Interface (CNI) plug-in to expose an interface that is on the host to the pod. The plug-in moves the interface from the namespace of the host network to the namespace of the pod. The pod then has direct control of the interface.

.Procedure

* Create an additional network attachment with the host-device CNI plug-in by using the following object as an example:
+
[source,yaml]
----
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vhostuser1
  namespace: default
spec:
  config: '{ "cniVersion": "0.3.1", "name": "hostonly", "type": "host-device", "pciBusId": "0000:00:04.0", "ipam": { } }'
----

.Verification

* From a command line, run the following command to see if networks are created in the namespace:
+
[source,terminal]
----
$ oc -n <your_cnf_namespace> get net-attach-def
----
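A pod requests the attachment through the `k8s.v1.cni.cncf.io/networks` annotation. The following is a minimal sketch assuming the `vhostuser1` definition above; the pod name and image are hypothetical placeholders, and resource requests depend on your workload:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dpdk-app            # hypothetical name
  namespace: default
  annotations:
    k8s.v1.cni.cncf.io/networks: vhostuser1
spec:
  containers:
  - name: app
    image: registry.example.com/dpdk-app:latest   # placeholder image
    command: ["sleep", "infinity"]
```

When the pod is scheduled, the host-device plug-in moves the interface at the configured PCI bus ID into the pod's network namespace.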
