As a cluster administrator, you can configure an additional network for a virtual routing and forwarding (VRF) domain by using the CNI VRF plugin. The virtual network that this plugin creates is associated with the physical interface that you specify.

Using a secondary network with a VRF instance has the following advantages:

Workload isolation:: Isolate workload traffic by configuring a VRF instance for the additional network.
Improved security:: Enable improved security through isolated network paths in the VRF domain.
Multi-tenancy support:: Support multi-tenancy through network segmentation with a unique routing table in the VRF domain for each tenant.

[NOTE]
====
Applications that use VRFs must bind to a specific device. The common usage is to use the `SO_BINDTODEVICE` option for a socket. The `SO_BINDTODEVICE` option binds the socket to the device that is specified in the passed interface name, for example, `eth1`. To use the `SO_BINDTODEVICE` option, the application must have `CAP_NET_RAW` capabilities.

Using a VRF through the `ip vrf exec` command is not supported in {product-title} pods. To use VRF, bind applications directly to the VRF interface.
====
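For reference, the following minimal C sketch shows one way that an application might bind a socket to a VRF device with the `SO_BINDTODEVICE` socket option. The interface name `vrf-1` is an illustrative value; use the VRF interface that is present in your pod.

[source,c]
----
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    /* Create a UDP socket; the same pattern applies to TCP sockets. */
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    /* Bind the socket to the VRF device. The process requires CAP_NET_RAW. */
    const char *ifname = "vrf-1"; /* illustrative VRF interface name */
    if (setsockopt(fd, SOL_SOCKET, SO_BINDTODEVICE, ifname, strlen(ifname)) < 0) {
        perror("setsockopt(SO_BINDTODEVICE)");
        close(fd);
        return 1;
    }

    /* Traffic sent on this socket now uses the VRF routing table. */
    close(fd);
    return 0;
}
----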
= Creating an additional network attachment with the CNI VRF plugin
The Cluster Network Operator (CNO) manages additional network definitions. When you specify an additional network to create, the CNO creates the `NetworkAttachmentDefinition` custom resource (CR) automatically.
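For example, a CNO configuration that layers the VRF plugin over a macvlan secondary network might look similar to the following sketch. The network name, namespace, parent interface, IP address, VRF name, and routing table ID are illustrative values that you must adapt to your environment:

[source,yaml]
----
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  additionalNetworks:
  - name: test-network-1
    namespace: additional-network-1
    type: Raw
    rawCNIConfig: '{
      "cniVersion": "0.3.1",
      "name": "macvlan-vrf",
      "plugins": [ <1>
        {
          "type": "macvlan",
          "master": "eth1",
          "ipam": {
            "type": "static",
            "addresses": [
              {
                "address": "192.168.1.23/24"
              }
            ]
          }
        },
        {
          "type": "vrf", <2>
          "vrfname": "vrf-1",
          "table": 1001
        }
      ]
    }'
----

In this sketch, the `vrf` plugin is chained after the `macvlan` plugin so that the secondary interface that macvlan creates in the pod, such as `net1`, is moved into the VRF and routed by using the specified table.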
<1> `plugins` must be a list. The first item in the list must be the secondary network underpinning the VRF network. The second item in the list is the VRF plugin configuration.
<2> `type` must be set to `vrf`.
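To confirm that the CNO created the `NetworkAttachmentDefinition` CR, you can list the network attachment definitions in the namespace that you specified for the additional network; the namespace placeholder is illustrative:

[source,terminal]
----
$ oc get network-attachment-definitions -n <namespace>
----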
[NOTE]
====
There might be a delay before the CNO creates the CR.
====
.Verification
. Create a pod and assign it to the additional network with the VRF instance:
.. Create a YAML file that defines the `Pod` resource:
+
.Example `pod-additional-net.yaml` file
[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: pod-additional-net
  annotations:
    k8s.v1.cni.cncf.io/networks: '[
      {
        "name": "test-network-1" <1>
      }
    ]'
spec:
  containers:
  - name: example-pod-1
    command: ["/bin/bash", "-c", "sleep 9000000"]
    image: centos:8
----
<1> Specify the name of the additional network with the VRF instance.
.. Create the `Pod` resource by running the following command:
+
[source,terminal]
----
$ oc create -f pod-additional-net.yaml
----
+
.Example output
[source,terminal]
----
pod/pod-additional-net created
----
. Verify that the pod network attachment is connected to the VRF additional network. Start a remote session with the pod and run the following command:
+
[source,terminal]
----
$ ip vrf show
----
+
.Example output
[source,terminal]
----
Name              Table
-----------------------
vrf-1              1001
----
. Confirm that the VRF interface is the controller for the additional interface:
+
[source,terminal]
----
$ ip link
----
+
.Example output
[source,terminal]
----
5: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vrf-1 state UP mode
----
[role="_additional-resources"]
.Additional resources
* xref:../../networking/multiple_networks/about-virtual-routing-and-forwarding.adoc#cnf-about-virtual-routing-and-forwarding_about-virtual-routing-and-forwarding[About virtual routing and forwarding]