source/plugins/vxlan.rst (5 additions & 5 deletions)
@@ -66,7 +66,7 @@ have MTU of 1500 bytes, meaning that your physical interface/bridge must have MT
 In order to configure "jumbo frames" you can i.e. make physical interface/bridge with 9000 bytes MTU, then all the VXLAN
 interfaces will be created with MTU of 8950 bytes, and then MTU size inside Instance can be set to 8950 bytes.
 
-In general it's recommend to use an MTU of at least 9000 bytes or larger. Most VXLAN capable network cards and switch support an MTU of up to 9216.
+In general it is recommended to use an MTU of at least 9000 bytes. Most VXLAN-capable network cards and switches support an MTU of up to 9216.
 
 Using an MTU of 9216 bytes allows for using Jumbo Frames (9000) within guest networks.
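The MTU arithmetic described in this hunk can be sketched as follows. The device names (`cloudbr0`, `vxlan1000`) are illustrative assumptions, not taken from the document:

```shell
# A 9000-byte physical bridge leaves room for VXLAN's ~50 bytes of
# IPv4 encapsulation overhead, so the VXLAN device (and the guest)
# can use an MTU of 8950.
PHYS_MTU=9000
VXLAN_OVERHEAD=50                          # outer IPv4 + UDP + VXLAN headers
VXLAN_MTU=$((PHYS_MTU - VXLAN_OVERHEAD))
echo "vxlan MTU: ${VXLAN_MTU}"             # prints: vxlan MTU: 8950

# Applying it would look roughly like this (requires root; example names):
#   ip link set dev cloudbr0 mtu ${PHYS_MTU}
#   ip link set dev vxlan1000 mtu ${VXLAN_MTU}
```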
@@ -80,7 +80,7 @@ Important note on max number of multicast groups
 
 Default value of "net.ipv4.igmp_max_memberships" (cat /proc/sys/net/ipv4/igmp_max_memberships) is "20", which means that host can be joined to max 20 multicast groups (attach max 20 multicast IPs on the host).
 
-Since all VXLAN (VTEP) interfaces provisioned on host are multicast-based (belong to certain multicast group, and thus has it's own multicast IP that is used as VTEP), this means that you can not provision more than 20 (working) VXLAN interfaces per host.
+Since all VXLAN (VTEP) interfaces provisioned on a host are multicast-based (each belongs to a certain multicast group and thus has its own multicast IP that is used as the VTEP), you cannot provision more than 20 (working) VXLAN interfaces per host.
 
 Under Linux you can NOT provision (start) more than 20 VXLAN interfaces and error message "No buffer space available" can be observed in Cloudstack Agent logs after provisioning required bridges and VXLAN interfaces.
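If more than 20 VXLAN networks are needed per host, the limit can be raised. A minimal sketch (the value 200 is an example, not from the document; requires root):

```shell
# Runtime change of the multicast group membership limit:
sysctl -w net.ipv4.igmp_max_memberships=200

# Persist across reboots:
echo "net.ipv4.igmp_max_memberships=200" >> /etc/sysctl.conf
sysctl -p
```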
@@ -271,7 +271,7 @@ In order to use this script create a symlink on **each** KVM hypervisor
 
 This script is also available in the CloudStack `GIT repository <https://raw.githubusercontent.com/apache/cloudstack/refs/heads/main/scripts/vm/network/vnet/modifyvxlan-evpn.sh>`_.
 
-View the contents of the script to understand it's inner workings, some key items:
+View the contents of the script to understand its inner workings; some key items:
 
 - VXLAN (vtep) devices are created using 'nolearning', disabling the use of multicast
 - UDP port 4789 (RFC 7348)
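The key items above correspond roughly to an `ip link` invocation like the one below. This is a hand-written sketch, not an excerpt from modifyvxlan-evpn.sh; the VNI and underlay device are assumed values (requires root):

```shell
VNI=1000          # example VXLAN network identifier
DEV=eth0          # example underlay interface

# 'nolearning' disables data-plane MAC learning (BGP EVPN populates the
# FDB instead of multicast flood-and-learn), and dstport 4789 is the
# IANA-assigned VXLAN UDP port from RFC 7348.
ip link add "vxlan${VNI}" type vxlan id "${VNI}" dstport 4789 nolearning dev "${DEV}"
ip link set "vxlan${VNI}" up
```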
@@ -346,15 +346,15 @@ This configuration will:
 - Enable the families ipv4, ipv6 and evpn
 - Announce the IPv4 (10.255.192.12/32) and IPv6 (2001:db8:100::1/128) loopback addresses
 - Advertise all VXLAN networks (VNI) detected locally on the hypervisor (vxlan network devices)
-- Use ASN 4200800212 for this hypervisor (each node has it's own unique ASN)
+- Use ASN 4200800212 for this hypervisor (each node has its own unique ASN)
 
 BGP and EVPN in the upstream network
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 This documentation does not cover configuring BGP and EVPN in the upstream network.
 
 This will differ per network and is therefor difficult to capture in this documentation. A couple of key items though:
 
-- Each hypervisor with establish eBGP session(s) with the Top-of-Rack router(s) in it's rack
+- Each hypervisor will establish eBGP session(s) with the Top-of-Rack router(s) in its rack
 - These Top-of-Rack devices will connect to (a) Spine router(s)
 - On the Spine router(s) the VNIs will terminate and they will act as IPv4/IPv6 gateways
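The bullets above could map onto an FRR configuration roughly like the sketch below. Only the ASN, the loopback prefixes, and the "advertise all VNIs" behaviour come from the text; the neighbor statement is an assumed example and the real configuration will differ per network:

```
! Sketch of /etc/frr/frr.conf for one hypervisor (neighbor address is
! an example placeholder, not from the document)
router bgp 4200800212
 neighbor 169.254.0.1 remote-as external
 !
 address-family ipv4 unicast
  network 10.255.192.12/32
 exit-address-family
 !
 address-family ipv6 unicast
  network 2001:db8:100::1/128
 exit-address-family
 !
 address-family l2vpn evpn
  advertise-all-vni
 exit-address-family
```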