This chapter provides an overview of Virtual Extensible Local Area Network (VXLAN).
This chapter includes the following sections:
•Information About VXLAN
•VEM L3 IP Interface for VXLAN

Information About VXLAN
VXLAN creates LAN segments by using an overlay approach with MAC-in-IP encapsulation. The encapsulation carries the original Layer 2 (L2) frame from the virtual machine (VM) and is performed within the Virtual Ethernet Module (VEM). Each VEM is assigned an IP address that is used as the source IP address when encapsulating MAC frames to be sent on the network. You can have multiple vmknics per VEM that are used as sources for this encapsulated traffic. The encapsulation also carries the VXLAN identifier, which is used to scope the MAC address of the payload frame.
The connected VXLAN is indicated within the port profile configuration of the vNIC and is applied when the VM connects. Each VXLAN uses an assigned IP multicast group to carry broadcast traffic within the VXLAN segment.
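As an illustrative sketch only, a VXLAN segment with its assigned multicast group and a vEthernet port profile that connects vNICs to that segment might be configured as follows. The bridge domain name (tenant-red), segment ID (5000), multicast group (239.1.1.1), profile name, and switch prompt are all hypothetical values chosen for this example:

```
n1000v(config)# bridge-domain tenant-red
n1000v(config-bd)# segment id 5000
n1000v(config-bd)# group 239.1.1.1
n1000v(config-bd)# exit
n1000v(config)# port-profile type vethernet vm-profile-red
n1000v(config-port-prof)# switchport mode access
n1000v(config-port-prof)# switchport access bridge-domain tenant-red
n1000v(config-port-prof)# vmware port-group
n1000v(config-port-prof)# no shutdown
n1000v(config-port-prof)# state enabled
```

When a VM vNIC attaches using this port profile, it is connected to the tenant-red segment, and broadcast traffic within the segment is carried on the 239.1.1.1 multicast group.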
When a VM attaches to a VEM and is the first to join the particular VXLAN segment on that VEM, an IGMP join is issued for the VXLAN's assigned multicast group. When the VM transmits a packet on the network segment, a lookup is made in the L2 table using the destination MAC address of the frame and the VXLAN identifier. If the result is a hit, the L2 table entry contains the remote IP address to use when encapsulating the frame, and the frame is transmitted within an IP packet destined to that remote IP address. If the result is a miss (broadcasts, multicasts, and unknown unicasts fall into this category), the frame is encapsulated with the destination IP address set to the VXLAN segment's assigned IP multicast group.
When an encapsulated packet is received from the network, it is decapsulated. The source MAC address of the inner frame and the VXLAN ID are added to the L2 table as the lookup key, and the source IP address of the encapsulation header is added as the remote IP address for the table entry.
VEM L3 IP Interface for VXLAN
When a VEM has a vEthernet interface connected to a VXLAN, the VEM requires at least one IP/MAC pair to terminate VXLAN packets. In this regard, the VEM acts as an IP host. The VEM only supports IPv4 addressing for this purpose.
Similar to how VEM L3 Control is configured, the IP address to use for VXLAN is configured by assigning to a vmknic a port profile that contains the capability vxlan command.
To support carrying VXLAN traffic over multiple uplinks, or subgroups, in server configurations where vPC-HM MAC pinning is required, up to four vmknics with capability vxlan may be configured. We recommend that all the VXLAN vmknics within the same ESX/ESXi host be assigned to the same port profile, which must have the capability vxlan parameter.
VXLAN traffic sourced by local vEthernet interfaces is distributed between these vmknics based on the source MAC in their frames. The VEM automatically pins the multiple VXLAN vmknics to separate uplinks. If an uplink fails, the VEM automatically repins the vmknic to a working uplink.
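As a sketch, a port profile for the VXLAN vmknics might look like the following. The profile name (vxlan-vmknic-pp) and the transport VLAN (10) are hypothetical values; only the capability vxlan command is the element this section requires:

```
n1000v(config)# port-profile type vethernet vxlan-vmknic-pp
n1000v(config-port-prof)# capability vxlan
n1000v(config-port-prof)# switchport mode access
n1000v(config-port-prof)# switchport access vlan 10
n1000v(config-port-prof)# vmware port-group
n1000v(config-port-prof)# no shutdown
n1000v(config-port-prof)# state enabled
```

Up to four vmknics on the host can then be assigned this same port profile, and the VEM pins them to separate uplinks as described above.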
When encapsulated traffic is destined to a VEM connected to a different subnet, the VEM does not use the VMware host routing table. Instead, the vmknic sends an ARP request for the remote VEM IP address. The upstream router must be configured to respond by using the Proxy ARP feature.
The VXLAN encapsulation overhead is 50 bytes. To prevent performance degradation due to fragmentation, the entire interconnection infrastructure between all VEMs exchanging VXLAN packets should be configured to carry 50 bytes more than what the VM vNICs are configured to send. For example, using the default vNIC configuration of 1500 bytes, the VEM uplink port profile, the upstream physical switch ports, the interswitch links, and any routers, if present, must be configured to carry an MTU of at least 1550 bytes. If that is not possible, we suggest that the MTU within the guest VMs be configured to be smaller by 50 bytes; for example, 1450 bytes.
If a smaller MTU is not configured, the VEM attempts to notify the VM if it performs Path MTU (PMTU) Discovery. If the VM does not send packets with a smaller MTU, the VEM fragments the IP packets. Fragmentation occurs only at the IP layer. If the VM sends a frame that is too large to carry after the VXLAN encapsulation is added, and the frame does not contain an IP packet, the frame is dropped.
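For example, with the default vNIC MTU of 1500 bytes, the uplink must carry 1500 + 50 = 1550 bytes. Assuming a hypothetical uplink port profile named uplink-pp, the VEM side of this might be sketched as follows (the upstream physical switch ports, interswitch links, and routers must be raised to at least the same value with their own MTU commands):

```
n1000v(config)# port-profile type ethernet uplink-pp
n1000v(config-port-prof)# mtu 1550
```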
Maximum Number of VXLANs
The Cisco Nexus 1000V supports a combined total of 2048 VLANs and VXLANs: 2048 VLANs, 2048 VXLANs, or any combination that does not exceed 2048. This number matches the maximum number of ports on the Cisco Nexus 1000V, allowing every port to be connected to a different VLAN or VXLAN.
Jumbo frames are supported by the Cisco Nexus 1000V as long as there is room left over to accommodate the VXLAN encapsulation overhead of at least 50 bytes, and the physical switch/router infrastructure can transport these jumbo-sized IP packets.
This section contains the following topics:
•Disabling the VXLAN Feature Globally
Disabling the VXLAN Feature Globally
As a safety precaution, the no feature segmentation command is not allowed if any ports are associated with a VXLAN port profile. You must remove all the associations before disabling the feature. The no feature segmentation command cleans up all the VXLAN bridge domain configurations on the Cisco Nexus 1000V.
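For example, after all port associations with VXLAN port profiles have been removed, the feature can be disabled globally as follows (the switch prompt shown is hypothetical); re-enabling it later uses the feature segmentation command:

```
n1000v# configure terminal
n1000v(config)# no feature segmentation
```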