White Paper
Flexible Workload Mobility and Server Placement with VXLAN Routing on Cisco CSR 1000V and Cisco Nexus 1000V
What You Will Learn
The Cisco CSR 1000V is the first product in the industry to support VXLAN routing (VXLAN Layer 3 gateway). This document introduces Virtual Extensible LAN (VXLAN), describes how to use the Cisco® Cloud Services Router (CSR) 1000V to route between VXLAN segments (VXLAN Layer 3 routing), and discusses Cisco Nexus® 1000V support for VXLAN.
This document covers the following use cases, which support workload mobility and server placement:
● VXLAN-to-VLAN routing
● VXLAN-to-VXLAN routing
● VXLAN routing between the virtual customer edge (Cisco CSR) and data center edge (Cisco ASR 1000 and 9000 Series Aggregation Services Routers)
In addition, this document presents the results and configuration for VXLAN routing testing performed using the
Cisco CSR 1000V and discusses the interoperability of the VXLAN implementation on the Cisco Nexus 1000V and
the Cisco CSR 1000V.
Introduction
Server virtualization has placed increased demands on the physical network infrastructure. At minimum, more MAC
address table entries are needed throughout the switched Ethernet network to support the potential attachment of
hundreds of thousands of virtual machines, each with its own MAC address.
Data centers also are being challenged to host multiple tenants on shared infrastructure, each with its own isolated
network domain. Each of these tenants and applications needs to be logically isolated from the others at the
network level. For example, a three-tier application can have multiple virtual machines in each tier and requires
logically isolated networks between these tiers. Traditional network isolation techniques such as IEEE 802.1Q
VLAN provide 4096 LAN segments (through a 12-bit VLAN identifier) and may not provide enough segments for
large cloud deployments.
Also, server administrators and application owners want flexibility in placing their workloads, and they are
increasingly demanding the capability to place any workload anywhere. However, a pod in a data center typically
consists of one or more racks of servers with associated network and storage connectivity. Tenants or applications
may start on one pod and then, as a result of expansion, require servers and virtual machines on other pods,
especially when tenants or applications in other pods are not fully utilizing all their resources. This capability
requires a stretched Layer 2 environment connecting the individual servers and virtual machines across multiple
pods and most likely across Layer 3 boundaries. This scenario is particularly relevant for cloud providers and
enterprises adopting private clouds, because cloud computing involves on-demand elastic provisioning of
resources, so the capability to provide service from any pod that has available capacity is mandatory.
VXLAN is an emerging technology that addresses both of these requirements: the need for more logical networks and the capability to place any workload anywhere.
VXLAN
VXLAN is a MAC address in User Datagram Protocol (MAC-in-UDP) encapsulation technique. It uses a 24-bit
segment identifier in the form of a VXLAN ID (Figure 1). The VXLAN ID is larger than the traditional VLAN ID and
allows LAN segments to scale to 16 million, instead of 4096 with VLAN identifiers. In addition, the UDP encapsulation allows each LAN segment to be extended across Layer 3 boundaries and helps ensure even distribution of traffic across the PortChannel links that are commonly used in data centers.
Figure 1. VXLAN Encapsulation Plus Original Ethernet Frame
A VXLAN provides the same service to end systems as a VLAN. The virtual machines and physical servers don't know whether they are connecting to the network through a VLAN or a VXLAN.
Server-to-server (physical or virtual) traffic on different access switches (virtual or physical) is encapsulated in a
VXLAN header plus UDP plus IP, as shown in Figure 1.
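As an illustration only (not part of any Cisco implementation), the following Python fragment sketches the 8-byte VXLAN header defined in RFC 7348 and conceptually prepends it to the original frame; the outer MAC, IP, and UDP headers shown in Figure 1 would be added by the VTEP in front of it.

import struct

def vxlan_header(vni):
    # 8-byte VXLAN header (RFC 7348): flags byte (0x08 = valid VNI),
    # 24 reserved bits, 24-bit VNI, 8 more reserved bits.
    assert 0 <= vni < 2**24          # 24-bit segment ID: about 16 million segments
    return struct.pack("!B3xI", 0x08, vni << 8)

original_frame = bytes(64)           # placeholder for the original Ethernet frame
inner = vxlan_header(5001) + original_frame
# The VTEP then wraps 'inner' in outer UDP, IP, and MAC headers before
# transmitting it across the Layer 3 network; the original frame is untouched.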
How It Works
VXLAN is a Layer 2 overlay scheme over a Layer 3 network. Each overlay is referred to as a VXLAN segment.
Only systems within the same VXLAN segment can communicate with each other. VXLAN could also be called a
tunneling scheme to overlay Layer 2 networks on top of Layer 3 networks. The tunnels are stateless. In the VXLAN
network model, the endpoint of the tunnel is called the VXLAN tunnel endpoint (VTEP). The VXLAN-related tunnel
and outer header encapsulation is known only to the VTEPs; the virtual machines and physical servers never see
it.
Unicast Communication Between Virtual Machines in the Same Subnet
Consider a virtual machine in a VXLAN segment. The virtual machine is unaware of VXLAN. To communicate with
a virtual machine on a different host, it sends an Ethernet frame destined for the target. The VTEP on the physical
host looks up the VXLAN ID with which this virtual machine is associated. It then determines whether the
destination MAC address is on the same segment. If it is, an outer header consisting of an outer MAC header, an outer IP header, and the VXLAN header is prepended to the original Ethernet frame. The final packet is transmitted as a unicast to the IP address of the remote VTEP that connects the destination virtual machine.
Upon reception, the remote VTEP (usually a remote virtual switch such as the Cisco Nexus 1000V) verifies that the
VXLAN ID is valid and is used by the destination virtual machine. If it is, then the packet is stripped of its outer
header and passed to the destination virtual machine. The destination virtual machine never knows about the
VXLAN ID or that the frame was transported with VXLAN encapsulation.
In addition to forwarding the packet to the destination virtual machine, the remote VTEP learns the mapping of the
inner source MAC address to the outer source IP address. It stores this mapping in a table so that when the
destination virtual machine sends a response packet, unknown-destination flooding of the response packet is not
needed.
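A minimal sketch of this learning behavior, using hypothetical names rather than an actual VTEP implementation, might look like this:

# Flood-and-learn table: (VNI, inner source MAC) -> outer source IP of the
# remote VTEP, learned from received encapsulated packets.
mac_table = {}

def learn(vni, inner_src_mac, outer_src_ip):
    # Called on decapsulation, so that a later response packet can be
    # unicast directly to the remote VTEP instead of being flooded.
    mac_table[(vni, inner_src_mac)] = outer_src_ip

def lookup(vni, inner_dst_mac):
    # A hit returns the remote VTEP IP; a miss means the frame must be
    # flooded (see the multicast handling in the next section).
    return mac_table.get((vni, inner_dst_mac))

learn(5001, "00:50:56:aa:bb:01", "10.107.147.150")
assert lookup(5001, "00:50:56:aa:bb:01") == "10.107.147.150"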
Broadcast Communication and Mapping to Multicast
Consider the virtual machine on the source host attempting to communicate with the destination virtual machine
using IP. Assuming that they are both on the same subnet, the virtual machine sends an Address Resolution
Protocol (ARP) request, which is a broadcast frame. In a non-VXLAN environment, this frame would be broadcast
(flooded) to all switches carrying that VLAN.
With VXLAN, a header including the VXLAN ID is inserted at the beginning of the packet, along with the outer IP header and UDP header. Instead of being flooded, however, this broadcast packet is sent to an IP multicast group. VXLAN therefore usually requires a mapping between each VXLAN ID and the IP multicast group that it will use. The Cisco Nexus 1000V has an implementation of VXLAN that does not require multicast; however, this implementation is beyond the scope of this document. In this document, VXLAN operates in multicast mode and therefore uses a multicast group for each VXLAN network identifier (VNI).
On the basis of this mapping, the VTEP can send Internet Group Management Protocol (IGMP) membership reports to the upstream switch or router to join or leave the VXLAN-related IP multicast groups as needed. This behavior enables pruning at the leaf nodes: traffic for a specific multicast group is forwarded only to hosts that have a member using that group.
The destination virtual machine sends a standard ARP response using IP unicast. This frame will be encapsulated
back to the VTEP connecting the originating virtual machine using IP unicast VXLAN encapsulation. This
encapsulation is possible because the mapping of the ARP response's destination MAC address to the VTEP IP
address was learned earlier through the ARP request.
Also note that multicast frames and unknown-MAC-address-destination frames are sent using multicast, similar to
the broadcast frames.
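Continuing the sketch above, the VTEP's forwarding decision reduces to a simple rule, shown here with the VNI-to-group assignments used later in this document (again, illustrative pseudocode rather than a product implementation):

# Each VNI is mapped to an IP multicast group; these values match the
# bridge-domain configuration used later in this document.
vni_to_group = {5001: "239.1.3.3", 5002: "239.1.2.2"}
mac_table = {(5001, "00:50:56:aa:bb:01"): "10.107.147.150"}  # learned earlier

def outer_destination(vni, inner_dst_mac):
    # Known unicast goes point-to-point to the learned remote VTEP; broadcast,
    # unknown unicast, and multicast go to the VNI's assigned multicast group.
    remote_vtep = mac_table.get((vni, inner_dst_mac))
    return remote_vtep if remote_vtep is not None else vni_to_group[vni]

# An ARP request (broadcast) on VNI 5001 is encapsulated toward 239.1.3.3;
# every VTEP that sent an IGMP join for that group receives a copy.
print(outer_destination(5001, "ff:ff:ff:ff:ff:ff"))  # -> 239.1.3.3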
How VXLAN Works: Overview
Here is a quick overview of VXLAN:
● VXLAN uses IP multicast to deliver broadcast, multicast, and unknown MAC address destinations to all VTEPs participating in a given VXLAN.
● Mappings of virtual machine MAC addresses to VTEP IP addresses are learned through receipt of encapsulated packets, in a way similar to Ethernet bridge flood-and-learn behavior.
● Known-destination virtual machine MAC addresses are carried over point-to-point (P2P) tunnels between VTEPs.
VXLAN Gateways
VXLAN operates in an overlay, on top of a Layer 3 infrastructure; however, it still requires two types of gateway: a
Layer 2 gateway that performs VXLAN-to-VLAN bridging, and a Layer 3 gateway that performs VXLAN-to-VXLAN
routing or VXLAN-to-VLAN routing (Figure 2).
This document focuses on the use of the Cisco CSR 1000V as a VXLAN Layer 3 gateway.
Figure 2. VXLAN Gateways
Cisco CSR 1000V and VXLAN
The Cisco CSR 1000V (Figure 3) is Cisco’s virtual router offering that can be deployed in private, public, or hybrid
cloud environments. It provides a comprehensive, well-tested feature set and solid performance that can enable
enterprises and service providers to migrate to the cloud while maintaining the security, application experience,
and business continuity that they have come to expect in the physical world.
The Cisco CSR 1000V has its origins in the Cisco ASR 1000 Series physical platform. The Cisco ASR 1000 Series
is one of Cisco’s most successful platforms and is widely deployed by enterprises and service providers worldwide.
Cisco CSR was developed by taking the Cisco ASR 1000 Series model, removing the hardware, and embedding
the resulting container in a virtual machine that runs on a hypervisor. Cisco CSR is built to run on general-purpose
x86 hardware with a hypervisor providing the abstraction layer. The CPU, memory, hard disk, and network
interfaces are generalized and presented to the guest OS (Cisco IOS® XE Software) by the hypervisor.
The Cisco CSR 1000V is the first product in the industry to support VXLAN routing (VXLAN Layer 3 gateway). It
can perform VXLAN-to-VXLAN routing and VXLAN-to-VLAN routing. An effective VXLAN deployment requires routing between VXLAN segments within the virtualized environment, without the need for additional external devices; the Cisco CSR 1000V provides this capability.
Cisco CSR also supports the VXLAN Layer 2 gateway function. This capability is not the focus of this document,
but for information, see Cisco CSR 1000V Layer 2 Gateway VXLAN Support.
Figure 3. Cisco CSR 1000V
The main aspects of the Cisco CSR architecture are summarized here:
● The guest OS is Cisco IOS XE, which runs a 64-bit MontaVista Linux kernel.
● The Cisco CSR 1000V retains the multithreaded nature of Cisco IOS XE. The threads are mapped to Linux.
● The route processor and forwarding processor are implemented as processes and mapped to virtual CPUs (vCPUs).
● The network interfaces are mapped to virtual network interface cards (vNICs). VMXNET3 paravirtualized drivers are the default.
● There is no dedicated crypto engine. The Intel AES-NI instruction set is used to provide hardware-based cryptography assistance.
● The same Cisco IOS XE command-line interface (CLI) as for physical routers is available. The Cisco CSR can be configured just like a physical router.
Cisco Nexus 1000V and VXLAN
The Cisco Nexus 1000V Virtual Ethernet Module (VEM) acts as the VTEP described previously. The VEM
encapsulates the original Layer 2 frame from the virtual machines into a VXLAN header plus UDP plus IP packet
(see Figure 1).
Each VEM is assigned an IP address, which is used as the source IP address when encapsulating Ethernet frames to be sent on the network. This is accomplished by creating a virtual network adapter (VMKNIC) on each VEM. The VMKNIC is connected to a VLAN that transports the VXLAN encapsulated traffic on the network.
The VXLAN ID to be used for each virtual machine is specified in the port profile configuration and is applied when
the virtual machine connects to the network. Each VXLAN uses an assigned IP multicast group to carry broadcast,
unknown unicast, and multicast traffic in the VXLAN segment as described previously.
When a virtual machine attaches to a VEM, if it is the first to join a particular VXLAN segment on that VEM, an IGMP join is issued for the VXLAN's assigned multicast group. When the virtual machine transmits a
packet on the network segment, a lookup is performed in the Layer 2 table using the destination MAC address of
the frame and the VXLAN identifier.
If the result is a hit, the Layer 2 table entry will contain the remote IP address to use to encapsulate the frame, and
the frame will be transmitted in an IP packet destined for the remote IP address. If the result is a miss (broadcast,
multicast, and unknown unicast transmissions are in this category), the frame is encapsulated with the destination
IP address set as the VXLAN segment's assigned IP multicast group.
In addition to supporting VXLAN encapsulation, the Cisco Nexus 1000V:
● Supports quality of service (QoS) for VXLAN
● Extends the existing operating model to the cloud or environments in which VXLAN is required
● Supports Cisco vPath technology for virtualized network services together with VXLAN
● Provides an XML API for customization and integration
● Supports VXLAN in multicast mode (used in this document) and in unicast-only mode; when VXLAN operates in unicast-only mode, the underlay infrastructure does not need to support multicast, and all frames, including broadcast, multicast, and unknown unicast, are encapsulated in unicast frames
For more information about VXLAN in the Cisco Nexus 1000V, see the white paper Deploying the VXLAN Feature
in Cisco Nexus 1000V Series Switches.
Use Cases
This section describes some of the use cases enabled by the support of VXLAN routing on the Cisco CSR 1000V.
● VXLAN-to-VLAN routing: The Cisco CSR 1000V can be used to perform VXLAN-to-VLAN routing. In this use case, virtual machines connected to the Cisco Nexus 1000V using VXLAN use the Cisco CSR as their default gateway. The Cisco CSR performs routing between the VXLAN and the VLAN to enable the virtual machines to communicate with physical servers, routers, firewalls, and any other devices that are connected to the network using a VLAN. The Cisco CSR is a VTEP and is connected to the same VXLAN segments as the virtual machines that use it as their default gateway.
● VXLAN-to-VXLAN routing: The Cisco CSR 1000V can be used to perform VXLAN-to-VXLAN routing. In this use case, virtual machines connected to the Cisco Nexus 1000V using VXLAN use the Cisco CSR as their default gateway, and the Cisco CSR performs routing between VXLAN segments to enable virtual machines in different VXLAN segments to communicate with each other. The Cisco CSR can also be the default gateway for physical servers connected to a hardware-based VTEP (such as Cisco Nexus switches). The Cisco CSR is a VTEP and is connected to the same VXLAN segments as the virtual machines that use it as their default gateway.
● VXLAN-to-VXLAN routing between virtual machines and between the virtual customer edge (Cisco CSR) and data center edge (Cisco ASR 1000 or 9000 Series): The Cisco CSR 1000V can be used to perform VXLAN-to-VXLAN routing for traffic between virtual machines in different VXLAN segments, and for traffic between virtual machines and the WAN. In this use case (Figure 4), virtual machines connected to the Cisco Nexus 1000V using VXLAN use the Cisco CSR as their default gateway. The Cisco CSR performs routing between VXLAN segments to enable virtual machines in different VXLAN segments to communicate with each other (east-west traffic, or VXLAN to VXLAN), and also between the Cisco CSR (virtual customer edge) and the WAN edge (Cisco ASR 1000 or 9000 Series). When VXLAN is used between the Cisco CSR (virtual customer edge) and the WAN edge, the data center network is used as a transport network. On the WAN edge, the Cisco ASR 1000 or 9000 Series maps the VXLAN encapsulated packet to a Virtual Routing and Forwarding (VRF) instance and a Multiprotocol Label Switching (MPLS) Layer 3 VPN.
Figure 4. VXLAN Routing Between Virtual Customer Edge (Cisco CSR) and Data Center Edge (Cisco ASR 1000 or 9000 Series)
Scope of Testing
The testing performed for this document had the following goals:
● Validate that the Cisco CSR 1000V works as a VTEP and supports routing between different VNIs.
● Validate that VXLAN on the Cisco CSR 1000V works together with the Cisco Nexus 1000V under these conditions:
◦ Virtual machines are connected to the Cisco Nexus 1000V using VXLAN.
◦ The Cisco CSR is connected to the Cisco Nexus 1000V but runs as a VTEP itself, with no VXLAN configured on the Cisco Nexus 1000V for the Cisco CSR.
◦ VXLAN is run in multicast mode.
● Validate the preceding conditions on Cisco UCS® B-Series Blade Servers.
Test Topology
Figure 5 shows the test topology.
Figure 5.
Test Topology
In Figure 5, the Cisco Nexus 3064 is the aggregation layer in a traditional data center topology. Any switch can be
used instead of the Cisco Nexus 3064; the only feature required on this switch is support for IP multicast.
The Cisco UCS B-Series was used as the computing platform for this testing and was running VMware ESXi 5.1.
A Cisco CSR 1000V running Cisco IOS XE 3.12.00.S is installed in one of the Cisco UCS blade servers.
The Cisco Nexus 1000V running Cisco NX-OS Software Release 4.2(1)SV2(2.2) is installed, and its VEMs are
installed on the Cisco UCS blade server that hosts the Cisco CSR as well as on the Cisco UCS blade server that
hosts the virtual machines.
The GigabitEthernet3 interface on the Cisco CSR is attached to a port profile on the Cisco Nexus 1000V
configured as an access port on VLAN 147. The VMkernel interface of VEM 3 used for VXLAN (IP address
10.107.147.150) is also connected to a port profile that uses that VLAN. VLAN 147 is created in Cisco UCS Manager and allowed on the interfaces of the blades, so the connection between the VTEPs is made over a single transit VLAN; however, it can also be made over a fully routed network with the same results.
Two virtual machines (VM-1 and VM-2) are installed and configured with IP addresses in different subnets. VM-1
has IP address 10.10.10.20/24, and its default gateway is 10.10.10.5, which is bridge domain interface (BDI) 10 on
the Cisco CSR. VM-2 has IP address 10.20.20.20/24, and its default gateway is 10.20.20.5, which is BDI 20 on the
Cisco CSR.
The VEMs at which the Cisco CSR and the virtual machines are connected are managed by the same Cisco
Nexus 1000V Virtual Supervisor Module (VSM).
The Cisco Nexus 1000V supports VXLAN in the free Essentials edition. The Cisco CSR supports VXLAN in the
Premium Technology package.
Goals of the Testing
The topology in Figure 6 shows the functions validated in this document.
VM-1 with IP address 10.10.10.20 is connected to a port profile on the Cisco Nexus 1000V that uses VXLAN 5001
to provide Layer 2 segmentation. VM-1 is configured to use 10.10.10.5, which is BDI 10 on the Cisco CSR, as its
default gateway.
VM-2 with IP address 10.20.20.20 is connected to a port profile on the Cisco Nexus 1000V that uses VXLAN 5002
to provide Layer 2 segmentation. VM-2 is configured to use 10.20.20.5, which is BDI 20 on the Cisco CSR, as its
default gateway.
The Cisco CSR is configured as the VTEP and has BDI 10 for VNI 5001 and BDI 20 for VNI 5002.
The Cisco CSR can route between subnets 10.10.10.0/24 and 10.20.20.0/24, effectively performing VXLAN
routing, and it allows VM-1 to communicate with VM-2.
Figure 6. Topology Showing Validated Functions
Configuration
This section provides the configurations required to achieve the goals of the testing described earlier in this
document.
Cisco Nexus 1000V VSM Configuration
The first part of the configuration is to enable VXLAN on the Cisco Nexus 1000V so that it functions as a VTEP.
This document does not cover basic installation of the Cisco Nexus 1000V; it assumes that a running configuration already exists.
First, enable VXLAN on the VSM:
feature segmentation
To make the VEM on the VMware ESXi blade function as a VTEP and to perform the encapsulation of the VXLAN
traffic, configure a port profile in which a VMkernel interface is placed. The VEM will then use this VMkernel
interface to encapsulate and source the VXLAN traffic.
port-profile type vethernet VXLANvmkernel
vmware port-group
switchport mode access
switchport access vlan 147   <<< Transport VLAN
capability vxlan
no shutdown
state enabled
system vlan 147
The capability vxlan keyword enables the VTEP function on this interface. To keep this traffic working at all times, make sure that the transit VLAN used to send the VXLAN traffic is configured as a system VLAN.
Because VXLAN adds overhead to the original IP packet, you should increase the maximum transmission unit
(MTU) size on the uplink port profiles. In addition, the VXLAN transit VLAN (147) must also be configured here as a
system VLAN.
port-profile type ethernet System-Uplink
vmware port-group
switchport mode trunk
switchport trunk allowed vlan add 147
mtu 1594
channel-group auto mode on mac-pinning
no shutdown
system vlan 147
state enabled
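The required MTU increase can be sanity-checked with simple arithmetic, assuming the standard header sizes from RFC 7348; the following Python fragment is illustrative only.

# VXLAN encapsulation overhead per packet (IPv4 underlay, untagged outer frame)
outer_ethernet = 14   # outer MAC header
outer_ip = 20         # outer IPv4 header
outer_udp = 8         # outer UDP header
vxlan = 8             # VXLAN header (flags + 24-bit VNI)

overhead = outer_ethernet + outer_ip + outer_udp + vxlan
print(overhead)        # 50 bytes of encapsulation overhead
print(1500 + overhead) # 1550: minimum uplink MTU for a 1500-byte inner payload
# The mtu 1594 configured above leaves additional headroom, for example for
# 802.1Q tags on the inner and outer frames.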
Using VMware vSphere, select the VMware ESXi host on which the Cisco Nexus 1000V is a VTEP (VEM 3 in the topology shown in Figure 6), then configure the VMkernel interface and make sure that the port group is configured as VXLANvmkernel.
You can then continue with the configuration that connects the actual virtual machines using VXLAN. You create a bridge domain for each VXLAN segment and associate a segment ID, as well as a multicast group, with the segment.
bridge-domain tenant-1-1
segment id 5001
group 239.1.3.3
no segment mode unicast-only   <<< Make the bridge domain use multicast-based VXLAN
no segment distribution mac
bridge-domain tenant-1-2
segment id 5002
group 239.1.2.2
no segment mode unicast-only
no segment distribution mac
The Cisco CSR 1000V, with Cisco IOS XE 3.12, supports only multicast-based VXLAN; therefore, the Cisco Nexus
1000V must be configured to use multicast-based VXLAN for the VXLAN segments that will interoperate with the
Cisco CSR 1000V: in this case, segment IDs 5001 and 5002.
To finish the configuration on the Cisco Nexus 1000V, you need to create the port profiles to which the virtual
machines (VM-1 and VM-2) will be attached.
The configuration of the port profiles that will be used to connect each of the virtual machines (VM-1 and VM-2) is
shown here.
port-profile type vethernet HostVXLAN-Tenant-1-1
vmware port-group
switchport mode access
switchport access bridge-domain tenant-1-1
no shutdown
state enabled
port-profile type vethernet HostVXLAN-Tenant-1-2
vmware port-group
switchport mode access
switchport access bridge-domain tenant-1-2
no shutdown
state enabled
The preceding configurations are the main ones required to enable VXLAN on the Cisco Nexus 1000V. For
detailed steps, please refer to Deploying the VXLAN Feature in Cisco Nexus 1000V Series Switches.
Cisco CSR Configuration
Configure the UDP port used for VXLAN encapsulation. The Cisco Nexus 1000V 4.2(1)SV2(2.2), the release
tested, uses port 8472, so the Cisco CSR is configured to match the port used on the Cisco Nexus 1000V.
vxlan udp port 8472
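As a quick illustration of why the ports must match, a receiving VTEP classifies VXLAN traffic by UDP destination port before it ever looks at the VNI. The following hypothetical Python parser sketches this (the IANA-assigned VXLAN port is 4789, but this Cisco Nexus 1000V release uses 8472, so both ends must agree):

import struct

VXLAN_UDP_PORT = 8472  # must match the port configured on both VTEPs

def parse_vni(udp_dst_port, payload):
    # Toy parser: accept the packet only if the UDP destination port matches,
    # then extract the 24-bit VNI from the 8-byte VXLAN header.
    if udp_dst_port != VXLAN_UDP_PORT or len(payload) < 8:
        return None                     # not VXLAN traffic for this VTEP
    flags, word = struct.unpack("!B3xI", payload[:8])
    return word >> 8 if flags & 0x08 else None

print(parse_vni(8472, struct.pack("!B3xI", 0x08, 5001 << 8)))  # -> 5001
print(parse_vni(4789, struct.pack("!B3xI", 0x08, 5001 << 8)))  # -> None (port mismatch)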
The VXLAN implementation on the Cisco CSR requires the traffic to be sourced from a loopback interface. The
Cisco CSR uses this IP address as the source of its VXLAN packets.
The IP address of the loopback interface used by the Cisco CSR should be known within the network so that the
Cisco Nexus 1000V can reach it.
In the Cisco Nexus 1000V, a VXLAN VMkernel interface is used to encapsulate and transport VXLAN frames, as
discussed previously. The VMware ESXi host routing table is ignored. The VMKNIC netmask is also ignored. The
Cisco Nexus 1000V VEM initiates an ARP request for every remote VTEP IP address (in this testing, the IP address of the Cisco CSR loopback), regardless of whether that address is on the same subnet. For this reason,
you need to enable proxy ARP on the Layer 3 interface of the Cisco CSR that is connected to the same VLAN as
the VMkernel interface on the VMware ESXi so that the Cisco CSR can respond to ARP requests for its loopback.
Proxy ARP is enabled by default.
Now configure the Cisco CSR interfaces.
The GigabitEthernet3 interface is attached to VLAN 147, which is used as the transport VLAN. Loopback 0 is the
interface used as the VTEP on the Cisco CSR.
interface GigabitEthernet3
ip address 10.107.147.190 255.255.255.0
ip pim sparse-mode
no shutdown
!
interface Loopback0
ip address 1.1.1.1 255.255.255.255
ip pim sparse-mode
no shutdown
Protocol-Independent Multicast (PIM) must be enabled on the interfaces. The Cisco CSR currently supports only
VXLAN with Bidirectional PIM; therefore, the following configuration is also required.
ip pim bidir-enable
ip pim rp-address 10.107.147.190 bidir
The multicast rendezvous point address (rp-address) used in the configuration is the IP address of the
GigabitEthernet3 interface on the Cisco CSR, but the rendezvous point address does not have to be located on the
Cisco CSR itself if the network includes another router that is used as the rendezvous point.
Now configure VXLAN. Configure the network virtualization endpoint (NVE) interface.
interface nve1
no ip address
member vni 5001 mcast-group 239.1.3.3
member vni 5002 mcast-group 239.1.2.2
source-interface Loopback0
Make sure that the VNIs and the multicast groups exactly match the ones configured on the Cisco Nexus 1000V.
Next configure the bridge domains. You do this exactly the same way as for the Cisco Nexus 1000V.
You must attach a physical interface to the bridge domain for the solution to work. GigabitEthernet2 is used in this
document, although this interface was not actually used during this testing.
Adding GigabitEthernet2 to the bridge domain would allow VXLAN traffic to be bridged to it, and it is used when
you want to configure the Cisco CSR as a VXLAN Layer 2 gateway.
During this testing, the goal was to route the traffic instead of bridging. Nevertheless, at least one physical interface
that is up must be present in the bridge domain; otherwise, the bridge domain will stay down, as will the virtual
Layer 3 interface (BDI interface) connected to it.
bridge-domain 10
member vni 5001
member GigabitEthernet2 service-instance 10
!
bridge-domain 20
member vni 5002
member GigabitEthernet2 service-instance 20
!
interface GigabitEthernet2
no ip address
negotiation auto
service instance 10 ethernet
encapsulation dot1q 10
rewrite ingress tag pop 1 symmetric
!
service instance 20 ethernet
encapsulation dot1q 20
rewrite ingress tag pop 1 symmetric
The GigabitEthernet2 interface is configured to pop the VLAN tag that it receives on ingress and, because the rewrite is symmetric, to push it back on egress. In this testing, this behavior isn't important, because GigabitEthernet2 is connected to an empty VLAN and just serves as a placeholder to keep the Layer 3 virtual interface (BDI) up.
Finally, you configure the Layer 3 virtual interfaces (BDI) that are used as the default gateways by the virtual
machines.
interface BDI10
ip address 10.10.10.5 255.255.255.0
!
interface BDI20
ip address 10.20.20.5 255.255.255.0
The testing was performed on a Cisco UCS B-Series Blade Server system, which uses Cisco UCS fabric interconnects with IGMP snooping for multicast. IGMP snooping is enabled by default on the VLANs configured on the fabric interconnects; without it, all multicast traffic would be flooded to all the blades in the Cisco Unified Computing System™ (Cisco UCS®).
Because the Cisco CSR has PIM enabled and it is running on one of the Cisco UCS blades, the Cisco UCS fabric
interconnect will need the IGMP snooping feature to detect this and to handle the port on which the Cisco CSR is
connected as a multicast router (mrouter) port.
Currently, the Cisco UCS fabric interconnect does not treat interfaces south of the fabric interconnect as mrouter interfaces; it can detect multicast routers only north of the fabric interconnect. The IGMP join messages from the Cisco Nexus 1000V are detected by the Cisco UCS fabric interconnect. The workaround for this problem, which gets multicast-based VXLAN working in the topology tested, is to configure static IGMP joins on the Cisco CSR GigabitEthernet3 interface. This configuration triggers an IGMP join from the Cisco CSR, and this join message is understood by the Cisco UCS fabric interconnect.
interface GigabitEthernet3
ip pim sparse-mode
ip igmp join-group 239.1.2.2
ip igmp join-group 239.1.3.3
In addition to this workaround, for multicast to work, a device must send periodic IGMP queries on the VLAN used to transport the VXLAN encapsulated traffic. These queries usually are sent by a multicast-enabled router; however, the IGMP querier function can also be enabled on Layer 2 switches or the Cisco UCS fabric interconnect.
During this testing, the Cisco Nexus 3064 Switch was enabled with multicast so that it would function as the IGMP
querier for the transport VLAN (VLAN 147).
interface Vlan147
ip address 10.107.147.1/24
ip pim sparse-mode
Nexus-VSM# sh ip igmp snooping querier vlan 147
Vlan    IP Address      Version   Expires    Port
147     10.107.147.1    v2        00:04:49   port-channel2
Nexus-VSM#
Verification
Verification of the solution starts with the transmission of ICMP pings from the Microsoft Windows virtual machines
toward the Cisco CSR interfaces and toward each other.
The first virtual machine has an IP address of 10.10.10.20 and a default gateway of 10.10.10.5 (Cisco CSR
BDI10). The second virtual machine has an IP address of 10.20.20.20 and a default gateway of 10.20.20.5 (Cisco
CSR BDI20). Figure 7 shows a successful ping from the second virtual machine to the first virtual machine
traversing the Cisco CSR, as seen in the traceroute.
Keep in mind that these two virtual machines are not on the same subnet and that there is no VLAN-based
communication with the default gateway. All this communication uses VXLAN.
Figure 7. Successful Ping
Now that you have verified that the connectivity works, you can verify the multicast routing and view some statistics
for the Cisco Nexus 1000V and Cisco CSR 1000V.
CSR-Tenant1#show nve vni
Interface   VNI    mcast       VNI state
nve1        5002   239.1.2.2   Up
nve1        5001   239.1.3.3   Up
CSR-Tenant1#
CSR-Tenant1#show nve vni int nve1 detail
Interface   VNI    mcast       VNI state
nve1        5002   239.1.2.2   Up
VNI Detailed statistics:
 Pkts In   Bytes In   Pkts Out   Bytes Out
 493       182621     35         3633
nve1        5001   239.1.3.3   Up
VNI Detailed statistics:
 Pkts In   Bytes In   Pkts Out   Bytes Out
 4882      2292015    5540       356605
CSR-Tenant1#
On the Cisco CSR, as shown in the preceding information, you can verify the configuration of the NVE interface.
You can also see how many packets and bytes were received and sent on the two VXLAN segments. These
counters are very useful in troubleshooting. When the multicast groups are joined successfully, the VNI state transitions to Up.
CSR-Tenant1#show nve peers interface nve1
Interface   Peer-IP          VNI    Up Time
nve1        10.107.147.150   5002   -
nve1        10.107.147.150   5001   -
CSR-Tenant1#
As soon as other peers are detected through received multicast packets, they can be verified here. The Peer-IP address shown in the preceding listing is the address of the VMkernel interface that was configured on the VMware ESXi server for VXLAN.
Nexus-VSM(config)# show bridge-domain
Global Configuration:
Mode: Unicast-only
MAC Distribution: Enable
Bridge-domain tenant-1-1 (1 ports in all)
Segment ID: 5001 (Manual/Active)
Mode: Multicast (override)
MAC Distribution: Disable (override)
Group IP: 239.1.3.3
State: UP
Mac learning: Enabled
Veth31
Bridge-domain tenant-1-2 (1 ports in all)
Segment ID: 5002 (Manual/Active)
Mode: Multicast (override)
MAC Distribution: Disable (override)
Group IP: 239.1.2.2
State: UP
Mac learning: Enabled
Veth11
On the VSM, you can verify that the bridge domains have joined the multicast groups as configured. You can also
verify that the bridge domains are configured in multicast mode.
LAB-UCS-A(nxos)# sh ip igmp snooping groups vlan 147
Type: S - Static, D - Dynamic, R - Router port
Vlan   Group Address   Ver   Type   Port list
147    */*             -     R      Eth1/19
147    239.1.2.2       v2    D      Veth861 Veth871
147    239.1.3.3       v2    D      Veth861 Veth871
LAB-FLEXPOD-A(nxos)#
Finally, you can see on the Cisco UCS fabric interconnect that IGMP joins are seen only from the virtual Ethernet (Veth) ports connecting to the blades, and that the mrouter port is seen only on the northbound interface connecting to the Cisco Nexus 3064.
Conclusion
VXLAN is becoming a more strategic and pervasive technology in enterprise, cloud, and service provider data centers. VXLAN is an overlay technology that provides Layer 2 adjacency for virtual machine connectivity and virtual machine migration across Layer 3 network boundaries. The Cisco CSR 1000V is the first product in the industry to support VXLAN routing (VXLAN Layer 3 gateway). This support provides a great deal of flexibility in where a workload is placed within the data center, where it can be moved, and how virtual network services are connected. This white paper presented some of the most compelling use cases for VXLAN and the associated benefits:
● VXLAN-to-VLAN routing takes a packet sourced in subnet A and destined for subnet B and routes it between a VXLAN segment and a VLAN (in either direction) within the same network device.
● VXLAN-to-VXLAN routing takes a packet sourced in subnet A and destined for subnet B and routes it from one VXLAN segment to another within the same network device.
● VXLAN routing between the virtual customer edge (Cisco CSR) and data center edge (Cisco ASR 1000 and 9000 Series Aggregation Services Routers).
● VXLAN routing on the Cisco CSR 1000V and VXLAN interoperability with the Cisco Nexus 1000V.
For More Information
For more information, please visit:
● Cisco CSR 1000V web page
● Cisco Nexus 1000V web page
● Learn more about VXLAN