
White Paper
Deploy Application Load Balancers
with Source Network Address
Translation in Cisco DFA
Last Updated: 1/27/2016
© 2016 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information.
Page 1 of 32
Contents
Introduction .............................................................................................................................................................. 3
Target Audience.................................................................................................................................................... 3
Prerequisites ......................................................................................................................................................... 3
Placing the Application Load Balancer in the Fabric ........................................................................................... 3
Choosing the Load Balancer Deployment Type .................................................................................................... 3
Deployment Scenario 1: Application Load Balancer with Virtual IP Address Directly Attached to Fabric ...... 4
Data Traffic Path in the Fabric .............................................................................................................................. 5
Configuring Autoconfiguration Profiles .................................................................................................................. 6
Deployment Scenario 2: Application Load Balancer with Host Route Injection and Dynamic Routing between Load Balancer and Fabric ...................................................................................................... 10
Data Traffic Path in the Fabric ............................................................................................................................ 11
Configuring Autoconfiguration Profiles ................................................................................................................ 13
Deployment Scenario 3: Application Load Balancer with Static Routing Between Load Balancer and Fabric ................................................................................................................................................................. 18
Data Traffic Path in the Fabric ............................................................................................................................ 19
Configuring Autoconfiguration Profiles ................................................................................................................ 21
Deployment Scenario 4: Shared Hardware-Accelerated Application Delivery Controller with VIP Address Directly Attached to Fabric ................................................................................................................................... 25
Data Traffic Path in the Fabric ............................................................................................................................ 26
Configuring Autoconfiguration Profiles ................................................................................................................ 28
Deployment Considerations for vPC+ Dual-Attached Appliances .................................................................... 28
Appendix: CLI Configurations for the Profiles .................................................................................................... 29
Introduction
The primary goal of this document is to provide guidelines about how to implement application load balancers in
the data center using Cisco® DFA (Dynamic Fabric Automation).
Readers will learn how to integrate load balancers into the DFA Fabric using network autoconfiguration on Cisco
Nexus® Family switches. The network integration deployment scenarios covered in this document are not specific
to any vendor and can accommodate any application load balancer available on the market today.
Target Audience
This document is written for network architects; network design, planning, and implementation teams; and
application services and maintenance teams.
Prerequisites
This document assumes that the reader is already familiar with the mechanisms of the DFA autoconfiguration
feature. The reader should be familiar with mobility domain, virtual switch interface (VSI) Discovery and
Configuration Protocol (VDP), network profile, and services-network profile configurations. Please refer to the
following configuration guide for more information:
http://www.cisco.com/c/en/us/td/docs/switches/datacenter/dfa/configuration/b-dfa-configuration.html.
Placing the Application Load Balancer in the Fabric
Load-balancer appliances can be connected in several places in the network.
Network autoconfiguration on Cisco Nexus switches allows dynamic instantiation of the necessary configuration on
leaf nodes, so the recommended approach is to connect load balancers at the leaf level. Spine nodes do not
contain any classical Ethernet (CE) host ports and should not be used as service attachment points.
With the dynamic autoconfiguration feature, load balancers, in both hardware and virtual machine form factors, can
be connected anywhere in the network. Network utilization and forwarding can be optimized when relevant service
appliances are attached to a single pair of leaf nodes, referred to as the service leaf. The logical role of the service
leaf does not change the configuration or enable additional features on this set of leaf nodes. It is used essentially
as a central location for attaching service nodes.
If your organization chooses to use the service leaf and needs to use virtual load balancers or virtual appliances,
you will need to follow certain guidelines. With automated or orchestrated virtual services deployment mechanisms,
the automation or orchestration tool must help ensure the location of deployed virtual services and virtual
machines. For example, in Cisco UCS® Director, you can specify a set of hypervisors, on which virtual services can
be created. Attaching this set of hypervisors to the service leaf will help ensure the location of deployed services in
the network.
Choosing the Load Balancer Deployment Type
In a network, a load balancer can be deployed in the following scenarios:
● One or more load balancers for a given tenant: Load balancers can be virtual or physical.
● One or more load balancers shared across multiple tenants: Here, the load balancer is most likely a hardware platform, and depending on the vendor and software, the load balancer may provide built-in virtualization features, such as traffic domains, Virtual Routing and Forwarding (VRF) functions, and virtual contexts.
● One or more hardware offload appliances shared across multiple tenants: Such an appliance would primarily be used for SSL offload or other resource-intensive applications.
This document focuses on deployment scenarios in which a given load balancer is used by a single tenant. The
availability of multitenancy mechanisms allows you to easily expand the single-tenant scenario described here to
multitenant deployments by using VLAN and VRF separation.
Deployment Scenario 1: Application Load Balancer with Virtual IP Address Directly Attached
to Fabric
This scenario walks through a one-arm application load balancer. The virtual IP (VIP) address of the load balancer
is directly attached to the switch and will be visible in a similar way to an end host in the fabric. This very general
and frequently seen use case is shown in Figure 1.
Figure 1. Logical Schema of One-Arm Load Balancer, Web Servers, and Clients Internal and External to Fabric
For this and all other deployment scenarios in this document, the load balancer is configured with Source Network
Address Translation (SNAT) to facilitate the server return path through the load balancer.
The load balancer is configured with one or more VIP addresses depending on the application requirements.
These addresses have their respective default gateways on the Leaf-1 node, which maintains the Address
Resolution Protocol (ARP) cache for all directly attached IP addresses. Each VIP address entry in the ARP cache
of the leaf node is then converted to the /32 IP address prefix and is distributed throughout the fabric using the
fabric control plane (Multiprotocol Border Gateway Protocol [MP-BGP]).
The default gateway for the VIP subnet is a switch virtual interface (SVI), which is automatically configured with the autoconfiguration feature of the fabric.
Network segments that host web servers and fabric-internal clients are configured with their respective autoconfiguration profiles and can use the expedited forwarding or traditional forwarding mode.
Data Traffic Path in the Fabric
Clients that access the load-balanced application can be located within the fabric or external to it. Figures 2 and 3 show how application data traffic is load-balanced in the network fabric in this deployment scenario.
1. Clients external or internal to the fabric request data from the web application, which can be reached through VIP1.
2. On the basis of the algorithm configured for the load balancer, the received request is prepared for forwarding to one of the real web servers on the configuration list. The load balancer performs a NAT operation and swaps out the client’s source IP address in the packet header and swaps in the VIP1 address. This process helps ensure that the return traffic passes back through the load balancer. The packet is then forwarded to the real server. In most deployment scenarios, VIP addresses and real web servers reside on different subnets.
Figure 2. Data Traffic Path in the Fabric: Client to Load Balancer to Web Server Path
3. When the load balancer receives the return traffic from the web server, the traffic is subjected to SNAT. This process helps ensure that the client maintains the TCP session of a current web transaction or the User Datagram Protocol (UDP) data stream of a given application.
4. The load balancer then forwards the return traffic back to the client.
Figure 3. Return Data Traffic Path in the Fabric: Web Server to Load Balancer to Client Return Path
Configuring Autoconfiguration Profiles
You can use the autoconfiguration feature of the Cisco Nexus switches and the related fabric to dynamically
instantiate the necessary configuration wherever end hosts or services appliances are attached to the fabric.
In this deployment scenario, the load balancer, as a services appliance, is configured so that the VIP address of
the load-balanced service is in the same subnet as the physical load-balancer network interface. The VIP address
is seen directly in the ARP table of the switch and redistributed to the fabric as a host /32 prefix. Moreover, there is
no need for any static or dynamic routing adjacency in this case. The load balancer must be properly configured in
the IP subnet (with the correct default gateway IP address).
The autoconfiguration profile defaultNetworkUniversalTfProfile (the CLI command details for this profile can be found in the appendix) will be used here to attach the load balancer in exactly the same way as you attach regular hosts. With the autoconfiguration feature, you can attach a load-balancer appliance from any vendor to the fabric.
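As an illustration, the load-balancer side of this scenario might be configured along the following lines. The syntax here is hypothetical and varies by vendor, and the appliance interface address (172.16.10.2) and virtual-server name are assumptions; the VLAN, gateway, and VIP addresses match the example used in this section.

```
! Hypothetical vendor-neutral one-arm load-balancer configuration (syntax varies by vendor)
interface 1.1
  vlan 100                                    ! must match the VLAN in the DCNM autoconfiguration profile
  ip address 172.16.10.2 255.255.255.0        ! assumed main interface address in the VIP subnet
ip default-gateway 172.16.10.1                ! the anycast gateway SVI instantiated on the leaf
virtual-server web-vip 172.16.10.10 port 80   ! VIP directly in the attached subnet
  snat automap                                ! SNAT so server return traffic flows back through the LB
```

Because the VIP shares the subnet of the physical interface, the leaf learns it directly through ARP; no routing configuration is needed on the appliance.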
Note: This example does not cover out-of-band (OOB) management-port configuration. If an OOB management interface is connected to the fabric and needs to be configured, you also need to create a separate autoconfiguration profile in Cisco Prime™ Data Center Network Manager (DCNM).
First, you need to determine which tenant will be hosting the load balancer (Figures 4 and 5). If the organization
and partition for the tenant do not exist, you will need to define them in DCNM.
When you create the partition, note that with DCNM and Cisco NX-OS Software Release 7.1 and later, you can use universal autoconfiguration profiles. For this and the next deployment scenarios, use vrf-common-universal-dynamic-LB-ES as the partition profile. This specific partition profile is needed to facilitate the redistribution of leaf-local routing information to the fabric. Please refer to the appendix for details about the command-line interface (CLI) commands for this profile.
Figure 4. Organization Creation
Figure 5. Partition Creation
Next, you need to provision the autoconfiguration profile to which you intend to attach the load balancer (Figure 6).
Note that the functions described in this deployment scenario are verified only for matching network and partition
autoconfiguration profiles. You should use the traditional forwarding mode profile,
defaultNetworkUniversalTfProfile, to help ensure that VIP addresses are discovered throughout the
fabric and do not go silent, which may happen as a result of various vendor implementations.
Also note the VLAN and mobility domain being used. You will need to use this exact VLAN ID in the load-balancer configuration. In the example used here, the global mobility domain is used to uniquely derive the virtual network ID (VNI) value for a bridge domain to which the load balancer is attached. However, customers can use the multiple-mobility-domain feature, which allows the choice of a value from the drop-down menu for the network profile configuration. If a virtual appliance with a VDP-capable virtual switch is used (for example, Cisco Nexus 1000V Switch or Kernel-based Virtual Machine [KVM] Open Virtual Switch [OVS]), the mobility domain is not needed. Please refer to the configuration guide for details:
http://www.cisco.com/c/en/us/td/docs/switches/datacenter/dfa/configuration/b-dfa-configuration/auto_configuration.html.
Figure 6. Autoconfiguration Profile Creation for the Load-Balancer VIP-Attached Subnet
After you plug in your load-balancer appliance or, in the case of a virtual appliance, spin up the virtual machine and launch a service, the SVI default gateway is instantiated on the leaf node using autoconfiguration. Then the VIP address for a configured service is learned on the leaf node, along with the IP address of the main interface of the load balancer in one-arm mode.
The instantiated autoconfiguration profile can be checked from the CLI of the leaf node to which the load balancer
is attached:
Leaf-1# show fabric database host detail
Active Host Entries
flags: L - Locally inserted, V - vPC+ inserted, R - Recovered, X - xlated Vlan
VLAN   VNI     STATE            FLAGS  PROFILE(INSTANCE)
100    30003   Profile Active   L      defaultNetworkUniversalTfProfile(instance_def_100_1)

Displaying Data Snooping Ports
Interface   Encap   Flags   State
Eth1/1      100     L       Profile Active
Leaf-1#
VIP addresses configured on the load balancer are learned and can be seen from the MAC address table on the
leaf node:
Leaf-1# show mac address-table vlan 100
Legend:
* - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC
age - seconds since last seen,+ - primary entry using vPC Peer-Link
   VLAN   MAC Address       Type      age   Secure  NTFY  Ports/SWID.SSID.LID
---------+-----------------+---------+-----+-------+-----+-------------------
*  100    2020.0000.00aa    static    0     F       F     sup-eth2
*  100    d867.d903.f345    dynamic   0     F       F     Eth1/1
Leaf-1#
As the configuration of the load balancer dictates, all VIP addresses use the same subnet and terminate on the leaf
node:
Leaf-1# show ip arp vrf OrganizationABC:PartitionABC
Flags: * - Adjacencies learnt on non-active FHRP router
       + - Adjacencies synced via CFSoE
       # - Adjacencies Throttled for Glean
       D - Static Adjacencies attached to down interface

IP ARP Table for context OrganizationABC:PartitionABC
Total number of entries: 4
Address         Age        MAC Address      Interface
172.16.10.10    00:02:11   d867.d903.f345   Vlan100
172.16.10.11    00:03:02   d867.d903.f345   Vlan100
172.16.10.12    00:03:02   d867.d903.f345   Vlan100
172.16.10.13    00:03:02   d867.d903.f345   Vlan100
Leaf-1#
The leaf node converts each of the ARP entries for the corresponding VIP addresses to /32 IP address prefixes
and shares them with the fabric:
Leaf-1# sh ip route vrf OrganizationABC:PartitionABC
IP Route Table for VRF "OrganizationABC:PartitionABC"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>
0.0.0.0/0, ubest/mbest: 1/0
    *via 10.201.4.21%default, [200/0], 00:13:50, bgp-65510, internal, tag 65510, segid 50003
172.16.10.0/24, ubest/mbest: 1/0, attached
*via 172.16.10.1, Vlan100, [0/0], 00:14:01, direct, tag 12345,
172.16.10.1/32, ubest/mbest: 1/0, attached
*via 172.16.10.1, Vlan100, [0/0], 00:14:01, local, tag 12345,
172.16.10.10/32, ubest/mbest: 1/0, attached
*via 172.16.10.10, Vlan100, [190/0], 00:06:18, hmm
172.16.10.11/32, ubest/mbest: 1/0, attached
*via 172.16.10.11, Vlan100, [190/0], 00:06:18, hmm
172.16.10.12/32, ubest/mbest: 1/0, attached
*via 172.16.10.12, Vlan100, [190/0], 00:06:18, hmm
172.16.10.13/32, ubest/mbest: 1/0, attached
*via 172.16.10.13, Vlan100, [190/0], 00:06:18, hmm
Leaf-1#
Leaf-1# sh ip bgp vrf OrganizationABC:PartitionABC
BGP routing table information for VRF OrganizationABC:PartitionABC, address family IPv4 Unicast
BGP table version is 10, local router ID is 172.16.10.1
Status: s-suppressed, x-deleted, S-stale, d-dampened, h-history, *-valid, >-best
Path type: i-internal, e-external, c-confed, l-local, a-aggregate, r-redist, I-injected
Origin codes: i - IGP, e - EGP, ? - incomplete, | - multipath, & - backup

   Network             Next Hop      Metric   LocPrf   Weight  Path
*>i0.0.0.0/0           10.201.4.21            100      0       i
*>r172.16.10.0/24      0.0.0.0       0        100      32768   ?
*>r172.16.10.10/32     0.0.0.0       0        100      32768   ?
*>r172.16.10.11/32     0.0.0.0       0        100      32768   ?
*>r172.16.10.12/32     0.0.0.0       0        100      32768   ?
*>r172.16.10.13/32     0.0.0.0       0        100      32768   ?
Leaf-1#
The load balancer’s network connectivity is now provisioned. The load balancer is now ready for further service
policy configuration, which can be performed through its CLI or GUI, depending on the vendor of the load balancer
in use. Such configuration is beyond the scope of this document.
Deployment Scenario 2: Application Load Balancer with Host Route Injection and Dynamic
Routing between Load Balancer and Fabric
In this scenario, the virtual or physical load-balancer appliance is directly attached to a leaf switch. However, the VIP address for the load-balanced application appears to be attached behind a virtual router inside the load balancer. The reachability information about the configured VIP addresses is shared with the fabric using the Open Shortest Path First (OSPF) dynamic routing protocol. The load balancer establishes dynamic routing protocol peering with the leaf device to facilitate the exchange of route information (Figure 7).
Figure 7. Logical Schema Showing Dynamic Routing Adjacency Between the Load Balancer and the Fabric
Just as in deployment scenario 1, the load balancer is configured with SNAT to facilitate the server return path
through the load balancer.
Using the OSPF dynamic routing protocol, the load balancer shares reachability information about the entire
subnet on which VIP addresses reside. When the leaf node receives this reachability information, it is redistributed
to the MP-BGP control plane and shared throughout the fabric. As a result, the entire fabric will know how to reach
the VIP addresses for the applications.
Note: Configuration of the dynamic routing protocol and peering is handled using the autoconfiguration profile and is discussed later in this document.
Data Traffic Path in the Fabric
Scenario 2 is similar in many ways to scenario 1. Figures 8 and 9 show how application data traffic is load-balanced in the DFA fabric in this deployment scenario.
1. Clients external or internal to the fabric request data from the web application, which can be reached through the VIP address (VIP1). The VIP addresses are already configured on the load balancer and shared with the fabric, so any workload or device attached to the fabric in the same VRF instance will be able to reach the desired VIP address.
2. On the basis of the algorithm configured for the load balancer, the received request is prepared for forwarding to one of the web servers on the configuration list. The load balancer performs a NAT operation and swaps out the client’s source IP address in the packet header and swaps in the VIP1 address. This process helps ensure that the return traffic passes through the load balancer. The packet is then forwarded to the web server selected earlier.
Figure 8. Data Traffic Path in the Fabric: Client to Load Balancer to Web Server Path
3. When the load balancer receives the return traffic from the web server, the traffic is subjected to NAT. This process helps ensure that the client maintains the TCP session of a current web transaction.
4. The load balancer then forwards the return traffic back to the client.
Figure 9. Return Data Traffic Path in the Fabric: Web Server to Load Balancer to Client Return Path
Configuring Autoconfiguration Profiles
In this deployment scenario, the fabric needs to establish dynamic routing adjacency with the load balancer.
In other words, the leaf node must automatically establish OSPF routing adjacency with the load balancer, receive
prefixes from the load balancer, and then redistribute the prefixes to the BGP control plane of the fabric.
In contrast to the first scenario, there is no need to configure the distributed anycast gateway when establishing dynamic routing protocol adjacency between the load balancer and the leaf node. The network autoconfiguration profile that meets this requirement and that is created specifically for such a scenario is serviceNetworkUniversalDynamicRoutingLBProfile. Note that this autoconfiguration profile must be deployed in the partition defined with the vrf-common-universal-dynamic-LB-ES partition profile. The CLI command details for both profiles can be found in the appendix. Using these two profiles in parallel facilitates the redistribution of the correct route information between the fabric and the load balancer (Figures 10 and 11).
Figure 10. Configuring the Partition Using the vrf-common-universal-dynamic-LB-ES Profile
Figure 11. Configuring the Network Segment Used for Dynamic Routing Peering Between the Fabric and Load Balancer
The OSPF routing protocol configuration on the load balancer itself needs to be specified separately, using either
the load balancer’s CLI or GUI. The following options need to be configured:
● Peering with the fabric using backbone area 0 (equivalent to area 0.0.0.0)
● Default route (0.0.0.0/0) with the next hop pointing to the gateway: in the example here, 10.10.15.1
● OSPF router ID according to the load-balancer-specific syntax
● Advertisement of the VIP addresses in OSPF
● VLAN ID value that matches the value configured in the autoconfiguration profile in DCNM: in the example here, 301
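On an appliance with an industry-standard routing CLI, the options above might translate to a configuration like the following sketch. The exact commands depend on the load-balancer vendor; the interface name, router ID, and the VIP-redistribution statement are assumptions for illustration, while the addresses and VLAN ID come from the example in this scenario.

```
! Hypothetical load-balancer-side OSPF configuration (syntax varies by vendor)
interface vlan301
  ip address 10.10.15.2 255.255.255.0     ! peering segment toward the leaf
router ospf
  router-id 10.10.15.2                    ! assumed router ID
  network 10.10.15.0/24 area 0.0.0.0      ! peer with the fabric in backbone area 0
  redistribute vip                        ! advertise the configured VIP addresses (vendor-specific keyword)
ip route 0.0.0.0 0.0.0.0 10.10.15.1       ! default route with next hop at the fabric gateway
```

With this in place, the leaf should see the appliance as an OSPF neighbor on VLAN 301 and learn the VIP host routes, as shown in the verification output later in this section.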
After the load balancer is connected to the fabric, the leaf node will detect data traffic tagged with VLAN ID 301 on the host port. This detection will trigger the instantiation of the autoconfiguration profile.
The following configuration is instantiated on the leaf or added to the existing configuration as part of the
autoconfiguration process:
Leaf-1# show run ospf
feature ospf
router ospf 5
  vrf OrganizationA:PartitionA
    router-id 10.10.15.1
interface Vlan301
  ip router ospf 5 area 0.0.0.0
Leaf-1# sh run bgp
router bgp 65510
  vrf OrganizationA:PartitionA
    address-family ipv4 unicast
      redistribute hmm route-map FABRIC-RMAP-REDIST-HOST
      redistribute direct route-map FABRIC-RMAP-REDIST-SUBNET
      redistribute ospf 5 route-map ospfMap
      maximum-paths ibgp 2
    address-family ipv6 unicast
      redistribute hmm route-map FABRIC-RMAP-REDIST-V6HOST
      redistribute direct route-map FABRIC-RMAP-REDIST-SUBNET
      maximum-paths ibgp 2
vrf context OrganizationA:PartitionA
  rd auto
  address-family ipv4 unicast
    route-target import 65510:9999
    route-target both auto
  address-family ipv6 unicast
    route-target import 65510:9999
    route-target both auto
Leaf-1#
Leaf-1# show run int vlan 301 expand-port-profile
interface Vlan301
  no shutdown
  vrf member OrganizationA:PartitionA
  ip address 10.10.15.1/24 tag 12345
  ip router ospf 5 area 0.0.0.0
Leaf-1#
Note the redistribute ospf 5 command in the BGP configuration. This command helps ensure that all VIP
address prefixes received from the load balancers are redistributed to the fabric BGP control plane and shared with
the rest of the fabric: that is, that the entire fabric will learn these prefixes through BGP.
The instantiated autoconfiguration profile can be checked from the CLI of the leaf node to which the load balancer is attached:
Leaf-1# sh fabric database host detail
Active Host Entries
flags: L - Locally inserted, V - vPC+ inserted, R - Recovered, X - xlated Vlan
VLAN   VNI     STATE            FLAGS  PROFILE(INSTANCE)
301    30001   Profile Active   L      serviceNetworkUniversalDynamicRoutingLBProfile(instance_def_301_1)

Displaying Data Snooping Ports
Interface   Encap   Flags   State
Eth1/1      301     L       Profile Active
Leaf-1#
As seen in the following CLI output, the load balancer successfully established a routing adjacency with the fabric
leaf:
Leaf-1# sh ip ospf neighbors vrf OrganizationA:PartitionA
OSPF Process ID 5 VRF OrganizationA:PartitionA
Total number of neighbors: 1
Neighbor ID    Pri  State     Up Time    Address      Interface
10.10.15.2     1    FULL/DR   00:00:03   10.10.15.2   Vlan301
Leaf-1#
The next CLI output confirms that the leaf received valid /32 IP routes through OSPF. Here, each such IP route
represents a VIP address configured on the load balancer:
Leaf-1# sh ip route vrf OrganizationA:PartitionA
IP Route Table for VRF "OrganizationA:PartitionA"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>
0.0.0.0/0, ubest/mbest: 1/0
    *via 10.201.4.21%default, [200/0], 00:45:15, bgp-65510, internal, tag 65510, segid 50005
10.10.15.0/24, ubest/mbest: 1/0, attached
*via 10.10.15.1, Vlan301, [0/0], 00:45:28, direct, tag 12345,
10.10.15.1/32, ubest/mbest: 1/0, attached
*via 10.10.15.1, Vlan301, [0/0], 00:45:28, local, tag 12345,
172.16.10.10/32, ubest/mbest: 1/0
*via 10.10.15.2, Vlan301, [110/41], 00:18:42, ospf-5, intra
172.16.10.11/32, ubest/mbest: 1/0
*via 10.10.15.2, Vlan301, [110/41], 00:18:42, ospf-5, intra
172.16.10.12/32, ubest/mbest: 1/0
*via 10.10.15.2, Vlan301, [110/41], 00:18:42, ospf-5, intra
172.16.10.13/32, ubest/mbest: 1/0
*via 10.10.15.2, Vlan301, [110/41], 00:18:42, ospf-5, intra
Leaf-1#
The following CLI output shows that redistribution from OSPF to BGP works as expected:
Leaf-1# sh ip bgp vrf OrganizationA:PartitionA
BGP routing table information for VRF OrganizationA:PartitionA, address family IPv4 Unicast
BGP table version is 35, local router ID is 10.10.15.1
Status: s-suppressed, x-deleted, S-stale, d-dampened, h-history, *-valid, >-best
Path type: i-internal, e-external, c-confed, l-local, a-aggregate, r-redist, I-injected
Origin codes: i - IGP, e - EGP, ? - incomplete, | - multipath, & - backup

   Network             Next Hop      Metric   LocPrf   Weight  Path
*>i0.0.0.0/0           10.201.4.21            100      0       i
*>r10.10.15.0/24       0.0.0.0       0        100      32768   ?
*>r172.16.10.10/32     0.0.0.0       41       100      32768   ?
*>r172.16.10.11/32     0.0.0.0       41       100      32768   ?
*>r172.16.10.12/32     0.0.0.0       41       100      32768   ?
*>r172.16.10.13/32     0.0.0.0       41       100      32768   ?
Leaf-1#
In addition, the next two sets of CLI output show the MAC address and the respective ARP entry of the load
balancer’s interface:
Leaf-1# sh mac address-table vlan 301
Legend:
* - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC
age - seconds since last seen,+ - primary entry using vPC Peer-Link
   VLAN   MAC Address       Type      age   Secure  NTFY  Ports/SWID.SSID.LID
---------+-----------------+---------+-----+-------+-----+-------------------
*  301    d867.d903.f345    dynamic   10    F       F     Eth1/1
Leaf-1#
Leaf-1# sh ip arp vrf OrganizationA:PartitionA
Flags: * - Adjacencies learnt on non-active FHRP router
+ - Adjacencies synced via CFSoE
# - Adjacencies Throttled for Glean
D - Static Adjacencies attached to down interface
IP ARP Table for context OrganizationA:PartitionA
Total number of entries: 1
Address       Age        MAC Address      Interface
10.10.15.2    00:16:51   d867.d903.f345   Vlan301
Leaf-1#
In summary, Figure 12 depicts the logical routing topology of this scenario.
Figure 12. Logical Routing Topology
Deployment Scenario 3: Application Load Balancer with Static Routing Between Load
Balancer and Fabric
This scenario is very similar to scenario 2: that is, the VIP address for the load-balanced application is configured
on the load balancer. However, in scenario 3 the load balancer does not establish dynamic routing protocol
adjacency with the leaf node in the fabric. Instead, the reachability information about VIP addresses is configured
on the leaf node and the load balancer using static routes (Figure 13).
Figure 13. Logical Schema Showing the Static Routing Between the Load Balancer and the Fabric
Just as in the previous deployment scenarios, the load balancer is configured with SNAT to facilitate the server
return path through the load balancer.
Static routes toward VIP addresses need to be configured on a directly attached leaf node: in the example here, on
Leaf-1. The next hop for these prefixes should point to the load balancer’s interface IP address: in the example
here, 10.10.20.2.
In addition, these static routes must be redistributed to the MP-BGP control plane of the fabric to facilitate
fabricwide reachability to VIP addresses. Static routes to VIP addresses together with their redistribution are
configured in DCNM as part of the autoconfiguration profile and are dynamically instantiated when the load
balancer is attached to the network. As a result, the entire fabric will know how to reach VIP addresses for the
respective applications.
Please note that automated configuration of the static routes happens as part of the partition profile autoconfiguration. This means that any network autoconfiguration profile associated with such a partition profile or VRF will also trigger automated configuration of static routes on a given leaf node.
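Conceptually, the configuration that the partition profile instantiates on Leaf-1 resembles the following sketch. The VRF name, route tag, route-map name, and VIP prefixes are illustrative assumptions that reuse the addressing from the earlier scenarios; the next hop 10.10.20.2 is the load balancer's interface address from this scenario.

```
! Illustrative static routes toward the VIP addresses, as instantiated by the partition profile
vrf context OrganizationA:PartitionA
  ip route 172.16.10.10/32 10.10.20.2 tag 12345   ! next hop is the load balancer's interface
  ip route 172.16.10.11/32 10.10.20.2 tag 12345

! Redistribution of the static routes into the fabric MP-BGP control plane
router bgp 65510
  vrf OrganizationA:PartitionA
    address-family ipv4 unicast
      redistribute static route-map STATIC-RMAP   ! route-map name is hypothetical
```

Matching static routes on the load balancer side (a default route toward the leaf gateway) complete the path; refer to the appendix for the actual profile CLI.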
Data Traffic Path in the Fabric
Figures 14 and 15 show how application data traffic is load-balanced in the DFA fabric in this deployment scenario.
1. Clients external or internal to the fabric request data from the web application, which can be reached through the VIP address (VIP1). The VIP addresses are already configured on the load balancer. Static routes to the VIP addresses are configured on the Leaf-1 node and are redistributed to the fabric control plane, so any workload or device attached to the fabric in the same VRF instance will be able to reach the desired VIP address.
2. On the basis of the algorithm configured for the load balancer, the received request is prepared for forwarding to one of the web servers on the configuration list. The load balancer performs a NAT operation, swapping out the client's source IP address in the packet header and swapping in the VIP1 address. This process helps ensure that the return traffic passes through the load balancer. The packet is then forwarded to the web server selected earlier.
Figure 14.
Data Traffic Path in the Fabric: Client to Load Balancer to Web Server Path
3. When the load balancer receives the return traffic from the web server, the traffic is again subjected to NAT. This process helps ensure that the client maintains the TCP session of the current web transaction or the UDP data stream of a given application.
4. The load balancer then forwards the return traffic back to the client.
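The four steps above amount to a stateful source-address rewrite: because the web server only ever sees VIP1 as the client, its replies must return through the load balancer. The sketch below is a conceptual model only, not vendor code; the VIP value and the server addresses are hypothetical examples (the VIP is placed in this section's 172.16.10.0/24 VIP subnet).

```python
# Conceptual model of the SNAT behavior described in steps 1-4.
# VIP1 and the server addresses are hypothetical; this is not vendor code.

VIP1 = "172.16.10.10"          # hypothetical VIP in the 172.16.10.0/24 subnet
SERVERS = ["192.168.1.11", "192.168.1.12"]   # hypothetical back-end servers

nat_table = {}                  # maps server-side flow back to the real client

def client_to_server(client_ip: str, flow_id: int) -> dict:
    """Step 2: pick a server, swap the client source IP for VIP1."""
    server = SERVERS[flow_id % len(SERVERS)]   # stand-in for the LB algorithm
    nat_table[(server, flow_id)] = client_ip   # remember the real client
    return {"src": VIP1, "dst": server, "flow": flow_id}

def server_to_client(server_ip: str, flow_id: int) -> dict:
    """Steps 3-4: reverse the NAT so the client keeps its session."""
    client_ip = nat_table[(server_ip, flow_id)]
    return {"src": VIP1, "dst": client_ip, "flow": flow_id}

fwd = client_to_server("10.50.1.99", flow_id=1)
rev = server_to_client(fwd["dst"], flow_id=1)
assert rev["dst"] == "10.50.1.99"   # reply returns to the original client
```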
Figure 15.
Return Data Traffic Path in the Fabric: Web Server to Load Balancer to Client Return Path
Configuring Autoconfiguration Profiles
In this deployment scenario, the fabric needs to configure static routes to VIP addresses on Leaf-1, to which the load balancer is attached. In addition, Leaf-1 needs to redistribute these static routes to the MP-BGP control plane of the fabric.
The network autoconfiguration profile that fits this requirement and is specifically created for such a scenario is serviceNetworkUniversalTfStaticRoutingProfile. Note that this autoconfiguration profile must be deployed in a partition defined with the vrf-common-universal-static partition profile. Using these two profiles in parallel facilitates redistribution of the correct route information between the fabric and the load balancer.
When configuring the vrf-common-universal-static partition profile in DCNM, you must specify the static
route to the subnet in which the VIP addresses are located. The next hop for this route should point to the interface
IP address of the load balancer (Figures 16 and 17).
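The n00/nh00 parameter pairs shown in Figure 16 are simple substitution variables: each pair expands to one static route in the instantiated configuration. The sketch below illustrates that expansion; the parameter names follow the figure and the values follow this section's example, but this is not DCNM's actual template engine.

```python
# Illustrative expansion of the partition-profile parameters into CLI.
# Parameter names (n00, nh00, ...) follow Figure 16; this is a sketch of
# the substitution, not DCNM's actual template engine.

params = {
    "n00": "172.16.10.0/24",   # subnet of the static route (VIP subnet)
    "nh00": "10.10.20.2",      # next hop: the load balancer's interface IP
}

def expand_static_routes(params: dict) -> list:
    """Pair each n<NN> subnet with its nh<NN> next hop."""
    lines = []
    for key in sorted(k for k in params
                      if k.startswith("n") and not k.startswith("nh")):
        idx = key[1:]                      # "00", "01", ...
        lines.append(f"ip route {params[key]} {params['nh' + idx]}")
    return lines

print(expand_static_routes(params))   # ['ip route 172.16.10.0/24 10.10.20.2']
```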
Figure 16.
Configuring the Partition Profile: n00, n01, n02, etc. Signify the Subnet of the Static Route, and nh00, nh01, nh02, etc. Signify the Next-Hop IP Address
Note: The CLI command details for these two profiles can be found in the appendix.
Figure 17.
Configuring the Network Segment Used for Attaching a Load Balancer to the Fabric
The static route configuration on the load balancer itself needs to be specified separately, using either the load
balancer’s CLI or GUI. In most deployment cases, only the static default route will be required. The following
options need to be configured:
● Default route (0.0.0.0/0) with the next hop pointing to the gateway: in the example here, 10.10.20.1
● VLAN ID that matches the value configured in the autoconfiguration profile in DCNM: in the example here, 331
After the load balancer is connected to the fabric, the leaf node will detect data traffic tagged with VLAN ID 331 on the host port. This detection will trigger the instantiation of the autoconfiguration profiles. The following configuration is dynamically instantiated on the leaf node, or added to the existing configuration, as part of the autoconfiguration process:
Leaf-1# sh run int vlan 331 expand-port-profile
interface Vlan331
no shutdown
vrf member OrganizationA:PartitionC
ip address 10.10.20.1/24 tag 12345
fabric forwarding mode anycast-gateway
Leaf-1#
Leaf-1# sh run bgp
router bgp 65510
vrf OrganizationA:PartitionC
address-family ipv4 unicast
redistribute hmm route-map FABRIC-RMAP-REDIST-HOST
redistribute direct route-map FABRIC-RMAP-REDIST-SUBNET
redistribute static route-map staticMap
maximum-paths ibgp 2
address-family ipv6 unicast
redistribute hmm route-map FABRIC-RMAP-REDIST-V6HOST
redistribute direct route-map FABRIC-RMAP-REDIST-SUBNET
maximum-paths ibgp 2
vrf context OrganizationA:PartitionC
rd auto
address-family ipv4 unicast
route-target import 65510:9999
route-target both auto
address-family ipv6 unicast
route-target import 65510:9999
route-target both auto
Leaf-1#
Leaf-1# sh run ip
vrf context OrganizationA:PartitionC
ip route 172.16.10.0/24 10.10.20.2
Leaf-1#
Note the redistribute static command in the preceding configuration. This command helps ensure that the
static route pointing to VIP addresses is redistributed to the fabric BGP control plane and shared with the rest of
the fabric.
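The effect of redistribute static can be modeled as a filter over the VRF routing table: routes whose source protocol is static are copied into BGP with the "incomplete" origin, which is why they appear as redistributed (r) paths with origin ? in the show ip bgp output later in this section. The sketch below is a conceptual model, not NX-OS internals.

```python
# Conceptual model of "redistribute static": static routes from the VRF
# table are injected into BGP with origin "?" (incomplete). This mirrors
# the redistributed-path entries in the show output, but is not NX-OS code.

vrf_routes = [
    {"prefix": "0.0.0.0/0", "proto": "bgp"},
    {"prefix": "10.10.20.0/24", "proto": "direct"},
    {"prefix": "172.16.10.0/24", "proto": "static"},
]

def redistribute(routes: list, protos: set) -> list:
    """Copy routes of the given protocols into BGP as redistributed paths."""
    return [
        {"prefix": r["prefix"], "path_type": "r", "origin": "?", "weight": 32768}
        for r in routes if r["proto"] in protos
    ]

bgp_paths = redistribute(vrf_routes, {"static", "direct"})
print([p["prefix"] for p in bgp_paths])  # ['10.10.20.0/24', '172.16.10.0/24']
```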
The instantiated autoconfiguration profile can be checked from the CLI of the leaf node to which the load balancer
is attached:
Leaf-1# sh fabric database host detail
Active Host Entries
flags: L - Locally inserted, V - vPC+ inserted, R - Recovered, X - xlated Vlan
VLAN   VNI     STATE            FLAGS  PROFILE(INSTANCE)
331    30031   Profile Active   L      serviceNetworkUniversalTfStaticRoutingProfile(instance_def_331_4)

Displaying Data Snooping Ports
Interface   Encap   Flags   State
Eth1/1      331     L       Profile Active
Leaf-1#
The following two sets of CLI output show that static route redistribution into BGP works as expected:
Leaf-1# sh ip route vrf OrganizationA:PartitionC
IP Route Table for VRF "OrganizationA:PartitionC"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>
0.0.0.0/0, ubest/mbest: 1/0
*via 10.201.4.21%default, [200/0], 00:03:18, bgp-65510, internal, tag 65510,
segid 50007
10.10.20.0/24, ubest/mbest: 1/0, attached
*via 10.10.20.1, Vlan331, [0/0], 00:03:32, direct, tag 12345,
10.10.20.1/32, ubest/mbest: 1/0, attached
*via 10.10.20.1, Vlan331, [0/0], 00:03:32, local, tag 12345,
172.16.10.0/24, ubest/mbest: 1/0
*via 10.10.20.2, [1/0], 00:03:31, static
Leaf-1#
Leaf-1# sh ip bgp vrf OrganizationA:PartitionC
BGP routing table information for VRF OrganizationA:PartitionC, address family IPv4 Unicast
BGP table version is 10, local router ID is 10.10.20.1
Status: s-suppressed, x-deleted, S-stale, d-dampened, h-history, *-valid, >-best
Path type: i-internal, e-external, c-confed, l-local, a-aggregate, r-redist, I-injected
Origin codes: i - IGP, e - EGP, ? - incomplete, | - multipath, & - backup

   Network            Next Hop         Metric   LocPrf   Weight Path
*>i0.0.0.0/0          10.201.4.21               100           0 i
*>r10.10.20.0/24      0.0.0.0          0        100       32768 ?
*>r10.10.20.2/32      0.0.0.0          0        100       32768 ?
*>r172.16.10.0/24     0.0.0.0          0        100       32768 ?
Leaf-1#
In addition, the next two sets of CLI output show the MAC address and the respective ARP entry of the load
balancer’s interface:
Leaf-1# sh mac address-table vlan 331
Legend:
* - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC
age - seconds since last seen, + - primary entry using vPC Peer-Link
   VLAN   MAC Address       Type      age   Secure NTFY  Ports/SWID.SSID.LID
---------+-----------------+--------+------+------+----+--------------------
* 331     2020.0000.00aa    static    0     F      F     sup-eth2
* 331     d867.d903.f345    dynamic   400   F      F     Eth1/1
Leaf-1#
Leaf-1# sh ip arp vrf OrganizationA:PartitionC
IP ARP Table for context OrganizationA:PartitionC
Total number of entries: 1
Address         Age        MAC Address      Interface
10.10.20.2      00:00:09   d867.d903.f345   Vlan331
Leaf-1#
Deployment Scenario 4: Shared Hardware-Accelerated Application Delivery Controller with
VIP Address Directly Attached to Fabric
Scenario 4 deploys an application load balancer along with a hardware-accelerated application delivery controller
(ADC). The ADC is equipped with hardware-accelerated encryption offload mechanisms and can be used as a
shared resource among multiple applications (Figure 18).
The load balancer can be either a physical appliance or a virtual appliance. In the latter case, it should be able to
offload SSL encryption from the virtual load balancer to a physical hardware-accelerated ADC.
Figure 18.
Logical Schema Showing the Load Balancer and Hardware-Accelerated ADC Connection to the Service Leaf
The load balancer and hardware-accelerated ADC can be deployed using any of the first three scenarios
discussed in this document, depending on the requirements and each platform’s capabilities. However, to make
this scenario simpler, the deployment uses scenario 1 to deploy both the load balancer and hardware-accelerated
ADC. Also, each of the two devices in the network needs to perform NAT operations to enforce the return traffic
path.
Network autoconfiguration on Cisco Nexus switches dynamically instantiates autoconfiguration profiles anywhere in the fabric, so the load balancer or hardware-accelerated ADC can be placed anywhere in the fabric. However, to optimize fabric utilization, it is recommended to enforce a single location for the placement of service nodes. This can be done by designating a single leaf node, or a pair of leaf nodes bundled in a virtual PortChannel (vPC+), as the service leaf and then attaching the most heavily used service nodes there. This approach helps ensure that back-end traffic is locally switched on the service leaf and does not need to traverse the fabric across the spines.
Note that Layer 3 route peering over vPC+ is supported on the Cisco Nexus 6000 Series and the Cisco Nexus 5600 platform, as well as on the Cisco Nexus 7000 and 7700 Series with Cisco NX-OS Release 7.2.1 or later.
Also note that the “service leaf” designation does not pertain to any special leaf configuration, but rather is an
administrative designation.
Data Traffic Path in the Fabric
Figures 19 and 20 show how application data traffic is load-balanced in the DFA fabric in this deployment scenario.
1. Clients external or internal to the fabric request data from the SSL-encrypted web application (TCP port 443), which can be reached through the VIP address (VIP1). The VIP addresses are already configured on the load balancer and shared with the fabric, so any workload or device attached to the fabric in the same VRF instance will be able to reach the desired VIP address.
2. The load balancer is configured to forward any received SSL-encrypted traffic to the hardware-accelerated ADC. The load balancer will perform a SNAT operation to enforce the return path.
3. Upon receipt of the traffic, the hardware-accelerated ADC will decrypt the web traffic, select one of the web servers according to the configured algorithm, and forward the data. Upon forwarding the data, the ADC will perform a NAT operation and also change the destination TCP port from 443 to 81. This process enforces the return path and helps ensure that the web server recognizes that the traffic received on port 81 is decrypted traffic.
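Steps 1 through 3 chain two NAT stages: the load balancer applies SNAT, and the ADC decrypts and additionally translates the destination TCP port from 443 to 81 so the web server can recognize decrypted traffic. The sketch below models only the header rewrites of the forward path; all IP addresses are hypothetical, and only the 443-to-81 port translation is taken from the text.

```python
# Conceptual model of the two-stage forward path in steps 1-3.
# All IP addresses here are hypothetical; only the 443 -> 81 port
# translation is taken from the text. SSL processing is not modeled.

LB_SNAT_IP = "10.10.20.10"     # hypothetical SNAT address on the load balancer
ADC_SNAT_IP = "10.10.30.10"    # hypothetical SNAT address on the ADC
WEB_SERVER = "192.168.1.11"    # hypothetical back-end web server

def load_balancer(pkt: dict) -> dict:
    """Step 2: SNAT and hand the encrypted flow to the ADC."""
    return {**pkt, "src": LB_SNAT_IP, "dst": ADC_SNAT_IP}

def adc(pkt: dict) -> dict:
    """Step 3: decrypt (not modeled), SNAT, and retarget port 443 -> 81."""
    assert pkt["dport"] == 443, "ADC only handles SSL traffic"
    return {**pkt, "src": ADC_SNAT_IP, "dst": WEB_SERVER, "dport": 81}

client_pkt = {"src": "10.50.1.99", "dst": "VIP1", "dport": 443}
to_server = adc(load_balancer(client_pkt))
print(to_server["dport"])   # 81 -- the server knows this is decrypted traffic
```

The return path in steps 4 through 6 reverses these rewrites hop by hop, which is what forces the reply through both devices.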
Figure 19.
Data Traffic Path in the Fabric: Client to Load Balancer to Hardware-Accelerated ADC
4. The return traffic from the web server is sent to the hardware-accelerated ADC.
5. The ADC SSL-encrypts the traffic, performs a NAT operation, and forwards the traffic to the load balancer.
6. The load balancer receives the encrypted web traffic, performs a NAT operation, and then forwards the traffic back to the client.
Figure 20.
Return Data Traffic Path in the Fabric
Configuring Autoconfiguration Profiles
Refer to deployment scenario 1 of this document for details about how to create the autoconfiguration profiles.
Note that both the load balancer and the hardware-accelerated ADC must be members of the same partition to
successfully communicate within the fabric.
Deployment Considerations for vPC+ Dual-Attached Appliances
If the load-balancer appliance needs to be dual-homed, additional network autoconfiguration profile settings are required for deployment scenario 2. Deployment scenarios 1 and 3 require no additional changes.
In deployment scenario 2, the load balancer needs to establish and maintain OSPF dynamic routing adjacency with
both vPC+ peer switches: Leaf-1 and Leaf-2 (Figure 21).
Figure 21.
vPC+ Dual-Attached Load Balancer Establishes OSPF Routing Adjacency with Both vPC+ Peer Switches
The OSPF routing protocol requires a unique IP address to establish such routing adjacency. That is why the
network autoconfiguration profiles need to include additional detail.
Figure 22 shows the Secondary Gateway IPv4 Address field. The autoconfiguration process will use the IP address
specified in this field to configure the SVI IP address on a secondary vPC+ peer. The primary vPC+ peer is
configured with the value specified in the gatewayIpAddress field.
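The mapping from the two DCNM profile fields to the per-peer SVI addresses can be sketched as follows. The addresses are hypothetical examples, and the exact field keys are assumptions modeled on the gatewayIpAddress name in the text; the point is only that each vPC+ peer receives a distinct SVI IP for its OSPF adjacency.

```python
# Illustrative mapping of the two DCNM profile fields to per-peer SVI
# addresses on a vPC+ pair. The field keys are assumed names modeled on
# gatewayIpAddress; the addresses are hypothetical examples.

fields = {
    "gatewayIpAddress": "10.10.20.1",           # primary vPC+ peer SVI
    "secondaryGatewayIpAddress": "10.10.20.3",  # secondary vPC+ peer SVI
}

def svi_address(peer_role: str) -> str:
    """Each vPC+ peer needs a unique SVI IP for its OSPF adjacency."""
    key = ("gatewayIpAddress" if peer_role == "primary"
           else "secondaryGatewayIpAddress")
    return fields[key]

# The two peers must never share an address, or OSPF adjacency fails
assert svi_address("primary") != svi_address("secondary")
```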
Figure 22.
Configuring the SVI IP Address of the Secondary vPC+ Peer
No additional configuration is needed on the load balancer. Note that the load balancer establishes and maintains routing adjacencies with both vPC+ peers: Leaf-1 and Leaf-2.
Appendix: CLI Configurations for the Profiles
Figures 23 through 27 provide CLI configurations for the profiles used in this document.
Note: This appendix is provided for reference only.
Figure 23.
Network Profile defaultNetworkUniversalTfProfile
Figure 24.
Partition Profile vrf-common-universal-dynamic-LB-ES
Figure 25.
Network Profile serviceNetworkUniversalDynamicRoutingLBProfile
Figure 26.
Network Profile serviceNetworkUniversalTfStaticRoutingProfile
Figure 27.
Network Profile vrf-common-universal-static
Printed in USA
© 2016 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information.
C11-735416-01
01/16