
Contents
Introduction
Prerequisites
Requirements
Components Used
Configure
Network Diagram
Configurations
ISE
F5 LTM
Verify
Radius Cisco AV pair audit-session-id
LTM version changes
HTTP loadbalancing
SSL offloading
References
Introduction
This document describes how to configure iRules on F5 Local Traffic Manager (LTM) for the Cisco
Identity Services Engine (ISE) Radius and HTTP loadbalancing.
Prerequisites
Requirements
Cisco recommends that you have knowledge of these topics:
● Cisco ISE deployments, authentication, and authorization
● Basic knowledge of F5 LTM
● Knowledge of the Radius and HTTP protocols
Components Used
The information in this document is based on these software and hardware versions:
● Cisco Catalyst Switch
● F5 BIG-IP Version 11.6
● Cisco ISE software Versions 1.3 and later
Configure
Network Diagram
F5 LTM is configured as a loadbalancer for Radius. It is important to make sure that all authentication and accounting packets for the same session are redirected to the same ISE node. The IETF attribute Calling-Station-Id is used for the persistence profile.
No NAT is used. LTM routes the packets arriving at the Virtual Server to the correct Pool, which consists of two members.
It is also important to make sure that traffic coming back from the ISE nodes goes via LTM. Otherwise, the NAD sees the real IP address of the PSN in the response instead of the virtual one. In that scenario SNAT would have to be used on LTM, but SNAT breaks most of the ISE flows, so it is not supported.
Configurations
ISE
For simplicity, the ISE configuration is skipped in this article; refer to other materials. No special configuration is needed on ISE except the addition of the Network Device (172.16.33.1).
F5 LTM
LTM is already preconfigured with the correct vlan/interfaces.
Two ISE nodes are added as nodes:
A Pool is created for both nodes (monitoring is ICMP based; it could also be UDP/Radius based):
An iRule is created. For every Access-Request it walks the attribute value pairs, finds the IETF Calling-Station-Id (31) attribute, and creates a persistence entry based on its value:
The content of the iRule:
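The iRule content itself is not reproduced here. As an illustration of the logic it implements (the actual LTM configuration uses a TCL iRule operating on the UDP payload), the following Python sketch parses a RADIUS Access-Request and extracts the Calling-Station-Id (type 31) value used as the persistence key. The packet layout follows RFC 2865; the helper names and sample values are hypothetical.

```python
import struct
from typing import Optional

CALLING_STATION_ID = 31  # IETF attribute type (RFC 2865)

def extract_calling_station_id(payload: bytes) -> Optional[str]:
    """Walk the AVPs of a RADIUS packet and return Calling-Station-Id."""
    # Header: code (1), identifier (1), length (2), authenticator (16)
    length = struct.unpack("!H", payload[2:4])[0]
    offset = 20
    while offset + 2 <= length:
        avp_type, avp_len = payload[offset], payload[offset + 1]
        if avp_len < 2:  # malformed AVP; stop parsing
            break
        if avp_type == CALLING_STATION_ID:
            return payload[offset + 2:offset + avp_len].decode("ascii")
        offset += avp_len
    return None

def build_packet(attrs):
    """Assemble a minimal Access-Request (code 1) with the given AVPs."""
    body = b"".join(bytes([t, len(v) + 2]) + v for t, v in attrs)
    return struct.pack("!BBH", 1, 0, 20 + len(body)) + b"\x00" * 16 + body

pkt = build_packet([(1, b"cisco"), (31, b"21:11:11:11:11:21")])
print(extract_calling_station_id(pkt))  # -> 21:11:11:11:11:21
```

In the real iRule, the extracted value is passed to the persist command so that all packets carrying the same Calling-Station-Id land on the same pool member.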
A Persistence Hash based profile is created. That profile uses the iRule above:
A Standard virtual server is created, with the UDP profile selected:
That virtual server uses the configured Pool and Persistence profile:
Verify
To check whether the traffic is balanced correctly, a Radius simulator can be used.
Send 4 Access-Requests to the Virtual Server address (172.16.33.103) with a specific Calling-Station-Id attribute:
root@arrakis:# ./radiustest.py -d 172.16.33.103 -u cisco -p Krakow123 -s Krakow123 -ai
21:11:11:11:11:21 -n 4
Starting sniffing daemon
Choosen vmnet3 interface for sniffing
Sniffing traffic from udp and src 172.16.33.103 and port 1812
Sending Radius Access-Requests
Sending Radius Packet.......
Radius packet details: 172.16.33.1:27483 -> 172.16.33.103:1812
Radius Code: 1 (Access-Request)
Radius Id: 179
AVP[0] Type: 1 (User-Name) Value: cisco
AVP[1] Type: 2 (User-Password) Value: *
AVP[2] Type: 4 (NAS-IP-Address) Value: 172.16.33.1
AVP[3] Type: 31 (Calling-Station-Id) Value: 21:11:11:11:11:21
Sending Radius Packet.......
Radius packet details: 172.16.33.1:27483 -> 172.16.33.103:1812
Received Radius Packet......
Radius packet details: 172.16.33.103:1812 -> 172.16.33.1:27483
Radius Code: 2 (Access-Accept)
Radius Id: 236
AVP[0] Type: 18 Value: Hello, cisco. This is server1.
<some output omitted...3 more packets>
Waiting 5 seconds for the responses
Finishing sniffing thread
Results:
Radius-Request sent: 4
Radius-Accept received: 4
Radius-Reject received: 0
Other Radius messages received: 0
Finishing main thread
Both ISE servers are configured to return an additional AV pair depending on which server was reached: ISE-PSN1 returns the Hello message for server 1 and ISE-PSN2 the one for server 2 (on ISE, Network Access:ISE Host Name EQUALS conditions are used).
To confirm that a different server responds for a different Calling-Station-Id attribute value:
root@arrakis:# ./radiustest.py -d 172.16.33.103 -u cisco -p Krakow123 -s Krakow123 -ai
21:11:11:11:11:21 -n 4 | grep "Hello"
AVP[0] Type: 18 Value: Hello, cisco. This is server1.
AVP[0] Type: 18 Value: Hello, cisco. This is server1.
AVP[0] Type: 18 Value: Hello, cisco. This is server1.
AVP[0] Type: 18 Value: Hello, cisco. This is server1.
root@arrakis:# ./radiustest.py -d 172.16.33.103 -u cisco -p Krakow123 -s Krakow123 -ai
21:11:11:11:11:22 -n 4 | grep "Hello"
AVP[0] Type: 18 Value: Hello, cisco. This is server2
AVP[0] Type: 18 Value: Hello, cisco. This is server2
AVP[0] Type: 18 Value: Hello, cisco. This is server2
AVP[0] Type: 18 Value: Hello, cisco. This is server2
root@arrakis:# ./radiustest.py -d 172.16.33.103 -u cisco -p Krakow123 -s Krakow123 -ai
21:11:11:11:11:23 -n 4 | grep "Hello"
AVP[0] Type: 18 Value: Hello, cisco. This is server1.
AVP[0] Type: 18 Value: Hello, cisco. This is server1.
AVP[0] Type: 18 Value: Hello, cisco. This is server1.
AVP[0] Type: 18 Value: Hello, cisco. This is server1.
root@arrakis:# ./radiustest.py -d 172.16.33.103 -u cisco -p Krakow123 -s Krakow123 -ai
21:11:11:11:11:24 -n 4 | grep "Hello"
AVP[0] Type: 18 Value: Hello, cisco. This is server2
AVP[0] Type: 18 Value: Hello, cisco. This is server2
AVP[0] Type: 18 Value: Hello, cisco. This is server2
AVP[0] Type: 18 Value: Hello, cisco. This is server2
On LTM, the statistics for the Pool can be verified:
Persistence cache on LTM:
root@(f5)(cfg-sync Standalone)(Active)(/Common)(tmos)# show /ltm persistence persist-records
Sys::Persistent Connections
universal 32313a31313a31313a31313a31313a3234 172.16.33.103:any 172.16.34.101:any (tmm: 1)
Total records returned: 1
The value "32313a31313a31313a31313a31313a3234" is the hex-encoded ASCII representation of "21:11:11:11:11:24". All subsequent traffic matching this key is sent to ISE-PSN2 (172.16.34.101). The persistence entry has been created with a timeout of 30 seconds (as set by the persist command in the iRule).
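The hex-to-ASCII relationship can be checked directly, for example in Python:

```python
# Decode the universal persistence key shown by LTM back to the
# Calling-Station-Id string it was derived from.
key = "32313a31313a31313a31313a31313a3234"
print(bytes.fromhex(key).decode("ascii"))  # -> 21:11:11:11:11:24
```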
After the entry expires:
root@(f5)(cfg-sync Standalone)(Active)(/Common)(tmos)# show /ltm persistence persist-records
Sys::Persistent Connections
Total records returned: 0
Traffic for the same Calling-Station-Id can now be redirected to ISE-PSN1 (172.16.34.100), and a new entry is created:
root@(f5)(cfg-sync Standalone)(Active)(/Common)(tmos)# show /ltm persistence persist-records
Sys::Persistent Connections
universal 32313a31313a31313a31313a31313a3234 172.16.33.103:any 172.16.34.100:any (tmm: 1)
Total records returned: 1
Then again for the next 30 seconds all traffic matching this pattern will be redirected to ISE-PSN1.
Radius Cisco AV pair audit-session-id
The IETF Calling-Station-Id attribute is the one most commonly used for persistence, but other attributes can be used as well. An iRule example for the Cisco AV Pair with audit-session-id:
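The iRule content is not reproduced here. The logic it implements (walking the Vendor-Specific (26) attributes, matching vendor ID 9 for Cisco, and persisting on the cisco-avpair carrying audit-session-id) can be sketched in Python; the TCL iRule on LTM does the equivalent with binary scan on the UDP payload, and the sample packet built below is hypothetical.

```python
import struct

VENDOR_SPECIFIC = 26   # RFC 2865 Vendor-Specific attribute
CISCO_VENDOR_ID = 9

def extract_audit_session_id(payload: bytes):
    """Return the audit-session-id carried in a Cisco AV pair, if any."""
    length = struct.unpack("!H", payload[2:4])[0]
    offset = 20
    while offset + 2 <= length:
        avp_type, avp_len = payload[offset], payload[offset + 1]
        if avp_len < 2:
            break
        if avp_type == VENDOR_SPECIFIC:
            vendor_id = struct.unpack("!I", payload[offset + 2:offset + 6])[0]
            if vendor_id == CISCO_VENDOR_ID:
                # VSA sub-attribute: type (1 byte), length (1 byte), value
                text = payload[offset + 8:offset + avp_len].decode("ascii")
                if text.startswith("audit-session-id="):
                    return text.split("=", 1)[1]
        offset += avp_len
    return None

# Build a hypothetical Access-Request carrying the Cisco AV pair
avpair = b"audit-session-id=0A30276F00001225751086B9"
vsa = struct.pack("!I", CISCO_VENDOR_ID) + bytes([1, len(avpair) + 2]) + avpair
body = bytes([VENDOR_SPECIFIC, len(vsa) + 2]) + vsa
pkt = struct.pack("!BBH", 1, 0, 20 + len(body)) + b"\x00" * 16 + body
print(extract_audit_session_id(pkt))  # -> 0A30276F00001225751086B9
```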
To test, the Radius simulator is used again:
root@arrakis:# ./radiustest.py -d 172.16.33.103 -u cisco -p Krakow123 -s Krakow123 -avi 9 -avt 1
-avv "audit-session-id=0A30276F00001225751086B9" -avx string -ai 21:11:11:11:11:20 -n 4 | grep
Hello
AVP[0] Type: 18 Value: Hello, cisco. This is server1
AVP[0] Type: 18 Value: Hello, cisco. This is server1
AVP[0] Type: 18 Value: Hello, cisco. This is server1
AVP[0] Type: 18 Value: Hello, cisco. This is server1
root@arrakis:# ./radiustest.py -d 172.16.33.103 -u cisco -p Krakow123 -s Krakow123 -avi 9 -avt 1
-avv "audit-session-id=0A30276F00001225751086B9" -avx string -ai 21:11:11:11:11:20 -n 4 | grep
Hello
AVP[0] Type: 18 Value: Hello, cisco. This is server2
AVP[0] Type: 18 Value: Hello, cisco. This is server2
AVP[0] Type: 18 Value: Hello, cisco. This is server2
AVP[0] Type: 18 Value: Hello, cisco. This is server2
As a result two persistence entries are created:
root@(f5)(cfg-sync Standalone)(Active)(/Common)(tmos)# show /ltm persistence persist-records
Sys::Persistent Connections
universal
00000009012b61756469742d73657373696f6e2d69643d304133303237364630303030313232353735313038364238
172.16.33.103:any 172.16.34.100:any (tmm: 1)
universal
00000009012b61756469742d73657373696f6e2d69643d304133303237364630303030313232353735313038364239
172.16.33.103:any 172.16.34.101:any (tmm: 1)
Total records returned: 2
All subsequent traffic with the same pattern is redirected to the same ISE server (as long as those entries exist).
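The universal keys above can be decoded to confirm what the iRule persisted on: 4 bytes of vendor ID (9 for Cisco), a one-byte VSA sub-type and length, and the cisco-avpair string itself. A quick check in Python, using the first key shown above:

```python
# First persistence record key from the LTM output above
key = ("00000009012b61756469742d73657373696f6e2d69643d"
       "304133303237364630303030313232353735313038364238")
raw = bytes.fromhex(key)
vendor_id = int.from_bytes(raw[:4], "big")   # RADIUS vendor ID
vsa_type, vsa_len = raw[4], raw[5]           # VSA sub-type and length
value = raw[6:].decode("ascii")              # the cisco-avpair string
print(vendor_id, vsa_type, vsa_len)  # -> 9 1 43
print(value)  # -> audit-session-id=0A30276F00001225751086B8
```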
LTM version changes
Be aware that using CLIENT_DATA to add a persistence entry is no longer advised. Starting from version 9.4, CLIENT_ACCEPTED should be used:
Events For UDP Virtual Servers
Also, simpler access to Radius attributes is available (for example, through the RADIUS::avp iRule command).
The documentation states that it is necessary to use a Radius profile, but on version 11.6 this works even if the Radius profile for the Virtual Server (VS) is set to none.
HTTP loadbalancing
Typically there is no need to load balance HTTP traffic for CWA and other guest flows: guests are redirected directly to a specific node without the need for a loadbalancer.
Sponsor and Device portals, on the other hand, are accessed manually and can easily be balanced. Read more in the excellent document referenced here:
HowTo: Cisco and F5 Deployment Guide-ISE Load Balancing Using BIG-IP
There are, however, scenarios in which CWA and other guest flows might require HTTP loadbalancing; third-party integration is one such scenario. The reason is that every NAD would be configured with a static redirection pointing to the same VS (FQDN). Loadbalancing the guest flow is the only way to provide high availability in such scenarios. The challenge with that kind of redirection is to match the Radius Authentication and Accounting with the HTTP session and make sure they all land on the same PSN.
To achieve that, two VSs are created: one for HTTP and one for Radius (both Authentication and Accounting). Instead of classic persistence, the session table concept is used.
Be aware that it is HTTPS traffic that needs to be balanced (SSL offloading) and HTTPS traffic that needs to reach the PSNs (nodes). The SSL offloading configuration is presented in the next section.
● Once Radius Authentication (Access-Request) starts, the iRule for the Radius VS performs a session lookup to check whether there is an entry for that Calling-Station-Id attribute. If there is not, a new entry is created; if there is, the existing entry is used.
● For all subsequent Radius packets the existing entry is reused, so they are redirected to the same node.
● After the endpoint is assigned an IP address, a Radius Interim Accounting packet with Framed-IP-Address is sent. This time an additional session entry (based on Framed-IP-Address) is created, redirecting to the same node already selected for the Radius traffic.
● Once HTTP traffic is received, the iRule for the HTTP VS performs a session lookup based on the source IP address. The previous session entry (Framed-IP-Address) is matched and the same node is selected.
iRule code for Radius VS:
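The Radius VS iRule content is not reproduced here. Its session logic (lookup by Calling-Station-Id, and, on an accounting packet carrying Framed-IP-Address, insertion of a second entry pointing at the same node) can be modeled in Python. The actual iRule would use TCL table commands; this dictionary-based model and its function names are illustrative only, while the node addresses are the ones used in this article.

```python
import random

NODES = ["172.16.34.100", "172.16.34.101"]
session_table = {}  # key (MAC or client IP) -> selected PSN

def handle_radius(calling_station_id, framed_ip=None):
    """Model of the Radius VS iRule: pick or reuse a node per session."""
    node = session_table.get(calling_station_id)
    if node is None:
        node = random.choice(NODES)  # the loadbalancing decision
        session_table[calling_station_id] = node
    if framed_ip is not None:
        # Interim accounting delivered Framed-IP-Address: add a second
        # entry so HTTP traffic from that IP lands on the same node
        session_table[framed_ip] = node
    return node

first = handle_radius("11:11:11:11:11:17")
again = handle_radius("11:11:11:11:11:17", framed_ip="172.16.33.1")
print(first == again == session_table["172.16.33.1"])  # -> True
```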
iRule code for HTTP VS:
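Similarly, the HTTP VS iRule content is not shown here; its lookup logic can be modeled as follows (the table entries and the fallback node are illustrative assumptions):

```python
session_table = {
    # entries created earlier by the Radius VS iRule (example values)
    "11:11:11:11:11:17": "172.16.34.101",
    "172.16.33.1": "172.16.34.101",
}

def handle_http(client_ip, fallback="172.16.34.100"):
    """Model of the HTTP VS iRule: reuse the node selected for Radius."""
    # Lookup by source IP; fall back to normal loadbalancing if absent
    return session_table.get(client_ip, fallback)

print(handle_http("172.16.33.1"))   # -> 172.16.34.101
print(handle_http("172.16.33.99"))  # -> 172.16.34.100
```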
To verify the flow, send a test Access-Request with Calling-Station-ID 11:11:11:11:11:17 to the Radius VS:
As a result, /var/log/ltm on LTM logs that there is no entry for that Calling-Station-ID; once the loadbalancing decision is made, a new entry redirecting to node 172.16.34.101 is created.
All subsequent Radius packets are redirected to that node. At that stage the Framed-IP-Address is delivered (after the client gets an address via DHCP):
LTM logs that, based on the Calling-Station-ID, the correct node is selected (172.16.34.101), and also that a new session table entry is created based on the Framed-IP-Address (172.16.33.1). The same node (172.16.34.101) is selected for that entry.
At this stage, any HTTP GET request from IP address 172.16.33.1:
Triggers the following result on LTM (traffic is redirected to a matching node: 172.16.34.101):
This way Radius and HTTP traffic for the same endpoint is always redirected to the same node.
For existing entries, every request refreshes the session entry.
The default session entry timeout is 60 seconds.
If a new Calling-Station-Id appears at this stage, everything starts from scratch. New entries can also override old ones if needed. Example logs from LTM:
Notice that the last HTTP traffic still goes to node 172.16.34.100, because no Framed-IP-Address was delivered for Calling-Station-ID values 11:11:11:11:11:19 or 11:11:11:11:11:20.
SSL offloading
This is the typical configuration required for HTTPS traffic loadbalancing. The following screenshots are presented only as a reference; for more details refer to the official F5 guides.
The CA certificate used to sign the PSN (and VS) certificates is imported into the trusted store:
A certificate for the VS terminating the HTTPS session is generated; both the key and the certificate are uploaded. The service name is vip.example.com (it will be statically configured as the redirection target on all NADs):
An SSL Client profile is created, referencing the VS certificate and key. This profile is responsible for SSL termination from the client.
An SSL Server profile also needs to be created, with default settings (the decrypted SSL traffic is encrypted again when reaching the PSNs). If needed, the Server certificate ignore option can be set to true:
A VS for HTTP is created (for all TCP ports and with the HTTP profile enabled):
Both SSL profiles are referenced in the Advanced Configuration:
The redirection and SSL offloading can be tested from the client:
LTM generates the expected "match lookup" log, and a packet capture on the selected PSN node confirms the correct redirection.
The CWA flow balanced on F5 works correctly.
References
● Cisco Identity Services Engine Administrator Guide, Release 1.4
● Cisco Identity Services Engine Administrator Guide, Release 1.3
● HowTo: Cisco and F5 Deployment Guide-ISE Load Balancing Using BIG-IP
● F5 LTM Introduction to iRules
● F5 LTM Radius iRules
● Technical Support & Documentation - Cisco Systems