
Cisco IOS Profiled Releases 12.3(9a) and
12.1(22)E2, and CatOS Releases 8.2(1) and 8.3(1)
Testing for Global Enterprise Customers
Version History

Version Number   Date             Notes
1                August 6, 2004   This document was created.
Executive Summary
This document describes the nEverest enterprise global testing topology, the test plans for basic IP
routing, LAN switching, multicast, quality of service (QoS), Voice over IP (VoIP), IP Security (IPSec),
and network management system (NMS) testing, and a summary of the test results including relevant
defect numbers. The execution of this test plan verified the functionality, system, reliability and
performance requirements as defined by the nEverest aggregated requirements document in an enterprise
global topology environment.
The following test types were conducted for each of the test configurations described in this document
and the nEverest: Enterprise Global Test Plan – Phase 3:

• Functionality testing—Verified basic network operation and its configurability and stability in a
  controlled environment.

• System testing—Simulated a realistic customer environment by configuring defined features under
  test and executing all Cisco IOS software functionality simultaneously with planned and various
  levels of traffic generation. Negative test cases were also executed during the system test.

• Reliability testing—Tested the environment that was set up and executed in the system test for
  150 hours, to ensure that no new critical or severe defects were found.
This document contains the following main sections:

• About Enterprise Global Testing, page 2
• Test Results Summary, page 11
• Functionality Testing, page 12
• System Testing, page 13
• Reliability Testing, page 13
• Test Cases, page 15
• Supplementary Information, page 35
The nEverest program is a quality initiative that focuses on satisfying customer requirements by
coordinating Cisco IOS release system-level and reliability testing under realistic conditions. The
nEverest testing uses the following five profiles, the designs of which are based solely on customer
requirements:
• Enterprise global
• Enterprise financial
• Service provider/IP backbone
• Service provider/MPLS backbone
• Broadband
Cisco IOS Profiled Releases 12.3(9a) and 12.1(22)E2, and CatOS Releases 8.2(1) and 8.3(1) Testing for
Global Enterprise Customers provides a summary of the system-level and reliability testing completed
on the profile for the specified releases. Following are the target releases that were tested in nEverest
phase 3 for global enterprise:

• Cisco IOS Release 12.3(9a)
• Cisco IOS Release 12.1(22)E2
• CatOS Release 8.2(1) for Catalyst 4000 and 4500
• CatOS Release 8.3(1) for Catalyst 6500

Note   The software versions listed in this document were tested using the procedures described in this
document. All relevant unresolved defects found during testing are summarized in Table 2, Table 3, and
Table 4. We highly recommend that, in addition to the information contained in this report, you review
the Release Notes for each release to see the latest list of open defects for specific features not tested or
defects found after publication.
About Enterprise Global Testing
This section provides an overview of the nEverest enterprise global testing environment, and includes
the following information:
• Enterprise Global Test Program Goals, page 2
• Enterprise Global Topology, page 3
• Enterprise Global Tests and Topologies Matrix, page 8
Enterprise Global Test Program Goals
The primary goal of the nEverest enterprise global test program is to harden a specific release of the
Cisco IOS software for large enterprise global campuses. The testing consists of defining and executing
a series of functionality, system, and reliability test cases in large enterprise global campuses. The
nEverest global enterprise phase 3 system test focused on IP routing with the Enhanced Interior Gateway
Routing Protocol (EIGRP), Border Gateway Protocol (BGP), Open Shortest Path First (OSPF) routing
protocols, and various Layer 2 LAN switching features such as UplinkFast and PortFast, QoS, VoIP,
multicast, security, and Simple Network Management Protocol (SNMP). A Cisco IOS software release
is considered hardened once it has been subjected to the testing defined in the nEverest aggregated
requirements document and meets exit criteria defined for this project.
Enterprise Global Topology
Figure 1 shows the high-level topology for the phase 3 enterprise global network.
Figure 1   Representative Enterprise Global Network Topology
The phase 3 enterprise global testing topology consisted of five multilayer-design campuses (two large
campuses with data centers and three regional campuses), plus 12 remote campuses, as follows:

• San Jose Campus
• Boston Campus
• Washington, D.C. Campus
• Denver Campus
• Dallas Campus

The remote campuses were Arlington, Austin, Boca Raton, Colorado Springs, Houston, Los Angeles,
Miami, New Orleans, New York, Phoenix, Pittsburgh, and Santa Fe.
Refer to Figure 1 for the campus descriptions in the following sections.
San Jose Campus
The San Jose campus test bed represented a large enterprise headquarter campus with a data center.
Hardware Description

• Two Cisco 7609 Optical Services Router (OSR) WAN aggregation routers connected to the other
  enterprise global sites and to the Internet service provider (ISP). One Cisco 7206 VXR WAN
  aggregation router connected to remote sites.

• Multiple types of high-speed WAN links were used: POS OC-3, ATM-OC-3, T1, T3, and ATM-T3.

• The core of the campus consisted of four Cisco Catalyst 6509 switches with dual Multilayer Switch
  Feature Card 2 (MSFC2) and Policy Feature Card 2 (PFC2).

• High-speed core Layer 3 Gigabit Ethernet (GE) links were used to connect two user buildings and
  one data center building.

• Within one building, two Cisco Catalyst 6506 switches were used as distribution layer switches, and
  multiple switches such as the Catalyst 4006, Catalyst 5505, and Catalyst 6506 models were used as
  access layer switches. In another building, two Cisco Catalyst 4500 switches, one of which was
  using the SUP2+, were used as distribution layer switches; multiple Cisco Catalyst switches such as
  the Catalyst 4006 and Catalyst 6506 models were used as access layer switches.

• A separate Cisco 3640 router was used as an H.323 gatekeeper.
Routing Description

EGP

• ISP connections on egsj-7609-w2
• External BGP (eBGP) connections to egbos-7206-w1, egwas-7609-w1, egdal-7206-w1, and
  egden-7206-w2
• Internal BGP (iBGP) connection to egsj-7609-w1 and egsj-7609-w2

IGP

• EIGRP autonomous system 1
• EIGRP within the entire campus, and extending to Los Angeles via SJW3
• EIGRP authentication on the WAN link to the Los Angeles campus and within the distribution layer
  of SJB1
Boston Campus
The Boston campus test bed represented a small enterprise campus located in a North American region.
Hardware Description

• The WAN routers connecting to the other enterprise global sites and to the Internet consisted of three
  Cisco 7206 VXR NPE400 WAN routers running serial peer-to-peer and ATM links. The campus also
  consisted of a GE and a Fast Ethernet (FE) LAN.

• There were two Cisco Catalyst 6506 switches, each with an MSFC2 or PFC2 in the core, and which
  also provided the distribution layer functionality.
• One Cisco Catalyst 6506 switch, one Cisco Catalyst 4006 switch, and one Catalyst 4006 with a
  Cisco 4604 gateway module for VoIP made up the access layer.

• A Cisco 2651 router was used as a VoIP voice gateway, which registered into a gatekeeper located
  at San Jose headquarters, and placed real VoIP calls to other gateways located at different campuses.
Routing Description

EGP

• ISP connections on egwas-7609-w1
• eBGP connections on egsj-7609-w1, egsj-7609-w2, egbos-7206-w2, egden-7206-w1, and
  egdal-7206-w2
• iBGP connections on egwas-7609-w1 and egwas-7609-w2

IGP

• EIGRP autonomous system 3
• EIGRP within the entire campus, and extending to the Miami, Pittsburgh, and New York campuses
  via egwas-7505-w3
• EIGRP authentication on the WAN link to Miami, Pittsburgh, and New York
Washington, D.C. Campus
The Washington, D.C. campus test bed represented a medium-sized enterprise campus located in the
eastern United States region. The test bed simulated one of the large enterprise headquarter campuses
with a data center.
Hardware Description

• The WAN aggregation routers connecting to the other enterprise global campuses and to the Internet
  consisted of two Cisco 7609 OSR routers and one Cisco 7505 router running ATM-OC-3, ATM-T3,
  POS-OC-3, peer-to-peer T1, T3, and E3, and ATM/Frame Relay.

• There were two Cisco Catalyst 6506 switches, each with an MSFC2 or PFC2 in the core, and that
  also provided distribution layer functionality.

• One Cisco Catalyst 6506 switch made up the access layer.

• A Cisco 3640 router was used as a VoIP voice gateway and registered into the Cisco CallManager
  located at the San Jose headquarters. Real VoIP calls were placed to other gateways located at
  different campuses.

Routing Description

EGP

• ISP connections on egwas-7609-w1
• eBGP connections on egsj-7609-w1, egsj-7609-w2, egbos-7206-w2, egden-7206-w1, and
  egdal-7206-w2
• iBGP connection on egwas-7609-w1 and egwas-7609-w2
IGP

• EIGRP autonomous system 3
• EIGRP within the entire campus, and extending to Miami, Pittsburgh, and New York via
  egwas-7505-w3
• EIGRP authentication on the WAN link to Miami, Pittsburgh, and New York
Denver Campus
The Denver test bed represented a medium-sized enterprise campus located in a North American region.

Hardware Description

• The WAN aggregation routers connecting to the other enterprise global campuses and to the Internet
  consisted of two Cisco 7206 NPE-G1 routers, one Cisco 7206 NPE400 router, and a Cisco 7507
  RSP8 router running ATM-T1/T3, peer-to-peer T1, High-Speed Serial Interface (HSSI),
  ATM/Frame Relay, or Frame Relay-Frame Relay links, with ISDN on a Cisco 2600 router as a
  backup.

• The campus also contained four Cisco Catalyst 6506 switches with GE and FE backbone and core
  and distribution layers using MSFC2 or PFC2.

• One Cisco Catalyst 6506 switch, one Cisco Catalyst 4506 switch with a Cisco 4604 Gateway Module
  for VoIP, and one Cisco Catalyst 4003 switch made up the access layer.

• A Cisco 3640 router was used as a VoIP voice gateway, registered into the gatekeeper located at
  the San Jose headquarters, and placed real VoIP calls to other gateways located at different
  campuses.

Routing Description

EGP

• ISP connection on egden-7206-w2
• eBGP connections on egsj-7609-w1, egden-7206-w2, egwas-7609-w1, and egden-7206-w1
• iBGP connections on egden-7206-w1 and egden-7206-w2

IGP

• EIGRP autonomous system 4
• EIGRP within the entire campus, extending to Phoenix and New Orleans via egden-7206-w3;
  Colorado Springs and Los Angeles via egden-7507-w4; and New Orleans via the Denver ISDN link
  as backup
Dallas Campus
The Dallas campus test bed represented a medium-sized enterprise campus located in a European region.
Global application servers were located at this campus, serving the smaller and remote campuses.
Applications such as voice, FTP, and HTTP were simulated by Chariot and other traffic-generating test
tools. The test bed simulated a large number of end users through the use of traffic generators and real
Linux stations.
Remote campuses covered under the Dallas campus are: Austin, Arlington, Boca Raton, and Houston.
The Dallas enterprise campus networks were deployed under one OSPF process with multiple OSPF
areas, and included virtual links and redistribution to BGP.
Hardware Description

• The WAN routers connecting to the other enterprise global sites and to the Internet consisted of three
  Cisco 7206 VXR NPE400 routers and a Cisco 7507 RSP8 router running ATM and PPP on the
  WANs.

• The campus also consisted of a GE and FE LAN connected to two Cisco Catalyst 6506 switches,
  each with an MSFC2 or PFC2 card. One Catalyst was configured in hybrid mode and the other in
  native mode with the IPSec blade.

• A Cisco 3640 router was used as a VoIP voice gateway, registered into the gatekeeper located at the
  San Jose headquarters, and placed real VoIP calls to other gateways located at different campuses.

Routing Description

EGP

• ISP connections on egdal-7206-w1
• eBGP connections on egsj-7609-w2 and egwas-7609-w2
• iBGP connections on egdal-7206-w1 and egdal-7206-w2

IGP

• OSPF within the entire campus, and extending to Miami, Santa Fe, and Colorado Springs via
  egdal-7505-w4
• OSPF authentication on the WAN link to Miami, Santa Fe, and Colorado Springs
Remote Campuses
The remote campuses consisted of only a single-branch layer.
Hardware Description

• Most of the remote network devices consisted of Cisco Catalyst 2950, Cisco Catalyst 3550, and
  Cisco Catalyst 6506 switches, and Cisco 2651, Cisco 3745, Cisco 3600, Cisco 1760, and Cisco 7206
  routers.

• The New York remote campus utilized a Cisco 3745 router.

• The Miami and Phoenix remote campuses had GE links to the WAN router.

• The remaining remote campuses (Los Angeles, New York, Santa Fe, Colorado Springs, Houston,
  Pittsburgh, New Orleans, Austin, Boca Raton, and Arlington) consisted of an FE link to the WAN
  router.

• Two Cisco Catalyst 6506 switches located in Miami, each with an MSFC2 or PFC2 card, performed
  both Layer 2 and Layer 3 functions.

• Linux end stations and PCs were used to simulate user traffic with Chariot and other test tools.

• All WAN routers with the exception of the Cisco 7206 router in Miami were used as VoIP voice
  gateways registering into a gatekeeper located at the San Jose headquarters, to place real VoIP calls
  to other gateways located at other campuses.
Routing Description
EGP
Not used.
IGP
A mix of OSPF and EIGRP was used, depending on the WAN link connecting the remote campus to the
major campus, as described previously in the “Hardware Description” section. OSPF and EIGRP
authentication were used on the WAN link to the major campuses.
Note   The remote branch campuses did not redistribute between multiple routing protocols, with the
exception of Miami, which extended Dallas’ OSPF to be a not-so-stubby area (NSSA). This extension
allowed EIGRP-learned routes sourced by the Washington, D.C. campus to be redistributed and tagged
into OSPF at Miami, but not propagated to the rest of the OSPF domain, because the Area Border Router
(ABR) blocked these routes from being sent into area 0.
Enterprise Global Tests and Topologies Matrix
Table 1 marks with an X the features tested at each campus in the phase 3 enterprise global network.
Table 1    Enterprise Global Tests and Topologies Matrix for nEverest Phase 3

Column key: SJ = San Jose Campus, BOS = Boston Campus, WAS = Washington, D.C. Campus,
DEN = Denver Campus, DAL = Dallas Campus, REM = Remote Campuses. An X indicates the feature
was tested at that campus; a dash indicates it was not.

Feature and Tests                               SJ   BOS  WAS  DEN  DAL  REM

Basic IP Routing
BGP                                             X    X    X    X    X    —
BGP autonomous system prepend                   X    X    X    X    X    —
BGP named community list                        X    X    X    X    X    —
BGP prefix-based route filtering                X    X    X    X    X    —
BGP route authentication                        X    X    X    X    X    —
BGP route dampening                             X    X    X    X    X    —
BGP Soft Config feature                         X    X    X    X    —    —
Default route generation                        X    X    X    X    X    —
EIGRP                                           X    X    X    X    —    X
EIGRP metric tuning                             X    X    X    X    —    X
EIGRP route authentication                      X    X    X    X    —    X
EIGRP route filtration                          X    X    X    X    —    X
EIGRP route redistribution                      X    X    X    X    —    X
EIGRP stub router                               X    X    X    X    —    X
OSPF                                            —    —    X    —    X    X
OSPF ABR Type 3 LSA filtering                   —    —    —    —    X    X
OSPF inbound filtering                          —    —    —    —    X    X
OSPF LSA throttling                             —    —    —    —    X    X
OSPF NSSA                                       —    —    —    —    X    X
OSPF packet pacing                              —    —    —    —    X    X
Route summarization                             X    X    X    X    X    X

Link Efficiency
cRTP/dcRTP                                      —    —    —    X    X    X
MLP LFI                                         —    —    —    X    —    X

Link Type and Encapsulation
802.1q                                          X    X    X    X    X    X
ISL VLAN                                        —    —    —    —    —    X

Link Management
CDP                                             X    X    X    X    X    X
PortFast                                        X    X    X    X    X    X
STP                                             X    X    X    X    X    X
STP UplinkFast                                  X    —    X    X    X    X
UDLD                                            X    —    X    X    X    X
VTP domain                                      X    —    X    X    X    X

Multicast
Accept-register filter                          X    —    —    —    —    —
Anycast RP with Auto-RP                         X    X    X    X    X    X
CGMP                                            X    X    —    X    —    —
IGMP snooping                                   X    X    X    X    X    X
IGMP V2                                         X    X    X    X    X    X
MDS Multilayer Director                         —    X    X    —    —    —
MMLS                                            X    X    X    X    X    —
Multicast boundary                              X    —    X    X    X    —
Multicast rate limit                            X    —    —    —    —    X
PIM sparse mode                                 —    —    —    X    —    —
PIM sparse-dense mode                           X    X    X    —    X    X

Network Management
MIB Walk—SNMP Traps                             X    X    X    X    X    X
Network Time Protocol (NTP)                     X    X    X    X    X    X
SNMPv3                                          X    X    X    X    —    X
Syslog                                          X    —    X    X    X    X

QoS
QoS classification and marking                  X    X    X    X    X    X
QoS congestion avoidance                        X    X    X    X    X    X
QoS congestion management                       X    X    X    X    X    X
QoS traffic conditioning                        X    —    X    X    X    X

Resiliency
HSRP                                            X    X    X    X    X    X
Monitor convergence times                       X    X    X    X    X    X
RPR+                                            X    —    —    —    —    —

Security
IPSec AIM, ISA, and VAM hardware encryption     X    X    —    X    X    X
IPSec/Certificate Authority (CA)                —    —    —    —    X    —
IPSec/Cisco Catalyst 6000 series switch         —    —    X    —    X    —
IPSec/QoS for encrypted traffic                 —    —    —    —    X    —
IPSec/QoS preclassify                           X    X    —    —    X    X
IPSec/transport mode                            —    —    —    X    —    —
IPSec/transport mode/GRE                        —    X    —    —    —    X
IPSec/tunnel mode                               —    —    —    —    X    X
IPSec/tunnel mode/GRE                           —    —    —    —    X    X
PIX Firewall                                    X    —    —    —    —    —
TACACS/AAA                                      X    X    X    X    X    X

Voice
H.323 VoIP                                      X    X    X    X    X    X
SCCP calls                                      X    X    X    X    X    X

Destructive Negative Tests
BGP clear neighbors                             X    X    X    —    X    —
CEF disable/enable                              —    X    X    —    —    —
EIGRP clear neighbors                           X    X    X    —    —    X
EIGRP remove                                    X    X    X    —    —    X
Fail and reestablish WAN links                  —    X    X    —    —    X
OSPF clear neighbors (clear process)            —    —    —    —    X    X
OSPF remove
Router reboot (power-cycle)
Router reboot (soft reboot)                     X    X    X    —    X    X
RPF failover                                    X    —    —    —    —    —
Toggle BPDU to PortFast interface               X    —    —    —    —    X
UDLD—unplug GE connection                       X    —    X    —    X    —
UpLink Fast fail root port                      X    —    X    X    —    X

Nondestructive Negative Tests
BGP add and remove summary initiating route     X    X    X    —    X    —
BGP route flap                                  X    X    X    —    X    —
EIGRP add and remove summary initiating route
EIGRP route flap                                X    X    X    —    —    X
OSPF add and remove summary initiating route    —    —    —    —    X    —
OSPF external route flap                        —    —    —    —    X    X
OSPF internal route flap                        —    —    —    —    X    X
PIX failover—attach Failover cable but disable fail  X    —    —    —    —    —
QoS add and remove class map                    X    X    X    X    X    X
QoS add and remove policy map                   X    X    X    X    X    X
QoS add and remove service policy               X    X    X    X    X    X
VTP add and remove VLAN from switch port        X    —    X    —    —    —
Test Results Summary
The following sections summarize the phase 3 test environment and test results:
• Functionality Testing, page 12
• System Testing, page 13
• Reliability Testing, page 13
Functionality Testing
The following functionality test cases for the Cisco IOS nEverest enterprise global project were
performed:
• Basic IP Routing, page 15
• Link Efficiency, page 18
• Link Type and Encapsulation, page 19
• Link Management, page 20
• Multicast, page 22
• Network Management, page 24
• QoS, page 25
• Resiliency, page 26
• Security, page 27
• Voice, page 30
• Destructive Negative Tests, page 33
• Nondestructive Negative Tests, page 34
Results of Functionality Testing
Table 2 shows the functionality test results.
Table 2    Functionality Test Results

Component              Pass/Fail
Functionality testing  Pass with exceptions
Functionality Testing Pass with Exceptions Explanations
A test case was marked “pass with exception” when the defects found were either inconsistent,
associated with a test tool, not seen in normal operations, or did not present a significant impact to a
customer’s daily network operations. The following exceptions were noted in the nEverest enterprise
global phase 3 functionality tests:

• System reloaded when a crypto map was configured and unconfigured. (CSCee17103)—Identified
  on device egla-3640-vw. This defect is a duplicate of CSCec73134, and CSCec73134 has been fixed
  and verified by the test team.

• Online insertion and removal of port adapters on device egden-7206-w3 generated tracebacks.
  (CSCed41001)—This defect was resolved in Cisco IOS Releases 12.3(8)T and 12.3(8).

• Parser error on policy map. (CSCee24708)—This defect was a cosmetic parser output problem.

• Unable to clear a known neighbor using the clear eigrp neighbor command via a tunnel interface.
  (CSCee44110)—This defect occurred on device egpit-3640-vw. The clear eigrp neighbor
  command was not issued in normal operations, but rather during maintenance or troubleshooting
  periods.
• The show eigrp neighbor command displayed nonexistent processes. (CSCee44507)—This was a
  minor defect where a nonexistent process appeared in the output of the show eigrp neighbor
  command. This defect was caused by leftover interface-level summary address configurations for a
  nonexistent process.

• The configure memory function temporarily destabilized interfaces. (CSCee45393)—This defect
  occurred on devices egla-3640-vw and egwas-7505-w3. When the configure memory or copy
  startup-config running-config command was issued, some interfaces changed state from up to
  down, then back to up. These state changes triggered the Advanced Distance Vector and link-state
  routing protocols to drop the known routes via the interface and recalculate the routing table. These
  commands are not issued during normal operation, but rather only for troubleshooting or during
  maintenance. CSCee45393 is a duplicate of CSCea79625, which has been closed.

• Renaming a policy map did not update the running configuration of a related ATM virtual circuit
  (VC). (CSCee44778)—This defect occurred on device egsj-7206-w3. When the ATM VC policy
  map was overwritten with a new policy map, the expected new configuration did not occur, and in
  some instances generated a bus error traceback. The workaround is to unconfigure the policy map
  and then apply a new policy map. CSCee44778 is a duplicate of CSCee12235, which has been closed.

• The show policy interface command output had negative and invalid random detect data.
  (CSCee44858)—This defect occurred on device egsj-7609-w2. The show policy interface
  command indicated negative data counters initially, but subsequent show policy interface command
  output displayed proper, positive counters. The defect report has been closed.
System Testing
The goal of the system test was to simulate a realistic customer environment by having the test team
configure all functional features under test and execute all Cisco IOS functionality simultaneously with
planned, varying levels of traffic generation.
The negative test cases were also executed during the system test.
Results of System Testing
Table 3 shows the system test results.
Table 3    System Test Results

Component       Pass/Fail
System testing  Pass
Reliability Testing
The reliability test was an extension of the system test. The environment that was set up and executed
successfully in the system test was executed for 150 hours to ensure that no new critical or severe defects
were found during an extended test period.
Results of Reliability Testing
Table 4 shows the reliability test results.
Table 4    Reliability Test Results

Component            Pass/Fail
Reliability testing  Pass with exceptions
Reliability Testing Pass with Exceptions Explanations
• Spurious accesses at peek queue head when testing tunnel protection. (CSCee49875)—Spurious
  memory accesses may occur on a Cisco 1700 series router after IPSec tunnel protection is disabled.
  This symptom is observed on a Cisco 1700 series router that runs Cisco IOS Release 12.3(9a)
  software. This defect is fixed in Cisco IOS Release 12.3(10) software.

• ALIGN-3-SPURIOUS traceback message. (CSCef16066)—The following traceback message is
  observed when a QoS service policy is added or removed from the encrypted link:

  Jul 14 18:05:25: %ALIGN-3-SPURIOUS: Spurious memory access made at 0x60763E68
  reading 0x1

  There is no workaround.

• QoS does not classify IPSec packets with crypto tunnel protection. (CSCee73845)—QoS does not
  classify IPSec packets in a Generic Routing Encapsulation (GRE) IPSec tunnel protection
  configuration, although the type of service (ToS) byte is copied to the IPSec header. This symptom
  is observed only when QoS preclassification is not configured and when the ToS byte is used to
  classify traffic. The same QoS configuration works in a crypto map configuration or in a GRE tunnel
  configuration without IPSec. The workaround is to configure QoS preclassification.

• QoS: ACL with port match does not work correctly with tunnel protection. (CSCee66817)—The
  access list used to match the TCP port number does not work correctly with crypto tunnel
  protection. The same QoS configuration works with the legacy crypto map configuration. Much
  traffic was classified as default class with tunnel protection. There is no workaround.

• Memory leak on IP EIGRP with QoS, IPSec, multicast, and VoIP enabled. (CSCee61589)—Router
  den-7206-w3 leaked about 90 KB of input/output memory over a 24-hour period. The router was
  configured with QoS, multicast, VoIP, EIGRP, and IPSec, and had various types of traffic traversing
  it at the time this data was collected. The traffic types were generated using Chariot, Pagent,
  CallGen, and IXIA tools. Conditions: Normal operation with multiple features and traffic enabled.
  There is no workaround.

• After 24 to 36 hours of running traffic on the ATM subinterface, tracebacks were seen.
  (CSCee56098)—The interface stopped sending any traffic, although the interface was still up.
  Sending a ping fails on the interface, and the EIGRP neighbor is also lost. Condition: This condition
  occurs when traffic is left running for an extended period (24 to 36 hours). Reason: CRCs and line
  errors are not correctly processed. The workaround is to reset the router. The interface will forward
  traffic after the router comes up; however, the condition will recur after 24 to 36 hours.

• Modular QoS CLI (MQC) on ISDN used incorrect interfaces. (CSCee27893)—Even though
  Class-Based Weighted Fair Queueing (CBWFQ) is applied only on BRI B channels, the output of
  the show policy-map interface command shows that it is applied on both the B and D channels. This
  symptom is observed on a Cisco platform that runs Cisco IOS Release 12.3(6). There is no
  workaround.
• Device egla-3640-vw was unable to register with a voice gateway because of excessive CPU
  utilization due to QoS, IPSec, and general packet throughput. This configuration and the associated
  traffic approached the limits of the Cisco 3640 router’s capabilities.
Test Cases
The following sections describe the functionality tests performed for phase 3.
Basic IP Routing
The purpose of this test was to verify the deployment of the following: Layer 2 features such as
VLAN, VLAN trunking, VLAN Trunking Protocol (VTP), Spanning Tree Protocol (STP), UplinkFast,
PortFast, Bridge Protocol Data Unit (BPDU) guard, Fast EtherChannel (FEC), Gigabit EtherChannel
(GEC), and Unidirectional Link Detect (UDLD); and Layer 3 features such as Hot Standby Router
Protocol (HSRP), EIGRP, OSPF, eBGP, and iBGP.
Unless otherwise noted, the entire enterprise global network was used to perform the Basic IP
functionality test. The following notes apply:
• The Enhanced Interior Gateway Routing Protocol (EIGRP) and Open Shortest Path First (OSPF)
  protocol were used as the interior gateway protocols (IGPs), and Border Gateway Protocol (BGP)
  was used as the exterior gateway protocol (EGP).

• The BGP core consisted of the ATM cloud seen in the center of Figure 1, with two T3 links
  connecting to Boston, and the POS OC-3 link making the connection between the San Jose and
  Washington, D.C. campuses.

• The routers used for the BGP core were as follows:
  – San Jose: egsj-7609-w1, egsj-7609-w2
  – Boston: egbos-7206-w1, egbos-7206-w2
  – Washington, D.C.: egwas-7609-w1, egwas-7609-w2
  – Denver: egden-7206-w1, egden-7206-w2
  – Dallas: egdal-7206-w1, egdal-7206-w2

• The routers within the same campus established an iBGP neighbor relationship, whereas
  intercampus connections were made via eBGP.

• EGP authentication used Message Digest 5 (MD5) and was performed on eBGP neighbors between
  the San Jose and Washington, D.C. campuses.

• Several eBGP connections from the campuses to two ISPs used route maps to filter unwanted routes.

• Autonomous system path stripping was performed. The private autonomous system 6450x was
  removed, and the global autonomous system was applied and advertised to the ISP. The advertised
  autonomous system path sent to the ISP reflected the appropriate desirability of using a subsequent
  campus as an intermediate or transit autonomous system.

• BGP peer groups were set up in the following logical groups: internal peers, external campus
  peers, and ISP peers.

• All routes originated in other well-known peer campuses were advertised to other campuses with
  the autonomous system prepended three times.

• IGP neighboring across WAN links used MD5 authentication for the appropriate IGP.
• The campus distribution layer, or collapsed distribution layer as appropriate, tuned its metrics
  to ensure symmetric entrance to and exit from the supported VLANs, and load balancing across the
  redundant routers.

• Approximately 10,000 routes were injected by Pagent’s Large Network Emulator (LNE) route
  generator at various points in the topology. BGP was the EGP routing protocol for the ISP
  connection.

• EIGRP was configured as the IGP within the appropriate campus. Each major campus used its own
  autonomous system to ensure campus independence.

• The IGP for Dallas was OSPF. Dallas consisted of no more than four areas.

• One area was configured as an NSSA.

• Summarization was applied at the ABR for routes being created and advertised by the distribution
  layer to the core layer, within the respective campus.

• Multipath for EIGRP, OSPF, and BGP was set to the default number of paths on all platforms.

• Mutual redistribution was performed. All IGP routes generated by the local campus were
  redistributed into BGP. Only those eBGP-learned routes originating from well-known peer campuses
  were redistributed into the local IGP. These routes were tagged to prevent routing loops due to
  mutual redistribution at multiple points.

• Distributed CEF (dCEF) was enabled on all interfaces and platforms that supported it. Where a
  platform or software image did not support dCEF, Cisco Express Forwarding (CEF) was enabled
  instead.
Test Procedure
The procedure used to perform the basic IP functionality test follows:
Step 1
eBGP was configured to allow for multiple autonomous systems to communicate using a common
routing protocol. It allowed administrative abilities such as route summarization, route filtration, and
traffic advertisement adjustments.
Step 2
BGP autonomous system prepend provided BGP with the ability to administratively advertise specific
routes as being less desirable when the test team examined the autonomous system path on the receiving
router.
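The configurations used in the test bed are not reproduced in this document. As a minimal sketch of
autonomous system prepending, the following example shows one common way to do it; the AS
numbers, neighbor address, and route-map name are illustrative placeholders, not test bed values.

    ! Prepend the local AS three times on routes advertised to an eBGP peer,
    ! making those routes less desirable from the receiving router's point of view.
    route-map PREPEND-OUT permit 10
     set as-path prepend 64501 64501 64501
    !
    router bgp 64501
     neighbor 192.0.2.1 remote-as 64502
     neighbor 192.0.2.1 route-map PREPEND-OUT out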
Step 3
BGP named community lists were configured to allow grouping multiple community attributes into a
common named list.
Step 4
BGP prefix-based route filtering was configured to allow for prefix filtering with BGP sessions. This
filtering was done for many reasons. A typical scenario would be controlling the advertisement of routes
toward the ISP. One solution is to use route maps that filter based on prefix information, that is, network
IP addresses. Typically, only a block of IP address ranges are allowed to be transmitted, and hence a
route map can easily be created to filter any prefixes that are not within this range.
Step 5
BGP route authentication used MD5 authentication between two BGP peers, so that each segment sent
on the TCP connection between them was verified. All routers in this test bed that were peering using
BGP performed MD5 authentication using the same password; otherwise, the connection between them
could not be made. Invoking authentication caused the Cisco IOS software to generate and check the
MD5 digest of every segment sent on the TCP connection. If authentication was invoked and a segment
failed authentication, a message appeared on the console.
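As a hedged illustration only (the addresses, AS numbers, and password shown are invented, not the
test bed values), BGP MD5 authentication is typically enabled per neighbor as follows; both peers must
configure the same password, or the TCP session, and therefore the BGP session, does not establish.

    router bgp 64501
     neighbor 192.0.2.1 remote-as 64502
     ! Every TCP segment exchanged with this peer carries an MD5 digest
     neighbor 192.0.2.1 password s3cr3t-key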
Step 6
BGP route dampening was configured to provide the ability to protect router and network stability
against repeated and excessive network state changes elsewhere within the BGP domain.
Step 7
The BGP Soft Config feature was configured to provide the ability to apply policies to BGP learned
routes without restarting the peer relationship. Typically when the test team applied a new policy to BGP
learned routes via a BGP peer, the peer relationship needed to be restarted and the new policies applied
as the peering began.
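A minimal sketch of the configuration behind Steps 6 and 7 follows; the AS numbers and neighbor
address are placeholders, and the dampening parameters shown are the Cisco IOS defaults rather than
values documented for this test bed.

    router bgp 64501
     ! Step 6: suppress routes that flap repeatedly (default half-life, reuse,
     ! suppress, and maximum-suppress timers)
     bgp dampening
     neighbor 192.0.2.1 remote-as 64502
     ! Step 7: store received updates so new inbound policy can be applied
     ! without restarting the peer relationship
     neighbor 192.0.2.1 soft-reconfiguration inbound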
Step 8
EIGRP default route generation generated a default route, either natively or as an external route.
Step 9
EIGRP was configured to provide enhancements over other routing protocols, such as fast convergence,
support for variable-length subnet masks, and support for partial updates and for multiple network layer
protocols.
Step 10
EIGRP metric tuning was configured to provide the ability to administratively adjust the link delay and
link bandwidth. A lower delay was applied to the interface during testing that normally would be the
active HSRP interface, to ensure symmetric routing and switching on and off of the supported VLANs.
Step 11
EIGRP route authentication using MD5 was invoked between two EIGRP neighbor peers. When the test
team used authentication, it forced the Cisco IOS software to generate and check the MD5 digest of
every segment sent on the EIGRP connection.
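A minimal sketch of interface-level EIGRP MD5 authentication follows. The key chain name, key
string, and interface are illustrative; autonomous system 1 matches the San Jose campus described
earlier, but the key values actually used in the test bed are not documented here.

    key chain EIGRP-KEYS
     key 1
      key-string eigrp-md5-secret
    !
    interface Serial0/0
     ! Authenticate every EIGRP packet exchanged on this link
     ip authentication mode eigrp 1 md5
     ip authentication key-chain eigrp 1 EIGRP-KEYS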
Step 12
EIGRP route filtration was configured to allow for explicit denial of route advertisement on an
interface-level basis. EIGRP route filtration can be used to minimize router resource consumption,
similarly to route summarization; however, when routes are denied from being advertised out a particular
interface, the devices are forced to use an otherwise generated summary route (for instance, a default
route), or will not have a route to forward traffic toward.
Step 13
EIGRP route redistribution was configured and routes were redistributed from the EGP into the IGP
selectively. Based on the autonomous system of origin, routes were tagged to prevent them from being
redistributed into BGP. Only routes originated in other campuses were permitted to be redistributed into
EIGRP. Cost was assigned based on the number of the autonomous system listed within the path.
Step 14
EIGRP stub routing was configured to prevent EIGRP queries from being sent over limited bandwidth
links to nontransit routers. Instead, distribution routers to which the stub router was connected answered
the query on behalf of the stub router. EIGRP stub routing provided the ability to reduce the chance of
further network instability due to congested or problematic WAN links.
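A hedged example of EIGRP stub configuration on a branch router follows; the autonomous system
number and network statement are placeholders, not test bed values.

    router eigrp 1
     network 10.20.0.0 0.0.255.255
     ! Advertise only connected and summary routes; upstream distribution
     ! routers answer queries on behalf of this stub router.
     eigrp stub connected summary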
Step 15
OSPF was used as the IGP in the Dallas campus, and as the routing protocol running between the Dallas
campus and any directly connected remote campus. The core routers in Dallas were located in OSPF
area 0. Nonzero areas were connected to devices eg-dal-c1 and eg-dal-c2; device eg-dal-w3 had a stub
area connected via Houston.
Step 16
The OSPF ABR Type 3 link state advertisements (LSA) filtering feature was configured to allow the
administrator improved control of route distribution between OSPF areas.
Step 17
OSPF inbound filtering was configured to allow defining of a route map to prevent OSPF routes from
being added to the routing table. This filtering happens at the moment when OSPF is installing the route
into the routing table.
Note   The OSPF LSA throttling feature is enabled by default and allowed for faster OSPF convergence.
This feature was customized so that one command controlled the sending of LSAs and another
command controlled the receiving interval. This feature also provided a dynamic mechanism to slow
down the frequency of LSA updates in OSPF during times of network instability.
Step 18
OSPF NSSA was configured to allow for the redistribution of external routes into an NSSA area, creating
a special type of LSA known as Type 7 that can only exist in an NSSA area. An NSSA Autonomous
System Boundary Router (ASBR) generated this LSA and an NSSA ABR translated it into a Type 5
LSA, which was propagated into the OSPF domain.
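A minimal sketch of an NSSA configuration follows; the process ID, area number, and network are
illustrative and do not correspond to the Dallas area numbering, which is not given in this document.

    router ospf 1
     network 172.16.10.0 0.0.0.255 area 10
     ! Every router in area 10 must agree that the area is an NSSA.
     ! External routes redistributed inside the area become Type 7 LSAs;
     ! the NSSA ABR translates them to Type 5 LSAs toward the backbone.
     area 10 nssa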
Step 19
OSPF packet pacing was configured so that OSPF update and retransmission packets were sent more
efficiently. This feature also allowed displaying the LSAs waiting to be sent out an interface.
Step 20
OSPF redistribution limits were configured to prevent the network from being flooded by a large number
of IP routes mistakenly injected into OSPF, for example when BGP was redistributed into OSPF.
Step 21
Route summarization provided the ability to administratively minimize resource consumption associated
with large routing tables. EIGRP summarization uses an interface-level configuration to summarize
manually, or summarizes on the classful address boundary by default. OSPF was configured to allow
for summarization at the ABR. BGP was configured to allow for summarization based on the routes it
learns from the IGP, among other methods. EIGRP summarization was applied to the distribution routers
for routes that advertised subnetworks toward the core routers. The core routers advertised routes toward
the WAN aggregation routers that supported the branch campuses. OSPF summarization was applied to
all ABR routers. BGP summarization was applied to all eBGP participating routers, summarizing the
local campus subnetworks.
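The following sketch illustrates the three kinds of summarization described above; all prefixes, process
numbers, and interface names are placeholders rather than the addressing actually used in the test bed.

    ! EIGRP: manual summary applied at the interface level toward the core
    interface GigabitEthernet0/1
     ip summary-address eigrp 1 10.10.0.0 255.255.0.0
    !
    ! OSPF: inter-area summary configured on the ABR
    router ospf 1
     area 10 range 172.16.0.0 255.255.0.0
    !
    ! BGP: advertise only the aggregate for the local campus block
    router bgp 64501
     aggregate-address 10.10.0.0 255.255.0.0 summary-only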
Expected Results
We expected the following results:

• Layer 2 features such as VLAN, VLAN trunking, VTP, STP, UplinkFast, PortFast, BPDU guard,
  FEC, GEC, and UDLD would be correctly incorporated into the respective campus networks.

• Layer 3 features such as HSRP, EIGRP, OSPF, eBGP, and iBGP would be correctly incorporated into
  the respective campus networks.

• The routing protocol features would be correctly incorporated into the campus networks.

• Basic network operation (network connectivity) and major IP routing features would work as
  expected.

• EIGRP/BGP routing or OSPF/BGP routing would work as expected.

• The network connectivity within enterprise global and from global to ISP would work as expected.

• The network parameters such as CPU usage, memory usage, and route convergence time could be
  captured using various test tools.
Link Efficiency
The purpose of the link efficiency functionality test was to ensure that the following link-layer efficiency
mechanisms worked with queueing and traffic shaping:
• Link Fragmentation and Interleaving (LFI) for Multilink PPP (MLP)
• LFI for Frame Relay and ATM VCs
• Frame Relay Fragmentation (FRF)
• Compressed Real-Time Protocol (CRTP)
• Distributed Compressed Real-Time Protocol (dCRTP)

LFI for MLP, FRF.12, and cRTP were deployed and tested in the enterprise global network.
The following notes apply to the link efficiency functionality test:

• The LFI feature reduced delay and jitter on slower-speed links by breaking up large datagrams and
  interleaving low-delay traffic packets with the resulting smaller packets. LFI was applied when
  interactive traffic such as Telnet and VoIP was susceptible to increased latency and jitter because the
  network processed large packets such as LAN-to-LAN FTP transfers that traversed a WAN link,
  especially because they were queued on slower links.

• LFI was designed especially for lower-speed links in which serialization delay is significant. LFI
  required that MLP be configured on the interface with interleaving turned on.

• LFI allows reserved queues to be set up so that Real-Time Protocol (RTP) streams can be mapped
  into a higher-priority queue in the configured weighted fair queue set.

• FRF.12 was applied to a serial interface configured with Frame Relay encapsulation, and is usually
  applied on access links that have a bandwidth of less than 2 Mbps. With FRF.12, non-real-time data
  frames can be carried together on lower-speed links without causing excessive delay to the real-time
  traffic. FRF.12 allows routers from multiple remote sites to be multiplexed into a central site router
  through Frame Relay links. FRF.12 enables Cisco 7500 series routers with a Versatile Interface
  Processor (VIP) to support Frame Relay fragmentation, allowing scalability across multiple VIPs.

• CRTP compresses the IP/UDP/RTP header in an RTP data packet from 40 bytes to approximately 2
  to 5 bytes. The CRTP reduction in line overhead for multimedia RTP traffic results in a corresponding
  reduction in delay. CRTP should be applied on any WAN interface where bandwidth is a concern
  and there is a high portion of RTP traffic. CRTP can be used for media-on-demand and interactive
  services such as Internet telephony.
Test Procedure
The procedure used to perform the link efficiency functionality test follows:
Step 1
CRTP was tested on the WAN links FF1 between Denver and Colorado Springs (egden-7507-w4 and
egcs-2651-vw), FF2 between Dallas and Santa Fe (egdal-7507-w4 and egsaf-2651-vw), and AF3 between
Denver and New Orleans (egden-7206-w3 and egneo-2651-vw).
Step 2
MLP LFI was tested on the WAN link AF3 between Denver and New Orleans (egden-7206-w3 and
egneo-2651-vw) and FF2 between Dallas and Santa Fe (egdal-7507-w4 and egsaf-2651-vw).
Step 3
FRF.12 was tested on the WAN links FF1 between Denver and Colorado Springs (egden-7507-w4 and
egcs-2651-vw), and FF2 between Dallas and Santa Fe (egdal-7507-w4 and egsaf-2651-vw).
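A hedged sketch of an MLP LFI and cRTP configuration of the kind exercised above follows. The
interface numbers, addresses, and fragment delay are illustrative, and exact command forms vary
slightly between Cisco IOS releases.

    interface Multilink1
     ip address 10.99.1.1 255.255.255.252
     ppp multilink
     ! Interleave small delay-sensitive packets between fragments of large packets;
     ! the fragment delay (in milliseconds) sizes fragments for the link speed.
     ppp multilink interleave
     ppp multilink fragment delay 10
     ! Compress RTP headers on this low-speed link
     ip rtp header-compression
     ppp multilink group 1
    !
    interface Serial0/0
     no ip address
     encapsulation ppp
     ppp multilink
     ppp multilink group 1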
Expected Results
We expected the link efficiency mechanisms to work as expected and to improve the efficiency and
predictability of the application service levels.
Link Type and Encapsulation
The purpose of the link type and encapsulation functionality test was to verify T3, E3, T3-ATM, POS
OC-3, and GE and FE connections, and Frame Relay, PPP, High-Level Data Link Control (HDLC),
ARPA (RFC 826 Ethernet-style ARP), 802.1q, Inter-Switch Link (ISL), and ATM adaptation layer
(AAL5snap) encapsulations.
The following notes apply to the link type and encapsulation functionality test:

• The high-speed WAN layer consisted of a partially meshed set of connections consisting of T3, E3,
  T3-ATM, and POS OC-3 connections. Each high-speed WAN router had redundant connections to
  the local campus core routers using either GE or FE.

• The Boston, Dallas, Denver, San Jose, and Washington, D.C. major campuses consisted of core,
  distribution, WAN aggregation, and access layers. The layers were interconnected using redundant
  GE and FE connections. The access layer connected to the workstations using FE.

• All trunk connections used 802.1q.

• Branches were connected to the major sites using Frame Relay, ATM/Frame Relay, serial, or ISDN.
  The link types used on the test network branch connections were serial, HSSI, T1, Ch-T1, E1,
  Ch-E1, T3/DS3, E3, ATM E3, ATM T3, Ch-T3, Ch-E3, and FE. The encapsulation types used were
  Frame Relay, PPP, HDLC, ARPA, 802.1q, ISL, and AAL5snap.
Test Procedure
The procedure used to perform the link type and encapsulation functionality test follows:
Step 1
802.1Q VLANs were configured to allow LAN networks to be divided into workgroups connected via
common backbones to form VLAN topologies.
Step 2
ISL VLANs were configured to allow LAN networks to be divided into workgroups connected via
common backbones to form VLAN topologies.
Expected Results
We expected the link types and encapsulation to work as expected.
Link Management
The purpose of the link management functionality test was to verify correct operation of the following
link management mechanisms:
• Cisco Discovery Protocol (CDP)
• PortFast and UpLink Fast
• STP
• VTP
• Unidirectional Link Detect (UDLD)
The following notes apply to the link management functionality test:

• The Operation, Administration, and Maintenance (OAM) feature, which limits the number of
  consecutive OAM Alarm Indication Signal (AIS) cells received before the subinterface is brought
  up or down, also limited the number of OAM loopback retries before the subinterface was brought
  down.

• STP was used to ensure that a Layer 2 LAN switched environment was loop free with consistent
  packet forwarding for the applicable Layer 2 destination.

• VTP was used to ensure that the appropriate VLANs were propagated within a Layer 2 environment
  to minimize the management associated with the network.

• CDP was used to provide device connection information, regardless of media. CDP also allowed for
  improved troubleshooting ability.

• UDLD provided Layer 2 link integrity for LAN switches to ensure that the STP was appropriately
  updated when the transmit or receive portion of a link failed.

• PortFast, in conjunction with PortFast-BPDU guard, minimized STP convergence times associated
  with bringing up an end device such as a workstation.

• UpLink Fast provided improved convergence time for STP when a directly connected link failed
  and STP was forced to fail over to a redundant link.

• STP was implemented in every Layer 2 LAN switched environment. In addition, it was tuned to
  ensure that the STP root was on the same device as the active HSRP router.

• VTP was implemented in every Layer 2 LAN switched environment to ensure that VLAN
  information was propagated.

• CDP was enabled on all LAN links.

• UDLD was enabled on all GE connections.

• PortFast was enabled on all access layer devices where the port was connecting to an end device.

• UpLink Fast was implemented in all Layer 2 LAN switch devices.
Test Procedure
The procedure used to perform the link management functionality test follows:
Step 1
CDP was enabled.
This media- and protocol-independent device-discovery protocol runs on all Cisco-manufactured
equipment including routers, access servers, bridges, and switches. With CDP enabled, a device can
advertise its existence to other devices and receive information about other devices on the same LAN or
on the remote side of a WAN.
Step 2
PortFast was configured to cause an interface configured as a Layer 2 access port to enter the forwarding
state immediately, bypassing the listening and learning states.
Step 3
STP was configured to ensure that packets were not looped within a redundant Layer 2 network.
To load balance across different distribution layer switches and routers, testers used the spanning-tree
priority command to force the STP root to be distributed. The best priority was applied to all distribution
switches that were the primary egress for the applicable subnetworks (per the HSRP configuration). A
slightly less desirable priority was applied to the secondary egress port (per the HSRP configuration).
When spanning tree PortFast is used on Layer 2 access ports connected to a single workstation or server,
it allows those devices to connect to the network immediately, rather than waiting for the spanning tree
to converge. This capability is particularly effective when combined with the BPDU guard feature.
If the interface received a BPDU, which should not happen when an interface is connected to a single
workstation or server, the spanning tree put the port into the blocking state.
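A minimal sketch of this tuning in Cisco IOS (native) switch syntax follows; CatOS devices in the test
bed would use the equivalent set commands. The VLAN numbers and priority values are illustrative,
not the test bed values.

    ! Primary distribution switch: best (lowest) priority for the VLANs
    ! whose active HSRP router is on this device
    spanning-tree vlan 10 priority 8192
    !
    ! Secondary distribution switch: slightly less desirable priority
    spanning-tree vlan 10 priority 16384
    !
    ! Access port connected to a single workstation or server
    interface FastEthernet0/1
     switchport mode access
     spanning-tree portfast
     ! Disable the port if a BPDU is ever received on it (BPDU guard)
     spanning-tree bpduguard enable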
Step 4
STP UplinkFast was configured to provide fast convergence in the network access layer after a spanning
tree topology change due to a direct link failure using uplink groups. An uplink group is a set of ports
(per VLAN), only one of which is forwarding at any given time. Specifically, an uplink group consists
of the root port (which is forwarding) and a set of blocked ports (not including self-looped ports). The
uplink group provides an alternate path in case the currently forwarding link fails.
Step 5
UDLD was enabled to perform tasks that autonegotiation cannot perform, such as detecting the identities
of neighbors and shutting down misconnected ports. When both autonegotiation and UDLD are enabled,
Layer 1 and Layer 2 detections work together to prevent physical and logical unidirectional connections
and the malfunctioning of other protocols. UDLD is a Layer 2 protocol that works with the Layer 1
mechanisms to determine the physical status of a link. At Layer 1, autonegotiation monitors physical
signaling and fault detection.
Step 6
VTP domain was configured to reduce administration in a switched network.
When a new VLAN on one VTP server is configured, the VLAN is distributed through all switches in
the domain, which reduces the need to configure the same VLAN everywhere. Each building was
configured in a separate VTP domain. The distribution layer switches were configured as VTP servers,
and the connected access layer switches were configured in VTP transparent mode.
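A hedged sketch of the VTP arrangement described above follows; the domain name is a placeholder.
Native IOS global syntax is shown, and CatOS switches in the test bed would use the equivalent set vtp
commands.

    ! Distribution layer switch: VTP server for this building's domain
    vtp domain BLDG1
    vtp mode server
    !
    ! Access layer switch in the same building
    vtp domain BLDG1
    vtp mode transparent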
Expected Results
We expected the link management mechanisms to work as expected.
Multicast
The purpose of the multicast functionality test was to deploy and verify multicast deployment scenarios
for large enterprise customers that want to gain the benefits of multicasting over the network. The
enterprise global test bed tested various multicast features on different Cisco platforms, with different
Cisco IOS and CatOS software images, as follows:
• Accept-register filter
• Anycast rendezvous point (RP) with Auto-RP
• Cisco Group Management Protocol (CGMP)
• Internet Group Management Protocol (IGMP) snooping
• IGMPv2
• MDS (Cisco 7500-specific feature)
• Multicast multilayer switching (MMLS)
• Multicast boundary
• Multicast rate limit
• Protocol Independent Multicast (PIM) sparse mode
• PIM sparse-dense mode
The following notes apply to the multicast functionality test:

• IP multicast routing was enabled on every router in the test bed. Except for the Denver campus, all
  campuses used PIM sparse-dense mode. PIM sparse mode was tested on the Denver campus. Both
  Auto-RP and static RP were configured in the test bed to provide redundancy.

• Cisco’s IP/TV server was used to provide real multicast applications for the entire test bed. The
  multicast traffic was sent to eight different groups. Some simulated multicast traffic was also
  generated with a traffic generator.

• The IP/TV server was connected to switch egsj-6505-sa3. The simulated IP/TV traffic was generated
  from the San Jose campus from switches egsj-5505-sa1, egsj-5505-sa2, and egsj-6506-sa3.
  Simulated feedback traffic was also sent from the San Jose campus user floors to the San Jose data
  center.

• Depending upon the location, the IP/TV client received a subset of the multicast groups:
  – In the large campuses (San Jose, Boston, Washington, D.C., Denver, and Dallas), the IP/TV
    clients were configured on all the user access switches. Those clients were subscribed to all
    high-speed (1 Mbps) IP/TV programs.
  – The remote, lower bandwidth branches such as Colorado Springs and New Orleans were
    subscribed to two 60 kbps groups. The other, higher bandwidth branches subscribed to two
    200 kbps programs.
Test Procedure
The procedure used to perform the multicast functionality test follows:
Step 1
Accept-register filter was configured to allow the RP router to filter PIM register messages, so that RP
could control which source registered with it.
Step 2
Anycast-RP used the Multicast Source Discovery Protocol (MSDP) protocol to provide resiliency and
redundancy. Auto-RP was configured to automatically distribute the group-to-RP mapping information
in a PIM-enabled network.
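A minimal sketch of Anycast RP with Auto-RP on one of a pair of RPs follows; all addresses are
placeholders, and the second RP would be configured identically except for its own unique loopback
address.

    interface Loopback0
     description Shared (anycast) RP address, identical on both RPs
     ip address 10.1.1.1 255.255.255.255
    !
    interface Loopback1
     description Unique address used for the MSDP peering
     ip address 10.1.2.1 255.255.255.255
    !
    ! MSDP peering between the two RPs keeps their source information synchronized
    ip msdp peer 10.1.2.2 connect-source Loopback1
    ip msdp originator-id Loopback1
    !
    ! Auto-RP: announce the anycast address as the RP and act as mapping agent
    ip pim send-rp-announce Loopback0 scope 16
    ip pim send-rp-discovery Loopback1 scope 16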
Step 3
CGMP was configured.
CGMP works with IGMP and requires a connection to a router to dynamically configure Layer 2 switch
ports so that IP multicast traffic was forwarded only to those ports associated with IP multicast hosts.
Step 4
IGMP snooping, an alternative to CGMP, was used on Layer 2 Cisco Catalyst switches. The Layer 2
switch monitors the IGMP queries and responses between end hosts and the router, thereby providing
multicast subscription control.
Step 5
IGMP V2 is enabled by default in the Cisco IOS software. IGMP messages were used by the router and
host to communicate with each other, so that the router could monitor the correct group membership
status.
Step 6
Multicast distributed switching (MDS) was configured and worked with Cisco Express Forwarding
(CEF), unicast distributed fast switching (DFS), or flow switching to distribute switching of multicast
packets received at the line cards.
Step 7
MMLS was used in the Cisco Catalyst switches to create hardware shortcuts for (S, G) and (*, G) entries
to forward multicast traffic.
Step 8
The multicast boundary feature was configured to administratively limit the range of multicast streams.
Step 9
The multicast rate-limit feature was configured to control the rate by which a sender from the source list
can send to a multicast group in the group list.
Step 10
PIM sparse mode was enabled in the Denver campus and was used throughout the entire campus.
Step 11
PIM sparse-dense mode was enabled on all Layer 3 native Cisco IOS and multicast devices in the Boston,
San Jose, Washington, D.C., and Dallas campuses. This mode controls the way multicast groups are
propagated within the network. Traffic is usually propagated using PIM sparse mode. This test also
verified whether the network could forward reasonable amounts of dense mode multicast traffic.
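A minimal sketch of the per-router multicast configuration assumed by these steps follows; the interface
and RP address are illustrative only.

    ip multicast-routing
    !
    interface GigabitEthernet0/1
     ip pim sparse-dense-mode
    !
    ! Static RP kept as a backup to Auto-RP, as noted earlier
    ip pim rp-address 10.1.1.1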
Step 12
Fallback RP was configured on router egsj-6506-c2 to prevent dense mode fallback and to block
multicast traffic for groups not specifically configured.
Expected Results
We expected that all multicast scenarios would be successfully deployed.
Network Management
The purpose of the network management functionality test was to ensure that the information stored in
Management Information Base (MIB) databases could be accessed, propagated, read, and written (when
applicable) using Simple Network Management Protocol (SNMP), and to ensure that the operation of
the Object Identifier (OID) values reflected the actual operation on the device.
Test Procedure
The procedure used to perform the network management functionality test follows:
Step 1
The test team manually tested new and enhanced MIBs.
Step 2
The test team used existing scripts to test all identified MIBs.
Automated SNMP scripts were used to perform compliance testing. The snmp_walk_test script was used
to launch MIBs (Basic MIBs, Multicast MIBs, VoIP MIBs, QoS MIBs, and Security MIBs) on the unit
under test. The scripts performed the following testing:
• Access testing
• SYNTAX testing
• READ_ONLY range testing
• READ_WRITE range testing
Step 3
The test team checked and verified SNMP versions 1, 2C, and 3 traps.
For SNMP v1, the basic operations GET, GETNEXT, SET, and TRAP were tested.
For SNMP v2C, the basic operations GET, GETNEXT, GETBULK, SET, TRAP, and INFORM were tested.
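A minimal Cisco IOS snmp-server configuration of the kind exercised by these tests is sketched below; the community strings, SNMPv3 user and group names, passwords, and management station address are placeholders.

! SNMPv1/v2c community strings and trap/inform destinations (placeholders)
snmp-server community public RO
snmp-server community private RW
snmp-server enable traps
snmp-server host 10.50.1.10 version 2c public
snmp-server host 10.50.1.10 informs version 2c public
! SNMPv3 group and user with authentication and privacy
snmp-server group NMSGROUP v3 priv
snmp-server user nmsuser NMSGROUP v3 auth md5 authpass priv des56 privpass
snmp-server host 10.50.1.10 version 3 priv nmsuser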
Step 4
The test team verified the MIB database by performing various operations in the test bed topologies.
Step 5
The test team verified that the MIB variables were set and that values were propagated to other MIB
variables as defined.
The test team also tested whether various MIB implementations conformed to their respective MIB
specifications. The test team verified that the real operation being set in the OIDs was correctly
reflected on the device. The tests were performed both manually and with automated scripts, and then
the results were compared with command output.
Step 6
The test team checked and compared the MIB support list to the output of the show snmp mib EXEC
command.
Step 7
The test team verified the MIB database by performing different operations in the covered topologies.
Step 8
The test team verified that a MIB variable was set and its value was propagated to other MIB variables
as defined.
Step 9
The test team verified whether various MIB implementations conformed to the respective MIB
specification.
Step 10
The test team verified that the real operation being set in the OIDs was correctly reflected on the device.
The tests were performed both manually and with automation scripts, and then the results were compared
using Cisco IOS EXEC commands.
Step 11
The test team verified that SNMP versions 1, 2C, and 3 traps and informs were appropriately generated.
Step 12
The test team used system message logging (syslog) to collect messages from devices to a server running
a syslog daemon. Logging to a central syslog server helped in the aggregation of logs and alerts. Cisco
devices can send their log messages to a UNIX-style syslog file.
Step 13
The test team created negative tests paired with memory leak tests to verify that no SNMP operations
caused memory leaks.
Expected Results
We expected that the information stored in the MIB database could be accessed, propagated, read, and
written, so that the operation of the OID values reflected the actual operation on the device.
QoS
The purpose of the QoS test was to deploy and verify the following QoS scenarios and features:
• Classification and marking at the access and distribution layers for the following features:
– Access lists
– Network-based application recognition (NBAR) to classify Telnet
– Distributed NBAR (dNBAR)
– IP Precedence
– Layer 2 Class of Service (CoS) to Layer 3 QoS trust
• Congestion avoidance using Weighted Random Early Detection (WRED).
• Congestion management using low latency queueing (LLQ) and class-based weighted fair queueing (CBWFQ).
• Traffic conditioning using Frame Relay traffic shaping (FRTS), generic traffic shaping (GTS), and Distributed Traffic Shaping (DTS).
The QoS features were implemented at the core, distribution, access, WAN, and branch layers.
Negative testing was also performed by having the test team add, delete, and read service policies, policy
maps, and class maps.
Test Procedure
The procedure used to perform the QoS functionality test follows:
Step 1
QoS classification and marking are two separate actions that are usually done together. Packet
classification and marking took place on the outbound interfaces of the access and distribution layer
switches in all campuses. Layer 2 CoS to Layer 3 QoS trust mechanisms, also known as remarking or
mapping, were tested wherever possible. Classification also took place at other routers and switches in
the network to enforce predictable per-hop behaviors.
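The following is a minimal sketch of this kind of classification and marking configuration, including the Layer 2 CoS trust piece on a Catalyst switch running Cisco IOS; the class map and policy map names, interfaces, and precedence value are assumptions, not the test bed configuration.

! Classify Telnet with NBAR and mark it with IP precedence on the way out
class-map match-all TELNET
 match protocol telnet
policy-map MARKING
 class TELNET
  set ip precedence 3
interface FastEthernet0/1
 service-policy output MARKING
! On the switch, trust the Layer 2 CoS received from an attached IP phone
mls qos
interface FastEthernet0/2
 mls qos trust cos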
Step 2
WRED was configured as the primary congestion avoidance mechanism. This mechanism used
algorithms that provided buffer management to the output queues.
Step 3
The test team managed network congestion by setting up queues of differing priorities. The LLQ
priority and bandwidth commands were used to establish a set of queues to service all traffic. These
commands were added to the policy map applied to the outbound interfaces for all WAN routers and
Layer 3 switches.
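A minimal sketch of a policy map combining an LLQ priority queue, a CBWFQ bandwidth guarantee, and WRED is shown below; the class names, match criteria, and rates are placeholders.

class-map match-all VOICE
 match ip precedence 5
class-map match-all CRITICAL-DATA
 match ip precedence 3
policy-map WAN-EDGE
 class VOICE
  priority 128               ! strict-priority (LLQ) queue, 128 kbps
 class CRITICAL-DATA
  bandwidth 256              ! CBWFQ guarantee, 256 kbps
  random-detect              ! WRED for congestion avoidance in this queue
 class class-default
  fair-queue
interface Serial0/0
 service-policy output WAN-EDGE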
Step 4
The policing and shaping features made up traffic conditioning. Phase 3 testing used GTS, DTS, FRTS,
and ATM traffic shaping (VBR-nrt), on the WAN and WAN edges to account for the disparity in link
speeds.
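As an illustration of the traffic-conditioning piece, the following Frame Relay traffic shaping sketch shapes one PVC to 256 kbps and attaches a queueing policy; the DLCI, rates, addresses, and the WAN-EDGE policy name (carried over from the previous sketch) are assumptions.

! Frame Relay traffic shaping on a WAN edge PVC (values are placeholders)
map-class frame-relay SHAPE-256K
 frame-relay cir 256000
 frame-relay bc 2560
 frame-relay be 0
 frame-relay mincir 256000
 service-policy output WAN-EDGE     ! LLQ/CBWFQ policy applied to the shaped PVC
interface Serial0/0
 encapsulation frame-relay
 frame-relay traffic-shaping
interface Serial0/0.101 point-to-point
 ip address 192.168.101.1 255.255.255.252
 frame-relay interface-dlci 101
  class SHAPE-256K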
Expected Results
We expected the QoS deployment scenarios to work correctly for large enterprise customers that want
to protect their critical application data, including real-time VoIP, on WAN links.
Resiliency
Cisco defines network resiliency as the ability to recover from any network failure. The purpose of the
resiliency functionality test was to deploy and verify the following features:
• Test routers configured to participate in an HSRP standby group, which provides redundant support for a virtual IP address that is used by end hosts as a default gateway.
• Monitor convergence times to learn how quickly the network recovers after a failure.
• Test Route Processor Redundancy Plus (RPR+) on the Cisco 7600 series routers to verify that interfaces stay up during a switch from an active to a standby Route Processor.
The following notes apply to the resiliency tests:
• At the core layer, all campus core routers in San Jose had redundant supervisor modules, with one module configured as the standby supervisor.
• At the distribution layer, each campus used either a full distribution layer or a collapsed core and distribution layer, with a single supervisor in each device, but maintained two routers running HSRP between them to support users in the access layer.
• No resiliency feature was tested within the access layer other than physically redundant connectivity from each access layer switch to the distribution layer router.
Test Procedure
The procedure used to perform the resiliency functionality test follows:
Step 1
To verify HSRP, the show standby command was used on the active and standby routers in the HSRP
group to display the local state, either standby or active. As a negative test, the active router that was set
up in the HSRP group was physically removed to verify that the standby router’s local state changed to
active.
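A minimal HSRP configuration of the kind verified here is sketched below; the VLAN interface, addresses, group number, and priority are placeholders.

interface Vlan10
 ip address 10.10.10.2 255.255.255.0
 standby 10 ip 10.10.10.1       ! virtual IP used by end hosts as their gateway
 standby 10 priority 110
 standby 10 preempt
! Used on the active and standby routers to display the local HSRP state
show standby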
Step 2
Monitoring convergence times was done using the external Ixia traffic generator tool. The convergence
time was measured from the time of failure to the time of complete recovery.
Step 3
RPR+ was tested on the Cisco 7600 series routers to verify that interfaces stayed up during a switchover
from the active to the standby Route Processor. The show redundancy switchover and show redundancy states
commands were used to verify this behavior. The interface statuses were displayed after the test team
copied a new image into both supervisors.
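A rough sketch of enabling RPR+ and checking the redundancy state on a Cisco IOS supervisor follows; the commands shown are generic rather than the exact test procedure, and forcing a switchover from the CLI (shown last) is only one way a switchover may be triggered.

! Configure RPR+ as the redundancy mode
redundancy
 mode rpr-plus
! Verify the active/standby state and supervisor readiness
show redundancy states
! One way to trigger a switchover for testing
redundancy force-switchover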
Expected Results
We expected the following results:
• Routers configured to participate in an HSRP standby group should provide redundant support for a virtual IP address that is used by end hosts as a default gateway.
• Convergence times should be fast, so that customers do not lose traffic and the impact of a failure is minimal.
• RPR+ should allow interfaces to stay up during a switchover from the active to the standby Route Processor.
Security
The purpose of the security functionality test was to deploy and verify the IPSec deployment scenarios
for large enterprise customers that want to secure their intercampus communications while using the
scalability of key management features incorporated into IPSec.
The following security features were tested:
• IPSec with AIM, ISA, and VAM hardware encryption
• IPSec on the Cisco Catalyst 6000 series switch
• IPSec and QoS interactivity
• IPSec transport and tunnel modes with peer-to-peer, site-to-site (hub and spoke), and GRE tunnel topologies
• PIX Firewall
• Cisco Secure Server (AAA, TACACS+) and Public Key Infrastructure (PKI) (CA)
• Payload: voice and data
• Encryption: Data Encryption Standard (DES), 3DES, hardware encryption, and software encryption
• Internet Key Exchange (IKE) Extended Authentication
• Key management: Internet Security Association and Key Management Protocol (ISAKMP) with preshared keys and PKI (CA)
• Packet filtration: access lists
Tunnel mode and transport mode were both tested in site-to-site and peer-to-peer encryption
configurations. The encryption method varied between hardware- and software-based encryption. The
transported data included traffic generated from testing tools such as Chariot and CallGen. PKI
allowed two peers to dynamically share encryption keys.
Test Procedure
The procedure used to perform the security functionality test follows:
Step 1
IP Security (IPSec) with Advanced Integration Module (AIM), Integrated Services Adapter (ISA), and
Virtual Private Network (VPN) Acceleration Module (VAM) hardware encryption was tested.
AIM and VAM modules were implemented in the security tests to increase encryption performance. The
AIM and VAM modules modify how a packet is treated within the router. With hardware encryption, the CPU
does not handle the packet the entire time; it hands the packet to the AIM or VAM module before queueing
it for transmission.
The following encryption modules were used:
• AIM-VPN (BP)
• AIM-VPN (MP) for Cisco 2600 and 3600 routers
• ISA, VAM, and VAM I for Cisco 7200 routers
The following IPSec encryption modules were tested as part of the IPSec deployment:
• An SA-VAM VPN module was added to the Denver WAN router (egden-7206-w3).
• An SA-ISA module was added to the Cisco 7200 WAN routers in San Jose (egsj-7206-w3), Boston (egbos-7206-w3), Dallas (egdal-7206-w1 and egdal-7206-w2), and Phoenix (egphx-7206-w1).
• An NM-VPN/MP (Mid-Performance) network module was added to the WAN Cisco routers in Pittsburgh (egpit-3640-w1) and Los Angeles (egla-3640-w1).
• An AIM-VPN/HPII (High-Performance II) VPN module was added to the Houston WAN router (eghou-3660-w1).
• A MOD1700-VPN module was added to a Cisco 1760 router. The 3DES data encryption algorithm was used.
• An AIM-VPN/EP (Enhanced Performance) module was added to a Cisco 2621 WAN router in Austin.
Step 2
IPSec and Certificate Authority (CA) were tested.
A Microsoft CA server was located in San Jose. The IPSec and CA interaction was tested with the
processing of requests and revoking of certificates. Spoke-to-spoke calling was tested with VoIP packets
that were encrypted at one site, decrypted at a headend router at another site, routed to another tunnel
interface, encrypted again, and then decrypted one last time. Data packets were also encrypted and
decrypted at sites that allowed testing VPN termination behind the WAN edge.
Step 3
IPSec and Cisco Catalyst 6000 series switches were tested.
This feature test was implemented between Washington, D.C. and Dallas to test campus-to-campus
IPSec encryption and VPN termination in the core. Chariot emulation streams between the cores were
encrypted. This feature was implemented with a WS-SVC-IPsec-1, IPSec VPN Services Module for the
Cisco Catalyst 6500 switch.
Step 4
IPSec and QoS for encrypted traffic were tested.
This feature test was implemented because it maximized bandwidth for all IPSec traffic. Bandwidth
policies were defined for encrypted flows so that all other traffic on the interface would use the bandwidth
that IPSec did not use. A class map matched all the IPSec traffic leaving the routers.
Step 5
IPSec and QoS preclassify were tested.
This feature test was implemented to enable IPSec to interoperate with QoS between the Dallas
(egdal-7206-w1) and Austin (egaus-2621-w1) campuses, for a simulated voice, video, and data traffic
profile. For configuring an IPSec-encrypted IP GRE tunnel, QoS preclassify was enabled on both the
tunnel interface and crypto map.
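The preclassification piece of this step amounts to two qos pre-classify statements, applied both to the GRE tunnel interface and to the crypto map entry; the tunnel number and crypto map name below are assumptions.

! Enable QoS preclassification on the tunnel and on the crypto map entry
interface Tunnel0
 qos pre-classify
crypto map VPNMAP 10 ipsec-isakmp
 qos pre-classify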
Step 6
IPSec transport mode with no IP GRE tunnel was tested.
This implementation was considered for testing because it saved bandwidth compared to IPSec
tunnel mode with no IP GRE tunnel. In this configuration, IPSec encrypted IP unicast traffic only.
IPSec SAs were created for each access list line matched. An access list was specified in the crypto map
to designate packets that were to be encrypted.
Step 7
IPSec transport mode encrypting an IP GRE tunnel was tested.
This deployment was considered because it saved 16 bytes per packet over the IP GRE tunnel for G.729
calls. Because an additional IPSec IP header was not required, less overhead was incurred compared to
the IPSec tunnel mode encrypting an IP GRE tunnel feature test. Voice packets (G.729 and G.711), Chariot
streams, and routing traffic were encrypted. The dynamic routing protocol used within the IP GRE tunnel was
EIGRP.
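A minimal sketch of transport-mode IPSec protecting an IP GRE tunnel follows; the preshared key, peer and tunnel addresses, transform set, and crypto map names are placeholders, not the test bed values.

! IKE policy and preshared key (placeholders)
crypto isakmp policy 10
 authentication pre-share
crypto isakmp key cisco123 address 172.16.2.1
! Transport mode saves the extra IPSec IP header when protecting GRE
crypto ipsec transform-set GRE-SET esp-3des esp-sha-hmac
 mode transport
! Encrypt only the GRE traffic between the two tunnel endpoints
access-list 110 permit gre host 172.16.1.1 host 172.16.2.1
crypto map GRE-VPN 10 ipsec-isakmp
 set peer 172.16.2.1
 set transform-set GRE-SET
 match address 110
interface Tunnel0
 ip address 10.99.1.1 255.255.255.252
 tunnel source 172.16.1.1
 tunnel destination 172.16.2.1
interface Serial0/0
 ip address 172.16.1.1 255.255.255.252
 crypto map GRE-VPN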
Step 8
IPSec tunnel mode with no IP GRE tunnel was tested.
This implementation was considered because it saved bandwidth compared to the IPSec tunnel mode
with IP GRE tunnel test, did not utilize an IP GRE tunnel, and minimized the antireplay drops. In this
configuration, IPSec security associations were created for each access list line matched. An access list
was specified in the crypto map to designate packets that were to be encrypted.
Step 9
IPSec tunnel mode encrypting an IP GRE tunnel was tested.
This implementation was considered because it incurred the greatest header overhead compared to IPSec
transport mode encrypting an IP GRE tunnel and IPSec tunnel mode with no IP GRE tunnel. The voice
packets (G.729, G.711), Chariot streams, and routing traffic were encrypted. The dynamic routing
protocol within the IP GRE tunnel was EIGRP. When this implementation was configured with routing
protocol EIGRP running within an IP GRE tunnel, the routing protocol’s hello packets maintained the
security associations between routers.
Step 10
A PIX Firewall device was tested.
A PIX Firewall device was deployed in the San Jose campus. PIX Firewall was deployed behind the
perimeter Cisco 7200 router (egsj-7206-w4), which connected to the ISP. The PIX device connected to
device egsj-7206-w3 inside the enterprise network. A web server was placed in the network, and two
PCs were used in the ISP test bed. One PC acted as an Internet user and hacker; the other PC acted as
a web server.
The following PIX features were tested (a configuration sketch follows this list):
• Network Address Translation (NAT)
• Port Address Translation (PAT)
• AAA/TACACS+ integration
• Syslog
• Failover and stateful failover
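The sketch below illustrates roughly how NAT, PAT, AAA, syslog, and failover are configured on a PIX running 6.x software; every address, key, and pool value is a placeholder.

! NAT for inside hosts, with an address pool and a single-address PAT fallback
nat (inside) 1 10.0.0.0 255.0.0.0
global (outside) 1 209.165.200.10-209.165.200.20 netmask 255.255.255.224
global (outside) 1 209.165.200.21 netmask 255.255.255.224
! AAA/TACACS+ server and syslog integration
aaa-server TACACS+ (inside) host 10.50.1.20 sharedkey
logging on
logging host inside 10.50.1.30
! Failover to the standby unit
failover
failover ip address inside 10.50.1.2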
Step 11
TACACS+ and AAA were tested.
TACACS+ services and AAA were tested as a part of the end-to-end global infrastructure at a system
level for all access, distribution, core, and WAN edge devices in the global topology. This feature test
was enabled on all routers and switches in the San Jose, Boston, Denver, Dallas, and remote campuses
as part of the infrastructure. The AAA server was located in San Jose.
Step 12
IPSec for voice and video was tested.
A voice- and video-enabled VPN configuration was implemented between two links with QoS
preclassify and an IPSec GRE tunnel configured. Voice and video streams were simulated by Chariot
emulators.
Expected Results
We expected the following results:
• The site-to-site IPSec VPN in an Architecture for Voice, Video, and Integrated Data (AVVID) infrastructure for enterprise customers deploying services and solutions that included VPN, QoS, and IP telephony would work as expected.
• Campus PIX Firewall infrastructures would work as expected.
Voice
The purpose of the voice functionality test was to examine the convergence of the following two general
areas of VoIP telephony within the enterprise global topology:
• The traditional H.323 gateway-based VoIP.
• The newer IP telephone and Cisco CallManager version and upgrade (also called AVVID) additions.
The following voice features were used in the testing:
• H.323 testing took place between all legacy gateways, Cisco Catalyst 4000 blades, VG200/VG248, ATA186/188, and gatekeepers, and from gatekeepers and gateways to the Cisco CallManager.
• Skinny Client Control Protocol (SCCP) testing took place between all real or simulated IP telephones, Cisco Catalyst 6000 blades, Cisco Catalyst 4000 switch transcoding, Survivable Remote Site Telephony (SRST) gateways, and the Cisco CallManager.
• Cisco CallManager clustering used the centralized and distributed models.
• The Cisco CallManager upgrade procedure used versions 3.1(x) and 3.2(x) to 3.3.
• RTP voice stream testing took place after all call setups.
• Transcoding testing took place in selected paths at the data center and other regional sites.
• Codec testing used G.711 and G.729.
• SRST testing took place at the Los Angeles, Colorado Springs, and Pittsburgh remote sites.
The following hardware was used:
• Cisco CallManagers were used at the San Jose campus.
• Gateways were legacy in all campuses, and VG200/VG248 and ATA186/188 devices were tested in San Jose.
• A gatekeeper was used at the San Jose campus.
• Cisco 7960 IP phones were used at all sites, two per inline power card, and one per remote-site switch in Pittsburgh, Los Angeles, and Colorado Springs.
• Cisco 36x0 routers were used in the legacy networks only.
• A Cisco 3745 router was used in the legacy network only.
• Cisco 26xx routers were used in the legacy network only.
• A Cisco 720x router was used in the legacy network only.
• Cisco Catalyst 6000 series switches were used in the San Jose and Washington, D.C. campuses, with T1 CAS blades: SJ-b1-a3 and WAS-c2.
• Cisco Catalyst 6000 series switches were used in the San Jose and Washington, D.C. campuses, with FXS analog blades: SJ-b1-a3 and WAS-c2.
• Cisco Catalyst 6000 series switches with inline power cards were used in the San Jose and Washington, D.C. campuses: SJ-b1-a3 and WAS-c2.
• Cisco Catalyst 4000 series switches with access gateway modules were used in the San Jose, Boston, and Denver campuses: SJ-b2-a1, BOS-a3, and DEN-a3.
• Cisco Catalyst 4000 series switches with 8-port FXS cards were used in the San Jose, Boston, and Denver campuses: SJ-b2-a1, BOS-a3, and DEN-a3.
• Cisco Catalyst 4000 series switches with two-port FXS cards were used in SJ-b2-a1, BOS-a3, and DEN-a3.
• Cisco Catalyst 4000 series switches with T1 CAS cards were used in SJ-b2-a1, BOS-a3, and DEN-a3.
• Cisco Catalyst 4000 series switches with inline power cards were used in SJ-b2-a1, BOS-a3, and DEN-a3.
The following test tools were used:
• CallGen Telephony (Cisco 3660 routers running CallGen images):
– Legacy H.323 Callgens: one router for digital T1s and one router for analog FXS.
– SCCP IP-Phone Callgens: one router for originate and one router for terminate.
– SCCP Digital T1 Callgens: one router for originate and one router for terminate.
– SCCP Analog FXS Callgens: one router for originate and one router for terminate.
• Bulk Call Simulator: based on SimClient+ tools.
• CIC: CallGen Information Collector.
• Analog and IP 7960 telephones.
Besides the features already listed, the scope included testing matrices in three areas:
• Signaling protocols (H.323 and SCCP):
– H.323 to H.323 calls—Covered across the entire VoIP legacy network.
– H.323 to SCCP calls—Covered across the entire AVVID network.
– SCCP to SCCP calls—Covered across the entire AVVID network.
• Call types (T1 Channel Associated Signaling [CAS], FXS, and VoIP):
– FXS to FXS.
– FXS to T1.
– FXS to IP.
– T1 to T1.
– T1 to IP.
– IP to IP.
• Gateway types:
– Cisco Catalyst 6000 series switch only.
– Cisco Catalyst 4000 series switch only.
– Cisco Catalyst 6000 series switch to Cisco Catalyst 6000 series switch.
– Cisco Catalyst 6000 series switch to Cisco Catalyst 4000 series switch.
– Cisco Catalyst 6000 series switch to gateway.
– Cisco Catalyst 4000 series switch to Cisco Catalyst 4000 series switch.
– Cisco Catalyst 4000 series switch to gateway.
– Gateway to gateway.
Test Procedure
The procedure used to perform the voice functionality test follows:
Step 1
All campuses were wired—gateways, Cisco CallManagers, switches, Callgen, simulators, and so on.
Step 2
Propagated static routes were added to Callgen loopbacks on all adjacent routers.
Step 3
IP telephones were configured.
Step 4
Cisco CallManagers were configured and upgraded.
Step 5
Gatekeepers were configured.
Step 6
Callgen was configured.
Step 7
Bulk Simulator (SimClient+) was configured.
Step 8
SRST was configured.
Step 9
All devices under test (all routers and switches) were configured.
Step 10
Connectivity was brought up.
Step 11
Troubleshooting was performed in all steps, as necessary.
Expected Results
We expected the convergence of the following two general areas of VoIP telephony within the enterprise
global topology to work together:
• The traditional H.323 gateway-based VoIP.
• The newer IP telephone and Cisco CallManager version and upgrade (also called AVVID) additions.
Destructive Negative Tests
Negative tests were executed with an expectation of traffic loss and network instability. Negative tests
were performed on the following features:
• IP routing: BGP, EIGRP, and OSPF
• IP Security
• Link management
Test Procedure
The procedure used to perform the destructive negative test follows:
Step 1
The BGP clear neighbors test verified that the neighbor relationship was reestablished and that authentication
remained consistent. The test was performed with both a hard clear and a soft reconfiguration.
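The hard and soft clears referred to here correspond to EXEC commands of the following form (shown for all neighbors; a specific neighbor address can be used instead):

! Hard clear: tears down and reestablishes all BGP sessions
clear ip bgp *
! Soft reconfiguration: refreshes routes without dropping the sessions
clear ip bgp * soft in
clear ip bgp * soft out
! Confirm that the neighbors return to the Established state
show ip bgp summary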
Step 2
The CEF disable and enable test removed and added VLANs from specific switch ports, then disabled
and reenabled CEF to ensure that CEF adjacencies were reestablished.
Step 3
The EIGRP clear neighbors test verified that the neighbor relationship was reestablished and that authentication
remained consistent.
The EIGRP remove test was a more extensive clearing test, performed by removing the EIGRP
process entirely from the router.
Step 4
The fail and reestablish WAN links test failed WAN links in the campuses and on the devices noted, and
ensured that alternative routes were learned before the WAN link and the applicable routing neighbor
relationship were reestablished.
Step 5
OSPF clear neighbors (clear process) verified that the neighbor relationship was reestablished and
authentication was consistent.
Step 6
The OSPF remove test was a more extensive clearing test, performed by completely removing the
OSPF process from the router.
Step 7
The router power-cycle reboot test verified the router’s ability to return to normal operation after a
manual reboot due to a power-cycle.
Step 8
The router soft reboot test verified the router’s ability to return to normal operations after a soft reboot.
Step 9
The RPF failover test measured the impact on the test bed when Reverse Path Forwarding (RPF) failover occurred
while the router was forwarding multicast traffic.
Step 10
The toggle BPDUs to PortFast interface test toggled the sending of BPDUs toward a PortFast-enabled interface with BPDU
guard enabled. A Cisco Catalyst 2900 switch with a workstation was connected to the switch under test,
which also had PortFast and BPDU guard enabled. The following tasks were performed (a configuration sketch follows this list):
• The Cisco Catalyst 2900 switch had STP disabled.
• The Cisco Catalyst 2900 switch then had STP enabled on it, and generated BPDUs that were sent to the device under test.
• After a 2-minute wait period, results were collected.
• STP was disabled on the Cisco Catalyst 2900 switch.
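A minimal sketch of the PortFast and BPDU guard interface configuration exercised by this test follows (Cisco IOS switch syntax; the interface name is an assumption).

interface FastEthernet0/10
 switchport mode access
 spanning-tree portfast
 spanning-tree bpduguard enable
! When a BPDU is received on this port, the port is moved to the errdisable state
show interfaces status err-disabled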
Step 11
The UDLD test was done by physically unplugging and replugging, either fully or partially, the contributing
GE and FE connections.
Step 12
The UplinkFast fail root port test was done by unplugging the GE uplink on an access switch.
Step 13
For the IPSec flap encrypted link test, the interface to which the crypto map was applied was flapped.
The IPSec SAs were re-formed, and the GRE tunnels were reestablished.
Step 14
In the IPSec flap hardware and software encryption method test, the device was tested for its ability to
switch from hardware encryption to software encryption, and then back to hardware encryption.
Step 15
The PIX failover, soft switch test started a continuous ping (from inside to outside the network) and an FTP
session with hash marks enabled. The test then forced the secondary (standby) PIX to become active.
Step 16
The PIX failover, reload/power-cycle primary test reloaded or power-cycled the primary (active) PIX, and then the
test was repeated. The tester started a continuous ping (from inside to outside the network) and an FTP
session with hash marks enabled before reloading the primary (active) PIX.
Step 17
The PIX failover, reload/power-cycle secondary test reloaded or power-cycled the secondary (standby) PIX,
and then the test was repeated. The tester started a continuous ping (from inside to outside the network) and an
FTP session with hash marks enabled before reloading the secondary (standby) PIX.
Step 18
The RP failover test was executed to verify the fault-tolerant capability of Anycast RP.
Step 19
The shut/no shut interface test was performed on the active traffic path using the shutdown and no
shutdown commands during the test.
Expected Results
We expected these tests to result in the anticipated traffic loss and network instability.
Nondestructive Negative Tests
The nondestructive negative tests were done without an expectation that they would cause significant
traffic loss and network instability, because of network, node, and path redundancy.
Negative tests were performed by adding and removing routes that generate summary routes.
Test Procedure
The procedure used to perform the nondestructive negative test follows:
Step 1
The BGP add and remove summary initiating route test added and removed the route that generated the
summary route.
Step 2
The BGP route flap, EIGRP route flap, and OSPF external route flap tests tested the routing protocol’s
dampening capabilities.
Step 3
The EIGRP add and remove summary initiating route test added and removed the route that generated
the summary route.
Step 4
The OSPF add and remove summary initiating route test added and removed the route that generated the
summary route.
The routing protocol’s dampening capabilities were tested when routes learned by BGP, EIGRP, and
OSPF were flapped. These tests worked in tandem with the add and remove summary initiating route
tests to increase the modulation of flapped routes and trigger the dampening function in BGP, EIGRP,
and OSPF.
Step 5
In the PIX failover: attach failover cable but disable fail test, the tester attached a failover cable between
a primary and secondary PIX device but disabled failover (the no failover command was used on the PIX
device).
Step 6
The QoS add and remove policy map test removed a policy map from the configuration.
Step 7
The QoS add and remove class map test removed a class map from the configuration.
Step 8
The QoS add and remove service policy test removed a service policy from the configuration.
Step 9
The VTP add and remove VLAN from switch port test added and removed VLANs from switch ports,
provided a load on the VTP structure, and dynamically allocated VLANs on and off of trunks.
Expected Results
We expected that these tests would not result in significant traffic loss or network instability.
Supplementary Information
The following sections provide additional information about the phase 3 testing:
• Device Characteristics, page 35
• Technical Notes, page 47
Device Characteristics
Table 5 through Table 10 show the device characteristics for the campuses used in phase 3 testing.
Table 5
Boston Campus Device Characteristics
Hostname
Platform
Operating
System
System Image
NVRAM
(in KB) Hardware
egbos-2651-v1
Cisco 2651
12.3(9a)
c2600-js-m
32
•
2 FE/IEEE 802.3 interfaces
•
24 serial network interfaces
•
1 ISDN BRI
•
2 channelized T1/PRI ports
egbos-4006-a2
Cisco
8.2(1)GLX
Catalyst 4000
cat4000-k8
480
Mod Port
1
2
3
48
5
34
Model
WS-X4013
WS-X4148-RJ45V
WS-X4232
egbos-4006-a3
Cisco
8.2(1)GLX
Catalyst 4000
cat4000-k8
480
Mod Port Model
1
2
WS-X6K-SUP2-2GE
WS-F6K-PFC2
WS-X6K-SUP2-2GE
3
48 WS-X6348-RJ-45
WS-F6K-VPWR
Table 5
Boston Campus Device Characteristics (continued)
System Image
NVRAM
(in KB) Hardware
Cisco
8.3(1)
Catalyst 6000
cat6000-sup2k8
512
Cisco
12.1(22)E2
Catalyst 6500
c6sup2_rp-jsv-m
381
Hostname
Platform
egbos-6506-a1
egbos-6506-c1
egbos-6506-c2
egbos-7206-w1
egbos-7206-w2
egbos-7206-w3
Table 6
Operating
System
Cisco
12.1(22)E2
Catalyst 6500
Cisco 7206
Cisco 7206
Cisco 7206
12.3(9a)
12.3(9a)
12.3(9a)
c6sup2_rp-jsv-m
c7200-js-m
c7200-js-m
c7200-jk9s-m
381
125
125
509
Mod Port Model
1
2 WS-X4013
3
48 WS-X4148-RJ45V
4
1 WS-X4604-GWY
•
105 virtual Ethernet/IEEE 802.3
interfaces
•
48 FE/IEEE 802.3 interfaces
•
18 GE/IEEE 802.3 interfaces
•
107 virtual Ethernet/IEEE 802.3
interfaces
•
48 FE/IEEE 802.3 interfaces
•
18 GE/IEEE 802.3 interfaces
•
5 FE/IEEE 802.3 interfaces
•
2 serial network interfaces
•
5 FE/IEEE 802.3 interfaces
•
4 serial network interfaces
•
2 channelized T1/PRI ports
•
8 Ethernet/IEEE 802.3 interfaces
•
4 GE/IEEE 802.3 interfaces
•
1 serial network interface
•
4 ISDN BRIs
•
8 ATM network interfaces
•
2 channelized T1/PRI ports
•
1 Integrated Service Adapter (ISA)
•
Integrated NT1s for 4 ISDN BRIs
Dallas Campus Device Characteristics
Hostname
Platform
Operating
System
System Image
NVRAM
(in KB) Hardware
egdal-3640-v
Cisco 3640
12.3(9a)
c3640-a3js-m
125
egdal-3660-w5
Cisco 3660
12.3(9a)
c3660-ik.9su2-m
125
•
2 FE/IEEE 802.3 interfaces
•
4 channelized T1/PRI ports
•
2 voice Foreign Exchange Station
(FXS) interfaces
•
4 Ethernet/IEEE 802.3 interfaces
•
2 FE/IEEE 802.3 interfaces
•
1 VPN module
Table 6
Dallas Campus Device Characteristics (continued)
System Image
NVRAM
(in KB) Hardware
Cisco
12.1(22)E2
Catalyst 6500
c6msfc2-jsv-m
509
•
18 virtual Ethernet/IEEE 802.3
interfaces
Cisco
12.2(14)EW
Catalyst 6500
c6sup2rp-jk9sv-ma
381
•
3 virtual Ethernet/IEEE 802.3
interfaces
•
48 FE/IEEE 802.3 interfaces
•
12 GE/IEEE 802.3 interfaces
•
8 Ethernet/IEEE 802.3 interfaces
•
3 FE/IEEE 802.3 interfaces
•
4 serial network interfaces
•
1 ATM network interface
•
1 ISA
•
8 Ethernet/IEEE 802.3 interfaces
•
4 FE/IEEE 802.3 interfaces
•
8 serial network interfaces
•
8 channelized E1/PRI ports
•
1 channelized E3 port
•
4 Ethernet/IEEE 802.3 interfaces
•
3 FE/IEEE 802.3 interfaces
•
1 serial network interface
•
8 ATM network interfaces
•
2 channelized E1/PRI ports
•
1 ISA
•
1 Gigabit Ethernet Interface
Processor (GEIP) controller
•
2 Gigabit Ethernet controllers
•
3 VIP4-80 RM7000 controllers for:
Hostname
Platform
egdal-6506-c1msfc2
egdal-6506-c2
egdal-7206-w1
egdal-7206-w2
egdal-7206-w3
egdal-7507-w4
Cisco 7206
Cisco 7206
Cisco 7206
Cisco 7500
Operating
System
12.3(9a)
12.3(9a)
12.3(9a)
12.3(9a)
c7200-jk9s-m
c7200-jk9s-m
c7200-jk9s-m
rsp-pv-m
125
125
125
2043
– 2 FE
– 8 Ethernet
– 2 T1
– 9 ATM
•
8 Ethernet/IEEE 802.3 interfaces
•
2 FE/IEEE 802.3 interfaces
•
2 GE/IEEE 802.3 interfaces
•
2 serial network interfaces
•
9 ATM network interfaces
Table 7
Denver Campus Device Characteristics
Hostname
Platform
Operating
System
System Image
NVRAM
(in KB) Hardware
egden-2600-isdn
Cisco 2621
12.3(9a)
c2600-js-m
32
egden-3640-v1
Cisco 3640
12.3(9a)
c3640-a3js-m
125
egden-4003-a2
Cisco
8.2(1)GLX
Catalyst 4000
cat4000-k8
480
egden-4506-a3
Cisco 4000
cat4000-i5s-m
467
12.2(18)EW
egden-6506-a1
Cisco
8.3(1)
Catalyst 6500
cat6000-sup2k8
512
egden-6506-c1
Cisco
12.1(22)E2
Catalyst 6500
MSFC
c6sup2_rp-jsv-m
381
egden-6506-c2
Cisco
12.1(22)E2
Catalyst 6500
MSFC
c6sup2_rp-jsv-m
381
•
2 FE/IEEE 802.3 interfaces
•
1 ISDN BRI
•
3 FE/IEEE 802.3 interfaces
•
24 serial network interfaces
•
2 channelized T1/PRI ports
•
2 voice Foreign Exchange Office
(FXO) interfaces
•
2 voice FXS interfaces
Mod Port Model
1
0
WS-X4012
2
48 WS-X4148-RJ
3
34 WS-X4232
•
5 virtual Ethernet/IEEE 802.3
interfaces
•
48 FE/IEEE 802.3 interfaces
•
3 GE/IEEE 802.3 interfaces
Mod Port Model
1
2 WS-X6K-SUP2-2GE
WS-F6K-PFC2
WS-X6K-SUP2-2GE
4
48 WS-X6348-RJ-45
WS-F6K-VPWR
•
3 virtual Ethernet/IEEE 802.3
interfaces
•
48 FE/IEEE 802.3 interfaces
•
18 GE/IEEE 802.3 interfaces
•
3 virtual Ethernet/IEEE 802.3
interfaces
•
48 FE/IEEE 802.3 interfaces
•
18 GE/IEEE 802.3 interfaces
egden-6506-d1
Cisco
12.1(22)E2
Catalyst 6500
MSFC
c6msfc-jsv-m
123
•
103 virtual Ethernet/IEEE 802.3
interfaces
egden-6506-d2
Cisco
12.1(22)E2
Catalyst 6500
MSFC
c6msfc-jsv-m
123
•
105 virtual Ethernet/IEEE 802.3
interfaces
egden-7206-wl
Cisco
7206VXR
c7200-js-m
509
•
3 FE/IEEE 802.3 interfaces
•
3 GE/IEEE 802.3 interfaces
•
1 ATM network interface
12.3(9a)
Table 7
Denver Campus Device Characteristics (continued)
Hostname
Platform
egden-7206-w2
Cisco
7206VXR
egden-7206-w3
egden-7507-w4
Cisco
7206VXR
Cisco 7507
Operating
System
System Image
NVRAM
(in KB) Hardware
12.3(9a)
c7200-js-m
125
12.3(9a)
12.3(9a)
c7200-jk9s-m
rsp-jsv-m
125
2043
•
8 Ethernet/IEEE 802.3 interfaces
•
4 FE/IEEE 802.3 interfaces
•
3 serial network interfaces
•
1 ATM network interface
•
4 channelized T1/PRI ports
•
8 Ethernet/IEEE 802.3 interfaces
•
3 FE/IEEE 802.3 interfaces
•
1 serial network interface
•
8 ATM network interfaces
•
2 channelized T1/PRI ports
•
1 ISA
•
2 GEIP controllers for 2 GE
•
2 VIP4-80 RM7000 controllers for:
– 1 FE
– 8 Ethernet
– 4 T1
– 8 ATM
•
8 Ethernet/IEEE 802.3 interfaces
•
1 FE/IEEE 802.3 interface
•
2 GE/IEEE 802.3 interfaces
•
2 serial network interfaces
•
8 ATM network interfaces
Table 8
San Jose Campus Device Characteristics
Hostname
Platform
Operating
System
System Image
NVRAM
(in KB) Hardware
egsj-3640-gk
Cisco 3640
12.3(9a)
c3640-jsx-m
125
egsj-3640-v
Cisco 3640
12.3(9a)
c3640-js-m
125
egsj-4006-b2al
Cisco
8.2(I)GLX
Catalyst 4000
cat4000-k8
480
egsj-4006-b2dl
Cisco
12.2(18)EW
Catalyst 4000
cat4000-i5s-m
467
egsj-4006-blal
Cisco
8.2(I)GLX
Catalyst 4000
cat4000-k8
480
egsj-4506-b2d2
Cisco
12.2(18)EW
Catalyst 4000
cat4000-i5s-m
467
egsj-5505-sa2
Cisco
6.4(3)
Catalyst 5500
cat5000-sup3
512
•
2 Ethernet/IEEE 802.3 interfaces
•
1 FE/IEEE 802.3 interface
•
2 Token Ring/IEEE 802.5
interfaces
•
2 Ethernet/IEEE 802.3 interfaces
•
1 FE/IEEE 802.3 interface
•
1 ISDN BRI
•
2 channelized T1/PRI ports
•
2 voice FXO interfaces
•
2 voice FXS interfaces
Mod
1
2
3
5
Cisco
6.4(3)
Catalyst 5500
—
512
Model
WS-X4013
WS-X4232
WS-X4148-RJ45V
WS-X4604-GWY
•
105 virtual Ethernet/IEEE 802.3
interfaces
•
32 FE/IEEE 802.3 interfaces
•
8 GE/IEEE 802.3 interfaces
Mod
1
2
3
Port
0
6
48
Model
WS-X4012
WS-X4306
WS-X4148-RJ45V
•
104 virtual Ethernet/IEEE 802.3
interfaces
•
32 FE/IEEE 802.3 interfaces
•
4 GE/IEEE 802.3 interfaces
Mod Port
1
2
2
5
egsj-5505-sal
Port
2
34
48
1
Mod
1
2
3
5
24
9
Port
2
24
24
9
Model
WS-X5530
WS-F5531
WS-U5534
WS-X5234
WS-X5410
Model
WS-X5550
WS-X5013
WS-X5224
WS-X5410
Table 8
San Jose Campus Device Characteristics (continued)
Operating
System
Hostname
Platform
egsj-6506-b1a3
Cisco
8.3(1)
Catalyst 6500
System Image
NVRAM
(in KB) Hardware
cat6000-sup2k8
512
Mod
1
2
Port Model
2
WS-X6K-SUP2-2GE
WS-F6K-PFC2
WS-X6K-SUP2-2GE
48
WS-X6148-RJ-45
egsj-6506-b1dl
Cisco
12.1(22)E2
Catalyst 6500
c6msfc2-jsv-m
509
egsj-6506-b2a2
Cisco
8.3(1)
Catalyst 6500
cat6000-sup2k8
512
Mod Port Model
1
2 WS-X6K-SUP2-2GE
WS-F6K-PFC2
WS-X6K-SUP2-2GE
2
48 WS-X6348-RJ-45
WS-F6K-VPWR
4
16 WS-X6516-GE-TX
5
16 WS-X6516-GE-TX
egsj-6506-sa3
Cisco
8.3(1)BOC
Catalyst 6500
cat6000-sup2k8
512
Mod Port Model
1
2 WS-X6K-SUP2-2GE
WS-F6K-PFC2
WS-X6K-SUP2-2GE
2
48 WS-X6148-RJ-45
egsj-6506-sd2
Cisco
12.1(22)E2
Catalyst 6500
c6sup2_rp-jsv-m
381
egsj-6506-sdl
egsj-6509-c1
egsj-6509-c2
egsj-6509-c3
Cisco
12.1(22)E2
Catalyst 6500
Cisco
12.1(22)E2
Catalyst 6500
Cisco
12.1(22)E2
Catalyst 6500
Cisco
12.1(22)E2
Catalyst 6500
c6sup2_rp-jsv-m
c6sup2_rp-jsv-m
c6sup2_rp-jsv-m
c6sup2_rp-jsv-m
381
381
381
381
•
105 virtual Ethernet/IEEE 802.3
interfaces
•
104 virtual Ethernet/IEEE 802.3
interfaces
•
48 FE/IEEE 802.3 interfaces
•
18 GE/IEEE 802.3 interfaces
•
103 virtual Ethernet/IEEE 802.3
interfaces
•
48 FE/IEEE 802.3 interfaces
•
18 GE/IEEE 802.3 interfaces
•
2 virtual Ethernet/IEEE 802.3
interfaces
•
48 FE/IEEE 802.3 interfaces
•
22 GE/IEEE 802.3 interfaces
•
2 virtual Ethernet/IEEE 802.3
interfaces
•
144 FE/IEEE 802.3 interfaces
•
22 GE/IEEE 802.3 interfaces
•
2 virtual Ethernet/IEEE 802.3
interfaces
•
96 FE/IEEE 802.3 interfaces
•
36 GE/IEEE 802.3 interfaces
Table 8
San Jose Campus Device Characteristics (continued)
Operating
System
Hostname
Platform
egsj-6509-c4
Cisco
12.1(22)E2
Catalyst 6500
egsj-7206-w3
egsj-7609-w1
Cisco 7206
Cisco 7609
12.3(9a)
12.1(22)E2
System Image
NVRAM
(in KB) Hardware
c6sup2_rp-jsv-m
381
c7200-jk.9s-m
c6sup2_rp-jsv-m
125
381
•
2 virtual Ethernet/IEEE 802.3
interfaces
•
144 FE/IEEE 802.3 interfaces
•
36 GE/IEEE 802.3 interfaces
•
4 FE/IEEE 802.3 interfaces
•
1 serial network interface
•
1 ATM network interface
•
2 channelized T1/PRI ports
•
1 ISA
•
2 FlexWAN controllers for:
– 3 HSSI
– 2 ATM
egsj-7609-w2
Cisco 7609
12.1(22)E2
c6sup2_rp-jsv-m
381
•
1 two-port OC-12 POS controller
(2 POS)
•
1 virtual Ethernet/IEEE 802.3
interface
•
6 GE/IEEE 802.3 interfaces
•
3 HSSI network interfaces
•
2 ATM network interfaces
•
2 POS network interfaces
•
2 FlexWAN controllers for:
– 2 serial
– 1 ATM
– 1 channelized T3
– 1 POS
•
1 two-port OC-12 POS controller
(2 POS)
•
4 virtual Ethernet/IEEE 802.3
interfaces
•
48 FE/IEEE 802.3 interfaces
•
6 GE/IEEE 802.3 interfaces
•
5 serial network interfaces
•
1 ATM network interface
•
3 POS network interfaces
•
1 channelized T3 port
Table 9
Washington, D.C. Campus Device Characteristics
Hostname
Platform
Operating
System
System Image
NVRAM
(in KB)
egwas-3660-v1
Cisco 3660
12.3(9a)
c3660-js-m
125
egwas-6506-c1
Cisco
12.2(14)SY3
Catalyst 6500
c6sup2 rp-jk.9sv-m
381
Hardware
•
2 FE/IEEE 802.3 interfaces
•
24 serial network interfaces
•
4 channelized T1/PRI ports
•
4 voice FXS interfaces
•
4 virtual Ethernet/IEEE 802.3
interfaces
•
48 FE/IEEE 802.3 interfaces
•
20 GE/IEEE 802.3 interfaces
egwas-6506-c2-m Cisco
12.1(22)E2
sfc2
Catalyst 6500
c6msfc2-jsv-m
509
•
31 virtual Ethernet/IEEE 802.3
interfaces
egwas-6506-sd1
c6sup2_rp-jsv-m
381
•
103 virtual Ethernet/IEEE 802.3
interfaces
•
48 FE/IEEE 802.3 interfaces
•
2 GE/IEEE 802.3 interfaces
•
2 GEIP controllers (2 GE)
•
1 VIP2 controller (1 FE)
•
1 VIP2 R5K controller (2 FE)
•
1 VIP4-80 RM7000 controller for:
egwas-7507-w3
Cisco
12.1(22)E2
Catalyst 6500
Cisco 7507
12.3(9a)
rsp-jsv-m
2043
– 8 T1
– 1 ATM
•
3 FE/IEEE 802.3 interfaces
•
2 GE/IEEE 802.3 interfaces
•
3 serial network interfaces
•
1 ATM network interface
Table 9
Washington, D.C. Campus Device Characteristics (continued)
Hostname
Platform
Operating
System
System Image
NVRAM
(in KB)
egwas-7609-w1
Cisco 7609
12.1(22)E2
c6sup2_rp-jsv-m
381
Hardware
•
2 FlexWAN controllers for:
– 2 serial
– 1 ATM
– 1 channelized T3
– 1 POS
egwas-7609-w2
Cisco 7609
12.1(22)E2
c6sup2_rp-jsv-m
381
•
1 virtual Ethernet/IEEE 802.3
interface
•
48 FE/IEEE 802.3 interfaces
•
10 GE/IEEE 802.3 interfaces
•
5 serial network interfaces
•
1 ATM network interface
•
1 POS network interface
•
1 channelized T3 port
•
1 FlexWAN controller for:
– 1 ATM
– 1 channelized E3
Table 10
•
1 virtual Ethernet/IEEE 802.3
interface
•
48 FE/IEEE 802.3 interfaces
•
10 GE/IEEE 802.3 interfaces
•
5 serial network interfaces
•
1 ATM network interface
•
1 channelized E3 port
Remote Campuses Device Characteristics
Hostname
Platform
Operating
System
System Image
NVRAM
(in KB)
Cisco 3640
12.3(9a)
c3640-jk9s-m
125
Hardware
Arlington
egarl-3640-wl
•
4 Ethernet/IEEE 802.3 interfaces
•
2 FE/IEEE 802.3 interfaces
•
1 serial network interface
•
1 VPN module
Table 10
Remote Campuses Device Characteristics (continued)
Hostname
Platform
Operating
System
System Image
NVRAM
(in KB)
Cisco 2621
12.3(9a)
c2600-jk9s-m
32
Hardware
Austin
egaus-2621-wl
•
2 FE/IEEE 802.3 interfaces
•
1 serial network interface
•
1 VPN module
•
1 FE/IEEE 802.3 interface
•
2 serial (synchronous and
asynchronous) network interfaces
•
1 VPN module
•
2 FE/IEEE 802.3 interfaces
•
2 serial network interfaces
•
4 voice FXS interfaces
•
24 FE/IEEE 802.3 interfaces
•
2 GE/IEEE 802.3 interfaces
Boca Raton
egboc-1760-w1
Cisco 1760
12.3(9a)
c1700k903sv8y7-m
32
Colorado Springs
egcs-2651-vw
egcs-2950-a
Cisco 2651
12.3(9a)
c2600-js-m
32
Cisco
Catalyst
2950T-24
12.1(20)EA1
c2950-16q4l2-m 32
eghou-3550-a
Cisco
Catalyst
3550-12T
12.1(20)EA1
c3550-i5q3l2-m
384
•
12 GE/IEEE 802.3 interfaces
eghou-3660-vw
Cisco 3660
12.3(9a)
c3660-ik.9su2-m 125
•
2 FE/IEEE 802.3 interfaces
•
2 serial network interfaces
•
1 channelized E1/PRI port
•
4 channelized T1/PRI ports
•
1 VPN module
•
4 voice FXS interfaces
Houston
Los Angeles
egla-3550-a
Cisco
Catalyst
3550-12T
12.1(20)EA1
c3550-i5q3l2-m
384
•
12 GE/IEEE 802.3 interfaces
egla-3640-vw
Cisco 3640
12.3(9a)
c3640-ik.9s-m
125
•
2 FE/IEEE 802.3 interfaces
•
2 serial network interfaces
•
2 channelized T1/PRI ports
•
1 VPN module
•
4 voice FXS interfaces
Table 10
Remote Campuses Device Characteristics (continued)
Hostname
Platform
Operating
System
System Image
NVRAM
(in KB)
Cisco 3640
12.3(9a)
c3640-js-m
125
Hardware
Miami
egmia-3640-vw2
egmia-6506-a1
egmia-6506-a2
egmia-7206-w1
Cisco
12.1(22)E2
Catalyst 6500
Cisco
12.1(22)E2
Catalyst 6500
Cisco 7206
12.3(9a)
c6sup2_rp-jsv-m 381
c6sup2_rp-jsv-m 381
c7200-js-m
509
•
4 FE/IEEE 802.3 interfaces
•
25 serial network interfaces
•
2 channelized T1/PRI ports
•
2 voice FXS interfaces
•
4 virtual Ethernet/IEEE 802.3
interfaces
•
48 FE/IEEE 802.3 interfaces
•
2 GE/IEEE 802.3 interfaces
•
5 virtual Ethernet/IEEE 802.3
interfaces
•
48 FE/IEEE 802.3 interfaces
•
2 GE/IEEE 802.3 interfaces
•
8 Ethernet/IEEE 802.3 interfaces
•
2 FE/IEEE 802.3 interfaces
•
3 GE/IEEE 802.3 interfaces
•
1 ATM network interface
•
2 FE/IEEE 802.3 interfaces
•
1 serial network interface
•
1 ISDN BRI
•
4 voice FXS interfaces
•
24 FE/IEEE 802.3 interfaces
•
2 GE/IEEE 802.3 interfaces
New Orleans
egneo-2651-vw
egneo-2950-a
Cisco 2651
12.3(9a)
c2600-js-m
c2950-i6q4l2-m
32
Cisco
Catalyst
2950T-24
12.1(20)EA1
32
egny-3550-a
Cisco
Catalyst
3550-12T
12.1(20)EA1
c3550-i5q3l2-m
384
•
12 GE/IEEE 802.3 interfaces
egny-3745-vw
Cisco 3620
12.3(9a)
c3745-js-m
151
•
2 FE/IEEE 802.3 interfaces
•
1 serial network interface
•
1 ISDN BRI
•
4 voice FXS interfaces
•
12 GE/IEEE 802.3 interfaces
New York
Phoenix
egphx-3550-a
Cisco
Catalyst
3550-12T
12.1(20)EAl
c3550-i5q3l2-m
384
Table 10
Remote Campuses Device Characteristics (continued)
Hostname
Platform
Operating
System
System Image
NVRAM
(in KB)
egphx-7204-vw
Cisco 7204
12.3(9a)
c7200-jk9s-m
125
Hardware
•
1 Ethernet/IEEE 802.3 interfaces
•
1 GE/IEEE 802.3 interface
•
26 serial network interfaces
•
4 channelized T1/PRI ports
•
1 voice resource
Pittsburgh
egpit-3550-a
Cisco
Catalyst
3550-12T
12.1(20)EAl
c3550-i9q3l2-m
384
•
12 GE/IEEE 802.3 interfaces
egpit-3640-vw
Cisco 3640
12.3(9a)
c3640-jk9s-m
125
•
2 Ethernet/IEEE 802.3 interfaces
•
26 serial network interfaces
•
2 channelized T1/PRI ports
•
1 VPN module
•
2 voice FXS interfaces
•
2 FE/IEEE 802.3 interfaces
•
2 serial network interfaces
•
4 voice FXS interfaces
•
24 FE/IEEE 802.3 interfaces
•
2 GE/IEEE 802.3 interfaces
Santa Fe
egsaf-2651-vw
egsaf-2950-a
Cisco 2651
Cisco
Catalyst
2950T-24
12.3(9a)
12.1(20)EAl
c2600-js-m
c2950-i6q4l2-m
32
32
Technical Notes
This section provides technical notes written by the test team about the testing.
Multicast: Static RP and Auto-RP Are Both Configured
Summary: Both the static RP and Auto-RP were configured in the test bed, to provide redundancy.
Detailed Description: None.
Attachments: None.
Exclusion of SUP720 from Phase 3
Summary: Because hardware was not available, the SUP720 was excluded from phase 3 testing.
This also required removal of PortChanneling (FEC and GEC) from phase 3 because the SUP720 is
required to support QoS on port channel interfaces.
Detailed Description: None.
Attachments: None.
MSFC 1 Has a 16 MB Limit for Cisco IOS and Boot Loader Images
Summary: The Cisco IOS image targeted for testing, c6msfc-jsv-mz.121-21.3.E1, was too large to fit
into 16 MB of flash memory, given that the boot image was kept in flash memory. Having the boot image
there was desirable, because it would be difficult to recover from a Cisco IOS image boot load problem
without it. The boot image is 1,892,452 bytes in size, leaving 14,098,204 bytes free for the Cisco IOS
image. The image c6msfc-jsv-mz.121-21.3.E1 is 14,754,112 bytes in size. To get around this problem,
the MSFC to boot from a TFTP server was set. However, defect CSCdy55543 interfered with this
approach, so a workaround was used: download c6msfc-jsv-mz.121-21.3.E1 to the SUP1A slot0: flash;
reset system from the SUP1A to reload the Cisco IOS image; break into ROMMON as MSFC attempts
to load “confreg 0x0102” from the ROMMON; use prompt “i” to reset to the new configuration; break
into ROMMON as MSFC attempts to load the boot sup-slot0:c6msfc-jsv-mz.121-21.3.E1 image. This
workaround brought the box up and it has continued running. However, any reload or need to restart the
box will require the manual workaround.
Detailed Description: None.
Attachments: None.
WS-X6K-SUP1A-MSFC Does Not Support CatOS 8.x
Summary: Supervisor Engine 1: Early versions of the Cisco Catalyst 6500 series Supervisor Engine 1
shipped with 64 MB DRAM, which does not support software Release 8.x (currently, new Supervisor
Engine 1 modules ship with 128 MB DRAM). To support software Release 8.x, 128 MB DRAM is
needed. With the exception of WS-X6K-SUP1A-MSFC, all other Supervisor Engine 1 modules can
upgrade to 128 MB DRAM using the MEM-S1-128MB upgrade kit. See the following link:
http://www.cisco.com/en/US/partner/products/hw/switches/ps708/prod_release_note09186a008019d7f0.html#20785.
Detailed Description: None.
Attachments: None.
Flash Memory Requirement for Cisco IOS Release 12.3
Summary: Flash memory requirements for all versions of Cisco IOS Release 12.3 are shown in
Table 11, which applies to images with -js- and -jsx- in the name.
Table 11    Flash Memory Requirement for Cisco IOS Release 12.3

Platform (image)                                        Memory
Cisco 1700 (c1700-k9o3sv8y7-mz.123-6)                   16 MB
Cisco 2610XM, Cisco 2611XM, Cisco 2650, Cisco 2651      32 MB
Cisco 3640                                              32 MB
Cisco 3745                                              32 MB
Cisco 7200                                              32 MB
Cisco 7507 (rsp-[jsv/pv]-mz.123-6)                      32 MB
Detailed Description: None.
Attachments: None.
High CPU Usage on Cisco 2600 and Cisco 3600 Platforms with Encryption and QoS
Summary: Encryption and QoS combined with small-packet traffic dramatically increase the CPU
utilization of a Cisco 2600 or Cisco 3600 router, even when encryption is performed by the hardware
accelerator. In addition, many encryption and decryption errors occurred when QoS was configured with
IPSec and the antireplay function, especially when QoS preclassify was also configured on the
encrypted link.
We do not recommend that the customer use those platforms as a WAN aggregation and IPSec
aggregation router at the same time. We also do not recommend that the customer run IPSec over any
congested link that will require QoS configurations.
Detailed Description: High CPU usage on Cisco 2600 and Cisco 3600 router platforms with encryption
and QoS.
Disclaimer Notice: The information presented in this documentation is based on the nEverest phase 3
testing on the enterprise global test bed. All statements and recommendations are believed to be accurate
but are presented without warranty of any kind, express or implied. Users must take full responsibility
for their use of these technical notes.
Problem: Two Cisco 3640 routers (egla-3640-vw and egpit-3640-vw) and one Cisco 2621xm router
(egaus-2621-w1) constantly experienced CPU usage greater than 90 percent during nEverest phase 3
testing. The routers were all used as edge branch routers, to connect to a center office via IPSec and a
GRE tunnel through a T1 link.
Branch router LAVW was used as an example to characterize this issue. LAVW was a Cisco 3640 router
with one NM-VPN/MP hardware encryption module. IPSec and GRE and QoS were configured across
the T1 link as specified in the original test plan.
Test Procedure: Preliminary testing indicated that the router’s CPU usage dropped more than 20 percent
after encryption was removed. Due to the nature of the system testing environment, many features were
enabled on the router, particularly on the encrypted WAN interface, such as EIGRP authentication, IP
multicast, and QoS, so more detailed tests were done to narrow down the problem.
The test was divided into two sections:
• Breakdown test
• Packet size test
Breakdown Test
Besides IPSec and GRE, QoS, QoS preclassification for VPN, and EIGRP authentication were also
configured on the WAN serial interfaces. A step-by-step approach was taken to evaluate the impact of
each individual command and feature on the CPU usage of the router.
The same traffic stream from the Chariot traffic tool used in our normal system testing environment was
used during this test; see Table 12 for a description.
Table 12    Breakdown Test Description

Configuration                      Significant Impact on CPU
EIGRP_Auth                         No
GRE+EIGRP_Auth                     No
QoS+GRE+EIGRP_Auth                 No
IPSec+GRE+EIGRP_Auth               +10%
QoS+IPSec+GRE+EIGRP_Auth           +30%
The last configuration set was also the standard configuration used in nEverest phase 3 testing, during
which the routers experienced the highest CPU usage. As was also seen in the phase 3 testing, many
“HW_VPN-1-HPRXERR” error messages were shown in syslog under this standard configuration. This
high usage could be caused by fragmentation, QoS, or packet prioritization. (See CSCdt40220 for
details. CSCdz54621 also requests that the feature adjust the replay-vector size from the command line.)
A separate test confirmed that changing the IPSec transform set alone (removing esp-sha-hmac) to
remove the IPSec antireplay function can eliminate this error message. It also reduced CPU usage by 8
to 10 percent.
The breakdown tests demonstrated that encryption plus QoS can significantly increase the CPU usage,
especially when the link is congested. The traffic scripts generated from the Chariot traffic tool were a
combination of many different simulated applications with different traffic patterns, such as Telnet and
FTP applications. The packet size test was conducted to evaluate the CPU impact when the packet size
was changed.
Packet Size Test
The packet size test determined how the packet size can affect the router’s CPU usage under different
configurations: IPSec, and IPSec with GRE. QoS was enabled during the test. An Ixia traffic generator
was used to send traffic with different packet sizes at different rates, and then the router’s CPU usage
was measured; see Table 13 and Table 14 for the results.
Table 13    IPSec and GRE Packet Utilization

Size (bytes)   PPS     WAN Rate (kbps)   CPU (%)
64             447     443               39
64             890     883               76
64             1055    1045              95
64             1055    1045              89 (without QoS)
64             1042    616               29 (without crypto)
1200           149     1504              14
1500           261     1503              16
Table 14    IPSec Packet Utilization

Size (bytes)   PPS     WAN Rate (kbps)   CPU (%)
64             456     363               33
64             890     708               65
64             1042    833               69
64             1045    417               18 (without crypto)
64             1351    1080              95
1200           150     1338              12
1500           243     1505              19
Larger packets (1500 bytes or more) did not have much impact on the CPU, even when fragmentation
was required. Many things can affect CPU usage; the two most important are packet size and
encryption, as Table 13 and Table 14 indicate. QoS also makes roughly a 6 percent contribution to CPU
usage.
Conclusions: We recommend that the customer take the following approaches to alleviate excessive
CPU usage problems:
• Do not use a single Cisco 2600 or Cisco 3600 router to perform both WAN and IPSec aggregation functions. Upgrade to higher-performance routers, such as the Cisco 3700 series or Cisco 7200 series.
• Carefully design the network to avoid congestion along the encrypted links, and avoid configuring QoS features together with encryption, particularly the QoS preclassify command on the encrypted links. The QoS preclassify command worked on individual interfaces, but the combination of encryption and QoS, especially when the IPSec antireplay function was used, generated many encryption and decryption errors.
Attachments: None.