TECH-SP3
Transport Packet Infrastructure for Service Providers
David Jakl, Cisco Systems Engineer
Motivation: What are Service Operator Challenges?

• Explosive bandwidth growth – OTT services, video and mobility drive bandwidth; networks continue to grow → Scalable architecture
• Increasing operational complexity – managing 100s to 1,000s of devices with different procedures, different user interfaces, different systems → Simple, uniform and open architecture
• Stagnant revenue – static or reduced budgets; competitive pressure, price erosion; a need to capture new markets, but time to deploy new services is too slow → Programmable, open architecture
Cisco Open Network Environment

[Diagram: applications and services (Service Broker "Business Intents", Service Catalog, Orchestration Apps) sit on top of the Evolved Services Platform, which exposes APIs down to the Evolved Programmable Network (Access, Edge, Core, Cloud VM/storage, NCS). Attributes: automated on-demand policy services, dynamic scale, always "ON" anywhere, fully open and programmable, real-time analytics, virtualized, intelligent convergence, agility, application interaction, seamless experience, business optimization and revenue.]
Agenda

• EPN 4.0
• nV Satellite
• Autonomic Networking
• Auto-IP
• Autonomic Carrier Ethernet
• Summary
EPN 4.0

Cisco Evolved Programmable Network
Leading the NFV / SDN Evolution

EPN system scope within Cisco's Open Network Environment:

• Orchestration (ESP Cloud Orchestration, Orchestration WAE) – part of ESP and EPN (Network, Storage, Compute); Network APIs (REST) and Services Catalog
• Controllers, collectors (Quantum PS, nLight, Cisco nV) – multi-layer control, service chaining and policy enforcement
• Network Function Virtualization – vGiLAN, vFirewall, vDPI, vNAT, vBNG, vDDoS, vSLB
• Virtualized infrastructure (Virtual PE, virtualized IOS-XR, IP + Optical) – programming and managing of virtual resources via onePK, OpenFlow, PCEP, Netconf/YANG, BGP-LS, GMPLS
• Physical infrastructure (ME Series, UCS, ASR 9XX, NCS2000, NCS4000, ASR 9000, CRS, NCS6000, Nexus) – programming and managing of physical resources
EPN System Overview

Business convergence
• Unified L3 VPN experience
• Seamless and personalized BYOD remote access and VPN access

Consumer convergence
• Unified subscriber experience

[Diagram: Enterprise FMC and Residential FMC access via nV, MPLS and Ethernet (including microwave ACM links) into a Unified MPLS Transport; virtualized RR, PCRF and CPEs; virtualized network services; integrated BNG, WAG, CGN; virtualized PGW and BRAS; IP connectivity to corporate sites.]
EPN System Components

[Diagram: OpenStack orchestration and NMS (Prime Network provisioning & performance) on top. Subscriber experience: Quantum Policy Server (AAA, PCRF), fixed PCRF, virtualized route reflector, virtualized PGW, BRAS, CPE and VXLAN GW, Cisco PNR (fixed DHCP); seamless subscriber mobility via mobile MAG, fixed MAG and LMA MPC. Unified MPLS Transport connects the FAN (PON, DSL, Ethernet: ME 4600, 2600), NIDs (ME-1200), access (CSG: ASR 901, ASR 920, ME3600X; CPEs: vHN, CSR1000v, ISR, ASR1k), fixed and mobile edge (AGN-SE, PAN-SE, ASR-900X, ASR-9001, PAN: ASR-903; converged DPI, CGN) and core (CN: CRS-3).]
Unified MPLS: What Key Technologies Are Involved?

• RFC 3107 label allocation provides hierarchy for scale (see the sketch below)
• BGP filtering mechanisms let the network learn what is needed, where it is needed and when it is needed
• Seamless multicast integration with LSM and mLDP
• Flexible access network integration options: MPLS (labeled BGP extension, LDP), Ethernet, nV
• Remote LFA FRR and BGP PIC for seamless intra- and inter-domain high availability
• Contiguous and consistent transport and service OAM and performance monitoring
• Autonomic Networking for Unified MPLS self-organization, microwave ACM for Unified MPLS network self-correlation
• Auto-IP address assignment and dynamic change
• Virtualized L2/L3 services edge with PW headend
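As a rough illustration of the RFC 3107 building block, a minimal IOS-XR sketch of labeled iBGP on an ABR (the AS number, addresses and peer are hypothetical):

router bgp 100
 address-family ipv4 unicast
  ! advertise loopbacks with an MPLS label (labeled unicast, RFC 3107)
  network 10.0.0.1/32
  allocate-label all
 !
 neighbor 10.0.0.10
  remote-as 100
  update-source Loopback0
  address-family ipv4 labeled-unicast
   ! the ABR sets next-hop-self to stitch the hierarchical LSP
   next-hop-self

With next-hop-self at each ABR, the BGP label is rewritten hop by hop while each IGP domain carries only its own loopbacks – the hierarchy shown on the next slide.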
Unified MPLS Transport – Single AS, Multi-Area

[Diagram: LSPs between remote access node loopbacks. AN – access IGP domain – PAN-ABR (inline RR, next-hop-self) – aggregation IGP domain – CN-ABR (inline RR, next-hop-self) – core IGP domain with central RR – CN-ABR – aggregation IGP domain – PAN-ABR – access IGP domain – AN/MTG; iBGP IPv4+label sessions run between the domains. Forwarding: a service label rides an end-to-end iBGP hierarchical LSP (BGP label pushed at the ingress AN, swapped at each ABR, popped at the egress), which in turn rides per-domain LDP LSPs (push/swap/pop within each IGP domain).]
Unified MPLS BGP Control Plane
Single AS, Multi-Area IGP, Labeled BGP Access

[Diagram, transport: IPv4+label PEs in the access peer over iBGP IPv4+label with inline RRs (next-hop-self) on the aggregation and core ABRs, plus an external RR in the core, up to the IPv4+label service edge PE (BNG, MSE). Example service overlay: IP RAN VPNv4 iBGP sessions from the CSG (VPNv4 PE) via the inline RRs to the MTG (EPC GW, VPNv4 PE). Access network: IP/MPLS transport over fiber or uWave links and rings; aggregation network: IP/MPLS over DWDM, fiber rings, hub-and-spoke, hierarchical topology; core network: IP/MPLS over DWDM, fiber rings, mesh topology.]
Optimal Routing with BGP Accumulated IGP

[Diagram: an AN in the access IGP domain reaches the core via two paths – via the PAN-ABR (inline RR, NHS) with total cost 10 and AIGP 5, or via the CN-ABR (inline RR, NHS) with total cost 15 and AIGP 10; traffic is forwarded over the iBGP hierarchical LSP riding per-domain LDP LSPs.]
• Default BGP best-path calculation is based on the IGP cost to the next hop only
– The next hop's IGP cost to the destination is ignored, leading to suboptimal routing
• BGP AIGP enhances the best-path calculation by accounting for both the cost to the next hop and the next hop's cost to reach the destination
– Eliminates suboptimal routing (see the sketch below)
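A minimal sketch of enabling the AIGP attribute on a labeled-unicast iBGP session in IOS-XR (AS number and peer address are hypothetical; the per-neighbor aigp knob is assumed to be available on the platform/release in use):

router bgp 100
 neighbor 10.0.0.10
  remote-as 100
  address-family ipv4 labeled-unicast
   ! send/accept the AIGP attribute so best-path can compare the
   ! accumulated end-to-end IGP metric, not just the cost to the next hop
   aigp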
MPLS Resiliency Solution: LFA and Remote LFA

• LFA simplifies management of the underlying infrastructure
• When no local LFA is available, a node dynamically computes its remote loop-free alternate node(s)
– Done during SPF calculations using the PQ algorithm (see draft)
• The node automatically establishes a directed LDP session to the remote node
– The directed LDP session is used to exchange labels for the FEC in question
• On failure, the node uses label stacking to tunnel traffic to the remote LFA node, which in turn forwards it to the destination

[Diagram: access region ring A1–C1–C2–C3–C4–C5–A2 attached to the backbone, with a directed LDP session from C2 to C5.]
Remote LFA FRR – Protection

• C2's LIB:
– C1's label for FEC A1 = 20
– C3's label for FEC C5 = 99
– C5's label for FEC A1 = 21
• On failure, C2 sends A1-destined traffic onto an LSP destined to C5
– It swaps per-prefix label 20 with 21 (the label C5 expects for that prefix) and pushes label 99
• When C5 receives the traffic, the top label 21 is the one it expects for that prefix, so it forwards it on to the destination using the shortest path that avoids the link C1–C2

[Diagram: on failure of link C1–C2, traffic arriving at C2 with label 20 leaves with stack {99, 21}, is tunneled C2–C3–C4–C5 over the directed LDP session, and is forwarded from C5 toward A1 with label 21.]

A configuration sketch follows below.
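As an illustration, a minimal IOS-XR IS-IS sketch enabling per-prefix LFA with a remote-LFA fallback over an automatically established targeted LDP session (the instance and interface names are hypothetical):

router isis ACCESS
 address-family ipv4 unicast
  metric-style wide
 !
 interface GigabitEthernet0/0/0/0
  address-family ipv4 unicast
   ! compute a local per-prefix LFA first...
   fast-reroute per-prefix
   ! ...and fall back to a remote LFA (PQ node) reached over an
   ! automatic targeted LDP session when no local LFA exists
   fast-reroute per-prefix remote-lfa tunnel mpls-ldp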
Ethernet Access: Hub-and-Spoke Topology

MC-LAG with ICCP
• Active/standby mode
• Supports both L2 and L3 service
• L3 service has two configuration options: IRB or L3 sub-interface

MC-LAG with PBB-EVPN
• Active/active per-flow or per-service load balancing
• Supports L2 service only, with PBB-EVPN

ICCP-SM
• Supports both L2 and L3 services (E-Line provisioned as E-LAN)
• L2 service: per-VLAN load balancing
• L3 service: active/active on both links

[Diagram: in each model CE1 dual-homes to PE1 and PE2 at the MPLS core edge; in the ICCP-SM case L2 VID X and L2 VID Y are split across the two links while L3 VID Z is active on both.]
Ethernet Access: Ring and Mesh Topology

G.8032
• Standard ring architecture for Ethernet and xPON access
• [Diagram: CE1/CE2 on a ring toward PE1/PE2, with an RPL link, R-APS, and a G.8032 open sub-ring carrying VID X and VID Y into the MPLS core]

REP and REP-AG
• Legacy deployed pre-standard Cisco solution
• [Diagram: REP segment with an ALT port and a "REP edge no-neighbour" boundary, REP-AG toward the MPLS core]

ICCP-SM (or STP-AG)
• ICCP-SM or MST/PVST-AG can address any L2 topology
Mobile Transport with Microwave ACM

• The access network adapts intelligently to microwave capacity drops:
• Y.1731 VSM signals Microwave Adaptive Code Modulation changes to the access node
• MPLS access nodes adapt the link IGP metric to the new capacity, triggering SPF recalculation
• Ethernet access nodes trigger G.8032 failover below a certain capacity threshold
• Optionally, the access node can change its hierarchical QoS policy
– Allows EF traffic to survive despite the drop in capacity

[Diagram: microwave fading on the access link; Y.1731 VSM signals the microwave link speed toward policy logic on the aggregation node that updates the IGP metric, G.8032 topology and H-QoS on the IP/MPLS or Ethernet interface.]
Multicast Architecture

[Diagram: multicast source behind the core; a recursive mLDP MP-LSP spans the core and aggregation IP/MPLS domains; PIM v4/v6 runs in the access IP/MPLS domains toward the multicast receivers.]

• Core/aggregation network runs mLDP (see the sketch below)
– Supports business mVPNs
– Supports IP multicast for eMBMS and IPTV
• Access/pre-aggregation network runs PIM v4/v6, with VRF route leaking for eMBMS
– Enables eMBMS and IPTV services to reach access nodes (eNBs, DSLAMs)
• Sources are distributed over BGP labeled unicast (v4 or v6) in the core and aggregation, and redistributed into the pre-aggregation and access IGPv6 processes
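For flavor, enabling mLDP under LDP in IOS-XR is a short stanza per address family (a minimal sketch; exact submodes may vary by release, and the mVPN profile configuration that would sit on top is omitted):

mpls ldp
 ! multipoint LDP: builds the P2MP/MP2MP LSPs used in core and aggregation
 mldp
  address-family ipv4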
EPN 4.0 DIGs
http://www.cisco.com/c/en/us/solutions/enterprise/design-zone-service-provider/programmable-network.html#~info-customer
EPN – MEF CE 2.0 Certified
nV Satellite
Traditional FTTx Access and Agg Network

[Diagram: customer premises (RG) – UNI – FTTx access network (routed/bridged; trunk/VLAN N:1, 1:1; IGMP snooping, IGMP filter) – Ethernet access (REP, G.8032, MST, .1q tunneling with L2PT, IGMP snooping) – NNI – Carrier Ethernet aggregation (MC-LAG, MST) – IP/MPLS agg POP with MSE and BNG; services: EPL, EVPL, ELAN, EVLAN. Separate element management systems (resource manager, service manager, south/northbound provisioning, troubleshooting) span every layer.]
FTTx Access and Agg Network – nV Simplicity

[Diagram: the same FTTx access and Carrier Ethernet aggregation layers, but the access and aggregation devices become nV satellites of the IP/MPLS agg POP – one nV satellite system from the customer premises (RG) up to the MSE/BNG. The per-layer element management systems collapse into a single element management system (resource manager, service manager, OAM, provisioning, troubleshooting).]
What is the nV Satellite Solution?

• A single logical switch/router built by interconnecting an ASR9K and one or more smaller satellite switches

[Diagram: ASR 9000 host with Satellite 1 … Satellite n, each attached over N x 10G fabric links – one virtual system.]
The Cisco ASR 9000v Overview
nV Satellite to ASR9000 and CRS-3 Hosts

• 1 RU, ANSI & ETSI compliant; field-replaceable fan tray with redundant fans
• Power feeds: single AC power feed, or redundant +24 VDC and -48 VDC power feeds
• LEDs, ToD/PSS output, BITS out
• 4x10G SFP+: inter-chassis link fabric ports, plug-n-play in-band management
• 44x10/100/1000 Mbps ports: full line-rate packet processing and traffic management
• Pluggables: a wide range of ONS and TMG 1G SFP and 10G SFP+ optics supported, including copper, fiber, CWDM/DWDM
• Automatic discovery and provisioning; co-located or remote distribution
• Industrial temperature rated: -40C to +65C operational, -40C to +70C storage
nV Satellite – ASR 901 and ASR 903 Overview

ASR901 satellite platform:
• Compact, efficient & hardened device
– 1RU, 17.5 in x 1.72 in x 8.3 in (W x H x D)
– 12 Gbps switching capacity
– Redundant power and fans
– Low power consumption: <~50W
– Fits in 300 mm cabinets, 1RU
– Extended operating temp range: -40 to 65 C
– Side-to-side cooling
• Interfaces* and per-slot density:
– Ethernet: 1x10GE and 8x1GE interfaces

ASR903 satellite platform:
• Compact, redundant, hardened
– 3RU, 6 interface slots
– 55 Gbps throughput with 1st-generation RSP
– Redundant PSUs (<550W), fans and RSPs
– Fits in 300 mm cabinet (235 mm deep), 19" EIA
– Extended operating temp: -40 to 65 C (DC)
• Interfaces* and per-slot density:
– Ethernet: 12 x GE

*Only Ethernet interfaces are supported
nV Satellite System High-Level Overview

[Diagram: satellite access ports – satellite (zero-touch configuration) – fabric links (ICLs) running the Satellite Auto Discovery and Control Protocol – "nV" GigEthernet port on the ASR9000 host; together they form one nV system.]

• A special XR nV image on a satellite switch makes it an ASR 9000 nV satellite
• The Satellite Auto Discovery and Control Protocol (SADCP) makes the satellite a "virtual line card" of the ASR 9000 host
• From the end user's point of view, it is a single logical system – the ASR 9000 nV system
– All management & configuration is done on the host chassis
• Satellite and host can be co-located or in different locations – no distance limitation (a quick verification sketch follows)
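Once a satellite has been discovered, its state can be checked from the host CLI; a minimal illustrative check (the satellite ID matches the configuration examples later in this section):

! on the ASR9000 host: discovery/control state of a given satellite
show nv satellite status satellite 101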
nV Auto Discovery and Control Protocol Operation

[Diagram: the Satellite Auto Discovery and Control Protocol runs CPU-to-CPU between the satellite and the ASR9000 host; discovery frames carry MAC-DA, MAC-SA, a control VID and payload/FCS.]

• Discovery phase
– A CDP-like link-level protocol that discovers satellites and maintains a periodic heartbeat
– The heartbeat is sent once every second to detect satellite or fabric link failures
– CFM-based fast failure detection is planned for a future release
• Control phase
– A TCP-based control protocol used for inter-process communication between host and satellite
– Get/Set-style messages provision the satellites and retrieve notifications from the satellite
nV Satellite and Host Data Plane Forwarding

[Diagram: a frame from the satellite access port (MAC-DA, MAC-SA, VLANs (opt), payload) carries an added nV-tag on the fabric link and is restored to its original form at the host.]

On the satellite:
• Ethernet frame received on the access port
• A special nV-tag is added to the frame
• Local xconnect between access and fabric port (no MAC learning!)
• The packet is placed into the fabric port egress queue and transmitted out toward the host

On the host:
• The host receives the packet on its satellite fabric port
• It maps the frame to the corresponding satellite virtual access port based on the nV tag
• Packet processing is identical to local ports (L2/L3 features, QoS, ACL, etc. all done in the NPU)
• The packet is forwarded out of a local port or a satellite fabric port to the same or a different satellite
nV Satellite ID and Type Configuration

[Diagram: satellite 101 access port – satellite fabric link (ICL*) – "nV" GigEthernet port on the ASR9000 host – one nV system.]

Host nV configuration mode:

nv
 satellite 101
  description satellite 101 at bldg 16, 3700 Cisco Way
  type asr9000v
  serial-number CAT2039234G
  secret 5 $1$S9sddjds00/3495

• Define the satellite:
– Provide a unique satellite ID
– Identify the satellite 'type' (e.g. asr9000v, asr901, asr903)
– Optional: identify the satellite serial number
– Optional: specify an MD5 password for any telnet activities with the satellite
nV Satellite Fabric Port and Access Port Mapping Configuration

[Diagram: satellite 101 access ports – satellite fabric link (ICL*) – "nV" GigEthernet port on the ASR9000 host – one nV system.]

interface TenGigE 0/2/0/2
 nv
  satellite-fabric-link satellite 101
   remote-ports GigabitE 0/0/0-9

• Define the satellite fabric port(s)
• Identify the satellite ID connected to the fabric port
• Map satellite access ports to the fabric port interface
nV Satellite Interface Configuration

[Diagram: satellite 101 access port – satellite fabric link (ICL*) – "nV" GigEthernet port – ASR9000 host – one nV system.]

Interface and sub-interface CLI example:

interface GigabitEthernet 101/0/0/1
 ipv4 address 1.1.1.1 255.255.255.0
!
interface GigabitEthernet 101/0/0/2.100 l2transport
 encapsulation dot1q 100
 rewrite ingress tag push dot1q 2
!

• All satellite configuration is done on the host
• The satellite is a remote line card: access ports have feature parity with ASR9K local ports
• nV satellite interface naming follows the same local interface naming convention: sat-ID / sat-slot / sat-bay / sat-port
nV Satellite Supported Network Topologies – Port Extender

• Single home, static pinning
• Single home, fabric link bundle
• Dual home to cluster, static pinning
• Dual home to cluster, fabric link bundle

[Diagram: in each model the satellites attach to an ASR9K nV edge or to an ASR9K/CRS-3 cluster.]
nV Satellite L2 Fabric, Ring Topologies

• Extending the satellite connection across a Layer 2 network
– A native 802.1Q tag is added to the satellite-host control and data plane protocol
• Expanding to support ring & cascaded topologies
• Maintains the same plug & play operational simplicity
• CFM/CCM used for fast failure detection*

[Diagram, L2 fabric: the satellite reaches Host A over VLAN-A and Host B over VLAN-B, each monitored by CFM. Diagram, ring/cascade: a chain of satellites connects Host A and Host B.]

* CFM/CCM for simple ring and cascading will be in future releases
nV Satellite L1 Dual Homing Solution

• The same satellite is dual-homed to two separate ASR9K hosts – primary and backup
• Each host has an independent control channel with the satellite
• The satellite honors the configuration from its primary host if there is a conflict; a syslog message is generated on conflict
• The satellite is notified which host is primary and which is backup
• Load balancing can be per satellite, or per satellite access port (in future releases)
• If a satellite loses its primary host or link, failover occurs to its backup host

[Diagram: Satellite 1 and Satellite 2 each dual-home to Host A and Host B, which run E-ICCP between them; Satellite 1: primary Host A, backup Host B; Satellite 2: primary Host B, backup Host A.]
Dual-Hosts nV Satellite Configuration

Host1 config (ICCP redundancy group, optional ICCP group system MAC, host priority for satellite 101, and ICCP group 1 on the fabric link for dual-host operation):

redundancy
 iccp group 1
  member neighbor 2.2.2.2
  !
  nv satellite
   system-mac 8478.ac47.dd90
  !
 !
nv
 satellite 101
  type asr9000v
  redundancy host-priority 10
 !
!
interface TenGigE0/0/2/2
 nv
  satellite-fabric-link satellite 101
   redundancy iccp-group 1
   !
   remote-ports GigabitEthernet 0/0/0-43
!

Host2 config: identical, except "member neighbor 1.1.1.1" and "redundancy host-priority 20".
Data Plane Encapsulation – Ring/Cascading

• On the ring, one tag is not sufficient to identify both the satellite and the satellite access port
• 802.1ah (MAC-in-MAC) encapsulation is used for the ring
– The B-MAC identifies the satellite or host
– The I-SID identifies the satellite access port
• Switching decision at the satellite:
– If MAC DA == my satellite chassis MAC, consume; else continue on the ring
• BVID in the B-MAC bridging domain:
– Untagged for SDCP control packets and CFM
– A single BVID for user data packets
– A different BVID for ring-local multicast replication

[Diagram: ring Host 1 – S101 – S102 – S103 – Host 2; ring frame: DMAC: Host1 (host ID), SMAC: S102 (satellite ID), BVID, I-SID (satellite access port ID), original access port frame.]
nV Satellite Simple Ring Dual Host Configuration

Host1 config (per-satellite config, plus simple ring fabric link, redundancy, and per-satellite port mapping):

nv
 satellite 101
  type asr9000v
  redundancy host-priority 10
  !
  serial-number CAT1649U12B
 !
 satellite 103
  type asr9000v
  redundancy host-priority 20
  !
  serial-number CAT1521B1BY
 !
!
interface TenGigE0/0/2/0
 nv
  satellite-fabric-link network
   redundancy iccp-group 1
   !
   satellite 101
    remote-ports GigabitEthernet 0/0/0-6
   !
   satellite 103
    remote-ports GigabitEthernet 0/0/0-5
   !

Host2 config: identical, except the host priorities are swapped (satellite 101: host-priority 20; satellite 103: host-priority 10).
L2 Fabric Overview – Supported Models

• L2 fabric supports satellite connectivity across Ethernet Layer 2 domains (Layer 2 VLAN EVC transport network)
• Satellite fabric link redundancy:
– A single physical link with two VLANs/EVCs
– Two physical links with one VLAN/EVC each
• Each host L2 sub-interface is mapped to one satellite fabric port
• The transport VLAN (B-VLAN) is used for packet forwarding in the L2 cloud; native L2 (802.1q) handoff

[Diagram: S101 (VLANs 10, 11) and S102 (VLANs 20, 21) attach through the L2 cloud to Host 1 (sub-interfaces terminating VLANs 10, 11) and Host 2 (sub-interfaces terminating VLANs 20, 21); frame: DMAC: H1, SMAC: S2, BVID, I-SID, original access port frame.]
nV Satellite L2 Fabric Dual Host Configuration

Host1 config (satellite config; L2-fabric VLAN sub-interface; CFM/CCM monitoring of the VLAN EVC; dual-host redundancy and access port mapping):

nv
 satellite 101
  type asr9000v
  redundancy host-priority 10
  !
  serial-number CAT1604B17B
 !
!
interface TenGigE0/0/1/0.10
 encapsulation dot1q 10
 nv
  satellite-fabric-link satellite 101
   !
   ethernet cfm
    continuity-check interval 10ms
   !
   redundancy iccp-group 1
   !
   remote-ports GigabitEthernet 0/0/0-5
!

Host2 config: identical, except "redundancy host-priority 20" and sub-interface TenGigE0/0/1/0.21 with "encapsulation dot1q 21".
nV L2 Multicast Offload for MEF and Enterprise Services

• Multicast replication is offloaded from the nV host to the satellites
– Optimized BW utilization on the nV ring
• IGMP snooping is enabled on the nV hosts to learn the active multicast receivers on the nV ring
– Multicast membership information is propagated to the satellites via a Cisco proprietary nV protocol
– This enables each satellite to perform multicast replication locally
• Both hosts receive the same multicast membership requests from the nV ring
– Each sends a single copy of the same multicast streams
– Each satellite replicates multicast traffic from only one selected nV host and forwards it to the receivers

[Diagram: a multicast stream from the core is locally replicated at the satellite nodes on the nV ring toward the CPEs; IGMP joins from the CPEs are snooped at the nV hosts (PAN-SE).]
nV Satellite Service Activation Testing

• Satellite data-plane loopback testing for PM and service activation
• The user configures the "nV" virtual interface just like any L2/L3 interface or sub-interface on the host
• The satellite interface loopback is configured at the host

Internal loopback (tester attached at the host side of the ASR 9000 nV system):

!
interface GigabitEthernet 101/0/0/1
 loopback internal
!

Line loopback (CE attached at the satellite access port):

!
interface GigabitEthernet 101/0/0/1
 loopback line
!
Autonomic Networking
Deployment and Operations: Current Methodology

Purchase → Pre-Staging → Installation (Truck Roll) → Service Activation → Handling Misconfigurations (Truck Roll) → Management/Customization
Autonomic Networking: The Vision

Self-managing: self-configuring, self-optimizing, self-protecting, self-healing
Circling back…

Thus, the most efficient workflow eliminates pre-staging and unnecessary truck rolls:

Purchase → Installation (Truck Roll) → Service Activation → Management/Customization
The Autonomic Networking Infrastructure

[Diagram, layered on the network: security (SUDI/UDI authentication, domain certificates), discovery (channel discovery, service discovery) and the Autonomic Control Plane (an "indestructible" virtual out-of-band channel) provide consistent reachability; on top sit zero-touch deployment and management/customization (EEM / Prime / SDN controller).]
The Autonomic Networking Infrastructure Explained

1. Channel discovery – goal: find the channel (VLAN) to communicate on
2. Adjacency discovery – goal: find autonomic neighbors of the same domain, OR download a certificate from the registrar (post-authentication)
3. Join AN domain – goal: join the AN domain after the certificate download
4. Autonomic Control Plane – goal: a secure, always-available autonomic communication channel
5. Autonomic processes – goal: network-embedded intelligence, service discovery

[Diagram: a new device attaches through an L2 cloud (E-LINE, E-LAN, E-TREE) to a proxy device at the edge of the rest of the autonomic network, which hosts the registrar, CA, AAA and a TFTP server.]
Configure a Registrar

Router#configure terminal
Router(config)#autonomic registrar
Router(config-registrar)#domain-id cisco.com
Router(config-registrar)#CA {external | local}
Router(config-registrar)#external-CA url <url>
Router(config-registrar)#whitelist disk:whitelist.txt
Router(config-registrar)#no shut

• Enter autonomic registrar config mode
• Configure the domain-id – any name will do
• Choose either an external or a local CA
• Specify the external CA's URL (if selected); if no external CA URL is specified, the registrar runs an IOS CA locally
• Specify a local whitelist (optional)
• Unshut the registrar – you're done!
Registrar Redundancy

• A registrar in an autonomic domain:
– Validates new devices (whitelist)
– Hands out domain certificates
• With a single registrar, a registrar failure means no new devices can join the autonomic domain!
• It is good practice to configure multiple registrars, with identical configuration
• Registrars can be distributed – no need to be neighbors!
Create a Whitelist

• Devices joining the domain must be validated before certificates are handed out
• Create a whitelist (text file) of the UDIs that are allowed to join
– Automatically generated by Cisco (from the bill of sale) for new devices
– Updated by the customer for existing devices
• Load the whitelist on the registrar (manually)

[Diagram: purchase → bill of sale → Cisco creates the whitelist for new devices; the customer updates it for existing devices; the whitelist is loaded on the registrar (CSR1000v).]
Channel Discovery

[Diagram: a new device ("Michael") probes across a dark Layer 2 cloud toward the registrar; the working VLAN is noted on each side.]
Bring up Remote Sites: Channel Discovery

• A newly installed device is always passive
• Typically VLAN-based E-LINE services – each NID permits one VLAN
• Channel discovery helps discover the allowed VLAN
• The ACP is kept separate from the data plane using a QinQ service instance with a fixed inner VLAN = 4094

[Diagram: probes with varying outer VLANs cross the third-party Metro Ethernet cloud; the NID only allows VLAN 416, so the probe for outer VLAN 416 passes through; the inner VLAN stays fixed.]
Restricting VLAN Ranges with Channel Discovery

• Intent is configured on the registrar
• Flooded through the network

Router#configure terminal
Router(config)#autonomic intent
Router(config-intent)#acp outer-vlans 400-420
Router(config-intent)#end
Domain Certificates

• Secure by default

[Diagram: the new device's ("Michael's") UDI is validated against the local whitelist on the registrar, across the dark Layer 2 cloud.]
Autonomic Control Plane (ACP)

[Diagram: device "Michael" connects to the registrar across the dark Layer 2 cloud.]

Router# show autonomic device
UDI                  <UDI>
Device ID            Router-1
Domain ID            cisco.com
Domain Certificate   (sub:) cn=Router-1:cisco.com
Device Address       FD08:2EEF:C2EE::D253:5185:5472
Proxy Bootstrap

Steve: "Hi Michael, I'm Steve. What do I need to configure to join?"
Michael: "Nothing! Welcome to AN. I'll be your guide."

[Diagram: new device "Steve" talks to domain device "Michael" at the edge of the dark Layer 2 cloud; the registrar sits behind Michael.]
Bring up Remote Sites: ACP

• The Autonomic Control Plane comes up using the discovered channel
• IPv6 connectivity to the pre-aggregation devices (ASR903) is established

[Diagram: ACP addresses FD08:2EEF:C2EE::D253:5185:547A and FD08:2EEF:C2EE::D253:5185:5237 across the third-party Metro Ethernet cloud; the CA sits behind the registrar.]
Tree-like Control Plane Build-up

[Diagram: the virtual out-of-band channel (VOOB) grows as a tree across the dark Layer 2 cloud, from the registrar through Michael to Steve.]
Virtual Out Of Band Channel (VOOB)

[Diagram: even with an AAA misconfiguration or an interface admin-shut on a device, the ACP keeps Michael and Steve reachable across the dark Layer 2 cloud.]
Advantages of the Autonomic Control Plane (ACP)

• Completely self-managing – no config!
• Secure – separate (VPN) and encrypted (IPsec)
• Independent of routing – only depends on link-local addresses
• Independent of configuration – only the certificate is visible in "sh running"
• Visible – lots of show commands, debugs, etc. (examples below)

[Diagram: a secure tunnel between two devices over IPv6 link-local addresses, terminated in a VRF with a loopback on each side – usable as a "virtual out-of-band channel".]
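A couple of illustrative checks (command names as in the IOS 15S Autonomic Networking feature; availability may vary by release):

! list autonomic neighbors found by adjacency discovery
show autonomic neighbors
! inspect the ACP channels built to those neighbors
show autonomic control-plane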
Connect the Outside World to the ACP

Connect services (DNS, AAA, PnP etc.) to the ACP:

!
interface Gig0/3
 autonomic connect
 ipv6 address 2000::10/64
end
!

[Diagram: CA, AAA server and PnP in the NOC attach via this interface to the ACP, which spans the third-party Metro Ethernet cloud.]
Connecting into the Autonomic Control Plane

interface eth 2
 autonomic connect
 ipv6 address 2000::10/64

• Works like the normal "ip vrf forwarding" command
• All devices on this interface have full access to the ACP – can SSH, SNMP, etc. to the loopbacks
• Long term: servers will be autonomic devices

[Diagram: the connected device reaches the loopbacks inside the ACP VRF through the secure tunnel.]
Service Discovery

• Services are automatically learnt by all the devices
• Note: these are services in the autonomic domain context, not global

Router#show autonomic service
Service                  IP-Addr
Syslog                   2000::1 UNKNOWN
AAA                      2000::1 UNKNOWN
AAA Accounting Port      1813
AAA Authorization Port   1812
Autonomic registrar      FD08:2EEF:C2EE::D253:5185:5472
TFTP Server              2000::1 UNKNOWN
DNS Server               2000::1 UNKNOWN

[Diagram: CA, AAA server and PnP in the NOC are reachable across the third-party Metro Ethernet cloud.]
Automatic Configuration Download

• Accomplish the config download using a PnP server* or existing TFTP servers
• Bring up services!

[Diagram: a TFTP server is reachable across the third-party Metro Ethernet cloud over the ACP.]
Intent Distribution

• Intent = a business policy for the entire network or a subset of the network
• Automatic distribution of intent using the Intent Distribution Protocol (IDP)
• The intent timestamp/version is hot-potato-forwarded in the network constantly
• If a received timestamp > the local intent timestamp, the intent is pulled in from the neighbour

[Diagram: NMS systems and SDN controllers feed intent to the registrar, which distributes it through Michael to Steve.]
Virtualizing the Registrar: CSR1000v Integration

IOS XE 3.15

[Diagram: a Network Operations Center (NOC) with a CSR1000v VM acting as the registrar, alongside the CA, AAA server and PnP.]
The Autonomic Networking Infrastructure

[Diagram recap: security, discovery and the Autonomic Control Plane provide consistent reachability, under zero-touch deployment and management/customization (EEM / Prime / SDN controller).]
Device Support: SP, Enterprise and IoT

Supported today:
• ASR 901, ASR 901s, ASR 903, ASR 920, ME 3600, ME 3800
• Catalyst 2000, 3000, 4000, NG3k, IE 2000
• Open source: Secure Network Bootstrap Infrastructure (SNBI; part of the OpenDaylight Helium release)

Roadmap:
• ASR 9000
• ASR 1000, CSR 1000, ISR-G2, ISR-4000
• (more to come)
Standardisation

ANIMA Working Group: http://tools.ietf.org/wg/anima/

Early work
• A Framework for Autonomic Networking: http://tools.ietf.org/html/draft-behringer-autonomic-network-framework
• Making the Internet Secure by Default: http://tools.ietf.org/html/draft-behringer-default-secure

NMRG work
• Autonomic Networking: Definitions and Design Goals: http://tools.ietf.org/html/draft-irtf-nmrg-autonomic-network-definitions
• Gap Analysis for Autonomic Networking: https://tools.ietf.org/html/draft-irtf-nmrg-an-gap-analysis

Use case drafts (used to derive requirements for the Autonomic Networking Infrastructure)
• Autonomic Networking Use Case for Network Bootstrap: https://tools.ietf.org/html/draft-behringer-autonomic-bootstrap
• Autonomic Network Stable Connectivity: https://tools.ietf.org/html/draft-eckert-anima-stable-connectivity
• Autonomic Prefix Management in Large-scale Networks: https://tools.ietf.org/html/draft-jiang-anima-prefix-management

Solution drafts
• An Autonomic Control Plane: https://tools.ietf.org/html/draft-behringer-anima-autonomic-control-plane
• Bootstrapping Key Infrastructures: http://tools.ietf.org/html/draft-pritikin-anima-bootstrapping-keyinfrastructures
• Bootstrapping Trust on a Homenet (this is in homenet, not ANIMA): https://tools.ietf.org/html/draft-behringer-homenet-trust-bootstrap
• A Generic Discovery and Negotiation Protocol for Autonomic Networking: https://tools.ietf.org/html/draft-carpenter-anima-gdn-protocol
References

• www.cisco.com/go/autonomic/
• IETF drafts: see the previous slide
• OpenDaylight Project SNBI: https://wiki.opendaylight.org/view/SecureNetworkBootstrapping:Main
• Autonomic Networking Configuration Guide, Cisco IOS Release 15S: www.cisco.com/en/US/partner/docs/ios-xml/ios/auto_net/configuration/15-s/an-auto-net-15-s-book.html
• Cisco IOS Autonomic Networking Command Reference: www.cisco.com/en/US/partner/docs/ios-xml/ios/auto_net/command/an-cr-book.html
• [email protected]
Auto-IP

Auto-IP: self-assigning IP addresses, with LLDP-based Auto-IP negotiation:

1. Assign a unique IP address to the node being inserted
2. The neighboring nodes and the inserted node negotiate the physical link addresses
3. Connectivity is established to the new node without manual intervention on the existing nodes

Easy node insertion and IP address assignment in L3 rings
Auto-IP Solution Overview

[Diagram: Auto-IP negotiation around the ring R1–R2–R3; each link has an "owner" and a "non-owner" interface.]

• Ring-topology point-to-point links use a /31 mask
• Both interfaces are equal before the insertion
• After the insertion, the "owner" and "non-owner" interfaces are determined automatically during the initial negotiation, depending on the adjacent routers
• After the initial IP auto-negotiation and IP address assignment, the "owner" interface keeps its IP address during any ring operation: insertion/removal/movement (stickiness)
• The "non-owner" interface may change its IP address according to its new neighbor during ring operations
Auto-IP: Plug-n-Play for L3 MPLS Ring

[Diagram, initial state: ring link R1–R3 numbered 1.1.1.0/31, with R1 the non-owner (P=0) and R3 the owner (P=2, 1.1.1.1/31). R2 is inserted between R1 and R3; LLDP negotiation (R2: P=1, auto-IP=1.1.1.3; R3: P=2, curr-IP=1.1.1.1) makes R2 the owner (1.1.1.3/31) toward R1, which becomes non-owner 1.1.1.2/31, and the non-owner (1.1.1.0/31) toward R3's owner interface 1.1.1.1/31.]

On R2, before insertion:

interface GigabitEthernet0/3
 mpls ip
 auto-ip-ring 1 ipv4-address 1.1.1.3
interface GigabitEthernet0/4
 mpls ip
 auto-ip-ring 1 ipv4-address 1.1.1.3

On R2, after negotiation:

interface GigabitEthernet0/3
 mpls ip
 ip address 1.1.1.3 255.255.255.254
 auto-ip-ring 1 ipv4-address 1.1.1.3
interface GigabitEthernet0/4
 mpls ip
 ip address 1.1.1.0 255.255.255.254
 auto-ip-ring 1 ipv4-address 1.1.1.3
EPN Evolution
Autonomic Carrier Ethernet
Introducing Autonomic Carrier Ethernet Networks

Balance between a fully centralized control plane (BGP/SDN, SDN controller, OpenFlow) and a fully distributed control plane (BGP, T-LDP, BGP RFC 3107, RSVP-TE, MPLS LDP, IGP, IP):

Autonomic Networking + Segment Routing + SDN – minimal but "sufficient" distributed control-plane intelligence, with centralized intelligence on the SDN controller.

[Diagram: an SDN controller with APIs sits above access and aggregation networks running an autonomic IGP + SR.]
Autonomic Carrier Ethernet Architecture Components

• Autonomic network: secure infrastructure, auto discovery, plug-n-play
• Segment Routing: self-deployed and self-protected; dynamic, flexible traffic engineering
• SDN controller: service label provisioning, cloud integration

[Diagram: the SDN controller programs [service label, SR label] stacks on the CE/NID access nodes in the Autonomic CE1–CE3 regions; gateway/service nodes advertise anycast SR label 1001 and the cloud edge advertises anycast SR label 5001, across the core toward the DC.]
Transport Architecture Overview

• Segment Routing: IGP only, no need for LDP; the IGP shortest path as the baseline
• Any-node-to-any-node transport connectivity: SR node label
• Service node redundancy: anycast SR label
• Link or node protection with Topology-Independent Fast ReRoute (TI-FRR): 50 ms FRR in any topology
• No IGP and LDP interaction, no hierarchy of BGP and LDP LSPs; 50 ms automatic TI-FRR

A configuration sketch follows below.

[Diagram: a single IGP/SR domain (one area or process) with nodes 1–7 and 101/102; the service nodes share anycast label 1001 toward the DC.]
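For flavor, a minimal IOS-XR IS-IS Segment Routing sketch (the instance name, interfaces and SID index are hypothetical); configuring the same prefix-SID on the loopbacks of two gateways would realize the anycast redundancy described above:

router isis SR
 address-family ipv4 unicast
  metric-style wide
  ! enable the SR-MPLS data plane for this IS-IS instance
  segment-routing mpls
 !
 interface Loopback0
  address-family ipv4 unicast
   ! node SID; the same index on two gateway loopbacks yields an anycast SID
   prefix-sid index 1
 !
 interface GigabitEthernet0/0/0/0
  address-family ipv4 unicast
   ! TI-LFA: 50 ms protection in any topology
   fast-reroute per-prefix
   fast-reroute per-prefix ti-lfa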
Inter-domain Transport Architecture
BGP-Free Option: SDN Controlled – Without Redistribution

• SR label stack {local GW, remote GW, remote node}: isolated IGP islands, no redistribution required, simple, scalable
• An external SDN controller is used to provision the SR label stack
• The SDN controller can learn the SR label stack via BGP-LS, or via simple pre-provisioning
• BGP-free option: no need for hierarchical transport LSPs (RFC 3107)

[Diagram: A→B label stack {GW1, GW2, B} = {1001, 2001, 2}. Access node A1 (SR node label 1) in IGP island CE1 reaches node B (SR node label 2) in IGP island CE2 through GW1 (anycast SR label 1001) and GW2 (anycast SR label 2001) across the core; the cloud edge advertises anycast SR label 5001 toward the DC; SDN controllers provision the cross-domain stack on the CPE/vCPE and CE nodes.]
Inter-domain Transport Architecture
BGP-Free Option: SDN Controlled – With Redistribution

• SR label stack {remote GW, remote node}: isolated IGP islands, simple, scalable, optimized label stack
• All service node labels must be visible to the access nodes: redistribution is required
– All service node anycast prefixes and SIDs are redistributed within each CE region
• An external SDN controller is used to provision the SR label stack
• BGP-free option: no need for hierarchical transport LSPs (RFC 3107)

[Diagram: A→B label stack {GW2, B} = {2001, 2}; otherwise the same topology as on the previous slide.]
Cross-Domain: CE Transport to DC Network

• The data center domain can be easily integrated with the Carrier Ethernet transport network
• Both the CPE/NID and the virtual PE are provisioned with an SR label stack
• The Carrier Ethernet and data center networks perform MPLS label forwarding between the NID and the vPE

NID → vPE: {1001, 2001, 100}
vPE → NID: {2001, 1001, 100}

[Diagram: NID (label 100) – CE SR domain (nodes 1–7, 101/102; service nodes with anycast label 1001) – core – GW:DC – DC SR domain (service nodes with anycast label 2001) – vPE (label 100).]
Intra-domain Service Architecture

• P2P static pseudowire provisioned by the SDN controller or NMS (a sketch follows below)
• Anycast SR label used to provide service node redundancy
• TI-LFA leveraged to achieve 50 ms FRR in any topology
• Service 1: E-Line between node 1 and node 2 – service labels 60001, 60002
• Service 2: L3VPN with PWHE – from a UNI on node 1 to the L3VPN on a redundant service node (anycast label 1001)

[Diagram: the SDN controller provisions service labels 60001 and 60002; label impositions shown as [SR label, service label]: [{2}, 60001], [{1}, 60002], [{1001}, 60002], [{1}, 60001]; CEs attach at SR node 1 and SR node 2; service nodes 101/102 share anycast label 1001 at the POP site / cloud edge (distributed DC) toward the DC.]
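As a sketch of what the controller or NMS would push for the static E-Line, an IOS-XR static pseudowire with pre-assigned labels (the group/name, interface, neighbor and pw-id are hypothetical; the 60001/60002 values reuse the slide's example service labels):

l2vpn
 xconnect group AUTO-CE
  p2p ELINE-1
   ! UNI-facing attachment circuit on node 1
   interface GigabitEthernet0/0/0/1
   ! static PW to the remote node: no T-LDP, labels provisioned centrally
   neighbor ipv4 10.0.0.2 pw-id 1
    mpls static label local 60001 remote 60002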
Summary

• EPN 4.0
• nV Satellite
• Autonomic Networking
• Auto-IP
• Autonomic Carrier Ethernet