Positioning 40 and 100 GbE in data center inter-switch link applications and 40GbE PMD recommendations

Adam Carter, Cisco
Alessandro Barbieri, Cisco
Data Center inter-switch link applications

The adoption of 40GbE as an objective for 802.3ba will address the server interconnect application. However, it is inevitable that the technology will migrate into the next layers of the network and be adopted into the inter-switch links within the data center. The drivers for this migration and the requirements of these inter-switch links will be discussed.
List of supporters

Dan Dove, ProCurve Networking by HP
Chris Cole, Finisar
Pete Anslow, Nortel
Steve Trowbridge, Alcatel-Lucent
Rick Rabinovich, T-PACKETS
Ethernet architectures in the data center

Three-tier switch topology:
- Core: high-density modular chassis
- Distribution (End of Row): high-density modular chassis
- Access: modular, ToR, and blade switches
- Servers: Top of Rack (1U), embedded blade switch

Prevalent media:
- Metro: single-mode fiber
- Inter-switch: multimode fiber structured cabling, 150 m @ 95% coverage (kolesar_01_0906.pdf)
- Rack server: UTP copper
- Blade server: copper backplane

Why do we need layers?
1. Accommodate high-density server deployments.
2. Accommodate network growth (easier than a mesh topology to bring a new rack or row online).
3. Improve scalability of L2 network protocols – network design best practice.
Relationship between speed and link density in the three-tier DC architecture

Tier                               Speed     # of links
Core                               Highest   Lowest
Distribution (End of Row)          .         .
Access switch (ToR / blade switch) Lowest    Highest

The uplink speed at each switching tier is determined by two factors:
1. The downlink speed connecting to the layer below.
2. The number of downlinks being aggregated.
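The two factors above can be sketched as a small calculation. This is an illustrative helper, not from the presentation; the function name, the port counts, and the oversubscription figures are all assumptions.

```python
def uplink_demand_gbps(downlink_speed_gbps: float, num_downlinks: int,
                       oversubscription: float = 1.0) -> float:
    """Aggregate uplink capacity a switching tier needs, given the
    downlink speed to the layer below and the number of downlinks it
    aggregates, optionally relaxed by an oversubscription ratio."""
    return downlink_speed_gbps * num_downlinks / oversubscription

# A ToR access switch aggregating 48 x 1 GbE server downlinks:
print(uplink_demand_gbps(1, 48))        # 48.0 Gb/s of uplink capacity
# The same chassis with 48 x 10 GbE servers at 4:1 oversubscription:
print(uplink_demand_gbps(10, 48, 4.0))  # 120.0 Gb/s of uplink capacity
```

The same arithmetic explains why each tier up needs a faster uplink speed: as downlink speed or fan-in grows, so does the aggregate demand.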
Ethernet speeds positioning in the data center for 1 GbE and 10 GbE optimized access

1 GbE server access (three-tier architecture):
- Core: 40/100 GbE
- Distribution (modular chassis): 40 GbE
- Access: ToR switches (~30-40 servers) with 10 or 10/40 GbE uplinks; blade switches (16 servers) with 10 GbE uplinks
- Servers: 1 GbE

10 GbE server access:
- Core: 100 GbE
- Distribution (modular chassis): 40/100 GbE
- Access: Top of Rack (1U, 48 ports) and blade switches (24-32 ports) with 40/100 GbE uplinks
- Servers: 10 GbE

For power, size/density and cost reasons, 40G is likely to dominate access-to-distribution switch uplinks for several years after the IEEE 802.3ba standard is ratified.
The transition of port speeds in switches

For example: 10M to 100M
- On switches, as technology matures and costs decrease, the ratio of high-speed to low-speed ports alters.
- It took 4 years for 100M ports to match 10M ports shipped.
- Beyond the 1:1 point, the ratio takes off (i.e. low speed virtually disappears).
- This transition is driven by technology cost.

[Chart: high:low speed switch port units ratio (high-speed : low-speed shipped ports, 100 Mbps/10 Mbps), 1995-1999, reaching 101%. Source: Dell'Oro]
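The crossover dynamic described above can be illustrated with a toy calculation. The yearly ratios below are invented placeholders, not Dell'Oro data; only the ~1:1 (101%) crossover level in 1999 and the 4-year gap are taken from the slide.

```python
# Hypothetical high:low shipped-port ratios per year (placeholder
# values, except the 1999 crossover point noted on the slide).
ratio_by_year = {1995: 0.05, 1996: 0.15, 1997: 0.45, 1998: 0.80, 1999: 1.01}

std_year = 1995  # assumed ratification year of the 100 Mb/s standard
crossover_year = next(y for y, r in sorted(ratio_by_year.items())
                      if r >= 1.0)
print(crossover_year - std_year)  # 4 years from standard to the ~1:1 point
```

Past that crossover year, new shipments are dominated by the higher speed and the low-speed share collapses.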
Higher-Speed Ethernet adoption dynamics on servers and switches

Mass adoption of Higher Speed Ethernet (HSE) on server ports is driven by the availability of HSE on LAN on Motherboard (LOM).

Although HSE on server ports eventually drives HSE adoption on switch ports, the transition is not nearly as fast on the switch side!

[Chart: NIC-LOM mix — Network Interface Card (NIC) vs. LAN on Motherboard (LOM) share, 2001-2005, marking the year of 1 GbE on LOM. Source: Dell'Oro]

[Chart: 100-1000 switch port mix — Total 100 Mbps vs. Total 1000 Mbps shipped ports, 2001-2005. Note: switch port data are not broken down by data center and wiring closet.]
The need for solutions optimized for 1 GbE aggregation is here to stay well beyond 2010

[Chart: high:low speed switch port units ratio (high-speed : low-speed shipped ports), 1995-2011 with forecast, three series: 100 Mbps/10 Mbps reached ~1:1 (101%) about 4 years after the 100 Mb/s standard; 1000 Mbps/100 Mbps reached ~1:1 (92%) about 10 years after the 1000 Mb/s standard; 10 Gbps/1000 Mbps is forecast at only ~1:25 (4%) about 10 years after the 10 Gb/s standard. Source: Dell'Oro Q3'07. Note: switch port data are not broken down by data center and wiring closet.]
40G PMD fiber requirements in the Data Center

Distance and media requirements for 40G:

Data Center Type   Aggregation (10-40G)                  Core (40-100G)
Greenfield         OM-3 ribbon                           OM-3 ribbon; single mode ~10 km
Legacy             OM-3 ribbon and OM-3 duplex           single mode ~10 km
                   (infrastructure mix requirement)

A multi-fiber PMD should represent the most cost-effective interconnect solution for 40 GbE; hence it is expected that a 40G PMD supporting ribbon fibers will be instrumental in driving high-volume adoption of 40G.
40G Requirements Summary

Drivers for 40G on OM-3 ribbon:
- Access-to-distribution uplinks in greenfield data centers
- Access to distribution in legacy data centers future-proofed for 100G and/or a 40/100 mix (blade switches vs. ToR switches)

Drivers for 40G on duplex OM-3:
- Access to distribution in legacy data centers with cable infrastructure designed for 10 GbE

Drivers for 40G on SMF:
- Core links: for SMF we need a minimum of 10 km to be backward compatible with lower data rates.
Recommendations

Now that 40 GbE has been adopted as part of the 802.3ba Task Force objectives, there is a need to consider inter-switch link applications at 40 GbE. Our recommendation is that the 802.3ba Task Force should consider the following:

1. The DC inter-switch link application should be considered part of the 40G MMF reach discussion.
2. Consider both a multi-fiber and a duplex fiber solution as part of the 40G MMF objective.
3. Adopt a new objective for 40G SMF with a minimum reach of 10 km.