
“Client-side DWDM”:
A model for next-gen Baltimore region
optical fanout
Dan Magorian
Director of Engineering and Operations
MidAtlantic Crossroads (MAX)
Presentation to Joint Techs, Fermilab
July 17, 2007
Here’s the usual optical fanout nonstarter
situation at most places
• Almost all RONs have dwdm systems, either hard-provisioned with jumpers or with ROADMs.
• Hardly any campuses have dwdm systems, but many are
fiber-rich.
• So most campuses figure they will bolt dark fiber onto RON lambdas to connect high-bandwidth researchers.
• If any actually materialize on their campus as real demand
with “check in hand”.
• A few realize this may not scale and are thinking about white-light switches to conserve fiber.
• When they think about dwdm systems to carry lambdas to the edge, the cost is usually prohibitive, given fiber resources and lack of perceived need for many dwdm system features.
So the Baltimore region wasn’t much
different from anyone else
• Actual “check in hand” researcher 10G demand to JHU,
UMBC, & others was uncertain until recently.
• Community had been well served by MAX & Internet2 high-performance layer 3 transit services. No one in Baltimore or DC had yet joined NLR.
• Though MAX has been fanning out NLR lambdas across the region for other projects and customers for years.
• But recently, Teraflow testbed & other projects appeared with actual needs for 10G lambdas to Baltimore researchers.
• Also, growing demand from less well-heeled researcher projects for vlans over shared 10G lambdas, similar to NLR’s FrameNet service.
• So, suddenly Baltimore institutions had to get their act together.
Luckily, had resources needed for this
• BERnet (Baltimore Educational Region network) has a long history as a good forum for horse-trading assets and working mutual deals among its 7 participants: state net, university net, libraries, 4 universities.
• Many other regions have similar forums, but this level of cooperation is actually rather uncommon in the Mid-Atlantic, so BERnet is frequently touted as a good cooperative model.
• Had just built a cooperative dwdm regional ring the year before, run by Univ System MD, and all 7 participants already had dark fiber lit with 1310 to the two main pop locations.
• MAX was already in midst of procuring 3rd generation
unified dwdm system to replace 2 fragmented metro rings
built 2000-2003 (more on that next time).
• State net was willing to contribute a fiber spur to Baltimore
no longer used in production net for research needs.
High-level BERnet diagram, inc coming MIT-MAX RON-RON interconnection.
(Will mean at least 4 R&E paths to get 10G north: I2, NLR, Awave, and MIT.)
[Diagram: NLR and I2 lambdas feed MAX dwdm nodes at MCLN and CLPK; BERnet dwdm at 6 St Paul and MIT dwdm at BALT serve BERnet participants; MIT dwdm nodes at NYC (lambdas to Europe), Albany, and Boston continue north.]
BERnet Regional Diagram inc new MAX dwdm
[Diagram: the client-side path links JHU, JHMI, UMBC, and MIT 300 Lex. through 660 Redwood and 6 St. Paul via 40-wavelength MUXes with ITU XFPs; the amplified 40-wavelength line-side path runs to MCLN (NLR & I2) and College Park. One transponder pair to pay for and provision end to end.]
Already BERnet had talked through
L3 routed vs L2 bypass tradeoffs
• Not news to anyone in this community, same as most places:
• High-end researcher demand approximates circuits with
persistent large flows needing low latency.
• National R&E backbones like ESnet have moved to
accommodate that by building circuit switching for top flows
• Upgrading campus L3 infrastructures (the “regular path”) to
accommodate this emerging demand involves very
expensive router and switch replacement.
• Usual interim approach is for campuses to “special case”
researchers with L2 bypass infrastructure until greater
overall demand warrants 10G end-to-end for everyone.
Originally, plan was to extend USM dwdm
system down to DC
• But new MAX dwdm wouldn’t be the same system; it would have created OEO and the need for two transponder pairs.
• Didn’t want to use “alien waves” across core:
– problems with no demarc
– need for color coordination across diverse systems.
• Wanted participants to understand longer-term cost implications of 1 vs 2 transponder pairs per lambda (worked arithmetic in the sketch below).
– One transponder pair instead of two means half the incremental cost to add 10G channels ($32K vs. $64K ea). Over $1M saved if all 40 populated.
– Transponders dominate costs over the long term!
– Unlike state nets, we were within distance, so didn’t need OEO for regen.
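A minimal back-of-the-envelope sketch of that 1-vs-2 transponder-pair arithmetic, in Python, using only the approximate 2007 prices quoted above:

# Incremental cost of lighting 10G channels: one vs two transponder
# pairs per lambda (approximate 2007 prices from the slide above).
TRANSPONDER_PAIR = 32_000          # $ per 10G transponder pair
CHANNELS = 40                      # fully populated 40-channel system

unified = CHANNELS * TRANSPONDER_PAIR        # one pair per lambda, end to end
with_oeo = CHANNELS * 2 * TRANSPONDER_PAIR   # OEO between two separate systems

print(f"one pair/lambda:  ${unified:,}")                      # $1,280,000
print(f"two pairs/lambda: ${with_oeo:,}")                     # $2,560,000
print(f"saved if all 40 populated: ${with_oeo - unified:,}")  # $1,280,000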
So instead, I talked BERnet participants into
DIY idea of “client dwdm”.
• Everyone is familiar with full-featured dwdm system features. Also lots of vendors selling low-cost bare-bones dwdm systems, e.g. Ekinops, JDSU, etc.
• 3rd alternative: “do it yourself” minimal dwdm components
(that aren’t even systems) are perfect for regional fanout
from full-featured systems.
• So one XFP or SFP goes in the client (trib) pluggable ports of the dwdm system, and the other side goes in an IT or researcher ethernet switch, or even in a 10G nic. Also works switch to switch.
• Instead of $30-60k dwdm cost per chassis for participants, cost is only $22k for 40-lambda filter sets + $6k per “colored” Finisar or equivalent XFP pair. 1G or 2.5G SFPs under $1k/pr. Also $15k/pop for newly released Aegis OLM-8000 optical power monitors fed from 99/1 taps to view 8 fibers. (Rough per-participant arithmetic in the sketch below.)
• Lower costs mean even small folks can play!
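A rough per-participant cost sketch under those list prices; the function and the example lambda counts are ours for illustration, the dollar figures are the ones quoted above:

# Per-participant "client dwdm" buy-in, 2007 list prices from the slide.
FILTER_SET = 22_000    # 40-lambda filter set
XFP_PAIR = 6_000       # "colored" 10G XFP pair (Finisar or equivalent)
SFP_PAIR = 1_000       # colored 1G/2.5G SFP pair (under $1k, rounded up)
OLM = 15_000           # Aegis OLM-8000 optical power monitor, per pop

def client_dwdm_cost(n_10g: int, n_1g: int, pops: int = 1) -> int:
    """Filter set plus colored pluggables plus a power monitor per pop."""
    return FILTER_SET + n_10g * XFP_PAIR + n_1g * SFP_PAIR + pops * OLM

# e.g. a participant lighting two 10G lambdas and one 1G lambda at one pop:
print(f"${client_dwdm_cost(2, 1):,}")   # $50,000 all-in, vs $30-60k for a
                                        # full dwdm chassis before any optics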
“Client” and “Line” dwdm provisioning example
[Diagram: JHU connects through 660 Redwood and 6 St. Paul to MCLN (NLR & I2) and College Park. Client side: instead of a “normal” 1310 optic, uses a “colored” dwdm XFP pair on an assigned wavelength (40 km reach, $6K). Line side: a transponder pair makes the 185 km reach with amps and holds the client XFP ($32K).]
Many commodity 40-channel 2U C-band passive parallel filters are available. We chose Bookham, who OEM components for Adva and others’ dwdm systems, and had them packaged for us. We needed 99/1 tap ports for optical power monitors. Beware: many dwdm system vendors mark up filters significantly. Also, some still offer only 32 channels.
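As a sanity check on why a 40 km-reach client XFP covers these metro legs even through the filters and taps, here is a rough loss-budget sketch; the attenuation, insertion-loss, and receiver-budget numbers are typical assumptions on our part, not measured values:

# Rough client-side loss budget (all dB figures are typical assumptions).
FIBER_DB_PER_KM = 0.25   # SMF attenuation in the C band
FILTER_DB = 3.5          # 40-ch mux/demux insertion loss, each end
TAP_DB = 0.5             # 99/1 monitor tap, each end
XFP_BUDGET_DB = 14.0     # roughly what a 40 km-reach colored XFP tolerates

def span_loss(km: float) -> float:
    return km * FIBER_DB_PER_KM + 2 * FILTER_DB + 2 * TAP_DB

for km in (5, 10, 20):   # plausible metro pop-to-pop distances
    loss = span_loss(km)
    print(f"{km:>2} km: {loss:5.2f} dB loss, {XFP_BUDGET_DB - loss:5.2f} dB margin")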
More “Client Dwdm” examples
[Diagram: dwdm filters at 6 St. Paul, 660 Redwood, and 300 W Lexington fan out to the JHU switch and UMBC. XFP pairs ride assigned wavelengths on 40-channel dwdm filters (40 km reach, $6K); the red lambda runs to DC, blue to NYC, green locally to UMBC.]
All each participant’s fiber needs is a filter pair plus one optical power monitor at each pop, not a full dwdm chassis.
This “client dwdm” approach has lots of
advantages in many situations
• Good interim approach where you have fiber and need to do more than just put 1310 over it, but don’t have budget or need for a full or even bare-bones dwdm system.
• Easy to migrate existing 1310 services onto dwdm colored sfps and xfps. Doesn’t take a lot of optical expertise.
• Optical power monitors are very important for feeding snmp power-level data to mrtg graphs (see the polling sketch after this list). Remember, while some lambdas are from campus IT switches, some researcher lambdas from their switches or nics may not be visible to campus IT at all.
• Because one end is in a full-featured dwdm xpdr client port, still have the benefit of support for 10G WANphy, SONET/SDH, OTU2 for international partners, Fibre Channel (DR sites), GMPLS-enabled control plane and dynamic provisioning for DCS service.
• Main caveat: Cisco and Ciena proprietary pluggable optics!
• Other folks considering using it, e.g. NOAA & NASA/GSFC.
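For the mrtg power graphing mentioned above, a minimal polling sketch in Python with pysnmp; the OID and hostname are placeholders, since the OLM-8000’s actual MIB isn’t reproduced here:

# Poll per-fiber optical power over SNMP for MRTG/RRD graphing.
# POWER_OID is a PLACEHOLDER -- substitute the per-port power OID from
# your monitor's MIB; the hostname below is likewise hypothetical.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

POWER_OID = '1.3.6.1.4.1.99999.1.1'    # hypothetical per-port power table

def read_power_dbm(host: str, port: int) -> float:
    """Fetch one port's power reading (assumed hundredths of a dBm)."""
    err, status, _, varbinds = next(getCmd(
        SnmpEngine(), CommunityData('public'),
        UdpTransportTarget((host, 161)), ContextData(),
        ObjectType(ObjectIdentity(f'{POWER_OID}.{port}'))))
    if err or status:
        raise RuntimeError(f'SNMP failure on {host}: {err or status}')
    return int(varbinds[0][1]) / 100.0

for port in range(1, 9):               # OLM-8000 views 8 fibers per unit
    print(port, read_power_dbm('olm.example.net', port))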
Deploying it in August, will let you know how it
comes out!
Will forward info on filters or optical power
monitors we used to anyone interested.
Thanks!
[email protected]