50 GbE, 100 GbE and 200 GbE PMD Requirements

Ali Ghiasi, Ghiasi Quantum LLC
NGOATH Meeting
Atlanta
January 20, 2016
Observation on 50GbE, 200GbE, and NG 100GbE PMDs
q  50GbE and 200GbE are a complementary set of standards, just as the marketplace has shown the complementary nature of 25GbE/100GbE
–  The current generation of switch ASICs offers 4x25GbE breakout for a small incremental cost
–  The next generation of switch ASICs will offer 4x50GbE breakout at the same economics
–  A complete eco-system requires backplane, Cu cable, 100 m MMF, possibly 500 m PSM4, 2,000 m, and 10,000 m PMDs, and should follow the 25/100GbE PMDs
q  NG 100GbE PMD attributes and requirements
–  With the increase in volume, the market is currently enjoying significant cost reduction for 100GbE PMDs such as 100GBASE-SR4, PSM4, and CWDM4/CLR4
–  Cost may not be the main driver to define NG 100GbE PMDs, with the exception of CAUI-2
–  Currently defined 100GbE PMDs will require an inverse mux with the introduction of 50G ASIC I/O
•  A PMA-PMA device could address any I/O mismatch
•  The simplest PMA/PMD implementation occurs when the number of electrical lanes equals the number of optical lanes/λ (see the sketch after this list)
–  Do we need to introduce, with every generation of electrical I/O (25G, 50G, 100G), new 100GbE PMDs that are optimized for a given generation of ASIC but are not optically backward compatible?
–  The decision to define a new optical PMD should not be taken lightly just to save a PMA-PMA mux!
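As an aside on the lane-matching point above, the following minimal sketch (a hypothetical helper with illustrative signaling rates, not from the presentation) flags when a PMA-PMA inverse mux/gearbox is needed between host electrical lanes and PMD optical lanes:

def pma_requirement(elec_lanes, elec_gbaud, opt_lanes, opt_gbaud):
    # Aggregate rates must match for the two interfaces to carry the same MAC rate.
    if abs(elec_lanes * elec_gbaud - opt_lanes * opt_gbaud) > 1e-9:
        raise ValueError("electrical and optical aggregate rates differ")
    if elec_lanes == opt_lanes:
        return "simple 1:1 PMA (# electrical lanes == # optical lanes)"
    return "PMA-PMA inverse mux/gearbox needed (%d:%d lanes)" % (elec_lanes, opt_lanes)

# Example rates in GBd are illustrative:
print(pma_requirement(4, 25.78125, 4, 25.78125))  # 4-lane host I/O to a 4-lane PMD: 1:1
print(pma_requirement(2, 53.125, 4, 26.5625))     # 2-lane 50G host I/O to a 4-lane PMD: gearbox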
Today’sEthernetMarketIsn’tJustabout
Enterprise
q  Router/OTN
–  Leads the deployment with the fastest network interfaces
–  Drives bleeding-edge technology at higher cost and lower density
q  Cloud data centers
–  Clos fabrics typically operate at lower port speeds to achieve a switch ASIC radix of 32 or 64
–  Drives cost-power-density to enable massive data center build-out
–  Forklift upgrades double capacity every ~2.5 years with the doubling of switch ASIC capacity
q  Enterprise
–  Enjoys the volume-cost benefit of deploying the previous generation of Cloud data center technology
–  More corporate IT services are now hosted by Cloud operators
–  According to Goldman Sachs research, from 2013 to 2018 Cloud will grow at a 30% CAGR compared to 5% for Enterprise IT
•  Source: http://www.forbes.com/sites/louiscolumbus/2015/01/24/roundup-of-cloud-computing-forecasts-and-market-estimates-2015/
Ethernet Serial Bitrate and Port Speed Evolution
q  Router/OTN, Cloud, vs. Enterprise applications
–  The NGOATH project addresses next-generation Cloud and Enterprise
–  50GbE is not only an interface for Cloud servers but also a replacement for 40GbE in the Enterprise.
Serial"Bitrate"(Gb/s)"in"Rela:on"to"Ethernet"Standard"
1000"
800"GbE"
400"GbE"
200"GbE"
100"GbE"
"
"
100"
Serial"Bitrate"
50"GbE"
Standard"
25"GbE"
10"
1"
1995"
10"GbE"
1"GbE"
"
2000"
2005"
2010"
2015"
2020"
2025"
Year"
Evolution of Ethernet Speed-Feed
q  The NGOATH project is addressing the needs of the next-generation Cloud track
–  P802.3bs is addressing the needs of the next-generation Router/OTN track
–  The 25G SMF project is addressing the needs of the next-generation Enterprise/campus track
Approximate year of introduction, 2008 to 2019 (not all possible configurations are listed):
–  Router/OTN track: 400G line card (4 ports CFP, 4x100GbE) → 800G line card (8 ports CFP2, 8x100GbE) → 3200G line card (8 ports CFP8, 8x400GbE)
–  Cloud track: 480G switch (48 ports SFP+, 48x10GbE) → 1440G switch (36 ports QSFP10, 36x40GbE/144x10GbE) → 3200G switch (32 ports QSFP28, 32x100GbE/64x50GbE/128x25GbE) → 6400G switch (32 ports QSFP56, 32x200GbE/64x100GbE/128x50GbE)
–  Enterprise track: 480G switch (48 ports SFP+, 48x10/1GbE) → 1280G switch (32 ports QSFP10, 32x40GbE/128x10/1GbE) → 3200G switch (32 ports QSFP28, 32x100GbE/128x25/10GbE)
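The breakout arithmetic behind the Cloud-track entries is straightforward; the following sketch (an illustrative helper, not from the presentation) enumerates the port configurations a given switch generation supports:

def breakout_options(asic_capacity_gbe, lanes_per_port, lane_speed_gbe):
    # Full 4-lane port, 2-lane breakout, and single-lane breakout of each cage.
    options = {}
    for lanes in (lanes_per_port, lanes_per_port // 2, 1):
        speed = lanes * lane_speed_gbe
        options["%dGbE" % speed] = asic_capacity_gbe // speed
    return options

print(breakout_options(3200, 4, 25))  # {'100GbE': 32, '50GbE': 64, '25GbE': 128}
print(breakout_options(6400, 4, 50))  # {'200GbE': 32, '100GbE': 64, '50GbE': 128}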
Current and Next NGOATH PMDs

[Figure: sub-layer stacks (MAC, Reconciliation (RS), CGMII/CCMII, PCS, FEC, PMA, AUI, PMA, PMD) for the current 25G/50G(I)/100GbE family and the next-generation 50G/100G/200GbE family.]
–  FEC: RS(528,514) or RS(544,514); RS(528,514) is noted for all PMDs except LR4
–  Electrical interfaces: CAUI-4 / 4x25G AUI / CCAUI-4 / 4xLAUI
–  Current 25/50/100GbE PMDs: 100GBASE-SR4, PSM4, CLR4, CWDM4, 100GBASE-LR4, 100GBASE-CR4, 100GBASE-KR4
–  Next-generation 50/100/200GbE PMDs: 50G-FR/200G-FR4, 50G-LR/200G-LR4, 100G-DR2/200G-DR4, 200G-SR4/100G-SR2/50G-SR, 50G-CR/200G-CR4, 50G-KR/200G-KR4, and 100GBASE-FR2 or 100GBASE-FR
I. 50GbE is included in http://25gethernet.org; the application is to increase fabric radix.
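For reference, the per-lane signaling rates implied by the two FEC options above follow from the 256b/257b transcoding and the code overhead; as a worked example for a 4-lane 100GbE interface:

\[
R_{\text{lane}}^{\text{RS}(528,514)} = \frac{100\ \text{Gb/s}}{4}\times\frac{257}{256}\times\frac{528}{514} \approx 25.78125\ \text{GBd}
\qquad
R_{\text{lane}}^{\text{RS}(544,514)} = \frac{100\ \text{Gb/s}}{4}\times\frac{257}{256}\times\frac{544}{514} \approx 26.5625\ \text{GBd}
\]

On a two-lane PAM4 host interface the RS(544,514) payload maps to 26.5625 GBd (53.125 Gb/s) per lane, which is the kind of rate a CAUI-2 style interface would run at.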
The Challenge with 100GbE Next-Gen PMDs

q  Approach: support the existing 100GbE PMDs
[Figure: a host with KR4/KP4 FEC drives a legacy 100GbE (KR4 FEC) QSFP28 over CAUI-4 and a next-gen 100GbE (KP4?) module in ½ of a QSFP56 over CAUI-2, with PMA-PMA conversion; test points TP2/TP3 at the module interfaces.]
q  Approach: support new 100GbE PMDs
[Figure: a host with KP4 FEC(?) drives a next-gen 100GbE module in ½ of a QSFP56 over CAUI-2 and a legacy 100GbE QSFP28 over CAUI-4 through a PMA-PMA conversion; test points TP2/TP3.]
q  The simplest approach, if feasible, is to define the new 100GbE PMDs based on the KR4 FEC.
[Figure: the legacy 100GbE (KR4 FEC) QSFP28 over CAUI-4 and the next-gen 100GbE (KR4 FEC) module in ½ of a QSFP56 over CAUI-2 share the same host FEC; test points TP2/TP3.]
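Whether the new PMDs reuse the KR4 RS(528,514) FEC or move to the KP4 RS(544,514) FEC is largely a trade between backward compatibility and coding strength; a small sketch (illustrative, not from the presentation) of the two codes' basic parameters:

def rs_params(n, k):
    # Both 802.3 Reed-Solomon codes operate on 10-bit symbols.
    t = (n - k) // 2          # correctable symbols per codeword
    overhead = n / k - 1.0    # parity overhead relative to payload
    return t, overhead

for name, (n, k) in [("KR4 RS(528,514)", (528, 514)), ("KP4 RS(544,514)", (544, 514))]:
    t, oh = rs_params(n, k)
    print("%s: corrects %d symbols/codeword, %.2f%% parity overhead" % (name, t, oh * 100))
# KR4: 7 symbols, ~2.72% overhead; KP4: 15 symbols, ~5.84% overhead.
# The stronger KP4 code buys more coding gain at the cost of a higher lane rate
# and no direct interoperability with the installed KR4-based 100GbE PMDs.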
Observation: 50/200GbE are the enablers, while 100GbE is a nice addition
q  200GbE PMDs (application: next-generation Cloud data center)
–  200GBASE-LR4 based on CWDM4 enables a next-generation uncooled, low-cost PMD
•  Does 200GBASE-FR4 offer a significantly lower-cost solution to justify defining a separate PMD?
–  200GBASE-DR4 offers 200GbE as well as 50/100GbE breakout
–  200GBASE-SR4 offers 200GbE as well as 50/100GbE breakout
–  200GBASE-KR4 with a 30+ dB budget is required to support a 1 m backplane
•  The backplane loss will determine the exact cable reach of 3 to 5 m
q  100GbE PMDs (application: doubling radix in the Cloud; could be a better match with next-generation 50G ASIC I/O)
–  100GBASE-LR2 enables a next-generation uncooled, low-cost PMD
•  Does 100GBASE-FR4 offer a significantly lower-cost solution to justify defining a separate PMD?
–  100GBASE-DR2 uses ½ of the 200GBASE-DR4
–  100GBASE-SR2 options: use ½ of the 200GBASE-SR4, or define a dual-λ duplex PMD
–  It is too early to define serial 100 Gb/s, and there is no need to define 2-lane Cu KR2/CR2
q  50GbE PMDs (next-generation servers and next-generation Enterprise/campus)
–  50GBASE-LR is required for campus and access applications
–  Do we need to define both 50GBASE-FR and 50GBASE-DR?
–  50GBASE-SR
–  50GBASE-KR with a 30+ dB budget is required to support a 1 m backplane
•  The backplane loss should determine the exact cable reach of 3 to 5 m.
50 Gb/s/lane Interconnect Space
q  OIF has been defining USR, XSR, VSR, MR, and LR reaches
–  OIF-56G-LR is a good starting point, but it does not support a practical 1 m backplane implementation; 27.5 dB is insufficient to build practical backplanes!
Application                        | Standard     | Modulation | Reach   | Coupling | Loss
Chip-to-OE (MCM)                   | OIF-56G-USR  | NRZ        | <1 cm   | DC       | 2 dB @ 28 GHz
Chip-to-nearby OE (no connector)   | OIF-56G-XSR  | NRZ/PAM4   | <5 cm   | DC       | 8 dB @ 28 GHz; 4.2 dB @ 14 GHz
Chip-to-module (one connector)     | OIF-56G-VSR  | NRZ/PAM4   | <10 cm  | AC       | 18 dB @ 28 GHz; 10 dB @ 14 GHz
Chip-to-module (one connector)     | IEEE CDAUI-8 | PAM4       | <10 cm  | AC       | …
Chip-to-chip (one connector)       | OIF-56G-MR   | NRZ/PAM4   | <50 cm  | AC       | 35.8 dB @ 28 GHz; 20 dB @ 14 GHz
Chip-to-chip (one connector)       | IEEE CDAUI-8 | PAM4       | <50 cm  | AC       | …
Backplane (two connectors)         | OIF-56G-LR   | PAM4       | <100 cm | AC       | 27.5 dB @ 14 GHz
Backplane (two connectors)         | IEEE         | PAM4       | 100 cm  | AC       | …
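The reference frequencies in the Loss column are the Nyquist frequencies (half the symbol rate) of the corresponding signaling:

\[
f_{\text{Nyquist}}=\tfrac{1}{2}f_{\text{baud}}:\qquad
56\ \text{GBd NRZ}\Rightarrow 28\ \text{GHz},\qquad
28\ \text{GBd PAM4}\Rightarrow 14\ \text{GHz}
\]

For the IEEE CDAUI-8 rows (8 x 53.125 Gb/s PAM4, i.e. 26.5625 GBd per lane), the corresponding Nyquist frequency is about 13.28 GHz.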
TE Whisper 40" Backplane
"The Gold Standard"
See: http://www.ieee802.org/3/bj/public/jul13/tracy_3bj_01_0713.pdf
Response of 40" TE Whisper Backplane with Megtron 6
–  30”backplaneMegtron6HVLPwith6milstraces
•  Fordenseapplica;onmoreprac;caltracewidthwillbe4.5-5mils
–  Daughtercards5”eachMegtron6VLPwith6milstraces
–  Thelossis~30dBat12.87
–  Actualimplementa;onmayneedtousenarrowertraceslike4-5mils
increasingthelossfurther
–  Withbackplanenotshrinking30-32dBlossisrequiredforprac;callinecards.
25G/50G Channel Summary Results for the TE Whisper 1 m Backplane
q  Closing the link budget on a 30 dB channel with 2 dB of COM margin is not trivial
–  A loss reduction is also not an option, given that the TE backplane trace width is already rather wide at 6 mils, where a typical line card trace would be 4.5-5 mils!
Test case                        | Channel IL (dB) | Channel+PKG IL (dB) | ILD  | ICN (mV) | PSXT (mV) | COM (dB)
25G NRZ with IEEE 12 mm package  | 28.4            | 30.4                | 0.37 | 1.60     | 4.0       | 5.5
25G NRZ with IEEE 30 mm package  | 28.4            | 32.5                | 0.37 | 1.63     | 3.3       | 4.8
25G PAM4 with IEEE 12 mm package | 16.4            | 17.1                | 0.05 | 0.98     | 2.0       | 5.7
25G PAM4 with IEEE 30 mm package | 16.4            | 18.1                | 0.05 | 0.98     | 1.8       | 5.7
50G PAM4 with IEEE 12 mm package | 29.7            | 34.7                | 0.41 | 1.65     | 3.1       | …
50G PAM4 with IEEE 30 mm package | 29.7            | 36.7                | 0.41 | 1.64     | 2.66      | …
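In reading the NRZ versus PAM4 rows, recall the basic trade: at the same bit rate PAM4 halves the Nyquist frequency (hence the much lower channel IL for the 25G PAM4 cases) at the cost of roughly

\[
\Delta\mathrm{SNR}_{\text{PAM4 vs. NRZ}} \approx 20\log_{10}(3) \approx 9.5\ \text{dB}
\]

of vertical eye amplitude, which is why the COM values do not scale directly with insertion loss.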
More Advanced PCB Material Only Modestly Improves the Backplane Loss
q  Even moving from Megtron 6 (Df ~0.005) to Tachyon (Df ~0.0021), the loss only improves by ~20%
–  With Df ≤ 0.005, the loss is now dominated by conductor size and roughness (see the sketch below).
Source: Lee Ritchey, DesignCon 2015
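To illustrate why the improvement saturates, here is a rough stripline loss split using Bogatin-style rule-of-thumb coefficients (the coefficients and the Dk/trace-width/Z0 values are assumptions for illustration, not from the presentation):

from math import sqrt

def trace_loss_db(length_in, f_ghz, df, dk=3.6, w_mil=6.0, z0=50.0):
    # First-order stripline attenuation (dB/inch), smooth copper assumed:
    #   dielectric ~ 2.3 * f[GHz] * Df * sqrt(Dk)
    #   conductor  ~ 36 * sqrt(f[GHz]) / (w[mil] * Z0)
    a_diel = 2.3 * f_ghz * df * sqrt(dk)
    a_cond = 36.0 * sqrt(f_ghz) / (w_mil * z0)
    return length_in * (a_diel + a_cond)

# 40" channel near the 25G Nyquist frequency (~12.9 GHz):
megtron = trace_loss_db(40, 12.9, 0.005)    # Megtron 6, Df ~0.005
tachyon = trace_loss_db(40, 12.9, 0.0021)   # Tachyon,   Df ~0.0021
print("Megtron 6: %.1f dB, Tachyon: %.1f dB, improvement: %.0f%%"
      % (megtron, tachyon, 100 * (1 - tachyon / megtron)))
# Conductor loss dominates once Df <= 0.005, so the better dielectric only buys
# ~20-25%; narrower (4-5 mil) traces push the conductor loss higher still.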
Summary
q  In 802.3 today we need to address three markets
–  Router/OTN: bleeding-edge technology and speed
–  Cloud: driven by forklift upgrades as a result of switch BW doubling every ~2.5 years
–  Enterprise: leveraging the last generation of cloud technology
q  50/200GbE offer the optimum solution set for the next-generation cloud, with 50GbE for servers and 200GbE for fabrics
–  In current data center build-outs, 50GbE (25G MSA) is deployed to double radix and fabric capacity
•  In next-generation data centers, high-density 100GbE will likely be deployed to build ultra-scale fabrics
q  Next-gen 100GbE PMDs can be based on the 200GbE PCS/FEC, or they can be defined to be backward compatible using the Clause 82 PCS and KR4 FEC
–  The advantage of using a common FEC for 100GbE and 200GbE is identical performance for a PMD operating in full-rate or breakout mode
–  Considering the investment made in current 100GbE PMDs, backward compatibility should be an important consideration
q  To enable next-generation 6.4 Tb line cards, backplanes based on improved FR4 material must operate at 50 Gb/s/lane
–  A loss budget of at least 30 dB is required to support construction of a 1 m conventional backplane
q  802.3 needs to balance cloud applications driven by forklift upgrades with synergy and compatibility across the Ethernet eco-system.