The Ethernet Switching Landscape

In this two-hour presentation, I discussed the new technologies making Ethernet switches more important than simply "speeds and feeds." After the technology discussion, I walked through the key points of 17 different Ethernet switch vendors, explaining their unique market positioning and value propositions.

Ethan Banks

April 03, 2014
Transcript

  1. The Ethernet Switching Landscape Hour 1: Matching Technology with Problems

  2. Introduction - 1 •  Who am I? –  Ethan Banks,

    CCIE #20655. –  Senior Network Architect @ Carenection. –  Co-host of the Packet Pushers Podcast. –  Freelance writer for Network Computing and others. •  This is a 2 hour session with a 15 minute break between hours 1 & 2.
  3. Introduction - 2 •  Who are these sessions for? – 

    Network engineers & architects with infrastructure to build. –  Technical managers needing to understand the business impact of new Ethernet technologies. –  Folks in the enterprise / data center world.
  4. Introduction - 3 •  What will we cover in hour

    1 (technology focused)? –  Speeds & latency –  MLAG & ECMP –  SPB & TRILL –  Leaf/spine –  Physical concerns: optics & cabling –  Openflow, SDN, and network virtualization –  Whitebox switching
  5. Introduction - 4 •  What will we cover in hour

    2 (vendor focused)? –  Established players: Arista, Avaya, Brocade, Cisco, Dell, Extreme, HP, IBM, and Juniper. –  Challengers: Big Switch, Cumulus, Huawei, Mellanox, NEC, Pica8, Plexxi, and Pluribus.
  6. Introduction - 5 •  Housekeeping –  Please ask questions along

    the way. –  I will not be offended if you need to leave. –  The purpose of this session is to educate, not market. I have no vested interest in any product I mention.
  7. Ethernet Tech – Latency - 1 •  What is latency?

    The time it takes for a frame to enter and then exit a switch. •  How is latency measured? FIFO – first bit in to first bit out. Why does this matter? –  Cut-through vs. store & forward. –  Comparing apples to apples.
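The cut-through vs. store-and-forward distinction is easy to quantify. As a back-of-the-envelope sketch (not any vendor's spec), the extra latency a store-and-forward switch pays is roughly the frame's serialization delay:

```python
def serialization_delay_us(frame_bytes: int, link_gbps: float) -> float:
    """Time to clock a frame onto the wire, in microseconds.

    A store-and-forward switch must absorb the whole frame before it can
    begin transmitting; a cut-through switch starts forwarding once it has
    read the header, so its latency is largely independent of frame size.
    """
    # 1 Gbps carries 1,000 bits per microsecond.
    return (frame_bytes * 8) / (link_gbps * 1000.0)

# A 1500-byte frame on 10GbE takes 1.2us just to serialize -- already larger
# than most of the sub-microsecond cut-through figures vendors advertise.
print(serialization_delay_us(1500, 10))  # 1.2
print(serialization_delay_us(64, 10))    # 0.0512
```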
  8. Ethernet Tech – Latency - 2 •  A sampling of

    vendor-reported “as low as” latency. –  Cisco Nexus 5000 3.20µs –  Cisco Nexus 5500UP 1.80µs –  Pica8 P-3290 1.00µs –  Arista 7050S 0.80µs –  Dell Z9500 0.60µs –  Juniper QFX5100 0.55µs –  Mellanox SX1036 0.22µs
  9. Ethernet Tech – Latency - 3 •  Did you know

    that different PHYs can impact latency? –  The PHY is the physical-layer Ethernet interface – i.e., the transceiver type you use. –  SFP+ is ~0.3µs. –  10GBASE-T is ~2.6µs.
  10. Ethernet Tech – Latency - 4 •  Should you care

    about latency? It’s all about the application. So… What’s your application? Do microseconds count? Do nanoseconds?
  11. Ethernet Tech – Speeds – 1 •  Faster is better,

    right? –  Faster Ethernet has implications for cabling. –  Then there’s the question of where you’re in such a hurry to get to. To another data center pod? To a storage array? To an adjacent host? –  Where you’re going has implications for what your network topology looks like.
  12. Ethernet Tech – Speeds – 2 •  What are the

    drivers for faster Ethernet? –  Reduced tolerance for data center oversubscription. Why? Ever-increasing traffic between access layer hosts. Think virtualization traffic patterns. East-west. –  High densities of virtualized servers housed in blade enclosures. –  Aggregation between layers. Multi 1GbE drives 10GbE uplinks. Multi 10GbE drives 40GbE uplinks.
  13. Ethernet Tech – Speeds – 2.1

  14. Ethernet Tech – Speeds – 3 •  10GbE – Use

    case 1 – as links between layers. –  Campus closets to core. –  ToR or EoR switches to the aggregation layer.
  15. Ethernet Tech – Speeds – 4 •  10GbE – Use

    case 2 – as access layer ports facing hosts. –  Bladecenters. –  Storage. –  10GBASE-T LAN-on-motherboard modules.
  16. Ethernet Tech – Speeds – 5 •  10GbE – media.

    –  Premade copper SFP+ terminated cables. –  Variety of fiber SFP+ modules. –  What about 10GBASE-T? •  Runs over unshielded CAT6 up to 55m. •  Runs over CAT6 shielded, CAT6A, and CAT7 up to 100m.
  17. Ethernet Tech – Speeds – 6 •  40GbE – Use

    case – as links between layers. –  Find high density 40GbE ports on switches meant to act as backbone or spine switches. –  Find 40GbE uplink ports on ToR/EoR access layer switches.
  18. Ethernet Tech – Speeds – 7 •  40GbE – media.

    –  QSFP+. A channelized transceiver that aggregates 12 fibers into a single 40GbE path. There are breakout assemblies to convert QSFP+ into 4 distinct 10GbE SFP+ interfaces.
  19. Ethernet Tech – Speeds – 8 Cisco QSFP to SFP+

    fiber & copper breakout assemblies. Images from cisco.com.
  20. Ethernet Tech – Speeds – 9 •  40GbE – media.

    –  BiDi. A QSFP+ transceiver that runs over 2 strands of fiber instead of 12. Cisco offers it today (one aspect of the Nexus 9K announcements); expect it to spread across the industry.
  21. Ethernet Tech – Speeds – 10 QSFP+ MPO connector vs.

    “traditional” LC connector. 40GbE BiDi optics let you keep the fiber plant & patches you’ve got. MPO/MTP 12-fiber connector vs. LC connector.
  22. Ethernet Tech – Speeds – 11 •  100GbE and higher.

    –  Yes, that’s a thing. –  Do you really need this? –  There’s a notion of “build it and forget about it”. In other words, build a massive fabric and eliminate contention. Sort of like putting in a 100-lane highway so that there could never possibly be a traffic jam.
  23. Ethernet Tech – Architecture •  We could spend a lot

    of time on switch architecture, but here are a few things to consider. –  Oversubscription. How does the amount of access bandwidth map to the uplink bandwidth? –  Non-blocking. A switch might be line rate on all ports, but how does that map to your network topology? –  Silicon. Some vendors make their own ASICs. Some are supplied by Broadcom, Marvell, and others. Does it matter? –  Heat & power consumption. Pay attention to the amount of heat generated and power consumed per port.
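Oversubscription is simple arithmetic, and worth doing for any switch you evaluate. A sketch using a hypothetical (but common) ToR configuration — 48 × 10GbE host-facing ports and 4 × 40GbE uplinks:

```python
def oversubscription_ratio(access_ports: int, access_gbps: float,
                           uplink_ports: int, uplink_gbps: float) -> float:
    """Ratio of host-facing bandwidth to uplink bandwidth."""
    return (access_ports * access_gbps) / (uplink_ports * uplink_gbps)

# Hypothetical ToR: 48 x 10GbE down, 4 x 40GbE up.
ratio = oversubscription_ratio(48, 10, 4, 40)
print(f"{ratio}:1")  # 3.0:1 -- at full load, uplink bandwidth is contended 3:1
```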
  24. Ethernet Tech – MLAG - 1 •  Multi-chassis link aggregation.

    •  Two or more physical switches appear as one for purposes of link bundles. Think LACP (802.3ad). •  Separated control planes –  Cisco – Virtual Port-Channel (vPC) –  Arista – MLAG •  Unified control plane –  Cisco – cross-stack EtherChannel (Catalyst 3750, 3850) –  HP – IRF –  Juniper – Virtual Chassis
  25. Ethernet Tech – MLAG - 2 In MLAG schemes, a

    host uplinks to different physical switches via a single bonded link. Diversity with interface simplicity. Switches can connect to each other using a single port-channel.
  26. Ethernet Tech – ECMP - 1 •  Equal cost multipath.

    •  The number of parallel paths a switch will actively forward across at L2 or L3. •  L2MP – can’t do this with spanning-tree. –  For our purposes, a single LAG bundle is not L2MP. –  Shortest Path Bridging (SPB) –  TRILL •  L3MP – how many parallel paths can be installed in the routing table? Varies by platform & vendor. •  OpenFlow
  27. Ethernet Tech – ECMP - 2 In this topology, leaf

    switches are top of rack, connecting to spine switches. All links are forwarding. All hosts in the topology are equidistant in hop count. Leaf 1 has 2 equal-cost paths to Leaf 2, Leaf 3, and Leaf 4 via Spine 1 and Spine 2. –  Assuming L2, TRILL or SPB is used between leaf and spine switches. –  Assuming L3, OSPF is a common choice. –  Assuming OpenFlow, a central controller programs switch TCAM directly.
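Under the hood, most ECMP implementations hash flow headers to pick among the equal-cost next hops, so packets of one flow stay in order on one path while different flows spread across the spines. A simplified sketch (real switches hash in silicon, and the field selection and hash function vary by platform):

```python
import hashlib

SPINES = ["Spine 1", "Spine 2"]  # Leaf 1's two equal-cost next hops

def pick_next_hop(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
                  proto: int = 6) -> str:
    """Hash the 5-tuple so every packet of a flow maps to the same spine."""
    flow = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    index = int.from_bytes(hashlib.sha256(flow).digest()[:4], "big") % len(SPINES)
    return SPINES[index]

# The same flow always takes the same path (no reordering); other flows
# may hash to a different spine, spreading load across the fabric.
print(pick_next_hop("10.0.1.10", "10.0.2.20", 49152, 443))
```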
  28. Ethernet Tech – Fabric - 1 •  Ethernet fabric describes

    a mesh of switches forwarding on all links. •  The point is to move toward a non-blocking architecture. How? –  Lower hop count. –  More available bandwidth. •  What else can we do with this? Add storage like FCoE. Or think about lossless iSCSI.
  29. Ethernet Tech – Fabric - 2 Host 1 has multiple

    paths across the fabric to reach Host 2. The lowest cost path is chosen via switches 8, 1, 9, 5, and 4 – a cost of 13. Alternate paths are more costly. For example, the path via switches 8, 1, 2, 9, and 4 has a cost of 25.
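To make the cost comparison concrete, here is a toy recreation with invented per-link costs chosen to reproduce the slide's totals (real metrics depend on the fabric protocol — e.g., link speed typically drives the cost in TRILL/SPB):

```python
# Hypothetical link costs; only the totals (13 vs. 25) come from the slide.
link_cost = {
    (8, 1): 2, (1, 9): 4, (9, 5): 3, (5, 4): 4,   # links on the best path
    (1, 2): 8, (2, 9): 10, (9, 4): 5,             # extra links on the alternate
}

def path_cost(path):
    """Sum the cost of each hop along a switch-by-switch path."""
    return sum(link_cost[(a, b)] for a, b in zip(path, path[1:]))

print(path_cost([8, 1, 9, 5, 4]))  # 13 -- the chosen path
print(path_cost([8, 1, 2, 9, 4]))  # 25 -- a costlier alternate
```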
  30. Ethernet Tech – L2 DCI - 1 •  DCI =

    Data Center Interconnect •  L2 DCI stretches VLANs between data centers. •  Is this a switch or edge router technology? Yes. –  Cisco – Overlay Transport Virtualization (OTV) –  Juniper – Virtual Private LAN service (VPLS) –  HP – Ethernet Virtual Interconnect (EVI) •  Can you use SPB or TRILL for DCI? Some do – it’s a design question with many considerations. •  So…why DCI? Workload mobility (vMotion), active-active data centers.
  31. Ethernet Tech – L2 DCI - 2 Stretched VLAN x

    allows the 10.1.1.0/24 network to co-exist in both DC1 & DC2. The DCI switches take care of complications related to spanning-tree and broadcast domains.
  32. Ethernet Tech – SDN & OpenFlow •  SDN = software

    defined networking •  SDN is a foundational technology that will enable a new breed of integrated applications. •  Look for… –  OpenFlow support. –  Programmability. –  VXLAN Tunnel Endpoint (VTEP). •  Ask vendors how the switch fits into their SDN strategy.
  33. Ethernet Tech – Bare Metal & Whiteboxes •  Two sides

    of the same coin. –  What are bare metal switches? •  A switch with no OS on it. •  “Generic” Ethernet switches based around merchant silicon from vendors like Accton, Celestica, and Quanta. –  What is whitebox switching? •  Starts as a bare metal switch. •  You (or a vendor) put an OS on the switch. Think Big Switch, Cumulus, Pica8, but note selective hardware compatibility. •  What’s the point? Inexpensive switching for shops that know exactly what they need, and are likely pushing towards SDN control of their network.
  34. Questions?

  35. Stay in touch! Podcast PacketPushers.net Blog EthanCBanks.com Blog NetworkComputing.com E-Mail

    ethan.banks@packetpushers.net Twitter @ecbanks Also… LinkedIn & Google+
  36. The Ethernet Switching Landscape Hour 2: Vendors & Their Offerings

  37. Introduction - 1 •  Who am I? –  Ethan Banks,

    CCIE #20655. –  Senior Network Architect @ Carenection. –  Co-host of the Packet Pushers Podcast. –  Freelance writer for Network Computing and others.
  38. Introduction - 2 •  Who are these sessions for? – 

    Network engineers & architects with infrastructure to build. –  Technical managers needing to understand the business impact of new Ethernet technologies. –  Folks in the enterprise / data center world.
  39. Introduction - 3 •  What did we cover in hour 1

    (technology focused)? –  Speeds & latency –  MLAG & ECMP –  SPB & TRILL –  Leaf/spine –  Physical concerns: optics & cabling –  Openflow, SDN, and network virtualization –  Whitebox switching
  40. Introduction - 4 •  What will we cover in hour

    2 (vendor focused)? –  Established players: Arista, Avaya, Brocade, Cisco, Dell, Extreme, HP, IBM, and Juniper. –  Challengers: Big Switch, Cumulus, Huawei, Mellanox, NEC, Pica8, Plexxi, and Pluribus.
  41. Introduction - 5 •  Housekeeping –  Please ask questions along

    the way. –  I will not be offended if you need to leave. –  The purpose of this session is to educate, not market. I have no vested interest in any product I mention.
  42. Established Players - 1 •  Value proposition is high-density, low-latency

    switches at a low cost per port. •  Emphasis on engineer-friendliness. •  Focused on high-volume data centers. •  Arista espouses a L3 mesh with L2 overlay design. •  Differentiator: Extensible OS, their pride & joy. •  VM Tracer talks to vCenter, tracks VMs, integrates VLAN creation.
  43. Established Players – 1.1

  44. Established Players - 2 •  Purchased Nortel’s enterprise business in

    2009. •  Full switch range suitable for the enterprise space. •  Differentiator: Virtual Enterprise Network Architecture (VENA) •  Fabric Connect component of VENA is probably the most interesting element – Shortest Path Bridging (802.1aq).
  45. Established Players – 2.1

  46. Established Players - 3 •  Purchased Foundry Networks in 2008.

    •  Often thought of as a storage player, but has pushed hard to gain data center share. •  Full range of switches. •  Differentiators: VCS Fabric (based on TRILL), AMPP, “ease of use.” •  Building strong SDN team.
  47. Established Players – 3.1

  48. Established Players - 4 •  The market share leader in

    Ethernet switching. •  The Ethernet portfolio features the Catalyst & Nexus lines. •  More or less, Catalyst is the campus/enterprise play, and Nexus is the data center play. •  Some Catalyst & Nexus products compete. •  FabricPath is based on TRILL, and offers a “standards-based TRILL” mode. •  SDN strategy is overly broad, but gaining focus.
  49. Established Players – 4.1

  50. Established Players - 5 •  Dell purchased Force10 in 2011.

    •  Dell’s offerings are a mix of SMB gear (generic Dell) & “Data Center Networking” gear (Force10). •  The point for Dell is to have a play across the entire DC, complementing their storage & server business. •  Differentiator: Dell Networking Open Automation •  Announced Active Fabric Manager & Controller.
  51. Established Players – 5.1

  52. Established Players - 6 •  Extreme purchased Enterasys in 2013.

    •  “There is no Enterasys. Only Extreme.” •  How does this pairing make sense? •  Almost no sales territory overlap. •  $600M in aggregate revenue expected. •  Extreme is all Broadcom. Enterasys has custom ASICs. •  Enterasys is a strong wifi play, anticipates all-wifi access layer. •  Enterasys brings strong network management to the table.
  53. Established Players – 6.1 •  Extreme has a full line

    of switches, but not “exciting.” •  What Extreme picked up in the Enterasys purchase might be the best kept secret in enterprise networking. Mature solution – since 2001. •  Differentiator: CoreFlow ASIC allows Extreme to track L4-L7 data about endpoints & apply policy. •  Integrations with AirWatch, Citrix, iBoss, MobileIron, PaloAlto, VMware, Hyper-V. •  Purview application does heavy network analytics.
  54. Established Players – 6.2

  55. Established Players – 6.3 Extreme OneFabric – manage the network

    between users and applications as one entity.
  56. Established Players - 7 •  Purchased 3Com in 2009. • 

    Excessively full range of switches. •  Product lines are a mix of “ProCurve” and “H3C”. •  Differentiator: OpenFlow + SDN applications, FlexNetwork Architecture (FlexFabric, FlexCampus, FlexBranch), IRF •  “Flex” is a collection of technologies HP assembles (TRILL, SPB, IRF, EVB, VEPA).
  57. Established Players – 7.1

  58. Established Players - 8 •  IBM acquired Blade Network Technologies

    in 2010. •  IBM doesn’t want to sell you a switch. They want to sell you a business system (that includes switches). •  Networking site features SDN prominently. •  Differentiator: Flex System Fabric Network – integrated FC, FCoE, Ethernet, & Infiniband managed by a single GUI, and extended into the virtual switching layer.
  59. Established Players - 9 •  2 main switching lines: QFX

    & EX. •  QFX are positioned for top-of-rack or end-of-row data center deployment, and as the access layer of a QFabric system. •  EX are positioned for enterprise, campus, DC & service provider. •  Full line of switches. •  Network engineers tend to love Junos. •  Differentiator – Virtual Chassis. Manage up to 10 switches as a single device. •  Identity crisis? Service provider vs. enterprise businesses.
  60. Established Players – 9.1

  61. Challengers - 1 •  A longtime leader in the SDN

    space. •  Solution is an OpenFlow controller + switch OS. •  Big Switch Controller is the commercial offering. •  FloodLight is the open source offering. •  Switch Light – thin OS programmed via OpenFlow to run on bare metal switches. •  Main applications are Big Virtual Switch & Big Tap. •  Differentiator – long list of partners.
  62. Challengers 1.1 “The Big Switch Networks Open SDN is

    based on a three-tier architecture: northbound open APIs, an open-core controller, and southbound standards-based data plane communication protocols.” http://bigswitch.com/products/open-sdn
  63. Challengers - 2 •  This isn’t a switch…it’s a “Linux

    operating system for networking hardware.” Load Cumulus Linux on a bare metal switch; manage your network on Linux. •  They are not merely based on Linux, but are in fact completely Linux, bash shell and all. •  Differentiator – run open-source OS & tools on your network hardware*. Leave behind proprietary. •  Partnership with Dell announced; Cumulus Linux can run on Dell S6000 and S4810 ToR switches. *just be sure to check the HCL.
  64. Challengers – 2.1 “Unlike many industry competitors, we are

    NOT a Linux-based operating system, we ARE Linux. We offer the entirety of the Linux experience, as you understand it. The front panel ports of the switching fabric appear to the Linux kernel as if they are standard NICs. In other words, we accelerate the data path using the switching silicon while preserving the control and management abstractions of standard Linux.” http://cumulusnetworks.com/product/architecture/
  65. Challengers - 3 •  Huawei is huge everywhere in the

    world except the U.S. •  There are two major businesses related to networking: telecom & enterprise. Huawei is committed to the enterprise market in the U.S. •  Campus switching line goes from the S1700 to S12700. •  The DC line is the CloudEngine 5800, 6800, & 12800. •  CSS = Cluster Switch System – up to 4 physical into 1 virtual. •  TRILL for L2 multipath in the CE line.
  66. Challengers 3.1 “Huawei CloudEngine (CE) series includes the CE12800

    flagship core switches with the world's highest performance, and CE6800/5800 high-performance box switches (for 10GE/GE access). The CE series uses Huawei's next-generation VRP8 software platform and supports extensive data center and campus network services.” http://www.huaweienterpriseusa.com/products/network/switches/data-center-switches/cloudengine-series-data-center-switches
  67. Challengers - 4 •  High performance, non-blocking, data center-focused

    switches using custom ASICs. •  ConnectX – NICs. •  SwitchX – switches. •  Open-sourced their OS, calling it “Open Ethernet”. •  Differentiators: •  Run Ethernet and Infiniband in the same switch, and bridge between the two. •  Extremely low latency Ethernet switches.
  68. Challengers 4.1 “Using SX1036 switches with 36 40GbE ports, the

    data center can be scaled up 4.5 times in the number of servers. This is achieved by having 18 switches in the 1st-tier and 36 in the 2nd. Each switch in tier 1 interconnects with all 36 switches in tier 2. This way, tier-2 switches each remain with 18 ports to ToR (leaf) switches, or 4 leafs each. In other words, the data center can scale up to 144 ToR switches, for a total of 6912 servers.” http://www.mellanox.com/related-docs/whitepapers/SX1036-The-Ideal-40GbE-Aggregation-Switch.pdf
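The quoted topology checks out as plain arithmetic. A quick sketch (the 48-servers-per-ToR figure is implied by the quoted totals, not stated outright):

```python
# All inputs are from the Mellanox quote: 36-port 40GbE switches,
# 18 in tier 1, 36 in tier 2, 144 ToR switches, 6912 servers.
tier1_switches, tier2_switches, ports_per_switch = 18, 36, 36

# Each tier-2 switch spends one port per tier-1 switch; the rest face ToRs.
leaf_ports_per_tier2 = ports_per_switch - tier1_switches   # 18
total_leaf_ports = tier2_switches * leaf_ports_per_tier2   # 648 x 40GbE

tor_switches, servers = 144, 6912
print(servers // tor_switches)   # 48 servers behind each ToR switch (implied)
print(total_leaf_ports)          # 648 fabric ports available for ToR uplinks
```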
  69. Challengers - 5 •  ProgrammableFlow is an OpenFlow-based combination of

    a controller, Ethernet switches, a virtual switch, and an umbrella manager handling 10 controllers. •  Switches are pointed to a controller. A controller configures the switches via OpenFlow. •  Differentiator – the most mature pure SDN solution on the market.
  70. Challengers 5.1

  71. Challengers 5.2

  72. Challengers - 6 •  Emphasis on open networking via PicOS.

    •  They make “white box” switches, all non-blocking with 1 microsecond or less latency. •  Low cost per port. •  Offers an Open SDN Starter Kit. •  Differentiator – an inexpensive switch with all the usual L2/L3 functionality, plus support for OpenFlow 1.3 to support any sort of clever SDN you’d like to deploy.
  73. Challengers 6.1

  74. Challengers 6.2 “Pica8 has packaged an easy-to-use SDN Starter Kit

    designed to provide everything you need to get your SDN lab up and running in just an hour. We've taken the guesswork out of deploying SDN by integrating a controller, physical switch, and Open-vSwitch (OVS) into one solution. Also included is one real-world SDN application that you can run – a programmable network tap including Wireshark’s network protocol analyzer.” http://www.pica8.com/documents/pica8-datasheet-sdn-starter-kit.pdf
  75. Challengers - 7 •  Thought leaders in the SDN space.

    •  Data from applications & flow inform a controller. •  Operators configure “affinities.” •  Differentiator – hardware & software solution. •  Optical interconnect using DWDM providing direct links to a mesh of switches: LightRail. •  Controller programs optimal paths based on affinities. •  Data Services Engine normalizes & abstracts data sources.
  76. Challengers 7.1 “The optical CrossPoint itself acts as a

    passive optical connection, meaning optical traffic can pass through one Plexxi switch to an adjacent Plexxi switch without incurring an Ethernet switch hop. More precisely, there is no optical-electronic-optical conversion for LightRail traffic not terminating on the switch.” http://www.plexxi.com/wp-content/uploads/2013/11/Switch-2-Product-Brief.pdf
  77. Challengers 7.2 “The multidimensional LightRail interfaces also allow for massively

    scalable architectures. By interconnecting many rings, the Plexxi Switch 2 enables full Torus (or Manhattan-style grid) topologies capable of supporting more than 100,000 access ports.” http://www.plexxi.com/wp-content/uploads/2013/11/Switch-2-Product-Brief.pdf
  78. Challengers - 8 •  Not just a switch. A “server-switch.”

    •  Pluribus has combined a server with a switch, termed the Freedom architecture. Why? Enable applications to tightly interact with the network. •  Examples of how this is useful include: •  Single point of fabric management that includes HA. •  Fabric-wide analytics (not analysis at one point). •  Granular traffic management across the fabric without having to understand the physical topology.
  79. Challengers 8.1 “F64-M offers a 2U server-class single socket with

    fabric-wide analytics and underlay virtualization. F64-L offers a high performance control plane for highly virtualized large scale Layer 2 and Layer 3 networks and fabric services. F64-XL offers high performance for Layer 4 through Layer 7 applications and advanced storage options.” http://pluribusnetworks.com/media/briefs/freedom-datasheet-final.pdf
  80. Challengers 8.2 “DevOps and NetOps now have an open architecture

    to program, virtualize and automate the network exactly like a server, with bare-metal performance efficiency, availability and security.” http://pluribusnetworks.com/media/briefs/pn-arch-brief-final.pdf
  81. Questions?

  82. Stay in touch! Podcast PacketPushers.net Blog EthanCBanks.com Blog NetworkComputing.com E-Mail

    ethan.banks@packetpushers.net Twitter @ecbanks Also… LinkedIn & Google+