
What Happened in Data Center Networking?

Phil Huang
August 25, 2017

#OpenCORD #ONF #Edgecore #Trellis #BSN #BCF #ProjectOlympus

Transcript

  1. What Happened in Data Center Networking?
    Phil Huang
    Open Networking Solution Engineer, Edgecore Networks
    Digital Ocean HsinChu, Taiwan, Aug 25, 2017

  2. Phil Huang 黃秉鈞 (小飛機)
    • Edgecore Networks Solution Engineer
    • ONF CORD / Atrium
    • BigSwitch / Pica8 / CumulusLinux
    • Open Source SI
    • ONF CORD Ambassador
    • SDNDS-TW Co-Founder
    8/26/17 © 2017 Edgecore Networks. All rights reserved | www.edge-core.com 2

  3. Why Edgecore Networks?
    Delivering at Scale
    Network OS partners: Cumulus® Linux®, ICOS, SONiC
    Open Hardware / Open Source Software
    Delivery & Support

  4. Network Evolution

  5. Facebook Datacenter
    Ref: http://www.zdnet.com/pictures/facebooks-data-centers-worldwide-by-the-numbers-and-in-pictures/

  6. Open Networking Evolution
    DC Core
    Data Center Clos Fabric
    Cloud Service Providers
    Telecom Service Providers
    Enterprise & Campus
    IXP

  7. Underlay Network Evolution for Data Center
    From the Three-Tier Architecture to the Leaf-Spine Architecture (Facebook Fabric)
    Ref: https://code.facebook.com/posts/360346274145943/introducing-data-center-fabric-the-next-generation-facebook-data-center-network/
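The draw of leaf-spine over three-tier is path diversity: every leaf reaches every other leaf through any spine. A minimal sketch of that property (switch names are illustrative, not Facebook's actual fabric):

```python
# Sketch: path diversity in a two-stage leaf-spine Clos fabric.
# Switch names are illustrative.
def leaf_to_leaf_paths(spines, src_leaf, dst_leaf):
    """Every leaf connects to every spine, so each leaf-to-leaf path is
    leaf -> spine -> leaf: one equal-cost path per spine."""
    return [(src_leaf, s, dst_leaf) for s in spines]

# Four spines give four equal-cost paths for ECMP to spread flows across;
# adding a spine adds both capacity and another path.
paths = leaf_to_leaf_paths(["spine1", "spine2", "spine3", "spine4"],
                           "leaf1", "leaf2")
print(len(paths))  # 4
```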

  8. Overlay Network Evolution for Data Center
    Tenant A / Tenant B / Tenant C
    Physical Network Infrastructure
    Abstract network view for a tenant:
    • Decoupled from physical infra
    • Composed as a set of logical network resources

  9. Next Gen Data Center Networking
    • Trend 1: Disaggregation and White box
    • Trend 2: Virtualization, Overlays, and OpenStack
    • Trend 3: Two-stage Leaf-Spine Clos Fabrics with ECMP and Pods
    • Trend 4: SDN, Policy, and Intent
    • Trend 5: Big Data and Analytics
    Ref: https://www.linux.com/blog/event/open-networking-summit/2017/3/linux-foundation-highly-relevant-data-center-networking-evolution-says-sdxcentral-report

  11. Open Compute Project, OCP
    • Founded in 2011
    • Global community for Open IT hardware
    • Increased Flexibility
    • Push for standard HW and Reduced Cost
    • Initial Data Center focus
    • Now broadening to telecom and Enterprise
    • Disaggregated
    • Fully open hardware with enabling software

  12. What’s Inside the Switch Box?
    Silicon
    Hardware Driver
    Control / Management Software
    Network OS
    Mechanical Box

  13. OCP, Networking
    • Fully disaggregated and open networking HW & SW
    • Operating System - Linux-based operating systems, developer tools, and REST APIs
    • Fully automated configuration management & bare-metal provisioning
    • Universal & multi-form-factor switch motherboard hardware
    • Fully open integration & connectivity
    • Energy-efficient power & cooling designs
    • Software Defined Networking (SDN)
    Ref: http://www.opencompute.org/wiki/Networking

  14. Example: Wedge 100
    Facebook Design
    Hardware: CPU: Intel/ARM/…; ASIC: Broadcom/Mellanox/…
    Software: NOS: Open Network Linux; Forwarding Agent: FBOSS; BMC: OpenBMC
    "Switch as a Server"
    Ref: https://code.facebook.com/posts/681382905244727/introducing-wedge-and-fboss-the-next-steps-toward-a-disaggregated-network/

  15. OCP Networking - Software
    Switch Abstraction Interface, SAI
    − Defines an API that provides a vendor-independent way of controlling forwarding elements, such as a switching ASIC, an NPU, or a software switch, in a uniform manner
    Open Network Linux, ONL
    − Linux distribution (Debian-based) with added drivers and configuration for running bare-metal switches
    Open Optical Monitoring, OOM
    − Makes the contents of optical module EEPROMs accessible to Python programmers
    Open Network Install Environment, ONIE
    − An open "install environment" for bare-metal network switches
    − ONIE enables a bare-metal switch ecosystem where end users can choose among different network operating systems
    Ref: https://github.com/opencomputeproject
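The idea behind SAI can be sketched as a thin abstraction layer that a NOS programs, with a per-ASIC driver behind it. The class and method names below are illustrative stand-ins, not the real SAI C API:

```python
# Sketch of the SAI concept: one vendor-neutral interface, many drivers.
# SwitchAbstraction / BroadcomDriver / create_route are hypothetical names.
from abc import ABC, abstractmethod

class SwitchAbstraction(ABC):
    """Vendor-independent control surface the NOS codes against."""
    @abstractmethod
    def create_route(self, prefix: str, next_hop: str) -> None: ...

class BroadcomDriver(SwitchAbstraction):
    """One possible backend; a Mellanox or software-switch driver would
    implement the same interface."""
    def __init__(self):
        self.fib = {}
    def create_route(self, prefix, next_hop):
        # A real driver would call the vendor SDK here; we just record it.
        self.fib[prefix] = next_hop

asic: SwitchAbstraction = BroadcomDriver()
asic.create_route("10.0.0.0/24", "192.168.1.1")
```

In the real project SAI is a C API, consumed by network operating systems such as SONiC; the sketch only shows the dependency-inversion idea.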

  16. SDN Based Switch Models
    Traditional Switch: Applications, Control Plane, and Data Plane integrated in one box
    SDN-based System: Applications and Control Plane in an external controller, driving the Data Plane over a control protocol
    Open Networking Switch: Applications run on a Linux OS directly above the Data Plane

  17. Big Switch Networks
    Big Cloud Fabric

  18. Shared "One Big Switch" Architecture
    Traditional Netframe Design
    § Single point of management
    § Proprietary, Vendor Lock-in, Fixed slots
    Big Cloud Fabric
    § Hierarchical control plane via the Big Cloud Fabric Controller
    § Spine and leaf switches form a 10G/40G "backplane", with 1G/10G/40G down to workloads
    § Disaggregates the netframe into one "Big Switch"
    § Open, Centralized management
    § Easy to scale out your network

  19. Overview of Big Cloud Fabric
    Open, Economical Solution for Existing Enterprise & Service Provider Data Centers
    Multi-Orchestrated VM/Container
    Big Switch Controller: single programmatic interface for up to a 64-rack fabric; full automation for provisioning, HA/resiliency & management
    Switch Light OS: Open Network Linux (ONL) based OS for Edgecore Networks switches
    OCP-enabled Switch: highly customizable, high-quality switch hardware
    Switch Light Virtual: for OpenStack/OpenShift/Kubernetes deployments

  20. Distributed Logical Routing
    Host 1
    10.50.1.2
    Host 2
    10.50.1.3
    10.50.1.0/24
    Host 3
    10.50.2.2
    Host 4
    10.50.2.3
    10.50.2.0/24
    Logical Network
    (TENANT T1)
    Rack 1 Rack 2
    Spine
    Router IP 10.50.1.1 Router IP 10.50.2.1
    Segment Green
    10.50.2.0/24
    Segment Orange
    10.50.1.0/24
    Host 1
    10.50.1.2
    Host 3
    10.50.2.2
    Host 2
    10.50.1.3
    Host 4
    10.50.2.3
    Physical Network
    Logical
    Tenant Router T1
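The distributed-routing picture above comes down to a decision each first-hop leaf makes for tenant T1. A sketch using the slide's addresses (the `needs_routing` helper is hypothetical):

```python
# Sketch: the forwarding decision a distributed tenant router makes at the
# first-hop leaf. Segments and addresses follow the slide's topology.
import ipaddress

SEGMENTS = {
    "orange": ipaddress.ip_network("10.50.1.0/24"),
    "green":  ipaddress.ip_network("10.50.2.0/24"),
}

def needs_routing(src: str, dst: str) -> bool:
    """Same segment -> plain L2 switching; different segments -> routed
    locally by tenant router T1's instance on the leaf, not hairpinned
    through a central box."""
    def seg(ip):
        return next(n for n, net in SEGMENTS.items()
                    if ipaddress.ip_address(ip) in net)
    return seg(src) != seg(dst)

print(needs_routing("10.50.1.2", "10.50.1.3"))  # False: Host1 -> Host2, bridged
print(needs_routing("10.50.1.2", "10.50.2.2"))  # True: Host1 -> Host3, routed
```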

  21. Test Path: Visibility & Network Troubleshooting
    Topology: Spine 1-3; leaves R1L1/R1L2, R2L1/R2L2, R3L1/R3L2
    Source container on Mesos-agent-2 (7Xkf8don6Y), destination container on Mesos-agent-3 (nmDh0cpymd)
    Fabric ports on the path: Ethernet 26, 49, 5, 2, 50, 19; host NICs enp5s0f1, enp8s0f0; vSwitch ports qvo7Xkf8don6Y, qvonmDh0cpymd
    BIG CLOUD FABRIC CONTROLLER (CLI, GUI, API)
    One-click flow trace across the fabric
    No box-by-box hopping
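With the topology centralized in a controller, a fabric-wide test path reduces to a graph search instead of box-by-box CLI hopping. A sketch using the slide's switch names (the adjacency list and `trace` helper are illustrative, not the BCF API):

```python
# Sketch: a controller that knows the whole topology can answer a path
# query with a graph search. Switch names follow the slide.
from collections import deque

LINKS = {
    "R1L1": ["Spine1", "Spine2", "Spine3"],
    "R3L2": ["Spine1", "Spine2", "Spine3"],
    "Spine1": ["R1L1", "R3L2"],
    "Spine2": ["R1L1", "R3L2"],
    "Spine3": ["R1L1", "R3L2"],
}

def trace(src, dst):
    """Breadth-first search: returns one shortest switch-level path."""
    q, seen = deque([[src]]), {src}
    while q:
        path = q.popleft()
        if path[-1] == dst:
            return path
        for nxt in LINKS.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                q.append(path + [nxt])

print(trace("R1L1", "R3L2"))  # ['R1L1', 'Spine1', 'R3L2']
```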

  22. ONF Trellis

  23. CORD Architecture
    R,E,M-Access (Residential, Enterprise, Mobile) to the Metro Router
    Fabric of open source, SDN-based, bare-metal white box switches
    ONOS Controller Cluster: vRouter Control, Underlay Control, Overlay Control, vOLT Control
    XOS (Orchestrator)
    vSG and VNF instances chained over OVS
    Underlay and Overlay networks carrying Residential, Mobile, and Enterprise services

  24. What is Trellis?
    Datacenter Leaf-Spine
    Fabric Underlay
    Virtual Network
    Overlay
    Unified SDN Control
    Of Underlay & Overlay
    ONOS
    Controller Cluster &
    Apps
    Trellis is the enabling network infrastructure for CORD.
    Trellis provides common control over underlay & overlay networks, including:
    1. Service Composition for Tenant Networks
    2. Distributed Virtual Routing
    3. Optimized Delivery of Multicast Traffic Streams

  25. Underlay Fabric – Open Hardware
    White Box SDN Switch: Edgecore AS6712-32x
    Spine Switch
    • 32 x 40G QSFP+/DAC ports downlink to leaf switches
    • GE mgmt.
    Leaf Switch
    • 24 x 40G ports downlink to servers and vOLT
    • 8 x 40G ports uplink to different spine switches
    • ECMP across all uplink ports
    • GE mgmt.
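ECMP across a leaf's uplinks can be sketched as a per-flow hash: packets of one flow always take the same uplink (no reordering), while distinct flows spread out. Real ASICs hash in hardware, so the CRC and field choice here are only illustrative:

```python
# Sketch: hash-based ECMP selection across 8 spine uplinks, keyed on the
# 5-tuple. The hash function and uplink names are illustrative.
import zlib

UPLINKS = [f"uplink{i}" for i in range(8)]

def pick_uplink(src_ip, dst_ip, proto, sport, dport):
    """Consistent per-flow mapping: same 5-tuple -> same uplink."""
    key = f"{src_ip}|{dst_ip}|{proto}|{sport}|{dport}".encode()
    return UPLINKS[zlib.crc32(key) % len(UPLINKS)]

# Every packet of this TCP flow hashes to the same uplink:
a = pick_uplink("10.0.0.1", "10.0.1.1", 6, 12345, 80)
assert a == pick_uplink("10.0.0.1", "10.0.1.1", 6, 12345, 80)
```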

  26. Underlay Fabric – Software Stacks
    Leaf/Spine Switch Software Stack, top to bottom:
    ONOS (speaking OpenFlow 1.3)
    Indigo OF Agent
    OF-DPA API / OF-DPA
    BRCM SDK API
    BRCM ASIC
    OCP Software: ONL, ONIE
    OCP Bare Metal Hardware
    Versions: ONL-2.0.0-ONL-OS-DEB8-2016-12-22, OF-DPA 3.0 EA4, ONOS 1.8.9, CORD-3.0
    OCP: Open Compute Project; ONL: Open Network Linux; ONIE: Open Network Install Environment; BRCM: Broadcom merchant silicon ASICs; OF-DPA: OpenFlow Datapath Abstraction

  27. L2 Unicast
    Leaf1 Leaf2
    Spine1 Spine2
    Host1 Host2 Host3
    OLT
    Upstream
    Router
    Quagga
    ONOS
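The L2 unicast behavior in the diagram boils down to learn-and-forward per bridging domain: learn source MACs, forward known unicast out the learned port, flood unknowns. A real Trellis fabric programs this into the ASICs via ONOS; this toy class is only illustrative:

```python
# Sketch: MAC learning and forwarding for one bridging domain.
class L2Table:
    def __init__(self, ports):
        self.ports = ports
        self.table = {}  # MAC -> port

    def receive(self, src_mac, dst_mac, in_port):
        self.table[src_mac] = in_port            # learn the source
        if dst_mac in self.table:
            return [self.table[dst_mac]]         # known unicast: one port
        return [p for p in self.ports if p != in_port]  # unknown: flood

sw = L2Table(["p1", "p2", "p3"])
assert sw.receive("aa:aa", "bb:bb", "p1") == ["p2", "p3"]  # unknown: flood
assert sw.receive("bb:bb", "aa:aa", "p2") == ["p1"]        # learned: unicast
```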

  28. L2 Broadcast
    Leaf1 Leaf2
    Spine1 Spine2
    Host1 Host2 Host3
    OLT
    Upstream
    Router
    Quagga
    ONOS

  29. L3 Unicast
    Leaf1 Leaf2
    Spine1 Spine2
    Host1 Host2 Host3
    OLT
    Upstream
    Router
    Quagga
    ONOS

  30. L3 Multicast
    Leaf1 Leaf2
    Spine1 Spine2
    Host1 Host2 Host3
    OLT
    Upstream
    Router
    Quagga
    ONOS

  31. vRouter Integration
    Leaf1 Leaf2
    Spine1 Spine2
    Host1 Host2 Host3
    OLT
    Upstream
    Router
    Quagga
    ONOS
    BGP
    Data

  32. vSG Integration
    Leaf1 Leaf2
    Spine1 Spine2
    Host1 Host2 Host3
    OLT
    Upstream
    Router
    Quagga
    ONOS
    Q-in-Q

  33. Putting it all together…
    Leaf1 Leaf2
    Spine1 Spine2
    Host1 Host2 Host3
    OLT
    Upstream
    Router
    Quagga
    ONOS

  34. Virtual Network Overlay
    Service VNFs & vNets: non-overlapping addresses; services can dynamically grow or shrink
    Tenant virtual networks (Tenant Green, Tenant Blue; Service B, Service Y): overlapping address space, connectivity isolation
    VMs/Containers attach through OVS on each node
    VXLAN overlays between the OVS instances, with a single VXLAN port in OVS
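The VXLAN encapsulation that makes overlapping tenant address spaces possible carries a 24-bit VNI in an 8-byte header (RFC 7348), so each tenant network gets its own segment over the shared underlay. A sketch of building that header:

```python
# Sketch: packing the 8-byte VXLAN header from RFC 7348.
# Flags byte 0x08 sets the I bit (valid VNI); the 24-bit VNI sits in
# bytes 4-6, followed by a reserved byte.
import struct

def vxlan_header(vni: int) -> bytes:
    assert 0 <= vni < 2**24, "VNI is 24 bits"
    return struct.pack("!II", 0x08 << 24, vni << 8)

hdr = vxlan_header(5001)
assert len(hdr) == 8
assert int.from_bytes(hdr[4:7], "big") == 5001  # VNI occupies bytes 4-6
```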

  35. Trellis Summary
    Underlay Fabric
    • L2/L3 spine-leaf fabric – bare-metal hardware + open source software
    • SDN control plane – no distributed protocols
    • Modern ASIC data plane – 1.28 Tbps switching bandwidth per switch
    Virtual Network Overlay
    • Designed for NFV – chained VNFs following cloud best practices
    • Overlay control – XOS and VTN implement the service graph
    • OVS + VXLAN data plane
    Unified SDN Control
    • Common control – opportunity for optimized service delivery

  36. 8/26/17 © 2017 Edgecore Networks. All rights reserved | www.edge-core.com 36

    View Slide

  37. FRRouting (Free Range Routing)
    • IP routing protocol suite for Linux and Unix platforms
    • Includes protocol daemons for BGP, IS-IS, LDP, OSPF, PIM, EIGRP, and RIP
    • Seamless integration with native Linux/Unix IP networking stacks, including connecting hosts / VMs / containers
    • Fork of Quagga
    • Community-driven, based on GitHub, a mailing list, and Slack
    Ref: https://frrouting.org/
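Underneath any such suite, the protocol daemons compute routes and the forwarding table answers each lookup by longest prefix match. A sketch of that lookup (the routes are illustrative):

```python
# Sketch: longest-prefix-match lookup over an illustrative FIB.
# The most specific matching prefix wins.
import ipaddress

FIB = {
    ipaddress.ip_network("0.0.0.0/0"):    "upstream",
    ipaddress.ip_network("10.50.0.0/16"): "fabric",
    ipaddress.ip_network("10.50.2.0/24"): "leaf2",
}

def lookup(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    best = max((n for n in FIB if addr in n), key=lambda n: n.prefixlen)
    return FIB[best]

assert lookup("10.50.2.9") == "leaf2"    # /24 beats /16
assert lookup("10.50.9.9") == "fabric"   # /16 beats the default
assert lookup("8.8.8.8") == "upstream"   # default route catches the rest
```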

  38. Major Changes
    BGP EVPN
    Ref: https://github.com/FRRouting/frr/wiki/Major-Changes

  39. Continuous Integration
    Ref: https://ci1.netdef.org/browse/FRR-FRR-470/test

  40. FRRouting Testing Report
    Ref: https://frrouting.org/test-results/BGP4_extended_results.pdf

  41. How to Install FRRouting?
    Ref: https://github.com/FRRouting/frr/tree/master/doc

  42. 8/26/17 © 2017 Edgecore Networks. All rights reserved | www.edge-core.com 42
    Ref: https://twitter.com/menotyousharp/status/859802897722335236

  43. Summary
    • Hardware and software disaggregation
    • Unified, centralized control and management
    • Flexibility
    • Security
    • Visibility
    • Lower CAPEX and OPEX
    • Deliver new services quickly and efficiently
    "You disaggregate to get choice; you aggregate to get efficiencies"

  44. Join Us!

  45. Appendix
    • Software for Open Networking in the Cloud (SONiC)
    • http://azure.github.io/SONiC/
    • Ecosystem momentum positions Microsoft's Project Olympus as de facto open compute standard
    • https://azure.microsoft.com/en-us/blog/ecosystem-momentum-positions-microsoft-s-project-olympus-as-de-facto-open-compute-standard/
    • Channel 9 - Microsoft Project Olympus
    • https://channel9.msdn.com/Series/Microsoft-Global-Datacenters/Microsoft-Project-Olympus
    • GitHub - Project Olympus
    • https://github.com/opencomputeproject/Project_Olympus

  46. Open Networking
    Freedom / Control / Innovation
