Introduction of ONF CORD in Global SDNFV Tech 2017

Phil Huang
August 02, 2017

#opencord #onf

Transcript

  1. 1

  2. Introduction of ONF CORD
    Phil Huang 黃秉鈞
    [email protected] / [email protected]
    ONF CORD Ambassador / Edgecore Networks Solution Engineer
    Global SDNFV Tech Conference 2017, Beijing, China, August 2, 2017

  3. 黃秉鈞 Phil Huang
    • ONF CORD Ambassador Member
    • Edgecore Solution Engineer
    § ONF CORD / Atrium
    § Big Switch / Cumulus Linux / Pica8
    • Co-founder of SDNDS-TW (SDN Developers Community, Taiwan)
    3
    Ref: https://www.linkedin.com/in/phil-huang-09b09895/

  4. 4

  5. Telco Central Office (CO)
    Mobile / Residential / Enterprise
    Central Office: can be small or large, and has different names in different contexts
    5
    • The CO is a service provider’s gateway to its customers
    • There are 1,000+ COs
    • Each CO may support:
    § 10K+ residential subscribers
    § 10K+ mobile subscribers
    § 1K+ enterprise customers
    • The CO offers a great vantage point for service providers

  6. Residential Network
    6
    PC → CPE → ONU → OLT → BNG → Internet
    (Home / Roadside / CO)

  7. Mobile Network
    7
    Phone → eNB → BBU → SGW → BGW → Internet
    (User / Field / CO)

  8. Enterprise Network
    8
    PC → CPE → EE → TE → ROADM → Metro Net → ROADM → Internet
    (Office / CO / Metro / CO)

  9. Challenges
    • Source of high CAPEX and OPEX
    • Lack of programmability inhibits innovation
    • Limits ability to create new services and revenue
    → Hard to create innovative services
    9

  10. What is CORD?
    10
    Central Office Re-architected as a Datacenter
    SDN + NFV + Cloud
    Open Source Software
    Commodity Hardware (Servers, White-Box Switches, I/O Blades)
    • Large number of COs
    • Evolved over 40-50 years
    • 300+ types of equipment
    • Huge source of CAPEX/OPEX

  11. CORD Aims to Deliver
    11
    Agility of a cloud provider
    Software platforms that enable rapid creation of new services
    Economies of a datacenter
    Infrastructure built with a few commodity building blocks using
    open source software and white-box switches

  12. Design Philosophy -> Tangible Value
    12
    SDN / NFV / Cloud / XaaS
    • Extends the agility of micro-services to the access network
    • Supports legacy VNFs and pushes the limits of disaggregation
    • Interconnects VNFs and is a source of innovative services

  13. Service Provider Driven
    13

  14. Traditional Service Provider Network
    14
    ONU → Splitter → OLT → Aggregation Switch → BNG → Switch → Internet
    Reliability ☹ / Scalability ☹ / Flexibility ☹ / Cost

  15. Data Center Leaf-Spine Fabric
    15
    Spine and leaf switch tiers
    Reliability / Scalability / Flexibility / Latency / Cost / Bandwidth
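
    The payoff of the leaf-spine shape is easy to quantify: every leaf pair is exactly two hops apart, and traffic between them can be spread over as many equal-cost paths as there are spines. A minimal sketch, assuming the 2-spine/2-leaf POD fabric shown later in this deck:

```python
# Hypothetical fabric: each leaf has one 40G uplink to every spine.
spines, leaves = 2, 2
uplink_gbps = 40

paths_between_leaves = spines            # equal-cost two-hop paths per leaf pair
leaf_uplink_capacity = spines * uplink_gbps

print(f"{paths_between_leaves} equal-cost paths, "
      f"{leaf_uplink_capacity} Gbps of uplink capacity per leaf")
```

    Adding a spine adds both bandwidth and redundancy without rewiring the leaves, which is what the marks on this slide summarize.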

  16. CORD: Software Stack
    16
    XOS – multi-tenant services assembled by XOS: Access-as-a-Service, Subscriber-as-a-Service, Internet-as-a-Service, Monitoring-as-a-Service
    ONOS – control applications hosted by ONOS: Multicast Control, Fabric Control, VTN
    OpenStack / Docker – scalable services run in OpenStack VMs and Docker containers: CDN, vSG, vRouter, Ceilometer, ...

  17. CORD Architecture
    17
    Metro Router
    R,E,M-Access
    Leaf-spine fabric of white-box switches – Open Source, SDN-based, Bare-metal
    XOS (Orchestrator)
    ONOS Controller Cluster: vOLT Control, vRouter Control, Overlay Control, Underlay Control
    vSG and VNF instances running over OVS on the compute nodes (Residential / Mobile / Enterprise)

  18. Related Open Source Projects with ONF CORD
    18

  19. Build Physical POD
    Deployments
    19

  20. Server Roles
    20

  21. Software Layers
    21
    Bootstrap Layer
    Container infrastructure layer
    Basic infrastructure layer
    Physical fabric control
    Service fabric layer
    Storage layer
    OpenStack layer
    OAM services/tools layer
    Local orchestration layer
    Analytics infrastructure layer
    R-CORD meta module
    E-CORD meta module
    M-CORD meta module
    Other use-case meta modules

  22. Full POD: Definition
    • The minimum amount of hardware that can be used to perform
    a full test of the current CORD features
    22
    CORD Fabric
    4 x White-box
    switches
    Compute
    3 x x86 servers
    Suggested Components
    • OCP-qualified Switches
    § AS6712-32x 40GbE
    • Server
    § QuantaGrid D51-1U
    § Intel XL710 10/40 GbE

  23. Physical POD Topology
    23
    Ref: https://github.com/opencord/cord/blob/master/docs/quickstart_physical.md

  24. Network Connectivity: User / Data Plane
    24
    Head node 1
    Compute node 2
    Compute node 1
    Leaf 1 Leaf 2
    Spine 1 Spine 2
    Fabric
    4x White-box switches
    Compute
    3x standard x86 servers
    Access devices Metro network

  25. Network Connectivity: Complete View
    25
    Head node, Compute node 1, and Compute node 2: each has IPMI, fabric links to the leafs, and Linux mgmt to the internal network
    Leaf 1 and Leaf 2: fabric links up to Spine 1 and Spine 2
    Internal mgmt L2 switch (Mgmt)
    External network L2 switch: external access to the POD, where the operator connects (Internet)

  26. CORD Single Node
    26

  27. CORD Multi Node
    27

  28. Synchronization Process
    28

  29. Trellis
    CORD Network Infrastructure
    29

  30. What is Trellis?
    30
    Datacenter Leaf-Spine Fabric Underlay
    Virtual Network Overlay
    Unified SDN Control of Underlay & Overlay
    ONOS Controller Cluster & Apps
    Trellis is the enabling network infrastructure for CORD.
    Trellis provides common control over underlay & overlay networks, including:
    1. Service composition for tenant networks
    2. Distributed virtual routing
    3. Optimized delivery of multicast traffic streams

  31. Underlay Fabric – Open Hardware
    31
    White Box SDN Switch: Edgecore AS6712-32x
    Spine switch: 32 x 40G ports downlink to leaf switches; 40G QSFP+/DAC; GE mgmt.
    Leaf switch: 24 x 40G ports downlink to servers and vOLT; 8 x 40G ports uplink to different spine switches; ECMP across all uplink ports; GE mgmt.
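
    Since each leaf spreads eight uplinks across the spines, ECMP decides per flow which uplink carries a packet. A minimal sketch of 5-tuple flow hashing, with hypothetical port names (the real selection happens in the switch ASIC):

```python
import hashlib

# Hypothetical uplink names; a real leaf here has eight uplinks.
UPLINKS = ["uplink-to-spine1-a", "uplink-to-spine1-b",
           "uplink-to-spine2-a", "uplink-to-spine2-b"]

def ecmp_uplink(src_ip, dst_ip, proto, sport, dport):
    """Pick an uplink by hashing the 5-tuple, as the switch ASIC does
    in hardware: all packets of one flow take the same path."""
    key = f"{src_ip}|{dst_ip}|{proto}|{sport}|{dport}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    return UPLINKS[digest % len(UPLINKS)]

# Two flows between the same hosts may ride different spines:
print(ecmp_uplink("10.6.1.2", "10.6.2.2", 6, 40000, 80))
print(ecmp_uplink("10.6.1.2", "10.6.2.2", 6, 40001, 80))
```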

  32. Underlay Fabric – Software Stacks
    32
    Leaf/Spine Switch Software Stack (top to bottom):
    • ONOS (controls the switch via OpenFlow 1.3)
    • Indigo OF Agent (on the OF-DPA API)
    • OF-DPA (on the BRCM SDK API)
    • BRCM ASIC
    • OCP software: ONL, ONIE
    • OCP bare-metal hardware
    OCP: Open Compute Project; ONL: Open Network Linux; ONIE: Open Network Install Environment; BRCM: Broadcom merchant silicon ASICs; OF-DPA: OpenFlow Datapath Abstraction
    CORD-3.0 versions: ONL-2.0.0-ONL-OS-DEB8-2016-12-22, OF-DPA 3.0 EA4, ONOS 1.8.9

  33. L2 Unicast
    33
    Leaf1 Leaf2
    Spine1 Spine2
    Host1 Host2 Host3
    OLT
    Upstream
    Router
    Quagga
    ONOS

  34. L2 Broadcast
    34
    Leaf1 Leaf2
    Spine1 Spine2
    Host1 Host2 Host3
    OLT
    Upstream
    Router
    Quagga
    ONOS

  35. L3 Unicast
    35
    Leaf1 Leaf2
    Spine1 Spine2
    Host1 Host2 Host3
    OLT
    Upstream
    Router
    Quagga
    ONOS

  36. L3 Multicast
    36
    Leaf1 Leaf2
    Spine1 Spine2
    Host1 Host2 Host3
    OLT
    Upstream
    Router
    Quagga
    ONOS

  37. vRouter Integration
    37
    Leaf1 Leaf2
    Spine1 Spine2
    Host1 Host2 Host3
    OLT
    Upstream
    Router
    Quagga
    ONOS
    BGP
    Data
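
    The split in this picture is control plane versus data plane: Quagga peers over BGP with the upstream router, and the ONOS vRouter app turns each learned route into flow entries in the fabric. A toy sketch of that translation, with hypothetical structures rather than the actual ONOS API:

```python
# Toy illustration only, not the ONOS vRouter API.
def route_to_flow(prefix: str, next_hop_port: int) -> dict:
    """Translate a prefix learned by Quagga over BGP into the kind of
    (match, action) entry ONOS programs into the fabric switches."""
    return {
        "match": {"eth_type": 0x0800, "ipv4_dst": prefix},
        "actions": [{"output": next_hop_port}],
        "priority": 40000,
    }

# Quagga learns 203.0.113.0/24 from the upstream router; the fabric
# then forwards matching traffic out of a (hypothetical) port 48.
print(route_to_flow("203.0.113.0/24", next_hop_port=48))
```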

  38. vSG Integration
    38
    Leaf1 Leaf2
    Spine1 Spine2
    Host1 Host2 Host3
    OLT
    Upstream
    Router
    Quagga
    ONOS
    Q-in-Q
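
    Each subscriber's traffic arrives at the fabric double-tagged: an outer S-tag identifies the PON/OLT and an inner C-tag the subscriber, and the vSG terminates both. A minimal scapy sketch of such a frame, with hypothetical tag values and addresses:

```python
from scapy.layers.l2 import Ether, Dot1AD, Dot1Q
from scapy.layers.inet import IP

# Hypothetical tags: outer S-tag per PON/OLT, inner C-tag per subscriber.
S_TAG, C_TAG = 128, 31

frame = (
    Ether(src="00:11:22:33:44:55", dst="66:77:88:99:aa:bb")
    / Dot1AD(vlan=S_TAG)    # outer 802.1ad service tag (EtherType 0x88a8)
    / Dot1Q(vlan=C_TAG)     # inner 802.1Q customer tag
    / IP(src="10.7.1.2", dst="198.51.100.1")
)
frame.show()
```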

  39. Putting it all together…
    39
    Leaf1 Leaf2
    Spine1 Spine2
    Host1 Host2 Host3
    OLT
    Upstream
    Router
    Quagga
    ONOS

  40. Virtual Network Overlay
    40
    Service VNFs & vNets (VMs/Containers) interconnected by VXLAN overlays, with a single VXLAN port in OVS
    Service B / Service Y virtual networks: non-overlapping addresses
    Tenant Green / Tenant Blue virtual networks: overlapping address space, connectivity isolation
    Services can dynamically grow or shrink
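
    Each virtual network in the overlay is a VXLAN segment, so tenants with overlapping addresses stay isolated by VNI. A minimal scapy sketch of the encapsulation an OVS VTEP performs, with hypothetical VNIs and addresses:

```python
from scapy.layers.l2 import Ether
from scapy.layers.inet import IP, UDP
from scapy.layers.vxlan import VXLAN

GREEN_VNI, BLUE_VNI = 1001, 1002       # hypothetical per-tenant VNIs

def vtep_encap(vni, inner):
    """Add the outer headers an OVS VTEP puts on a tenant frame
    before it crosses the leaf-spine underlay."""
    return (
        Ether()
        / IP(src="192.168.0.1", dst="192.168.0.2")   # underlay VTEP addresses
        / UDP(sport=49152, dport=4789)               # IANA VXLAN port
        / VXLAN(vni=vni)
        / inner
    )

# Both tenants may use 10.0.0.0/24; the VNI keeps them apart.
inner = Ether() / IP(src="10.0.0.5", dst="10.0.0.6")
green = vtep_encap(GREEN_VNI, inner)
blue = vtep_encap(BLUE_VNI, inner)
```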

  41. Trellis Summary
    • Underlay Fabric
    § L2/L3 spine-leaf fabric – Bare-metal hardware + open source software
    § SDN control plane – No distributed protocols
    § Modern ASIC data plane – 1.28 Tbps switching bandwidth for each switch
    • Virtual Network Overlay
    § Designed for NFV – chained VNFs following cloud best practices
    § Overlay control – XOS and VTN implement the service graph
    § OVS + VXLAN Data Plane
    • Unified SDN Control
    § Common Control – Opportunity for optimized service delivery
    41
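
    The 1.28 Tbps figure follows directly from the AS6712-32x port count:

```python
# AS6712-32x: 32 ports x 40 Gbps = 1.28 Tbps of switching bandwidth.
ports, gbps_per_port = 32, 40
print(ports * gbps_per_port / 1000, "Tbps")
```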

  42. Fabric Enhancements
    • Complete IPv6 support
    • Dual-homing support (servers, access devices, upstream routers)
    • In-band control of remote access devices
    • Policies for redirecting or blocking traffic
    • Support for the latest OF-DPA chipsets (e.g., Qumran)
    • Generalized pseudowire support (E-CORD)
    42

  43. Virtual Optical Line Terminal
    Network as a Switch
    43

  44. Disaggregated Optical Line Termination
    44
    GPON Chassis-Type OLT: GPON line cards, switching board, control board, backplane
    Disaggregated OLT: GPON OLT I/O blades, ToR switch, x86 server running the vOLT control app
    Each GPON OLT I/O blade is connected to the ToR switch with a 40/100 Gbps uplink port

  45. AT&T Open GPON Hardware Spec
    • 48-port, 1RU I/O pizza box
    • GPON MAC
    • GPON protocol management
    • 802.1ad-compliant VLAN bridging
    • Ethernet MAC
    45

  46. What is vOLTHA?
    A layer of abstraction atop legacy and next-generation network
    equipment: PON today and, in the future, xDSL, DOCSIS, G.fast, and Ethernet.
    Key value adds of vOLTHA:
    • Network as a Switch – the access network is abstracted as a programmable switch
    • Evolution to virtualization – supports legacy and virtualized devices; runs on the device, on general-purpose servers, or in a DC
    • Unified OAM abstraction – provides a unified, vendor- and technology-agnostic management interface
    • vOLTHA confines the differences between access technologies to the access locality, hiding them from the upper layers of the OSS stack
    46
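
    The "Network as a Switch" idea means the whole OLT-plus-ONUs tree is presented northbound as one logical switch whose ports are the NNI uplink plus one UNI per ONU. A toy model of that abstraction, with hypothetical names rather than the actual vOLTHA data model:

```python
from dataclasses import dataclass, field

@dataclass
class LogicalPort:
    number: int
    name: str                     # e.g. "nni-1" or "uni-<ONU serial>"

@dataclass
class LogicalPONSwitch:
    """Toy model only: the OLT and all of its ONUs presented to the
    SDN controller as a single switch (not the vOLTHA data model)."""
    dpid: str
    ports: list = field(default_factory=list)

    def add_onu(self, serial: str) -> LogicalPort:
        # Each discovered ONU surfaces as one more UNI port.
        port = LogicalPort(number=len(self.ports), name=f"uni-{serial}")
        self.ports.append(port)
        return port

switch = LogicalPONSwitch(dpid="of:0000000000000001",
                          ports=[LogicalPort(0, "nni-1")])
switch.add_onu("EC1721000123")    # hypothetical ONU serial number
```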

  47. vOLTHA High Level Architecture
    47

  48. AT&T Open GPON – Software Overview
    48
    OpenFlow Controller / Configuration Controller
    vOLTHA
    Hardware
    Ref: https://wiki.opencord.org/display/CORD/VOLTHA%3A+vOLT+Hardware+Abstraction
    Ref: https://wiki.opencord.org/display/CORD/CORD+Summit+--+July+29%2C+2016?preview=/1278537/1279415/Tom%20Anschutz%20R-CORD%20Breakout.pdf

  49. Edgecore ASFvOLT16
    49
    vOLTHA side (ASFvOLT16 vOLTHA adapter):
    • asfvolt16_olt adapter (Python) – implements the vOLTHA IAdapterInterface (flows, interfaces, stats, indications, TCA; OMCI; ONU activation/flows) and talks to the OLT as a gRPC client
    ASFvOLT16 OLT CPU (OLT driver elements):
    • bal_voltha_app (C) – gRPC server and client, plus bal_cli, built on BAL objects/protos and BAL core/utils
    • Broadcom proprietary: Maple SDK / Maple API / Maple stubs and Qumran SDK / Qumran API (some BRCM pieces may be required)
    • ONLP API on OpenNetworkLinux, installed via ONIE
    • Redfish HW REST API – HTTP(S)/REST for board config, optics supervision, TCA, and alarms (adapter board HW stat/ctrl)
    ASFvOLT16 board HW (hardware & firmware):
    • BCM88470 Qumran (QAX) Ethernet switch, BRCM PON MAC/Switch, 4 x Maple NNI/PON (adapter PON elements)
    • Uplink Ethernet (XFP/QSFP), SyncE/IEEE 1588 timing, PSU/FAN (FPGA), VPD (EEPROM), board (reset/WD)
    Ref: https://wiki.opencord.org/display/CORD/VOLTHA+Adapter+for+Edgecore+ASFvOLT16+OLT

  50. 50

  51. OLT / ONU Interoperability
    51
    ONU → Splitter → GPON OLT IO Blade (two of each shown)
    vOLTHA Core exposes the IAdapter Interface; an OLT Adapter and an ONU Adapter mediate between the vOLTHA core and the hardware
    Ref: https://wiki.opencord.org/display/CORD/VOLTHA - xPON in vOLTHA Proposal.pdf

  52. R-CORD
    Residential Access
    52

  53. Legacy Central Office
    53
    Residence → Central Office → Backbone Network
    CPE: Customer Premises Equipment
    OLT: Optical Line Termination
    BNG: Broadband Network Gateway
    CPE → ONU → OLT → ETH AGG → BNG

  54. Software and Hardware Disaggregation
    Residence → Central Office → Backbone Network
    CPE: Customer Premises Equipment
    OLT: Optical Line Termination
    BNG: Broadband Network Gateway
    CPE → ONU → OLT → ETH AGG → BNG
    In the re-architected CO, these functions are disaggregated into vOLT, vSG, and vRouter running over a switching fabric
    54

  55. R-CORD Controller: Software Architecture
    55
    CORD Controller for residential subscribers
    vOLT Controller / vSG Controller / vRouter Controller / vCDN Controller / Monitoring Controller
    OpenStack Controller / ONOS Controller
    Everything-as-a-Service (XaaS)

  56. AT&T and R-CORD
    56

  57. AT&T and R-CORD – Results
    • System: dual 2670 v3 CPUs / 64 GB RAM / dual Intel 10G NICs
    • Target was ~4,000 subscribers per server
    • Optimal configuration: 16 VMs, 128 S-tags with 31 C-tags each
    • 1.2 Mpps @ 64-byte packets and 9.6 Gbps @ 1400-byte packets
    • 40-50% idle with 46 GB RAM used
    57
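
    The Q-in-Q tag space accounts for the subscriber target: each subscriber is one (S-tag, C-tag) pair, so the slide's numbers check out as follows:

```python
# Each subscriber gets a unique (S-tag, C-tag) pair.
s_tags, c_tags_per_s_tag, vms = 128, 31, 16
subscribers = s_tags * c_tags_per_s_tag      # 3968, i.e. ~4000 per server
print(subscribers, subscribers // vms)       # 3968 total, 248 per VM
```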

  58. Telefónica and R-CORD
    58
    Ref: https://wiki.opencord.org/pages/viewpage.action?pageId=1967521

  59. CORD Community
    Open and Community-Driven
    59

  60. CORD Service Providers
    60

  61. CORD Collaborators
    61
    In less than a year, the number of CORD collaborators has almost equaled the number of ONOS collaborators

  62. CORD Brigades
    1. 5G transport network Brigade
    2. BOM Brigade
    3. Certification Brigade
    4. Performance Brigade
    5. Upgrade OpenStack Brigade
    6. Container Brigade
    7. Hierarchical CORD Brigade
    8. Virtualized CORD Dev environment Brigade
    62
    Ref: https://wiki.opencord.org/display/CORD/Brigades

  63. Integration Efforts
    63

  64. CORD Build 2017
    64
    November 7-9, 2017
    San Jose, California, USA (QCT Headquarters)
    Ref: http://opencord.org/view-blog/?id=4918

  65. Thank you!
    http://opencord.org
