Telco Central Office (CO)
Serves mobile, residential, and enterprise customers. Can be small or large and has different names in different contexts.
• The CO is a service provider's gateway to its customers
• A large provider operates 1000+ COs
• Each CO may support
§ 10K+ residential subscribers
§ 10K+ mobile subscribers
§ 1K+ enterprise customers
• The CO provides a great vantage point for service providers
Challenges
• Source of high CAPEX and OPEX
• Lack of programmability inhibits innovation
• Limits the ability to create new services and revenue
➢ Hard to create innovative services
What is CORD?
Central Office Re-architected as a Datacenter
• SDN + NFV + Cloud
• Open source software
• Commodity hardware (servers, white-box switches, I/O blades)
The legacy CO it replaces: a large number of COs, evolved over 40-50 years, with 300+ types of equipment, a huge source of CAPEX/OPEX.
CORD Aims to Deliver
• Agility of a cloud provider: software platforms that enable rapid creation of new services
• Economies of a datacenter: infrastructure built from a few commodity building blocks, using open source software and white-box switches
Design Philosophy -> Tangible Value
• SDN: interconnects VNFs and is a source of innovative services
• NFV: supports legacy VNFs and pushes the limits of disaggregation
• Cloud: extends the agility of micro-services to the access network
• Together: XaaS (Everything-as-a-Service)
Traditional Service Provider Network
[Figure: subscriber ONUs connect through passive splitters to OLTs, which feed an aggregation switch, a BNG, and a switch facing the Internet]
• Reliability ☹  Scalability ☹  Flexibility ☹  Cost ☹
CORD: Software Stack
• Multi-tenant services assembled by XOS: Access-as-a-Service, Subscriber-as-a-Service, Internet-as-a-Service, Monitoring-as-a-Service, CDN
• Control applications hosted by ONOS: Multicast Control, Fabric Control, VTN
• Scalable services run in OpenStack VMs and Docker containers: vSG, vRouter, Ceilometer, ...
(See the sketch below for how these layers fit together.)
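To make the stack's layering concrete, here is a minimal Python sketch of how an orchestrator might assemble multi-tenant services into a graph. The `Service` and `ServiceGraph` classes and all field names are illustrative assumptions, not the actual XOS data model.

```python
# Hypothetical sketch of an XOS-style service graph; names are illustrative.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Service:
    """A multi-tenant service (e.g. Subscriber-as-a-Service)."""
    name: str
    controller_app: str                                 # ONOS app with its control logic
    backends: List[str] = field(default_factory=list)  # VMs/containers doing the work


@dataclass
class ServiceGraph:
    """Dependencies among services, assembled by the orchestrator."""
    edges: List[Tuple[str, str]] = field(default_factory=list)

    def compose(self, tenant: Service, provider: Service) -> None:
        # Wire a tenant service to the provider service it consumes.
        self.edges.append((tenant.name, provider.name))


access = Service("Access-as-a-Service", controller_app="vOLT")
subscriber = Service("Subscriber-as-a-Service", controller_app="vSG-control",
                     backends=["vSG-container-1", "vSG-container-2"])
internet = Service("Internet-as-a-Service", controller_app="vRouter",
                   backends=["vRouter-vm-1"])

graph = ServiceGraph()
graph.compose(access, subscriber)    # access traffic feeds per-subscriber vSGs
graph.compose(subscriber, internet)  # vSG egress feeds the routing service
print(graph.edges)
```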
CORD Architecture
[Figure: a leaf-spine fabric of white-box switches connects R,E,M (residential, enterprise, mobile) access hardware and servers to a metro router; each server runs OVS hosting a vSG and other VNFs]
• Open source, SDN-based, bare-metal
• XOS (orchestrator) on top; the ONOS controller cluster hosts vOLT control, vRouter control, underlay control, and overlay control
Full POD: Definition
• The minimum amount of hardware needed to perform a full test of the current CORD features
§ CORD fabric: 4 x white-box switches
§ Compute: 3 x x86 servers
• Suggested components (captured as data in the sketch below)
§ OCP-qualified switches: AS6712-32X, 40GbE
§ Server: QuantaGrid D51-1U with Intel XL710 10/40 GbE
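A hedged sketch of the POD bill of materials as data, handy for scripted sanity checks. The dictionary layout and the `is_full_pod` helper are hypothetical; quantities and part names come from the slide.

```python
# Illustrative-only representation of the full-POD bill of materials.
FULL_POD = {
    "fabric_switches": {"model": "AS6712-32X", "ports": "32x40GbE", "count": 4},
    "compute_servers": {"model": "QuantaGrid D51-1U",
                        "nic": "Intel XL710 10/40GbE", "count": 3},
}

def is_full_pod(inventory: dict) -> bool:
    """A POD is 'full' only with 4 fabric switches and 3 x86 servers."""
    return (inventory["fabric_switches"]["count"] >= 4
            and inventory["compute_servers"]["count"] >= 3)

print(is_full_pod(FULL_POD))  # True
```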
Network Connectivity: Complete View
[Figure: each node (head node, compute node 1, compute node 2) has three connections: IPMI, Linux management to an internal management L2 switch, and fabric links to the leaf switches (Leaf 1, Leaf 2), which uplink to Spine 1 and Spine 2; an external-network L2 switch provides Internet access to the POD and is where the operator connects]
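The wiring can be summarized as data. The sketch below is an assumed representation that mirrors the figure's labels; it is not any CORD configuration file format.

```python
# Assumed summary of the "complete view" wiring; names mirror the figure.
NODE_LINKS = {
    "head-node":      ["ipmi", "linux-mgmt -> internal-l2", "fabric -> leafs"],
    "compute-node-1": ["ipmi", "linux-mgmt -> internal-l2", "fabric -> leafs"],
    "compute-node-2": ["ipmi", "linux-mgmt -> internal-l2", "fabric -> leafs"],
}
SWITCH_UPLINKS = {
    "leaf-1": ["spine-1", "spine-2"],
    "leaf-2": ["spine-1", "spine-2"],
    "external-l2": ["internet"],  # the operator's entry point into the POD
}
```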
What is Trellis?
Trellis is the enabling network infrastructure for CORD:
• Datacenter leaf-spine fabric underlay
• Virtual network overlay
• Unified SDN control of underlay & overlay (ONOS controller cluster & apps)
Trellis provides common control over the underlay & overlay networks, including:
1. Service composition for tenant networks
2. Distributed virtual routing
3. Optimized delivery of multicast traffic streams
Underlay Fabric – Open Hardware
White-box SDN switch: Edgecore AS6712-32X (40G QSFP+/DAC, GE mgmt.)
• As spine switch: 32 x 40G ports downlink to leaf switches
• As leaf switch: 24 x 40G ports downlink to servers and vOLT; 8 x 40G ports uplink to different spine switches, with ECMP across all uplink ports (see the sketch below)
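The leaf's "ECMP across all uplink ports" behavior can be illustrated in Python. The sketch below hashes a flow's 5-tuple to pick one of the eight uplinks; real fabric ASICs do this in hardware, and the port numbering here is an assumption.

```python
# Minimal sketch of ECMP-style uplink selection across a leaf's 8 uplinks.
import hashlib

UPLINK_PORTS = list(range(25, 33))  # assumption: ports 25-32 are the uplinks

def pick_uplink(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
                proto: str = "tcp") -> int:
    """Hash the 5-tuple so every packet of a flow takes the same uplink."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return UPLINK_PORTS[digest % len(UPLINK_PORTS)]

print(pick_uplink("10.0.1.5", "10.0.2.9", 43512, 80))
```

Hashing per flow (rather than per packet) keeps a TCP connection on one path, avoiding reordering while still spreading load across all spines.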
Virtual Network Overlay
[Figure: VMs/containers on OVS instances across servers, joined by VXLAN overlays into per-service and per-tenant virtual networks (e.g., Service B, Service Y, Tenant Green, Tenant Blue)]
• Service VNFs & vNets use non-overlapping addresses
• Tenant virtual networks may use overlapping address space, with connectivity isolation
• Services can dynamically grow or shrink
• Single VXLAN port in OVS
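Tenant isolation with overlapping addresses works because each tenant's frames are encapsulated under a distinct 24-bit VXLAN Network Identifier (VNI). Below is a minimal Python sketch of the RFC 7348 header with illustrative VNI assignments; in practice OVS performs this encapsulation in its datapath through the single VXLAN port.

```python
# Sketch of VXLAN encapsulation: each tenant gets its own 24-bit VNI,
# so overlapping tenant IP addresses stay isolated. Illustration only.
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348): I-flag set, 24-bit VNI."""
    assert 0 <= vni < 2**24
    return struct.pack("!BBBBBBBB",
                       0x08, 0, 0, 0,  # flags (VNI valid) + 24 reserved bits
                       (vni >> 16) & 0xFF, (vni >> 8) & 0xFF, vni & 0xFF,
                       0)              # reserved byte

# Two tenants can reuse 10.0.0.0/24 because their frames travel under
# different VNIs (the 1001/1002 assignments are illustrative).
TENANT_VNI = {"green": 1001, "blue": 1002}

def encapsulate(tenant: str, inner_frame: bytes) -> bytes:
    return vxlan_header(TENANT_VNI[tenant]) + inner_frame

print(encapsulate("green", b"\x00" * 14).hex())
```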
Trellis Summary
• Underlay fabric
§ L2/L3 spine-leaf fabric – bare-metal hardware + open source software
§ SDN control plane – no distributed protocols
§ Modern ASIC data plane – 1.28 Tbps switching bandwidth per switch
• Virtual network overlay
§ Designed for NFV – chained VNFs built with best-practice cloud principles
§ Overlay control – XOS and VTN implement the service graph
§ OVS + VXLAN data plane
• Unified SDN control
§ Common control – opportunity for optimized service delivery
Fabric Enhancements
• Complete support for IPv6
• Support for dual homing (servers, access devices, upstream routers)
• Support for in-band control of remote access devices
• Support for policies that redirect or block traffic
• Support for the latest OF-DPA chipsets (e.g., Qumran)
• Generalized pseudowire support (E-CORD)
Disaggregated Optical Line Termination
• Chassis-type GPON OLT: GPON line cards, a switching board, and a control board connected over a backplane
• Disaggregated OLT: GPON OLT I/O blades, a ToR switch, and an x86 server running the vOLT control app
• Each GPON OLT I/O blade connects to the ToR switch through a 40/100 Gbps uplink port
What is vOLTHA?
A layer of abstraction atop legacy and next-generation access equipment: PON today and, in the future, xDSL, DOCSIS, G.fast, and Ethernet.
Key value adds of vOLTHA:
• Network as a Switch – the access network is abstracted as a programmable switch (see the sketch below)
• Evolution to virtualization – supports legacy and virtualized devices; runs on the device, on general servers, or in a DC
• Unified OAM abstraction – provides a unified, vendor/technology-agnostic management interface
• vOLTHA confines the differences of access technology to the access locality, hiding them from the upper layers of the OSS stack
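A hedged Python sketch of the "network as a switch" idea: the OLT plus its ONUs are exposed northbound as one logical switch with one port per subscriber. Class and method names are illustrative assumptions, not the actual vOLTHA API.

```python
# Sketch of the "network as a switch" abstraction; names are illustrative.
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class LogicalSwitch:
    """What the SDN controller sees: one switch, one port per subscriber."""
    dpid: str
    ports: Dict[int, str] = field(default_factory=dict)  # port -> ONU serial


@dataclass
class PonNetwork:
    """The physical reality that vOLTHA-style software hides: OLT + ONUs."""
    olt_id: str
    onus: Dict[str, int] = field(default_factory=dict)   # ONU serial -> port

    def as_logical_switch(self) -> LogicalSwitch:
        # Collapse the whole access tree into a single programmable switch.
        return LogicalSwitch(
            dpid=f"of:{self.olt_id}",
            ports={port: serial for serial, port in self.onus.items()},
        )


pon = PonNetwork(olt_id="0000deadbeef", onus={"ALCL0001": 1, "ALCL0002": 2})
print(pon.as_logical_switch())
```

Because the controller sees only the logical switch, the same flow-programming model used for datacenter switches applies unchanged to PON (and, later, other access technologies).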
AT&T and R-CORD – Results
• System: dual 2670 v3 CPUs / 64 GB RAM / dual Intel 10G NICs
• Target was ~4000 subscribers per server
• Optimal configuration: 16 VMs, 128 S-tags with 31 C-tags each
• 1.2 Mpps @ 64-byte packets and 9.6 Gbps @ 1400-byte packets
• 40-50% idle with 46 GB RAM used
(The quick check below shows how these numbers line up with the target.)
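A quick check that the reported configuration matches the subscriber target, assuming QinQ tagging where each subscriber maps to one S-tag/C-tag pair:

```python
# Quick arithmetic behind the reported numbers (values from the slide).
s_tags, c_tags_per_s = 128, 31
subscribers = s_tags * c_tags_per_s   # one S-tag/C-tag pair per subscriber
print(subscribers)                    # 3968, i.e. ~4000 per server target

# Throughput sanity check at 1400-byte frames:
pps_large = 9.6e9 / (1400 * 8)        # ~857k packets/s at 9.6 Gbps
print(round(pps_large))
```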