
An introduction to Kubernetes with CORD

Progress on integrating Kubernetes with CORD.

Hung-Wei Chiu

November 22, 2017

Transcript

  1. WHO AM I
     • Hung-Wei Chiu (hwchiu) • [email protected] • hwchiu.com
     • Experience
       • Software Engineer at Linker Networks
       • Software Engineer at Synology (2014~2017)
       • Co-founder of SDNDS-TW
     • Open source experience
       • SDN-related projects (mininet, ONOS, Floodlight, awesome-sdn)
  2. OUTLINE
     • What is CORD
     • Challenges of Kubernetes with CORD
     • What we have done so far
     • Next steps
  3. CENTRAL OFFICE RE-ARCHITECTED AS A DATACENTER
     • SDN + NFV + Cloud
     • Open source software
     • Commodity hardware (servers, white-box switches, I/O blades)
     • Replaces a large number of central offices that evolved over 40-50 years, with 300+ types of equipment and a huge source of CAPEX/OPEX
  4. Final CORD Architecture (diagram): a Metro Router connected to a fabric of bare-metal white-box switches (open source, SDN-based); an ONOS controller cluster and the XOS orchestrator provide vRouter, vOLT, underlay, and overlay control; vSG and VNF instances run on OVS on each server for Residential, Enterprise, and Mobile (R, E, M) access.
  5. Final CORD Architecture (same diagram).
  6. Final CORD Architecture (same diagram).
  7. SUMMARY
     • VM-based NFV (Network Function Virtualization)
     • Use ONOS (SDN controller) + Open vSwitch to control packets
     • Use XOS (service orchestration) to control all services (VNFs)
     • ONOS/XOS need to communicate with OpenStack components
  8. CHANGE TO KUBERNETES
     • VM-based NFV (Network Function Virtualization): vSG, vPGW, vSGW, etc.
     • Who owns the NFVs? Vendors.
     • We can't force them to convert all NFVs to containers.
     • It's impossible to have a Kubernetes solution for CORD right now.
  9. CHANGE TO KUBERNETES
     • Use ONOS (SDN controller) + Open vSwitch to control packets.
     • There are many CNI plugins for Kubernetes now, but none of them is a purely Open vSwitch-based solution.
     • Linen-CNI is an Open vSwitch + Linux bridge solution: same-subnet traffic between Pods is handled by the bridge.
     • (Diagram: a node with two Pods attached to br0 on top of OVS, uplink ens0p3)
  10. CHANGE TO KUBERNETES
     • Use ONOS (SDN controller) + Open vSwitch to control packets.
     • Create our own CNI plugin to support a pure Open vSwitch setup (a sketch of the attach step follows).
     • There are still some problems to overcome.
     • (Diagram: a node with two Pods attached directly to OVS, uplink ens0p3)
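
A minimal sketch of the attach step such a pure-OVS CNI plugin could perform, assuming the ovs-vsctl CLI is available on the node; the bridge name br0, the veth naming, and the omitted veth/netns setup are assumptions, not the actual plugin.

// Package ovscni sketches the "plug the Pod into OVS" step of a pure
// Open vSwitch CNI plugin.
package ovscni

import (
    "fmt"
    "os/exec"
)

// AttachToOVS adds the host-side veth of a Pod to the given OVS bridge,
// the equivalent of: ovs-vsctl add-port <bridge> <hostVeth>.
// Creating the veth pair and moving its peer into the Pod's network
// namespace (args.Netns in a real CNI plugin) is omitted here.
func AttachToOVS(bridge, hostVeth string) error {
    out, err := exec.Command("ovs-vsctl", "--may-exist", "add-port", bridge, hostVeth).CombinedOutput()
    if err != nil {
        return fmt.Errorf("ovs-vsctl add-port failed: %v: %s", err, out)
    }
    return nil
}
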
  11. CHANGE TO KUBERNETES
     • Use XOS (service orchestration) to control all services (VNFs).
     • Kubernetes can handle most of this itself.
     • XOS should communicate with Kubernetes via its API server (a minimal REST sketch follows).
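
One way that could look, sketched below: XOS (or any external orchestrator) talking straight to the Kubernetes API server over its REST interface. The server address and the bearer token are placeholders, and the snippet ignores TLS setup; it only illustrates the shape of the call.

// Minimal sketch: list Pods in the "default" namespace through the
// Kubernetes core v1 REST API. Endpoint and token are placeholders.
package main

import (
    "fmt"
    "io/ioutil"
    "net/http"
)

func main() {
    req, err := http.NewRequest("GET",
        "https://kubernetes.example.com:6443/api/v1/namespaces/default/pods", nil)
    if err != nil {
        panic(err)
    }
    req.Header.Set("Authorization", "Bearer <service-account-token>")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    body, _ := ioutil.ReadAll(resp.Body)
    fmt.Println(string(body)) // JSON PodList
}
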
  12. CHANGE TO KUBERNETES
     • ONOS/XOS need to communicate with OpenStack components.
     • ONOS needs to know the IP information of each host (VM) from the Neutron component.
     • In Kubernetes, we need to provide the IP information of each Pod instead.
     • Since we implement our own CNI plugin, we can send the IP information right after the CNI assigns an IP to a Pod.
     • Send the information via a RESTful API / gRPC (sketched below).
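
A rough sketch of that reporting step, run at the end of CNI ADD once IPAM has handed out an address. The ONOS endpoint path and the JSON payload are hypothetical, not a real ONOS API.

// reportPodIP pushes one Pod's network attachment to an (assumed)
// REST endpoint exposed on the ONOS side.
package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "net/http"
)

// PodIPReport is an assumed payload shape describing one Pod.
type PodIPReport struct {
    PodName   string `json:"podName"`
    Namespace string `json:"namespace"`
    NodeName  string `json:"nodeName"`
    IP        string `json:"ip"`
    MAC       string `json:"mac"`
}

func reportPodIP(endpoint string, r PodIPReport) error {
    body, err := json.Marshal(r)
    if err != nil {
        return err
    }
    resp, err := http.Post(endpoint, "application/json", bytes.NewReader(body))
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    if resp.StatusCode >= 300 {
        return fmt.Errorf("ONOS returned %s", resp.Status)
    }
    return nil
}

func main() {
    // In the real plugin this would run at the end of the CNI ADD command.
    _ = reportPodIP("http://onos.example.com:8181/pods", PodIPReport{
        PodName: "vsg-0", Namespace: "default", NodeName: "node1",
        IP: "10.12.1.5/24", MAC: "0a:58:0a:0c:01:05",
    })
}
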
  13. PROBLEMS WE MET
     • Deploying the ONOS controller as a container
     • Multiple network interfaces per Pod
     • Centralized IP management
  14. DEPLOY PROBLEM
     • We need to deploy ONOS as a container.
     • The chicken-and-egg conundrum: the controller that provides Pod networking would itself run as a Pod.
     • Hard to solve, so we need a work-around for now: we decided to move ONOS out of the Pods.
     • Each node should have multiple network interfaces, including a data network and a control network (out of band).
  15. DEPLOY PROBLEM (diagram): three nodes, each running Pods on OVS; ens0p3 connects to the data network and ens0p4 to the control network.
  16. DEPLOY PROBLEM (same diagram).
  17. MULTIPLE NETWORK INTERFACES
     • Some NFVs (for example vSG) need multiple interfaces in their Pod.
     • (Diagram: a node with two Pods on OVS, uplink ens0p3)
  18. MULTIPLE NETWORK INTERFACES
     • We found an open source project (multus-CNI) that provides multi-interface support in a Pod.
     • We didn't figure out how to use it at first; we thought it was a global setting.
     • So we tried to implement it ourselves.
  19. MULTIPLE NETWORK INTERFACES
     • Multiple network interfaces mean calling CNI multiple times.
     • For CNI, we need to know the network namespace location of each Pod.
     • We want to provide an interface that dynamically calls a CNI plugin for any existing Pod (see the sketch after this list).
       • Input: Pod name, network configuration (CNI name, configuration)
       • Output: success (adds another interface to the existing Pod) or error
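
A sketch of that entry point, assuming we invoke a CNI plugin binary by hand the way the CNI convention prescribes (CNI_* environment variables plus the network config on stdin). All names here are illustrative; how the Pod name is resolved to a netns path is covered on the next slide.

// Package attach sketches "add another interface to an already-running Pod"
// by driving a CNI plugin binary directly.
package attach

import (
    "bytes"
    "fmt"
    "os"
    "os/exec"
)

// AttachRequest mirrors the input described on the slide.
type AttachRequest struct {
    PodName     string // used to look up the Pod's netns and container ID
    NetConf     []byte // raw CNI network configuration (JSON)
    PluginBin   string // e.g. /opt/cni/bin/<plugin>
    NetnsPath   string // e.g. /proc/<pid>/ns/net, resolved from PodName
    ContainerID string
    IfName      string // e.g. "net1" for the additional interface
}

// Attach runs the CNI ADD command against an existing Pod and returns the
// raw CNI result JSON describing the new interface and its IP.
func Attach(req AttachRequest) ([]byte, error) {
    cmd := exec.Command(req.PluginBin)
    cmd.Stdin = bytes.NewReader(req.NetConf)
    cmd.Env = append(os.Environ(),
        "CNI_COMMAND=ADD",
        "CNI_CONTAINERID="+req.ContainerID,
        "CNI_NETNS="+req.NetnsPath,
        "CNI_IFNAME="+req.IfName,
        "CNI_PATH=/opt/cni/bin",
    )
    out, err := cmd.CombinedOutput()
    if err != nil {
        return nil, fmt.Errorf("CNI ADD failed: %v: %s", err, out)
    }
    return out, nil
}
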
  20. MULTIPLE NETWORK INTERFACES
     • Each CNI invocation needs to know the network namespace location of the Pod.
     • We also need to know the Pod name.
     • In the CNI plugin we can get this information from:
       • Args.Args (several key=value pairs separated by semicolons)
       • Args.Netns
     • We store this information in etcd (see the sketch after this list).
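
A sketch of that bookkeeping, assuming the etcd v3 Go client; the key layout is our own invention. The K8S_POD_NAME and K8S_POD_NAMESPACE entries are what the kubelet passes in CNI_ARGS, which the CNI skel package surfaces as args.Args.

// Package podstore records each Pod's netns path in etcd so it can be
// found again later, e.g. when attaching extra interfaces.
package podstore

import (
    "context"
    "strings"
    "time"

    "go.etcd.io/etcd/clientv3"
)

// parseCNIArgs splits "K1=V1;K2=V2;..." into a map.
func parseCNIArgs(args string) map[string]string {
    kv := map[string]string{}
    for _, pair := range strings.Split(args, ";") {
        if i := strings.Index(pair, "="); i > 0 {
            kv[pair[:i]] = pair[i+1:]
        }
    }
    return kv
}

// RecordPod saves the Pod's netns path under /pods/<namespace>/<name>/netns.
func RecordPod(cli *clientv3.Client, cniArgs, netns string) error {
    kv := parseCNIArgs(cniArgs)
    key := "/pods/" + kv["K8S_POD_NAMESPACE"] + "/" + kv["K8S_POD_NAME"] + "/netns"

    ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
    defer cancel()
    _, err := cli.Put(ctx, key, netns)
    return err
}
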
  21. MULTIPLE NETWORK INTERFACES
     • After I had finished almost all of the tasks…
     • …an Intel engineer told us that multus-cni already supports per-Pod configuration.
     • …
     • OK, we use multus-cni.
  22. CENTRALIZED IP MANAGEMENT
     • Our CNI plugin uses an IPAM plugin to handle IP management.
     • The official IPAM plugins support two types:
       • host-local
       • DHCP
  23. IPAM DHCP
     • Requirements
       • You must run an IPAM DHCP daemon on each node.
       • You must set up a DHCP server on your network.
     • How it works
       • A DHCP client is started when a Pod is created.
       • The DHCP packets are forwarded to the DHCP server (this depends on your CNI forwarding L2 broadcast).
       • The official recommendation is to use macvlan as the CNI.
     • Limitation: all nodes must be in the same subnet.
     • Simple configuration: you only specify "type": "dhcp" in the CNI configuration (compared with host-local in the example after the next slide).
  24. IPAM HOST-LOCAL
     • Requirements: none
     • How it works
       • A local file records which IP addresses have already been used.
       • The plugin looks up the file and picks an available IP address for the CNI result.
     • Limitations
       • You have to prepare a configuration for each node, each with different settings.
       • Complex configuration: you need to specify which subnet each node uses and make sure no subnet is duplicated across nodes (see the comparison below).
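
For concreteness, here is roughly how the two official IPAM choices appear in a CNI network configuration; network names, interfaces, and subnets are illustrative only, not taken from the talk.

DHCP IPAM (one identical config everywhere, but it needs the per-node DHCP daemon and a reachable DHCP server):

{
  "cniVersion": "0.3.1",
  "name": "cord-data",
  "type": "macvlan",
  "master": "ens0p3",
  "ipam": { "type": "dhcp" }
}

host-local IPAM (no external dependency, but the subnet below must be edited per node and kept unique across the cluster, which is exactly the duplication problem described above):

{
  "cniVersion": "0.3.1",
  "name": "cord-data",
  "type": "bridge",
  "bridge": "br0",
  "ipam": {
    "type": "host-local",
    "subnet": "10.12.3.0/24",
    "gateway": "10.12.3.1"
  }
}
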
  25. IPAM
     • What do we want?
       • Simple configuration
       • Support for multiple subnets
     • We decided to create a new IPAM plugin.
  26. DHCP
     • Refer to Trellis (the CORD network infrastructure).
     • Requirement: a DHCP server
     • How it works
       • We set the gateway address in each DHCP request (DHCP relay) so a single server can serve multiple subnets via L3 unicast.
     • It looks like the following diagram.
  27. DHCP (diagram): NODE 1 and NODE 2 each run Pods on OVS (uplink ens0p3) with a local DHCP relay, gateways 192.168.1.1 and 192.168.2.1; a master DHCP server on the network defines Subnet 192.168.1.0/24 { } and Subnet 192.168.2.0/24 { }.
  28. DHCP – PROBLEM
     • The problem is: how do we decide the IP address of each Open vSwitch, i.e. the gateway address of each subnet?
     • Use etcd.
     • Maybe we can use etcd to replace the DHCP server entirely.
  29. ETCD
     • Implement a new IPAM plugin that uses etcd to record the subnet of each node.
     • Simple configuration
       • Network: 10.12.0.0/16
       • Subnet length: 24
       • etcd address
     • The node subnets will range from 10.12.1.0/24 to 10.12.255.0/24.
     • Simple and easy to implement (see the sketch after this list).
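
A sketch of the per-node subnet allocation such an etcd-backed IPAM could do, assuming the etcd v3 Go client: walk the /24s inside 10.12.0.0/16 and atomically claim the first free one with a transaction. The key layout and the claim-by-node-name scheme are assumptions, not the actual plugin.

// Package etcdipam sketches claiming one /24 per node out of 10.12.0.0/16.
package etcdipam

import (
    "context"
    "fmt"
    "time"

    "go.etcd.io/etcd/clientv3"
)

// ClaimSubnet reserves a free /24 for the given node and returns it,
// e.g. "10.12.7.0/24". Subnets already claimed by other nodes are skipped.
func ClaimSubnet(cli *clientv3.Client, nodeName string) (string, error) {
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()

    for i := 1; i <= 255; i++ {
        subnet := fmt.Sprintf("10.12.%d.0/24", i)
        key := "/ipam/subnets/" + subnet

        // Write the key only if nobody has created it yet (CreateRevision == 0).
        resp, err := cli.Txn(ctx).
            If(clientv3.Compare(clientv3.CreateRevision(key), "=", 0)).
            Then(clientv3.OpPut(key, nodeName)).
            Commit()
        if err != nil {
            return "", err
        }
        if resp.Succeeded {
            return subnet, nil // this node won the subnet
        }
    }
    return "", fmt.Errorf("no free /24 left in 10.12.0.0/16")
}
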
  30. NEXT STEP
     • Integrate ONOS (the SDN controller) with our CNI plugin.
     • Make sure ONOS can control the network.
     • For Kubernetes-internal communication, ONOS should implement everything with OpenFlow rules instead of iptables.
     • (Diagram: a node with two Pods on OVS, uplink ens0p3; OpenFlow rules replace the many iptables rules.)
  31. Q&A