
Designing and Implementing Multi-tenancy Data Center Networking with SRv6 in Large Scale Platform

GTC 21
https://www.nvidia.com/ja-jp/gtc/

2021/4/14

Hirofumi Ichihara (Verda Office, Network Development Team)

LINE Developers



Transcript

  1. About Me • Hirofumi Ichihara • LINE Corporation ◦ Network Development Team • Network Software Developer ◦ SDN/NFV ◦ OpenStack Neutron ◦ Docker ◦ Kubernetes
  2. Verda & LINE Infra Scale • Virtual Machines: 55,000+ • Bare-metal servers: 20,000+ • Hypervisors: 2,000+ • All physical servers: 50,000+ • Peak user traffic: 3 Tbps+
  3. Verda service lineup: FaaS, IaaS, PaaS, KaaS, Container, Event Stream, DBaaS, DB, Search and Analytics, VM, Identity, Network, Image, DNS, Block Storage, Object Storage, Bare metal, Load Balancer, Function
  4. LINE Services and Networks • Full L3 CLOS Network*: a single-tenant network where the LINE message service and related services run (Messaging, Manga, Game, ...) • Exclusive networks for services: services with specific requirements (Financial, HealthCare, ...) each get a purpose-built network → many fragmented underlay networks, a lot of design and build work, and rising management cost. * Excitingly simple multi-path OpenStack networking: LAG-less, L2-less, yet fully redundant https://www.slideshare.net/linecorp/excitingly-simple-multipath-openstack-networking-lagless-l2less-yet-fully-redundant
  5. Multi-tenant overlay network • Sharing the underlay network decreases management cost • Each service (tenant) gets its own policy on an isolated network • Benefits: simple L3 underlay network, flexibly scalable overlay network, per-tenant security, network service chaining
  6. Virtual Private Network for Virtual Machines • A virtual machine connects to a private network • Each private network is isolated from the others • A tenant can have multiple networks (diagram: Private Network A and Private Network B, each with its own VMs)
  7. VXLAN — Pros: plenty of information available; many network devices support it. Cons: loses the advantages of full L3; needs an additional protocol to achieve service chaining. IPv6 Segment Routing (SRv6) — Pros: the underlay only does IPv6 forwarding; supports segregation and service chaining with Segment IDs. Cons: no information about DC use cases; no network device support. → Betting on SRv6's future, we adopted SRv6 for the multi-tenancy network.
  8. SRv6 Segment ID (SID) • A 128-bit number (an IPv6 address) • Locator: information for routing to the SRv6 node (parent node); it must be unique within an SR domain • Function: identifies the action to be performed on that node. Segment Routing Header (SRH) • An IPv6 extension header • Contains a Segment List, a Segments Left field pointing at the current position in the list, and so on. Function examples • H.Encaps (encap): encapsulate the packet with an IPv6 header and SRH • End.DX4 (decap): remove the IPv6 header and SRH, then forward to the next hop • End.DT4 (decap): remove the IPv6 header and SRH, then look up the routing table and forward (End.DT4 landed in Linux kernel 5.11, which we don't run yet, so we use End.DX4 even though End.DT4 would be better). A sketch of SID composition follows.
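As a toy illustration of the Locator + Function split described above, here is a minimal Python sketch (not the talk's actual tooling) that composes a 128-bit SID from a /64 locator and a function ID; the locator and function values match the slide's example:

    # Compose a SID: take the /64 locator and OR the function ID into the low bits.
    import ipaddress

    def make_sid(locator: str, function_id: int) -> ipaddress.IPv6Address:
        net = ipaddress.IPv6Network(locator)
        assert net.prefixlen == 64, "locator is assumed to be a /64 here"
        return ipaddress.IPv6Address(int(net.network_address) | function_id)

    # Locator fc00:aaaa:bbbb:cccc::/64 + function 2 -> fc00:aaaa:bbbb:cccc::2
    print(make_sid("fc00:aaaa:bbbb:cccc::/64", 2))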
  9. Networking Implementation with SRv6 • On each hypervisor server (HV), the virtual network is realized with Linux VRFs • Between HVs, over the IPv6 network, it is realized as an SRv6 overlay (an SRv6 header is added to each packet). A sketch of the VRF setup follows.
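A hedged sketch of the per-tenant VRF plumbing on a hypervisor, using the standard iproute2 commands for Linux VRFs; the device names (vrfA, tap0) and the routing table ID are illustrative assumptions, not values from the talk:

    # Create a Linux VRF, bring it up, and enslave the VM's tap device to it.
    import subprocess

    def run(cmd: str) -> None:
        subprocess.run(cmd.split(), check=True)

    run("ip link add vrfA type vrf table 1000")  # VRF bound to routing table 1000
    run("ip link set vrfA up")
    run("ip link set tap0 master vrfA")          # VM's tap now lives in vrfA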
  10. H.Encaps • Each hypervisor server has a specific SID that identifies a VRF on that server • Locator: identifies the hypervisor server (e.g. fc00:aaaa:bbbb:cccc::/64) • Function: identifies the VRF (e.g. 2) • An SRv6 header carrying the destination SID is prepended to packets from the VM (e.g. Hypervisor Server A, lo fc00:aaaa:bbbb:cccc::1/128 with vrfA SID fc00:aaaa:bbbb:cccc::2/128, encapsulates toward Hypervisor Server B's vrfA SID fc00:aaaa:bbbb:cccd::2).
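A minimal sketch of what an H.Encaps rule can look like with iproute2's seg6 encap support: IPv4 traffic in vrfA's table toward a remote VM gets an SRv6 header whose SID is the remote HV's locator plus function. The VM prefix, egress device, and table ID are assumptions; the SID is the slide's example:

    # Encapsulate traffic for 10.8.0.2 with an SRv6 header toward HV B's vrfA SID.
    import subprocess

    cmd = ("ip route add 10.8.0.2/32 "
           "encap seg6 mode encap segs fc00:aaaa:bbbb:cccd::2 "
           "dev eth0 table 1000")
    subprocess.run(cmd.split(), check=True)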
  11. SRv6 Routing on the CLOS Network • None of our L3 switches (Cumulus) support SRv6 • All switches simply forward SRv6 packets as ordinary IPv6 packets through the ToR/spine fabric.
  12. End.DT4 • Each hypervisor server has an SRv6 decapsulation rule (e.g. fc00:aaaa:bbbb:cccd::2/128 action End.DT4 vrfA) • The rule matches the function in the SRv6 header and forwards the packet with the header removed • The packet is then forwarded to the VM in the Linux VRF. A sketch of the decap rule follows.
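A hedged sketch of the decapsulation rule with iproute2's seg6local support. Since the talk notes production uses End.DX4 (End.DT4 needs Linux 5.11+), the active line shows End.DX4 and the End.DT4 form is left commented; the VM address, tap device, and table ID are assumptions:

    import subprocess

    def run(cmd: str) -> None:
        subprocess.run(cmd.split(), check=True)

    # End.DX4: decap, then forward straight to the VM's IPv4 address.
    run("ip -6 route add fc00:aaaa:bbbb:cccd::2/128 "
        "encap seg6local action End.DX4 nh4 10.8.0.2 dev tap0")

    # End.DT4 (kernel 5.11+): decap, then look up vrfA's routing table instead.
    # run("ip -6 route add fc00:aaaa:bbbb:cccd::2/128 "
    #     "encap seg6local action End.DT4 vrftable 1000 dev vrfA")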
  13. LINE Network Control Plane Journey • Past (2016~): Full L3 Network with BGP • Current (2019~): SRv6 Network with SDN • Future (2021~): SRv6 Network with SDN and BGP
  14. LINE Network Control Plane Journey — zooming in on the first stage: Full L3 Network with BGP (Past, 2016~)
  15. Full L3 Network with BGP • OpenStack Nova creates a Virtual Machine (VM) with a tap device • Each tap device acts as the default gateway for its VM • However, at this point there is no route to each VM (components: Nova Compute, Linux routing, Cumulus switch).
  16. Full L3 Network with BGP • OpenStack Neutron adds a route for each VM on the hypervisor server (a sketch follows) • Now the VMs on the HV can reach each other, but they still have no default route.
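A minimal sketch of this step: the agent installs a /32 host route pointing at the VM's tap device so the hypervisor (and later BGP) knows how to reach the VM. The address and device name are hypothetical:

    # Host route to one VM via its tap device, as the Neutron agent would add.
    import subprocess

    vm_ip, tap = "10.0.0.5", "tap1234abcd"
    subprocess.run(f"ip route add {vm_ip}/32 dev {tap}".split(), check=True)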
  17. Full L3 Network with BGP • The L3 switch advertises a default route to FRR on the hypervisor server • FRR learns the default route and installs it on the hypervisor server • Now the VMs have a default route, but their IP addresses are not yet reachable from outside the hypervisor server.
  18. Full L3 Network with BGP • FRR advertises each VM's IP address to the L3 switch • The L3 switch learns the VM routes and advertises them to the upper L3 switches. A sketch of the FRR side follows.
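A hedged sketch of the FRR side of this exchange, driven through vtysh: peer with the upper switch using BGP unnumbered and redistribute the kernel host routes Neutron installed, so each VM /32 is advertised upstream. The AS number and interface name are assumptions, not the talk's real values:

    import subprocess

    frr_cmds = [
        "configure terminal",
        "router bgp 65001",
        "neighbor eth0 interface remote-as external",  # unnumbered peer to ToR
        "address-family ipv4 unicast",
        "redistribute kernel",  # advertise the per-VM /32 kernel routes
    ]
    subprocess.run(["vtysh"] + [a for c in frr_cmds for a in ("-c", c)],
                   check=True)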
  19. LINE Network Control Plane Journey — second stage: SRv6 Network with SDN (Current, 2019~)
  20. SRv6 Network with SDN • Each VM connects to a VRF • Each HV has an IPv6 address that contains its SID Locator.
  21. SRv6 Network with SDN • FRR advertises the IPv6 route for each HV server • Neutron adds a route for each VM in the VRF • Packets with an SRv6 header can now reach each HV, but the VMs' IPs are not reachable yet.
  22. SRv6 Network with SDN • Neutron generates a SID for each VRF by combining the Locator and a Function • Neutron adds End.DT4 rules • Packets with an SRv6 header can now be decapsulated on each HV server, but the VMs' packets are not encapsulated yet.
  23. SRv6 Network with SDN • Neutron adds an H.Encaps rule to each VRF • With that, the VMs have full reachability over the SRv6 network. A sketch stitching these steps together follows.
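To tie slides 20-23 together, here is a sketch (not LINE's actual Neutron code) of the whole per-port sequence on one hypervisor: tenant VRF, host route to the VM, End.DX4 decap rule for the local SID, and H.Encaps rules toward remote SIDs. All names, IDs, and addresses are illustrative:

    import subprocess

    def run(cmd: str) -> None:
        subprocess.run(cmd.split(), check=True)

    def plug_port(vrf, table, tap, vm_ip, local_sid, remote_routes):
        run(f"ip link add {vrf} type vrf table {table}")         # tenant VRF
        run(f"ip link set {vrf} up")
        run(f"ip link set {tap} master {vrf}")
        run(f"ip route add {vm_ip}/32 dev {tap} table {table}")  # route to VM
        run(f"ip -6 route add {local_sid}/128 "                  # decap rule
            f"encap seg6local action End.DX4 nh4 {vm_ip} dev {tap}")
        for prefix, remote_sid in remote_routes:                 # encap rules
            run(f"ip route add {prefix} "
                f"encap seg6 mode encap segs {remote_sid} dev eth0 table {table}")

    plug_port("vrfA", 1000, "tap0", "10.8.0.1", "fc00:aaaa:bbbb:cccc::2",
              [("10.8.0.2/32", "fc00:aaaa:bbbb:cccd::2")])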
  24. SRv6 Network with SDN — Pain Points • All network configuration is managed via the Neutron API • A failure in the controller layer can affect network management • We have to implement controller logic for everything • API response latency can sometimes become a bottleneck • It is difficult to connect to SRv6-capable network devices, since Neutron would also have to configure those devices where necessary.
  25. LINE Network Control Plane Journey — third stage: SRv6 Network with SDN and BGP (Future, 2021~)
  26. SRv6 Network with SDN and BGP • Each VM connects to a VRF • Each HV has an IPv6 address that contains its SID Locator.
  27. SRv6 Network with SDN and BGP • FRR advertises the IPv6 route of each HV server • Neutron adds a route for each VM in the VRF • Packets with an SRv6 header can now reach each HV, but the VMs' IPs are not reachable yet.
  28. SRv6 Network with SDN and BGP • Neutron generates a SID for each VRF by combining the Locator and a Function • Neutron adds End.DT4 rules • Packets with an SRv6 header can now be decapsulated on each HV server, but the VMs' packets are not encapsulated yet.
  29. SRv6 Network with SDN and BGP • FRR advertises each VM's IP address together with its SID information as VPNv4 unicast via BGP.
  30. SRv6 Network with SDN and BGP • The upper L3 switch (Cumulus) receives the advertised SRv6 route information.
  31. SRv6 Network with SDN and BGP • The upper L3 switch (Cumulus) redistributes the VM's IP address and SRv6 information to FRR on each hypervisor server.
  32. SRv6 Network with SDN and BGP • FRR receives the VM's IP address and SRv6 information from the upper switch (Cumulus) • The route is installed on each server. A configuration sketch follows.
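For flavor, here is a hedged sketch of how FRR's documented BGP SRv6 L3VPN support can be asked to export a tenant VRF's routes as VPNv4 with an SRv6 SID, again via vtysh. The locator name, AS number, and RD/RT values are assumptions, and the exact command set may differ by FRR version:

    import subprocess

    frr_cmds = [
        "configure terminal",
        # Define an SRv6 locator matching this HV's prefix.
        "segment-routing", "srv6", "locators",
        "locator loc1", "prefix fc00:aaaa:bbbb:cccc::/64",
        "exit", "exit", "exit", "exit",
        # Tell BGP to allocate SIDs from that locator.
        "router bgp 65001",
        "segment-routing srv6", "locator loc1", "exit", "exit",
        # Export the tenant VRF's routes as VPNv4 with an SRv6 SID.
        "router bgp 65001 vrf vrfA",
        "address-family ipv4 unicast",
        "redistribute kernel",
        "sid vpn export auto",
        "rd vpn export 65001:1",
        "rt vpn both 65001:1",
        "export vpn", "import vpn",
    ]
    subprocess.run(["vtysh"] + [a for c in frr_cmds for a in ("-c", c)],
                   check=True)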
  33. NFV (Network Functions Virtualization) • Achieve network functions without dedicated physical devices https://linedevday.linecorp.com/jp/2019/sessions/F1-7 https://linedevday.linecorp.com/2020/ja/sessions/2076
  34. BGP to VM • Enable a VM to advertise routes via BGP (diagram: without BGP to VM, only the VM's own addresses 10.0.0.1/10.0.0.2 are known in the fabric; with BGP to VM, the routes the VM advertises, 10.0.1.1/10.0.1.2, propagate to the servers and L3 switches as well).
  35. BGP to VM 1. The HV's FRR advertises the VM's IP address via BGP 2. The upper L3 switch (Cumulus) learns the advertised routes
  36. BGP to VM (continued) 3. The HV's FRR creates a BGP peering with the FRR on the VM, set up by Neutron • Using a local AS number • Using the metadata server IP (169.254.169.254) on the HV
  37. BGP to VM (continued) 4. The advertised network address is set on the VM
  38. BGP to VM (continued) 5. The VM's FRR advertises that address to the HV's FRR via BGP
  39. BGP to VM (continued) 6. The HV's FRR advertises the address to the upper L3 switch. A sketch of the VM-side peering follows.
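A hedged sketch of the VM-side FRR configuration for this flow: peer with the HV's FRR at the metadata IP using the shared local AS, then advertise a network. The AS number and advertised prefix are illustrative assumptions:

    import subprocess

    frr_cmds = [
        "configure terminal",
        "router bgp 65010",                          # local AS (assumption)
        "neighbor 169.254.169.254 remote-as 65010",  # HV's FRR at the metadata IP
        "address-family ipv4 unicast",
        "network 10.0.1.0/24",                       # prefix the VM advertises
    ]
    subprocess.run(["vtysh"] + [a for c in frr_cmds for a in ("-c", c)],
                   check=True)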
  40. Cloud Router • SRv6 network between VMs in the same private network • IPv4 network between a VM and the IDC network via the Cloud Router • Secure network between a VM and other networks (another location, over VPN) via the Cloud Router.
  41. Cloud Router • The Cloud Router runs in a VM • A Cloud Router is not shared across tenants • Multiple Cloud Routers can run in one tenant.
  42. Cloud Router NAT 1. The Cloud Router advertises NAT addresses 2. The Cloud Router rewrites the source address to a NAT address and then forwards the packets. A sketch follows.
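A minimal sketch of step 2 using ordinary Linux NAT on the Cloud Router VM: source addresses from the tenant subnet are rewritten to the advertised NAT address on egress. The subnet, NAT address, and egress device are assumptions:

    # SNAT tenant traffic leaving eth0 to the advertised NAT address.
    import subprocess

    cmd = ("iptables -t nat -A POSTROUTING -s 10.0.1.0/24 -o eth0 "
           "-j SNAT --to-source 203.0.113.10")
    subprocess.run(cmd.split(), check=True)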
  43. Cloud Router VPN 1. The Cloud Router creates an IPsec tunnel to a VPN gateway at another location 2. The Cloud Router advertises Network1's subnet address to the VPN gateway 3. The Cloud Router forwards packets into the IPsec tunnel.
  44. Summary • A multi-tenancy data center networking use case for SRv6 • Architecture of SRv6 data center networking • SRv6 SDN implementation • NFV implementation on the SRv6 network • Future plan: the SDN and BGP hybrid model. Thank you