
Container Network Interface: Network plugins for Kubernetes and beyond


The talk from KubeCon 2015

Eugene Yakubovich

November 09, 2015

Transcript

  1. Kubernetes networking model
     - IP per pod
     - Pods in the cluster can be addressed by their IP
  2. How to allocate IP addresses?
     - From a fixed block on a host
     - DHCP
     - IPAM system backed by SQL database
     - SDN assigned: e.g. Weave
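The "fixed block on a host" option can be sketched with Python's stdlib `ipaddress` module: carve per-node blocks out of a cluster-wide pod CIDR, so a pod's IP identifies the node hosting it. The CIDR matches the config used later in the talk; the /24-per-node split is an illustrative assumption, not something the slides specify.

```python
import ipaddress

# Hypothetical cluster-wide pod CIDR (same range as the talk's example
# config); each node is handed its own /24 slice, so any pod IP routes
# to exactly one node.
cluster_cidr = ipaddress.ip_network("10.10.0.0/16")
node_blocks = list(cluster_cidr.subnets(new_prefix=24))  # 256 per-node blocks

node_cidr = node_blocks[0]               # block assigned to the first node
first_pod_ip = next(node_cidr.hosts())   # first usable pod address in it

print(node_cidr, first_pod_ip)           # 10.10.0.0/24 10.10.0.1
```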
  3. Order matters!
     - macvlan + DHCP
       ◦ Create macvlan device
       ◦ Use the device to DHCP
       ◦ Configure device with allocated IP
     - Routed + IPAM
       ◦ Ask IPAM for an IP
       ◦ Create veth and routes on host and/or fabric
       ◦ Configure device with allocated IP
  4. CNI
     - Container can join multiple networks
     - Network described by JSON config
     - Plugin supports two commands
       ◦ Add container to the network
       ◦ Remove container from the network
  5. User configures a network

     $ cat /etc/cni/net.d/10-mynet.conf
     {
       "name": "mynet",
       "type": "bridge",
       "ipam": {
         "type": "host-local",
         "subnet": "10.10.0.0/16"
       }
     }
  6. CNI: Step 1. Container runtime creates a network namespace and gives it a named handle

     $ cd /var/lib/cni
     $ touch myns
     $ unshare -n mount --bind /proc/self/ns/net myns
  7. CNI: Step 2. Container runtime invokes the CNI plugin

     $ export CNI_COMMAND=ADD
     $ export CNI_NETNS=/var/lib/cni/myns
     $ export CNI_CONTAINERID=5248e9f8-3c91-11e5-...
     $ export CNI_IFNAME=eth0
     $ $CNI_PATH/bridge </etc/cni/net.d/10-mynet.conf
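The calling convention in step 2 is just "exec a binary with `CNI_*` environment variables set and the JSON config on stdin". A sketch of that contract in Python, substituting `/bin/cat` for the real `bridge` binary so the example runs anywhere and simply echoes back the config it was fed (the env values are the slide's examples, including the truncated container ID):

```python
import json
import os
import subprocess

conf = {"name": "mynet", "type": "bridge",
        "ipam": {"type": "host-local", "subnet": "10.10.0.0/16"}}

env = dict(os.environ,
           CNI_COMMAND="ADD",
           CNI_NETNS="/var/lib/cni/myns",
           CNI_CONTAINERID="5248e9f8-3c91-11e5-...",  # truncated in the slide
           CNI_IFNAME="eth0")

# A real runtime would exec $CNI_PATH/bridge here; /bin/cat stands in
# for it, so stdout is just the config JSON passed on stdin.
result = subprocess.run(["/bin/cat"], input=json.dumps(conf),
                        env=env, capture_output=True, text=True)
print(result.stdout)
```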
  8. CNI: Step 3. Inside the bridge plugin (1):

     $ brctl addbr mynet
     $ ip link add veth123 type veth peer name $CNI_IFNAME
     $ brctl addif mynet veth123
     $ ip link set $CNI_IFNAME netns $CNI_NETNS
     $ ip link set veth123 up
  9. CNI: Step 3. Inside the bridge plugin (2):

     $ IPAM_PLUGIN=host-local   # from network conf
     $ $CNI_PATH/$IPAM_PLUGIN
     { "ip4": { "ip": "10.10.5.9/16", "gateway": "10.10.0.1" } }
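The IPAM plugin's job on that slide — hand out a free address from the configured subnet and report it in CNI's `ip4` result shape — can be mimicked with a tiny in-memory allocator. This is a deliberate simplification: the real host-local plugin persists its reservations on disk, and the first-host-as-gateway rule here is an assumption matching the slide's `10.10.0.1` gateway.

```python
import ipaddress

def allocate(subnet, reserved):
    """Return the next free IP in `subnet` as a CNI-style ip4 result."""
    net = ipaddress.ip_network(subnet)
    gateway = next(net.hosts())          # assume first host is the gateway
    for ip in net.hosts():
        if ip != gateway and str(ip) not in reserved:
            reserved.add(str(ip))        # remember the allocation
            return {"ip4": {"ip": f"{ip}/{net.prefixlen}",
                            "gateway": str(gateway)}}
    raise RuntimeError("subnet exhausted")

reserved = set()
print(allocate("10.10.0.0/16", reserved))
# {'ip4': {'ip': '10.10.0.2/16', 'gateway': '10.10.0.1'}}
```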
  10. CNI: Step 3. Inside the bridge plugin (3):

      # switch to container namespace
      $ ip addr add 10.10.5.9/16 dev $CNI_IFNAME
      # Finally, print IPAM result JSON to stdout
  11. Kubernetes + CNI + Docker
      - Kubernetes has its own network plugins
      - CNI "driver" is a k8s network plugin
      - Future: make CNI the native plugin system
  12. Kubernetes + CNI + Docker
      - k8s starts "pause" container to create netns
      - k8s invokes its plugin (CNI driver)
      - k8s CNI driver executes a CNI plugin
      - CNI plugin joins "pause" container to network
      - Pod containers use "pause" container netns
  13. Kubernetes + rkt
      - rkt natively supports CNI
      - Kubernetes delegates to rkt to invoke CNI plugins
  14. Want to work on upstream Kubernetes or distributed systems infrastructure?
      CoreOS San Francisco is hiring. Work at CoreOS: coreos.com/careers