Slide 1

github.com/opencontainers/specs

Slide 2

github.com/appc/spec

Slide 3

Image Format
- Application Container Image (.aci)
- tarball of rootfs + manifest
- uniquely identified by ImageID (hash)
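A toy sketch of the points above: an .aci is just a tarball of a manifest plus a rootfs tree, and appc derives the ImageID from a hash (sha512 by default) of the uncompressed tarball. The manifest here is heavily abbreviated (a real ImageManifest needs acVersion, labels, and more) and all paths are invented:

```shell
# Build a toy image layout (manifest abbreviated, paths invented).
mkdir -p /tmp/aci-demo/rootfs
printf '%s\n' '{ "acKind": "ImageManifest", "name": "example.com/http-server" }' \
  > /tmp/aci-demo/manifest

# An .aci is a tarball containing the manifest and the rootfs/ tree.
tar -C /tmp/aci-demo -cf /tmp/http-server.aci manifest rootfs

# The ImageID is a hash of the uncompressed tarball (sha512 by default).
echo "sha512-$(sha512sum /tmp/http-server.aci | awk '{print $1}')"
```

Because the ID is content-addressed, two parties that build byte-identical tarballs get the same ImageID without coordinating.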

Slide 4

Image Discovery
- Resolves app name → artefact (.aci)
  - example.com/http-server
  - coreos.com/etcd
- DNS + HTTPS + HTML meta tags
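Discovery works by fetching an HTTPS page for the app name and reading a meta tag that maps the name to a download template. A local sketch, assuming the `ac-discovery` meta tag format from the appc discovery spec; the page contents and URLs here are invented:

```shell
# Fake discovery page (a real client would fetch it over HTTPS for the
# app name, e.g. from example.com).
cat > /tmp/discovery.html <<'EOF'
<html><head>
<meta name="ac-discovery" content="example.com https://example.com/images/{name}-{version}-{os}-{arch}.{ext}">
</head></html>
EOF

# Extract the image-location template from the ac-discovery meta tag.
grep -o '<meta name="ac-discovery" content="[^"]*"' /tmp/discovery.html \
  | sed 's/.*content="//; s/"$//'
```

The client then substitutes the app name, version, OS, and arch into the template to get the .aci URL.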

Slide 5

Crypto Verification
- Take an ACI, public key and signature
- Verify()

Slide 6

Pods
- grouping of multiple applications (templated or deterministic)
- shared execution context (namespaces, volumes)
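As a sketch of that grouping, an appc pod manifest lists each app by name and image hash so the executor can set up the shared context around them. Field names follow the appc PodManifest schema as best I recall; all values here are invented and the hashes deliberately elided:

```json
{
  "acVersion": "0.7.0",
  "acKind": "PodManifest",
  "apps": [
    { "name": "http-server",   "image": { "id": "sha512-..." } },
    { "name": "log-collector", "image": { "id": "sha512-..." } }
  ]
}
```

Both apps land in the same execution context (network namespace, volumes), which is what makes sidecar patterns like the log collector above work.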

Slide 7

Executor
- runtime environment
- isolators, networking, lifecycle
- metadata service

Slide 8

appc and OCI
aka https://xkcd.com/927

Slide 9

OCI - Open Containers Initiative
- Announced June 2015 (as OCP)
- Lightweight, open governance project - Linux Foundation
- Container runtime format - configuration on disk, execution environment
- Runtime implementation (runc)
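The "configuration on disk" half of the runtime format is a JSON document (config.json) that sits next to the rootfs and tells the runtime what to execute. A minimal illustrative fragment, with the field set heavily trimmed and the version string invented; see the OCI runtime spec for the real schema:

```json
{
  "ociVersion": "0.2.0",
  "process": { "args": ["/bin/sh"], "cwd": "/" },
  "root":    { "path": "rootfs" }
}
```

runc reads this file from the bundle directory and launches the single process it describes.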

Slide 10

appc vs OCI

appc
- image format
- runtime environment
- pods
- image discovery

OCI
- runtime format
- runtime environment

Slide 11

appc vs OCI

appc runtime
- environment variables
- Linux device files
- hooks
- etc...
- multiple apps

OCI runtime
- environment variables
- Linux device files
- hooks
- etc...
- single app (process)

Slide 12

Container Network Interface
github.com/appc/cni
Brandon Philips - @brandonphilips

Slide 13

Application containers are awesome
- Application containers provide
  - isolation
  - packaging
- Networking isolation
  - its own port space
  - its own IP

Slide 14

Network Namespace
- Can every container have a "real" IP?
- How should the network be virtualized?
- Is network virtualization part of the "container runtime" (e.g. rkt, Docker, etc.)?

Slide 15

New net ns:

$ sudo unshare -n /bin/bash
$ ip addr
1: lo: <LOOPBACK> mtu 65536 ...
    link/loopback 00:00:00:00:00:00 brd ...

Slide 16

New net ns:

$ ip link set lo up
$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 ...
    link/loopback 00:00:00:00:00:00 brd ...
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever

Slide 17

New net ns:

$ ping 8.8.8.8
connect: Network is unreachable
$ ip route show
$

Slide 18

[diagram: veth pairs with point-to-point /31 addressing - container ends 10.0.1.5/31 and 10.0.1.7/31, host ends 10.0.1.4 and 10.0.1.6]

Slide 19

[diagram: veth pairs on a shared /24 - containers 10.0.1.5/24 and 10.0.1.7/24, gateway 10.0.1.1/24]

Slide 20

Virtualizing the NIC and Network
- veth pair (plus linux-bridge)
- macvlan
- ipvlan
- OVS
- vlan
- vxlan

Slide 21

IP Address Management
- Host
- Cluster
- Global

Slide 22

Which one? No right answer!

Slide 23

Need pluggable network strategy

Slide 24

[diagram: Container Runtime (e.g. rkt) wired directly to veth / macvlan / ipvlan / OVS]

Slide 25

[diagram: Container Runtime (e.g. rkt) wired directly to veth / macvlan / ipvlan / OVS]

Slide 26

[diagram: Container Runtime (e.g. rkt) reaching veth / macvlan / ipvlan / OVS through the Container Networking Interface (CNI)]

Slide 27

CNI
- Container can join multiple networks
- Network described by JSON config
- Plugin supports two commands
  - Add container to the network
  - Remove container from the network
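The two-command contract can be sketched as a no-op plugin. This shows only the calling convention (the runtime passes CNI_COMMAND and friends via the environment and reads a result JSON from stdout), not a real network driver; the result values are invented:

```shell
# Write a stub plugin that only demonstrates the CNI calling convention.
cat > /tmp/noop-plugin <<'EOF'
#!/bin/sh
case "$CNI_COMMAND" in
  ADD) printf '{ "ip4": { "ip": "10.10.5.9/16" } }\n' ;;  # sample result only
  DEL) exit 0 ;;                                          # tear down, no output
  *)   echo "unknown CNI_COMMAND: $CNI_COMMAND" >&2; exit 1 ;;
esac
EOF
chmod +x /tmp/noop-plugin

# Invoke it the way a container runtime would.
CNI_COMMAND=ADD CNI_NETNS=/run/myns CNI_IFNAME=eth0 /tmp/noop-plugin
```

A real plugin would create and configure the interface inside $CNI_NETNS before printing its result.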

Slide 28

User configures a network

$ cat /etc/rkt/net.d/10-mynet.conf
{
  "name": "mynet",
  "type": "bridge",
  "ipam": {
    "type": "host-local",
    "subnet": "10.10.0.0/16"
  }
}

Slide 29

CNI: Step 1

Container runtime creates a network namespace and gives it a named handle:

$ cd /run
$ touch myns
$ sudo unshare -n mount --bind /proc/self/ns/net myns

Slide 30

CNI: Step 2

Container runtime invokes the CNI plugin:

$ export CNI_COMMAND=ADD
$ export CNI_NETNS=/run/myns
$ export CNI_CONTAINERID=5248e9f8-3c91-11e5-...
$ export CNI_IFNAME=eth0
$ $CNI_PATH/bridge

Slide 31

CNI: Step 3

Inside the bridge plugin (1):

$ brctl addbr mynet
$ ip link add veth123 type veth peer name $CNI_IFNAME
$ brctl addif mynet veth123
$ ip link set $CNI_IFNAME netns $CNI_NETNS
$ ip link set veth123 up

Slide 32

CNI: Step 3

Inside the bridge plugin (2):

$ IPAM_PLUGIN=host-local  # from network conf
$ $CNI_PATH/$IPAM_PLUGIN
{
  "ip4": {
    "ip": "10.10.5.9/16",
    "gateway": "10.10.0.1"
  }
}

Slide 33

CNI: Step 3

Inside the bridge plugin (3):

# switch to container namespace
$ ip addr add 10.10.5.9/16 dev $CNI_IFNAME

# Finally, print IPAM result JSON to stdout

Slide 34

Current plugins

Top level:
- ptp
- bridge
- macvlan
- ipvlan

IPAM:
- host-local
- dhcp

Slide 35

Questions?
github.com/appc/cni