The "*app*" perspective is how the app processes inside the pod see the environment.
This example pod will use a set of three apps:
| Name                               | Version | Image hash                                    |
|------------------------------------|---------|-----------------------------------------------|
| example.com/reduce-worker          | 1.0.0   | sha512-277205b3ae3eb3a8e042a62ae46934b470e43  |
| example.com/worker-backup          | 1.0.0   | sha512-3e86b59982e49066c5d813af1c2e2579cbf57  |
| example.com/reduce-worker-register | 1.0.0   | sha512-86298e1fdb95ec9a45b5935504e26ec29b8fe  |
#### Filesystem Setup
Each app in a pod will start chrooted into its own unique read-write filesystem before execution.
An app's filesystem must be *rendered* in an empty directory by the following process (or equivalent):
- The `rootfs` contained in the ACI is extracted
- If the ACI contains a non-empty `dependencies` field in its `ImageManifest`, the `rootfs` of each dependent image is extracted, in the order in which they are listed
- If the ACI contains a non-empty `pathWhitelist` field in its `ImageManifest`, *all* paths not in the whitelist must be removed
Every execution of an app MUST start from a clean copy of this rendered filesystem.
The simplest implementation will take an ACI (with no dependencies) and extract it into a new directory.
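Sketched in Go, that rendering process looks roughly like the following. This is a minimal sketch, not the reference implementation: `extractTar`, `resolveDependency`, and `removeNonWhitelisted` are hypothetical helpers, and the real `ImageManifest` carries many more fields.

```go
package render

// Illustrative types only; the real ImageManifest has many more fields.
type Manifest struct {
	Dependencies  []string // names of dependent images, in listed order
	PathWhitelist []string // paths to keep, if non-empty
}

type Image struct {
	Manifest  Manifest
	RootfsTar string // path to the tar holding this image's rootfs
}

// Hypothetical helpers, stubbed out here.
func extractTar(tarPath, destDir string) error             { return nil }
func removeNonWhitelisted(dir string, keep []string) error { return nil }
func resolveDependency(name string) Image                  { return Image{} }

// Render renders one app's filesystem into the empty directory dir,
// following the steps listed above.
func Render(img Image, dir string) error {
	// 1. Extract the rootfs contained in the ACI.
	if err := extractTar(img.RootfsTar, dir); err != nil {
		return err
	}
	// 2. Extract each dependency's rootfs, in the order listed.
	for _, dep := range img.Manifest.Dependencies {
		if err := extractTar(resolveDependency(dep).RootfsTar, dir); err != nil {
			return err
		}
	}
	// 3. Remove all paths not named in the pathWhitelist, if one is set.
	if len(img.Manifest.PathWhitelist) > 0 {
		return removeNonWhitelisted(dir, img.Manifest.PathWhitelist)
	}
	return nil
}
```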
Slide 7
What do we want?
Image Format Storytime!
Slide 8
you
Slide 9
you as a software engineer
Slide 10
your
```ada
with Ada.Text_IO;

procedure Hello_World is
   use Ada.Text_IO;
begin
   Put_Line("Hello, world!");
end Hello_World;
```

```c
#include <stdio.h>

int main()
{
    printf("Hello, world!\n");
}
```

```go
package main

import "fmt"

func main() {
	fmt.Println("Hello, world!")
}
```
Slide 11
your container image
Slide 12
your /bin/java
/opt/app.jar
/lib/libc
Slide 13
your /bin/python
/opt/app.py
/lib/libc
Slide 14
your example.com/app
d474e8c57737625c
Slide 15
your
d474e8c57737625c
Signed By: Alice
Slide 16
you as an ops engineer
Slide 17
your
Slide 18
your
example.com/app
x3
Slide 19
your
example.com/app
x3
Slide 20
your
???
example.com/app
x3
Slide 21
Towards a Standard
Journey to OCI
Slide 22
An image format
A container runtime
A log collection daemon
An init system and process babysitter
A container image build system
Slide 23
An image format
Slide 24
17 Months Ago
November 2014
Slide 25
Docker Image Format Circa 2014
- Very fluid format and evolution
- Not content-addressable
- No name delegation/discovery
  - Like MX records
- No mechanism for signing
appc image in a nutshell
- Image Format (ACI)
  - what does an application consist of?
- Image Discovery
  - how can an image be located?
- Content-addressing
  - what is the cryptographic ID of an image?
- Signing
  - how is an image signed?
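The content-addressing bullet is easy to make concrete: an appc image ID is the sha512 of the uncompressed ACI, so the same bytes always yield the same ID no matter where the image came from. A minimal Go sketch (the filename below is hypothetical):

```go
package main

import (
	"crypto/sha512"
	"fmt"
	"io"
	"os"
)

func main() {
	// Hypothetical path to an already-uncompressed ACI tarball.
	f, err := os.Open("reduce-worker.aci")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Hash the whole image; the hex digest becomes the image ID.
	h := sha512.New()
	if _, err := io.Copy(h, f); err != nil {
		panic(err)
	}
	fmt.Printf("sha512-%x\n", h.Sum(nil)) // compare the truncated IDs shown earlier
}
```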
Slide 29
a modern, secure container runtime
a simple, composable tool (CLI)
an implementation of an open standard (appc)
Slide 30
shell scripts
docker2aci
acbuild
goaci
deb2aci
Slide 31
12 Months Ago
April 2015
Slide 32
Docker v2.2 Image Format Circa 2015
- Versioned v2.0, v2.1, v2.2 schema
- Content-addressable
- No name delegation/discovery
  - Like MX records
- Optional and non-prescribed signing
OCI Image Format Spec Project
- A serialized image format
- Content-addressable
- Optional stuff
  - Signatures based on signing the image's content address
  - Naming federated via DNS, which can be delegated
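To make "content-addressable" concrete: a Docker v2.2-style manifest (the starting point for the OCI spec work) references its config and layers by digest. The digests and sizes below are made up for illustration:

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
  "config": {
    "mediaType": "application/vnd.docker.container.image.v1+json",
    "size": 7023,
    "digest": "sha256:d474e8c57737..."
  },
  "layers": [
    {
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "size": 32654,
      "digest": "sha256:e692418e4cba..."
    }
  ]
}
```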
Slide 39
Where are we going?
- Goal: standard container
  - Runtime
  - Image
  - Identity & Signing
  - Discovery & Naming
  - Distribution
- Goal: Enable Innovation
  - Build systems
  - Runtimes
Slide 40
Container Networking
Connectivity and Beyond
Slide 41
Container Networking Interface
Initially designed for rkt, now used in lots of places
github.com/appc/cni
Slide 42
Connectivity for "pod" network model
- IP per pod
- Pods in the cluster can be addressed by their IP
Slide 43
How to network containers together?
- linux-bridge
- macvlan
- ipvlan
- Open vSwitch
- Weave
- Project Calico
- flannel
- GCE networking
- AWS VPC
Slide 44
How to allocate IP addresses?
- From a fixed block on a host
- DHCP
- IPAM system backed by SQL database
- SDN assigned: e.g. Weave
CNI
- Container can join multiple networks
- Network described by JSON config
- Plugin supports two commands, as sketched below
  - Add container to the network
  - Remove container from the network
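Roughly, the runtime side of that two-command contract looks like this: the plugin is an ordinary executable, the network config arrives on stdin, and parameters are passed as CNI_* environment variables. A minimal sketch in Go; the container ID and paths are hypothetical, though the CNI_* variable names follow the CNI spec:

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// The network is described by a JSON config file (shown on the next slide).
	conf, err := os.ReadFile("/etc/cni/net.d/10-mynet.conf")
	if err != nil {
		panic(err)
	}

	// The plugin binary matches the "type" field of the config.
	cmd := exec.Command("/opt/cni/bin/bridge")
	cmd.Env = append(os.Environ(),
		"CNI_COMMAND=ADD",             // the other command is DEL
		"CNI_CONTAINERID=example",     // hypothetical container ID
		"CNI_NETNS=/var/lib/cni/myns", // named netns handle (see Step 1 below)
		"CNI_IFNAME=eth0",
		"CNI_PATH=/opt/cni/bin",
	)
	cmd.Stdin = bytes.NewReader(conf)

	// On success, the plugin prints the resulting IP configuration as JSON.
	out, err := cmd.Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out)
}
```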
Slide 47
User configures a network:

```
$ cat /etc/cni/net.d/10-mynet.conf
{
  "name": "mynet",
  "type": "bridge",
  "ipam": {
    "type": "host-local",
    "subnet": "10.10.0.0/16"
  }
}
```
Slide 48
CNI: Step 1
Container runtime creates a network namespace and gives it a named handle:

```
$ cd /var/lib/cni
$ touch myns
$ unshare -n mount --bind /proc/self/ns/net myns
```

(`unshare -n` runs the mount in a fresh network namespace; bind-mounting `/proc/self/ns/net` onto `myns` pins that namespace so it outlives the command.)
CNI Flexibility
- Plugins manage their own state
  - Essential for network vendors, who often have complex control planes
- Process model exposes the full Linux network stack
- External plugins implement the API and get out of the way
  - Metaswitch Networks, Weaveworks
Slide 51
CNI Community
- Maintainers from CoreOS, Pivotal, and Weaveworks
- Used by rkt, Kubernetes, Cloud Foundry, and kurma; usable with runC
- External plugins from Metaswitch Networks and Weaveworks
Slide 52
Container Networking Model
CNM is implemented in github.com/docker/libnetwork
Slide 53
Defining a new network model (sketched in code below)
- Network
  - A logical network (think VLAN)
- Endpoint
  - Connects a sandbox to a network
- Sandbox
  - A container-level network configuration for DNS, routes, etc.
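As a rough mental model, the three concepts relate as in the illustrative Go interfaces below. These are simplified stand-ins for exposition, not libnetwork's actual API:

```go
package cnm

// Network is a logical network (think VLAN) that endpoints attach to.
type Network interface {
	CreateEndpoint(name string) (Endpoint, error)
	Delete() error
}

// Endpoint joins one sandbox to one network; a container in several
// networks holds several endpoints.
type Endpoint interface {
	Join(sb Sandbox) error
	Leave(sb Sandbox) error
}

// Sandbox is a container-level network configuration: interfaces,
// routes, and DNS.
type Sandbox interface {
	SetDNS(servers []string) error
	AddRoute(destination, gateway string) error
}
```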
Slide 54
CNM for Cluster and Local Networking
Slide 55
"Battery Included Plugins"
- Bridge
- "Default Docker Networking"
- Overlay
- VXLAN and coordinated internally using libkv
- Remote
- Everyone else uses this
Slide 56
Remote Plugins
- Operate a long-running plugin API service
- Linux networking features restricted to the API
- External plugins
  - Difficult to integrate because of the API model
  - Existing control planes don't get useful metadata
Slide 57
libnetwork Community
- Maintainers from Docker and Tencent
- Used by Docker Engine
- External plugins from Metaswitch Networks and Weaveworks
Slide 58
Container Networking
Conclusions
Slide 59
CNM creates a new interface to the network world
- Hard to integrate into existing systems like Kubernetes
- Exposes networking concepts through a new API/model
- Adopted by Docker Engine

CNI is a simple model for container networking
- Simple to integrate with a process-based workflow
- Exposes the full Linux network stack
- Adopted by rkt, kurma, Kubernetes, and Cloud Foundry; integrates easily with runC