that, it is a platform
• User interface
• Backward compatibility with existing applications
• Clustering with Swarm mode
• Opinionated workflows and defaults, such as Docker Hub
• Commercial support
• Product direction not entirely community led
Some people do not want those things.
OCI runtime
• Used by Docker since 1.11 in 2016 as a container runtime
• Relaunched in December 2016 with a new scope
• Docker now using the 0.2 branch
• The 1.0 master branch is where the new work is taking place
Entirely new scope, and donated to the CNCF
2Q 2017, but the evolution helps
• Some code is reused from Docker
• Some is rewritten and improved based on experience
• The runtime code is already in production with lots of users
• Focus on getting APIs clear and clean
• Extensible via plugins
• Will be supported for one year
• Can evolve in new directions for 2.0, ...
• Limited scope...
No networking in containerd
  ◦ This is what the users of containerd wanted
  ◦ https://github.com/docker/containerd/issues/362
  ◦ Continue to use CNI or other APIs as before
• No volume management in containerd
  ◦ Storage layer can be hooked in at the OCI layer
  ◦ The Container Storage Interface (CSI) is a proposed new industry standard for cluster-wide volume plugins
  ◦ Joint proposal from a group of people who work on Docker, Kubernetes, Mesosphere and Cloud Foundry
  ◦ CSI is currently in the early draft stage, and seeking feedback from the community
  ◦ https://github.com/moby/moby/issues/31923
No log management in containerd (yet)
  ◦ Output streams of containers can be handled as required
  ◦ Platform can arrange logging how it wishes
  ◦ https://github.com/docker/containerd/issues/603 discusses changes
  ◦ Possibly adding timestamps, formatting in the shim
• No build in containerd
  ◦ Use other tools for building containers
  ◦ Very different concerns from runtime
{
  rpc Info(InfoRequest) returns (InfoResponse);
  rpc Delete(DeleteContentRequest) returns (google.protobuf.Empty);
  rpc Read(ReadRequest) returns (stream ReadResponse);
  rpc Status(StatusRequest) returns (stream StatusResponse);
  rpc Write(stream WriteRequest) returns (stream WriteResponse);
}
Content is identified via a digest, i.e. a content hash. Status gives the status of an in-progress write transaction.
Content Digest
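To make "identified via a digest" concrete, here is a minimal Go sketch (not containerd code itself) that computes such a digest with the github.com/opencontainers/go-digest package used by the containerd APIs; the blob contents are made up for the example.

    package main

    import (
        "fmt"

        digest "github.com/opencontainers/go-digest"
    )

    func main() {
        // Any blob (layer tarball, manifest, image config) is stored in the
        // content store under the hash of its bytes.
        blob := []byte("example blob contents")

        // FromBytes returns a self-describing digest such as "sha256:9f8a...".
        dgst := digest.FromBytes(blob)
        fmt.Println(dgst)

        // The algorithm and encoded hash can be recovered from the digest,
        // which is how a reader verifies content it fetches back.
        fmt.Println(dgst.Algorithm(), dgst.Encoded())
    }

Because the identifier is derived from the bytes, identical content written twice is stored once, and anything read back can be checked against its digest.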
  rpc Prepare(PrepareRequest) returns (MountResponse);
  rpc Mounts(MountsRequest) returns (MountResponse);
}
• Unpack a downloaded image
• Prepare the root filesystem from the set of layers
• Mounts returns a list of mounts to make, does not execute them
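A hedged Go sketch of the Prepare/Mounts split, driven through the containerd Go client rather than raw gRPC; the snapshotter name "overlayfs", the snapshot keys and the socket path are illustrative assumptions, and the client surface shown is the one that stabilised around 1.0, so details may differ from the proto above.

    package main

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Connect to the containerd gRPC socket (default path assumed).
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        ctx := namespaces.WithNamespace(context.Background(), "example")

        // Ask the overlay snapshotter to prepare a writable snapshot on top
        // of a parent layer; the parent key here is made up.
        sn := client.SnapshotService("overlayfs")
        mounts, err := sn.Prepare(ctx, "redis-rootfs", "sha256:parent-layer-chain-id")
        if err != nil {
            log.Fatal(err)
        }

        // Prepare only returns mount instructions; nothing is mounted yet.
        for _, m := range mounts {
            log.Printf("type=%s source=%s options=%v", m.Type, m.Source, m.Options)
        }
    }

Performing the mounts (for example with mount.All from the containerd mount package) is left to the caller, which keeps the snapshotter independent of how the platform actually assembles the container's root filesystem.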
layering images:
  ◦ overlay
  ◦ btrfs
  ◦ vfs
❖ These correspond to overlay and snapshotting drivers, which are the two models. The aim is to make sure the API provides support for both types, not to be comprehensive.
❖ Also a plain driver that does not use layers.
❖ Plugins will provide additional mechanisms, e.g. ZFS
API
• There are some commands for testing, e.g. (dist rootfs, snapshot)
  ◦ However their CLIs are unstable and may be incomplete
  ◦ They can be useful for understanding what is going on, and for writing tests
  ◦ Helpful in trying out low-level operations, e.g. applying layers
• ctr - everything for containers
• shim - directly interact with containers (bypass containerd)
Containerd end to end
$ dist pull docker.io/library/redis:alpine
$ ctr run --id redis -t docker.io/library/redis:alpine
docker run
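The same pull-then-run flow can also be driven from the Go client API instead of the test CLIs. This is a hedged sketch against the 1.0-era client packages; the namespace "example", the container ID and the default socket path are assumptions.

    package main

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/cio"
        "github.com/containerd/containerd/namespaces"
        "github.com/containerd/containerd/oci"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        ctx := namespaces.WithNamespace(context.Background(), "example")

        // Rough equivalent of "dist pull": fetch the image and unpack it.
        image, err := client.Pull(ctx, "docker.io/library/redis:alpine", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }

        // Rough equivalent of "ctr run": create container metadata, a rootfs
        // snapshot and an OCI spec, then start a task for it.
        container, err := client.NewContainer(ctx, "redis",
            containerd.WithNewSnapshot("redis-snapshot", image),
            containerd.WithNewSpec(oci.WithImageConfig(image)),
        )
        if err != nil {
            log.Fatal(err)
        }
        defer container.Delete(ctx, containerd.WithSnapshotCleanup)

        task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
        if err != nil {
            log.Fatal(err)
        }
        defer task.Delete(ctx)

        if err := task.Start(ctx); err != nil {
            log.Fatal(err)
        }
        log.Println("redis started with pid", task.Pid())
    }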
best supported runtime
• Containerd is being written to replace the relevant code in Docker
• The CRI acts as an API for runtimes in Kubernetes
• Work on integration in https://github.com/cri-containerd/kubernetes
• Kubernetes PR https://github.com/kubernetes/kubernetes/pull/43655
• Containerd 1.0 milestone: support in at least one runtime
• Likely 1.0 will be shipped with at least Docker and Kubernetes support
• Working with Kubernetes is an essential part of the roadmap