Materials from the Cloud Native Dojo @ IBM session held on Tuesday, 2019/07/23, 14:30 - 17:30.
by Doug Davis
1st session: Cloud Native: Everything you need to know in 80 minutes!
2nd session: Knative: The next wave for Cloud Native
by Japan Developer Advocate team

14:40 - 16:00 (80min) Cloud Native: Everything you need to know in 80 minutes!
16:00 - 16:10 Break
16:10 - 17:20 (70min) Knative: The next wave for Cloud Native
17:20 - 17:30 (10min) Q&A, Closing
Isolation
• Similar to VMs but managed at the process level
• All processes MUST be able to run on the shared kernel
• Each container has its own set of "namespaces" (isolated views):
  • PID - process IDs
  • USER - user and group IDs
  • UTS - hostname and domain name
  • NS - mount points
  • NET - network devices, stacks, ports
  • IPC - inter-process communications, message queues
• cgroups - controls limits and monitoring of resources
• Plus it gets its own root filesystem
[Diagram: Virtual Machine vs Container. Each VM runs its own OS on the hardware; containers share the host's base kernel, so the app, bins/libs, and any OS-specific files must all be runnable on that shared kernel. OS files that aren't needed can be excluded.]
Starting a container just means to:
• Create a new directory
• Lay down the container's filesystem
• Set up the networks, mounts, ...
• Start the process
The result:
• Better resource utilization
• Can fit far more containers than VMs into a host
Containers are not new
• Docker just made them easy to use
• Docker creates and manages the lifecycle of containers:
  • Setup filesystem
  • CRUD container
  • Setup networks
  • Setup volumes / mounts
• Create: start a new process, telling the OS to run it in isolation
$ docker run ubuntu echo Hello World
Hello World
• What happened?
  • Docker created a directory with a "ubuntu" filesystem (image)
  • Docker created a new set of namespaces
  • Ran a new process: echo Hello World
    • Using those namespaces to isolate it from other processes
    • Using that new directory as the "root" of the filesystem (chroot)
• That's it!
• Notice that as a user I never installed "ubuntu"
$ docker run -ti ubuntu bash
root@62deec4411da:/# pwd
/
root@62deec4411da:/# exit
$
• Now the process is "bash" instead of "echo"
• But it's still just a process
• Look around, mess around - it's totally isolated
• rm /etc/passwd - no worries!
• MAKE SURE YOU'RE IN A CONTAINER!
$ docker run ubuntu ps -ef
UID   PID  PPID  C  STIME  TTY  TIME      CMD
root  1    0     0  14:33  ?    00:00:00  ps -ef
• Things to notice:
  • Each container only sees its own process(es)
  • Running as "root"
  • Running as PID 1
Images: filesystem + metadata
• For sharing and redistribution
• Global/public registry for sharing: DockerHub
• Similar, in concept, to a VM image
[Diagram: Docker Host running the Docker Engine, with Images (Liberty, Ubuntu) and Containers on top of the Base OS/Kernel]
[Diagram: Virtual Machine vs Container, revisited. Each VM has its own OS on a hypervisor; containers share the same base kernel. The container's filesystem is built up from a stack of image layers.]
Creating images is one part of the story
• Sharing them is the other
• DockerHub - http://hub.docker.com
  • Public registry of Docker Images
  • Hosted by Docker Inc.
  • Free for public images, pay for private ones (one free private)
  • By default docker engines will look in DockerHub for images
  • Browser interface for searching, descriptions of images
[Diagram: Docker Host pulling images (Liberty, Ubuntu, mysql, nginx) from a Registry into its local image cache]
Docker Engine: a daemon running on a host
• Accepts requests from clients (REST API)
• Maps container ports to host ports - e.g. 80 → 3582
• Manages Images
• Docker Client
  • Drives the engine
  • Drives the "builder" of Images
• Docker Registry
  • Image DB
[Diagram: Client ($ docker run ..., $ docker build ...) driving the Docker Engine on a Docker Host, which pulls Images from a Registry and exposes/maps container ports]
Persistent data is kept outside of the container
• Volume mount into the container
• External storage - e.g. cloud object storage, DB
Monitoring
• Tools to monitor resource usage of a container
• Can limit resources allocated to each container to prevent excessive usage
Logging
• Typically stdout/stderr of the container is the log
• Can leverage tooling and external services for capturing logs or metrics
What to containerize?
• Servers, services - e.g. webapps, front-ends for back-end systems
• Pre-built environments - e.g. testing
• Anything you don't want to install locally - e.g. compilers, runtimes
• If you don't need to modify the kernel, start with containers
Images: filesystem + metadata
• No runtime state is kept in the Image
• Many tools available - e.g. "docker build" + Dockerfiles

$ cat Dockerfile
FROM golang
COPY myapp.go /
RUN go build -o /myapp /myapp.go
FROM ubuntu
COPY --from=0 /myapp /myapp
ENTRYPOINT [ "/myapp" ]
$ docker build -t myapp .
$ docker run myapp
Managing clusters of containers is a challenge
• Networking - between containers, and external
• Load-balancing
• Security & isolation
• Scaling - e.g. based on load
• Lifecycle management - e.g. restart if crashed
• Placement to ensure high availability
Kubernetes: manages infrastructure resources needed by applications
• Volumes
• Networks
• Secrets
• And many many many more...
• Declarative model
  • Provide the "desired state" and Kubernetes will make it happen
• What's in a name?
  • Kubernetes (K8s/Kube): "Helmsman" in ancient Greek
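The declarative model can be illustrated with a minimal Deployment manifest (the name, labels, and image below are made up for illustration): you only state the desired state - e.g. three replicas - and Kubernetes works to make it so.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp            # hypothetical app name
spec:
  replicas: 3            # desired state: three pods
  selector:
    matchLabels:
      app: myapp
  template:              # pod template used to create each replica
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: docker.io/example/myapp:v1   # hypothetical image
```

If a pod crashes or a node dies, Kubernetes notices the gap between desired and actual state and creates a replacement - you never imperatively "restart" anything.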
Kubernetes is, at its core, a database (etcd) with "controllers" that react to changes in the DB.
• The controllers are what make it Kubernetes - this pluggability and extensibility is part of its "secret sauce"
• The DB represents the user's desired state
• Controllers attempt to make reality match the desired state
• The "API Server" is the HTTP/REST front-end to the DB
• More on controllers later...
[Diagram: Client/User sends a request to the API Server, which stores it in the DB; a Controller monitors the DB and manages Nodes, Networks, Volumes, Secrets, ...]
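The controller pattern can be sketched as a toy reconcile loop in shell - no real Kubernetes involved; `desired` and `actual` simply stand in for the DB's desired state and the cluster's actual state:

```shell
#!/bin/sh
# Toy "controller": repeatedly compare desired vs actual state
# and take one corrective step at a time until they match.
desired=3   # what the user asked for (stored in the DB)
actual=0    # what is actually running in the cluster

while [ "$actual" -ne "$desired" ]; do
  if [ "$actual" -lt "$desired" ]; then
    actual=$((actual + 1))
    echo "created pod #$actual"
  else
    actual=$((actual - 1))
    echo "deleted a pod"
  fi
done
echo "reconciled: $actual/$desired pods running"
```

Real controllers follow the same shape - watch, diff, act - but react to DB change events rather than polling in a tight loop.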
A resource for every purpose
• Deployments, Events, Endpoints, Ingress, Jobs, Nodes, Namespaces, Pods, Persistent Volumes, Replica Sets, Secrets, Service Accounts, Services, Stateful Sets, and more...
• Kubernetes aims to provide the building blocks on which you build a cloud native platform
  • Therefore, the internal resource model is the same as the end-user resource model
Key resources
• Pod: a set of co-located containers
  • The smallest unit of deployment
  • Several types of resources help manage them: Replica Sets, Deployments, Stateful Sets, ...
• Services
  • Define how to expose your app as a DNS entry
  • Query-based selector to choose which pods apply
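A Service's query-based selector looks like this in yaml (names and ports are hypothetical): any pod carrying the matching label becomes a backend for the Service's DNS entry.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp            # becomes the DNS name inside the cluster
spec:
  selector:
    app: myapp           # label query: route to pods labeled app=myapp
  ports:
    - port: 80           # port the Service exposes
      targetPort: 8080   # port the container actually listens on
```

Because the selector is evaluated continuously, pods that come and go (scaling, crashes, upgrades) are added to or removed from the Service automatically.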
1. User deploys a new application (deployment.yml)
2. API server receives the request and stores it in the DB (etcd)
3. Controllers detect the resource changes and act upon them
4. The Deployment controller detects the new app and creates new pods in the DB to match the desired # of instances
5. The Scheduler assigns new pods to a kubelet
6. The Kubelet detects its pods and deploys them via the container runtime (e.g. Docker)
7. Kube-proxy manages network traffic for the pods - including service discovery and load-balancing
[Diagram: kubectl client sends deployment.yml to the Master's API Server (backed by etcd storage); Controllers (Replication, Endpoints, ...) and the Scheduler react; on each Node the Kubelet and Kube-proxy run the pods/services via the Docker Engine]
The community
• Most major Cloud Native players are there
• Lots of tooling to help with deploying/managing K8s and your apps
• https://kubernetes.io
• https://github.com/kubernetes
• IBM Kubernetes Service: https://ibm.com/iks
Paths to cloud native apps
• Break down the monolith into microservices
• Extend an existing monolith using containers
• Lift and Shift - to containers or VMs
• Replace existing features with SaaS offerings
• Leverage DevOps - CI/CD best practices
Hands-on tutorials: https://github.com/IBM/containers
• Docker 101
• Kubernetes 101
• Local Kubernetes Development
• Helm 101
• Kubernetes Networking 101
• Istio 101
• Service Catalog
• Knative 101
• IBM Kubernetes Service (IKS): https://ibm.com/iks
  • Can get a free cluster to play with
Recap - controllers: a set of watchers that react to changes in the DB
Terms covered: blue/green deployments, containers, pods, Replica Sets, Deployments, Services, Endpoints, Secrets, networks, Volumes/PV/PVC, Ingress / LBs, yaml, Spec vs Status, helm, kubectl, Istio, ...
Kubernetes: "A platform to build platforms"
• Reality: it is the platform
• Forces developers to manage infrastructure
• With flexibility came complexity
• "I just want to run my app!"
Less concern (and control) over the infrastructure implementation as you move up the stack:
Bare Metal → Virtual Machines → Containers → Functions
• Faster start-up times
• Better resource utilization
• Finer-grained management
• Splitting up the monolith
"Knative extends Kubernetes to provide a set of middleware components that are essential to build modern, source-centric, and container-based applications that can run anywhere: on premises, in the cloud, or even in a third-party data center." - Kn docs
Huh?
Knative Serving
• Service - manages the lifecycle of the app
• Configuration - manages the history of the app
• Revision - a snapshot of your app
  • Config and image
• Route - endpoint and network traffic management
[Diagram: a Service owns a Route and a Configuration; the Configuration tracks Revisions 1-3, and the Route splits traffic 90% / 10% between two Revisions]
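A 90/10 traffic split like the one in the diagram can be expressed directly on the Knative Service (names, image, and revision names here are hypothetical; the apiVersion is `serving.knative.dev/v1` in current Knative releases and may differ in older ones):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: myapp
spec:
  template:
    metadata:
      name: myapp-v3                 # the Revision this template creates
    spec:
      containers:
        - image: docker.io/example/myapp:v3   # hypothetical image
  traffic:
    - revisionName: myapp-v2
      percent: 90                    # keep most traffic on the old Revision
    - revisionName: myapp-v3
      percent: 10                    # canary the new Revision
```

Shifting the percentages over time (10 → 50 → 100) gives a gradual rollout; flipping them back gives an instant rollback, since old Revisions are retained.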
Knative Serving manages core K8s resources for you
o Provides a nicer UX for K8s
o Enables easy traffic splitting - e.g. for A/B upgrades
o Better resource utilization
o Scale-to-zero capabilities
o Auto-scaling on demand
o Runtime building blocks on which Cloud Providers can build a Serverless platform
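Scale-to-zero and auto-scaling are controlled with annotations on the Revision template - a sketch of the relevant fragment, using the Knative Pod Autoscaler annotation keys (values here are illustrative):

```yaml
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "0"    # "0" allows scale to zero
        autoscaling.knative.dev/maxScale: "10"   # cap the number of pods
        autoscaling.knative.dev/target: "100"    # target concurrent requests per pod
```

With minScale "0", an idle Service holds no pods at all; the first incoming request triggers a cold start, after which the autoscaler adds pods whenever per-pod concurrency exceeds the target.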
Knative Eventing
o Source - an event producer
  o Can create the subscription for you
o Broker - a receiver of events
  o E.g. a queue
o Trigger - ask for events from a Broker
  o Can specify a filter
o Manages the coordination/delivery of events to sinks
[Diagram: a Source sends events to a Broker; a Trigger filters them and delivers them to a Service (the sink); replies flow on to another Svc/Channel]
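A Trigger with a filter looks like this (names are hypothetical; the apiVersion is `eventing.knative.dev/v1` in current Knative and was `v1alpha1` in early releases): only CloudEvents whose attributes match the filter are delivered to the subscriber.

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: myapp-trigger
spec:
  broker: default                    # which Broker to ask for events
  filter:
    attributes:
      type: com.example.order.created   # hypothetical CloudEvent type
  subscriber:
    ref:                             # the sink: a Knative Service
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: myapp
```

The producer and consumer stay decoupled: the Source only knows about the Broker, and consumers opt in by creating Triggers with whatever filters they need.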
Chaining of "Sinks"
o Final "Reply"
Event Registry
o Collection of EventTypes (Kn Broker, CloudEvent type/source/schema)
o Auto-registration by some EventSources (when the sink is a broker)
kn [ command ] [ args ]
Services: Create, Delete, Describe, List, Update
Revisions: Describe, Delete, List
Routes: List
WIP:
o Plug-ins to extend the list of commands
o Service traffic splitting
o Integration with KnEventing
Tekton
o Task: a set of "steps", each the execution of a container
o TaskRun: a resource representing the execution of a Task
o Pipeline: an ordered set of "tasks"
o PipelineRun: a resource representing the execution of a Pipeline
o Left as an exercise for the user: connecting Tekton to KnServing
https://github.com/tektoncd/pipeline
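A minimal Task/TaskRun pair might look like this (names are made up; the apiVersion shown is `tekton.dev/v1beta1`, which varies by Tekton release) - the Task defines the steps, and the TaskRun is the resource that actually executes them:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: hello-task
spec:
  steps:
    - name: hello              # each step runs as a container
      image: ubuntu
      script: |
        echo "Hello from a Tekton step"
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: hello-task-run
spec:
  taskRef:
    name: hello-task           # execute the Task defined above
```

A Pipeline composes multiple Tasks in order, and a PipelineRun executes the Pipeline - the same definition-vs-execution split at a coarser grain.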
Demo recap - Serving
o Deployed revisions of an app
o Each deployment did a rolling upgrade
o Traffic split between revisions
Eventing
o Integrated a GitHub webhook - subscribed
o "push" events on our repo sent to a rebuild KnService
Tekton
o Invoked via the rebuild KnService
o Built a new image, pushed it to DockerHub
o Triggered a new revision of our app
https://github.com/duglin/helloworld
What we didn't do:
o Create a Deployment / ReplicaSet
o Create a K8s Service
o Create an Endpoint / Ingress
o Set up a dynamic Load-Balancer
o Set up an auto-scaler
o Talk to GitHub directly to create the web hook
o ...
What is Knative?
• A Serverless framework?
• A PaaS?
• A new "Deployment" with magic pixie dust?
• A new Kubernetes user experience?
Does it matter? Does it meet your needs?
Letting developers be developers again
o Can use Istio for networking/traffic splitting - but it is optional
o The "kn" CLI is the preferred UX when not using yaml
o Service vs Configs/Revisions/Routes
o Consider Tekton for your CI/CD needs
Knative add-on for IBM Cloud Kubernetes Service (IKS) - "experimental"
o One-click install of Knative into your cluster
o Includes Istio
o Updates to Knative are managed
o https://ibm.com/iks

$ ic ks cluster-addon-enable knative -y CLUSTER