• CNCF Ambassador, Certified Kubernetes Administrator and (former) Kubernetes WG/SIG Lead
• KubeCon Speaker in Berlin, Austin, Copenhagen, Shanghai & Seattle
• KubeCon Keynote Speaker in Barcelona
• Kubernetes approver and subproject owner, active in the community for 3+ years. Got kubeadm to GA.
• Weave Ignite creator
• Driving luxas labs, which currently performs contracting for Weaveworks
[Diagram: different scopes — Masters 1…N and Nodes 1…N each run kubeadm, which covers Kubernetes API bootstrapping; the Cluster API spec and its implementations cover Machines and Infrastructure; kops additionally spans Cloud Provider integrations, Load Balancers, Monitoring, Logging, and Addons.]
“[Make] the management of (X) clusters across (Y) providers simple, secure, and configurable.”
“How can I manage any number of clusters in a similar fashion to how I manage deployments in Kubernetes?”
“How can I manage other lifecycle events across that infrastructure (upgrades, deletions, etc.)?”
“How can we control all of this via a consistent API across providers?”
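As a sketch of what such a consistent, declarative API looks like, here is a minimal Cluster object in the v1alpha1 (cluster.k8s.io) API; the cluster name and the CIDR ranges are illustrative assumptions, not values from the talk:

apiVersion: "cluster.k8s.io/v1alpha1"
kind: Cluster
metadata:
  name: my-cluster    # hypothetical name
spec:
  clusterNetwork:
    # Illustrative, non-overlapping ranges
    services:
      cidrBlocks: ["10.96.0.0/12"]
    pods:
      cidrBlocks: ["192.168.0.0/16"]
    serviceDomain: "cluster.local"

A provider-specific operator watches objects like this and creates or adapts the real infrastructure to match the declared state.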
“[An open source Virtual Machine (VM) manager] with a container UX and built-in GitOps management”
- Combines Firecracker MicroVMs & OCI containers to unify containers and VMs.
- Works in a GitOps fashion; manages VMs declaratively.
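As a sketch of that declarative workflow, Ignite's GitOps mode watches a repository of VM definitions like the one below and reconciles the running VMs to match. The apiVersion and field names here are assumptions recalled from the early Ignite API, so treat the exact schema as approximate and check the Ignite docs:

apiVersion: ignite.weave.works/v1alpha1
kind: VM
metadata:
  name: my-vm
spec:
  # Field names approximated; see the Ignite API docs for the exact schema
  image:
    ociClaim:
      ref: weaveworks/ignite-ubuntu
  cpus: 2
  memory: 1GB
  diskSize: 3GB
  ssh: true

Committing a change to such a file (e.g. bumping cpus) is all it takes; Ignite converges the VM to the new desired state.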
…worked on programming tasks. We needed to:
a) Use open source (no “normal” VM licenses)
b) Run legacy applications with “special requirements”
c) Integrate with containers
# Run a VM: use 2 vCPUs and 1GB of RAM, enable automatic SSH access and name it my-vm
ignite run weaveworks/ignite-ubuntu \
  --cpus 2 \
  --memory 1GB \
  --ssh \
  --name my-vm

# List running VMs
ignite ps

# List Docker (OCI) and kernel images imported into Ignite
ignite images
ignite kernels

# Get the boot logs of the VM
ignite logs my-vm

# SSH into the VM
ignite ssh my-vm

Demo!
apiVersion: "cluster.k8s.io/v1alpha1"
kind: MachineSet
metadata:
  name: my-nodes
spec:
  replicas: 3
  selector:
    matchLabels:
      foo: bar
  template:
    metadata:
      labels:
        foo: bar
    spec:
      providerConfig:
        value:
          apiVersion: "baremetalconfig/v1alpha1"
          kind: "BareMetalProviderConfig"
          zone: "us-central1-f"
          machineType: "n1-standard-1"
          image: "ubuntu-1604-lts"
      versions:
        kubelet: 1.14.2
        containerRuntime:
          name: containerd
          version: 1.2.0

• With Kubernetes we manage our applications declaratively
  a. Why not for the cluster itself?
• With the Cluster API, we can declaratively define the desired cluster state
  a. Operator implementations reconcile the state
  b. Use Spec & Status like the rest of k8s
  c. Common management solutions for e.g. upgrades, autoscaling and repair (see the Machine sketch after this list)
  d. Allows for “GitOps” workflows
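For example, an upgrade in this model is just an edit to the desired state: bump the version fields on a Machine and let the operator reconcile the node. A minimal sketch, assuming the v1alpha1 Machine type and reusing the hypothetical bare-metal provider config from above:

apiVersion: "cluster.k8s.io/v1alpha1"
kind: Machine
metadata:
  name: my-controlplane-1    # hypothetical name
  labels:
    set: controlplane
spec:
  providerConfig:
    value:
      apiVersion: "baremetalconfig/v1alpha1"
      kind: "BareMetalProviderConfig"
      zone: "us-central1-f"
      machineType: "n1-standard-1"
      image: "ubuntu-1604-lts"
  versions:
    # Changing these fields declares an upgrade; the operator carries it out
    kubelet: 1.14.2
    controlPlane: 1.14.2

The same Spec/Status pattern as Deployments applies: you change Spec, the controller works toward it, and Status reports how far reconciliation has progressed.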