Slide 1


Slide 2

Current Adoption Status (October 2020)
Projects: 20+
App Configs: 130+
K8s Clusters: 50+

Slide 3

Projects Adopted
LINE SPOT, LINE Shopping, LINE Travel, LINE HUB, LINE MUSIC, LINE TODAY

Slide 4

Infrastructure Modernization

Slide 5

Cloud-Native Applications
Microservices, Containers, DevOps

Slide 6

Why Modernize Infrastructure?
Scalability, Developer On-boarding, Portability

Slide 7

Why Modernize Infrastructure? Portability
BEFORE: Install dependencies, set up environments, lots of moving parts
AFTER: Self-contained, ready-to-go Docker images can be run anywhere with Docker

Slide 8

Why Modernize Infrastructure? Developer On-boarding
BEFORE: Takes days if not weeks to get a development environment set up
AFTER: The entire stack is only a single kubectl apply command away

Slide 9

Why Modernize Infrastructure? Scalability
BEFORE: Scaling with VMs, high overhead, slower startup time
AFTER: Seconds to start, run more applications or replicas on a node, higher utilization

Slide 10

Kubernetesization at LINE Taiwan
› Started at the beginning of 2018
› "One cluster for all" approach
› Separate clusters for dev, staging and production environments
› K8s provisioned and managed by Rancher

Slide 11

Shared K8s Cluster at LINE Taiwan (December 2019)
Namespaces: 35
Running Pods: 3500+
Nodes: 100+

Slide 12

Challenges of Adopting Kubernetes

Slide 13

Challenges of Adopting Kubernetes
› Lack of Awareness of K8s
› Limited K8s Knowledge
› Needs for Developer Tooling
› Config & Manifests Management
› Arbitrary Cluster Manipulations
› Lack of Best Practices

Slide 14

Lack of Awareness of Kubernetes
› Most teams thought of Rancher as a sort of "PaaS"
› Configure and deploy workloads directly from the Rancher UI
› No awareness of what's "underneath" Rancher

Slide 16

Limited Kubernetes Knowledge
› Limited Kubernetes knowledge due to the ease of use of the Rancher UI
› Hard to communicate when encountering issues
› Guess-and-check approach to troubleshooting

Slide 17

Needs for Developer Tooling
› VKS is our in-house managed Kubernetes service
› Need for more developer-friendly tooling after the VKS migration
› The Rancher UI was used by developers, QAs and even non-technical people
› Importing VKS clusters into Rancher is not possible

Slide 18

Configuration & Manifests Management
› Where to store YAML files?
› How to handle configuration changes?
› Permission & access control of files

Slide 19

Arbitrary Cluster Manipulations
› Easily obtained kubeconfigs
› Direct cluster manipulation through kubectl
› Difficult to track changes made to Kubernetes objects

Slide 20

Lack of Best Practices
› Many ways of implementing and exposing services
› Choice of ingress controllers
› Resource & capacity planning
› Observability of services & applications

Slide 21

GitOps

Slide 22

Benefits of GitOps
› Single Source of Truth
› Minimal Direct Manipulations
› Developer-Friendly

Slide 23

Single Source of Truth
› A single Git repository stores the configuration for all environments
› Declarative configuration with Kubernetes manifests in YAML format
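A minimal sketch of the kind of declarative manifest such a repository would hold; the application name, image, and registry below are hypothetical placeholders:

```yaml
# Hypothetical manifest stored in the config repository.
# Applying it repeatedly converges the cluster to this desired state.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                # hypothetical application name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0.0   # hypothetical image
          ports:
            - containerPort: 8080
```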

Slide 24

Developer-Friendly
› Manage Kubernetes clusters with familiar Git workflows
› Git history becomes the change log of Kubernetes objects and cluster states

Slide 25

Minimal Direct Manipulations
› Provides a safer manner for changing cluster states
› Minimizes the need to manipulate Kubernetes objects manually
› All changes can be verified through code reviews
› Live cluster states can be synced with the changes automatically

Slide 26

[Diagram] The GitOps flow: developers create a pull request against the config repository; after code review the change is merged; a sync agent pulls the merged config and syncs it to the K8s cluster.

Slide 27

GitOps at LINE Taiwan

Slide 28

How We Implement GitOps
› ArgoCD
› Kustomize
› Standardized Apps & Practices

Slide 29

ArgoCD
› Declarative continuous delivery for Kubernetes
› The controller/sync agent between the config repository and Kubernetes clusters
› Web UI for live comparison between the desired state and the live state
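As a sketch, an ArgoCD `Application` object points the sync agent at a config repository; the repo URL, path, and namespaces below are hypothetical placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app              # hypothetical application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/config-repo.git  # hypothetical repo
    path: overlays/production
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true       # delete objects that were removed from Git
      selfHeal: true    # revert manual changes to match Git
```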

Slide 30

ArgoCD Dashboard

Slide 31

ArgoCD Application View

Slide 32

ArgoCD App Diff

Slide 33

[Diagram] The GitOps flow: developers create a pull request against the config repository; after code review the change is merged; a sync agent pulls the merged config and syncs it to the K8s cluster.

Slide 34

[Diagram] The GitOps flow with ArgoCD: developers create a pull request against the config repository; after code review the change is merged; a webhook triggers ArgoCD, which syncs the merged config to the K8s cluster.

Slide 35

Kustomize
› Kubernetes-native configuration management
› Plain, template-free YAMLs
› Supported natively by ArgoCD
› Encouraged for use with GitOps

Slide 36

[Diagram] The Kustomize build flow: a BASE (manifests plus kustomization.yaml) is referenced by an OVERLAY (patches plus kustomization.yaml); kustomize build combines the two into the final manifests.
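A minimal sketch of the base/overlay layout, assuming hypothetical file and directory names:

```yaml
# base/kustomization.yaml: lists the plain manifests that make up the base
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
---
# overlays/production/kustomization.yaml: references the base and patches it
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patchesStrategicMerge:
  - replica-patch.yaml   # e.g. raise replicas for production
```

Running `kustomize build overlays/production` then emits the final manifests.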

Slide 37

Base and Overlay Kustomization

Slide 38

Generated YAML Output

Slide 39

[Diagram] The GitOps flow with ArgoCD: developers create a pull request against the config repository; after code review the change is merged; a webhook triggers ArgoCD, which syncs the merged config to the K8s cluster.

Slide 40

[Diagram] The GitOps flow with ArgoCD and Kustomize: developers create a pull request against the config repository; after code review the change is merged; a webhook triggers ArgoCD, which runs a Kustomize build and syncs the generated manifests to the K8s cluster.

Slide 41

Standardized Common Apps
› All clusters need a common infrastructure setup
  › Ingress controller
  › Observability agents
› Collect all readily deployable applications in a single repository

Slide 42

Kustomization with Remote Base
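A minimal sketch of how an overlay can reference such a shared repository as a remote base; the organization, repository, path, and tag are hypothetical placeholders:

```yaml
# Hypothetical kustomization.yaml pulling a standardized app as a remote base
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  # <host>/<org>/<repo>//<path>?ref=<tag> is Kustomize's remote-resource syntax
  - github.com/example-org/common-apps//ingress-nginx?ref=v1.2.3
```

Pinning `ref` to a tag keeps every cluster on a known version of the shared app.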

Slide 43

K8s + GitOps
› Containerize: package applications in Docker images
› Base Config: write base manifests, using standardized apps if necessary
› Overlays: prepare overlays for different environments
› Sync: sync manifests to live clusters

Slide 44

[Diagram] The GitOps flow with ArgoCD and Kustomize: developers create a pull request against the config repository; after code review the change is merged; a webhook triggers ArgoCD, which runs a Kustomize build and syncs the generated manifests to the K8s cluster.

Slide 45

Other Common Practices
› Create separate read-only accounts for daily use
› Create usage-specific accounts for manual operations
› Set up different roles using Kubernetes RBAC
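A minimal sketch of a read-only role for daily use, assuming a hypothetical `developers` group; the resource list would be extended to whatever the teams need to inspect:

```yaml
# Hypothetical read-only ClusterRole for daily use
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: readonly
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "deployments", "replicasets"]
    verbs: ["get", "list", "watch"]   # no create/update/delete
---
# Bind the role to a (hypothetical) group of developers
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: readonly-developers
subjects:
  - kind: Group
    name: developers                  # hypothetical group name
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: readonly
  apiGroup: rbac.authorization.k8s.io
```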

Slide 46

Solving Challenges of Adopting K8s
Lack of Awareness of K8s:
› Migrate from the shared cluster with Rancher to VKS
Limited K8s Knowledge / Needs for Developer Tooling:
› From a shared cluster to project-owned clusters
› Introduce GitOps and related tooling
Config & Manifests Management:
› Use Git as the single source of truth for manifests
› Kustomize for manifest customization
Arbitrary Cluster Manipulations / Lack of Best Practices:
› Sync cluster states with ArgoCD
› Avoid direct usage of kubectl
› Standardized common apps
› Implement industry-wide best practices

Slide 47

Next Steps
› Automate the new-cluster on-boarding process
› YAML validation
› Kubernetes object validation
› Manifest policy checks

Slide 48

Thank you