
How GitOps Helps Kubernetes Adoption


Chih Chiang Tsai
Software Engineer, Site Reliability Engineering, LINE Taiwan
https://linedevday.linecorp.com/2020/ja/sessions/9156
https://linedevday.linecorp.com/2020/en/sessions/9156


LINE DevDay 2020

November 25, 2020

Transcript

  1. Why Modernize Infrastructure? (Developer On-boarding · Scalability · Portability) BEFORE: install dependencies, set up environments, lots of moving parts. AFTER: self-contained, ready-to-go Docker images that can be run anywhere Docker runs.
  2. Why Modernize Infrastructure? (Developer On-boarding · Scalability · Portability) BEFORE: it took days, if not weeks, to get a development environment set up. AFTER: the entire stack is only a single kubectl apply command away.
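To illustrate the "one command away" claim, a stack managed with Kustomize can be brought up from a single directory. This is a minimal sketch with made-up file names, not the actual LINE setup:

```yaml
# kustomization.yaml — a hypothetical application stack
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
  - ingress.yaml
# The entire stack is applied with one command:
#   kubectl apply -k .
```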
  3. Why Modernize Infrastructure? (Developer On-boarding · Scalability · Portability) BEFORE: scaling with VMs, high overhead, slow startup times. AFTER: containers start in seconds; run more applications or replicas on a node for higher utilization.
  4. Kubernetesization at LINE Taiwan › Started at the beginning of 2018 › A "one cluster for all" approach › Separate clusters for dev, staging, and production environments › K8s provisioned and managed by Rancher
  5. Challenges of Adopting Kubernetes › Lack of Awareness of K8s › Limited K8s Knowledge › Needs for Developer Tooling › Config & Manifests Management › Arbitrary Cluster Manipulations › Lack of Best Practices
  6. Lack of Awareness of Kubernetes › Most teams thought of Rancher as a sort of "PaaS" › Configured and deployed workloads directly from the Rancher UI › No awareness of what's "underneath" Rancher
  8. Limited Kubernetes Knowledge › Limited Kubernetes knowledge due to the ease of use of the Rancher UI › Hard to communicate when encountering issues › A guess-and-check approach to troubleshooting
  9. Needs for Developer Tooling › VKS is our in-house managed Kubernetes service › Need for more developer-friendly tooling after the VKS migration › The Rancher UI was used by developers, QAs, and even non-technical people › Importing VKS clusters into Rancher is not possible
  10. Configuration & Manifests Management › Where to store YAML files? › How to handle configuration changes? › Permission & access control of files
  11. Arbitrary Cluster Manipulations › Easily obtained kubeconfigs › Direct cluster manipulation through kubectl › Difficult to track changes made to Kubernetes objects
  12. Lack of Best Practices › Many ways of implementing and exposing services › Choice of ingress controllers › Resource & capacity planning › Observability of services & applications
  13. Single Source of Truth › A single Git repository stores the configuration for all environments › Declarative configuration with Kubernetes manifests in YAML format
  14. Developer-Friendly › Manage Kubernetes clusters with familiar Git workflows › Git history becomes the change log of Kubernetes objects and cluster states
  15. Minimal Direct Manipulations › Provides a safer way to change cluster states › Minimizes the need to manipulate Kubernetes objects manually › All changes can be verified through code reviews › Live cluster states can be synced with the changes automatically
  16. ArgoCD › Declarative continuous delivery for Kubernetes › The controller/sync agent between the config repository and Kubernetes clusters › Web UI for live comparison between the desired state and the live state
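As a sketch of how ArgoCD acts as the sync agent, a minimal Application resource pointing a cluster at a config repository could look like the following. The repo URL, paths, and names are placeholders, not LINE's actual configuration:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-service            # placeholder name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/org/config-repo.git
    targetRevision: main
    path: overlays/production  # ArgoCD picks up the kustomization here
  destination:
    server: https://kubernetes.default.svc
    namespace: my-service
  syncPolicy:
    automated:
      prune: true              # delete objects removed from Git
      selfHeal: true           # revert manual changes to match Git
```

The `automated` sync policy is what makes live cluster state follow Git without manual intervention, matching the "minimal direct manipulations" goal above.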
  17. Kustomize › Kubernetes-native configuration management › Plain, template-free YAML › Supported natively by ArgoCD › Encouraged for use with GitOps
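A typical base/overlay layout for per-environment manifests of the kind described above might look like this sketch (directory and file names are illustrative):

```yaml
# Repository layout (shown as comments):
#   base/
#     kustomization.yaml
#     deployment.yaml
#     service.yaml
#   overlays/
#     staging/kustomization.yaml
#     production/kustomization.yaml
#
# overlays/production/kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - path: replica-count.yaml   # e.g. raise replicas for production only
```

Each environment shares the same template-free base and applies only the patches it needs, which keeps the single Git repository usable as the source of truth for all environments.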
  18. Standardized Common Apps › All clusters need a common infrastructure setup › Ingress controller › Observability agents › Collect all readily deployable applications in a single repository
  19. K8s + GitOps › Containerize: package applications in Docker images › Base: write base manifests, using standardized apps where necessary › Overlays: prepare overlays for different environments › Sync: sync manifests to live clusters
  20. Other Common Practices › Create separate read-only accounts for daily use › Create usage-specific accounts for manual operations › Set up different roles using Kubernetes RBAC
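The read-only daily-use accounts could be backed by RBAC objects along these lines. This is a minimal sketch; the group name and resource list are placeholders:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: read-only
rules:
  - apiGroups: ["", "apps", "networking.k8s.io"]
    resources: ["pods", "services", "configmaps", "deployments", "ingresses"]
    verbs: ["get", "list", "watch"]   # no create/update/delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-only-developers
subjects:
  - kind: Group
    name: developers                  # placeholder group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: read-only
  apiGroup: rbac.authorization.k8s.io
```

Binding daily-use accounts to a role with only read verbs forces write operations through the Git-reviewed path rather than ad-hoc kubectl changes.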
  21. Solving the Challenges of Adopting K8s › Lack of Awareness of K8s: migrate from the shared cluster with Rancher to VKS › Limited K8s Knowledge & Needs for Developer Tooling: move from a shared cluster to project-owned clusters; introduce GitOps and related tooling › Config & Manifests Management: use Git as the single source of truth for manifests; Kustomize for manifest customization › Arbitrary Cluster Manipulations & Lack of Best Practices: sync cluster states with ArgoCD; avoid direct usage of kubectl; standardized common apps; implement industry-wide best practices
  22. Next Steps › Automate the new-cluster on-boarding process › YAML validation › Kubernetes object validation › Manifest policy checks
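One way the YAML and object validation steps could be wired into CI is to schema-check the rendered manifests on every pull request. This sketch uses GitHub Actions syntax and the kubeconform validator purely for illustration; the talk does not specify which CI system or tools LINE uses:

```yaml
# Hypothetical CI job: validate rendered manifests before merge
name: manifest-checks
on: [pull_request]
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Schema-check the production overlay
        # Assumes kustomize and kubeconform are installed on the runner
        run: |
          kustomize build overlays/production | kubeconform -strict -summary -
```

Policy checks (the last bullet) would go beyond schema validation, e.g. rejecting manifests that omit resource requests, and could be layered on with a policy engine in the same pipeline.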