Why Modernize Infrastructure?
Portability
› BEFORE: Install dependencies, set up environments, lots of moving parts
› AFTER: Self-contained, ready-to-go Docker images that can run anywhere Docker is available
Developer On-boarding
› BEFORE: Takes days, if not weeks, to get a development environment set up
› AFTER: The entire stack is only a single kubectl apply command away
Scalability
› BEFORE: Scaling with VMs means high overhead and slower startup times
› AFTER: Containers start in seconds, run more applications or replicas per node, higher utilization
Kubernetesization at LINE Taiwan
› Started at the beginning of 2018
› "One cluster for all" approach
› Separate clusters for dev, staging and production environments
› K8s provisioned and managed by Rancher
Shared K8s Cluster at LINE Taiwan (December 2019)
› Namespaces: 35
› Running Pods: 3500+
› Nodes: 100+
Challenges of Adopting Kubernetes
› Lack of Awareness of K8s
› Limited K8s Knowledge
› Need for Developer Tooling
› Config & Manifests Management
› Arbitrary Cluster Manipulations
› Lack of Best Practices
Lack of Awareness of Kubernetes
› Most teams thought of Rancher as a sort of "PaaS"
› Configure and deploy workloads directly from Rancher UI
› No awareness of what's "underneath" Rancher
Limited Kubernetes Knowledge
› Limited Kubernetes knowledge due to the ease of use of the Rancher UI
› Hard to communicate when encountering issues
› Guess-and-check approach to troubleshooting
Need for Developer Tooling
› VKS is our in-house managed Kubernetes service
› The Rancher UI was used by developers, QAs, and even non-technical people
› Importing VKS clusters into Rancher is not possible
› Need for more developer-friendly tooling after the VKS migration
Configuration & Manifests Management
› Where to store YAML files?
› How to handle configuration changes?
› Permission & access control of files
Arbitrary Cluster Manipulations
› Easily obtained kubeconfigs
› Direct cluster manipulation through kubectl
› Difficult to track changes made to Kubernetes objects
Lack of Best Practices
› Many ways of implementing and exposing services
› Choice of ingress controllers
› Resource & capacity planning
› Observability of services & applications
GitOps
Benefits of GitOps
› Single Source of Truth
› Minimal Direct Manipulations
› Developer-Friendly
Single Source of Truth
› Single Git repository to store configuration for all environments
› Declarative configuration with Kubernetes manifests in YAML format
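For illustration, a minimal sketch of the kind of declarative manifest that would live in such a config repository; the application name, image, and replica count are placeholders, not actual LINE workloads.

```yaml
# Hypothetical manifest as it would be stored in Git, e.g. under base/deployment.yaml.
# "sample-api" and the image reference are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-api
  labels:
    app: sample-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sample-api
  template:
    metadata:
      labels:
        app: sample-api
    spec:
      containers:
        - name: sample-api
          image: registry.example.com/sample-api:1.0.0
          ports:
            - containerPort: 8080
```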
Developer-Friendly
› Manage Kubernetes clusters with familiar Git workflows
› Git history becomes the change log of Kubernetes objects and cluster states
Minimal Direct Manipulations
› Provides a safer manner for changing cluster states
› Minimizes the need to manipulate Kubernetes objects manually
› All changes can be verified through code reviews
› Live cluster states can be synced with the changes automatically
How We Implement GitOps
› ArgoCD
› Standardized Apps & Practices
› Kustomize
ArgoCD
› Declarative Continuous Delivery for Kubernetes
› The controller/sync agent between the config repository and Kubernetes clusters
› Web UI for live comparison between the desired state and the live state
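As a sketch of how this wiring could look, an Argo CD Application that points one cluster at a path in the config repository; the repository URL, path, and namespaces below are assumed for illustration.

```yaml
# Hypothetical Argo CD Application: the controller keeps the manifests at the
# given repo path in sync with the target cluster and namespace.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sample-api-production
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.example.com/platform/k8s-config.git   # placeholder config repo
    targetRevision: main
    path: overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: sample-api
  syncPolicy:
    automated:
      prune: true      # remove objects that were deleted from Git
      selfHeal: true   # revert manual drift back to the state in Git
```

With automated sync enabled, merging a change to the config repository is enough to roll it out, which is what keeps direct kubectl usage to a minimum.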
Standardized Common Apps
› All clusters need to have a common infrastructure setup
  › Ingress controller
  › Observability agents
› Collect all readily deployable applications in a single repository
Kustomization with Remote Base
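A sketch of what such a kustomization might look like, consuming a standardized common app (here an ingress controller) from the shared repository as a remote base; the repository URL, paths, and version tag are placeholders.

```yaml
# Hypothetical kustomization.yaml that pulls a standardized app from a shared
# common-apps repository as a remote base, then applies local adjustments.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: ingress-nginx
resources:
  # Remote base: a readily deployable app maintained in one shared repository
  - https://github.example.com/platform/common-apps.git//ingress-nginx/base?ref=v1.2.0
patches:
  # Cluster-specific tweaks (e.g. replica count) kept next to this file
  - path: replica-count-patch.yaml
```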
K8s + GitOps
› Containerize: package applications in Docker images
› Base Config: write base manifests, use standardized apps if necessary
› Overlays: prepare overlays for different environments
› Sync: sync manifests to live clusters
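To make the Base Config and Overlays steps concrete, a sketch of an environment overlay that reuses the base manifests and changes only environment-specific values; the directory layout, image tag, and patch file are assumptions.

```yaml
# Hypothetical overlays/staging/kustomization.yaml: reuses the base manifests
# and overrides only the environment-specific pieces for the staging cluster.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: sample-api-staging
resources:
  - ../../base                     # base manifests from the "Base Config" step
images:
  - name: registry.example.com/sample-api
    newTag: 1.1.0-rc1              # staging runs the release-candidate image
patches:
  - path: resources-patch.yaml     # e.g. smaller CPU/memory requests than production
```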
Other Common Practices
› Create separate read-only accounts for daily use
› Create usage-specific accounts for manual operations
› Set up different roles using Kubernetes RBAC
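A minimal sketch of the read-only setup described above using Kubernetes RBAC; the "developers" group and the resource list are placeholders.

```yaml
# Hypothetical read-only role for daily use, bound to a placeholder "developers"
# group; usage-specific accounts would get their own, narrower roles and bindings.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: read-only
rules:
  - apiGroups: ["", "apps", "batch"]
    resources: ["pods", "pods/log", "configmaps", "services", "deployments", "replicasets", "jobs"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: developers-read-only
subjects:
  - kind: Group
    name: developers               # placeholder group from the identity provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: read-only
  apiGroup: rbac.authorization.k8s.io
```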
Solving the Challenges of Adopting K8s
Lack of Awareness of K8s
› Migrate from the shared cluster with Rancher to VKS
Limited K8s Knowledge / Need for Developer Tooling
› From a shared cluster to project-owned clusters
› Introduce GitOps and related tooling
Config & Manifests Management
› Use Git as the single source of truth for manifests
› Kustomize for manifest customization
Arbitrary Cluster Manipulations / Lack of Best Practices
› Sync cluster states with ArgoCD
› Avoid direct usage of kubectl
› Standardized common apps
› Implement industry-wide best practices
Next Steps
› Automate the new-cluster on-boarding process
› YAML validation
› Kubernetes object validation
› Manifest policy checks