
etcd


An overview of etcd, for an LT (lightning talk).


Yasuhiro Murata

August 22, 2019

Transcript

  1. What is etcd?

     etcd is a distributed, reliable key-value store for the most critical data of a distributed system, written in Go.

     • Simple: a well-defined, user-facing API (gRPC)
     • Secure: automatic TLS with optional client cert authentication
     • Fast: benchmarked at 10,000 writes/sec
     • Reliable: properly distributed using Raft

     Raft is a consensus algorithm, equivalent to Paxos in fault tolerance and performance.
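The key-value API above can be exercised from the command line with etcdctl; a minimal sketch, assuming etcd v3 and a running server on the default local endpoint:

```shell
# Store a value under a key, then read it back.
# Assumes a local etcd server at the default endpoint (127.0.0.1:2379).
export ETCDCTL_API=3            # needed only for older etcdctl builds
etcdctl put mykey "hello etcd"
etcdctl get mykey               # prints the key and its value
```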
  2. etcd with Kubernetes

     etcd is frequently teamed with:

     • Kubernetes: an open-source system for automating deployment, scaling, and management of containerized applications
     • locksmith: a reboot manager for the CoreOS update engine
     • vulcand: a programmatic, extendable proxy for microservices and API management
     • Doorman: a solution for global distributed client-side rate limiting
  3. etcd with Kubernetes

     The master components of Kubernetes are:

     • kube-apiserver: the front end of the Kubernetes control plane; exposes the Kubernetes API
     • etcd: the backing store for all cluster data
     • kube-scheduler: watches newly created pods and selects a node for them
     • kube-controller-manager: runs controllers (Node, Replication, Endpoints, Service Account & Token)
     • cloud-controller-manager: interacts with the underlying cloud provider
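On many clusters these control-plane components run as pods in the kube-system namespace; a hedged sketch of listing them with kubectl (labels and pod names vary by distribution; `tier=control-plane` is the kubeadm convention):

```shell
# List control-plane pods (kube-apiserver, etcd, kube-scheduler,
# kube-controller-manager) on a kubeadm-style cluster.
kubectl get pods -n kube-system -l tier=control-plane
```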
  4. etcd with Kubernetes

     Operate an etcd cluster for Kubernetes as follows:

     • run etcd as a cluster with an odd number of members
     • fulfill the guaranteed resource requirements
     • use etcd version 3.2.10+ (recommended)
     • limit access to the etcd cluster, because access to it is equivalent to root permission in the cluster
     • back up the etcd cluster, using the built-in snapshot or a volume snapshot (on GKE, master nodes are automatically scaled)
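The built-in snapshot mentioned above is taken with etcdctl; a minimal sketch, assuming etcd v3 with TLS enabled (the endpoint, certificate paths, and output path are placeholder assumptions):

```shell
# Take a point-in-time snapshot of the etcd keyspace.
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.crt \
  --cert=/etc/etcd/client.crt \
  --key=/etc/etcd/client.key \
  snapshot save /var/backups/etcd-snapshot.db

# Inspect the snapshot (hash, revision, total keys, size).
ETCDCTL_API=3 etcdctl snapshot status /var/backups/etcd-snapshot.db
```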
  5. etcd with Kubernetes

     Recommended GCE configurations are:

     small cluster
     • fewer than 100 clients, fewer than 200 requests/sec, stores less than 100MB
     • e.g. a 50-node Kubernetes cluster
     • recommended type: n1-standard-2 with 50GB PD SSD

     medium cluster
     • fewer than 500 clients, fewer than 1,000 requests/sec, stores less than 500MB
     • e.g. a 250-node Kubernetes cluster
     • recommended type: n1-standard-4 with 150GB PD SSD

     large cluster
     • fewer than 1,500 clients, fewer than 10,000 requests/sec, stores less than 1GB
     • e.g. a 1,000-node Kubernetes cluster
     • recommended type: n1-standard-8 with 250GB PD SSD
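For example, the small-cluster row could be provisioned on GCE roughly as follows (the instance name, zone, and use of the boot disk for etcd data are placeholder assumptions):

```shell
# Create one member of a small etcd cluster:
# n1-standard-2 machine with a 50GB PD SSD.
gcloud compute instances create etcd-member-1 \
  --zone=us-central1-a \
  --machine-type=n1-standard-2 \
  --boot-disk-type=pd-ssd \
  --boot-disk-size=50GB
```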