Kubernetes Controllers - are they loops or events?
Tim Hockin
February 20, 2021
Transcript
Kubernetes Controllers: Are they loops or events? Tim Hockin @thockin
v1
Background on “reconciliation”: https://speakerdeck.com/thockin/kubernetes-what-is-reconciliation
Background on “edge vs. level”: https://speakerdeck.com/thockin/edge-vs-level-triggered-logic
Usually when we talk about controllers, we refer to them as a “loop”.
Imagine a controller for Pods (aka kubelet). It has two jobs: 1) actuate the pod API, and 2) report status on pods.
What you’d expect looks something like:
[Diagram: a Node running pods a, b, and c; the kubelet on that Node talks to the Kubernetes API]
kubelet -> API: Get all pods
API -> kubelet: { name: a, ... } { name: b, ... } { name: c, ... }
kubelet: for each pod p { if p is running { verify p config } else { start p }; gather status }
kubelet -> API: Set status for a, b, c
...then repeat (aka “a poll loop”)
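Roughly what that looks like in Go - a sketch only, not the real kubelet; the package name, the 10-second interval, and the reconcilePod helper are illustrative assumptions:

```go
// A naive poll loop: every few seconds, list all pods bound to this node
// and reconcile each one.
package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func pollLoop(ctx context.Context, client kubernetes.Interface, nodeName string) {
	for {
		// 1) Read the desired state: every pod the API says belongs on this node.
		pods, err := client.CoreV1().Pods("").List(ctx, metav1.ListOptions{
			FieldSelector: "spec.nodeName=" + nodeName,
		})
		if err == nil {
			// 2) Actuate each pod and gather its status (reporting elided).
			for _, p := range pods.Items {
				reconcilePod(p)
			}
		}
		time.Sleep(10 * time.Second) // ...then repeat (the poll loop)
	}
}

// reconcilePod is a hypothetical helper: if the pod is running, verify its
// config; otherwise start it; then gather its status.
func reconcilePod(p corev1.Pod) {}
```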
Here’s where it matters
[Diagram sequence: someone runs “kubectl delete pod b”; the API forgets pod b, but it is still running on the Node]
kubelet -> API: Get all pods
API -> kubelet: { name: a, ... } { name: c, ... }
kubelet: I have “b” but the API doesn't - delete it!
kubelet -> API: Set status for a, c
This is correct level-triggered reconciliation: read the desired state, make it so.
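The level-triggered comparison itself, continuing the sketch above; startPod and stopPod are hypothetical helpers standing in for real pod actuation:

```go
// Level-triggered: compare desired state (what the API listed) against actual
// state (what is running locally) and fix the difference in both directions.
func reconcile(desired []corev1.Pod, running map[string]bool) {
	want := map[string]bool{}
	for _, p := range desired {
		want[p.Name] = true
		if !running[p.Name] {
			startPod(p) // hypothetical: not running yet - start it
		}
	}
	for name := range running {
		if !want[name] {
			stopPod(name) // hypothetical: "I have it but the API doesn't - delete it!"
		}
	}
}
```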
Some controllers are implemented this way, but it's inefficient at scale.
Imagine thousands of controllers (kubelet, kube-proxy, DNS, ingress, storage...) polling continuously.
We need to achieve the same behavior more efficiently
We could poll less often, but then it takes a long (and variable) time to react - not a great UX.
Enter the “list-watch” model
[Diagram sequence: the kubelet now keeps a local cache of pods]
kubelet -> API: Get all pods
API -> kubelet: { name: a, ... } { name: b, ... } { name: c, ... }
kubelet fills its cache - Cache: { name: a, ... } { name: b, ... } { name: c, ... }
kubelet -> API: Watch all pods (the watch stays open)
kubelet, against the cache: for each pod p { if p is running { verify p config } else { start p }; gather status }
kubelet -> API: Set status for a, b, c
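What the list-watch model looks like with client-go's informers, which do the initial list, the watch, and the cache for us. A minimal sketch, assuming an existing clientset and the reconcilePod helper from earlier; error handling and the imports (k8s.io/client-go/informers, k8s.io/client-go/tools/cache, k8s.io/apimachinery/pkg/labels) are elided:

```go
// List-watch: one List fills the local cache, then a Watch keeps it up to date.
// Reconciliation now reads from the cache instead of hammering the API server.
func runListWatch(clientset kubernetes.Interface) {
	factory := informers.NewSharedInformerFactory(clientset, 0)
	podInformer := factory.Core().V1().Pods()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop) // performs the List, then keeps the Watch open
	cache.WaitForCacheSync(stop, podInformer.Informer().HasSynced)

	// Reads now come from the local cache, not the API server.
	pods, _ := podInformer.Lister().List(labels.Everything())
	for _, p := range pods {
		reconcilePod(*p)
	}
}
```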
We trade memory (the cache) for other resources (API server CPU in particular).
There's no point in polling my own cache, so what happens next?
Remember that watch we did earlier? That's an open stream for events.
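Under the hood, that watch is a single long-lived request whose response is a stream of Added/Modified/Deleted events. A bare-bones sketch without the informer machinery, assuming a clientset and ctx are in scope inside a function that returns an error:

```go
// A raw watch: the server pushes events over one open stream until we stop it.
w, err := clientset.CoreV1().Pods("default").Watch(ctx, metav1.ListOptions{})
if err != nil {
	return err
}
defer w.Stop()
for ev := range w.ResultChan() {
	if pod, ok := ev.Object.(*corev1.Pod); ok {
		fmt.Println(ev.Type, pod.Name) // e.g. "DELETED b"
	}
}
```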
[Diagram sequence: the same “kubectl delete pod b”, handled through the watch]
user -> API: kubectl delete pod b (the API forgets pod b; it is still running on the Node)
API -> kubelet, over the watch: Delete: { name: b, ... }
kubelet updates its cache - Cache: { name: a, ... } { name: c, ... }
kubelet: the API said to delete pod “b” - delete it from the Node
“But you said edge-triggered is bad!”
It is! But this isn’t edge-triggered.
The cache is updated by events (edges), but we are still reconciling state.
“???”
The controller can be restarted at any time and the cache will be reconstructed - we can't “miss an edge”* (* modulo bugs, read on).
Even if you miss an event, you can still recover the state.
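This is the standard controller shape in client-go: the event handler only records which object changed (a key on a workqueue), and a worker reconciles that object from the cache - still level-triggered. A fragment, assuming the podInformer from the earlier sketch; reconcileKey is a hypothetical helper that reads the object's current state from the lister and acts on it:

```go
// Events only tell us *which* object to look at; the reconcile still reads
// the current state and makes reality match it.
queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())

enqueue := func(obj interface{}) {
	if key, err := cache.DeletionHandlingMetaNamespaceKeyFunc(obj); err == nil {
		queue.Add(key)
	}
}
podInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
	AddFunc:    enqueue,
	UpdateFunc: func(_, obj interface{}) { enqueue(obj) },
	DeleteFunc: enqueue,
})

// Worker: pop a key, look up the *current* state, make it so.
for {
	key, shutdown := queue.Get()
	if shutdown {
		break
	}
	reconcileKey(key.(string)) // hypothetical: fetch from the lister; the object
	queue.Done(key)            // may be gone - "gone" is a level too
}
```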
Ultimately it's all just software, and software has bugs. Controllers should re-list periodically to get the full state...
...but we've put a lot of energy into making sure that our list-watch is reliable.
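In client-go terms, that periodic safety net is the informer's resync period: every object in the local cache is re-delivered to the handlers on a timer (the reflector separately re-lists from the API server whenever its watch breaks). A one-line sketch, with the 10-minute period as an arbitrary example:

```go
// Resync: every 10 minutes, every cached Pod is handed to the handlers again,
// so an event that was dropped or mishandled still gets reconciled eventually.
factory := informers.NewSharedInformerFactory(clientset, 10*time.Minute)
```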