Kubernetes Controllers - are they loops or events?
Tim Hockin
February 20, 2021
Transcript
Kubernetes Controllers: Are they loops or events? Tim Hockin (@thockin)
v1
Background on “reconciliation”: https://speakerdeck.com/thockin/kubernetes-what-is-reconciliation
Background on “edge vs. level”: https://speakerdeck.com/thockin/edge-vs-level-triggered-logic
Usually when we talk about controllers, we refer to them as a "loop".
Imagine a controller for Pods (aka kubelet). It has 2 jobs: 1) Actuate the pod API, 2) Report status on pods.
What you’d expect looks something like:
[Diagram: a Node running pods a, b, c, with the kubelet talking to the Kubernetes API]
1. kubelet → API: Get all pods
2. API → kubelet: { name: a, ... } { name: b, ... } { name: c, ... }
3. kubelet: for each pod p { if p is running { verify p config } else { start p } gather status }
4. kubelet → API: Set status for a, b, c
...then repeat (aka “a poll loop”)
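A minimal Go sketch of that poll loop (illustrative only - listPods, reconcilePod, and reportStatus are hypothetical stand-ins for the real kubelet machinery):

// Poll-loop controller: on every tick, read the full desired state from the
// API, drive the node toward it, then report status.
func runPollLoop(ctx context.Context, interval time.Duration) {
    ticker := time.NewTicker(interval)
    defer ticker.Stop()
    for {
        if pods, err := listPods(ctx); err == nil { // "Get all pods" bound to this node
            for _, p := range pods {
                reconcilePod(ctx, p) // running? verify config. not running? start it.
            }
            reportStatus(ctx, pods) // "Set status"
        }
        select {
        case <-ctx.Done():
            return
        case <-ticker.C: // ...then repeat
        }
    }
}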
Here’s where it matters
[Diagram: "kubectl delete pod b" removes b from the API, but the Node is still running a, b, c]
1. user → API: kubectl delete pod b
2. kubelet → API: Get all pods
3. API → kubelet: { name: a, ... } { name: c, ... }
4. kubelet: I have "b" but the API doesn't - delete it!
5. kubelet → API: Set status for a, c (b is gone from the Node)
This is correct level-triggered reconciliation: read desired state, make it so.
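In code, "read desired state, make it so" looks roughly like this (again a sketch - desiredPods, runningPods, verifyConfig, startPod, stopPod, and reportStatus are hypothetical helpers):

func reconcile(ctx context.Context) error {
    desired, err := desiredPods(ctx) // map[name]spec: what the API says should run here
    if err != nil {
        return err
    }
    actual := runningPods() // map[name]pod: what is actually running on this node

    // Make everything the API wants exist and match its spec.
    for name, spec := range desired {
        if pod, ok := actual[name]; ok {
            verifyConfig(ctx, pod, spec)
        } else {
            startPod(ctx, spec)
        }
    }
    // "I have b but the API doesn't - delete it!"
    for name, pod := range actual {
        if _, ok := desired[name]; !ok {
            stopPod(ctx, pod)
        }
    }
    return reportStatus(ctx)
}

Note that it doesn't matter which events (if any) got us here - the diff against full state fixes everything.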
Some controllers are implemented this way, but it's inefficient at scale.
Imagine thousands of controllers (kubelet, kube-proxy, dns, ingress, storage...) polling continuously.
We need to achieve the same behavior more efficiently
We could poll less often, but then it takes a long (and variable) time to react - not a great UX.
Enter the “list-watch” model
[Diagram: the same Node and pods a, b, c, now with a local cache in the kubelet]
1. kubelet → API: Get all pods
2. API → kubelet: { name: a, ... } { name: b, ... } { name: c, ... }
3. kubelet stores the result in a cache: { name: a, ... } { name: b, ... } { name: c, ... }
4. kubelet → API: Watch all pods
5. kubelet: for each pod p in the cache { if p is running { verify p config } else { start p } gather status }
6. kubelet → API: Set status for a, b, c
We trade memory (the cache) for other resources (API server CPU in particular).
There's no point in polling my own cache, so what happens next?
Remember that watch we did earlier? That's an open stream for events.
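In client-go, this list + cache + watch combination is packaged as a shared informer. A sketch of the usual setup (error handling elided):

import (
    "time"

    "k8s.io/apimachinery/pkg/labels"
    "k8s.io/client-go/informers"
    "k8s.io/client-go/kubernetes"
)

func startPodCache(clientset kubernetes.Interface, stopCh <-chan struct{}) {
    factory := informers.NewSharedInformerFactory(clientset, 10*time.Minute)
    podInformer := factory.Core().V1().Pods()
    lister := podInformer.Lister() // touching the informer registers it with the factory

    factory.Start(stopCh)            // one LIST to fill the cache, then a WATCH to keep it fresh
    factory.WaitForCacheSync(stopCh) // blocks until the initial LIST is in the cache

    // From here on, reads are served from the local cache, not the API server.
    pods, _ := lister.List(labels.Everything())
    _ = pods
}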
[Diagram: "kubectl delete pod b" again, this time with the watch and cache in place]
1. user → API: kubectl delete pod b (the cache still holds a, b, c)
2. API → kubelet, via the watch: Delete: { name: b, ... }
3. kubelet updates the cache: { name: a, ... } { name: c, ... }
4. kubelet: the API said to delete pod "b" - stop it and remove it from the Node
“But you said edge-triggered is bad!”
It is! But this isn’t edge-triggered.
The cache is updated by events (edges), but we are still reconciling state.
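Continuing the informer sketch above, the standard client-go pattern makes this concrete: the event handlers only enqueue a key, and a worker reconciles that object from the cache, so the logic stays level-triggered. (Real controllers use a rate-limited workqueue; a plain channel keeps this sketch short. stopPodByName and reconcilePodObj are hypothetical helpers; the extra imports are k8s.io/client-go/tools/cache and k8s.io/apimachinery/pkg/api/errors.)

keys := make(chan string, 100) // "something about this object may have changed"

podInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
    AddFunc: func(obj interface{}) {
        if key, err := cache.MetaNamespaceKeyFunc(obj); err == nil {
            keys <- key
        }
    },
    UpdateFunc: func(oldObj, newObj interface{}) {
        if key, err := cache.MetaNamespaceKeyFunc(newObj); err == nil {
            keys <- key
        }
    },
    DeleteFunc: func(obj interface{}) {
        if key, err := cache.DeletionHandlingMetaNamespaceKeyFunc(obj); err == nil {
            keys <- key
        }
    },
})

// The worker ignores the event payload entirely and re-reads current state.
go func() {
    for key := range keys {
        ns, name, _ := cache.SplitMetaNamespaceKey(key)
        pod, err := podInformer.Lister().Pods(ns).Get(name)
        if apierrors.IsNotFound(err) {
            stopPodByName(ns, name) // the API doesn't have it - make sure it isn't running
            continue
        }
        reconcilePodObj(pod) // make reality match pod.Spec, report status
    }
}()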
“???”
The controller can be restarted at any time and the cache will be reconstructed - we can't "miss an edge"* (* modulo bugs, read on).
Even if you miss an event, you can still recover the state.
Ultimately it’s all just software, and software has bugs. Controllers
should re-list periodically to get full state...
...but we’ve put a lot of energy into making sure
that our list-watch is reliable.
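In client-go terms, that reliability work lives in the reflector (which re-establishes the watch and falls back to a fresh LIST when it can't resume) and in the informer's resync period, which periodically replays every cached object to the handlers so each one gets reconciled again even if no event for it ever arrives. A sketch of the knob, reusing the names above:

// With a 30s resync, every object already in the cache is periodically
// re-delivered to the handlers as an Update (old and new are effectively the
// same cached object), forcing a full re-reconcile on top of the event stream.
factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)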