Kubernetes Controllers - are they loops or events?
Tim Hockin
February 20, 2021
Transcript
Kubernetes Controllers - Are they loops or events? Tim Hockin @thockin
v1
Background on “reconciliation”: https://speakerdeck.com/thockin/kubernetes-what-is-reconciliation
Background on “edge vs. level”: https://speakerdeck.com/thockin/edge-vs-level-triggered-logic
Usually when we talk about controllers we refer to them as a "loop"
Imagine a controller for Pods (aka kubelet). It has 2 jobs: 1) Actuate the pod API 2) Report status on pods
What you’d expect looks something like:
[Diagram: a Node running kubelet and pods a, b, c, talking to the Kubernetes API]
kubelet → API: Get all pods
API → kubelet: { name: a, ... } { name: b, ... } { name: c, ... }
kubelet: for each pod p { if p is running { verify p config } else { start p } gather status }
kubelet → API: Set status for a, b, c
...then repeat (aka “a poll loop”)
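A rough sketch of that loop in Go (not the real kubelet; listPods, reconcile, and setStatus are hypothetical hooks):

package controller

import (
	"context"
	"time"
)

// PodSpec is a stand-in for the real Pod API object.
type PodSpec struct{ Name string }

// Hypothetical hooks: the real kubelet talks to the API server and the
// container runtime here.
var (
	listPods  func(ctx context.Context) ([]PodSpec, error)
	reconcile func(desired []PodSpec) map[string]string
	setStatus func(ctx context.Context, status map[string]string)
)

// runPollLoop is the "loop" version of the controller: read desired state,
// make it so, report status, sleep, repeat.
func runPollLoop(ctx context.Context) {
	for {
		if desired, err := listPods(ctx); err == nil { // "Get all pods"
			status := reconcile(desired) // start/verify each pod, gather status
			setStatus(ctx, status)       // "Set status"
		}
		select {
		case <-ctx.Done():
			return
		case <-time.After(10 * time.Second): // ...then repeat
		}
	}
}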
Here’s where it matters
[Diagram: a user runs "kubectl delete pod b"; the API no longer has pod b, but it is still running on the Node]
kubelet → API: Get all pods
API → kubelet: { name: a, ... } { name: c, ... }
kubelet: I have "b" but the API doesn't - delete it!
kubelet → API: Set status for a, c
This is correct level-triggered reconciliation: read desired state, make it so
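A sketch of that reconcile step, continuing the snippet above (runningPods, startPod, stopPod, and verifyPod are hypothetical stand-ins for the container runtime):

// Hypothetical hooks into the container runtime.
var (
	runningPods func() map[string]bool
	startPod    func(PodSpec)
	stopPod     func(name string)
	verifyPod   func(PodSpec)
)

// reconcilePods converges actual state toward desired state. It does not
// care how we learned about a change - it compares full states.
func reconcilePods(desired []PodSpec) {
	want := map[string]PodSpec{}
	for _, p := range desired {
		want[p.Name] = p
	}
	actual := runningPods() // pod names currently running on this node

	// Running locally but unknown to the API: delete it (this is pod "b").
	for name := range actual {
		if _, ok := want[name]; !ok {
			stopPod(name)
		}
	}
	// Desired: verify if running, start if not.
	for name, p := range want {
		if actual[name] {
			verifyPod(p)
		} else {
			startPod(p)
		}
	}
}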
Some controllers are implemented this way, but it's inefficient at scale
Imagine thousands of controllers (kubelet, kube-proxy, dns, ingress, storage...) polling continuously
We need to achieve the same behavior more efficiently
We could poll less often, but then it takes a long (and variable) time to react - not a great UX
Enter the “list-watch” model
[Diagram: the same Node, but kubelet now keeps a local cache of pods]
kubelet → API: Get all pods
API → kubelet: { name: a, ... } { name: b, ... } { name: c, ... }
kubelet: Cache: { name: a, ... } { name: b, ... } { name: c, ... }
kubelet → API: Watch all pods
kubelet: for each pod p in the cache { if p is running { verify p config } else { start p } gather status }
kubelet → API: Set status for a, b, c
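In Go, the list-watch itself might look roughly like this with the client-go typed client (a sketch with error handling trimmed; real controllers normally get this from an informer):

package controller

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
)

// listAndWatch does one full LIST to seed the cache, then opens a WATCH
// starting from the list's resourceVersion so no change is missed.
func listAndWatch(ctx context.Context, client kubernetes.Interface) (map[string]v1.Pod, watch.Interface, error) {
	// "Get all pods"
	list, err := client.CoreV1().Pods("").List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, nil, err
	}
	cache := map[string]v1.Pod{}
	for _, p := range list.Items {
		cache[p.Namespace+"/"+p.Name] = p
	}
	// "Watch all pods" - an open stream of events from where the LIST ended.
	w, err := client.CoreV1().Pods("").Watch(ctx, metav1.ListOptions{
		ResourceVersion: list.ResourceVersion,
	})
	if err != nil {
		return nil, nil, err
	}
	return cache, w, nil
}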
We trade memory (the cache) for other resources (API server CPU in particular)
There's no point in polling my own cache, so what happens next?
Remember that watch we did earlier? That's an open stream for events.
[Diagram: a user runs "kubectl delete pod b"; the API removes pod b while it is still running on the Node and still in the cache]
API → kubelet (watch event): Delete: { name: b, ... }
kubelet: Cache: { name: a, ... } { name: c, ... }
kubelet: the API said to delete pod "b" - stop it on the Node
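Consuming that stream is a small loop, continuing the sketch above. Note that each event only updates the cache; the real work is still a reconcile over the full cached state (reconcileFromCache is a hypothetical hook):

// processEvents keeps the cache in sync with the watch stream and asks for
// a reconcile after each change. It returns when the watch ends.
func processEvents(w watch.Interface, cache map[string]v1.Pod, reconcileFromCache func(map[string]v1.Pod)) {
	for ev := range w.ResultChan() {
		pod, ok := ev.Object.(*v1.Pod)
		if !ok {
			continue // e.g. an error/status object; ignored in this sketch
		}
		key := pod.Namespace + "/" + pod.Name
		switch ev.Type {
		case watch.Added, watch.Modified:
			cache[key] = *pod
		case watch.Deleted:
			delete(cache, key) // this is how pod "b" leaves the cache
		}
		// The trigger was an edge; the action is level-based.
		reconcileFromCache(cache)
	}
}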
“But you said edge-triggered is bad!”
It is! But this isn’t edge-triggered.
The cache is updated by events (edges) but we are still reconciling state
“???”
The controller can be restarted at any time and the cache will be reconstructed - we can't "miss an edge"* (* modulo bugs, read on)
Even if you miss an event, you can still recover the state
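That recovery is just an outer loop around the list-watch, continuing the sketch: if the watch breaks or the process restarts, re-list, rebuild the cache, and reconcile again.

// run ties it together: any missed event is corrected by the next full
// LIST, because reconciliation always starts from complete state.
func run(ctx context.Context, client kubernetes.Interface, reconcileFromCache func(map[string]v1.Pod)) {
	for {
		cache, w, err := listAndWatch(ctx, client)
		if err == nil {
			reconcileFromCache(cache)                   // converge on freshly listed state
			processEvents(w, cache, reconcileFromCache) // returns when the watch ends
		}
		select {
		case <-ctx.Done():
			return
		case <-time.After(time.Second): // brief backoff, then re-list
		}
	}
}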
Ultimately it’s all just software, and software has bugs. Controllers
should re-list periodically to get full state...
...but we’ve put a lot of energy into making sure
that our list-watch is reliable.
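In practice, controllers built on client-go usually get all of this - the initial list, the watch, the local cache, re-listing when the watch breaks - from a shared informer, plus a resync period that periodically re-delivers the cached state to the handlers. A minimal sketch (the 10-minute resync is an arbitrary choice, not a recommendation):

package controller

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// startPodInformer wires up a shared informer that lists, watches, and
// caches Pods, calling onChange for every add/update/delete event.
func startPodInformer(client kubernetes.Interface, onChange func(), stopCh <-chan struct{}) {
	factory := informers.NewSharedInformerFactory(client, 10*time.Minute) // resync period
	podInformer := factory.Core().V1().Pods().Informer()
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc:    func(obj interface{}) { onChange() },
		UpdateFunc: func(oldObj, newObj interface{}) { onChange() },
		DeleteFunc: func(obj interface{}) { onChange() },
	})
	factory.Start(stopCh)
	cache.WaitForCacheSync(stopCh, podInformer.HasSynced)
}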