Building Adaptive Systems
Chris Keathley
May 28, 2020
Transcript
Chris Keathley / @ChrisKeathley /
[email protected]
Building Adaptive Systems
Server Server: I have a request
Server Server: No Problem!
Server Server: Thanks!
Server Server: I have a request
Server Server: I’m a little busy
Server Server: I’m a little busy. I have more requests!
Server Server: I don’t feel so good
Server: Welp
All services have objectives
A resilient service should be able to withstand a 10x traffic spike and continue to meet those objectives
Let’s Talk About… Queues, Overload Mitigation, Adaptive Concurrency
What causes overload?
Server Queue
Arrival Rate > Processing Time
Little’s Law
Elements in the queue = Arrival Rate * Processing Time
Server at 100 ms: 1 request = 10 rps * 100 ms
Server at 200 ms: 2 requests = 10 rps * 200 ms
More BEAM Processes create more CPU Pressure, which pushes processing time up:
Server at 300 ms: 3 requests = 10 rps * 300 ms
Server at 3000 ms: 30 requests = 10 rps * 3000 ms
As processing time goes to ∞: ∞ requests = 10 rps * ∞ ms
This is bad
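To make the numbers above concrete, here is a quick back-of-the-envelope helper (my own Elixir sketch, not code from the talk), using the arrival rate and processing times from the slides:

defmodule LittlesLaw do
  # Expected number of in-flight requests for a given arrival rate
  # (requests per second) and processing time (milliseconds).
  def in_flight(arrival_rate_rps, processing_time_ms) do
    arrival_rate_rps * processing_time_ms / 1000
  end
end

LittlesLaw.in_flight(10, 100)   #=> 1.0
LittlesLaw.in_flight(10, 200)   #=> 2.0
LittlesLaw.in_flight(10, 3000)  #=> 30.0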
Let’s Talk About… Queues, Overload Mitigation, Adaptive Concurrency
Overload: Arrival Rate > Processing Time
We need to get these under control
Load Shedding: Server Queue Server
Drop requests. Stop sending.
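A minimal load-shedding sketch (my own, assuming a simple shared in-flight counter rather than anything shown in the talk): bound the amount of queued work and drop requests once the bound is hit, instead of letting the queue grow without limit.

defmodule Shedder do
  @max_in_flight 100  # assumed bound; tune it against your service's objectives

  # `counter` is a reference created once with :counters.new(1, [:atomics])
  # and shared across requests.
  def handle(counter, work_fun) do
    if :counters.get(counter, 1) >= @max_in_flight do
      {:error, :overloaded}  # shed the request; the caller should back off
    else
      :counters.add(counter, 1, 1)

      try do
        {:ok, work_fun.()}
      after
        :counters.sub(counter, 1, 1)
      end
    end
  end
end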
Autoscaling
Autoscaling: Server DB Server
Requests start queueing
Add another Server: now it’s worse
Autoscaling needs to be in response to load shedding
Circuit Breakers
Circuit Breakers: Server Server, shut off traffic
Circuit Breakers: Server Server, I’m not quite dead yet
Circuit Breakers are your last line of defense
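A circuit-breaker sketch using the fuse library from the prior-art slide later in the deck (the option shapes below are my recollection of fuse's README; verify them against the docs):

defmodule Breaker do
  # Blow the fuse after 5 melts within 10 seconds; attempt a reset after 60 seconds.
  def install do
    :fuse.install(:downstream, {{:standard, 5, 10_000}, {:reset, 60_000}})
  end

  def call_downstream(request_fun) do
    case :fuse.ask(:downstream, :sync) do
      :ok ->
        case request_fun.() do
          {:ok, result} ->
            {:ok, result}

          {:error, reason} ->
            :fuse.melt(:downstream)  # record a failure against the fuse
            {:error, reason}
        end

      :blown ->
        {:error, :circuit_open}  # fail fast instead of piling onto a struggling server
    end
  end
end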
Let’s Talk About… Queues, Overload Mitigation, Adaptive Concurrency
We want to allow as many requests as we can actually handle
Adaptive Limits (chart: Concurrency over Time): dynamically discover the actual limit
Load Shedding: Server Server
Are we at the limit? Am I still healthy?
Update Limits
Adaptive Limits (chart: Concurrency over Time): increased latency
Signals for Adjusting Limits: Latency, Successful vs. Failed requests
Additive Increase Multiplicative Decrease
Success state: limit + 1
Backoff state: limit * 0.95
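The AIMD rule from the slide translates directly into a small update function. This is my own sketch of the idea, not the Regulator implementation, and the floor and ceiling values are assumptions:

defmodule AIMD do
  @min_limit 1
  @max_limit 1_000  # assumed ceiling; tune to your system

  # Grow the limit slowly on success, cut it sharply on backoff.
  def update(limit, :success), do: min(limit + 1, @max_limit)
  def update(limit, :backoff), do: max(trunc(limit * 0.95), @min_limit)
end

AIMD.update(100, :success)  #=> 101
AIMD.update(100, :backoff)  #=> 95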
Prior Art/Alternatives https://github.com/ferd/pobox/ https://github.com/fishcakez/sbroker/ https://github.com/heroku/canal_lock https://github.com/jlouis/safetyvalve https://github.com/jlouis/fuse
Regulator https://github.com/keathley/regulator
Regulator.install(:service, [
  limit: {Regulator.Limit.AIMD, [timeout: 500]}
])

Regulator.ask(:service, fn ->
  {:ok, Finch.request(:get, "https://keathley.io")}
end)
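As I read the example, Regulator.install/2 sets up a named regulator with an AIMD limit, and Regulator.ask/2 runs the work function only when the current concurrency limit allows it, treating the {:ok, _} return as a success signal for growing the limit; check the Regulator README for the exact return contract.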
Conclusion
Queues are everywhere
Those queues need to be bounded to avoid overload
If your system is dynamic, your solution will also need to be dynamic
Go and build awesome stuff
Thanks Chris Keathley / @ChrisKeathley /
[email protected]