Building Adaptive Systems
Chris Keathley
May 28, 2020
Transcript
Chris Keathley / @ChrisKeathley / c@keathley.io Building Adaptive Systems
Server Server
Server Server I have a request
Server Server
Server Server No Problem!
Server Server
Server Server Thanks!
Server Server
Server Server I have a request
Server Server
Server Server I’m a little busy
Server Server I’m a little busy I have more requests!
Server Server I don’t feel so good
Server
Server Welp
All services have objectives
A resilient service should be able to withstand a 10x traffic spike and continue to meet those objectives
Let’s Talk About… Queues, Overload Mitigation, Adaptive Concurrency
What causes overload?
What causes overload? Server Queue
What causes overload? Server Queue: Arrival Rate > Processing Time
Little’s Law: Elements in the queue = Arrival Rate * Processing Time
Little’s Law: Server (100 ms): 1 request = 10 rps * 100 ms
Little’s Law: Server (200 ms): 2 requests = 10 rps * 200 ms (BEAM processes accumulate, CPU pressure builds)
Little’s Law: Server (300 ms): 3 requests = 10 rps * 300 ms
Little’s Law: Server (3000 ms): 30 requests = 10 rps * 3000 ms
Little’s Law: Server (∞ ms): 30 requests = 10 rps * ∞ ms
Little’s Law: ∞ requests = 10 rps * ∞ ms
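To make the arithmetic on these slides concrete, here is a minimal sketch in Elixir (the queue_length helper is just for illustration, not from the talk):

    queue_length = fn arrival_rate_rps, processing_time_ms ->
      # Little's Law: elements in the queue = arrival rate * processing time
      arrival_rate_rps * (processing_time_ms / 1000)
    end

    queue_length.(10, 100)   # => 1.0
    queue_length.(10, 200)   # => 2.0
    queue_length.(10, 3000)  # => 30.0, and as latency heads toward infinity, so does the queue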
This is bad
Let’s Talk About… Queues, Overload Mitigation, Adaptive Concurrency
Overload Arrival Rate > Processing Time
Overload: Arrival Rate > Processing Time. We need to get these under control
Load Shedding Server Queue Server
Load Shedding Server Queue Server Drop requests
Load Shedding Server Queue Server Drop requests Stop sending
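One hedged sketch of dropping requests at the edge, assuming a Plug-based HTTP service (the LoadShedder module and the run-queue threshold are illustrative assumptions, not something from the talk):

    defmodule LoadShedder do
      # Reject new work with a 503 when the BEAM's scheduler run queues get deep.
      import Plug.Conn

      @max_run_queue 50

      def init(opts), do: opts

      def call(conn, _opts) do
        # Sum of all scheduler run queue lengths (OTP 18.3+).
        if :erlang.statistics(:total_run_queue_lengths) > @max_run_queue do
          conn
          |> send_resp(503, "overloaded, try again later")
          |> halt()
        else
          conn
        end
      end
    end

Dropping early keeps the queue bounded; the caller’s “stop sending” half of the picture is usually retries with backoff when it sees those 503s.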
Autoscaling
Autoscaling Server DB Server
Autoscaling Server DB Server Requests start queueing
Autoscaling Server DB Server Server
Autoscaling Server DB Server Server Now it’s worse
Autoscaling needs to be in response to load shedding
Circuit Breakers
Circuit Breakers Server Server
Circuit Breakers Server Server Shut off traffic
Circuit Breakers Server Server
Circuit Breakers Server Server I’m not quite dead yet
Circuit Breakers are your last line of defense
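The prior-art slide later in the deck lists jlouis/fuse; a breaker wrapped around a downstream call might look roughly like this (the :payments fuse name, the thresholds, and the do_work placeholder are my own illustrative choices):

    # Blow the fuse after 5 failures within 10 seconds; reset it after 30 seconds.
    :fuse.install(:payments, {{:standard, 5, 10_000}, {:reset, 30_000}})

    do_work = fn -> {:ok, :payment_accepted} end  # placeholder for the real downstream call

    case :fuse.ask(:payments, :sync) do
      :ok ->
        case do_work.() do
          {:ok, result} ->
            {:ok, result}

          {:error, _} = error ->
            :fuse.melt(:payments)  # record the failure against the fuse
            error
        end

      :blown ->
        # The breaker is open: fail fast instead of piling onto a sick service.
        {:error, :circuit_open}
    end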
Let’s Talk About… Queues, Overload Mitigation, Adaptive Concurrency
We want to allow as many requests as we can actually handle
Adaptive Limits Time Concurrency
Adaptive Limits Actual limit Time Concurrency
Adaptive Limits Actual limit Dynamic Discovery Time Concurrency
Load Shedding Server Server
Load Shedding Server Server Are we at the limit?
Load Shedding Server Server Am I still healthy?
Load Shedding Server Server
Load Shedding Server Server Update Limits
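A rough sketch of that check-then-release loop as a toy Agent-based limiter (the Limiter module, its return values, and set_limit/1 are assumptions for illustration; Regulator’s real implementation is more involved):

    defmodule Limiter do
      # Track in-flight requests against a limit that can be adjusted at runtime.
      use Agent

      def start_link(initial_limit) do
        Agent.start_link(fn -> %{limit: initial_limit, in_flight: 0} end, name: __MODULE__)
      end

      # Take a slot and return :ok if we are under the limit, :dropped otherwise.
      def ask do
        Agent.get_and_update(__MODULE__, fn state ->
          if state.in_flight < state.limit do
            {:ok, %{state | in_flight: state.in_flight + 1}}
          else
            {:dropped, state}
          end
        end)
      end

      # Give the slot back once the request finishes.
      def release do
        Agent.update(__MODULE__, fn state -> %{state | in_flight: state.in_flight - 1} end)
      end

      # Called by whatever is watching latency and success rates.
      def set_limit(new_limit) do
        Agent.update(__MODULE__, fn state -> %{state | limit: new_limit} end)
      end
    end

Each request calls Limiter.ask/0 before doing work and Limiter.release/0 afterwards; the limit itself is moved by the feedback signals on the next slides.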
Adaptive Limits Time Concurrency Increased latency
Signals for Adjusting Limits: Latency; Successful vs. Failed Requests
Additive Increase, Multiplicative Decrease: Success state: limit + 1; Backoff state: limit * 0.95
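As a sketch, the AIMD rule on this slide can be written as a single update function (the names and the floor of 1 are my own choices):

    # Additive increase on success, multiplicative decrease on backoff.
    update_limit = fn
      limit, :success -> limit + 1
      limit, :backoff -> max(1, trunc(limit * 0.95))
    end

    update_limit.(100, :success)  # => 101
    update_limit.(100, :backoff)  # => 95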
Prior Art/Alternatives https://github.com/ferd/pobox/ https://github.com/fishcakez/sbroker/ https://github.com/heroku/canal_lock https://github.com/jlouis/safetyvalve https://github.com/jlouis/fuse
Regulator https://github.com/keathley/regulator
Regulator.install(:service, [
  limit: {Regulator.Limit.AIMD, [timeout: 500]}
])

Regulator.ask(:service, fn ->
  {:ok, Finch.request(:get, "https://keathley.io")}
end)
Conclusion
Queues are everywhere
Those queues need to be bounded to avoid overload
If your system is dynamic, your solution will also need to be dynamic
Go and build awesome stuff
Thanks Chris Keathley / @ChrisKeathley / c@keathley.io