Concurrency Basics for Elixir
Maciej Kaszubowski
August 02, 2018
Slides from an internal presentation at https://appunite.com
Transcript
Concurrency Basics for Elixir-based Systems
So, what’s concurrency?
Sequential execution (3 functions, 1 thread)
Concurrent execution (3 functions, 3 threads)
Preemptive scheduling
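A minimal sketch of the difference in Elixir (timings and output are made up for illustration): the first version runs the three functions in one process, one after another; the second spawns three BEAM processes, which the scheduler interleaves preemptively.

# Sequential: one process runs the three functions one after another.
Enum.each(1..3, fn i ->
  Process.sleep(100)
  IO.puts("function #{i} done")
end)

# Concurrent: three processes, interleaved preemptively by the BEAM scheduler.
for i <- 1..3 do
  spawn(fn ->
    Process.sleep(100)
    IO.puts("function #{i} done")
  end)
end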
Where’s the benefit?
[Timing diagrams: sequential vs. concurrent handling of Req1, Req2, Req3, comparing execution time and waiting time, for CPU-bound and I/O-bound work]
Concurrent or parallel: what's the difference?
Concurrent execution (3 functions, 3 threads) vs. parallel execution (3 functions, 3 threads, 2 cores): core 1, core 2
How many cores?
root@kingschat-api-c8f8d6b76-4j65j:/app# nproc
12
root@tahmeel-api-prod-b5979bdc6-q5wz6:/# nproc
1
One Erlang scheduler per core (by default)
:observer_cli.start()
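The scheduler count can also be checked directly from an iex session (:observer_cli is a third-party package; the calls below are built in):

System.schedulers_online()        # schedulers currently online, usually one per core
:erlang.system_info(:schedulers)  # schedulers the VM was started with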
[Timing diagrams: sequential vs. concurrent vs. parallel handling of Req1, Req2, Req3, comparing execution time and waiting time]
Sequential execution
[Diagram: a Phoenix request sends Req 1, Req 2, Req 3 one after another, waiting for each Resp before issuing the next]
Concurrent execution
[Diagram: a Phoenix request spawns Task 1, Task 2, Task 3; each sends its Req and receives its Resp concurrently, and the request process collects the results]
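A minimal sketch of that pattern inside a request handler, assuming three hypothetical functions fetch_a/0, fetch_b/0 and fetch_c/0 that stand in for Req 1, Req 2 and Req 3:

tasks = [
  Task.async(fn -> fetch_a() end),
  Task.async(fn -> fetch_b() end),
  Task.async(fn -> fetch_c() end)
]

# The request process now waits for the slowest call, not for the sum of all three.
[a, b, c] = Task.await_many(tasks, 5_000)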
[Timing diagram: the app server sends R1, R2, R3 to a DB server with 3 cores; the queries run in parallel and the response is sent once all three have finished]
How much can we gain?
Amdahl's Law
Amdahl's Law in a nutshell: the more synchronisation, the less benefit from multiple cores
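As a formula: if a fraction p of the work can run in parallel and you have n cores, the best possible speedup is 1 / ((1 - p) + p / n). A small helper to get a feel for the numbers (a sketch, not from the slides):

defmodule Amdahl do
  # Theoretical speedup for parallel fraction `p` on `n` cores.
  def speedup(p, n), do: 1 / ((1 - p) + p / n)
end

Amdahl.speedup(0.95, 12)  # ≈ 7.7, so even 95% parallel code gets nowhere near 12x on 12 cores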
[Timing diagram: almost 100% parallel, almost no synchronisation between the app server and the 3-core DB server]
But…
[Timing diagrams: the timings are not constant and the DB server's 3 cores are not infinite; when R4 arrives while all cores are busy it has to wait, and R5, R6, R7 queue up behind it]
Remember the Phoenix request spawning Task 1, Task 2, Task 3 for Req 1, Req 2, Req 3? This isn't exactly true.
Connection pool (prevents overworking the DB)
Pool Manager (blocks until a free worker is available)
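In an Ecto application this is just Repo configuration; a sketch with illustrative values (MyApp.Repo is a placeholder, the default pool_size is 10):

# config/prod.exs
import Config

config :my_app, MyApp.Repo,
  pool_size: 15,        # at most 15 concurrent DB connections; extra callers wait in the queue
  queue_target: 50,     # ms a caller may wait for a connection before the queue counts as slow
  queue_interval: 1_000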
It gets worse
Pool Manager: the mailbox has to be synchronised
Pool Manager: message passing is just copying data in shared memory
Pool Manager: remember semaphores?
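Every checkout goes through one process's mailbox, which is exactly this kind of synchronisation point. A bare-bones sketch of the message passing involved (pool_manager_pid and the message shapes are hypothetical, not the real pool implementation):

# The caller copies a request into the manager's mailbox and blocks waiting for a reply.
send(pool_manager_pid, {:checkout, self()})

receive do
  {:worker, worker_pid} -> worker_pid
after
  5_000 -> {:error, :timeout}
end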
Logger, Metrics, Sentry
The network stack
Other synchronisation points: OS threads (garbage collection), the data bus, virtual machines, memory characteristics (e.g. processor caches), …
That’s hard
That’s REALLY hard
Seriously, people spend their entire careers on this.
So, what to do?
Measure, measure, measure
Measure ON PRODUCTION: you WILL get false results on staging/locally
Measure the entire system: you WILL get false results for single functions
Measure ONLY IF YOU HAVE TRAFFIC
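For a single call :timer.tc gives a quick local number, but the production signal should come from aggregated metrics over real traffic, e.g. via :telemetry. A sketch (MyApp.do_work/0, the event name and MyMetrics are placeholders):

# Local micro-benchmark: fine for exploration, misleading as a production signal.
{microseconds, _result} = :timer.tc(fn -> MyApp.do_work() end)

# Production: attach to a telemetry event and ship the duration to your metrics system.
:telemetry.attach(
  "measure-do-work",
  [:my_app, :do_work, :stop],
  fn _event, %{duration: duration}, _metadata, _config ->
    MyMetrics.record(:do_work_duration, duration)
  end,
  nil
)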
“premature optimization is the root of all evil”
If something takes X ms, it will always take X ms.
Async execution cannot "remove" this time. It can only hide it.
BACK PRESSURE
Producer, Consumer, Consumer (the producer keeps pushing events to the consumers)
Consumer: Stop
Consumer: OK, give me more
Back pressure
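In Elixir the usual tool for this is the GenStage library ({:gen_stage, "~> 1.0"}), where the consumer drives the flow by demanding events. A minimal producer/consumer sketch along the lines of the GenStage docs:

defmodule Producer do
  use GenStage

  def init(counter), do: {:producer, counter}

  # Called only when a consumer asks for more events: this is the back pressure.
  def handle_demand(demand, counter) do
    events = Enum.to_list(counter..(counter + demand - 1))
    {:noreply, events, counter + demand}
  end
end

defmodule Consumer do
  use GenStage

  def init(:ok), do: {:consumer, :ok}

  def handle_events(events, _from, state) do
    Process.sleep(100)    # simulate slow work; the producer only produces when new demand arrives
    IO.inspect(events)
    {:noreply, [], state} # consumers emit no events
  end
end

{:ok, producer} = GenStage.start_link(Producer, 0)
{:ok, consumer} = GenStage.start_link(Consumer, :ok)
GenStage.sync_subscribe(consumer, to: producer, max_demand: 10)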
Thanks!