Concurrency Basics for Elixir
Maciej Kaszubowski
August 02, 2018
Programming
Slides from an internal presentation at https://appunite.com
Transcript
Concurrency Basics for Elixir-based Systems
So, what’s concurrency?
Sequential Execution (3 functions, 1 thread)
Concurrent Execution (3 functions, 3 threads)
Preemptive scheduling
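To make the distinction concrete, here is a minimal Elixir sketch (the work function and labels are illustrative, not from the slides): the same three pieces of work run first in one process, then in three BEAM processes, which the scheduler preempts and interleaves.

# Simulated work: sleep 1 second, then report.
work = fn label ->
  Process.sleep(1_000)
  IO.puts("#{label} done")
end

# Sequential: one process, roughly 3 seconds in total.
Enum.each(["a", "b", "c"], work)

# Concurrent: three BEAM processes, roughly 1 second in total.
["a", "b", "c"]
|> Enum.map(fn label -> Task.async(fn -> work.(label) end) end)
|> Enum.each(&Task.await/1)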
Where’s the benefit?
[Timeline diagram: sequential vs concurrent handling of Req1-Req3, comparing execution time and waiting time]
[Diagram: CPU-bound vs I/O-bound workloads]
Concurrent or Parallel: what's the difference?
Concurrent Execution (3 functions, 3 threads)
Parallel Execution (3 functions, 3 threads, 2 cores): core 1, core 2
How many cores?
root@kingschat-api-c8f8d6b76-4j65j:/app# nproc
12
root@tahmeel-api-prod-b5979bdc6-q5wz6:/# nproc
1
One Erlang scheduler per core (by default)
:observer_cli.start()
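Since the BEAM starts one scheduler per core by default, the nproc numbers above translate directly into scheduler counts. A quick way to check from IEx (both calls are standard Elixir/Erlang APIs):

System.schedulers_online()              # e.g. 12 on the first machine, 1 on the second
:erlang.system_info(:schedulers_online) # the same number, via the Erlang API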
[Timeline diagram: sequential vs concurrent vs parallel handling of Req1-Req3, comparing execution time and waiting time]
Sequential execution
[Diagram: one Phoenix request issuing Req 1, Req 2, Req 3 one after another, waiting for each response before sending the next]
Concurrent execution
[Diagram: one Phoenix request spawning Task 1, Task 2, Task 3, which issue Req 1, Req 2, Req 3 at the same time and collect the responses]
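A sketch of what that concurrent version might look like inside a Phoenix action; fetch_profile/1, fetch_friends/1 and fetch_feed/1 are hypothetical functions standing in for Req 1-3:

def show(conn, %{"id" => id}) do
  # Start the three requests at the same time...
  tasks =
    [fn -> fetch_profile(id) end,
     fn -> fetch_friends(id) end,
     fn -> fetch_feed(id) end]
    |> Enum.map(&Task.async/1)

  # ...and wait for all three responses before rendering.
  [profile, friends, feed] = Enum.map(tasks, &Task.await/1)

  render(conn, "show.json", profile: profile, friends: friends, feed: feed)
end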
[Diagram: an app server sending R1, R2, R3 to a DB server with 3 cores and then sending the response; execution time vs waiting time]
How much can we gain?
Amdahl's Law
Amdahl's Law in a nutshell: the more synchronisation, the less benefit from multiple cores
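Amdahl's Law puts a number on that: if a fraction p of the work can run in parallel on n cores, the maximum speedup is 1 / ((1 - p) + p / n). A small sketch:

speedup = fn p, n -> 1 / ((1 - p) + p / n) end

speedup.(0.95, 4)    #=> ~3.5
speedup.(0.95, 12)   #=> ~7.7
# Even with infinitely many cores, 5% serial work caps the speedup at 1 / 0.05 = 20x.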
[Diagram: the same timeline, almost 100% parallel (almost no synchronisation) between the app server and the DB server (3 cores)]
But…
[Diagram: the same timeline, annotated "This is not constant" (the per-request DB work) and "This is not infinite" (the DB server's 3 cores)]
[Diagram: when R4, and later R5, R6, R7, arrive while all 3 cores are busy, they queue and their waiting time grows]
Remember this?
[Diagram: the Phoenix request spawning Task 1, Task 2, Task 3 for Req 1, Req 2, Req 3]
This isn’t exactly true
Connection pool (prevents overworking the DB)
Pool Manager (Blocks until a free worker is available)
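In an Elixir app this is usually Ecto's connection pool. A typical configuration sketch (MyApp.Repo and the number are placeholders): pool_size caps how many queries can hit the database at once, and callers that cannot get a connection wait in the pool manager's queue.

config :my_app, MyApp.Repo,
  pool_size: 10   # at most 10 queries in flight; everyone else queues for a connection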
It gets worse
Pool Manager: the mailbox has to be synchronised
Message passing is just copying data in shared memory
Remember semaphores?
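The mailbox behaves like a queue guarded by a lock: every caller that wants a connection sends a message to the single pool-manager process, and each message is copied into that process's heap (large binaries are the exception, since they are shared by reference). A minimal send/receive sketch with an illustrative payload:

parent = self()

pool_manager =
  spawn(fn ->
    receive do
      {:checkout, from, payload} ->
        # `payload` was copied from the caller's heap into this process's heap
        send(from, {:ok, length(payload)})
    end
  end)

send(pool_manager, {:checkout, parent, Enum.to_list(1..1_000)})

receive do
  {:ok, size} -> IO.puts("pool manager saw #{size} elements")
end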
Logger, Metrics, Sentry
Network stack
Other synchronisation points: OS threads (garbage collection), the data bus, virtual machines, memory characteristics (e.g. processor caches), …
That's hard
That's REALLY hard. Seriously, people spend their entire careers on this
So, what to do?
Measure
Measure ON PRODUCTION. You WILL get false results on staging/locally
Measure the entire system. You WILL get false results for single functions
Measure ONLY IF YOU HAVE TRAFFIC
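One way to get real numbers out of production is to hook into the telemetry events your framework already emits. A sketch, assuming Phoenix 1.5+ (which emits [:phoenix, :endpoint, :stop]) and the :telemetry library; the handler id and the IO.puts reporting are placeholders for a real metrics backend:

:telemetry.attach(
  "measure-request-duration",
  [:phoenix, :endpoint, :stop],
  fn _event, %{duration: duration}, _metadata, _config ->
    ms = System.convert_time_unit(duration, :native, :millisecond)
    IO.puts("request took #{ms} ms")   # ship this to your metrics system instead
  end,
  nil
)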
“Premature optimization is the root of all evil” (Donald Knuth)
If something takes X ms, it will always take X ms.
Async execution cannot “remove” this time. It can only hide it.
BACK PRESSURE
[Animation: a Producer sending messages to two Consumers]
[Animation: an overloaded Consumer tells the Producer: “Stop”]
[Animation: once it catches up, the Consumer tells the Producer: “OK, give me more”]
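In Elixir, the usual tool for this demand-driven “stop / give me more” protocol is GenStage: consumers ask the producer for a bounded number of events, and the producer only emits what was asked for. A minimal sketch (module names and the sleep are illustrative):

defmodule Producer do
  use GenStage

  def init(counter), do: {:producer, counter}

  # Called only when consumers ask for events; emit exactly `demand` of them.
  def handle_demand(demand, counter) do
    events = Enum.to_list(counter..(counter + demand - 1))
    {:noreply, events, counter + demand}
  end
end

defmodule Consumer do
  use GenStage

  def init(:ok), do: {:consumer, :ok}

  # A slow consumer: new demand goes upstream only after this batch is finished.
  def handle_events(events, _from, state) do
    Process.sleep(100)
    IO.inspect(events)
    {:noreply, [], state}
  end
end

{:ok, producer} = GenStage.start_link(Producer, 0)
{:ok, consumer} = GenStage.start_link(Consumer, :ok)
GenStage.sync_subscribe(consumer, to: producer, max_demand: 10)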
Back pressure
Thanks!