Concurrency Basics for Elixir
Maciej Kaszubowski
August 02, 2018
Slides from an internal presentation at https://appunite.com
Transcript
Concurrency Basics for Elixir-based Systems
So, what’s concurrency?
Sequential Execution (3 functions, 1 thread)
Concurrent Execution (3 functions, 3 threads)
Preemptive scheduling
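To make the distinction concrete, here is a tiny Elixir sketch (not from the slides; the 100 ms sleeps just stand in for real work): three functions run one after another, then the same three run in three separate processes.

# Sequential: 3 functions, 1 process; total time is roughly the sum of the three.
Enum.each(1..3, fn i ->
  Process.sleep(100)
  IO.puts("done #{i}")
end)

# Concurrent: 3 functions, 3 processes; the BEAM schedules them preemptively,
# so total time is roughly that of the slowest one.
1..3
|> Enum.map(fn i -> Task.async(fn -> Process.sleep(100); i end) end)
|> Task.await_many()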
Where’s the benefit?
[Diagram: Req1–Req3 and their responses handled sequentially vs. concurrently, comparing execution time and waiting time]
[Diagram: CPU-bound vs. I/O-bound request handling]
Concurrent or parallel: what's the difference?
Concurrent Execution (3 functions, 3 threads)
Parallel Execution (3 functions, 3 threads, 2 cores: core 1, core 2)
How many cores?
root@kingschat-api-c8f8d6b76-4j65j:/app# nproc
12
root@tahmeel-api-prod-b5979bdc6-q5wz6:/# nproc
1
One Erlang scheduler per core (by default)
:observer_cli.start()
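(:observer_cli is a third-party Hex package, so it has to be added as a dependency; the scheduler count itself can be checked from any IEx session. The numbers in the comments refer to the two machines from the nproc slide.)

# One Erlang scheduler per core by default; this is what the VM actually sees.
System.schedulers_online()        # e.g. 12 on the first machine, 1 on the second
:erlang.system_info(:schedulers)  # total schedulers started at boot

# Requires {:observer_cli, "~> 1.7"} (version is illustrative) in mix.exs:
:observer_cli.start()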
[Diagram: Req1–Req3 handled sequentially, concurrently, and in parallel, comparing execution time and waiting time]
Sequential execution
[Diagram sequence: a Phoenix request sends Req 1, waits for its response, then Req 2, then Req 3, one after another]
Concurrent execution
[Diagram sequence: a Phoenix request spawns Task 1, Task 2 and Task 3, which send Req 1–3 and collect their responses concurrently before the request completes]
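In Elixir this fan-out is usually written with Task.async/Task.await. A minimal sketch, assuming the three requests are wrapped in hypothetical fetch_a/0, fetch_b/0 and fetch_c/0 functions:

# Inside a Phoenix controller action (sketch): start the three requests concurrently,
# then wait for all of them before building the response.
tasks = [
  Task.async(&fetch_a/0),
  Task.async(&fetch_b/0),
  Task.async(&fetch_c/0)
]

[resp_a, resp_b, resp_c] = Task.await_many(tasks, 5_000)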
[Diagram: an APP Server dispatching R1–R3 to a DB Server (3 cores) and sending the response, showing execution time vs. waiting time]
How much can we gain?
Amdahl's Law
Amdahl's Law in a nutshell: the more synchronisation, the less benefit from multiple cores.
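The slides don't spell out the formula, but Amdahl's Law states that with n cores and a parallelisable fraction p of the work, the maximum speedup is 1 / ((1 - p) + p / n). A small illustrative helper:

defmodule Amdahl do
  # Theoretical speedup with n cores when a fraction p of the work can run in parallel.
  def speedup(p, n) when p >= 0 and p <= 1 and n >= 1 do
    1 / (1 - p + p / n)
  end
end

Amdahl.speedup(0.95, 12)
# => ~7.7, so even 5% of synchronisation drops the ideal 12x speedup below 8x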
[Diagram: APP Server and DB Server (3 cores), R1–R3 handled almost 100% in parallel (almost no synchronisation)]
But…
[Same diagram, with two caveats: the time per request is not constant, and the DB Server (3 cores) is not infinite]
[Diagram sequence: once R4 arrives, the 3-core DB Server is saturated; R4, and later R5–R7, have to queue behind R1–R3, so their waiting time grows]
Remember this? [The earlier diagram: a Phoenix request spawning Task 1–3 to send Req 1–3]
This isn’t exactly true
Connection pool (prevents overworking the DB)
Pool Manager (blocks until a free worker is available)
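In an Ecto-based app the pool sits in the Repo configuration; a typical sketch (MyApp.Repo and the values are illustrative, not from the slides):

# config/config.exs
import Config

config :my_app, MyApp.Repo,
  pool_size: 10,          # DB connections kept open; effectively the DB concurrency cap
  queue_target: 50,       # ms a checkout may wait before the pool is considered overloaded
  queue_interval: 1_000   # ms window over which queue_target is evaluated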
It gets worse
Pool Manager: the mailbox has to be synchronised
Pool Manager: message passing is just copying data in shared memory
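A minimal sketch of what that means in practice (not from the slides): each process has its own heap, so the list below is copied to the receiver when it is sent, and the reply is copied back.

parent = self()

pid =
  spawn(fn ->
    receive do
      {:work, list} -> send(parent, {:done, length(list)})
    end
  end)

# The 10_000-element list is copied to the spawned process (large binaries are the
# exception: they live off-heap and are reference-counted rather than copied).
send(pid, {:work, Enum.to_list(1..10_000)})

receive do
  {:done, n} -> IO.puts("processed #{n} items")
end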
Pool Manager: remember semaphores?
Logger, Metrics, Sentry
Network stack
Other synchronisation points: OS threads (garbage collection), the data bus, virtual machines, memory characteristics (e.g. processor caches), …
That's hard
That's REALLY hard
Seriously, people spend their entire careers on this
So, what to do?
Measure. Measure. Measure.
Measure ON PRODUCTION: you WILL get false results on staging/locally.
Measure the entire system: you WILL get false results for single functions.
Measure ONLY IF YOU HAVE TRAFFIC.
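One way to do that in an Elixir system is :telemetry, which Phoenix and Ecto already emit events through. A minimal sketch, assuming a default Phoenix endpoint and a repo named MyApp.Repo (the event names and handler are illustrative; in production you would forward these to your metrics system rather than log them):

:telemetry.attach_many(
  "prod-measurements",
  [
    [:phoenix, :endpoint, :stop],   # whole-request duration
    [:my_app, :repo, :query]        # DB query time, including queue (pool wait) time
  ],
  fn event, measurements, _metadata, _config ->
    native = measurements[:duration] || measurements[:total_time]
    ms = System.convert_time_unit(native, :native, :millisecond)
    IO.puts("#{inspect(event)}: #{ms} ms")
  end,
  nil
)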
“premature optimization is the root of all evil”
If something takes X ms, it will always take X ms.
Async execution cannot "remove" this time. It can only hide it.
BACK PRESSURE
[Diagram sequence: a Producer feeding two Consumers; when a Consumer falls behind it tells the Producer "Stop", and once it catches up it says "OK, give me more"]
Back pressure
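The slides don't name a library, but in Elixir back pressure is most often implemented with GenStage (a separate {:gen_stage, "~> 1.0"} dependency): the consumer asks for demand and the producer only emits that many events. A minimal sketch:

defmodule Producer do
  use GenStage

  def start_link(_), do: GenStage.start_link(__MODULE__, 0, name: __MODULE__)
  def init(counter), do: {:producer, counter}

  # Called only when a consumer asks for more events: this is the back pressure.
  def handle_demand(demand, counter) when demand > 0 do
    events = Enum.to_list(counter..(counter + demand - 1))
    {:noreply, events, counter + demand}
  end
end

defmodule Consumer do
  use GenStage

  def start_link(_), do: GenStage.start_link(__MODULE__, :ok)
  def init(:ok), do: {:consumer, :ok, subscribe_to: [{Producer, max_demand: 10}]}

  # The producer is never asked for more than max_demand unprocessed events.
  def handle_events(events, _from, state) do
    Enum.each(events, fn event ->
      Process.sleep(10)   # simulate slow work
      IO.inspect(event)
    end)

    {:noreply, [], state}
  end
end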
Thanks!