Scaling Django with Distributed Systems
Andrew Godwin
April 07, 2017
A talk I gave at PyCon Ukraine 2017.
Transcript
Hi, I'm Andrew Godwin: Django core developer, Senior Software Engineer, and someone who used to complain about migrations a lot.
Distributed Systems
c = 299,792,458 m/s
Early CPUs: 5 MHz clock; c propagation distance per cycle = 60 m (clock ~2 cm)
Modern CPUs: 3 GHz clock; c propagation distance per cycle = 10 cm
Distributed systems are made of independent components
They are slower and harder to write than synchronous systems
But they can be scaled up much, much further
Trade-offs
There is never a perfect solution.
Fast / Good / Cheap
[Diagram: Load Balancer in front of three WSGI Workers]
[Diagram: Load Balancer, three WSGI Workers, and a shared Cache]
[Diagram: Load Balancer, three WSGI Workers, and three Caches]
[Diagram: Load Balancer, three WSGI Workers, and a Database]
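To make the cache tier concrete, here is a minimal sketch using Django's cache framework; the Event model, key format, and five-minute timeout are assumptions for illustration, not from the talk.

from django.core.cache import cache

from myapp.models import Event  # hypothetical model

def get_event(event_id):
    key = "event:%s" % event_id
    event = cache.get(key)                      # try the cache tier first
    if event is None:
        event = Event.objects.get(pk=event_id)  # miss: fall back to the database
        cache.set(key, event, timeout=300)      # assumed five-minute TTL
    return event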
CAP Theorem
Partition Tolerant / Consistent / Available
PostgreSQL is CP: consistent everywhere; handles network latency/drops; can't write if the main server is down.
Cassandra is AP: can read/write to any node; handles network latency/drops; data can be inconsistent.
It's hard to design a product that might be inconsistent
But if you take the tradeoff, scaling is easy
Otherwise, you must find other solutions
Read Replicas (often called master/slave): [Diagram: Load Balancer, three WSGI Workers, one Main database and two Replicas]
Replicas scale reads forever... but writes must go to one place.
If a request writes to a table, it must be pinned to the main, so later reads do not get old data.
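As a rough sketch of read replicas with write pinning, a Django database router can send reads to replicas until the current thread has written; the alias names and thread-local pinning here are assumptions (real deployments often pin per request or per session via middleware).

import random
import threading

_state = threading.local()

class ReplicaRouter:
    """Route reads to replicas, writes to the main, pinning after a write."""

    def db_for_read(self, model, **hints):
        if getattr(_state, "pinned", False):
            return "default"  # this thread wrote recently: read from the main
        return random.choice(["replica1", "replica2"])

    def db_for_write(self, model, **hints):
        _state.pinned = True  # pin later reads so they see this write
        return "default"

    def allow_relation(self, obj1, obj2, **hints):
        return True  # every alias holds the same data

Activate it with DATABASE_ROUTERS = ["myproject.routers.ReplicaRouter"] in settings.py (module path assumed).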
When your write load is too high, you must then shard.
Vertical Sharding: Users / Tickets / Events / Payments
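A vertical split maps naturally onto Django's multi-database support: one alias per domain plus a router keyed on app label. The wiring below is an assumption for illustration, not the talk's own code.

# settings.py (engines and names assumed)
DATABASES = {
    "default": {},  # empty: every app must be routed explicitly
    "users": {"ENGINE": "django.db.backends.postgresql", "NAME": "users"},
    "tickets": {"ENGINE": "django.db.backends.postgresql", "NAME": "tickets"},
    "events": {"ENGINE": "django.db.backends.postgresql", "NAME": "events"},
    "payments": {"ENGINE": "django.db.backends.postgresql", "NAME": "payments"},
}

class VerticalRouter:
    APPS = {"users", "tickets", "events", "payments"}

    def db_for_read(self, model, **hints):
        label = model._meta.app_label
        return label if label in self.APPS else None  # None: no opinion

    db_for_write = db_for_read  # writes go to the same per-app database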
Horizontal Sharding: Users 0-2 / Users 3-5 / Users 6-8 / Users 9-A
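Horizontal sharding needs a deterministic map from key to shard. This hypothetical sketch hashes a user ID into one of four database aliases named after the ranges above:

import hashlib

USER_SHARDS = ["users_0_2", "users_3_5", "users_6_8", "users_9_a"]

def shard_for_user(user_id):
    # Hash rather than use the raw ID, so sequential IDs spread evenly.
    digest = hashlib.md5(str(user_id).encode()).hexdigest()
    return USER_SHARDS[int(digest, 16) % len(USER_SHARDS)]

# Usage with Django's multi-db API (User model assumed):
# User.objects.using(shard_for_user(42)).get(pk=42)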
Both: Users, Events, and Tickets each split into shards 0-2 / 3-5 / 6-8 / 9-A
Both plus caching: the same Users, Events, and Tickets shards, fronted by a User Cache, Event Cache, and Ticket Cache
Teams have to scale too; nobody should have to understand everything in a big system.
Services allow complexity to be reduced - for a tradeoff of speed.
[Diagram: the sharded Users, Events, and Tickets databases with their caches, now fronted by a User Service, Event Service, and Ticket Service]
[Diagram: a WSGI Server talking to the User Service, Event Service, and Ticket Service]
Each service is its own, smaller project, managed and scaled separately.
But how do you communicate between them?
[Diagram: direct communication - Services 1, 2, and 3 calling each other directly]
[Diagram: five services with direct links between them]
[Diagram: eight services with direct links; the number of connections grows rapidly]
[Diagram: Services 1, 2, and 3 communicating through a central Message Bus]
A single point of failure is not always bad - if the alternative is multiple, fragile ones.
Channels and ASGI provide a standard message bus built with certain tradeoffs.
[Diagram: the Django Channels Project stack - Django, the Channels Library, ASGI (the Channel Layer), and a Backing Store such as Redis or RabbitMQ]
[Diagram: pure Python code can likewise sit on ASGI (the Channel Layer) over a Backing Store such as Redis or RabbitMQ]
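For concreteness, here is how a Redis-backed channel layer is configured and used with present-day Channels (channels_redis); the 2017 talk predates this exact API, so treat the sketch as illustrative only.

# settings.py
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {"hosts": [("localhost", 6379)]},
    },
}

# elsewhere: put a message onto the bus (channel name and payload assumed)
from asgiref.sync import async_to_sync
from channels.layers import get_channel_layer

layer = get_channel_layer()
async_to_sync(layer.send)("thumbnails", {"type": "thumbnail.request", "id": 42})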
Failure mode: at most once (messages either do not arrive, or arrive exactly once) vs. at least once (messages arrive once, or arrive multiple times).
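If your layer gives you at-least-once delivery, consumers must tolerate duplicates. One common approach, assumed here rather than taken from the talk, is an idempotent handler that de-duplicates on a message ID:

from django.core.cache import cache

def handle(message):
    # cache.add only stores if the key is absent, so a duplicate
    # delivery returns False and is skipped (atomic on most backends).
    if not cache.add("seen:%s" % message["id"], True, timeout=86400):
        return
    process(message)  # hypothetical business logic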
Guarantees vs. latency: low latency (messages arrive very quickly but go missing more often) vs. low loss rate (messages are almost never lost but arrive slower).
Queuing type: first-in-first-out (consistent performance for all users) vs. first-in-last-out (hides backlogs but makes them worse).
Queue sizing: finite queues (sending can fail) vs. infinite queues (make problems even worse).
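With a finite queue, a send can fail, and the sender has to choose a policy. In Channels this surfaces as the ChannelFull exception; the drop-on-full policy below is just one option (retrying or surfacing an error are others).

from asgiref.sync import async_to_sync
from channels.exceptions import ChannelFull
from channels.layers import get_channel_layer

def send_or_drop(channel, message):
    layer = get_channel_layer()
    try:
        async_to_sync(layer.send)(channel, message)
    except ChannelFull:
        pass  # queue at capacity: drop the message rather than block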
You must understand what you are making. (This is surprisingly uncommon.)
Design as much as possible around shared-nothing: per-machine caches, on-demand thumbnailing, signed cookie sessions.
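Two of those shared-nothing choices are plain Django settings; a minimal sketch:

# settings.py
CACHES = {
    "default": {
        # per-machine, in-process cache: nothing shared between servers
        "BACKEND": "django.core.cache.backends.locmem.LocMemCache",
    },
}

# sessions live in a signed cookie on the client, not in shared storage
SESSION_ENGINE = "django.contrib.sessions.backends.signed_cookies"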
Has to be shared? Try to split it
Has to be shared? Try sharding it.
Django's job is to be slowly replaced by your code. Just make sure you match the API contract of what you're replacing!
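As one example of matching an API contract, a replacement cache backend only has to honour BaseCache's method signatures; ShardedCache and its connect_shards helper below are hypothetical.

from django.core.cache.backends.base import DEFAULT_TIMEOUT, BaseCache

class ShardedCache(BaseCache):
    """Hypothetical drop-in backend that spreads keys across shards."""

    def __init__(self, location, params):
        super().__init__(params)
        self._shards = connect_shards(location)  # hypothetical helper

    def _shard(self, key):
        return self._shards[hash(key) % len(self._shards)]

    def get(self, key, default=None, version=None):
        key = self.make_key(key, version=version)
        return self._shard(key).get(key, default)

    def set(self, key, value, timeout=DEFAULT_TIMEOUT, version=None):
        key = self.make_key(key, version=version)
        self._shard(key).set(key, value, timeout)

Because the signatures match, existing callers of cache.get and cache.set keep working unchanged once the backend is swapped in via CACHES.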
Don't try to scale too early; you'll pick the wrong tradeoffs.
Thanks. Andrew Godwin @andrewgodwin channels.readthedocs.io