LXJS 2013: backpack — scalable photo storage
Ivan Babrou
October 02, 2013
http://bobrik.name/talks/lxjs2013.pdf — slides with notes.
http://youtu.be/T4DgxvS9Xho — video.
Transcript
HI THERE, LXJS
% whoami Ivan Babrou, Topface.com
60+ million users, 100+ million photos
16 photos on the main page, up to 200 in the feed
many small “previews”
This talk is about PHOTOS
Let’s look at some more numbers
12 storage nodes, 70 TB total space, 44 TB used
250 TB per month, 1.6 Gbps peak, 850 Mbps average
powered by node.js & nginx! open-source FTW
ARCHITECTURE aka part 1
frontend → cache → resizer → storage
of course there are frontends, probably more than one frontend
round-robin dns + ipvs in front of them, probably
NGINX ngx_http_upstream_hash_module is your friend
NGINX ngx_http_upstream_hash_module because you need more than one cache, right?
#protip don’t cache anything twice
don't do: frontend → caches where the same files (file #1, #2, #3) end up cached on several caches at once
do: frontend → caches where each file (file #1, #2, #3) lives on exactly one cache
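A rough node.js sketch of what the upstream hash gives you (in production this is a one-line nginx directive, not application code; the cache hostnames here are made up): hash the request URI and always send it to the same cache, so each photo is cached exactly once.

```js
// Conceptual sketch only: this is what ngx_http_upstream_hash_module does for you.
// Hostnames are hypothetical.
var crypto = require('crypto');

var caches = ['cache1:80', 'cache2:80', 'cache3:80'];

function pickCache(uri) {
    // hash the URI and map it to exactly one cache
    var hash = crypto.createHash('md5').update(uri).digest();
    return caches[hash.readUInt32BE(0) % caches.length];
}

console.log(pickCache('/photos/42/preview.jpg')); // same URI -> same cache, every time
```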
NGINX + SSD is just great for caching, forget about tmpfs
#protip overallocate caches
RESIZING: resizing on the fly saves disk, but eats cpu
NGINX ngx_http_image_filter is your friend
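The deck's answer is nginx's ngx_http_image_filter. Purely as an illustration of the same resize-on-the-fly idea in the node.js tier, here is a sketch using the sharp library (not part of the talk's stack; the source path and sizes are made up):

```js
// Illustration only: the talk does this with ngx_http_image_filter inside nginx.
var http  = require('http');
var sharp = require('sharp'); // npm install sharp (not used in the original setup)

http.createServer(function (req, res) {
    // resize the original photo down to a small preview on every request;
    // nothing resized is ever written to disk
    sharp('/storage/originals/42.jpg')
        .resize(200, 200)
        .toBuffer(function (err, preview) {
            if (err) { res.statusCode = 500; return res.end(); }
            res.setHeader('Content-Type', 'image/jpeg');
            res.end(preview);
        });
}).listen(8080);
```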
BACKPACK aka part 2
first try nginx
okay for 1k files
okay for 10k files
okay for 50k files
okay as long as everything fits in memory or you have ssd
RANDOM ACCESS
DISKS ARE SPINNING
node.js to the rescue!
... and redis
... and zookeeper
simple idea: no extra fseek(3)
inspired by haystack from facebook
concatenate small files into bigger ones
always keep the index in memory
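A minimal sketch of that idea (assumed structure and names, not backpack's actual code): append every photo to one big data file and remember name → offset/length in memory.

```js
// Sketch of the core idea, not the real backpack code.
var fs = require('fs');

var dataFd = fs.openSync('data.0', 'a');  // one big append-only data file
var offset = fs.fstatSync(dataFd).size;   // current end of the file
var index  = {};                          // in-memory index: name -> location

function put(name, buffer) {
    fs.writeSync(dataFd, buffer, 0, buffer.length);  // append the bytes
    index[name] = { file: 'data.0', offset: offset, length: buffer.length };
    offset += buffer.length;
}

put('lol.jpg', Buffer.from('...image bytes...'));
console.log(index['lol.jpg']); // { file: 'data.0', offset: 0, length: 17 }
```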
IMPLEMENTATION ON DISK
3.5 GB data files, as many as you need
an index file for each, with one name:offset:length record per photo
but... no worries, this is only needed if redis goes crazy
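That on-disk index makes recovery simple. A sketch of rebuilding the in-memory map from it, assuming one name:offset:length record per line (the file name and parsing here just mirror the slide, they are not the real format spec):

```js
// Rebuild the in-memory index from the on-disk index file: only needed
// when redis loses its mind and the name -> location mapping must be restored.
var fs = require('fs');

function loadIndex(path) {
    var index = {};
    fs.readFileSync(path, 'utf8').split('\n').forEach(function (line) {
        if (!line) return;                   // skip trailing empty line
        var parts = line.split(':');         // name:offset:length
        index[parts[0]] = {
            offset: parseInt(parts[1], 10),
            length: parseInt(parts[2], 10)
        };
    });
    return index;
}

var index = loadIndex('data.0.index');       // hypothetical file name
```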
IMPLEMENTATION IN MEMORY
keys for files: name -> file:offset:length (one key per photo)
all together: redis in memory, 3.5 GB data + index files on disk
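In redis that mapping is just plain string keys. A tiny sketch with the classic callback-style node_redis client (the key/value format follows the slide; 'data.0:0:17' is a made-up example):

```js
// name -> file:offset:length, kept in redis so lookups never touch the disk index
var redis  = require('redis');
var client = redis.createClient();   // classic callback-style node_redis client

// on PUT: remember where the photo ended up
client.set('lol.jpg', 'data.0:0:17');

// on GET: find out where to read it from
client.get('lol.jpg', function (err, value) {
    var parts = value.split(':');    // [file, offset, length]
    console.log('read', parts[2], 'bytes at offset', parts[1], 'from', parts[0]);
});
```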
POWERED BY node.js, looks like WebDAV
PUT: 1. write data 2. write index 3. write redis key
GET: 1. read redis key 2. read data
data files are always open!
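A sketch of that GET path (helper and file names assumed): because the data file is opened once and kept open, serving a photo is a single positional read at a known offset.

```js
// Data files are opened once and kept open, so a GET is one read() at an offset,
// with no open()/seek() per request. Names below are illustrative only.
var fs = require('fs');

var dataFd = fs.openSync('data.0', 'r');     // opened at startup, never closed

function get(entry, callback) {              // entry = { offset, length } from redis
    var buf = Buffer.alloc(entry.length);
    fs.read(dataFd, buf, 0, entry.length, entry.offset, function (err) {
        callback(err, buf);
    });
}

get({ offset: 0, length: 17 }, function (err, photo) {
    // photo is ready to be written to the HTTP response
});
```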
let's read 100K files! [benchmark chart: backpack vs nginx]
FEWER SEEKS LEAD TO HIGHER THROUGHPUT
BONUS! linearized access for processing
BUT WHAT ABOUT THE FUTURE?
NO MORE MEMORY vs DISK* Probably, someday.
MANAGEMENT aka part 3
1. adding servers 2. replication 3. failover
COORDINATOR
COORDINATOR combines servers into shards
COORDINATOR that’s where we need zookeeper
I KNOW, let's use a DHT! like Dynamo!
Rebalancing on capacity change
NO!
NO. THANK YOU!
LET’S MAKE IT SIMPLE
SHARDS (aka buckets): six backpacks (#1-#6) grouped into shard #1 (50%) and shard #2 (50%); stored names carry the shard id: 1:lol.jpg, 2:wtf.jpg
ADDING A SHARD: backpacks #7-#9 come up as shard #3 (0%) next to shard #1 (50%) and shard #2 (50%); existing files (1:lol.jpg, 1:wtf.jpg) stay where they are, new files go to the new shard with a 50% chance
COORDINATOR knows how to handle the next file
NO REBALANCING. SIMPLE.
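One plausible sketch of the coordinator's job (assumed logic, not the real backpack-coordinator): pick a shard for each new file, weight the choice by free capacity, and bake the shard id into the stored name so existing files never have to move.

```js
// Hypothetical shard-picking logic; weights and names are made up.
var shards = [
    { id: 1, free: 0.0 },   // full
    { id: 2, free: 0.0 },   // full
    { id: 3, free: 1.0 }    // freshly added, empty
];

function pickShard() {
    // only shards with free space are candidates; weight by how free they are
    var candidates = shards.filter(function (s) { return s.free > 0; });
    var total = candidates.reduce(function (sum, s) { return sum + s.free; }, 0);
    var r = Math.random() * total;
    for (var i = 0; i < candidates.length; i++) {
        r -= candidates[i].free;
        if (r <= 0) return candidates[i];
    }
    return candidates[candidates.length - 1];
}

function storedName(name) {
    return pickShard().id + ':' + name;   // e.g. "3:cat.jpg" (reads always know the shard)
}

console.log(storedName('cat.jpg'));       // new files land on shards with free space
```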
REPLICATOR
WHAT IF A METEORITE HITS YOUR NODE?
IT HAPPENS. YOU NEED TO ACCEPT THAT.
REPLICATOR to the rescue!
make multi-node SHARDS
DISTRIBUTE SHARDS ACROSS SERVERS
[diagram: shard #1 spans backpack instances on several servers; the coordinator writes lol.jpg to one of them and the replicator copies it to the others]
REPLICATOR EVENTUALLY MAKES COPIES
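A sketch of that eventual replication (queue name, port and hosts are all made up): a worker pops freshly written file names off a redis queue and pushes them to the other backpack nodes in the same shard.

```js
// Hypothetical replicator loop; the real logic lives in backpack-replicator.
var http  = require('http');
var redis = require('redis');

var queue    = redis.createClient();                     // classic callback-style client
var replicas = ['backpack-2.local', 'backpack-3.local']; // other nodes of the same shard

function replicateNext() {
    // block until a newly written file name shows up on the queue
    queue.brpop('to-replicate', 0, function (err, reply) {
        var name = reply[1];                             // reply = [queueName, value]
        replicas.forEach(function (host) {
            // in real life the file body would be streamed here, not an empty PUT
            http.request({ host: host, port: 8080, path: '/' + name, method: 'PUT' }).end();
        });
        replicateNext();                                 // keep draining the queue
    });
}

replicateNext();
```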
THE WHOLE THING IS BULLET-PROOF IF YOU NEED IT
[diagram: three servers, each running three backpacks (#1-#9 in total) plus its own zookeeper, redis-queue, coordinator and replicator]
GET THE CODE: /Topface/backpack
npm install backpack{,-coordinator,-replicator}
That’s it! bobrik ibobrik