Why SPDY ◦ possible to send multiple requests in parallel ◦ HTTP carries redundant header fields like “User-Agent” on every request ◦ long-polling drains the device’s battery ◦ SPDY gives request multiplexing and out-of-order responses ◦ header compression (not suitable for all devices) ◦ started with unsecured connections; TLS is needed ▪ TLS prepackaged per client install. Spoke to Mubarak (Layer HQ) at Erlang Factory about why SPDY. Short answer: because we can; they control both the clients and the server (enabling server push and rate limiting).
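A minimal sketch of why stateful header compression (as in SPDY) pays off for the “redundant header fields” point above: the same fields repeat on every request, so a shared zlib context encodes later requests as tiny back-references. The header values and hostname are illustrative assumptions, and real SPDY additionally seeded zlib with a preset dictionary.

```python
import zlib

# Hypothetical request headers; the same fields repeat on every request.
headers = (
    b"GET /messages HTTP/1.1\r\n"
    b"Host: gateway.example.com\r\n"          # assumed hostname
    b"User-Agent: LINE/3.0 (iPhone; iOS 6.0)\r\n"
    b"Accept-Encoding: gzip\r\n\r\n"
)

# One zlib context shared across requests, as in SPDY header compression.
comp = zlib.compressobj()
first = comp.compress(headers) + comp.flush(zlib.Z_SYNC_FLUSH)
second = comp.compress(headers) + comp.flush(zlib.Z_SYNC_FLUSH)

# The second request compresses to a handful of bytes because the
# compressor already holds the first request in its window.
print(len(headers), len(first), len(second))
```

With per-request (stateless) compression each request would pay the full cost; the shared context is what makes repeated headers nearly free.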
Storage nodes sharded on the client side (coordinated with ZooKeeper) ◦ 1 million registered users ◦ 1 billion rows per month in Jun. 2011; 5-10 billion rows per month in Mar. 2012 • Evaluated: HBase, Cassandra, MongoDB ◦ HBase won. How did they choose?
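A sketch of client-side sharding as noted above: each client holds the node list (in LINE’s case distributed via ZooKeeper; hard-coded here for brevity) and routes a user’s data by hashing the user id, so no proxy tier sits in front of storage. The node names, user ids, and hash scheme are illustrative assumptions.

```python
import hashlib

# Node list a client would receive from ZooKeeper (assumed names).
NODES = ["storage-01", "storage-02", "storage-03", "storage-04"]

def node_for(user_id: str) -> str:
    """Pick a storage node for this user, deterministically."""
    h = int(hashlib.md5(user_id.encode()).hexdigest(), 16)
    return NODES[h % len(NODES)]

# Every client computes the same route for the same user.
print(node_for("u12345"))
```

Note that plain modulo hashing reshuffles most keys when the node list changes; a production design would use consistent hashing or fixed shard-to-node mapping maintained in ZooKeeper.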
Data in the LINE system, by growth rate:
◦ O(1): messages in the delivery queue; asynchronous jobs in the job queue
◦ O(n): user profiles; contacts / groups. These would naturally grow O(n^2), but limits on the number of links between users make it O(n * CONSTANT_SIZE)
◦ O(n*t): messages in the inbox; change-sets of user profiles / groups / contacts
Requirements per class:
◦ O(n): availability, scalability. Workload: fast random reads and writes
◦ O(n*t): scalability, massive volume (billions of small rows per day, but mostly cold data). Workload: fast sequential writes (append-only) and fast reads of the latest data
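The O(n*t) workload above maps naturally onto a row-key layout in a byte-ordered store like HBase: prefix by user id so one user’s inbox is a contiguous range (append-only sequential writes), and embed a reversed timestamp so the newest messages sort first (fast reads of the latest data). This layout is a common HBase pattern sketched here as an assumption, not LINE’s actual schema.

```python
import struct

MAX_TS = 2**63 - 1  # largest value of a signed 64-bit timestamp field

def inbox_row_key(user_id: str, ts_millis: int) -> bytes:
    """Row key: <user_id> '#' <big-endian (MAX_TS - timestamp)>.

    Subtracting from MAX_TS reverses the sort order, so a forward scan
    from the user's prefix returns the newest messages first.
    """
    reversed_ts = MAX_TS - ts_millis
    return user_id.encode() + b"#" + struct.pack(">q", reversed_ts)

older = inbox_row_key("u1", 1000)
newer = inbox_row_key("u1", 2000)
print(newer < older)  # newer rows sort first in byte order
```

Reading “the latest k messages” then becomes a short prefix scan instead of a full-range scan plus sort, which matches the mostly-cold-data observation: old rows are simply never touched.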
References:
• hbase: http://developers.linecorp.com/blog/?p=1420
• LINE Server development and release process (in Japanese): http://developers.linecorp.com/blog/?p=2745
• LINE Adopting SPDY: http://developers.linecorp.com/blog/?p=2381 and http://developers.linecorp.com/blog/?p=2729
• Layer HQ - The art of powering the internet’s next messaging system: http://www.erlang-factory.com/static/upload/media/1427794729386661erlangfactory2015talklayer.pdf
• WhatsApp architecture: http://highscalability.com/blog/2014/2/26/the-whatsapp-architecture-facebook-bought-for-19-billion.html