Why we built a distributed system - DSConf, Pune 2018

Sriram
April 21, 2018

In the first edition of DSConf, we spoke about why we built an in-house distributed system and how Suuchi, a toolkit for building such systems, evolved.


Transcript

  1. BigTable (2006), Dynamo (2007), Cassandra (2008), Voldemort (2009): the rise of KV stores. Distributed, replicated, fault-tolerant, sorted*.
  2. It boils down to: a Distributed Data Store + CoProcessors (BigTable / HBase), i.e. run arbitrary code "next" to each shard.
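     To make the coprocessor idea concrete, here is a minimal sketch of user code running "next" to a shard; the `Shard` and `Coprocessor` names are hypothetical illustrations, not HBase's actual API:

```scala
// Sketch of the coprocessor model: the datastore invokes user code
// inside the shard's own process, so reads against that shard are local.
// Shard and Coprocessor are illustrative names, not HBase's API.
trait Shard {
  def get(key: Array[Byte]): Option[Array[Byte]]
  def scan(prefix: Array[Byte]): Iterator[(Array[Byte], Array[Byte])]
}

trait Coprocessor {
  // Runs colocated with the shard; no network hop to read its data.
  def handle(request: Array[Byte], shard: Shard): Array[Byte]
}
```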
  3. Distributed Data Store + CoProcessors (BigTable / HBase):
     - Business logic upgrades are painful
     - CoProcessors are not services, more an afterthought
     - Failure semantics are not well established
     - More applications means multiple coprocessors or a single bloated coprocessor
     - Noisy neighbours / impedance due to a shared datastore
  4. In-house vs Off-the-shelf:
     - Features: Subset (in-house) vs Superset (off-the-shelf)
     - Moving parts: Fully controllable vs Community controlled
     - Ownership: Implicit vs Acquired / Cultural
     - Upfront cost: High vs Low
     - Expertise: Hired / Retained / Nurtured vs Community
  5. Primitives in a Distributed System: Communication, Request Routing, Data Sharding, Sync / Async Replication, Cluster Membership. (diagram: keys "foo", "bar", "baz" being routed across the cluster)
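     One way to read this slide is as a set of interfaces; the sketch below names them in Scala purely as an illustration (these are not Suuchi's actual types):

```scala
// The primitives from this slide as Scala traits. All names here are
// illustrative assumptions, not Suuchi's actual API.
case class Node(host: String, port: Int)

trait Membership {   // cluster membership: who are my peers?
  def members: Set[Node]
}

trait Router {       // request routing: which node(s) own a key?
  def route(key: Array[Byte], members: Set[Node]): Seq[Node]
}

trait Replicator {   // sync / async replication of a write to its owners
  def replicate(key: Array[Byte], value: Array[Byte], owners: Seq[Node]): Boolean
}
```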
  6. Suuchi provides support for:
     - the underlying communication channel
     - routing queries to the appropriate member
     - detecting your cluster members
     - replicating your data based on your strategy
     - local state via an embedded KV store per node (optionally)
     github.com/ashwanthkumar/suuchi
  7-10. Sharding / Routing: a Consistent Hash Ring, or your own sharding technique? (diagrams: a hash ring of nodes 1-4, then a node leaving the ring) Reference: Karger et al., "Consistent hashing and random trees: Distributed caching protocols for relieving hot spots on the World Wide Web".
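     A minimal consistent hash ring with virtual nodes, in the spirit of the Karger et al. paper cited above; this is a from-scratch sketch (assuming Scala 2.13 for `TreeMap.minAfter`), not Suuchi's implementation:

```scala
import java.security.MessageDigest
import scala.collection.immutable.TreeMap

// Each physical node is placed on the ring at many virtual positions;
// a key is owned by the first virtual node clockwise from its hash.
class ConsistentHashRing(nodes: Seq[String], vnodesPerNode: Int = 100) {
  private def hash(s: String): Long = {
    val digest = MessageDigest.getInstance("MD5").digest(s.getBytes("UTF-8"))
    digest.take(8).foldLeft(0L)((acc, b) => (acc << 8) | (b & 0xffL))
  }

  private val ring: TreeMap[Long, String] =
    TreeMap(nodes.flatMap(n => (0 until vnodesPerNode).map(i => hash(s"$n#$i") -> n)): _*)

  def nodeFor(key: String): String = {
    val h = hash(key)
    // Wrap around to the ring's first entry past the largest position.
    ring.minAfter(h).getOrElse(ring.head)._2
  }
}

object RingDemo extends App {
  val ring = new ConsistentHashRing(Seq("node1", "node2", "node3", "node4"))
  Seq("foo", "bar", "baz").foreach(k => println(s"$k -> ${ring.nodeFor(k)}"))
}
```

     Because only the keys between a departing node's virtual positions and their clockwise successors move, adding or removing a node remaps roughly 1/N of the keys, which is what the node-removal animation on these slides illustrates.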
  11. Replication: provides high availability for write-heavy systems at the cost of consistency. Sync vs async*: with sync replication, every request is successful only if all the replicas succeeded.
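     A sketch of the sync rule stated on this slide, where a write succeeds only if every replica acknowledged it; `writeTo` is a hypothetical stand-in for the network call:

```scala
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._
import scala.concurrent.{Await, Future}
import scala.util.Try

object SyncReplication {
  // Fan the write out to all replicas and fail the whole request if any
  // replica fails or times out. Async replication would instead ack after
  // the local write and copy to replicas in the background.
  def syncReplicate(key: String, value: String, replicas: Seq[String])
                   (writeTo: (String, String, String) => Future[Boolean]): Boolean = {
    val acks = Future.sequence(replicas.map(r => writeTo(r, key, value)))
    Try(Await.result(acks, 2.seconds)).map(_.forall(identity)).getOrElse(false)
  }
}
```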
  12. Suuchi @ Indix:
     - HTML Archive: handles 1000+ tps (a write-heavy system); stores 120 TB of URL- and timestamp-indexed HTML pages
     - Stats (as Monoids) Aggregation System: approximate real-time aggregates; timeline & windowed queries
     - Real-time scheduler for our crawlers: prioritises which batch of URLs to crawl next; helps crawl 20+ million URLs per day
  13. Gizzard from Twitter (2011), Ringpop from Uber (2015), Slicer from Google (2016), Suuchi (2016). The idea behind Suuchi: membership, request routing, sharding, etc.