
Sharding with MongoDB -- MongoDC 2012

Introduction to sharding with MongoDB.

Tyler Brock

June 26, 2012

Transcript

  1. Config server: a mongod started with the --configsvr option.
     Must have 3 (or 1 in development). Data is committed using a
     two-phase commit.
  2. mongos: acts as a shard router / proxy. Run one or as many as
     you want. Lightweight -- can run on app servers. Caches
     metadata from the config servers.
  3. Bring up mongods or replica sets:
     mongod --shardsvr
     mongod --replSet <name> --shardsvr
  4. Bring up config servers:
     mongod --configsvr
  5. Bring up mongos:
     mongos --configdb <list of configdb uris>
  6. Connect to mongos and add shards:
     > use admin
     > db.runCommand({ "addShard": <shard uri> })
     Enable sharding:
     > db.runCommand({ enablesharding: "<dbname>" })
     Shard a collection:
     > db.runCommand({ shardcollection: "<namespace>", key: <key> })
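The steps so far can be sketched as a single startup sequence. This is a minimal single-machine sketch, not from the deck: the dbpaths, ports, and the replica set name "rs0" are assumptions, and production would use 3 config servers on separate hosts.

```shell
# Hypothetical local sketch of the deck's setup steps (paths/ports assumed).

# 1. Config server (3 in production, 1 in development)
mongod --configsvr --dbpath /data/config --port 27019

# 2. A shard server, as a replica set member
mongod --shardsvr --replSet rs0 --dbpath /data/shard0 --port 27018

# 3. mongos, pointed at the config server(s)
mongos --configdb localhost:27019 --port 27017

# 4. From a mongo shell connected to mongos:
#    > use admin
#    > db.runCommand({ addShard: "rs0/localhost:27018" })
#    > db.runCommand({ enablesharding: "test" })
#    > db.runCommand({ shardcollection: "test.users", key: { email: 1 } })
```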
  7. Keys: shard the test.users collection on the email field:
     > db.runCommand({ shardcollection: "test.users", key: { email: 1 } })
     Documents in test.users:
     { name: "Joe", email: "[email protected]" },
     { name: "Bob", email: "[email protected]" },
     { name: "Tyler", email: "[email protected]" }
  8. (Build of the previous slide: same documents and shard key.)
  9. (Build of the previous slide: same documents and shard key.)
  10. Splitting (diagram: mongos, three config servers, Shards 1-4):
      "Split this big chunk into 2 chunks."
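The split step can be sketched in a few lines. The chunk shape ({ min, max }) and the key values below are invented for illustration; this is not MongoDB's internal chunk format.

```javascript
// Sketch of chunk splitting. A chunk owns a contiguous range of shard-key
// values [min, max); splitting it at a middle key yields two smaller chunks
// the balancer can later move independently. Shapes are hypothetical.
function splitChunk(chunk, splitKey) {
  return [
    { min: chunk.min, max: splitKey },
    { min: splitKey, max: chunk.max },
  ];
}

// "Split this big chunk into 2 chunks"
const big = { min: "a", max: "z" };
const [left, right] = splitChunk(big, "m");
console.log(left, right); // { min: 'a', max: 'm' } { min: 'm', max: 'z' }
```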
  11. Balancing (same diagram): "Shard1, move a chunk to Shard2."
  12. Balancing: "Shard1, move another chunk to Shard3."
  13. Balancing: "Shard1, move another chunk to Shard4."
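The three balancing slides above amount to evening out chunk counts. A naive sketch, with made-up chunk names; the real balancer weighs migrations and thresholds quite differently:

```javascript
// Naive balancer: repeatedly move one chunk from the shard with the most
// chunks to the shard with the fewest, until counts differ by at most 1.
function balance(shards) {
  const names = Object.keys(shards); // shards: { name: [chunk, ...] }
  while (true) {
    names.sort((a, b) => shards[b].length - shards[a].length);
    const most = names[0];
    const least = names[names.length - 1];
    if (shards[most].length - shards[least].length <= 1) break;
    shards[least].push(shards[most].pop()); // "Shard1, move a chunk to Shard2"
  }
  return shards;
}

const cluster = { Shard1: ["c1", "c2", "c3", "c4"], Shard2: [], Shard3: [], Shard4: [] };
balance(cluster); // each shard ends up with exactly one chunk
```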
  14. Routed request:
      1. Query arrives at mongos.
      2. Mongos routes the query to a single shard.
  15. Routed request (continued):
      3. The shard returns the results of the query.
  16. Routed request (continued):
      4. Results are returned to the client.
  17. Scatter-gather request:
      1. Query arrives at mongos.
      2. Mongos broadcasts the query to all shards.
  18. Scatter-gather request (continued):
      3. Each shard returns results for the query.
  19. Scatter-gather request (continued):
      4. Results are combined and returned to the client.
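The four steps above can be sketched as a broadcast plus a concatenation. The shard contents and names here are invented for illustration:

```javascript
// Hypothetical per-shard data for a query that lacks the shard key.
const shards = {
  shard1: [{ name: "Bob", state: "NY" }],
  shard2: [{ name: "Joe", state: "CA" }],
  shard3: [{ name: "Tyler", state: "NY" }],
};

function scatterGather(predicate) {
  let results = [];
  for (const docs of Object.values(shards)) {         // 2. broadcast to all shards
    results = results.concat(docs.filter(predicate)); // 3. each shard answers
  }
  return results;                                     // 4. combined for the client
}

console.log(scatterGather(d => d.state === "NY")); // Bob and Tyler
```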
  20. Distributed merge sort:
      1. Query arrives at mongos.
      2. Mongos broadcasts the query to all shards.
  21. Distributed merge sort (continued):
      3. Each shard locally sorts its results.
  22. Distributed merge sort (continued):
      4. Results are returned to mongos.
  23. Distributed merge sort (continued):
      5. Mongos merges the sorted results.
  24. Distributed merge sort (continued):
      6. Combined results are returned to the client.
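The merge in step 5 is cheap because each shard's list arrives already sorted (step 3), so mongos only needs a k-way merge. A sketch with invented data:

```javascript
// K-way merge of per-shard result lists that are each already sorted.
// keyFn extracts the sort key from a document.
function kWayMerge(sortedLists, keyFn) {
  const out = [];
  const idx = sortedLists.map(() => 0); // cursor into each shard's list
  while (true) {
    let best = -1;
    for (let i = 0; i < sortedLists.length; i++) {
      if (idx[i] < sortedLists[i].length &&
          (best === -1 ||
           keyFn(sortedLists[i][idx[i]]) < keyFn(sortedLists[best][idx[best]]))) {
        best = i;
      }
    }
    if (best === -1) break;            // every cursor exhausted
    out.push(sortedLists[best][idx[best]++]);
  }
  return out;
}

// 3. shards sort locally; 5. mongos merges the sorted streams
const fromShards = [["a", "d"], ["b", "e"], ["c"]];
console.log(kWayMerge(fromShards, x => x)); // [ 'a', 'b', 'c', 'd', 'e' ]
```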
  25. Queries:
      By shard key: routed.
      db.users.find({ email: "[email protected]" })
      Sorted by shard key: routed in order.
      db.users.find().sort({ email: -1 })
      By non-shard key: scatter-gather.
      db.users.find({ state: "NY" })
      Sorted by non-shard key: distributed merge sort.
      db.users.find().sort({ state: 1 })
  26. Writes:
      Inserts require the shard key:
      db.users.insert({ name: "Bob", email: "[email protected]" })
      Removes, routed:
      db.users.remove({ email: "[email protected]" })
      Removes, scattered:
      db.users.remove({ name: "Bob" })
      Updates, routed:
      db.users.update({ email: "[email protected]" }, { $set: { state: "NY" } })
      Updates, scattered:
      db.users.update({ state: "CA" }, { $set: { state: "NY" } })
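The routed vs. scattered distinction above comes down to whether the write's query constrains the shard key. A sketch; the field name, shard list, and toy chunk map are assumptions:

```javascript
// Decide which shards must receive a write: one shard if the query
// constrains the shard key, otherwise every shard in the cluster.
function writeTargets(query, shardKeyField, allShards, routeFn) {
  if (shardKeyField in query) {
    return [routeFn(query[shardKeyField])]; // routed: a single shard
  }
  return allShards;                         // scattered: broadcast
}

const allShards = ["shard1", "shard2", "shard3"];
const route = email => (email < "m" ? "shard1" : "shard2"); // toy chunk map

console.log(writeTargets({ email: "[email protected]" }, "email", allShards, route)); // [ 'shard1' ]
console.log(writeTargets({ name: "Bob" }, "email", allShards, route));         // all three shards
```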
  27. Chunks should be able to split.
      Bad shard key: { node: 1 }
      Example document:
      { node: "ny153.example.com", application: "apache",
        time: "2011-01-02T21:21:56Z", level: "ERROR",
        msg: "something is broken" }
  28. Chunks should be able to split.
      Bad: { node: 1 }  Better: { node: 1, time: 1 }
  29. Writes should be distributed.
      Bad shard key: { time: 1 }
  30. Writes should be distributed.
      Bad: { time: 1 }  Better: { node: 1, application: 1, time: 1 }
  31. Queries should be routed to one shard.
      Bad shard key: { msg: 1, node: 1 }
  32. Queries should be routed to one shard.
      Bad: { msg: 1, node: 1 }  Better: { node: 1, time: 1 }
  33. Write scaling: add shards. (Diagram: writes and reads spread
      across shard1 [node_a1, node_b1, node_c1] and shard2
      [node_a2, node_b2, node_c2].)
  34. Write scaling: add shards. (Diagram: a third shard, shard3
      [node_a3, node_b3, node_c3], is added.)