Slide 1

BUILDING TO SCALE
David Cramer
twitter.com/zeeg

Slide 2

The things we build will not and cannot last

Slide 3

Who am I?

Slide 4

Slide 5

Slide 6

Slide 7

What do we mean by scale?

Slide 8

DISQUS: Massive traffic with a long tail
Sentry: Counters and event aggregation
tenXer: More stats than we can count

Slide 9

Does one size fit all?

Slide 10

Practical Storage

Slide 11

Postgres is the foundation of DISQUS

Slide 12

MySQL powers the tenXer graph store

Slide 13

Sentry is built on SQL

Slide 14

Databases are not the problem

Slide 15

Compromise

Slide 16

Scaling is about Predictability

Slide 17

Augment SQL with [technology]

Slide 18

Slide 19

Simple solutions using Redis (I like Redis)

Slide 20

Counters

Slide 21

Counters are everywhere

Slide 22

Counters in SQL

    UPDATE table SET counter = counter + 1;

Slide 23

Counters in Redis

    > INCR counter
    1

    >>> redis.incr('counter')

Slide 24

Counters in Sentry

    event ID 1 -> Redis INCR
    event ID 2 -> Redis INCR  -> SQL UPDATE
    event ID 3 -> Redis INCR

Slide 25

Counters in Sentry

‣ INCR event_id in Redis
‣ Queue a buffer incr task
‣ 5-10s explicit delay
‣ Task does an atomic GET event_id and DEL event_id (Redis pipeline)
‣ No-op if GET is not > 0
‣ One SQL UPDATE per unique event per delay (sketched below)
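A minimal sketch of that flow, assuming redis-py and a Celery-style task queue; the task name, the key prefix, and the events table in the raw SQL are illustrative assumptions, not Sentry's actual code.

    from celery import shared_task
    from django.db import connection
    from redis import Redis

    redis = Redis()
    FLUSH_DELAY = 10  # seconds; the talk suggests 5-10s

    def on_event(event_id):
        # Hot path: a cheap Redis INCR plus a delayed flush task.
        redis.incr('pending:%s' % event_id)
        buffer_incr.apply_async(args=[event_id], countdown=FLUSH_DELAY)

    @shared_task
    def buffer_incr(event_id):
        key = 'pending:%s' % event_id
        # GET + DEL inside one pipeline so the read-and-clear is atomic.
        with redis.pipeline() as pipe:
            pipe.get(key)
            pipe.delete(key)
            count, _ = pipe.execute()
        count = int(count or 0)
        if count <= 0:
            return  # an earlier task already flushed this key: no-op
        # One row-level UPDATE per unique event per delay window.
        with connection.cursor() as cursor:
            cursor.execute(
                "UPDATE events SET times_seen = times_seen + %s WHERE id = %s",
                [count, event_id],
            )

Many overlapping tasks can fire for the same event; all but the first in each window become the no-ops the next slide calls out.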

Slide 26

Counters in Sentry (cont.)

Pros
‣ Solves database row lock contention
‣ Redis nodes are horizontally scalable
‣ Easy to implement

Cons
‣ Too many dummy (no-op) tasks

Slide 27

Alternative Counters

    event ID 1 -> Redis ZINCRBY
    event ID 2 -> Redis ZINCRBY  -> SQL UPDATE
    event ID 3 -> Redis ZINCRBY

Slide 28

Sorted Sets in Redis

    > ZINCRBY events ad93a 1
    {ad93a: 1}
    > ZINCRBY events ad93a 1
    {ad93a: 2}
    > ZINCRBY events d2ow3 1
    {ad93a: 2, d2ow3: 1}
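The same sequence through redis-py, shown as a hedged aside (redis-py 3.x argument order for zincrby is name, amount, value); return values are in the comments.

    from redis import Redis

    r = Redis()
    r.zincrby('events', 1, 'ad93a')  # 1.0
    r.zincrby('events', 1, 'ad93a')  # 2.0
    r.zincrby('events', 1, 'd2ow3')  # 1.0
    r.zrange('events', 0, -1, withscores=True)
    # [(b'd2ow3', 1.0), (b'ad93a', 2.0)]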

Slide 29

Alternative Counters

‣ ZINCRBY events event_id in Redis
‣ Cron buffer flush
‣ ZRANGE events to get pending updates
‣ Fire an individual task per update
‣ Atomic ZSCORE events event_id and ZREM events event_id to get and flush the count (sketched below)
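A matching sketch of the cron-driven flush, again assuming redis-py and Celery; the key name and task names are illustrative assumptions.

    from celery import shared_task
    from redis import Redis

    redis = Redis()
    PENDING_KEY = 'events'  # sorted set of event_id -> pending count

    def on_event(event_id):
        # Hot path: bump the member's score inside one shared sorted set.
        redis.zincrby(PENDING_KEY, 1, event_id)  # redis-py 3.x argument order

    def flush_pending():
        # Run from cron (or celery beat): fan out one task per pending member.
        for event_id in redis.zrange(PENDING_KEY, 0, -1):
            flush_one.delay(event_id.decode())  # decode bytes for the JSON serializer

    @shared_task
    def flush_one(event_id):
        # ZSCORE + ZREM in one pipeline so the read-and-clear stays atomic.
        with redis.pipeline() as pipe:
            pipe.zscore(PENDING_KEY, event_id)
            pipe.zrem(PENDING_KEY, event_id)
            score, _ = pipe.execute()
        count = int(score or 0)
        if count <= 0:
            return
        # ...then apply the count with one SQL UPDATE, as in the earlier sketch.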

Slide 30

Alternative Counters (cont.)

Pros
‣ Removes (most) no-op tasks
‣ Works without a complex queue, since jobs need no delay

Cons
‣ A single Redis key stores all pending updates

Slide 31

Activity Streams

Slide 32

Streams are everywhere

Slide 33

Streams in SQL

    class Activity:
        SET_RESOLVED = 1
        SET_REGRESSION = 6

        TYPE = (
            (SET_RESOLVED, 'set_resolved'),
            (SET_REGRESSION, 'set_regression'),
        )

        event = ForeignKey(Event)
        type = IntegerField(choices=TYPE)
        user = ForeignKey(User, null=True)
        datetime = DateTimeField()
        data = JSONField(null=True)

Slide 34

Streams in SQL (cont.)

    >>> Activity(event, SET_RESOLVED, user, now)
    "David marked this event as resolved."

    >>> Activity(event, SET_REGRESSION, datetime=now)
    "The system marked this event as a regression."

    >>> Activity(type=DEPLOY_START, datetime=now)
    "A deploy started."

    >>> Activity(type=SET_RESOLVED, datetime=now)
    "All events were marked as resolved"
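One hedged way those typed rows could become the strings shown above: the message depends on which fields are null. The describe helper and its wording are illustrative assumptions, not Sentry's renderer.

    def describe(activity):
        # Null user means the system acted; null event means "all events".
        actor = activity.user.name if activity.user else 'The system'
        target = 'this event' if activity.event else 'all events'
        if activity.type == Activity.SET_RESOLVED:
            return '%s marked %s as resolved.' % (actor, target)
        if activity.type == Activity.SET_REGRESSION:
            return '%s marked %s as a regression.' % (actor, target)
        if activity.type == DEPLOY_START:
            return 'A deploy started.'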

Slide 35

Stream == View == Cache

Slide 36

Views as a Cache

    TIMELINE = []
    MAX = 500

    def on_event_creation(event):
        global TIMELINE
        TIMELINE.insert(0, event)
        TIMELINE = TIMELINE[:MAX]

    def get_latest_events(num=100):
        return TIMELINE[:num]

Slide 37

Views in Redis

    class Timeline(object):
        def __init__(self):
            self.db = Redis()

        def add(self, event):
            # score by timestamp (seconds.microseconds) so newer entries rank higher
            score = float(event.date.strftime('%s.%f'))
            self.db.zadd('timeline', {event.id: score})  # redis-py 3.x mapping form

        def list(self, offset=0, limit=-1):
            return self.db.zrevrange(
                'timeline', offset, limit)

Slide 38

Views in Redis (cont.)

    MAX_SIZE = 10000

    def add(self, event):
        score = float(event.date.strftime('%s.%f'))
        # add the member and trim the set to avoid
        # data bloat in a single key
        with self.db.pipeline() as pipe:
            pipe.zadd(self.key, {event.id: score})
            # drop the lowest-ranked (oldest) members beyond MAX_SIZE
            pipe.zremrangebyrank(self.key, 0, -MAX_SIZE - 1)
            pipe.execute()
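A quick usage sketch of the Timeline view above, assuming an event object that exposes .id and .date:

    timeline = Timeline()
    timeline.add(event)           # push on write
    newest = timeline.list(0, 9)  # ten most recent event IDs, newest first

Reads never touch SQL; the sorted set is the materialized view.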

Slide 39

Queuing

Slide 40

Introducing Celery

Slide 41

RabbitMQ or Redis

Slide 42

Asynchronous Tasks

    # Register the task
    @task(exchange="event_creation")
    def on_event_creation(event_id):
        counter.incr('events', event_id)

    # Delay execution
    on_event_creation.delay(event.id)

Slide 43

Fanout

    @task(exchange="counters")
    def incr_counter(key, id=None):
        counter.incr(key, id)

    @task(exchange="event_creation")
    def on_event_creation(event_id):
        incr_counter.delay('events', event_id)
        incr_counter.delay('global')

    # Delay execution
    on_event_creation.delay(event.id)

Slide 44

Object Caching

Slide 45

Object Cache Prerequisites

‣ Your database can't handle the read load
‣ Your data changes infrequently
‣ You can handle slightly worse performance

Slide 46

Distributing Load with Memcache

    Memcache 1: Event ID 01, 04, 07, 10, 13
    Memcache 2: Event ID 02, 05, 08, 11, 14
    Memcache 3: Event ID 03, 06, 09, 12, 15
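What the picture implies in code: the node is chosen from the key alone, which is roughly what memcache client libraries already do internally. The host names below are placeholders.

    import zlib

    NODES = ['memcache-1:11211', 'memcache-2:11211', 'memcache-3:11211']

    def node_for(key):
        # a stable hash keeps every web process agreeing on key placement
        return NODES[zlib.crc32(str(key).encode()) % len(NODES)]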

Slide 47

Querying the Object Cache

    def make_key(model, id):
        return '{}:{}'.format(model.__name__, id)

    def get_by_ids(model, id_list):
        keys = {make_key(model, id): id for id in id_list}

        # values found in the cache, re-keyed by object id
        cached = cache.get_multi(list(keys))
        res = {keys[k]: v for k, v in cached.items()}

        # anything the cache missed is still pending
        pending = set(id_list) - set(res)
        if pending:
            mres = model.objects.in_bulk(pending)  # {id: instance}
            cache.set_multi({make_key(model, id): obj for id, obj in mres.items()})
            res.update(mres)
        return res

Slide 48

Pushing State

    def save(self):
        cache.set(make_key(type(self), self.id), self)

    def delete(self):
        cache.delete(make_key(type(self), self.id))
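These hooks presumably sit on the model itself; here is a hedged sketch of that wiring on a Django model, reusing make_key from the previous slide. The abstract base class is an assumption for illustration.

    from django.core.cache import cache
    from django.db import models

    class CachedModel(models.Model):
        class Meta:
            abstract = True

        def save(self, *args, **kwargs):
            super(CachedModel, self).save(*args, **kwargs)
            # push the fresh state so readers never see a stale copy
            cache.set(make_key(type(self), self.id), self)

        def delete(self, *args, **kwargs):
            cache.delete(make_key(type(self), self.id))
            super(CachedModel, self).delete(*args, **kwargs)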

Slide 49

Redis for Persistence

    Redis 1: Event ID 01, 04, 07, 10, 13
    Redis 2: Event ID 02, 05, 08, 11, 14
    Redis 3: Event ID 03, 06, 09, 12, 15

Slide 50

Routing with Nydus

    # create a cluster of Redis connections which
    # partition reads/writes by (hash(key) % size)
    from nydus.db import create_cluster

    redis = create_cluster({
        'engine': 'nydus.db.backends.redis.Redis',
        'router': 'nydus.db...redis.PartitionRouter',
        'hosts': {n: {'db': n} for n in xrange(10)},
    })

github.com/disqus/nydus

Slide 51

Planning for the Future

Slide 52

One of the largest problems for Disqus is network-wide moderation

Slide 53

Be Mindful of Features

Slide 54

Sentry's Team Dashboard

‣ Data limited to a single team
‣ Simple views which could be materialized
‣ Only entry point for "data for team"

Slide 55

Sentry's Stream View

‣ Data limited to a single project
‣ Each project could map to a different DB (sketched below)
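Because every stream query is scoped to a project, a per-project shard choice can hide behind a Django database router. The alias scheme and shard count below are assumptions for illustration, not Sentry's configuration.

    class ProjectRouter(object):
        """Route reads/writes for project-scoped data to a per-project shard."""

        def db_for_read(self, model, **hints):
            instance = hints.get('instance')
            if instance is not None and hasattr(instance, 'project_id'):
                return 'shard_%d' % (instance.project_id % 10)
            return None  # no opinion: fall through to 'default'

        def db_for_write(self, model, **hints):
            return self.db_for_read(model, **hints)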

Slide 56

Preallocate Shards

Slide 57

    redis-1: DB0 DB1 DB2 DB3 DB4 DB5 DB6 DB7 DB8 DB9

Slide 58

    redis-1: DB0 DB1 DB2 DB3 DB4
    redis-2: DB5 DB6 DB7 DB8 DB9

When a physical machine becomes overloaded, migrate a chunk of shards to another machine.
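In the Nydus cluster from the earlier slide, that migration is plausibly just a change to the hosts mapping: the ten logical DBs stay fixed, only their physical home moves. Host names are placeholders and the router path is left abbreviated as on the slide.

    from nydus.db import create_cluster

    # shards 0-4 stay on the original box...
    hosts = {n: {'host': 'redis-1', 'db': n} for n in range(5)}
    # ...while shards 5-9 move to the new one; keys still hash to the same shard
    hosts.update({n: {'host': 'redis-2', 'db': n} for n in range(5, 10)})

    redis = create_cluster({
        'engine': 'nydus.db.backends.redis.Redis',
        'router': 'nydus.db...redis.PartitionRouter',  # abbreviated as on the earlier slide
        'hosts': hosts,
    })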

Slide 59

Takeaways

Slide 60

Enhance your database
Don't replace it

Slide 61

Queue Everything

Slide 62

Learn to say no (to features)

Slide 63

Complex problems do not require complex solutions

Slide 64

QUESTIONS?