Mixing a persistence cocktail

On the story of scaling your application, the tactics involved, and how to overcome the fear of making big, uncertain changes.

Adam Keys

April 21, 2011

Transcript

  1. The story of scaling. Let’s start by taking a whirlwind tour through most of the other talks you’ve heard on scaling in the past.
  2. Grow ‗ Prototype it. The first chapter in your application’s scaling story is…building it. You’ll probably end up using something with ad-hoc queries. Something like MySQL, PostgreSQL, or MongoDB.
  3. Grow ‗ Ship it! This is important! You have to ship your app, and then do a bunch of business-y and lean/agile stuff to get it to stick before you even need to think about scaling.
  4. Grow ‗ Ad-hoc queries → indices. Your application starts to get some attention and your scaling story begins. Some kinds of data start to grow really quickly. You find hotspots, track down the queries that aren’t so fast, and you add an index in your database to make that query faster. Rinse, repeat. Most apps don’t go past this point.
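In practice, that step is often just a migration. A minimal sketch, assuming a hypothetical `checkins` table whose per-user timeline query was showing up in the slow query log:

```ruby
# Hypothetical migration: the per-user timeline query on checkins was slow,
# so cover it with a composite index on (user_id, created_at).
class AddUserIndexToCheckins < ActiveRecord::Migration
  def self.up
    add_index :checkins, [:user_id, :created_at]
  end

  def self.down
    remove_index :checkins, :column => [:user_id, :created_at]
  end
end
```

The composite index covers both the lookup and the ordering, which is usually what those hotspot queries need.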
  5. Grow ‗ Database indices → specialized storage. If your app continues to grow, you’ll find yourself extracting indexes into specialized storage. You’ll replace indexed queries (or queries that are difficult to index) with caches like Memcached, Redis, or maybe something custom.
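Here’s a minimal sketch of that kind of extraction, using `Rails.cache.fetch` as a read-through cache in front of a query that’s too hot to keep hitting the relational database for. The model, key name, counter-cache column, and expiry are all made up for illustration:

```ruby
# Read-through cache via Rails.cache (backed here by Memcached): the
# popular-spots list is requested constantly, so serve it from the cache
# and only run the query when the entry has expired.
class Spot < ActiveRecord::Base
  def self.popular(limit = 20)
    Rails.cache.fetch("spots:popular:#{limit}", :expires_in => 5.minutes) do
      # The query runs only on a cache miss; checkins_count is assumed to
      # be a counter-cache column on spots.
      all(:order => "checkins_count DESC", :limit => limit)
    end
  end
end
```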
  6. Grow ‗ Slow processes → queues and workers. At some point, your application will accrue more work than you’d really want to do during a transaction/request. That’s when you want to go slightly asynchronous: queue work up and process it out-of-process. Delayed Job is a great way to start, then move up to something like Resque or RabbitMQ.
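As a sketch of the Resque flavor of this, assuming a hypothetical notification job; the class, queue, and mailer names aren’t from the talk:

```ruby
# Push slow work (emailing friends about a checkin) out of the request cycle.
class CheckinNotifier
  @queue = :notifications

  def self.perform(checkin_id)
    checkin = Checkin.find(checkin_id)
    FriendMailer.checkin_email(checkin).deliver
  end
end

# In the controller, enqueue the job instead of sending mail inline;
# a pool of `rake resque:work` processes picks it up out-of-process:
#
#   Resque.enqueue(CheckinNotifier, @checkin.id)
```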
  7. Grow ‗ Relax consistency requirements. As you start to embrace caches and queues, it’s important to start thinking about the observability of your data. Depending on how you read and write data, timing can give you confusing results. The good news is that lots of distributed databases are explicit about this, so it’s easy to read about. The bad news is that it’s a bit of a brain-bender.
  8. Integrating that hot, new database. That’s what mixing a persistence cocktail looks like when you’re talking about it over beers. But how does it work out in practice?
  9. Integrate ‘ Client libraries. After you’ve selected your second database, you need a gem to connect to it. This is usually a pretty simple task: there’s generally a widely preferred library for your database. The main caveat is if you’re not on MRI or you’re trying to go non-blocking; in those cases, you’ll want to take more care in selecting a library.
  10. Integrate ‘ Configuration. Once you’ve made your selection, you’ll end up inventing a configuration file for it. Mostly I see people doing something like `database.yml`, but plain-old-Ruby works great too. Make it easy to configure different environments. Don’t worry about avoiding global references to connections; it hasn’t seemed to hurt us at Gowalla.
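A minimal sketch of that kind of setup for a Redis connection, modeled on `database.yml`; the file name, keys, and the `$redis` global are assumptions, not Gowalla’s actual config:

```ruby
# config/initializers/redis.rb
#
# config/redis.yml might look like:
#   development:
#     host: localhost
#     port: 6379
#   production:
#     host: redis-1.internal
#     port: 6379
require 'redis'

# Pick the settings for the current environment and expose one shared
# connection as a global, in the spirit described above.
redis_config = YAML.load_file(Rails.root.join('config', 'redis.yml'))[Rails.env]
$redis = Redis.new(:host => redis_config['host'], :port => redis_config['port'])
```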
  11. Integrate ‘ Application code. Once your connection is configured, it’s time to use it in application code. At Gowalla we’ve been using direct access through a global variable and it hasn’t bitten us. I’d like to see us adopt something like redis-objects, toystore, etc. to do domain modeling against our uses of Memcached, Redis, etc., but it’s definitely not something that’s holding us back.
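Continuing the sketch above, direct access through that `$redis` global tends to look like this; the key names and methods here are invented for illustration, and something like redis-objects would let you declare these counters and sets on the model instead:

```ruby
# Plain, direct use of the global Redis connection from a model.
class User < ActiveRecord::Base
  def record_checkin!(spot_id)
    $redis.incr("users:#{id}:checkin_count")   # running counter
    $redis.sadd("users:#{id}:spots", spot_id)  # set of visited spots
  end

  def checkin_count
    $redis.get("users:#{id}:checkin_count").to_i
  end
end
```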
  12. Integrate ‘ Deployment. Once you’ve got a feature coded up using your new database, it’s time to deploy. Get your ops person to set up the database, figure out exactly which steps you need to roll the new code out to production, and then go for it.
  13. Integrate ‘ Data Migration. Once you’ve been in production for a while, you’ll inevitably find you need to rejigger how your data is stored. Absent migrations à la ActiveRecord, you’ve got a couple of options. One is read-repair, where you make your application code resilient to different versions of a data structure and only update it on writes. Another is to version the key you’re storing data with and increment the key version when you change the structure.
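Here’s a rough sketch that leans on the key-versioning approach with a read fallback, so old data gets upgraded lazily as it’s rewritten; the key names and the JSON-blob storage format are assumptions for illustration:

```ruby
require 'json'

# Version the key: writes always target the v2 layout, while reads tolerate
# data that is still sitting under the old, unversioned key.
class ProfileStore
  def self.read(user_id)
    raw = $redis.get("users:#{user_id}:profile:v2") ||
          $redis.get("users:#{user_id}:profile")    # fall back to the old structure
    raw && JSON.parse(raw)
  end

  def self.write(user_id, profile_hash)
    $redis.set("users:#{user_id}:profile:v2", profile_hash.to_json)
    # The stale unversioned key can be deleted once reads stop falling back to it.
  end
end
```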
  14. Overcoming THE FEAR. So that’s the tactical level. But there’s another layer of tactics that’s really important as you’re mixing your persistence cocktail. It’s easy to develop anxiety as you get close to deploying your new database. There’s so much that can go wrong, so much uncertainty. You need the tactics that let you overcome THE FEAR.
  15. Fear ☢ Training wheels. First off, give yourself a project with extremely low stakes. New features, or features you’re not sure you’ll always need, work great. The important part is that it’s something with low risk. Low risk means you can push the envelope a bit, which is exactly what adding a new database involves.
  16. Fear ☢ Instrument and log everything. Once you’ve got your training wheels in place, you need to know how things are working. You need numbers on how often things happen and how much time they take when they do. Use Scout, New Relic RPM, or log inspection for this. You’ll also want to log things you’re unsure about. Log profusely, and get handy with grep, sed, and awk for digesting those logs.
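For the logging half, even a tiny homegrown timer gets you numbers you can grep and awk later. A sketch, assuming the `$redis` global from the configuration sketch earlier; the label format is arbitrary:

```ruby
require 'benchmark'

# Wrap calls to the new store, logging how long each one takes so the
# timings can be pulled out of the logs later with grep/sed/awk.
def with_timing(label)
  result = nil
  ms = Benchmark.realtime { result = yield } * 1000
  Rails.logger.info("[redis] #{label} took #{'%.1f' % ms}ms")
  result
end

# Usage:
#   top_spots = with_timing("spots:trending") do
#     $redis.zrevrange("spots:trending", 0, 19)
#   end
```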
  17. Fear ☢ Dark reads and double writes. When you’re ready to use your new database for critical features, you can ease into it with two techniques. Start off by writing data to your existing database _and_ the new one. Once you’ve got the new writes debugged and working, start doing reads from the new database but discard the results. Debug, optimize, and then remove the double write so you’re only using the new system. Success!
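A sketch of what that can look like while moving a checkin counter from MySQL to Redis; the model, column, and keys are hypothetical, and the old database stays the system of record until the comparison logs go quiet:

```ruby
# Phase 1: double write -- the existing column stays authoritative,
# but every write also lands in Redis.
# Phase 2: dark read -- read from Redis too, compare, log mismatches,
# and keep returning the old answer until the new path is trusted.
class CheckinCounter
  def self.record!(user)
    user.increment!(:checkin_count)                  # existing system of record
    $redis.incr("users:#{user.id}:checkin_count")    # double write to the new store
  end

  def self.count(user)
    primary = user.checkin_count
    begin
      dark = $redis.get("users:#{user.id}:checkin_count").to_i
      Rails.logger.warn("[dark-read] checkin_count mismatch for user #{user.id}") if dark != primary
    rescue => e
      Rails.logger.error("[dark-read] redis read failed: #{e.message}")
    end
    primary
  end
end
```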
  18. Fear ☢ Feature switches, progressive rollout. Sometimes, adding new stuff doesn’t work out so well. In this case, you want the ability to turn features on and off willy-nilly. Feature toggles make this really easy. Branching in code isn’t the prettiest thing, but it’s a great safety net. The other great thing about toggles is you can roll out to more and more users, making it easy to ease a feature in, rather than hoping it works the first time.
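A minimal sketch of a switch with percentage rollout, stored in Redis so it can be flipped without a deploy; the module, key scheme, and bucketing-by-user-id are illustrative, and libraries like rollout handle the same job with more polish:

```ruby
# Feature switch with progressive rollout: a percentage stored in Redis
# decides which users see the new code path.
module Feature
  def self.enabled?(name, user = nil)
    percentage = ($redis.get("features:#{name}:percentage") || 0).to_i
    return false if percentage <= 0
    return true  if percentage >= 100
    user && (user.id % 100) < percentage   # stable per-user bucketing
  end
end

# Ramp the new path up to 10% of users, no deploy required:
#   $redis.set("features:redis_checkin_counts:percentage", 10)
#
#   if Feature.enabled?(:redis_checkin_counts, current_user)
#     CheckinCounter.count(current_user)   # new path
#   else
#     current_user.checkin_count           # old path
#   end
```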
  19. Fear ☢ Everyday I iterate. The most important tool, of course, is iteration. Depending on the scope of your project, it could be weeks or months before the new thing is the thing. Every day, move the ball forward. Every day, make it better. Every day, figure out how to take the next step without shooting yourself in the foot. Every day, deliver business value.
  20. So here’s your map. As you can see, it’s all interconnected. That’s the way of things. I think it’s neat.
      Grow: 1. Prototype it; 2. Ship it; 3. Convert ad-hoc queries to indexes; 4. Extract indexes into other systems; 5. Queue it, work it; 6. Go asynchronous; 7. Relax consistency.
      Integrate: Client libraries; Configuration; Application code; Deployment; Data migration.
      THE FEAR: Training wheels; Feature flippers; Dark reads, double writes; Iterate, a lot; Instrument and log everything.