Jones
Configuration with ZooKeeper

Matthew Hooker

November 10, 2012

1. about me
• Worked at Digg, where we had a system very similar to Jones (built by Rich Schumacher)
• Then at Disqus, where I worked on Jones
2. problem
• I want to be able to change config values without redeploying
3. problem
• I want my app to see these new values as soon as they change
4. always ship trunk, by Paul Hammond [1]
• Argues that web apps are not like shipped software
• You only have one user (you), so usually only a single copy of your code is in use at a time
  • except when you’re deploying
• Branch management doesn’t apply
5. always ship trunk
• “You can deploy the code for a feature long before you launch it and nobody will know”
• “You can completely rewrite your infrastructure and keep the UI the same and nobody will know”
• “Idea one: separate feature launches from infrastructure launches”
6. always ship trunk
• “You can repeatedly switch between two backend systems and keep the UI the same and nobody will know”
• “You can deploy a non-user facing change to only a small percentage of servers and nobody will know”
• “Idea two: run multiple versions of your code at once”
7. Flickr’s Flipper
• Flickr has implemented this idea with Flipper
• Unfortunately, it’s Flickr-only
8. Jones
• Jones gives you a way to make configuration changes to your app in real time
• It manages different types of environments: staging, production, development
• It can also manage config on a host-by-host basis
9. ZooKeeper
In order to really understand how Jones works, we need to understand ZK.
10. ZooKeeper
• “ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services.” [3]
11. ZooKeeper
• Hierarchical namespaces (like filesystems)
• Data stored in “znodes”: vertices in a data graph
12. Reading from ZK
• You address znodes with a string representing the path to the znode you wish to access

>>> zookeeper.get('/services/pycon/conf')[0]
'{ "locale": "Canada", "times_talk_given": 0, "is_awesome": true }'
13. Reading from ZK
• You can also list the immediate children of a node

>>> zookeeper.get_children('/friends')
[u'Matt', u'Mary', u'Mark']
14. Reading from ZK
• When accessing data, you can optionally be notified if the znode ever changes

>>> def cb(data, stat):
...     print "I changed. New value: ", data
...
>>> kc.create('/test', 'foobar')
u'/test'
>>> kc.get('/test', watch=cb)
('foobar', ...)
>>> kc.set('/test', 'baz')
I changed. New value:  baz
15. Writing to ZK
• Each znode is versioned
• ZooKeeper supports MVCC (multiversion concurrency control)
  • a way of enforcing consistency
  • ensures multiple writers don’t clobber each other
16. Suddenly, code

>>> # zk is our handle to ZooKeeper
>>> # stat holds metadata about the znode
>>> config, stat = zk.get('/test')
>>> # let's look at the current version
>>> stat.version
1
>>> # try updating the znode with the current version
>>> zk.set('/test', 'foobar', version=stat.version)
>>> # success
>>> zk.get('/test')[0]
'foobar'
>>> # we can also choose to overwrite any value
>>> zk.set('/test', 'baz', version=-1)
>>> zk.get('/test')[0]
'baz'
>>> # let's see what happens if we pass a wrong version
>>> zk.set('/test', 'foobaz', version=9000)
>>> # we get an exception, because version must match the
>>> # current version of the znode you're trying to change
kazoo.exceptions.BadVersionError: ((), {})
17. Now that we all know a little about ZooKeeper (hopefully!), how does Jones work?
18. Jones
• Config is stored as a JSON object
• Enter values here and they’ll immediately be reflected in the client
• Uses Jos de Jong’s JSON editor [2]
19. environment tree
• Environments inherit from their parents
• The actual config for that environment is shown in the “inherited view” box
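To make the inheritance concrete, here is a minimal sketch of how an “inherited view” can be computed: walk the environment chain from the root down, letting each child’s keys override its parent’s. The function and variable names are illustrative, not taken from the Jones codebase.

```python
# Sketch: a child environment's config is its parent's config with the
# child's own keys overriding. (Illustrative only, not Jones's code.)

def inherited_view(chain):
    """Merge configs root-first, so deeper environments win on conflicts."""
    view = {}
    for env_config in chain:
        view.update(env_config)
    return view

root = {"locale": "Canada", "debug": False}
production = {"db_host": "db1.example.com"}
canary = {"debug": True}

# The "inherited view" for the canary environment: root's locale,
# production's db_host, and canary's debug override.
view = inherited_view([root, production, canary])
```

This is the same semantics as nested `dict.update`: later (more specific) environments shadow earlier ones.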
20. associations
• Connect an environment to a physical host using associations
• An association can be any string you want, but defaults to the fqdn
• All hosts are associated with the root node by default
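The lookup described above can be sketched as a simple map from host identifier (fqdn by default) to an environment view, with unassociated hosts falling back to the root. The paths and function name here are hypothetical, assembled from the data-model slide later in the deck; this is not the actual Jones server code.

```python
# Sketch of association resolution: nodemap maps a host identifier to
# an environment view path; unknown hosts fall back to the root view.
# (Paths and names are illustrative, not from the Jones codebase.)
import socket

ROOT_VIEW = "/services/pycon/views/root"  # hypothetical root-view path

def resolve_view(nodemap, hostname=None):
    hostname = hostname or socket.getfqdn()  # fqdn is the default key
    return nodemap.get(hostname, ROOT_VIEW)

nodemap = {
    "laptop.local": "/services/pycon/views/root/dev",
    "web1.example.com": "/services/pycon/views/root/production",
}

resolve_view(nodemap, "web1.example.com")  # explicitly associated host
resolve_view(nodemap, "unknown-host")      # falls back to the root view
```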
21. Jones client

>>> from jones.client import JonesClient
>>> jones = JonesClient(zookeeper_client, 'pycon')  # 'pycon' is the service name
>>> jones['locale']
u'Canada'
22. use-cases
• configuration
• define database slave membership
• service endpoints
• tune algorithms
• switches
23. switches

# toggle features
if jones.get('enable_flux_capacitor'):
    flux_capacitate()

# enable features for a percentage of users
if jones.get('pct_new_queue', 0) > random():
    queue = new_queue

# enable features by user bucket
buckets = jones['macguffin_buckets']
if (user.id % buckets) in jones['macguffin_enabled_for']:
    user.wants_macguffin = True
24. switches
• Commit features early; hide them behind a switch until they’re ready
• Public betas
• Turn off buggy or expensive features under heavy load
• A/B testing
25. Jones was designed with 3 goals in mind
• Clients MUST only talk to ZooKeeper
• Accessing configuration MUST be simple (i.e. no computation)
• Unique views of the config MUST be available on a host-by-host basis
26. • Wanted clients to be as simple as possible, to make porting clients easy
• So the server has to do all the work
27. environments
• Environments map directly to the znode graph
• Each service has a root containing:
  • environment config
  • associations
  • materialized views
28. data model

/services
  /{service name}   # root
    /conf
    /nodemaps       # {host} -> {path to view}
    /views
29. Views
• The final config data is materialized, so only a single read is required
• This dramatically simplifies any client
• However, the server becomes more complex
30. Jones server
• Simple Flask app
• Sentry support
• Optional ZK ACLs to ensure consistency
• The Jones class deals with the complexities of materializing views and managing associations
31. Jones client
• Initialize with a service name
• Sets watches on nodemaps and the environment view
  • The nodemaps watch makes sure we always know which environment is ours
  • The view watch keeps the config dict up to date; it can optionally invoke a callback
• Simple!
32. What you should have seen
• Create a new service
• Set some root config
• Show that we can get the value from the client
• Change config; show that it is reflected in the client
• Add a child env; associate it to my laptop
• Show that config changes & inheritance work
33. in summary...
• Jones doesn’t really do all that much
• Provides a hierarchy of configurations, with children inheriting from parents
• A web UI for managing config as a JSON object
• A way to peg certain configurations to specific hosts/processes/clusters/etc.
34. Roadmap
• UI needs help: error messages, stress test
• Web app auth/ACLs for compartmentalization
• Audit log
• Ability to peg to versions, i.e. this service always needs version N
• See GitHub issues
35. It’s a golden age for ZooKeeper in Python
• Ben Bangert & co. are diligently working on Kazoo, a pure-Python ZK client. Full-featured and well written [4].
36. Kazoo Patterns
• Lock
• Party
• Partitioner
• Election
• Counter
• Barrier
37. Lock
• Serialize access to a shared resource

zk = KazooClient()
lock = zk.Lock("/macguffin", "mwhooker")
with lock:  # blocks waiting for lock acquisition
    use_macguffin()
38. Party
• Determine the members of a party
• Who’s currently providing service X?

zk = KazooClient()
party = zk.Party('/birthday', 'matt')
party.join()
list(party)
['matt']  # =(
39. Partitioner
• Divide up resources amongst participants

zk = KazooClient()
qp = zk.SetPartitioner(
    path='/birthday_cake',
    set=('piece-1', 'piece-2', 'piece-3')
)
while True:
    if qp.failed:
        raise Exception("no more cake left")
    elif qp.acquired:
        for cake_piece in qp:
            nomnom(cake_piece)
    elif qp.allocating:
        qp.wait_for_acquire()
40. Election
• Elect a leader from a party
• Who’s going to perform this bit of work?

zk = KazooClient()
election = zk.Election(
    "/election2012", "obama-biden"
)
# blocks until the election is won, then calls
# swear_in()
election.run(swear_in)
41. Counter
• Store a count in ZK
• Relies on MVCC and retry, so it may time out

zk = KazooClient()
counter = zk.Counter("/int")
counter += 2
counter -= 1
counter.value == 1
42. Barrier
• Block clients until a condition is met

# coffee master
zk = KazooClient()
barrier = zk.Barrier('/coffee-barrier')
barrier.create()
brew_coffee()
barrier.remove()

# coffee slave
zk = KazooClient()
barrier = zk.Barrier('/coffee-barrier')
barrier.wait()
drink_coffee()
43. znode types
• So far we’ve only talked about data nodes, but there are 2 other types:
  • ephemeral
  • sequence
• They can be mixed
44. Ephemeral Nodes
• Only exist as long as the creator maintains a connection to ZK
• How the Party, Lock, and Barrier recipes are achieved
45. Sequence Nodes
• When creating a sequence znode, ZK appends a monotonically increasing counter to the end of the path
• e.g. 2 calls to create sequence znodes at /lock- will result in:
  • /lock-0
  • /lock-1
• Sequences are unique to the parent
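The naming scheme above can be simulated in plain Python: the counter lives with the parent node, so two creates under the same parent get consecutive suffixes while a different parent starts from zero. Note that real ZooKeeper zero-pads the suffix to 10 digits (the slide abbreviates it to /lock-0, /lock-1); this sketch follows ZooKeeper's padding.

```python
# Sketch of sequence-znode naming: the monotonically increasing counter
# is kept per parent node, and ZooKeeper pads it to 10 digits.
from collections import defaultdict

counters = defaultdict(int)  # parent path -> next sequence number

def create_sequence(path_prefix):
    """Simulate creating a sequence znode: append the parent's counter."""
    parent = path_prefix.rsplit('/', 1)[0] or '/'
    n = counters[parent]
    counters[parent] += 1
    return "%s%010d" % (path_prefix, n)

first = create_sequence('/lock-')        # '/lock-0000000000'
second = create_sequence('/lock-')       # '/lock-0000000001'
other = create_sequence('/jobs/task-')   # new parent, counter restarts
```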
46. ZooKeeper is highly available*
• ZK is distributed
• A ZK cluster is known as an ensemble
• * unless there is a network partition
47. • Writes to ZK are committed to a majority (aka quorum) of nodes before success is communicated
  • some nodes may have old data
• Reads happen from any node
• Writes are forwarded through the master
• As the ensemble size grows, read performance increases while write performance decreases
48. • ZooKeeper can only work if a majority of servers are correct (i.e., with 2f + 1 servers we can tolerate f failures) [5]
• Means we need to run an odd number of servers
• 3 is the minimum; 5 is recommended
• With 5 we can tolerate 2 failures
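The arithmetic behind those bullets is worth spelling out: a write needs a majority, so an ensemble of n servers tolerates the failures that still leave a majority standing. This also shows why even sizes are wasteful: going from 3 to 4 servers adds no fault tolerance.

```python
# Quorum math from the slide: n = 2f + 1 servers need a majority
# of f + 1 to commit, so they tolerate f = n - majority failures.

def majority(n):
    """Smallest strict majority of n servers."""
    return n // 2 + 1

def tolerated_failures(n):
    """Failures an n-server ensemble survives while keeping a majority."""
    return n - majority(n)

for n in (3, 4, 5):
    print("%d servers -> tolerates %d failure(s)" % (n, tolerated_failures(n)))
```

With 3 servers one failure is survivable, with 4 still only one, and with 5 two, matching the slide's recommendation of odd ensemble sizes.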
49. Thank you!
Matthew Hooker (I’m looking for a job)
[email protected]
twitter: @mwhooker
github: https://github.com/mwhooker
https://speakerdeck.com/mwhooker/jones
https://github.com/mwhooker/jones