
Load Balancing 101

A brief overview of load balancing: terminology, tools, and the fact that it's really just queueing


Bradley Whittington

May 07, 2013


Transcript

  1. Load balancing 101 Smoke and mirrors.

  2. What is load balancing? Spreading load over multiple workers,

    servers, cores, processes, and time. • A queue of discrete requests/jobs/transactions and workers which can complete the work • A slew of concurrent, long-lived connections
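The "queue of discrete jobs plus workers" model from the slide can be sketched with the standard library alone. This is a minimal illustration, not any particular tool's API; the job payloads, worker count, and the "work" itself (doubling a number) are made up for the example.

```python
import queue
import threading

jobs = queue.Queue()
results = []
lock = threading.Lock()

def worker():
    # Each worker pulls discrete jobs off the shared queue until
    # it sees the None sentinel meaning "no more work".
    while True:
        job = jobs.get()
        if job is None:
            jobs.task_done()
            return
        with lock:
            results.append(job * 2)  # stand-in for real work
        jobs.task_done()

# Three workers draining one queue: load spreads across them
# automatically, since each takes the next job when it is free.
threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for job in range(10):
    jobs.put(job)
for _ in threads:
    jobs.put(None)      # one sentinel per worker
jobs.join()
for t in threads:
    t.join()
```

Every queue-based balancer in the tool list below (Gearman, Celery, ØMQ patterns) is, at heart, this shape with persistence and networking added.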
  3. When do you have a load balancing problem? Always.

  4. What can you load balance? • HTTP level load balancing

    • Socket level load balancing • Queue + slow legacy system
  5. Queue everything and delight everyone (when you can, at the

    right priority)
  6. Every man and his dog has a load balancer these

    days • HAProxy • Nginx • Amazon ELB • Varnish • Pound • Apache mod_proxy_balancer • Gearman • Celery • ØMQ • keepalived etc.
  7. Load balancing terms • Hardware: ◦ Fast/Efficient ◦ Fixed circuitry/purpose

    ◦ In-stream decisions ◦ Reason people get Cisco qualifications ◦ SPoF (can be mitigated) • Software ◦ Plethora of options ◦ Performance tradeoffs ◦ More cleverness ◦ More deployability
  8. Scheduling • Round robin • Least-connections • Random! • Clever

    ◦ Make decisions based on latency, worker load, geography, etc. (e.g. ELB) ◦ Dynamic: see Varnish DNS Director and Nginx Resolver / Nginx lua ◦ Hash-based: consistent hashing based on incoming request data: see haproxy map Links: http://rapgenius.com/James-somers-herokus-ugly-secret-lyrics
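Three of the scheduling strategies above can be sketched in a few lines. The backend names are illustrative, and the hash-based picker is a toy modulo hash rather than a true consistent-hash ring (a real one, like haproxy's map or Nginx's consistent-hash module, keeps most keys stable when a backend is added or removed).

```python
import hashlib
import itertools

backends = ["web1", "web2", "web3"]

# Round robin: cycle through backends in fixed order.
rr = itertools.cycle(backends)

# Least connections: pick the backend with the fewest open
# connections (counts would be maintained by the proxy).
conns = {b: 0 for b in backends}
def least_connections():
    return min(conns, key=conns.get)

# Hash-based: the same request key (here, a client IP) always
# maps to the same backend, as long as the backend set is stable.
def hash_pick(client_ip):
    h = int(hashlib.md5(client_ip.encode()).hexdigest(), 16)
    return backends[h % len(backends)]
```

Round robin is fair but ignores load; least-connections adapts to slow backends; hash-based trades fairness for locality (useful for caches).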
  9. Session stickiness / Persistence: • Sticky: route requests from known/seen

    requestors to the same backend worker ◦ Source IP ◦ Cookie-based ◦ How does this work in high availability? • Non-sticky: don't stick to backend workers ◦ Means you have to be clever about shared session state
  10. Health checks • Do them right • Check frequently, but

    not too much • Check that you're getting a valid response from the backend ◦ not just that port 80 is open ◦ is it accepting HTTP? ◦ is the page loading? ◦ is it loading correctly (i.e., is the DB up?) • Build a URL which gives you good health indicators, and tell the LB to check it • Spend time configuring your LB error pages
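The "build a URL which gives you good health indicators" advice amounts to a health view that exercises real dependencies instead of merely answering on port 80. A minimal sketch, in which `check_db` and `check_cache` are hypothetical stand-ins for real probes:

```python
def check_db():
    # stand-in for e.g. a "SELECT 1" against the real database
    return True

def check_cache():
    # stand-in for e.g. a cache set/get round trip
    return True

def health():
    """Return (status, details) for the LB's health-check URL.

    200 only when every dependency answers; 503 otherwise, so the
    balancer pulls this worker out of rotation.
    """
    checks = {"db": check_db(), "cache": check_cache()}
    status = 200 if all(checks.values()) else 503
    return status, checks
```

Pointing the balancer's check at this URL catches "the process is up but the DB is down", which a bare TCP check on port 80 never will.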
  11. Example 1 - naïveté level 8 • Requests land on

    http://router.myapp.com • Redirect with HTTP 302 Found to http://web {1,2,3}.myapp.com Kakness: • Discuss
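The naive scheme above can be sketched as a router that answers every request with a 302 to the next web host. The hostnames come from the slide; the handler shape (returning a status line and header list) is illustrative, WSGI-like pseudocode rather than a real framework's API.

```python
import itertools

hosts = ["web1.myapp.com", "web2.myapp.com", "web3.myapp.com"]
rr = itertools.cycle(hosts)

def router(path):
    """Redirect the client to the next backend in rotation."""
    target = "http://%s%s" % (next(rr), path)
    return "302 Found", [("Location", target)]
```

One obvious piece of "kakness" to discuss: every request costs an extra round trip, the backend hostnames leak to users (who bookmark them), and a dead backend keeps receiving redirected clients.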
  12. Example 2.0 - Let's use hardware! • Buy a hardware

    load balancer • Plug it in • Packets are free to roam! Kakness • Discuss
  13. Example 2.1 - let's use software! • User hits http://myapp.com

    • Proxies to backend worker servers • Responses flow freely Kakness: • Discuss
  14. Example 3 - let's use DNS! • User requests http://myapp.com

    • DNS round robins you an IP • Be careful about TTL! Kakness • Discuss
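DNS round robin can be sketched as an authoritative answer whose A-record list rotates on each query, with the TTL controlling how long clients cache their pick. The addresses (from the documentation range) and the 30-second TTL are illustrative.

```python
from collections import deque

records = deque(["203.0.113.1", "203.0.113.2", "203.0.113.3"])
TTL = 30  # seconds; a long TTL pins clients to one IP for longer

def resolve():
    """Return the A-record list, rotated so a different IP leads."""
    answer = list(records)
    records.rotate(-1)   # next query sees a different first record
    return answer, TTL
```

The TTL warning from the slide is the crux: resolvers and browsers cache answers, so a high TTL means a dead IP keeps being handed out long after you pull it from DNS.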
  15. Cool links Varnish cleverness: http://dev.theladders.com/2013/05/varnish-in-five-acts/ Link header: http://blog.kevburnsjr.com/tagged-cache-invalidation Nginx cleverness:

    http://wiki.nginx.org/HttpUpstreamConsistentHash Heroku's ugly secret: http://rapgenius.com/James-somers-herokus-ugly-secret-lyrics
  16. Thanks? Collaborated by • Adrian Moisey • Bradley Whittington •

    Bearnard Hibbins • Jonathan Hitchcock • Simon de Haan