Taking Django Distributed

A talk I gave at DjangoCon US 2017.

Andrew Godwin

August 16, 2017

Transcript

  1. Taking
    Django
    Distributed
    Andrew Godwin
    @andrewgodwin

  2. Hi, I’m
    Andrew Godwin
    • Django core developer
    • Senior Software Engineer at
    • Needs to stop running towards code on fire

  3. Computers hate you.

  4. This makes distributed hard.

  5. (image still from 2001: A Space Odyssey, © Warner Brothers)

  6. It’s time to split things up a bit.
    But how? And why?

  7. Code
    Databases
    Team

  8. There is no one solution

  9. Read-heavy?
    Write-heavy?
    Spiky?
    Predictable?
    Chatty?

  10. Code

  11. Use apps! They’re a good start!
    Ignore the way I wrote code for the first 5 years of Django.

  12. Formalise interfaces between apps
    Preferably in an RPC style

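A concrete way to read this slide, with module and model names invented for illustration: give each app a single api.py that other apps are allowed to import, and keep models and querysets private to the app. If the app later becomes its own service, only this module has to change.

    # inventory/api.py -- the only entry point other apps may use.
    # (App, model, and field names are illustrative, not from the talk.)
    from django.db.models import F

    from inventory.models import StockRecord


    class OutOfStock(Exception):
        pass


    def reserve_stock(sku, quantity):
        """Reserve `quantity` units of `sku`, or raise OutOfStock."""
        updated = StockRecord.objects.filter(
            sku=sku, available__gte=quantity,
        ).update(available=F("available") - quantity)
        if not updated:
            raise OutOfStock(sku)

Because callers only ever see reserve_stock(sku, quantity), the function body can later be swapped for an HTTP or Channels call without touching any caller.
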
  13. Split along those interfaces
    Into separate processes, or machines

  14. Inventory Payments
    Presentation

  15. How do you communicate?
    HTTP? Channels? Smoke signals?

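For scale, the plainest of those options, an HTTP call between services, might look like the sketch below; the URL, payload, and timeout are invented for illustration.

    # presentation/services.py -- call the inventory service over plain HTTP.
    import requests


    def reserve_stock(sku, quantity):
        response = requests.post(
            "http://inventory.internal/api/reserve/",  # hypothetical internal URL
            json={"sku": sku, "quantity": quantity},
            timeout=2,  # never let a slow service hang your request workers
        )
        response.raise_for_status()
        return response.json()
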
  16. (image-only slide)

  17. (image-only slide)

  18. (image-only slide)

  19. (image-only slide)

  20. Databases

  21. Users
    Vertically Partitioned Database
    Images
    Comments

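In Django terms, a vertical partition like this usually means one entry in DATABASES per slice plus a database router; a minimal sketch, with alias and app names invented to match the slide.

    # settings.py -- one database alias per vertical slice (values are placeholders).
    DATABASES = {
        "default": {"ENGINE": "django.db.backends.postgresql", "NAME": "users"},
        "images": {"ENGINE": "django.db.backends.postgresql", "NAME": "images"},
        "comments": {"ENGINE": "django.db.backends.postgresql", "NAME": "comments"},
    }
    DATABASE_ROUTERS = ["routers.VerticalPartitionRouter"]

    # routers.py -- send each app's queries and migrations to its own database.
    APP_TO_DB = {"images": "images", "comments": "comments"}


    class VerticalPartitionRouter:
        def db_for_read(self, model, **hints):
            return APP_TO_DB.get(model._meta.app_label, "default")

        def db_for_write(self, model, **hints):
            return APP_TO_DB.get(model._meta.app_label, "default")

        def allow_migrate(self, db, app_label, model_name=None, **hints):
            return db == APP_TO_DB.get(app_label, "default")
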
  22. Main DB
    Replica Replica Replica
    Single main database with replication

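That replica diagram maps onto a second kind of router: writes go to the main database, reads are spread across replicas. A sketch, assuming DATABASES also defines aliases replica1 to replica3.

    # routers.py -- read/write splitting across replicas (alias names assumed).
    import random

    REPLICAS = ["replica1", "replica2", "replica3"]


    class ReplicaRouter:
        def db_for_read(self, model, **hints):
            return random.choice(REPLICAS)

        def db_for_write(self, model, **hints):
            return "default"

        def allow_relation(self, obj1, obj2, **hints):
            # Every alias points at the same data, so relations are always fine.
            return True

Replication lag is exactly where the non-consistency on the next slides shows up: a read can hit a replica before the write it depends on has replicated.
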
  23. Partition Tolerant
    Available
    Consistent

  24. Non-consistency is everywhere
    It’s sneaky like that

  25. (image: National Museum of American History)

  26. Load Balancing

  27. Equally balanced servers
    Consistent load times
    Similar users

  28. Split logic
    Different processor loads
    Wildly varying users

  29. (chart: requests vs. time)

  30. (chart: requests vs. time)

  31. W E B S O C K E T S

  32. W E B S O C K E T S
    ● They can last for hours
    ● There aren't many tools that handle them
    ● They have 4 different kinds of failure

  33. Design for failure, and then use it!
    Kill off sockets early and often.

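One way to "kill off sockets early and often" with Channels: cap every connection's lifetime so clients exercise their reconnect path constantly, not just during incidents. This sketch uses the current AsyncWebsocketConsumer API; the consumer name, timeout, and close code are invented.

    # consumers.py -- no WebSocket gets to live forever.
    import asyncio

    from channels.generic.websocket import AsyncWebsocketConsumer

    MAX_LIFETIME = 15 * 60  # seconds; short enough that reconnects happen all day


    class NotificationsConsumer(AsyncWebsocketConsumer):
        async def connect(self):
            await self.accept()
            # Schedule a forced close so the client's reconnect logic stays tested.
            self.reaper = asyncio.ensure_future(self.close_later())

        async def close_later(self):
            await asyncio.sleep(MAX_LIFETIME)
            await self.close(code=4000)  # application-defined "please reconnect"

        async def disconnect(self, close_code):
            self.reaper.cancel()
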
  34. Team

  35. Developers are people too!
    They need time and interesting things

  36. Technical debt can be poisonous
    But you need a little bit to compete

  37. Single repo? Multiple repos?
    Each has distinct advantages.

  38. Teams per service? Split responsibility?
    Do you split ops/QA across teams too?

  39. Ownership gaps
    They’re very hard to see.

  40. Strategies

  41. Don’t go too micro on those services
    It’s easier in the short term, but will confuse you in the long term.

  42. Communicate over a service bus
    Preferably Channels, but you get to choose.

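With Channels as the bus, this roughly means pushing messages onto named channels that worker processes consume. A sketch using the current channel-layer API; the channel and message names are invented.

    # Send work to another service via the channel layer.
    from asgiref.sync import async_to_sync
    from channels.layers import get_channel_layer


    def request_thumbnail(image_id):
        channel_layer = get_channel_layer()
        async_to_sync(channel_layer.send)(
            "thumbnails",  # channel a worker in the image service listens on
            {"type": "thumbnail.generate", "image_id": image_id},
        )
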
  43. Work out where to allow old data
    Build in deliberate caching or read-only modes

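"Allowing old data" can be as blunt as a cache with an explicit TTL plus a read-only switch you can flip when a backing service is struggling; the model, cache key, and setting names here are invented.

    # Deliberately-stale reads plus an explicit read-only mode.
    from django.conf import settings
    from django.core.cache import cache

    from catalogue.models import Product


    def product_listing():
        # We explicitly accept data up to five minutes old on this page.
        return cache.get_or_set(
            "product-listing",
            lambda: list(Product.objects.values("id", "name", "price")),
            timeout=300,
        )


    def place_order(basket):
        if getattr(settings, "READ_ONLY_MODE", False):
            raise RuntimeError("Ordering is temporarily disabled")
        ...  # normal write path
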
  44. Design for future sharding
    Route everything through one model or set of functions

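The simplest version of "route everything through one set of functions": every read and write of user data goes through helpers like these, so adding real sharding later only means changing shard_for(). Names are invented.

    # accounts/shards.py -- the single choke point for picking a database per user.
    from accounts.models import User


    def shard_for(user_id):
        # Today everything lives in one database; later this can hash
        # user_id to one of several shard aliases in DATABASES.
        return "default"


    def get_user(user_id):
        return User.objects.using(shard_for(user_id)).get(pk=user_id)


    def create_user(user_id, **fields):
        return User.objects.using(shard_for(user_id)).create(pk=user_id, **fields)
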
  45. Expect long-polls/sockets to die
    Design for a full load every time, and treat the live connection as a happy optimisation

  46. Independent, full-stack teams
    From ops to frontend, per major service

  47. Architect as a part-time position
    You need some, but not in an ivory tower

  48. (image still from 2001: A Space Odyssey, © Warner Brothers)

  49. Maybe, just maybe, keep that monolith
    A well-maintained, well-separated monolith beats a badly distributed system

  50. Thanks.
    Andrew Godwin
    @andrewgodwin aeracode.org
