
An Asynchronous, Scalable Django with Twisted (PyCon TW 2016 Keynote)

Amber Brown (HawkOwl)

June 04, 2016

Transcript

  1. Binary release management across 3 distros; ported Autobahn|Python (Tx) and Crossbar.io to Python 3; Web API/REST integration in Crossbar.io.
  2. (Diagram) Example server: nginx in front of two gunicorn workers, with four threads each.
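     A minimal sketch of that example as a gunicorn config file (gunicorn settings files are plain Python; the bind address and the mysite project name are assumptions):

        # gunicorn.conf.py -- two worker processes, four threads each
        # launched as: gunicorn -c gunicorn.conf.py mysite.wsgi:application
        bind = "127.0.0.1:8000"   # nginx proxies to this address (assumed)
        worker_class = "gthread"  # threaded worker type
        workers = 2               # two gunicorn worker processes
        threads = 4               # four threads per worker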
  3. (Diagram) The same stack scaled out: HAProxy load-balances across Server 1, Server 2, and Server 3, each running nginx and two gunicorn workers with four threads each.
  4. Of N Python threads, only 1 may run Python code at a time, because of the Global Interpreter Lock.
  5. Of N Python threads, all N may run C code at once, since the Global Interpreter Lock is released around such calls.
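     A rough timing sketch of both points: the pure-Python loop holds the GIL, so two threads take turns, while hashlib's C implementation releases the GIL on large buffers, so two threads can overlap (sizes and iteration counts are arbitrary):

        import hashlib
        import threading
        import time

        DATA = b"x" * (64 * 1024 * 1024)  # 64 MB buffer for the C-level work

        def pure_python():
            # Python bytecode: the GIL is held for the whole loop
            total = 0
            for i in range(10_000_000):
                total += i

        def c_level():
            # hashlib's C code drops the GIL for large inputs
            hashlib.sha256(DATA).digest()

        def timed_with_two_threads(target):
            threads = [threading.Thread(target=target) for _ in range(2)]
            start = time.time()
            for t in threads:
                t.start()
            for t in threads:
                t.join()
            return time.time() - start

        print("pure Python, 2 threads:", timed_with_two_threads(pure_python))
        print("C hashing,   2 threads:", timed_with_two_threads(c_level))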
  6. Events can be: incoming data on the network, some computation finishing, a subprocess ending, etc.
  7. Selector functions take a list of file descriptors (e.g. sockets, open files) and tell you what is ready for reading or writing.
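     A bare-bones sketch of an event loop built on the standard library's select(), which is exactly such a selector function (address and buffer size are arbitrary):

        import select
        import socket

        server = socket.socket()
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("127.0.0.1", 8080))
        server.listen()
        server.setblocking(False)

        sockets = [server]
        while True:
            # block (up to 1s) until the OS says something is readable
            readable, _writable, _errored = select.select(sockets, [], [], 1.0)
            for sock in readable:
                if sock is server:
                    conn, _addr = server.accept()   # event: new connection
                    conn.setblocking(False)
                    sockets.append(conn)
                else:
                    data = sock.recv(4096)          # event: incoming data
                    if not data:                    # event: peer closed
                        sockets.remove(sock)
                        sock.close()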
  8. (Diagram) A web server pushes jobs onto a task queue; workers pull from it, one per CPU: Worker Server 1 (CPU1, CPU2) and Worker Server 2 (CPU1 through CPU4).
  9. (Diagram) An interface server (Server 1) feeds two channel queues (Server 2 and Server 3, sharded), which are drained by workers running on Server 4 and Server 5.
  10. Workers do not need to be on the web server... but you can put them there if you want!
  11. For small sites, the channel layer can simply be an inter-process communication (IPC) bus.
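     As a Channels 1.x-era sketch, that is roughly a settings entry like the one below; the asgi_ipc backend name, the ROUTING path, and the prefix option are assumptions based on that era's layout:

        # settings.py (sketch): a local IPC channel layer, no Redis required
        CHANNEL_LAYERS = {
            "default": {
                "BACKEND": "asgi_ipc.IPCChannelLayer",        # assumed backend name
                "ROUTING": "mysite.routing.channel_routing",  # assumed module path
                "CONFIG": {
                    "prefix": "mysite",  # namespace for the shared IPC resources (assumed)
                },
            },
        }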
  12. What is a request? An incoming HTTP request, a WebSocket connecting, or data arriving on a WebSocket.
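     In Channels 1.x terms, each of those arrives on its own named channel and is routed to a consumer; a sketch (the consumer module and function names are made up):

        # routing.py (sketch): one consumer per kind of "request"
        from channels.routing import route

        from myapp.consumers import handle_http, ws_connect, ws_receive  # hypothetical

        channel_routing = [
            route("http.request", handle_http),       # an incoming HTTP request
            route("websocket.connect", ws_connect),   # a WebSocket just connected
            route("websocket.receive", ws_receive),   # data arrived on a WebSocket
        ]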
  13. For example: add all open WebSocket connections to a group that is notified when your model is saved.
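     A sketch of that pattern with Channels 1.x Groups; the group name and the Article model are placeholders:

        # consumers.py / signals.py (sketch)
        from channels import Group
        from django.db.models.signals import post_save
        from django.dispatch import receiver

        from myapp.models import Article  # placeholder model


        def ws_connect(message):
            # every open WebSocket joins the "articles" group
            Group("articles").add(message.reply_channel)


        def ws_disconnect(message):
            Group("articles").discard(message.reply_channel)


        @receiver(post_save, sender=Article)
        def article_saved(sender, instance, **kwargs):
            # push a notification to every connection in the group
            Group("articles").send({"text": "Article %d was saved" % instance.pk})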
  14. (Diagram) An interface server (Server 1) and a channel queue (Server 2) feed workers on Server 3 (high performance) and Server 4 (standard); ordinary http.request work and heavy bigdata.process work go to different workers.
  15. Because you can create and listen for arbitrary channels, you can funnel certain kinds of work into different workers.
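     A sketch of the dispatch side: the HTTP consumer pushes the heavy job onto the bigdata.process channel, and a worker process restricted to that channel picks it up; field names follow the ASGI http.request message of that era:

        # consumers.py (sketch): funnel heavy work onto its own channel
        from channels import Channel


        def handle_http(message):
            # remember where the eventual HTTP response must go, then enqueue
            Channel("bigdata.process").send({
                "reply_channel": message.reply_channel.name,
                "path": message.content["path"],
            })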
  16. All our big data worker needs to do then is send the response on the reply channel!
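     And a sketch of the matching consumer on the big-data worker, answering straight back on the saved reply channel with an http.response-style message (status/content keys as in the ASGI spec of that era; the computation is a stand-in):

        # bigdata.py (sketch): the dedicated worker's consumer
        from channels import Channel


        def run_expensive_job(path):
            # stand-in for the actual heavy computation
            return "processed %s" % path


        def process_bigdata(message):
            result = run_expensive_job(message.content["path"])
            # reply directly; the interface server relays it to the client
            Channel(message.content["reply_channel"]).send({
                "status": 200,
                "content": result.encode("utf-8"),
            })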