Concurrency Basics for Elixir

Slides from an internal presentation at https://appunite.com


Maciej Kaszubowski

August 02, 2018

Transcript

  1. Concurrency Basics for Elixir-based Systems

  2. (image-only slide)
  3. So, what’s concurrency?

  4. Sequential Execution (3 functions, 1 thread)

  5. Sequential Execution (3 functions, 1 thread) vs Concurrent Execution (3 functions, 3 threads)

  6. Sequential Execution (3 functions, 1 thread) vs Concurrent Execution (3 functions, 3 threads), with preemptive scheduling
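
  A minimal Elixir sketch of the idea (not from the slides): three functions run one after the other vs as three BEAM processes.

      work = fn id ->
        Process.sleep(1_000)   # stands in for one second of work
        id
      end

      # Sequential: roughly 3 seconds in total
      Enum.map(1..3, work)

      # Concurrent: three processes, roughly 1 second in total
      1..3
      |> Enum.map(fn id -> Task.async(fn -> work.(id) end) end)
      |> Task.await_many()
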
  7. Where’s the benefit?

  8. (Diagram: Req1, Req2, Req3 handled sequentially; execution time vs waiting time)

  9. (Diagram: sequential vs concurrent handling of Req1, Req2, Req3; execution time vs waiting time)

  10. (Diagram: CPU-bound vs I/O-bound workloads)
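
  I/O-bound work gains the most from concurrency, because most of the wall-clock time is spent waiting. A sketch with Task.async_stream; the fetch function here is a stand-in for a real HTTP call:

      fetch = fn url ->
        Process.sleep(200)   # pretend network round trip
        {:ok, url}
      end

      ["a", "b", "c", "d", "e"]
      |> Task.async_stream(fetch, max_concurrency: 10, timeout: 5_000)
      |> Enum.to_list()
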
  11. Concurrent or Parallel: what’s the difference?

  12. Concurrent Execution (3 functions, 3 threads)

  13. Concurrent Execution (3 functions, 3 threads) vs Parallel Execution (3 functions, 3 threads, 2 cores), split across core 1 and core 2

  14. How many cores?
      root@kingschat-api-c8f8d6b76-4j65j:/app# nproc
      12
      root@tahmeel-api-prod-b5979bdc6-q5wz6:/# nproc
      1

  15. Concurrent Execution (3 functions, 3 threads) vs Parallel Execution (3 functions, 3 threads, 2 cores); one Erlang scheduler per core (by default)
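
  Easy to check from IEx (the numbers below echo the two pods from the nproc slide):

      System.schedulers_online()
      # => 12 on the 12-core pod, 1 on the single-core one

      :erlang.system_info(:logical_processors_available)
      # => what the VM sees from the OS (may be :unknown on some platforms)
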
  16. :observer_cli.start()
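
  observer_cli is a Hex package, so it needs to be added to the deps first; the version below is only an example:

      # mix.exs
      defp deps do
        [
          {:observer_cli, "~> 1.7"}
        ]
      end

      # then, in an IEx session on the node:
      :observer_cli.start()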

  17. (image-only slide)
  18. (Diagram: sequential vs concurrent vs parallel handling of Req1, Req2, Req3; execution time vs waiting time)

  19. Sequential execution

  20-25. (Animation: one Phoenix request performs Req 1, Req 2, Req 3 one after another, waiting for each Resp before starting the next)

  26. Concurrent execution

  27-31. (Animation: one Phoenix request spawns Task 1, Task 2, Task 3; Req 1, Req 2, Req 3 run concurrently and the Resps come back together)
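
  A rough sketch of what that can look like in a Phoenix controller; the module and the fetch_* functions are made up, not from the slides:

      defmodule MyAppWeb.DashboardController do
        use MyAppWeb, :controller

        def show(conn, _params) do
          tasks = [
            Task.async(fn -> fetch_profile() end),
            Task.async(fn -> fetch_friends() end),
            Task.async(fn -> fetch_notifications() end)
          ]

          [profile, friends, notifications] = Task.await_many(tasks, 5_000)

          json(conn, %{profile: profile, friends: friends, notifications: notifications})
        end

        # Stubs standing in for the three external requests
        defp fetch_profile, do: %{}
        defp fetch_friends, do: []
        defp fetch_notifications, do: []
      end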

  32-33. (Diagram: the app server sends R1, R2, R3 to a DB server with 3 cores and sends the response once they return; execution time vs waiting time)

  34. How much can we gain?

  35. Amdahl’s Law

  36. Amdahl’s Law

  37. Amdahl’s Law in a nutshell: the more synchronisation, the less benefit from multiple cores
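
  The formula: S(n) = 1 / ((1 - p) + p / n), where p is the fraction of the work that can run in parallel and n is the number of cores. A quick calculation (the numbers are only an illustration):

      speedup = fn p, n -> 1 / ((1 - p) + p / n) end

      speedup.(0.95, 12)   # ≈ 7.7 (95% parallel code still does not give 12x)
      speedup.(0.50, 12)   # ≈ 1.8 (heavy synchronisation caps the gain)
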
  38. (Same diagram: almost 100% parallel, almost no synchronisation)

  39. But…

  40-41. (Same diagram, annotated: “This is not constant” and “This is not infinite”)

  42-48. (Animation: a fourth request R4, and later R5, R6, R7, arrive while the DB server’s 3 cores are still busy with R1, R2, R3, so the new requests have to wait)

  49. Remember this? (The Phoenix request spawning Task 1, Task 2, Task 3 for Req 1, Req 2, Req 3)

  50. This isn’t exactly true

  51. (image-only slide)
  52. Connection pool (prevents overworking the DB)
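
  With Ecto the pool lives in the Repo configuration. A sketch; the app name, module, and numbers are placeholders:

      # config/runtime.exs
      import Config

      config :my_app, MyApp.Repo,
        pool_size: 10,        # at most 10 concurrent DB connections
        queue_target: 50,     # DBConnection queueing thresholds, in ms
        queue_interval: 1_000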

  53-56. Pool Manager (blocks until a free worker is available; shown over several image slides)
  57. It gets worse

  58. Pool Manager: the mailbox has to be synchronised

  59. Pool Manager: message passing is just copying data in shared memory
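
  A minimal illustration: the tuple sent below is copied into the receiving process’s heap (large binaries are the exception, they are reference-counted and shared):

      pid =
        spawn(fn ->
          receive do
            {:work, payload} -> IO.inspect(map_size(payload), label: "keys received")
          end
        end)

      send(pid, {:work, %{a: 1, b: 2}})
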
  60. Pool Manager: remember semaphores?

  61. Logger, Metrics, Sentry

  62-65. (Animation: the network stack, then Sentry and Metrics, added to the picture)

  66. Other synchronisation points: OS threads (garbage collection), the data bus, virtual machines, memory characteristics (e.g. processor caches), …

  67. That’s hard

  68. That’s REALLY hard

  69. That’s REALLY hard. Seriously, people spend their entire careers on this

  70. So, what to do?

  71. Measure

  72. Measure Measure

  73. Measure Measure Measure

  74. Measure ON PRODUCTION

  75. Measure ON PRODUCTION. You WILL get false results on staging/locally

  76. Measure the entire system. You WILL get false results for single functions

  77. Measure ONLY IF YOU HAVE TRAFFIC
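
  One concrete way to measure inside a running system is telemetry. For example, Ecto repos emit query events; the event prefix below assumes a repo called MyApp.Repo, and the 100 ms threshold is arbitrary:

      :telemetry.attach(
        "log-slow-queries",
        [:my_app, :repo, :query],
        fn _event, measurements, metadata, _config ->
          ms = System.convert_time_unit(measurements.total_time, :native, :millisecond)
          if ms > 100, do: IO.inspect({metadata.source, ms}, label: "slow query")
        end,
        nil
      )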

  78. “premature optimization is the root of all evil”

  79. If something takes X ms, it will always take X ms.

  80. Async execution cannot “remove” this time. It can only hide it

  81. BACK PRESSURE

  82-101. (Animation: a Producer pushes work to two Consumers; when a Consumer gets overloaded it signals “Stop”, and later asks “OK, give me more”)

  102. (image-only slide)
  103. Back pressure
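
  In Elixir this demand-driven “give me more” flow is what GenStage (a separate Hex package) models. A minimal sketch; the module names and the fake work are made up:

      defmodule Producer do
        use GenStage

        def start_link(_), do: GenStage.start_link(__MODULE__, 0, name: __MODULE__)

        def init(counter), do: {:producer, counter}

        # Events are only produced when a consumer asks for them (demand = back pressure)
        def handle_demand(demand, counter) do
          events = Enum.to_list(counter..(counter + demand - 1))
          {:noreply, events, counter + demand}
        end
      end

      defmodule Consumer do
        use GenStage

        def start_link(_), do: GenStage.start_link(__MODULE__, :ok)

        def init(:ok), do: {:consumer, :ok, subscribe_to: [{Producer, max_demand: 10}]}

        def handle_events(events, _from, state) do
          Process.sleep(100)   # simulate slow work; demand slows down with it
          IO.inspect(events, label: "handled")
          {:noreply, [], state}
        end
      end

  Both stages would normally be started under a supervisor; max_demand caps how many events can be in flight at the consumer.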

  104. Thanks!