
.NET Day 2024: Five common mistakes with distributed systems

More info at: https://dotnetday.ch

dotnetday

September 11, 2024

Transcript

  1–17. Event | Latency | Scaled

    1 CPU cycle (3.7 GHz)                  | 0.3 ns    | 1 s
    Level 1 cache access                   | 1 ns      | 3 s
    Level 2 cache access                   | 3 ns      | 9 s
    Level 3 cache access                   | 12 ns     | 36 s
    Main memory access (SDRAM)             | 120 ns    | 6 min
    SSD I/O                                | 50–150 µs | 2–6 days
    HDD I/O                                | 1–10 ms   | 1–12 months
    Internet: San Francisco to New York    | 40 ms     | 4 years
    Internet: San Francisco to London      | 81 ms     | 8 years
    Internet: San Francisco to Sydney      | 183 ms    | 20 years
    TCP packet retransmit                  | 1–3 s     | 100–300 years
    OS virtualisation system reboot        | 4 s       | 400 years
    SCSI command time-out                  | 30 s      | 3 millennia
    Hardware virtualisation system reboot  | 40 s      | 4 millennia
    Physical system reboot                 | 2 min     | 12 millennia
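
The "Scaled" column stretches every latency by the same factor: one 0.3 ns CPU cycle is read as one second, roughly a 3.3 × 10⁹ multiplier, which is what turns a 183 ms round trip to Sydney into two decades. A minimal sketch of that arithmetic (the event list comes from the table above; the Humanise formatter is illustrative and only approximates the table's rounding):

```csharp
using System;
using System.Collections.Generic;

class ScaledLatency
{
    // One 3.7 GHz CPU cycle (~0.3 ns) is read as one second, so every
    // latency in the table is multiplied by the same scale factor.
    const double CpuCycleSeconds = 0.3e-9;
    const double Scale = 1.0 / CpuCycleSeconds; // ~3.3e9

    static void Main()
    {
        var events = new List<(string Name, double Seconds)>
        {
            ("Level 1 cache access",              1e-9),
            ("Main memory access (SDRAM)",        120e-9),
            ("SSD I/O (100 µs)",                  100e-6),
            ("Internet: San Francisco to Sydney", 183e-3),
            ("TCP packet retransmit (2 s)",       2.0),
        };

        foreach (var (name, seconds) in events)
        {
            Console.WriteLine($"{name,-38} {Humanise(seconds * Scale)}");
        }
    }

    // Rough human-scale formatting, just enough to reproduce the feel of the table.
    static string Humanise(double s) =>
        s < 60         ? $"{s:F0} s"
      : s < 3_600      ? $"{s / 60:F0} min"
      : s < 86_400     ? $"{s / 3_600:F0} h"
      : s < 31_557_600 ? $"{s / 86_400:F0} days"
      :                  $"{s / 31_557_600:F0} years";
}
```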
  18. The first rule of distributed systems is don’t distribute your

    system. …until you have an observable reason to do so.
  19. RabbitMQ Delayed messages (ordering) Queue TTL 120 s

    Queue TTL 60 s Queue TTL 30 s Queue TTL 15 s
  20. RabbitMQ Delayed messages Queue TTL 32 s Queue TTL 16 s

    Queue TTL 8 s Queue TTL 4 s Queue TTL 2 s Queue TTL 1 s
  21–25. RabbitMQ Delayed messages Queue TTL 32 s Queue TTL 16 s

    Queue TTL 8 s Queue TTL 4 s Queue TTL 2 s Queue TTL 1 s 1 0 1 0 1 0
  26. RabbitMQ Delayed messages Queue TTL 32 s Queue TTL 16 s

    Queue TTL 8 s Queue TTL 4 s Queue TTL 2 s Queue TTL 1 s 1 0 1 0 1 0 = 42₁₀ seconds
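
Assuming these slides illustrate the usual chained-TTL trick for delaying messages without a delay plugin: the requested delay is decomposed into powers of two, and the message passes through exactly the queues whose TTLs match the set bits, so a 42-second delay (101010 in binary) goes through the 32 s, 8 s and 2 s queues and skips the rest. A minimal sketch of that routing decision; the RouteDelay helper is illustrative, not a RabbitMQ API:

```csharp
using System;
using System.Collections.Generic;

class DelayRouting
{
    // The TTL "bit" queues from the slides, largest first.
    static readonly int[] QueueTtls = { 32, 16, 8, 4, 2, 1 };

    // Decompose a requested delay into the TTL queues the message must
    // pass through: one queue per set bit of the delay.
    static IEnumerable<int> RouteDelay(int delaySeconds)
    {
        foreach (var ttl in QueueTtls)
        {
            if (delaySeconds >= ttl)
            {
                delaySeconds -= ttl;
                yield return ttl;
            }
        }
    }

    static void Main()
    {
        // 42 = 101010 in binary -> 32 s + 8 s + 2 s
        Console.WriteLine(string.Join(" -> ", RouteDelay(42))); // prints "32 -> 8 -> 2"
    }
}
```

With these six queues any whole-second delay up to 63 s can be composed; longer or finer-grained delays need additional bit queues.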
  27. 1. Distributing everything 2. Re-writing everything 3. Building a service

    bus Five common mistakes with distributed systems
  28. Customer service ID First name Last name Status Product service

    ID Name Description Price ? Customer first name Customer last name
  29. Customer service ID First name Last name Status Product service

    ID Name Description Price ? Customer first name Customer last name ? Customer status Product price
  30–31. Customer service ID First name Last name Status Product service

    ID Name Description Price ? Customer first name Customer last name ? Customer status Product price ? Product name Product description
  32. Customer service ID First name Last name Status Product service

    ID Name Description Price ? Customer first name Customer last name ? Customer status Product price ? Product name Product description Customer ID
  33–34. Customer service ID First name Last name Status Product service

    ID Name Description Price ? Customer first name Customer last name ? Customer status Product price ? Product name Product description Customer ID Customer ID
  35. Customer service ID First name Last name Status Product service

    ID Name Description Price ? Customer first name Customer last name ? Customer status Product price ? Product name Product description Customer ID Customer ID Product ID
  36–37. Customer service ID First name Last name Status Product service

    ID Name Description Price ? Customer first name Customer last name ? Customer status Product price ? Product name Product description Customer ID Customer ID Product ID Product ID
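
The build on these slides ends with each service owning its own attributes and the other side holding nothing but an identifier. A minimal sketch of what that looks like in message contracts, using an order-placement example for illustration (the record and property names are hypothetical, not from the slides):

```csharp
using System;

// Each service owns its own attributes; messages that cross a service
// boundary carry only identifiers. (Hypothetical contracts, for illustration.)

// Owned by the Customer service.
public record CustomerRegistered(Guid CustomerId, string FirstName, string LastName, string Status);

// Owned by the Product service.
public record ProductAdded(Guid ProductId, string Name, string Description, decimal Price);

// Crosses the boundary: references both sides by ID only, duplicating
// neither the customer's nor the product's attributes.
public record OrderPlaced(Guid OrderId, Guid CustomerId, Guid ProductId);

public static class Demo
{
    public static void Main()
    {
        var placed = new OrderPlaced(Guid.NewGuid(), Guid.NewGuid(), Guid.NewGuid());
        Console.WriteLine(placed); // record ToString prints just the three IDs
    }
}
```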
  38. 1. Distributing everything 2. Re-writing everything 3. Building a service

    bus 4. Defining services as nouns Five common mistakes with distributed systems
  39–40. 1. Distributing everything 2. Re-writing everything 3. Building a service

    bus 4. Defining services as nouns 5. Striving for consistency Five common mistakes with distributed systems
  41–42. 1. Distributing everything 2. Re-writing everything 3. Building a service

    bus 4. Defining services as nouns 5. Striving for consistency Five common mistakes with distributed systems 6. (Bonus!) Conflating service boundaries (logical) with deployment boundaries (physical) Six!
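
On the bonus mistake: if logical service boundaries are kept separate from physical deployment boundaries, two logically distinct services can start life hosted in one process and be split into separate deployables later without changing their contracts. A minimal sketch using the generic .NET host, with illustrative endpoint names (an assumption about how one might host this, not the speaker's setup):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

// Two logical services (names are illustrative) hosted in one physical
// process. The service boundary lives in the code and its contracts; the
// deployment boundary is a separate decision, so either endpoint can later
// move to its own process or host without its callers noticing.
var builder = Host.CreateApplicationBuilder(args);
builder.Services.AddHostedService<CustomerEndpoint>();
builder.Services.AddHostedService<ProductEndpoint>();
await builder.Build().RunAsync();

sealed class CustomerEndpoint : BackgroundService
{
    protected override Task ExecuteAsync(CancellationToken stoppingToken)
    {
        Console.WriteLine("Customer logical service running in-process.");
        return Task.CompletedTask;
    }
}

sealed class ProductEndpoint : BackgroundService
{
    protected override Task ExecuteAsync(CancellationToken stoppingToken)
    {
        Console.WriteLine("Product logical service running in-process.");
        return Task.CompletedTask;
    }
}
```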