Slide 1

Slide 1 text

A Commerce-Centric take on High Throughput Fair Queueing
Logan Martel | @martelogan

Slide 2

Slide 2 text

👈 Me
• works on scaling Checkout @ Shopify
• advocating stateful throttles today
• shipped a scalable stateful throttle with: Scott Francis, Bassam Mansoob, Jay Lim, Osama Sidat, Jonathan Dupuis
🛒 Docs as legal-lang

Slide 3

Slide 3 text

Simple Example Queue

Slide 4

Slide 4 text

The Plan (Roughly)
01 “Flash Sale” Thundering Herds
02 Prior Work & Drawbacks
03 “Stateful Throttle” Solutions
04 Test in prod!

Slide 5

Slide 5 text

Flash Scale

Slide 6

Slide 6 text

No content

Slide 7

Slide 7 text

Shopify handles some of the largest flash sales in the world

Slide 8

Slide 8 text

32M Requests per minute (peak)
11TB MySQL read I/O per second
24B Background jobs performed
42B API calls made to partner apps

Slide 9

Slide 9 text

Shopify (Core) Tech Stack

Slide 10

Slide 10 text

Shopify (Core) Tech Stack

Slide 11

Slide 11 text

Shopify (Core) Tech Stack

Slide 12

Slide 12 text

Browsing Storefront

Slide 13

Slide 13 text

Add to Cart

Slide 14

Slide 14 text

Writes during Checkout

Slide 15

Slide 15 text

Payment Finalization

Slide 16

Slide 16 text

Order Confirmation

Slide 17

Slide 17 text

Shopify’s “Thundering Herd” → Write-heavy Bursts up to 5x our baseline traffic

Slide 18

Slide 18 text

No content

Slide 19

Slide 19 text

Need Backpressure! What are some of our options?

Slide 20

Slide 20 text

No content

Slide 21

Slide 21 text

No content

Slide 22

Slide 22 text

No content

Slide 23

Slide 23 text

Why not simply queue users in order (FIFO)?
• blocking λ dequeues
• “stateful” memory requirement
• nevertheless, we’ll circle back to this idea

Slide 24

Slide 24 text

No content

Slide 25

Slide 25 text

No content

Slide 26

Slide 26 text

• “leaky bucket as queue” → stateful FIFO equivalent → buffered in-order requests
• “leaky bucket as meter” → stateless throttle → requests either dropped (z > β) or forwarded (z ≤ β)

Slide 27

Slide 27 text

• token buckets are equivalent mirror images of the “leaky bucket as meter”
• both statelessly throttle at rate ρ → support “bursty traffic” up to burst size z ≤ β
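To make the mirror image concrete, here is a minimal token-bucket sketch in plain Lua; the names and timestamps are illustrative, not the talk's code:

```lua
-- Minimal token-bucket sketch (illustrative names, not the talk's code).
-- Tokens refill at rate rho per second; the bucket holds at most beta
-- tokens, so bursts of up to beta requests pass before throttling starts.
local TokenBucket = {}
TokenBucket.__index = TokenBucket

function TokenBucket.new(rho, beta, now)
  return setmetatable({ rho = rho, beta = beta, tokens = beta, last = now }, TokenBucket)
end

-- Returns true (forward the request) if a token is available at time `now`,
-- false (drop it) otherwise.
function TokenBucket:allow(now)
  -- Refill proportionally to elapsed time, capped at burst size beta.
  self.tokens = math.min(self.beta, self.tokens + (now - self.last) * self.rho)
  self.last = now
  if self.tokens >= 1 then
    self.tokens = self.tokens - 1
    return true
  end
  return false
end

-- Usage: rho = 5 req/s with bursts up to beta = 10 (timestamps passed explicitly).
local bucket = TokenBucket.new(5, 10, 0)
print(bucket:allow(0.0)) --> true (burst capacity available)
print(bucket:allow(0.1)) --> true
```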

Slide 28

Slide 28 text

Stateless Throttles

Slide 29

Slide 29 text

Common Throttle Challenges
● Capacity problem → limiting service rate to sustainable throughput
● Starvation problem → ensuring prompt service for all buyers → (fast sellout)
● Fairness problem → limiting deviations from FIFO service order (e.g. don’t incentivize a “race to poll”!)

Slide 30

Slide 30 text

Compromises? Let’s consider some semi-stateful windowed approaches

Slide 31

Slide 31 text

Fixed Window

Slide 32

Slide 32 text

Fixed Window

Slide 33

Slide 33 text

Fixed Window

Slide 34

Slide 34 text

Fixed Window

Slide 35

Slide 35 text

Fixed Window (Redis Transaction)
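The transaction itself isn't reproduced in this transcript. A plausible sketch of the idea, using lua-resty-redis from OpenResty with an illustrative key scheme and limit:

```lua
-- Sketch of a fixed-window check via a Redis MULTI/EXEC transaction
-- (lua-resty-redis from OpenResty; key name and limit are illustrative).
local redis = require "resty.redis"

local red = redis:new()
red:set_timeout(1000) -- 1s network timeout
assert(red:connect("127.0.0.1", 6379))

local window = 1 -- window length in seconds
local key = "throttle:" .. math.floor(ngx.now() / window)

-- INCR and EXPIRE run atomically inside the transaction, so the counter
-- for each window always carries a TTL and cleans itself up.
red:multi()
red:incr(key)
red:expire(key, window)
local res = assert(red:exec())

local count = res[1]
if count > 100 then -- illustrative per-window limit
  return ngx.exit(429) -- Too Many Requests
end
```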

Slide 36

Slide 36 text

Fixed Window (Redis via Lua)
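Likewise, the slide's script isn't captured here; a minimal server-side Lua version of the same fixed-window counter might look like this (key layout and return convention are assumptions):

```lua
-- Fixed-window counter as a Redis-side Lua script (assumed layout).
-- KEYS[1] = counter key for the current window, e.g. "throttle:<window_start>"
-- ARGV[1] = per-window limit, ARGV[2] = window length in seconds
local count = redis.call("INCR", KEYS[1])
if count == 1 then
  -- First request of this window: start the window's expiry clock.
  redis.call("EXPIRE", KEYS[1], tonumber(ARGV[2]))
end
-- 1 = allow the request, 0 = throttle it
if count <= tonumber(ARGV[1]) then
  return 1
end
return 0
```

Running the whole read-modify-write inside Redis keeps the check atomic in a single round trip.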

Slide 37

Slide 37 text

Problem: Boundary Bursts

Slide 38

Slide 38 text

Problem: Boundary Bursts → with a per-window limit of N, N requests at the end of one window plus N at the start of the next admit up to 2N in a brief span

Slide 39

Slide 39 text

Adjust dynamically → Sliding Window

Slide 40

Slide 40 text

Adjust dynamically → Sliding Window

Slide 41

Slide 41 text

Adjust dynamically → Sliding Window

Slide 42

Slide 42 text

Adjust dynamically → Sliding Window

Slide 43

Slide 43 text

Adjust dynamically → Sliding Window
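One common realization of a sliding window, assumed here since the talk's exact variant isn't shown, weights the previous fixed window's count by how much of it still overlaps the sliding window:

```lua
-- Sliding-window counter sketch as a Redis Lua script (assumed variant).
-- KEYS[1] = previous fixed-window counter, KEYS[2] = current fixed-window counter
-- ARGV[1] = overlap fraction of the previous window (0..1), ARGV[2] = limit
local prev = tonumber(redis.call("GET", KEYS[1]) or "0")
local curr = tonumber(redis.call("GET", KEYS[2]) or "0")
-- Estimate arrivals inside the sliding window from the two fixed counters.
local estimate = prev * tonumber(ARGV[1]) + curr
if estimate < tonumber(ARGV[2]) then
  redis.call("INCR", KEYS[2])
  -- Keep each counter long enough to serve as "previous" next window
  -- (assumes 1-second windows).
  redis.call("EXPIRE", KEYS[2], 2)
  return 1
end
return 0
```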

Slide 44

Slide 44 text

Variations on Windows
● Sliding Window Log → track arrivals in-memory; pop outdated entries (see the sketch below)
● Generic Cell Rate (GCRA) → metered leaky bucket with predicted arrivals
● Concurrency & congestion controls → counting semaphores & TCP-style adaptive window sizes
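As a sketch of the first variation, a sliding-window log can live in a Redis sorted set keyed per client; everything below (key names, arguments) is illustrative:

```lua
-- Sliding-window log sketch as a Redis Lua script, using a sorted set
-- of arrival timestamps (an illustration of the idea, not the talk's code).
-- KEYS[1] = per-client sorted set of arrivals
-- ARGV[1] = now (seconds), ARGV[2] = window length, ARGV[3] = limit,
-- ARGV[4] = unique request id supplied by the caller
local now, window, limit = tonumber(ARGV[1]), tonumber(ARGV[2]), tonumber(ARGV[3])
-- Pop entries that have aged out of the window...
redis.call("ZREMRANGEBYSCORE", KEYS[1], 0, now - window)
-- ...then count what's left to decide whether this arrival fits.
if redis.call("ZCARD", KEYS[1]) < limit then
  redis.call("ZADD", KEYS[1], now, ARGV[4])
  redis.call("EXPIRE", KEYS[1], math.ceil(window))
  return 1 -- allow
end
return 0 -- throttle
```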

Slide 45

Slide 45 text

No content

Slide 46

Slide 46 text

“Standard” Window Approaches Retro
● Police Capacity → (limit concurrent buyers) ✅
● Don’t Starve Throughput → (fast sellout) ✅
● Promote Fairness → avoid a “race to poll” ❌

Slide 47

Slide 47 text

Let’s try going a step beyond *just* throttling.

Slide 48

Slide 48 text

Our North Star
Slides to follow along

Slide 49

Slide 49 text

The Journey
1. Stateless V1
2. Stateful V2
3. Rollout
Slides to follow along

Slide 50

Slide 50 text

Step 1: Stateless V1
Our Edge-tier Legacy Throttle

Slide 51

Slide 51 text

No content

Slide 52

Slide 52 text

OpenResty Lua Module: Enables scripting NGINX load balancers to manipulate request & response traffic.

Slide 53

Slide 53 text

OpenResty Hello World Example (with custom headers)
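The example code isn't in this transcript; a minimal OpenResty hello world with a custom header could look like the following nginx.conf fragment (location and header name are made up):

```nginx
# Illustrative nginx.conf fragment for OpenResty
location /hello {
    content_by_lua_block {
        -- set a custom response header before emitting the body
        ngx.header["X-Hello"] = "world"
        ngx.say("Hello, OpenResty!")
    }
}
```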

Slide 54

Slide 54 text

Legacy Throttle Architecture

Slide 55

Slide 55 text

Legacy Throttle Architecture

Slide 56

Slide 56 text

Legacy Throttle Architecture

Slide 57

Slide 57 text

Legacy Throttle Architecture

Slide 58

Slide 58 text

Servicing polls first-in-first-out is unfair: new users could simply poll first to “jump the line”.

Slide 59

Slide 59 text

Let’s issue (signed) tickets to each user for the timestamp when they arrived.
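A sketch of what issuing and verifying such a ticket could look like at the edge, using OpenResty's ngx.hmac_sha1; the ticket format and secret handling here are assumptions, not the production scheme:

```lua
-- Signed arrival-ticket sketch for OpenResty (assumed scheme).
local secret = "dev-secret" -- in production, loaded from secure config

local function issue_ticket(arrival_ts)
  local payload = tostring(arrival_ts)
  -- ngx.hmac_sha1 returns a binary digest; base64-encode it for transport.
  local sig = ngx.encode_base64(ngx.hmac_sha1(secret, payload))
  return payload .. "." .. sig
end

local function verify_ticket(ticket)
  local payload, sig = ticket:match("^(%d+)%.(.+)$")
  if not payload then return nil end
  local expected = ngx.encode_base64(ngx.hmac_sha1(secret, payload))
  -- Without the secret, clients cannot forge an earlier arrival time.
  -- (A constant-time comparison would be preferable in production.)
  if sig == expected then return tonumber(payload) end
  return nil
end

-- Usage: issue on first arrival, verify on every subsequent poll.
local ticket = issue_ticket(ngx.time())
ngx.say(verify_ticket(ticket)) -- prints the original arrival timestamp
```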

Slide 60

Slide 60 text

No content

Slide 61

Slide 61 text

No content

Slide 62

Slide 62 text

No content

Slide 63

Slide 63 text

Control Theory Idea: Adjust our “accepted traffic window” on-the-fly (à la TCP)
Seeking stable fair throughput just as thermostat “PID controllers”¹ seek stable temperatures
¹ Proportional-Integral-Derivative controllers
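For intuition, the proportional term of such a controller fits in a few lines; the gains and targets below are invented for illustration:

```lua
-- Toy proportional controller for an "accepted traffic window"
-- (illustrative gains/targets; the talk's controller isn't specified here).
local target_util = 0.8 -- desired backend utilization
local kp = 0.5          -- proportional gain
local window = 1000     -- requests admitted per control tick

local function adjust(observed_util)
  -- Positive error (under target) widens the window; negative shrinks it.
  local err = target_util - observed_util
  window = math.max(1, math.floor(window * (1 + kp * err)))
  return window
end

print(adjust(0.95)) --> 925  (over target: window shrinks)
print(adjust(0.60)) --> 1017 (under target: window grows back)
```

A full PID controller would add integral and derivative terms; as the next slides note, even this tuning proved hard to stabilize in practice.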

Slide 64

Slide 64 text

Adaptive “lag” slider

Slide 65

Slide 65 text

Adaptive “lag” slider

Slide 66

Slide 66 text

Adaptive “lag” slider

Slide 67

Slide 67 text

Legacy Throttle “Adaptive Lag” Retro
• worked well at prioritizing “very lagged” user poll traffic
• difficult to stabilize → led to frequent window fluctuations
• never “quite” stateless → led to inconsistent behaviour across load balancers → complicated scaling across regional clusters

Slide 68

Slide 68 text

In the legacy throttle, users could also be queued for >30 mins only to discover that their cart's inventory had already gone out of stock.

Slide 69

Slide 69 text

Step 2: Stateful V2
Our Application-tier Fair Waiting Room

Slide 70

Slide 70 text

No content

Slide 71

Slide 71 text

Let’s issue (signed) tickets to each user for the timestamp when they arrived.

Slide 72

Slide 72 text

Shopify’s “Thundering Herd” → Write-heavy Bursts up to 5x our baseline traffic

Slide 73

Slide 73 text

No content

Slide 74

Slide 74 text

No content

Slide 75

Slide 75 text

No content

Slide 76

Slide 76 text

Consider a distribution of user arrival times

Slide 77

Slide 77 text

Arrival tickets land in different buckets

Slide 78

Slide 78 text

There’s more than one queue in this image

Slide 79

Slide 79 text

Queue bins

Slide 80

Slide 80 text

Intra-bin Queues (figure): x-axis = arrival second (integer-valued: x = 1s, 2s, 3s, …); y-axis = % into the one-second bin (decimal-valued: e.g. y = 10%, 25.2%, 33.33% into 1s)

Slide 81

Slide 81 text

Idea: Limit unfairness between Queue Bins

Slide 82

Slide 82 text

Tolerate unfairness within bins (figure: same axes as above → x-axis = arrival second, y-axis = % into the one-second bin)
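In code, the bin split is just a floor and a remainder; a sketch with hypothetical names:

```lua
-- Sketch: split an arrival timestamp into a one-second queue bin plus
-- an intra-bin offset (hypothetical names, mirroring the figure's axes).
local function bin_for(arrival_ts)
  local bin = math.floor(arrival_ts) -- integer bin: the arrival second
  local offset = arrival_ts - bin    -- decimal fraction into that bin
  return bin, offset
end

local bin, offset = bin_for(1700000001.252)
print(bin, offset) --> 1700000001  ~0.252 (i.e. 25.2% into its bin)
-- Fairness is enforced between bins; ordering within a bin is allowed
-- to stay unfair, so per-user offsets never need server-side state.
```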

Slide 83

Slide 83 text

No content

Slide 84

Slide 84 text

No content

Slide 85

Slide 85 text

No content

Slide 86

Slide 86 text

Queue Library (Ruby Gem) Interface

Slide 87

Slide 87 text

Queue Library (Ruby Gem) Interface

Slide 88

Slide 88 text

Simple Mock Service to test Queue gem

Slide 89

Slide 89 text

Bin Scheduling
• latest_bin - bin # currently assigned to arriving users
• client_bin - bin # assigned to a particular user (signed & encoded)
• working_bin - max eligible bin to accept poll traffic from clients
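The admission decision then reduces to comparing a ticket's client_bin against working_bin. An assumed Redis-side sketch using the slide's terminology:

```lua
-- Assumed Redis-side poll check using the slide's bin terminology.
-- KEYS[1] = key holding working_bin, e.g. "queue:working_bin"
-- ARGV[1] = client_bin decoded from the user's signed ticket
local working_bin = tonumber(redis.call("GET", KEYS[1]) or "-1")
if tonumber(ARGV[1]) <= working_bin then
  return 1 -- client's bin is now eligible: admit to checkout
end
return 0 -- not yet eligible: advise the client when to poll again
```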

Slide 90

Slide 90 text

Lua Routine Stored in Redis
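One way such a routine is stored and invoked, sketched with lua-resty-redis and assumed key names (the mechanism here, SCRIPT LOAD plus EVALSHA, is standard Redis rather than anything confirmed by the slides):

```lua
-- Sketch: cache a routine in Redis once, then invoke it by SHA1
-- (lua-resty-redis from OpenResty; key name and bin value illustrative).
local redis = require "resty.redis"
local red = redis:new()
red:set_timeout(1000)
assert(red:connect("127.0.0.1", 6379))

-- The stored routine: the working-bin poll check sketched above.
local script = [[
  local working_bin = tonumber(redis.call("GET", KEYS[1]) or "-1")
  if tonumber(ARGV[1]) <= working_bin then return 1 end
  return 0
]]

-- SCRIPT LOAD caches the body server-side and returns its SHA1;
-- EVALSHA then runs it without resending the script on every poll.
local sha = assert(red:script("LOAD", script))
local admitted = assert(red:evalsha(sha, 1, "queue:working_bin", 42))
ngx.say(admitted) -- 1 if bin 42 is eligible, 0 otherwise
```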

Slide 91

Slide 91 text

When should we ask clients to poll?

Slide 92

Slide 92 text

Inventory Awareness - highly cacheable reads, powered by the application tier and enriched by a stateful React + GraphQL client.

Slide 93

Slide 93 text

State as an enabler for scalability
● Multi-layered caching - most requests don’t even reach Redis
● Adaptive working_bin - increments can react to signals such as:
  ○ compliance - do clients poll at advised poll times (not too early or late)?
  ○ system health - do we have capacity to allow more traffic?
● Sellout as backoff signal - traffic backoff after sellout → shorter queue times!
● Horizontal Scaling - if needed, could shard bins over multiple Redis instances

Slide 94

Slide 94 text

Step 3: Rollout!
Simulation-driven Migration

Slide 95

Slide 95 text

No content

Slide 96

Slide 96 text

No content

Slide 97

Slide 97 text

Middleware Experiment in Prod

Slide 98

Slide 98 text

Middleware Experiment in Prod

Slide 99

Slide 99 text

No content

Slide 100

Slide 100 text

Similar Concept in Amusement Parks
See Defunctland’s “FastPass: A Complicated History” on YouTube

Slide 101

Slide 101 text

Simulates Diverse Polling Behavior

Slide 102

Slide 102 text

Simulates Diverse Polling Behavior

Slide 103

Slide 103 text

Simulates Diverse Polling Behavior

Slide 104

Slide 104 text

Simulates Diverse Polling Behavior

Slide 105

Slide 105 text

Redis Queue Simulator: goqueuesim

Slide 106

Slide 106 text

Redis Queue Simulator: goqueuesim

Slide 107

Slide 107 text

Example Metrics

Slide 108

Slide 108 text

Example Metrics

Slide 109

Slide 109 text

Example Metrics

Slide 110

Slide 110 text

Mock Services

Slide 111

Slide 111 text

Genghis: Our Load Testing Tool
Talk on Genghis

Slide 112

Slide 112 text

Simple Mock Service to test Queue gem

Slide 113

Slide 113 text

Mock API for Test Shops in Production

Slide 114

Slide 114 text

Experiment Results: Success!

Slide 115

Slide 115 text

Some Takeaways
01 “Race to poll” drawback in rate limiters
02 Benefits of queue state to fairness & UX
03 Horizontal & adaptive scaling options
04 Simulation-driven migrations!
Thoughts? Chat with me sometime @martelogan!