Slide 1

Reconciling Accuracy, Cost, and Latency of Inference Serving Systems
Pooyan Jamshidi
https://pooyanjamshidi.github.io/
University of South Carolina

Slide 2

No content

Slide 3

Problem: Multi-Objective Optimization with Known Constraints under Uncertainty
Solutions (under different assumptions):
- InfAdapter [2023]: Autoscaling for ML Inference
- IPA [2024]: Autoscaling for ML Inference Pipeline
- Sponge [2024]: Autoscaling for ML Inference Pipeline with Dynamic SLO

Slide 4

Thank you, Saeid Ghafouri!

Slide 5

- InfAdapter [2023]: Autoscaling for ML Model Inference
- IPA [2024]: Autoscaling for ML Inference Pipeline
- Sponge [2024]: Autoscaling for ML Inference Pipeline with Dynamic SLO

Slide 6

“More than 90% of data center compute for ML workloads is used by inference services.”

Slide 7

ML inference services have strict requirements: Highly Responsive!

Slide 8

ML inference services have strict requirements: Highly Responsive! Cost-Efficient!

Slide 9

ML inference services have strict requirements: Highly Accurate! Highly Responsive! Cost-Efficient!

Slide 10

ML inference services have strict & conflicting requirements: Highly Accurate! Highly Responsive! Cost-Efficient!

Slide 11

Another challenge: Dynamic workload

Slide 12

Resource allocation

Slide 13

Resource allocation

Slide 14

Resource allocation

Slide 15

Resource allocation

Slide 16

Resource allocation

Slide 17

Resource allocation: Over-Provisioning vs. Under-Provisioning

Slide 18

In ML pipelines, we can now adapt the quality of service, too! (ResNet18: Tiger; ResNet152: Dog)

Slide 19

Quality adaptation

Slide 20

First insight: The same throughput can be achieved with different computing resources by switching model variants.
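To make this insight concrete, here is a small Python sketch with hypothetical per-variant profiles (requests per second one CPU core sustains, plus approximate ImageNet top-1 accuracy); the numbers and helper names are illustrative assumptions, not measurements from this work.

    import math

    # Hypothetical profile: requests/second a single CPU core sustains for each variant.
    THROUGHPUT_PER_CORE = {"resnet18": 40.0, "resnet50": 15.0, "resnet152": 5.0}
    ACCURACY = {"resnet18": 0.698, "resnet50": 0.761, "resnet152": 0.783}  # approx. ImageNet top-1

    def cores_needed(variant: str, target_rps: float) -> int:
        """Smallest core count that sustains target_rps under the hypothetical profile."""
        return math.ceil(target_rps / THROUGHPUT_PER_CORE[variant])

    # The same 120 req/s can be served by any variant, at different cost/accuracy points.
    for variant in THROUGHPUT_PER_CORE:
        print(variant, cores_needed(variant, 120), "cores, top-1 ~", ACCURACY[variant])

Under these assumed profiles, ResNet18 needs 3 cores, ResNet50 needs 8, and ResNet152 needs 24 to serve the same load, which is exactly the cost/accuracy lever the following slides exploit.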

Slide 21

Multi-model (our solution, InfAdapter) vs. single-model (Model-Switching): higher average accuracy by using multiple model variants.

Slide 22


Slide 23

InfAdapter: Implementation details
Select a subset of model variants, each with its own resource size, so that latency requirements are met for the predicted workload while maximizing accuracy and minimizing resource cost.
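A minimal brute-force sketch of this selection step, assuming hypothetical per-variant profiles (accuracy, cores per replica, and requests per second per replica); the real InfAdapter decision is the integer program formulated on the next slides, so this is only meant to make the trade-off concrete.

    import itertools, math

    # Hypothetical per-variant profiles (illustrative numbers, not measurements).
    VARIANTS = {
        # name: (top-1 accuracy, CPU cores per replica, req/s per replica)
        "resnet18":  (0.698, 1, 40.0),
        "resnet50":  (0.761, 2, 20.0),
        "resnet152": (0.783, 4, 8.0),
    }

    def best_config(predicted_rps, max_replicas=4, cost_weight=0.02):
        """Pick a variant subset and replica counts that cover the predicted load,
        trading average accuracy against total CPU cost (brute force)."""
        best, best_score = None, -math.inf
        names = list(VARIANTS)
        subsets = itertools.chain.from_iterable(
            itertools.combinations(names, k) for k in (1, 2))
        for subset in subsets:
            for replicas in itertools.product(range(1, max_replicas + 1), repeat=len(subset)):
                capacity = sum(r * VARIANTS[v][2] for v, r in zip(subset, replicas))
                if capacity < predicted_rps:
                    continue  # this choice cannot support the predicted workload
                cores = sum(r * VARIANTS[v][1] for v, r in zip(subset, replicas))
                # Average accuracy, weighting each variant by the share of load it can absorb.
                accuracy = sum(VARIANTS[v][0] * r * VARIANTS[v][2]
                               for v, r in zip(subset, replicas)) / capacity
                score = accuracy - cost_weight * cores
                if score > best_score:
                    best, best_score = (subset, replicas, cores, accuracy), score
        return best

    print(best_config(predicted_rps=120))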

Slide 24

InfAdapter: Formulation

Slide 25

InfAdapter: Formulation
- Maximizing Average Accuracy

Slide 26

InfAdapter: Formulation
- Maximizing Average Accuracy
- Minimizing Resource and Loading Costs

Slide 27

InfAdapter: Formulation

Slide 28

InfAdapter: Formulation
- Supporting incoming workload

Slide 29

InfAdapter: Formulation
- Supporting incoming workload
- Guaranteeing end-to-end latency
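Putting the annotations on slides 24-29 together, the optimization has roughly the following shape; the symbols are my own shorthand for illustration, not the paper's exact notation.

    \begin{aligned}
    \max_{M' \subseteq M,\; n} \quad & \sum_{m \in M'} w_m\, \mathrm{acc}_m
      \;-\; \beta \sum_{m \in M'} n_m c_m \;-\; \gamma\, \mathrm{load}(M')
      && \text{(accuracy minus resource and loading costs)} \\
    \text{s.t.} \quad & \sum_{m \in M'} \mathrm{cap}_m(n_m) \;\ge\; \hat{\lambda}
      && \text{(support the predicted workload)} \\
    & \mathrm{lat}_m \;\le\; \mathrm{SLO} \quad \forall m \in M'
      && \text{(guarantee end-to-end latency)}
    \end{aligned}

Here w_m is the share of traffic routed to variant m, n_m and c_m are its replica count and per-replica cost, and \hat{\lambda} is the predicted request rate.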

Slide 30

InfAdapter: Design

Slide 31

InfAdapter: Design

Slide 32

InfAdapter: Design

Slide 33

InfAdapter: Experimental evaluation setup
- Workload: Twitter trace sample (2022-08)
- Baselines: Kubernetes VPA and Model-Switching
- Models: ResNet18, ResNet34, ResNet50, ResNet101, ResNet152
- Adaptation interval: 30 seconds
- Kubernetes cluster: 48 cores, 192 GiB RAM

Slide 34

Workload Pattern

Slide 35

InfAdapter: P99-Latency evaluation

Slide 36

InfAdapter: P99-Latency evaluation

Slide 37

InfAdapter: P99-Latency evaluation

Slide 38

InfAdapter: P99-Latency evaluation

Slide 39

InfAdapter: P99-Latency evaluation

Slide 40

InfAdapter: Accuracy evaluation

Slide 41

InfAdapter: Cost evaluation

Slide 42

InfAdapter: Tradeoff Space

Slide 43

Takeaway
- Inference Serving Systems should consider accuracy, latency, and cost at the same time.

Slide 44

Takeaway
- Model variants provide the opportunity to reduce resource costs while adapting to the dynamic workload.
- Using a set of model variants simultaneously provides higher average accuracy compared to having one variant.
- Inference Serving Systems should consider accuracy, latency, and cost at the same time.

Slide 45

Takeaway
- Model variants provide the opportunity to reduce resource costs while adapting to the dynamic workload.
- Using a set of model variants simultaneously provides higher average accuracy compared to having one variant.
- Inference Serving Systems should consider accuracy, latency, and cost at the same time.
InfAdapter!

Slide 46

https://github.com/reconfigurable-ml-pipeline/InfAdapter

Slide 47

- InfAdapter [2023]: Autoscaling for ML Model Inference
- IPA [2024]: Autoscaling for ML Inference Pipeline
- Sponge [2024]: Autoscaling for ML Inference Pipeline with Dynamic SLO

Slide 48

Inference Pipeline (stages and their number of configuration options):
Video Decoder (55), Stream Muxer (86), Primary Detector (14), Object Tracker (44), Secondary Classifier (86)
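Multiplying the per-stage option counts gives the size of the joint configuration space for this one pipeline, before replica counts or batch sizes are even considered; a quick check:

    # Joint configuration space of the pipeline stages listed above.
    options = [55, 86, 14, 44, 86]
    total = 1
    for n in options:
        total *= n
    print(f"{total:,}")  # 250,576,480 joint configurations

Roughly 2.5 x 10^8 combinations, which already rules out exhaustive search.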

Slide 49

The Variabilities in ML Pipelines

Slide 50

Search Space

Slide 51

Is scaling alone enough?

Slide 52

Effect of Batching

Slide 53

Goal: Providing a flexible inference pipeline

Slide 54

Problem Formulation
- Accuracy Objective
- Resource Objective
- Batch Control

Slide 55

Problem Formulation
- Latency SLA
- Throughput Constraint
- One active model per node
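Read together with the objectives on the previous slide, the formulation has roughly this shape; the notation below is mine, chosen only to make the constraints concrete.

    \begin{aligned}
    \max \quad & \alpha\,\mathrm{accuracy}(v) \;-\; \beta\,\mathrm{resources}(r) \;+\; \gamma\,\mathrm{batch}(b) \\
    \text{s.t.} \quad & \sum_{s \in \mathrm{stages}} \ell_s(v_s, b_s, r_s) \;\le\; \mathrm{SLA}
      && \text{(end-to-end latency)} \\
    & \mathrm{throughput}_s(v_s, b_s, r_s) \;\ge\; \lambda \quad \forall s
      && \text{(sustain the arrival rate)} \\
    & \sum_{k} x_{s,k} = 1 \quad \forall s
      && \text{(one active model variant per stage)}
    \end{aligned}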

Slide 56

Evaluations

Slide 57

Choice of orchestrator:
1. Industry standard
2. Used in recent research
3. Complete set of autoscaling, scheduling, and observability tools (e.g., CPU usage)
4. APIs for changing the current autoscaling algorithms

Choice of ML server:
1. Industry-standard ML server
2. Ability to build inference graphs
3. REST and gRPC endpoints
4. Many of the features we need (e.g., a monitoring stack) out of the box

How to navigate model variants?

Slide 58

Evaluation: https://github.com/reconfigurable-ml-pipeline/ipa

Slide 59

We compared IPA with RIM and FA2

Slide 60

Audio + QA Pipeline

Slide 61

Adaptivity to multiple objectives

Slide 62

Effect of predictor

Slide 63

Gurobi solver scalability
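For a sense of what gets handed to Gurobi, the toy integer program below picks one variant per stage plus replica counts under latency and throughput constraints; the profiles, weights, and overall structure are illustrative assumptions rather than IPA's exact model, and running it requires a Gurobi installation and license.

    import gurobipy as gp
    from gurobipy import GRB

    stages = ["detector", "classifier"]
    variants = {  # (accuracy score, latency ms, req/s per replica, cores per replica) -- made-up numbers
        ("detector", "small"): (0.60, 20, 30, 1), ("detector", "large"): (0.75, 60, 10, 2),
        ("classifier", "small"): (0.70, 15, 40, 1), ("classifier", "large"): (0.80, 50, 12, 2),
    }
    SLA_MS, LOAD_RPS, MAX_REPLICAS, COST_W = 100, 60, 8, 0.02

    m = gp.Model("pipeline_sketch")
    x = m.addVars(list(variants), vtype=GRB.BINARY, name="active")                   # variant choice
    r = m.addVars(list(variants), vtype=GRB.INTEGER, lb=0, ub=MAX_REPLICAS, name="replicas")

    for s in stages:  # exactly one active model per stage
        m.addConstr(gp.quicksum(x[st, v] for (st, v) in variants if st == s) == 1)
    for key in variants:  # replicas only for the active variant
        m.addConstr(r[key] <= MAX_REPLICAS * x[key])
    for s in stages:  # each stage must sustain the incoming load
        m.addConstr(gp.quicksum(r[st, v] * variants[st, v][2]
                                for (st, v) in variants if st == s) >= LOAD_RPS)
    # End-to-end latency of the chosen variants must fit within the SLA.
    m.addConstr(gp.quicksum(x[key] * variants[key][1] for key in variants) <= SLA_MS)

    accuracy = gp.quicksum(x[key] * variants[key][0] for key in variants)
    cores = gp.quicksum(r[key] * variants[key][3] for key in variants)
    m.setObjective(accuracy - COST_W * cores, GRB.MAXIMIZE)
    m.optimize()
    print({k: (round(x[k].X), round(r[k].X)) for k in variants if x[k].X > 0.5})

The scalability question on this slide is how solve time for programs of this form grows as the number of stages, variants, and replica bounds increases.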

Slide 64

Full replication package is available: https://github.com/reconfigurable-ml-pipeline

Slide 65

Recap: Model Serving Pipeline. Is scaling alone enough? Snapshot of the System. Adaptivity to multiple objectives.

Slide 66

- InfAdapter [2023]: Autoscaling for ML Model Inference
- IPA [2024]: Autoscaling for ML Inference Pipeline
- Sponge [2024]: Autoscaling for ML Inference Pipeline with Dynamic SLO

Slide 67

Dynamic Users -> Dynamic Network Bandwidths
- Users move
- Fluctuations in network bandwidth
- Reduced time budget for processing requests
(Figure: the SLO is split between network latency and processing latency)

Slide 68

Dynamic Users -> Dynamic Network Bandwidths
- Users move
- Fluctuations in network bandwidth
- Reduced time budget for processing requests
(Figure: the SLO is split between network latency and processing latency)
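In other words, with the SLO fixed, the slack left for queueing and inference shrinks as the network share grows (notation mine): \( \ell_{\mathrm{processing}} \le \mathrm{SLO} - \ell_{\mathrm{network}} \). A user moving away from their access point raises \( \ell_{\mathrm{network}} \) and therefore tightens the processing deadline the serving system has to hit.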

Slide 69

Inference Serving Requirements
- Highly Responsive! (end-to-end latency guarantee)
- Cost-Efficient! (least resource consumption)
Resource Scaling:
- In-place Vertical Scaling (more responsive)
- Horizontal Scaling (more cost efficient)
Sponge!

Slide 70

Vertical Scaling: DL Model Profiling
- How many resources should be allocated to a DL model?
- Latency vs. batch size: linear relationship
- Latency vs. CPU allocation: inverse relationship
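A minimal sketch of fitting such a profile, assuming the latency model latency ≈ (alpha * batch + beta) / cores implied by the two relationships above (linear in batch size, inverse in CPU allocation); the sample points are made up for illustration.

    import numpy as np

    # Hypothetical profiling samples: (batch size, CPU cores, measured latency in ms).
    samples = [(1, 1, 52), (4, 1, 130), (8, 1, 236), (1, 2, 27),
               (4, 2, 66), (8, 2, 119), (4, 4, 34)]

    # Model: latency ~ (alpha * batch + beta) / cores, so latency * cores is linear in batch,
    # which makes this an ordinary least-squares problem in (alpha, beta).
    batch = np.array([s[0] for s in samples], dtype=float)
    cores = np.array([s[1] for s in samples], dtype=float)
    target = np.array([s[2] for s in samples], dtype=float) * cores
    design = np.stack([batch, np.ones_like(batch)], axis=1)
    (alpha, beta), *_ = np.linalg.lstsq(design, target, rcond=None)

    def predicted_latency_ms(batch_size: int, cpu_cores: int) -> float:
        """Latency predicted by the fitted profile."""
        return (alpha * batch_size + beta) / cpu_cores

    print(round(alpha, 1), round(beta, 1), round(predicted_latency_ms(8, 4), 1))

A profile like this can then be inverted at runtime: given the current latency budget, solve for the smallest CPU allocation (and a batch size) that still meets the deadline.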

Slide 71

Problem Formulation

Slide 72

Problem Formulation
- Minimize resource costs

Slide 73

Problem Formulation
- Minimize resource costs
- Keep the batch size from growing unboundedly

Slide 74

Problem Formulation
- Minimize resource costs
- Keep the batch size from growing unboundedly
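Assembling slides 71-74, a plausible reading of the formulation is the sketch below; the notation is mine, not the paper's.

    \begin{aligned}
    \min_{c,\, b} \quad & c
      && \text{(CPU cores allocated, i.e. resource cost)} \\
    \text{s.t.} \quad & \ell_{\mathrm{queue}}(b) + \ell_{\mathrm{proc}}(b, c) \;\le\; \mathrm{SLO} - \ell_{\mathrm{network}}
      && \text{(fit within the dynamic latency budget)} \\
    & 1 \;\le\; b \;\le\; b_{\max}
      && \text{(keep the batch size from growing unboundedly)}
    \end{aligned}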

Slide 75

System Design: 3 design choices
1. In-place vertical scaling: fast response time
2. Request reordering: high-priority requests first
3. Dynamic batching: increased system utilization

Slide 76

Evaluation: SLO guarantees (99th percentile) with up to 20% resource savings compared to static resource allocation.
Sponge source code: https://github.com/saeid93/sponge

Slide 77

Future Directions
Resource Scaling: In-place Vertical Scaling (more responsive, Sponge!) and Horizontal Scaling (more cost efficient)
How can both scaling mechanisms be used jointly under a dynamic workload to be responsive and cost-efficient while guaranteeing SLOs?

Slide 78

- Performance goals are competing, and users have preferences over these goals
- The variability space (design space) of (composed) systems is exponentially increasing
- Systems operate in uncertain environments with imperfect and incomplete knowledge
Goal: Enabling users to find the right quality tradeoff
(Pictured platforms: Lander Testbed (NASA), Turtlebot 3 (UofSC), Husky UGV (UofSC), CoBot (CMU))