Slide 1

Reconciling High Accuracy, Cost-Efficiency, and Low Latency of Inference Serving Systems
Pooyan Jamshidi, University of South Carolina

Slide 2

Teamwork

Slide 6

Outline
● Background
● InfAdapter
● IPA

Slide 7

Multi-objective performance tradeoff

Slide 8

ML in research vs. in production

             Research             Production
Objectives   Model performance*   Different stakeholders have different objectives

* What "model performance" should mean is actively being worked on; see "Utility is in the Eye of the User: A Critique of NLP Leaderboards" (Ethayarajh and Jurafsky, EMNLP 2020).

Slide 9

Stakeholder objectives
● ML team: highest accuracy

Slide 10

Stakeholder objectives
● ML team: highest accuracy
● Sales: sells more ads

Slide 11

Stakeholder objectives
● ML team: highest accuracy
● Sales: sells more ads
● Product: fastest inference

Slide 12

Stakeholder objectives
● ML team: highest accuracy
● Sales: sells more ads
● Product: fastest inference
● Manager: maximizes profit (= laying off ML teams)

Slide 13

ML in research vs. in production

                        Research                         Production
Objectives              Model performance                Different stakeholders have different objectives
Computational priority  Fast training, high throughput   Fast inference, low latency (generating predictions)

Slide 14

Latency matters
● Increasing latency from 100 ms to 400 ms reduces searches by 0.2%-0.6% (2009)
● A 30% increase in latency costs 0.5% in conversion rate (2019)

Slide 15

● Latency: the time to move one leaf
● Throughput: how many leaves are moved in 1 second

Slide 16

● Real-time (one request at a time): low latency = high throughput
● Batched: higher latency per request, but higher throughput
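
Since a real-time server handles one request at a time, throughput is simply the reciprocal of latency, which is why low latency and high throughput coincide in that regime. A minimal Python sketch (the service times are assumed values, purely illustrative):

# Real-time serving: one request at a time, so throughput = 1 / latency.
for latency_s in (0.005, 0.010, 0.050):
    print(f"latency={latency_s * 1000:5.1f} ms -> throughput={1 / latency_s:6.1f} req/s")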

Slide 17

ML Serving

Slide 18

System = Software + Middleware + Hardware
[Layered-stack diagram: an application layer (clients, frontend, library API) over an OS/kernel layer (task scheduler, device drivers, file system, compilers, memory manager, process manager) over a hardware layer (CPU, GPU, memory controller, network, devices), deployed on SoCs, generic hardware, and production servers.]

Slide 19

Model Serving: abstract level

Slide 20

Model Serving: TF Serving

Slide 21

Model Serving: web app

Slide 22

Model Serving: Internet of Things

Slide 23

Model Serving: stream processing system

Slide 24

Model Serving: pipeline

Slide 25

InfAdapter

Slide 26

EuroMLSys '23, May 8, 2023, Rome, Italy

Slide 27

"More than 90% of data center compute for ML workloads is used by inference services."

Slide 28

ML inference services have strict requirements: Highly Responsive!

Slide 29

ML inference services have strict requirements: Highly Responsive! Cost-Efficient!

Slide 30

ML inference services have strict requirements: Highly Accurate! Highly Responsive! Cost-Efficient!

Slide 31

ML inference services have strict & conflicting requirements: Highly Accurate! Highly Responsive! Cost-Efficient!

Slide 32

A further challenge: dynamic workloads

Slide 33

Existing adaptation mechanisms
● Resource scaling
  ● Vertical scaling (Autopilot, EuroSys '20)
  ● Horizontal scaling (MArk, ATC '19)
● Quality adaptation
  ● Multiple model variants (Model-Switching, HotCloud '20)

Slide 34

Resource allocation: over-provisioning vs. under-provisioning


Slide 40

Quality adaptation: ResNet18 predicts "Tiger"; ResNet152 predicts "Dog"
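
The accuracy side of this tradeoff is well documented for the ResNet family. The sketch below pairs torchvision's published ImageNet top-1 accuracies with per-request latencies; the latency numbers are assumptions for illustration, not measurements from this talk:

# ResNet variant profiles. Top-1 ImageNet accuracies are torchvision's
# published numbers; the latencies are hypothetical illustrative values.
VARIANTS = {
    # name:      (top-1 accuracy %, assumed latency in ms)
    "resnet18":  (69.8, 20),
    "resnet34":  (73.3, 35),
    "resnet50":  (76.1, 55),
    "resnet101": (77.4, 90),
    "resnet152": (78.3, 125),
}

A system that can switch among these variants trades roughly 8.5 points of top-1 accuracy against a several-fold difference in compute per request.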

Slide 41

Quality adaptation

Slide 42

Solution: InfAdapter
InfAdapter is a latency-SLO-aware, highly accurate, and cost-efficient inference serving system.

Slide 43

InfAdapter: Why? Different model variants provide different throughputs.

Slide 44

InfAdapter: Why? Higher average accuracy by using multiple model variants.

Slide 45

InfAdapter: How?
Select a subset of model variants, each with its own resource size, that meets the latency requirement for the predicted workload while maximizing accuracy and minimizing cost (a sketch follows below).
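
As a rough illustration of this selection problem (a brute-force sketch under assumed profiles, not InfAdapter's actual algorithm), the code below enumerates variant subsets, splits the predicted workload evenly across a subset, and keeps the subset with the best accuracy-minus-cost score among those meeting the latency SLO:

from itertools import combinations

# Hypothetical profiles: (top-1 accuracy %, capacity req/s per replica,
# p99 latency ms, cost units per replica). Illustrative values only.
PROFILES = {
    "resnet18":  (69.8, 100,  40, 1),
    "resnet50":  (76.1,  40,  90, 2),
    "resnet152": (78.3,  15, 180, 4),
}

def best_subset(workload_rps: float, slo_ms: float, beta: float = 0.5):
    """Pick the subset maximizing (average accuracy - beta * resource cost)."""
    best, best_score = None, float("-inf")
    for r in range(1, len(PROFILES) + 1):
        for subset in combinations(PROFILES, r):
            if any(PROFILES[v][2] > slo_ms for v in subset):
                continue  # some variant cannot meet the latency SLO at all
            share = workload_rps / len(subset)  # even split across variants
            # Replicas needed per variant to absorb its share of the load.
            cost = sum(-(-share // PROFILES[v][1]) * PROFILES[v][3] for v in subset)
            acc = sum(PROFILES[v][0] for v in subset) / len(subset)
            score = acc - beta * cost
            if score > best_score:
                best, best_score = subset, score
    return best, best_score

print(best_subset(workload_rps=120, slo_ms=150))

The slides that follow give the actual formulation that this sketch approximates: accuracy up, cost down, latency as a constraint.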

Slide 46

InfAdapter: Design

Slide 48

InfAdapter: Formulation

Slide 49

InfAdapter: Formulation
● Maximizing average accuracy

Slide 50

InfAdapter: Formulation
● Maximizing average accuracy
● Minimizing resource and loading costs

Slide 52

InfAdapter: Formulation
● Supporting the incoming workload

Slide 53

InfAdapter: Formulation
● Supporting the incoming workload
● Guaranteeing end-to-end latency
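
Assembled from the bullets above, the optimization has roughly this shape (a reconstruction in generic notation, not the paper's exact formulation):

\max_{x}\;\; \sum_{v \in V} w_v(x)\, a_v \;-\; \beta \Big( \sum_{v \in V} c_v(x) + \mathrm{LoadingCost}(x) \Big)

\text{s.t.}\quad \sum_{v \in V} \mathrm{cap}_v(x) \ge \lambda, \qquad \ell_v(x) \le \mathrm{SLO} \;\; \forall v \text{ selected}

where V is the set of model variants, x the chosen subset and its resource allocation, w_v the share of traffic served by variant v, a_v its accuracy, c_v its resource cost, λ the predicted workload, and β the knob trading accuracy against cost (the same β swept in the evaluation below).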

Slide 54

InfAdapter: Design

Slide 55

InfAdapter: Experimental evaluation setup
● Workload: Twitter-trace sample (2022-08)
● Baselines: Kubernetes VPA and an adapted Model-Switching
● Models: ResNet18, ResNet34, ResNet50, ResNet101, ResNet152
● Adaptation interval: 30 seconds
● Cluster: Kubernetes with 2 computing nodes (48 cores, 192 GiB RAM)

Slide 56

Workload Pattern

Slide 57

InfAdapter: P99-latency evaluation

Slide 65

InfAdapter: Accuracy evaluation

Slide 66

InfAdapter: Cost evaluation

Slide 67

InfAdapter: Experimental evaluation
Comparing aggregated metrics (latency SLO violations, accuracy, and cost) against the other works across different β values, to see how each performs under different accuracy-cost tradeoffs.

Slide 68

Takeaway
● Inference serving systems should consider accuracy, latency, and cost at the same time.

Slide 69

Takeaway
● Inference serving systems should consider accuracy, latency, and cost at the same time.
● Model variants provide the opportunity to reduce resource costs while adapting to the dynamic workload.
● Using a set of model variants simultaneously provides higher average accuracy compared to having one variant.

Slide 70

Takeaway
● Inference serving systems should consider accuracy, latency, and cost at the same time.
● Model variants provide the opportunity to reduce resource costs while adapting to the dynamic workload.
● Using a set of model variants simultaneously provides higher average accuracy compared to having one variant.
InfAdapter!

Slide 71

https://github.com/reconfigurable-ml-pipeline/InfAdapter

Slide 72

IPA


Slide 74

Inference Pipeline
● Recommender systems (source: https://developer.nvidia.com/blog/optimizing-dlrm-on-nvidia-gpus/)
● Video pipelines (source: https://docs.nvidia.com/metropolis/deepstream/5.0/dev-guide/index.html#page/DeepStream_Development_Guide/deepstream_overview.html)

Slide 75

Autoscaling
Previous works have used autoscaling for cost optimization of inference pipelines.

Slide 76

Is scaling alone enough?

Slide 77

Effect of Batching
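
A simple way to see the effect: under a linear batch-cost model, throughput grows with batch size while per-request latency grows too. The coefficients below are assumptions for illustration, not measurements:

# Linear batch-cost model: batch latency = ALPHA_S + BETA_S * batch_size.
ALPHA_S = 0.008  # assumed fixed overhead per batch (s)
BETA_S = 0.002   # assumed marginal cost per batched request (s)

for b in (1, 4, 16, 64):
    latency = ALPHA_S + BETA_S * b  # every request waits for the whole batch
    throughput = b / latency        # requests completed per second
    print(f"batch={b:3d}  latency={latency * 1000:6.1f} ms  throughput={throughput:7.1f} req/s")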

Slide 78

How to navigate the accuracy/latency tradeoff? Model variants and model switching!
Previous works, INFaaS and Model-Switching, have shown that there is a large latency-accuracy-resource-footprint tradeoff among models trained for the same task.

Slide 81

Search Space

Slide 82

Goal: Providing a flexible inference pipeline

Slide 83

Snapshot of the System

Slide 84

System Design

Slide 85

Problem Formulation
● Objective function: accuracy objective, resource objective, batch control
● Constraints: latency SLA, throughput constraint, one active model per node
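
Since slide 97 mentions Gurobi, a natural sketch of this formulation is an integer program. The gurobipy snippet below is a heavily simplified single-node version (the profile numbers, weights, and the omission of the latency-SLA and batch-control terms are my simplifications, not IPA's actual model):

import gurobipy as gp
from gurobipy import GRB

# Hypothetical per-variant profiles: (accuracy %, capacity req/s per replica,
# cost per replica). Latency-SLA and batch-control terms are omitted here.
variants = {"small": (70.0, 100, 1), "medium": (76.0, 40, 2), "large": (78.0, 15, 4)}
workload = 120           # predicted req/s
alpha, beta = 1.0, 0.5   # assumed accuracy vs. resource weights

m = gp.Model("ipa_sketch")
active = m.addVars(variants, vtype=GRB.BINARY, name="active")      # chosen variant
replicas = m.addVars(variants, vtype=GRB.INTEGER, lb=0, name="n")  # replica counts

m.addConstr(active.sum() == 1, name="one_active_model_per_node")
for v in variants:
    # Replicas count only for the active variant (big-M linking constraint).
    m.addConstr(replicas[v] <= 10 * active[v], name=f"link_{v}")
# Throughput constraint: active replicas must absorb the predicted workload.
m.addConstr(gp.quicksum(replicas[v] * variants[v][1] for v in variants) >= workload)

m.setObjective(
    alpha * gp.quicksum(active[v] * variants[v][0] for v in variants)
    - beta * gp.quicksum(replicas[v] * variants[v][2] for v in variants),
    GRB.MAXIMIZE,
)
m.optimize()
for v in variants:
    if active[v].X > 0.5:
        print(v, int(round(replicas[v].X)), "replicas")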

Slide 86

Implementation and Experimental Setup
Slide 87

Slide 87 text

1. Industry standard 2. Used in recent research 3. Complete set of autoscaling, scheduling, observability tools (e.g. CPU usage) 4. APIs for changing the current AutoScaling algorithms 1. Industry standard ML server 2. Have the ability make inference graph 3. Rest and GRPC endpoints 4. Have many of the features we need like monitoring stack out of the box How to navigate Model Variants

Slide 88

Slide 88 text

83 Experimental Setup ● A six node Kubernetes cluster

Slide 89

Slide 89 text

Experimental Results 84

Slide 90

Slide 90 text

85 Video Pipeline

Slide 91

Slide 91 text

86 Audio + QA Pipeline

Slide 92

Slide 92 text

87 Summarization + QA
 Pipeline

Slide 93

Slide 93 text

88 Summarization + QA
 Pipeline

Slide 94

NLP Pipeline

Slide 95

Adaptivity to multiple objectives

Slide 96

Effect of the predictor

Slide 97

Gurobi solver scalability

Slide 98

Model Serving Pipeline
https://github.com/reconfigurable-ml-pipeline/ipa

Slide 99

Model Serving Pipeline
Is scaling alone enough?
https://github.com/reconfigurable-ml-pipeline/ipa

Slide 100

Model Serving Pipeline
Is scaling alone enough?
Snapshot of the System
https://github.com/reconfigurable-ml-pipeline/ipa

Slide 101

Model Serving Pipeline
Is scaling alone enough?
Snapshot of the System
Adaptivity to multiple objectives
https://github.com/reconfigurable-ml-pipeline/ipa