Slide 1

Slide 1 text

How to Apply Large ML Models to AI Text Filtering Models
Hyung Rak Kim / LINE Plus

Slide 2

Slide 2 text

• Hyungrak Kim
• NLP Engineer
• AI text filter model
• Likes to learn and use new technology

Slide 3

Slide 3 text

No content

Slide 4

Slide 4 text

• Image generated with Stability AI's DreamStudio, prompt "beautiful forest": https://beta.dreamstudio.ai/dream

Slide 5

Slide 5 text

Contents
› Introduction
› Large ML model training tech
› Apply large ML model to AI text filter
› Experiment Result
› Expected Effectiveness
› Conclusion

Slide 6

Slide 6 text

Introduction

Slide 7

Slide 7 text

Introduction
What is the AI text filter?
[Diagram] A user message, e.g. 「私と付き合いたい場合は連絡してください [email protected]」 ("Please contact me if you want to date me [email protected]"), is scored by the JP-language AI text filter model against the labels Normal, Personal Info, Porn, Harass, Illegal, and Advertising; flagged messages go to the LINE Monitoring System for checking. About 380,000,000 messages are processed every month.

Slide 8

Slide 8 text

Introduction
What is the AI text filter? (Problem)
[Diagram] The JP-language AI text filter model is built by fine-tuning one of many public pre-training models: JP BERT, JP Char BERT, JP RoBERTa, JP small BERT, JP DistilBERT, ...

Slide 9

Slide 9 text

Introduction
What is the AI text filter? (Problem)
[Diagram] Choosing which public pre-training model (JP BERT, JP Char BERT, JP RoBERTa, JP small BERT, JP DistilBERT, ...) to fine-tune raises questions:
› Which model's performance is better?
› What if the language is different?
Every candidate carries research cost, development cost, and service cost.

Slide 10

Slide 10 text

Introduction
Solution
› Language: multi-language
› Performance: large ML model
› Technique: large ML training tech

Slide 11

Slide 11 text


Slide 12

Slide 12 text


Slide 13

Slide 13 text

Introduction
What is the AI text filter? (Solution)
[Diagram] Replace the 110-million-parameter single-language (Japanese) AI text filter model with an 11-billion-parameter multi-language model (roughly ×100 the size) covering Japanese, English, Thai, Taiwanese, Indonesian, and more, built with a large model training technique. Impact: cost reduction and service extension.

Slide 14

Slide 14 text

Introduction
Contribution
› Introduction and sharing of large ML model training technology
› AI text filter advancement using a large multi-language model
  › With the MLU team of LINE MLOps
› Model serving
  › With the MLU serving team of LINE ML service

Slide 15

Slide 15 text

Large ML Model Training Tech

Slide 16

Slide 16 text

Large ML Model Training Tech
Basics
› Lightweight: Pruning, Quantization, Knowledge Distillation
› Scaling: Data Parallelism, Model Parallelism, CPU-Offload

Slide 17

Slide 17 text

Large ML Model Training Tech
Data Parallelism
[Diagram] Each of three V100 GPUs holds a full copy of the ML model and trains on its own data split (Data 1, Data 2, Data 3).
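The scheme in the diagram can be sketched without any framework (toy model and data, not from the talk): every replica computes gradients on its own shard, and an all-reduce averages them so all copies of the model stay identical.

```python
# Minimal sketch of data parallelism: each "GPU" holds a full copy of
# the weights, computes gradients on its own data shard, then an
# all-reduce averages the gradients so every replica applies the same update.

def local_gradient(weights, shard):
    # Toy gradient of a squared-error loss for w*x ~ y on one shard.
    return [sum(2 * (w * x - y) * x for x, y in shard) / len(shard)
            for w in weights]

def all_reduce_mean(grads_per_gpu):
    # Average gradients element-wise across replicas.
    n = len(grads_per_gpu)
    return [sum(g[i] for g in grads_per_gpu) / n
            for i in range(len(grads_per_gpu[0]))]

def data_parallel_step(weights, shards, lr=0.01):
    grads = [local_gradient(weights, s) for s in shards]  # per-GPU work
    avg = all_reduce_mean(grads)                          # sync point
    return [w - lr * g for w, g in zip(weights, avg)]

shards = [[(1.0, 2.0)], [(2.0, 4.0)], [(3.0, 6.0)]]  # three "GPUs"
new_w = data_parallel_step([0.0], shards)
```

Because every replica sees the same averaged gradient, the three model copies never drift apart, which is what lets data parallelism scale the batch without changing the model.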

Slide 18

Slide 18 text

Large ML Model Training Tech
Model Parallelism
[Diagram] A single model (Input → Layer 1 → Layer 2 → Output) is split across GPU 1 and GPU 2 with intra-operator parallelism; an all-reduce combines the partial results.
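A framework-free sketch of intra-operator parallelism (toy numbers, not from the talk): one linear layer's weight matrix is split column-wise across two "GPUs", and gathering the partial outputs reproduces the unsplit layer exactly.

```python
# Minimal sketch of intra-operator (tensor) model parallelism: a linear
# layer's weight matrix is split column-wise; each "GPU" computes its
# slice of the output, and the slices are gathered into the full result.

def matmul_vec(x, W):
    # y[j] = sum_i x[i] * W[i][j]
    return [sum(x[i] * W[i][j] for i in range(len(x)))
            for j in range(len(W[0]))]

def split_columns(W):
    # Two-way column split for this sketch.
    half = len(W[0]) // 2
    return [row[:half] for row in W], [row[half:] for row in W]

x = [1.0, 2.0]
W = [[1.0, 2.0, 3.0, 4.0],
     [5.0, 6.0, 7.0, 8.0]]

W1, W2 = split_columns(W)                   # each shard lives on one GPU
y = matmul_vec(x, W1) + matmul_vec(x, W2)   # all-gather of partial outputs
assert y == matmul_vec(x, W)                # identical to the unsplit layer
```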

Slide 19

Slide 19 text

Large ML Model Training Tech
Model Parallelism + CPU Offload
[Diagram] On top of the model-parallel split across GPU 1 and GPU 2, state is offloaded to CPU memory, so the trainable model size goes up with the available CPU space.
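An illustrative DeepSpeed configuration enabling CPU offload via ZeRO stage 3; the field names follow the DeepSpeed documentation, but the values here are placeholders, not the talk's settings.

```json
{
  "train_batch_size": 32,
  "fp16": { "enabled": true },
  "zero_optimization": {
    "stage": 3,
    "offload_param": { "device": "cpu", "pin_memory": true },
    "offload_optimizer": { "device": "cpu", "pin_memory": true }
  }
}
```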

Slide 20

Slide 20 text

Large ML Model Training Tech
Large ML model training framework
› Why choose DeepSpeed as the framework
  › Open source
  › CPU offload
  › Supports the best methods from current ML research
• (ICML 2022 big model tutorial): https://icml.cc/virtual/2022/tutorial/18440
• (DeepSpeed): https://www.deepspeed.ai

Slide 21

Slide 21 text

Apply large ML model to AI Text filter

Slide 22

Slide 22 text

Apply large ML model to AI text filter
Large model training
› Cluster structure: 3 GPU nodes with a DeepSpeed multi-node setting
› Per node:
  › GPU: A100 40G
  › GPU number: 8
  › CPU cores: 70
  › CPU memory: 1 TB
• (DeepSpeed): https://www.deepspeed.ai/
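A sketch of how such a cluster is described to the DeepSpeed launcher; the hostfile format and launcher flags are from the DeepSpeed documentation, while the hostnames, train.py, and ds_config.json are placeholder names, not the talk's files.

```shell
# DeepSpeed's documented hostfile format: one line per node with GPU slots.
cat > hostfile <<'EOF'
node1 slots=8
node2 slots=8
node3 slots=8
EOF

# Launch training across all three nodes (8 GPUs each). The header node
# reaches the workers over ssh, which is why passwordless ssh must be set up.
deepspeed --hostfile=hostfile --num_nodes 3 --num_gpus 8 \
    train.py --deepspeed --deepspeed_config ds_config.json
```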

Slide 23

Slide 23 text

Apply large ML model to AI text filter
Large model training
› Cluster structure: 3 GPU nodes (8× A100 40G, 70 CPU cores, 1 TB CPU memory each) with a DeepSpeed multi-node setting
› Training configuration: an 11-billion-parameter multi-language pre-training model, fine-tuned into the AI text filter on 730,000 examples
• (DeepSpeed): https://www.deepspeed.ai/

Slide 24

Slide 24 text

Apply large ML model to AI text filter
Problems
› Environment setting
› Multi-node sharing
› Pre-training model dependency

Slide 25

Slide 25 text


Slide 26

Slide 26 text


Slide 27

Slide 27 text

Apply large ML model to AI text filter
Environment setting problem
[Diagram] The DeepSpeed environment depends on the CPU, GPU, OS, system libraries, and library dependencies all fitting together.
• (DeepSpeed Docker images): https://hub.docker.com/r/deepspeed/deepspeed/tags?page=1&ordering=last_updated

Slide 28

Slide 28 text

Apply large ML model to AI text filter
Environment setting problem
[Diagram] On top of the CPU/GPU/OS/system-library dependencies, DeepSpeed builds CUDA extensions, which adds a build-system dependency on the CUDA toolkit, Ninja, and g++/c++.
• (DeepSpeed Docker images): https://hub.docker.com/r/deepspeed/deepspeed/tags?page=1&ordering=last_updated

Slide 29

Slide 29 text

Apply large ML model to AI text filter
Environment setting solution
[Diagram] A prepared DeepSpeed environment setting bundles the OS system libraries, the DeepSpeed library, and the multi-node libraries.

Slide 30

Slide 30 text

Apply large ML model to AI text filter
Environment setting solution
[Diagram] The prepared environment pins a fixed, stable DeepSpeed version, bundles the OS system, DeepSpeed, and multi-node libraries, supports all functions used in MLU, and leaves the choice of training library free within the MLU environment.

Slide 31

Slide 31 text

Apply large ML model to AI text filter
Environment setting solution
[Diagram] The prepared environment (fixed stable DeepSpeed version, OS system, DeepSpeed, and multi-node libraries, all functions used in MLU, training-library free) is distributed as a Docker image on Docker Hub together with an installation document.

Slide 32

Slide 32 text

Apply large ML model to AI text filter
Multi-node training file sharing problem 1
[Diagram] On a single GPU server in the MLU environment, the first training start builds the CUDA extension (the GPU accelerator) before training can run.
• (DeepSpeed advanced install): https://www.deepspeed.ai/tutorials/advanced-install/

Slide 33

Slide 33 text

Apply large ML model to AI text filter
Multi-node training file sharing problem 2
[Diagram] In multi-node training, the header GPU node (node 1) launches the worker GPU nodes (nodes 2 and 3) over ssh, but the CUDA extension built on the header is missing on the workers.
• (DeepSpeed advanced install): https://www.deepspeed.ai/tutorials/advanced-install/

Slide 34

Slide 34 text

Apply large ML model to AI text filter
Multi-node training file sharing solution 1
[Diagram] A multi-node file sharing module on the header GPU node takes the worker-node IP address list and securely transfers the built CUDA extension to every worker GPU node.

Slide 35

Slide 35 text

Apply large ML model to AI text filter
Multi-node training file sharing solution 2
[Diagram] With the sharing module in place, the header GPU node distributes the CUDA extension to the worker nodes over ssh and multi-node training starts.
• (DeepSpeed advanced install): https://www.deepspeed.ai/tutorials/advanced-install/

Slide 36

Slide 36 text

Apply large ML model to AI text filter
Pre-training model parallelism dependency problem
[Diagram] Model parallelism (intra-operator parallelism with all-reduce across GPU 1 and GPU 2) must be written into the model code by hand.

Slide 37

Slide 37 text

Apply large ML model to AI text filter
Pre-training model parallelism dependency problem
[Diagram] Public pre-training models (JP BERT, JP Char BERT, JP RoBERTa, JP small BERT, JP DistilBERT, ...) ship as unparallelized model code, so model parallelism cannot be applied to them directly.

Slide 38

Slide 38 text

Apply large ML model to AI text filter
Pre-training model parallelism dependency problem
[Diagram] Fine-tuning a public pre-training model therefore inherits a parallelism dependency: its unparallelized model code must be rewritten before the pre-trained weights can be used with model parallelism.

Slide 39

Slide 39 text

Apply large ML model to AI text filter
Pre-training model parallelism dependency solution
[Diagram] A parallelism converter rewrites the pre-training model code for parallelism.

Slide 40

Slide 40 text

Apply large ML model to AI text filter
Pre-training model parallelism dependency solution
[Diagram] The parallelism converter does two things: it parallelizes the pre-training model code and it partitions the pre-trained model weights.

Slide 41

Slide 41 text

Apply large ML model to AI text filter
Pre-training model parallelism dependency: code parallelism 1
[Diagram] A public pre-training Transformer model consists of an encoder and a decoder, each a stack of layers 1..N.

Slide 42

Slide 42 text

Apply large ML model to AI text filter
Pre-training model parallelism dependency: code parallelism 1
[Diagram] Each layer is multi-head attention (key, query, value) plus a feed-forward network; the intermediate feed-forward network expands H to 4H and then projects 4H back to H.

Slide 43

Slide 43 text

Apply large ML model to AI text filter
Pre-training model parallelism dependency: code parallelism 2
[Diagram] Each layer of the multi-language pre-training model is code-parallelized in the Megatron-LM style: the multi-head attention layer's key, query, and value projections are split across GPU 1 and GPU 2 and combined with an all-reduce, as is the feed-forward layer.
• (Megatron-LM): https://arxiv.org/pdf/1909.08053.pdf

Slide 44

Slide 44 text

Apply large ML model to AI text filter
Pre-training model parallelism dependency: code parallelism 2
[Diagram] The multi-head attention layer (key/query/value split across GPU 1 and GPU 2, combined by all-reduce) feeds the intermediate feed-forward layer (H to 4H, then 4H to H), whose output is combined by a second all-reduce.
• (Megatron-LM): https://arxiv.org/pdf/1909.08053.pdf
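The H to 4H to H feed-forward split can be verified in plain Python (toy weights, not the model's): splitting the first matmul column-wise and the second row-wise means the element-wise nonlinearity runs independently on each shard, and a single all-reduce (a sum) restores the exact unsplit output.

```python
# Sketch of the Megatron-LM-style FFN split (H -> 4H -> H) across two
# "GPUs": the first matmul is split column-wise (no communication needed),
# the second row-wise, so one all-reduce (sum) restores the output.

def matmul(x, W):
    return [sum(x[i] * W[i][j] for i in range(len(x)))
            for j in range(len(W[0]))]

def relu(v):
    return [max(0.0, a) for a in v]

W1 = [[1.0, -2.0, 3.0, -4.0],   # H x 4H
      [0.5, 1.5, -0.5, 2.0]]
W2 = [[1.0, 0.0],               # 4H x H
      [0.0, 1.0],
      [1.0, 1.0],
      [0.5, -0.5]]
x = [1.0, 2.0]

# Unsplit reference computation.
ref = matmul(relu(matmul(x, W1)), W2)

# GPU g holds a column slice of W1 and the matching row slice of W2.
W1_a = [row[:2] for row in W1]; W1_b = [row[2:] for row in W1]
W2_a = W2[:2];                  W2_b = W2[2:]

part_a = matmul(relu(matmul(x, W1_a)), W2_a)   # on GPU 1
part_b = matmul(relu(matmul(x, W1_b)), W2_b)   # on GPU 2
y = [a + b for a, b in zip(part_a, part_b)]    # all-reduce (sum)
assert y == ref
```

Splitting the first GEMM by columns is what makes this work: each shard holds whole activation elements, so ReLU needs no cross-GPU communication.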

Slide 45

Slide 45 text

Apply large ML model to AI text filter
Pre-training model parallelism dependency: weight partitioning
[Diagram] After the model code is parallelized, the model-parameter partitioning algorithm loads the pre-training model weights, partitions them, and hands the result to fine-tuning.
• (Megatron-LM): https://github.com/NVIDIA/Megatron-LM

Slide 46

Slide 46 text

Apply large ML model to AI text filter
Pre-training model parallelism dependency: weight partitioning
[Diagram] The model-parameter partitioning algorithm loads the pre-training model weights and auto-partitions the multi-head attention, feed-forward, and intermediate feed-forward layer weights across GPU 1 and GPU 2 before fine-tuning.
• (Megatron-LM): https://github.com/NVIDIA/Megatron-LM
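A sketch of the weight-partitioning step; the checkpoint names like "attn.qkv" are illustrative, not a real checkpoint layout. Input-side matrices are split by columns and output-side matrices by rows, mirroring the code-parallel layout above.

```python
# Sketch of partitioning an unsplit pretrained checkpoint into per-GPU
# shards so an already-trained model can be loaded into parallelized code.

def split_cols(W, n):
    step = len(W[0]) // n
    return [[row[g * step:(g + 1) * step] for row in W] for g in range(n)]

def split_rows(W, n):
    step = len(W) // n
    return [W[g * step:(g + 1) * step] for g in range(n)]

def partition_checkpoint(ckpt, n_gpus=2):
    shards = [{} for _ in range(n_gpus)]
    for name, W in ckpt.items():
        # Column-split the "input side" matrices, row-split the
        # "output side" ones.
        if name.endswith((".qkv", ".ffn_in")):
            parts = split_cols(W, n_gpus)
        else:
            parts = split_rows(W, n_gpus)
        for g in range(n_gpus):
            shards[g][name] = parts[g]
    return shards

ckpt = {
    "layer0.attn.qkv": [[1, 2, 3, 4], [5, 6, 7, 8]],      # 2 x 4, by columns
    "layer0.ffn_out":  [[1, 2], [3, 4], [5, 6], [7, 8]],  # 4 x 2, by rows
}
shards = partition_checkpoint(ckpt)
```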

Slide 47

Slide 47 text

Apply large ML model to AI text filter
Pre-training model parallelism dependency solution
[Diagram] End to end: a public pre-training model with unparallelized model code passes through the parallelism converter (model code parallelism + model weight partitioning), producing a parallelized model split into N groups across GPU 1 and GPU 2, ready for fine-tuning with model parallelism.

Slide 48

Slide 48 text

Apply large ML model to AI text filter
Pre-training model parallelism dependency solution: analysis
› Advantages: free of the parallelism dependency; model size can go up
› Disadvantages: unstable convergence; model performance can go down; more research needed

Slide 49

Slide 49 text

Apply large ML model to AI text filter
Performance tuning: label correlation
[Diagram] A global correlation embedding captures the label correlation among Normal, Advertising, Personal Info, Porn, Illegal, and Harass.

Slide 50

Slide 50 text

Apply large ML model to AI text filter
Large model serving
[Diagram] The model is first optimized: FP16 with loss scaling.
• (DeepSpeed Inference): https://www.deepspeed.ai/tutorials/inference-tutorial/

Slide 51

Slide 51 text

Apply large ML model to AI text filter
Large model serving
[Diagram] The FP16 model is then prepared for inference with GPU kernel optimization and inference parallelism.
• (DeepSpeed Inference): https://www.deepspeed.ai/tutorials/inference-tutorial/

Slide 52

Slide 52 text

Apply large ML model to AI text filter
Large model serving
[Diagram] The optimized model (FP16 with loss scaling, GPU kernel optimization, inference parallelism) is served on V100 GPUs with auto scaling through MLU Serving.
• (DeepSpeed Inference): https://www.deepspeed.ai/tutorials/inference-tutorial/
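The loss-scaling idea on the slide can be sketched as a generic dynamic loss scaler (this is the common recipe, not DeepSpeed's implementation): gradients are computed on a scaled loss; on overflow the step is skipped and the scale halved, otherwise the scale slowly grows back.

```python
import math

class DynamicLossScaler:
    """Generic dynamic loss scaling for FP16 training."""

    def __init__(self, scale=2.0 ** 16, growth_interval=2000):
        self.scale = scale
        self.growth_interval = growth_interval
        self.good_steps = 0

    def step(self, grads):
        # Returns unscaled grads, or None when the step must be skipped.
        if any(math.isinf(g) or math.isnan(g) for g in grads):
            self.scale /= 2          # back off after overflow
            self.good_steps = 0
            return None
        self.good_steps += 1
        if self.good_steps % self.growth_interval == 0:
            self.scale *= 2          # cautiously grow again
        return [g / self.scale for g in grads]

scaler = DynamicLossScaler(scale=4.0)
ok = scaler.step([8.0, 4.0])         # healthy grads: unscale by 4
bad = scaler.step([float("inf")])    # overflow: skip step, halve scale
```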

Slide 53

Slide 53 text

Experiment Result

Slide 54

Slide 54 text

Experiment Result
Experiment setting
› Service model: AI text filter, Japanese single-language model, 110 million parameters
› vs. AI text filter multi-language large model, 11 billion parameters, with tuning
› vs. AI text filter multi-language large model, 11 billion parameters, without tuning

Slide 55

Slide 55 text

Experiment Result
Experiment test data

Label   | Count   | Ratio (%)
--------|---------|----------
Normal  | 99,996  | 86.2
Info    | 10,278  | 8.8
Porn    | 2,299   | 1.9
Harass  | 1,106   | 0.9
Illegal | 106     | 0.09
AD      | 2,180   | 1.8
Total   | 115,965 |

[Chart] Test data count per label.

Slide 56

Slide 56 text

Experiment Result
F1 score result
[Chart] Per-label F1 score (Normal, Info, Porn, Harass, Illegal, AD) for the Multi-Tuning, Multi, and JP Service models.
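For reference, the per-label and macro-average F1 behind charts like this can be computed as follows (the counts below are made up, not the experiment's):

```python
# Per-label F1 from true/false positive and false negative counts,
# plus the macro (unweighted) average across labels.

def f1(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative counts for two labels only.
per_label = {"Normal": f1(90, 10, 10), "AD": f1(8, 2, 4)}
macro_f1 = sum(per_label.values()) / len(per_label)
```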

Slide 57

Slide 57 text

Experiment Result
F1 score result
[Chart] Per-label F1 score and total average F1 for the Multi-Tuning, Multi, and JP Service models. Relative to the JP service model (0%), the average F1 is annotated -1% for the tuned multi-language model and -9.9% for the untuned one.

Slide 58

Slide 58 text

Experiment Result
AUC result
[Chart] Per-label AUC score (Normal, Info, Porn, Harass, Illegal, AD) for the Multi Tuning, Multi, and JP Service models.

Slide 59

Slide 59 text

Experiment Result
AUC result
[Chart] Per-label AUC score and total average AUC for the Multi Tuning, Multi, and JP Service models. Relative to the JP service model (0%), the average AUC is annotated -1% for the tuned multi-language model and -9.1% for the untuned one.

Slide 60

Slide 60 text

Experiment Result
Qualitative evaluation
User message: 経営難で銀行等からの融資待ちの方、収入がなく生活が出来ない……等々、コロナショックで困ってる方🙀 連絡頂ければ即融資可能です😊‼
Translation: "Those who are in trouble due to the corona shock, such as those waiting for loans from banks due to financial difficulties, those who cannot live without income, etc. 🙀 If you contact us, financing is available immediately 😊!!"

Slide 61

Slide 61 text

Experiment Result
Qualitative evaluation
For the loan-scam message above ("Those who are in trouble due to the corona shock ... If you contact us, financing is available immediately"), the JP service model scores it as illegal with only 12% confidence.

Slide 62

Slide 62 text

Experiment Result
Qualitative evaluation
For the same loan-scam message, the tuned multi-language large model scores illegal at 99%, versus 12% for the JP service model.

Slide 63

Slide 63 text

Expected Effectiveness

Slide 64

Slide 64 text

Expected Effectiveness
Effect
› The effect of introducing the large ML model:
  1. Performance improvement
  2. Service extension
  3. Large ML model training tech

Slide 65

Slide 65 text

Expected Effectiveness
Expectation 1
› The AI text filter of the LMP system becomes 10% more accurate
› The monitoring rate drops 0.3 percentage points from the current AI text filter service model
[Diagram] Of the 380,000,000 messages every month, the JP service model flags 1.5% (5,700,000 messages) for monitoring.

Slide 66

Slide 66 text

Expected Effectiveness
Expectation 1
› The AI text filter of the LMP system becomes 10% more accurate
› The monitoring rate drops 0.3 percentage points from the current AI text filter service model
[Diagram] Of the 380,000,000 messages every month, the JP service model flags 1.5% (5,700,000 messages) for monitoring, while the multi-language large model flags 1.2% (4,560,000 messages).

Slide 67

Slide 67 text

Expected Effectiveness
Expectation 1
› The AI text filter of the LMP system becomes 10% more accurate
› The monitoring rate drops 0.3 percentage points from the current AI text filter service model
[Diagram] Moving from the JP service model (1.5%, 5,700,000 monitored messages per month) to the multi-language large model (1.2%, 4,560,000) reduces the monthly monitoring resource by 1,140,000 messages.
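The monitoring-volume arithmetic from these slides, checked end to end:

```python
# Monitoring volume: 380M messages/month, flagged at 1.5% (service model)
# vs. 1.2% (multi-language large model).

total_per_month = 380_000_000
service_rate, large_rate = 0.015, 0.012

service_monthly = round(total_per_month * service_rate)  # 5,700,000
large_monthly = round(total_per_month * large_rate)      # 4,560,000
monthly_saving = service_monthly - large_monthly         # 1,140,000
yearly_saving = monthly_saving * 12                      # 13,680,000
reduction = monthly_saving / service_monthly             # 20%
```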

Slide 68

Slide 68 text

Expected Effectiveness
Expectation 2
[Chart] Service resources monitored over a year (x axis: ×10⁵): the multi-language large model monitors 13,680,000 fewer messages than the JP service model.

Slide 69

Slide 69 text

Expected Effectiveness
Expectation 2
[Chart] One-year monitoring resource (x axis: ×10⁵): the multi-language large model needs 13,680,000 fewer monitored messages than the JP service model, a 20% reduction.

Slide 70

Slide 70 text

Conclusion

Slide 71

Slide 71 text

Conclusion
Conclusion & Future Work
› Conclusion
  › Not easy to understand and put into practice
  › As difficult as it was, it was fun to study
  › The large model is effective
  › More collaboration with other teams is needed

Slide 72

Slide 72 text

Conclusion
Conclusion & Future Work
› Conclusion
  › Not easy to understand and put into practice
  › As difficult as it was, it was fun to study
  › The large model is effective
  › More collaboration with other teams is needed
› Future work
  › Large model hyper-parameter tuning

Slide 73

Slide 73 text

Next Session Info: MLU & MLU Serving

Slide 74

Slide 74 text

Thank you