Slide 1

Performance Testing of and in the Cloud
Dr. Philipp Leitner, [email protected], @xLeitix

Slide 2

Chalmers, https://icet-lab.eu, @IcetLab

Slide 3

Performance Testing of and in the Cloud

Slide 4

Lecture Outline
• Part 1 - Introduction to Software Performance and Benchmarking
• Part 2 - Benchmarking of IaaS Clouds
• Practical Interlude - Cloud Workbench
• Part 3 - Benchmarking in IaaS Clouds

Slide 5

About Software Performance

Slide 6

Different Views

Slide 7

Performance Matters

Slide 8

Factors of Influence on Software Performance
• Code
• Environment (this is where cloud computing comes in)
• Users

Slide 9

Benchmarking IaaS Clouds

Slide 10

Capacity Planning in the Cloud is Hard

Slide 11


Slide 12

What cloud provider should I choose?
Should I go for many small or few large instances?
General-purpose or *-optimized?
Pay for better IOPS or not?
…
➡ Need for Benchmarking

Slide 13

Basic Cloud Benchmarking Approach
[Diagram: a Benchmark Manager provisions an Instance via the Provider API, starts the benchmark on it, collects the results, and finally destroys the instance.]
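Expressed as code, the loop behind this diagram is roughly the sketch below. `CloudProvider`, `Instance`, and `Benchmark` are hypothetical interfaces standing in for a real provider SDK and benchmark harness; only the provision / start benchmark / collect results / destroy control flow is taken from the slide.

```java
import java.util.ArrayList;
import java.util.List;

interface CloudProvider {                      // thin wrapper around a provider API (hypothetical)
    Instance provision(String instanceType);   // acquire and boot a VM
    void destroy(Instance instance);           // release the VM again
}

interface Instance {
    String runCommand(String command);         // e.g. executed over SSH
}

interface Benchmark {
    String installCommand();                   // set up the benchmark on the VM
    String runCommand();                       // execute it and print a metric
}

final class BenchmarkManager {
    private final CloudProvider provider;

    BenchmarkManager(CloudProvider provider) {
        this.provider = provider;
    }

    /** Runs one benchmark several times, each time on a freshly provisioned instance. */
    List<String> run(Benchmark benchmark, String instanceType, int repetitions) {
        List<String> results = new ArrayList<>();
        for (int i = 0; i < repetitions; i++) {
            Instance instance = provider.provision(instanceType);   // provision
            try {
                instance.runCommand(benchmark.installCommand());
                results.add(instance.runCommand(benchmark.runCommand()));  // start benchmark, collect result
            } finally {
                provider.destroy(instance);                          // destroy, even if the benchmark fails
            }
        }
        return results;
    }
}
```

Provisioning a fresh instance per repetition keeps measurements independent of each other, at the cost of paying the start-up time for every run.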

Slide 14

Basic Cloud Benchmarking Approach
CCGrid 2017, “An Approach and Case Study of Cloud Instance Type Selection for Multi-Tier Web Applications”
[Architecture diagram: a CWB Server (with Chef Server, Vagrant, and a Scheduler) provisions instances through the Provider API of an IaaS provider. A JMeter master acts as the load driver, distributing the test plan to JMeter slaves, which send requests to the SUT (the AcmeAir web application backed by MongoDB) and return responses; results flow back to the CWB Server. Each provisioned instance runs a CWB Client and Chef Client for start-up and configuration.]

Slide 15

Example Studies “of” and “in” the Cloud

Slide 16

Example Study 1 - Performance Testing of the Cloud
Study setup:
• Benchmarked 22 cloud configurations using 5 benchmarks
• Two types of experiments:
• Isolated: 300 - 500 repetitions
• Continuous: 15 repetitions per configuration
TOIT 2016, “Patterns in the Chaos - A Study of Performance Variation and Predictability in Public IaaS Clouds”

Slide 17

Results Summary
TOIT 2016, “Patterns in the Chaos - A Study of Performance Variation and Predictability in Public IaaS Clouds”

Slide 18

Results Summary
TOIT 2016, “Patterns in the Chaos - A Study of Performance Variation and Predictability in Public IaaS Clouds”

Slide 19

Observed CPU Models (for m1.small and Azure Small in North America)
TOIT 2016, “Patterns in the Chaos - A Study of Performance Variation and Predictability in Public IaaS Clouds”

Slide 20

Impact of Different Days / Times (for m3.large in Europe)
TOIT 2016, “Patterns in the Chaos - A Study of Performance Variation and Predictability in Public IaaS Clouds”
[Figure: one panel per instance showing IO bandwidth [Mb/s] over the time of day (00:00 to 20:00), grouped by day of the week (Mon to Sun).]

Slide 21

Cloud Workbench
CloudCom 2014, “Cloud WorkBench - Infrastructure-as-Code Based Cloud Benchmarking”
Tool for scheduling cloud experiments
Code: https://github.com/sealuzh/cloud-workbench
Demo: https://www.youtube.com/watch?v=0yGFGvHvobk

Slide 22

A Simple CWB Benchmark
(Optional) Step I: Write a Chef cookbook

Slide 23

A Simple CWB Benchmark
Step II: Define IaaS configuration and schedule

Slide 24

A Simple CWB Benchmark
Step III: Execute and download the result CSV
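The downloaded CSV can then be post-processed with any tooling. A minimal sketch in Java, assuming a simplified format of one numeric result per line and a hypothetical file name `results.csv` (the actual CWB result schema may contain more columns):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Assumed format: one numeric benchmark result per line, no header.
public class ResultSummary {
    public static void main(String[] args) throws IOException {
        double mean = Files.readAllLines(Path.of("results.csv")).stream()
                .filter(line -> !line.isBlank())        // skip empty trailing lines
                .mapToDouble(Double::parseDouble)       // parse each measurement
                .average()
                .orElse(Double.NaN);
        System.out.printf("mean result = %.2f%n", mean);
    }
}
```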

Slide 25

Demo Session I
http://cwb.io

Slide 26

Example Study 2 - Software Performance Testing in the Cloud
Research question: how small a performance regression can we still find?
Study setup:
• Executed 19 software performance tests in different environments
• 4 open source projects written in Java and Go
• Study executed in AWS, Azure, and Google Cloud
• Baseline: bare-metal server in Softlayer / Bluemix
Under submission.
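Asking how small a regression is still detectable ultimately means comparing two samples of benchmark results. As an illustration only (this is not the exact analysis used in the study), a Mann-Whitney U test from Apache Commons Math (commons-math3) can flag whether a candidate version measurably differs from a baseline; the numbers below are invented:

```java
import org.apache.commons.math3.stat.inference.MannWhitneyUTest;

// Two-sample comparison of benchmark results (execution times in ms, made up).
// A small p-value suggests the two versions genuinely differ in performance.
public class RegressionCheck {
    public static void main(String[] args) {
        double[] baseline  = {101.2, 99.8, 100.5, 102.0, 100.9, 101.7};
        double[] candidate = {103.9, 104.5, 102.8, 105.1, 104.0, 103.3};

        double p = new MannWhitneyUTest().mannWhitneyUTest(baseline, candidate);
        System.out.printf("p-value = %.4f -> %s%n", p,
                p < 0.05 ? "likely performance change" : "no detectable change");
    }
}
```

The catch in the cloud is that environment-induced variability widens both samples, so ever more repetitions are needed before a small regression becomes statistically visible.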

Slide 27

Background - JMH Microbenchmarks
MSR 2018, “An Evaluation of Open-Source Software Microbenchmark Suites for Continuous Performance Assessment”
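JMH (the Java Microbenchmark Harness) is the de-facto standard harness for Java microbenchmarks. A minimal example looks roughly as follows; the benchmark itself is invented for illustration and not taken from the studied projects:

```java
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.infra.Blackhole;

@State(Scope.Thread)                      // per-thread benchmark state
@BenchmarkMode(Mode.AverageTime)          // report average time per invocation
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class StringConcatBenchmark {

    private String left = "foo";
    private String right = "bar";

    // JMH invokes this method many times per iteration and reports the average time.
    @Benchmark
    public void concat(Blackhole bh) {
        bh.consume(left + right);         // Blackhole prevents dead-code elimination
    }
}
```

Built with the standard JMH Maven archetype, such a class is typically packaged into a benchmarks.jar and executed with java -jar, which runs warmup and measurement iterations and prints the average execution time per invocation.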

Slide 28

Demo Session II

Slide 29

Summary - Variability of Software Benchmark Results
Under submission.

Slide 30

Sources of Variability
Under submission.
[Figure: RSD of benchmark results, broken down per trial, per instance, and in total, for four configurations: AWS CPU / etcd-2, Azure Std / etcd-2, AWS CPU / log4j2-5, and GCE Mem / etcd-4.]
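The decomposition in the figure is based on the relative standard deviation (RSD = standard deviation / mean). A small sketch with made-up numbers shows why the total RSD can be much larger than the per-instance RSD when instances of the same type differ from each other:

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

final class RsdExample {
    // RSD in percent: sample standard deviation divided by the mean
    static double rsd(List<Double> xs) {
        double mean = xs.stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
        double var = xs.stream().mapToDouble(x -> (x - mean) * (x - mean)).sum() / (xs.size() - 1);
        return Math.sqrt(var) / mean * 100.0;
    }

    public static void main(String[] args) {
        // hypothetical execution times (ns) measured on two instances of the same type
        Map<String, List<Double>> perInstance = new TreeMap<>(Map.of(
                "instance-1", List.of(6.1e7, 6.3e7, 6.2e7),
                "instance-2", List.of(9.5e7, 9.4e7, 9.6e7)));

        perInstance.forEach((id, xs) ->
                System.out.printf("%s: per-instance RSD = %.1f%%%n", id, rsd(xs)));

        List<Double> all = perInstance.values().stream()
                .flatMap(List::stream)
                .collect(Collectors.toList());
        // pooled over both instances: dominated by the gap between the two instances
        System.out.printf("total RSD = %.1f%%%n", rsd(all));
    }
}
```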

Slide 31

Sources of Variability
Under submission.
[Figure: Instance Variability - bleve-2 on GCE CPU; average execution time [ns] per instance, for 50 instances.]

Slide 32

Sources of Variability
Under submission.
[Figure: Instance Variability - bleve-4 on Azure CPU; average execution time [ns] per instance, for 50 instances.]

Slide 33

What to do?
• Estimate the needed repetitions in advance
• Experiment interleaving (see the sketch below)
Abedi and Brecht, “Conducting Repeatable Experiments in Highly Variable Clouds” (ICPE 2017)
Kalibera and Jones, “Rigorous Benchmarking in Reasonable Time” (ISMM 2013)
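A minimal sketch of the interleaving idea, in the spirit of Abedi and Brecht (ICPE 2017); the instance type names are just examples. Instead of running all repetitions of one configuration back to back, configurations are shuffled within each round, so that slow drift in the cloud environment affects every configuration roughly equally:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

final class InterleavedSchedule {
    // Builds a randomized, round-based execution order over the given configurations.
    static List<String> build(List<String> configurations, int rounds) {
        List<String> schedule = new ArrayList<>();
        for (int round = 0; round < rounds; round++) {
            List<String> order = new ArrayList<>(configurations);
            Collections.shuffle(order);      // randomize the order within each round
            schedule.addAll(order);
        }
        return schedule;
    }

    public static void main(String[] args) {
        // e.g. three instance types, five interleaved rounds
        System.out.println(build(List.of("m3.large", "m4.large", "c4.large"), 5));
    }
}
```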

Slide 34

Key Takeaways

Slide 35

References
Philipp Leitner, Jürgen Cito (2016). Patterns in the Chaos - A Study of Performance Variation and Predictability in Public IaaS Clouds. ACM Transactions on Internet Technology, 16(3), pp. 15:1–15:23. New York, NY, USA.
Christian Davatz, Christian Inzinger, Joel Scheuner, Philipp Leitner (2017). An Approach and Case Study of Cloud Instance Type Selection for Multi-Tier Web Applications. In Proceedings of the 17th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2017), pp. 534–543. Piscataway, NJ, USA.
Joel Scheuner, Philipp Leitner, Jürgen Cito, Harald Gall (2014). Cloud WorkBench - Infrastructure-as-Code Based Cloud Benchmarking. In Proceedings of the 6th IEEE International Conference on Cloud Computing Technology and Science (CloudCom 2014).
Christoph Laaber, Philipp Leitner (2018). An Evaluation of Open-Source Software Microbenchmark Suites for Continuous Performance Assessment. In Proceedings of the 15th International Conference on Mining Software Repositories (MSR 2018).