Slide 1

The Art of Java Performance: What, Why, How?
Christos Kotselidis
Lecturer, School of Computer Science
Advanced Processor Technologies (APT) Group
www.kotselidis.net

Slide 2

Quiz
Which is faster?
• Java
• C

Slide 3

Quiz
Correct answer: Both.
It all depends on what, why, and how we measure it…

Slide 4

What?
Performance has different meanings:
• Raw performance (end-to-end)
• Responsiveness
• Latency
• Throughput
• Custom, quantified SLAs*
*SLA: Service Level Agreement
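As a rough illustration of how latency and throughput answer different questions about the same code, here is a minimal hand-rolled sketch. The workload `parseWork`, the call counts, and the one-second window are all made up for illustration; a proper harness (discussed later) handles warm-up and statistics far more carefully.

```java
import java.util.concurrent.TimeUnit;

public class LatencyVsThroughput {

    // Hypothetical workload standing in for "the operation we care about".
    static long parseWork(int i) {
        return Long.parseLong(Integer.toString(i)) * 31;
    }

    public static void main(String[] args) {
        long sink = 0; // consume results so the JIT cannot discard the work

        // Latency: how long does one operation take (here: averaged over many calls)?
        int calls = 1_000_000;
        long start = System.nanoTime();
        for (int i = 0; i < calls; i++) {
            sink += parseWork(i);
        }
        long elapsed = System.nanoTime() - start;
        System.out.printf("avg latency: %d ns/op%n", elapsed / calls);

        // Throughput: how many operations complete in a fixed time window?
        long window = TimeUnit.SECONDS.toNanos(1);
        long ops = 0;
        long end = System.nanoTime() + window;
        while (System.nanoTime() < end) {
            sink += parseWork((int) ops++);
        }
        System.out.printf("throughput: %d ops/s%n", ops);

        System.out.println("(ignore) " + sink);
    }
}
```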

Slide 5

Why?
Performance metrics are used to:
• Assess external SW qualities
• Quantify the effects of optimizations
• Understand HW/SW synergies
• Pricing?!
• …and many other reasons…

Slide 6

How? That’s the tricky part!

Slide 7

How?
If performance is not measured properly, we get:
• Wrong comparisons
• Wrong conclusions/decisions
• Bias
• False positives/negatives
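A common way to end up with wrong comparisons is a naive micro-benchmark whose result is never used, so the JIT compiler is free to eliminate the work entirely. The sketch below is illustrative only (the `compute` method and the iteration count are invented): it contrasts a loop whose result is discarded with one whose result is consumed, and the first can report almost arbitrarily and misleadingly fast times.

```java
public class MisleadingBenchmark {

    // Hypothetical workload; what matters is that it is pure computation.
    static double compute(int i) {
        return Math.sqrt(i) * Math.log(i + 1);
    }

    public static void main(String[] args) {
        final int iterations = 50_000_000;

        // Flawed: the result is thrown away, so after JIT compilation the loop
        // body may be eliminated and the "benchmark" measures next to nothing.
        long t0 = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            compute(i);
        }
        long flawed = System.nanoTime() - t0;

        // Slightly better: the result is accumulated and printed, so the work
        // cannot be optimized away (warm-up and statistics are still missing).
        double sink = 0;
        long t1 = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            sink += compute(i);
        }
        long consumed = System.nanoTime() - t1;

        System.out.printf("discarded result: %d ms, consumed result: %d ms (sink=%f)%n",
                flawed / 1_000_000, consumed / 1_000_000, sink);
    }
}
```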

Slide 8

How hard can it be?
Not quite as easy as it looks… (hands-on in Part 2)

Slide 9

Managed (Java) vs unmanaged (C) languages
Java end-to-end performance numbers include:
• Class loading times
• Interpreter times
• Compilation times
• GC times
• …and finally, the application code time
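One way to make these VM contributions visible is through the standard java.lang.management beans, which expose accumulated JIT compilation time, GC time, and loaded class counts. The sketch below is only an illustration; the allocation loop is a placeholder for real application work.

```java
import java.lang.management.ClassLoadingMXBean;
import java.lang.management.CompilationMXBean;
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.ArrayList;
import java.util.List;

public class EndToEndBreakdown {

    public static void main(String[] args) {
        long start = System.nanoTime();

        // Placeholder application work: allocate enough to trigger some GC activity.
        List<int[]> data = new ArrayList<>();
        for (int i = 0; i < 200_000; i++) {
            data.add(new int[64]);
            if (data.size() > 10_000) {
                data.clear();
            }
        }

        long wallMs = (System.nanoTime() - start) / 1_000_000;

        CompilationMXBean jit = ManagementFactory.getCompilationMXBean();
        ClassLoadingMXBean classes = ManagementFactory.getClassLoadingMXBean();
        long gcMs = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            gcMs += gc.getCollectionTime(); // accumulated GC time in ms
        }

        System.out.printf("wall clock: %d ms%n", wallMs);
        System.out.printf("JIT compilation: %d ms%n",
                jit.isCompilationTimeMonitoringSupported() ? jit.getTotalCompilationTime() : -1);
        System.out.printf("GC: %d ms, loaded classes: %d%n",
                gcMs, classes.getLoadedClassCount());
    }
}
```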

Slide 10

What do we measure?
Depends on what we want to show or hide:
• Peak performance
• Start-up times
• Throughput
• End-to-end performance
• All of the above
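The gap between start-up and peak performance can be made concrete by timing the same work over repeated iterations within one JVM run: early iterations pay for class loading, interpretation, and JIT compilation, while later ones approximate peak performance. The workload and iteration count in this sketch are purely illustrative.

```java
public class StartupVsPeak {

    // Illustrative workload: a tight arithmetic loop, enough for the JIT to care about.
    static long work() {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sum += i * 31L;
        }
        return sum;
    }

    public static void main(String[] args) {
        long sink = 0;
        for (int i = 0; i < 30; i++) {
            long t = System.nanoTime();
            sink += work();
            long us = (System.nanoTime() - t) / 1_000;
            // The first few iterations (cold: class loading, interpretation) are
            // typically much slower than the later, JIT-compiled ones (closer to peak).
            System.out.printf("iteration %2d: %d us%n", i, us);
        }
        System.out.println("(ignore) " + sink);
    }
}
```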

Slide 11

What do we measure?
Depends on the context:
• Micro-benchmarks
• Large benchmarks
• Applications: mobile, desktop, web, enterprise
• Hardware, software, or both?
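For micro-benchmarks in particular, a dedicated harness such as JMH (OpenJDK's Java Microbenchmark Harness) takes care of warm-up, forking, dead-code elimination, and reporting. A minimal sketch, assuming the org.openjdk.jmh dependency is available and using string concatenation purely as a stand-in workload:

```java
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;

@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@Warmup(iterations = 5)       // discard warm-up iterations (class loading, JIT)
@Measurement(iterations = 10) // measured iterations
@Fork(3)                      // repeat in fresh JVMs to capture run-to-run variation
public class StringConcatBenchmark {

    private String a = "hello";
    private String b = "world";

    // Returning the result hands it to JMH, preventing dead-code elimination.
    @Benchmark
    public String concat() {
        return a + b;
    }
}
```

Such benchmarks are typically built with the JMH annotation processor and run through the generated runner jar rather than a plain `main` method.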

Slide 12

Two axes of correctness
• Experimental design
  - Benchmarks, input sizes, data sets
  - VM parameters, hardware platform
• Evaluation methodology
  - Which numbers to report (avg, geomean, best, worst)
  - Non-determinism in or out (compilation, GC times, etc.)
  - Statistically rigorous methodologies [1]

[1] A. Georges, D. Buytaert, L. Eeckhout. Statistically Rigorous Java Performance Evaluation. In OOPSLA 2007.
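In the spirit of [1], the sketch below reports an average with a confidence interval over measurements from several VM invocations, rather than a single best number. The sample values are placeholders, and the 1.96 factor assumes roughly normal errors; for small samples the Student's t value should be used instead, as noted in the comment.

```java
import java.util.Arrays;

public class ConfidenceInterval {

    public static void main(String[] args) {
        // Placeholder: execution times (ms) from, say, 10 separate VM invocations.
        double[] times = {512, 498, 530, 505, 521, 499, 540, 510, 503, 517};

        double mean = Arrays.stream(times).average().orElse(Double.NaN);

        double sumSq = 0;
        for (double t : times) {
            sumSq += (t - mean) * (t - mean);
        }
        double stdev = Math.sqrt(sumSq / (times.length - 1)); // sample standard deviation

        // ~95% confidence interval; for n = 10 the Student's t value (~2.26)
        // is more appropriate than the normal-distribution 1.96.
        double halfWidth = 1.96 * stdev / Math.sqrt(times.length);

        System.out.printf("mean = %.1f ms, stdev = %.1f ms, 95%% CI = [%.1f, %.1f]%n",
                mean, stdev, mean - halfWidth, mean + halfWidth);
    }
}
```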

Slide 13

Experimental Design
• Choose representative benchmarks
  - Not sure? Apply the “five whys” rule
• Diversify data sets and input sizes
• Diversify hardware platforms
  - Unless you optimize for a specific one
• Pin down the VM version
  - Changed VM (version, vendor, etc.)? Redo all experiments
• Diversify VM parameters
  - They can dramatically change performance
  - Top influencers: GC, heap sizes, heap layout
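Pinning down the VM version and parameters is easier if every experiment records them automatically; the standard RuntimeMXBean and system properties expose this information. A small sketch of such a preamble (the output format is arbitrary):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.RuntimeMXBean;

public class RecordSetup {

    public static void main(String[] args) {
        RuntimeMXBean runtime = ManagementFactory.getRuntimeMXBean();

        // VM identity: redo the experiments if any of these change.
        System.out.println("vm.name    = " + System.getProperty("java.vm.name"));
        System.out.println("vm.vendor  = " + System.getProperty("java.vm.vendor"));
        System.out.println("vm.version = " + System.getProperty("java.vm.version"));

        // VM parameters actually in effect (e.g. -Xmx, -Xms, GC selection flags).
        System.out.println("vm.args    = " + runtime.getInputArguments());

        // Hardware/OS context for the run.
        System.out.println("os         = " + System.getProperty("os.name")
                + " " + System.getProperty("os.arch"));
        System.out.println("cpus       = " + Runtime.getRuntime().availableProcessors());
        System.out.println("max heap   = " + Runtime.getRuntime().maxMemory() / (1024 * 1024) + " MB");
    }
}
```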

Slide 14

Evaluation Methodology
• Which numbers to report?
  - Find those that relate most to your app
  - Mobile app | start-up times
  - Enterprise app | peak performance
  - Real-time app | non-deterministic factors, outliers, noise
• Best vs avg vs worst numbers
  - Closely related to what we measure
  - Always report the standard deviation
  - Be clear and precise about the reported numbers
• Always perform “apples-to-apples” comparisons
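A minimal sketch of reporting best, average, worst, and the standard deviation together, rather than cherry-picking one of them; the iteration times are placeholders for numbers produced by an actual harness.

```java
import java.util.DoubleSummaryStatistics;
import java.util.stream.DoubleStream;

public class ReportSummary {

    public static void main(String[] args) {
        // Placeholder iteration times in ms; in practice these come from the harness.
        double[] times = {41.2, 39.8, 45.6, 40.1, 39.9, 52.3, 40.4, 41.0};

        DoubleSummaryStatistics stats = DoubleStream.of(times).summaryStatistics();
        double mean = stats.getAverage();
        double variance = DoubleStream.of(times)
                .map(t -> (t - mean) * (t - mean))
                .sum() / (times.length - 1);

        // Best alone hides the outliers; worst alone exaggerates them; report all of it.
        System.out.printf("best = %.1f ms, avg = %.1f ms, worst = %.1f ms, stdev = %.1f ms%n",
                stats.getMin(), mean, stats.getMax(), Math.sqrt(variance));
    }
}
```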

Slide 15

Next Slot | Hands On
• Java vs C
• Java vs JVM
• How not to fake results

Slide 16

Thank you! Questions?
[email protected]
https://github.com/beehive-lab