The Art of Java Performance: What, Why, How?

Copyright: Christos Kotselidis

May 08, 2018

Transcript

  1. The Art of Java Performance: What, Why, How? Christos Kotselidis, Lecturer, School of Computer Science, Advanced Processor Technologies (APT) Group, www.kotselidis.net
  2. Quiz: Java vs C. Which is faster? • Java • C
  3. Quiz: Correct answer: Both. It all depends on what, why, and how we measure it…
  4. What? Performance has different meanings: • Raw performance (end-to-end) • Responsiveness • Latency • Throughput • Custom quantified SLAs* (*SLA: Service Level Agreement)
  5. Why? Performance metrics are used to: • Assess SW external qualities • Quantify the effects of optimizations • Understand HW/SW synergies • Pricing?! • …and many other reasons…
  6. How? That’s the tricky part!
  7. How? If not measured properly: • Wrong comparisons • Wrong conclusions/decisions • Biasing • False positives/negatives
  8. How hard can it be? Not quite that simple… (Hands-on in Part 2)
  9. Managed (Java) vs unmanaged (C) languages. Java end-to-end perf. numbers include: • Class loading times • Interpreter times • Compilation times • GC times • …and finally the application code time
  10. What do we measure? Depends on what we want to show or hide: • Peak performance • Start-up times • Throughput • End-to-end performance • All of the above
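To make slides 9 and 10 concrete, here is a minimal sketch (my own illustration, not from the deck) of how the same Java workload can yield very different numbers depending on whether we time the cold, end-to-end run (class loading, interpretation, JIT compilation, and GC all included) or a warmed-up run that is closer to peak performance. The class name and the workload are hypothetical placeholders.

```java
// Hypothetical illustration: the same workload timed cold (first run, includes
// class loading, interpretation, and JIT compilation) and again after warm-up
// (closer to peak performance).
public class WarmupDemo {

    // Placeholder workload; any CPU-bound method would do.
    static long workload() {
        long sum = 0;
        for (int i = 0; i < 10_000_000; i++) {
            sum += (i ^ (i >>> 3)) % 7;
        }
        return sum;
    }

    public static void main(String[] args) {
        // Cold run: end-to-end time dominated by JVM start-up effects.
        long t0 = System.nanoTime();
        long r = workload();
        long coldMs = (System.nanoTime() - t0) / 1_000_000;

        // Warm-up: give the JIT a chance to compile the hot method.
        for (int i = 0; i < 20; i++) {
            r += workload();
        }

        // Warmed run: closer to "peak performance".
        long t1 = System.nanoTime();
        r += workload();
        long warmMs = (System.nanoTime() - t1) / 1_000_000;

        System.out.printf("cold: %d ms, warm: %d ms (result %d)%n", coldMs, warmMs, r);
    }
}
```

A hand-rolled loop like this only illustrates the gap; a proper harness (see the micro-benchmark sketch after the next slide) also handles dead-code elimination, forking, and statistical reporting.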
  11. What do we measure? Depends on the context: • Micro-benchmarks • Large benchmarks • Applications: mobile, desktop, web, enterprise • Hardware, software, or both?
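For micro-benchmarks, a harness such as JMH is a common choice because it separates warm-up from measurement, forks fresh JVMs, and guards against dead-code elimination. A minimal sketch follows; the summing workload, class name, and iteration counts are illustrative assumptions, not part of the talk.

```java
// A minimal JMH micro-benchmark sketch (the summing workload is hypothetical).
// Build with the jmh-core and jmh-generator-annprocess dependencies.
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;

@BenchmarkMode(Mode.AverageTime)          // report average time per operation
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@Warmup(iterations = 5, time = 1)         // separate warm-up iterations
@Measurement(iterations = 10, time = 1)   // measured iterations
@Fork(3)                                  // fresh JVMs to capture run-to-run variance
@State(Scope.Thread)
public class SumBenchmark {

    private int[] data;

    @Setup
    public void setup() {
        data = new int[100_000];
        for (int i = 0; i < data.length; i++) {
            data[i] = i;
        }
    }

    @Benchmark
    public long sum() {
        long total = 0;
        for (int v : data) {
            total += v;
        }
        return total;   // returning the value prevents dead-code elimination
    }
}
```

Mode.AverageTime reports time per operation; Mode.Throughput would report operations per unit time, matching the throughput metric from slide 4.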
  12. Two axes of correctness
    • Experimental design
      - Benchmarks, input sizes, data sets
      - VM parameters, hardware platform
    • Evaluation methodology
      - Which numbers to report (avg, geomean, best, worst)
      - Non-determinism in or out (compilation, GC times, etc.)
      - Statistically rigorous methodologies [1]
    [1] A. Georges, D. Buytaert, and L. Eeckhout. Statistically Rigorous Java Performance Evaluation. In OOPSLA 2007.
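As a sketch of the "non-determinism in or out" point, the standard MXBeans can report how much of a run went to GC and JIT compilation, so that time can be reported alongside (or separated from) the wall-clock number. The class name and placeholder workload below are my own assumptions.

```java
// A sketch of accounting for two sources of non-determinism named on the slide:
// GC time and JIT compilation time, read via the standard management MXBeans.
import java.lang.management.CompilationMXBean;
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class NonDeterminismProbe {

    static long gcMillis() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            total += Math.max(0, gc.getCollectionTime());   // -1 if unsupported
        }
        return total;
    }

    static void workload() {
        // Placeholder that allocates a bit to trigger some GC activity.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 2_000_000; i++) {
            sb.append(i % 10);
        }
        if (sb.length() == 0) System.out.println("unreachable");
    }

    public static void main(String[] args) {
        CompilationMXBean jit = ManagementFactory.getCompilationMXBean();
        boolean jitSupported = jit != null && jit.isCompilationTimeMonitoringSupported();

        long gcBefore = gcMillis();
        long jitBefore = jitSupported ? jit.getTotalCompilationTime() : 0;

        long t0 = System.nanoTime();
        workload();
        long wallMs = (System.nanoTime() - t0) / 1_000_000;

        long gcMs = gcMillis() - gcBefore;
        long jitMs = (jitSupported ? jit.getTotalCompilationTime() : 0) - jitBefore;

        // Report the raw wall time and how much of the run went to GC/JIT,
        // so readers can see whether non-determinism is "in or out".
        System.out.printf("wall: %d ms, GC: %d ms, JIT: %d ms%n", wallMs, gcMs, jitMs);
    }
}
```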
  13. Experimental design
    • Choose representative benchmarks - Not sure? Apply the “five-why” rule
    • Diversify data sets and input sizes
    • Diversify hardware platforms - Unless you optimize for a specific one
    • Pin down the VM version - Change of VM (version, vendor, etc.)? Redo all experiments
    • Diversify VM parameters - Can dramatically change performance - Top influencers: GC, heap sizes, heap layout
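One practical way to follow the "pin down the VM version" and "VM parameters" advice is to record the exact JVM and its flags next to every result. The sketch below uses the standard java.lang.management API; the class name and output format are my own choices.

```java
// Record the JVM name, vendor, version, input arguments, and heap bounds next
// to the results, so experiments can later be matched to their exact configuration.
import java.lang.management.ManagementFactory;
import java.lang.management.RuntimeMXBean;

public class VmConfigLogger {
    public static void main(String[] args) {
        RuntimeMXBean runtime = ManagementFactory.getRuntimeMXBean();

        System.out.println("VM name:    " + runtime.getVmName());
        System.out.println("VM vendor:  " + runtime.getVmVendor());
        System.out.println("VM version: " + runtime.getVmVersion());
        System.out.println("VM flags:   " + runtime.getInputArguments());

        // Heap bound actually in effect for this run (-Xmx etc. influence this).
        Runtime rt = Runtime.getRuntime();
        System.out.println("max heap:   " + rt.maxMemory() / (1024 * 1024) + " MB");
        System.out.println("processors: " + rt.availableProcessors());
    }
}
```

Note that getInputArguments() only reports flags passed explicitly on the command line; GC and heap defaults chosen by the JVM itself will not appear there.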
  14. Evaluation methodology
    • Which numbers to report? Find those that most relate to your app
      - Mobile app | Start-up times
      - Enterprise app | Peak performance
      - Real-time app | Non-deterministic factors, outliers, noise
    • Best vs avg vs worst numbers - Closely related to what we measure
      - Always report stdev
      - Be clear and precise about the reported numbers
    • Always perform “apples-to-apples” comparisons
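As a companion to "always report stdev" and the avg/geomean discussion on slide 12, here is a small sketch of how summary numbers could be computed from repeated runs: arithmetic mean with sample standard deviation, a confidence interval in the spirit of [1], and the geometric mean for aggregating across benchmarks. The run times and class name are placeholders, not real measurements.

```java
// Summarize run times from repeated JVM invocations: mean, standard deviation,
// a 95% confidence interval (normal approximation), and the geometric mean.
import java.util.Arrays;

public class ReportStats {

    static double mean(double[] xs) {
        return Arrays.stream(xs).average().orElse(Double.NaN);
    }

    static double stdev(double[] xs) {
        double m = mean(xs);
        double ss = Arrays.stream(xs).map(x -> (x - m) * (x - m)).sum();
        return Math.sqrt(ss / (xs.length - 1));   // sample standard deviation
    }

    static double geomean(double[] xs) {
        double logSum = Arrays.stream(xs).map(Math::log).sum();
        return Math.exp(logSum / xs.length);
    }

    public static void main(String[] args) {
        double[] runTimesMs = {812, 798, 825, 805, 819};   // placeholder data

        double m = mean(runTimesMs);
        double sd = stdev(runTimesMs);
        // 95% confidence interval with z = 1.96; for only a handful of runs,
        // a Student's t value is the more careful choice [1].
        double halfWidth = 1.96 * sd / Math.sqrt(runTimesMs.length);

        System.out.printf("mean  = %.1f ms, stdev = %.1f ms%n", m, sd);
        System.out.printf("95%% CI = [%.1f, %.1f] ms%n", m - halfWidth, m + halfWidth);
        System.out.printf("geomean = %.1f ms%n", geomean(runTimesMs));
    }
}
```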
  15. Next Slot | Hands On • Java vs C • Java vs JVM • How not to fake results
  16. Thank you! Questions? christos.kotselidis@manchester.ac.uk https://github.com/beehive-lab