The Art of Java Performance: What, Why, How?

Copyright: Christos Kotselidis

May 08, 2018

Transcript

  1. The Art of Java Performance:
    What, Why, How?
    Christos Kotselidis
    Lecturer
    School of Computer Science
    Advanced Processors Technology (APT) Group
    www.kotselidis.net

  2. Quiz
    Java C
    Which is faster?
    • Java
    • C

  3. Quiz
    Correct Answer:
    It all depends on what, why and how we measure it…
    Both

  4. What?
    Performance has different meanings:
    • Raw performance (end-to-end)
    • Responsiveness
    • Latency
    • Throughput
    • Custom quantified SLAs*
    *SLA: Service Level Agreement
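
    As a concrete illustration of the latency and throughput entries above: the same measurement yields both,
    just expressed in different units. A minimal hand-rolled sketch (the class and method names, the work()
    body and the iteration count are made up for illustration); note that in a single-threaded loop the two
    are simply reciprocals of each other and only diverge once concurrency or batching is involved.

        import java.util.concurrent.TimeUnit;

        public class LatencyVsThroughput {

            // Hypothetical unit of work; stands in for the operation being measured.
            static long work(long x) {
                long acc = x;
                for (int i = 0; i < 10_000; i++) {
                    acc = acc * 31 + i;
                }
                return acc;
            }

            public static void main(String[] args) {
                final int iterations = 100_000;
                long blackhole = 0;

                long start = System.nanoTime();
                for (int i = 0; i < iterations; i++) {
                    blackhole += work(i);
                }
                long elapsedNs = System.nanoTime() - start;
                double elapsedSec = elapsedNs / (double) TimeUnit.SECONDS.toNanos(1);

                // Latency: average time per operation.
                System.out.printf("avg latency: %d ns/op%n", elapsedNs / iterations);
                // Throughput: operations completed per unit of time.
                System.out.printf("throughput:  %.0f ops/s%n", iterations / elapsedSec);
                System.out.println(blackhole); // keep the loop from being optimized away
            }
        }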

  5. Why?
    Performance metrics are used to:
    • Assess SW external qualities
    • Quantify the effects of optimizations
    • Understand HW/SW synergies
    • Pricing?!
    • …and many other reasons…

  6. How?
    That’s the tricky part!

  7. How?
    If not measured properly:
    • Wrong comparisons
    • Wrong conclusions/decisions
    • Biasing
    • False positives/negatives

  8. How hard can it be?
    Not quite…
    (Hands on in Part 2)

  9. Managed (Java) vs Unmanaged (C) languages
    Java end-to-end perf. numbers include:
    • Class loading times
    • Interpreter times
    • Compilation times
    • GC times
    • …and finally the application code time
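
    One way to see these costs is to time the same call at start-up and again after warm-up: the first timing
    folds in class loading, interpretation and JIT compilation, while the later one mostly measures compiled
    application code. A minimal sketch (the class and method names and the loop counts are illustrative
    placeholders):

        public class WarmupEffect {

            // Hypothetical workload; the exact body does not matter.
            static double compute(int n) {
                double acc = 0;
                for (int i = 1; i <= n; i++) {
                    acc += Math.sqrt(i);
                }
                return acc;
            }

            static long timeOnce(int n) {
                long start = System.nanoTime();
                compute(n);
                return System.nanoTime() - start;
            }

            public static void main(String[] args) {
                // First call: includes class loading and interpreted execution.
                System.out.println("cold run: " + timeOnce(1_000_000) + " ns");

                // Give the JIT compiler a chance to compile compute().
                for (int i = 0; i < 10_000; i++) {
                    compute(1_000);
                }

                // Now the timing is dominated by compiled application code.
                System.out.println("warm run: " + timeOnce(1_000_000) + " ns");
            }
        }

    On HotSpot, running with -XX:+PrintCompilation makes the compilation activity behind the gap between the
    two numbers visible.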

  10. What do we measure?
    Depends on what we want to show or hide
    • Peak performance
    • Start-up times
    • Throughput
    • End-to-end performance
    • All the above
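
    For peak performance and throughput in particular, hand-rolled timing loops are fragile; a harness such as
    JMH separates warm-up iterations from measurement iterations and forks fresh JVMs. A sketch of what such a
    benchmark could look like, assuming JMH is on the classpath (the class, field and method names and the
    summing workload are placeholders):

        import java.util.concurrent.TimeUnit;
        import org.openjdk.jmh.annotations.*;

        @BenchmarkMode({Mode.Throughput, Mode.AverageTime})
        @OutputTimeUnit(TimeUnit.MICROSECONDS)
        @Warmup(iterations = 5, time = 1)        // discarded: covers interpreter and JIT warm-up
        @Measurement(iterations = 10, time = 1)  // reported: steady-state ("peak") numbers
        @Fork(3)                                 // fresh JVM invocations expose run-to-run variation
        @State(Scope.Thread)
        public class PeakPerformanceBench {

            int[] data;

            @Setup
            public void setup() {
                data = new java.util.Random(42).ints(10_000).toArray();
            }

            @Benchmark
            public long sum() {
                long s = 0;
                for (int v : data) {
                    s += v;
                }
                return s; // returning the result keeps dead-code elimination at bay
            }
        }

    For start-up rather than steady-state numbers, JMH's Mode.SingleShotTime with zero warm-up iterations is
    the closer fit.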

  11. What do we measure?
    Depends on the context
    • Micro-benchmarks
    • Large benchmarks
    • Applications
    - Mobile, Desktop, Web, Enterprise
    • Hardware, Software, or both?

  12. Two axes of correctness
    • Experimental Design
    - Benchmarks, input sizes, data sets
    - VM parameters, hardware platform
    • Evaluation Methodology
    - Which numbers to report (avg, geomean, best, worst)
    - Non-determinism in or out (compilation, GC times, etc.)
    - Statistically rigorous methodologies [1]
    [1] A. Georges, D. Buytaert, L. Eeckhout. Statistically Rigorous Java Performance Evaluation. In OOPSLA 2007.
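
    In practice, the methodology of [1] boils down to reporting a mean with a confidence interval over several
    fresh JVM invocations instead of a single best run. A minimal sketch of those summary statistics (the run
    times are invented, and the 1.96 z-value is a large-sample approximation; with only a few runs a Student's
    t value is the more rigorous choice):

        import java.util.Arrays;

        public class SummaryStats {

            public static void main(String[] args) {
                // Hypothetical execution times (ms) from ten separate JVM invocations.
                double[] runs = {812, 797, 805, 841, 799, 803, 820, 808, 795, 811};

                double mean = Arrays.stream(runs).average().orElse(Double.NaN);
                double variance = Arrays.stream(runs)
                        .map(x -> (x - mean) * (x - mean))
                        .sum() / (runs.length - 1);
                double stdev = Math.sqrt(variance);
                // 95% confidence interval around the mean (z = 1.96).
                double ci = 1.96 * stdev / Math.sqrt(runs.length);

                System.out.printf("mean  = %.1f ms%n", mean);
                System.out.printf("stdev = %.1f ms%n", stdev);
                System.out.printf("95%% CI = [%.1f, %.1f] ms%n", mean - ci, mean + ci);
            }
        }

    The geometric mean mentioned above is the usual aggregate when combining such per-benchmark means across a
    suite.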

  13. Experimental Design
    • Choose representative benchmarks
    - Not sure? Apply “five-why” rule
    • Diversify data sets and input sizes
    • Diversify hardware platform
    - Unless you optimize for a specific one
    • Pin down VM version
    - Change of VM (version, vendor, etc.)?
    - Redo all experiments
    • Diversify VM parameters
    - Can dramatically change performance
    - Top influencers: GC, heap sizes, heap layout
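
    Pinning down the VM version and flags is easier when the harness records them alongside every result. A
    small sketch using the standard java.lang.management API (the class name and the choice of what to print
    are suggestions, not part of the slides):

        import java.lang.management.GarbageCollectorMXBean;
        import java.lang.management.ManagementFactory;
        import java.lang.management.RuntimeMXBean;

        public class RecordVmConfig {

            public static void main(String[] args) {
                RuntimeMXBean rt = ManagementFactory.getRuntimeMXBean();
                System.out.println("VM       : " + rt.getVmVendor() + " "
                        + rt.getVmName() + " " + rt.getVmVersion());
                System.out.println("Flags    : " + rt.getInputArguments()); // -Xmx, -XX:..., etc.
                System.out.println("Max heap : "
                        + Runtime.getRuntime().maxMemory() / (1024 * 1024) + " MB");
                for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                    System.out.println("GC       : " + gc.getName());
                }
            }
        }

    Logging this with each run also makes the "redo all experiments" rule auditable after a VM change.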

  14. Evaluation Methodology
    • Which numbers to report?
    - Find those that most relate to your app
    - Mobile app | Startup times
    - Enterprise app | Peak performance
    - Real-time app | Non-deterministic factors, outliers, noise
    • Best vs Avg vs Worst numbers
    - Closely related to what we measure
    - Always report stdev
    - Be clear and precise about the reported numbers
    • Always perform “apples-to-apples” comparison

  15. Next Slot | Hands On
    • Java vs C
    • Java vs JVM
    • How not to fake results

  16. Thank you!

    Questions?

    [email protected]

    https://github.com/beehive-lab
