Slide 1

Slide 1 text

Measure, don’t guess: Benchmarking stories from the trenches

Slide 2

Slide 2 text

What is benchmarking?

Slide 3

Slide 3 text

What is benchmarking? BenchmarkETing?

Slide 4

Slide 4 text

No content

Slide 5

Slide 5 text

What is benchmarking?

Slide 6

Slide 6 text

99.999% science

Slide 7

Slide 7 text

All by myself *OpenJDK Performance Tactic Index

Slide 8

Slide 8 text

So You Want to Write a (Micro)Benchmark
1. Read a reputable paper on JVMs and micro-benchmarking.
2. Always include a warmup phase which runs your test kernel all the way through, enough to trigger all initializations and compilations before timing phase(s).
3. Always run with -XX:+PrintCompilation, -verbose:gc, etc., so you can verify that the compiler and other parts of the JVM are not doing unexpected work during your timing phase.
   a. Print messages at the beginning and end of timing and warmup phases, so you can verify that there is no output during the timing phase.
4. Be aware of the difference between -client and -server, and OSR and regular compilations. Also be aware of the effects of -XX:+TieredCompilation, which mixes client and server modes together.
5. Be aware of initialization effects. Do not print for the first time during your timing phase, since printing loads and initializes classes. Do not load new classes outside of the warmup/reporting phase, unless you are testing class loading.
6. Be aware of deoptimization and recompilation effects.
7. Use appropriate tools to read the compiler's mind, and expect to be surprised by the code it produces. Inspect the code yourself before forming theories about what makes something faster or slower.
8. Reduce noise in your measurements. Run your benchmark on a quiet machine, and run it several times, discarding outliers.
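To make the checklist concrete, here is a minimal sketch of a hand-rolled harness that follows points 2 and 3a: a warmup phase, explicit markers around the timing phase, and a sink that consumes the result. The class name, workload and iteration counts are illustrative assumptions, not code from the talk.

```java
// Run with: java -XX:+PrintCompilation -verbose:gc NaiveHarness
// Any compiler or GC output between the timing markers means the JVM is
// still doing unexpected work while you measure.
public class NaiveHarness {

    // The "test kernel": the code whose cost we want to measure (illustrative).
    static long kernel(int n) {
        long acc = 0;
        for (int i = 0; i < n; i++) {
            acc += i * 31L;
        }
        return acc;
    }

    public static void main(String[] args) {
        int n = 100_000;
        long sink = 0;

        // Warmup phase: run the kernel all the way through, enough times to
        // trigger class initialization and JIT compilation before timing.
        System.out.println("== warmup start ==");
        for (int i = 0; i < 10_000; i++) {
            sink += kernel(n);
        }
        System.out.println("== warmup end ==");

        // Timing phase, bracketed by markers so stray JVM output is visible.
        System.out.println("== timing start ==");
        long start = System.nanoTime();
        for (int i = 0; i < 1_000; i++) {
            sink += kernel(n);
        }
        long elapsed = System.nanoTime() - start;
        System.out.println("== timing end ==");

        // Consume the result so the computation cannot be eliminated as dead code.
        System.out.println("avg ns per call = " + (elapsed / 1_000.0) + " (sink=" + sink + ")");
    }
}
```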

Slide 9

Slide 9 text

Easy peasy….DONE!

Slide 10

Slide 10 text

No content

Slide 11

Slide 11 text

Déjà vu? Benchmarking field access

Slide 12

Slide 12 text

E = mc^2

Slide 13

Slide 13 text

Fastest CPU EVER

Slide 14

Slide 14 text

Who stole my code? NOTE: the assembly is shown in AT&T syntax, i.e. op SRC, DEST

Slide 15

Slide 15 text

In Java-ish... The optimized version executes the load of the field just once per test and (incredibly) gets the same results too! * the actual optimization depends on the JVM version
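A sketch of the kind of naive loop that produces this surprise (class, field and loop counts are my own assumptions, not taken from the deck): the body looks like it pays for one field load per iteration, but since nothing can change the field inside the loop, the JIT is free to load it once and reuse the value, as the disassembly on the previous slide suggests.

```java
public class FieldAccessBench {

    static class Holder {
        int value = 42;   // plain, non-volatile field
    }

    public static void main(String[] args) {
        Holder holder = new Holder();

        long start = System.nanoTime();
        long sum = 0;
        for (int i = 0; i < 1_000_000_000; i++) {
            // We think we are measuring one field load per iteration, but the
            // JIT may hoist holder.value out of the loop, so the "field access"
            // is paid once, not a billion times (exact behaviour depends on
            // the JVM version).
            sum += holder.value;
        }
        long elapsed = System.nanoTime() - start;

        System.out.println("sum = " + sum + ", ns/op = " + (elapsed / 1_000_000_000.0));
    }
}
```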

Slide 16

Slide 16 text

USE JMH USE JMH USE JMH “A badly written benchmark can lead you to wrong conclusions that will make you focus on useless optimizations, confusing yourself and wasting others’ time” - An anonymous performance engineer - *Effects of a poorly written benchmark
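For reference, this is roughly what a minimal JMH benchmark for the field-access case looks like. The class name and workload are illustrative, but the annotations and Blackhole are the real JMH API; returning the value (or consuming it explicitly) is what stops the JIT from treating the load as dead code.

```java
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.infra.Blackhole;

@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class FieldAccessBenchmark {

    int value = 42;

    @Benchmark
    public int readField() {
        // Returning the value hands it to JMH's blackhole machinery,
        // so the load cannot be optimized away.
        return value;
    }

    @Benchmark
    public void readFieldExplicitBlackhole(Blackhole bh) {
        // Equivalent, consuming the result explicitly.
        bh.consume(value);
    }
}
```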

Slide 17

Slide 17 text

A bad benchmark (and its meaningless results) also misleads others. How many times has a badly written blog post pushed developers to adopt bad practices? 😢

Slide 18

Slide 18 text

JMH TLDR “JMH is a Java harness for building, running, and analysing nano/micro/milli/macro benchmarks written in Java and other languages targeting the JVM.” - OpenJDK Code Tools -
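As a usage sketch, JMH benchmarks are typically launched from the generated benchmarks.jar or programmatically through the Runner API. The option values below are arbitrary examples, and the include pattern refers to the FieldAccessBenchmark sketch shown earlier.

```java
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class BenchmarkLauncher {
    public static void main(String[] args) throws RunnerException {
        Options opts = new OptionsBuilder()
                .include("FieldAccessBenchmark")  // regex matching the benchmark class
                .warmupIterations(5)              // values chosen only for illustration
                .measurementIterations(5)
                .forks(2)
                .build();
        new Runner(opts).run();
    }
}
```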

Slide 19

Slide 19 text

Test Harness
“Is a collection of software and test data configured to test a program unit by running it under varying conditions and monitoring its behavior and outputs. ... The typical objectives of a test harness are to:
● Automate the testing process.
● Execute test suites of test cases.
● Generate associated test reports.”
- Wikipedia: Test Harness -

Slide 20

Slide 20 text

Déjà vu? (2)

Slide 21

Slide 21 text

Under the hood Generated code

Slide 22

Slide 22 text

Under the hood
● Method under benchmark
● nanoTime() is a costly operation, called only once
● isDone is a volatile variable set by a timer
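The sketch below is a simplified approximation of the measurement loop JMH generates around the method under benchmark; the real generated class is considerably more involved (loop unrolling, blackholes, padding against false sharing). All names here are illustrative. The point of the slide is visible in the structure: nanoTime() is called only at the phase boundaries, while each pass of the loop pays just one cheap volatile read of isDone, which a timer thread flips when the iteration time is up.

```java
public class GeneratedLoopSketch {

    // Set by a timer thread when the measurement iteration time is up.
    static volatile boolean isDone;

    static int valueUnderTest = 42;

    // Stand-in for the @Benchmark method.
    static int benchmarkMethod() {
        return valueUnderTest;
    }

    public static void main(String[] args) {
        // Timer flips the volatile flag after one second.
        new java.util.Timer(true).schedule(new java.util.TimerTask() {
            @Override public void run() { isDone = true; }
        }, 1000);

        long operations = 0;
        long sink = 0;
        long start = System.nanoTime();   // costly call, executed only once
        do {
            sink += benchmarkMethod();    // method under benchmark
            operations++;
        } while (!isDone);                // cheap volatile read on every pass
        long elapsed = System.nanoTime() - start;

        System.out.println(operations + " ops in " + elapsed + " ns (sink=" + sink + ")");
    }
}
```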

Slide 23

Slide 23 text

Just works?

Slide 24

Slide 24 text

No content

Slide 25

Slide 25 text

No content

Slide 26

Slide 26 text

Purpose is everything “Benchmark numbers don’t matter on their own. It’s important what models you derive from those numbers.”

Slide 27

Slide 27 text

No content

Slide 28

Slide 28 text

Making sense of data: Active vs. passive benchmarking
● Passive Benchmarking
○ Benchmarks are commonly executed and then ignored until they have completed. That is passive benchmarking, where the main objective is the collection of benchmark data. Data is not Information.
● Active Benchmarking
○ With active benchmarking, you analyze performance while the benchmark is still running (not just after it's done), using other tools. You can confirm that the benchmark tests what you intend it to, and that you understand what that is. Data becomes Information. This can also identify the true limiters of the system under test, or of the benchmark itself.
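One concrete way to make a JMH run "active" (my own example, not something prescribed by the slide) is to attach JMH's built-in profilers, so GC and stack data are collected alongside the scores instead of the run producing a single number to be filed away. The include pattern again refers to the FieldAccessBenchmark sketch above.

```java
import org.openjdk.jmh.profile.GCProfiler;
import org.openjdk.jmh.profile.StackProfiler;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class ActiveBenchmarking {
    public static void main(String[] args) throws RunnerException {
        Options opts = new OptionsBuilder()
                .include("FieldAccessBenchmark")
                .addProfiler(GCProfiler.class)     // allocation rate and GC pressure per benchmark
                .addProfiler(StackProfiler.class)  // coarse view of where threads actually spend time
                .build();
        new Runner(opts).run();
    }
}
```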

Slide 29

Slide 29 text

No content

Slide 30

Slide 30 text

No content

Slide 31

Slide 31 text

Let’s get our hands dirty!!!

Slide 32

Slide 32 text

To recap Benchmarks are experiments intended to reproduce in a controlled environment exactly the same behaviour that you would otherwise experience in the wild

Slide 33

Slide 33 text

To recap (yes, I should have told you before 😛)
Software Engineer
● Mostly don’t care about underlying hardware and data specifics
● Work based on abstract principles, actual formal science
● Care about writing beautiful, readable, composable, reusable … code
Software Performance Engineer
● Explore complex interactions between hardware, software, and data
● Work based on empirical evidence, more similar to natural science
● Sacrifice all good software principles to squeeze out the last microsecond

Slide 34

Slide 34 text

References
● Code examples - https://github.com/mariofusco/jmh-playground
● So You Want to Write a Micro-Benchmark - https://wiki.openjdk.org/display/HotSpot/MicroBenchmarks
● Active Benchmarking - https://www.brendangregg.com/activebenchmarking.html
● JMH - https://github.com/openjdk/jmh
● JMH Samples - https://github.com/openjdk/jmh/tree/master/jmh-samples/src/main/java/org/openjdk/jmh/samples
● VM Options Explorer - https://chriswhocodes.com/
● HotSpot disassembly plugin - https://chriswhocodes.com/hsdis/
● Environment OS Tuning - https://github.com/ionutbalosin/jvm-performance-benchmarks?tab=readme-ov-file#os-tuning
● JMH Visualizer - https://jmh.morethan.io/
● Mastering the mechanics of Java method invocation - https://blogs.oracle.com/javamagazine/post/mastering-the-mechanics-of-java-method-invocation
● What’s Wrong With My Benchmark Results? Studying Bad Practices in JMH Benchmarks