Slide 1

Slide 1 text

Holly Cummins, Devoxx UK
when benchmarks go bad: what I learned from measuring performance wrong

Slide 2

Slide 2 text

No content

Slide 3

Slide 3 text

No content

Slide 4

Slide 4 text

measure don’t guess

Slide 5

Slide 5 text

measure don’t guess is just the beginning

Slide 6

Slide 6 text

quarkusio/spring-quarkus-perf-comparison

Slide 7

Slide 7 text

was this a good measurement?

Slide 10

Slide 10 text

what was wrong?

Slide 11

Slide 11 text

No content

Slide 12

Slide 12 text

building a benchmark is easy.

Slide 13

Slide 13 text

building a benchmark is easy. building a good benchmark is hard.

Slide 14

Slide 14 text

you are not measuring what you think you are measuring

Slide 15

Slide 15 text

yanmwytyam: you are not measuring what you think you are measuring

Slide 16

Slide 16 text

workload: what’s run
environment: how it’s run

Slide 17

Slide 17 text

anatomy of a benchmark

Slide 26

Slide 26 text

reproducibility

Slide 29

Slide 29 text

you are not measuring what you think you are; you are measuring noise

Slide 30

Slide 30 text

variation caused by environment issues ~40%

Slide 31

Slide 31 text

state is noise

Slide 32

Slide 32 text

caches are noise

Slide 33

Slide 33 text

sync && echo 3 > /proc/sys/vm/drop_caches

Slide 34

Slide 34 text

container images get cached

Slide 35

Slide 35 text

reproducibility stale database images (oops)

Slide 37

Slide 37 text

realism

Slide 40

Slide 40 text

even on rack-mounted servers, cpu frequency is not deterministic

Slide 41

Slide 41 text

- disable turbo boost
- disable Intel SpeedShift
- set scaling_governor to ‘performance’
- set process priority (no more nice guy)
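
A sketch of how those settings could be applied on Linux with the intel_pstate driver (the sysfs paths assume intel_pstate, $BENCH_PID is a placeholder, and everything here needs root):

```shell
# disable turbo boost
echo 1 > /sys/devices/system/cpu/intel_pstate/no_turbo

# Intel SpeedShift (hardware P-states) usually has to be disabled in firmware,
# or by booting with intel_pstate=no_hwp on the kernel command line

# set every core's scaling governor to 'performance'
for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
  echo performance > "$g"
done

# raise the benchmark process's priority ($BENCH_PID is a placeholder)
renice -n -20 -p "$BENCH_PID"
```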

Slide 42

Slide 42 text

reproducibility load generation on the same machine

Slide 44

Slide 44 text

reproducibility if you can’t do that, set cpu affinity

Slide 45

Slide 45 text

everything is connected to everything
problems may have asymmetric effects

Slide 46

Slide 46 text

No content

Slide 47

Slide 47 text

quarkus can handle 1.8x more load

Slide 48

Slide 48 text

quarkus can handle 1.8x more load
we thought we were isolating the app to 4 cores, but the app, load generator, and database were sharing all 16

Slide 49

Slide 49 text

quarkus can handle 1.8x more load
we thought we were isolating the app to 4 cores, but the app, load generator, and database were sharing all 16
- Hibernate second-level cache was also enabled between these runs
- Note the November 2025 dates on these charts
- These runs had some other issues we discovered later; look at the latest data for our most-correct comparison
quarkus can handle 3x more load

Slide 50

Slide 50 text

# disable turbo boost
echo 1 > /sys/devices/system/cpu/intel_pstate/no_turbo
# drop pagecache, dentries and inodes
sync
sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
taskset --cpu-list 0-3 java -Xms512m -Xmx512m -XX:+UseNUMA -XX:+UnlockDiagnosticVMOptions -XX:+DebugNonSafepoints -jar

Slide 52

Slide 52 text

no one would run a real app like this

Slide 53

Slide 53 text

realism

Slide 54

Slide 54 text

realism be cautious of micro-benchmarks

Slide 55

Slide 55 text

realism warm up before measuring

Slide 56

Slide 56 text

realism have data in the database

Slide 57

Slide 57 text

No content

Slide 58

Slide 58 text

I tried your benchmark, and Quarkus is only 1.4x faster

Slide 59

Slide 59 text

realism hardware schedulers make a big difference

Slide 60

Slide 60 text

relevance

Slide 61

Slide 61 text

are we measuring the right thing?

Slide 62

Slide 62 text

which is faster, a or b?

Slide 63

Slide 63 text

what even is “faster”? which is faster, a or b?

Slide 64

Slide 64 text

No content

Slide 65

Slide 65 text

No content

Slide 66

Slide 66 text

what problem are we trying to solve?

Slide 67

Slide 67 text

“performance” could be

Slide 68

Slide 68 text

“performance” could be throughput

Slide 69

Slide 69 text

“performance” could be: response times, throughput

Slide 70

Slide 70 text

“performance” could be: response times, throughput, memory footprint

Slide 71

Slide 71 text

“performance” could be: response times, throughput, memory footprint, start time

Slide 72

Slide 72 text

response times, throughput, memory footprint, start time

Slide 73

Slide 73 text

response times, throughput, memory footprint, start time, operational elasticity

Slide 74

Slide 74 text

response times, throughput, memory footprint, start time, operational elasticity, hardware requirements

Slide 75

Slide 75 text

response times, throughput, memory footprint, start time, operational elasticity, hardware requirements, user satisfaction

Slide 81

Slide 81 text

measuring response times the wrong way

Slide 83

Slide 83 text

No content

Slide 84

Slide 84 text

No content

Slide 85

Slide 85 text

coordinated omission

Slide 86

Slide 86 text

coordinated omission 2.5s 2.1s

Slide 88

Slide 88 text

coordinated omission 2.5s 2.1s 1.9s 2.3s

Slide 90

Slide 90 text

coordinated omission 2.5s 2.1s 1.9s 2.3s 20.9s 21.4s

Slide 92

Slide 92 text

coordinated omission 2.5s 2.1s 1.9s 2.3s 20.9s 21.4s 1.8s 2.2s

Slide 93

Slide 93 text

coordinated omission 2.5s 2.1s 1.9s 2.3s 20.9s 21.4s 1.8s 2.2s what?! probably should be >20s

Slide 94

Slide 94 text

coordinated omission
- if your system under test starts slowing down, the injector slows down too
- makes your measurements look better than they should
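
A toy simulation of the effect (the numbers and the service-time profile are invented for illustration): a closed-loop injector that waits for each response before sending the next barely notices a 10-second stall, while an open-loop schedule that charges queueing time to every intended request shows it clearly.

```python
from statistics import mean

# Hypothetical service-time profile: 0.1 s normally, one 10 s stall starting at t=5.
def service(t):
    return 10.0 if 5.0 <= t < 6.0 else 0.1

# Closed-loop ("coordinated") injector: only sends the next request after the
# previous response arrives, so it sends almost nothing during the stall.
closed = []
t = 0.0
while t < 20.0:
    d = service(t)
    closed.append(d)
    t += d

# Open-loop injector: one request *intended* per second for 20 s; latency is
# measured from the intended send time, so queueing behind the stall counts.
open_lat = []
busy_until = 0.0
for i in range(20):
    intended = float(i)
    begin = max(intended, busy_until)
    d = service(begin)
    busy_until = begin + d
    open_lat.append(busy_until - intended)

print(f"closed-loop mean latency: {mean(closed):.2f}s")   # looks healthy
print(f"open-loop mean latency:   {mean(open_lat):.2f}s") # exposes the stall
```

The closed-loop mean stays well under a second because the injector simply stopped measuring while the system was slow; the open-loop mean is an order of magnitude higher.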

Slide 95

Slide 95 text

No content

Slide 96

Slide 96 text

not ok

Slide 97

Slide 97 text

not ok: JMeter

Slide 98

Slide 98 text

not ok: JMeter, wrk

Slide 99

Slide 99 text

not ok JMeter wrk ok

Slide 100

Slide 100 text

not ok: JMeter, wrk
ok: Gatling

Slide 101

Slide 101 text

not ok: JMeter, wrk
ok: Gatling, wrk2

Slide 102

Slide 102 text

not ok: JMeter, wrk
ok: Gatling, wrk2, HyperFoil

Slide 103

Slide 103 text

measuring start time the wrong way
- never use self-reported values
- measure time to first request

Slide 104

Slide 104 text

ts=$(_date)
while ! (curl -sf http://localhost:8080/fruits > /dev/null)
do
  # Spin here for max precision
  :
done
TTFR=$((($(_date) - ts)/1000000))

Slide 106

Slide 106 text

# Start the client loop before the application so it's already polling
(
  while true; do
    # Try to open a TCP connection to the target host and port and use file descriptor 3 for it
    if exec 3<>/dev/tcp/"$HOST"/"$PORT"; then
      # Send HTTP GET request to the server
      if ! echo -e "GET $URL_PATH HTTP/1.0\r\nHost: $HOST\r\nConnection: close\r\n\r\n" >&3; then
        exec 3>&-
        continue
      fi
      # Read the HTTP response status line and extract the status code
      if ! read -r _ status_code _ <&3; then
        exec 3>&-
        continue
      fi
      # Close the file descriptor
      exec 3>&-
      # If we got a 200 OK response, exit the loop
      if [[ "$status_code" == "200" ]]; then
        break
      fi
    fi
    # Spin here and do nothing rather than waiting some arbitrary unlucky timing
  done
  # Record the timestamp when we successfully got a 200 response
  _date > "$END_TS_FILE"
) 2>/dev/null &
CURL_PID=$!

# Record start time and launch the application.
# Redirect and exec inside the subshell so the application process directly
# replaces the subshell, making $APP_PID its actual PID.
ts=$(_date)
( exec $RUN_CMD &>"$LOG_FILE" ) &
APP_PID=$!

# Ensure cleanup on exit (e.g. on timeout)
trap "kill -15 $APP_PID $CURL_PID 2>/dev/null; wait $APP_PID 2>/dev/null || true; rm -f $END_TS_FILE" EXIT

# Wait for the client loop to get a successful response
wait $CURL_PID 2>/dev/null || true
TTFR=$(($(cat "$END_TS_FILE") - ts))

Slide 107

Slide 107 text

measuring memory footprint

Slide 109

Slide 109 text

do not just measure heap; measure “resident set size” (RSS)
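
One way to read RSS from outside the JVM, on Linux (a sketch; `/proc` is Linux-specific, and this reports the same number the deck's `ps -o rss=` snippets do):

```python
# Read a process's resident set size from /proc (Linux-only).
def rss_kb(pid="self"):
    """Return VmRSS in kB for the given pid ("self" = the current process)."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])  # line looks like "VmRSS:  123456 kB"
    return None

print(rss_kb())  # RSS of this Python process, in kB
```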

Slide 110

Slide 110 text

all these metrics affect each other

Slide 111

Slide 111 text

do not try to measure RSS and max throughput in the same experiment

Slide 112

Slide 112 text

No content

Slide 113

Slide 113 text

there is no “true” RSS:

Slide 114

Slide 114 text

there is no “true” RSS: you can trade off RSS against CPU by shrinking max heap

Slide 115

Slide 115 text

ts=$(_date)
while ! (curl -sf http://localhost:8080/fruits > /dev/null)
do
  # Spin here for max precision
  :
done
TTFR=$((($(_date) - ts)/1000000))
RSS=`ps -o rss= -p $CURRENT_PID | sed 's/^ *//g'`

Slide 117

Slide 117 text

ts=$(_date)
while ! (curl -sf http://localhost:8080/fruits > /dev/null)
do
  # Spin here for max precision
  :
done
TTFR=$((($(_date) - ts)/1000000))
RSS=`ps -o rss= -p $CURRENT_PID | sed 's/^ *//g'`

it’s ok to measure time-to-first-request and RSS in the same experiment

Slide 119

Slide 119 text

science 101: do not vary more than one thing at once

Slide 120

Slide 120 text

you are not measuring what you think you are, especially if you’re measuring more than one thing

Slide 121

Slide 121 text

are we measuring what we think we are?

Slide 122

Slide 122 text

https://quarkus.io/blog/reactive-crud-performance-case-study/

Slide 123

Slide 123 text

https://quarkus.io/blog/reactive-crud-performance-case-study/

                  spring          quarkus
initial test      338 req/s       390 req/s
with faster disk  [not measured]  25,599 req/s

Slide 124

Slide 124 text

https://quarkus.io/blog/reactive-crud-performance-case-study/

                  spring          quarkus
initial test      338 req/s       390 req/s
with faster disk  [not measured]  25,599 req/s

these are suspiciously similar

Slide 128

Slide 128 text

relevance
is the bottleneck what you think?

Slide 129

Slide 129 text

No content

Slide 130

Slide 130 text

0.14% faster

Slide 131

Slide 131 text

0.14% faster, 0.26% better

Slide 132

Slide 132 text

metrics are suspiciously similar: 0.14% faster, 0.26% better

Slide 133

Slide 133 text

does this tell us quarkus is as fast as rust?

Slide 135

Slide 135 text

does this tell us quarkus is as fast as rust? no.

Slide 136

Slide 136 text

it does tell us switching to rust for performance might be a waste of effort

Slide 138

Slide 138 text

No content

Slide 139

Slide 139 text

load generation on a different machine can be a fail

Slide 140

Slide 140 text

network can become the bottleneck

Slide 141

Slide 141 text

all-in-one topology
- pin load driver, database, and app to named cores
- use taskset, not --cpus
- use numa for memory affinity
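
A sketch of that pinning on one 16-core box (the core ranges, Postgres data directory, $APP_JAR, and the wrk2 invocation are all placeholders):

```shell
# Pin each component of an all-in-one benchmark box to its own cores.
taskset --cpu-list 0-3  java -jar "$APP_JAR" &                       # application under test
taskset --cpu-list 4-7  postgres -D /var/lib/pgdata &                # database
taskset --cpu-list 8-15 wrk2 -t2 -c100 -d30s -R5000 http://localhost:8080/fruits  # load driver

# With multiple NUMA nodes, numactl also keeps memory local to the pinned cores:
# numactl --cpunodebind=0 --membind=0 -- java -jar "$APP_JAR"
```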

Slide 142

Slide 142 text

how do you know what you’re measuring?

Slide 143

Slide 143 text

active benchmarking: how do you know what you’re measuring?

Slide 144

Slide 144 text

brendan gregg’s USE method

Slide 145

Slide 145 text

for every resource, check
- utilization
- saturation
- errors
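
On Linux, the USE checklist maps onto stock tools roughly like this (a sketch, not an exhaustive mapping):

```shell
# Utilization: how busy is each CPU / disk?
mpstat -P ALL 1        # per-CPU busy %
iostat -xz 1           # per-device disk utilization

# Saturation: is work queueing?
vmstat 1               # run-queue length (r column), swap activity

# Errors: is anything failing quietly?
dmesg | tail                                    # kernel-level errors
netstat -s | grep -i -E 'error|drop|retrans'    # network errors/drops
```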

Slide 148

Slide 148 text

Running 30s test @ http://localhost:8080/fruits
  2 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    14.88ms   14.36ms  343.93ms   90.45%
    Req/Sec    6564.03   1260.71   7405.00    96.77%
  203485 requests in 30.01s, 162.60MB read
Requests/sec: 6780.57
Transfer/sec: 5.42MB
Socket errors: connectionErrors 2007, requestTimeouts 0

Slide 149

Slide 149 text

No content

Slide 150

Slide 150 text

we had to tune keep-alive to avoid spring errors

Slide 151

Slide 151 text

                                spring          quarkus
app with code that deadlocked   –               1.75 req/s
initial test with correct code  338 req/s       390 req/s
with faster disk                [not measured]  25,599 req/s

https://quarkus.io/blog/reactive-crud-performance-case-study/

Slide 152

Slide 152 text

                                spring          quarkus
app with code that deadlocked   –               1.75 req/s
initial test with correct code  338 req/s       390 req/s
with faster disk                [not measured]  25,599 req/s

https://quarkus.io/blog/reactive-crud-performance-case-study/

2022-06-17 15:20:44,507 ERROR [org.hib.rea.errors] (vert.x-eventloop-thread-45) HR000057: Failed to execute statement [select zipcode0_.zip as zip1_0_0_, zipcode0_.city as city2_0_0_, zipcode0_.county as county3_0_0_, zipcode0_.state as state4_0_0_, zipcode0_.timezone as timezone5_0_0_, zipcode0_.type as type6_0_0_ from ZipCode zipcode0_ where zipcode0_.zip=?]: could not load an entity: [com.baeldung.quarkus_project.ZipCode#08231]: java.util.concurrent.CompletionException: io.vertx.core.impl.NoStackTraceThrowable: Timeout
	at java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:332)
	at java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:347)
	at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:636)
	at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
	at java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2162)
	at io.vertx.core.Future.lambda$toCompletionStage$2(Future.java:362)
	...
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:503)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: io.vertx.core.impl.NoStackTraceThrowable: Timeout

Slide 153

Slide 153 text

                                spring          quarkus
app with code that deadlocked   –               1.75 req/s
initial test with correct code  338 req/s       390 req/s
with faster disk                [not measured]  25,599 req/s

https://quarkus.io/blog/reactive-crud-performance-case-study/

2022-06-17 15:20:44,507 ERROR [org.hib.rea.errors] (vert.x-eventloop-thread-45) HR000057: Failed to execute statement [select zipcode0_.zip as zip1_0_0_, ...]: could not load an entity: [com.baeldung.quarkus_project.ZipCode#08231]: java.util.concurrent.CompletionException: io.vertx.core.impl.NoStackTraceThrowable: Timeout
	...
Caused by: io.vertx.core.impl.NoStackTraceThrowable: Timeout

caused by misuse of @Transactional

Slide 154

Slide 154 text

                                spring          quarkus
app with code that deadlocked   –               1.75 req/s
initial test with correct code  338 req/s       390 req/s
with faster disk                [not measured]  25,599 req/s

https://quarkus.io/blog/reactive-crud-performance-case-study/

2022-06-17 15:20:44,507 ERROR [org.hib.rea.errors] (vert.x-eventloop-thread-45) HR000057: Failed to execute statement [select zipcode0_.zip as zip1_0_0_, ...]: could not load an entity: [com.baeldung.quarkus_project.ZipCode#08231]: java.util.concurrent.CompletionException: io.vertx.core.impl.NoStackTraceThrowable: Timeout
	...
Caused by: io.vertx.core.impl.NoStackTraceThrowable: Timeout

caused by misuse of @Transactional
and, yes, it’s a bad error message and we fixed that :)

Slide 155

Slide 155 text

https://quarkus.io/blog/reactive-crud-performance-case-study/

                                spring          quarkus
app with code that deadlocked   –               1.75 req/s
initial test with correct code  338 req/s       390 req/s
with faster disk                [not measured]  25,599 req/s

Slide 160

Slide 160 text

No content

Slide 161

Slide 161 text

- can we get the same answer repeatedly?

Slide 162

Slide 162 text

- can we get the same answer repeatedly?
- are results deterministic?

Slide 163

Slide 163 text

- can we get the same answer repeatedly?
- are results deterministic?
- is this close to real-world?

Slide 164

Slide 164 text

- can we get the same answer repeatedly?
- are results deterministic?
- is this close to real-world?
- is this representative of the way applications will be run?

Slide 165

Slide 165 text

- can we get the same answer repeatedly?
- are results deterministic?
- are we reporting useful metrics?
- is this close to real-world?
- is this representative of the way applications will be run?

Slide 166

Slide 166 text

- can we get the same answer repeatedly?
- are results deterministic?
- are we reporting useful metrics?
- does this help us make a decision?
- is this close to real-world?
- is this representative of the way applications will be run?

Slide 167

Slide 167 text

- can we get the same answer repeatedly?
- are results deterministic?
- are we reporting useful metrics?
- does this help us make a decision?
- is it answering a question we actually care about?
- is this close to real-world?
- is this representative of the way applications will be run?

Slide 168

Slide 168 text

it’s easy to make all three worse

Slide 169

Slide 169 text

for improvements, choose one :(

Slide 170

Slide 170 text

No content

Slide 171

Slide 171 text

- evaluating before-and-after of performance tweaks
- comparing frameworks against each other

Slide 172

Slide 172 text

- evaluating before-and-after of performance tweaks
- comparing frameworks against each other
- capacity planning
- cost estimation
- carbon footprint estimation

Slide 173

Slide 173 text

- evaluating before-and-after of performance tweaks
- comparing frameworks against each other
- answering questions
- making decisions
- capacity planning
- cost estimation
- carbon footprint estimation

Slide 174

Slide 174 text

beware the mcnamara fallacy

Slide 175

Slide 175 text

check in with the real world: guides decisions, validates measurements

Slide 176

Slide 176 text

No content

Slide 177

Slide 177 text

3 times denser deployments without sacrificing availability and response times of services

Slide 179

Slide 179 text

the distillate

Slide 180

Slide 180 text

the distillate
- measuring what you want to measure is hard

Slide 181

Slide 181 text

the distillate
- measuring what you want to measure is hard
- aim for reproducibility

Slide 182

Slide 182 text

the distillate
- measuring what you want to measure is hard
- aim for reproducibility
- isolate application on its own hardware

Slide 183

Slide 183 text

the distillate
- measuring what you want to measure is hard
- aim for reproducibility
- isolate application on its own hardware
- disable turbo-boost

Slide 184

Slide 184 text

the distillate
- measuring what you want to measure is hard
- aim for reproducibility
- isolate application on its own hardware
- disable turbo-boost
- aim for realism

Slide 185

Slide 185 text

the distillate
- measuring what you want to measure is hard
- aim for reproducibility
- isolate application on its own hardware
- disable turbo-boost
- aim for realism
- prod-like hardware, app, and data

Slide 186

Slide 186 text

the distillate
- measuring what you want to measure is hard
- aim for reproducibility
- isolate application on its own hardware
- disable turbo-boost
- aim for realism
- prod-like hardware, app, and data
- aim for relevance

Slide 187

Slide 187 text

the distillate
- measuring what you want to measure is hard
- aim for reproducibility
- isolate application on its own hardware
- disable turbo-boost
- aim for realism
- prod-like hardware, app, and data
- aim for relevance
- similar answers for different things usually mean you only measured a sneaky bottleneck

Slide 188

Slide 188 text

the distillate
- measuring what you want to measure is hard
- aim for reproducibility
- isolate application on its own hardware
- disable turbo-boost
- aim for realism
- prod-like hardware, app, and data
- aim for relevance
- similar answers for different things usually mean you only measured a sneaky bottleneck
- vary one thing at a time

Slide 189

Slide 189 text

the distillate
- measuring what you want to measure is hard
- aim for reproducibility
- isolate application on its own hardware
- disable turbo-boost
- aim for realism
- prod-like hardware, app, and data
- aim for relevance
- similar answers for different things usually mean you only measured a sneaky bottleneck
- vary one thing at a time
- validate measurements by active benchmarking

Slide 190

Slide 190 text

the distillate
- measuring what you want to measure is hard
- aim for reproducibility
- isolate application on its own hardware
- disable turbo-boost
- aim for realism
- prod-like hardware, app, and data
- aim for relevance
- similar answers for different things usually mean you only measured a sneaky bottleneck
- vary one thing at a time
- validate measurements by active benchmarking
- measure a metric with business relevance

Slide 191

Slide 191 text

https://hollycummins.com/when-benchmarks-go-bad/
