Slide 1

Slide 1 text


Slide 2

Slide 2 text

Before getting into performance measurement and benchmarking, some questions to answer:
• Why optimize?
• What to optimize?
• What to measure?
• Where to measure?
• What is the representative environment?
• What techniques and tools to use?
• How to interpret measurement results?

Slide 3

Slide 3 text

Execution of a CPI Groovy script in a local environment
Scripts are executed by the JVM of a CPI runtime node using the relevant script engine. The concept and principle of operation are defined in JSR 223 (Scripting for the Java Platform). This suggests an approach: execute a Groovy script for CPI in a local environment / emulator.
• Controlled environment
• Well-defined boundary conditions
• Feature-rich analysis toolbox
Open questions: Is a local environment representative? Can a steady state be reached?
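
To make the JSR 223 mechanism concrete, below is a minimal sketch (in Java) of a local harness that evaluates a Groovy script through the standard javax.script API. It assumes the Groovy distribution, which registers the "groovy" script engine, is on the classpath; the script file name and the payload binding are hypothetical placeholders.

```java
import javax.script.Bindings;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import java.io.FileReader;

public class LocalScriptRunner {
    public static void main(String[] args) throws Exception {
        // Look up the Groovy script engine registered via JSR 223.
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("groovy");

        // Hand test input to the script through bindings, emulating
        // data that a CPI runtime node would pass to the script.
        Bindings bindings = engine.createBindings();
        bindings.put("payload", "<order id=\"1\"/>");

        // Evaluate the script file and print its result.
        Object result = engine.eval(new FileReader("Script.groovy"), bindings);
        System.out.println(result);
    }
}
```

A full emulator would additionally have to provide stand-ins for CPI-specific types such as the Message object passed to processData; that plumbing is omitted here.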

Slide 4

Slide 4 text

Basic approach – stopwatch
1. Capture timestamps before and after execution of the code block, calculate the elapsed time.
2. Save it (in an MPL entry, exchange property, trace file) to enable subsequent analysis.
Basic API – JDK API:
• System.currentTimeMillis()
• System.nanoTime()
Abstraction layer API – stopwatch APIs provided by Apache Camel, Apache Commons Lang, Google Guava, Spring and other frameworks and libraries
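
A minimal sketch of the basic JDK variant follows; the measured method is a dummy placeholder, and in a CPI script the result would be written to an MPL entry or an exchange property rather than printed.

```java
import java.util.concurrent.TimeUnit;

public class StopwatchExample {
    public static void main(String[] args) {
        // System.nanoTime() is monotonic and therefore preferable to
        // System.currentTimeMillis() for measuring elapsed time.
        long start = System.nanoTime();

        transformPayload(); // code block under measurement (placeholder)

        long elapsedMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
        // Save the measurement for subsequent analysis; in CPI, store it
        // in an MPL entry or an exchange property instead of printing.
        System.out.println("Elapsed time: " + elapsedMs + " ms");
    }

    private static void transformPayload() {
        // Dummy workload standing in for the real script logic.
    }
}
```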

Slide 5

Slide 5 text

Detailed analysis – profiling
Profiling is dynamic measurement of performance-relevant metrics of a running application – CPU, memory, I/O, method invocations and execution times, etc.
Instrumenting profilers:
• Based on instrumentation of classes
• Accurate
• Noticeable performance overhead
• Not appropriate for production environments
Sampling profilers:
• Based on frequent periodic collection of all threads’ stack traces and their comparison
• Less accurate (e.g. fast calls, safepoints)
• Less performance overhead
• Can be used in production environments
Some popular Java profilers: VisualVM, JConsole, Java Mission Control, Java Flight Recorder, JProfiler, YourKit, Oracle Developer Studio, Honest Profiler, async-profiler
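
Of the profilers listed above, Java Flight Recorder can also be driven programmatically, which fits a local harness well. Below is a minimal sketch using the jdk.jfr API (available in OpenJDK 11 and later); the workload method is a hypothetical placeholder.

```java
import jdk.jfr.Configuration;
import jdk.jfr.Recording;
import java.nio.file.Paths;

public class FlightRecorderExample {
    public static void main(String[] args) throws Exception {
        // Start a recording with the built-in "default" event settings.
        Configuration config = Configuration.getConfiguration("default");
        try (Recording recording = new Recording(config)) {
            recording.start();

            runScriptUnderTest(); // workload placeholder

            recording.stop();
            // Dump the events for offline analysis, e.g. in Java Mission Control.
            recording.dump(Paths.get("script-profile.jfr"));
        }
    }

    private static void runScriptUnderTest() {
        // Dummy workload standing in for the script execution.
    }
}
```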

Slide 6

Slide 6 text

Performance benchmarking – synthetic tests
Macrobenchmarking:
• Application-level benchmarking
• Requires development of the reference “golden” application and accompanying testing infrastructure
Microbenchmarking:
• Method-level benchmarking, the most granular
• Focuses on a specific part of the source code
Mesobenchmarking:
• Module-level benchmarking, in the middle
• Measures an action / feature, with isolation at the modular or operational level

Slide 7

Slide 7 text

Microbenchmarking – some tools
• Google Caliper: https://github.com/google/caliper
• Java Microbenchmark Harness (JMH): https://openjdk.java.net/projects/code-tools/jmh/
• Microbenchmark Suite (a part of JDK 12+, based on JMH): https://openjdk.java.net/jeps/230
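
For illustration, a minimal JMH benchmark sketch; the string-building workload is an arbitrary stand-in for a code fragment under test, and the annotations and runner come from the org.openjdk.jmh artifacts.

```java
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.options.OptionsBuilder;

import java.util.concurrent.TimeUnit;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Benchmark)
public class ConcatBenchmark {

    @Benchmark
    public String concatWithBuilder() {
        // Arbitrary stand-in for the code fragment under test. Returning
        // the result keeps the JIT from eliminating it as dead code.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 10; i++) {
            sb.append(i);
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // JMH runs warm-up iterations before measurement, which addresses
        // the steady-state concern raised earlier.
        new Runner(new OptionsBuilder()
                .include(ConcatBenchmark.class.getSimpleName())
                .forks(1)
                .build()).run();
    }
}
```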

Slide 8

Slide 8 text

Outro
Analysis should be based on reliable, representative and repeatable emulation. Be clear about what is measured, which metrics are relevant and how measurement results can be interpreted.
Get familiar with best practices and gain hands-on experience. Don’t disregard bad practices – awareness of them helps avoid common problems and pitfalls.
Design of emulators can turn into an expensive venture. In-depth understanding of the emulated components’ architecture and side effects is a must.
Techniques such as profiling and benchmarking require effort to learn and master – be prepared for a steep learning curve.
Don’t overcomplicate things – keep them simple where possible.

Slide 9

Slide 9 text

Dr. Vadim Klimov
SAP Integration Architect
https://people.sap.com/Vadim.Klimov
Thank you