Performance Measurement and Benchmarking of Custom Code in SAP Cloud Platform Integration
Event: SAP Online Track 2020
Date: May 31, 2020
Speaker: Vadim Klimov
Session: Performance measurement and benchmarking of custom code in SAP Cloud Platform Integration
Key questions:
• What to optimize?
• What to measure?
• Where to measure?
• What is the representative environment?
• What techniques and tools to use?
• How to interpret measurement results?
Scripts are executed by the JVM of a CPI runtime node using the relevant script engine. The concept and principle of operation are defined in JSR 223 (Scripting for the Java Platform). One approach is to execute a Groovy script for CPI in a local environment / emulator, which offers:
• Controlled environment
• Well-defined boundary conditions
• Feature-rich analysis toolbox
Open questions remain: Is a local environment representative? Can a steady state be reached? A minimal sketch of local script execution follows below.
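As an illustration, here is a minimal sketch of evaluating a Groovy script through the JSR 223 API in a local JVM. The inline script and the 'payload' binding are hypothetical; a real emulator would load the actual CPI script file and bind a mock Message object before evaluation. It assumes the Groovy distribution, which registers the 'groovy' script engine, is on the classpath.

    import javax.script.ScriptEngine
    import javax.script.ScriptEngineManager

    // Obtain the Groovy script engine registered via JSR 223
    ScriptEngine engine = new ScriptEngineManager().getEngineByName('groovy')

    // Bind input data the script expects; 'payload' is a hypothetical variable
    engine.put('payload', 'sample input')

    // Evaluate a trivial inline script; an emulator would evaluate the CPI script file instead
    def result = engine.eval('payload.toUpperCase()')
    assert result == 'SAMPLE INPUT'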
Timestamp-based measurement:
1. Capture the current time before and after execution of the code block, and calculate the elapsed time.
2. Save it (in an MPL entry, exchange property, trace file) to enable subsequent analysis.
APIs that can be used:
• JDK API: System.currentTimeMillis(), System.nanoTime()
• Stopwatch APIs provided by Apache Camel, Apache Commons Lang, Google Guava, Spring and other frameworks and libraries
• An abstraction layer API on top of them
A sketch in a CPI Groovy script follows below.
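Here is a minimal sketch of the timestamp technique inside a CPI Groovy script; the property name and the measured code block are hypothetical placeholders:

    import com.sap.gateway.ip.core.customdev.util.Message

    Message processData(Message message) {
        // 1. Capture timestamps around the code block under measurement
        long start = System.nanoTime()

        // Hypothetical code block being measured
        def body = message.getBody(String)
        message.setBody(body?.trim())

        long elapsedNanos = System.nanoTime() - start

        // 2. Save the measurement as an exchange property for subsequent analysis
        message.setProperty('perf.customCode.elapsedNanos', elapsedNanos)
        return message
    }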
Profilers perform dynamic measurement of performance-relevant metrics of a running application – CPU, memory, I/O, method invocations and execution times, etc. Some popular Java profilers: VisualVM, JConsole, Java Mission Control, Java Flight Recorder, JProfiler, YourKit, Oracle Developer Studio, Honest Profiler, async-profiler.
Instrumenting profilers
• Accurate
• Noticeable performance overhead
• Not appropriate for production environments
Sampling profilers
• Based on frequent periodic collection of all threads' stack traces and their comparison
• Less accurate (e.g. fast calls, safepoints)
• Less performance overhead
• Can be used in production environments
A Java Flight Recorder example follows below.
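For example, on JDK 11 or later a Java Flight Recorder recording of a locally running JVM can be captured either at startup or by attaching to a live process via jcmd; the file names, the emulator.jar artifact and the process id 1234 are placeholders:

    # Start a time-boxed recording at JVM startup
    java -XX:StartFlightRecorder=duration=120s,filename=recording.jfr -jar emulator.jar

    # Or control a recording on an already running JVM
    jcmd 1234 JFR.start name=profiling settings=profile
    jcmd 1234 JFR.dump name=profiling filename=recording.jfr
    jcmd 1234 JFR.stop name=profiling

The resulting recording.jfr file can then be analyzed in Java Mission Control.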
Macrobenchmarking
• Application level benchmarking
• Requires development of the reference "golden" application and accompanying testing infrastructure
Microbenchmarking
• Method level benchmarking, the most granular
• Focus on a specific part of the source code
Mesobenchmarking
• Module level benchmarking, in the middle
• Measures an action / feature, with isolation at the modular or operational level
A method-level microbenchmark sketch follows below.
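As a sketch of method-level microbenchmarking, the following hypothetical JMH benchmark measures a string-building routine written in Groovy. It assumes JMH is on the classpath and a build integration (e.g. the JMH Gradle plugin) performs the required annotation processing; the class and method names are illustrative.

    import org.openjdk.jmh.annotations.Benchmark
    import org.openjdk.jmh.annotations.BenchmarkMode
    import org.openjdk.jmh.annotations.Mode
    import org.openjdk.jmh.annotations.OutputTimeUnit

    import java.util.concurrent.TimeUnit

    class PayloadFormattingBenchmark {

        // Reports the average time per invocation in microseconds
        @Benchmark
        @BenchmarkMode(Mode.AverageTime)
        @OutputTimeUnit(TimeUnit.MICROSECONDS)
        String buildPayload() {
            def sb = new StringBuilder()
            (1..100).each { sb.append('line ').append(it).append('\n') }
            // Returning the result prevents the JIT from eliminating the work as dead code
            return sb.toString()
        }
    }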
Takeaways:
• Be clear about what is measured, what metrics are relevant and how measurement results can be interpreted.
• Get familiar with best practices and gain hands-on experience. Don't disregard bad practices – awareness of them helps avoid common problems and pitfalls.
• Design of emulators can turn into an expensive venture – in-depth understanding of the emulated components' architecture and side effects is a must.
• Techniques such as profiling and benchmarking require effort to learn and master – be prepared for a steep learning curve.
• Don't overcomplicate things – keep them simple where possible.