
Presentation WASP Software Technology Cluster 2025

xLeitix
September 11, 2025

Transcript

  1. (Slide 5) Luca Traini (2022). Exploring Performance Assurance Practices and Challenges in Agile Software Development: An Ethnographic Study. Empirical Software Engineering. Key findings: "performance tests were executed at most once per release" and "35 performance requirements were never tested in the last 6 releases, i.e., since 2 years".
  2. (Slide 14) Costa, Bezemer, Leitner, Andrzejak (2021). What's Wrong with My Benchmark Results? Studying Bad Practices in JMH Benchmarks. IEEE Transactions on Software Engineering. Bad practices often massively affect benchmark results.
  3. (Slide 15) Some issues and solutions: benchmarks are difficult to write. Possible remedies are benchmark bug finders based on static analysis and benchmark generators (see the first sketch after this list). References:
     - Costa, Bezemer, Leitner, Andrzejak (2021). What's Wrong with My Benchmark Results? Studying Bad Practices in JMH Benchmarks. IEEE Transactions on Software Engineering.
     - Jangali, Tang, Alexandersson, Leitner, Yang, Shang (2022). Automated Generation and Evaluation of JMH Microbenchmark Suites from Unit Tests. IEEE Transactions on Software Engineering.
     - Rodriguez-Cancio, Combemale, Baudry (2016). Automatic Microbenchmark Generation to Prevent Dead Code Elimination and Constant Folding. In ASE '16.
  4. (Slide 16) Laaber, Leitner (2018). An Evaluation of Open-source Software Microbenchmark Suites for Continuous Performance Assessment. In Proceedings of the 15th International Conference on Mining Software Repositories (MSR).
  5. (Slides 17–19) Laaber, Würsten, Gall, Leitner (2020). Dynamically Reconfiguring Software Microbenchmarks: Reducing Execution Time without Sacrificing Result Quality. In Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE), pp. 989–1001, New York, NY, USA.
  6. (Slide 20) Some issues and solutions: benchmarks take a long time to execute. Possible remedies are smart reconfiguration and benchmark selection (see the second sketch after this list). References:
     - Laaber, Würsten, Gall, Leitner (2020). Dynamically Reconfiguring Software Microbenchmarks: Reducing Execution Time without Sacrificing Result Quality. In ESEC/FSE 2020.
     - Traini, Cortellessa, Di Pompeo, Tucci (2023). Towards Effective Assessment of Steady State Performance in Java Software: Are We There Yet? Empirical Software Engineering.
     - Laaber, Gall, Leitner (2021). Applying Test Case Prioritization to Software Microbenchmarks. Empirical Software Engineering.
  7. (Slide 24) Maybe LLMs will generate optimal code for us? Some early results are under submission.
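
To make the "benchmarks are difficult to write" point from slide 15 concrete, here is a minimal JMH sketch of the best-known bad practice studied by Costa et al.: a computed value that is neither returned nor consumed can be removed by the JIT (dead code elimination), so the benchmark measures an empty method. The class and field names are illustrative, not taken from the paper; the fix (returning the value or sinking it into a `Blackhole`) is standard JMH usage.

```java
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.infra.Blackhole;

@State(Scope.Thread)
public class DeadCodeExample { // illustrative name

    double x = 42.0;

    // Bad practice: the result is unused, so the JIT may eliminate
    // Math.log entirely and the benchmark reports near-zero cost.
    @Benchmark
    public void measureWrong() {
        Math.log(x);
    }

    // Fix 1: return the value, forcing the computation to happen.
    @Benchmark
    public double measureRight() {
        return Math.log(x);
    }

    // Fix 2: consume the value via a Blackhole; useful when several
    // intermediate results must be kept alive.
    @Benchmark
    public void measureRightBlackhole(Blackhole bh) {
        bh.consume(Math.log(x));
    }
}
```

A static-analysis bug finder of the kind referenced on the slide can flag `measureWrong` simply by checking for benchmark methods whose computed values are never returned or consumed.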
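For slide 20, the sketch below illustrates the core idea behind dynamic reconfiguration: instead of running a fixed number of iterations, keep measuring only until recent results look stable, then stop early to save time. This is a simplified illustration, not Laaber et al.'s implementation; the stability criterion (coefficient of variation over a sliding window), the window size, and the threshold are assumed values, and the paper evaluates several such criteria.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

public class DynamicStopping { // illustrative name

    static final int WINDOW = 5;              // assumed: iterations checked for stability
    static final double CV_THRESHOLD = 0.01;  // assumed: stop when variability < 1%
    static final int MAX_ITERATIONS = 100;    // hard upper bound, as with fixed configs

    /** Runs the workload until the recent timings stabilize or the bound is hit. */
    static double measure(Supplier<?> workload) {
        Deque<Double> window = new ArrayDeque<>();
        double last = 0;
        for (int i = 0; i < MAX_ITERATIONS; i++) {
            long start = System.nanoTime();
            workload.get();
            last = (double) (System.nanoTime() - start);
            window.addLast(last);
            if (window.size() > WINDOW) window.removeFirst();
            // Stop early once the window of recent timings is stable.
            if (window.size() == WINDOW && coefficientOfVariation(window) < CV_THRESHOLD) {
                break;
            }
        }
        return last;
    }

    static double coefficientOfVariation(Deque<Double> xs) {
        double mean = xs.stream().mapToDouble(Double::doubleValue).average().orElse(0);
        double var = xs.stream().mapToDouble(x -> (x - mean) * (x - mean)).average().orElse(0);
        return Math.sqrt(var) / mean;
    }

    public static void main(String[] args) {
        double ns = measure(() -> Math.log(System.nanoTime()));
        System.out.printf("last iteration: %.0f ns%n", ns);
    }
}
```

The design point is that stopping is a per-benchmark, data-driven decision rather than a one-size-fits-all iteration count, which is how the reconfiguration approach reduces execution time without sacrificing result quality.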