Are Comprehensive Quality Models Necessary for Evaluating Software Quality?

by Klaus Lochmann, Jasmin Ramadani and Stefan Wagner

Transcript

  1. www.uni-stuttgart.de Stefan Wagner PROMISE 2013 Baltimore, USA 9 October 2013

     Are Comprehensive Quality Models Necessary for Evaluating Software Quality? @prof_wagnerst, joint work with Klaus Lochmann and Jasmin Ramadani
  2. "We deployed a bug prediction algorithm across Google, and found

    no identifiable change in developer behavior." Lewis et al., ICSE'13
  3. "Quality is a complex and multi-faceted concept... it is also

    the source of great confusion." –David A. Garvin
  4. Example decomposition in a quality model:
     Instrument: Gendarme "Avoid Uncalled Private Code", PMD "Unused Private Method"
     → Measure: Statically unused method
     → Product factor: Usefulness of method
     → Quality attribute: Analyzability (Maintainability)
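The chain on this slide, from concrete tool rules up to a quality attribute, can be pictured as a small data structure. The sketch below is purely illustrative; the class and variable names are assumptions, not the authors' implementation.

```python
# Illustrative sketch of the decomposition on slide 4; names are assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class Measure:
    name: str               # e.g. "Statically unused method"
    instruments: List[str]  # tool rules that collect the raw data

@dataclass
class ProductFactor:
    name: str               # e.g. "Usefulness of method"
    measures: List[Measure]

@dataclass
class QualityAttribute:
    name: str               # e.g. "Analyzability" (part of Maintainability)
    factors: List[ProductFactor]

unused_method = Measure(
    name="Statically unused method",
    instruments=["Gendarme: Avoid Uncalled Private Code",
                 "PMD: Unused Private Method"],
)
usefulness = ProductFactor("Usefulness of method", [unused_method])
analyzability = QualityAttribute("Analyzability", [usefulness])
```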
  5. RQ 1: What is the performance of focused quality models built using machine learning algorithms?
     RQ 2: What is the performance of the focused quality models including additional expert-based measures?
  6. Predictor Models Used
     • Random guessing
     • Linear regression (forward selection)
     • Linear regression (backward elimination)
     • Linear regression (bidirectional elimination)
     • Classification tree
     • Random forest
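A minimal sketch of how these predictor families could be instantiated, assuming scikit-learn (not necessarily the tooling used in the study). Bidirectional elimination is omitted because SequentialFeatureSelector only searches forward or backward, and a regression tree stands in for the slide's classification tree since the target is a numeric evaluation.

```python
# Sketch of the compared predictor families (assuming scikit-learn).
import numpy as np
from sklearn.base import BaseEstimator, RegressorMixin
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeRegressor

class RandomGuesser(BaseEstimator, RegressorMixin):
    """Baseline: predict by sampling observed training targets at random."""
    def fit(self, X, y):
        self.y_ = np.asarray(y)
        return self

    def predict(self, X):
        rng = np.random.default_rng(0)
        return rng.choice(self.y_, size=len(X))

def stepwise_regression(direction, n_features=10):
    """Linear regression on a variable subset chosen by stepwise search."""
    return make_pipeline(
        SequentialFeatureSelector(LinearRegression(),
                                  n_features_to_select=n_features,
                                  direction=direction),
        LinearRegression(),
    )

predictors = {
    "random guessing": RandomGuesser(),
    "regression (forward selection)": stepwise_regression("forward"),
    "regression (backward elimination)": stepwise_regression("backward"),
    # slide says "classification tree"; a regression tree is used here
    # because the predicted maintainability evaluation is numeric
    "classification tree": DecisionTreeRegressor(max_depth=5),
    "random forest": RandomForestRegressor(n_estimators=100, random_state=0),
}
```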
  7. Model Comparison
     • Mean absolute residual: $\mathrm{MAR} = \frac{1}{n} \sum_{i=1}^{n} |y_i - \hat{y}_i|$
     • Standardised accuracy measure: $\mathrm{SA}_{P_i} = 1 - \frac{\mathrm{MAR}_{P_i}}{\mathrm{MAR}_{P_0}}$ (improvement over the random-guessing baseline $P_0$)
     • Effect size: $\Delta = \frac{\mathrm{MAR}_{P_i} - \mathrm{MAR}_{P_0}}{s_{P_0}}$
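Written out as helper functions, the three comparison measures look as follows; this is a sketch with placeholder argument names, not code from the study.

```python
# Comparison measures from slide 7; y and y_hat are placeholder names for
# actual values and a predictor's predictions.
import numpy as np

def mar(y, y_hat):
    """Mean absolute residual: average of |y_i - y_hat_i|."""
    y, y_hat = np.asarray(y), np.asarray(y_hat)
    return float(np.mean(np.abs(y - y_hat)))

def standardised_accuracy(mar_pi, mar_p0):
    """SA: relative improvement of predictor P_i over random guessing P_0."""
    return 1.0 - mar_pi / mar_p0

def effect_size(mar_pi, mar_p0, s_p0):
    """Difference to the baseline in units of its standard deviation s_p0."""
    return (mar_pi - mar_p0) / s_p0
```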
  8. Study Objects
     • 1994 Java systems from the SDS repository
     • 15 Java systems for which we have manual measures
  9. Procedure
     • Collection of all measures and evaluations for maintainability
     • Building of predictors (4-fold cross-validation)
     • Calculation of model comparison measures
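A sketch of the second and third step, assuming the collected measures form a matrix X and the maintainability evaluations a vector y (placeholder names), with the 4-fold cross-validation mentioned on the slide.

```python
# 4-fold cross-validation yielding an averaged MAR per predictor (sketch).
import numpy as np
from sklearn.model_selection import KFold

def cross_validated_mar(model, X, y, n_splits=4, seed=0):
    """Fit on three folds, predict the held-out fold, average the MAR."""
    X, y = np.asarray(X), np.asarray(y)
    folds = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    residuals = []
    for train_idx, test_idx in folds.split(X):
        model.fit(X[train_idx], y[train_idx])
        y_hat = model.predict(X[test_idx])
        residuals.append(np.mean(np.abs(y[test_idx] - y_hat)))
    return float(np.mean(residuals))
```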
  10. [Chart: SA / # of variables] SA (percentage of improvement over random guessing) plotted against the number of variables for Random Forest (Forward Selection), Classification Tree (Forward Selection), Classification Tree (different complexity param.), Regression (Forward Selection), Regression (Bidirectional Elimination), and Regression (Backward Elimination).
  11. [Chart: SA / # of variables] SA plotted against the number of variables for Random Forest (Forward Selection).
  12. [Chart: With and Without Manual Measures] SA plotted against the number of variables for systems with expert-based measures and systems without expert-based measures.
  13. Threats to Validity
      • Expert measures not included in RQ 1
      • For RQ 2, only 15 systems
      • Set of predictors and comparison measures
      • Only maintainability
      • Only Java systems
  14. • Comprehensive models aim to capture all the different aspects and quality factors
      • More focused models measure only a few measures
      • A focused model reached 61% accuracy with only 10 measures (compared to 378)
      • Expert-based measures reduce accuracy
      • So what should we use?