
RefDiff: Detecting Refactorings in Version Histories (MSR 2017)

Refactoring is a well-known technique that is widely adopted by software engineers to improve the design and enable the evolution of a system. Knowing which refactoring operations were applied in a code change is valuable information for understanding software evolution, adapting software components, merging code changes, and other applications. In this paper, we present RefDiff, an automated approach that identifies refactorings performed between two code revisions in a git repository. RefDiff employs a combination of heuristics based on static analysis and code similarity to detect 13 well-known refactoring types. In an evaluation using an oracle of 448 known refactoring operations, distributed across seven Java projects, our approach achieved precision of 100% and recall of 88%. Moreover, our evaluation suggests that RefDiff has higher precision and recall than existing state-of-the-art approaches.

ASERG, DCC, UFMG

May 21, 2017

Transcript

  1. RefDiff: Detecting Refactorings in Version Histories
     Danilo Silva, Marco Tulio Valente
     Universidade Federal de Minas Gerais, Belo Horizonte, Brazil
  2. Introduction
     • Knowledge of the refactoring operations applied is valuable information
       – Analyze software evolution
       – Study refactoring practice
       – Review and merge code
  3. Problem: Finding refactoring activity is a non-trivial task
     Instrumenting refactoring engines?
     • Refactorings are not always performed using automated support
  4. Problem: Finding refactoring activity is a non-trivial task
     Source code analysis?
     • Viable, but current approaches have precision and recall issues
       – Refactoring Miner: 63% precision
       – Ref-Finder: 35% precision and 24% recall
  5. RefDiff
     • A refactoring detection approach
       – Employs a combination of heuristics based on static analysis and code similarity
       – 13 well-known refactoring types
       – TF-IDF based similarity index
  6. Computing Similarity
     • Source code represented as a multiset (or bag) of tokens (see the sketch below)
     • Similarity index based on Information Retrieval techniques (TF-IDF)
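As a rough illustration of the bag-of-tokens idea on this slide, the sketch below counts token occurrences in a code entity's body. The tokenization rule (splitting on non-identifier characters) and the class and method names are assumptions for illustration, not RefDiff's actual implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch: represent a code entity as a multiset (bag) of tokens,
// i.e., a map from each token to its number of occurrences in the entity.
public class BagOfTokens {

    // Hypothetical tokenizer: splits on anything that is not part of a
    // Java identifier or number and counts how often each token occurs.
    public static Map<String, Integer> tokenize(String sourceCode) {
        Map<String, Integer> bag = new HashMap<>();
        for (String token : sourceCode.split("[^A-Za-z0-9_]+")) {
            if (!token.isEmpty()) {
                bag.merge(token, 1, Integer::sum);
            }
        }
        return bag;
    }

    public static void main(String[] args) {
        String body = "return items.stream().filter(i -> i.isActive()).count();";
        // Prints something like {return=1, items=1, stream=1, filter=1, i=2, ...}
        System.out.println(tokenize(body));
    }
}
```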
  7. Calibration of Thresholds
     • Oracle of known refactorings in 10 commits of a public dataset (Silva et al., 2016)
     • Thresholds from 0.1 to 0.9 in 0.1 increments
     • We choose the value that optimizes the F1 score (sketched below)
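A minimal sketch of that calibration loop, assuming a hypothetical evaluate callback that runs detection at a given threshold and scores it against the oracle. The Counts record and the method names are illustrative, not RefDiff's actual API.

```java
import java.util.function.DoubleFunction;

// Sketch of the threshold calibration: try thresholds 0.1..0.9 in 0.1 steps
// and keep the one with the highest F1 score on the oracle of known refactorings.
public class ThresholdCalibration {

    // Hypothetical evaluation result for one threshold, compared against the oracle.
    public record Counts(int truePositives, int falsePositives, int falseNegatives) {
        double f1() {
            double precision = truePositives == 0 ? 0.0
                    : (double) truePositives / (truePositives + falsePositives);
            double recall = truePositives == 0 ? 0.0
                    : (double) truePositives / (truePositives + falseNegatives);
            return (precision + recall) == 0.0 ? 0.0
                    : 2 * precision * recall / (precision + recall);
        }
    }

    // 'evaluate' is an assumed callback that runs detection with the given
    // threshold and counts hits/misses against the oracle.
    public static double calibrate(DoubleFunction<Counts> evaluate) {
        double bestThreshold = 0.1;
        double bestF1 = -1.0;
        // Small epsilon guards against floating-point drift in the loop bound.
        for (double t = 0.1; t <= 0.9 + 1e-9; t += 0.1) {
            double f1 = evaluate.apply(t).f1();
            if (f1 > bestF1) {
                bestF1 = f1;
                bestThreshold = t;
            }
        }
        return bestThreshold;
    }
}
```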
  8. Evaluation: Precision and Recall
     • Oracle of known refactorings applied by students
       – 7 open-source systems
       – 448 refactoring relationships
     • Compare RefDiff’s precision and recall with
       – Refactoring Miner
       – Refactoring Crawler
       – Ref-Finder
  9. Conclusion
     • RefDiff has better precision and recall than other approaches
     • Execution time is acceptable (1.96s per commit)
  10. Future Work
      • Extended evaluation of RefDiff using actual refactorings applied in open-source systems
  11. Evaluation: Execution Time
      • We analyzed each commit between January 1, 2017 and March 27, 2017 in 10 Java repositories
        – 1990 commits
      • We compared execution time with Refactoring Miner
  12. Computing Similarity
      The similarity between two entities e_1 and e_2 is the weighted Jaccard coefficient over
      their bags of tokens, with TF-IDF weights:

      w_e(t) = tf(t, e) \cdot idf(t)

      sim(e_1, e_2) = \frac{\sum_t \min(w_{e_1}(t), w_{e_2}(t))}{\sum_t \max(w_{e_1}(t), w_{e_2}(t))}

      – tf(t, e): token frequency of t in entity e
      – idf(t): inverse document frequency of the token in the collection
      (see the code sketch below)
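A minimal sketch of this similarity computation, assuming the common idf(t) = log(|D| / df(t)) weighting; the class and method names are illustrative, not RefDiff's actual code.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch of the TF-IDF weighted Jaccard similarity between two bags of tokens.
public class WeightedJaccard {

    // Assumed idf(t) = log(|D| / df(t)), where df(t) is the number of entities
    // in the collection whose bag of tokens contains t.
    public static Map<String, Double> inverseDocumentFrequency(List<Map<String, Integer>> collection) {
        Map<String, Integer> documentFrequency = new HashMap<>();
        for (Map<String, Integer> bag : collection) {
            for (String token : bag.keySet()) {
                documentFrequency.merge(token, 1, Integer::sum);
            }
        }
        Map<String, Double> idf = new HashMap<>();
        for (Map.Entry<String, Integer> entry : documentFrequency.entrySet()) {
            idf.put(entry.getKey(), Math.log((double) collection.size() / entry.getValue()));
        }
        return idf;
    }

    // sim(e1, e2) = sum_t min(w_e1(t), w_e2(t)) / sum_t max(w_e1(t), w_e2(t)),
    // with w_e(t) = tf(t, e) * idf(t).
    public static double similarity(Map<String, Integer> e1, Map<String, Integer> e2,
                                    Map<String, Double> idf) {
        Set<String> tokens = new HashSet<>(e1.keySet());
        tokens.addAll(e2.keySet());
        double sumMin = 0.0;
        double sumMax = 0.0;
        for (String token : tokens) {
            double w1 = e1.getOrDefault(token, 0) * idf.getOrDefault(token, 0.0);
            double w2 = e2.getOrDefault(token, 0) * idf.getOrDefault(token, 0.0);
            sumMin += Math.min(w1, w2);
            sumMax += Math.max(w1, w2);
        }
        return sumMax == 0.0 ? 0.0 : sumMin / sumMax;
    }
}
```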