

Salvaging a Project Heavily Burdened by Technical Debt

Disorganized code that no one fully understands, changes that cause bugs in seemingly unrelated parts of the application, and a constant sense of urgency on top of it all: does this sound familiar? The answer, sadly, is often "YES", and the project I currently work on was no exception just a year ago. Nevertheless, we managed to gradually turn things around and bring the project to a much healthier state.

The talk, given at WebCamp Ljubljana in April 2017, focused on the strategies used to tackle what seemed to be an intractable problem, primarily on the technical (code) level, but with some management aspects as well.

Peter Lamut

April 22, 2017


Transcript

  1. The GGRC Project

    • GGRC – Google Governance, Risk, and Compliance
    • Ruby app → Python app, 3 teams changed
  2. Initial State (Late 2015)

    • Always late, crunch mode
    • Regressions
    • No automated tests (almost)
    • A single QA person, two weeks to fully verify the application
    • Virtually no documentation
    • Onboarding period: 3-6 months
  3. Notable Artwork

    “New engineer getting familiar with the GGRC codebase”
    Edvard Munch, 1893 2015
  4. Creating a Safety Net

    • Start with high-level “smoke tests”
      – Benefits: can quickly cover large chunks of the codebase
      – Drawbacks: less precise, take time to run
    • The worst sin – releasing a bug that severely hampers the usability of the application
    • The problem – a full QA verification took two weeks ☹
    • The solution – automate! (see the sketch below)
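
    For illustration, a minimal automated smoke test of the kind described above might look
    like the sketch below. The base URL and paths are hypothetical (they are not taken from
    the talk) and the `requests` library is assumed; the point is only that a coarse "do the
    key pages still load?" check can run on every build in minutes rather than weeks.

      # smoke_test.py - hypothetical sketch, not the project's actual test suite.
      import requests

      BASE_URL = "http://localhost:8080"           # assumed address of a dev instance
      SMOKE_PATHS = ["/", "/login", "/dashboard"]  # assumed key pages to verify

      def test_key_pages_respond():
          for path in SMOKE_PATHS:
              response = requests.get(BASE_URL + path, timeout=10)
              # Coarse check only: the page must load without a server error.
              assert response.status_code == 200, \
                  "%s returned HTTP %d" % (path, response.status_code)

      if __name__ == "__main__":
          test_key_pages_respond()
          print("Smoke test passed.")
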
  5. Making It Readable (Linting) – 1/6

    Linting – checking the code for style and programmatic errors.
  6. Making It Readable (Linting) – 2/6

    • Tools: ESLint, Pylint, flake8…
    • Easier to read, understand, and maintain the code
    • Catching (some) bugs for free (see the example below)
    • Less noise in PR reviews
    • Less personal
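
    As a hypothetical example of "catching bugs for free" (not taken from the talk), the typo
    below never fails at import time and only blows up at runtime on one branch, yet Pylint
    (E0602, undefined-variable) or flake8 (F821, undefined name) reports it immediately,
    before any test is run:

      # lint_example.py - deliberately contains the kind of bug a linter flags for free.

      def send_email(recipient, subject):
          print("Sending '%s' to %s" % (subject, recipient))

      def notify_owner(task):
          if task.owner is None:
              return False
          # Typo: `taks` is undefined; a linter reports it without running the code.
          send_email(taks.owner, "Task updated: %s" % task.title)
          return True
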
  7. Making It Readable (Linting) – 3/6

    • Action plan, Christmas 2015:
      – Let’s do it! (finally)
      – Autofix with eslint --fix (easy-peasy)
      – (oops, failed) Umm… let’s just find and fix these new bugs.
      – (1000+ lines long diff) OK, let’s do it manually...
  8. Making It Readable (Linting) – 4/6

    Manual linting can be susceptible to errors:

      $ eslint src/path/to/file/utils.js
      ...
      29:11  error  Use '===' to compare with 'null'     no-eq-null
      29:13  error  Expected '!==' and instead saw '!='  eqeqeq
      ...

      if (x != null) {
        // do something
      }

    “Fixing” the issue:

      if (x !== null) {
        // do something
      }

    Consider this:

      >>> var x = undefined;
      >>> (x != null) === (x !== null)
      false

    Should really be:

      if (x !== null && x !== undefined) {
        // do something
      }
  9. Making It Readable (Linting) – 5/6

    • Incremental approach: focus on not making things worse
      (the error count should not increase)

    (diagram: a topic branch with HEAD diverging from master at merge-base)

      MERGE_BASE_HASH=$(git merge-base master HEAD)

      git checkout "$MERGE_BASE_HASH"
      # run linter, store issue count in ISSUES_BASE

      git checkout -    # back to PR branch (topic)
      # run linter, store issue count in ISSUES_PR

      ISSUE_DIFF=$((ISSUES_PR - ISSUES_BASE))

      if [[ "$ISSUE_DIFF" -gt 0 ]]; then
        exit 1    # check failed
      else
        exit 0    # check passed
      fi
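
    The "run linter, store issue count" steps above can be filled in in many ways; one
    possible sketch (not the project's actual tooling) is to pipe ESLint's JSON formatter
    output into a tiny helper that prints the total number of findings, which the shell
    script then captures into ISSUES_BASE / ISSUES_PR:

      # count_eslint_issues.py - hypothetical helper, assumed usage:
      #   eslint --format json src/ | python count_eslint_issues.py
      # Note: ESLint exits with a non-zero status when it finds errors, so the calling
      # script must not abort on that exit code alone.
      import json
      import sys

      def count_issues(report):
          # The JSON formatter emits a list of per-file results, each with
          # "errorCount" and "warningCount" fields.
          return sum(item.get("errorCount", 0) + item.get("warningCount", 0)
                     for item in report)

      if __name__ == "__main__":
          print(count_issues(json.load(sys.stdin)))
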
  10. Making It Readable (Linting) – 6/6

    • Does not (significantly) slow down regular work → more acceptable to management
    • Incremental linting feels “effortless”
    • Easier to locate issues
  11. Automated Tests – 1/4

    • A faster (but dumber) version of the QA team.
    • What to check? (CI server metrics)
      1) All tests must pass (obviously...)
      2) Tests must cover “enough” code
    • How to quantify “enough”?
  12. Automated Tests – 2/4

    Metric used: code coverage

    Old code (coverage: 80 %):

      if (!x) {
        return false;
      } else if (x === 'y') {
        return true;
      } else if (x === 'Y') {
        return true;
      }
      return false;

    New code (coverage: ~33 %):

      return x && x.toLowerCase() === 'y';

    FAIL?!? (the refactoring shrank the code; the same lines stayed uncovered, but the
    coverage percentage dropped)
  13. Automated Tests – 3/4

    Metric used: lines not covered
    (same old and new code as on the previous slide)

    Old code: 2 lines not covered
    New code: 2 lines not covered

    PASS!
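
    Below is a sketch of how such a "lines not covered must not increase" gate could be
    wired into CI, mirroring the incremental linting script earlier in the deck. It is a
    hypothetical helper, not the project's actual CI code; it assumes coverage.py's default
    text report (whose TOTAL row has a "Miss" column) and a pytest test suite.

      # coverage_check.py - hypothetical sketch of "do not make things worse"
      # applied to coverage: compare uncovered-line counts on the merge base
      # and on the PR branch.
      import subprocess
      import sys

      def uncovered_lines():
          # Run the tests under coverage.py and read the "Miss" column of the
          # TOTAL row of its text report (columns: Name, Stmts, Miss, Cover).
          subprocess.check_call(["coverage", "run", "-m", "pytest"])
          report = subprocess.check_output(["coverage", "report"]).decode()
          total_row = [line for line in report.splitlines()
                       if line.startswith("TOTAL")][-1]
          return int(total_row.split()[2])

      def main():
          merge_base = subprocess.check_output(
              ["git", "merge-base", "master", "HEAD"]).decode().strip()

          subprocess.check_call(["git", "checkout", merge_base])
          base_missing = uncovered_lines()

          subprocess.check_call(["git", "checkout", "-"])  # back to the PR branch
          pr_missing = uncovered_lines()

          if pr_missing > base_missing:
              print("FAIL: uncovered lines grew from %d to %d" % (base_missing, pr_missing))
              sys.exit(1)
          print("PASS: uncovered lines: %d (base) vs %d (PR)" % (base_missing, pr_missing))

      if __name__ == "__main__":
          main()
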
  14. Automated Tests – 4/4

    • The same principle – focus on not making things worse.
    • Choose the right kind of test (unit / integration / E2E):
      – High-level tests cover more, but are slow and less specific.
      – Rule of thumb: write the most basic test that can catch an issue.
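
    To illustrate the rule of thumb, take the small yes/no check from the coverage slides,
    translated here into Python as a hypothetical helper: the most basic test that can catch
    a regression in it is a plain unit test, with no browser or database involved.

      # test_is_yes.py - hypothetical sketch, runnable with pytest (or directly).

      def is_yes(value):
          # Python version of the one-line check from the coverage slides.
          return bool(value) and value.lower() == "y"

      def test_accepts_both_cases_of_y():
          assert is_yes("y")
          assert is_yes("Y")

      def test_rejects_everything_else():
          assert not is_yes("")
          assert not is_yes(None)
          assert not is_yes("n")
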
  15. Refactoring

    • Break dependencies, componentize
      – Limits the impact of changes
      – Easier to reason about
      – More easily testable
    • Extract method/class, split into modules, etc.
    • Rule of thumb: too many mocks in unit tests → might need refactoring/decoupling
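
    A hypothetical before/after sketch (not taken from the talk) of what extract-method
    style decoupling buys: once the pure logic is separated from the data access, it can be
    unit tested with plain values instead of a mocked database layer.

      # refactoring_sketch.py - hypothetical illustration of breaking a dependency.
      from datetime import date

      # Before: logic and data access are entangled; a unit test must mock `db`.
      def overdue_task_titles_before(db):
          tasks = db.query("SELECT title, due_date FROM tasks")
          today = date.today()
          return [t["title"] for t in tasks if t["due_date"] < today]

      # After: the pure logic is extracted and testable with plain dictionaries...
      def select_overdue_titles(tasks, today):
          return [t["title"] for t in tasks if t["due_date"] < today]

      # ...and a thin wrapper keeps the I/O at the edge.
      def overdue_task_titles(db):
          return select_overdue_titles(db.query("SELECT title, due_date FROM tasks"),
                                       date.today())
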
  16. Enhancing the Work Process

    • Hard numbers → quantify the costs of technical debt
    • Allocate time for tests (include it in the time estimates)
    • Do not compromise on code quality
    • Continuous refactoring
  17. Summary

    • Safety net
    • Incremental improvements
    • Breaking dependencies (components)
    • Integrating quality into the work process itself
  18. Salvaging a Project Heavily Burdened by Technical Debt
      WebCamp 2017, Ljubljana

    Peter Lamut
    [email protected]
    @plamut

    We are hiring! (web developers, Python + JavaScript)