
The Ultimate Feedback Loop: Learning From Customer Reported Defects

emanuil
April 28, 2017


“In God we trust; all others bring data.” W. Edwards Deming

A defect reported by your customers is the most expensive kind. Everyone in the developer-to-customer chain gets paid for rework, wasting time on something that should have been done right the first time. Beyond the pure money loss, such defects are an embarrassment for any organization: they passed all the quality gates. But the greatest cost, and the one that cannot be measured easily, is the loss of reputation.

It comes as a great surprise, then, that almost no company investigates the defects reported by its customers. Companies try to quickly patch the problem and move forward. That's a shame, as a great deal of knowledge can be gained about the system that produced the defect in the first place.

We analyzed more than two years of customer-reported defect data. Even though we thought each defect was a unique snowflake, some obvious patterns emerged quickly. We were able to debunk some widely believed software dogmas that were not working for us, and we figured out which of the following techniques did or did not help us lower the defect count:

Following the software testing pyramid guidelines?
Switching the backend from PHP to Java?
Writing a simple unit test, where there was none?
Writing a simple integration test, where there was none?
Focusing test engineers to use specific techniques?
Using static code analysis?
Determining the typical profile of a method that's likely to contain an error?
What are MTBF, MTTD, MTTR and do they matter?
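The metrics in the last question can be computed directly from defect timestamps. A minimal sketch in Python, with hypothetical defect records (the field names and dates are illustrative, not the talk's real data):

```python
from datetime import datetime

# Hypothetical defect records: when the failure occurred in production,
# when it was detected, and when the fix shipped.
defects = [
    {"occurred": datetime(2016, 1, 4), "detected": datetime(2016, 1, 6), "resolved": datetime(2016, 1, 9)},
    {"occurred": datetime(2016, 2, 1), "detected": datetime(2016, 2, 2), "resolved": datetime(2016, 2, 5)},
    {"occurred": datetime(2016, 3, 15), "detected": datetime(2016, 3, 15), "resolved": datetime(2016, 3, 18)},
]

def mtbf_days(defects):
    """Mean Time Between Failures: average gap between consecutive failures."""
    times = sorted(d["occurred"] for d in defects)
    gaps = [(b - a).days for a, b in zip(times, times[1:])]
    return sum(gaps) / len(gaps)

def mttd_days(defects):
    """Mean Time To Detect: average delay from failure to detection."""
    return sum((d["detected"] - d["occurred"]).days for d in defects) / len(defects)

def mttr_days(defects):
    """Mean Time To Repair: average delay from detection to resolution."""
    return sum((d["resolved"] - d["detected"]).days for d in defects) / len(defects)
```

Whether these numbers matter is exactly the question the talk raises; computing them is the cheap part.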

Every company is different; what works in one situation will not work in another. But we all need to learn from the most expensive kind of defects. This is a very powerful feedback mechanism that should not be wasted. We'll share our experience in building a simple framework for analyzing such defects, as well as tips and tricks so that you can build a similar program in your organization.


Transcript

  1. PLAN → DO → CHECK → ACT. Most companies focus on plan-do; we're going to talk about check-act.
  2. The chain: Developer → Tester → Manager → Support → Customer. Defects found after the release line are way more expensive to fix: all the defenses were bypassed, and the reputation loss is hard to measure. These are the most expensive defects.
  3. 189 customer-reported defects analyzed, spanning two and a half years. All data, including all the fixes, was collected manually.
  4. 38% of all the defects were regressions. An automated test would have alerted us before they reached production.
  5. With unit-test defect detection "yield" so low, which methods should we test? 72% of the defects were in methods with cyclomatic complexity 3 and above; 82% of the defects were in methods with more than 10 lines of code.
  6. One example fix, before and after:

         // Before: the raw channel id was used for the lookup
         public function loadCursor($channel_id, $from, $to) {
             $collection = $this->getMongoCollection();
             $this->applyChannelId($channel_id);
             $this->applyTimeRanges($from, $to);
             $this->cursor = $collection->find($this->filter);
         }

         // After: decode the URL-encoded channel id first
         public function loadCursor($channel_id, $from, $to) {
             $collection = $this->getMongoCollection();
             $this->applyChannelId(urldecode($channel_id));
             $this->applyTimeRanges($from, $to);
             $this->cursor = $collection->find($this->filter);
         }
  7. 30% of the defects can never realistically be detected in-house, only in the "real world": edge cases, configuration issues, unintended usage, incomplete requirements (at any level of the test pyramid: unit, API, UI).
  8. 2.3% of the defects could be detected by custom static code analysis. PHP: linter, HHVM, custom checks. Java: the Sonar suite, FindBugs. JavaScript: ESLint.
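The slides don't show what the custom checks looked like; as an illustration, a custom check can be as small as a regex that hunts for one known bug class, here the url-decoding bug from the earlier code slide (the function name and rule are hypothetical):

```python
import re

# Illustrative custom static check: flag PHP call sites that pass a raw
# request parameter to applyChannelId() without url-decoding it first.
PATTERN = re.compile(r"applyChannelId\((?!urldecode\()")

def check_file(php_source):
    """Return 1-based line numbers where the rule is violated."""
    return [i for i, line in enumerate(php_source.splitlines(), start=1)
            if PATTERN.search(line)]
```

Checks like this encode a lesson from one customer-reported defect so the same mistake can never silently reach production again.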
  9. 6% of the backend defects could have been avoided by switching from PHP to Java (but Java has hidden costs).
  10. 5% of the bugs were caused by me. Only developers should fix defects and learn from them.
  11. Write at least one automated test per method. Manually sanity-check even the smallest fixes. Mandatory code reviews. Test with boundary values.
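Testing with boundary values means probing just below, at, and just above each limit, where off-by-one defects cluster. A minimal sketch against a hypothetical clamp_page() helper (not from the talk):

```python
def clamp_page(page, last_page):
    """Clamp a requested page number into the valid range [1, last_page]."""
    return max(1, min(page, last_page))

def test_boundaries():
    last = 10
    # Lower boundary: just below, at, just above.
    assert clamp_page(0, last) == 1
    assert clamp_page(1, last) == 1
    assert clamp_page(2, last) == 2
    # Upper boundary: just below, at, just above.
    assert clamp_page(9, last) == 9
    assert clamp_page(10, last) == 10
    assert clamp_page(11, last) == 10
```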
  12. A weird-data generator for the automated tests. Monitor exceptions after each automated test run. Super fast API tests: from 3 hours down to 3 minutes. Monitor and immediately fix errors in production.
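A weird-data generator doesn't have to be elaborate. A minimal sketch, with illustrative categories of historically bug-prone input (the project's actual generator is not shown in the slides):

```python
import random

# Values that tend to trigger encoding, boundary, and parsing defects.
WEIRD_STRINGS = [
    "",                        # empty input
    " ",                       # whitespace only
    "null",                    # looks like a literal
    "0",                       # numeric-looking
    "a" * 10_000,              # very long input
    "%D0%B6",                  # URL-encoded bytes (cf. the urldecode bug)
    "жълт",                    # non-ASCII text
    "Robert'); DROP TABLE--",  # injection-shaped input
    "\u202e",                  # right-to-left override character
]

def weird_string(rng=random):
    """Pick one weird value to feed into an automated test."""
    return rng.choice(WEIRD_STRINGS)
```

Feeding values like these through the API tests is exactly the kind of check that would have caught the URL-encoded channel id defect in-house.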
  13. [Chart: quarterly customer-reported defect count (10-40) and lines of code (35,000-140,000) from Q3/14 to Q1/17, annotated with when each practice was introduced: Java backend, fast tests, code reviews, monitoring for PHP, Java, and JS, the weird test data generator, and checking for exceptions after test runs.]
  14. Allocate time. Track customer-reported defects. Put the defect id in the commit message. Investigate immediately. Do independent analysis. Figure out which metrics to track.
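"Defect id in the commit message" is easy to enforce mechanically. A sketch of the check a commit-msg git hook could run (the DEF-123 ticket format is an assumption; substitute your tracker's):

```python
import re

# Matches defect references like DEF-42 anywhere in the commit message.
DEFECT_ID = re.compile(r"\bDEF-\d+\b")

def validate_commit_message(message):
    """Return True if the message references a defect id like DEF-123."""
    return bool(DEFECT_ID.search(message))
```

Wired into .git/hooks/commit-msg, this guarantees every fix can later be traced back to the defect it addressed, which is what makes the whole analysis possible.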