Your Own Metric System

“What should I work on next?” Code metrics can help you answer that question. They can single out sections of your code that are likely to contain bugs. They can help you get a toehold on a legacy system that’s poorly covered by tests.

Erin Dees

July 21, 2012

Transcript

  1. pragprog.com/titles/dhwcr I also write books, mostly about Ruby topics. A

    group of us—me, plus two major contributors to the Cucumber test framework—are working on a new book of specific testing techniques. Ruby and its various test frameworks were my gateway drug to code metrics, though for this talk we’ll be concentrating on other languages.
  2. Oscilloscopes have been available commercially since the 1940s. Their architecture

    changes slowly. Software needs to last, and it tends to last whether we wish for a rewrite or not. Our team’s exploration of this mix of old and new code led to our interest in code metrics.
  3. Setting And even if you’re not working on a large

    legacy code base, there are likely issues that we face in common.
  4. The forces against us ❥ Entropy drags our code down

    ❥ Apathy drags us down There are a lot of forces that push on us and our teams. Today, I want to talk about two very different forces that have surprisingly similar effects: the entropy that drags our code down over time, and the apathy that drags us down personally over time.
  5. Stay engaged and productive How do we fight these forces?

    How do we keep our interest after our tenth straight hour wading into the weeds of an incomprehensible legacy routine? How do we prevent the code we write today from being someone’s nightmare tomorrow?
  6. Knowing our code can help us do our jobs and

    have more fun We have many tools in our chest; one is a good set of metrics—information about our code base. My hope is that you’ll consider code metrics at least as an intriguing, low-cost possibility for making the day go by a little better.
  7. Risk #1 Missing or poor information can waste our time

    or lead us to cause harm The risk with doing this—and there’s always a risk—is that we might waste our time making changes we don’t need, or worse, end up trashing our code in the name of blindly satisfying some target number.
  8. Two steps forward 1. Ask questions about your code 2.

    Choose metrics that answer those questions How do we address that risk? By letting our project needs dictate our metric choices, not the other way around. It sounds simple. But as we’ll see, it’s possible to misapply a metric and make a big mess.
  9. Purpose of metrics Since getting the reasons right is so

    important, let’s talk about why we’re gathering this data.
  10. purpose of metrics Help you answer a question The purpose

    of any metric should be to help you answer a question. Since we’re developers who maybe also do a little testing, let’s ask a few example questions now.
  11. purpose of metrics What mess should I clean up next?

    For example, if several files need some love, where should I concentrate my efforts?
  12. purpose of metrics The product backlog isn’t a substitute for

    your brain Something else may be giving us guidance on what part of the code to work in—like the product backlog. But you may be in a situation where you’ve got a little more leeway, like an explicit charter to pay down technical debt.
  13. Risk #2 Making structural changes can introduce new bugs (or

    expose existing ones) That said, when you do wander off the map, you risk creating a bug. With legacy code, you may also uncover an existing bug and get the blame nonetheless. One way to address this risk is to improve your test coverage and to make one small change at a time. Another is to choose the right metrics; fixing static analysis warnings has anecdotally been one of the lowest-risk change activities I’ve ever seen.
  14. purpose of metrics Where are the bugs (likely to be)?

    Here’s another question we might ask. Where are the bugs? Where are the old bugs we haven’t found yet? Where are the new ones we might have created recently?
  15. purpose of metrics

      /**
       * REMOVE THIS CODE
       * BEFORE WE SHIP!
       */

      We can also turn to our code for ideas of what questions to ask. Has anyone seen something like this comment in production code? The number of these red flags in your code is a kind of code metric you can measure and reduce.
  16. purpose of metrics Have we forgotten anything for this release?

    That quantitative measurement—number of bad comments in the code—is helping us make a qualitative determination.
  17. purpose of metrics These questions are for us The questions

    we’ve heard so far are things we might ask,...
  18. purpose of metrics Questions from others: (outside the scope of

    our metrics) Not that other people’s questions aren’t legitimately interesting, or that they might not apply metrics of their own.
  19. purpose of metrics Should we hold the release? For example,

    the SQA team might be looking for red flags that could hold up the release.
  20. purpose of metrics (a chart of errors/KLOC over time)

      So they might look at aggregate errors per thousand lines of code. Not something I necessarily use to make decisions as a developer, but it doesn’t scare me if this metric is in use somewhere.
  21. purpose of metrics Who’s got the best KLOC or error

    rate? On a more sinister note, tracking individual rates of code production or error creation and resolution is outright destructive to teams.
  22. purpose of metrics It was time to fill out the

    management form for the first time. When he got to the lines of code part, he thought about it for a second, and then wrote in the number: -2000. After a couple more weeks, they stopped asking Bill to fill out the form, and he gladly complied. —folklore.org There was apparently a brief, dark time at Apple when employees were tracked by lines of code produced, until Bill Atkinson showed that you can improve and shorten the code at the same time.
  23. purpose of metrics Have we met our target complexity or

    coverage? Another, more subtle trap is setting absolute thresholds for various metrics.
  24. Doing so is like blindly obeying a GPS device: sooner

    or later, you’ll drive off a cliff.
  25. purpose of metrics Metrics serve you, not the other way

    around Metrics are supposed to be here for our benefit.
  26. purpose of metrics Keep the job fun And indeed, in

    addition to answering specific questions about our projects, they can make coding seem a little bit like a game where the side effect is to produce better code...
  27. purpose of metrics More fun than actually working? ...as long

    as we still get around to writing the code eventually.
  28. Risk #3 There is a trap here for the distractible

    We have to be careful not to spend all day writing fancier shell scripts and slapping our stats onto elaborate dashboards (though there are quick-and-cheap dashboards I like; see the Tranquil project).
  29. Common metrics Now that we have a few questions in

    mind about our code base, let’s look at some metrics commonly used by many projects. (Later, we’ll look at writing our own.) The nice thing about prefab metrics is that we can find open source implementations and supporting research.
  30. common metrics Languages ❥ C: a case study ❥ Perl:

    the beginner’s experience ❥ <your lang> just ask! Rather than present you with a laundry list, I’m going to stick to a few targeted examples in C and Perl. But similar tools likely exist for your language; catch me in the hall afterwards if you’d like to explore that together.
  31. common metrics Repo for this talk github.com/undees/oscon The code samples

    you’re about to see are on GitHub; feel free to send a pull request if you’d like your favorite language to be included.
  32. common metrics Cyclomatic complexity The granddaddy of modern code metrics

    is McCabe Cyclomatic Complexity. It’s meant to be a loose measure of how many different paths there are through a piece of code.
  33. common metrics E – N + 2P The fancy explanation

    is that you draw a graph of control flow through your function, then calculate a score from the number of edges (E), nodes (N), and connected components (P); for a single function, P is 1.
  34. common metrics 1. Start with a score of 1 2.

    Add 1 for each if, case, for, or boolean condition The simpler explanation is that we walk through the code and add a point for each decision the code has to make.
  35. Volume speaking_volume(bool correct_room,
                             bool correct_time) {
        if (correct_room && correct_time) {
          return INTELLIGIBLE;
        } else {
          // rehearsing
          return INAUDIBLE;
        }
      }

      complexity: 1

      So we’d start with a value of 1 for this code sample...
  36. (the same function) complexity: 2

      ...add 1 point for the if statement...
  37. (the same function) complexity: 3

      ...and add 1 final point for the boolean operator. Depending on the implementation, we might add a point for the multiple returns.
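      As a rough cross-check against the E – N + 2P formula: a control-flow graph of this function (an entry node, two condition checks, two return nodes, and an exit) has E = 7 edges, N = 6 nodes, and P = 1 connected component, giving 7 – 6 + 2(1) = 3, the same score we just counted by hand. The exact tally depends on how you draw the graph, but the score agrees.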
  38. $ pmccabe *.c | sort -nr | head -10
      3   3   3   6    8    oscon.c(6): speaking_volume
      1   1   2   16   5    oscon.c(16): main

      When we run it, it prints one line per function: the complexity (modified and traditional McCabe), the statement count, the starting line, the length in lines, and the location of each function in our project.
  39. common metrics Perl::Metrics::Simple CPAN has several metrics modules for Perl;

    Perl::Metrics::Simple is an easy one to get started with.
  40. sub speaking_volume {
        my $correct_room = shift;
        my $correct_time = shift;
        if ($correct_room && $correct_time) {
          return 'intelligible';
        } else {
          # rehearsing
          return 'inaudible';
        }
      }

      Here’s a Perl subroutine similar to the one we saw.
  41. $ countperl lib
      ...
      Tab-delimited list of subroutines, with most complex at top
      -----------------------------------------------------------
      complexity   sub               path           size
      4            speaking_volume   lib/OSCON.pm   9
      ...

      Similar to pmccabe, Perl::Metrics::Simple gives us the size and complexity of each method.
  42. Speaking of size and complexity, this paper reexamined several previous

    studies and found that several popular code metrics were effectively just expensive ways...
  43. $ wc -l oscon.c ...of counting lines. The paper didn’t

    consider cyclomatic complexity alone (and there were other issues dealt with in subsequent papers by other authors), but we should always be skeptical of our own metrics. Fortunately, most tools give us both a line count and a complexity metric; we can decide for ourselves.
  44. Risk #4 Blindly reducing one number can add complexity and

    bugs Some teams set complexity targets. In the degenerate case, they turn their code into a bunch of tiny functions that do nothing—making the overall code base more complex and prone to bugs.
  45. common metrics Test coverage Another widely used metric is the

    percentage of your code that gets executed by your tests.
  46. common metrics 1. Instrument your program 2. Watch your tests

    run 3. Report which lines get executed Measuring this typically involves instrumenting your code, so that you can watch it as it runs your tests.
  47. common metrics Addresses “epic confidence” fail opensourcebridge.org/sessions/923 Knowing our test

    coverage helps address the “epic confidence” problem that Laura Thomson described in her Open Source Bridge talk, “How Not To Release Software.” Teams afflicted by this bug assert without evidence that their tests are great.
  48. common metrics Testable code is more... testable In addition to

    combating hubris, measuring coverage helps us make our code more testable. Testability is not an end in itself, but a property with beneficial side effects.
  49. common metrics gcov For C projects, it’s easy to measure

    coverage. GCC comes with the gcov coverage tool.
  50. #include <assert.h>  /* for assert() */

      int main() {
        assert(speaking_volume(true, true) == INTELLIGIBLE);
        return 0;
      }

      Here’s a test that exercises just one branch of our code from earlier.
  51. $ gcc -fprofile-arcs \
            -ftest-coverage \
            -c oscon.c
      $ gcc -fprofile-arcs \
            oscon.o

      First, we’d compile and link our program with a couple of gcov’s required flags.
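      (On reasonably recent GCC versions, the --coverage flag should work as shorthand for that pair: it implies -fprofile-arcs and -ftest-coverage when compiling, and links in the gcov runtime.)

      $ gcc --coverage -c oscon.c
      $ gcc --coverage oscon.o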
  52. $ gcov oscon.c
      $ cat oscon.c.gcov

      Then, we’d run our tests and point gcov at the logfiles.
  53.     1:    6:Volume speaking_volume(bool correct_room, bool correct_time) {
          1:    7:  if (correct_room && correct_time) {
          1:    8:    return INTELLIGIBLE;
          -:    9:  } else {
          -:   10:    // rehearsing
       ####:   11:    return INAUDIBLE;
          -:   12:  }
          -:   13:}

      The result is a list of which lines did and didn’t get executed. In this case, we never ran the “else” clause.
  54. $ cover -test
      $ cat cover_db/coverage.html

      For Perl, the Devel::Cover module plays the same role. You just point it at your tests, and it produces an HTML report for you.
  55. Devel::Cover gives us more information than gcov did. We executed

    line 26 once, but didn’t exercise both sides of the “&&”.
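      (In fairness to gcov, it can report branch data too if you ask; its -b flag adds branch probabilities to the annotated output:)

      $ gcov -b oscon.c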
  56. Risk #5 High code coverage can make you think your

    code is good Which brings us to another thing to keep in mind. Hitting each line of code once isn’t the same as hitting each combination of branches. Code coverage is meant to help you look for holes, not to lull you into false security.
  57. Custom metrics The advantage of applying commonly used measurements is

    good support. The downside is lack of context; the creators of those metrics have nowhere near the knowledge of your project that you do. So you may want to supplement common metrics with a few of your own. I can’t tell you what those metrics are, but I can tell you a couple of the ones I’ve seen used.
  58. custom metrics Carlin’s 7 Dirty Words

      Just as George Carlin gave us his famous list of words you can’t say on television,...
  59. custom metrics Our 7 Dirty Words
      1. XXX
      2. TODO
      3. FIXME
      4. TBD
      5. HACK
      6. #if 0
      7. #ifndef TESTING

      ...software teams have their own lists of bad words.
  60. custom metrics Our 7 Dirty Words (the same list, blurred)

      (Sorry, I should have blurred those out. ;-)
  61. $ ack -cl 'XXX|TODO|FIXME'
      oscon.c:1

      This is dead simple to do with ack, the modern-day replacement for grep. Just count string occurrences across your files, and optionally do a little sorting.
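      For the sorting step, a minimal sketch (assuming GNU sort; ack’s file:count output is what we saw above) lists the worst offenders first:

      $ ack -cl 'XXX|TODO|FIXME' | sort -t: -k2 -rn | head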
  62. custom metrics Test::Fixme Grepping works on nearly every language, of

    course. But Perl has its own specific implementation of this metric.
  63. use Test::Fixme;
      run_tests(where => 'lib',
                match => qr/XXX|TODO|FIXME/);

      All you have to do is throw a couple of lines into a “.t” file...
  64. $ make test
      ...
      t/test-fixme.t .. 1/1
      #   Failed test ''lib/OSCON.pm''
      #   at t/test-fixme.t line 2.
      # File: 'lib/OSCON.pm'
      #     34    # XXX:remove the temp limit before we deploy
      # Looks like you failed 1 test of 1.
      t/test-fixme.t .. Dubious, test returned 1 (wstat 256, 0x100)
      Failed 1/1 subtests

      ...and Perl won’t even let your tests pass if you’ve got a naughty word in your code.
  65. custom metrics Churn Another metric that’s not universally used, but

    can still come in handy, is code churn: how often does a given piece of code change?
  66. custom metrics Recently changed code may have new bugs Churn

    can tell us what parts have changed recently; those parts may have new bugs.
  67. custom metrics Frequently-changed code may have problems Churn can also

    tell us what parts change often; those parts can become trouble spots.
  68. $ git log --pretty=oneline \
          --since=2012-05-04 \
          oscon.c | wc -l

      You can get as crazy as you want with churn: examining which lines have changed the most, which functions have had the most people working on them, and so on. Git can tell you a lot more than a simple metric can, but if you’re on a centralized system you may want to just grab the data yourself and pick it apart with UNIX tools.
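      As one more sketch in that direction (assuming a Git repo and standard UNIX tools), this ranks the files that have changed most often since a given date:

      $ git log --since=2012-05-04 --pretty=format: --name-only \
          | grep . | sort | uniq -c | sort -rn | head -10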
  69. custom metrics Missing documentation If you’re writing code that’s going

    to get used by developers outside of your team, you might use a metric like documentation coverage to identify the parts of the code that most badly need docs.
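      For Perl, a minimal sketch of such a check, assuming the CPAN Pod::Coverage module is installed (OSCON is our example module from earlier), prints the fraction of subroutines that carry POD documentation:

      $ perl -Ilib -MPod::Coverage \
          -e 'print Pod::Coverage->new(package => "OSCON")->coverage'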
  70. custom metrics Errors by time of day Most of the

    metrics we’ve seen so far have been one-shot numbers. But it’s also possible to track things over time, like occurrences of compiler errors or test failures.
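      A minimal sketch of that kind of tracking, assuming a make-based build (failures.log is a hypothetical file name; plot it with whatever tool you like):

      $ make test || echo "$(date -u +%FT%TZ) test failure" >> failures.log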
  71. custom metrics Play by Play: Zed Shaw peepcode.com/products/play-by-play-zed-shaw Zed Shaw

    does a great demo of this in his Play by Play screencast with PeepCode.
  72. What do we get from all this? We’ve talked about

    the kinds of questions we want to ask about our code, and the metrics that can help us answer those questions. Now for the bigger question: what’s the effect on our software? Well, here are some of the things that happened with my team.
  73. Found a real dependency problem with pmccabe One, I found

    a surprisingly high complexity number in what was supposed to be a simple math routine. Somebody had snuck in an unwanted dependency on an unrelated system.
  74. Found dead code with gcov While looking for untested code,

    we found some code that didn’t need any tests—because it was never called anyway!
  75. Did a quick churn check at manual test time I

    personally like to look at what features have changed when it’s time to do manual testing.
  76. Found places we can DRY up the code Some designs

    come at a time when our understanding of the domain is imperfect. As our understanding improves, we refactor the code. Complexity metrics can be handy for prioritizing.
  77. Relative, not absolute! One of the common themes woven through

    much of this discussion is that absolute limits for code metrics are not as helpful as relative measures within a project.
  78. Content-Type: multipart/wish My hope is that you come away from

    this session with a couple of ideas for metrics you’d like to try, and with the well-founded belief that you can get started with very little time investment.
  79. ❥ Find the answers you need ❥ Look like heroes

    ❥ Have fun I hope you find the answers you need for your project, and that you have fun getting them.