Tim on Leadership

Musings on Management and Leadership from Tim Parker

Tracking Bug Counts

A key metric I start tracking as soon as I arrive at any new position is bug counts.  Why?  Bug counts are a primary indicator of two things: the existing quality of a product, and how that quality changes over time as more focus is placed on pre-release testing.

Of course, not all companies actually use proper bug tracking systems.  Some use paper or email (awful!), some use spreadsheets (not much better), and some use software never designed for tracking bugs.  Many small companies deploy open source or low-cost software that can track bugs, and while some of this software is very good, I prefer to deploy an all-in-one tool such as Microsoft VSTS/TFS, which acts as a repository, management tool, collaboration tool, and bug tracker in one.  Sure, there's a cost (acquisition and learning curve), but I prefer an all-in-one tool to a half dozen disconnected, disparate tools.

And not all companies use bug tracking software properly.  Every bug found by a developer, QA, or a customer needs to be entered into the system with details and tracked through to closure (and closing should be done only by the person who opened the bug, or by a manager; this prevents abuse of the system and leads to better metrics).  It's fine to separate pre-release and released-software bugs (often this is the best way), but all bugs need to be tracked, for both accountability and reporting purposes.  Bug counts serve as a valuable metric for both the quality of the product and the skills of the developers.

As part of the bug tracking system, there has to be a way to assign an importance to each bug and to classify it: software flaw, cosmetic UI issue, usability issue, feature request, and so on.  Knowing you are comparing apples to apples is essential to getting value out of bug tracking software.
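
As a rough illustration, a bug record might carry both a severity and a category field so that counts can be grouped consistently.  This is only a sketch; the field names below are hypothetical and not the schema of any particular tracker.

    from dataclasses import dataclass
    from enum import Enum

    class Severity(Enum):           # how important the bug is
        CRITICAL = 1
        MAJOR = 2
        MINOR = 3
        COSMETIC = 4

    class Category(Enum):           # what kind of report it is
        SOFTWARE_FLAW = "software flaw"
        UI_COSMETIC = "cosmetic UI issue"
        USABILITY = "usability issue"
        FEATURE_REQUEST = "feature request"

    @dataclass
    class BugReport:
        bug_id: int
        title: str
        severity: Severity
        category: Category
        opened_by: str
        status: str = "open"        # open -> in progress -> resolved -> closed

Grouping counts by severity and category is what makes week-over-week and release-over-release comparisons meaningful.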

When I arrive at a company, there's usually a big backlog of bugs.  At one company that had deployed good bug tracking software, there were over 10,000 bugs in the system, some dating back more than five releases!  Old bug reports need to be reviewed, weeded, closed, or promoted at regular intervals.  Once a company adopts a solid development process, I make old bug reviews part of the standard development cycle, before anyone is assigned new work.  This ensures irrelevant bugs are removed from the system, and anything that was overlooked last time gets a fresh review in case it is valid and important.  In each release, I like to have a set number of bugs (usually a percentage, weighted by importance) handled alongside the new features.  For products with a lot of bugs, I've even alternated releases, devoting every second release to nothing but bug fixes.
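
As a sketch of what "a percentage, weighted by importance" can look like, the snippet below picks a share of each severity bucket for the next release.  It builds on the hypothetical BugReport record above, and the quotas shown are examples, not a recommendation.

    import math

    def select_bugs_for_release(backlog, quota_by_severity):
        """Choose a share of the open backlog for the next release, by severity.

        'backlog' is an iterable of BugReport objects (see the earlier sketch);
        'quota_by_severity' maps a Severity to the fraction of that bucket to
        schedule, e.g. {Severity.CRITICAL: 1.0, Severity.MAJOR: 0.5, Severity.MINOR: 0.2}.
        """
        selected = []
        for severity, fraction in quota_by_severity.items():
            bucket = [b for b in backlog if b.severity == severity and b.status == "open"]
            bucket.sort(key=lambda b: b.bug_id)   # oldest reports first, so nothing lingers forever
            selected.extend(bucket[:math.ceil(len(bucket) * fraction)])
        return selected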

So what do I track?  Each software cycle, and usually every week, I run through the bug tracking software to see how many new bugs are being opened and how many are closed with each new release (minor point releases get particular attention, because they almost always focus on bug fixes).  There's nothing wrong with a high number of bug reports being opened: it usually means the QA team is doing its job and finding problems before customers do.  Customer-reported bugs are another matter, and I track those carefully: every one is something we should have caught first.
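
Most trackers can export their reports to a spreadsheet or CSV, and a small script is enough to turn that export into weekly opened and closed counts.  The column names below are assumptions about the export format, not any specific tool's output.

    import csv
    from collections import Counter
    from datetime import datetime

    def weekly_open_close_counts(csv_path):
        """Count bugs opened and closed per ISO week from a tracker export.

        Assumes a CSV export with 'opened_date' and 'closed_date' columns in
        YYYY-MM-DD format, where 'closed_date' is empty for bugs still open.
        """
        opened, closed = Counter(), Counter()
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                o = datetime.strptime(row["opened_date"], "%Y-%m-%d").isocalendar()
                opened["%d-W%02d" % (o[0], o[1])] += 1
                if row.get("closed_date"):
                    c = datetime.strptime(row["closed_date"], "%Y-%m-%d").isocalendar()
                    closed["%d-W%02d" % (c[0], c[1])] += 1
        return opened, closed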

Bug closure rates are just as important.  Ideally, the closure rate will match or exceed the open rate (at least while there is a backlog to burn down), but this is not always the case, and it usually varies with where we are in the development cycle.  Generally, though, the closure rate is a valid measurement of how well the team is doing, and of how the quality of the product is changing as more process is applied.
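
As a sketch of how I like to see open and closure rates side by side, the snippet below (building on the hypothetical counts from the previous example) reports the weekly closed-to-opened ratio and the running change in the backlog; a ratio consistently above 1.0 means the backlog is shrinking.

    def closure_trend(opened, closed):
        """Compare bugs closed vs. opened per week, plus the cumulative backlog change.

        'opened' and 'closed' are per-week counts, e.g. the Counters returned
        by weekly_open_close_counts() above.
        """
        backlog_delta = 0
        trend = []
        for week in sorted(set(opened) | set(closed)):
            o, c = opened.get(week, 0), closed.get(week, 0)
            backlog_delta += o - c
            ratio = c / o if o else float("inf")   # above 1.0 means we are gaining ground
            trend.append((week, o, c, round(ratio, 2), backlog_delta))
        return trend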

Every week, I email product and program managers their bug counts.  I don't expect massive changes early in a group's experience with higher-quality processes, but I do look for overall trends over a year or two of operation.  If the quality of the product is not improving slowly and surely, we need to revisit both the development and the testing cycles and focus on quality above everything else, because customer-reported bugs mean annoyed customers!

I have been fortunate to work with some great QA managers, who push their teams of QA engineers to find and report all manner of bugs, and then follow up to ensure they are properly closed.  Active participation in the bug tracking process means quality inevitably goes up, as developers are held more accountable for the bugs they create and close.  That means the product gets better, the developers get better at coding, the testers get more adept at testing, and the company gets fewer complaints about its software.  Most customers would rather have a solid product that does what it is supposed to do than a flashier product with lots of features that don't work.  And that's what software development is supposed to be all about.