Richard Taylor

‘A Bug Like This’

Not every bug is the same. Testing different bugs can cost different amounts of money.

You might have seen in a previous article that my method of calculating the Return on Investment (ROI) from testing depends on being able to say “Every time we find ‘a bug like this’ we save the business €nnn because it didn’t go live”.  That raises the question, “A bug like what?”.  The ROI calculation won’t be realistic unless we correctly assign to each bug we actually find in test the theoretical cost of fixing it in production, and that cost will vary widely according to the nature of the bug and where in the product it is.

To show how the ROI calculation works I postulated that we could do this on the basis of Severity.  That would work if our Severity levels were defined by the cost of fixing such a bug in production, but I’ve never seen that scheme used in the real world and I don’t expect to, because it wouldn’t necessarily reflect the impact of a defect on the users.  Severity levels are better defined by a scheme such as:

  1. Blocker: entire system unusable
  2. Critical: a business-critical function is entirely unusable, or, loss or corruption of business-critical data
  3. Major: a single business-critical feature is unusable with no acceptable work-around, or, loss or corruption of non-critical data
  4. Minor: ...

… but this scheme doesn’t help us to understand the costs of fixing.  To do that, we need to think about what the fix will entail.  So I’ll start with the premise that there’s a basic cost to make and deploy any bug fix and then add extra costs that depend on what the fix must include and any other factors.

Let’s assume that any Blocker, Critical or Major bug fix will be deployed individually and that bugs at the two lower severity levels, Minor and Trivial, will be batched up and deployed in the next scheduled maintenance release.  Those in a maintenance release will share deployment overheads and some of the regression test overheads, so will be individually much cheaper.  So let’s imagine, just for the purposes of an example, that for confirmation testing, regression testing and deployment we have base costs of

  • Blocker / Critical / Major: €900 each
  • Minor / Trivial: €60 each
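The deployment rule above (individual fixes for the top three severities, batched maintenance releases for the rest) makes the base cost a simple lookup.  A minimal sketch in Python; the function name and structure are mine, and the amounts are the arbitrary example figures above:

```python
# Severities whose fixes are deployed individually, carrying full
# confirmation-test, regression-test and deployment overheads.
INDIVIDUAL_DEPLOYMENT = {"Blocker", "Critical", "Major"}

def base_cost(severity: str) -> int:
    """Example base cost in EUR of testing and deploying a fix of this severity."""
    # Minor/Trivial fixes are batched into the next maintenance release,
    # so they share deployment and regression overheads and cost far less each.
    return 900 if severity in INDIVIDUAL_DEPLOYMENT else 60

print(base_cost("Critical"))  # 900
print(base_cost("Trivial"))   # 60
```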

To this base cost we can add a standard amount for each work product that needs to be corrected.  For example, if the requirements spec was OK but the business system design (BSD) was wrong, then we might have to change the BSD, the technical system design and a component specification as well as the code.  For another bug, maybe the BSD is OK but we’ll have to change the technical architecture spec, two component specs and those two components’ code.  For my illustration I’ve assumed that the cost of correcting a work product won’t differ significantly according to defect severity, so I’ve used the same amounts for all severities; if you find that it does, then you can adapt the scheme accordingly.  We can model it in a workbook as follows (the values I have used are purely arbitrary ones for the purposes of illustration):

| Bug id | Severity | Base cost, € | Requirements fix? Add €190 | BSD fix? Add €280 | Technical architecture fix? Add €250 | Component spec fixes, €60 each | Component code fixes, €50 each | User guide fix? Add €90 | No. of components | Total dev / test cost, € |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | B / C / Maj | 900 |     |     |     |     | 50  |    | 1 | 950  |
| 2 | Min / Triv  | 60  |     |     |     |     | 50  |    | 1 | 110  |
| 3 | B / C / Maj | 900 | 190 | 280 | 250 | 180 | 150 | 90 | 3 | 2040 |
| 4 | Min / Triv  | 60  | 190 | 280 | 250 | 60  | 50  |    | 1 | 890  |
| 5 | B / C / Maj | 900 |     |     | 250 | 120 | 100 |    | 2 | 1370 |
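The workbook rows can equally be computed with a small function.  A sketch under the example amounts in the table (€190 requirements, €280 BSD, €250 technical architecture, €60 per component spec, €90 user guide); the €50-per-component code-fix amount is my reading of the row totals, and all names here are mine:

```python
def fix_cost(severity, *, rqts=False, bsd=False, tech_arch=False,
             spec_fixes=0, code_fixes=0, user_guide=False):
    """Total development/test cost in EUR of one bug fix (example amounts)."""
    # Base cost: individually deployed fix vs. batched maintenance release.
    total = 900 if severity in {"Blocker", "Critical", "Major"} else 60
    total += 190 * rqts          # requirements spec correction
    total += 280 * bsd           # business system design correction
    total += 250 * tech_arch     # technical architecture correction
    total += 60 * spec_fixes     # component specs, EUR 60 each
    total += 50 * code_fixes     # component code, EUR 50 per component
    total += 90 * user_guide     # user guide correction
    return total

# Rows 3 and 5 of the example table:
print(fix_cost("Major", rqts=True, bsd=True, tech_arch=True,
               spec_fixes=3, code_fixes=3, user_guide=True))          # 2040
print(fix_cost("Major", tech_arch=True, spec_fixes=2, code_fixes=2))  # 1370
```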

So, with whatever modifications might be needed for your particular context, that deals with the basic costs of developing and deploying a fix.  What about the other costs?  The table above could be extended with columns containing typical values for whatever might be relevant, including perhaps (depending on what happened and what the product does) such things as:

  • Lost actual sales per hour of down time
  • Lost future sales (non-returning customers) per hour of down time
  • No. of hours of down time (as multiplier for the two above)
  • Repair / restoration of live data
  • Development of workaround
  • Extra user training (if the fix changes functionality)
  • Extra business costs (e.g. finding replacement suppliers or agents)
  • Extra public relations costs (e.g. advertisements in media to reassure customers)
  • Compensation payments and related legal costs
  • Fines for non-compliance with regulations and related legal costs
  • Financial consequence of n% share price fall (may need >1 column for selected values of ‘n’)
  • Travel costs for HR to visit company directors in prison ... etc.
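These production-impact items bolt onto the same model as an itemised sum, with the per-hour losses scaled by the length of the outage.  A sketch with hypothetical parameter names and figures of my own choosing:

```python
def production_cost(lost_sales_per_hour=0, lost_future_sales_per_hour=0,
                    downtime_hours=0, data_repair=0, workaround=0,
                    other_costs=0):
    """Extra cost in EUR if the bug escapes to production (illustrative only)."""
    # Hourly losses (actual and future sales) multiply by hours of down time;
    # one-off items (data repair, workaround development, PR, fines, ...) just add.
    downtime_losses = (lost_sales_per_hour + lost_future_sales_per_hour) * downtime_hours
    return downtime_losses + data_repair + workaround + other_costs

# E.g. a 4-hour outage losing EUR 500/hour in sales, plus EUR 2000 of data repair:
print(production_cost(lost_sales_per_hour=500, downtime_hours=4,
                      data_repair=2000))  # 4000
```

For a real defect report you would only populate the items that apply at that severity, as discussed below.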

It’s likely that, by definition, none of these extra costs would apply to any bug less severe than Major.  It’s also likely that the items which are relevant will vary with severity.  For example, a Major defect is unlikely to cause a fall in the organisation’s share price, whereas development of a workaround wouldn’t be useful for a Blocker.  If any such defect does escape into production then the cost estimates can be adjusted with real experience.

Finally, having calculated the cost of fixing this bug if it had (or did) go live, it is worth recording this as an extra field on the defect report.  This will facilitate gathering data for the ROI / COQ calculations.

Earlier in this piece I gave example definitions for some levels of defect severity.  I’m often asked what severity scheme I like to use for defect management, so watch this space for a full answer in a later article.

This article is part of a series.  Previous: Part 2 · Next: Part 4
