Bug density is measured in bugs per thousand lines of code (KLOC). It is the easiest metric to calculate and a critical indicator. Simply divide the number of bugs by the number of lines of code and multiply by 1,000.
The larger the resulting number, the bigger the problem on your hands. One bug per 1,000 lines is widely deemed acceptable. The problem, however, is that the metric tells you nothing about the severity of the flaws. A 100,000-line program with 23 bugs has an apparently healthy density of 0.23 per KLOC; but if they were all Class A bugs, the program would be a disaster area. If, on the other hand, they were all Class C and D bugs, you might release it as it stands.
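The calculation can be sketched in a few lines of Python, using the 100,000-line example above (the function name is illustrative, not from the original):

```python
def bug_density(bug_count: int, lines_of_code: int) -> float:
    """Return bug density in bugs per thousand lines of code (KLOC)."""
    return bug_count * 1000 / lines_of_code

# A 100,000-line program with 23 bugs:
print(bug_density(23, 100_000))  # → 0.23
```

Note that the raw count says nothing about severity, so a density figure is best read alongside a per-class breakdown of the bugs.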