Reliability assesses the statistical dependability of a program. In principle, software should be 100 percent reliable: given the same inputs, it does the same thing every time. We are more familiar with reliability, or rather the lack of it, in machinery.
Hard disk manufacturers rate the reliability of their drives as a Mean Time Between Failures (MTBF): on average, a drive should run for X hours before failing. Similarly, a CD duplication manufacturer might state that its machine is 99.998 percent reliable, meaning that, on average, 1 in every 50,000 CDs is a dud.
Software reliability is compromised when the program receives data from external hardware devices. If a sensor with 99.998 percent reliability provided 50 readings a second, it would deliver a dud reading to the software every 16 minutes 40 seconds on average. How reliable would you rate that software if it were monitoring heart readings during protracted surgery and you were the patient? Reliability needs to be brought up to an acceptable level; what that level should be depends on the program's place and purpose.
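The arithmetic behind the sensor example can be sketched as follows; this is an illustrative calculation, and the function name is invented for this sketch rather than taken from any library.

```python
def seconds_between_duds(reliability_percent: float, readings_per_second: float) -> float:
    """Average seconds between faulty readings for a sensor of given reliability."""
    failure_fraction = 1.0 - reliability_percent / 100.0  # fraction of readings that are duds
    readings_per_dud = 1.0 / failure_fraction             # e.g. 1 dud in every 50,000 readings
    return readings_per_dud / readings_per_second

# The article's figures: 99.998 percent reliable, 50 readings per second.
interval = seconds_between_duds(99.998, 50)
print(round(interval))  # 1000 seconds, i.e. 16 minutes 40 seconds
```

At 50 readings a second, one bad reading in 50,000 works out to one bad reading every 1,000 seconds, which is the 16 minutes 40 seconds quoted above.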