This week's SANS NewsBites (Vol. 12, Num. 67) has a story about the potential role of security (or lack thereof) in the Spanair air crash that killed 154 people. According to the post, the official cause of the crash was pilot error. The investigation also discovered that a warning indicator did not activate. These events would have been logged in the company's maintenance system. It has been alleged that the maintenance system was riddled with malware. Could this be a case where not patching a system indirectly led to deaths?
Recently, I've audited systems and applications that reside in medical treatment facilities. One system was responsible for the delivery of radiation to patients. The vendor stated that they are the only authority allowed to administer patches to the system, as they need to test each and every patch before it can be released into production so as not to endanger a patient. They described one case where a medical treatment facility pushed its own patches and enabled a system to administer too much radiation. Had it not been for the diligence of an alert technician, the consequences could have been fatal.
Granted, the day-to-day security decisions and risk analyses we make are not going to be that critical. Heck, just driving to work each day we go through a risk analysis. Sure, there's a risk that we could get in an accident, but it's not that high and we accept it. But when it comes to mission-critical systems, or systems that are deemed of high importance, a well-thought-out risk analysis could be what averts (or causes) a dire situation.
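The kind of informal risk analysis described above is often reduced to a simple likelihood-times-impact score. A minimal sketch of that idea, using entirely hypothetical scores (the scenarios and numbers below are illustrative, not from any real assessment):

```python
def risk_score(likelihood, impact):
    """Classic qualitative risk formula: risk = likelihood * impact.

    Both inputs are on a 1-5 scale; higher means more likely / more severe.
    """
    return likelihood * impact

# Hypothetical scores loosely based on the scenarios in this post.
scenarios = {
    "commute accident": risk_score(likelihood=2, impact=3),
    "unpatched, malware-ridden maintenance system": risk_score(likelihood=4, impact=5),
    "untested patch on a radiation-delivery system": risk_score(likelihood=3, impact=5),
}

# Rank the risks so the highest score gets attention first.
for name, score in sorted(scenarios.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score}")
```

Even this crude model makes the post's point visible: the everyday commute risk scores low and gets accepted, while the mission-critical systems score high enough to demand a deliberate, documented decision.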