How to Identify and Measure Risk
Without risk, there would be no business — nothing ventured, nothing gained, as the saying goes. Without business, there would be no IT departments. Without IT departments, there would be no job for you to have (or pursue) as you read this. Hence, risk is good, unless you have to manage it as part of your security plan.
The question, of course, is how to manage it, or even better, how to minimize it. You could build a network that’s as hard to penetrate as a Citibank vault, but you need the dollars to do it — and in today’s environment of shrinking budgets and efficiencies (a code word for “less money, more service”), your ability to defend your network is sadly linked to the size of your budget and how you choose to distribute it.
In a nutshell, you need to put your dollars where they stop the most problems. To start, determine which systems would cause you the most pain if they were sandbagged. If you’re not sure how, simply compute the cost of downtime for each system in question.
For instance, if you lose a rickety old intranet or simply a section of your intranet that distributes announcements, calendars and the rules for the new dress code, your business won’t grind to a halt. But if you lose your CRM application, you could lose $10,000 per hour or more, and you might not survive the month.
The bottom line? Those of you with a limited threat-detection, anti-virus and patching budget should spend it on the systems whose downtime will give your employer the biggest headache.
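That triage can be sketched in a few lines of code. This is a minimal illustration, not a real tool: the system names and hourly figures are the hypothetical examples from this article, and in practice you would plug in your own estimates.

```python
# Rank systems by the cost of an hour of downtime, so the security
# budget goes to the systems that hurt most when they go down.
# Figures are the article's illustrative examples, not real data.
downtime_cost_per_hour = {
    "intranet announcements page": 1_000,
    "CRM application": 10_000,
}

# Highest hourly cost first -- spend at the top of this list.
ranked = sorted(
    downtime_cost_per_hour.items(), key=lambda item: item[1], reverse=True
)
for system, cost in ranked:
    print(f"{system}: ${cost:,} per hour of downtime")
```

Even a spreadsheet version of this list is enough to start the budget conversation; the point is simply to make the ranking explicit.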
Calling All Engineers
Using downtime as your sole metric is basic and, according to some, reductive. If you’re searching for a slightly more complex method, upgrade to the risk definition that’s widely used in engineering:
Risk = P × L, where P is the probability of an accident and L is the cost of the accident.
This formula takes into account not only the cost of a risk itself, but how likely the risk is to happen. Hence, a risk that costs you $10,000 per hour of downtime won’t keep you up at night if its chance of taking place is merely 1 percent (because $10,000 × 0.01 = $100 per hour, which is unlikely to break the bank).
On the other hand, if that old intranet costs you $1,000 per hour of downtime and has a 50 percent chance of attracting a system-crashing attack, its risk is $500 per hour, or five times the risk of your mission-critical system and a hefty $12,000 per day.
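The arithmetic above can be checked directly. Here is a small sketch applying Risk = P × L to the two systems in the example; the probabilities and hourly costs come straight from the text and are illustrative only.

```python
def hourly_risk(probability: float, loss_per_hour: float) -> float:
    """Expected loss per hour of downtime: Risk = P x L."""
    return probability * loss_per_hour

# Mission-critical CRM: expensive to lose, but unlikely to be hit.
crm_risk = hourly_risk(probability=0.01, loss_per_hour=10_000)

# Rickety old intranet: cheap to lose, but a likely target.
intranet_risk = hourly_risk(probability=0.50, loss_per_hour=1_000)

print(f"CRM risk: ${crm_risk:,.0f} per hour")
print(f"Intranet risk: ${intranet_risk:,.0f} per hour"
      f" (${intranet_risk * 24:,.0f} per day)")
```

Running this reproduces the article’s figures: $100 per hour for the CRM system versus $500 per hour ($12,000 per day) for the intranet, which is why probability belongs in the calculation.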
There are, of course, more complex ways to quantify risk, and risk assessment experts can help. In writing disaster plans and insuring their assets, companies in a post-9/11 world increasingly have sought risk specialists who know how to measure risk in a matrix of legal, fiscal, PR and HR fallout.
Risk assessment experts come in as many flavors as risk itself, from IT specialists to physicians and academics who specialize in epidemics. You can find them in small consulting firms or huge, multinational shops.
And, of course, you can find them in the software space too: Symantec, Palisade and dozens of others make risk management software that runs the gamut from Excel simulations to neural networks.
Which is right for you? Whether you need high-end algorithms or chicken scratch on the back of an envelope, there’s an option that fits your business, your level of sophistication and your needs. The only real risk is not using it.
David Garrett is a Web designer and former IT director, as well as the author of “Herding Chickens: Innovative Techniques in Project Management.” He can be reached at editor (at) certmag (dot) com.