There is a wonderful discussion occurring in SIRA (Society of Information Risk Analysts) these days. I missed the beginning of this group, and I regret it, because the messages coming out of the discussions are extremely insightful and critically important for anyone who is managing risks around Information Security, or any type of security for that matter. The discussion I want to hit on is one that I am sure is already a contentious debate within and without SIRA: should I perform a risk analysis at my company? The subsequent questions are the source of much of the resistance: What model should I use? How do I measure the likelihood? Does impact include hard and soft costs? Do I need a PhD in statistics? Why does Excel always crash when I try to do big equations like this?
I can’t answer why Excel is crashing, but I think the rest has an easier answer than we might think.
Let the Gurus Do the Risk Modeling and Statistical Analysis:
The most substantial and valid challenge to Risk Modeling in Information Security is that there is not enough data around probabilities, and as a result the quantitative rigor of our analyses declines rapidly. I would absolutely agree. Any insurance company will tell you that there is little, if any, actuarial data on Information Security. But the only way we are going to overcome this challenge is by collecting and analyzing that data. Let the experts do this work and collect the knowledge. Let them build the complex models, be the PhDs in statistics, and find better ways to analyze the data than Excel. Let this data become the source of the probabilities that we need.
Look at the value we get from seeing which types of attacks are most frequent against Payment Card Data, the mix of data-breach sources, the types of records stolen, or which vulnerabilities are most often exploited… I'll calm down now. The excellent work being done to analyze the probabilities through current studies needs to be pushed forward. The showcase example has been the VzB breach studies, which have contributed significantly to our knowledge of what is really happening. I would love it even more if there were a clearinghouse for the statistics, so we could merge the data of everyone who is jumping on board. Imagine the collective knowledge based on a myriad of views, experiences, and organizational cultures. And let's face it, data is useful. It validates what we see, it removes ambiguity, it allows us to correlate events and situations, and it even highlights differences and nuances that we don't see. It has the capability to remove predisposed biases and correct a priori assumptions.
Don’t Let the Data Rule You:
However, statistics don't tell the whole story. Let's be honest about it. There are stories behind the statistics, not the other way around. Statistics tell a story about the data we feed them. They won't tell us where the data came from, what factors affected the source of that data, or what the outcomes of that data were. We have to supply that information. Remember: data in = data out, or garbage in = garbage out. As we make use of the data, it is always important that we read the fine print (or big print, if they make it available) to understand the sources. The VzB breach reports have their biases: the 2010 report is potentially different from the 2008 or 2009 reports because of data input from the US Secret Service. Differences emerge when a business collects breach data versus when the US Government does.
Bias in the data will affect some of the outcomes. For example, companies are probably more likely to use private security firms to investigate internal issues, to avoid public disclosure and embarrassment, while US Government resources are more likely to be involved when the breach source is external or the company feels its legal repercussions are minimized. These are the stories we have to consider when we look at the analyses, and they should be disclosed to make sure we use the data correctly.
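To make that concrete, here is a minimal sketch (in Python, since Excel keeps crashing on us) of how the reporting channel alone can skew the observed mix of breach sources. Every number in it is invented for illustration; nothing here comes from the actual reports.

    # Hypothetical illustration: how the reporting channel can skew the
    # observed mix of breach sources. Every number here is invented.

    true_breaches = {"internal": 400, "external": 600}  # assumed "real" mix

    # Assumed chance that each channel ends up handling a given breach:
    # private firms more often handle internal cases (quiet investigations),
    # government resources more often handle external ones.
    report_prob = {
        "private_firm": {"internal": 0.70, "external": 0.40},
        "government":   {"internal": 0.10, "external": 0.50},
    }

    for channel, probs in report_prob.items():
        observed = {src: n * probs[src] for src, n in true_breaches.items()}
        total = sum(observed.values())
        mix = {src: round(100 * n / total) for src, n in observed.items()}
        print(f"{channel}: observed mix (%) = {mix}")

    # private_firm: observed mix (%) = {'internal': 54, 'external': 46}
    # government:   observed mix (%) = {'internal': 12, 'external': 88}
    # Same underlying breaches, two very different stories -- which is
    # exactly why the fine print on data sources matters.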
Use the Data, Not the Math:
For you, the new IT Manager, the result of all of this data research is that you now have a set of probabilities that you can say are based on reality, and you know the biases of the sources and the resulting analysis. You can now take your finger out of the wind, put away your “8 Ball”, and use real data. It's not perfect data (remember its story!), but it is far better than what we had when I started in this field 20 years ago. You do not need a PhD in statistics or mathematics. You do need to know how to read the outcome reports from the analysis (some reading skills are necessary). You do not need to build a complex Risk Management model. You do need to build a simple one, like the sketch below. Your risks can be built on the field of possible threats using the data from the detailed analysis. Your vulnerabilities can be built from your known environment. And the probabilities can now have some teeth. Even if you don't feel you can build a risk model (time, effort, Excel just won't work), you can always refer to the global models of probability and risk from the studies that have been done, which have been vetted, and which are based on extensive data.
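What might a "simple" model look like? Here is one minimal sketch: rank threats by annualized loss expectancy, i.e., expected events per year times expected cost per event. The threat names, frequencies, and cost figures below are placeholders; in practice the frequencies would come from the published breach analyses and the costs from your own environment.

    # A deliberately simple risk model: annualized loss expectancy (ALE)
    # per threat = expected events per year * expected cost per event.
    # All figures below are placeholders -- take the frequencies from the
    # published breach analyses and the costs from your own environment.

    threats = [
        # (threat, events/year, est. cost per event in $)
        ("SQL injection against payment app", 0.50, 250_000),
        ("Lost or stolen laptop",             2.00,  40_000),
        ("Insider misuse of credentials",     0.30, 500_000),
    ]

    ranked = sorted(
        ((name, freq * cost) for name, freq, cost in threats),
        key=lambda pair: pair[1],
        reverse=True,
    )

    for name, ale in ranked:
        print(f"${ale:>10,.0f}/yr  {name}")

    # $   150,000/yr  Insider misuse of credentials
    # $   125,000/yr  SQL injection against payment app
    # $    80,000/yr  Lost or stolen laptop

A dozen lines, no PhD required, and the output is a priority list you can defend, because the likelihoods trace back to real data rather than a finger in the wind.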
Lastly, as I wrote in an earlier post, my biases have changed, and all as a result of the data. I made a change in focus several years ago after reading the data gathered in Visible Ops. Now I am changing again, by using the data from the breach reports from various (trustworthy) sources. I've changed my previous biases because the data has told me to. The story for me is that now I can monitor threats, vulnerabilities, and risks being realized, and identify what they are, their frequency, and their likelihood of occurring versus other threats, vulnerabilities, and risks. I can focus my priorities…
1) Let those who can analyze the data (and have the PhDs in statistics) analyze the data
2) Use the results of their work to simplify and increase the accuracy of your risk analysis