Data Facts vs. My Bias…how I am losing (and why it's good)

I have to admit, as I listen to the sages on collecting data (Alex Hutton, Mike Dahn, Josh Corman…), I am becoming more and more conscious of my own biases about security (guilty as charged!).  Ever since Alex’s post a few weeks ago, the whole concept has been rolling around in my mind.  While reading RSA’s Security for Business Innovation Council Report for Fall 2010 on the plane, I found myself questioning the risks and comments as I read them.  More importantly, I started realizing that I suffered from biases of my own.  As I worked through control objectives for PCI I noticed that I was questioning certain PCI controls: “Is this based on real threats and attacks?”, “Is that really effective, or is it a legacy belief?”, “Aren’t there other ways to achieve the same objective?”

I began to question the attack vectors and prescriptive controls that I have been taught to accept.  Josh Corman commented in the “Hug-it-Out” series that this questioning can create some unique opportunities for alternatives once we clearly understand what we are trying to protect against.  As an example, today I looked at the prescriptive PCI-DSS controls in Section 3 for encryption.  I don’t doubt the power of encryption, but I began to question what the controls were trying to achieve.  Think about the objective behind encryption.  I would argue that it is twofold (at least for the PCI Security Standard).  If you deconstruct and reverse engineer Section 3 of the PCI-DSS, I believe you will find two ideas:

(a) The need to ensure that access to Payment Card Data is strictly limited, and that this limitation holds throughout the data’s lifecycle and on any medium where it might exist.

(b) Providing clear assurance that access to Payment Card Data is limited to a minimal number of approved/authorized users and cannot be bypassed through the use of privileged (think administrative) access.

If you find issue with these objectives, suspend your disbelief for a moment and assume that they are accurate.  Now think hard about those objectives: are there other ways to achieve them besides encryption?  (Please send me ideas, since I’d love to hear them!)
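To make my own question concrete, one hypothetical answer I keep coming back to is tokenization.  The toy sketch below is purely my own illustration (the TokenVault class, its authorization flag, and the test PAN are invented, not anything prescribed by the standard): it swaps the PAN for a random surrogate, so objective (a) is served by confining the real number to one tightly guarded store, and objective (b) by forcing every path back to the PAN through an explicit authorization check.

    import secrets

    class TokenVault:
        """Toy vault-style tokenizer: the real PAN lives only inside the
        vault; everything outside handles an opaque random token."""

        def __init__(self):
            self._pan_by_token = {}  # token -> PAN, the only place a PAN exists

        def tokenize(self, pan):
            # Objective (a): the PAN is confined to one controlled store;
            # the surrogate that circulates reveals nothing about the PAN.
            token = secrets.token_hex(16)
            self._pan_by_token[token] = pan
            return token

        def detokenize(self, token, caller_is_authorized):
            # Objective (b): only explicitly authorized callers get the PAN
            # back, and there is no privileged path around this check.
            if not caller_is_authorized:
                raise PermissionError("caller may not view the PAN")
            return self._pan_by_token[token]

    vault = TokenVault()
    token = vault.tokenize("4111111111111111")  # well-known test PAN
    print(token)  # safe to store or log; useless to a thief
    print(vault.detokenize(token, caller_is_authorized=True))

In a real system the authorization decision would come from a proper access control layer rather than a boolean, but the point stands: the objectives can be met without encrypting the data everywhere it flows.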

When I reflect on how I constructed this structure of preconceptions in my mind, I see that I have in some ways been co-opted by fear mongering, sensationalism, and the directed focus of media and industry pundits on isolated security incidents.  To be fair, it is not wholly their fault.  Prior to breach notification laws virtually no one (pundits included) had any awareness of what breaches had happened; most information was hearsay.  Even now, with the breach laws, we have little to no insight into the causes of the breaches.  That fortunately began to change with the Verizon Breach Reports.

Now with my biases hanging out for a flogging, I am ready to see the data.

That being said, I would exhort the researchers and readers to carefully consider the following issues when they analyze the data.

(1) Keep in mind that legacy attack vectors do not necessarily disappear.  Because the data is fairly new, it is less likely to reflect which controls are still necessary even though the attacks they protect against might now be rarely seen in the wild.  Just as in the world of viruses and diseases, it is virtually impossible to completely eradicate attack vectors.  We perform de facto inoculations even though we rarely see the diseases, under the assumption that the inoculations are what keep the threat in check.  The assumption is probably accurate, but if you looked purely at the statistics of a disease occurring you could surmise that the control was no longer needed.

We are all familiar with address spoofing, and we would probably be hard pressed to find an attack based on external address spoofing today, but that doesn’t mean we should stop “vaccinating” against it – or does it?
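For reference, the “vaccine” here is ingress filtering (RFC 2827 / BCP 38): drop any packet arriving from the outside that claims a source address belonging to your own networks.  A minimal sketch, with the internal ranges invented for illustration:

    from ipaddress import ip_address, ip_network

    # Internal ranges invented for illustration.
    INTERNAL_NETS = [ip_network("10.0.0.0/8"), ip_network("192.168.0.0/16")]

    def allow_packet_from_outside(src_ip):
        """BCP 38-style ingress check: a packet arriving on the external
        interface must never claim one of our own source addresses."""
        src = ip_address(src_ip)
        return not any(src in net for net in INTERNAL_NETS)

    print(allow_packet_from_outside("203.0.113.7"))  # True  - plausible external host
    print(allow_packet_from_outside("10.1.2.3"))     # False - spoofed internal address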

(2) The data will naturally have a bias towards new and evolving threats.  It would be wonderful if it could include the context of older threats and attacks, but that would require different sets of data than what some of the current research is providing.  The several years of Verizon Breach Reports have been quite helpful, since the historical data has shown us an evolution of threats and weaknesses, but even they have a limited history.  One option would be to correlate the data collected through the Breach Reports and VERIS with analysis of attacks seen in the wild – successful *and* unsuccessful (think of sniffing, intrusion detection, and other attack reporting methods), as illustrated below.
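As a rough sketch of what such a correlation might look like (the records and field names below are invented and drastically simplified compared to the real VERIS schema), counting attempts alongside breaches per attack vector helps distinguish “the control is quietly working” from “the threat has gone away”:

    from collections import Counter

    # Invented, drastically simplified records; real VERIS is far richer.
    breaches   = [{"vector": "sqli"}, {"vector": "sqli"},
                  {"vector": "stolen_creds"}]
    ids_alerts = [{"vector": "sqli"}, {"vector": "sqli"},
                  {"vector": "sqli"}, {"vector": "address_spoofing"}]

    breach_counts  = Counter(b["vector"] for b in breaches)
    attempt_counts = Counter(a["vector"] for a in ids_alerts)

    # Many attempts with few breaches hints at a control quietly earning
    # its keep; zero attempts *and* zero breaches is ambiguous.
    for vector in sorted(breach_counts.keys() | attempt_counts.keys()):
        print(f"{vector}: {attempt_counts[vector]} attempts seen, "
              f"{breach_counts[vector]} breaches")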

(3) As certain controls become commonplace, the attacks that they protect against will begin to fade, and breaches associated with those weaknesses will drop.  However, if a vulnerability reappears, or a control is set aside on the assumption that it is no longer relevant, attackers will, as we have found, rediscover it.  All we have to do is examine the re-emergence of old vulnerabilities exploited by newer attacks.  Attackers aren’t sitting still, and they aren’t shy about visiting history for ideas.

(4) Include other critical research on “effectiveness” beyond confidentiality (the current Verizon research focuses heavily on “breaches”, which I consider to be cases of failed confidentiality).  We should also consider the other two legs of our security stool: integrity and availability.  I am a huge fan of Gene Kim’s Visible Ops, and I’ve been using it over the last four years to promote controls that support effectiveness across all three legs of the security triad.  Clear research should not only promote security but also point out what other justifications we can have for the controls that bring us confidentiality.

What I find most exciting is that as we challenge traditional models of what security is supposed to be, we will also define solutions we can support with quantitative measures – proof that our actions help our companies (and customers!) achieve better security.  And, near and dear to my heart, these solutions can be grounded in facts, real probabilities, and information presented in clear quantitative terms that management can understand.

Or, to expand an analogy Alex Hutton shared on Twitter: when I can show management that hiring Reggie Jackson to bat for me in October is statistically a good idea, I can do so with confidence in the facts, not just in Reggie’s claims.
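To push the analogy one step further: that statistical case can literally be computed.  Here is a quick sketch using a simple two-proportion z-test and Reggie’s commonly cited batting lines (treat the exact figures as approximate):

    from math import sqrt

    # Commonly cited batting lines (treat as approximate):
    # career ~.262 over 9864 at-bats; World Series ~.357 over 98 at-bats.
    hits_ws,  ab_ws  = 35, 98
    hits_reg, ab_reg = 2584, 9864

    p_ws, p_reg = hits_ws / ab_ws, hits_reg / ab_reg
    p_pool = (hits_ws + hits_reg) / (ab_ws + ab_reg)
    se = sqrt(p_pool * (1 - p_pool) * (1 / ab_ws + 1 / ab_reg))
    z  = (p_ws - p_reg) / se

    print(f"World Series {p_ws:.3f} vs career {p_reg:.3f}, z = {z:.2f}")
    # z comes out around 2.1, so the October jump is unlikely to be pure
    # luck -- the kind of quantitative claim management can actually weigh.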

About Daniel Blander

Information Security consultant who has spent twenty-plus years listening, discussing, designing, and creating solutions that fit the requirements presented. President, Techtonica, Inc.

2 Responses to Data Facts vs. My Bias…how I am losing (and why it's good)


  1. Alex says:

    It’s one of the great problems in InfoSec: control effectiveness. As you intimate, threat actions & vectors do not disappear, but their value diminishes over time with the expectation that any particular vulnerability for a vector will be (or is) solved.

    At some point, we (Verizon) do hope to build up a table of frequencies with successes and failures. Thing is, while most people may be motivated to do in-depth analysis around failures, they may not be interested in recording successes to the point where we would be able to have a reference body of work.

  2. I would clarify one thing – I don’t think that threat vectors are ever “solved”, but rather mitigated. As such, we shouldn’t intimate that they are “solved”, lest we dismiss them prematurely.

    I do truly look forward to analysis around successes, since it will give us greater insight into what is effective. I would suggest that collecting data from detection, logging, and various “community watch” programs could provide some of this information. The challenge will be the form and level of detail it comes in.
