Do you have SOCD? (Security Obsessive Compulsive Disorder)

Are you SOCD?

You have it if:

  • You feel the constant need to force drastic security measures.
  • You say: “This company really needs to revise all the (SOX) controls.  There’s absolutely no reason to have management involved in the process.”
  • You threaten “We need to just block everything and then open up stuff when something breaks.”
  • You believe that technology can solve all security problems.
  • You use biometrics or RSA tokens to access your blog.

Look at this statement:

“Security is about eliminating risk.  Business is about taking risk to make money.  See how they are a perfect match?” – @sh*tmycsosays

Which sentence do you examine and have the greatest curiosity about?  Which sentence makes you roll your eyes?

Security Obsessive Compulsive Disorder is an obsession with imposing security in the face of competing requirements for accessibility to the asset you are trying to protect.  In simple English, you won’t let anyone near anything despite other people needing it.

Now, what are your real desires?

Deep down do you really want to be appreciated? (Probably yes.)
Do you wish someone in the company would listen to you?
Do you wish people stopped avoiding looking you in the eye when you pass them in the hallway?
Do you wish you were invited to the big meeting when the new project design was being discussed?

Then I would recommend some treatments.  Don’t worry, I promise not to make you lie down on a couch and tell me about your sordid relationship with your RSA key fob, or your late-night googling of awk and sed scripts.  Promise.

A. Deep Breathing Exercise

1) Giving attention fully to your stomach, slowly draw in two deep breaths.  As you inhale, allow the air to gently push your belly out.  As you exhale, consciously relax your belly so that it feels soft.  If it already feels soft, that’s okay too.  Too much time staring at EnVision consoles will do that.

2) On the third breath, bring to your mind’s eye an image of a user with good intentions and a desire to just do a good job for their boss.  Imagine their happiness when they receive a bonus for having completed their project on time, or for becoming more efficient in their job.

3) Take a fourth breath, and imagine the CEO of the company talking to the board of directors about how the money they invested in the company is producing profits because everyone could do their job, efficiency was up, and the new products could be released on time.

Now close your eyes and imagine what you can do to make these two people happier, more successful.  Think of what things will protect their goals of getting that bonus, or satisfying the investors who have made this company possible.  Remember that security can be part of this equation, but you have to consider their happiness too.

B. Unenforceable Rules

If you are still struggling, I’d like you to think of something Frederic Luskin calls Unenforceable Rules.  Unenforceable rules are rules that we might currently expect others to adhere to, but which aren’t really in our control, and which we do not have the power to “make right”.  Are the rules about security you think are necessary really unenforceable?  Let me counter the question with another question: how many of your rules have been implemented?  How many have not met significant resistance?  You might ask whether that means there aren’t any rules that others will share.  There will be, trust me, but let me share a little secret.  No security expert ever shares the same rules about security with everyone in their company.  Even the best and most respected CSO will find disagreement on tactics or rules they may think are perfect.  The difference is their ability to recognize that they are unenforceable.

Think then about what your hope is – your goal, your real focus for what you are trying to achieve.  Then look at the rules you want to enforce.  Do you think someone might object to them? (Notice I don’t say they are wrong, just that someone else might not share them.)  Now think about why they might not agree.  What might their objectives be?  What might their goals or focus be?  How do the unenforceable rules violate their goals?

Now you will likely find yourself much more able to understand their goals.  Now you will find yourself able to design new rules – rules and associated actions that users and that CEO will find appealing because they support their goals too.  These new rules and actions can achieve security goals without requiring SOCD.  Recognize that you still may not be able to exercise the level of control or security you wished for, but you likely will have made an impact that you otherwise would not have if you had held to your unenforceable rules.

Credit to Frederic Luskin, with absolutely no malicious intention to parody his incredible work.


Mentoring Outside the Echo Chamber

I have been incensed by certain “pundit” activities through a recent encounter that unfortunately mirrors the frustration I felt 20 years ago at the actions of certain academics where I once taught.  The actions to which I refer?

  • Sweeping generalizations
  • Nihilistic critiques
  • An unwillingness to offer or model a solution

Let me give you my recent trigger:

A small company’s security team had announced to a shocked management that they wished to stop using Firewalls and Desktop Anti-Virus because they were ineffective.  Probing questions traced the decision to a recent encounter this small security team had with a pundit who professed that these tools were ineffective and that new times needed new tools.

Now I’m going to carefully choose my fight here.  My issue is with the advice, which was presented in an abstract vacuum, devoid of situational awareness and environment.  The pundit’s goal of inciting thought and discourse through the abrasiveness of the comments unfortunately served this SMB poorly.  I do not wish to debate here whether Firewalls or Anti-virus are valuable, because there are too many variables to make that a meaningful discussion in a one-sided forum such as a blog.  Such a debate will depend upon what you are trying to achieve, the relative effectiveness of the specific vendor’s technology employed, and the effectiveness and appropriateness of the implementation.  These many variables make the sweeping generalization that “Firewalls are ineffective” quite dangerous.

Yet, as this poor security team understood it, their “ancient” tools had zero value.  A one-hour question-and-answer session with the security team (unfortunately in front of management) led to revelations that they had entered what I call a nihilistic vacuum.  They had not considered what controls those tools were intended to provide, or what threats and risks were most relevant to their environment and, not surprisingly, they had no strategy beyond the overly simplistic objective of “buying a new technology”.  There was no thought of how to address the openings left by their abolition of their only source of network access controls or detection of malicious software.  Their newfound idealism was directionless and without purpose.  This is far from productive, and in a small company, potentially devastating.

What ensued for the remaining two hours was an exercise in modeling how this security team should have reacted to that advice.

I first inflicted some pain by saying that yanking a tool, even if limited in effectiveness, was dangerous if no thoughtful examination is made of what is lost, what is gained, and what will fill the void.  What I did next was to model a thought and design process for this team that examined the decision and how they could have approached it far more effectively.  Things we discussed:

a) what is valuable to protect here at this company?
b) what are the ways these things are used, handled, or stored?
c) what controls are in place to make sure they are used and protected appropriately?
d) which of these controls will you lose when you abolish the “ancient” technology?
e) what designs do you have in place to replace these controls?
f) what level of improved effectiveness and efficiency do you gain from this new design? (and how you can try to model it)
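
The questions in steps (c) through (e) amount to a simple gap analysis.  As a rough sketch (the tool names and control objectives below are invented for illustration, not taken from the company in this story), you could model it as sets of objectives per tool:

```python
# Hypothetical inventory: which control objectives each tool provides.
controls_by_tool = {
    "firewall": {"network access control", "egress filtering"},
    "desktop antivirus": {"malware detection", "malware removal"},
    "vpn gateway": {"remote access control"},
}

def uncovered_objectives(tools_to_remove, replacements=frozenset()):
    """Objectives lost by removing tools and not covered by any
    remaining tool or planned replacement control."""
    lost = set()
    for tool in tools_to_remove:
        lost |= controls_by_tool.get(tool, set())
    still_covered = set(replacements)
    for tool, objectives in controls_by_tool.items():
        if tool not in tools_to_remove:
            still_covered |= objectives
    return lost - still_covered

# Retiring both "ancient" tools with no replacement leaves four open gaps.
gaps = uncovered_objectives({"firewall", "desktop antivirus"})
```

Anything left in `gaps` is an opening that the new design from step (e) must fill before the old tool goes away.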

I then showed them that “ineffective” or “ancient” rarely applies to control objectives (such as preventing inappropriate network access to systems, resources and data) without a much greater shifting of heaven and earth.  I taught them in the hour I had left that design is an act that we must all undertake, not one to defer to some Pundit who lacks the awareness of your environment and goals needed to make the determination for you.

For those of you wondering about what incensed me 20 years ago: as a Teaching Assistant in two different architecture schools I watched professors launch into scathing reviews of students’ work without a thought given to the student’s or project’s situational awareness.  The critique was nihilistic, abstract, and linguistically incomprehensible.  The student left with nothing new but tears (or a stiff upper lip).  There was no growth from replacing the mistake with a new idea or process, no modeling by the professor of how what they said worked in reality (or a physical world).  The student had to grasp at straws to identify the faults in his demolished design (in one case, literally demolished).  I railed against these monstrous outrages then, as I do now.

So all you Good and Bad Pundits, dig deep.  Think carefully about what you say, because many hang on your every word.  Your words have value, but they also need context.  Teach completely and give this context.  Be specific and explicit in your critiques.  And when you finish with your critique, show how to correct the issues, evaluate effectiveness and model how to find solutions.  Inside the context of the InfoSec Echo Chamber we attempt to incite each other to action, but we forget that those who are on the fringe do not always benefit from our battle scars and insights.

I issue this challenge to Pundits because you hold the mantle of leadership through the papers, lectures and conferences which proffer your ideas.  Those on the fringe also have the responsibility, but they are the naive, and look to you to overcome this naïveté.

Students, there is no utopia. If you find after you have listened to one of these Pundits you suffer a vacuous nihilism in your InfoSec soul, grab some ABBA, a bean bag chair, and sit down with someone who can explain what it all really means.  Unlike unicorns, these people really do exist.

If you need some thoughts about how to do this, I recommend reading Donald Schön’s The Reflective Practitioner and Chris Argyris’s Theory in Practice (as well as any of his books on direct, explicit feedback).


My Take Away Moment from BSidesSF

I won’t attempt to rehash the conference, except to say, if you have a chance to attend a BSides event, do so in great haste. Despite being free, they are worth every penny you could invest in visiting one.  What a great respite from the RSA Conference!

What I do want to cover is a very interesting panel at the end of the conference.  The panel included some great minds: Will Gragido, Josh Corman, Marc Eisenbarth, HD Moore, Dave Shackleford, Alexander Hutton, and Caleb Sima.  The subject clearly drew interest, given the crowd it attracted: “State of the Scape: The Modern Threat Landscape and Our Ability to React Intelligently”.

But what came out of the panel as a result of some “heckling” on the subjects of APT and Cloud Computing was priceless (kinda like a MasterCard commercial).  It was not what I think the panel had planned or was expecting (but that’s the fun of a panel, and of BSides).  If you are a budding CSO or Security Manager, take note:

  • Don’t make people security experts.  Make it easy for people.
  • Make security accessible and something that people care about.
  • Make it easier for programmers to program securely than it is to program insecurely (Microsoft’s .Net work was offered as an example).
  • Get out of the echo chamber where we only talk about security in obscure terms and treat everything as unique and terrifying.  People need it to be accessible and simple.

Wow.  This echoes stories I’ve told for years, and stories that have been popping up around the world as I’ve been traveling this last year:

    • At a conference I attended in the EU, the local CERT authority described a company who had spent millions of Euro on top-of-the-line security technology, and yet it was all turned off.  It was turned off because users always looked for ways around it because it made their jobs too difficult if not impossible to perform.
    • As a traveler, do you enjoy the TSA security line: dumping all your belongings into a plastic tray for the world to peruse, being subjected to numbing technology scans, and, in the end, a joyous pat down?  Or would you prefer a simple process to ensure your flight is safe?
    • Is it easier to teach programmers to write code void of SQL injection flaws, or is it easier for Microsoft to write .Net functions that make it more difficult to make direct SQL calls, thus significantly reducing the probability of someone writing code that results in SQL injection vulnerabilities?  (P.S. Microsoft did the latter, hooray!)
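
The .Net point generalizes: when the easy API is also the safe API, whole vulnerability classes shrink.  Here is a minimal sketch in Python’s sqlite3 (table and data invented for illustration) contrasting string-built SQL with a parameterized query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user_unsafe(name):
    # String concatenation: attacker-controlled input becomes part of the SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = '" + name + "'"
    ).fetchall()

def find_user_safe(name):
    # Parameterized query: the driver treats the input as data, never as SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
unsafe_rows = find_user_unsafe(payload)  # the classic injection: matches every row
safe_rows = find_user_safe(payload)      # matches nothing
```

The safe version is also the shorter one to write – which is exactly the property that makes secure-by-default libraries work.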

Simplicity for all of us is the best way.  Simplicity that anyone can use, and that makes it easier for all of us to do things the right way rather than the wrong way.  And that does not necessarily mean making the hard way painful by imposing fines, penalties or punishments.

So as a Security Professional I would highly recommend you take the following actions in your strategy and tactics:

1. Make security invisible – it shouldn’t get in anyone’s way, or stop them from doing what they need to do to get their job done.  But it should be part of what they do.
2. Remind people of what they value – so they can protect that.  It may be the teenager’s pictures and music, it may be the accounting department’s numbers, it may be the sales person’s leads, or it may be the IT infrastructure.  Whatever it is, make sure the people who care about it are aware that you are trying to protect what they value.
3. Look for methods that make security easier for users than the lack of security.  Whether that is through technology that makes authentication easy (biometrics for execs?), or programming libraries that are inherently secure, or making data easier to handle securely than insecurely.
4. Always give something back.  If you find that a security control you have to put in place has an impact, be ready to give something back to the users.  They will be more likely to comply if you can show that you care about their priorities (such as how they can get their job done successfully and efficiently).

Sophisticated Analysis of Risk Management is Critical… Don’t Do Sophisticated Risk Management Analysis

There is a wonderful discussion occurring in SIRA (the Society of Information Risk Analysts) these days.  I missed the beginning of this group, and I regret it, because the messages coming out of the discussions are extremely insightful and critically important for anyone who is managing risks around Information Security, or any type of security for that matter.  The discussion I want to hit on is one that I am sure is already a contentious debate within and without SIRA: should I perform a risk analysis at my company?  The subsequent questions are the source of much of the resistance: What model should I use?  How do I measure the likelihood?  Does impact include hard and soft costs?  Do I need a PhD in statistics?  Why does Excel always crash when I try to do big equations like this?

I can’t answer why Excel is crashing, but I think the rest has an easier answer than we might think.

Let the Gurus do the Risk Modeling and Statistical Analysis:

The most substantial and accurate challenge to Risk Modeling in Information Security is that there is not enough data around probabilities, and as such, the quantitative rigor of our analyses declines rapidly.  I would absolutely agree.  Any insurance company will tell you that there is little, if any, actuarial data on Information Security.  But the only way we are going to overcome this challenge is by collecting and analyzing that data.  Let the experts do this work and collect the knowledge.  Let them build the complex models, be the PhDs in statistics, and find better ways to analyze the data than Excel.  Let this data become the source of the probabilities that we need.

Look at the value we get from seeing what types of attacks are most frequent against Payment Card Data, the mix of sources of data breaches, the records stolen by type, what vulnerabilities are most often exploited… I’ll calm down now.  The excellent work that is being done to analyze the probabilities through current studies needs to be pushed forward.  The showcase example has been the VzB breach studies.  They have contributed significantly to our knowledge of what is really happening.  I would love it even more if there were a clearinghouse for the statistics, so we could merge all the data of those who are jumping on board.  Imagine the collective knowledge based on a myriad of views, experiences and organizational cultures.  And let’s face it, data is useful.  It validates what we see, it removes ambiguity, it allows us to correlate events and situations, and it even highlights differences and nuances that we don’t see.  It has the capability to remove pre-disposed biases and correct a priori assumptions.
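
To make the clearinghouse idea concrete, here is a toy sketch (the source names and figures are invented, not real breach data) of merging per-source attack-type counts into one combined view:

```python
from collections import Counter

# Invented per-source counts of breaches by attack type.
source_a = Counter({"stolen credentials": 40, "sql injection": 25, "malware": 35})
source_b = Counter({"stolen credentials": 10, "physical theft": 5})

merged = source_a + source_b        # element-wise sum of the two views
top_threat = merged.most_common(1)  # most frequent attack type overall
```

The real work of a clearinghouse is normalizing categories and disclosing each source’s collection bias; the arithmetic itself is the easy part.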

Don’t Let the Data Rule You:

However, statistics don’t tell the whole story.  Let’s be honest about it.  There are stories behind the statistics, not the other way around.  Statistics will show us a story about the data we feed them.  They won’t tell us where the data came from, what factors affected the source of that data, or what the outcomes of that data were.  We have to supply that information.  Remember: data in = data out, or garbage in = garbage out.  It is always important that as we make use of the data we read the fine print (or big print, if they make it available) to understand the sources.  The VzB breach reports have their biases: the 2010 report is potentially different from the 2008 or 2009 reports because of data input from the US Secret Service.  Differences can emerge between data from a business collecting breach data and data from the US Government collecting breach data.

Bias in the data will affect some of the outcomes.  As an example, companies are probably more likely to use private security firms to investigate internal issues to avoid public disclosure and embarrassment, while the US Government resources will more likely be involved when the breach source is external, or the company feels their legal repercussions are minimized.  These are the stories that we have to consider when we look at the analyses, and should be disclosed to make sure we use the data correctly.

Use the Data, Not the Math

For you, the new IT Manager, the result of all of this data research is that you now have a set of probabilities that you can say are based on reality, and you know the biases of the sources and the resulting analysis.  You can now take your finger out of the wind, put away your “8 Ball”, and use real data.  It’s not perfect data (remember its story!) but it is far better than when I started in this field 20 years ago.  You do not need a PhD in statistics or mathematics.  You do need to know how to read the outcome reports from the analysis (some reading skills are necessary).  You do not need to build a complex Risk Management model.  You do need to build a simple model.  Your risks can be built on the field of possible threats using the data from the detailed analysis.  Your vulnerabilities can be built from your known environment.  And the probabilities can now have some teeth.  Even if you don’t feel you can build a risk model (time, effort, Excel just won’t work) you can always refer to the global models of probability and risk from the studies that have been done, which have been vetted, and which are based on extensive data.
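
A minimal sketch of what such a simple model could look like, assuming you take per-threat frequencies from a published breach study and supply your own impact estimates (all numbers below are invented for illustration):

```python
# Invented share of breaches by threat, as a published study might report it.
published_frequency = {
    "stolen credentials": 0.30,
    "sql injection": 0.20,
    "physical theft": 0.05,
}

# Your own estimated cost (in $k) if each threat is realized.
my_impact = {
    "stolen credentials": 500,
    "sql injection": 800,
    "physical theft": 50,
}

def ranked_risks(frequency, impact):
    """Score each threat as frequency x impact and sort, highest first."""
    scores = {t: frequency[t] * impact[t] for t in frequency}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

priorities = ranked_risks(published_frequency, my_impact)
```

No statistics PhD required: the ordering of `priorities` is your starting point for focus, and you revisit it as the published frequencies (and their stories) change.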

Lastly, as I wrote in an earlier post, my biases have changed, and all as a result of the data.  I made a change in focus several years ago after reading the data gathered in Visible Ops.  Now I am changing again, by using the data from the breach reports from various (trustworthy) sources.  I’ve changed my previous biases because the data has told me to.  The story for me is that now I can monitor threats, vulnerabilities and risks being realized, and identify what they are, their frequency, and their likelihood of occurring versus other threats, vulnerabilities and risks.  I can focus my priorities…

1) Let those who can analyze the data (and have the PhDs in statistics) analyze the data

2) Use the results of their work to simplify and increase the accuracy of your risk analysis


Handing Back Responsibility for Security

There is a great lesson that unfolded at one of my customers’ sites during an audit.  It is a great story to tell, but more importantly, it lets me illustrate that as Security Professionals, we need to design security to work in a way that makes it natural to the business.  I know, shocking isn’t it?  But it can be done…

During an audit of a company’s security program the gentleman doing the audit asked for evidence of “…specific Security Testing…” in the development process.  The development manager responded, “We do testing, but not any specific Security Testing.  We do code reviews by someone who hasn’t written the code but is part of the same team so they understand the objectives and how it might impact other code.  We use the material we receive from annual training we have with our development tools vendor on how to write more secure and stable code.  We do data input and processing tests to make sure the system doesn’t break.  Then we test the functional specifications to make sure we met all the design specifications.”

The auditor’s answer was, “That’s not specific Security Testing.”

I stopped the auditor and asked him to tell me what “specific Security Testing” was.  His answer was, “It includes testing of the code, looking for security vulnerabilities, testing with tools that look for security problems, testing for error conditions or code failures that could result in the disclosure of data.  The testing you do here is Functional Testing.”

So I asked a question of the Auditor:

“What is the ideal objective that we, as Security Professionals, would like to see when we look at application development?”

When I got back the same response about what specific Security Testing is, I responded, “What if a software development program includes Security from design, through functional specification, through development and into testing?  Security is built into every aspect, and it is natural.  Is that not a better model?”  There was affirmative nodding.

“Then, is it not appropriate that a company include Security Testing in their existing testing methodologies and refer to it as Testing, rather than as specific Security Testing?”  At this point there was some silence on both sides.  I then prodded the development manager, who proceeded to discuss how Security was wrapped diligently into their design and functional specs, and how their input and processing testing included many of the elements of specific Security Testing that the assessor was looking for – but they never called it Security Testing.  It was called just Testing.

Let us be honest about something.  Not every development team thinks this way.  I happen to have a few very brilliant managers at clients who think this way.  Hats off to them.  But our goal as Security Professionals is to get all of our clients to think this way!  Security should not be a standalone activity operated in isolation.  Security should be a natural part of what we do every day.  To paraphrase many security professionals, if we naturally did the “secure” things we should do in the first place, we wouldn’t need much of the artificial layer of protection and tools we build.
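
One way security testing can live inside plain “Testing” is to fold security assertions into ordinary functional tests.  A hypothetical sketch (the function and test cases are mine, not the audited company’s):

```python
def parse_quantity(raw):
    """Parse a user-supplied order quantity, rejecting bad input early."""
    if not raw.isdigit():
        raise ValueError("quantity must be a positive integer")
    value = int(raw)
    if not 1 <= value <= 1000:
        raise ValueError("quantity out of range")
    return value

def test_parse_quantity():
    # Functional case: the happy path works.
    assert parse_quantity("42") == 42
    # Security cases, sitting in the same test: hostile or malformed
    # input must be rejected, never passed downstream.
    for bad in ["-1", "0", "1001", "1; DROP TABLE orders", ""]:
        try:
            parse_quantity(bad)
        except ValueError:
            pass
        else:
            raise AssertionError(f"accepted bad input: {bad!r}")
```

Nobody on the team has to call the second half “specific Security Testing” – it is simply part of what Testing means.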

We must drag auditors, assessors, and every other critic away from their “déformation professionnelle” – their tendency to look at things through the lens of their profession, forgetting about the bigger picture or the real goal.  In the case of software development, most auditors think of the world after we decided (unilaterally) that developers can’t do it on their own, so we must put in place controls, tools and other activities to stop their bad code.  Instead, the goal should be to create an environment where the developers do include security in their processes – at every step.

I don’t argue against the tools that are used in Security Testing.  I just argue that keeping these tools and processes out of the developers’ hands tells them it is okay for them to write bad code.  You are implicitly telling them that it is someone else’s job to make sure it is secure.  What we as security professionals need to do is hand that responsibility back, give them the tools, give them the training, and assign penalty and blame when they do not take up the bit.

The lesson from this little story?  Let me walk you down the garden path:

a) Security should be built in as a natural part of our existing business processes.  It becomes a cultural and behavioral change.

b) Security should be everyone’s responsibility, not one group’s in isolation.

c) We need to play the coaches, not the ringleaders.

Being in the Information Security profession is a lot like being someone’s coach or trainer.  Your goal is not to run their business, or to swing the golf club.  Your goal is to adjust them so that they improve their performance and results.


Data Facts vs. My Bias… how I am losing (and why it’s good)

I have to admit, as I listen to the sages on collecting data (Alex Hutton, Mike Dahn, Josh Corman…), I am becoming more and more conscious of my own biases about security (guilty as charged!).  Ever since Alex’s post a few weeks ago, the whole concept has been rolling around in my mind.  While reading RSA’s Security for Business Innovation Council Report for Fall 2010 on the plane, I found myself questioning the risks and comments as I read them.  More importantly, I started realizing that I even suffered biases of my own.  As I worked through control objectives for PCI I noticed that I was questioning certain PCI controls: “Is this based on real threats and attacks?”, “Is that really effective, or is it a legacy belief?”, “Aren’t there other ways to achieve the same objective?”

I began to question the attack vectors and prescriptive controls that I have been taught to accept.  Josh Corman commented in the “Hug-it-Out” series that this can create some very unique opportunities for alternatives once we clearly understand what we are trying to protect against.  As an example, today I looked at the prescriptive PCI-DSS controls in Section 3 for encryption.  I don’t doubt the power that encryption has, but I began to question what the controls were trying to achieve.  Think about the objective behind encryption.  I would argue that it is twofold (at least for the PCI Security Standard).  If you deconstruct and reverse engineer Section 3 of the PCI-DSS, I believe you find two ideas:

(a) The need to ensure that access to Payment Card Data is strictly limited, and that this control is ensured throughout the Payment Card Data’s lifecycle and on any medium where it might exist.

(b) Providing a clear assurance that access to Payment Card Data is limited to a minimal number of approved/authorized users and cannot be bypassed through the use of privileged (think administrative) access.

If you find issue with these objectives, suspend your disbelief for a moment and assume that they are accurate.  Now think hard about those objectives: are there other ways to achieve them besides encryption?  (Please send me ideas, since I’d love to hear them!)
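
One commonly cited alternative for objective (a) is tokenization: the card number never leaves a small, tightly controlled vault, and everything else handles an opaque token.  A minimal sketch of the idea (illustrative only, not a production design):

```python
import secrets

class TokenVault:
    """Maps card numbers to random tokens; only the vault can reverse them."""
    def __init__(self):
        self._token_to_pan = {}

    def tokenize(self, pan):
        token = "tok_" + secrets.token_hex(8)  # random, carries no PAN data
        self._token_to_pan[token] = pan
        return token

    def detokenize(self, token):
        # In a real system this call would be tightly restricted and audited.
        return self._token_to_pan[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
# Downstream systems store and pass only `token`; the PAN stays in the vault.
```

The access-limiting objective is then met by shrinking the set of systems that ever see the PAN, rather than by encrypting it everywhere it flows.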

When I reflect on how I might have constructed this structure of preconceptions in my mind, I see that I have in some way been co-opted by fear mongering, sensationalism, and a directed focus by media and industry pundits on isolated incidents of security.  To be fair, it is not wholly their fault.  Prior to breach notification laws virtually no one (pundits included) had any awareness of what breaches had happened.  Most information was hearsay.  Even with the breach laws we had little to no insight into the causes of the breaches.  That fortunately changed with the Verizon Breach Reports.

Now with my biases hanging out for a flogging, I am ready to see the data.

That being said, I would exhort the researchers and readers to carefully consider the following issues when they analyze the data.

(1) Keep in mind that legacy attack vectors do not necessarily disappear.  Because the data is fairly new, it is less likely to reflect on what controls are still necessary even though the attacks they protect against might now be rarely seen in the wild.  Just like the world of viruses or diseases, it is virtually impossible to completely eradicate attack vectors.  We perform de facto inoculations even though we rarely see the diseases, under the assumption that the inoculations are what continues to keep the threat in check.  The assumption is probably accurate, but if you looked purely at the statistics of a disease occurring you could surmise that the control was no longer needed.

We are all familiar with address spoofing, and probably would be hard pressed to find an attack based on external address spoofing, but that doesn’t mean we should stop “vaccination” against it, or does it?

(2) The data will naturally have a bias towards new and evolving threats.  It would be wonderful if it could include the context of older threats and attacks, but that would require different sets of data than what some of the current research is providing.  The several years of Verizon Breach Reports have been quite helpful, since the historical data has given us an evolution of threats and weaknesses, but even they have a limited history.  One option would be to correlate the data collected through the Breach Reports (VERIS) with analysis of attacks seen in the wild (successful *and* unsuccessful – think of sniffing, intrusion detection, and other attack reporting methods).

(3) As certain controls become commonplace, the attacks that they protect against will begin to fade.  Breaches associated with the weaknesses will drop.  However, if a vulnerability reappears, or a control is set aside with the assumption that it is no longer relevant, attackers will rediscover them, as we have found.  All we have to do is examine the re-emergence of old vulnerabilities that are exploited by newer attacks.  Attackers aren’t sitting still, and they aren’t shy of visiting history for ideas.

(4) Include other critical research on “effectiveness” outside the aspects of confidentiality (the current Verizon research focuses heavily on “breaches”, which I consider to be cases of failed confidentiality).  We should also consider the other two legs of our security stool: integrity and availability.  I am a huge fan of Gene Kim’s Visible Ops and I’ve been using it over the last four years to promote controls that support effectiveness across all three legs of the security triad.  We need clear research that not only promotes security but also points out what other justifications we can have for the controls that bring us confidentiality.

      What I find most exciting is that as we challenge traditional models of what security is supposed to be, we will also define solutions that we can support with quantitative measures to prove that our actions can help our companies (customers too!) achieve better security.  And near and dear to my heart, these solutions can incorporate ideas that are based in facts, real probabilities, and information that we can show in clear quantitative measures that management can understand.

      Or to expand an analogy Alex Hutton shared on Twitter: when we can clearly show management that hiring Reggie Jackson to bat for us in October would be a good idea statistically, we can do so with confidence in facts, not just based on Reggie’s claims.

      Posted in Information Security, InfoSec, IT Risk Management, PCI | 3 Comments

      Sustainable Security by Showing Tangible Benefits

      I spent a large part of my involuntary layover in Atlanta last month thinking about PCI, Control Objectives and Maturity.  Sometimes interruptions like this are good: stepping back from our non-stop business lives for moments of thought is critical to our own personal growth, and to the growth of others as well (like our businesses).

      I found my thoughts continually returning to the chasm that exists between compliance and maturity.  Why do I call this a chasm?  Because companies still, to this day, shoot for “compliance” with the goal of avoiding penalties.  For InfoSec, compliance shouldn’t be the objective.  The real objective should be sustainable security and the tangible benefits it can bring.

      Sustainable security is when you have an effective, repeatable process or cycle of continuous improvement.  This is a concept borrowed from CMMI, wonderfully articulated by SEI in the OCTAVE model, and used by CoBiT for measuring effectiveness of controls.  There are various levels of maturity starting at ignorance and moving up through ad-hoc controls, defined controls, managed controls, and continuous improvement.

      If we look at “compliance” we will typically find companies either at ad-hoc controls (ones which are based on heroics) or just defined/managed controls.  In these situations companies are “going through the motions” to satisfy an external master.  These companies create an end-goal of passing an audit or assessment and then move on.  Continuous improvement is not in their plan.  “Just tell me what to do so I can do it, and get on with my job.”  Their view is that compliance is an impediment to their business – one more hurdle to jump over before moving on to other things that they see as being more directly beneficial to their business.

      Maturity comes when we move beyond “going through the motions” and actually monitor and measure the success of our program. A bank or an insurance company would never manage its financial risks the same way year after year.  They would evaluate their existing controls, and evaluate the external environment, threats and the variables which change constantly.  Risk management requires awareness of the effectiveness of our efforts measured against objectives, and evaluation of the objectives themselves.

      The same applies to Information Security.  An effective security risk management process evaluates the environment, assets and evolution of threats to choose appropriate controls, and then evaluates whether the selected controls are operating effectively.  Both types of evaluation should be continuous and ongoing because the environment is ever changing.

      So what is the challenge we have in moving from compliance to maturity?

      As a concept sustainable security isn’t very attractive to many executives and I can understand why – how does it bring a benefit to customers and the company bottom line?  If you take sustainable security at face value, the answer is, “Not much.”  It looks on the surface like a nice “process improvement” practice, but without any significant returns for the business.

      How do you answer this challenge?  How do you make a model of sustainable security and maturity meaningful?  The answer is in the facts.  Show these managers and executives the business risks, AND benefits of security controls.  Use quantitative research (for example, Visible Ops by Gene Kim) that shows the specific benefits of specific controls.  Put those benefits into terms they can understand from their tower of denial…

      (a)    Managing, controlling, and creating awareness around changes to systems and programs in your environment is proven to create a more stable and predictable working environment for your employees.  Users are prepared for changes and are more quickly able to take advantage of the benefits the changes offer them.

      (b)   Appropriately testing new systems and programs before putting them into production results in higher customer satisfaction, as customers and users have more positive (and fewer negative) experiences with the systems and programs.  Happy customers are the result of systems that work properly and are available when they are needed.  Testing ensures that this is the case.

      (c)    Building and maintaining systems in a consistent manner through standards has been proven to create a more stable and predictable environment where problems are more easily detected and fixed.  This results in higher availability for the tools that your customers and employees need to help you create revenue and customer satisfaction.

      I use these examples since they are the subject of great studies and I can pull out the quantitative data to support them.  We will always need more research on what works, and what doesn’t.  More importantly, we need to be ready to convert this research into meaningful messages that make security meaningful to executives.  Once the company understands the benefit is much greater than just a check box or risk management, they will move faster towards the goal.  Our challenge is to take our research beyond just “what got broken into” and into “what creates tangible benefits for a company”.

      Posted in Information Security, Information Security Governance, InfoSec, InfoSec Governance, Security Governance | Leave a comment

      They Just Don’t Get It

      “They just don’t get security!”

      As InfoSec professionals we often curse our management, our users or our customers (or all three) because they have done something “stupid” which either creates or nearly creates a security incident.  We howl, we complain, and we wish users would just “wake up and learn!”

      I think we are all wrong – yes, the InfoSec professionals are wrong, management is wrong, users are wrong and our customers are wrong.  Why?  We all don’t get security.  There are a few exceptions, but as soon as we bemoan our users, management or customers, we are just as guilty of ignorance as they are.

      “Okay, now you’re off the deep end!”

      Let me tell you a comment that I heard at a panel I was on where we were meeting with the media.  One of the panelists said, “I know a bank which has put in state of the art security, and some of the best controls.  But they are all turned off because the users won’t use them and they just go around them.”  We have all heard this story before, and usually we find ourselves saying “They just don’t get security!”

      The problem is not with the users.  It is with the InfoSec professional who thought that the best, state-of-the-art tools were appropriate even though they inhibit the ability of users to do their jobs or act in a productive manner.  How can users be expected to respect, learn about and engage with security tools when we as InfoSec professionals so often fail to learn about or engage with other business units in our companies and understand what they must do to be successful?  Here is a list of questions to ask; consider whether you can answer them without making a phone call:

      1)      What is the most important function or process in each business group?

      2)      What function or business process in each business group generates the greatest revenue?

      3)      What efficiencies in each business group can or does create the biggest savings?

      4)      What processes in each business group are the most time consuming?

      5)      What business risks keep the managers in each business group “awake at night”?

      6)      How does knowledge and information flow through the company?

      As I have mentioned in lectures and blogs before, I have walked into companies where the InfoSec group has no idea what the business does, or refuses to talk to other business groups about their needs, their views, and their operations.  One company even insisted that their Business Continuity Plan did not need to include anyone outside of IT since, “We know all of it anyway.”

      If you as a CSO want to promote security tools and controls you had better understand the business and be able to talk about their problems.  You had better have your team ready to design and select security tools and controls that enable the critical processes, increase efficiency, reduce time to perform a job and increase revenues (or customer satisfaction).  If you can’t do that, then you will fail.  And don’t be surprised when the company looks at the security group and says, “They just don’t get business.”

      Posted in Information Security, InfoSec | Leave a comment

      Model for Building PCI Control Objectives

      Maybe it’s the excitement of getting re-Tweeted today, or maybe it’s just the outpouring of love and emotion I felt when I watched the video of the Mike+Josh hug, but I thought I’d provide a bit more thought around how to build these Control Objectives for PCI.  That and the fact that I live by the motto of taking “Massive Action” whenever I sink my teeth into something…

      My intent is to demonstrate three things:

      a)      Control Objectives can be linked to a business objective or goal

      b)      Prescriptive Controls can be abstracted to Control Objectives (with some effort)

      c)       There can be a rationale behind both Control Objectives and Prescriptive Controls that, once explained, can result in better management support, and potentially more creative design.

      My objective is not to co-opt this discussion, but rather to feed the debate with ideas.  It’s great to see this level of thought being spread in InfoSec, since I too often have listened to the bits and bytes battle in the boardroom, and that situation doesn’t end well.  If we can create meaningful discussions around the end goals and spread the knowledge top-to-bottom, we can begin to socialize the objectives of PCI so that ideas can come not just from technologists and (shudder) vendors alone, but actually come from a much broader spectrum – which leads ultimately to greater support and buy-in at executive levels.

      The Guinea Pig Control

      I’d like to take a simple example since it is very fresh in my mind, and it allows me to demonstrate the technique.  Let’s look at PCI Requirement 8.5.16.  This control is oddly grouped with the other 8.5.x requirements but creates a pretty significant challenge for many organizations I’ve talked to.

      8.5.16 “Authenticate all access to any database containing cardholder data.  This includes access by applications, administrators, and all other users.  Restrict all direct access or queries to databases to database administrators.”

      Now in the PCIHugItOut they mentioned the document “Navigating the PCI DSS” from the PCI SSC as a starting point, but rightfully point out that it doesn’t go far enough.  The Guidance from the document states:

      “Without user authentication for access to databases and applications, the potential for unauthorized or malicious access increases, and such access cannot be logged since the user has not been authenticated and is therefore not known to the system.  Also, database access should be granted through programmatic methods only (for example, through stored procedures), rather than via direct access to the database by end users (except DBAs, who can have direct access to the database for their administrative duties).”

      Unfortunately the statement is not particularly “guiding”.  As a technologist you might look at it and say it is a great guiding statement.  But in doing so you are applying a priori assumptions about logging user access when theft occurs, about forensics, or about the limitations of database select statements.  You are also sidestepping the problem – management lacks that cognitive awareness, and it is highly unlikely they will care to invest their grey matter in changing that situation.  Your assumptions will be left trampled on the boardroom floor.  Trust me when I say that this one small control can create a firestorm when users (marketing, finance) are told that they cannot run their Crystal Reports or ODBC queries against a database that just happens to contain Payment Card Data.

      The Control Objective Example

      So let us give these people a reason why they can’t delve into the database ad-hoc.  I am going to assume the same model that I have used for years when building SOX Control Matrices:

      Threat: Theft of Payment Card Data

      Vulnerability: Lack of controls that limit the ability of users to collect/transfer/misappropriate Payment Card Data from locations where it is appropriately (securely) stored.

      Risk: Users may collect Payment Card Data from secured repositories (either one-by-one, or in significant quantities) and either (a) store it on insecure systems, which exposes it to inappropriate activity, or (b) misappropriate the Payment Card Data for nefarious purposes.

      Outcome for the Business: If Payment Card Data is placed on unsecured systems, it increases the company’s risk posture by creating unmitigated risk.  Any compromise of this mismanaged data must be reported publicly (under most state/local privacy disclosure laws) and can result in fines, and in the company being on the evening news (or the daily podcast for the technologically evolving masses).  Because this “uncontrolled data” becomes virtually impossible to manage and track, the likelihood that it is misplaced, and that the company must report it publicly, rises dramatically.  The focus is not just on stolen data but also on data that is accidentally misplaced (for example, the TSA laptop with personal information that created a stir when it couldn’t be found and the loss had to be disclosed by law).

      Control Objective: Ensure that the ability to transfer Payment Card Data from databases is appropriately controlled in a manner that prevents the transfer of such Payment Card Data to unapproved and unsecured locations or environments.

      Now, given this control objective companies could have the choice to either:

      (A)   Follow DSS Requirement 8.5.16 to the letter (although this is a slightly less prescriptive control than others in the DSS).

      (B)   Present to the QSA the manner in which you feel you have met the Control Objective while achieving the same goal (to give you an idea – perhaps row-level ACLs that prevent access to actual PAN data by anyone who can log in directly, tokenizing the databases that users directly access, or completely eliminating the ability to store the data locally by using a terminal server for producing these types of reports or data).
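      To make the tokenization alternative in (B) concrete, here is a minimal Python sketch.  All names here are hypothetical, and the in-memory dictionary stands in for what would, in any real deployment, be a hardened, access-controlled vault service: reporting users query a database that holds only tokens, while the vault alone (DBAs and authorized applications) can reverse a token back to the PAN.

```python
import secrets


class TokenVault:
    """Toy token vault for illustration only.

    Maps PANs to random tokens so that reporting databases never
    store real card numbers.  A production vault would be a separate,
    hardened service with its own authentication and audit logging.
    """

    def __init__(self):
        self._pan_to_token = {}
        self._token_to_pan = {}

    def tokenize(self, pan: str) -> str:
        # Return the existing token so the same PAN always maps to
        # the same token (useful for joins in reporting queries).
        if pan in self._pan_to_token:
            return self._pan_to_token[pan]
        # Random token; keep the last four digits so analysts can
        # still do the reporting they typically need.
        token = "tok-" + secrets.token_hex(8) + "-" + pan[-4:]
        self._pan_to_token[pan] = token
        self._token_to_pan[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        # Only callers with access to the vault can recover the PAN.
        return self._token_to_pan[token]


vault = TokenVault()
token = vault.tokenize("4111111111111111")
assert token != "4111111111111111"          # reporting DB sees only tokens
assert token.endswith("1111")               # last four preserved
assert vault.tokenize("4111111111111111") == token   # stable mapping
assert vault.detokenize(token) == "4111111111111111" # vault-only reversal
```

The design point for the QSA conversation is that the marketing or finance user keeps their ad-hoc queries, while the Control Objective (no transfer of real PAN data to unapproved locations) is still met.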

      The Outcome

      What this has done is give a business view (Outcome for the Business) that most executives can understand.  You might need to give some real-life examples from their personal lives to make it meaningful and associate it with concepts that are in their cognitive realm (have them think of handing their credit cards to their kids, or car keys to a valet, and the resulting risk they know exists for those “assets” to be misplaced when they are in someone else’s hands).  You have also given an implementer a framework to evaluate how a control should fit, and to decide if the PCI prescriptive controls are satisfactory for him.  The implementer will have to justify his choice of alternative controls to his management, and depending upon his savvy or guile, he will either succeed with alternative (more effective, more efficient) controls, or he will find himself forced into the default PCI prescriptive control.

      The Challenges in All of This

      There are a few challenges here that we must face as well.  Some can be easily overcome, but others will be subject to massive forces of economics and politics.

      First, the Control Objectives will overlap with the prescriptive controls of the DSS.  The example shown above overlaps with 12.3.10, “personnel accessing cardholder data via remote-access technologies”.  This isn’t a problem for those of us who have played with Control Matrices for some time, so it can be easily overcome with good minds, careful thought and creative mapping.

      The second challenge is that most companies and QSAs will view this as a troublesome approach.  The process proposed is no different from the current process of “mitigating controls”, but that process currently carries a culture of being discouraged.  The cause, in my mind, is that it results in more work for the QSAs.  With QSA services becoming a commodity these days, the extra work isn’t attractive to customers looking to save money on their assessment.  Likewise, QSAs trying to increase profits are unlikely to stray from the price advantage of testing against prescriptive controls.  Like all good plans, both parties will drift into the world of loopholes and exemptions to satisfy their own motivations.  I don’t have a good plan for this except perhaps my original idea of encouragement and reward for maturity by the Card Brands, which could potentially raise the interest of management.

      Thirdly, there will be the challenge of the inevitable responses: “well, it hasn’t happened to us”, “how real is that?”, “well, TJMAXX did just fine after their break-in” (which from a shareholder point of view is actually pretty accurate).  I love this one since I have faced it – especially a CEO who challenged me on TJMAXX’s stock price a year after the infamous break-in.  My response is calculated but always the same.

      “We are discussing a balance between risks which could impact your company (paying $250M in fines which no CEO wants to shell out, no matter how much his stock price might stabilize or grow) and functionality which your users need to grow the company.  If you will provide us with the support, we will design a set of controls that will achieve a balance between the two that we believe both sides will be happy with.”

      Please note: no effluence of FUD, no criticism of users’ desires or business goals.  Just a confident statement that there is a need to balance the two demands, and to come up with a solution that works for both.  Now you just have to deliver on that promise…

      Posted in CISO, CSO, Information Security, IT Risk Management, PCI | Leave a comment

      Moving Beyond Compliance – Commentary on PCI-Hug-It-Out

      I finally got around to listening to the Tripwire sponsored, Martin McKeay and Gene Kim hosted PCI Hug It Out with Josh Corman and Mike Dahn.  If you haven’t heard it, you should.  Two very smart people (well four actually, but two we are focused on) talking about PCI and the challenges it faces as a standard.

      There was a great theme that came up in Josh Corman’s section, and I heard something that I believe in strongly – moving beyond compliance, and my favorite question: how do you move beyond it?  The discussion goes into how compliance is a stick (or the fines associated with it are), and asks what could be the carrot.

      I’ve got some ideas…

      First it requires a change from the Payment Card Brands (and maybe the PCI-SSC).  It requires that they think the carrot model can work, and they are willing to come along for the ride.  They must be willing to give rewards when companies exercise maturity, evolution, and creativity that exceeds expectations.  Maybe make the reward an additional reduction in transaction fees for the increased reduction in fraud costs.  These are rewards that a company will look for – and I recall back in 2007 the Card Brands decided to extend rewards to companies who complied.  Maybe it is time to revisit this strategy.

      Second, it requires a different model for the standard; a model that is based on control objectives (as Josh and Gene discuss).  The objectives need to show the business (not just IT or Information Security) why they are important.  This model would need to be forged out of the PCI SSC and its various working groups so that the reasons behind their choice of controls are clear.  It needs to tell companies “WHY” they need to do these things, and not just in FUD terminology.  The objectives need to be put into business terms – which is far beyond what “Navigating the PCI-DSS” does now.  I would point to HIPAA as an example.  One of my favorite controls is the requirement that medical records must be capable of being restored in 24 hours.  I point out to my customers that this is about the timely treatment of a patient and the availability of their records to the doctor who must treat them.  An IT problem put into business terms.

      In terms of PCI, this information is lacking.  I just need to reflect upon the times customers have asked me about a strategy for a “mitigating control” to recognize the value in this.  In those cases my best answer came from analyzing the DSS, finding out what the objective was, and determining whether it was being achieved with the customer’s mitigating control.  Not all the controls are so easy to understand or abstract, and I have struggled in several cases to give an answer with a straight face.  (Why do we need to label every device in an organization with a contact and its purpose?  I’m not etching my iPhone’s purpose on its already fragile case and antenna!)  The process of abstracting the current DSS controls is not an overly difficult exercise, except that it tends to highlight the not-so-bright choices in controls, and biased choices not based on empirical facts.  It also tends to highlight when certain parties have ulterior motives (such as vendors pushing their solutions).

      If Josh, Mike, Gene or Martin would like to find good candidates to do this work, they need not look any further than the auditors and assessors who work in SOX every year.  They have had to struggle through this work for at least seven years now and have gotten pretty good at building control objectives and the logic behind them.  I believe Gene has also published a paper on a framework for this that is intriguing and should get everyone’s support.

      Third, I would recommend that success in compliance not just be achieved by creating controls to meet an objective but by sustaining maturity of the controls put in place to achieve that objective.  Imagine if you will that the Payment Card Brands awarded lower per-transaction-fees to companies that demonstrated a higher level of maturity in their controls.  Maturity meaning a process of continuous improvement – based on CMMI, OCTAVE, and any other highly relevant process of improvement that is seen as useful.  Now we get at what I think the Hug-it-Out was talking about when they said “It’s like raising children.”  Give them rewards when they grow up and mature the right way.

      Fourth, it will require greater maturity on the part of the QSAs.  Forgive my rant on this point, but in order to effectively evaluate elements of security posture such as maturity and the achievement of control objectives, you need a significant amount of maturity from the assessors.  This comes from twenty-two years in the industry watching my clients suffer through the opinions (or lack of opinions) of auditors and assessors for whom the ink has barely dried on their diplomas.  I fear the QSA market is too immature to sustain a solid model of maturity.  I wish (hope?) this were different.

      With these four steps, you have made the concepts you want companies to comply with much clearer.  You have given them incentives (carrots) to lead them along, and made the process of reaching those carrots one that requires maturing, learning and growth.  You have asked them to think about their business, think about their risks and goals, and start to include them in their business planning.  Companies need to recognize that PCI-DSS compliance is not an IT or Information Security challenge; it is a challenge for a company’s processes of Revenue Recognition, Fraud/Loss Prevention, and Company Image/Reputation.  Now we are talking in terms that the C-levels will understand.

      If you don’t include this, all the data and evidence about what works and what doesn’t will still be meaningless.  It has to have meaning for companies who fall under PCI Compliance, and it has to align with their goals.  This is why I feel, like Josh, that a risk based approach would be so much more effective.

      Josh, Mike, Gene, Martin…thanks for this podcast, and please keep this movement going.  I’m game for it!

      Posted in Information Security, InfoSec, PCI | 1 Comment