BSides San Francisco Presentation

So I did a little talk at BSides San Francisco 2012.  It's a prequel to my book “So You Want to Be the CSO…”  The talk was recorded, so you can view it at your leisure.  Just pity the poor guy in the front row who I accused of being “sexy”.

The recording is available via BrightTALK.

Posted in CISO, CSO, Information Security Governance, IT Risk Management, Security Governance | Leave a comment

#SecBiz or The Better Answer to Martin’s Question

I had the good fortune of a long drive (12 hours to be exact) which allowed me time to catch up on four months of backlogged Martin McKeay Network Security Podcasts.  My fortune improved when I listened to the June 7th, 2011 edition.  I hadn't known about the #SecBiz thread on Twitter, and I am sorry I missed it when it started.  The discussion on the podcast was fantastic.  The identification of the issues, the perspectives offered, the ideas on distribution of duties, and the consensus everyone shared about the need were spot on.  The stories of employees having to work in every part of an organization are excellent, and a great insight.  A well-placed CEO I know of did the same his first month after being hired and created a significant level of trust across the organization.

If you haven't heard the podcast, please do so.  It is all excellent.  Well, except for the last 18:27, after Martin asks the question: “What can we do…”  To me, the answers at that point fell flat and missed an opportunity.  So many great ideas that helped bridge the gap were offered before the question that the chance to expand on them was missed.  So I've decided to provide some answers, and make up for my lost time on the #SecBiz discussion.  This blog post will be a bit fractured and piecemeal, but the intent should come through.  The thoughts are all part of lectures I've given since Shakacon in 2007, ongoing research, and a book I'm writing based on my research and case study collection.

First I’d like to point out something I think is very important to the discussion.  Years ago a wise CIO taught me to avoid the great mistake of referring to the non-IT portion of the company as “The Business”.  IT and InfoSec are part of the Business, and together with the other parts of a business create solutions and better the organization as a whole.  Referring to “The Business” separate from IT perpetuates the “Them” vs. “Us” we are trying to overcome.  Create new language, since our language is a reflection of our thoughts and intentions.  Let us re-arrange our intention and build the first link between ourselves and the other parts of the business in our mind.

The Goal

The goal that the #SecBiz thread aims for is mutual appreciation between InfoSec and the rest of the business.  The goal is noble; however, too often we look at it in InfoSec or technical terms.  The answers to Martin's question highlighted this for me.  The answers talked about how to structure InfoSec, how technical knowledge is key, and how teams need to take responsibility.

But the business will never understand the depth of technical issues in InfoSec, just as we will never understand the intricacies of finance and accounting.  We both can communicate high-level concepts, but the technical details are why we have “specialties”.  Generalists who can also dive deep are rare.  We must stop trying to make everyone outside InfoSec an expert.  The answer we need to focus on instead is how to build collaboration and a common base of understanding regarding our goals and our priorities.  To do this we need to think more deeply about psychology, or in my favorite parlance, organizational psychology.

Understanding Motivation and Perspective

Each of us has a motivation – things that we value and strive towards to achieve our goals.  These goals include the things we value, the objectives we want to achieve (both long-term and short-term), as well as the ways we act to support these values.  Every business group (which includes IT) has numerous individuals working in it who have their own motivations and values. There are often commonalities – values such as recognition and significance, certainty, and personal connection – but with individual variations in priority and manifestation.  A CFO and the finance group are, from a business perspective, focused on the goal of ensuring the financials are accurate and timely, and assist in the objective of maintaining profitability through the appropriate management of monies in all forms.  There are also personal motivations layered on top of this, such as being recognized for one's work and maintaining personal relationships.

This might seem tangential, but I assure you it is not.  If the InfoSec group comes along and tells the finance group that they cannot implement software that in the eye of the CFO and the finance group helps them achieve their goals faster, better and with the potential for them to be recognized for improvements in their group, how do you think it will go over?  Think.  You just told a group that they cannot pursue things they value.  Their value is based on their perspective through their motivation.  They do not see your perspective because it is not part of their goals or values.

Until we understand the motivations, goals and values of various groups within our businesses, we cannot accurately address security in those groups.  We must apply security with their motivations in mind.  If we derail their motivations, we will fail.  If we align with their motivations or show how our goals and values align with their motivations, we will create wins, and the understanding we are looking for.

[These ideas have been discussed in academic circles through Maslow's theories, the work of Chris Argyris, and cognitive psychology, and adapted in more contemporary discussions of motivation in the business and personal development work of Stephen Covey, Jack Canfield and Tony Robbins.]

Building Collaboration – Towards Empathy

I have long held that collaboration is the method to creating buy-in and understanding and I suspect few would disagree.  My definition of collaboration is bi-directional actions and behaviors that include honest communication, active listening, and empathy.  The latter is what I consider the critical end-game you need to achieve.  I do not advocate outright sympathy, but rather an understanding and appreciation for another person’s thoughts, concerns, challenges, and ultimately their motivation.  From the above conversation, understanding a person’s or group’s motivation allows us to align or at least discuss issues in relation to their motivations.

Collaboration is not built by re-inventing how we shuffle InfoSec groups about, but by building new paths and methods of communication.  Achieving this requires that we in InfoSec be willing to learn and lead in building these new paths and methods.  Either side can initiate and lead this effort, but since we are the ones raising the issue, and the ones calling out for greater recognition, let us take the lead in building that bridge.  Let us model the methods so we can all benefit.

Modeling collaboration is first achieved by reaching out to open lines of communication.  The techniques to achieve this include asking questions first rather than trying to “tell” someone things.  Ask to understand, because it allows the other party to feel listened to, and you to understand their frame of reference.  We all value being listened to.  Be the bigger person and listen to those outside of IT and InfoSec so you can understand their business, their fears, their needs and their motivation.

The second step in opening lines of communication is active listening: being able to restate what the other party has said to demonstrate your understanding of it.  This builds respect, as the other party feels even more strongly that you are attempting to understand them.

The third step is active and sincere empathy.  Empathy is the ability to understand and comprehend the other party's view, values and justifications for what they do.  You can understand their frame of reference.  Do not abuse this understanding, since doing so can dismantle and shatter the trust you have built with the other party.

Lastly, use the knowledge you have gained to relate your position and view to their view of the world, their goals and their motivations.  When you have tied your objectives to their motivations, you have created the foundation for collaboration.  They now see the value in understanding your goals since it aligns to their goals.  Your goals are being achieved because they are aligned to the other party’s goals.  We call this a win-win.  Both sides get their needs met.

Some of the ideas that have come about in my case studies:

Business Impact Assessments: Dragging the Information Security team around to do Business Impact Assessments with each of the groups within the business – sales, accounting, logistics…  The questions asked were: “What is the most important process in your group?”, “What keeps you up at night?”, “What processes or systems would cause you the most impact if they were to fail?”  The result was a very personal discussion about what each group cared about, what their priorities were, and what they wanted attention given to.  By doing this under the guise of a BIA, we were able to better understand what each group cared about and what was most valuable to them.  We were also able to understand in great detail the operational processes of the organization.  Think of it as a business mapping or process flow exercise.  We listened, we described what we heard to ensure we heard it correctly, and we made sure we identified their biggest processes and biggest values.  The result was much more than just knowledge of our business.  It built camaraderie.  The business groups felt we cared about them because we listened and showed empathy for their needs and goals.  Now when we discussed security we had two things working in our favor – knowledge of the entire business that we could use in determining risks and where to apply useful controls, and an audience who felt respected and felt it acceptable to show us respect.

Security or Risk Council: An internal “governance” group, not unlike an IT governance structure that reviews business and IT objectives and budget to make sure IT aligns with the priorities and objectives of the entire organization.  The council is made up of leadership from all business groups, who are free to share their concerns about security and risk management.  Monthly meetings are held, and all domains of Information Security are discussed, but with a focus first on areas outside of the IT and Information Security groups (such as HR background checks, concerns about fraud and loss in distribution, safety of workers in the workplace…).  By first making the council about their security concerns, the participants felt it was a collaborative effort and their views were valued.  This example worked well in several companies.

Risk Management and Business Process Discovery: Businesses understand risk management.  Banks and insurance companies, for obvious reasons, prove to be particularly adept at risk management and treat process evaluation as valuable and integral to the organization.  While listening to Edition 10 of the Risk Hose Podcast I re-discovered the concept of risk management – in a process-oriented sense – reflecting the ideas I discussed above.  Risk management teams explore the business processes with the business, understand them, evaluate the risk, and decide with the business what to focus on.  An InfoSec team undertaking a business process discovery can likewise come to understand the business.  By framing the analysis in risk management terms, you increase the likelihood that the other areas of the business will relate to the findings.

Distributing Responsibility for Security: One of the conversations in the Podcast revolved around Security Operations.  I’m going to go down this rabbit hole even though on many levels it’s not a direct #SecBiz discussion.  It can however serve as a model of how to collaborate on security.

I prefer to demarcate Security Operations into two groups:

a) the acts of providing preventative security functions such as Anti-Virus, Patching, Firewalls, and System Configuration (for security).

b) the acts of providing detective security functions such as Security Incident and Event Monitoring, Unauthorized System and File Changes, and validation of controls (such as reviewing system configuration standards or firewall rules for approval).  I also sometimes refer to these as segregation-of-duty functions, since they are checks against potential inappropriate activities and control failures.

I divide this way because I prefer to assign responsibility for the preventative functions to the administrative groups who are usually tied to the systems and devices (e.g. configuration standards and patching as the responsibility of each system group, firewalls as network devices, etc.).  This takes security from being an InfoSec-only function and makes it part of the job description for groups outside of InfoSec.  They become accountable for security, and it begins to be part of their culture and their thoughts.  Holding them accountable is the second part – the detective controls that are assigned to an InfoSec group.  The outcome of these role designations is conversations about security that spread wider than just the InfoSec group, and control designs that are collaborated on.

What does Collaboration Achieve?

I conducted a survey in the summer of 2007.  Over 100 companies responded, and while the survey was highly unscientific, the results were clear.  The survey asked what the perceived acceptance of the company's Information Security Policies was, and what parts of the business were involved in creating those policies.  Unsurprisingly, of the organizations who said they developed their InfoSec policies with the business, 80% said their policies were well accepted, and the remaining 20% felt the policies were accepted and challenged, but not outright rejected.  Of the organizations who developed their policies just within IT or the InfoSec group, only 36% felt their policies were well accepted.

The Quote

I’m going to leave you with two quotes since they both contribute some insight:

Chris Hayes: “We have to accept that it's not our risk tolerance that matters as risk practitioners or security professionals.  It's the person accountable for the risk at the end of the day. And until you overcome that you're almost a barrier to what you're trying to achieve.”

We have to work with the business to get them to understand the risk, and design with it (for better solutions).  In order to do this we need to understand what the business is about in the first place.  And then we need to demonstrate we understand it, with empathy for their motivation.

Ultimately InfoSec is juggling risk and business goals, or as @shitmyCSOsays quoted: “Security is about eliminating risk.  Business is about taking risk to make money.  See how they are a perfect match?”

Posted in CISO, CSO, Information Security Governance, InfoSec Governance, IT Risk Management, Security Governance | 1 Comment

Do you have SOCD? (Security Obsessive Compulsive Disorder)

Do you have SOCD?

You have it if:

  • You feel the constant need to force drastic security measures.
  • You say: “This company really needs to revise all the (SOX) controls.  There’s absolutely no reason to have management involved in the process.”
  • You threaten “We need to just block everything and then open up stuff when something breaks.”
  • You believe that technology can solve all security problems.
  • You use multiple biometrics or RSA tokens to access your blog.

Look at this statement:

“Security is about eliminating risk.  Business is about taking risk to make money.  See how they are a perfect match?” – @sh*tmycsosays

Which sentence do you examine and have the greatest curiosity about?  Which sentence makes you roll your eyes?

Security Obsessive Compulsive Disorder is an obsession with imposing security in the face of competing requirements for accessibility to the asset you are trying to protect.  In simple English, you won’t let anyone near anything despite other people needing it.

Now, what are your real desires?

Deep down do you really want to be appreciated? (Probably yes.)
Do you wish someone in the company would listen to you?
Do you wish people stopped avoiding looking you in the eye when you pass them in the hallway?
Do you wish you were invited to the big meeting when the new project design was being discussed?

Then I would recommend some treatments.  Don’t worry, I promise to not make you lie down on a couch and tell me about your sordid relationship with your RSA key fob, or late night googling of awk and sed scripts.  Promise.

A. Deep Breathing Exercise

1) Giving attention fully to your stomach, slowly draw in two deep breaths.  As you inhale, allow the air to gently push your belly out.  As you exhale, consciously relax your belly so that it feels soft.  If it already feels soft, that’s okay too.  Too much time staring at EnVision consoles will do that.

2) On the third breath, bring to your mind's eye an image of a user with good intentions and a desire to just do a good job for their boss.  Imagine their happiness when they receive their bonus for having completed their project on time, or for becoming more efficient in their job.

3) Take a fourth breath, and imagine the CEO of the company talking to the board of directors about how the money they invested in the company is producing profits because everyone could do their job, efficiency was up, and the new products could be released on time.

Now close your eyes and imagine what you can do to make these two people happier, more successful.  Think of what things will protect their goals of getting that bonus, or satisfying the investors who have made this company possible.  Remember that security can be part of this equation, but you have to consider their happiness too.

B. Unenforceable Rules

If you are still struggling, I'd like you to think of something Frederic Luskin calls Unenforceable Rules.  Unenforceable rules are rules that we might currently expect others to adhere to, but which aren't really in our control; we do not have the power to “make them right”.  Are the rules about security you think are necessary really unenforceable?  Let me counter the question with another question: how many of your rules have been implemented?  How many have not met significant resistance?  You might ask whether that means there aren't any rules that others will share.  There will be, trust me, but let me share a little secret.  No security expert ever shares the same rules about security with everyone in their company.  Even the best and most respected CSO will find disagreement on tactics or rules they may think are perfect.  The difference is their ability to recognize that they are unenforceable.

Think then about what your hope is – your goal, your real focus for what you are trying to achieve.  Then look at the rules you want to enforce.  Do you think someone might object to them? (Notice I don’t say they are wrong, just that someone else might not share them.)  Now think about why they might not agree.  What might their objectives be?  What might their goals or focus be?  How do the unenforceable rules violate their goals?

Now you will likely find yourself much more able to understand their goals.  Now you will find yourself able to design new rules – rules and associated actions that users and that CEO will find appealing because they support their goals too.  These new rules and actions can achieve security goals without requiring SOCD.  Recognize that you still may not be able to exercise the level of control or security you wished for, but you likely will have made an impact that you otherwise would not have if you had held to your unenforceable rules.

Credit to Frederic Luskin, with absolutely no malicious intention to parody his incredible work.

Posted in CISO, CSO, Information Security, Information Security Governance, InfoSec, InfoSec Governance, IT Risk Management, Security Governance, Uncategorized | Leave a comment

Mentoring Outside the Echo Chamber

I have been incensed by certain “pundit” activities witnessed in a recent encounter, one that unfortunately mirrors the frustration I felt 20 years ago at the actions of certain academics where I once taught.  The actions to which I refer?

  • Sweeping generalizations
  • Nihilistic critiques
  • An unwillingness to offer or model a solution

Let me give you my recent trigger:

A small company's security team had announced to a shocked management that they wished to stop using firewalls and desktop anti-virus because they were ineffective. Probing questions led to a recent encounter this small security team had with a pundit who professed that these tools were ineffective and that new times needed new tools.

Now I'm going to carefully choose my fight here.  My issue is with the advice, which was presented in an abstract vacuum, devoid of situational awareness and environment.  The pundit's goal of inciting thought and discourse through the abrasiveness of the comments unfortunately served this SMB poorly.  I do not wish to debate here whether firewalls or anti-virus are valuable, because there are too many variables to make that a meaningful discussion in a one-sided forum such as a blog.  Such a debate will depend upon what you are trying to achieve, the relative effectiveness of the specific vendor's technology employed, and the effectiveness and appropriateness of the implementation.  These many variables make the sweeping generalization that “Firewalls are ineffective” quite dangerous.

Yet, as this poor security team understood it, their “ancient” tools had zero value. A one-hour question and answer session with the security team (unfortunately in front of management) led to revelations that they had entered what I call a nihilistic vacuum. They had not considered what controls those tools were intended to provide, or what threats and risks were most relevant to their environment, and, not surprisingly, they had no strategy beyond the overly simplistic objective of “buying a new technology”.  There was no thought of how to address the openings left by their abolition of their only source of network access controls or detection of malicious software.  Their new-found idealism was directionless and without purpose.  This is far from productive, and in a small company, potentially devastating.

What ensued for the remaining two hours was an exercise in modeling how this security team should have reacted to the advice.

I first inflicted some pain by saying that yanking a tool, even one of limited effectiveness, is dangerous if no thoughtful examination is made of what is lost, what is gained, and what will fill the void.  What I did next was model a thought and design process for this team, examining the decision and how they could have approached it far more effectively.  Things we discussed:

a) what is valuable to protect here at this company?
b) what are the ways these things are used, handled, or stored?
c) what controls are in place to make sure they are used and protected appropriately?
d) which of these controls will you lose when you abolish the “ancient” technology?
e) what designs do you have in place to replace these controls?
f) what level of improved effectiveness and efficiency do you gain from this new design? (and how can you try to model it?)

I then showed them that “ineffective” or “ancient” rarely applies to control objectives (such as preventing inappropriate network access to systems, resources and data) without a much greater shifting of heaven and earth.  I taught them, in the hour I had left, that design is an act we must all undertake ourselves, not one to defer to some Pundit who lacks the awareness of your environment and goals needed to make the determination for you.

For those of you wondering about what incensed me 20 years ago: as a Teaching Assistant in two different architecture schools I watched professors launch into scathing reviews of students' work without a thought given to the student's or project's situational awareness. The critique was nihilistic, abstract, and linguistically incomprehensible. The student left with nothing new but tears (or a stiff upper lip). There was no growth from replacing the mistake with a new idea or process, no modeling by the professor of how what they said worked in reality (or a physical world). The student had to grasp at random straws to identify the faults in his demolished design (in one case, literally demolished). I railed against these monstrous outrages then, as I do now.

So all you Good and Bad Pundits, dig deep.  Think carefully about what you say, because many hang on your every word.  Your words have value, but they also need context.  Teach completely and give this context.  Be specific and explicit in your critiques.  And when you finish with your critique, show how to correct the issues, evaluate effectiveness and model how to find solutions.  Inside the context of the InfoSec Echo Chamber we attempt to incite each other to action, but we forget that those who are on the fringe do not always benefit from our battle scars and insights.

I issue this challenge to Pundits because you hold the mantle of leadership through the papers, lectures and conferences which proffer your ideas.  Those on the fringe also have the responsibility, but they are the naive, and look to you to overcome this naïveté.

Students, there is no utopia. If you find after you have listened to one of these Pundits you suffer a vacuous nihilism in your InfoSec soul, grab some ABBA, a bean bag chair, and sit down with someone who can explain what it all really means.  Unlike unicorns, these people really do exist.

If you need some thoughts about how to do this, I recommend reading Donald Schön's “The Reflective Practitioner” and Chris Argyris's “Theory in Practice” (as well as any of his books on direct, explicit feedback).

Posted in Uncategorized | Leave a comment

My Take Away Moment from BSidesSF

I won’t attempt to rehash the conference, except to say, if you have a chance to attend a BSides event, do so in great haste. Despite being free, they are worth every penny you could invest in visiting one.  What a great respite from the RSA Conference!

What I do want to cover is a very interesting panel at the end of the conference.  The panel included some great minds: Will Gragido, Josh Corman, Marc Eisenbarth, HD Moore, Dave Shackleford, Alexander Hutton, and Caleb Sima.  The subject was clearly of interest, since it drew quite the crowd: “State of the Scape: The Modern Threat Landscape and Our Ability to React Intelligently”

But what came out of the panel as a result of some “heckling” on the subjects of APT, Cloud Computing et al. was priceless (kind of like a MasterCard commercial).  It was not what I think the panel had planned or was expecting (but that's the fun of a panel, and of BSides).  If you are a budding CSO or Security Manager, take note:

  • Don’t make people security experts.  Make it easy for people.
  • Make security accessible and something that people care about.
  • Make it easier for programmers to program securely than it is to program insecurely (Microsoft's .Net work was offered as an example).
  • Get out of the echo chamber where we only talk about security in obscure terms and treat everything as unique and terrifying.  People need it to be accessible and simple.

Wow.  This echoes stories I've told for years, and stories that have been popping up around the world as I've been traveling the last year:

    • At a conference I attended in the EU, the local CERT authority described a company that had spent millions of euros on top-of-the-line security technology, and yet it was all turned off.  It was turned off because users constantly found ways around it – it made their jobs too difficult, if not impossible, to perform.
    • As a traveler, do you enjoy the TSA security line?  Do you enjoy dumping all of your belongings into a plastic tray for the world to peruse, being subjected to numbing technology scans, and in the end a joyous pat-down?  Or would you prefer a simple process to ensure your flight is safe?
    • Is it easier to teach programmers to write code free of SQL injection flaws, or is it easier for Microsoft to write .Net functions that make it more difficult to make direct SQL calls, thus significantly reducing the probability of someone writing code that results in SQL injection vulnerabilities?  (P.S. Microsoft did the latter, hooray!)
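The .Net point generalizes to any language: when the library makes the safe path as easy as string concatenation, the secure way wins by default.  Here's a minimal sketch in Python – using the standard sqlite3 module purely as a stand-in for any data-access API, not as a depiction of Microsoft's actual mechanism:

```python
import sqlite3

# Illustration only: sqlite3 stands in for any data-access library.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name):
    # String concatenation: historically the "easy" way, and the source
    # of SQL injection -- the input becomes part of the SQL itself.
    return conn.execute(
        "SELECT secret FROM users WHERE name = '" + name + "'"
    ).fetchall()

def lookup_safe(name):
    # Parameterized query: just as easy to write, but the driver treats
    # the input strictly as data, never as SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(lookup_unsafe(payload))  # the classic injection: every secret leaks
print(lookup_safe(payload))    # no such user exists, so nothing returns
```

Both functions cost the developer the same keystrokes; the difference is that in the second, the driver – not the programmer – is responsible for keeping data out of the SQL.  That is security made easier than insecurity.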

Simplicity for all of us is the best way.  Simplicity that anyone can use, and that makes it easier for all of us to do things the right way rather than the wrong way.  And that does not necessarily mean making the hard way painful by imposing fines, penalties or punishments.

So as a Security Professional I would highly recommend you take the following actions in your strategy and tactics:

1. Make security invisible – it shouldn't get in anyone's way, or stop them from doing what they need to do to get their job done.  But it should be part of what they do.
2. Remind people of what they value – so they can protect that.  It may be the teenager's pictures and music, it may be the accounting department's numbers, it may be the salesperson's leads, or it may be the IT infrastructure.  Whatever it is, make sure the people who care about it are aware that you are trying to protect what they value.
3. Look for methods that make security easier for users than the lack of security.  Whether that is through technology that makes authentication easy (biometrics for execs?), programming libraries that are inherently secure, or making secure data handling easier than insecure handling.
4. Always give something back.  If you find that a security control you have to put in place has an impact, be ready to give something back to the users.  They will be more likely to comply if you can show that you care about their priorities (such as how they can get their job done successfully and efficiently).

Posted in CISO, CSO, Information Security, InfoSec | Leave a comment

Sophisticated Analysis of Risk Management is Critical…don't do Sophisticated Analysis Risk Management

There is a wonderful discussion occurring in SIRA (Society of Information Risk Analysts) these days. I missed the beginning of this group, and I regret it, because the messages coming out of the discussions are extremely insightful and critically important for anyone who is managing risks around Information Security, or any type of security for that matter. The discussion I want to hit on is one that I am sure is already a contentious debate within and without SIRA: Should I perform a risk analysis at my company? The subsequent questions are the source of much of the resistance: What model should I use? How do I measure the likelihood? Does impact include hard and soft costs? Do I need a PhD in statistics? Why does Excel always crash when I try to do big equations like this?

I can't answer why Excel is crashing, but I think the rest has an easier answer than we might think.

Let the Gurus do the Risk Modeling and Statistical Analysis:

The most substantial and accurate challenge to risk modeling in Information Security is that there is not enough data around probabilities, and as such, the quantitative rigor of our analyses declines rapidly. I would absolutely agree. Any insurance company will tell you that there is little, if any, actuarial data on Information Security. But the only way we are going to overcome this challenge is by collecting and analyzing that data. Let the experts do this work and collect the knowledge. Let them build the complex models, be the PhDs in statistics, and find better ways to analyze the data than Excel. Let this data become the source of the probabilities that we need.

      Look at the value we get from seeing what types of attacks are most frequent against Payment Card Data, the mix of sources of data breaches, the records stolen by type, or which vulnerabilities are most often exploited….I’ll calm down now.  The excellent work being done to analyze probabilities through current studies needs to be pushed forward. The showcase example has been the VzB breach studies, which have contributed significantly to our knowledge of what is really happening. I would love it even more if there were a clearing house for the statistics so we could merge the data of everyone who is jumping on board. Imagine the collective knowledge based on a myriad of views, experiences and organizational cultures. And let’s face it, data is useful. It validates what we see, it removes ambiguity, it allows us to correlate events and situations, and it even highlights differences and nuances that we don’t see. It has the capability to remove predisposed biases and correct a priori assumptions.

      Don’t Let the Data Rule You:

      However, statistics don’t tell the whole story. Let’s be honest about it. There are stories behind the statistics, not the other way around. Statistics will tell us a story about the data we feed them. They won’t tell us where the data came from, what factors affected the source of that data, or what the outcomes of that data were. We have to supply that information. Remember: data in = data out, or garbage in = garbage out. As we make use of the data, it is always important that we read the fine print (or big print, if they make it available) to understand the sources. The VzB breach reports have their biases: the 2010 report is potentially different from the 2008 or 2009 reports because of data input from the US Secret Service. Differences can emerge between a business collecting breach data and the US Government collecting breach data.

      Bias in the data will affect some of the outcomes. As an example, companies are probably more likely to use private security firms to investigate internal issues to avoid public disclosure and embarrassment, while the US Government resources will more likely be involved when the breach source is external, or the company feels their legal repercussions are minimized. These are the stories that we have to consider when we look at the analyses, and should be disclosed to make sure we use the data correctly.

      Use the Data Not The Math

      For you, the new IT Manager, the result of all of this data research is that you now have a set of probabilities that you can say are based on reality, and you know the biases of the sources and the resulting analysis. You can now take your finger out of the wind, put away your “8 Ball”, and use real data. It’s not perfect data (remember its story!), but it is far better than what I had when I started in this field 20 years ago. You do not need a PhD in statistics or mathematics. You do need to know how to read the outcome reports from the analysis (some reading skills are necessary). You do not need to build a complex Risk Management model. You do need to build a simple one. Your risks can be built on the field of possible threats using the data from the detailed analysis. Your vulnerabilities can be built from your known environment. And the probabilities can now have some teeth. Even if you don’t feel you can build a risk model (time, effort, Excel just won’t cooperate), you can always refer to the global models of probability and risk from the studies that have been done, which have been vetted, and which are based on extensive data.
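      A simple model of this kind can be sketched in a few lines. Everything below is hypothetical: the threat names, likelihoods, and impact figures are placeholders for numbers you would take from a vetted breach study and from your own environment, not values from any real report.

```python
# A minimal sketch of the "simple model" described above.
# The threat names, probabilities, and impact figures are HYPOTHETICAL
# placeholders; in practice you would pull likelihoods from a vetted
# breach study and impacts from your own estimates.

# Likelihood that a given threat appears in a breach (illustrative only)
threat_likelihood = {
    "stolen_credentials": 0.29,
    "sql_injection": 0.14,
    "lost_laptop": 0.07,
}

# Your known environment: does the exposure exist here, and what would a
# realized incident roughly cost (hard + soft costs, your estimate)?
environment = {
    "stolen_credentials": {"exposed": True,  "impact": 250_000},
    "sql_injection":      {"exposed": True,  "impact": 400_000},
    "lost_laptop":        {"exposed": False, "impact": 50_000},
}

def rank_risks(likelihood, env):
    """Score each threat as likelihood x impact, zeroed if not exposed."""
    scores = {
        name: likelihood[name] * info["impact"] if info["exposed"] else 0.0
        for name, info in env.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

for name, score in rank_risks(threat_likelihood, environment):
    print(f"{name}: {score:,.0f}")
```

      The point is not the arithmetic; it is that the likelihood column now comes from published data instead of a finger in the wind, so the ranking inherits the credibility (and the documented biases) of its sources.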

      Lastly, as I wrote in an earlier post, my biases have changed, all as a result of the data. I made a change in focus several years ago after reading the data gathered in Visible Ops. Now I am changing again, using the data from the breach reports from various (trustworthy) sources. I’ve changed my previous biases because the data told me to. The story for me is that now I can monitor threats, vulnerabilities and risks being realized, and identify what they are, their frequency, and their likelihood of occurring versus other threats, vulnerabilities and risks. I can focus my priorities…

      1) Let those who can analyze the data (and have the PhD’s in statistics) analyze the data

      2) Use the results of their work to simplify and increase the accuracy of your risk analysis

      Posted in Uncategorized | Leave a comment

      Handing Back Responsibility for Security

      There is a great lesson that unfolded at one of my customers’ sites during an audit.  It is a great story to tell, but more importantly, it lets me illustrate that as Security Professionals, we need to design security controls that are natural to the business.  I know, shocking isn’t it?  But it can be done…

      During an audit of a company’s security program the gentleman doing the audit asked for evidence of “…specific Security Testing…” in the development process.  The development manager responded, “We do testing, but not any specific Security Testing.  We do code reviews by someone who hasn’t written the code but is part of the same team so they understand the objectives and how it might impact other code.  We use the material we receive from annual training we have with our development tools vendor on how to write more secure and stable code.  We do data input and processing tests to make sure the system doesn’t break.  Then we test the functional specifications to make sure we met all the design specifications.”

      The auditor’s answer was, “That’s not specific Security Testing.”

      I stopped the auditor and asked him to tell me what “specific Security Testing” was.  His answer was, “It includes testing of the code, looking for security vulnerabilities, testing with tools that look for security problems, testing for error conditions or code failures that could result in the disclosure of data.  The testing you do here is Functional Testing.”

      So I asked a question of the Auditor:

      “What is the ideal objective that we, as Security Professionals would like to see when we look at application development?”

      When I got the same response back about what specific Security Testing is, I responded, “What if a software development program includes Security from design, through functional specification, through development and into testing?  Security is built into every aspect, and it is natural.  Is that not a better model?”  There was affirmative nodding.

      “Then is it not appropriate that a company include Security Testing in their existing testing methodologies and refer to it as Testing, rather than as specific Security Testing?”  At this point there was some silence on both sides.  I then prodded the development manager, who proceeded to discuss how Security was wrapped diligently into their design and functional specs, and how their input and processing testing included many of the elements of specific Security Testing that the assessor was looking for, but they never called it Security Testing.  It was called just Testing.

      Let us be honest about something.  Not every development team thinks this way.  I happen to have a few very brilliant managers at clients who think this way.  Hats off to them.  But our goal as Security Professionals is to get all of our clients to think this way!  Security should not be a standalone activity operated in isolation.  Security should be a natural part of what we do every day. To paraphrase many security professionals, if we naturally did the “secure” things we should do in the first place, we wouldn’t need much of the artificial layer of protection and tools we build.
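      What “just Testing” can look like in practice is worth making concrete. The sketch below uses a hypothetical username validator (the function, names, and test cases are all illustrative, not from the story above): the functional cases and the hostile cases live in the same table-driven test, so nothing is labeled a separate “Security Testing” phase.

```python
# Illustrative only: a hypothetical username validator whose ordinary test
# table treats security cases as just more rows of bad input.
import re

def normalize_username(raw: str) -> str:
    """Accept 3-20 chars of letters, digits, dot, underscore; else reject."""
    if not re.fullmatch(r"[A-Za-z0-9._]{3,20}", raw):
        raise ValueError("invalid username")
    return raw.lower()

# One test table: spec cases and hostile cases sit side by side.
GOOD = {"Alice.B": "alice.b", "bob_99": "bob_99"}
BAD = [
    "",                              # functional: too short
    "a" * 500,                       # abuse case: absurd length
    "bob'; DROP TABLE users;--",     # security: SQL injection string
    "<script>alert(1)</script>",     # security: script injection string
]

def test_normalize_username():
    for raw, expected in GOOD.items():
        assert normalize_username(raw) == expected
    for raw in BAD:
        try:
            normalize_username(raw)
        except ValueError:
            continue
        raise AssertionError(f"accepted bad input: {raw!r}")

test_normalize_username()
```

      The design choice is the point: when hostile inputs are ordinary entries in the bad-input table, the team maintains them with the same diligence as every other test case.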

      We must drag auditors, assessors, and every other critic away from their “déformation professionnelle” – their tendency to look at things through the lens of their profession and forget about the bigger picture or the real goal.  In the case of software development, most auditors think of the world after we decided (unilaterally) that developers can’t do it on their own, so we must put in place controls, tools and other activities to stop their bad code.  Instead the goal should be to create an environment where the developers do include security in their processes – at every step.

      I don’t argue against the tools used in Security Testing.  I just argue that keeping these tools and processes out of the developers’ hands tells them it is okay to write bad code.  You are implicitly telling them that it is someone else’s job to make sure it is secure.  What we as security professionals need to do is hand that responsibility back, give them the tools, give them the training, and assign penalty and blame when they do not take up the bit.

      The lesson from this little story?  Let me walk you down the garden path:

      a)      Security should be built in as a natural part of our existing business processes.  It becomes a cultural and behavioral change.

      b)      Security should be everyone’s responsibility, not one group in isolation.

      c)      We need to play the coaches, not the ringleaders.

      Being in the Information Security profession is a lot like being someone’s coach or trainer.  Your goal is not to run their business, or to swing the golf club.  Your goal is to adjust them so that they improve their performance and results.

      Posted in CISO, CSO, Information Security, InfoSec, IT Risk Management | Leave a comment

      Data Facts vs. My Bias…how I am losing (and why it’s good)

      I have to admit that as I listen to the sages on collecting data (Alex Hutton, Mike Dahn, Josh Corman…) I am getting more and more conscious of my own biases about security (guilty as charged!).  Ever since Alex’s post a few weeks ago, the whole concept has been rolling around in my mind.  While reading RSA’s Security for Business Innovation Council Report for Fall 2010 on the plane, I found myself questioning the risks and comments as I read them.  More importantly, I started realizing just how many biases of my own I suffer.  As I worked through control objectives for PCI I noticed that I was questioning certain PCI controls: “Is this based on real threats and attacks?”, “Is that really effective, or is it a legacy belief?”, “Aren’t there other ways to achieve the same objective?”

      I began to question the attack vectors and prescriptive controls that I have been taught to accept.  Josh Corman commented in the “Hug-it-Out” series that this can create some very unique opportunities for alternatives once we clearly understand what we are trying to protect against.  As an example, today I looked at the prescriptive PCI-DSS controls in Section 3 for encryption.  I don’t doubt the power of encryption, but I began to question what the controls are trying to achieve.  Think about the objective behind encryption.  I would argue that it is twofold (at least for the PCI Security Standard).  If you deconstruct and reverse engineer Section 3 of the PCI-DSS, I believe you find two ideas:

      (a) The need to ensure that access to Payment Card Data is strictly limited, and this control is ensured throughout the Payment Card Data’s lifecycle and on any medium where it might exist.

      (b) Providing a clear assurance that access to Payment Card Data is limited to a minimal number of approved/authorized users and cannot be bypassed through the use of privileged (think administrative) access.

      If you find issue with these objectives, suspend your disbelief for a moment and assume that they are accurate.  Now think hard about the objectives: are there other ways to achieve them besides encryption?  (Please send me ideas, since I’d love to hear them!)

      When I reflect on how I might have constructed this structure of preconceptions in my mind I see that I have in some way been co-opted by fear mongering, sensationalism, and directed focus by media and industry pundits on isolated incidents of security.  To be fair, it is not wholly their fault.  Prior to breach notification laws virtually no one (pundits included) had any awareness of what breaches had happened.  Most information was hearsay.  Even now with the breach laws we have little to no insight into the causes for the breaches.  That fortunately changed with the Verizon Breach Reports.

      Now with my biases hanging out for a flogging, I am ready to see the data.

      That being said, I would exhort the researchers and readers to carefully consider the following issues when they analyze the data.

      (1) Keep in mind that legacy attack vectors do not necessarily disappear.  Because the data is fairly new, it is less able to tell us which controls are still necessary even though the attacks they protect against might now be rarely seen in the wild.  Just as in the world of viruses or diseases, it is virtually impossible to completely eradicate attack vectors.  We perform de facto inoculations even though we rarely see the diseases, under the assumption that the inoculations are what continue to keep the threat in check.  The assumption is probably accurate, but if you looked purely at the statistics of a disease occurring, you could surmise that the control was no longer needed.

      We are all familiar with address spoofing, and we would probably be hard pressed to find an attack based on external address spoofing today, but that doesn’t mean we should stop “vaccinating” against it.  Or does it?
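      For the curious, the “vaccination” against external address spoofing is ingress filtering: drop packets arriving from the outside whose source addresses could only be internal or loopback. The address ranges below are the standard RFC 1918 and loopback blocks; the Python function itself is purely illustrative, since real deployments apply this at the border router or firewall (in the spirit of BCP 38), not in application code.

```python
# Illustrative sketch of ingress (anti-spoofing) filtering logic.
# Real deployments do this at the router/firewall; this just shows the check.
import ipaddress

# Source ranges that should never appear on packets arriving from outside:
# RFC 1918 private space plus loopback.
BOGON_SOURCES = [
    ipaddress.ip_network(n)
    for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16", "127.0.0.0/8")
]

def allow_ingress(src_ip: str) -> bool:
    """Return False for externally arriving packets with impossible sources."""
    src = ipaddress.ip_address(src_ip)
    return not any(src in net for net in BOGON_SOURCES)
```

      A packet from the open Internet claiming a 192.168.x.x source is spoofed by definition, which is why the control costs almost nothing to keep in place even when the attack is rare.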

      (2) The data will naturally have a bias towards new / evolving threats.  It would be wonderful if it could  include the context of older threats and attacks but that would require different sets of data than what some of the current research is providing.  The several years of Verizon Breach Reports have been quite helpful since the historical data has given us an evolution of threats and weaknesses, but even they have a limited history.  An option would be to correlate the data collected through the Breach Reports, VERIS, with analysis of attacks seen in the wild (successful *and* unsuccessful – think of sniffing, intrusion detection, and other attack reporting methods).

      (3) As certain controls become commonplace, the attacks they protect against will begin to fade, and breaches associated with the weaknesses will drop.  However, if a vulnerability reappears, or a control is set aside on the assumption that it is no longer relevant, attackers will rediscover it, as we have found.  All we have to do is examine the re-emergence of old vulnerabilities exploited by newer attacks.  Attackers aren’t sitting still, and they aren’t shy about visiting history for ideas.

      (4) Include other critical research on “effectiveness” outside the aspects of confidentiality (the current Verizon research focuses heavily on “breaches”, which I consider to be cases of failed confidentiality).  We should also consider the other two legs of our security stool: integrity and availability.  I am a huge fan of Gene Kim’s Visible Ops, and I’ve been using it over the last four years to promote controls that support effectiveness across all three legs of the security triad.  We need clear research that not only promotes security but also points out what other justifications we can have for the controls that bring us confidentiality.

      What I find most exciting is that as we challenge traditional models of what security is supposed to be, we will also define solutions that we can support with quantitative measures to prove that our actions help our companies (and customers too!) achieve better security.  And, near and dear to my heart, these solutions can incorporate ideas based on facts, real probabilities, and information we can present in clear quantitative measures that management can understand.

      Or to expand an analogy Alex Hutton shared on Twitter, when we can clearly show management that hiring Reggie Jackson to bat for me in October would be a good idea statistically, I can do so with confidence in facts, not just based on Reggie’s claims.

      Posted in Information Security, InfoSec, IT Risk Management, PCI | 3 Comments

      Sustainable Security by Showing Tangible Benefits

      I spent a large part of my involuntary layover in Atlanta last month thinking about PCI, Control Objectives and Maturity.  Sometimes interruptions like this are good: stepping back from our non-stop business lives for moments of thought is critical to our own personal growth, and to the growth of others as well (like our businesses).

      I found my thoughts continually returning to the chasm that exists between compliance and maturity.  Why do I call it a chasm?  Because companies still, to this day, shoot for “compliance” with the goal of avoiding penalties.  For InfoSec, compliance shouldn’t be the objective.  The real objective should be sustainable security and the tangible benefits it can bring.

      Sustainable security is when you have an effective, repeatable process or cycle of continuous improvement.  This is a concept borrowed from CMMI, wonderfully articulated by SEI in the OCTAVE model, and used by COBIT for measuring the effectiveness of controls.  There are various levels of maturity, starting at ignorance and moving up through ad-hoc controls, defined controls, managed controls, and continuous improvement.
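      The maturity ladder can be sketched as a simple ordered scale. The control areas and levels below are hypothetical examples, and averaging rungs is one simplistic way to summarize where a program sits; it is not a scoring method prescribed by CMMI, OCTAVE, or COBIT.

```python
# A sketch of the maturity ladder described above, using the five rungs the
# post names. The control areas and their assigned levels are hypothetical.
from enum import IntEnum
from statistics import mean

class Maturity(IntEnum):
    IGNORANCE = 0
    AD_HOC = 1                    # controls based on heroics
    DEFINED = 2
    MANAGED = 3
    CONTINUOUS_IMPROVEMENT = 4

# Example assessment of a (fictional) program
program = {
    "change_management": Maturity.MANAGED,
    "access_control": Maturity.DEFINED,
    "incident_response": Maturity.AD_HOC,
}

def program_maturity(areas):
    """Average rung across control areas: a crude single-number summary."""
    return mean(int(level) for level in areas.values())
```

      Even a crude number like this makes the compliance-versus-maturity gap visible: a program that passes an audit while sitting at AD_HOC in half its areas is “going through the motions”, not improving.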

      If we look at “compliance” we will typically find companies either at ad-hoc controls (ones based on heroics) or just at defined/managed controls.  In these situations companies are “going through the motions” to satisfy an external master.  These companies make passing an audit or assessment the end goal and then move on.  Continuous improvement is not in their plan.  “Just tell me what to do so I can do it, and get on with my job.”  Their view is that compliance is an impediment to their business – one more hurdle to jump over before moving on to other, more important things that they see as more directly beneficial to their business.

      Maturity comes when we move beyond “going through the motions” and actually monitor and measure the success of our program. A bank or an insurance company would never manage its financial risks the same way year after year.  They would evaluate their existing controls, and evaluate the external environment, threats and the variables which change constantly.  Risk management requires awareness of the effectiveness of our efforts measured against objectives, and evaluation of the objectives themselves.

      The same applies to Information Security.  An effective security risk management process evaluates the environment, assets and evolution of threats to choose appropriate controls, and then evaluates whether the selected controls are operating effectively.  These evaluations should be continuous and ongoing because the environment is ever changing.  We must perform both types of evaluation: of the controls we choose, and of how well they operate.

      So what is the challenge we have in moving from compliance to maturity?

      As a concept, sustainable security isn’t very attractive to many executives, and I can understand why – how does it benefit customers and the company bottom line?  If you take sustainable security at face value, the answer is, “Not much.”  On the surface it looks like a nice “process improvement” practice, but without any significant returns for the business.

      How do you answer this challenge?  How do you make a model of sustainable security and maturity meaningful?  The answer is in the facts.  Show these managers and executives the business risks, AND benefits of security controls.  Use quantitative research (for example, Visible Ops by Gene Kim) that shows the specific benefits of specific controls.  Put those benefits into terms they can understand from their tower of denial…

      (a)    Managing, controlling, and creating awareness around changes to systems and programs in your environment is proven to create a more stable and predictable working environment for your employees.  Users are prepared for changes and are more quickly able to take advantage of the benefits the changes offer them.

      (b)   Appropriately testing new systems and programs before putting them into production results in higher customer satisfaction, as customers and users have more positive (and fewer negative) experiences with the systems and programs.  Happy customers are the result of systems that work properly and are available when they are needed.  Testing ensures that this is the case.

      (c)    Building and maintaining systems in a consistent manner through standards has been proven to create a more stable and predictable environment where problems are more easily detected and fixed.  This results in higher availability for the tools that your customers and employees need to help you create revenue and customer satisfaction.

      I use these examples because they are the subject of great studies and I can pull out the quantitative data to support them.  We will always need more research on what works and what doesn’t.  More importantly, we need to be ready to convert this research into messages that make security meaningful to executives.  Once the company understands that the benefit is much greater than just a check box or risk management, they will move faster towards the goal.  Our challenge is to take our research beyond just “what got broken into” and into “what creates tangible benefits for a company”.

      Posted in Information Security, Information Security Governance, InfoSec, InfoSec Governance, Security Governance | Leave a comment

      They Just Don’t Get It

      “They just don’t get security!”

      As InfoSec professionals we often curse our management, our users or our customers (or all three) because they have done something “stupid” which either creates or nearly creates a security incident.  We howl, we complain, and wish users would just “wake up and learn!”

      I think we are all wrong – yes, the InfoSec professionals are wrong, management is wrong, users are wrong and our customers are wrong.  Why?  We all don’t get security.  There are a few exceptions, but as soon as we bemoan our users, management or customers, we are just as guilty of ignorance as they are.

      “Okay, now you’re off the deep end!”

      Let me share a comment I heard on a panel where we were meeting with the media.  One of the panelists said, “I know a bank which has put in state-of-the-art security, and some of the best controls.  But they are all turned off because the users won’t use them and just go around them.”  We have all heard this story before, and usually we find ourselves saying “They just don’t get security!”

      The problem is not with the users.  It is with the InfoSec professional who thought that the best, state-of-the-art tools were appropriate even though they inhibit the ability of users to do their jobs or act in a productive manner.  How can users be expected to respect, learn about and engage with security tools when we as InfoSec professionals so often fail to learn about and engage with the other business units in our companies, and to understand what they must do to be successful?  Let me give you a list of questions; ask yourself whether you can answer them without making a phone call:

      1)      What is the most important function or process in each business group?

      2)      What function or business process in each business group generates the greatest revenue?

      3)      What efficiencies in each business group could create the biggest savings?

      4)      What processes in each business group are the most time consuming?

      5)      What business risks keep the managers in each business group “awake at night”?

      6)      How does knowledge and information flow through the company?

      As I have mentioned in lectures and blogs before, I have walked into companies where the InfoSec group has no idea what the business does, or refuses to talk to other business groups about their needs, their views, and their operations.  One company even insisted that their Business Continuity Plan did not need to include anyone outside of IT since, “We know all of it anyway.”

      If you as a CSO want to promote security tools and controls you had better understand the business and be able to talk about their problems.  You had better have your team ready to design and select security tools and controls that enable the critical processes, increase efficiency, reduce time to perform a job and increase revenues (or customer satisfaction).  If you can’t do that, then you will fail.  And don’t be surprised when the company looks at the security group and says, “They just don’t get business.”

      Posted in Information Security, InfoSec | Leave a comment