Loving the John In All of Us

I found myself in one of my least favorite moments a few weeks ago.  I was having a discussion about the build-out of a new environment.  Someone brought up the subject of how people should access the environment, and I started laying out my vision.  It included several specific and significantly restrictive controls and requirements.  I got through half of my list before the most senior person in the room jumped up and said they were unreasonable.  I almost had a knee-jerk reaction of defending them with a “You must do this to be secure!”, but stopped myself as I realized I had fallen into a trap I so often preach against.

What I had done was bury my head in the sand of a regulation, a checklist of requirements, and let myself preach from what I thought security was rather than trying to find out what the business or the operational environment needed.  I was wrong.  Dead wrong.

Finger To The Forehead

I had the great fortune to be invited by Gene Kim to read early drafts of his book “The Phoenix Project”.  It is the story of one company’s attempt to overcome its obstacles and survive.  One of the characters in the book is named John.  He is in charge of Information Security at the company.  He carries a binder of controls, and is continuously focused on security because he needs to save the company from its security failures.  Except it isn’t security the company is struggling with – it is struggling with its own business and operational survival.  John, however, is not attuned to this.  He is focused on a checklist of requirements that are completely tangential to the company’s needs.  John has his own climactic scene where the antagonist of the story finally beats down John’s character with a finger to the forehead and a stern lecture that he had better find out what’s important to the company and get out of the way.  I laughed hard as I read this scene.  I laughed because I can think of all the times I deserved that finger in the forehead.  If you can’t think of the times you deserved that finger in your forehead, you are deluding yourself.

Why Do We Act This Way

There is probably a litany of reasons why we tend to operate this way.  The one reason that always seems to make the most sense to me is the simple constraint in our ability to operate outside of what we know.  We use the skills and knowledge (cognitive domain, awareness, call it what you will) we know best – what we have been schooled in, read and heard.  I have been, like many of us in information security, fed lists of controls, told that things had to be a certain way, and that breaches, like burglary or murder, carried huge consequences.  I was taught responses to situations from the perspective of security – a professional deference – because that was my job and task.

And we are not alone.  Others do the same within their profession.  There are people in marketing who only see the world through a marketing perspective; or sales; or finance; and the list goes on.  Even our own children see the world from the limits of what they know and what they’ve been taught.  If we all knew the bigger picture we likely wouldn’t have the embarrassing stories from our high school and college years, or use the phrase “If I only knew then what I know now.”  We all have a bit of John in us – even when we consider ourselves enlightened.

Learn To Embrace the John in All of Us

We all have our constraints, so the best way to overcome them is to first accept that we have them.  Acknowledge them.  Admit that many of the things that we discuss, propose, and recommend to people come from our own perspective on the problem.  Suddenly the problem has multiple angles from which it can be viewed.  You may not be able to see all of them, but you certainly can ask someone else to tell you how it looks from their angle.

Ask questions.  One of the first things I do when I find myself in the situation of being dead wrong is to set aside all my security concerns, suspend my preconceptions, pretend to be a complete outsider, and ask what is important to the business – what the real business goals and objectives are.  Things like how it creates revenue, how it helps the company, and what would happen if it were to stop working.  The perspective is suddenly very different than when I look at it as a security person.

Then, I take one of my favorite steps.  I create a solution that focuses on achieving the business goal, and that gives back just as much as it takes away.  I have a rule with my teams, “For every control you put in place, you must give something back to the people affected by the control.”  This creates some shock, some amusement, and then very puzzled looks.  Several people have asked me why I do this.  Some have resisted the rule, but I rarely waver.  This rule forces my teams to focus on and understand the impact of what they are doing when they put controls, policies, rules or anything else in place that is restrictive.  And then it forces them to think of how they can make it less restrictive, or provide some benefit that is in line with the original business objectives and goals.  It makes them understand what the affected people need to do their jobs better and what really matters to them.  You also create some raving fans when they realize you understand their needs.

And lastly, and most importantly, recognize the Johns in all of us – in everyone around you.  Encourage them to do the same as you – to learn to accept their inner John, to explore and ask questions, and to look from different perspectives.  As role models we can develop the patterns in others and they will begin to mirror our behavior.  Poke people in the forehead once in a while, and remind them to learn what is really important, and listen a little better.

The Quantum Vulnerability Tunneling Effect

I know I had promised to talk about how to implement a risk management program in your small organization, but bear with me for a blog (or two).  Given that my brain has been wrapping itself carefully around risk management for the last few weeks, I have found myself revisiting ideas from my past.  One particular incident this week reminded me of a subject that I’ve talked and written about before.

One of the individuals on my client’s InfoSec team is responsible for vulnerability scanning and management.  He’s quite talented, has good insight on the vulnerabilities, but like many others in InfoSec, he suffers from the blinding effects of Quantum Vulnerability Tunneling.

“The What?” you ask.

Yes, you heard me, Quantum Vulnerability Tunneling Effect.  For those of you not familiar with physics, this is akin to a process whereby a particle can bypass barriers that it should not normally be able to surmount.  So what does that have to do with vulnerabilities?

The barrier we place to separate the vulnerabilities we will address from those we will accept is typically an arbitrary line that says “We’ll address fives and fours, but we’re going to let threes, twos and ones go for now.”  This is our barrier, and heaven help the vulnerability that thinks it is going to make its way over that line.  Except….

Did you ever do a vulnerability scan, read through the findings, and find yourself stopping on one vulnerability in particular?  You see it and the thought runs through your head, “Oh, Scheiße!”  Suddenly the world around you stops and you focus on the vulnerability.  You know how it can be exploited.  You’ve read about it in magazines, and you’ve even done some of the necessary tricks yourself in a lab using your kit of tools.  In this case the individual at my client’s site had found a vulnerability that had been classified by the vulnerability scanner as just below the event horizon of “critical vulnerabilities”.

He saw this and upon looking at it had his “Oh, Scheiße!” moment.  He went to his manager and presented his case for why this vulnerability should be remediated.  Immediately.  He proceeded in a very animated fashion to demonstrate with his hands and his words how this vulnerability could be exploited and how dangerous it was.  His manager had some good replies to his demand, but the individual walked away unsatisfied – probably because the replies spoke to business impact and other metrics that did not have meaning to a vulnerability guru.  When all you have is a vulnerability scanner, everything looks like a…

So I sat him down and had a little chat so he could consider the same answer from a different perspective.  I didn’t focus on the impact to the business operations since I saw that it was not clicking for him.  What I did was ask him to do a risk assessment of the vulnerability with me:

I asked, “What is the population of threat actors?”  We had already agreed within the group that we would classify threat actors into loose groupings of individuals.  We agreed on classifications of Universe/Internet, Company Internal, (specific) Department, Local Machine Users, Administrators, and No One.  He replied that it was *anyone* Internal (said with animation).

I asked him, “What is the level of difficulty of exploiting this vulnerability, keeping in mind the commonly known mitigating controls in our environment?”  He commented that it was a module in Metasploit.  Ah, so it was below HD Moore’s Line.  I asked him how certain simple controls we had in place would mitigate it.  His reply: they would make it pretty difficult but not impossible, and the technique had been documented.  So we agreed to put it right at HD Moore’s Line.  (We haven’t really classified difficulty qualitatively yet – we’re still working on that definition – but HD Moore’s Line is the start.)

I asked, “What is the frequency of attempts to exploit this vulnerability?”  We use attempts since there is rarely good data on actual breach counts, but with a good honey-pot we’ve found we can estimate the frequency of attempts pretty well.  I’m really warming up to the importance of a honey-pot in a company’s environment.  The data you can collect!  And it makes frequency something you can lump into categories.  In this case we didn’t have any data at all since no one would set up an internal honey-pot, so we deferred to Threat Actors as a reference point.

I asked, “What is the value of the assets that are vulnerable?”  The individual responded, “All things on the computers!”  I whittled him down to some tangible types of data.

We merged all of his answers into a sentence that he could say.
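
To make the comparison mechanical rather than emotional, here is a minimal sketch, in Python, of what those categorical answers can look like once they are written down.  The category names, their ordering, and the two example findings are hypothetical illustrations rather than the client’s actual scheme; the point is only that once every finding is expressed on the same coarse scales, findings can be ranked against one another.

```python
# A hypothetical sketch of the categorical scales used in the conversation above.
# The category names, their ordering, and the example findings are invented for
# illustration; they are not the client's actual scheme.

# Threat actor population, ordered from smallest to largest.
ACTOR_POPULATION = ["No One", "Administrators", "Local Machine Users",
                    "Department", "Company Internal", "Universe/Internet"]

# Difficulty, ordered from hardest to easiest, with HD Moore's Line
# (a Metasploit module exists) as the reference point.
DIFFICULTY = ["ninja skills required", "documented but mitigated by controls",
              "HD Moore's Line (Metasploit module)", "anyone could do it"]

# Frequency of observed attempts, ordered from rarest to most common.
FREQUENCY = ["none observed / unknown", "low", "moderate", "high"]

# Value of the exposed assets, ordered from least to most valuable.
ASSET_VALUE = ["public data", "internal documents", "PII", "payment card data"]

def risk_tuple(finding):
    """Express a finding as an ordinal tuple so findings can be compared."""
    return (ACTOR_POPULATION.index(finding["actors"]),
            DIFFICULTY.index(finding["difficulty"]),
            FREQUENCY.index(finding["frequency"]),
            ASSET_VALUE.index(finding["assets"]))

findings = [
    {"name": "finding flagged to the manager", "actors": "Company Internal",
     "difficulty": "documented but mitigated by controls",
     "frequency": "none observed / unknown", "assets": "internal documents"},
    {"name": "open unauthenticated file share", "actors": "Company Internal",
     "difficulty": "anyone could do it", "frequency": "moderate",
     "assets": "PII"},
]

# Rank everything by the same measures; the question that follows is simply
# "how many findings sort above the one you are obsessing over?"
for f in sorted(findings, key=risk_tuple, reverse=True):
    print(f["name"], risk_tuple(f))
```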

And then I asked the magic questions.

“How many vulnerabilities have we identified in the environment?”

He gave me a number.

“Using the same risk measures, how many of these vulnerabilities are a greater risk than the one you just pointed out to your manager?”

Silence for a moment, and a sheepish smile came across his face, and he said, “I get it.”

I have seen this situation many times before.  In the moment of discovery we get too close to a vulnerability or a threat, and we obsess over it.  We study it intently and learn everything we can about how to leverage it, how it can work.  It becomes real because we can understand it and perform at least portions of the attack ourselves.  We focus on it because it is tangible and at the forefront of our mind.  We become obsessed and let that item tunnel its way past any barriers of urgency to place itself at the front of our priorities.  The Quantum Vulnerability Tunneling Effect.  We’ve all fallen prey to it.  We’ve all tunneled our issues to the forefront out of fear and uncertainty.  That’s why I liked using the risk assessment.  It required that he re-examine his assumption that this vulnerability was critical, and test it with facts through a risk assessment.  It reset the perspective of the vulnerability in relation to everything else it should be considered alongside.  He wasn’t happy that the vulnerability was going to be accepted as a risk, but he also recognized where it belonged in the universe of risks.  He could look at the forest and see that it was filled with trees, and that some were more worth harvesting than others.

I used to do a similar exercise with my team when I was leading security.  We did an in-house risk assessment.  I made the team list all of their perceived priorities regardless of how big or small, how insane or sane, and regardless of whether they thought them urgent or not.  I wanted them to know that their ideas and concerns were going to be considered.  We then went through a highly interactive risk analysis session that resulted in a list of priorities based on those ideas.  We put the top ten that we felt we could accomplish during the year on a board at my desk, and the remainder went into a book on my desk so we could say they never got lost.

Someone on my team would invariably come to my desk, hair on fire, to say they had a risk that *had* to be taken care of right away.  My response was cool and calm.  I would simply ask, “Does it require greater attention than any of the items on that board?”  This would stop them in their tracks and make them think.  They would look at the board, think for a few minutes and respond with a “Yes” or a “No”.  Usually it was a “No”.  If it was a No, we would pull out my book and write down the issue.  If it was a Yes, I would have them write it on the board where they thought it should go, and put their name next to it.  They could claim the success, or suffer the ridicule from our team if they were way off.  Priorities and perspective were maintained.

The Quantum Vulnerability Tunneling Effect was avoided, we stayed calm and on course, and we could react well when a real emergency came along.

But those are just the effects you get when you think in terms of risk.

Accuracy vs. Precision – My Risk Epiphany

Did you ever have a moment where a concept you have never been able to figure out or understand suddenly clicks in your head?  I had long struggled to understand a key element of Risk Management – how to build a risk assessment model that includes likelihood.  And a strange confluence of circumstances made my light bulb go off.

Now before I go into the story, let’s cover a bit of background on this.  Risk Management is a field that I admire and consider critical to any organization and its operations, and it is especially important to my own field, Information Security.  Being able to communicate a tangible, descriptive message of risk to an organization is critical.  But I could never quite seem to do it with the precision that I felt necessary.

I always stumbled on the issue of likelihood.  I could estimate with surprising ease the cost of an incident.  I have mastered the process of asking key business groups about the cost of failure and know how to test their attributions of cost.  I have been extremely comfortable identifying the costs of an incident – the cost of lost productivity, the cost of lost sales, the cost of lost intellectual property – and “range of losses” was a concept I could easily make tangible.  For retail companies I could estimate a range of lost revenues by looking at highest-day revenues (Black Friday) and lowest-day revenues.  That became my range.  I’d find the median and we’d have three values to work with.  I would also be able to factor in idle time of workers and unused infrastructure and equipment, and compute these down to the last dollar if I cared to be that detailed (which I usually didn’t – getting to the nearest $100,000 was more than enough for these companies).  I could even sit with a marketing team and estimate lost goodwill based on the cost of advertising to regain those lost customers, and revenue downturns due to those lost customers.

But I could never feel comfortable with creating a picture of the likelihood that some event would occur.

Why?  I wanted it to be perfect.  I wanted no one to question the numbers – they would be facts, let the chips fall where they may.  I wanted people to know in absolutes, with absolute precision.  Except there is no such thing as an absolute – especially in risk.  The light bulb that went off in my head was the light bulb of “imperfect knowledge”.  Risk is an estimate of possible outcomes.  It is about being accurate, not about being precise.  Bad risk analysis is when you pretend you can give absolutes, or when you make no attempt to find a range of things that are “more likely”.  Do I have you scratching your head yet?  Good.

Let me give you an analogy to illustrate what I mean by accuracy and precision.  In a battle, accuracy would be knowing where your enemy is attacking from, or even where they are most likely attacking from.  If you find out that your attacker has the capability to scale that 3000-foot cliff you discounted due to its level of difficulty, you would add that approach back in, because it gives a more accurate picture of all the possible ways your enemy might attack you.  That accuracy is accounting for all possible outcomes.  Precision is knowing exactly where to aim your cannon so that it hits your enemy at an exact spot (biggest tank, largest warship, best group of archers).  Accuracy won’t help you to aim the cannon.  Accuracy will tell you where to put the cannon and what range of fire it will need.  Precision is about aiming your cannon, but it will fall short on telling you where to position your entire army.

The problem I have struggled with in risk analysis is that I wanted precision – and that made me struggle with determining likelihood.  The confluence of ideas hit me two days ago.  Somehow the idea of Alex Hutton’s and Josh Corman’s “HDMoore’s Law” (an InfoSec bastardization of the “Mendoza Line”), combined with a quick chat about CVSS scores and the idea of “difficulty” associated with vulnerability scores, made something click.  That, and a peek at a risk analysis methodology that didn’t try to make likelihood a precise number.  Instead it asked a simple question – describe the skill required to achieve the event, and provide a range of frequency at which the event would occur.  Bing!  I could work with descriptions, and so could executives!  If you try to arrive at a precise number, executives who play with numbers all day long will probably rip it apart.  If you give them probable ranges and descriptions of the likelihood, they get the information they need to make their decision.  It is imperfect knowledge.  And executives make decisions using this imperfect knowledge every day.  The more accurate the imperfect knowledge is, the more comfortable the executive will feel making the decision.  And the easier it is for an executive to understand the imperfect knowledge you give him, the more he will appreciate your message.

So what did my epiphany look like?

First, I realized likelihood is a balance of understanding the level of difficulty for an event to occur and its frequency.  Level of difficulty is really about the level of effort or confluence of circumstances required to bring about an event.  Take a vulnerability (please, take them all).  How much skill would a person require to exploit a given vulnerability?  Is the exploit something that even the average person could pull off (an open unauthenticated file share), something that is available in Metasploit, or is it a rare, highly complex attack requiring unknown tools and ninja skills?  This is not to say that the exploit cannot be done – it is determining whether the population that can perform the exploit is smaller than the universe, and hence the likelihood reduced.  The difficulty of having a tsunami hit the eastern coast of the United States is based on the rarity of unstable geographic features in the Atlantic Ocean that would generate one.  The Pacific Ocean, on the other hand, has a large population of unstable areas that can generate a tsunami.  The skill required to exploit an unauthenticated file share or FTP server is far different from the skill required to decrypt AES or to mount spoofed man-in-the-middle attacks against SSL.  I can already see the binary technologists fuming – “but, but, people can do it!”  Sure they can.  Any attack that has been published can be done – and there are many more that haven’t even been made public yet that also can be done.  A business cannot afford to block against everything, much like we cannot stop every car thief.  What we can do is avoid the stupid things, the easy things, and more importantly – the most likely things.  This is a calculated defense – choose those things that are more likely to occur until you run out of reasonable money to stop them.

Then I took an old concept I had around frequency.  For me there are multiple sources I can use to extrapolate frequency.  Courtesy of the three different, highly data-driven analyses of breaches produced by the major forensics organizations, we can begin to estimate the frequency of various types of attacks.  Data repositories like VERIS, the various incident reports and general research of the news can give us a decent picture of how often various breach types occur.  A great illustration of this is Jay Jacobs’ research on the Verizon DBIR data looking for the number of times that encryption was broken in the breaches researched.  The data set was a grandiose zero (0).  Frequency can be safely ruled “low”.

Suddenly I was able to walk through a vulnerability report I had been handed and put together a quick risk analysis.  I asked five questions:

  1. What assets are on the affected systems?  (for example email, payment card data, PII, intellectual property…)
  2. What population of people would have access to directly exploit this vulnerability? (Internal employees, administrators, or anyone on the Internet)
  3. What is the level of difficulty in exploiting this vulnerability? (CVSS provides a numerical scale which I was more than happy to defer to, and in some cases where the general user population could exploit it, we created a “-1” category)
  4. What is the frequency that this type of exploit has occurred elsewhere, and what have we seen in our organization? (research into DBIR, asking security team at client site)
  5. What controls are in place that would mitigate the ability of someone to exploit this vulnerability? (such as a firewall blocking access to it, or user authentication, application white-listing etc.)

I took all the data that was collected and turned the risk into a sentence that read something like this:

“Examining the risk of being able to see information sent in encrypted communications:  Anyone on the Internet would have access to attempt to exploit this, however a very high level of competency and skill is needed to decrypt the communications.  The frequency that this type of attack occurs is very low (typically done in research or government environments with mad skills, and lots of money).  There are no additional controls in place that would mitigate this risk.”
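
For the curious, here is a minimal sketch of how those five answers can be stitched into that kind of sentence.  The field names and example values are illustrative only, not a prescribed format; the point is how little machinery the narrative form actually needs.

```python
# A hypothetical sketch: the five answers collected for one vulnerability,
# stitched into a plain-language risk statement. The field names and the
# example values are illustrative only, not a prescribed format.

answers = {
    "risk":        "being able to see information sent in encrypted communications",
    "assets":      "email and payment card data",                                    # Q1
    "population":  "Anyone on the Internet",                                         # Q2
    "difficulty":  "a very high level of competency and skill is needed",            # Q3
    "frequency":   "very low (typically research or government environments)",       # Q4
    "mitigations": "There are no additional controls in place that would mitigate this risk",  # Q5
}

statement = (
    f"Examining the risk of {answers['risk']}: "
    f"{answers['population']} would have access to attempt to exploit this, "
    f"however {answers['difficulty']}. "
    f"The frequency that this type of attack occurs is {answers['frequency']}. "
    f"Affected assets include {answers['assets']}. "
    f"{answers['mitigations']}."
)

print(statement)
```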

The last glue that fit this all together was making all of your assumptions about the risk explicit.  I’ve talked extensively about the value of being explicit – it makes the data easier to examine, challenge, correct, and make even better.  The result is a more accurate risk assessment based on more accurate data.

The true detractors of Risk Management would point out that none of this is perfect or certain.  They would be correct, but then nothing in life is certain.  We tend to want to be perfect, to be right and not wrong, because we fear being wrong.  The sources of this tendency are boundless, but a bit of it, I suspect, comes from our high level of exposure to the highly precise and binary world of computers; as a result we look to make the rest of the world much like this model that we idealize.  Ones or zeros, exact probabilities, exact measures of cost… but life outside of the artificial construct of computers is not like that.  It is full of uncertainty and non-binary answers.  Those subtleties are what Risk Management can capture and help us understand in a way that is closer to our binary desires.  But never completely.  What Risk Management does do is give us better accuracy – so we can make more accurate decisions and be less erroneous.

So step away from the perfection.  Give your team a view of the risk that is in terms they understand.  You might just find that giving them a description, a narrative and ranges to draw from is much more accurate than anything they’ve used in the past.  But whatever you do – do not aim for precision.  Aim for accuracy, even if that means the guess is even less precise.  Your management wants the accuracy.  Just like their profit figures, your estimates will never be precise, but data can make them more accurate.

Now you might still have a question of “so how do I quantify this?” Ah, that’s for next time…

BSides San Francisco Presentation

So I did a little talk at BSides San Francisco 2012.  It’s a prequel to my book “So You Want to Be the CSO…”  The talk was recorded and is available on a BrightTALK channel, so you can view it at your leisure.  Just pity the poor guy in the front row who I accused of being “sexy”.

#SecBiz or The Better Answer to Martin’s Question

I had the good fortune of a long drive (12 hours to be exact), which allowed me time to catch up on four months of backlogged episodes of Martin McKeay’s Network Security Podcast.  My fortune improved when I listened to the June 7th, 2011 edition.  I hadn’t known about the #SecBiz thread on Twitter, and I am sorry I missed it when it started.  The discussion on the podcast was fantastic.  The identification of the issues, the perspectives offered, the ideas on distribution of duties and the consensus everyone had about the need were spot on.  The stories of employees having to work in every part of an organization are excellent, and a great insight.  A well-placed CEO I know of did the same in his first month after being hired and created a significant level of trust across the organization.

If you haven’t heard the podcast, please do so.  It is all excellent.  Well, except for the last 18:27, after Martin asks the question: “What can we do…”  To me, the answers at that point fell flat and missed an opportunity.  So many great ideas that helped bridge the gap were provided before the question that the opportunity to expand on them was missed.  So I’ve decided to provide some answers, and make up for my lost time on the #SecBiz discussion.  This blog post will be a bit fractured and piecemeal, but the intent should come through.  The thoughts are all part of lectures I’ve given since Shakacon in 2007, ongoing research, and a book I’m writing based on my research and case study collection.

First I’d like to point out something I think is very important to the discussion.  Years ago a wise CIO taught me to avoid the great mistake of referring to the non-IT portion of the company as “The Business”.  IT and InfoSec are part of the Business, and together with the other parts of a business create solutions and better the organization as a whole.  Referring to “The Business” separate from IT perpetuates the “Them” vs. “Us” we are trying to overcome.  Create new language, since our language is a reflection of our thoughts and intentions.  Let us re-arrange our intention and build the first link between ourselves and the other parts of the business in our mind.

The Goal

The goal that the #SecBiz thread shoots for is mutual appreciation between InfoSec and the rest of the business.  The goal is noble; however, too often we look at it in InfoSec or technical terms.  The answers to Martin’s question highlighted this for me.  The answers talked about how to structure InfoSec, how technical knowledge is key, and how teams need to take responsibility.

But the business will never understand the depth of technical issues in InfoSec, just as we will never understand the intricacies of finance and accounting.  We both can communicate high level concepts, but the technical details are why we have “specialties”.  Generalists who can also dive deep are rare.  We must stop trying to make everyone outside InfoSec experts.  The answer we need to focus on instead is based in the dynamic of how to build collaboration and a common base of understanding regarding our goals, and our priorities.  To do this we need to think deeper into psychology, or in my favorite parlance, organizational psychology.

Understanding Motivation and Perspective

Each of us has a motivation – things that we value and strive towards to achieve our goals.  These goals include the things we value, the objectives we want to achieve (both long term and short term) as well as the way we act to support these values.  Every business group (which includes IT) has numerous individuals working in it who have their own motivations and values. There are often commonalities – values such as recognition and significance, certainty, and personal connection – but with individual variations in priority and manifestation.  A CFO and the finance group are, from a business perspective, focused on the goal of ensuring the financials are accurate, timely, and assist in the objective of maintaining profitability through the appropriate management of monies in all forms.  There are also personal motivations layered on top of this such as being recognized for your work, and maintaining personal relationships.

This might seem tangential, but I assure you it is not.  If the InfoSec group comes along and tells the finance group that they cannot implement software that in the eye of the CFO and the finance group helps them achieve their goals faster, better and with the potential for them to be recognized for improvements in their group, how do you think it will go over?  Think.  You just told a group that they cannot pursue things they value.  Their value is based on their perspective through their motivation.  They do not see your perspective because it is not part of their goals or values.

Until we understand the motivations, goals and values of various groups within our businesses, we cannot accurately address security in those groups.  We must apply security with their motivations in mind.  If we derail their motivations, we will fail.  If we align with their motivations or show how our goals and values align with their motivations, we will create wins, and the understanding we are looking for.

[These ideas have been discussed in academic circles through Maslow’s theories, Chris Argyris, and the cognitive psychologists, and adapted in more contemporary discussions of motivation through the business and personal development works of Stephen Covey, Jack Canfield and Tony Robbins.]

Building Collaboration – Towards Empathy

I have long held that collaboration is the method to creating buy-in and understanding and I suspect few would disagree.  My definition of collaboration is bi-directional actions and behaviors that include honest communication, active listening, and empathy.  The latter is what I consider the critical end-game you need to achieve.  I do not advocate outright sympathy, but rather an understanding and appreciation for another person’s thoughts, concerns, challenges, and ultimately their motivation.  From the above conversation, understanding a person’s or group’s motivation allows us to align or at least discuss issues in relation to their motivations.

Collaboration is not built by re-inventing how we shuffle InfoSec groups about but by building new paths and methods of communication.  The path to achieving this requires that we in InfoSec be willing to learn and lead in building these new paths and methods of communication.  Either side can initiate and lead this effort, but since we are the ones raising the initiative and calling out for greater recognition, let us take the lead in building that bridge.  Let us model the methods so we can all benefit.

Modeling collaboration is first achieved by reaching out to open lines of communication.  The techniques to achieve this include asking questions first rather than trying to “tell” someone things.  Ask to understand, because it allows the other party to feel listened to, and it allows you to understand their frame of reference.  We all value being listened to.  Be the bigger person and listen to those outside of IT and InfoSec so you can understand their business, their fears, their needs and their motivation.

The second step in opening lines of communication is active listening: being able to restate what the other party has said to demonstrate your understanding of it.  This creates respect, as the other party feels even more strongly that you are attempting to understand them.

The third step is active and sincere empathy.  Empathy is the ability to understand and comprehend the other party’s view, values and justifications for what they do.  You can understand their frame of reference.  Do not abuse this understanding, since doing so can dismantle and shatter the trust you have built with the other party.

Lastly, use the knowledge you have gained to relate your position and view to their view of the world, their goals and their motivations.  When you have tied your objectives to their motivations, you have created the foundation for collaboration.  They now see the value in understanding your goals since it aligns to their goals.  Your goals are being achieved because they are aligned to the other party’s goals.  We call this a win-win.  Both sides get their needs met.

Some of the ideas that have come about in my case studies:

Business Impact Assessments: Dragging the Information Security team around to do Business Impact assessments with each of the groups within the business – sales, accounting, logistics…  The questions that were asked were “What is the most important process in your group?”, “What keeps you up at night?”, “What processes or systems would cause you the most impact if they were to fail?”  The result was a very personal discussion about what each group cared about, what their priorities were, and what they wanted attention given to.  By doing this under the guise of a BIA, we were able to better understand what each group cared about, and what was most valuable to them.  We also were able to understand in great detail the operational processes of the organization.  Think of it as a business mapping or process flow exercise.  We listened, we described what we heard to ensure we heard it correctly, and made sure we identified their biggest processes and biggest values.  The result was much more than just our knowledge of our business.  It built camaraderie.  The business groups felt we cared about them because we listened, we showed empathy for their needs and goals.  Now when we discussed security we had two things working in our favor – a knowledge of the entire business that we could use in determining risks and where to apply useful controls, and an audience who felt respected and felt it acceptable to show us respect.

Security or Risk Council: An internal “governance” group, not unlike an IT Governance structure, which reviews business and IT objectives and budget to make sure IT aligns with the priorities and objectives of the entire organization.  The council is made up of leadership from all business groups, who are free to share their concerns for security and risk management.  Monthly meetings are held, and all domains of Information Security are discussed, but with a focus first on areas outside of the IT and Information Security groups (such as HR background checks, concerns for fraud and loss in distribution, or safety of workers in the workplace).  By first making the council about their security concerns, the participants felt it was a collaborative effort and that their views were valued.  This example worked well in several companies.

Risk Management and Business Process Discovery: Businesses understand risk management.  Banks and insurance companies, for obvious reasons, prove particularly adept at risk management and treat process evaluation as valuable and integral to the organization.  While listening to Edition 10 of the Risk Hose Podcast I re-discovered the concept of risk management – in a process-oriented sense – as a reflection of the ideas I discussed above.  The Risk Management teams explore the business processes with the business, understand them, evaluate the risk, and decide with the business what to focus on.  The InfoSec team, in undertaking a business process discovery, can come to understand the business.  By framing the analysis in Risk Management terms, you increase the likelihood that the other areas of the business will relate to the findings.

Distributing Responsibility for Security: One of the conversations in the Podcast revolved around Security Operations.  I’m going to go down this rabbit hole even though on many levels it’s not a direct #SecBiz discussion.  It can however serve as a model of how to collaborate on security.

I prefer to demarcate Security Operations into two groups:

a) the acts of providing preventative security functions such as Anti-Virus, Patching, Firewalls, and System Configuration (for security).

b) the acts of providing detective security functions such as Security Incident and Event Monitoring, Unauthorized System and File Changes, and validation of controls (such as reviewing system configuration standards or firewall rules for approval).  I also sometimes refer to these as segregation-of-duty functions, since they are checks against potential inappropriate activities and control failures.

I divide this way because I prefer to assign responsibility for the preventative functions to the administrative groups who are usually tied to the systems and devices (e.g. configuration standards and patching as the responsibility of each system group, firewalls as network devices, etc.).  This takes security from being an InfoSec-only function and makes it part of the job description for groups outside of InfoSec.  They become accountable for security, and it begins to be part of their culture and their thoughts.  Holding them accountable is the second part – the detective controls that are assigned to an InfoSec group.  The outcome of these role designations is conversations about security that spread wider than just the InfoSec group, and control designs that are collaborated on.

What does Collaboration Achieve?

I conducted a survey in the summer of 2007.  Over 100 companies responded, and while the survey was highly unscientific, the results were clear.  The survey asked what the perceived acceptance of the company’s Information Security Policies was, and which parts of the business were involved in creating those policies.  Unsurprisingly, of the organizations who said they developed their InfoSec Policies with the business, 80% said their policies were well accepted, and the remaining 20% felt the policies were accepted and challenged, but not outright rejected.  Of the organizations who developed their policies just within IT or the InfoSec group, only 36% felt their policies were well accepted.

The Quote

I’m going to leave you with two quotes since they both contribute some insight:

Chris Hayes: “We have to accept that it’s not our risk tolerance that matters as risk practitioners or security professionals.  It’s the person accountable for the risk at the end of the day.  And until you overcome that you’re almost a barrier to what you’re trying to achieve.”

We have to work with the business to get them to understand the risk, and design with it (for better solutions).  In order to do this we need to understand what the business is about in the first place.  And then we need to demonstrate we understand it, with empathy for their motivation.

Ultimately InfoSec is juggling risk and business goals, or as @shitmyCSOsays quoted: “Security is about eliminating risk.  Business is about taking risk to make money.  See how they are a perfect match?”

Do you have SOCD? (Security Obsessive Compulsive Disorder)

Are you SOCD?

You have it if:

  • You feel the constant need to force drastic security measures.
  • You say: “This company really needs to revise all the (SOX) controls.  There’s absolutely no reason to have management involved in the process.”
  • You threaten “We need to just block everything and then open up stuff when something breaks.”
  • You believe that technology can solve all security problems.
  • You use biometrics or RSA tokens to access your blog.

Look at this statement:

“Security is about eliminating risk.  Business is about taking risk to make money.  See how they are a perfect match?” – @sh*tmycsosays

Which sentence do you examine and have the greatest curiosity about?  Which sentence makes you roll your eyes?

Security Obsessive Compulsive Disorder is an obsession with imposing security in the face of competing requirements for accessibility to the asset you are trying to protect.  In simple English, you won’t let anyone near anything despite other people needing it.

Now, what are your real desires?

Deep down do you really want to be appreciated? (Probably yes.)
Do you wish someone in the company would listen to you?
Do you wish people stopped avoiding looking you in the eye when you pass them in the hallway?
Do you wish you were invited to the big meeting when the new project design was being discussed?

Then I would recommend some treatments.  Don’t worry, I promise to not make you lie down on a couch and tell me about your sordid relationship with your RSA key fob, or late night googling of awk and sed scripts.  Promise.

A. Deep Breathing Exercise

1) Giving attention fully to your stomach, slowly draw in two deep breaths.  As you inhale, allow the air to gently push your belly out.  As you exhale, consciously relax your belly so that it feels soft.  If it already feels soft, that’s okay too.  Too much time staring at EnVision consoles will do that.

2) On the third breath, bring to your mind’s eye an image of a user with good intentions and a desire to just do a good job for their boss.  Imagine the happiness when they receive their bonus for having completed their project on time, or for becoming more efficient in their job.

3) Take a fourth breath, and imagine the CEO of the company talking to the board of directors about how the money they invested in the company is producing profits because everyone could do their job, efficiency was up, and the new products could be released on time.

Now close your eyes and imagine what you can do to make these two people happier, more successful.  Think of what things will protect their goals of getting that bonus, or satisfying the investors who have made this company possible.  Remember that security can be part of this equation, but you have to consider their happiness too.

B. Unenforceable Rules

If you are still struggling, I’d like you to think of something Frederic Luskin calls Unenforceable Rules.  Unenforceable rules are rules that we might currently expect others to adhere to, but which aren’t really in our control, and which we do not have the power to “make right”.  Are the rules about security you think are necessary really unenforceable?  Let me counter the question with another question: how many of your rules have been implemented?  How many have not met significant resistance?  You might ask if that means there aren’t any rules that others will share.  There will be, trust me, but let me share a little secret.  No security expert ever shares the same rules about security with everyone in their company.  Even the best and most respected CSO will find disagreement on tactics or rules they may think are perfect.  The difference is their ability to recognize that they are unenforceable.

Think then about what your hope is – your goal, your real focus for what you are trying to achieve.  Then look at the rules you want to enforce.  Do you think someone might object to them? (Notice I don’t say they are wrong, just that someone else might not share them.)  Now think about why they might not agree.  What might their objectives be?  What might their goals or focus be?  How do the unenforceable rules violate their goals?

Now you will likely find yourself much more able to understand their goals.  Now you will find yourself able to design new rules – rules and associated actions that users and that CEO will find appealing because they support their goals too.  These new rules and actions can achieve security goals without requiring SOCD.  Recognize that you still may not be able to exercise the level of control or security you wished for, but you likely will have made an impact that you otherwise would not have if you had held to your unenforceable rules.

Credit to Frederic Luskin, with absolutely no malicious intention to parody his incredible work.

Mentoring Outside the Echo Chamber

I have been incensed by certain “pundit” activities through a recent encounter that unfortunately mirrors the frustration I felt 20 years ago as a result of the actions of certain academics where I once taught.  The actions to which I refer?

  • Sweeping generalizations
  • Nihilistic critiques
  • An unwillingness to offer or model a solution

Let me give you my recent trigger:

A small company’s security team had announced to a shocked management that they wished to stop using Firewalls and Desktop Anti-Virus because they were ineffective.  Probing questions traced this back to a recent encounter that this small security team had with a pundit who professed that these tools were ineffective and that new times needed new tools.

Now I’m going to carefully choose my fight here.  My issue is with the advice, which was presented in an abstract vacuum, devoid of situational awareness and environment.  The pundit’s goal of inciting thought and discourse through the abrasiveness of the comments unfortunately served this SMB poorly.  I do not wish to debate here whether Firewalls or Anti-Virus are valuable, because there are too many variables to make that a meaningful discussion in a one-sided forum such as a blog.  Such a debate will depend upon what you are trying to achieve, the relative effectiveness of the specific vendor’s technology employed, and the effectiveness and appropriateness of the implementation.  These many variables make the sweeping generalization that “Firewalls are ineffective” quite dangerous.

Yet, as this poor security team understood it, their “ancient” tools had zero value.  A one-hour question-and-answer session with the security team (unfortunately in front of management) led to revelations that they had entered what I call a nihilistic vacuum.  They had not considered what controls those tools were intended to provide, what threats and risks were most relevant to their environment and, not surprisingly, they had no strategy beyond the overly simplistic objective of “buying a new technology”.  There was no thought of how to address the openings left by their abolition of their only source of network access controls or detection of malicious software.  Their new-found idealism was directionless and without purpose.  This is far from productive, and in a small company, potentially devastating.

What ensued for the remaining two hours was an exercise of modeling how this security team should have reacted to their advice.

I first inflicted some pain by saying that yanking a tool, even one of limited effectiveness, is dangerous if no thoughtful examination is made of what is lost, what is gained, and what will fill the void.  What I did next was to model a thought and design process for this team that examined the decision and how they could have approached it far more effectively.  Things we discussed:

a) what is valuable to protect here at this company?
b) what are the ways these things are used, handled, or stored?
c) what controls are in place to make sure they are used and protected appropriately?
d) which of these controls will you lose when you abolish the “ancient” technology?
e) what designs do you have in place to replace these controls?
f) what level of improved effectiveness and efficiency do you gain from this new design? (and how you can try to model it)

I then showed them that “ineffective” or “ancient” rarely applies to control objectives (such as preventing inappropriate network access to systems, resources and data) without a much greater shifting of heaven and earth.  I taught them in the hour I had left that design is an act we must all undertake, and not one to defer to some Pundit who lacks the awareness of your environment and goals to make the determination for you.

For those of you wondering about what incensed me 20 years ago; as a Teaching Assistant in two different architecture schools I watched professors launch into scathing reviews of students’ work without a thought given to the student’s or project’s situational awareness. The critique was nihilistic, abstract, and linguistically incomprehensible. The student left with nothing new but tears (or a stiff upper lip). There was no growth from replacing the mistake with a new idea or process, no modeling by the professor of how what they said worked in reality (or a physical world). The student had to grope at random straws to identify the faults in his demolished design (in one case, literally demolished). I rallied against these monstrous outrages then, as I do now.

So all you Good and Bad Pundits, dig deep.  Think carefully about what you say, because many hang on your every word.  Your words have value, but they also need context.  Teach completely and give this context.  Be specific and explicit in your critiques.  And when you finish with your critique, show how to correct the issues, evaluate effectiveness and model how to find solutions.  Inside the context of the InfoSec Echo Chamber we attempt to incite each other to action, but we forget that those who are on the fringe do not always benefit from our battle scars and insights.

I issue this challenge to Pundits because you hold the mantle of leadership through the papers, lectures and conferences which proffer your ideas.  Those on the fringe also have the responsibility, but they are the naive, and look to you to overcome this naïveté.

Students, there is no utopia. If you find after you have listened to one of these Pundits you suffer a vacuous nihilism in your InfoSec soul, grab some ABBA, a bean bag chair, and sit down with someone who can explain what it all really means.  Unlike unicorns, these people really do exist.

If you need some thoughts about how to do this, I recommend reading Donald Schön’s “The Reflective Practitioner” and Chris Argyris’s “Theory in Practice” (as well as any of his books on direct explicit feedback).

My Take Away Moment from BSidesSF

I won’t attempt to rehash the conference, except to say, if you have a chance to attend a BSides event, do so in great haste. Despite being free, they are worth every penny you could invest in visiting one.  What a great respite from the RSA Conference!

What I do want to cover was a very interesting panel at the end of the conference.  The panel included some great minds: Will Gragido; Josh Corman; Marc Eisenbarth; HD Moore; Dave Shackleford; Alexander Hutton; Caleb Sima.  The subject was of interest since it drew quite the crowd: “State of the Scape: The Modern Threat landscape and Our Ability to React Intelligently”

But what came out of the panel as a result of some “heckling” on the subjects of APT, Cloud Computing, et al. was priceless (kinda like a MasterCard commercial).  It was not what I think the panel had planned or was expecting (but that’s the fun of a panel, and of BSides).  If you are a budding CSO or Security Manager take note:

  • Don’t make people security experts.  Make it easy for people.
  • Make security accessible and something that people care about.
  • Make it easier for programmers to program securely than it is to program insecurely (Microsoft’s .Net work was offered as an example).
  • Get out of the echo chamber where we only talk about security in obscure terms and treat everything as unique and terrifying.  People need it to be accessible and simple.

Wow.  This echoes stories I’ve told for years, and stories that have been popping up around the world as I’ve been traveling the last year:

  • At a conference I attended in the EU, the local CERT authority described a company who had spent millions of Euros on top-of-the-line security technology, and yet it was all turned off.  It was turned off because users always looked for ways around it, because it made their jobs too difficult if not impossible to perform.
  • As a traveler, do you enjoy the TSA security line – dumping out your entire belongings into a plastic tray for the world to peruse, being subjected to numbing technology scans, and in the end a joyous pat down?  Or would you prefer a simple process to ensure your flight is safe?
  • Is it easier to teach programmers to write code void of SQL injection flaws, or is it easier for Microsoft to write .Net functions that make it more difficult to make direct SQL calls, thus significantly reducing the probability of someone writing code that results in SQL injection vulnerabilities?  (P.S. Microsoft did the latter, hooray!  A small code sketch of the idea follows this list.)
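
The .Net work the panel cited is one instance of a general pattern: when a library’s normal calling convention is a parameterized query, the easy path is also the safe path.  Here is a minimal, hypothetical sketch of that idea using Python’s built-in sqlite3 module (the table and the inputs are invented for illustration):

```python
# A small illustration of the underlying idea (shown here with Python's built-in
# sqlite3 rather than .Net): when the library's normal calling convention is a
# parameterized query, the easy path is also the safe path. The table and the
# inputs below are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "alice' OR '1'='1"   # a classic injection attempt

# Fragile: building SQL by string concatenation invites injection.
unsafe_query = "SELECT role FROM users WHERE name = '" + user_input + "'"
print(conn.execute(unsafe_query).fetchall())   # returns every row in the table

# Easier and safer: the placeholder form treats the input purely as data.
print(conn.execute("SELECT role FROM users WHERE name = ?", (user_input,)).fetchall())  # returns []
```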

Simplicity for all of us is the best way.  Simplicity that anyone can use, and that makes it easier for all of us to do things the right way rather than the wrong way.  And that does not necessarily mean making the hard way painful by imposing fines, penalties or punishments.

So as a Security Professional I would highly recommend you take the following actions in your strategy and tactics:

  1. Make security invisible – it shouldn’t get in anyone’s way, or stop them from doing what they need to do to get their job done.  But it should be part of what they do.
  2. Remind people of what they value – so they can protect that.  It may be the teenager’s pictures and music, it may be the accounting department’s numbers, it may be the sales person’s leads, or it may be the IT infrastructure.  Whatever it is, make sure the people who care about it are aware that you are trying to protect what they value.
  3. Look for methods that make security easier for users than the lack of security.  Whether that is through technology that makes authentication easy (biometrics for execs?), or programming libraries that are inherently secure, or making data easier to handle securely than insecurely.
  4. Always give something back.  If you find that a security control you have to put in place has an impact, be ready to give something back to the users.  They will be more likely to comply if you can show that you care about their priorities (such as how they can get their job done successfully and efficiently).

Sophisticated Analysis of Risk Management is Critical…don’t do Sophisticated Analysis Risk Management

There is a wonderful discussion occurring in SIRA (the Society of Information Risk Analysts) these days. I missed the beginning of this group, and I regret it, because the messages coming out of the discussions are extremely insightful and critically important for anyone who is managing risks around Information Security, or any type of security for that matter. The discussion I want to hit on is one that I am sure is already a contentious debate within and without SIRA: Should I perform a risk analysis at my company? The subsequent questions are the source of much of the resistance: What model should I use? How do I measure the likelihood? Does impact include hard and soft costs? Do I need a PhD in statistics? Why does Excel always crash when I try to do big equations like this?

      I can’t answer why Excel is crashing, but I think the rest has an easier answer than we might think.

      Let the Gurus do the Risk Modeling, Statistical Analysis:

      The most substantial and accurate challenge to Risk Modeling in Information Security is that there is not enough data around probabilities and as such, the quantitative rigor of our analyses declines rapidly. I would absolutely agree. Any insurance company will tell you that there is little, if any, actuarial data on Information Security. But the only way we are going to overcome this challenge is by collecting and analyzing that data. Let the experts do this work and collect the knowledge. Let them build the complex models, be the PhD’s in statistics, and find better ways to analyze the data than Excel. Let this data become the source of the probabilities that we need.

      Look at the value we get from seeing which types of attacks are most frequent against Payment Card Data, the mix of sources of data breaches, the types of records stolen, which vulnerabilities are most often exploited….I’ll calm down now.  The excellent work that is being done to analyze these probabilities through current studies needs to be pushed forward. The showcase example has been the VzB breach studies. They have contributed significantly to our knowledge of what is really happening. I would love it even more if there were a clearinghouse for the statistics, so we could merge all the data from those who are jumping on board. Imagine the collective knowledge based on a myriad of views, experiences and organizational cultures. And let’s face it, data is useful. It validates what we see, it removes ambiguity, it allows us to correlate events and situations, and it even highlights differences and nuances that we don’t see. It has the capability to remove pre-disposed biases and correct a priori assumptions.

      Don’t Let the Data Rule You:

      However, statistics don’t tell the whole story. Let’s be honest about it. There are stories behind the statistics, not the other way around. Statistics will show us a story about the data we feed them. They won’t tell us where the data came from, what factors affected the source of that data, or what the outcomes of that data were. We have to supply that information. Remember, data in = data out, or garbage in = garbage out. It is always important, as we make use of the data, that we read the fine print (or the big print if they make it available) to understand the sources. The VzB breach reports have their biases: the 2010 report is potentially different from the 2008 or 2009 reports because of data input from the US Secret Service. Differences can emerge between a business collecting breach data and the US Government collecting breach data.

      Bias in the data will affect some of the outcomes. As an example, companies are probably more likely to use private security firms to investigate internal issues in order to avoid public disclosure and embarrassment, while US Government resources are more likely to be involved when the breach source is external, or when the company feels its legal repercussions are minimized. These are the stories that we have to consider when we look at the analyses, and they should be disclosed to make sure we use the data correctly.

      Use the Data, Not the Math

      For you, the new IT Manager, the result of all of this data research is that you now have a set of probabilities that you can say are based on reality, and you know the biases of the sources and the resulting analysis. You can now take your finger out of the wind, put away your “8 Ball”, and use real data. It’s not perfect data (remember its story!) but it is far better than what we had when I started in this field 20 years ago. You do not need to have a PhD in statistics or mathematics. You do need to know how to read the outcome reports from the analysis (some reading skills are necessary). You do not need to build a complex Risk Management model. You do need to build a simple one. Your risks can be built on the field of possible threats using the data from the detailed analysis. Your vulnerabilities can be built from your known environment. And the probabilities can now have some teeth. Even if you don’t feel you can build a risk model (time, effort, Excel just won’t work) you can always refer to the global models of probability and risk from the studies that have been done, which have been vetted, and which are based on extensive data.
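
      To show how little math the simple version actually needs, here is a minimal sketch in Python.  The threat names, likelihoods and impact figures are invented purely for illustration; in practice the likelihoods would come from the published breach studies and the impact estimates from your own environment.

          # Rank risks by expected annual loss = likelihood x impact.
          # All numbers below are hypothetical, not taken from any report.
          threat_likelihood = {
              "stolen credentials": 0.30,
              "web app attack":     0.20,
              "lost laptop":        0.10,
          }

          impact_estimate = {        # rough dollar impact if the threat is realized
              "stolen credentials": 250_000,
              "web app attack":     400_000,
              "lost laptop":         50_000,
          }

          expected_loss = {t: threat_likelihood[t] * impact_estimate[t]
                           for t in threat_likelihood}

          # Highest expected loss first: a ready-made priority list.
          for threat, loss in sorted(expected_loss.items(), key=lambda kv: -kv[1]):
              print(f"{threat:20s} ~${loss:,.0f} per year")

      That is the whole model: the sophistication lives in the studies that produced the likelihoods, not in your spreadsheet.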

      Lastly, as I wrote in an earlier post, my biases have changed, and all as a result of the data. I made a change in focus several years ago after reading the data gathered in Visible Ops. Now I am changing again, by using the data from the breach reports from various (trustworthy) sources. I’ve changed my previous biases because the data has told me to. The story for me is that I can now monitor threats, vulnerabilities and risks being realized, and identify what they are, their frequency, and their likelihood of occurring versus other threats, vulnerabilities and risks. I can focus my priorities…

      1) Let those who can analyze the data (and have the PhDs in statistics) analyze the data

      2) Use the results of their work to simplify and increase the accuracy of your risk analysis


      Handing Back Responsibility for Security

      There is a great lesson that unfolded at one of my customer’s sites during an audit.  It is a great story to tell, but more importantly, it lets me illustrate that as Security Professionals we need to design security to work in a way that makes it natural to the business.  I know, shocking isn’t it?  But it can be done…

      During an audit of a company’s security program the gentleman doing the audit asked for evidence of “…specific Security Testing…” in the development process.  The development manager responded, “We do testing, but not any specific Security Testing.  We do code reviews by someone who hasn’t written the code but is part of the same team so they understand the objectives and how it might impact other code.  We use the material we receive from annual training we have with our development tools vendor on how to write more secure and stable code.  We do data input and processing tests to make sure the system doesn’t break.  Then we test the functional specifications to make sure we met all the design specifications.”

      The auditor’s answer was, “That’s not specific Security Testing.”

      I stopped the auditor and asked him to tell me what “specific Security Testing” was.  His answer was, “It includes testing of the code, looking for security vulnerabilities, testing with tools that look for security problems, testing for error conditions or code failures that could result in the disclosure of data.  The testing you do here is Functional Testing.”

      So I asked a question of the Auditor:

      “What is the ideal objective that we, as Security Professionals would like to see when we look at application development?”

      When I got the same response back about what specific Security Testing is, I responded, “What if a software development program includes Security from design, through functional specification, through development and into testing?  Security is built into every aspect, and it is natural.  Is that not a better model?”  There was affirmative nodding.

      “Then, is it not appropriate that a company include Security Testing in their existing testing methodologies and refer to it as Testing, rather than as specific Security Testing?”  At this point there was some silence on both sides.  I then prodded the development manager, who proceeded to describe how Security was wrapped diligently into their design and functional specs, and how their input and processing testing included many of the elements of specific Security Testing that the assessor was looking for; they just never called it Security Testing.  It was simply called Testing.
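
      Here is a tiny sketch of what that can look like in code, using Python’s unittest module; the parse_quantity function and its limits are invented here purely to illustrate the point.  The “security” checks are just more test cases sitting beside the functional ones, in the same suite, run the same way.

          import unittest

          def parse_quantity(raw: str) -> int:
              """Hypothetical input handler: accepts only a small positive integer."""
              value = int(raw)                  # non-numeric input raises ValueError
              if not 1 <= value <= 1000:
                  raise ValueError("quantity out of range")
              return value

          class TestQuantityInput(unittest.TestCase):
              # An ordinary functional check...
              def test_valid_quantity(self):
                  self.assertEqual(parse_quantity("42"), 42)

              # ...and the security-minded checks live right next to it:
              # hostile or malformed input must be rejected, never processed.
              def test_rejects_injection_style_input(self):
                  with self.assertRaises(ValueError):
                      parse_quantity("1; DROP TABLE orders")

              def test_rejects_out_of_range_input(self):
                  with self.assertRaises(ValueError):
                      parse_quantity("999999")

          if __name__ == "__main__":
              unittest.main()

      Nobody on that team would call this a separate Security Testing activity; it is just Testing that happens to cover the security cases.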

      Let us be honest about something.  Not every development team thinks this way.  I happen to have a few very brilliant managers at clients who think this way.  Hats off to them.  But our goal as Security Professionals is to get all of our clients to think this way!  Security should not be a standalone activity operated in isolation.  Security should be a natural part of what we do every day. To paraphrase many security professionals, if we naturally did the “secure” things we should do in the first place, we wouldn’t need much of the artificial layer of protection and tools we build.

      We must drag auditors, assessors, and every other critic away from their “déformation professionnelle” – the tendency to look at things through the lens of their profession and forget about the bigger picture, or the real goal.  In the case of software development, most auditors see a world in which we decided (unilaterally) that developers can’t do it on their own, so we must put controls, tools and other activities in place to stop their bad code.  Instead, the goal should be to create an environment where the developers do include security in their processes – at every step.

      I don’t argue against the tools that are used in Security Testing.  I just argue that keeping these tools and processes out of the developers’ hands tells them it is okay to write bad code.  You are implicitly telling them that it is someone else’s job to make sure it is secure.  What we as security professionals need to do is hand that responsibility back, give them the tools, give them the training, and assign penalty and blame when they do not take up the bit.

      The lesson from this little story?  Let me walk you down the garden path:

      a)      Security should be built in as a natural part of our existing business processes.  It becomes a cultural and behavioral change.

      b)      Security should be everyone’s responsibility, not one group in isolation.

      c)      We need to play the coaches, not the ringleaders.

      Being in the Information Security profession is a lot like being someone’s coach or trainer.  Your goal is not to run their business, or to swing the golf club.  Your goal is to adjust them so that they improve their performance and results.
