Glass Houses…and Music Majors

First, a disclaimer…this post is *not* about bashing or ranting about Equifax’s security practices. Why? Because I do not have first-hand knowledge of what they did or did not do, or what specific exploits and vulnerabilities were leveraged throughout the kill-chain of the event. Frankly, it’s likely only the security investigators (internal and external), legal team, and outside counsel will ever know the details. Which is just fine by me. If you wonder why, then you’ve obviously never been involved in a breach and the subsequent investigation. There is a lot of conjecture (some logical, some not so logical), a lot of hand-wringing, certainly a lot of drinking (after hours), and a whole lot of lost sleep and hair (if you have any to begin with).

So why would I mention that?

Because I want to rant for a moment about the security community and the press who seem to have soundly drubbed Equifax for the breach.

This has nothing to do with their response to the breach.  Let’s set aside Equifax’s horrible response after the breach. I will not condone, support, or even pretend to empathize with their response. To put it mildly, their response to the breach sucks. You were breached. Mea culpa, and treat your customers, constituents, and not-so-willing-public-whose-data-you-have like your own injured child who you just accidentally knocked off a ladder and gave a lump on the head (and maybe a concussion).

Let’s instead talk about the blame we seem so eager to apportion.  Security professionals, take note of something we always say:

– It is not “IF” you will be breached, but “WHEN”

So suddenly Equifax is evil because they were breached?

You may counter, “but they had a vulnerability that was *3* months old!!!!”

Um, yeah….about that. Let me ask you how old the vulnerabilities are on the laptop you use for your pen-testing. And if you are a CISO or other security professional employed at a company, and you believe you patch your public-facing systems perfectly in less than 90 days, you are *woefully* uninformed, I would argue “naive” about how companies actually work, and not plugged into something called “risk acceptance”. Ouch, I think I just touched some nerves, but let me assure you, this is not personal. It is about the dynamic of an organization – something that outsizes the best of us.

Again, I cannot say this is Equifax, but I can say that nearly every company I’ve come in touch with struggles with this same problem.

Security Team: “Bad vulnerability, and it’s out there exposed to the Internet. We must patch it right away!”
Development Team: “Can we test this first? It’s probably going to break the application.”
Business Team: “This is a really critical app for our business group. Please don’t break it.”
Managers: “Don’t break the app. Can this wait?”
Executives: “We’re listening to all the people here, and can we please just not break things? Let’s take it slow and test.”
Development Team: “We have features to get out that are a priority and are scheduled to go out in the next three weeks for a customer.”
Business Team: “Please don’t interfere with this critical customer need.”
Executives: “Can we please not break things…”
Development Team: “The patch breaks something. It will take us a couple of months to figure out what. Meanwhile we have these other features to get in.”
….

See a trend? I don’t want to get wrapped up in the hyperbole of this being an endless cycle. The reality (at least for the organizations I’ve worked with) is that they do eventually, in a fairly reasonable period of time (which I will admit is a *very* subjective assessment), get around to figuring out and fixing whatever is broken by the patch. Some organizations are great at it, and it might take one or two sprints to figure it out. Others have competing priorities or long backlogs, and maintenance work doesn’t rise high enough on the list, but they still get to it within 3-6 months. In some cases, depending upon the complexity of what a patch breaks, that’s pretty darn good. And if you are skeptical of that, you need to spend a bit more time embedded in a development team.

I remember quite a few years ago listening to a talk at BSidesSF (one of the early years) from someone whose day job was to teach companies how to write secure code and evaluate code for security vulnerabilities.  He talked about a program that a customer asked them to write, and how, in the process, they found themselves committing exactly the same mistakes they lectured their customers to avoid.  They had vulnerabilities in their code, and found that deadlines made them take shortcuts and skip best practices they could (or maybe should) have applied.  And these were individuals I held in very high regard in the application security field.  They admitted – “It’s hard in the real world to do it right.”

So what should we learn from Equifax?

Security isn’t perfect.  We shouldn’t gang up on an organization just because they had a breach.  Every organization is trying to balance a business opportunity with the risk posed to those opportunities. It’s a balance. It’s a risk equation. It’s never pretty, but let’s face it, most organizations are not in business purely for the sake of security.  Every control costs money, causes customer frustration, and has an impact on revenue.  You may say a breach does too, and it does, but there is a balance.  Where exactly that balance sits can be a subject of great debate because it is not precise, and can never be predicted.

Patching is subject to much more than just “patch and forget”.  Application patching is even more complex.  The alleged vulnerability cited in the Equifax breach was 90 days old.  Even if it was 180 days old, there are factors we cannot even begin to understand.  Competing business interests, a belief that its exploitation couldn’t be leveraged further, a penetration team that didn’t find it or the exposure it could lead to because the applications were too complex to understand, or even human error missing the finding through a reporting snafu.  Stuff happens….no one is perfect, and we shouldn’t throw stones when our own houses have (despite our own protestations otherwise) just as much glass in them.

Ultimately, there are some practices that can help, but I will put a disclaimer here – these may already have been in place at Equifax.  Again, we are human, and systems/organizations are complex.  Complexity is hard to get right.  We also don’t know the full kill-chain in the Equifax scenario.  There may be more things that would help, or, for that matter, these things may have been in place and it still required even more complex efforts to address the root cause.  That said, here are some things I would suggest:

  • Try to understand every application in your environment and how they tie together.  Knowing the potential chains of connection can help you understand potential kill-chains.
  • Run red-team exercises with targets as goals (e.g. that big database with customer data, or the AD domain user list).  Let your red team think like an outsider with a fun goal, and flexibility of time to figure out how to get there.  The results will inform you.
  • Patch external systems with far more urgency than internal.  This seems pretty obvious, but sometimes how we represent vulnerabilities is too abstract.  I have found that using the language of FAIR has been an immense help.  Two factors I try to focus on: Exposure (what population of the world is it exposed to) and Skill/Effort to exploit (is it easy or hard); a rough sketch of this kind of prioritization follows this list.  Given the volume of opportunistic threat attempts (a.k.a. door-knob twisting), it makes sense to point to those values as key indicators of what will happen with exposed vulnerabilities.  I once pointed to the inordinate number of queries on a specific service port that a client used as proof that the “Internet knew they were there…” which leads to my last point…
  • Communicate in a language that people can understand, and in ways that make it real.  If you talk in CVSS scores, you need to go home.  Sorry, but to quote a favorite line of mine, it’s “Jet engine times peanut butter equals shiny.” (thank you Alex Hutton, your quote is always immortalized in that fan-boy t-shirt).  Put it in terms like: “The vulnerability is exposed to the Internet, there is nothing blocking or stopping anyone from accessing it, and the tools to exploit it are available in code distributed openly to anyone who has Metasploit (an open-source, freely available toolkit).  The attacker can then execute any command on your server that the attacker wants including getting full, unfettered access to that server, its data, and….”
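If it helps to make those two factors concrete, here is a rough sketch of that kind of prioritization in Python.  The category scales, the weights, and the placeholder identifiers are my own illustrative assumptions – this is not FAIR itself, and not anyone’s production scoring model.

```python
# Hypothetical prioritization sketch: rank findings by exposure and by the
# skill/effort needed to exploit them. Scales and sample data are made up.

EXPOSURE = {"internet": 3, "partner_network": 2, "internal_only": 1}
SKILL_TO_EXPLOIT = {"point_and_click": 3, "public_exploit_module": 2, "custom_research": 1}

def priority(finding):
    """Higher score = patch with more urgency."""
    return EXPOSURE[finding["exposure"]] * SKILL_TO_EXPLOIT[finding["skill"]]

findings = [
    {"id": "CVE-XXXX-0001 (web app)", "exposure": "internet", "skill": "public_exploit_module"},
    {"id": "CVE-XXXX-0002 (batch job)", "exposure": "internal_only", "skill": "custom_research"},
]

for f in sorted(findings, key=priority, reverse=True):
    print(f"{f['id']}: priority {priority(f)}")
```

The only point is that “exposed to the Internet” plus “an exploit module already exists” should sort to the top of the patch queue, ahead of internal findings that would take custom research to exploit.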

Those are things I coach my teams on.  Things we should look at and learn from.  Because we need to find data that helps us get better.

One last thing that chafed my hide…

Some people had the audacity to say “…who would hire a CISO with a college major in music…”

Setting aside the rather comical philosophical rant I could make based on UCI’s research on the effects of Mozart on students studying mathematics, I’d like to put forth my own experience.

I hold a Bachelor of Architecture (yes, buildings!) and have a minor in Music, and two years post-bachelors in organizational psychology.  I am a fairly accomplished security consultant (who used to hack and program much more than I do now) and occasional CISO.  My degree is not a disqualification from being a CISO, any more than a Music degree disqualifies the former CISO of Equifax from having had her job.  Simply put, “COMPUTER SCIENCE IS NOT A REQUISITE FOR BEING A CISO”.

I have interviewed dozens of CISOs around the world.  Nearly every one of them said they liked having liberal arts majors and people outside of Computer Science fields on their teams because they brought a very different insight and analysis to the team.  It is my opinion that by the time you have reached five (5) years of experience, your college education is largely immaterial.  There are data points (actual facts, such as what a petaflop is, or what the tensile strength of a certain grade of steel is) that college will inform you of, but college does not tell you which of these you will need, or how to weave them into the ever-changing dynamic of real life.  I call these skills the ability to analyze, synthesize, and respond – in other words, the act of design.

For the CISO of Equifax, it is likely that her skills in analysis and design, and her ability to communicate those thoughts to executives, were highly developed.  It is also likely that she had experience with software, with networks, and other technical areas.  I can relate because in my undergraduate education for Architecture we had to take a Pascal programming class in our freshman year.  We had to take a “Computers in Architecture” class.  What I did with it was unique, and I would suspect what the former CISO of Equifax did with her experiences was unique as well.  Putting a blanket assumption over anyone’s experience is ill-informed, and frankly, quite naive.  Have a chat with them.  Know their skills, learn what made them capable and skilled, or at least trusted at what they do.  Then critique what they have brought to the table *today* as a result of all of their experiences (school included, but also all their work since then).

So let’s all put down the pitchforks and the stones we were going to throw at someone else’s glass house, go back to tending our own, and study how someone else’s glass house got broken as a way to learn how to protect our own.  Because what I’m hearing so far isn’t helping; it is a lot of arm flapping from people far more interested in pointing at other people’s glass houses than in tending their own.


Shifting the Conversation (An SDLC Story)

I’d like to tell a story (a mostly real one) that can help you think through how to make your DevOps transition a little smoother, level set some over-exuberance, and ensure everyone feels they are getting a fair shake in a way that is collaborative.

I had a customer whose teams talked endlessly about how they wanted to get to DevOps, continuous integration, and high velocity of deployments.

The challenge is that they talked about DevOps as making deployment go faster.  They wanted rapid deployment, daily changes, and to push code to production every day.  As a result everyone latched onto what they thought it meant.  They talked about faster creation and deployment of new features.  They talked about end outcomes and the excitement of reaching that end goal of daily pushes.  Developers thought they had reached nirvana and could get all the code that was backlogged into production whenever they wanted it. Operations teams thought it meant that development would write cookbooks and test everything and they could focus on undoing technical debt, getting rid of crappy code, and making things work right in production.

Now these are all valid goals of DevOps.  They are all things we want to strive for.  But they were being framed in the legacy biases of Dev vs. Ops.  As an example, someone who is typically production and operations focused could quickly admonish the developers for being “unrealistic” in their expectation to jump straight to daily releases and rapid increases in speed and velocity.  You don’t jump straight from typing in code to putting it in production.  At least not in reality, and certainly not with quality.  While there is truth in these statements, any admonishment is going to be perceived by developers as a blocker to the speed and velocity they want.  And they’ll push back and say, “It’s all Operations being the blocker and slowing us down!”  And they’d be partially right.

So instead of admonishing the developers, we changed our language and effort to focus on one of the sources of our issues – environmental stability. The development and QA environments were unstable, systems were undersized to run any meaningful tests or even run the programs that run on production systems, and they did not have representative data to work with.

We started saying “we are going to give you stable Dev and test environments”, “we’re going to increase the speed and accuracy of testing”, “we’re going to get you good test data that is as close to current, complete production data as possible”, “we’re going to give you any data you need to identify, debug, analyze and respond to test and prod failures”.  This shifted the conversation from being adversarial (Devs pointing at Ops and saying they’re obstructionists) to being collaborative (ooooh, they’re going to give us shiny new toys!).

Ops focused on building a proper development and QA environment that could very accurately depict production.  We first sized resources (hardware, networks) that could support the effort.  This might seem “wasteful” – development doesn’t generate money, so why not go with leftover systems?  But the point I raised was that development was where the real work was taking place, where undersizing would be a mistake and lead to all the mistakes happening in production, where mistakes cost money.  Let’s instead make mistakes in an environment where they don’t cost the company money.  This doesn’t mean we spend exorbitantly, but that we shouldn’t be foolishly cheap.  Development/QA was built the way the teams wanted to build production.  It used the tools they wanted to use in production.  And we ignored any further work on production.  Yes, you heard that: we didn’t go after technical debt in production (unless it caused an outage).  Why?  Because there was no sense fixing things when we didn’t yet know whether those fixes were appropriate.  We needed to test the entire infrastructure, not just the code, as a development effort.  We needed to get code that was tested, optimized, and architected the best way through prototyping.  We needed to test building systems, deploying the operating system configured the way the development teams needed it, installing databases, and doing anything else required to give developers the environment they would expect to deploy on top of.  We needed to do this in an environment where we could make mistakes, learn from them, and correct them – all without impacting the generation of revenue.

What we accomplished was a double win.  We gave Developers the resources they needed to be productive.  We gave them tools, stability, data, and capacity to experiment.  We gave them testing tools…and the operations teams got to test right alongside them.  They got to build the tools, build the stability, learn how to handle the data, and build the capacity.  It was no longer pie in the sky but what each Dev team wanted and needed to go faster, and lessons on how Operations could clean up the technical debt in a way that mirrored the Developers’ intention.  It was about how we could positively influence the lives of our Dev teams, and Ops teams.


Random Favorite Quotes

The following are quotes or paraphrased notes taken from talks I have seen, podcasts, or general conversations with people I know.  If you feel you didn’t say these words, or wish to correct them, just contact me.

———

Microsoft gets it: you don’t teach programmers to be security people.  You do it for them (or make it hard for them to do it wrong). – Unknown

——–

“Don’t make people security experts, make it easy for people.  Get out of the echo chamber.  Make accessible the message that people care about.  People don’t want to think about security in what they do – they just want it to be there.”  – Josh Corman

——–

“Make things simple and they will do it.  Make it easier so people will use it.”  – Unknown

——–

“People respond to transparency and openness.  When issues are exposed – surfaced.” – Unknown SIRAcon 2016

——–

“We have to accept that it’s not our risk tolerance that matters as risk practitioners or security professionals.  It’s the person accountable for the risk at the end of the day. And until you overcome that you’re almost a barrier to what you’re trying to achieve.”  – Chris Hayes

——-

“We have to work with the biz to get them to understand the risk, and design with it (for better solutions). This is why security should have 2 parts (maybe 3). A) understand and design ways to mitigate the risk for the new, B) manage risk day to day, operations C) Analyze the performance and effectiveness over time”

———

Risk Manager’s job is helping the CSO sell security – sell the project.  Whether it’s a great big investment decision, or a small item – what are the attributes, the Risk and Opportunity measures (estimates and forces at play).  – Alex Hutton

———

Risk Management / Security Metrics is a Security Optimization Program


The Legacy of Controls (A DevOps Story)

I recently had a pair of encounters that both opened my eyes further to the causes of our current messy state of IT affairs and gave me hope for a better future.  In both cases the issue that came up was access to production environments.

In one particular case a user had their access removed – ostensibly on the grounds that their access violated “segregation of duties between development and production”.  There are numerous control frameworks that demand a segregation of production and development environments.  There are even others that say personnel should be fully segregated.  Let’s look at where this came from, and what the outcome has been:

  • Segregation of duties came about as a control for preventing one person from performing an end-to-end activity by introducing a check that the activity was appropriate.  It started largely as a financial control.  The most obvious is preventing an Accounts Payable clerk from inputting a purchase or payment request, and then processing that payment request themselves – all for the benefit of their own personal bank account.
  • This control was extended to IT – especially during the Sarbanes-Oxley days – as a way to ensure that a developer could not introduce ways into the programs to siphon off pennies all for the benefit of their own personal bank account.
  • This control was then extended further to include personnel access to anything in production because (again ostensibly) it was believed that sharing information about production would create knowledge that developers could exploit.

Let’s be clear.  Controls that prevent the theft of money (fraud) are important.  However, the lengths to which this control has been extended have become ludicrous.  What it has done is damage the workflow, trust, collaboration and functioning of the IT department and its ability to support the business needs of all other parts of the company.  How, you ask?

  • The segregation-of-duties controls are extended to deny developers visibility into the environment, which means their situational awareness of how their programs are running is removed.
  • They lose the belief that other groups trust them since their visibility is removed.  They pull up a wall.
  • They now view the operation of a program as “someone else’s problem since they don’t let us in”.  They pull that wall up higher.
  • They now throw programs over the wall – because “we’re not responsible for them in operations”.  Operations hates when this happens.
  • Myriad other controls flow in to stop-gap the problems that development teams don’t have the visibility to understand.  Testing requirements increase to address the problems since it is believed the problem is insufficient testing.  The testing becomes cumbersome, laborious, and yet largely ignorant of the problems that happen in production.
  • Costs go up, blame goes up, and failures happen…and the speed of work goes down.

Sounding familiar yet?

So how does this fit into my realization?  Access into production for developers is not a bad thing.  Developers should have visibility into application and system logs so they can view the reaction of their code in real world situations.  Developers should have the ability to see elements that are not sensitive.  They likely shouldn’t see sensitive data like payment cards, or encryption keys, but they should be able to see configuration files, data types and definitions.  Give developers what they need to create a feedback loop that is clear, unobstructed, but doesn’t violate regulations.
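As a rough illustration of that kind of filtered visibility, here is a minimal sketch that gives developers a read-only view of production logs with sensitive values masked first.  The redaction patterns and the sample log line are assumptions of mine for illustration; a real deployment would redact whatever your regulations and data classifications define as sensitive.

```python
# Hypothetical sketch: developers see full operational context from production
# logs, but card-number-like strings and secrets are masked before they do.
import re

REDACTIONS = [
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[REDACTED-PAN]"),                  # payment-card-like numbers
    (re.compile(r"(?i)(api[_-]?key|secret|password)\s*[=:]\s*\S+"), r"\1=[REDACTED]"),
]

def redact(line: str) -> str:
    for pattern, replacement in REDACTIONS:
        line = pattern.sub(replacement, line)
    return line

def developer_view(log_lines):
    """Read-only, redacted view of production logs for the development team."""
    return [redact(line) for line in log_lines]

print(developer_view(["payment ok card=4111 1111 1111 1111 password=hunter2"]))
```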

That being said, developers promoting code into production without checks and balances is a bad thing.  That I think we can agree on, but how does that fly with a DevOps mentality?  How about:

  • Changes can go into production once they go through an automated test suite.  They are only available for check-out when they meet the criteria of that automated test suite (a sketch of such a gate follows this list).
  • Production personnel (ops) can promote into production anything that has gone through the test suite and is available for check-out into production.
  • Development personnel can check problems and push fixes through this same chain.
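Here is a minimal sketch of that kind of gate, under my own assumptions about how a build and its test results might be represented.  It is meant only to show the control objective in code, not any particular CI/CD product.

```python
# Hypothetical promotion gate: nothing is eligible for check-out into production
# until it has passed the automated test suite, and promotion refuses anything
# that has not. Artifact structure and test callables are illustrative.

def passes_test_suite(artifact):
    """Stand-in for the automated test suite; True only if every test passes."""
    return all(test(artifact) for test in artifact["tests"])

def mark_eligible(artifact):
    artifact["eligible_for_production"] = passes_test_suite(artifact)
    return artifact["eligible_for_production"]

def promote(artifact, promoted_by):
    """Ops (or a dev pushing a fix) can promote, but only gated artifacts."""
    if not artifact.get("eligible_for_production"):
        raise PermissionError(f"{artifact['name']} has not passed the test suite")
    print(f"{promoted_by} promoted {artifact['name']} to production")

build = {"name": "billing-service 1.4.2", "tests": [lambda a: True, lambda a: True]}
if mark_eligible(build):
    promote(build, promoted_by="ops-oncall")
```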

If you notice, in the better world, developers have access to view and monitor the production environment – they have a feedback loop.  In the better world, developers still have to have their programs vetted by a testing procedure before changes are pushed to production.  The key control objective is still met – reduce the probability of fraud – but with controls that keep the collaboration, accountability, and teamwork in place.

Now in the two cases that I came across, both arrived at the same conclusion.  Both believed that visibility was important.  Both believed that it could be achieved.  The challenge was to educate those who have accepted the de facto standard of full segregation without understanding the original goal, and the impact of such a decision.


Velocity vs. Anti-Velocity

No, it’s not the new anti-matter, or maybe it is.

I’ve watched IT organizations now for 26 years.  The sadness I feel is that I’ve continuously seen the same downward spiral:

  • Failures are reacted to as only that – failures.  And failures cannot be tolerated.
  • Someone gets blamed because of course it is always a human error
  • Focus is put solely on slowing things down because if we slow down, of course things will get better (right?)  More time can be spent on analyzing every action to make sure it never happens again.
  • More steps are added to processes to make things less prone to failure – usually manual, because of course humans can imbue greater success and less failure into IT systems (remember the old joke: there are two problems in every computer problem, and the first is blaming the computer)
  • Changes, features and maintenance slow down because more manual intervention is required to get them in place
  • Management, sales, and all that revenue focus push for those changes, those features, those requirements and usually override the slowdown – but for features that are not ready and not tested, because we’re still working on the old changes from months prior
  • The CIO and IT Managers fight with sales and management because they are asking too much
  • CIO and IT Managers quit, or are fired because someone always loses that battle.

I dub this cycle anti-velocity.  It is the failure of IT organizations to create velocity.  Organizations reduce their movement to a crawl – frozen and frustrated, unable to move forward and certainly unable to move back.  They freeze themselves in fear, in a misguided notion of what it takes to correct failures.  “Slow things down so we can study them more.” “Find out who did it and fire their *ss!” “We never test this stuff enough – we need weeks to do this right.” “This requires full review of all test documentation during the Change Control Meeting with all documentation brought to the meeting where everyone must attend.”  (Yes, the last one is a real procedure for Change Control that I’ve encountered.)

Now, let’s talk about what builds velocity, or the ability to move forward at a constant and ever-growing speed.

  • Find the root cause – the honest root cause.  What really caused the failure?  Be honest and open about it.  Track the causes and know where they come from.  Look for patterns in the analyses.
  • Don’t believe that rote assumptions will tell you where to fix it – use the data you collect and the root cause analysis to really identify patterns.  I have watched companies assume that certain activities are the reason they have failures because they have been schooled to think this way – without ever questioning, “How would I know if my assumption was wrong, how could I test it?”
  • Do not go on a witch hunt, and do not go about the task of root cause analysis looking for someone to hang.  Remember that failures are where you learn where you need to improve.  If you fire someone, who says his replacement is going to be any better?
  •  Identify ways to prevent the failure that do not slow down the process.  Remember the death spiral of anti-velocity above?  Remember that you want to do everything to avoid it.  Slow downs are the beginning of that death spiral.
  • I’ll give one caveat allowing for slow-downs: if your slow down is temporary to get a correction to your process in place that allows you to go faster, be more accurate, and be more resilient, then it is okay…because you are gaining a longer term velocity for the sake of what I would call a hiccup.
  • Build solutions that eradicate the faults, the errors and anti-velocity in your environment.  You will learn over time how to do this – through a process of continuous improvement.
  • We want to eradicate the faults and bad practices, and build an environment that can sustain itself through human errors.  (Because let’s face it, we are the first problem in every computer problem.)

I become quite excited when I see velocity and a process that is fluid and working to speed itself.  The greatest excitement is that their change processes improve dramatically.  They process more changes, they do so with a higher success rate of implementation, and recover from failed implementations because every process has failures.  I have watched four different organizations recover from anti-velocity.  I have seen two who knew how to create velocity, and we were able to build powerful sets of controls that did nothing to slow that velocity.

Unfortunately I have seen just as many mired in their anti-velocity and unwilling to emerge.  They believe in big-bang changes – long cycles of review, backlogs of changes due to failures, blocking pre-requisite implementations stuck in review, and long cycles to get through a cumbersome process.

But then, from what I’ve heard, companies that have anti-velocity in IT, have this tendency to gather anti-velocity in their business as well….hmmmm…..


Loving the John In All of Us

I found myself in one of my least favorite moments a few weeks ago.  I was having a discussion about the build out of a new environment.  Someone brought up the subject of how people should access the environment and I started laying out my vision.  It included several specific and significantly restrictive controls and requirements.  I got through half of my list and the most senior person in the room jumped up and said they were unreasonable.  I almost had a knee-jerk reaction of defending them with a “You must do this to be secure!”, but stopped myself as I realized I fell into a trap I so often preach against.

What I had done was bury my head in the sand of a regulation, a checklist of requirements and let myself preach from what I thought security was, and not try to find what the business or the operational environment needed.  I was wrong.  Dead wrong.

Finger To The Forehead

I had the great fortune to be invited by Gene Kim to read early drafts of his book “The Phoenix Project”.  It is the story of one company’s attempt to overcome its obstacles and survive.  One of the characters in the book is named John.  He is in charge of Information Security at the company.  He carries a binder of controls, and is continuously focused on security because he needs to save the company from its security failures.  Except it isn’t security the company is struggling with – it is struggling with its own business and operational survival.  John however is not attuned to this.  He is focused on a checklist of requirements that are completely tangential to the company’s needs.  John has his own climactic scene where the antagonist of the story finally beats down John’s character with a finger to the forehead and a stern lecture that he had better find out what’s important to the company and get out of the way.  I laughed hard as I read this scene.  I laughed because I can think of all the times I deserved that finger in the forehead.  If you can’t think of the times you deserved that finger in your forehead you are deluding yourself.

Why Do We Act This Way

There is probably a litany of reasons why we tend to operate this way.  The one reason that always seems to make the most sense to me is the simple constraint in our ability to operate outside of what we know.  We use the skills and knowledge (cognitive domain, awareness, call it what you will) we know best, what we have been schooled in, read and heard.  I have been, like many of us in information security, fed lists of controls, told that things had to be a certain way, and that breaches, like burglary or murder, carried huge consequences.  I was taught responses to situations from the perspective of security – a professional deference – because that was my job and task.

And we are not alone.  Others do the same within their profession.  There are people in marketing who only see the world through a marketing perspective; or sales; or financial; and the list goes on.  Even our own children see the world from the limits of what they know and what they’ve been taught.  If we all knew the bigger picture we likely wouldn’t have had the embarrassing stories from our high school and college years, and use the phrase “If I only knew then what I know now.”  We all have a bit of John in us – even when we consider ourselves enlightened.

Learn To Embrace the John in All of Us

We all have our constraints so the best way to overcome them is to first accept that we have them.  Acknowledge them.  Admit that many of the things that we discuss, propose, and recommend to people come from our perspective on the problem.  This suddenly makes the problem have multiple angles that it can be viewed from.  You may not be able to see all of them, but you certainly can ask someone else to tell you how it looks from their angle.

Ask questions.  One of the first things I do when I find myself in the situation of being dead wrong is to set aside all my security concerns, suspend my preconceptions, pretend to be a complete outsider, and ask what is important to the business – what is the real business goal and objective.  Things like how it creates revenue, how does it help the company, and what would happen if it was to stop working.  The perspective is suddenly very different than when I look at it as a security person.

Then, I take one of my favorite steps.  I create a solution that focuses on achieving the business goal, and that gives back just as much as it takes away.  I have a rule with my teams, “For every control you put in place, you must give something back to the people affected by the control.”  This creates some shock, some amusement, and then very puzzled looks.  Several people have asked me why I do this.  Some have resisted the rule, but I rarely waver.  This rule forces my teams to focus on and understand the impact of what they are doing when they put controls, policies, rules or anything else in place that is restrictive.  And then it forces them to think of how they can make it less restrictive, or provide some benefit that is in line with the original business objectives and goals.  It makes them understand what the affected people need to do their jobs better and what really matters to them.  You also create some raving fans when they realize you understand their needs.

And lastly, and most importantly, recognize the Johns in all of us – in everyone around you.  Encourage them to do the same as you – to learn to accept their inner John, to explore and ask questions, and to look from different perspectives.  As role models we can develop the patterns in others and they will begin to mirror our behavior.  Poke people in the forehead once in a while, and remind them to learn what is really important, and listen a little better.


The Quantum Vulnerability Tunneling Effect

I know I had promised to talk about how to implement a risk management program in your small organization, but bear with me for a blog (or two).  Given that my brain has been wrapping itself carefully around risk management for the last few weeks, I have found myself revisiting ideas from my past.  One particular incident this week reminded me of a subject that I’ve talked and written about before.

One of the individuals on my client’s InfoSec team is responsible for vulnerability scanning and management.  He’s quite talented, has good insight on the vulnerabilities, but like many others in InfoSec, he suffers from the blinding effects of Quantum Vulnerability Tunneling.

“The What?” you ask.

Yes, you heard me, Quantum Vulnerability Tunneling Effect.  For those of you not familiar with physics, this is akin to a process whereby a particle can bypass barriers that it should not normally be able to surmount.  So what does that have to do with vulnerabilities?

The barrier we place to separate the vulnerabilities we will address from those we will accept is typically an arbitrary line that says, “We’ll address fives and fours, but we’re going to let threes, twos and ones go for now.”  This is our barrier, and heaven help the vulnerability that thinks it is going to make its way over that line.  Except….

Did you ever do a vulnerability scan, read through the findings, and find yourself stopping on one vulnerability in particular?  You see it and the thought runs through your head, “Oh, Scheiße!”  Suddenly the world around you stops and you focus on the vulnerability.  You know how it can be exploited.  You’ve read about it in magazines, and you’ve even done some of the necessary tricks yourself in a lab using your kit of tools.  In this case the individual at my client’s site had found a vulnerability that had been classified by the vulnerability scanner as just below the event horizon of “critical vulnerabilities”.

He saw this and upon looking at it had his “Oh, Scheiße!” moment.  He went to his manager and presented his case for why this vulnerability should be remediated.  Immediately.  He proceeded in a very animated fashion to demonstrate with his hands and his words how this vulnerability could be exploited and how dangerous it was.  His manager had some good replies to his demand, but the individual walked away unsatisfied – probably because the replies talked about business impact and other metrics that did not have meaning to a vulnerability guru.  When all you have is a vulnerability scanner everything looks like a…

So I sat him down and had a little chat so he could consider the same answer from a different perspective.  I didn’t focus on the impact to the business operations since I saw that it was not clicking for him.  What I did was ask him to do a risk assessment of the vulnerability with me:

I asked, “What is the population of threat actors?”  We had already had a chat within the group agreeing that we would classify threat actors by loose groups of individuals so we could get groupings of actors.  We agreed on classifications of Universe/Internet, Company Internal, (specific) Department, Local Machine Users, Administrators, and No One.  He replied that it was *anyone* internal (said with animation).

I asked him, “What level of difficulty is the vulnerability, keeping in mind commonly known mitigating controls in our environment?”  He commented that it was a module in Metasploit.  Ah, so it was below HD Moore’s Line.  I asked him how certain simple controls we had in place would mitigate it.  His reply: it would make it pretty difficult but not impossible, and it had been documented.  So we agreed to put it right at HD Moore’s Line. (We haven’t really qualitatively classified difficulty yet, we’re still working on that definition, but HD Moore’s Line is the start.)

I asked, “What is the frequency of attempts to exploit this vulnerability?”  We use attempts since there is rarely good data on actual breach counts, but with a good honey-pot we’ve found we can estimate the frequency of attempts pretty well.  I’m really warming up to the importance of a honey-pot in a company’s environment.  The data you can collect!  And it makes frequency something you can lump into categories.  In this case we didn’t have any data at all since no one would set up an internal honey-pot, so we deferred to Threat Actors as a reference point.

I asked, “What is the value of the assets that are vulnerable?”  The individual responded, “All things on the computers!”  I whittled him down to some tangible types of data.

We merged all of his answers into a sentence that he could say.

And then I asked the magic questions.

“How many vulnerabilities have we identified in the environment?”

He gave me a number.

“Using the same risk measures, how many of these vulnerabilities are a greater risk than the one you just pointed out to your manager?”

Silence for a moment, and a sheepish smile came across his face, and he said, “I get it.”

I have seen this situation many times before.  In the moment of discovery we get too close to a vulnerability or a threat, and we obsess over it.  We study it intently and learn everything we can about how to leverage it, how it can work.  It becomes real because we can understand it and perform at least portions of the attack ourselves.  We focus on it because it is tangible and at the forefront of our mind.  We become obsessed and let that item tunnel its way beyond any barriers of urgency to place itself at the front of our priorities.  The Quantum Vulnerability Tunneling Effect.  We’ve all fallen prey to it.  We’ve all tunneled our issues to the forefront out of fear and uncertainty.  That’s why I liked using the risk assessment.  It required that he re-examine his assumption that this vulnerability was critical, and test it with facts through a risk assessment.  It reset the perspective of the vulnerability in relation to everything else it should be considered with.  He wasn’t happy that the vulnerability was going to be accepted as a risk, but he also recognized where it belonged in the universe of risks.  He could look at the forest and see that it was filled with trees, and some were more worth harvesting than others.
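If you like seeing that re-examination in concrete terms, here is a minimal sketch of the comparison behind the question he answered, using the kind of qualitative categories we had agreed on.  The category orderings and the sample findings are illustrative assumptions of mine, not the client’s actual data.

```python
# Hypothetical sketch: given coarse qualitative categories, count how many other
# findings rank higher than the one that caused the tunneling moment.
# Orderings (worst last) and sample data are illustrative assumptions.

THREAT_POPULATION = ["no_one", "administrators", "local_users", "department",
                     "company_internal", "internet"]
DIFFICULTY = ["ninja_skills", "above_hd_moores_line", "at_hd_moores_line",
              "below_hd_moores_line"]  # later entries are easier to exploit

def rank(finding):
    """Coarse ordering: larger exposed population first, then ease of exploitation."""
    return (THREAT_POPULATION.index(finding["population"]),
            DIFFICULTY.index(finding["difficulty"]))

findings = [
    {"id": "the scary one",       "population": "company_internal", "difficulty": "at_hd_moores_line"},
    {"id": "forgotten web app",   "population": "internet",         "difficulty": "below_hd_moores_line"},
    {"id": "lab-only crypto bug", "population": "local_users",      "difficulty": "ninja_skills"},
]

scary = findings[0]
higher = [f["id"] for f in findings if rank(f) > rank(scary)]
print(f"{len(higher)} finding(s) rank higher than '{scary['id']}': {higher}")
```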

I used to do a similar exercise with my team when I was leading security.  We did an in-house risk assessment.  I made the team list all of their perceived priorities regardless of how big or small, how insane or sane, and regardless of whether they thought them urgent or not.  I wanted them to know that their ideas and concerns were going to be considered.  We then went through a highly interactive risk analysis session that resulted in a list of priorities based on those ideas.  We put the top ten that we felt we could accomplish during the year on a board at my desk, and the remainder went in a book on my desk so we could say they never got lost.

Someone on my team would invariably come to my desk, hair on fire, to say they had a risk that *had* to be taken care of right away.  My response was cool and calm.  I would simply ask, “Does it require greater attention than any of the items on that board?”  This would stop them in their tracks and make them think.  They would look at the board, think for a few minutes and respond with a “Yes”, or a “No”.  Usually it was a “No”.  If it was a No, we would pull out my book and write down the issue.  If it was a Yes, I would have them write it on the board where they thought it should go, and put their name next to it.  They could claim the success, or suffer the ridicule from our team if they were way off.  Priorities and perspective were maintained.

The Quantum Vulnerability Tunneling Effect was avoided, we stayed calm and on course, and we could react well when a real emergency came along.

But those are just the effects you get when you think in terms of risk.


Accuracy vs. Precision – My Risk Epiphany

Did you ever have a moment where a concept you have never been able to figure out or understand suddenly clicks in your head?  I had long struggled to understand a key element of Risk Management – how to build a risk assessment model that includes likelihood.  And a strange confluence of circumstances made my light bulb go off.

Now before I go into the story, let’s cover a bit of background on this.  Risk Management is a field that I admire, and consider critical to any organization, its operations, and especially important to my field, which is Information Security.  Being able to communicate risk to an organization using tangible descriptions is critical.  But I could never quite seem to do it with the precision that I felt necessary.

I always stumbled on the issue of likelihood.  I could estimate with surprising ease the cost of an incident.  I have mastered the process of asking key business groups about the cost of failure and know how to test their attributions of cost.  I have been extremely comfortable identifying the costs of an incident – the cost of lost productivity, the cost of lost sales, the cost of lost intellectual property – and “range of losses” was a concept I could easily make tangible.  For retail companies I could estimate a range of lost revenues by looking at highest-day revenues (Black Friday) and lowest-day revenues.  That became my range.  I’d find the median and we’d have three values to work with.  I would also be able to factor in idle time of workers, unused infrastructure and equipment, and compute these down to the last dollar if I cared to be that detailed (which I usually didn’t – getting to the nearest $100,000 was more than enough for these companies).  I could even sit with a marketing team and estimate lost goodwill based on the cost of advertising to regain those lost customers, and revenue downturns due to those lost customers.
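As a toy illustration of that range-building, here is a minimal sketch with made-up numbers – the revenue figures and the outage duration are purely illustrative assumptions.

```python
# Hypothetical sketch: derive a low/median/high range for lost revenue from
# daily revenue history, as described above. All figures are made up.
import statistics

daily_revenue = [180_000, 220_000, 250_000, 310_000, 1_400_000]  # last entry ~ Black Friday

low, high = min(daily_revenue), max(daily_revenue)
mid = statistics.median(daily_revenue)

outage_days = 1.5  # assumed duration of the outage
low_loss, mid_loss, high_loss = (v * outage_days for v in (low, mid, high))
print(f"Estimated lost revenue: low {low_loss:,.0f}, median {mid_loss:,.0f}, high {high_loss:,.0f}")
```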

But I could never feel comfortable with creating a picture of the likelihood that some event would occur.

Why? I wanted it to be perfect.  I wanted no one to question the numbers – they would be facts, let the chips fall where they may.  I wanted people to know in absolutes, with absolute precision.  Except there is no such thing as an absolute – especially in risk. The light bulb that went off in my head was the light bulb of “imperfect knowledge”.  Risk is an estimate of possible outcomes.  It is about being accurate, not about being precise.  Bad risk analysis is when you pretend you can give absolutes, or when you make no attempt to find the range of outcomes that are “more likely”.  Do I have you scratching your head yet?  Good.

Let me give you an analogy to illustrate what I mean by accuracy and precision.  In a battle, accuracy would be knowing where your enemy is attacking from, or even where they are most likely attacking from.  If you find out that your attacker has the capability to scale that 3,000-foot cliff you discounted due to its level of difficulty, you would add it back in, because it gives a more accurate picture of all the possible ways your enemy might attack you.  That accuracy is accounting for all possible outcomes.  Precision is knowing exactly where to aim your cannon so that it hits your enemy at an exact spot (biggest tank, largest warship, best group of archers).  Accuracy won’t help you aim the cannon.  Accuracy will tell you where to put the cannon and what range of fire it will need.  Precision is about aiming your cannon, but it will fall short of telling you where to position your entire army.

The problem I have struggled with in risk analysis is that I wanted precision – and that made me struggle with determining likelihood.  The confluence of ideas hit me two days ago.  Somehow the idea of Alex Hutton’s and Josh Corman’s “HDMoore’s Law” (an InfoSec bastardization of the “Mendoza Line”) combined with having just chatted quickly about CVSS scores and the idea of “difficulty” associated with vulnerability scores made something click.  That, and a peek at a risk analysis methodology that didn’t try to make likelihood a precise number.  Instead it asked a simple question – describe the skill required to achieve the event, and provide a range of frequency at which the event would occur.  Bing!  I could work with descriptions, and so could executives!  If you try to arrive at a precise number, executives who play with numbers all day long will probably rip it apart.  If you give them probable ranges and descriptions of the likelihood, they get the information they need to make their decision.  It is imperfect knowledge.  And executives make decisions using imperfect knowledge every day.  The more accurate the imperfect knowledge is, the more comfortable the executive will feel making the decision.  And for an executive, the easier it is to understand the imperfect knowledge you give him, the more he will appreciate your message.

So what did my epiphany look like?

First, I realized likelihood is a balance of understanding the level of difficulty for an event to occur and its frequency.  Level of difficulty is really about the level of effort or confluence of circumstances it would require to bring about an event.  Take a vulnerability (please, take them all).  How much skill would a person require to exploit a given vulnerability?  Is the exploit something that even the average person could pull off (an open unauthenticated file share), something that is available in Metasploit, or is it a rare, highly complex attack requiring unknown tools and ninja skills?  This is not to say that the exploit cannot be done – it is determining whether the population that can perform the exploit is smaller than the universe, and hence the likelihood reduced.  The difficulty of having a tsunami hit the eastern coast of the United States is based on the rarity of unstable geographic features in the Atlantic Ocean that would generate one.  The Pacific Ocean on the other hand has a large population of unstable areas that can generate a tsunami.  The skill required to exploit an unauthenticated file share or FTP server is far different than the skill to decrypt AES or to play spoofed man-in-the-middle attacks against SSL.  I can already see the binary technologists fuming – “but, but, people can do it!”  Sure they can.  Any attack that has been published can be done – and there are many more that haven’t even been made public yet that also can be done.  A business cannot afford to block against everything, much like we cannot stop every car thief.  What we can do is avoid the stupid things, the easy things, and more importantly – the most likely things.  This is a calculated defense – choose those things that are more likely to occur until you run out of reasonable money to stop them.

Then I took an old concept I had around frequency.  For me there are multiple sources I can use to extrapolate frequency.  Courtesy of the three different highly data-driven analyses of breaches produced by the major forensics organizations, we can begin to estimate the frequency of various types of attacks.  Data repositories like VERIS, the various incident reports and general research of the news can give us a decent picture of how often various breach types occur.  A great illustration of this is Jay Jacobs’ research on the Verizon DBIR data looking for the number of times that encryption was broken in the breaches researched.  The data set was a grandiose zero (0).  Frequency can be safely ruled “low”.
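To show what lumping frequency into categories might look like, here is a minimal sketch.  The thresholds and the sample size are illustrative assumptions of mine, not figures from the DBIR.

```python
# Hypothetical sketch: map the number of observed incidents of a given breach
# type (from external reports or your own data) to a coarse frequency category.
# Thresholds and the sample size are illustrative assumptions.

def frequency_category(observed_incidents, sample_size):
    if observed_incidents == 0:
        return "low (none observed in the data set)"
    rate = observed_incidents / sample_size
    if rate < 0.01:
        return "low"
    if rate < 0.10:
        return "moderate"
    return "high"

# e.g. encryption broken in 0 of an assumed 850 analyzed breaches -> "low"
print(frequency_category(0, 850))
```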

Suddenly I was able to walk through a vulnerability report I had been handed and put together a quick risk analysis.  I asked five questions:

  1. What assets are on the affected systems?  (for example email, payment card data, PII, intellectual property…)
  2. What population of people would have access to directly exploit this vulnerability? (Internal employees, administrators, or anyone on the Internet)
  3. What is the level of difficulty in exploiting this vulnerability? (CVSS provides a numerical scale which I was more than happy to defer to, and in some cases where the general user population could exploit it, we created a “-1” category)
  4. What is the frequency that this type of exploit has occurred elsewhere, and what have we seen in our organization? (research into DBIR, asking security team at client site)
  5. What controls are in place that would mitigate the ability of someone to exploit this vulnerability? (such as a firewall blocking access to it, or user authentication, application white-listing etc.)

I took all the data that was collected and turned the risk into a sentence that read something like this:

“Examining the risk of being able to see information sent in encrypted communications:  Anyone on the Internet would have access to attempt to exploit this, however a very high level of competency and skill is needed to decrypt the communications.  The frequency that this type of attack occurs is very low (typically done in research or government environments with mad skills, and lots of money).  There are no additional controls in place that would mitigate this risk.”
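A minimal sketch of how those five answers might be assembled into that kind of sentence follows.  The field names and the canned phrasing are my own assumptions; the point is simply that the output is a narrative, not a score.

```python
# Hypothetical sketch: compose answers to the five questions into a readable
# risk statement like the one quoted above. Field names and wording are assumptions.

def risk_sentence(a):
    return (
        f"Examining the risk of {a['scenario']}: {a['population']} would have access "
        f"to attempt to exploit this, and the skill required is {a['difficulty']}. "
        f"The frequency that this type of attack occurs is {a['frequency']}. "
        f"Mitigating controls in place: {a['controls']}. Assets at risk: {a['assets']}."
    )

print(risk_sentence({
    "scenario": "being able to see information sent in encrypted communications",
    "population": "anyone on the Internet",
    "difficulty": "very high (decrypting the traffic takes rare skill and resources)",
    "frequency": "very low",
    "controls": "none beyond the encryption itself",
    "assets": "payment card data and customer PII",
}))
```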

The last glue that fit this all together was making all of your assumptions about the risk explicit.  I’ve talked extensively about the value of being explicit – it makes the data easier to examine, challenge, correct, and make even better.  The result is a more accurate risk assessment based on more accurate data.

The true detractors of Risk Management would point out that none of this is perfect or certain.  They would be correct, but then nothing in life is certain.  We tend to want to be perfect, to be right and not wrong, because we fear being wrong.  The sources of this tendency are boundless, but a bit of it, I suspect, is our high level of exposure to the highly precise and binary world of computers; as a result we look to make the rest of the world much like this model that we idealize.  Ones or zeros, exact probabilities, exact measures of cost… but life outside of the artificial construct of computers is not like that.  It is full of uncertainty and non-binary answers.  Those subtleties are what Risk Management can capture and help us understand in a way that is closer to our binary desires.  But never completely.  What Risk Management does do is give us better accuracy – so we can make more accurate decisions and be less erroneous.

So step away from the perfection.  Give your team a view of the risk that is in terms they understand.  You might just find that giving them a description, a narrative, and ranges to draw from is much more accurate than anything they’ve used in the past.  But whatever you do – do not aim for precision.  Aim for accuracy, even if that means the guess is less precise.  Your management wants the accuracy.  Just like their profit forecasts, your estimates will never be precise, but data can make them more accurate.

Now you might still have a question of “so how do I quantify this?” Ah, that’s for next time…


BSides San Francisco Presentation

So I did a little talk at BSides San Francisco 2012.  It’s a prequel to my book “So You Want to Be the CSO…”  The talk was recorded so you can view it at your leisure.  Just pity the poor guy in the front row who I accused of being “sexy”.

A BrightTALK Channel


#SecBiz or The Better Answer to Martin’s Question

I had the good fortune of a long drive (12 hours to be exact) which allowed me time to catch up on four months of backlogged Martin McKeay’s Network Security Podcasts.  My fortune improved when I listened to the June 7th 2011 edition.  I hadn’t known about the #SecBiz thread on Twitter, and I am sorry I missed it when it started.   The discussion on the Podcast was fantastic.  The identification of the issues, the perspectives offered, ideas on distribution of duties and the consensus that everyone had about the need was spot on.  The stories of employees having to work in every part of an organization are excellent, and a great insight.  A well-placed CEO I know of did the same in his first month after being hired and created a significant level of trust across the organization.

If you haven’t heard the Podcast, please do so.  It is all excellent.  Well, except for the last 18:27, after Martin asks the question: “What can we do…”  To me, the answers at that point fell flat and missed an opportunity.  So many great ideas that helped bridge the gap were provided before the question that the opportunity to expand on them was missed.  So I’ve decided to provide some answers, and make up for my lost time on the #SecBiz discussion.  This blog post will be a bit fractured and piece-meal, but the intent should come through.  The thoughts are all part of lectures I’ve given since Shakacon in 2007, ongoing research, and a book I’m writing based on  my research and case study collection.

First I’d like to point out something I think is very important to the discussion.  Years ago a wise CIO taught me to avoid the great mistake of referring to the non-IT portion of the company as “The Business”.  IT and InfoSec are part of the Business, and together with the other parts of a business create solutions and better the organization as a whole.  Referring to “The Business” separate from IT perpetuates the “Them” vs. “Us” we are trying to overcome.  Create new language, since our language is a reflection of our thoughts and intentions.  Let us re-arrange our intention and build the first link between ourselves and the other parts of the business in our mind.

The Goal

The goal the #SecBiz thread shoots for is mutual appreciation between InfoSec and the rest of the business.  The goal is noble; however, too often we look at it in InfoSec or technical terms.  The answers to Martin’s question highlighted this for me.  The answers talked about how to structure InfoSec, how technical knowledge is key, and how teams need to take responsibility.

But the business will never understand the depth of the technical issues in InfoSec, just as we will never understand the intricacies of finance and accounting.  Both sides can communicate high-level concepts, but the technical details are why we have "specialties".  Generalists who can also dive deep are rare.  We must stop trying to make everyone outside InfoSec an expert.  The answer we need to focus on instead is how to build collaboration and a common base of understanding regarding our goals and our priorities.  To do this we need to think more deeply about psychology, or in my favorite parlance, organizational psychology.

Understanding Motivation and Perspective

Each of us has motivations: the things we value and strive toward in pursuit of our goals.  These include the objectives we want to achieve (both long term and short term) as well as the way we act to support those values.  Every business group (which includes IT) is made up of individuals with their own motivations and values.  There are often commonalities, such as recognition and significance, certainty, and personal connection, but with individual variations in priority and manifestation.  A CFO and the finance group are, from a business perspective, focused on ensuring the financials are accurate and timely, and on supporting profitability through the appropriate management of money in all its forms.  Personal motivations are layered on top of this, such as being recognized for your work and maintaining personal relationships.

This might seem tangential, but I assure you it is not.  Suppose the InfoSec group comes along and tells the finance group that they cannot implement software that, in the eyes of the CFO and the finance team, helps them achieve their goals faster and better, with the potential to be recognized for improving their group.  How do you think that will go over?  Think.  You just told a group that they cannot pursue things they value.  Their perspective is shaped by their motivations.  They do not see your perspective because it is not part of their goals or values.

Until we understand the motivations, goals, and values of the various groups within our businesses, we cannot effectively address security in those groups.  We must apply security with their motivations in mind.  If we derail their motivations, we will fail.  If we align with their motivations, or show how our goals and values align with theirs, we will create wins, and the understanding we are looking for.

[These ideas have been discussed in academic circles through Maslow's theories, the work of Chris Argyris, and cognitive psychology, and adapted in more contemporary discussions of motivation in the business and personal-development work of Stephen Covey, Jack Canfield, and Tony Robbins.]

Building Collaboration – Towards Empathy

I have long held that collaboration is the way to create buy-in and understanding, and I suspect few would disagree.  My definition of collaboration is bi-directional actions and behaviors that include honest communication, active listening, and empathy.  The last of these is what I consider the critical end-game you need to achieve.  I do not advocate outright sympathy, but rather an understanding and appreciation of another person's thoughts, concerns, challenges, and ultimately their motivation.  As discussed above, understanding a person's or group's motivation allows us to align with it, or at least to discuss issues in relation to it.

Collaboration is not built by re-inventing how we shuffle InfoSec groups about, but by building new paths and methods of communication.  Achieving this requires that we in InfoSec be willing to learn and to lead in building them.  Either side can initiate and lead this effort, but since we are the ones raising the issue and calling out for greater recognition, let us take the lead in building that bridge.  Let us model the methods so we can all benefit.

Modeling collaboration starts with reaching out to open lines of communication.  The techniques for this include asking questions first rather than trying to "tell" someone things.  Ask in order to understand: it allows the other party to feel listened to and lets you understand their frame of reference.  We all value being listened to.  Be the bigger person and listen to those outside of IT and InfoSec so you can understand their business, their fears, their needs, and their motivation.

The second step in opening lines of communication is active listening: being able to restate what the other party has said to demonstrate your understanding of it.  This earns respect from the other party, as they feel even more strongly that you are attempting to understand them.

The third step is active and sincere empathy.  Empathy is the ability to understand the other party's views, values, and justifications for what they do; you can see their frame of reference.  Do not abuse this understanding, since doing so can dismantle and shatter the trust you have built with the other party.

Lastly, use the knowledge you have gained to relate your position and view to their view of the world, their goals, and their motivations.  When you have tied your objectives to their motivations, you have created the foundation for collaboration.  They now see the value in understanding your goals because your goals align with theirs.  Your goals get achieved because they are aligned with the other party's.  We call this a win-win: both sides get their needs met.

Some of the ideas that have come out of my case studies:

Business Impact Assessments: Dragging the Information Security team around to do Business Impact Assessments with each group within the business (sales, accounting, logistics…).  The questions asked were "What is the most important process in your group?", "What keeps you up at night?", and "What processes or systems would cause you the most impact if they were to fail?"  The result was a very personal discussion about what each group cared about, what their priorities were, and where they wanted attention given.  By doing this under the guise of a BIA, we were able to better understand what each group cared about and what was most valuable to them.  We also came to understand the operational processes of the organization in great detail.  Think of it as a business-mapping or process-flow exercise.  We listened, we restated what we heard to make sure we heard it correctly, and we made sure we identified their biggest processes and deepest values.  The result was much more than knowledge of our own business; it built camaraderie.  The business groups felt we cared about them because we listened and showed empathy for their needs and goals.  Now when we discussed security we had two things working in our favor: a knowledge of the entire business that we could use in determining risks and where to apply useful controls, and an audience who felt respected and found it acceptable to show us respect in return.

Security or Risk Council: An internal "governance" group, not unlike an IT governance structure that reviews business and IT objectives and budgets to make sure IT aligns with the priorities and objectives of the entire organization.  The council is made up of leadership from all business groups, who are free to share their concerns about security and risk management.  Monthly meetings are held, and all domains of Information Security are discussed, but with a focus first on areas outside of the IT and Information Security groups (such as HR background checks, concerns about fraud and loss in distribution, or the safety of workers in the workplace).  By first making the council about their security concerns, the participants felt it was a collaborative effort and that their views were valued.  This example worked well in several companies.

Risk Management and Business Process Discovery: Businesses understand risk management.  Banks and insurance companies, for obvious reasons, are particularly adept at treating risk management and process evaluation as valuable and integral to the organization.  While listening to Edition 10 of the Risk Hose Podcast, I rediscovered risk management, in a process-oriented sense, as a reflection of the ideas discussed above.  Risk management teams explore the business processes with the business, understand them, evaluate the risk, and decide with the business what to focus on.  An InfoSec team undertaking a business process discovery can come to understand the business in the same way.  By framing the analysis in risk management terms, you increase the likelihood that other areas of the business will relate to the findings.

Distributing Responsibility for Security: One of the conversations in the Podcast revolved around Security Operations.  I'm going to go down this rabbit hole even though, on many levels, it's not a direct #SecBiz discussion.  It can, however, serve as a model of how to collaborate on security.

I prefer to demarcate Security Operations into two groups:

a) the acts of providing preventative security functions, such as anti-virus, patching, firewalls, and system configuration (for security).

b) the acts of providing detective security functions, such as security incident and event monitoring, detection of unauthorized system and file changes, and validation of controls (such as reviewing system configuration standards or firewall rules for approval).  I also sometimes refer to these as segregation-of-duties functions, since they are checks against potential inappropriate activities and control failures.

I divide it this way because I prefer to assign responsibility for the preventative functions to the administrative groups that are usually tied to the systems and devices (e.g., configuration standards and patching to each system group, firewalls to the network team, etc.).  This takes security from being an InfoSec-only function and makes it part of the job description for groups outside of InfoSec.  They become accountable for security, and it begins to be part of their culture and their thinking.  Holding them accountable is the second part: the detective controls, which are assigned to the InfoSec group.  The outcome of these role designations is that conversations about security spread wider than just the InfoSec group, and control designs become a collaborative effort.
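As a purely illustrative sketch, with entirely hypothetical team and control names, the split described above could be captured as a simple control inventory, plus a check that preventative functions are owned by the administering groups and detective functions by InfoSec:

```python
# Hypothetical control inventory illustrating the preventative/detective split.
controls = [
    {"name": "OS patching",               "type": "preventative", "owner": "Server Team"},
    {"name": "Firewall rule changes",     "type": "preventative", "owner": "Network Team"},
    {"name": "Anti-virus deployment",     "type": "preventative", "owner": "Desktop Team"},
    {"name": "SIEM monitoring",           "type": "detective",    "owner": "InfoSec"},
    {"name": "Firewall rule review",      "type": "detective",    "owner": "InfoSec"},
    {"name": "File integrity monitoring", "type": "detective",    "owner": "InfoSec"},
]

def check_separation(controls):
    """Flag any control whose ownership breaks the preventative/detective split."""
    problems = []
    for c in controls:
        if c["type"] == "detective" and c["owner"] != "InfoSec":
            problems.append(f"{c['name']}: detective control not owned by InfoSec")
        if c["type"] == "preventative" and c["owner"] == "InfoSec":
            problems.append(f"{c['name']}: preventative control owned by InfoSec")
    return problems

for line in check_separation(controls) or ["No separation-of-duties conflicts found."]:
    print(line)
```

The script itself is beside the point; what matters is that every function has a named owner, and that the detective checks belong to a different group than the functions they verify.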

What does Collaboration Achieve?

I conducted a survey in the summer of 2007.  Over 100 companies responded, and while the survey was highly unscientific, the results were clear.  The survey asked about the perceived acceptance of the company's Information Security Policies, and which parts of the business were involved in creating those policies.  Unsurprisingly, of the organizations that said they developed their InfoSec policies with the business, 80% said their policies were well accepted, and the remaining 20% felt the policies were accepted and challenged, but not outright rejected.  Of the organizations that developed their policies just within IT or the InfoSec group, only 36% felt their policies were well accepted.

The Quotes

I’m going to leave you with two quotes since they both contribute some insight:

Chris Hayes: "We have to accept that it's not our risk tolerance that matters as risk practitioners or security professionals.  It's the person accountable for the risk at the end of the day.  And until you overcome that, you're almost a barrier to what you're trying to achieve."

We have to work with the business to get them to understand the risk, and design with it (for better solutions).  In order to do this we need to understand what the business is about in the first place.  And then we need to demonstrate we understand it, with empathy for their motivation.

Ultimately InfoSec is juggling risk and business goals, or as @shitmyCSOsays quoted: “Security is about eliminating risk.  Business is about taking risk to make money.  See how they are a perfect match?”
