The Fear Mongers

“APT is your biggest risk.”

“Public cloud cannot be secure, just look at CapitalOne.”

“Insiders are your biggest threat.”

“You must have a SIEM if you are going to pass your SOX audits!”

Bah, humbug. Fear, Uncertainty, and Doubt (or FUD as we sometimes refer to it).

I find that most who resort to this pattern share particular characteristics.

They Haven’t Done It

Some people have never done security, now or in their past. However, they do read the newspaper, or watch TV (okay, who watches TV anymore…YouTube!), or they are handed a sales script. Someone feeds them a story and they take it at face value. They repeat it. They preach it everywhere because they’ve been convinced through some disassociated argument. Or their livelihood depends upon it. You don’t earn money if your products don’t sell. Think of them loosely as the carpetbaggers. That may be a bit harsh, but security is not in their blood, or under their fingernails, and they certainly don’t have any scars to show for it.

Those who have this characteristic sit outside of experience. They haven’t seen or experienced the realities, nor do they have the insights. Note: I am willing to move those who do research out of this group, because they can at least present some knowledge based on data, but those are few and far between, and they generally don’t use FUD. The rest are hard to convince, and are usually best just dismissed. If you find yourself 15 minutes into an argument with one of them over whether you really need a SIEM to pass SOX, and they won’t budge, you have wasted 14 minutes and 59 seconds of your time. Well, maybe that estimate is a second short. Although kudos to those who spend the time to educate, and to the 2% of Those Who Don’t Do who actually listen and understand.

They Don’t Do It

This characteristic has more to do with links by association. You find that your dinner conversation about being “in IT” means you must be able to fix someone’s home computer problems. Just like anyone in information security should be able to be a TV pundit about the latest ransomware attack, or motivations of hackers. The one difference in this category is that while the people who claim this ground have security in their blood, it may not be the right blood type, and the scars may be from completely different battles.

Those with this characteristic tend to build on an existing platform of knowledge, yet extend it through precarious cantilevers into subjects they haven’t really examined. The person who managed your mainframe security is probably not the best person to judge the security of public cloud, at least not at first. Just as you wouldn’t (necessarily) ask your plumber for an opinion on how to replace your roof. But that does not mean they cannot be educated. They just need to take the time to learn.

I find here the opportunity to teach, mentor, and share most rewarding. But it also can be the most challenging. Some people take to new information and views, but some cling to their old models like a survivor and a raft, even when the rescue ship is right next to them.

They Do It Wrong

Doing it wrong is usually a mix of taking what you’re told to do at face value, and not having the skills or experience yet to do it properly. The really egregious examples cling to their ways like that survival raft. The causes can be youth and inexperience, which is best overcome with good mentorship and opportunity to learn, or by stubbornly clinging to bad patterns despite every opportunity to learn otherwise.


Don’t Have Data to Back It Up

This is my favorite characteristic, and the one I like to “troll” the most. Some anecdote, recency bias, or availability bias creates “facts”. Everyone loves to use APT, or now ransomware, as the way to drive attention to their solution, because it plays on availability bias. The attempt to convince me that “Insider Threat is greater than External Attackers” will fall flat. You had better be prepared to be challenged with data. I will take you to task.

Those who exhibit this characteristic either cling to their belief, even in the face of clear data, or eventually, and sheepishly, admit that their story has holes. It’s often amusing to see how they tread a fine line between saying, “Yeah, the data is right”, and “Still, buy our product”.

Don’t Ever Do It In My House

For anyone who wants to do business with me, do me a big favor. Do not come with FUD. Do not come with anecdotes unless they serve only to demonstrate how something was built, or to illustrate an example. Do not come to educate me on something you haven’t done. Come to me with data that supports your point. Come to me with experience. Be willing to accept contrary views, and challenges to your solutions. Be willing to engage in discourse (note, I do not say debate or argument!). Let’s have a sensible conversation using data, attempting to find common ground and points of reference. I will respect an informed view that is willing to be challenged any day. Anyone not willing to be challenged, and not having (accurate and relevant) data to back up their assertions, will be summarily fed to the bears. They live under my desk…

If you want a really good read, Bruce Schneier has written a great article on the subject, as well as the book Beyond Fear: Thinking Sensibly About Security.


Better Late than Never: My First Foray into Real Metrics

Author’s Note: this post was written back in 2013, but never made it this far. Forgive the delay and the references to old presentations that may no longer be accessible.

It’s been a while since my last post, and I’ll blame it on the extra fun work that I’ve taken on this year. One of those projects has been starting a metrics and data analysis project for a client. Because I have a pretty long runway with this client, I’ve had time to think about what might be most useful and how to approach it. I’m early in my journey, so I’m going to talk about my experiences and the right and wrong turns I took.

First Step I took:

I gathered as many killer insights as I could from listening to the Risk Hose podcast, sitting across the table from Jay Jacobs at a Metricon dinner, and a great presentation by Brian Keefer on his and Jared Pfost’s experiences. If you haven’t looked up these resources, you need to. They are great resources to get you going. So are Andrew Jaquith’s “Security Metrics” book and the SIRA site. I also stopped worrying about being perfect in my metrics. I learned from several of these people that metrics start where they start, and you can always perfect them later. Getting started is the biggest hurdle.

Second Step I took:

I listened to what the business worried about. I had several forces at play here. There were concerns out of the security teams about “unknowns”: what was happening that we didn’t know about? There was a belief that some of these unknowns were “huge” issues. A belief, but no data. The business also believed that certain security processes were too cumbersome or prohibitive. I compiled a list of these concerns. I also thought about what metrics would help the security team understand what was going on – situational awareness.

Third Step I took:

Because I’m a critical thinker and an empathist (someone who leans towards empathy), I decided to look for data and measures that could prove (or disprove) the negative beliefs about our current security posture and operations.

Some specific measures I decided on: 

  • Number of firewall rule changes per week, mean and median days to approval from time of ticket submission, and mean and median days to close the ticket. This helped us track performance in support of the business. 
  • Firewall acceptance and rejection rates by port. We found this partially useful, and partially a great source of a betting pool. The data gave us an understanding of which ports were of interest to external parties. We watched patterns of opportunistic probes as they evolved (which turned into a betting pool on what would be the top port-of-the-week). It also provided us with intelligence on targeted probes based on our industry, which meant someone had mapped the IP addresses to our company and/or our services.
  • Weekly total number of potential data exfiltration communications, variance of potential data exfiltration communications week to week, end-user initiated data exfiltration (activity by internal parties as opposed to externally based activity)
  • Types of detected activity on the Web Application Firewalls. While I knew this measure was fraught with issues around data accuracy and relevance, I decided to collect it anyway so we could at least have a baseline to measure our effective use of the tool. My justification (to myself) was that it was better to recognize there were 40,000 too many alerts through exact measures than to simply argue the fact based on a “gut feel” that there was “just too much noise”.

Each of these became key metrics I’m tracking.
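The first measure above is simple enough to sketch in a few lines of Python. The ticket dates here are invented for illustration; real data would come from whatever ticketing system holds the firewall change requests:

```python
from datetime import date
from statistics import mean, median

# Hypothetical firewall change tickets: (submitted, approved, closed)
tickets = [
    (date(2013, 3, 4), date(2013, 3, 6), date(2013, 3, 8)),
    (date(2013, 3, 5), date(2013, 3, 11), date(2013, 3, 14)),
    (date(2013, 3, 7), date(2013, 3, 8), date(2013, 3, 9)),
]

# Elapsed days from submission to approval, and submission to close
days_to_approval = [(approved - submitted).days for submitted, approved, closed in tickets]
days_to_close = [(closed - submitted).days for submitted, approved, closed in tickets]

print(mean(days_to_approval), median(days_to_approval))  # prints: 3 2
print(mean(days_to_close), median(days_to_close))        # prints: 5 4
```

Tracking both mean and median matters here: a single ticket stuck in approval for weeks drags the mean up while the median stays put, and that gap is itself a useful signal.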

Challenges so far:

Collecting data. In Andrew’s book, one of his criteria for good data is data that is easy to collect. Phooey. There is no such thing. Consolidating the data collection is one of our biggest hopes for new tools. We collect data from 7 different sources and we’re not done yet. Each tool has its own dashboard and its own very useful graphs, but no way to get this into what we lovingly call “a single pane of glass”. Also, some data require tools to gather. Some would wish to buy the “blocking” tool first. First lesson learned: start cheap, measure, and determine if you really have anything to worry about. We already found one purchase that, while useful in its current state, showed us through metrics that the original plan of massive spending was wasteful. There was just not enough risk presented to justify million-dollar expenditures on data exfiltration.
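As a minimal sketch of what that consolidation looks like, the pattern is just normalizing each source into (period, metric, value) and merging. The feed names and numbers are invented; real sources would be API pulls or log exports, not in-line lists:

```python
from collections import defaultdict

# Invented per-tool feeds, each already normalized to (week, metric, value)
firewall_feed = [("2013-W10", "fw_rule_changes", 14), ("2013-W11", "fw_rule_changes", 9)]
waf_feed = [("2013-W10", "waf_alerts", 41200), ("2013-W11", "waf_alerts", 39876)]
dlp_feed = [("2013-W10", "exfil_events", 3), ("2013-W11", "exfil_events", 5)]

# One "pane": week -> {metric_name: value}
pane = defaultdict(dict)
for feed in (firewall_feed, waf_feed, dlp_feed):
    for week, metric, value in feed:
        pane[week][metric] = value

for week in sorted(pane):
    print(week, pane[week])
```

The hard part in practice is not this merge, it is writing and maintaining the seven extractors that produce the normalized feeds in the first place.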

We have also found that many of the metrics create situational awareness (albeit post-situation), especially when looking at comparison metrics (vulnerabilities compared to “excessive” website communications; data exfiltration traffic compared to vulnerabilities or viruses detected).

There will be more, as we find things, and as we find success and outright fails.


Three Key Patterns for Information Security Programs

After too many years witnessing the sham that “security standards” and regulations can be, I feel like I have to be a bit of a grumpy old man. I’m not usually this way…well, I am old, but usually not terribly grumpy.

Let’s be really, really clear about something. There are some security standards that I think are quite nice, and that give people the right nudge, particularly those who are growing in skills and experience. But then I watch others in the industry latch onto certain standards as if they have found the holy grail, and must bludgeon all who will not bow before it. Yes, you, Clement Onan*, with your COSO-devised SOC TSCs that are incomprehensibly obtuse. Yes, you, Herbert Anchovy*, with your misrepresentation of the NIST CSF as a linear process instead of a cyclical one. And yes, you, Ernest Scribbler*, who adores the concept of merging every possible standard into an incomprehensible mess of 800 (yes, eight hundred) individual control requirements used to bludgeon customers into sniveling gits who willingly offer up their checkbooks.

*The names of the guilty (and the innocent) have been changed, or faked, or otherwise obfuscated.

So now that you know my whipping boys, let’s talk about the proper guidance towards a solid security program:

Manage to your risks. Nearly every standard (ISO 27001, GLBA, GDPR, NIST CSF, 800-53, The Bruces…) speaks about performing a risk assessment, identifying an appropriate level of risk, and what not. With one big word. RISK. Get to know it. Know what it means – it’s not how many vulnerabilities your scanner finds. It is about knowing what the business is trying to do to be successful: make money, take care of customers, pay employees, and pay its bills – OPPORTUNITY. It does mean that the company should be able to do those things successfully without having every Tom, Rick, and Dennis Moore running through the IT systems creating havoc, stealing information that is supposed to be secret or proprietary, and stopping the productivity of employees with mounds of lupins scattered all about. Now you may have a few distractions, incidents, and lupins, but they should be at a level your executives can tolerate!

Agility to adjust to change. One thing that is certain (so we are told from the age of 12) is change. Business goals change, technology changes, attackers change, thieves change, and even tactics change as security professionals and attackers up the ante. How flexible are you? How flexible is your team? Have you bought into a system of security that you believe you can rely on for the next 300 years? Or are you smart enough to consider that it is probably out of date before you even buy it? (Let’s face it, all security defense is out of date, since it is responding to an attack that has already happened!) Your ability to change, pivot, and adjust should be just as fast as your executive team’s tolerance for what goes wrong.

Continuous improvement. The ability to change is also coupled to the ability to know what works and what does not. Understand what threats are actually preying on your environment, what incidents will cause your executives to be upset, and when your hovercraft is full of eels. Data points like these can be very helpful to identify where your program is working, and where it is not. And with simple translation you can communicate the same to your executives by showing what (realistic) monetary impact is being avoided, and what opportunities are able to continue unabated.

Keep a very important point in mind: standards and regulations should inform our security programs, not drive them. Our company’s needs, opportunities, and the threats to them should drive our security programs. Standards should be used only as a guide to give those programs a form – a way to order or structure them, nothing more. Trying to take a standard or regulation and say that it covers everything your program needs is like saying a comfy cushion will get the confession you want!


The Fallacy of Permanence

I’m sure Daniel Kahneman has defined this fallacy in better terms, but it is a good story to show one of the potential reasons why the concepts of DevOps and Lean are so valuable. It also shows why certain types of IT business are profitable while others eke by on small margins.

When you decide you want to put in a new lawn, rarely will you find yourself starting your search by looking for the “cheapest” option. Your first inclination is to find the most attractive or appealing lawn you can. You may set a budget and search within those criteria, but more often than not, you settle on something at the upper end of that scale. You want something nice. The choice of purchase leans towards the aesthetics and (in many senses) the outcome you want to achieve – a nice, attractive lawn.

Then phase two happens – someone has to mow it, fertilize it, and care for it. This is an operational process that goes on over time, all in parallel to your day-to-day life. As time passes, that operational process is subject to your emotional and financial ups and downs, and resource availability. When money is scarce, you try to save and maybe skimp on how often you fertilize that lawn or mow it. When times are flush you might re-patch that section that went brown when a dog decided it was a nice spot for a bio-break. Cost, effort, and resource management become your focus, with cost being the one item that shows up time and time again (aside from those recurring brown spots).

The original image of what you were buying when you purchased the sod to make your lawn beautiful affected your purchase decision, and your willingness to “spend a bit more” to get what you wanted. That willingness to put out money went away when you got to the operational aspect of that lawn because it was subject to the ups and downs of real life.

If the parallel to IT build and operations is not apparent, then let me illustrate.

Companies will be enamored with the image of a beautiful new system, complete with beautiful interfaces, fantastic business processes and workflows, and a few colorful blinky lights thrown in for good measure. They will pay a premium dollar to achieve that outcome because their eye is on the outcome and opportunity that they see. They are making decisions from a macro view of opportunity (what money am I going to make from all this goodness) and cost (what is my view of capital investment to achieve all this goodness).

With the system now installed, the company goes through its ups and downs. The original opportunity sees its ups and downs, with revenue varying over time due to economic conditions or customer satisfaction issues. Company revenue is also affected, and the picture of spend now changes. The focus becomes how to squeeze a few more pennies out of day-to-day operations, how to be more efficient, and how many brown spots we can tolerate in our lawn.

The lessons to learn from this story:

  1. When looking at an opportunity, recognize the bias you have towards thinking that it has one state, and that that state involves a big celebration of its success and the money it earned – all at a single point in time. That thinking ignores its ongoing lifecycle: opportunities live, breathe, change, grow, turn brown, and require careful care and feeding.
  2. In the world of rapid development and Agile (and for the most part DevOps) systems are not permanent. They evolve, change, and adjust to the world at hand which can reduce the bias. While the initial standup of an IT system (just like putting in a new lawn) has upfront work, it is an ever evolving system that changes over time. Even your lawn goes through evolution (weather changes, adjustments to allow for those rose bushes, or patches to those designer dog markings).
  3. In the world of DevOps, operations comes with the development. They are inextricably intertwined (or should be). This means development, new ideas, and new features go up and down with the fortunes of the company as well. Decisions are made as a whole – considering the new opportunity *and* the current operation of the system. Efficiency is evaluated as a system, rather than in discrete parts. Even a lawn can be managed this way – with incremental change such as increases in a capital spend (weather driven sprinklers) that can reduce an operational spend (water costs).

The moral to the story is that different phases in the lifecycle of your IT induce different spending biases. These can lead to overspend in implementation, and underspend in operations. An awareness of this bias, and the use of an evolutionary and incremental view of systems can positively affect those patterns. Consider them…

And as a careful aside, if you are selling IT services and systems to customers, recognize where the spending bias occurs in this cycle (and hence where margin lies).


DevOps is dead, long live Dev!

Yes, it’s hyperbole.  But the headline is important.  In 2020 I still encounter companies who are moving into cloud, yet are immovably mired in their traditional way of doing IT.  They are somehow convinced that a group of infrastructure folks build some things, some developers roll up, drop off some applications, and they are done. Worse are those who believe that they can simply hand a set of tools to a team and call them DevOps because they’re using Chef, Terraform, or some such “automation” that makes it DevOps.  When in fact all they are doing is putting lipstick on a pig.

What these organizations miss is that when you want to make your IT modern, you leave behind a siloed approach where things are thrown over a wall.  You no longer have a bastion of operations that is saddled with unknown environments requiring full run-books and detailed operations manuals, and you no longer build infrastructure as a stand-alone function of IT that is separate from development.

You must be one!  You must engineer!  Not like an operations person, but in the way a developer engineers.  He writes just what he needs to get the job/request done and moves to the next task he’s been assigned.  He thinks about how things fit together and builds constructs in his code to link his components together.  He understands and runs tests to make sure what he wrote does what he wants and needs it to do.  If all of this is done under the right conditions (which means there is verification all is okay), then his code is ready to be pushed into production.

At this point you might be saying, that’s great for a developer, but not for infrastructure.  My retort is that I *am* describing infrastructure.  An Infrastructure Developer.  And if you are working in cloud, anyone working on infrastructure had better be writing code.  Any other pattern and you are repeating the sins of the past.  They are now developers.  If you are unable to recognize this, perhaps it’s time to step back from your mouse and GUI console and reconsider.  Anything other than treating infrastructure as the development of code risks making your environment untestable, unstable, unrepeatable, unrecoverable, and un-understandable.  So think carefully.  The world is now the world of developers and code.  If you fail to recognize this and adapt, you risk succumbing to the real (r)evolution.
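A hedged illustration of why code makes infrastructure testable: once the desired state is data, you can assert things about it before anything is ever applied. The rule set and the validation policy below are invented, and real infrastructure would live in a tool like Terraform rather than raw Python dictionaries; the point is only that this shape of thing can be verified like any other code:

```python
# Desired firewall state expressed as data (contents invented for illustration)
desired_rules = [
    {"name": "web-in", "port": 443, "source": "0.0.0.0/0", "action": "allow"},
    {"name": "ssh-admin", "port": 22, "source": "10.0.0.0/8", "action": "allow"},
]

def validate(rules):
    """Fail fast on patterns we never want pushed to production."""
    errors = []
    for r in rules:
        if r["port"] == 22 and r["source"] == "0.0.0.0/0":
            errors.append(f"{r['name']}: SSH open to the world")
        if r["action"] not in ("allow", "deny"):
            errors.append(f"{r['name']}: unknown action {r['action']}")
    return errors

# Verification before anything is "pushed to production"
assert validate(desired_rules) == []
```

A mouse-and-GUI change skips exactly this step: there is nothing to test, nothing to review, and nothing to replay when the environment needs rebuilding.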

And just to be clear, I’m describing DevOps (in its proper, pure form). But because people have already taken to bastardizing and mis-using the term, I’m just going to slap you with some hyperbole.


I Love the Subject of Change Control

I love it not because it is wrapped in complexity, but for quite the opposite reason; it is (and should be) a perfect case of simplicity.

Let me explain why with a quick story of bad change control.

I watched an organization react to the pretty significant failure of a change by instituting a new step in their change control process.  “The CIO or VP of Operations must now approve all changes.”  I couldn’t help but laugh (and cry) at the naivety of such a move.  What value can a CIO or VP of Operations add to a change that would make it any less likely to fail? The answer is none, unless they are the one who designed or is implementing the change, or are somehow an intimate expert in it, which for a large organization would be a gross misuse of their time.  The only thing the insertion of such a step achieves is to demonstrate that you have no confidence in the people who work for you.  While that lack of confidence may be warranted, the action does absolutely nothing to improve the probability that changes will go better in the future.  Instead it breeds a culture of fear and a belief in the inferiority of the staff, which slows the velocity of new changes and services to a crawl, with no one in IT willing to take chances, let alone do things that should be necessary.  The IT department grinds to a standstill, its customers become dissatisfied, and innovation and new products die on the vine.

Instead, what if every change failure were subject to a proper root-cause analysis (5 Whys)?  What if that root-cause analysis led to investment to correct the root cause (training, corrected documentation, better testing, patience to do things right, breaking a cycle of rushing, executing necessary maintenance, coordinating schedule changes, communicating changes to customers…)?
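As a toy illustration (the incident and its causes are entirely invented), a 5 Whys chain is just a short list that ends at a cause someone can actually fix, and the corrective action targets that last entry rather than adding approval layers at the top:

```python
# A failed-change record walked back through "why?" until we hit
# something fixable. All details are made up for illustration.
five_whys = [
    "The change took the payment service down.",
    "Why? The new firewall rule blocked the database port.",
    "Why? The rule was copied from an outdated template.",
    "Why? The template was never updated after the database migration.",
    "Why? No one owns keeping the templates current.",
]

root_cause = five_whys[-1]
corrective_action = "Assign template ownership; review templates after every migration."
print(root_cause, "->", corrective_action)
```

Notice that "make the CIO approve all changes" appears nowhere in that chain: no amount of senior sign-off would have caught a stale template.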

Now you’ll have an IT department that corrects its mistakes by fixing the real cause, with support from management to do so.  You replace a culture of distrust with one that employs Maslow’s higher-order needs – growth and contribution – one that can be painful when errors are made, but personally (and collectively) rewarding when lessons are learned and executed on correctly.  

Psychology studies have shown that punishment and encouragement can both create changes in behavior, but that one creates fear of taking chances, while the other tends to promote development.  Each has its place.  They should each be used appropriately to create the environment necessary.  But starting with punishment and distrust sets a bad precedent.  Start with trust, and punish when that trust is repeatedly broken.


Unicorns (and how Gene Kim challenges us yet again…)

I had the opportunity to read Gene’s new book, The Unicorn Project, last month. As with The Phoenix Project, I was riveted – nearly missing my tube stops on the way to work. My distractions usually came as a result of my attempts to align the concepts of The Five Ideals with how many of my legacy clients have worked (and continue to work). I found myself (once again) immersed in looking for ways to best unravel those old ways of thinking that need to change. Yes, once again, I found a book that makes me think hard.

The Unicorn Project is written from the perspective of application developers – a perspective that was somewhat glossed over in The Phoenix Project – not intentionally I am sure, but because each book has to choose an audience. The Unicorn Project challenges many of the ways companies have done things in application development and the approach to the products and services they offer out through IT. Some of the challenges seem trite (and yet still pervasive) such as silos, turf wars, politics, and finger pointing. Some of the challenges are pervasive and yet unavoidable such as legacy systems that no one wants to touch, applications that have tentacles of dependencies that seem insurmountable, and an unwillingness to tackle the big issues and technical debt that litters our data centers.

I took away many ideas from The Phoenix Project back in 2013, and now I have many more from The Unicorn Project. For one, the concepts of startups and innovation within large organizations really tickled my fancy. And as always, there is the usual parade of wonderful ideas for achieving Lean, collaboration, and The Five Ideals. Yet the one concept that really endeared The Unicorn Project to me was the focus on Flow and Psychological Safety – something that is very near and dear to me. This book should inspire you to scratch your head, maybe even miss a few stops on your ride to work, and get you thinking about how you and your people can work better in IT.

Oh, and it’s available now, here: https://itrevolution.com/the-unicorn-project/


Where should the CSO Report?

I was recently asked the question, “Where does Security belong in an organization?”

It is an intriguing question, and one that I think about quite often.  Currently most CSOs report to the CIO or CTO.  In a few, rare cases, they report to the Chief Risk Officer (CRO) or Legal.  I hear security professionals expressing the belief that the CSO should report directly to the CEO in order to make their voices heard and make security a priority in the organization.

Where does the CSOs role come from?

Decades ago, IT security consisted of rudimentary tools.  Firewalls were one of the first bastions and “network security” became synonymous with “security”.  As attacks evolved, security broadened, as did the tooling to protect against them.  At the same time breaches also became more public, costly, and embarrassing.  Security professionals began clamoring for organizations to take security more seriously.  There was a strong belief that security risks were lost on managers and executives.

With this perceived disconnect, various organizations attempted to step in and remedy the problem.  Government regulation (Sarbanes-Oxley, banking regulations) and industry regulation (PCI, ISO 27001, SOC 2) appeared in an effort to mandate security risk mitigation. 

In the midst of these movements, security professionals began to clamor that they “need a seat at the table” – a reference to needing a direct audience with the CEO and board-of-directors.  While some regulations mandated the creation of “Chief Security Officers” and “Chief Privacy Officers” with accountability, none have (yet) stated a mandatory reporting structure.

It is in this landscape that we pose the question: where should the CSO report in an organization?  I would put forward that the answer may not be obvious, for some reasons not often considered.

“Risk and Opportunity are two sides of the same coin.”

Security’s role is to help the organization realize the opportunity it’s pursuing by not falling into traps that can destroy that opportunity.  If you think about how security operates, you might think, at face value, that the preceding statement fits with what you think security does today.  However, there is some subtlety in the statement that might be overlooked.  Consider the word opportunity, and how the goal of security is “…to help the organization realize the opportunity…”. How often have you heard a security team say “…over my dead body…”, “…that will never happen…”, or something similar that reflects outright resistance to a project, a change, or a technology?  I’ve heard it said about online Internet banking in the early 2000s.  I heard it said about mobile payments in the early 2010s.  My point isn’t that opportunities proceed despite security, but rather that they have proceeded by solving for security.  Each of those technologies had detractors who looked for reasons it shouldn’t work, rather than creating solutions that made it possible for it to work.

You might ask what this has to do with where a CSO and their organization report.  I’ll give you a very simple answer: the CSO and their security group should exist in the structure that enables them to best collaborate with the groups generating opportunities.  Security should be just as embedded in building the solutions that lead to opportunities as everyone else in the organization.

It is all about how to contribute

My view of where a CSO and their organization exists has much more to do with how they can collaborate and contribute the most in new initiatives.  In my view, the CSO is there to contribute and collaborate on building success for an opportunity by understanding risk and how to mitigate it.

Based on that view, a CSO should:

  • Ensure that new opportunities can resist the most likely threats that can disrupt them by examining and measuring the probabilities of threats and their impact, communicating that information to those building the opportunity, and working collaboratively to devise mitigation for those risks most likely to disrupt the opportunity.  (If you say they must resist all threats, then we need to have a separate discussion on how every executive and manager decides to pursue an opportunity with no absolute certainty that the opportunity will succeed.)
  • Ensure that security, regulatory, and compliance regimens can be met by creating solutions that meet the requirements of the regimens in ways that allow the opportunity to proceed.  Help the organization design, build, and operate the opportunities in a way that meets the security, regulatory, and compliance regimens. 
  • Focus on early collaboration, early engagement, design, early testing, and early feedback.  When the focus is made earlier in developing the opportunity, efficiency increases, and the flow of work becomes more rapid.

There is a question in all of this: who enforces “the rules” when a team or opportunity does not follow the security, regulatory, or compliance regimens?  In my opinion, that is up to the executive team and board of directors.  The CSO’s role is to provide guidance and insight, not to enforce or punish.

The CSO should:

  • Measure compliance, identify security incidents and risks that can disrupt the opportunity, and refine designs with a focus on making the opportunity a success, and ensuring it stays that way.

The CSO should not:

  • Play enforcer.  This conflates the roles of auditor and of guide.  Enforcement is an after-the-fact activity that too often occurs when evaluating a production environment, or a solution that is well on its way.  Guidance occurs, and effective collaboration occurs when there is fast, early feedback to teams that are building the opportunities.

When security thinks of itself as an audit or enforcement function it separates itself from those creating opportunity.  It creates an “us-vs-them” dynamic that is counter to building.  What is needed is an “Us” approach that helps to create solutions that are secure, that meet regulatory and compliance regimens, and protect the opportunity from risks that can disrupt.

So where should the CSO report?

At the end of the day, where the CSO reports should be a reflection of where she contributes to the opportunities of the organization, rather than where she can wield the largest stick of punishment and enforcement.

I do not mind if the CSO exists outside the IT department, but only if they collaborate and contribute in opportunities and initiatives outside of IT.  A CSO who is closer to a Chief Risk Officer works with every business unit to identify, measure, and treat their risks – whether it’s identifying criteria for accepting or disqualifying job applicants, measuring the impact of workplace safety, or devising strategies to ensure continuous availability of business operations.  A CRO should report at an executive level, but care should be taken not to simply conflate the roles of CSO and CRO.  The role and responsibilities of a CRO within a financial services organization are much broader than the skills of most CSOs.

If the CSO is only focused on IT issues, then that CSO should remain within the IT organization, and report to the CIO.  Their role is to identify and prioritize the security risks that need to be addressed for the sake of the success of the opportunity, and to collaborate with the rest of the organization on the design, building, and implementation of mitigations against these risks.

CSOs should not lose sight of their role, or that they are one of many parts to making opportunities in an organization.  Security issues are only one of many risks that can make an opportunity fail!  While breaches can cause losses, delayed projects cause losses in sunk costs and lost opportunities.  This should by no means diminish the role of the CSO, as their work helps an opportunity succeed as much as any other part of the organization.  But let’s not inflate the importance of security over the need for an organization to take risks, experiment, and pursue new opportunities.  For the CSO, that should be an opportunity to help the organization take these chances in ways that balance risk and opportunity.

And if you really want me to give a hard opinion, I do not believe it is necessary for a CSO to report to the CEO. My view is that a CSO is an informer, a designer, and a collaborator. They can communicate to a CEO, but they are not the sole mouthpiece of risk and enforcement.


The Three Phases to DevOps in Security

Many of those who aspire to create a high-performing security function within a company are looking at DevSecOps and what it represents.  This is laudable, as the concepts that are represented in DevSecOps mirror many of the successful organizations I’ve experienced, as well as the views of dozens of CSOs that I’ve interviewed since 2010.  The CSOs I interviewed often reflected that many of the skills they valued were not traditional technology skills, but instead skills in critical thinking and collaborative discourse.  (Before you toss this assertion aside, bear in mind that they also asserted technology skill is needed – just not in isolation.)

When a group of us were reviewing very early drafts of “The DevOps Cookbook”, many of us felt something was missing in its approach.  It was David Mortman who first put to paper what many felt was an underpinning concept or theme that was needed: culture.  DevOps depended upon a culture – one that was seemingly at odds with how things were currently being done, and that required buy-in to change.  It required an agent of change, and long-term commitment to overcome dysfunction within an organization that may feel counter to existing dogma.

The journey I’ve taken over the past eight years has allowed me to codify some of the successful approaches I’ve taken and understand the why around their success.  This is my collection of ideas, and the basis for them.  It maps the path I’ve charted with my teams towards a culture of broad collaboration, empathy for the “customer”, and a willingness to take chances and learn.  The results I and others have seen are quite rewarding, and you’ll see how each played out in my stories.  These are far from the end-all of approaches, but if they help give you a jumpstart in your journey, then this has achieved what I hoped.

People, Process, Technology

We all know the mantra (principle) of People, Process, Technology.  It is a fantastic model to explain how things should be.  I even stack them much like I might order Maslow’s Hierarchy of Needs.  People are needed to operate an environment and are the culture of doing and knowledge.  These people build processes that reflect their views on how things get done.  And then you build technology to facilitate the speed of those processes designed by these people.

That is wonderful if you are an anthropologist examining how an organization is.  The challenge is that this is rarely how people try to transform their organizations.  They (mistakenly) start upside down.

Technology, Process, People

How many of you have watched an organization declare it’s starting its “DevOps Transformation” and bring in a bunch of technology tools (automation, deployment, cloud)?  This is the “technology shall cure all our ills” club.  I will often tell them, if a process is broken and bad, then all technology will do is make the “bad” faster.  If your process for approving user access requires five different approvals from people who have no idea what they are approving, what system it refers to, or what data it exposes, all technology will do is make that inappropriate access happen faster.  Garbage In, Garbage Out, at Speed.  Have you really made anything better?  How would you know?

How many of you have seen an organization build an isolated team in IT and give it the title “DevOps”?  This one irritates me – if you call your team DevOps, you don’t get it.  Either this implies that only this one team needs to do DevOps, or more likely a naïve notion that it’s all about the automation.  Have you helped the company move forward and improve? How are other groups improving and work across the organization getting better?

Security teams are no better at fostering DevOps.  Too frequently I encounter teams sitting behind walls throwing darts (findings) over those walls at groups they barely know.  This grates on me even more than someone calling their team “DevOps”.  I call this the “We Do This, You Do That” club.  (By the way, I also see Development teams and Infrastructure teams doing the same.)  How do your findings relate to what the company is trying to achieve?  How do they relate to the company’s tolerance for risk?

DevOps is the Journey

You would think that by now people would have learned what DevOps is, but instead DevOps has been miscast as purely automation, or more commonly, deployment tooling.  Let’s get over this myth.  Tooling is an outcome.  Even refinement of work and processes is an outcome.  Make no mistake, I love the technology solutions that have come out of the DevOps movement – methods and tools that have refined the flow of work and that have increased speed of getting things done.  But these solutions are the outcome.  DevOps is the ongoing journey of getting there.  It’s about how we work together with a common goal of making things better (maybe even faster and stronger…) in a way that makes it possible to focus on the real customers, (blamelessly) identify inefficiencies, collectively learn and make leaps of faith, and create rapid and large shifts in how we do things.  It’s a mindset, or I would say, a culture change that allows us to get to that state where we can make these changes.

My Strategy of Change

When I start working with an organization, I put most of my effort into organizational behavior.  The words I use for this are: Embedded, Collaboration, Discourse, Learning, Growth and Refinement.  I’ll concede that there is often badly broken technology or nasty compliance failures – but even these situations I use as an opportunity to teach Security, Development, Infrastructure and Operations teams to work together and learn the cultural workings of DevSecOps.

I first focus on changing how people work and think – their perceptions, understanding, as well as interpersonal and organizational interactions.  I call this Changing People.

I next weave in changing the processes of working – how they communicate, problem solve, and learn.  This overlaps with changing people and can even overlap with changing the company’s operational processes as people try to refine how they work.  I call this Changing Process.

Lastly we look to evolve the processes, tools, and technology used in our operational work.  This can and usually does include changing security controls and operational processes as well as looking at techniques of refinement.  I call this Changing Technology – but in reality it’s about everything that DevSecOps can consume, refine, and make better.  By this time the ideas will start flowing, and the DevSecOps machine is in motion.

Change People

My objectives are to lead the team towards collaboration, communication, discourse, and learning while avoiding anonymity, disconnection, and debilitating blame:

  • Building “emotional capital” with customers
  • Broad collaboration as the core to succeeding
  • Making the Team feel valued – contributing
  • Using Empathy & Discourse to collaborate and solve problems
  • Leading by Example
  1. Meet Everyone (the Customers): The very first thing I do with a Security team is ask them who they know in the organization.  So far, other than one lone individual at one company, they respond only with other IT people – usually support desk, or infrastructure (usually networking).  My response is to task them with getting out and meeting all of the company.  I remind them that what the company does pays their salaries and bonuses, so it would be really good if they knew what that was.  We set up a grand tour where we meet every business unit within the company, and the Security team is given only one task – listening.  They visit Marketing, Sales, Manufacturing, Distribution, Finance, Legal…any and all groups.  I challenge the Security team to ask: “What is it that your department does?”, “How does what you do provide value to the company?”, “What keeps you up at night?”, “If your department wasn’t available, what would happen?”, “What processes are critical?  What technology is critical for you?”

Oh, and I remind them that there isn’t a wall between IT and “The Business”.  IT is part of the Business (thank you to David Schenk for that mantra).  Further references to “The Business” as a “them” costs them a quarter in a cookie jar.

Result: The team finds out who their customer is.  They will gain an appreciation for what the company does, and what is important to it.  Their customer’s value, concerns, and problems become real.  Now the things that Security thinks about can become grounded in what the organization values.  There will be an affinity and empathy towards what the organization does, what pain points exist, and how Security and its actions have an impact.  It changes the team’s approach from an abstract “Do this so the company doesn’t fail!” to “This will help distribution because their system won’t be disrupted!” or “Charge records will get to Fraud Prevention on time.”

  2. Communication: I mandate communication patterns between the teams.  I set down a few rules, many of which will sound like some training you had at some HR event:
    • If your email exchange goes beyond two messages, make it a phone call.
    • Better yet, always start with a phone call. Email is only for transmitting data (files, file manager…)
    • No, better yet, if you can, walk over and talk to the person face-to-face (I follow the theory of “Managing by Walking Around”).
    • Group meetings are either Face-to-Face or with Video. Video is good when everyone is remote (global).
    • Communicate frequently – have team group meetings that everyone attends. I hold one-on-ones weekly to ensure people feel listened to.

Result: The team members know each other – their faces, their voices, who they are.  The team learns that much of communication is about facial expressions, vocal tones, and things that don’t transmit via email.  Do not let people become anonymous.  Encourage people to feel included.

  3. Collaboration & Discourse: During meetings encourage feedback and contribution from all team members in priorities, learning, teaching, and what gets done.  I have found that putting people on the spot for feedback doesn’t work well for those who may be more introspective; however, making it clear that feedback is welcome, and will be considered, opens the opportunity for them to speak.  This is achieved by making it clear that you expect your ideas to be challenged, and that you allow the team to do so.  Consider any feedback you do receive carefully – testing it with the team members offering it, examining how it can disprove your ideas (not how it confirms them).  Make sure the comments are focused on the idea, not the person suggesting it.  We all have ideas that have faults, so there is no sense in blaming.  Rather it is better to refine the idea, which becomes a learning experience.  You show value in their ideas and feedback by publicly considering them.  As a manager, allow your statements to be challenged.  Ask your team to disprove them – how could my idea be disproved?

Another technique I use is to ask each person in the team to come with updates on what they are doing so that they understand that everything going on is important and we should discuss it together.  Give them praise publicly for presenting the idea (not just when it’s right).

Result: You’ll be modeling what you expect your team to do in their interactions.  You’ll surface assumptions, find faults in designs and ideas, and gain a lot of opportunities to teach, and learn!  You’ll create feedback loops – a willingness to discuss openly any issues, problems or concerns.  You will do it in a manner that is open, lacking in blaming the individual, but focusing on the idea.  You will create an environment where people will feel they can participate.

  4. Be Willing to Fail: Model this from the top. Admit when you make mistakes.  Give others in the team credit and make it wildly public.  Recognize success globally but keep mistakes internal.  If a team member makes a mistake, take the blame on your shoulders to address, and have the conversation one-on-one with the team member.  Understand the issue, and encourage the learning process.  As the organization as a whole learns blameless environments, you can let mistakes be examined more broadly, but until that adoption occurs, you need to ensure that the team knows that you won’t hold mistakes against them (unless they are systemic and chronic).

Result: You’ll have a dedicated, loyal team.  One that sees learning as a sign of strength.  One that feels they contribute, they’re recognized, and that faults, while always painful and frustrating, will be less so – that they feel they can move forward, learn, grow and correct what goes wrong.

Change Process

  1. Allocate Expertise: I take stock of the team, their expertise, strengths, and of course challenges. I also ask what their interests and goals are – what do they like doing, what do they want to do?  With this information I divvy up responsibilities across the Security team.  While the structure depends on the needs of the organization, and available skills, I make sure I’ve created comprehensive coverage of responsibilities.  I then collectively let the team know what my thoughts are, let them challenge them, point out what I might have missed, what things need to be added, and where someone feels strengths are not being leveraged properly.  Ultimately it’s about recognizing expertise in the team, and making sure that expertise is externalized – made public so that everyone knows who they can turn to.

Result: Recognition of the expertise in your team, and a public pointer to the go-to people for answers.

  2. Embed in Projects: Now to break down more walls.  This is how I ensure that the team not only learns about what is going on in the organization, but also participates in creating the solution.  I assign the more experienced security people in my teams (those with broad insight) to projects within the company.  If the effort has a significant need for security, they become the Security Program Manager – the person who triages all requests from security, and who acts as liaison between the project personnel and security specialists within the security team.  This Program Manager needs to be very involved – participating in as many project meetings as possible, engaging with the project personnel, regularly communicating needs, and “Managing by Walking Around”.

I’ve made this arrangement at every client I’ve led.  I’ve had some people take to this like a fish to water – they love the interaction and actively participate with the project team, feel part of the team, and take its success personally.  In one particular case we attended a new project initiation.  We listened, recognized that security was not a material consideration for the project, provided non-security feedback and questions, and were rewarded with a big thank you.  They appreciated our input and made a habit of inviting us to every new project they considered.

Result: Engaging and embedding in projects.  Participating and having ownership of the participation.  Creating low-friction, high-return work environments where Security is perceived as being invested in the success of the project – through the time committed and the willingness to listen and care about the goal of the project.

  3. For Every Control You Implement You Must Give Something Back: This one probably sounds like process, but at its heart is empathy. Security teams have a tendency to impose controls that make tasks harder or take longer.  This is a problem for those people in the company trying to get their work done in time for a deadline imposed by their manager.  In an effort to meet the deadline they will be willing to take any steps to achieve that goal – including side-stepping security.  Security needs to empathize with this.  Hence my rule.

I ask my security teams to look at what their requirements are taking away in terms of efficiency.  How much time is lost because people have to go through the security control (perhaps versus the old way they did things, or versus the intended design)?  I’ve had discussions about complex passwords being a requirement, and I ask about the challenge of remembering complex passwords versus something less cumbersome – like a biometric attribute (a finger) that is rather hard to forget.  Or multi-factor authentication, where a token is something to lose, but a mobile device is less likely to be forgotten because it is ingrained in our culture and daily lives.  Even elements such as change control, or segregation of duties, can be examined carefully to see what is really the objective behind the controls, and how it can be arrived at in a manner that is far less cumbersome and obtrusive, yet delivers the same level of resistance to a threat.

Result: A mentality around the potential effects of security, and a thoughtful approach that looks to minimize that impact.  A view that the Security Team actually cares and is sensitive to personal success.

  4. Prioritize – Be Great at Important Things: This is where I insert a bit of Security – but where understanding the customer comes strongly into play. Nearly every organization struggles with how to prioritize its work.  For Security Teams, this includes the endless list of “must have” projects, tickets, vulnerabilities, and audits.  How do you decide what goes first?  Some use a thumb in the wind, while others claim “professional expertise”.  I prefer measures and collaboration in the team.  I force the team through what I call “Risk Week”.  It’s a week-long session (that gets shorter over time as they get great at it) where we create our Risk model and mitigation priorities for the year.  It is a highly collaborative effort.  It includes revisiting all the organizational groups.  It includes assigned responsibilities within the team so they all participate.  It involves learning how to measure and make the exercise objective and repeatable.  It involves presenting their ideas, each participant challenging assumptions, and creating active discourse as priorities are weighed.  We even include an executive presentation where the team is welcome to present so that they gain the experience and the exposure.

Result: Risk Assessment that is based on the company’s goals and priorities, as well as reinforcing the collaborative nature and interaction that we want to foster.
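
A minimal sketch of what a “Risk Week” output can look like – ranking a risk register by a simple likelihood × impact score so the priority list is measured and repeatable rather than a thumb in the wind. Every risk name and number below is hypothetical:

```python
# Hedged sketch: rank a (hypothetical) risk register by expected loss so the
# team's mitigation priorities for the year come from a measure, not a hunch.

risks = [
    {"name": "Unpatched internet-facing app", "likelihood": 0.6, "impact": 500_000},
    {"name": "Lost laptop with customer data", "likelihood": 0.3, "impact": 200_000},
    {"name": "Distribution system outage",     "likelihood": 0.2, "impact": 900_000},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]   # expected loss as the ranking key

# Highest expected loss first: this becomes the year's mitigation priority list.
priorities = sorted(risks, key=lambda r: r["score"], reverse=True)
for rank, r in enumerate(priorities, start=1):
    print(f"{rank}. {r['name']} (score ${r['score']:,.0f})")
```

The point is repeatability: next year’s Risk Week can rerun the same scoring with updated inputs, and the team can argue about the likelihood and impact estimates (the real collaborative work) rather than about the ranking mechanics.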

  5. Manage to the Priorities:  Everyone has had the situation where a problem or finding crops up, and suddenly there is belief it needs to be the foremost problem we solve.  It is a “hair-on-fire” moment, and the belief is that all other work must stop so this can be fixed.  While Lean promotes pulling the Andon cord, I like to point out that there are likely many issues in Quality when Security is involved.  I stop everyone in the moment of “hair-on-fire” and ask them to calm down for a minute.  Breathe.  And then look at the list of prioritized items we agreed to work on.  I ask if this “hair-on-fire” issue should displace any of those issues.  If the answer is yes, we codify it with a risk profile that matches what we did during the Risk Assessment.  If it doesn’t (which is almost always the case), we add it to our master list of “all-the-things-we-should-do” so it’s not forgotten.

Result: You recognize the need to fix issues of Quality, but also to balance that against where the greatest returns are achieved, and how they align with the company’s objectives.  People still feel their concerns are valued, but you also ensure they maintain a balanced and normalized view of priorities.

  6. Manage the Flow of Work: This effort was far more ad-hoc in many respects, but I drew on numerous methods and tools for managing work.
    • Make Work Visible: Kanban – nearly every security team I’ve worked with has preferred Kanban as the way to visualize and manage their work. One task in, one task out, pick up the next task.  Because so much of security’s work is intertwined with other teams, it is hard to march to sprint cycles.  We could instead weave in and out of activities – pushing things into “on hold” – and flow with any other style of work more easily.  What we gained was visibility into what was being worked on, and what was yet to be done.
    • Fit to Capacity & Level the Workload – we monitored the Kanban, and I had conversations about people’s workload.  If I felt they were being overwhelmed, or if they put in more than 40-45 hours of real work (e.g. I found them in the office after hours all the time), then I would postpone work based on company priorities.  I recognized that quality was going to be the first thing sacrificed if I didn’t put things on hold (see Build for Quality).  The team came to respect that I valued their sanity, would avoid overwork, and would balance priorities.  They knew they could do the same.  I likewise pushed those who didn’t put in the time to deliver.  Taking advantage of my desire for quality was not rewarded, and would be privately confronted.  To quote Nick Galbreath from his time at Etsy: “If you don’t take responsibility, then you probably don’t belong.”
  7. Build for Quality:  In many projects I have seen people start with a goal and a deadline.  I get frustrated by this model because most people are very bad at estimating how long something will take to accomplish.  Even with the concept of an MVP (Minimum Viable Product) they still underestimate the amount of work and time it will require.  To overcome this bias, I lay down a set of rules for every project:
    • Estimate your timeline and amount of work using the worst-case scenario.  We are so over-optimistic at time estimation that this will be far more accurate.
    • You are allowed to remove features, but you are not allowed to remove quality.  If the solution will fail to operate shortly after launch, or there is a probability of disruption to regular operations, go back and fix.  Features can be added later.  Quality failures are highly disruptive.
    • Test, Test, Test.  Work on the change, try it, and make the change as many times as you want – in test (non-prod) environments.  Get good at it.  Make mistakes, practice.  Learn.  Then when you get to production, it’s close to rote.  You’ve tested all the ways you can think of to fail, and have learned from them for the long term, not just for this change.
    • If a deadline is going to be missed, evaluate the cost of doing so.  Then evaluate the cost to the company of pushing it out with the missing quality (e.g. what happens if it fails every other day, if operations stop, if it gives the wrong answers).  Measure this using money and time (which can be equated with money).  Things like loss of customers, financial errors, and product failures all have monetary costs that can be measured.  That will give deadline pushers pause.

Result: You will surface over-optimism (it will take time, but you’ll be right more often than the optimists).  You will keep a focus on Quality, and make sure it stays in the forefront.  You’ll encourage learning during testing, so that failures become a reward there and are avoided in production, where they are painful.  You’ll also provide a model for everyone to evaluate the impact of quality versus feature deadlines (there is no correct answer until the measure is made).
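
The last rule above – measure both the cost of slipping the deadline and the cost of shipping with a known quality failure in money – can be sketched in a few lines. All the figures here are hypothetical assumptions:

```python
# Hedged sketch: put money on both sides of the "ship late vs. ship broken"
# decision so the debate is a measurement, not a shouting match.
# Every number below is a hypothetical assumption for illustration.

def cost_of_delay(weeks_late: float, revenue_per_week: float) -> float:
    """Money lost by shipping late (deferred revenue, sunk cost)."""
    return weeks_late * revenue_per_week

def cost_of_quality_failure(failures_per_week: float, cost_per_failure: float,
                            weeks_until_fixed: float) -> float:
    """Money lost by shipping broken (outages, errors, lost customers)."""
    return failures_per_week * cost_per_failure * weeks_until_fixed

delay = cost_of_delay(weeks_late=2, revenue_per_week=50_000)          # $100,000
failure = cost_of_quality_failure(failures_per_week=3,
                                  cost_per_failure=20_000,
                                  weeks_until_fixed=4)                # $240,000
print("Slip the deadline" if delay < failure else "Ship and fix later")  # -> Slip the deadline
```

With these assumed numbers, slipping two weeks is cheaper than a month of production failures; different inputs can flip the answer, which is exactly why the measure has to be made each time.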

Change Technology

By now, I shouldn’t even need to talk anymore.  You should have a team that is on a path to functioning, collaborating, and looking for ways to save time and effort.  They know what they want to do, where they are frustrated, and have surfaced these issues.

Now go Lean.  Find the weaknesses in your flow of work and in your security risks.  Where do you need speed, where do you need to mitigate risk, and where do you need more data?  Examine, communicate, collaborate, and never forget that the Security Team is one piece of an organization that is carefully balancing Opportunity and Risk, and the greatest service the Security Team can deliver is to help guide those Opportunities past the potholes of Risk.

Posted in CISO, CSO, DevOps, DevSecOps

Glass Houses…and Music Majors

First, a disclaimer…this post is *not* about bashing or ranting about Equifax’s security practices. Why? Because I do not have firsthand knowledge of what they did or did not do, or what specific exploits and vulnerabilities were leveraged throughout the kill-chain of the event. Frankly, it’s likely only the security investigators (internal and external), legal team, and outside counsel will ever know the details. Which is just fine by me. If you wonder why, then you’ve obviously never been involved in a breach and the subsequent investigation. There is a lot of conjecture (some logical, some not so logical), a lot of hand-wringing, certainly a lot of drinking (after hours), and a whole lot of lost sleep and hair (if you have any to begin with).

So why would I mention that?

Because I want to rant for a moment about the security community and the press who seem to have taken issue with how Equifax was breached.

This has nothing to do with their response to the breach.  Let’s set aside Equifax’s horrible response after the breach. I will not condone, support, or even pretend to empathize with their response. To put it mildly, their response to the breach sucks. You were breached. Mea culpa, and treat your customers, constituents, and not-so-willing-public-whose-data-you-have like your own injured child who you just accidentally knocked off a ladder and gave a lump on the head (and maybe a concussion).

Let’s instead talk about the blame we seem so eager to apportion.  Security professionals, take note of something we always say:

– It is not “IF” you will be breached, but “WHEN”

So suddenly Equifax is evil because they were breached?

You may counter, “but they had a vulnerability that was *3* months old!!!!”

Um, yeah….about that. Let me ask you how old the vulnerabilities are on the laptop that you use for your pen-testing. And if you are a CISO or other security professional employed at a company, and you believe you patch your public-facing systems perfectly in less than 90 days, you are *woefully* uninformed, I would argue “naive” about how companies work, and not plugged into something called “risk acceptance”. Ouch, I think I just touched some nerves, but let me assure you, this is not personal. It is about the dynamics of an organization – something that outsizes the best of us.

Again, I cannot say this is Equifax, but I can say that nearly every company I’ve come in touch with struggles with this same problem.

Security Team: “Bad vulnerability, and its out there exposed to the Internet. We must patch it right away!”
Development Team: “Can we test this first? It’s probably going to break the application.”
Business Team: “This is a really critical app for our business group. Please don’t break it.”
Managers: “Don’t break the app. Can this wait?”
Executives: “We’re listening to all the people here, and can we please just not break things? Let’s take it slow and test.”
Development Team: “We have features to get out that are a priority and are scheduled to go out in the next three weeks for a customer.”
Business Team: “Please don’t interfere with this critical customer need.”
Executives: “Can we please not break things…”
Development Team: “The patch breaks something. It will take us a couple of months to figure out what. Meanwhile we have these other features to get in.”
….

See a trend? I don’t want to represent this as an endless cycle. The reality is (at least for the organizations I’ve worked with) they do eventually, in a fairly reasonable period of time (which I will admit is a *very* subjective assessment), get around to figuring it out and fixing whatever is broken by the patch. Some organizations are great at it, and it might take one or two sprints to figure it out. Others have other priorities or long backlogs, and maintenance work doesn’t rank as high a priority, but they still get to it within 3-6 months. In some cases, depending upon the complexity of what a patch breaks, that’s pretty darn good. And if you are skeptical of that, you need to spend a bit more time embedded in a development team.

I remember quite a few years ago listening to a talk at BSidesSF (one of the early years) from someone whose day job was to teach companies how to write secure code and evaluate code for security vulnerabilities.  He talked about a program that a customer asked them to write, and how, in their efforts, they found that they committed exactly the same secure-programming mistakes they lectured their customers to avoid.  They had vulnerabilities in their code that were easily exploitable.  They found that deadlines made them take shortcuts and not get around to putting to use all the best practices that they could (or maybe should) have.  And these were some individuals who I held in very high regard in the application security field.  They admitted – “It’s hard in the real world to do it right.”

So what should we learn from Equifax?

Security isn’t perfect. We shouldn’t gang up on an organization just because they had a breach. Every organization is trying to balance a business opportunity against the risks posed to it. It’s a balance. It’s a risk equation. It’s never pretty, but let’s face it: most organizations are not in business purely for the sake of security. Every control costs money, causes customer frustration, and has an impact on revenue. You may say a breach does too, and it does, but there is a balance. Where exactly that balance lies can be a subject of great debate, because it is not precise and can never be predicted.

Patching is much more than just “patch and forget”. Application patching is even more complex. The alleged vulnerability cited in the Equifax breach was 90 days old. Even if it was 180 days old, there are factors we cannot even begin to understand: competing business interests, a belief that its exploitation couldn’t be leveraged further, a penetration team that didn’t find it (or the exposure it could lead to) because the applications were too complex to understand, or even human error that missed the finding through a reporting snafu. Stuff happens. No one is perfect, and we shouldn’t throw stones when our own houses have (despite our protestations otherwise) just as much glass in them.

Ultimately, there are some practices that can help, but I will put a disclaimer here: these may already have been in place at Equifax. Again, we are human, and systems and organizations are complex. Complexity is hard to get right. We also don’t know the full kill-chain in the Equifax scenario. There may be more things that would help, or, for that matter, these things may have been in place and it required even more complex effort to address the root cause. That said, here are some things I would suggest:

  • Try to understand every application in your environment and how they tie together.  Knowing the potential chains of connection can help you understand potential kill-chains.
  • Create multiple layers of protection, so a single failure doesn’t result in catastrophic loss.  You can liken this to the “Swiss cheese” effect, where a breach cascades further and further into systems and data only when failures occur at multiple layers (or there aren’t any layers to begin with).
  • Run red-team exercises with targets as goals (e.g. that big database with customer data, or the AD domain user list).  Let your red team think like an outsider with a fun goal, and the flexibility of time to figure out how to get there.  The results will inform you where you can improve primary controls, or where you can add additional layers of protection.
  • Patch external systems with far more urgency than internal ones.  This seems pretty obvious, but sometimes how we represent vulnerabilities is too abstract.  I have found that using the language of FAIR has been an immense help.  Two factors I try to focus on: exposure (what population of the world can reach it) and the skill/effort required to exploit it (is it easy or hard).  Given the volume of opportunistic threat attempts (a.k.a. door knob twisting), it makes sense to point to those values as key indicators of what will happen with exposed vulnerabilities.  I once pointed to the inordinate number of queries on a specific service port that a client used as proof that the “Internet knew they were there…” which leads to my last point…
  • Communicate in a language that people can understand, and in ways that make it real.  If you talk in CVSS scores, you need to go home.  Sorry, but to quote a favorite line of mine, it’s “Jet engine times peanut butter equals shiny.” (Thank you Alex Hutton, your quote is forever immortalized on that fan-boy t-shirt.)  Put it in terms like: “The vulnerability is exposed to the Internet, there is nothing blocking or stopping anyone from accessing it, and the tools to exploit it are available in code distributed openly to anyone who has Metasploit (an open-source, freely available toolkit).  The attacker can then execute any command on your server that the attacker wants, including getting full, unfettered access to that server, its data, and….”
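To make the exposure/effort idea concrete, here is a minimal sketch of how those two FAIR-style factors could drive patch prioritization. The categories, weights, and vulnerability names are illustrative assumptions of mine, not FAIR itself or anything Equifax used; real FAIR analysis is far richer than a two-factor score.

```python
# Hypothetical sketch: ranking patch urgency by two FAIR-style factors.
# All category names, weights, and vulnerabilities below are illustrative
# assumptions, not part of the FAIR standard.

EXPOSURE = {"internet": 3, "partner": 2, "internal": 1}  # wider exposure -> more urgent
EFFORT = {"metasploit_module": 3, "public_poc": 2, "expert_only": 1}  # easier -> more urgent

def urgency(exposure: str, effort: str) -> int:
    """Combine the two factors into a simple ranking score."""
    return EXPOSURE[exposure] * EFFORT[effort]

vulns = [
    {"name": "struts-rce", "exposure": "internet", "effort": "metasploit_module"},
    {"name": "internal-admin-xss", "exposure": "internal", "effort": "expert_only"},
    {"name": "partner-api-sqli", "exposure": "partner", "effort": "public_poc"},
]

# Patch the highest-scoring (Internet-facing, trivially exploitable) items first.
for v in sorted(vulns, key=lambda v: urgency(v["exposure"], v["effort"]), reverse=True):
    print(v["name"], urgency(v["exposure"], v["effort"]))
```

Run as-is, this puts the Internet-exposed, Metasploit-ready item at the top of the queue, which is the whole point: the ranking reflects who can reach you and how cheaply, not an abstract severity number.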

Those are things I coach my teams on.  Things we should look at and learn from.  Because we need to find data that helps us get better.

One last thing that chafed my hide…

Some people had the audacity to say “…who would hire a CISO with a college major in music…”

Setting aside the rather comical philosophical rant I could make based on UCI’s research on the effects of Mozart on students studying mathematics, I’d like to put forth my own experience.

I hold a Bachelor of Architecture (yes, buildings!) with a minor in Music, plus two years of post-bachelor’s study in organizational psychology.  I am a fairly accomplished security consultant (who has done penetration testing and programming) and CISO.  My degree is no more a disqualification from being a CISO than a music degree disqualified the former CISO of Equifax from having hers.  Simply put, “COMPUTER SCIENCE IS NOT A PREREQUISITE FOR BEING A CISO”.

I have interviewed dozens of CISOs around the world.  Nearly every one of them said they liked having liberal arts majors and people from outside Computer Science on their teams, because they brought a very different insight and analysis.  It is my opinion that by the time you have reached five (5) years of experience, your college education is largely immaterial.  College informs you of theories and facts – what a petaflop is, if-then statements, the theory of asymmetric encryption – but it does not tell you how to use those skills in the ever-changing dynamic of real life.  I call these skills the ability to analyze, synthesize, and respond.  In other words, the act of design.

For the CISO of Equifax, it is likely that her skills in analysis and design, and her ability to communicate those thoughts to executives, were highly developed.  It is also likely that she had experience with software, with networks, and with other technical areas.  I can relate, because in my undergraduate education for Architecture we had to take a Pascal programming class in our freshman year.  We had to take a “Computers in Architecture” class.  What I did with it was unique, and I suspect what the former CISO of Equifax did with her experiences was unique as well.  Putting a blanket assumption over anyone’s experience is ill-informed and, frankly, quite naive.  Have a chat with them.  Know their skills; learn what made them capable and skilled, or at least trusted at what they do.  Then critique what they have brought to the table *today* as a result of all of their experiences (school included, but also all their work since then).

So let everyone put down the pitchforks and stones we were going to throw at someone else’s glass house, go back to tending our own, and note how someone else’s house got broken as a way to learn how to protect ours.  Because what I’m hearing so far isn’t helping; it’s based on a lot of arm flapping by people far too interested in pointing at other people’s glass houses instead of tending their own.
