Security Maturity vs Risk Based Security

I have spent much of my career exploring various security frameworks, compliance regimens and standards. I have dabbled in most of them, primarily because I am curious to see what value can be derived, what benefit they bring, and why someone believes that a certain approach is the best. I will be honest. After thirty years of exploring, I’ve formed some opinions. This blog is about one of them.

What is Security Maturity?

Security maturity grew out of the CMMI model, an effort (by my alma mater) to define process maturity. It specifies six tiers of maturity: incomplete (unaware or undefined), initial (awareness but reactive), managed, defined, quantitatively managed, and optimizing. The intent was to measure the process of discovery, growth, and improvement: how well an organization executes a program of continuous improvement. A low level of maturity indicates that the organization has not implemented or operated a plan for continuous improvement. A high level of maturity indicates that it has iterated continuously to identify methods of optimization and improvement. Deming’s Plan-Do-Check-Act (PDCA) cycle is a great illustration of a program of continuous improvement; a maturity model would reflect how frequently and with how much rigor the PDCA cycle is operated. The PDCA cycle may focus on the continuous improvement of a domain such as software development, a specific mitigation or control such as malware prevention, or even the organizational operations as a whole. The key is that maturity is designed as a measure of how far the organization has progressed in operating that cycle.
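The relationship between the PDCA loop and the maturity tiers can be sketched in code. This is a toy model of my own, not anything official from CMMI; the point it illustrates is that maturity tracks how rigorously the improvement loop is operated, not which mitigations are deployed.

```python
from enum import IntEnum

# Toy sketch (not official CMMI): maturity measures how rigorously an
# organization operates a continuous-improvement loop, not which
# mitigations it has deployed.
class MaturityLevel(IntEnum):
    INCOMPLETE = 0
    INITIAL = 1
    MANAGED = 2
    DEFINED = 3
    QUANTITATIVELY_MANAGED = 4
    OPTIMIZING = 5

class PracticeArea:
    """A practice area (e.g. malware prevention) that PDCA is applied to."""

    def __init__(self, name):
        self.name = name
        self.completed_cycles = 0

    def run_pdca(self):
        """One Plan-Do-Check-Act iteration (details elided):
        plan objectives, execute, measure against objectives, adjust."""
        self.completed_cycles += 1

    @property
    def maturity(self):
        # Toy heuristic: more rigorously operated cycles -> higher level.
        return MaturityLevel(min(self.completed_cycles, 5))

malware = PracticeArea("malware prevention")
for _ in range(3):
    malware.run_pdca()
print(malware.maturity.name)  # DEFINED
```

In the real model, of course, levels are appraised rather than counted, but the shape is the same: the level describes the operation of the cycle, not the contents of the control set.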

There is one weakness in this approach: how do you know what you are supposed to do? CMMI and Plan-Do-Check-Act are methods of refinement and improvement (which could also be phrased as capability and maturity), but refinement, improvement, and maturity of what?

I have had far too many conversations with people who insist that a security maturity model is the way out of their poor security posture, and that security maturity is a model of security practices and mitigations. Security maturity, in their mind, is a series of ever-increasing mitigation techniques that are in some cases loosely defined and in others very rigidly defined. To them, maturity is a representation of what a good organization does to mitigate threats.

There is, however, a fallacy in this belief. If we look at the purpose and definition of a capability maturity model, its intent is to mature the process of continuous improvement. In fact, the CMMI Institute website states:

“Capability levels apply to an organization’s performance and process improvement achievements in individual practice areas.”

https://cmmiinstitute.com/learning/appraisals/levels

Maturity was never intended to define what practice areas are, and should never be used that way, yet many people write or describe maturity as a sequence of technical mitigations or capabilities instead of process improvement maturity. Why? Likely because CMMI uses the word capability (capability maturity model), and people have latched onto that word and used it to define “what you should do.” It has become a proxy for their beliefs about what good or necessary security controls are. Yet CMMI and maturity were never intended, nor should they be used, to define which security mitigations you should choose, nor the risk mitigation levels required to adequately protect your organization. The intent was to measure process improvement capability: how far you had advanced in the direction of continuous improvement.

Transposing technical or mitigation expectations for process improvement as the definition of maturity perverts the intent of maturity. Maturity (as defined in CMMI) measures the maturity of the process you use to derive what optimal means for your security program; it should not represent your end state of mitigations. Perverting maturity in this way makes it a representation of ideal controls and mitigations across every possible security domain, and can demand mitigations and technology whose costs and effort vastly exceed the protections the asset needs. Applying maturity as a measure of “have you done this mitigation or applied that technology” optimizes for a single view of what good security looks like. It assumes that security controls and mitigations are one-size-fits-all. I wish you well if you believe one standard set of mitigations can universally fit all organizations. Such an assumption becomes dogmatic about how organizations should secure themselves, regardless of the threats to those organizations, the value they are protecting, their budget constraints, and their risk tolerance. And yet there are multiple resources on the Internet that focus on “what mitigations you should have at each maturity level.”

If you think I am crazy, see what the NCSC had to say about this and their own IAMM here: https://www.ncsc.gov.uk/blog-post/maturity-models-cyber-security-whats-happening-iamm

Bottom line for security maturity: use it as a tool to measure how well you are managing your process to define good security at your organization, not as a measure of what good security is.

What is Risk Based Security?

Risk based security has its roots in various compliance programs that require a “risk assessment,” and in the rise of risk ontologies such as FAIR. It is probably best defined as a program that identifies the greatest security risks to the organization based on impact and probability of occurrence, and focuses security efforts on those areas by prioritizing spend and levels of control. To me, risk based security focuses attention on the assets I need to protect and the most likely threats against them. It tells me that I should spend more time, money, and effort on protecting certain assets and less on others. It does not, however, necessarily tell me I should use processes of continuous improvement to measure the success and failure of those efforts. (Humor me here; I’m trying to be reductionist to illustrate a dichotomy.) Risk based security does provide a few clues, however. It traditionally focuses on “how much do I want to reduce the risk,” so it does help to define outcomes, such as how many threats or how much impact we want to prevent against. These are important points, as they hint at the idea that measuring the result can indicate whether we have reduced the risk to the tolerances we can accept. What risk based security still does not point to, however, is the process of continuous improvement.
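To make the dichotomy concrete, here is a minimal sketch of risk based prioritization, with assets and numbers invented purely for illustration: score each asset by impact times likelihood, then spend effort from the top of the list down.

```python
# Minimal sketch of risk-based prioritization. The assets, impact
# scores (1-10), and likelihoods are invented for illustration.
assets = [
    {"name": "customer database", "impact": 9, "likelihood": 0.6},
    {"name": "marketing site",    "impact": 3, "likelihood": 0.8},
    {"name": "build server",      "impact": 6, "likelihood": 0.3},
]

# Classic risk scoring: impact x probability of occurrence.
for a in assets:
    a["risk"] = a["impact"] * a["likelihood"]

# Highest risk first: this is where time, money, and effort go.
priorities = sorted(assets, key=lambda a: a["risk"], reverse=True)
print([a["name"] for a in priorities])
# -> ['customer database', 'marketing site', 'build server']
```

Note what the sketch does not contain: any loop that revisits the scores, measures whether the mitigations worked, and adjusts. That is exactly the gap maturity is meant to fill.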

Do you see a pattern here? Maybe a hint at a good path forward?

If you think I am going to leave you here with a scathing critique but no path forward, you do not know me well. I am too kind for that, and to me teaching means giving you a push in the right direction.

The Approach I Like – ISO 27001

ISO 27001 is the path I love the most because it marries Security Risk Management, which defines “what should we do,” with Security Maturity, the method of managing and continuously improving that “what should we do.”

I had the great pleasure, about 14 years ago, to meet Angelika Plate, one of the authors of ISO 27001 and one of the people in the industry I truly admire. The part of the conversation that stuck with me was the very balanced way she presented the intent of ISO 27001, and in particular its relationship to ISO 27002. I had commented that people obsessed over ISO 27002, and that it was misleading because it gave people the impression they had to do everything in ISO 27002. Her reply was that the two real goals of ISO 27001 were, first, to identify the risks relevant to the organization, and second, to manage security through a Plan-Do-Check-Act process. ISO 27002 was a guideline (not a standard!) that provided a broad menu of security mitigation control options; it was up to the organization to find which controls derived the best mitigation through continuous improvement, even quantitative measurement, and optimization. Her answer was that the two approaches, risk management and maturity, are not mutually exclusive and should be used together. It also points to the parallels with CMMI. ISO 27001 uses Plan-Do-Check-Act; CMMI measures the organization’s maturity in developing processes of continuous improvement through awareness of the need for controls, operating those controls, managing them, quantifying measurement, and optimizing them. The parallels should be obvious, each with its own purpose.

Then of course you are going to ask me next, “How do you choose the right controls that are effective?” That answer is well outside the scope of this blog, but suffice it to say that maturity will give you a path to choose, observe, test, and optimize what you do. There is no one solution for mitigating risk. Use your best judgement, avoid the pressure of the FUD that vendors give you, and think practically. If you know what is important to do (your risk assessment), then you know where to focus your research and choices, and you know roughly where to monitor for success or failure. You also know that being mature will include monitoring and improving on the choices you made. Turning your back on the problem after your first try is not a good path to maturity.

So think about your programs. Are you heavily biased toward Security Maturity? Step back and assess your risks. Are you focused only on Security Risk Management? Then I would suggest you look at a process of continuous improvement that moves your security program forward and builds on your Security Risk Management. Both are useful, both are necessary, and both will create an appropriate security program that is mature, because even a risk management program can be continuously improved.


Mandatory versus Guidelines: A story of FUD

I recently received a message in my inbox from a vendor (note the use of the word “mandatory”):

"Are you all caught up in the third ISO 27001 edition which introduces new mandatory controls regarding threat intelligence."

Is it really mandatory?

Too often people confuse the guidance that standards provide with what is mandatory. Too often vendors use a new line in standards guidance as a catalyst for new sales. Regardless of the source of this misinformation, you should always be on your guard. Even I was caught for a moment by the message above. “Was there a change… have I not kept up with the latest version… am I getting too slow to track all these things?” Happily, the answer to each was no, but it took a moment to figure that out.

Let’s give you some tools to help you answer these questions quickly.

Is it a standard or is it a guideline?

I find this question to be the most telling, and ISO is a great example. In a fantastically informative conversation I had over a decade ago with the lead of the working group for ISO 27001, I was told how many times she had to correct people that ISO 27001 was the standard, and that ISO 27002 was a guideline framework (my emphasis) to help them understand the potential scope of their security program (ISMS). ISO 27002 was not meant as a mandatory set of controls, but rather to help people see how far and wide security controls could go, what was possible, and what they should consider.

When I received the email I quoted from at the top of the post, I did a quick check of ISO 27001. No mention of threat intelligence. I checked the link they included in their email, and it pointed to ISO 27002. Problem solved. It was not mandatory. Not by any stretch of the imagination. Was it a good practice? Yes. Was it mandatory? No.

Save the argument that everyone should have threat intelligence. I’m not saying people should not. I am saying it will look vastly different for different organizations. Big banks may want deep intelligence across the company and across the globe. A small company may just want enough to keep its systems patched. ISO 27001 mandates a risk-based approach that prioritizes effort where it yields the greatest risk reduction. It does not mandate that you do that through threat intelligence.

What is the intent?

I regularly find people dropping references to Sarbanes-Oxley as reasoning for controls, or as justification for blocking a modernization effort. I ask them to read the intent of the regulation or standard they are referring to. In the case of Sarbanes-Oxley, it is to provide assurance of the controls over financial reporting. It has little to nothing to do with choices about how to modernize your applications or infrastructure, unless that effort’s intent is to eliminate all controls. In fact, it has been repeatedly proven that companies can maintain Sarbanes-Oxley compliance without IT controls. It’s hard, it’s not pretty, and I wouldn’t recommend it, but it can be done. Using a regulation as a bludgeon to stop an otherwise useful activity is the wrong approach.

Make sure you know the intent. If you focus on controls that give you assurance that the financials are accurate, then the rest is immaterial. If you focus on accounting controls, controls for data integrity, and prevention of inappropriate change to data, then you probably have Sarbanes-Oxley locked up. If someone uses Sarbanes-Oxley to stop an infrastructure modernization program, or as a criteria for stopping going to public cloud, they probably have no clue what they are talking about. Ask them how the change affects control over the financials. The blank stare you get in response will tell you everything.

Can you achieve better?

This is the most contentious question, but my favorite. Some standards (I’m looking at you, PCI-DSS!) are highly prescriptive. They mandate specific settings and technology to the detriment of innovation and best practice. PCI-DSS has long mandated password controls of:

  • Require a minimum length of at least seven characters
  • Contain both numeric and alphabetic characters
  • Change user passwords/passphrases at least once every 90 days.
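The rules above are simple enough to sketch as a checker. This is my own illustrative code, not anything PCI publishes; the 90-day rotation is modeled as a simple check on password age in days.

```python
import re

# Illustrative checker for the PCI-DSS password rules listed above
# (my own sketch, not official PCI code). The rotation rule is
# represented as a check on how old the current password is.
def meets_pci_password_rules(password, age_days):
    return (
        len(password) >= 7                                 # minimum length
        and re.search(r"[0-9]", password) is not None      # numeric character
        and re.search(r"[A-Za-z]", password) is not None   # alphabetic character
        and age_days <= 90                                 # rotated within 90 days
    )

print(meets_pci_password_rules("s3curePwd", 30))   # True
print(meets_pci_password_rules("short1", 30))      # False: too short
print(meets_pci_password_rules("s3curePwd", 120))  # False: too old
```

Notice that a long randomly generated passphrase, an SSH key, or a biometric factor satisfies none of these checks as written, which is exactly the problem with prescriptive controls.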

Now, NIST has long since advised that changing passwords every 90 days is dangerous in that it promotes bad behavior such as writing down passwords in unsafe places, forgetting passwords, and making passwords too simple so they can be remembered. PCI hasn’t caught up yet. But what should you do if you can do better? What if you use SSH keys? What if you choose not to rotate them every 90 days? What if you use MFA on top of the SSH keys? What if you use biometrics? Can you change those every 90 days?

The obvious point is that some controls can be improved upon and made better than a “good practice.” In defense of PCI-DSS (which I often like to kick), it does have a category of “compensating controls” that allows you to define something better than the standard. And from what I’ve seen so far, PCI-DSS 4.0 focuses on objectives over prescriptive control activities. How that settles out remains to be seen, but again, it takes us away from the silly view that something is “mandatory.”

Think Carefully Before Acting

For all these claims of “mandatory,” be cautious. It is an easy way to instill Fear, Uncertainty, and Doubt. It should not be that way, but it is human nature to elicit an illogical reaction. (Ah, biases.) Take your time and think it through. Is it a guideline, or is it mandated in the letter of the law? Is it fulfilling the intent, or is the intent really in some other area? Are the requirements the best you can do, or can you do better? Only you, as the owner of security, can answer these. Consider them carefully before making a rash decision that might otherwise ruin your budget for the year.


A Little Tech – Reset Troubles with MFA

Recently I encountered a bug in one of my second factor authentication apps that caused me to lose all the registered tokens for multiple sites. As you can imagine, losing (or having destroyed) the second factor for important sites can be a heart-in-the-throat moment (or three). Several of the sites I use fairly frequently, and I knew some of the sites would make resetting token generators difficult. Or so I thought. Hence the reason for this blog post.

In this post I shall discuss the challenges I faced, and the laughable situation of MFA recovery across the industry. Keep in mind, I’m examining this from my viewpoint of how much value each site held for me, and how I viewed the level of effort to gather the information to impersonate myself. This means I’m not going to dismiss SMS or email verification out of hand, but if you have something of particular value (identity, financial assets) I consider it a bad idea to use SMS or Email as the only element for verification.

A couple of posts that describe the issues with SMS and email for MFA:
https://auth0.com/blog/why-sms-multi-factor-still-matters/
https://www.youtube.com/watch?v=SOQgABDSYZE&t=250s

I have only named one company in this list, and that’s because they deserve it. You’ll see why.

And lastly, yes, yes, I know. Backups or synchronization would be great. But that was before I was enlightened. After this mess I have looked at several options, but they are few, and they did not include the (popular) token generation tool that I had been using for years.

Allora…

Do You MFA?

The first step in my journey was to try and remember every account that I had MFA for. Fortunately I had just done a rough accounting of them. I created a list and started stepping through them one by one.

My accuracy was pretty good. Except for one account, and it happened to highlight something I didn’t expect. We expect financial services institutions to implement some form of MFA, and the same for large CSPs and domain registrars, largely because the security industry and auditors have harped on how important it is in today’s Internet society. So I was happy to see that ISC2 had MFA. However, ISACA does not. Yes, your friendly association of auditors does not walk the walk. I am in the process of serving them with a material audit finding.

Two is Better Than One

The next batch of sites I had to recover were not quite as disappointing, but still not great.

Most asked me to push a button to send a link to an email address or a code to an SMS number I had previously supplied. This might seem reasonable as a recovery option, and for some of the sites it didn’t upset me terribly, particularly where the loss of the account wouldn’t impact me financially. But it did bother me for services whose value I had paid for and cared about maintaining. I absolutely understand the argument that SMS MFA is better than no MFA, and that many people (probably a generation older than me) still use phones without apps, or struggle with the concept of apps, but not giving me better MFA options for those accounts frustrated me. I want to protect them because I’ve invested time into making them useful for me. Knowing that someone could take advantage of that (particularly given my line of work, and the couple of interesting blips I’ve noticed over the last two years) made me quite concerned, especially because most of these sites never notified me that I had registered a new MFA. Although some did.

What really amused me was a company that used email and SMS together. You first had to validate via the pre-shared email, then click a link that started a phone call to your pre-shared phone number. I found it amusing because both SMS and email are susceptible to hijack and interception, and are usually the easiest pieces of information to gather about someone. When I thought of the big, big, big companies who use one of their product lines for some big-name stuff, the whole email/phone validation process became quite concerning. Of course, the probability of one of those companies using a well-known email address like service-admin@mybigcompany.com is really unlikely, right? But so is the probability of them assigning the admin account a phone number that is real.

Know Your Friends, Know Your Enemies Better

One particular site with sensitive information actually started to give me a greater degree of comfort. It first asked for email verification of my request. Then it proceeded to ask a series of questions about certain pieces of information that generally only I would know. It was not perfect. I know this information is stored in some locations that may not be that secure (not as a result of my own choice mind you), but at least there was a degree of obscurity to the information, and multiple factors to verify my identity.

Kill and Recover

The most dramatic of all my recovery efforts involved one particularly sensitive account. I had saved recovery keys and pre-shared contact info. However, it accepted none of them. It is likely that when I first registered for this site (probably 10 years ago) I captured recovery codes that didn’t carry over when they updated their authentication system. Sigh.

Their resolution to this problem was to delete the authentication ID, create a new one, and then relink the products I owned back to that ID. While the process of deleting the account was fairly painless (it did require a 24-hour waiting period), deletion only required my password. No other verification was needed. To their credit, I did receive an email letting me know the account was being deleted and that I had 24 hours to rescind the deletion. But knowing that a single piece of information could be used to delete my account was a little unsettling, if only as a threat to availability.

Relinking the sensitive services to my account required a sensitive piece of information, so that was reasonably secure, but given the sensitivity, I would have liked if they verified my identity through another means.

Recommendations (My Real Favorites)

  • If your customers are in the camp of “must have SMS or email” then give customers options. Several sites had “use token”, “use SMS”, “use email”, “use recovery codes”. This gave me options to decide how I could protect my assets. It also gives me options based on what I can support. (I’ll take 47 full ASCII character one-time-passphrases please – I love to hunt for non-US keyboard characters!)
  • Leverage multiple discrete methods of verification for anyone looking to reset their MFA. Create a form of MFA for MFA resets. Mix together different elements that typically will not be found together: recovery secrets and SMS; personal information and SMS. Frankly, three different elements would be best. Individually these items are not secure, but together they raise the bar and provide greater resistance to an attacker.
  • For more secure sites, use methods that provide greater integrity. Pictures of identity documents coupled with SMS, recovery codes, (and fast customer service) can provide a level of certainty for sites containing highly confidential information.
  • If you feel the need to use personal data to verify an individual, make sure that information is not likely to be floating around in the public eye. Social Security Numbers, pay stubs, and mother’s maiden name are all items that almost anyone can retrieve. If you are going to go this route, include multiple pieces of information and utilize a completely different method of verification. Raise the bar.
  • Notify through multiple means that MFA has changed. Just as you should verify through multiple methods, notify through multiple methods as well. It makes it clearer when something malicious is happening to your account.
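The second recommendation, “MFA for MFA resets,” can be sketched as a check requiring passed verifications from distinct categories. The method names and category labels here are invented for illustration.

```python
# Hypothetical sketch of "MFA for MFA resets": a reset proceeds only if
# enough *distinct categories* of verification passed. Method names and
# categories are invented for illustration.
VERIFICATION_METHODS = {
    "recovery_code":   "something you have",
    "sms_code":        "something you have",
    "security_answer": "something you know",
    "id_document":     "something you are/own",
}

def allow_mfa_reset(passed_methods, minimum=2):
    """Count distinct categories among the verifications that passed;
    two SMS-class checks still count as one category."""
    categories = {VERIFICATION_METHODS[m] for m in passed_methods
                  if m in VERIFICATION_METHODS}
    return len(categories) >= minimum

print(allow_mfa_reset(["sms_code", "recovery_code"]))    # False: same category
print(allow_mfa_reset(["sms_code", "security_answer"]))  # True: two categories
```

The design point is the set comprehension: stacking two weak methods from the same category (SMS plus email, say) should not be enough on its own.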


The Fear Mongers

“APT is your biggest risk.”

“Public cloud cannot be secure, just look at CapitalOne.”

“Insiders are your biggest threat.”

“You must have a SIEM if you are going to pass your SOX audits!”

Bah, humbug. Fear, Uncertainty, and Doubt (or FUD as we sometimes refer to it).

Most who revert to this pattern, I find, have particular characteristics.

They Haven’t Done It

Some people have never done security, now or in their past. However they do read the newspaper, or watch TV (okay, who watches TV anymore…YouTube!), or they are handed a sales script. Someone feeds them a story and they take it at face value. They repeat it. They preach it everywhere because they’ve been convinced through some disassociated argument. Or their livelihood depends upon it. You don’t earn money if your products don’t sell. Think of them loosely as the carpetbaggers. That may be a bit harsh, but security is not in their blood, under their fingernails, and they certainly don’t have any scars to show for it.

Those with this characteristic sit outside of experience. They haven’t seen or experienced the realities, nor do they have the insights. Note, I am willing to move those who do research out of this group because they at least can present some knowledge based on data, but those are few and far between, and they generally don’t use FUD. The rest are hard to convince and are usually best just dismissed. If you find yourself 15 minutes into an argument with one of them over whether you really need a SIEM to pass SOX, and they won’t budge, you have wasted 14 minutes and 59 seconds of your time. Well, maybe that estimate is a second short. Although kudos to those who spend the time to educate, and to the 2% of those who haven’t done it who actually listen and understand.

They Don’t Do It

This characteristic has more to do with links by association. You find that mentioning you are “in IT” at dinner means you must be able to fix someone’s home computer problems. Just as anyone in information security should supposedly be able to be a TV pundit about the latest ransomware attack or the motivations of hackers. The one difference in this category is that while the people who claim this ground have security in their blood, it may not be the right blood type, and the scars may be from completely different battles.

Those with this characteristic tend to build on an existing platform of knowledge, yet extend it through precarious cantilevers into subjects they haven’t really examined. The person who managed your mainframe security is probably not the best person to judge the security of public cloud, at least not at first. Just as you wouldn’t (necessarily) ask your plumber for an opinion on how to replace your roof. But that does not mean they cannot be educated. They just need to take the time to learn.

I find here the opportunity to teach, mentor, and share most rewarding. But it also can be the most challenging. Some people take to new information and views, but some cling to their old models like a survivor and a raft, even when the rescue ship is right next to them.

They Do It Wrong

Doing it wrong is usually a mix of taking what you’re told at face value and not yet having the skills or experience to do it properly. The really egregious examples cling to their ways like that survival raft. The cause can be youth and inexperience, which is best overcome with good mentorship and the opportunity to learn, or stubborn clinging to bad patterns despite every opportunity to learn otherwise.


Don’t Have Data to Back It Up

This is my favorite characteristic, and the one I like to “troll” the most. Some anecdote, recency bias, or availability bias creates “facts.” Everyone loves to use APT, or now ransomware, to drive attention to their solution, because it plays on availability bias. An attempt to convince me that “insider threat is greater than external attackers” will fall flat. You had better be prepared to be challenged with data. I will take you to task.

Those who exhibit this characteristic either cling to their belief, even in the face of clear data, or eventually, and sheepishly, admit that their story has holes. It’s often amusing to see how they tread a fine line between saying, “Yeah, the data is right,” and “Still, buy our product.”

Don’t Ever Do It In My House

For anyone who wants to do business with me, do me a big favor. Do not come with FUD. Do not come with anecdotes unless it is only to demonstrate how to accomplish building something, or to demonstrate an example. Do not come to educate me on something you haven’t done. Come to me with data that supports your point. Come to me with experience. Be willing to accept contrary views, and challenges to your solutions. Be willing to engage in discourse (note, I do not say debate or argument!). Let’s have a sensible conversation using data, attempting to find common ground, and points of reference. I will respect an informed view and one that is willing to be challenged any day. Anyone not willing to be challenged, and not having (accurate and relevant) data to back up their assertions will be summarily fed to the bears. They live under my desk…

If you want a really good read, Bruce Schneier has written a great article on the subject, as well as his book Beyond Fear: Thinking Sensibly About Security.


Better Late than Never: My First Foray into Real Metrics

Author’s note: this blog was written back in 2013, but never made it this far. Forgive the delay, and the references to old presentations that may no longer be accessible.

It’s been a while since my last post, and I’ll blame it on the extra fun work that I’ve taken on this year. One of those projects has been starting a metrics and data analysis project for a client. Because I have a pretty long runway with this client, I’ve had time to think about what might be most useful and how to approach it. I’m early in my journey, so I’m going to talk about my experiences and the right and wrong turns I took.

First Step I took:

I gathered as many killer insights as I could from listening to the Risk Hose podcast, sitting across the table at a Metricon dinner with Jay Jacobs, and a great presentation by Brian Keefer on his and Jared Pfost’s experiences. If you haven’t looked up these resources, you need to. They are great resources to get you going. So are Andrew Jaquith’s “Security Metrics” book and the SIRA site. I also stopped worrying about being perfect in my metrics. I learned from several of these people that metrics start where they start, and you can always perfect them later. Getting started is the biggest hurdle.

Second Step I took:

I listened to what the business worried about. I had several forces at play here. There were concerns from the security teams about “unknowns”: what things were happening that we didn’t know about? There was a belief that some of these unknowns were “huge” issues. A belief, but no data. The business also believed that certain security processes were too cumbersome or prohibitive. I compiled a list of these concerns. I also thought about what metrics would help the security team understand what was going on: situational awareness.

Third Step I took:

Because I’m a critical thinker and an empathist (someone who leans towards empathy), I decided to look for data and measures that could prove or disprove the negative beliefs about our current security posture and operations.

Some specific measures I decided on: 

  • Number of firewall rule changes per week, mean and average days to approval from time of submission of ticket, mean and average days to close ticket. This helped us track performance in support of the business. 
  • Firewall acceptance and rejection rates by port. We found this partially useful, and partially a great source of a betting pool. The data gave us an understanding of what were of ports of interest for external parties. We watched patterns of opportunistic probes as they evolved (which turned into a betting pool as to what what was the top port-of-the-week). It also provided us with intelligence on targeted probes based on our industry which meant someone had mapped the IP addresses to our company and/or our services.
  • Weekly total number of potential data exfiltration communications, the variance in those communications week to week, and end-user initiated data exfiltration (activity by internal parties as opposed to externally based activity).
  • Types of detected activity on the Web Application Firewalls. While I knew this measure was fraught with issues around data accuracy and relevance, I decided to collect it anyway so we would at least have a baseline to measure our effective use of the tool. My justification (to myself) was that it was better to recognize there were 40,000 too many alerts through exact measures than to simply argue, based on a “gut feel”, that there was “just too much noise”.
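As a sketch of how the first measure might be computed (the ticket records and dates here are entirely hypothetical – your ticketing system’s fields will differ), assuming each firewall change ticket records its submission, approval, and close dates:

```python
from datetime import date
from statistics import mean, median

# Hypothetical firewall change tickets: (submitted, approved, closed)
tickets = [
    (date(2014, 3, 3), date(2014, 3, 5), date(2014, 3, 7)),
    (date(2014, 3, 4), date(2014, 3, 10), date(2014, 3, 12)),
    (date(2014, 3, 6), date(2014, 3, 7), date(2014, 3, 14)),
]

# Elapsed days from submission to approval, and from submission to close
days_to_approval = [(a - s).days for s, a, _ in tickets]
days_to_close = [(c - s).days for s, _, c in tickets]

print("changes this week:", len(tickets))
print("days to approval: mean %.1f, median %.1f"
      % (mean(days_to_approval), median(days_to_approval)))
print("days to close:    mean %.1f, median %.1f"
      % (mean(days_to_close), median(days_to_close)))
```

Reporting both mean and median matters here: one ticket that sits in someone’s queue for a month will drag the mean up while the median stays put, and that gap is itself a useful signal.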

Each of these became key metrics I’m tracking.

Challenges so far:

Collecting data. In Andrew’s book, one of his criteria for good data is that it be easy to collect. Phooey. There is no such thing. Consolidating the data collection is one of our biggest hopes for new tools. We collect data from 7 different sources and we’re not done yet. Each tool has its own dashboard and its own very useful graphs, but no way to get any of it into what we lovingly call “a single pane of glass”. Some data also require tools to gather, and some would wish to buy the “blocking” tool first. First lesson learned: start cheap, measure, and determine if you really have anything to worry about. We already found, through metrics, that while one purchase is useful in its current state, the original plan of massive spending was wasteful. There was just not enough risk presented to justify million-dollar expenditures on data exfiltration.
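We don’t have that single pane of glass yet, but conceptually the consolidation looks something like this sketch (the tool exports and field names are invented for illustration): normalize each tool’s records into one shared shape before any dashboarding is attempted.

```python
# Each source exports records in its own shape; normalize to a common schema.
firewall_export = [{"ts": "2014-03-03", "rule": "deny", "dst_port": 22}]
waf_export = [{"time": "2014-03-03", "signature": "sqli", "action": "alert"}]

def normalize(source, record, ts_field, kind):
    """Map a tool-specific record onto one shared shape, keeping the raw data."""
    return {"source": source, "date": record[ts_field], "kind": kind, "raw": record}

events = (
    [normalize("firewall", r, "ts", "block") for r in firewall_export]
    + [normalize("waf", r, "time", "alert") for r in waf_export]
)

print(len(events), "events from", len({e["source"] for e in events}), "sources")
```

The point of the sketch is the shape, not the code: once every source lands in one schema, the graphs each tool keeps to itself can finally be drawn side by side.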

We have also found that many of the metrics create situational awareness (albeit post-situation), and that comparison metrics (vulnerabilities compared to “excessive” website communications; data exfiltration traffic compared to vulnerabilities or viruses detected) add further insight.
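A minimal sketch of that kind of comparison (the weekly counts are made up): put two series side by side and look at week-over-week change rather than absolute numbers.

```python
# Hypothetical weekly counts for two metrics we compare.
exfil_traffic = [120, 130, 125, 340]   # potential exfiltration communications
vulns_detected = [15, 14, 16, 42]      # vulnerabilities detected

def week_over_week(series):
    """Change from each week to the next."""
    return [b - a for a, b in zip(series, series[1:])]

for name, series in [("exfil", exfil_traffic), ("vulns", vulns_detected)]:
    print(name, "deltas:", week_over_week(series))
# A simultaneous spike in both series is worth a closer look.
```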

There will be more, as we find things, and as we find success and outright fails.

Posted in Uncategorized | Leave a comment

Three Key Patterns for Information Security Programs

After too many years witnessing the sham that “security standards” and regulations can be, I feel like I have to be a bit of a grumpy old man. I’m not usually this way…well, I am old, but usually not terribly grumpy.

Let’s be really, really clear about something. There are some security standards that I think are quite nice, and give people the right nudge, particularly those who are growing in skills and experience. But then I watch others in the industry latch onto certain standards as if they have found the holy grail, and must bludgeon all who will not bow before this grail. Yes, you Clement Onan*, with your COSO devised SOC TSCs that are incomprehensibly obtuse. Yes, you Herbert Anchovy* with your mis-representation of the NIST CSF as a linear process instead of cyclical. And yes, you, Ernest Scribbler*, who adores the concept of merging every possible standard into an incomprehensible mess with 800 (yes, eight-hundred) individual control requirements used to bludgeon customers into sniveling gits who willingly offer up their checkbooks.

*The names of the guilty (and the innocent) have been changed, or faked, or otherwise obfuscated.

So now that you know my whipping boys, let’s talk about the proper guidance towards a solid security program:

Manage to your risks. Nearly every standard (ISO 27001, GLBA, GDPR, NIST CSF, 800-53, The Bruces…) speaks about performing a risk assessment, identifying an appropriate level of risk, and whatnot. With one big word. RISK. Get to know it. Know what it means – it’s not how many vulnerabilities your scanner finds. It is about knowing what the business is trying to do to be successful: make money, take care of customers, pay employees, and pay its bills – OPPORTUNITY. It also means the company should be able to do those things successfully without having every Tom, Rick, and Dennis Moore running through the IT systems creating havoc, stealing information that is supposed to be secret or proprietary, and stopping the productivity of employees with mounds of lupins scattered all about. Now you may have a few distractions, incidents, and lupins, but they should be at a level your executives can tolerate!

Agility to adjust to change. One thing that is certain (so we are told from the age of 12) is change. Business goals change, technology changes, attackers change, thieves change, and even tactics change as security professionals and attackers up the ante. How flexible are you? How flexible is your team? Have you bought into a system of security that you believe you can rely on for the next 300 years? Or are you smart enough to consider that it is probably out of date before you even buy it? (Let’s face it, all security defense is out of date, since it is responding to an attack that has already happened!) Your ability to change, pivot, and adjust should be just as fast as your executive team’s tolerance for what goes wrong.

Continuous improvement. The ability to change is also coupled to the ability to know what works and what does not. Understand what threats are actually preying on your environment, what incidents will cause your executives to be upset, and when your hovercraft is full of eels. Data points like these can be very helpful to identify where your program is working, and where it is not. And with simple translation you can communicate the same to your executives by showing what (realistic) monetary impact is being avoided, and what opportunities are able to continue unabated.
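The “simple translation” to monetary impact can be as plain as classic annualized loss expectancy arithmetic – ALE equals annual rate of occurrence times single loss expectancy, compared before and after a control. A sketch with invented figures:

```python
def ale(annual_rate, single_loss):
    """Annualized Loss Expectancy: expected yearly loss for one threat."""
    return annual_rate * single_loss

# Hypothetical: phishing incidents before and after new awareness training.
before = ale(annual_rate=12, single_loss=25_000)  # 12 incidents/yr at $25k each
after = ale(annual_rate=4, single_loss=25_000)    # 4 incidents/yr after the control
control_cost = 50_000

avoided = before - after
print(f"avoided ${avoided:,} per year against ${control_cost:,} spent")
```

Keep the “(realistic)” caveat from above in mind: the arithmetic is trivial, and the hard (and honest) work is in defending the rate and loss figures you feed it.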

Keep a very important point in mind: standards and regulations should inform our security programs, not drive them. Our company’s needs, opportunities, and the threats to them should drive our security programs. Standards should be used only as a guide to give those programs a form – a way to order or structure them, nothing more. Trying to take a standard or regulation and say that it covers everything your program needs is like saying a comfy cushion will get the confession you want!


The Fallacy of Permanence

I’m sure Daniel Kahneman has defined this fallacy in better terms, but it is a good story to show one of the potential reasons why the concepts of DevOps and Lean are so valuable – and also why certain types of IT business are profitable while others eke by on small margins.

When you decide you want to put in a new lawn, rarely will you find yourself starting your search by looking for the “cheapest” option. Your first inclination is to find the most attractive or appealing lawn you can. You may set a budget and search within those criteria, but more often than not, you settle on something at the upper end of that scale. You want something nice. The choice of purchase leans towards the aesthetics and (in many senses) the outcome you want to achieve – a nice, attractive lawn.

Then phase two happens – someone has to mow it, fertilize it, and care for it. This is an operational process that goes on over time, all in parallel to your day-to-day life. As time passes, that operational process is subject to your emotional and financial ups and downs, and resource availability. When money is scarce, you try to save and maybe skimp on how often you fertilize that lawn or mow it. When times are flush you might re-patch that section that went brown when a dog decided it was a nice spot for a bio-break. Cost, effort, and resource management become your focus, with cost being the one item that shows up time and time again (aside from those recurring brown spots).

The original image of what you were buying when you purchased the sod to make your lawn beautiful affected your purchase decision, and your willingness to “spend a bit more” to get what you wanted. That willingness to put out money went away when you got to the operational aspect of the lawn, because that aspect was subject to the ups and downs of real life.

If the parallel to IT build and operations is not apparent, then let me illustrate.

Companies will be enamored with the image of a beautiful new system, complete with beautiful interfaces, fantastic business processes and workflows, and a few colorful blinky lights thrown in for good measure. They will pay a premium dollar to achieve that outcome because their eye is on the outcome and opportunity that they see. They are making decisions from a macro view of opportunity (what money am I going to make from all this goodness) and cost (what is my view of capital investment to achieve all this goodness).

With the system now installed, the company goes through its ups and downs. The original opportunity sees its ups and downs with revenue being variable over time due to economic conditions or customer satisfaction issues. Company revenue also is affected, and the picture of spend now changes. The focus is how to squeeze a few more pennies out of day-to-day operations, how to be more efficient, and how many brown spots can we tolerate in our lawn.

The lessons to learn from this story:

  1. When looking at an opportunity, recognize the bias you have towards thinking that it has one state, and that that state involves a big celebration of its success and the money it earned – all at a single point in time. That thinking ignores the ongoing lifecycle: opportunities live, breathe, change, grow, turn brown, and require careful care and feeding.
  2. In the world of rapid development and Agile (and for the most part DevOps) systems are not permanent. They evolve, change, and adjust to the world at hand which can reduce the bias. While the initial standup of an IT system (just like putting in a new lawn) has upfront work, it is an ever evolving system that changes over time. Even your lawn goes through evolution (weather changes, adjustments to allow for those rose bushes, or patches to those designer dog markings).
  3. In the world of DevOps, operations comes with the development. They are inextricably intertwined (or should be). This means development, new ideas, and new features go up and down with the fortunes of the company as well. Decisions are made as a whole – considering the new opportunity *and* the current operation of the system. Efficiency is evaluated as a system, rather than in discrete parts. Even a lawn can be managed this way – with incremental change such as increases in a capital spend (weather driven sprinklers) that can reduce an operational spend (water costs).
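To make the sprinkler example in point 3 concrete (the dollar figures are invented), evaluate the capital spend and the operational spend together, as one system over its whole life:

```python
def total_cost(capital, annual_ops, years):
    """Lifecycle cost: upfront spend plus operations over the period."""
    return capital + annual_ops * years

YEARS = 10
manual = total_cost(capital=0, annual_ops=1_200, years=YEARS)        # hand watering
sprinklers = total_cost(capital=3_000, annual_ops=600, years=YEARS)  # weather-driven

print("manual:", manual, "sprinklers:", sprinklers)
# Judged on upfront cost alone, sprinklers lose; judged as a system
# over the lifecycle, the capital spend pays for itself.
```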

The moral of the story is that different phases in the lifecycle of your IT induce different spending biases. These can lead to overspending in implementation and underspending in operations. An awareness of this bias, and the use of an evolutionary and incremental view of systems, can positively affect those patterns. Consider them…

And as a careful aside, if you are selling IT services and systems to customers, recognize where the spending bias occurs in this cycle (and hence where margin lies).


DevOps is dead, long live Dev!

Yes, it’s hyperbole.  But the headline is important.  In 2020 I still encounter companies who are moving into cloud, yet are immovably mired in their traditional way of doing IT.  They are somehow convinced that a group of infrastructure folks build some things, some developers roll up, drop off some applications, and they are done.  Worse are those that believe they can simply hand a set of tools to a team and call them DevOps because they’re using Chef, Terraform, or some such “automation” that makes it DevOps.  When in fact all they are doing is putting lipstick on a pig.

What these organizations miss is that when you want to make your IT modern, you leave behind a siloed approach where things are thrown over a wall.  You no longer have a bastion of operations that is saddled with unknown environments requiring full run-books and detailed operations manuals, and you no longer build infrastructure as a stand-alone function of IT that is separate from development.

You must be one!  You must engineer!  Not like an operations person, but in the way a developer engineers.  He writes just what he needs to get the job/request done and moves to the next task he’s been assigned.  He thinks about how things fit together and builds constructs in his code to link his components together.  He understands and runs tests to make sure what he wrote does what he wants and needs it to do.  If all of this is done under the right conditions (which means there is verification that all is okay), then his code is ready to be pushed into production.

At this point you might be saying, that’s great for a developer, but not for infrastructure.  My retort is that I *am* describing infrastructure.  An Infrastructure Developer.  And if you are working in cloud, anyone working on infrastructure had better be writing code.  Any other pattern and you are repeating the sins of the past.  They are now developers.  If you are unable to recognize the error of not thinking this way, perhaps it’s time to step back from your mouse and GUI console and reconsider.  Anything other than treating infrastructure as development of code risks making your environment untestable, unstable, unrepeatable, unrecoverable, and un-understandable.  So think carefully.  The world is now the world of developers and code.  If you fail to recognize this and adapt, you risk succumbing to the real (r)evolution.
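A toy illustration of the mindset (the spec format and policy checks here are invented, not any real tool’s API): the infrastructure is expressed as data plus code, and it is tested like code before it goes anywhere near production.

```python
# Desired infrastructure expressed as data...
spec = {
    "web": {"instances": 2, "open_ports": [443]},
    "db": {"instances": 1, "open_ports": [5432]},
}

FORBIDDEN_PORTS = {23, 3389}  # policy: no telnet or RDP exposed

def validate(spec):
    """...and verified like code: return a list of policy violations."""
    errors = []
    for name, tier in spec.items():
        if tier["instances"] < 1:
            errors.append(f"{name}: needs at least one instance")
        bad = FORBIDDEN_PORTS & set(tier["open_ports"])
        if bad:
            errors.append(f"{name}: forbidden ports {sorted(bad)}")
    return errors

assert validate(spec) == []  # only a clean spec is ready for production
```

The specifics don’t matter; what matters is that the environment is code, so it can be reviewed, tested, versioned, and rebuilt – none of which a mouse-and-GUI console gives you.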

And just to be clear, I’m describing DevOps (in its proper, pure form). But because people have already taken to bastardizing and mis-using the term, I’m just going to slap you with some hyperbole.

Posted in DevOps, DevSecOps, Uncategorized | Leave a comment

I Love the Subject of Change Control

I love it not because it is wrapped in complexity, but for quite the opposite reason; it is (and should be) a perfect case of simplicity.

Let me explain why with a quick story of bad change control.

I watched an organization react to the pretty significant failure of a change by instituting a new step in their change control process: “The CIO or VP of Operations must now approve all changes.”  I didn’t know whether to laugh or cry at the naivety of such a move.  What value can a CIO or VP of Operations add to a change that would make it any less likely to fail?  The answer is none – unless they designed the change, are implementing it, or are somehow an intimate expert in it, which for a large organization would be a gross misuse of their time.  The only thing such a step achieves is to demonstrate that you have no confidence in the people who work for you.  While that lack of confidence may be warranted, the action does absolutely nothing to improve the probability that changes will go better in the future.  Instead it breeds a culture of fear and a belief in the inferiority of the staff, which slows the velocity of new changes and services to a crawl, with no one in IT willing to take chances, let alone do things that should be necessary.  The IT department grinds to a standstill, its customers become dissatisfied, and innovation and new products die on the vine.

Instead, what if every change failure were subject to a proper root-cause analysis (5 Whys)?  What if that root-cause analysis led to investment to correct the root cause (training, corrected documentation, better testing, patience to do things right, breaking a cycle of rushing, executing necessary maintenance, coordinating schedule changes, communicating changes to customers…)?

Now you’ll have an IT organization that corrects its mistakes by fixing the real cause, with support from management to do so.  Replacing a culture of distrust with one that engages Maslow’s higher-order needs – growth and contribution – can be painful when errors are made, but is personally (and collectively) rewarding when lessons are learned and executed on correctly.

Psychology studies have shown that punishment and encouragement can both create changes in behavior, but one creates fear of taking chances, while the other tends to promote development.  Each has its place.  They should each be used appropriately to create the environment necessary.  But starting with punishment and distrust is not a good precedent.  Start with trust, and punish when that trust is repeatedly broken.


Unicorns (and how Gene Kim challenges us yet again…)

I had the opportunity to read Gene’s new book, The Unicorn Project, last month. Like The Phoenix Project, it riveted me – I nearly missed my tube stops on the way to work. My distractions usually came from my attempts to align the concepts of the Five Ideals with how many of my legacy clients have worked (and continue to work). I found myself (once again) immersed in looking for ways to best unravel those old ways of thinking that need to change. Yes, once again, I found a book that makes me think hard.

The Unicorn Project is written from the perspective of application developers – a perspective that was somewhat glossed over in The Phoenix Project – not intentionally I am sure, but because each book has to choose an audience. The Unicorn Project challenges many of the ways companies have done things in application development and the approach to the products and services they offer out through IT. Some of the challenges seem trite (and yet still pervasive) such as silos, turf wars, politics, and finger pointing. Some of the challenges are pervasive and yet unavoidable such as legacy systems that no one wants to touch, applications that have tentacles of dependencies that seem insurmountable, and an unwillingness to tackle the big issues and technical debt that litters our data centers.

I took away many ideas from The Phoenix Project back in 2013, and now I have many more from The Unicorn Project. For one, the concepts of startups and innovation within large organizations really tickled my fancy. And, as always, there is the usual parade of wonderful ideas for achieving Lean, collaboration, and the Five Ideals. Yet one concept that really endeared The Unicorn Project to me was the focus on Flow and psychological safety – something that is very near and dear to me. This book should inspire you to scratch your head, maybe even miss a few stops on your ride to work, and get you thinking about how you and your people can work better in IT.

Oh, and now it’s available, here: https://itrevolution.com/the-unicorn-project/
