First, a disclaimer…this post is *not* about bashing or ranting about Equifax’s security practices. Why? Because I do not have firsthand knowledge of what they did or did not do, or of what specific exploits and vulnerabilities were leveraged throughout the kill-chain of the event. Frankly, it’s likely only the security investigators (internal and external), legal team, and outside counsel will ever know the details. Which is just fine by me. If you wonder why, then you’ve obviously never been involved in a breach and the subsequent investigation. There is a lot of conjecture (some logical, some not so logical), a lot of hand-wringing, certainly a lot of drinking (after hours), and a whole lot of lost sleep and hair (if you have any to begin with).
So why would I mention that?
Because I want to rant for a moment about the security community and the press who seem to have taken issue with how Equifax was breached.
This has nothing to do with their response to the breach. Let’s set aside Equifax’s horrible response after the fact. I will not condone, support, or even pretend to empathize with it. To put it mildly, their response to the breach sucks. You were breached. Mea culpa, and treat your customers, constituents, and the not-so-willing-public-whose-data-you-have like your own injured child who you just accidentally knocked off a ladder and gave a lump on the head (and maybe a concussion).
Let’s instead talk about the blame we seem so eager to apportion. Security professionals, take note of something we always say:
– It is not “IF” you will be breached, but “WHEN”
So suddenly Equifax is evil because they were breached?
You may counter, “but they had a vulnerability that was *3* months old!!!!”
Um, yeah….about that. Let me ask you how old the vulnerabilities are on the laptop that you use for your pen-testing. And if you are a CISO or other security professional employed at a company, and you believe you patch your public-facing systems perfectly in less than 90 days, you are *woefully* uninformed, I would argue “naive” about how companies work, and not plugged into something called “risk acceptance”. Ouch, I think I just touched some nerves, but let me assure you, this is not personal. It is about the dynamic of an organization – something that outsizes the best of us.
Again, I cannot say this is Equifax, but I can say that nearly every company I’ve come into contact with struggles with this same problem.
Security Team: “Bad vulnerability, and it’s out there exposed to the Internet. We must patch it right away!”
Development Team: “Can we test this first? It’s probably going to break the application.”
Business Team: “This is a really critical app for our business group. Please don’t break it.”
Managers: “Don’t break the app. Can this wait?”
Executives: “We’re listening to all the people here, and can we please just not break things? Let’s take it slow and test.”
Development Team: “We have features to get out that are a priority and are scheduled to go out in the next three weeks for a customer.”
Business Team: “Please don’t interfere with this critical customer need.”
Executives: “Can we please not break things…”
Development Team: “The patch breaks something. It will take us a couple of months to figure out what. Meanwhile we have these other features to get in.”
….
See a trend? I don’t want to represent this as an endless cycle. The reality is (at least for the organizations I’ve worked with) that they do eventually, in a fairly reasonable period of time (which I will admit is a *very* subjective assessment), get around to figuring it out and fixing whatever is broken by the patch. Some organizations are great at it, and it might take one or two sprints to figure it out. Others have other priorities or long backlogs, and maintenance work doesn’t rank as high a priority, but they still get to it within 3-6 months. In some cases, depending upon the complexity of what a patch breaks, that’s pretty darn good. And if you are skeptical of that, you need to spend a bit more time embedded in a development team.
I remember quite a few years ago listening to a talk at BSidesSF (one of the early years) from someone whose day job was to teach companies how to write secure code and evaluate code for security vulnerabilities. He talked about a program that a customer asked them to write, and how, in the process, they committed exactly the same secure programming mistakes they lectured their customers to avoid. They had vulnerabilities in their code that were easily exploitable. They found that deadlines made them take shortcuts and not get around to putting to use all the best practices that they could (or maybe should) have. And these were individuals I held in very high regard in the application security field. They admitted – “It’s hard in the real world to do it right.”
So what should we learn from Equifax?
Security isn’t perfect. We shouldn’t gang up on an organization just because they had a breach. Every organization is trying to balance a business opportunity with the risk posed to that opportunity. It’s a balance. It’s a risk equation. It’s never pretty, but let’s face it, most organizations are not in the business purely for the sake of security. Every control costs money, causes customer frustration, and has an impact on revenue. You may say a breach does too, and it does, but there is a balance. Where exactly that balance is can be a subject of great debate because it is not precise, and can never be predicted.
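To make that balance concrete, here is a deliberately oversimplified, back-of-the-envelope sketch in Python. Every number in it is invented for illustration (it is not Equifax data, and it is not a FAIR model); the only point is that controls and breaches both carry costs, and the comparison is never clean:

```python
# Toy illustration of the risk balance described above.
# All figures are hypothetical; real estimates come from your own loss data,
# risk analysis (e.g. FAIR-style), and input from the business.

annual_breach_probability = 0.05       # rough chance of a material breach this year
estimated_breach_impact = 4_000_000    # rough loss if it happens (USD)

expected_annual_loss = annual_breach_probability * estimated_breach_impact   # 200,000

control_cost = 150_000        # licensing, staffing, and maintenance for a new control
revenue_friction = 30_000     # estimated revenue lost to added customer friction
residual_probability = 0.02   # breach probability if the control is in place

expected_loss_with_control = residual_probability * estimated_breach_impact  # 80,000
total_with_control = expected_loss_with_control + control_cost + revenue_friction

print(f"Do nothing:      ~${expected_annual_loss:,.0f} per year in expected loss")
print(f"Add the control: ~${total_with_control:,.0f} per year, all-in")
```

With these made-up numbers the control actually looks like the more expensive option on paper, which is exactly why “where the balance is” turns into such a contentious, imprecise debate.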
Patching is much more than just “patch and forget”. Application patching is even more complex. The alleged vulnerability cited in the Equifax breach was 90 days old. Even if it was 180 days old, there are factors we cannot even begin to understand. Competing business interests, a belief that its exploitation couldn’t be leveraged further, a penetration team that didn’t find it or the exposure it could lead to because the applications were too complex to understand, or even human error missing the finding through a reporting snafu. Stuff happens….no one is perfect, and we shouldn’t throw stones when our own houses have (despite our own protestations otherwise) just as much glass in them.
Ultimately, there are some practices that can help, but I will put a disclaimer here – these may already have been in place at Equifax. Again, we are human, and systems/organizations are complex. Complexity is hard to get right. We also don’t know the full kill-chain in the Equifax scenario. There may be other things that would help, or for that matter, these things may have been in place and it still required even more complex efforts to address the root cause. That said, here are some things I would suggest:
- Try to understand every application in your environment and how they tie together. Knowing the potential chains of connection can help you understand potential kill-chains.
- Create multiple layers of protection – so you can avoid a single failure resulting in catastrophic loss. You can liken this to the “Swiss cheese” model, where a breach only cascades further and further into systems and data when the holes in multiple layers line up (or when there aren’t any layers at all).
- Run red-team exercises with targets as goals (e.g. that big database with customer data, or the AD domain user list). Let your red team think like an outsider with a fun goal and the flexibility of time to figure out how to get there. The results will inform you where you can improve primary controls, or where you can add additional layers of protection.
- Patch external systems with far more urgency than internal ones. This seems pretty obvious, but sometimes the way we represent vulnerabilities is too abstract. I have found that using the language of FAIR has been an immense help. Two factors I try to focus on: Exposure (what population of the world is it exposed to) and Skill/Effort to exploit (is it easy or hard). Given the volume of opportunistic threat attempts (a.k.a. door knob twisting), it makes sense to point to those values as key indicators of what will happen with exposed vulnerabilities (there is a small sketch of that kind of prioritization just after this list). I once pointed to the inordinate number of queries on a specific service port that a client used as proof that the “Internet knew they were there…” which leads to my last point…
- Communicate in a language that people can understand, and in ways that make it real. If you talk in CVSS scores, you need to go home. Sorry, but to quote a favorite line of mine, it’s “Jet engine times peanut butter equals shiny.” (thank you Alex Hutton, your quote is always immortalized in that fan-boy t-shirt). Put it in terms like: “The vulnerability is exposed to the Internet, there is nothing blocking or stopping anyone from accessing it, and the tools to exploit it are available in code distributed openly to anyone who has Metasploit (an open-source, freely available toolkit). The attacker can then execute any command on your server that the attacker wants, including getting full, unfettered access to that server, its data, and….”
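To illustrate the patching-priority point above, here is a minimal sketch of ranking findings by those two factors. It is not a FAIR implementation; the weights, categories, and example findings are all invented for the sake of the example:

```python
# Minimal sketch: rank vulnerabilities by exposure and by the skill/effort
# required to exploit them. Weights and findings below are hypothetical.

EXPOSURE_WEIGHT = {"internet": 3, "partner network": 2, "internal only": 1}
EFFORT_WEIGHT = {"exploit kit available": 3, "public PoC": 2, "custom research needed": 1}

findings = [
    {"name": "App server RCE", "exposure": "internet", "effort": "exploit kit available"},
    {"name": "Legacy admin console auth bypass", "exposure": "partner network", "effort": "public PoC"},
    {"name": "Weak cipher on intranet app", "exposure": "internal only", "effort": "custom research needed"},
]

def urgency(finding):
    # Internet-facing plus easy-to-exploit floats to the top of the patch queue.
    return EXPOSURE_WEIGHT[finding["exposure"]] * EFFORT_WEIGHT[finding["effort"]]

for f in sorted(findings, key=urgency, reverse=True):
    print(f"{urgency(f):>2}  {f['name']}  ({f['exposure']}, {f['effort']})")
```

The exact weights matter far less than the habit of talking about exposure and ease of exploitation instead of abstract scores.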
Those are things I coach my teams on. Things we should look at and learn from. Because we need to find data that helps us get better.
One last thing that chafed my hide…
Some people had the audacity to say “…who would hire a CISO with a college major in music…”
Setting aside the rather comical philosophical rant I could make based on UCI’s research on the effects of Mozart on students studying mathematics, I’d like to put forth my own experience.
I hold a Bachelor of Architecture (yes, buildings!) with a minor in Music, and two years of post-bachelor’s study in organizational psychology. I am a fairly accomplished security consultant (who has done penetration testing and programming) and CISO. My degree is not a disqualification from being a CISO, any more than a Music degree disqualifies the former CISO of Equifax from having held her job. Simply put, “COMPUTER SCIENCE IS NOT A PREREQUISITE FOR BEING A CISO”.
I have interviewed dozens of CISOs around the world. Nearly every one of them said they liked having liberal arts majors and people from outside Computer Science fields on their teams because they brought a very different insight and analysis to the team. It is my opinion that by the time you have reached five (5) years of experience, your college education is largely immaterial. There are theories and facts that college teaches you – such as what a petaflop is, how if-then statements work, and the theory of asymmetric encryption, but college does not tell you how to use them in the ever-changing dynamic of real life. I call those skills the ability to analyze, synthesize, and respond: in other words, the act of design.
For the CISO of Equifax, it is likely that her skills in analysis and design, and her ability to communicate those thoughts to executives, were highly developed. It is also likely that she had experience with software, with networks, and with other technical areas. I can relate because in my undergraduate education for Architecture we had to take a Pascal programming class in our freshman year. We had to take a “Computers in Architecture” class. What I did with that was unique, and I suspect what the former CISO of Equifax did with her experiences was unique as well. Putting a blanket assumption over anyone’s experience is ill-informed and, frankly, quite naive. Have a chat with them. Know their skills, learn what made them capable and skilled, or at least trusted at what they do. Then critique what they have brought to the table *today* as a result of all of their experiences (school included, but also all their work since then).
So let’s all put down the pitchforks and stones we were going to throw at someone else’s glass house, go back to tending our own, and note how theirs got broken as a way to learn how to protect ours. Because what I’m hearing so far isn’t helping, and is based on a lot of arm flapping by people far too interested in pointing at other people’s glass houses to tend their own.