Better Late than Never: My First Foray into Real Metrics

Author’s Note: this post was written back in 2013 but never made it to publication until now. Forgive the delay and the references to old presentations that may no longer be accessible.

It’s been a while since my last post, and I’ll blame it on the extra fun work I’ve taken on this year. One of those projects has been starting a metrics and data analysis project for a client. Because I have a pretty long runway with this client, I’ve had time to think about what might be most useful and how to approach it. I’m early in my journey, so I’m going to talk about my experiences and the right and wrong turns I took.

First Step I took:

I gathered as many killer insights as I could from listening to the Risk Hose podcast, sitting across the table from Jay Jacobs at the Metricon dinner, and a great presentation by Brian Keefer on his and Jared Pfost’s experiences. If you haven’t looked up these resources, you need to. They are great resources to get you going. So are Andrew Jaquith’s book “Security Metrics” and the SIRA site. I also stopped worrying about being perfect in my metrics. I learned from several of these people that metrics start where they start, and you can always refine them later. Getting started is the biggest hurdle.

Second Step I took:

I listened to what the business worried about. I had several forces at play here. There were concerns from the security teams about “unknowns”: what things were happening that we didn’t know about? There was a belief that some of these unknowns were “huge” issues. A belief, but no data. The business also believed that certain security processes were too cumbersome or prohibitive. I compiled a list of these concerns. I also thought about what metrics would help the security team understand what was going on: situational awareness.

Third Step I took:

Because I’m a critical thinker and an empathist (someone who leans towards empathy), I decided to look for data and measures that could prove or disprove the negative beliefs held about our current security posture and operations.

Some specific measures I decided on: 

  • Number of firewall rule changes per week, mean and median days from ticket submission to approval, and mean and median days to close the ticket (a rough sketch of this calculation appears after the list). This helped us track performance in support of the business. 
  • Firewall acceptance and rejection rates by port (see the tally sketch after the list). We found this partially useful, and partially a great source of a betting pool. The data gave us an understanding of which ports were of interest to external parties. We watched patterns of opportunistic probes as they evolved (which turned into a betting pool over what the top port-of-the-week would be). It also provided us with intelligence on targeted probes based on our industry, which meant someone had mapped the IP addresses to our company and/or our services.
  • Weekly total number of potential data exfiltration communications, week-to-week variance in those communications, and end-user-initiated data exfiltration (activity by internal parties as opposed to externally based activity); a variance sketch also appears after the list.
  • Types of detected activity on the Web Application Firewalls. While I knew this measure was fraught with issues around data accuracy and relevance, I decided to collect it anyway so we could at least have a baseline against which to measure our effective use of the tool. My justification (to myself) was that it was better to show through exact measures that there were 40,000 too many alerts than to simply argue the point based on a “gut feel” that there was “just too much noise”.

Each of these became a key metric I’m tracking.
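
As an illustration, here’s a minimal sketch of the turnaround math from the first bullet, assuming a CSV export of firewall change tickets with hypothetical column names (submitted, approved, closed, as ISO dates). It’s the arithmetic, not our actual tooling:

```python
import csv
from datetime import date
from statistics import mean, median

def days_between(start: str, end: str) -> int:
    # Both dates are assumed to be ISO formatted, e.g. "2013-04-02".
    return (date.fromisoformat(end) - date.fromisoformat(start)).days

approval_days, close_days = [], []
with open("firewall_tickets.csv", newline="") as f:  # hypothetical export
    for row in csv.DictReader(f):
        approval_days.append(days_between(row["submitted"], row["approved"]))
        close_days.append(days_between(row["submitted"], row["closed"]))

print(f"approval: mean {mean(approval_days):.1f} / median {median(approval_days)} days")
print(f"close:    mean {mean(close_days):.1f} / median {median(close_days)} days")
```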
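
The port tally is just counting. A rough sketch, again assuming a hypothetical CSV export with action and dst_port columns; the same counting pattern works for WAF alert types:

```python
import csv
from collections import Counter

accepts, rejects = Counter(), Counter()
with open("firewall_log.csv", newline="") as f:  # hypothetical export
    for row in csv.DictReader(f):
        # Tally each log entry under its destination port.
        (accepts if row["action"] == "accept" else rejects)[row["dst_port"]] += 1

# Settle the port-of-the-week betting pool with data:
for port, hits in rejects.most_common(5):
    print(f"port {port}: {hits} rejected probes")
```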
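
And the week-to-week exfiltration trend is a small statistics exercise. A minimal sketch, with made-up weekly totals standing in for real counts:

```python
from statistics import mean, pstdev, pvariance

# Hypothetical weekly totals of flagged outbound communications.
weekly_counts = [112, 98, 130, 104, 121]

avg = mean(weekly_counts)
print(f"mean weekly total:     {avg:.1f}")
print(f"week-to-week variance: {pvariance(weekly_counts):.1f}")

# Flag any week sitting more than two standard deviations from the mean.
for week, n in enumerate(weekly_counts, start=1):
    if abs(n - avg) > 2 * pstdev(weekly_counts):
        print(f"week {week}: {n} looks anomalous")
```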

Challenges so far:

Collecting data. In Andrew’s book, one of his criteria for good data was data that is easy to collect. Phooey. There is no such thing. Consolidating data collection is one of our biggest hopes for new tools. We collect data from 7 different sources, and we’re not done yet. Each tool has its own dashboard and its own very useful graphs, but no way to get any of it into what we lovingly call “a single pane of glass” (a rough sketch of that normalization problem is below). Also, some data require tools to gather, and some would wish to buy the “blocking” tool first. First lesson learned: start cheap, measure, and determine if you really have anything to worry about. We already found one purchase that, while useful in its current state, showed us through metrics that the original plan of massive spending was wasteful. There just wasn’t enough risk to justify million-dollar expenditures on data exfiltration.
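
For what it’s worth, the “single pane of glass” problem is mostly a normalization problem. A hedged sketch of the idea, with hypothetical source formats and field names standing in for whatever your firewall and WAF actually export:

```python
from dataclasses import dataclass

@dataclass
class MetricEvent:
    source: str    # which tool produced the number
    week: str      # ISO week, e.g. "2013-W14"
    category: str  # e.g. "fw_reject", "waf_sqli", "exfil_suspect"
    count: int

def from_firewall(row: dict) -> MetricEvent:
    return MetricEvent("firewall", row["week"], f"fw_{row['action']}", int(row["count"]))

def from_waf(row: dict) -> MetricEvent:
    return MetricEvent("waf", row["week"], f"waf_{row['rule_class']}", int(row["hits"]))

# One adapter per tool; every adapter emits the same shape, so a single
# report can consume all seven sources without caring where the data came from.
events = [
    from_firewall({"week": "2013-W14", "action": "reject", "count": 1200}),
    from_waf({"week": "2013-W14", "rule_class": "sqli", "hits": 40000}),
]
for e in events:
    print(e)
```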

We have also found that many of the metrics create situational awareness (albeit post-situation), particularly when we look at comparison metrics: vulnerabilities compared to “excessive” website communications, or data exfiltration traffic compared to detected vulnerabilities or viruses. A sketch of that kind of comparison follows.
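
A hedged sketch of lining up two such weekly series to see whether they move together, with made-up numbers (statistics.correlation requires Python 3.10 or later):

```python
from statistics import correlation  # available in Python 3.10+

# Hypothetical weekly series: detected vulnerabilities vs. suspected
# exfiltration communications.
vulns_per_week = [14, 9, 22, 17, 11]
exfil_per_week = [98, 76, 140, 120, 88]

print(f"Pearson correlation: {correlation(vulns_per_week, exfil_per_week):.2f}")
```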

There will be more as we find things, and as we find successes and outright fails.

About Daniel Blander

Information security consultant who has spent twenty-plus years listening, discussing, designing, and creating solutions that fit the requirements presented. President, Techtonica, Inc.