I encountered an interesting post yesterday over on the Smart Grid Security blog. In it, the author asked the question:
"Without a lingua franca for security, how will anyone ever know which organizations are doing a comparatively better or worse job? Whether one's own organization is kicking butt or having its butt kicked?"
It's an interesting question, and one that has been discussed a lot over the years. However, I think I've reached a new conclusion on the subject, which is this: only a very few Key Performance Indicators (KPIs) are appropriate, useful, or necessary when it comes to public sharing.
Specifically, coming from an engineering perspective, I think that the standard IT KPIs of availability, mean time between failures (MTBF), mean time to repair (MTTR), and unplanned downtime matter most, even with regard to security. We may also want to add some sort of reasonable risk assessment, as it relates to the SEC Q1 filing guidance, but those numbers will tend to be stated as dollars (which we understand) rather than as operational metrics.
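To make those four KPIs concrete, here is a minimal sketch of how they fit together numerically. The incident records below are made-up illustrations, but the steady-state availability formula, MTBF / (MTBF + MTTR), is the standard one:

```python
# Minimal sketch: deriving the four core IT KPIs from one outage log.
# The incident data are hypothetical, not real figures.

# Each tuple is (hours of uptime before the failure, hours to repair it).
incidents = [(700.0, 4.0), (1200.0, 2.5), (450.0, 8.0)]

mtbf = sum(up for up, _ in incidents) / len(incidents)      # mean time between failures
mttr = sum(down for _, down in incidents) / len(incidents)  # mean time to repair
availability = mtbf / (mtbf + mttr)                         # steady-state availability
unplanned_downtime = sum(down for _, down in incidents)     # total unplanned downtime

print(f"MTBF: {mtbf:.1f} h   MTTR: {mttr:.1f} h")
print(f"Availability: {availability:.4%}")
print(f"Unplanned downtime: {unplanned_downtime:.1f} h")
```

The point of the exercise is that all four numbers fall out of the same outage log, which is part of what makes them cheap to report consistently across organizations.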
Now, don't get me wrong... I think that each organization should have other measurements. However, what each org measures, and how it uses those metrics, will tend to be unique and specialized. My point here is that there are really very few publicly sharable metrics that matter, and I think they come back to what we already know as useful IT KPIs. More importantly, moving to this approach ties in perfectly with a topic I'll be writing about soon: eliminating "security" as a category altogether in favor of focusing on IT operations reliability and GRC practices.