InfoRisk Tolerance


In financial risk management - and particularly with investments - there is a reasonable concept of risk tolerance. You can determine ahead of time just how aggressive you want to be with your portfolio, which maps directly to how much risk you're willing to tolerate (essentially, how low a probability of return or how high a probability of loss you're willing to accept). In information risk management we've talked about similar concepts, which on the surface might seem fine, but which in practice seem to be inconsistent, unnecessary, and - frankly - outright irrelevant.

How this line of thought got started relates to my recently walking through how we would build out a GRC program, in a generic sense, for customers. Logically, it seemed to make sense that you'd first establish your inforisk "tolerance," which could then be used to articulate your priorities and set policy. In practice, however, this isn't how things work. Instead, you gather as much information as possible for a specific decision, and then you decide your risk tolerance on the fly. There isn't really a starting point of "X amount of risk is acceptable."

In flipping things around this way, it seems clear that inforisk management really is better aligned to decision management/analysis than it is to financial risk management. Stop and think about this for a minute (if you'll be so kind). InfoRisk is about decisions, not about risk management; at least, not in the same sense as managing financial risk. This conclusion becomes muddier when you apply a tool like FAIR to your analysis, which helps produce monetary estimates for risk factors around a given scenario. However, if you think about it, you're not using FAIR to establish an up-front risk tolerance, but rather are using it as a decision analysis tool to either evaluate a given scenario, or to evaluate the effectiveness of applying controls to a given scenario.
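To make the "decision analysis tool" framing concrete, here's a minimal Monte Carlo sketch in the spirit of a FAIR-style comparison. All of the numbers (event frequency, loss-magnitude parameters, control cost and effect) are hypothetical stand-ins, and this is a deliberate simplification of FAIR, which in practice uses calibrated estimates and richer distributions; the point is just that the output informs a specific control decision rather than a standing "tolerance."

```python
import math
import random

def poisson(rng, lam):
    """Poisson draw via Knuth's algorithm; fine for the small event rates here."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def annual_losses(rng, lef, mu, sigma, years):
    """Simulated total loss per year: Poisson event count, lognormal loss per event."""
    return [
        sum(rng.lognormvariate(mu, sigma) for _ in range(poisson(rng, lef)))
        for _ in range(years)
    ]

rng = random.Random(42)
YEARS = 20_000

# hypothetical baseline scenario: ~2 loss events/year, median ~$60k per event
baseline = annual_losses(rng, lef=2.0, mu=11.0, sigma=1.0, years=YEARS)

# hypothetical control: halves event frequency, costs $25k/year to operate
CONTROL_COST = 25_000
with_control = annual_losses(rng, lef=1.0, mu=11.0, sigma=1.0, years=YEARS)

ale_base = sum(baseline) / YEARS
ale_ctrl = sum(with_control) / YEARS + CONTROL_COST
print(f"ALE baseline: ${ale_base:,.0f}  ALE with control: ${ale_ctrl:,.0f}")
```

The output is an input to one decision ("is this control worth $25k/year for this scenario?"), which is exactly the evaluate-at-decision-time pattern described above.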

As such, I think that your GRC process will look something like this:
1) Establish business requirements, including externally-asserted controls (e.g. regs, contracts).
2) Establish governance processes.
3) Articulate and mandate requirements and processes by way of policies, standards, etc.
4) Identify appropriate controls to meet requirements and support processes.
5) Measure effectiveness.
6) Repeat.

In the TEAM Model, I've boiled that down to a cycle of Risk Mgmt, InfoSec Mgmt, and Quality & Performance Mgmt, which are all centered around the articulation of requirements. I think that model is still correct today, though I think that the Risk Mgmt step is really more about the overall strategic and tactical functions of a GRC program.

What you'll note is absent from all of this is any sort of discussion of "risk tolerance." That's because it's not really a question of tolerance (or "appetite" in some circles), but rather a matter of having as much reliable information as is reasonable at decision time. You'll note that this notion tracks very nicely with legal defensibility theory, too. The only thing missing is survivability, which isn't really missing so much as part of a different conversation, one that occurs as part of your overall business requirements and governance strategy (that is: what's important to your business operations? this could also rabbit-trail into a discussion of operational risk...).

Just to wrap this thought up... the next time you hear mention of "risk tolerance," please challenge the assertion... ultimately, with information risk it's not a question of tolerance, but rather decision analysis, and hopefully the ability to measure control effectiveness in the end toward achieving overall governance objectives. Or something like that. ;)

1 TrackBack

Soooo... from the firestorm I sparked with my last post, it's very clear I did a really lousy job expressing my point, and that I've also apparently riled folks up by challenging yet another axiom of inforisk management. I'm going...

5 Comments

Maybe I can build a bridge between your perspective and the idea of risk tolerance.

First I want to point out that you are framing information security as 'what security people do' or maybe 'what decisions security people make, or decisions they influence'. This is how I interpret your references to decision analysis. This is a very pragmatic perspective, especially for information security specialists, but it's not the only way of looking at it.

As I see it, information security is a *judgment* about the present state of assets, defenses, practices, and controls, relative to likely threats. (see: http://newschoolsecurity.com/2010/05/getting-the-time-dimension-right/). In this view, decisions are only part of the picture.

Also, it appears that you are framing 'risk' as a collection of individual 'things that might go wrong'. I label this view as "little 'r' risk".

But there's another perspective, which I label "big 'R' risk", which is the aggregate financial or economic impact of costs associated with information security. I've proposed a Total Cost framework to measure big 'R' risk: http://meritology.com/resources/Total%20Cost%20of%20Cyber%20(In)security.ppt. Basically, the total cost stream associated with information security is viewed as a stochastic process, and the probability distribution associated with the stochastic process is estimated, not for prediction purposes but to evaluate the impact of alternative security postures, programs, decisions, etc. It is in the big 'R' risk perspective that information security can be seen as a subset of operational risk, which in turn can be seen as a subset of enterprise risk.

The reason I'm explaining all this is that 'risk tolerance' becomes most visible in the big 'R' risk perspective. The probability distribution for Total Costs will always have a 'tail', meaning that the probability of much-higher-than-normal costs is not zero, and in most cases it will be 'material' (to use the accountant's term).

In terms of security costs and investments, a firm can spend more and more money (now or later), but it will never drive that 'tail' to zero. There is some threshold, labeled 'risk tolerance', where the firm will stop spending to lower the curve in the tail. This is the big 'R' risk that the firm retains and will need to cover with other resources should worst come to worst.
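That threshold view can be sketched numerically. The toy model below is entirely hypothetical (a lognormal annual-loss distribution whose location parameter drops as security spend rises), but it exhibits both properties described here: more spend pushes the tail down, the exceedance probability never reaches zero, and 'risk tolerance' is just the point where the firm stops paying to push it further.

```python
import math

def tail_prob(spend, threshold, mu0=13.0, sigma=1.2, damp=500_000):
    """P(annual loss > threshold) for a toy lognormal loss model in which
    security spend lowers the location parameter but never kills the tail."""
    mu = mu0 - spend / damp
    z = (math.log(threshold) - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))

TOLERANCE = 0.01          # firm accepts a 1% chance of a >$5M loss year
THRESHOLD = 5_000_000

# walk up the spend curve until the residual tail is within tolerance
for spend in range(0, 2_000_001, 100_000):
    p = tail_prob(spend, THRESHOLD)
    if p <= TOLERANCE:
        print(f"stop spending at ${spend:,}: residual tail probability {p:.4f}")
        break

# the tail is retained, not eliminated: even enormous spend leaves it positive
print(tail_prob(10_000_000, THRESHOLD) > 0)
```

The spend level where the loop stops is the 'risk tolerance' threshold; everything beyond it is retained big 'R' risk the firm must be able to absorb.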

Of course, this explanation doesn't elaborate all the problems with estimation, uncertainty, incomplete information, and non-stationary stochastic processes. But each of these, in turn, can be framed as a 'second-order' risk tolerance problem. The book _How to Measure Anything_ by Doug Hubbard has an excellent discussion of the value of information relative to reducing uncertainty in risk analysis. The point at which your firm decides *not* to spend more money to get better information or estimates of risk can be called "uncertainty tolerance", or maybe "uncertainty indifference". The firm is retaining the big 'R' risk associated with this residual uncertainty or lack of knowledge.
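Hubbard's value-of-information idea can be illustrated with a deliberately tiny example. Everything here is hypothetical (two candidate breach probabilities, a control that fully prevents the loss): the expected value of perfect information (EVPI) is the gap between the cost of the best decision made now and the average cost of deciding after the uncertainty is resolved, and spending more than EVPI on better estimates is exactly the "uncertainty tolerance" point.

```python
# hypothetical numbers throughout
scenarios = {0.10: 0.5, 0.40: 0.5}    # candidate breach probability -> belief weight
loss_if_breach = 500_000
control_cost = 60_000                 # control assumed to fully prevent the breach

def expected_cost(p, buy_control):
    return control_cost if buy_control else p * loss_if_breach

# decide now, under uncertainty about p
cost_now = min(
    sum(w * expected_cost(p, buy) for p, w in scenarios.items())
    for buy in (True, False)
)

# decide after learning the true p (perfect information)
cost_informed = sum(
    w * min(expected_cost(p, True), expected_cost(p, False))
    for p, w in scenarios.items()
)

evpi = cost_now - cost_informed
print(f"decide now: ${cost_now:,.0f}; with perfect info: ${cost_informed:,.0f}; EVPI: ${evpi:,.0f}")
```

In this setup EVPI comes out to $5,000: paying more than that for better estimates of p is irrational, which is the point where the firm accepts (tolerates) the residual uncertainty.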

All this can be mapped back to the decision analysis that you are focusing on, but only if there is some framework of 'risk drivers', and how individual decisions influence overall big 'R' risk.

Hope this helps.

Thanks for taking the time to respond here, Russell. Unfortunately, based on the feedback, it seems I've done a really crap job of explaining my point. I'll try to put together a better post on it this week and see if I can't get it right.

In a nutshell, my point was this: outside of financial services, very few businesses are doing anything this formal. It's easy to forget that the vast majority of orgs fall into the SMB category. They're not calculating P values, nor are they probably terribly interested in stochastic models. As such, there's no place for "risk tolerance" (big or little R) because there's no process feeding it. I don't believe that SMBs will ever perform a formal analysis like this.

The other thing missing here is the context in which my thoughts were rolling around. I'm looking at risk from the perspective of how to build out a product or service offering that can be used to automate as much of the process as possible. However, in the end, when I look at defining a "risk tolerance," I don't see a parallel for inforisk to risk tolerance for financial risk. I don't think you can really make up a risk tolerance profile ahead of time and simply stick to it, but rather will have to evaluate each decision one at a time. In such a case, risk tolerance in the bear vs. bull or conservative vs. aggressive manner simply doesn't fly. Anyway...

Clearly I need to work up a new post here that does a better job explaining the context and the problems I'm seeing. Oh, and yes, I'm absolutely, positively being pragmatic, because that's what customers need. Pie-in-the-sky academic theories are great, but they only go so far as to fuel discussions, debates, etc. If you can't put them into practice, then you've reached the limit of their efficacy.

cheers,

-ben

I disagree. SMBs (let's take a high-end restaurant as an example) absolutely understand risk tolerance. They engage in it (perhaps not mathematically) when hiring people, choosing when to ensure the kitchen is spotless (commercial kitchens are nasty places) for a surprise inspection, deciding what the specials will be, how much of 'x' fresh ingredient to order, etc. They understand variables of spoilage, inspection schedules, interview responses/body language, etc. They even understand risk tolerance in terms of how much income to report to the IRS.

What they need is (and here's where this maps to your idea) core instruction in the infosec/inforisk concepts boiled down into something they can understand (without resorting to math). They need an understanding of the core threats, core controls and core tradeoffs. I'm over-simplifying, but portions of both your post/comment and Russ' comment are right.

We have to help inform folks better and make it practical (or your word – pragmatic – might be better). Restaurant owners hire lawyers & accountants to help mitigate risks in other areas, perhaps we need a new profession that offers similar services for infosec/inforisk.

It sounds to me like you are saying that since nobody is doing formal analysis, nobody should - which is too circular for me. It might be more useful to determine whether people *should* do something more formal. I am not necessarily convinced this is necessary, but everyone reveals their risk tolerance when they make decisions, whether they know it or not.

Even deciding to ignore risks that have low likelihood and/or low impact is using risk tolerance in a fairly rudimentary way.

Pete Lindstrom

Follow-up post here:
http://www.secureconsulting.net/2011/08/pragmatic-risk-management.html

Hopefully this provides better context and rationale, since it's clear this post didn't get the job done.

About this Entry

This page contains a single entry by Ben Tomhave published on August 8, 2011 11:44 AM.

Eulogizing Stupidity was the previous entry in this blog.

Take 2: Pragmatic Risk Management is the next entry in this blog.
