I've been peripherally aware of the "Understanding and Managing Risk in Security Systems for the DOE Nuclear Weapons Complex" report released over the past couple of weeks, including the bit of a stir it's caused. However, it wasn't until reading Jack Daniel's post that I sat up and started to take notice. Why? Well, for starters, because I get irritated when people criticize something that I don't think they understand (and, here, I'm really talking more about the Nuclear and Radiation Studies Board than anybody else).
The fundamental problem with the report is that it seems to be both misleading and misinformed. On the first count, I believe the Abbreviated Version does not really get across the true sentiments of the Board, which both disparages the use of quantitative models and endorses their use in certain scenarios. I believe there's a reason for this, which I'll come back to in a minute. On the second count, the comments made against quantitative risk analysis methods take on a tone very common in criticism of these methods. Specifically, they question the ability to estimate the capabilities of an attacker, with a dash of "black swan" thrown in for fun. More on that later as well.
Key Quotes
First, I want to start with analysis of some key quotes from the Abbreviated Version of the report.
"The committee concluded that the solution to balancing cost, security, and operations at facilities in the nuclear weapons complex is not to assess security risks more quantitatively or more precisely. This is primarily because there is no comprehensive analytical basis for defining the attack strategies that a malicious, creative, and deliberate adversary might employ or the probabilities associated with them. However, using structured thinking processes and techniques to characterize security risk could improve NNSA's understanding of security vulnerabilities and guide more effective resource allocation."
This paragraph makes me both nod and wince. I nod because I see key terms like "precisely" in there, reflecting a potentially useful understanding that accuracy is more important than precision. Perhaps they understand that they don't need more specific information to make reasoned assessments. However, they then turn around and talk about the inability to do a "comprehensive" analysis because there's no way to enumerate all threat variations. While it's true that you cannot enumerate all threat variations in detail, that does not preclude the use of quantitative risk analysis methods, and leaning on it as a disqualifier suggests the Board doesn't truly understand the methods available.
"The committee has concluded that defining security risks more precisely (e.g., by using a probabilistic risk assessment approach) will not significantly improve NNSA's security planning. This is primarily because there is no comprehensive analytical basis for defining the attack strategies an adversary might employ or the probabilities of success associated with them."
Here, again, we see some confusing and contradictory statements. There seems to be an incorrect belief that "quantitative" equals "precise," which in my experience isn't the case. Instead, it should equate to "accurate" (or "more accurate"). This is a common misunderstanding among people new to quantitative risk analysis, as well as among its critics. When you ask someone to make an estimate, they typically err toward precision instead of accuracy, choosing a single point value rather than a range. Useful quantitative analysis focuses on the latter approach, and then works - using calibration techniques - to refine the range so that it is as precise as possible without sacrificing accuracy. This may seem like semantics, but I assure you that it's not.
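To make the distinction concrete, here's a minimal sketch of how a calibrated range (rather than a single point value) becomes a model input. This is my own illustration - not anything from the report, and not official FAIR tooling - and both the numbers and the lognormal convention are assumptions for demonstration only.

```python
import math
import random

# Hypothetical calibrated 90% confidence interval from an expert: the true
# value should land inside it about 9 times out of 10 (accuracy), and the
# interval is only as narrow as the evidence justifies (precision).
low, high = 2.0, 50.0  # e.g., loss events per decade; purely illustrative

# One common convention: fit a lognormal distribution so that [low, high]
# spans the 5th-95th percentiles; 1.645 is the z-score for that span.
mu = (math.log(low) + math.log(high)) / 2.0
sigma = (math.log(high) - math.log(low)) / (2.0 * 1.645)

samples = sorted(random.lognormvariate(mu, sigma) for _ in range(100_000))
print(f"median: ~{samples[50_000]:.1f}")
print(f"90% of simulated mass: ~[{samples[5_000]:.1f}, {samples[95_000]:.1f}]")
```

Calibration training then tightens that interval over time - making the estimator more precise - without ever letting it become narrower than the estimator's actual knowledge supports.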
Also troubling here is that they are hung up on "attack strategies" instead of focusing on the asset and how it could be compromised. Through a basic analysis, I think you would find that there's really only one way to possess a physical asset: to actually take physical control of it. Knowing this, you can then shift your thinking away from being so concerned about specific attacks and instead look at survivability strategies that make sense.
"RECOMMENDATION 3-1: The committee advises against the use of probabilistic risk assessment (PRA) in designing security for the DOE nuclear weapons complex at this time. However, the committee recommends the use of some tools and techniques traditionally associated with PRA to improve NNSA's understanding of the full spectrum of risks to the complex."
I wanted to highlight this recommendation because in one paragraph they have both recommended against using quantitative risk analysis (reduced to PRA in their report), and yet also recommend using it. Talk about misleading...
"RECOMMENDATION 4-1: The committee recommends that DOE/NNSA generate a range of plausible and specific objectives that the site security system is intended to preclude, for use in scenario generation. An adversary perspective should be taken into account when generating these objectives."
I'd be shocked if this wasn't being done already. This is physsec 101 for high-value assets. You have to understand the value of the asset being protected, and you have to understand the environment in which it's being protected. Then and only then can you take a strategic approach to making good decisions about security measures, which can and should be supported through a formal risk analysis process (remember: we're talking strategic planning here and, by extension, decision analysis, ergo risk analysis). This recommendation is at best obvious, and at worst implies scary things about the lack of preparedness in key sectors.
Misleading Statements
My first impression of the report was that its statements were particularly misleading. Why would quantitative analysis be deemed pointless in this case? First, let's stop to understand how a quantitative risk analysis method like FAIR would analyze a given scenario. In FAIR, "risk" is a derived value defined as the probable frequency and probable magnitude of future loss. In this calculation, you look at two aggregate figures: Loss Event Frequency (LEF) and Loss Magnitude (LM). Before I go into more detail on what LEF is, let's short-circuit the analysis right here and understand why the Board likely thinks a quant approach doesn't make sense.
If you understand basic mathematics, then you realize that multiplying an effectively infinite value by any non-zero number still gives you an effectively infinite value. If you consider that the stockpile of nuclear weapons has a very large value, then the estimated magnitude of a loss event (LM) would be very large. As such, even if LEF ends up being very, very, very low, it ends up not mattering all that much, since LEF*LM will still be a very big number. In this sense, I can absolutely see why the Board recommends against use of a quant risk analysis method.
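A toy simulation makes that dominance effect visible. Every number below is an assumption I've invented for illustration; nothing here comes from the report.

```python
import random

# Sketch of the "LM dominates" argument. All figures are invented for
# illustration - nothing here is sourced from the report.
TRIALS = 100_000
total = 0.0
for _ in range(TRIALS):
    lef = random.uniform(1e-7, 1e-5)  # events/year: "very, very, very low"
    lm = random.uniform(1e11, 1e13)   # dollars/event: catastrophic by assumption
    total += lef * lm

print(f"mean annualized loss: ~${total / TRIALS:,.0f}")
# Shrinking LEF another 10x barely changes the takeaway: the product of a
# tiny frequency and an enormous magnitude stays "unacceptably large."
```

The catch, of course, is that this only "breaks" the analysis if LM really is effectively infinite - which is exactly the estimate I question next.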
Now, that being said, I don't think they understand how risk analysis would operate at such a scale. Moreover, I really wonder if they are providing reasonable estimates of LM, or if they're grossly inflating them based on FUD. I'm not an expert in the field, but I wonder about the following things:
* Who would know about these facilities and assets?
* Who would be capable of compromising the assets?
* Who would be capable of doing something with the assets?
* Is there any scenario where someone could accidentally cause an incident (i.e., not a malicious actor)?
* What are the worst-case scenarios?
From a purely hypothetical perspective, my guess is that you're worried about two things: (1) compromise of the asset by a malicious actor, and (2) disablement of the facility so that the assets cannot be used for their intended purpose (offensive or defensive). In both of these cases, I think a quantitative risk analysis can be performed. Given that we can only see part of the report, I have to wonder if much of this is represented already, but simply not shared. On the flip side, maybe things really are so poorly protected today that the Board felt it necessary to force the DOE to go back to basics (how sad and disturbing if true).
Misinformed Statements
As noted in the "Key Quotes" section above, there are several instances where statements are made that suggest a lack of understanding of how quantitative risk analysis methods operate. This in turn suggests a lack of real experience using these methods, and/or a lack of exposure to a variety of methods. I find it unsurprising that a number of experienced and highly credentialed people - particularly from academia - would not balk at spouting off on topics about which they seem to have, at best, a broken understanding.
Some of the fundamental issues in the report include:
* A focus on "precision" with no accounting for "accuracy." This suggests that they're not familiar with estimation and calibration techniques.
* Excessive focus on "probabilistic risk assessment" in a generic sense, with the criticism hinging on "imperfect data." Quantitative risk analysis is designed to work with sparse, imperfect data by using calibrated ranges rather than false-precision point values; if they truly understood that, they would not discount it on those grounds.
* A tendency to lean toward scenario-based analysis, which is fine, but without an understanding that such an analysis would be improved through the use of quantitative methods. It's almost like they viewed the notions as being incompatible when, in fact, the opposite is true (see the sketch below).
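To show the two approaches composing rather than conflicting, here's a sketch that attaches quantitative ranges to generated scenarios and ranks them. The scenarios and every range in it are hypothetical - my invention, not the report's.

```python
import random

# Hypothetical scenarios with calibrated (low, high) ranges for Loss Event
# Frequency (events/year) and Loss Magnitude (dollars/event). All values
# are invented for illustration.
scenarios = {
    "theft of asset by malicious actor": {"lef": (1e-6, 1e-4), "lm": (1e11, 1e13)},
    "disablement of facility":           {"lef": (1e-5, 1e-3), "lm": (1e9, 1e11)},
}

def annualized_loss(lef_range, lm_range, trials=50_000):
    """Mean of LEF * LM with both inputs sampled from their ranges."""
    total = 0.0
    for _ in range(trials):
        total += random.uniform(*lef_range) * random.uniform(*lm_range)
    return total / trials

# Ranking scenarios by simulated annualized loss is exactly the "more
# effective resource allocation" the committee asks for - delivered with
# quantitative machinery, not in spite of it.
results = {name: annualized_loss(s["lef"], s["lm"]) for name, s in scenarios.items()}
for name, loss in sorted(results.items(), key=lambda kv: -kv[1]):
    print(f"{name}: ~${loss:,.0f}/year")
```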
Final Thoughts
All of this is to say that I have a hard time lending the Board much credence or credibility, given what seems to be a failure to properly understand the topic it was charged with evaluating. If they cannot provide a more cogent and consistent argument, then the report must be discounted accordingly. I very much question the rigor of the analysis, not to mention the qualifications of those making the assessment. There are simply too many apparent errors.