A Cautionary Note on Qualitative Risk Ranking of Homeland Security Threats

Abstract

Qualitative risk ranking systems are often used to assess homeland security threats due to their simplicity and intuitive nature. However, their appropriate use is limited by subtle but common underlying difficulties that render them inconsistent with quantitative risk assessments. A better way to assess homeland security threats is to use simple, fully quantitative risk models coupled with managerial review and judgment.


Suggested Citation

Rozell, Daniel J. “A Cautionary Note on Qualitative Risk Ranking of Homeland Security Threats.” Homeland Security Affairs 11, Article 3 (February 2015). https://www.hsaj.org/articles/1800


Ranking the relative risk of various perceived homeland security threats is a necessary activity for security experts. Time and resources are finite and policymakers understandably want to address threats in accordance with their magnitude and urgency. Given that there are many ways to rank security risks, how shall we proceed? Some argue that qualitative risk ranking is simpler, more transparent, and less data-intensive than fully quantitative risk ranking; therefore, a qualitative approach is more likely to be used by decision makers during a crisis.1 So should we primarily use qualitative risk ranking for homeland security threats? The short answer is no. Although qualitative risk ranking systems are popular, this practice is not advisable for the following reasons: (1) qualitative risk ranking is inconsistent and in some situations may have reversed rankings compared to quantitative risk ranking; and (2) when the range in risk to be evaluated is large or one risk is orders of magnitude larger than the others, qualitative risk ranking may mask important information regarding differences in risk.2

Because the idea of ranking reversal is counterintuitive, a simple example of rank inconsistency originally proposed by Cox et al. is appropriate.3 First, let us assume a hypothetical three-parameter risk model for a bioterrorism agent (e.g., accessibility, transmissibility, and virulence) where each parameter can be assigned a discrete value of 1, 2, 3, 4, 5, or 6, and where values of 1 or 2 correspond to “low” (L), 3 or 4 to “medium” (M), and 5 or 6 to “high” (H). Now, if we compare the risks of two potential bioterrorism agents, A and B, which have parameter values of (3, 3, 6) and (4, 4, 4) respectively, these will have qualitative rankings of (M, M, H) and (M, M, M) respectively. Using the qualitative rankings, we would decide that, all other things being equal, bioterrorism agent A is more of a threat than agent B. However, the quantitative ranking may not be consistent with the qualitative assessment. If the parameters happen to be additive, the sum of the parameters is the same (i.e., A = B = 12) and we would now say that agents A and B have the same level of risk. The same conclusion applies if the parameters are averaged (i.e., A = B = 4). If, instead, the parameters are multiplicative, the product of A is less than the product of B (A = 54, B = 64) and we would now conclude that agent B is riskier than A. Thus, we cannot assume that even simple rankings are qualitatively and quantitatively consistent. Unfortunately, regardless of the level of complexity, no qualitative risk ranking system that preserves order (i.e., represents risk with a monotonically increasing function) can escape this inconsistency between qualitative and quantitative risk rankings.4
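The reversal can be reproduced in a few lines of code. The following is a minimal sketch in Python; the 1-6 scale, the L/M/H cutoffs, and the agent values follow the example above, while the additive and multiplicative aggregation rules are the illustrative alternatives discussed there.

```python
# Minimal sketch of the rank-reversal example above. The 1-6 parameter
# scale, the L/M/H cutoffs, and the agent values follow the text; the
# sum and product aggregations are the hypothetical alternatives discussed.
from math import prod

def to_label(value):
    """Map a 1-6 parameter score to a qualitative category."""
    return "L" if value <= 2 else ("M" if value <= 4 else "H")

agents = {"A": (3, 3, 6), "B": (4, 4, 4)}  # accessibility, transmissibility, virulence

for name, params in agents.items():
    labels = tuple(to_label(p) for p in params)
    print(f"Agent {name}: qualitative {labels}, sum {sum(params)}, product {prod(params)}")

# Output:
# Agent A: qualitative ('M', 'M', 'H'), sum 12, product 54
# Agent B: qualitative ('M', 'M', 'M'), sum 12, product 64
# The qualitative labels rank A above B; the sum ranks them equal; the
# product ranks B above A.
```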

Now let us consider the very simplest version of the previous example, where there is a one-to-one mapping of qualitative and quantitative scales such that “low” (L) corresponds to a value of 1, “medium” (M) to a value of 2, and “high” (H) to a value of 3. Now the potential bioterrorism agents A and B that have three-parameter model ratings of (M, M, H) and (M, M, M) respectively will correspond to (2, 2, 3) and (2, 2, 2) respectively. In this case, the correct order will be preserved for the quantitative rankings, but with an important restriction: there must always be a one-to-one mapping of qualitative and quantitative scales. Whenever a qualitative rank can take multiple values, the result is the inconsistency seen in the first example. The important implication here is that there can be no variation in risk within a category. That is, any agents ranked as a “high” risk must not be further rankable within that qualitative categorical label. This works if there is no more than one agent within each categorical label or if lack of knowledge prevents anything other than very broad risk ranking. However, if it is believed that one risk is larger than another, it is not advisable to give them the same qualitative categorical label. This is a common issue when the top qualitative rank is a catch-all for very large values (e.g., human morbidity rate categories of 0-1%, 1.1-10%, 10.1-20%, and >20%).5 This can result in the loss of important distinctions in relative risk, especially when there are outliers in the data set.
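As a concrete illustration, the following sketch uses the morbidity-rate bins cited above with two hypothetical agents (the agents and their rates are invented for illustration) to show how a catch-all top category hides a large difference in risk.

```python
# A catch-all top category hides large differences: the morbidity-rate
# bins follow the text, while the two agents and their rates are
# hypothetical illustrations.

def morbidity_category(rate_pct):
    """Bin a human morbidity rate (in percent) into the four categories from the text."""
    if rate_pct <= 1:
        return "0-1%"
    if rate_pct <= 10:
        return "1.1-10%"
    if rate_pct <= 20:
        return "10.1-20%"
    return ">20%"

agents = {"Agent X": 22.0, "Agent Y": 85.0}  # hypothetical morbidity rates (%)

for name, rate in agents.items():
    print(f"{name}: {rate}% -> {morbidity_category(rate)}")

# Both agents land in the ">20%" category even though one is roughly
# four times as severe as the other.
```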

A common complementary tool of qualitative risk ranking is the semi-quantitative risk matrix: a table that uses likelihood and impact as its rows and columns and ranks events with high probability and high consequence as high risk.6 The technique is popular because it appears to provide a simple, intuitive, transparent, and visual justification for risk rankings, but that appearance is misleading.7 When attempting to convert a qualitative risk matrix (e.g., one that uses low/medium/high ordinal rankings) to a quantitative ranking, it can be demonstrated that whenever likelihood and impact are negatively correlated (i.e., when the most unlikely events have the largest impact, as is common in terrorism assessments) the qualitative risk matrix can actually invert the ranking and give results that are worse than making decisions randomly.8 Since the uncertainty in homeland security risk assessment is generally substantial, it is unlikely that the correlation between probability and consequence will be known. Thus, semi-quantitative risk matrices are a questionable endeavor, and it is unfortunate that they have become popular among organizations that want to be proactive in managing risk.9
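Such an inversion is easy to construct. The following sketch uses two hypothetical events and illustrative band thresholds (not Cox’s original construction): the rarer event has the larger consequence, and the matrix ranks the events in the opposite order from their quantitative risk.

```python
# Hypothetical illustration of a 3x3 risk matrix reversing the ranking
# implied by quantitative risk (probability x consequence) when the
# rarer event has the larger consequence. The band thresholds and the
# product scoring rule are illustrative assumptions.

def band(x):
    """Map a value in [0, 1] to an ordinal band: 1 = low, 2 = medium, 3 = high."""
    return 1 if x < 1/3 else (2 if x < 2/3 else 3)

def matrix_score(prob, cons):
    """A common matrix scoring rule: the product of the two ordinal bands."""
    return band(prob) * band(cons)

events = {"A": (0.30, 0.70),   # rarer but more severe
          "B": (0.40, 0.40)}   # more likely but less severe

for name, (p, c) in events.items():
    print(f"Event {name}: matrix score {matrix_score(p, c)}, quantitative risk {p * c:.2f}")

# Output:
# Event A: matrix score 3, quantitative risk 0.21
# Event B: matrix score 4, quantitative risk 0.16
# The matrix ranks B above A, while quantitative risk ranks A above B.
```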

Given the difficulties of qualitative and semi-quantitative risk ranking systems, it is inadvisable to use them even in emergency situations. If time and resources are very limited, it is better to use either a very simple fully quantitative risk assessment,10 or informal expert managerial review and judgment.11 In this case, a simple quantitative risk assessment consists of restricting the analysis to only the most essential parameters while still using uncategorized numerical data for ranking. Likewise, the use of expert judgment, the simplest and potentially most comprehensive approach, transparently acknowledges the subjectivity that a general lack of data introduces into many homeland security risk assessments. A strength of expert judgment over qualitative risk ranking is that it does not hide this inherent subjectivity behind a methodology that appears objective yet has known, if subtle, flaws. When more time and resources are available, a variety of fully quantitative risk assessment techniques exist, including logic trees (e.g., probability or decision trees), influence diagrams, system dynamics, Bayesian network analysis, and game theory.12 However, purely quantitative methods have also been critiqued as overly narrow for using simple probabilities and expectation values as representations of risk.13 Likewise, arguments have been made against using probabilistic risk assessment as a primary decision tool because we cannot account for terrorists’ knowledge of our assessments; rather, robust decision processes that attempt to maximize resilience may be more appropriate.14 Of course, maximizing resilience is a complex task in itself, with its own set of difficulties analogous to those of qualitative risk ranking.15
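To make the contrast concrete, the following sketch ranks a handful of hypothetical threats directly from uncategorized numerical estimates rather than from categorical labels; the threats, their parameter values, and the expected-loss aggregate are illustrative assumptions, not a prescribed model.

```python
# A minimal sketch of a simple fully quantitative ranking: keep only the
# most essential parameters as raw numbers and rank by an explicit
# aggregate. The threats and their (probability, consequence) estimates
# are hypothetical, and expected loss is just one simple aggregate.

threats = {
    "Threat X": (0.02, 5_000),   # (annual probability, consequence estimate)
    "Threat Y": (0.10, 800),
    "Threat Z": (0.01, 20_000),
}

ranked = sorted(threats.items(), key=lambda item: item[1][0] * item[1][1], reverse=True)

for name, (p, c) in ranked:
    print(f"{name}: expected loss {p * c:,.0f}")

# Threat Z (200) ranks above X (100) and Y (80); the raw numbers preserve
# distinctions that broad categorical labels would obscure.
```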

In summary, while there are no perfect methods, qualitative risk ranking systems have important known limitations that contradict their appearance of simplicity and transparency. To assess the relative risk of potential homeland security threats, a simple quantitative risk ranking used in conjunction with unranked qualitative risk descriptions and expert judgment is more likely to yield results useful to homeland security policymakers.

About The Author

Daniel Rozell, MS, P.E., is a doctoral student in the Department of Technology and Society at Stony Brook University. His research interests include environmental and public safety risk analysis. Daniel Rozell can be contacted at daniel.rozell@stonybrook.edu.


1 K. Tomuzia, A. Menrath, H. Frentzel, et al., “Development of a comparative risk ranking system for agents posing a bioterrorism threat to human or animal populations,” Biosecurity and Bioterrorism 11, Suppl 1 (2013): S3–16, doi:10.1089/bsp.2012.0070; A. Menrath, K. Tomuzia, H. Frentzel, J. Braeunig, and B. Appel, “Survey of Systems for Comparative Ranking of Agents that Pose a Bioterroristic Threat,” Zoonoses and Public Health (2013), doi:10.1111/zph.12065.

2 L.A. Cox, D. Babayev, and W. Huber, “Some limitations of qualitative risk rating systems,” Risk Analysis 25, 3 (2005): 651–62, doi:10.1111/j.1539-6924.2005.00615.x.

3 Ibid.

4 Ibid.

5 K. Tomuzia, A. Menrath, H. Frentzel, et al., “Development of a comparative risk ranking system for agents posing a bioterrorism threat to human or animal populations,” Biosecurity and Bioterrorism 11, Suppl 1 (2013): S3–16, doi:10.1089/bsp.2012.0070.

6 L.A. Cox, “What’s wrong with risk matrices?” Risk Analysis 28, 2 (2008): 497–512, doi:10.1111/j.1539-6924.2008.01030.x.

7 Ibid.; D.J. Ball and J. Watt, “Further thoughts on the utility of risk matrices,” Risk Analysis 33, 11 (2013): 2068–78, doi:10.1111/risa.12057.

8 Cox, “What’s wrong with risk matrices?”

9 Ball and Watt, “Further thoughts on the utility of risk matrices.”

10 Cox, Babayev, and Huber, “Some limitations of qualitative risk rating systems.”

11 T. Aven, “On how to deal with deep uncertainties in a risk assessment and management context,” Risk Analysis 33, 12 (2013): 2082–91, doi:10.1111/risa.12067.

12 B.C. Ezell, S.P. Bennett, D. von Winterfeldt, J. Sokolowski, and A.J. Collins, “Probabilistic risk analysis and terrorism risk,” Risk Analysis 30, 4 (2010): 575–89, doi:10.1111/j.1539-6924.2010.01401.x.

13 T. Aven, “A semi-quantitative approach to risk analysis, as an alternative to QRAs,” Reliability Engineering and System Safety 93, 6 (2008): 790–797, doi:10.1016/j.ress.2007.03.025.

14 G.G. Brown and L.A. Cox, “How probabilistic risk assessment can mislead terrorism risk analysts,” Risk Analysis 31, 2 (2011): 196–204, doi:10.1111/j.1539-6924.2010.01492.x.

15 D.L. Alderson, G.G. Brown, M.W. Carlyle, and L.A. Cox, “Sometimes There Is No ‘Most-Vital’ Arc: Assessing and Improving the Operational Resilience of Systems,” Military Operations Research 18, 1 (2013): 21–37, doi:10.5711/1082598318121.


Copyright © 2015 by the author(s). Homeland Security Affairs is an academic journal available free of charge to individuals and institutions. Because the purpose of this publication is the widest possible dissemination of knowledge, copies of this journal and the articles contained herein may be printed or downloaded and redistributed for personal, research or educational purposes free of charge and without permission. Any commercial use of Homeland Security Affairs or the articles published herein is expressly prohibited without the written consent of the copyright holder. The copyright of all articles published in Homeland Security Affairs rests with the author(s) of the article. Homeland Security Affairs is the online journal of the Naval Postgraduate School Center for Homeland Defense and Security (CHDS).
