Abstract
The National Academy of Sciences recommended that the Department of Homeland Security use methods of qualitative comparative risk assessment as part of its approach to strategic planning. To provide insight into how this can be done, this paper examines a set of ten homeland security risks, including natural disasters, terrorist events, and major accidents, in a systematic fashion. These hazards were described in terms of the annualized risk to the United States as a whole, using open-source data and a standardized set of attributes. This assessment can be useful on its own, providing a baseline of knowledge about these homeland security risks and a source of data for subsequent risk management and comparative risk assessment studies. Additionally, this assessment helps identify what is known about homeland security risk generally: the availability of data on homeland security risks and the uncertainty of the risks as they vary by hazard and attribute.
Suggested Citation
Lundberg, Russell, and Henry Willis. “Assessing Homeland Security Risks: A Comparative Assessment of Ten Hazards.” Homeland Security Affairs 11, Article 10 (December 2015). https://www.hsaj.org/articles/7707
Introduction
The Department of Homeland Security (DHS) is a large and complex organization with a mission that covers multiple priorities, including security, resilience, and customs and exchange. Managing priorities in preparing for and responding to the range of terrorist events, natural disasters, and major accidents in its purview requires an understanding of the diverse set of risks involved. In an organization managing risks that can kill hundreds to thousands of people, and with an annual budget in the tens of billions of dollars, properly aligning capabilities to risks can save both dollars and lives. 1 DHS defines risk as the “potential unwanted outcome resulting from an incident, event, or occurrence, as determined by its likelihood and the associated consequences,” and knowing the extent of expected damages of a risk can be useful when considering the costs of risk reduction activities. Accordingly, DHS is committed to using risk assessment to inform decisions and priorities. 2 In a process described in a summary of the Strategic National Risk Assessment, DHS uses risk assessments to identify high-risk factors in support of capabilities, to support critical thinking about strategic needs, and to promote a common understanding so that all components of DHS can act independently but collaboratively. 3
Assessing homeland security risks is a particularly challenging endeavor. This is partially due to the nature of the risks: homeland security risks include high-consequence/low-likelihood events with significant uncertainty, making homeland security a challenging domain for risk assessment. 4 To a certain extent, these challenges reflect the maturation of the field, as natural hazard risk assessment methods are more advanced than methods used to assess risks associated with terrorism. 5 But terrorism also involves inherent challenges in estimating the likelihood of attacks, which are not inherently probabilistic because they are carried out by intelligent adversaries. 6 Bringing these risks together in a comparative fashion is even more challenging. A 2010 National Academies review of DHS’s approach to risk analysis identified several opportunities for DHS to improve its comparative risk assessments. 7 Homeland security risk assessments are often conducted in an ad hoc fashion, and defining the attributes of concern in a model specific to one hazard makes it difficult to compare risks across hazards. For example, comparing risks based only on estimates of expected casualties will bias results against cyber-attacks, while omitting environmental damages will bias results against oil spills. Additionally, the “heterogeneity and complexity” of risks in DHS’s portfolio limits the use of a single meaningful unit of risk, making a quantitative integrated risk assessment “impractical.” 8
While the review recommended against comparing all hazards quantitatively using a single risk measure, it did recognize the benefit of qualitative comparisons to inform decision making. 9 While there are known limitations to the direct use of some qualitative methodologies (such as risk matrices), using a range of quantitative and qualitative measures to inform expert judgment can be useful. 10 DHS followed this approach in its Second Quadrennial Homeland Security Review (QHSR), including assessments of a range of homeland security risks to support an expert consideration of risk reduction priorities. 11
Another suggestion of the National Academies panel was to incorporate time-tested scientific practices, including external peer review, in the verification and validation of DHS risk models. It is important for risk models to be transparent both internally (so that policy-makers can understand the assumptions underlying their decisions) and externally (so that common approaches can be used to inform similar risks and to support model validation).
This paper presents a shared starting point for a comparative assessment of homeland security risks, building on the recommendations of the National Academies panel. We compare ten hazards, spanning natural disasters, terrorism, and major accidents, using a standardized set of attributes that cover health, economic, societal, environmental, governmental, and non-consequence aspects of risk. We identify and document risk estimates drawn from the open-source literature in a transparent fashion. This presents a common framework for examining homeland security risks and a baseline as to what those risks may be.
Methods
We undertook this risk assessment in order to support a comparative risk ranking. While the 2010 National Academies report recommended that DHS avoid quantitative comparative all-hazard risk assessments, it suggested that qualitative techniques can be appropriate. 12 There are a range of techniques which can be useful for all-hazard comparison of risks, including the Analytic Hierarchy Process, multi-objective risk analysis techniques, and others. 13 This study uses one such comparative approach, the Deliberative Method for Ranking Risks. This method was developed in the late 1990s and early 2000s in research out of Carnegie Mellon University to deal with the range of risks faced in environmental policy. 14 The Deliberative Method for Ranking Risks has five steps: 1) identifying the risks to be ranked; 2) identifying important attributes to describe the risks; 3) describing each of the selected risks in terms of the selected attributes; 4) selecting participants and performing the risk ranking; and 5) analyzing results. Other papers describe the application of the deliberative method in detail. 15 This paper focuses on the first three steps of this method, involving the assessment of a set of risks in a comparable fashion. While we used a specific set of comparable risks in this study to support a specific risk ranking, the assessments used to support the rankings are generalizable.
Certain decisions must be made about how to conceptualize risk before any comparative risk assessment can be performed. One initial decision is how to identify the discrete risks to be compared. In homeland security, risks may be broken out by target (as in the Border Zone Protection Program), by city (as in the Urban Area Security Initiative), by sector (as in the National Infrastructure Protection Plan), by hazard (as in the National Planning Scenarios and the Quadrennial Homeland Security Review), or in other ways. No single approach to categorizing risks is universally correct; instead, the categorization should be matched to the purpose for which it is used and the structure of the organization using it. 16 While DHS uses a range of approaches to categorize risk depending on the purpose of the assessment, for this study we chose to categorize risks in terms of hazard, which DHS defines as a natural or man-made source or cause of harm or difficulty. 17 This approach considers the risks (i.e., the likelihood and consequences of harm) associated with selected hazards (i.e., the causes of that harm). This focus on risks associated with hazards reflects high-level strategic documents and planning within DHS. 18
Next, we selected specific hazards. A relevant set of risks should be logically consistent, administratively comparable, equitable, and comparable with regard to cognitive constraints and biases. 19 For reasons of comparability, we sought hazards that reflected the types of incidents described by DHS in its mission statement, specifically “…a terrorist attack, natural disaster, or other large-scale emergency.” 20
The set of hazards was neither exhaustive nor representative, but was selected to include risks that varied in interesting ways, reflecting a range of causes and consequence levels associated with the types of problems managed by DHS (see Table 1). The specific hazards we selected were drawn from a larger list identified from DHS documents. 21 From this list, we selected a subset of the hazards to cover the domains of terrorism, accidents, and natural disasters. This focus did not address other aspects of the DHS mission such as securing the borders or managing immigration. This is consistent with DHS risk analyses, notably the Strategic National Risk Assessment. 22
Table 1: Hazards Selected

Natural | Terrorist | Accidental
Earthquakes | Nuclear detonation | Toxic industrial chemical accidents
Hurricanes | Explosive bombings | Oil spills
Tornadoes | Anthrax attacks |
Pandemic influenza | Cyber-attacks on critical infrastructure |
Comparing risks also requires having a consistent set of attributes to describe them. This set of attributes should be representative of the aspects of risk about which people and policy makers are concerned. Selecting a set of attributes that comprehensively yet parsimoniously describes the aspects of risk that people are concerned about requires significant judgment in applying the scientific literature. 23
We drew the attributes used to describe risk from the literature on homeland security and emergency management. We focused our literature review on papers that described an overarching framework for consequences or that reviewed the emergency management or homeland security literature with regard to aspects of consequence. 24 The review also examined DHS papers and processes that utilized a framework in an attempt to comprehensively describe risks. 25 This focused literature review identified 41 attributes covering a range of consequences about which people are concerned, including not only lives lost and economic damages but also social, psychological, environmental, and political concerns.
From these identified attributes, we selected 17 attributes describing health, economic damage, societal damage, and non-consequence factors reflecting aspects of dread and uncertainty associated with the psychometric paradigm. 26 As many of the identified attributes described similar concepts, we selected attributes that could cover the range in a parsimonious fashion. The definitions for each attribute were formalized; for example, the distinction between more severe and less severe injuries or illnesses was tied to hospitalization, with a definition and examples for each. Most of the selected attributes involved expected-value characterizations, describing the risk in terms of expected damage to the nation as a whole for one year, averaged over many years. However, some attributes were described in non-annualized terms, reflecting either damages from a single worst-case event or non-consequence characteristics of the risk. We present the set of selected attributes, including consequence and non-consequence aspects of risk, in Table 2. For more details on the process see Lundberg (2013). 27
Table 2: Attributes Used to Describe Homeland Security Risks
Health | Socioeconomic | Other attributes
Average number of deaths per year | Average economic damages per year | Natural/human-induced
Greatest number of deaths in a single event | Greatest economic damages in a single event | Ability of individuals to control their exposure
More severe injuries or illnesses per year on average | Duration of economic damages | Time between exposure and health effects
Less severe injuries or illnesses per year on average | Size of area affected by economic damages | Quality of scientific understanding
Psychological consequences per year on average | Average environmental damages per year | Combined uncertainty
 | Average displaced households per year |
 | Disruption of government services |
We developed specific estimates of these attributes for each of the selected hazards. Some of these estimates were quantitative, while others were qualitative. We drew data for these estimates from the available literature, including unclassified government records, peer-reviewed articles, books, news reports, and raw data from databases on disasters (such as EM-DAT) and terrorism (such as the RAND Database of Worldwide Terrorism Incidents). We identified these sources from other risk assessments, literature reviews, other articles and books, and targeted internet searches. We conducted the research underlying the assessments throughout 2010 and early 2011.
As estimates of risk in homeland security can have considerable uncertainty, we adopted three approaches to characterize the risk with appropriate degrees of precision: rounding, bounding, and the use of qualitative levels when appropriate. First, we rounded quantitative estimates of risk to one significant digit so as not to overstate the precision of the estimates. Second, we incorporated uncertainty in quantitative estimates using bounds. For annualized expected consequences, these bounds represented the lowest and highest estimates of risk identified in the literature. For worst-case attributes (greatest number of lives lost in a single event and greatest economic damages in a single event), bounds represented low and high estimates for the largest potential event; these reflected either the range from the largest consequences that had actually occurred to the largest consequences that models suggested could theoretically occur, or the range from the largest consequences that had occurred in the U.S. to the largest consequences that had occurred anywhere in the world. Finally, we characterized attributes in quantitative terms only when there was sufficient justification to do so; attributes that were less concrete (such as environmental damage) were described qualitatively. When qualitative scales were used, we used structured definitions to improve consistency. For example, thresholds for the ability of an individual to control their exposure were defined by the actions one would need to take to avoid exposure to the risk, with high control related to advance warning of a clearly visible event and low control related to risks that can only be reduced by significant lifestyle changes, such as moving away from urban or suburban areas.
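To make the rounding and bounding conventions concrete, the following is a minimal sketch (in Python, using hypothetical input values and a hypothetical helper name) of how a set of literature estimates for one hazard-attribute pair could be reduced to a one-significant-digit value with lower and upper bounds; in the study itself the "best" estimate was a documented judgment call, not an automatic calculation.

```python
import math

def round_to_one_sig_fig(x: float) -> float:
    """Round a positive estimate to one significant digit (e.g., 237 -> 200)."""
    if x == 0:
        return 0.0
    exponent = math.floor(math.log10(abs(x)))
    return round(x, -exponent)

def summarize_estimates(estimates: list[float]) -> dict:
    """Collapse a set of literature estimates into bounded, rounded values.

    Bounds are the lowest and highest estimates found in the literature; the
    median is used here only as a stand-in for the documented "best" estimate.
    """
    ordered = sorted(estimates)
    best = ordered[len(ordered) // 2]  # placeholder for the judgment-based "best" estimate
    return {
        "lower": round_to_one_sig_fig(ordered[0]),
        "best": round_to_one_sig_fig(best),
        "upper": round_to_one_sig_fig(ordered[-1]),
    }

# Hypothetical annualized estimates (e.g., deaths per year) drawn from several sources
print(summarize_estimates([12.0, 37.0, 44.0, 61.0]))  # {'lower': 10.0, 'best': 40.0, 'upper': 60.0}
```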
While this paper focuses on the estimates associated with the attributes described in Table 2, this was only one part of describing the risks. We described the risks in risk summary sheets that followed a specific format consistent with best practices in risk communication. The summary sheets began with a paragraph summarizing the risk and a table listing the estimates for all of the identified attributes on the first page. The subsequent pages described in greater detail what is known about the risk and how it can harm people, what the exposure to the risk is, and what is already being done about the risk.
Finally, we brought in hazard-specific experts to review the analyses of the hazards (including not only the estimates for the attributes but also the descriptions in the risk summary sheets). This expert review was part of the risk summary process (step 3 of the Deliberative Method for Ranking Risks), separate from but supporting the risk rankings (step 4). In this review, researchers unaffiliated with the project were asked to consider the risk summary for the hazard on which they possessed expertise. They were asked to assess whether the summaries were based on the best available science about the hazard and whether the estimates reflected that knowledge in both the accuracy and the precision presented. We incorporated comments from the reviewers into the final risk summaries.
The result of this process was a set of risk summary sheets describing the risk associated with ten homeland security hazards in a comparable fashion. The detailed assessments are available in online supporting materials and summarized in the remainder of this paper. 28
Results
The risk assessment process identified estimates in 137 documents, including datasets, government documents, peer-reviewed articles, NGO publications, published books, and news articles. Many of these documents provided multiple estimates. For example, we used estimates of consequence from nine of the National Planning Scenarios, and in many cases we were able to identify both low and high estimates. 29 Datasets were particularly useful in generating multiple estimates, including estimates for multiple hazards, for different periods of time, or for different places. For example, the RAND Database of Worldwide Terrorism Incidents provided estimates of consequences over different time periods and for different countries. 30
The data used to develop consequence estimates varied in format. We identified estimates of consequence in terms of counts per year or counts per event. While some hazard-attribute pairs had data going back over 100 years (e.g., deaths from tornadoes), it is questionable whether historical data reflect contemporary conditions, either because the historical data were not collected as diligently or because the risk has changed over time.
The data could support quantitative estimates for some hazards and attributes, but only qualitative levels for others. Data were strongest in support of estimates of lives lost, either in a single event or averaged over many years, as lives lost are a discrete harm important enough to be regularly recorded. Estimates of direct economic damages were widely available; estimates of indirect or secondary economic damages were also available but varied widely. Some authors suggested that all policy activities following an event should be included in secondary costs (in the case of the events of September 11, 2001, that would include the costs of two wars, far beyond the costs of the event itself), while others suggested that substitution in a mature market economy would attenuate any losses from a disaster. Data on health effects other than lives lost were not as strong. Even for hazards with historical data, injuries were often recorded only for individual events (often the largest events, which may or may not be typical), not as averages over many years.
The differences in data warranted different methods to derive estimates from them. We drew estimates from the data in six general ways: projections from historical data, projections from analogous data, modeled estimates, expert opinion, projections from the proportionality of one attribute to another, and bounding estimates. We present these categories in order of increasing abstraction, not necessarily in strict order of preference; the approach selected depended not only on how directly it described the actual consequences but also on how precisely and accurately it described them. It was not possible to choose a single approach that would work well for every hazard and attribute. For example, the historical data on tornadoes allowed estimates of risk by averaging over many years, but the lack of data on terrorist nuclear detonations required decomposing risk into likelihood and consequence and then using expert opinion, data from World War II, and scenario models to calculate an estimate. The determination of whether one kind of approach provides a better estimate than another involves an inherently subjective judgment. As expected, most natural disasters lent themselves to averages of historical data, while most terrorist hazards involved combining likelihood derived from expert opinion with consequence from modeled data or other approaches (see Table 3). However, this was not always the case. Terrorist explosive bombings, for example, had a record spanning several decades in different contexts, sufficient to support statistical analysis. Other terrorist scenarios do not lend themselves to historical averages, as they have occurred rarely (in the case of anthrax attacks) or never (in the case of terrorist nuclear detonations). The non-probabilistic nature of adaptive adversaries also limits the utility of probabilistic risk assessments. 31 Novel or rare events can require more creative approaches to estimating consequences. For example, while cyber-crime and cyber-espionage are widespread, cyber-attacks on critical infrastructure have been rare; incidents of computer failure in mass transit and the Northeast Blackout of 2003 were selected as analogous events. Similarly, the historical consequences of pandemic influenza may not be useful for describing contemporary consequences in light of improvements in public health systems, so we used modeled consequence data. We present the specific estimates for each of the attributes by hazard in Table 5, presented as an appendix to this article. For details on how particular estimates were selected see Lundberg (2013). 32
Table 3: Approaches Used to Estimate Homeland Security Risks for Selected Hazards and Attributes
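As a rough illustration of two of the estimation approaches listed in Table 3, the sketch below (Python, with notional numbers that are not taken from the study) contrasts annualizing a multi-year historical record with decomposing a rare hazard into an expert-judged annual likelihood and a modeled consequence; the function names are hypothetical.

```python
def annualized_from_history(annual_counts: list[float]) -> float:
    """Approach 1: average a multi-year historical record (e.g., deaths per year)."""
    return sum(annual_counts) / len(annual_counts)

def annualized_from_decomposition(annual_likelihood: float, consequence_if_event: float) -> float:
    """Approach 2: for rare or novel hazards, multiply an expert-judged annual
    likelihood of the event by a modeled consequence should it occur."""
    return annual_likelihood * consequence_if_event

# Notional historical record of annual deaths for a well-documented hazard
deaths_by_year = [35, 60, 21, 45, 39]
print(annualized_from_history(deaths_by_year))            # 40 deaths per year

# Notional annual likelihood of 0.0005 times a modeled 400,000-fatality scenario
print(annualized_from_decomposition(0.0005, 400_000))     # 200 expected deaths per year
```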
The quantitative estimates of homeland security risks carried large uncertainties for some hazards, but that does not mean they cannot be useful for some applications. Table 4 presents the precision of the quantitative estimates in terms of the orders of magnitude between the lower and upper bound for each of the hazard-attribute pairs. The range of the estimates was substantial for some hazards, with two or even three orders of magnitude between the low estimate and the high estimate for a given attribute. This lack of precision may limit some quantitative approaches but need not prevent qualitative ones: bounding estimates with similar orders of magnitude have been used in studies applying the Deliberative Method for Ranking Risks in other domains. 33 Still, uncertain estimates of risk may also suggest areas where future research would be useful.
The precision of the estimates varied by hazard and attribute. When considering precision by hazard, there is less precision in the estimates of risk for terrorism than for natural disasters, with major accidents falling between them. There was little variation when comparing precision by attribute, in part because there was some consistency in the approaches used within a hazard; for example, if the risk of a hazard was decomposed into likelihood and consequence, the same range of likelihoods was applied to the estimates of lives lost, injuries, and economic damages.
Table 4: Precision of Estimates by Hazard and Consequence as Measured by Orders of Magnitude between Lower Bound and Upper Bound
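The precision measure reported in Table 4 can be reproduced directly from the bounds; a minimal sketch is shown below (Python, with bound values drawn from Table 5), taking precision as the base-10 logarithm of the ratio of upper to lower bound. The helper name is illustrative, not from the study.

```python
import math

def orders_of_magnitude(lower: float, upper: float) -> float:
    """Orders of magnitude spanned by a (lower, upper) bound pair."""
    return math.log10(upper / lower)

# Bounds for greatest number of deaths in a single event, from Table 5
bounds = {
    "Tornadoes": (300, 700),
    "Earthquakes": (5_000, 20_000),
    "Terrorist Nuclear Detonation": (100_000, 800_000),
}
for hazard, (lo, hi) in bounds.items():
    print(f"{hazard}: {orders_of_magnitude(lo, hi):.1f} orders of magnitude")
```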
Estimates are Too Imprecise to Quantitatively Differentiate Homeland Security Risks
The quantitative estimates of the risks provide some, but only some, ability to differentiate risks. The ability to quantitatively differentiate risks was limited by the precision of the estimates; the hazards can be distinguished on any single attribute only to a limited extent. Figure 1, for example, shows the estimates of lives lost for each hazard on a logarithmic scale; the estimates for 26 of the 45 hazard pairs overlap for expected lives lost per year, and 8 of 45 overlap for greatest number of lives lost in a single event. The other attributes are similar, with the estimates of an attribute for any given hazard overlapping with the estimates for another hazard in about half the cases.
Figure 1: A Comparison of Average Lives Lost Per Year and Greatest Lives Lost in a Single Event
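The overlap counts cited for Figure 1 follow from simple interval comparisons; the sketch below (Python, using a small subset of the bounds from Table 5 for illustration) counts how many hazard pairs have overlapping [lower, upper] ranges for a single attribute.

```python
from itertools import combinations

def intervals_overlap(a: tuple[float, float], b: tuple[float, float]) -> bool:
    """True if two [lower, upper] bounds overlap."""
    return a[0] <= b[1] and b[0] <= a[1]

# Subset of bounds for greatest number of deaths in a single event (see Table 5)
bounds = {
    "Earthquakes": (5_000, 20_000),
    "Hurricanes": (2_000, 4_000),
    "Anthrax Release": (3_000, 20_000),
    "Tornadoes": (300, 700),
}
pairs = list(combinations(bounds, 2))
overlapping = sum(intervals_overlap(bounds[x], bounds[y]) for x, y in pairs)
print(f"{overlapping} of {len(pairs)} hazard pairs overlap")  # prints "2 of 6 hazard pairs overlap"
```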
These multidimensional risks can also be presented holistically in an easily comparable visual format using radar charts. Figure 2 shows the identified risks of the selected ten homeland security hazards across all 17 attributes. To create this chart, the “best” estimate for each of the 17 selected attributes of risk was plotted on a normalized scale relative to the other hazards, with 0 (at the center) representing the lowest value in this set of hazards and 1 (at the edge) representing the highest value. We plotted qualitative attributes ordinally on this scale, while quantitative attributes were plotted on logarithmic scales. The attributes are grouped by quadrant: the upper right presents health effects; the lower right, economic damages; the lower left, non-economic consequences; and the upper left, non-consequence attributes. These charts allow quick visual comparison of the risks. For example, one can see immediately that the risks of hurricanes are greater than those of tornadoes: hurricanes equal or exceed tornadoes on every dimension of the risk or, to borrow the language of game theory, the risk of hurricanes dominates that of tornadoes. Hazards that are not dominated can still be judged, but with the caution that concern depends at least in part on the subjective value placed on each attribute. For example, if one risk is smaller than another on every attribute except lives lost, where it is larger, it is not entirely clear which of the two risks will be of greater concern. It is also possible to distinguish between large and small risks overall, and to identify the aspect of a risk that is particularly large, whether in terms of health effects (as in pandemic influenza), economic effects (as in earthquakes and terrorist nuclear detonations), or societal effects (as in hurricanes). Radar charts may thus be a useful tool for rapidly understanding homeland security risks in a relative context.
Figure 2: Selected Homeland Security Hazards across Multiple Dimensions of Risk
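The normalization behind a chart like Figure 2 is straightforward to sketch; the example below (Python, with "best" estimates taken from Table 5) shows 0-to-1 scaling of quantitative attributes on a logarithmic scale and ordinal mapping of qualitative levels. The handling of ties and zero values here is illustrative and not drawn from the study.

```python
import math

def normalize_log(value: float, values_across_hazards: list[float]) -> float:
    """Scale a quantitative 'best' estimate to [0, 1] on a log scale, where 0 is
    the smallest value among the hazards and 1 the largest (zero values would
    need special handling, which this sketch omits)."""
    logs = [math.log10(v) for v in values_across_hazards]
    lo, hi = min(logs), max(logs)
    return (math.log10(value) - lo) / (hi - lo)

def normalize_ordinal(level: str, scale: list[str]) -> float:
    """Map a qualitative level (e.g., 'Low'/'Moderate'/'High') to [0, 1]."""
    return scale.index(level) / (len(scale) - 1)

# Average deaths per year "best" estimates for three hazards (values from Table 5)
deaths_per_year = {"Oil Spills": 1, "Tornadoes": 40, "Pandemic Influenza": 4_000}
print(normalize_log(40, list(deaths_per_year.values())))           # Tornadoes, ~0.44
print(normalize_ordinal("Moderate", ["Low", "Moderate", "High"]))  # 0.5
```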
The risk assessment presented in this paper serves as a starting point for comparing homeland security risks. The hazards described here are only a subset of the possible risks to the nation, but additional hazards can easily be assessed for comparison using the framework of attributes described here. Additionally, while the selected “best” estimates presented here involve inherent subjectivity, the estimates and the justifications for them are transparent, allowing others to modify those estimates as needed. The identification of risks in a standardized fashion can serve as a starting point for future comparative risk assessments.
The results show that it is possible to estimate homeland security risks in a comparative fashion, although there are limitations in the use of those estimates. One limitation is inherent subjectivity. Determining estimates of homeland security risks requires many types of subjective judgment. Estimating lower and upper bounds is objective in theory (identifying the lowest and highest estimates in the literature) but can involve subjectivity in determining the scope of the risks, for example, in determining which events to treat as analogous for a cyber-attack on critical infrastructure. Selecting a “best” estimate involves a greater degree of subjectivity, particularly when dealing with risks that lack historical data. In those cases, selecting a “best” estimate involves subjective judgment as to which estimates are more likely or more reasonable. We addressed this subjectivity by being transparent about how and why each “best” estimate was selected.
The lack of classified information may also bias these estimates. The risk assessments in this paper used only estimates derived from the open-source literature. It is possible (but by no means certain) that risks of terrorist events could be characterized more accurately using classified information. Terrorists want both to conceal and to publicize their actions; while they try to keep their planning and intentions secret, they want their results to be highly public in order to amplify the influence of the attack. Accordingly, open-source estimates of risk describing historically common terrorist events (e.g., explosive bombings, assault scenarios) should be reliable at the broad level of resolution used in this analysis, assuming that exposure to the risk has not changed appreciably from the historical record. Rare or novel risks are less likely to be reflected in the historical record. In these cases, classified data may (or may not) support a more accurate estimate than open-source data. It is certainly true that analysts with access to classified data will know more about terrorist activities than those limited to open-source data, but it is not clear whether this translates to a greater understanding of the residual risk, as one would assume that any actionable intelligence would be acted upon, thus removing the threat from the residual risk that remains. While the approaches used to address uncertainty in the estimates (bounding, rounding, and qualitative levels) should attenuate concerns about the use of open-source data, the extent to which the estimates developed here would differ from those developed using classified information is unclear.
Still, some things can be learned from this analysis, both about the risks and about the risk assessment process. First, there are sufficient open-source data to describe homeland security risks; the precision of the estimates in this study is similar to the precision of estimates for risks studied in other domains using similar methodologies (see Lundberg, 2013). 34 But the extent to which homeland security risks can be estimated does vary by hazard. Looking at the range of estimates generated for the risks, it is only possible to distinguish between any two hazards on a single attribute about half the time; in some cases, upper estimates are more than three orders of magnitude greater than the lower estimates for a given attribute. Natural disasters can be described with more precision than terrorist events, as expected. This is particularly true of rare or novel high-consequence events such as terrorist anthrax attacks or nuclear detonations, but even terrorist bombings, with a substantial history to draw upon, are associated with less precision than natural disasters. However, the degree of precision does not vary greatly by attribute; only the greatest number of lives lost in a single event could be described with better precision than the rest of the estimates.
Better precision in these uncertain estimates could plausibly help improve decision-making, but improving the precision of the estimates is not without tradeoffs, and it is important not to overstate the precision beyond what is appropriate. 35 There are natural limitations in trying to establish with greater precision the likelihood of essentially non-probabilistic events, notably terrorism. 36 Some hazards (such as earthquakes) have an essentially probabilistic nature, with each day akin to a roll of the dice, and imprecision reflects limits in the ability to determine what that probability is. But other homeland security hazards are not probabilistic; to borrow from Albert Einstein, terrorists do not roll dice. Whether or not a terrorist event occurs is not a matter of chance but the result of the decisions of an adversary; while we may not know what those decisions are, they are decisions and not random acts. While treating terrorist actions as if they were probabilistic and modeling them using probabilistic methods can be useful in some circumstances, it can be misleading in others (see Brown and Cox, 2011, for a broader treatment of the challenges of probabilistic risk assessment in terrorism risk analysis). 37 Accordingly, narrowing the bounds of uncertainty associated with the risk of high-consequence terrorist hazards could overstate what is really known about the risk. Given recent concerns about overstating the precision of this kind of subjective probability (including intelligence estimates of the likelihood of weapons of mass destruction in the prewar assessments on Iraq), 38 this kind of false precision should be avoided. Instead, both the estimates of risk and the estimates of the precision of that risk should be presented to homeland security decision-makers, and decision support tools that can integrate imprecision (such as Robust Decision Making or Assumption-Based Planning) should be explored. 39
Another lesson is that the risks vary by more than just size. The largest risks have their effects predominantly in one aspect (i.e., health, economic damage, or societal damage, but not all three), but the particular aspect in which that largest effect is realized varies by hazard: health effects drive damage for pandemic influenza, economic damage for terrorist nuclear detonations and earthquakes, and societal disruption for hurricanes. This provides empirical support for suggestions that analyses of homeland security risks should consider a range of attributes rather than just lives lost and economic damages. One might do this through graphical means (such as the example star charts or a set of complementary metrics in a dashboard approach) or numerically.
This multi-dimensionality can limit integrated comparative risk assessments. Combining additional attributes quantitatively compounds the lack of precision, and deciding how to combine multiple attributes inherently involves subjective judgment (for example, in deciding how many dollars an acre of wetlands damaged by oil is worth). The typical challenges of subjectivity in converting these different attributes into a single metric are compounded by the problems of imprecision. For some of the challenges in summarizing risk using risk measures and risk indices, see MacKenzie (2014) and NAS (2010). 40 In this respect, we concur with the National Academies panel on DHS’s approach to risk analysis: DHS should not compare all the different risks in its portfolio using a single quantitative metric, and qualitative tools maintaining the underlying individual attributes should be used to inform, not replace, decision-makers.
To this end, the analysis identifies some ways in which multiple individual attributes can inform policy-makers. Risk summaries can be created using best practices in risk communication, integrating summaries and tables to present information clearly and comprehensively; an approach such as Lundberg (2013) can be useful for supporting informed consideration of risks using summary sheets. 41 Additionally, visual aids can, when used correctly, communicate information quickly and clearly. 42 The star charts presented in this paper can quickly illustrate the relative size of the risks as well as the kinds of consequences (i.e., health, economic, or societal) that are of greatest concern. While these charts present the risks individually, one could plot all of the hazards on a single chart by presenting only the outlines. Alternatively, the charts could be modified to incorporate the uncertainty of the estimates, presenting the lower and upper bounds for each of the risks. Radar charts are not the only tool that could be used to present information visually; approaches from business informatics, including the Five Star Framework or dashboard designs, could also be used to communicate multiple attributes quickly and easily. 43
Comprehending the risks is just a first step to making decisions in the homeland security domain. While risks are important, it is the extent to which activities reduce risk that decision-makers actually weigh, and this represents an additional step that can be challenging to estimate. These risk reductions must also be considered with regard to their efficiency: the extent to which risk can be reduced for a given amount of money. However, efficiency is not the only value that should be considered; other values, such as liberty and equity, should be considered as well. This makes decision-making in the homeland security domain a complex undertaking, one in which a single quantitative estimate cannot substitute for judgment.
While some articles have explored the reasons to be cautious about the use of qualitative risk assessment tools, 44 there are also reasons to be cautious about the use of quantitative estimates. In the context of homeland security hazards at the national level, the precision of available estimates presents one challenge to developing integrated risk assessments, and the multiple dimensions of risk in national hazards presents another. There may be situations in which purely quantitative or purely qualitative approaches are warranted, but it is important to be aware of the limitations and strengths of each. In many cases, it is better to embrace the complexity of risks in the homeland security domain than to embrace false simplicity. For now, homeland security risk assessment will remain both an art and a science.
Table 5: Value for Each Attribute by Hazard
Public Health and Safety
Average Number of Deaths per Year

Earthquakes | 100
Hurricanes | 40
Tornadoes | 40
Pandemic Influenza | 4,000
Anthrax Release | 20
Terrorist Nuclear Detonation | 200
Terrorist Explosive Bombings | 10
Cyber Attack on Critical Infrastructure | 0
Toxic Industrial Chemical Accident | 8
Oil Spills | 1

Greatest Number of Deaths in a Single Episode

Earthquakes | 5,000 – 20,000
Hurricanes | 2,000 – 4,000
Tornadoes | 300 – 700
Pandemic Influenza | 300,000 – 2,000,000
Anthrax Release | 3,000 – 20,000
Terrorist Nuclear Detonation | 100,000 – 800,000
Terrorist Explosive Bombings | 200 – 2,000
Cyber Attack on Critical Infrastructure | 0 – 10
Toxic Industrial Chemical Accident | 3,000 – 20,000
Oil Spills | 200

Average More Severe Injuries/Illnesses per Year

Earthquakes | 70
Hurricanes | 600
Tornadoes | 200
Pandemic Influenza | 20,000
Anthrax Release | 60
Terrorist Nuclear Detonation | 200
Terrorist Explosive Bombings | 30
Cyber Attack on Critical Infrastructure | 0
Toxic Industrial Chemical Accident | 50
Oil Spills | 5

Average Less Severe Injuries/Illnesses per Year

Earthquakes | 3,000
Hurricanes | 1,000
Tornadoes | 700
Pandemic Influenza | 2,000,000
Anthrax Release | 300
Terrorist Nuclear Detonation | 100
Terrorist Explosive Bombings | 60
Cyber Attack on Critical Infrastructure | 0
Toxic Industrial Chemical Accident | 500
Oil Spills | 60

Psychological Damage per Year on Average

Earthquakes | High
Hurricanes | High
Tornadoes | Moderate
Pandemic Influenza | High
Anthrax Release | Moderate
Terrorist Nuclear Detonation | High
Terrorist Explosive Bombings | Low
Cyber Attack on Critical Infrastructure | Low
Toxic Industrial Chemical Accident | Low
Oil Spills | Moderate
Societal and Economic Damage
Average Economic Damages per Year

Earthquakes | $5B
Hurricanes | $10B
Tornadoes | $1B
Pandemic Influenza | $4B
Anthrax Release | $7B
Terrorist Nuclear Detonation | $3B
Terrorist Explosive Bombings | $100B
Cyber Attack on Critical Infrastructure | $50B
Toxic Industrial Chemical Accident | $300B
Oil Spills | $1B

Greatest Economic Damages in a Single Event

Earthquakes | $60B – $1T
Hurricanes | $60B – $200B
Tornadoes | $900M – $3B
Pandemic Influenza | $70B – $200B
Anthrax Release | $300M – $100B
Terrorist Nuclear Detonation | $1T – $10T
Terrorist Explosive Bombings | $1B – $40B
Cyber Attack on Critical Infrastructure | $100M – $10B
Toxic Industrial Chemical Accident | $2B – $700B
Oil Spills | $4B – $40B

Duration of Economic Damages

Earthquakes | Months to Decades
Hurricanes | Months to Years
Tornadoes | Weeks to Years
Pandemic Influenza | Months to Years
Anthrax Release | Months
Terrorist Nuclear Detonation | Years
Terrorist Explosive Bombings | Weeks to Months
Cyber Attack on Critical Infrastructure | Days to Weeks
Toxic Industrial Chemical Accident | Days to Years
Oil Spills | Months to Decades

Size of Area Affected by Economic Damages

Earthquakes | County to State
Hurricanes | Counties to States
Tornadoes | Blocks to Counties
Pandemic Influenza | Nation/World
Anthrax Release | Neighborhood to City
Terrorist Nuclear Detonation | Nation/World
Terrorist Explosive Bombings | Less than a Block to City
Cyber Attack on Critical Infrastructure | Company to Nation
Toxic Industrial Chemical Accident | Blocks to Counties
Oil Spills | Counties to States

Average Environmental Damages per Year

Earthquakes | Moderate
Hurricanes | High
Tornadoes | Low
Pandemic Influenza | Low
Anthrax Release | Low
Terrorist Nuclear Detonation | Moderate
Terrorist Explosive Bombings | Low
Cyber Attack on Critical Infrastructure | Low
Toxic Industrial Chemical Accident | Moderate to High
Oil Spills | High

Average Individuals Displaced per Year

Earthquakes | 700 – 20,000
Hurricanes | 10,000 – 100,000
Tornadoes | 30,000 – 200,000
Pandemic Influenza | 0
Anthrax Release | 20 – 6,000
Terrorist Nuclear Detonation | 100 – 300,000
Terrorist Explosive Bombings | 3 – 100
Cyber Attack on Critical Infrastructure | 0
Toxic Industrial Chemical Accident | 5,000 – 200,000
Oil Spills | 5

Disruption of Government Operations

Earthquakes | Moderate
Hurricanes | Moderate to High
Tornadoes | Moderate
Pandemic Influenza | Moderate to High
Anthrax Release | Moderate
Terrorist Nuclear Detonation | High
Terrorist Explosive Bombings | Low to Moderate
Cyber Attack on Critical Infrastructure | Moderate to High
Toxic Industrial Chemical Accident | Low
Oil Spills | Low
Other Characteristics
Natural / Human-Induced

Earthquakes | Natural
Hurricanes | Natural
Tornadoes | Natural
Pandemic Influenza | Natural
Anthrax Release | Human-Induced
Terrorist Nuclear Detonation | Human-Induced
Terrorist Explosive Bombings | Human-Induced
Cyber Attack on Critical Infrastructure | Human-Induced
Toxic Industrial Chemical Accident | Human-Induced
Oil Spills | Human-Induced

Ability of Individual to Control Their Exposure

Earthquakes | Low to Moderate
Hurricanes | High
Tornadoes | Moderate
Pandemic Influenza | Low
Anthrax Release | Low
Terrorist Nuclear Detonation | Low
Terrorist Explosive Bombings | Low
Cyber Attack on Critical Infrastructure | Low to Moderate
Toxic Industrial Chemical Accident | Low to Moderate
Oil Spills | Moderate

Time Between Exposure and Health Effects

Earthquakes | Immediate
Hurricanes | Immediate to Years
Tornadoes | Immediate
Pandemic Influenza | Days
Anthrax Release | Days to Weeks
Terrorist Nuclear Detonation | Immediate to Decades
Terrorist Explosive Bombings | Immediate
Cyber Attack on Critical Infrastructure | Immediate
Toxic Industrial Chemical Accident | Immediate to Decades
Oil Spills | Immediate to Years

Quality of Scientific Understanding

Earthquakes | High
Hurricanes | Moderate to High
Tornadoes | High
Pandemic Influenza | High
Anthrax Release | High
Terrorist Nuclear Detonation | High
Terrorist Explosive Bombings | High
Cyber Attack on Critical Infrastructure | Low to Moderate
Toxic Industrial Chemical Accident | Low to Moderate
Oil Spills | Low

Combined Uncertainty

Earthquakes | Moderate
Hurricanes | Low
Tornadoes | Low
Pandemic Influenza | Low
Anthrax Release | High
Terrorist Nuclear Detonation | High
Terrorist Explosive Bombings | Moderate
Cyber Attack on Critical Infrastructure | Moderate
Toxic Industrial Chemical Accident | Low to Moderate
Oil Spills | Low
About the Authors
Russell P. Lundberg is an assistant professor of security studies at Sam Houston State University and an adjunct with the RAND Corporation. While obtaining his Ph.D. in policy analysis at the Pardee RAND Graduate School, Lundberg contributed to projects on the link between crime and drugs, aviation and postal security, law enforcement intelligence, and corrections. Prior to joining RAND, Lundberg was with the DHS Office of Inspector General and contributed to several reports, including the Hurricane Katrina review. Additionally, Lundberg was the 2011-2012 Harold Brown fellow from RAND’s Center for Global Risk and Security and is the managing editor of the Journal of Drug Policy Analysis. Russell Lundberg can be reached at russell.lundberg@gmail.com.
Henry H. Willis is director of the RAND Homeland Security and Defense Center and a professor at the Pardee RAND Graduate School. Willis has applied risk analysis tools to resource allocation and risk management decisions in the areas of public health and emergency preparedness, homeland and national security policy, energy and environmental policy, and transportation planning. He is the author of dozens of publications, book chapters, and op-ed pieces and has testified before Congress as an expert on applying risk analysis to homeland security policy. Willis’ recent research has involved assessing the costs and benefits of terrorism security measures like the Western Hemisphere Travel Initiative and evaluating the impact of public health emergency preparedness grant programs like the Cities Readiness Initiative. Willis earned his B.A. in chemistry and environmental studies from the University of Pennsylvania, his M.A. in environmental science from the University of Cincinnati, and his Ph.D. from the Department of Engineering and Public Policy at Carnegie Mellon University. Henry Willis can be reached at hwillis@rand.org.
Disclaimer
This research was supported by the United States Department of Homeland Security (DHS) through the National Center for Risk and Economic Analysis of Terrorism Events (CREATE) at the University of Southern California (USC) under award number 2010-ST-061-RE0001. However, any opinions, findings, and conclusions or recommendations in this document are those of the authors and do not necessarily reflect views of the United States Department of Homeland Security, or the University of Southern California, or CREATE.
Notes
1 DHS Budget-in-Brief FY 2013, US Department of Homeland Security, Editor, (2013) Washington, DC.
2 DHS Risk Lexicon, U.S. Department of Homeland Security – Risk Steering Committee, Editor, (2010) Washington, DC.
3 The Strategic National Risk Assessment in Support of PPD 8: A Comprehensive Risk-Based Approach toward a Secure and Resilient Nation, U.S. Department of Homeland Security, Editor, (2011) Washington, DC.
4 P. Slovic and E.U. Weber, Perception of Risk Posed by Extreme Events, Columbia University Center for Hazards and Risk Research, 2002.
5 Committee to Review the DHS’s Approach to Risk Analysis, Review of the Department of Homeland Security’s Approach to Risk Analysis, National Research Council of the National Academies, Editor, (2010) National Academies Press: Washington, D.C.
6 G.G. Brown and J.L.A. Cox, “How Probabilistic Risk Assessment Can Mislead Terrorism Risk Analysts,” Risk Analysis 31 no.2(2011): 196-204; B.C. Ezell et al., “Probabilistic Risk Analysis and Terrorism Risk,” Risk Analysis 30 no.4 ( 2010): 575-589.
7 Committee to Review the DHS’s Approach to Risk Analysis, Review of the Department of Homeland Security’s Approach to Risk Analysis, National Research Council of the National Academies, Editor, (2010), National Academies Press: Washington, D.C.
8 Committee to Review the DHS’s Approach to Risk Analysis, Review of the Department of Homeland Security’s Approach to Risk Analysis, National Research Council of the National Academies, Editor, (2010) National Academies Press: Washington, D.C., p. 84.
9 Committee to Review the DHS’s Approach to Risk Analysis, Review of the Department of Homeland Security’s Approach to Risk Analysis, National Research Council of the National Academies, Editor, (2010), National Academies Press: Washington, D.C.
10 A.T. Cox, “What’s Wrong with Risk Matrices?” Risk Analysis 28 no.2 (2008): 497-512; D.J. Rozell, “A Cautionary Note on Qualitative Risk Ranking of Homeland Security Threats,” Homeland Security Affairs 11 no.3 (2015).
11 Quadrennial Homeland Security Review Report: A Strategic Framework for a Secure Homeland, U.S. Department of Homeland Security, Editor, (2014): Washington, D.C.
12 Committee to Review the DHS’s Approach to Risk Analysis, Review of the Department of Homeland Security’s Approach to Risk Analysis, National Research Council of the National Academies, Editor, (2010) National Academies Press: Washington, D.C.
13 L. Bodin, L. Gordon, and M. Loeb, “Evaluating Information Security Investments Using the Analytic Hierarchy Process”, Communications of the ACM 48 no.2(2005): 78-83; O.S. Vaidya and S. Kumar, “Analytic Hierarchy Process: An Overview of Applications”, European Journal of Operational Research 169 no.1 (2006): 1-29; T.L.Saaty, “Analytic Hierarchy Process” in Encyclopedia of Operations Research and Management Science (Springer:2013): 52-64; R.L.Keeney and D. von Winterfeldt, “A Value Model for Evaluating Homeland Security Decisions”, Risk Analysis 31 no.9 (2011): 1470-1487; R.L Keeney and H. Raiffa, Decisions with Multiple Objectives: Preferences and Value Trade-offs, (New York: Cambridge University Press, 1993).
14 H.K. Florig et al., “A Deliberative Method for Ranking Risks (I): Overview and Test Bed Development,” Risk Analysis 21 no.5 (2001): 913; K.M. Morgan et al., “A Deliberative Method for Ranking Risks (II): Evaluation of Validity and Agreement among Risk Managers,” Risk Analysis 21 no.5 (2001): 923.
15 R. Lundberg, Comparing Homeland Security Risks Using a Deliberative Risk Ranking Methodology, (Santa Monica: Pardee RAND Graduate School, 2013); R. Lundberg and H.H. Willis, Deliberative Risk Ranking to Inform Homeland Security Strategic Planning, working paper.
16 M.G.Morgan, et al., “Categorizing Risks for Risk Ranking,” Risk Analysis 20 no.1 (2000): 49-58.
17 DHS Risk Lexicon, U.S. Department of Homeland Security – Risk Steering Committee, Editor, (2010) Washington, DC.
18 Quadrennial Homeland Security Review Report: A Strategic Framework for a Secure Homeland, U.S. Department of Homeland Security, Editor, (2010) Washington, DC; National Planning Scenarios, Homeland Security Council in partnership with the U.S. Department of Homeland Security, Editor, (2005) Washington, DC; Bioterrorism Risk Assessment, U.S. Department of Homeland Security, Biological Threat Characterization Center of the National Biodefense Analysis and Countermeasures Center, Editor, (2006) Fort Detrick, Md.; National Response Framework – Emergency Support Function #10 Annex, U.S. Department of Homeland Security, Federal Emergency Management Agency, Editor, (2008) Washington, DC; National Infrastructure Protection Plan, U.S. Department of Homeland Security, Editor, (2009) Washington, DC.
19 M.G. Morgan, et al., “Categorizing Risks for Risk Ranking”, Risk Analysis 20 no.1 (2000): 49-58.
20 Building a Resilient Nation, Department of Homeland Security, Mission 2013 [cited 2013 June 6]; Available from: http://www.dhs.gov/building-resilient-nation.
21 Quadrennial Homeland Security Review Report: A Strategic Framework for a Secure Homeland, U.S. Department of Homeland Security, Editor, (2010) Washington, DC; National Planning Scenarios, Homeland Security Council in partnership with the U.S. Department of Homeland Security, Editor, (2005) Washington, DC; Bioterrorism Risk Assessment, U.S. Department of Homeland Security, Biological Threat Characterization Center of the National Biodefense Analysis and Countermeasures Center, Editor, (2006) Fort Detrick, Md.; National Response Framework – Emergency Support Function #10 Annex, U.S. Department of Homeland Security, Federal Emergency Management Agency, Editor, (2008) Washington, DC; National Infrastructure Protection Plan, U.S. Department of Homeland Security, Editor, (2009) Washington, DC.
22 The Strategic National Risk Assessment in Support of PPD 8: A Comprehensive Risk-Based Approach toward a Secure and Resilient Nation, U.S. Department of Homeland Security, Editor, (2011) Washington, DC.
23 R.L Keeney and H. Raiffa, Decisions with Multiple Objectives: Preferences and Value Trade-offs, (New York: Cambridge University Press, 1993).
24 Committee to Review the DHS’s Approach to Risk Analysis, Review of the Department of Homeland Security’s Approach to Risk Analysis, National Research Council of the National Academies, Editor, (2010) National Academies Press: Washington, D.C.; R.L. Keeney and D. von Winterfeldt, “A Value Model for Evaluating Homeland Security Decisions”, Risk Analysis 31 no.9 (2011): 1470-1487; Committee on Assessing the Costs of Natural Disasters, The Impacts of Natural Disasters: A Framework for Loss Estimation, National Research Council, Editor, (1999) National Academies Press, Washington, D.C.; Committee on Disaster Research in the Social Sciences: Future Challenges and Opportunities, Facing Hazards and Disasters: Understanding Human Dimensions, National Research Council, Editor, (2006) National Academies Press: Washington, D.C; D.S. Mileti, Disasters by Design: A Reassessment of Natural Hazards in the United States, Washington DC.: National Academies Press, (1999); M.K.Lindell and C.S. Prater, “Assessing Community Impacts of Natural Disasters,” Natural Hazards Review 4 (2003): 176.
25 Committee to Review the DHS’s Approach to Risk Analysis, Review of the Department of Homeland Security’s Approach to Risk Analysis, National Research Council of the National Academies, Editor, (2010) National Academies Press: Washington, D.C.; Quadrennial Homeland Security Review Report: A Strategic Framework for a Secure Homeland, U.S. Department of Homeland Security, Editor, (2010) Washington, DC.; National Planning Scenarios, Homeland Security Council in partnership with the U.S. Department of Homeland Security, Editor, (2005) Washington, DC.; National Infrastructure Protection Plan, U.S. Department of Homeland Security, Editor, (2009) Washington, DC.; White House, Homeland Security Presidential Directive (HSPD-7): Critical Infrastructure Identification, Prioritization, and Protection. 2003: Washington, DC.
26 R. Lundberg, Comparing Homeland Security Risks Using a Deliberative Risk Ranking Methodology, (Santa Monica: Pardee RAND Graduate School, 2013).
27 Ibid.
28 Ibid.
29 National Planning Scenarios, Homeland Security Council in partnership with the U.S. Department of Homeland Security, Editor, (2005) Washington, DC.
30 RAND Database of Worldwide Terrorism Incidents, RAND Corporation, Editor, (2012) Santa Monica, CA.
31 G.G.Brown and J.L.A. Cox, “How Probabilistic Risk Assessment Can Mislead Terrorism Risk Analysts,” Risk Analysis, 31 no.2 (2011): 196-204; B.C.Ezell, et al., “Probabilistic Risk Analysis and Terrorism Risk,” Risk Analysis 30 no.4 (2010): 575-589.
32 R. Lundberg, Comparing Homeland Security Risks Using a Deliberative Risk Ranking Methodology, (Santa Monica: Pardee RAND Graduate School, 2013).
33 H.K. Florig et al., “A Deliberative Method for Ranking Risks (I): Overview and Test Bed Development,” Risk Analysis 21 no.5 (2001): 913; K.M. Morgan et al., “A Deliberative Method for Ranking Risks (II): Evaluation of Validity and Agreement among Risk Managers,” Risk Analysis 21 no.5 (2001): 923; H.H. Willis et al., “Ecological Risk Ranking: Development and Evaluation of a Method for Improving Public Participation in Environmental Decision Making,” Risk Analysis 24 no.2 (2004): 363-378; H.H. Willis et al., “Prioritizing Environmental Health Risks in the UAE,” Risk Analysis 30 no.12 (2010): 1842-1856; J. Xu, H.K. Florig, and M.L. DeKay, “Evaluating an Analytic–Deliberative Risk-Ranking Process in a Chinese Context,” Journal of Risk Research 14 no.7 (2011): 899-918.
34 R. Lundberg, Comparing Homeland Security Risks Using a Deliberative Risk Ranking Methodology, (Santa Monica: Pardee RAND Graduate School, 2013).
35 H.H. Willis and M. Moore, “Improving the Value of Analysis for Biosurveillance,” Decision Analysis 11 no.1 (2013): 63-81.
36 B.C. Ezell, et al., “Probabilistic Risk Analysis and Terrorism Risk,” Risk Analysis 30 no.4 (2010): 575-589.
37 B.C. Ezell, et al., “Probabilistic Risk Analysis and Terrorism Risk,” Risk Analysis 30 no.4 (2010): 575-589; M. Colyvan, “Is Probability the Only Coherent Approach to Uncertainty?” Risk Analysis 28 no.3 (2008): 645-652.
38 Report on the US Intelligence Community’s Prewar Intelligence Assessments on Iraq: Conclusions: Overall Conclusions–Weapons of Mass Destruction, U.S. Senate, Editor, (2004) United States Senate Select Committee on Intelligence: Washington, DC.
39 R.J. Lempert and M.T. Collins, “Managing the Risk of Uncertain Threshold Responses: Comparison of Robust, Optimum, and Precautionary Approaches,” Risk Analysis 27 no.4 (2007): 1009-1026; J.A. Dewar, Assumption-Based Planning: A Tool for Reducing Avoidable Surprises (New York: Cambridge University Press, 2002).
40 C.A. MacKenzie, “Summarizing Risk Using Risk Measures and Risk Indices,” Risk Analysis 34 no.12 (2014): 2143-62; Committee to Review the DHS’s Approach to Risk Analysis, Review of the Department of Homeland Security’s Approach to Risk Analysis, National Research Council of the National Academies, Editor, (2010) National Academies Press: Washington, D.C.
41 R. Lundberg, Comparing Homeland Security Risks Using a Deliberative Risk Ranking Methodology, (Santa Monica: Pardee RAND Graduate School, 2013).
42 E.R. Tufte, The Visual Display of Quantitative Information (Cheshire, CT: Graphics Press, 1983); S. Few, Data Visualization: Past, Present, and Future (IBM Cognos Innovation Center, 2007).
43 G.E. DeSeve, The Presidential Appointee’s Handbook (Washington, DC: Brookings Institution Press, 2009).
44 A.T. Cox, “What’s Wrong with Risk Matrices?” Risk Analysis 28 no.2 (2008): 497-512; D.J. Rozell, “A Cautionary Note on Qualitative Risk Ranking of Homeland Security Threats,” Homeland Security Affairs 11 no.2 (2015).
Copyright
Copyright © 2015 by the author(s). Homeland Security Affairs is an academic journal available free of charge to individuals and institutions. Because the purpose of this publication is the widest possible dissemination of knowledge, copies of this journal and the articles contained herein may be printed or downloaded and redistributed for personal, research or educational purposes free of charge and without permission. Any commercial use of Homeland Security Affairs or the articles published herein is expressly prohibited without the written consent of the copyright holder. The copyright of all articles published in Homeland Security Affairs rests with the author(s) of the article. Homeland Security Affairs is the online journal of the Naval Postgraduate School Center for Homeland Defense and Security (CHDS).
Comments
Maybe I missed them, but where are the conclusions and recommendations sections? I saw the results, and I saw the portion of the content that said there is no comparison. I guess I’m left wanting more.
The Federal Disaster Mitigation Act of 2000 requires local governments to develop a Hazard Mitigation Plan (HMP), which identifies strategies to minimize the impact of these hazards, in order to be eligible for pre- or post-disaster mitigation funding. The article is valuable in that it provides a possible RFP framework for our next five-year HMP update. It could also help the county and communities assess a proposal that is based upon this methodology. I plan to forward the article to the Michigan Hazard Mitigation office.
Paul L. Haley, EMC – Trenton, Michigan
My compliments on this article for taking on the difficult task of comparing different levels of risk; overall, it does a very good job. However, I do have some issues with the assessment of Pandemic Influenza. Here are some quick observations.
1. The singling out of Pandemic Influenza is, I believe, in error. With the rise of drug-resistant diseases (resistant TB comes to mind), the existence of other diseases, some of which are novel, the continued (but ignored) pandemics of HIV, and the various mosquito-borne diseases (among others), the category should be Pandemic Diseases. In fact, in my opinion it should be expanded to other biothreats (toxic plumes, pollution, invasive species), as well as bioweapons. All of these may have different treatments but overall require the same public health measures and procedures.
2. The implication was that deaths are the greatest problem. Though devastating, the greater impact will be on those who are maimed or left with a chronic condition. As cold as it sounds, the dead are a short-term problem. The still living, who will require lifelong support and divert recovery efforts from other critical needs, will be the greater consequence.
3. There appears to be a reliance on historical and analogous data. These are of definite value, but the conditions of the contemporary world have never occurred before. Globally there are high-density urban populations, numbering in the millions, that cover hundreds of square miles, not always with adequate medical care and public health. The interconnection of the world is far faster than ever before, with communications spreading fear and misinformation as well as valid data in real time, and transportation systems that can spread a contagious disease globally in just a few days. Economies far removed from the epicenter of an outbreak can be impacted negatively; in fact, your estimate of the economic impact at $4B is probably far too low.
4. Pandemics do not have to directly affect (infect) human health; they can strike food sources (coffee, wheat, chocolate, bovines, poultry) and raw resources (wood, rubber). These “zoodemics” can be as costly or more so, leading to famine, loss of employment, crippled industries (potentially cascading into other industries), refusal by other countries to buy a particular product, etc. The recent avian flu outbreaks in the US are an example, resulting in more costly eggs and smaller turkeys for Thanksgiving.
5. I am not sure characterizing pandemics as natural rather than human-induced is completely accurate. There was a relatively recent outbreak of “Disney measles” which apparently originated from a single case imported from outside the US and unfortunately spread into population clusters of those who had not been immunized against it. The same can be said for flu spread by those who still go to work when sick with “just the flu.” The origin may be non-human, but humans are very effective vectors of diseases to other humans.
6. The indication that the health effects of pandemics occur within days is not completely accurate. Many diseases can be asymptomatic for years (HIV) or appear mild while the health impact may not be known for years (prions, mumps), and diseases tend to come in waves, sometimes with decades between them.
7. Characterizing the level of scientific understanding as high is clearly not true. While huge strides have been made in the last 200 years, we can by no means state that we have a high understanding of micro- and macro-biology and biomes. In addition, even if there is high understanding among a small number of “experts,” it is clear that the general population does not have much understanding of disease transmission, treatments, or vaccines. This is equivalent to claiming one has an excellent safety program because one has an expert safety officer who has written an excellent safety plan; if not everyone is practicing safety and looking out for others, one does not have a safety program. For disease, the article states that there is low ability for individuals to protect themselves, while in fact if people had some knowledge and did simple things such as washing their hands, they would have a high ability to limit exposure. There would also be less chance of people falling for miracle cures and other medical scams if they had a better understanding.
8. Your data about displacement from a disease outbreak are historically incorrect. If one looks at the history of the 1918 influenza, millions were displaced and many towns isolated themselves to prevent infected people from entering. These were by and large not massive forced migrations crossing international borders, but movements over relatively short distances, such as going to the countryside, staying at one’s summer cottage, or taking a long vacation.
9. One last observation (though there are others). The article states that uncertainty is low for pandemic influenza; this is also incorrect. Though there is 100% certainty that there will be a global pandemic, its nature, lethality, morbidity, who will be infected, treatment/cure, and prevention are all examples of uncertainty. Uncertainty leads to fear, panic, poor decisions, inappropriate actions, rumor, and conspiracy theories. One does not even have to have an actual biothreat; just the uncertainty of it can be sufficient. Witness the recent overreactions in the US to a single imported case of Ebola that infected two people (despite the thousands who had been in contact) during the short period when it is infectious. This is not to mention the reactions to those with medical expertise who were stigmatized (or worse), those who were shunned and mistreated because they were born in West Africa or had visited family on the opposite side of the continent (a distance greater than that from San Francisco to New York), or the demands to stop all flights.
Looks like Robert made some good points regarding pandemics back in 2015.