Measuring State Resilience: What Actually Makes a Difference?

Jasper Cooke

EXECUTIVE SUMMARY

This thesis aims to answer the question: What drives resilience at the state level in the United States? Specifically, we address the lack of a clear success metric for the broad goals of increasing security or resilience. Further, within the broad range of activities that states can engage in to pursue these goals, some must be more effective than others. It would be beneficial to know, for instance, whether a county emergency manager would save more lives by spending $100,000 on a community preparedness campaign or by spending the same amount on a full-scale exercise. Without a clear measure of resilience or security, however, we cannot know whether we are becoming more resilient, nor which actions most effectively achieve that goal.

To answer this question, we conducted a literature review and determined that a composite indicator, also known as an index, is the most quantitatively rigorous way to measure a complex idea such as resilience. Additionally, we followed existing precedent and used weather-related deaths and economic damage as external proxies for resilience. We decided to measure resilience at the state level in the United States because of federalism; if Department of Homeland Security grants for improving resilience and security are provided to the states, then it is important to measure resilience at the state level. There are also a number of notable forerunners that use composite indicators to measure resilience, including the Baseline Resilience Indicators for Communities (BRIC) model, which measures county-level resilience in the United States; the Australian Natural Disaster Resilience Index (ANDRI), which measures province- and national-level resilience in Australia; and the National Health Security Preparedness Index (NHSPI), which measures health security at the state level in the United States.

As outlined in the Organisation for Economic Co-operation and Development’s Handbook on Constructing Composite Indicators, the first step in creating a composite indicator is to establish a theoretical framework and a list of specific indicators.[1] We used existing research to pull together this framework and then validated it with a two-round Delphi method. We also spoke to a group of professional data analysts at the Federal Emergency Management Agency (FEMA) called the Analytics Community Brownbag to gain further insight into existing measures and efforts. We made edits to the framework based on feedback from these groups, then gathered available data and aggregated it into an index. Recognizing the truth of George Box’s remark, “all models are wrong but some are useful,” our goal was not to create a perfectly accurate model but instead to create a model that would help actual practitioners better evaluate program success.[2]
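To make the aggregation step concrete, the sketch below shows one common approach from the composite-indicator literature: min-max normalization of each indicator followed by equal-weight linear aggregation. The state names, indicator names, and values are hypothetical, and the thesis does not prescribe a particular software implementation; this is only an illustration of the general technique.

```python
import pandas as pd

# Hypothetical indicator values for a few states (illustrative only).
raw = pd.DataFrame(
    {
        "building_code_rating": [62, 88, 45, 71],
        "em_budget_per_capita": [14.2, 9.8, 21.5, 12.0],
        "pct_households_with_plan": [0.41, 0.55, 0.33, 0.47],
    },
    index=["State A", "State B", "State C", "State D"],
)

# Min-max normalization rescales each indicator to [0, 1] so that
# indicators measured in different units can be combined.
normalized = (raw - raw.min()) / (raw.max() - raw.min())

# Equal-weight linear aggregation: the composite score is simply the
# mean of the normalized indicators. Other weighting schemes (expert
# weights, weights derived from factor analysis) would plug in here.
composite = normalized.mean(axis=1)
print(composite.sort_values(ascending=False))
```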

However, we did not want to leave accuracy out altogether, so we used two methods to assess the index: factor analysis and regression analysis. Factor analysis uses a correlation matrix to extract a handful of “factors”: underlying, unobservable trends that can be said to drive the overall concept, in our case resilience. Though our analysis clearly pointed to five factors in our dataset, when we extracted them and examined the indicators with which they were most strongly correlated, no clear labels for these drivers emerged. Moreover, measures of index reliability all showed that the index was not statistically sound.
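For readers unfamiliar with the technique, the sketch below shows roughly what factor extraction and a reliability check look like in practice. The data are synthetic, the five-factor count simply mirrors the result described above, and scikit-learn and Cronbach’s alpha are illustrative choices rather than the specific tools and statistics used in the thesis.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Synthetic states-by-indicators matrix (50 "states", 12 indicators),
# standing in for standardized indicator values.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 12))

# Extract five latent factors and inspect which indicators load most
# heavily on each; interpretable loadings would suggest a label such as
# "economic capacity" or "social capital".
fa = FactorAnalysis(n_components=5, random_state=0)
fa.fit(X)
for i, row in enumerate(fa.components_):  # shape: (5 factors, 12 indicators)
    top = np.argsort(np.abs(row))[::-1][:3]
    print(f"Factor {i + 1}: strongest indicators {top.tolist()}")

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency reliability of a set of item scores."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# A low alpha on real indicator data would signal the kind of weak
# statistical soundness described above.
print("Cronbach's alpha:", round(cronbach_alpha(X), 3))
```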

The regressions were simpler. For each indicator with available data, we compared its values to the two-year average of per capita weather-related injuries and fatalities, and to the average economic loss. In short, we used scatter plots to see whether any indicators in the framework were correlated with deaths and damage (the stand-in proxies for resilience). The answer was no. None of the indicators we used, from building code ratings to emergency management budgets, showed a strong relationship with deaths and damage.
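A minimal version of one such indicator-versus-outcome comparison might look like the sketch below; the arrays are synthetic stand-ins for per-state values, and the indicator name is hypothetical. The point is only to show how a correlation coefficient near zero quantifies a “no strong relationship” finding.

```python
import numpy as np
from scipy.stats import linregress

# Synthetic per-state values (illustrative only): one indicator and one
# outcome proxy per state.
rng = np.random.default_rng(1)
building_code_rating = rng.uniform(40, 95, size=50)
deaths_per_capita = rng.exponential(0.5, size=50)  # two-year average

# Simple bivariate regression of the outcome proxy on the indicator.
# An r value near zero means the indicator explains little of the
# state-to-state variation in the outcome.
result = linregress(building_code_rating, deaths_per_capita)
print(f"r = {result.rvalue:.3f}, r^2 = {result.rvalue**2:.3f}, "
      f"p = {result.pvalue:.3f}")
```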

While the factor analysis was somewhat inconclusive, the regressions were fairly black and white. Either it is true, as the data show, that better building codes do not save lives or prevent property damage, or there is a problem with the analysis. We believe the latter option—specifically, that analyzing resilience on the state level masks granularity in the data that is necessary for truly understanding resilience.

In the end, the tool we used, a composite indicator measuring resilience at the state level in the United States, did not answer the question of what drives state resilience. If anything, the analysis showed most clearly that the state is too large a unit of analysis for accurately measuring resilience when the outcomes of interest, such as weather-related deaths, are highly local.

Taking all this into account, we provide some recommendations for improving resilience measurement. Because the Threat and Hazard Identification and Risk Assessment (THIRA) is the most commonly used assessment of resilience nationally, and it is used at all levels of government, most recommendations focus on improvements to the THIRA. Specifically, emergency managers and security practitioners should use resilience to break down silos and unify effort, add nuance and quantitative measurements where possible, focus on data quality, control for the hazard, and use common sense.

[1] Organisation for Economic Co-operation and Development, and European Commission, Handbook on Constructing Composite Indicators: Methodology and User Guide (Paris: OECD, 2008).

[2] George E. P. Box, “Robustness in the Strategy of Scientific Model Building,” in Robustness in Statistics, ed. Robert L. Launer and Graham N. Wilkinson (New York: Academic Press, 1979), 201–36.
