— Volume I No. 1: Summer 2005 —

Measuring Prevention

Glen Woodbury

Introduction

How do we know if prevention is working? How do we know if all the efforts and resources directed towards stopping the next attack are worthwhile? How do we measure a negative? How do we continue to justify the diversion of public funds from other essential services if our only justification for success will be ‘nothing happened?’ If we could count how many attacks were stopped or deterred, measuring prevention would obviously be a simple task and this article would be superfluous. Unfortunately, counting what the enemy has decided not to do inherently requires an ability to read the minds of our foes or, at the very least, to constantly observe their internal decision cycles. Even if the absence of an attack were not the result of a conscious decision by the terrorist, but rather the result of some unfortunate (from their point of view) circumstance, our ability to quantify the elimination of the threat would require a much deeper intelligence capability than we are able to construct. If we could peer so deeply into the opposition that we could count each of their failures, this same capability would also make the entire homeland security enterprise moot.

The ability to measure the prevention of terrorist attacks is vitally important for a number of reasons. First, there is the accountability issue. The nation, at all levels of government and the private sector, is investing vast amounts of funds and effort to ‘prevent the next attack.’ Beyond that, we have reorganized significant portions of the federal government (and some states as well) to protect our citizens, economy, infrastructure and way of life from another horrendous attack upon the homeland. How do we know we are succeeding? What type of examination tells us whether we have been wise in our investments, or merely lucky in spite of them? Second, we need to effectively guide, and justify, future investments. Without a rational argument and measurement process to explain the benefits of expenditures on prevention activities, we are not only apt to suffer deserved criticism; we also risk sacrificing future investments, political credibility and the public’s faith. Third, and most importantly, how can we claim to be effectively protecting the safety and security of the nation without any way of determining whether our path and efforts are, at the least, mostly correct and rational?

This is critical. In the absence of more attacks (and this is a good thing), we are in effect asking the public and appropriators to ‘trust us’ on our near- and long-term efforts to prevent terrorist attacks. How long will this trust last? Is it already waning? How long can we justify expenditures on this particular public good versus all the others? On the other hand, should another attack occur, we will once again dissect every effort we made to prevent it. How will we be viewed under that scrutiny if our efforts to prevent the tragedy were unmeasured, unguided and illogically resourced in the first place?

Now the good news: not only is the measurement of prevention activities possible, the methodologies of ‘how’ to measure already exist in numerous processes. Additionally, the definitions of ‘what’ to measure have been both experienced and discussed. This article will argue that measuring prevention can be accomplished by examining and evaluating the pieces that make up the whole, and will demonstrate that prevention is not only measurable but that its measurement is well within our reach. I will support this argument by discussing and justifying the concept behind process measurement; by briefly examining some current thoughts on what might comprise prevention; and then by proposing and testing one methodological possibility.

Process Measurement

We are not at the place where we can declare victory on the strength of our intelligence capabilities. Nor should we be so shortsighted that we are willing to continue a massive investment in preventive action without a means to measure whether it is at all effective. But, to argue against the counting-negatives parable, effectiveness is not always measured at the level of final outcomes. Often, the processes and systems that lead to preferred outcomes are measured when ultimate outcome measurement is impossible. Emergency management agencies are not (usually) measured by how many houses are or are not flooded in a storm event; rather, the systems and programs that help prevent flooding are measured against accepted standards of practice. 1 Fire agencies are not generally measured by how many houses are saved or burn; their response times are measured to quantify and compare increased or decreased efficiency against the inputs they invest. We may not know ‘how many shipwrecks does a lighthouse prevent?’ 2 but we can evaluate the design and decision processes that lead to the specific placement of lighthouses and come to some conclusion as to the soundness of these decisions without knowing whether ships did or did not crash because of them.

Why can’t prevention efforts, especially at the state and local levels, be evaluated in a similar fashion? We can set in place sound and reasoned prevention practices and standards that we can confidently conclude will lead to the prevention of terrorist attacks. These practices and/or approaches can then be measured in pieces or comprehensively to give some sense of a program’s effectiveness. For example, if it is accepted that a critical piece of the prevention process is the establishment of a collaborative system that enables and promotes the integration and analysis of data from all sources (from both inside and outside the law enforcement community) — which in turn better guides protection measure decisions — then the existence or nonexistence of this system is itself a measurement. If it does not exist, one could reasonably postulate that prevention is weaker. If we took all the pieces of one simplified prevention process (threat identification, target evaluation, risk assessment, and response/protection decisions) and detailed the sub-components of the process, we could ideally come up with a systematic approach in which many of the individual pieces of the overall process could be measured. If all, or most, of the pieces are effective, then the whole might be effective. This of course assumes that whatever model process we propose actually portrays a sound and reasoned approach that, when employed, leads to better prevention.

The Department of Homeland Security’s Office for Domestic Preparedness proposes a prevention process model in its Prevention Guidelines for Homeland Security 3 that could be used to describe both the process to be assessed as well as up to 165 individual tasks and/or outputs. The National Preparedness Guidance 4 takes the guidelines’ model further and identifies a list of target capabilities desired for the entire spectrum of the homeland security effort, from prevention to recovery, as well as some common cross-cutting capabilities required in all mission areas.

Specifically, this guidance identifies the prevention and protection target capabilities, or ‘things we should be able to do well,’ 5 as follows:

Prevention Mission Area: Information Collection and Threat Detection, Intelligence Fusion and Analysis, Information Sharing and Collaboration, Terrorism Investigation and Apprehension, and CBRNE Detection;

Protection Mission Area: Risk Analysis, Critical Infrastructure Protection, Food/Agriculture Safety and Defense, Public Health Epidemiological Investigation and Testing, and Citizen Preparedness and Participation.

For discussion purposes, all the processes and elements mentioned above might be incorporated and visualized as follows: 6

[Figure: notional prevention process model, depicting the four major elements of threat identification, target evaluation, risk assessment, and response/protection decisions.]

There are arguably (and it will be argued) many ways to portray all the possible prevention components and tasks in a succinct and simple diagram. The point here is not how the final model process might best be visually depicted; it is how such a reasonably sound process could be measured by component, task, and/or output, as sketched below.
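To make ‘measurement by component’ concrete, the brief sketch below (in Python, chosen here purely for illustration) arranges the target capabilities listed above as components awaiting assessment. Treating an unscored capability as ‘not yet assessed’ is my own assumption; no scoring scale is implied at this stage.

```python
# A sketch of the prevention and protection mission areas as measurable
# components, using the target capabilities listed above. A score of None
# means 'not yet assessed' (an assumption made for illustration).

mission_areas = {
    "Prevention": [
        "Information Collection and Threat Detection",
        "Intelligence Fusion and Analysis",
        "Information Sharing and Collaboration",
        "Terrorism Investigation and Apprehension",
        "CBRNE Detection",
    ],
    "Protection": [
        "Risk Analysis",
        "Critical Infrastructure Protection",
        "Food/Agriculture Safety and Defense",
        "Public Health Epidemiological Investigation and Testing",
        "Citizen Preparedness and Participation",
    ],
}

# Each capability becomes a measurable component with a score to be filled in.
scores = {area: {cap: None for cap in caps} for area, caps in mission_areas.items()}

for area, caps in scores.items():
    assessed = sum(1 for s in caps.values() if s is not None)
    print(f"{area}: {assessed} of {len(caps)} capabilities assessed")
```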

Defining the Pieces of the Process: Outcomes to Outputs

What are outcomes? Harry P. Hatry defines them as ‘events, occurrences or changes in conditions, behavior, or attitudes that indicate progress toward achievement of the mission and objectives of the program. Thus outcomes are linked to the program’s (and its agency’s) overall mission — its reason for existing’ [emphasis added]. 7 When considering the establishment of any organized structure for prevention, the immediate step after determining its mission or objective should be the establishment of measurable outcomes that will help focus efforts to advance that mission. In April 2002, the General Accounting Office testified to the Senate Committee on Governmental Affairs that two key ingredients were missing from then-current strategic efforts towards combating terrorism: measurable outcomes and the identification of appropriate roles for state and local governments. 8 This testimony occurred prior to the publication of the National Strategy for Homeland Security in July 2002, and its impact on the authors of the strategy is unknown. Can measurable and achievable prevention outcomes be developed?

From the example process above, it is possible to rephrase the four major elements into four measurable desired outcomes or goals for an organizational effort to prevent terrorist attacks: 1) the increased ability to identify indications of an existing or future threat; 2) the increased ability to evaluate the potential of threats as they are identified; 3) the reduced vulnerability of critical infrastructures and other potential targets; and 4) the increased appropriateness of protection and/or other threat response activities.

Together, these four outcomes describe elements of a risk assessment process that could ultimately provide policy makers and executive decision authorities an objective cost-benefit analysis that will help guide their final actions in response to identified terrorist threats.

Taking it one step further, outputs could also be proposed for measuring each of the desired outcomes. Clearly the list of outputs for the prevention of terrorism could draw from hundreds of potential courses of action. But some of the ‘highest order’ outputs could provide a starting point for organized actions. At this level of detail, it is important to note that what might comprise prevention on an international, foreign policy scale is unlikely to be the same as that for state and local governments or the private sector. For the four prevention outcomes identified above, I propose thirteen individual outputs that might be applied in a domestic setting:

Outputs for Outcome One, the increased ability to identify indications of an existing or future threat: 1) development of a strategy and commensurate business plans that describe how to assure collaboration and coordination amongst all entities that participate in the threat identification process; 2) creation and implementation of a system to collect, screen and store relevant information with investigative value; 9 and 3) development of a training system that provides adequate basic-level threat awareness education to all public service entities, the private sector, and the general public as appropriate.

Outputs for Outcome Two, the increased ability to evaluate the potential of threats as they are identified: 1) adoption or development of an appropriate analytical model to assess threat indications; 2) ensured collaboration and integration of assessment and evaluation processes from traditional as well as non-traditional investigative entities (e.g., health and agricultural agencies); 3) creation and/or assignment of a lead organization to oversee and coordinate a system of threat identification and assessment processes; and 4) through policy, legislative and/or executive action, the identification and development of strategies to overcome barriers to the appropriate sharing of information and intelligence products.

Outputs for Outcome Three, reduced vulnerability of critical infrastructures and other potential targets: 1) assignment or creation of a lead entity to identify critical infrastructures and potential terrorist targets, assess their vulnerabilities, analyze consequences, and recommend protective strategies and priorities; 2) development and oversight of strategies and action plans that maximize collaboration and coordination between the owners of potential targets and the entity’s effort to reduce their vulnerabilities; and 3) provision of a leadership point to assure coordination between private, local, state, and federal critical infrastructure protection efforts.

Outputs for Outcome Four, increased appropriateness of protection and/or other threat response activities: 1) establishment and oversight of a process that ensures the interconnection of the first three outcomes and results in recommendations for protection decisions and threat response measures; 2) development of a risk management or cost-benefit tool that will guide appropriate protection and response action decisions; and 3) development of a methodology that delineates responsibilities for varying degrees of decision-making amongst and between levels of government and the private sector.

These outputs and objectives could be modified, added to, or otherwise changed to better reflect the needs and expectations of individual government or private sector entities. The critical assumption is that, should all of these elements be implemented effectively, they will help lead to the prevention of terrorist attacks, and therefore the evaluation of these outputs will provide a viable measurement tool. The emphasis again is on measuring a logical and reasonable process that would, by its implementation, lead to better prevention. This approach does not attempt to measure prevention through the accounting of non-attacks which, as stated earlier, is either impossible or at least not quantifiably consistent. One way to analyze the validity of the proposed outcomes and outputs is to ask the negative of each of the elements. In other words, if these four outcomes and thirteen outputs were not in place, could we reasonably assume that the likelihood of a successful terrorist attack is greater? Without these elements in place, the leaders of an organizational effort to prevent terrorism would have no systems or processes by which to identify threats, to analyze and evaluate probabilities, to prioritize potential targets for protection, or to make good risk management decisions about what actions to take in a threat environment. If this were the case, I would measure this particular entity’s overall ability to prevent a terrorist attack as extremely low. It would have to rely instead upon luck, or on the enemy’s decision not to attack for some reason unknown to the defending organization.

A Measurement Example

There are numerous applications and methodologies for converting desired outcomes and outputs into measurement language. Measurement methodologies range from an exhaustive process that assigns values or weights to each element and its detailed tasks, to a simpler exercise in which one might use stop-light colors for each element (e.g., ‘red’ means no effort or system is in place to achieve the output or outcome, ‘yellow’ indicates some efforts are underway or partially completed, and ‘green’ designates completion or sustained efforts in effect). Finding the ‘how’ to measure is the easy part. There are any number of models to use and hundreds of expert consultants, contractors and academics ready to engage in methods for measurement. Defining the ‘what’ to measure is where we are challenged.
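As one concrete illustration of the simpler stop-light method, the brief sketch below encodes the three colors. The element names and their statuses are hypothetical, invented for this example rather than drawn from any official guidance.

```python
# A minimal sketch of the 'stop light' method described above.
# The elements and their statuses are hypothetical examples only.

STATUS_TO_COLOR = {
    "no effort": "red",        # no effort or system in place
    "in progress": "yellow",   # some efforts underway or partially completed
    "complete": "green",       # completed or sustained efforts in effect
}

# Invented elements for illustration.
elements = {
    "Analytical model adopted": "no effort",
    "Lead organization assigned": "complete",
    "Information-sharing barriers addressed": "in progress",
}

for name, status in elements.items():
    print(f"{name}: {STATUS_TO_COLOR[status]}")
```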

For a brief illustrative example that applies one method for ‘how’ to measure the ‘what’ I am proposing, I have used Outcome Two: the increased ability to evaluate the potential of threats as they are identified. Applying a simple numerical weighting system, I have graded each output according to the following scale and criteria.

0 = No effort or system underway and no recognition of the need

1 = Recognition of the need but no effort or resources to accomplish the output

2 = Initial efforts and resources underway to achieve the output

3 = Moderate progress towards accomplishing the output

4 = Sustained efforts underway and output nearing fulfillment

5 = Output achieved and resources devoted to sustain the effort

For this example, I have assigned a numerical assessment to each of the outputs and provided a fictional, but plausible, narrative of that evaluation.

Measurement of Outcome Two

(Increased ability to evaluate the potential of threats as they are identified)

Measurement of 2-1

Adoption or development of an appropriate analytical model to assess threat indications.

Score = 2

The establishment of a state fusion center has been intended, financed, and planned but not yet implemented; once operational, the center is expected to utilize an analytical model based upon national best practices or a customized design.

Measurement of 2-2

Ensured collaboration and integration of assessment and evaluation processes from traditional as well as non-traditional investigative entities (e.g., health and agricultural agencies).

Score = 2.5

While the formal fusion center and system is yet to be established, all law enforcement agencies and non-law enforcement entities have recognized the requirement to participate in the sharing and dissemination of information and intelligence. Staff have been identified and protocols have been established to transfer threat and vulnerability related data and to participate in evaluation of such data in a collaborative fashion.

Measurement of 2-3

Creation and/or assignment of a lead organization to oversee and coordinate a system of threat identification and assessment processes.

Score = 4

The state patrol’s intelligence division has been assigned lead responsibility for the achievement of this objective. It has been resourced by both state and federal funds and its efforts are monitored and accountable to the Governor, the state patrol chief and the homeland security director. Long term strategic planning and budgeting efforts have been completed and approved, in concept, by the state legislature.

Measurement of 2-4

Through policy, legislative and/or executive action, identification and development of strategies to overcome barriers to the appropriate sharing of information and intelligence products.

Score = 2.5

While much discussion and some action have overcome and/or satisfied some privacy act and civil liberty issues, much work remains to be done, specifically on the sharing and protection of data exchanged between the public and private sectors.

Out of a possible ‘score’ of 20 for Outcome Two, this fictional entity measures eleven (the total of the four scored outputs). So one might say that this entity’s ‘increased ability to evaluate threats as they are identified’ is not yet realized but is progressing. Therefore, in combination with the other three prevention outcomes, the executives could reasonably assess their overall prevention efforts, at least at a strategic level. While much of the subjectivity can be taken out of this process through firmer scoring criteria, the question of whether this outcome or its outputs are important to overall prevention will most likely remain a subjective decision by the senior officials responsible. But this simple evaluation can at least paint a picture of the level of progress in this element, from which further resourcing, executive guidance, or prioritization decisions can be made. The score may be acceptable at this point in time, in which case the executive directive is to continue as planned. Or the assessment may be judged woefully inadequate, and the timeframe for the establishment of a fusion center accelerated.
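The roll-up arithmetic in this example is simple enough to automate. The sketch below is a minimal, hypothetical encoding of the 0-5 scale and the aggregation of output scores into an outcome score; the output labels paraphrase the fictional narratives above, and the qualitative bands are my own illustrative assumptions rather than an official rubric.

```python
# A minimal sketch of the numerical weighting method applied to Outcome Two.
# Scores mirror the fictional example above; the qualitative bands are
# illustrative assumptions, not an official rubric.

MAX_PER_OUTPUT = 5  # top of the 0-5 scale defined earlier

# Output scores from the fictional assessment narratives.
outcome_two = {
    "2-1 Analytical model adopted or developed": 2.0,
    "2-2 Collaboration and integration ensured": 2.5,
    "2-3 Lead organization created or assigned": 4.0,
    "2-4 Information-sharing barriers addressed": 2.5,
}

def score_outcome(outputs):
    """Return (total, possible) for one outcome's scored outputs."""
    total = sum(outputs.values())
    possible = MAX_PER_OUTPUT * len(outputs)
    return total, possible

def assess(total, possible):
    """Translate a raw score into hypothetical executive language."""
    fraction = total / possible
    if fraction < 0.2:
        return "little or no capability"
    if fraction < 0.7:
        return "progressing but not yet realized"
    return "substantially achieved"

total, possible = score_outcome(outcome_two)
print(f"Outcome Two: {total} of {possible}")  # Outcome Two: 11.0 of 20
print(assess(total, possible))                # progressing but not yet realized
```

Firmer scoring criteria, as noted above, would narrow the judgment calls in such a tool, but the banding of scores into executive language would likely remain a policy decision.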

As stated before, there are many variations and available tools to measure objectives and outputs. Additionally, what is deemed important enough to measure could deviate from the proposed tool presented here, and the outputs could be expanded and examined at a greater level of detail. The key question, again, is whether overall prevention itself can be measured and evaluated by measuring the pieces of a logical approach that can reasonably be expected to lead to better prevention. If (as this article suggests) the answer is yes, then not only can investments and efforts be more logically and justifiably applied, the public good is better served by measured and guided efforts that actually lead to the intended result.

Conclusion

As proposed in the introduction, this concept of measurement by process effectiveness is not ground-breaking. The public health community proposed this approach for its evaluation of efforts to combat bio-terrorism and other catastrophic threats. 10 That effort comprised two major objectives: first, to measure the ability of the public health community to respond to all events — not just bio-terrorism — by measuring its preparedness for other threats such as West Nile Virus, SARS and an influenza season; and second, to measure such preparedness by evaluating the pieces (e.g., an epidemiological surveillance capacity) of the overall processes which, when working in concert, are designed to achieve an effective prevention and response capability.

There are also other possibilities for measurement, not discussed in this article, involving other consequences of systems primarily designed for counter-terrorism. For example, if the systems and actions that better share law enforcement threat data to identify potential terrorists also increase the ability to identify and capture non-terrorist criminals, then the increase or decrease in common criminals identified could serve as a measurement of the overall system as well. While the examples presented above are over-simplified, they demonstrate the enormous potential and opportunities to measure prevention without having to rely on ‘what did not happen.’

This article is written in the middle of the year 2005, a time when Congress, state and local governments, and their executing agencies are all focusing on the homeland security funding questions of ‘how much?’ and ‘how will we know when it is enough?’ For some mission areas, measurement will be easy, and the policy debates over which capability gaps to ‘buy’ will be less esoteric. The gap between not enough communications gear and almost enough will be much simpler to quantify than the one between the unknown amount of prevention we possess and the near-perfect results we demand but cannot define. But if we do nothing else, we need to put the argument of unmeasurable prevention behind us and accept that it can be quantified, at least by proxy and/or by evaluating the parts of the whole. Oddly, considering the purpose of this article, the near-term challenge we face is not the establishment and acceptance of a system that depicts prevention in measurable outcomes and outputs. The real challenge will be to avoid the temptation to resource only those missions we already understand, versus those of vastly more importance that we are just learning to build.

Glen Woodbury is a faculty member and Associate Director of Executive Education Programs for the Naval Postgraduate School’s Center for Homeland Defense and Security. His responsibilities include the development of executive education workshops, seminars and training for senior state and local officials. He served as the director of the Emergency Management Division for the State of Washington from 1998 through 2004 and is a past president of the National Emergency Management Association. Mr. Woodbury holds a bachelor of arts degree in Engineering Sciences from Lafayette College in Easton, PA (1985) and a master of arts degree in Security Studies (Homeland Defense and Security) from the Naval Postgraduate School in Monterey, CA (2004).


  1. For example, the emergency management community has established standards for a state and local emergency management system that can be evaluated through a program called the Emergency Management Accreditation Program. More information is available at www.emaponline.org. Additionally, some discipline-specific programmatic standards are proposed for disaster management through the National Fire Protection Association (NFPA) in their NFPA 1600 document: http://www.nfpa.org/assets/files/PDF/NFPA1600.pdf.
  2. While this metaphor is found in other references, e.g. death penalty arguments, I first heard it expressed in the context of terrorism prevention from Dr. William Pelfry during class instruction at the Center for Homeland Defense and Security, Naval Postgraduate School, Monterey, CA.
  3. Office for Domestic Preparedness, U.S. Department of Homeland Security, The Office for Domestic Preparedness Guidelines for Homeland Security June 2003: Prevention and Deterrence (Washington, D.C.: June 2003).
  4. U.S. Department of Homeland Security, Office of State and Local Government Preparedness, National Preparedness Guidance: Homeland Security Presidential Directive 8, National Preparedness (Washington, D.C.: April 27, 2005).
  5. The National Preparedness Guidance uses the term ‘how prepared do we need to be?’ which is just as useful a question in the process of setting the bar for desired capability. The Department of Homeland Security intends to define the desirable levels of capabilities for all the elements of the Target Capabilities List, including the ones listed here, in late 2005, which states will be required to review as part of their FY2006 grant applications.
  6. This diagram was drafted during a workshop of subject matter experts at the Naval Postgraduate School, Center for Homeland Defense and Security on January 28-29, 2004. The principal author is Bruce Lawlor, first chief of staff for the U.S. Department of Homeland Security. I modified it slightly from its original draft version.
  7. Harry P. Hatry, ‘What Type of Performance Information Should be Tracked?,’ in Quicker, Better, Cheaper? Managing Performance in American Government, ed. Dall W. Forsythe (New York: Rockefeller Institute Press, 2001), 21.
  8. David M. Walker, Comptroller General of the United States, testimony before the Committee on Governmental Affairs, U.S. Senate, Homeland Security: Responsibility and Accountability in Achieving National Goals (U.S. General Accounting Office, expected release April 22, 2002), 7, http://www.gao.gov/new.items/d02627t.pdf [accessed February 19, 2004].
  9. Guidelines for Homeland Security June 2003: Prevention and Deterrence, 19.
  10. Division of Public Health, Department of Human Resources, State of Georgia; and the Center for Public Health Preparedness and Research, Rollins School of Public Health, Emory University. Indicators of Preparedness for Public Health Emergencies, DRAFT, April 19, 2004. Copy provided by Dr. Kathleen Toomey, former Director of Public Health for the State of Georgia.

Copyright © 2005 by the author(s). Homeland Security Affairs is an academic journal available free of charge to individuals and institutions. Because the purpose of this publication is the widest possible dissemination of knowledge, copies of this journal and the articles contained herein may be printed or downloaded and redistributed for personal, research or educational purposes free of charge and without permission. Any commercial use of Homeland Security Affairs or the articles published herein is expressly prohibited without the written consent of the copyright holder. The copyright of all articles published in Homeland Security Affairs rests with the author(s) of the article. Homeland Security Affairs is the online journal of the Naval Postgraduate School Center for Homeland Defense and Security (CHDS). http://www.hsaj.org
