Abstract
How is the theory behind critical infrastructure/key resources (CIKR) protection evolving? Practitioners who implement strategies should be confident their strategies are based on sound theory, but theory evolves just as strategy evolves. Many theories, techniques, and models/simulations for CIKR protection have been proposed and developed over the years. This paper summarizes several of these approaches and explains how they relate to basic risk concepts explained in the Department of Homeland Security (DHS) Risk Lexicon.
We explain unique contributions of ways to model threat, vulnerability, and consequence, which have implications for how we assess risk. This work builds on previous work in the areas of operations research, prospect theory, network science, normal accident theory, and actuarial science. More specifically, we focus on deterrence measurement to characterize threat differently. We also explain work that models supply chains or “transfer pathways” as networks and applies principles of reliability engineering and network science to characterize vulnerability differently. Next, we explain work to incorporate CIKR resilience and exceedence probability measurement techniques to characterize consequence differently. Finally, we conclude with implications of how CIKR risk may be treated.
We anchor our exposition of these contributions with various terms from the DHS Risk Lexicon. Also, we present these ideas within a framework of three “attack paradigms”: direct attacks against a single CIKR with the intent to destroy just that target, direct attacks against a single CIKR with the intent to disrupt a system of infrastructure, and exploiting CIKR to move a weapon of mass destruction (WMD) through the global commons to its ultimate destination.
Suggested Citation
Taquechel, Eric F., and Ted G. Lewis. “A Right-Brained Approach to Critical Infrastructure Protection Theory in support of Strategy and Education: Deterrence, Networks, Resilience, and ‘Antifragility.’” Homeland Security Affairs 13, Article 8 (October 2017). https://www.hsaj.org/articles/14087
Introduction
A strategy for critical infrastructure and key resource (CIKR) protection should have solid theoretical underpinnings. How is theory regarding CIKR protection evolving? Practitioners who implement strategies should be confident their strategies are based on sound theory, but theory evolves just as strategy evolves.
Many theories supporting CIKR protection and resilience have been proposed for application or repackaged into new theoretical approaches. This paper will focus on recently proposed theoretical approaches to protecting CIKR from terrorism and other threats, summarizing the authors’ work in several realms of CIKR protection, and incorporating other insights. Importantly, the authors’ work in these domains builds on rich foundations of previous work in the risk analysis, network science, reliability engineering, and operations research (OR) fields. This paper will minimize technical discussions of each individual theoretical approach, and instead will propose how these approaches fit together to support implementation of the basic Department of Homeland Security (DHS) risk equation Risk = Threat x Vulnerability x Consequence. Also, we will anchor our exposition of these approaches with other terms from the DHS Risk Lexicon (hereafter “Lexicon”).
This basic equation still forms the DHS foundation for CIKR risk analysis and mitigation, although there are different opinions in the literature on the appropriate ways to characterize the finer details of the equation’s components and how data is collected and analyzed.
Background: Current State
The Lexicon defines risk as the “potential for an unwanted outcome resulting from an incident, event, or occurrence, as determined by its likelihood and the associated consequences.”1 Likelihood is:
“the chance of something happening, whether defined, measured or estimated objectively or subjectively, or in terms of general descriptors (such as rare, unlikely, likely, almost certain), frequencies, or probabilities.”2
And, consequence is:
“the effect of an event, incident, or occurrence, including human consequence, economic consequence, mission consequence, psychological consequence.”3
So, what are considerations for estimating the chance of a terrorist attack on a CIKR being attempted or succeeding, and what are considerations for evaluating effects of an attack on CIKR? We now examine context for threat and vulnerability, the combination of which forms the chance of successful execution of an attack. We also examine context for consequence and resilience.
Context: Threat
Threat is the likelihood that an attack occurs, and that likelihood includes attacker intent and attacker capability, estimated as probabilities. Ordinarily, threat is an input to the DHS risk equation. However, there is a body of literature in the OR world that expresses concerns with treating threat as an input to the equation, instead advocating it should be an output. This is because terrorists, as thinking adversaries, can adapt to our defenses. For example, see Cox (2008)4. If this is true, then intent, expressed as a probability an attack is desired, is not necessarily constant. In that case, the quantification of risk would be inconsistent. Instead, those in the OR field have suggested that threat should be an output of vulnerability * consequence, signifying that prospective attackers formulate intent to attack based on observations and estimates of specific CIKR vulnerability and consequence.
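A minimal sketch of this “threat as output” view, using purely hypothetical vulnerability and consequence values: attacker intent for each candidate target is proxied by that target’s share of total vulnerability-weighted consequence.

```python
# Sketch: treating threat as an *output* of vulnerability x consequence.
# All numbers are hypothetical illustrations, not DHS or real CIKR data.

targets = {
    "bridge":   {"v": 0.6, "c": 100.0},  # v = vulnerability, c = consequence
    "refinery": {"v": 0.3, "c": 400.0},
    "terminal": {"v": 0.8, "c": 150.0},
}

# Attacker expected utility per target is proportional to v * c.
utility = {name: t["v"] * t["c"] for name, t in targets.items()}

# Proxy intent (threat) as each target's share of total expected utility.
total = sum(utility.values())
intent = {name: u / total for name, u in utility.items()}

for name, i in intent.items():
    print(f"{name}: utility={utility[name]:.0f}, intent={i:.2f}")
```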
Deterrence: Influencing Attacker Intent
The Lexicon defines deterrence as a “measure that discourages, complicates, or delays an adversary’s action or occurrence by instilling fear, doubt, or anxiety.”5 Historical literature on deterrence theory and studies of deterrence theory in action tend to focus on what we refer to as “absolute deterrence”: influencing an opponent’s decision calculus such that they decide not to act. However, we think the concept of “relative deterrence” in CIKR threat and deterrence analysis warrants consideration. The probability of acting in a certain manner may constitute a metric for relative deterrence, as opposed to either acting or not acting.
Game Theory and Deterrence
Game theory has been applied to economics and other fields to model interactions and expected outcomes. It models the interactions of intelligent agents, often quantitatively so. It has also been applied to explain nation-state conflicts. In recent work it has been used in counterterrorism modeling. For example, see Yin et al. (2010) who apply game theory to develop an “intelligently randomized” homeland security boat patrol model for the U.S. Coast Guard.6 The particular approach that Yin et al. develop leverages the concept of a Strong Stackelberg Equilibrium (SSE) to model how an attacker can observe a defender’s defenses and then pick their best course of action, e.g. attack the CIKR that is the best combination of minimally defended and most valuable to attack. One can link the claim that threat should be an output of a risk equation to the attribute of game theoretic modeling that yields preferences as outcomes of strategic interactions. For example, a “mixed strategy” reflects probabilistic preferences of intelligent agents. Evaluating these probabilistic preferences may lay the foundation for making claims of “relative deterrence”.
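The flavor of that interaction can be sketched simply: the attacker observes the defender’s randomized coverage and best-responds. This is only a toy illustration of the Stackelberg intuition; the coverage probabilities, payoffs, and target names are invented for the example.

```python
# Toy Stackelberg intuition: attacker observes defender coverage, best-responds.
# Coverage probabilities and payoffs are hypothetical.

coverage = {"pier": 0.5, "ferry": 0.3, "tanker": 0.2}    # P(target is patrolled)
reward   = {"pier": 10.0, "ferry": 6.0, "tanker": 14.0}  # attacker payoff if unprotected
penalty  = {"pier": -2.0, "ferry": -1.0, "tanker": -4.0} # attacker payoff if caught

def attacker_eu(target):
    # Expected utility = P(unprotected) * reward + P(protected) * penalty
    return (1 - coverage[target]) * reward[target] + coverage[target] * penalty[target]

for t in coverage:
    print(f"{t}: EU = {attacker_eu(t):.2f}")
print("attacker best response:", max(coverage, key=attacker_eu))
```

Here the attacker’s best response is the tanker, the target that is the best combination of minimally defended and most valuable, mirroring the behavior the SSE model anticipates.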
The Weapon of Mass Destruction (WMD) Threat
There is also a body of literature that discusses concern over terrorists exploiting the maritime supply chain to move a WMD into the U.S. The Domestic Nuclear Detection Office (DNDO) was established to help mitigate this threat. Various technological solutions and modeling approaches to reduce WMD risk have been explored.
Context: Vulnerability
Vulnerability is the likelihood an attack is successful, given it is attempted.7 Attacks can be against individual CIKR with the intent of destroying those CIKR. A second paradigm is that attacks might occur against individual CIKR with the intent of destroying/damaging a system of CIKR. The Lexicon defines a system as:
“any combination of facilities, equipment, personnel, procedures, and communications integrated for a specific purpose.”8
Similarly, a network is defined as:
“A group of persons or components that share information or interact with each other in order to perform a function.”9
If we focus on the vulnerability of systems of CIKR to terrorist attack, perhaps our techniques to assess vulnerability should be different than those of the standard individual CIKR vulnerability assessment. Network science offers techniques for assessing vulnerability of systems to perturbation, considering both the vulnerabilities of individual assets, and then characterizing the vulnerability of systems. For examples, see Lewis, 2006, chapter 5,10 and Lewis, 2009, chapter 11.11 Also, Lewis (2011) defines criticality as the degree of system dependence on a single component. But, the Lexicon focuses on criticality of an asset to its customer base.12 Perhaps we can stretch the Lexicon definition to mean the “customer base” of an asset could include its linked components that form a system.
A third paradigm for framing vulnerability analysis is that CIKR are susceptible to exploitation for nefarious purposes, such as moving a WMD through a port infrastructure with the intent to detonate in an inland city. Though some CIKR might have great security against direct attacks, they might have suboptimal security for interdicting a WMD being moved through enroute to a different destination.
In sum, we offer three paradigms for modeling attacks:
Paradigm 1: direct attacks against CIKR with intent to disable/destroy that CIKR;
Paradigm 2: direct attacks against CIKR with intent to cause cascading perturbations throughout a system of CIKR; and
Paradigm 3: exploitation of a CIKR to inflict damage on a different CIKR “downstream” in a system.
Context: Consequence and Resilience
We cited the Lexicon definition of consequence earlier. Then, the Lexicon goes on to define resilience as the “ability to adapt to changing conditions and prepare for, withstand, and rapidly recover from disruption.”13 Thus, to the extent disruptions create undesirable effects or consequences, resilience is the ability to recover from consequences. Vugrin et al. (2010) focus on the magnitude and duration of deviations from desired system performance levels as two parameters of the ability to recover from disruptions.14
DHS websites on resilience acknowledge how policy emphasis on resilience has evolved toward efforts to define it.15 However, when we did our research, DHS policies and programs emphasized resilience but did not explicitly guide stakeholders on how to quantify it or how to implement resilience measures. So, it falls to academia to propose definitions.
Evolution: Possible Future States
Given this context, how might theory supporting threat, vulnerability, and consequence analysis evolve? We now summarize our work in these areas.
How We Analyze Threat: The Importance of Deterrence and Cognitive Biases
We mentioned earlier that the concept of “relative deterrence” in CIKR threat and deterrence analysis warrants consideration. Instead of convincing our opponent not to act at all, or conceding they will definitely commit an undesired act, does it make sense to think of influencing attacker intent on a spectrum – in probabilities? In other words, is it worth exploring the probability one attack is quantitatively more desirable than an alternative attack, when multiple CIKR are possible targets? And, should we try to model how attacker intent might change, as a proxy for deterrence?
The game-theoretic modeling approaches discussed earlier leverage algorithms that produce a probability distribution. This probability distribution is translated into a tactical patrol schedule for armed Coast Guard law enforcement boats throughout their area of operational responsibility. In theory, executing their patrol schedules according to this probabilistic distribution minimizes the chances an observant adversary can plan and execute an attack on maritime CIKR. Moreover, in theory this deters an attacker, at least from a “relative deterrence” standpoint.
Starting Simple: Quantifying Deterrence
Given our belief that relative deterrence warranted attention, and given that previous literature had leveraged game theory to produce a probabilistic approach to deterrence, we published a paper entitled “How to Quantify Deterrence and Reduce Critical Infrastructure Risk” in 2012.16 The thrust of this approach was that deterrence against CIKR attacks can be quantified as the extent to which attacker intent to attack a certain CIKR changes after security measures are implemented at that CIKR, as compared to attacker intent to attack that CIKR before implementation of such measures. The quantification of deterrence took a very simple form:
Equation 1. Quantification of deterrence17
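The published equation appeared as an image; a plausible reconstruction from the description above (our rendering, not the verbatim published form) is:

```latex
D = \frac{I_{\text{pre}} - I_{\text{post}}}{I_{\text{pre}}}
```

where I_pre and I_post denote attacker intent to attack the CIKR before and after the security measures are implemented.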
The intent values were based on expected utility ratios of pre-security expected utility from attacking the CIKR in question, and post-security expected utility. These expected utility values were derived from a game theoretical CIKR attack game between a notional attacker and notional defender, such as a CIKR operator.
We claimed that expected utility from an attack should include the quantification of attacker capability as a probability, but should exclude probabilistic expressions of intent. This was our “compromise” between the default risk equation that incorporates both intent and capability into the threat component, and the Operations Research (OR) community objections to including threat as an input because it fails to account for adaptive adversaries.
Also, we used an exploratory approach to the game theoretical scenario, averaging results of possible courses of action that the notional attacker and defender faced, rather than relying on the theoretical Nash Equilibrium solution of the game. A Nash Equilibrium predicts the “optimal” outcome of a game such that each player will choose the best solution they possibly can, given their opponent is also trying to pick their own best solution. Thus, we hedged for the possibility that an attacker might not necessarily pick the theoretically “optimal” solution.
This work built on a previous thesis which claimed risk propensity, or an actor’s attitude toward risk and choice, should influence deterrence.18 In our paper, we made the opposite (but possibly complementary) claim: that deterrence should influence risk analysis. We also incorporated previous work on modeling vulnerability reduction as an exponential function of dollars invested to improve security; Al-Mannai and Lewis (2008) proposed example functional forms.19 We treated vulnerability as a linear function of investment, which may have been an oversimplification.
Furthermore, we explored conditional and unconditional risk. Unconditional risk reflected the risk of CIKR attack given the attacker’s intent (as modified by security investments), combined with their capability, vulnerability, and attack consequence. However, conditional risk reflected the equivalent of attacker expected utility: the product of attacker capability, CIKR vulnerability, and CIKR failure consequence. This was consistent with the Lexicon definition of conditional probability: the probability of some event given the occurrence of some other event.20 The “other event” we surmised was the attacker decision to attack a specific CIKR with 100% intent. Thus, we treated conditional risk as the product of capability, vulnerability, and consequence, multiplied by an intent factor of 1.
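A minimal sketch of the conditional vs. unconditional distinction, with hypothetical capability, vulnerability, and consequence values:

```python
# Sketch: conditional vs. unconditional risk as described above.
# All inputs are hypothetical.

capability    = 0.7    # P(attacker can execute the attack mode)
vulnerability = 0.4    # P(attack succeeds | attempted)
consequence   = 200.0  # loss if the attack succeeds

# Conditional risk: the attack decision is assumed made, so intent = 1.
conditional_risk = 1.0 * capability * vulnerability * consequence

# Unconditional risk: scale by an intent proxy, e.g. from a deterrence game.
intent = 0.35          # hypothetical intent proxy
unconditional_risk = intent * capability * vulnerability * consequence

print(conditional_risk)    # 56.0
print(unconditional_risk)  # ~19.6
```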
Finally, we made a case for differentiating tactical intelligence from strategic intelligence in a game theoretical context. Strategic intelligence in some CIKR risk tools at the time of our writing reflected high-level quantitative estimates of various terrorist group intent to attack certain types of CIKR and capability to use various attack modes. As an alternative, we proposed that tactical level intelligence with regard to CIKR protection entailed a target-specific assessment of vulnerability and consequence by a would-be attacker, both before and after hypothetical security measures were implemented. This tactical intelligence would reflect their target-specific intent to attack (or not attack), and when compared to their estimated intent to attack other CIKR, could be leveraged to estimate unconditional risk and create “deterrence portfolios” to characterize various security investment options and inform decision makers.
One objection we anticipated when we wrote the paper was that deterrence efforts simply may shift prospective attackers to other CIKR with higher consequence. This broached the concept of threat-shifting. The Lexicon defines threat-shifting as the:
“response of adversaries to perceived countermeasures or obstructions, in which the adversaries change some characteristic of their intent to do harm in order to avoid or overcome the countermeasure or obstacle.”21
The Lexicon then goes into detail about domains in which threat-shifting can occur, including the target domain: selecting a less protected target. However, we claimed our approach accounted for threat-shifting, more specifically “intent-shifting”, and showed that such shifting did not necessarily increase risk to the CIKR in the game.
We applied our methodology to quantify deterrence and measure the change in CIKR risk in a notional case study, with the security investments modeled as hypothetical investments provided by FEMA’s Port Security Grant Program (PSGP).
The Lexicon definition of “adaptive risk” includes:
“threats caused by people that can change their behavior or characteristics in reaction to prevention, protection, response, or recovery measures taken.”22
By examining and quantifying how adversaries might assess desirability of various CIKR attacks in response to hypothetical protection measures, we add granularity to CIKR risk analysis and make more informed CIKR investment decisions. Furthermore, the Lexicon claims,
“for some types of risk, like those involving human volition, the probability of occurrence of an event may not be independent of the consequences and, in fact, may be a function of the consequences.”23
In our approach, the probability of intent, not of attack occurrence, was modeled as a function of a combination of consequences, attacker capability, and modifications to vulnerability, based on hypothetical grant investments.
Increasing Complexity – Threat, Deterrence and Cognitive Biases
Our work on quantifying deterrence assumed Expected Utility Theory (EUT) applied to the expected utility functions. This theory holds that people make decisions by weighing costs, benefits, and probabilities in a consistent, linear fashion, and that their decisions do not change based on how information is presented, or “framed.”
However, Daniel Kahneman and Amos Tversky, Nobel Prize winning psychologists, showed experimentally that people often make decisions inconsistently depending on changes in frame, in contravention to the tenets of EUT. They created Prospect Theory (PT) to explain their findings. Therefore, in a follow-up piece to our work on quantifying deterrence, we modified our approach to account for PT considerations in deterrence. We also explored whether information incompletion could influence the quantification of deterrence and resulting CIKR risk.
The Lexicon annotates the definition of “social amplification of risk” as follows:
“a field of study that seeks to systematically link the technical assessment of risk with sociological perspectives of risk perception and risk-related behavior.”24
Kahneman and Tversky discovered that people perceived risk differently when prospective outcomes were presented as losses from a reference point, rather than gains beyond that reference point. They modeled the relationship between gain/loss and value as a nonlinear function:
Figure 1. Relationship between gain/loss and value25
Figure 1 reflects their findings that losses held more “value” or salience to those faced with prospects than did quantitatively equivalent amounts of gain. This finding violated one of the central tenets of EUT. Kahneman and Tversky also discovered a phenomenon they dubbed the “certainty effect,” meaning that subjects generally preferred certain outcomes to probabilistic outcomes. When presented with gains, subjects preferred a certain smaller gain to a larger but probabilistic gain. When presented with losses, subjects preferred probabilistic larger losses to certain smaller losses, thus reversing the certainty effect and yielding the term “reflection effect.” Figure 2 below expands on the comparison between EUT and PT, although the claims regarding what behavior losses and gains might predict under PT assumptions omit a discussion of probability – both the “certainty effect” and “possibility effect” that Kahneman discusses in his 2011 book, Thinking, Fast and Slow.26
Figure 2. How EUT (also Subjective Expected Utility or SEU) and Prospect Theory (here called “Prospect Utility”) may influence Risk Propensity27
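A minimal sketch of the nonlinear value function behind Figure 1, using Tversky and Kahneman’s published 1992 median parameter estimates (alpha = 0.88, lambda = 2.25):

```python
# Sketch of the prospect theory value function. Parameters are Tversky and
# Kahneman's 1992 median estimates; outcomes are relative to a reference point.

ALPHA = 0.88    # diminishing sensitivity to both gains and losses
LAMBDA = 2.25   # loss aversion: losses loom ~2.25x larger than gains

def pt_value(x):
    """Subjective value of outcome x (x = 0 is the reference point)."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** ALPHA)

# Losses are valued more steeply than quantitatively equivalent gains:
print(round(pt_value(100), 1))   # ~57.5
print(round(pt_value(-100), 1))  # ~-129.4
```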
Thus, we applied insights from their discoveries to predict what would-be CIKR attackers might prefer from amongst various CIKR attack options. The overall goal of this new research was to explain and recommend an approach to support decisions on whether to publicize information about CIKR security investments intended to deter attack, or whether to obfuscate those investments, by considering what we called “cognitive biases”.
First, we proposed a new definition of a “prospect” to distinguish the use of that word from its use in PT. A prospect simply meant the aggregation of possible future outcomes from an attacker COA (course of action). We then further specified that an “ordinary prospect” mean a prospect not derived from a game theoretic scenario.
We expanded on these definitions of prospect by then proposing the concept of “equilibrium prospect” meaning a prospect where the outcomes were influenced by what an intelligent opponent might do in a game theoretic interaction. Moreover, we showed what the equation for an ordinary prospect might look like if it was modified based on Kahneman and Tversky’s findings. This equation would reflect a relationship between gains/losses and value ascribed, fitted to the data that Kahneman and Tversky gleaned during their research.
These differentiations in equations for prospects helped us alter the way we proxied attacker intent as we explored how information incompletion and prospect theory could influence deterrence quantification and resulting risk. For example, one assumption was that an attacker would choose the equilibrium solution to a deterrence game; therefore, their quantified intent for that COA would be 100%. Alternatively, they might hedge among all prospective outcomes of the game, comparing the expected utility of one possible outcome to the aggregate of expected utilities of all possible outcomes, thereby creating an “intent ratio” proxy for their intent. Or, they might choose an “aggregate prospect” with maximum value with 100% probability – reflecting the sum of expected utilities if the attacker chose one COA, but reflecting the aggregate influence of possible defender actions in the game. Finally, they might create intent ratios using prospects, rather than using individual game outcomes.
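For illustration, a minimal sketch contrasting the equilibrium assumption (intent of 100% for one COA) with the “intent ratio” proxy; the expected utilities are hypothetical:

```python
# Sketch: two intent proxies for the attacker described above.
# Expected utilities per course of action (COA) are hypothetical.

expected_utility = {"coa_A": 80.0, "coa_B": 50.0, "coa_C": 20.0}

# Proxy 1 - equilibrium assumption: 100% intent for the best COA.
best = max(expected_utility, key=expected_utility.get)
equilibrium_intent = {coa: 1.0 if coa == best else 0.0 for coa in expected_utility}

# Proxy 2 - intent ratio: each COA's share of aggregate expected utility.
total = sum(expected_utility.values())
intent_ratio = {coa: eu / total for coa, eu in expected_utility.items()}

print(equilibrium_intent)  # {'coa_A': 1.0, 'coa_B': 0.0, 'coa_C': 0.0}
print(intent_ratio)        # {'coa_A': 0.53..., 'coa_B': 0.33..., 'coa_C': 0.13...}
```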
We also proposed a heuristic for analyzing outcomes of deterrence games under conditions of incomplete information. In this case, the attacker would play a different “game” than the defender, since the attacker created proxies for defender deterrence investments at the CIKR in the game, whereas the defender knew their true investments. We proposed the term “organizational obfuscation bias” or OOB to represent attacker bias under conditions of incomplete information. We proposed business rules for how to quantify deterrence and create deterrence portfolios under these conditions.
Furthermore, we used an exponential investment-vulnerability relationship as an alternative to the linear relationship from our 2012 paper. Exponential relationships between effort and result may be more realistic than linear relationships, especially in counterterrorism analysis on the assumption our adversaries adapt to observable (or unobservable) vulnerability reduction measures.
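A representative exponential form, in the spirit of the functional forms Al-Mannai and Lewis propose (the parameterization here is illustrative, not their exact equation):

```latex
v(c) = v_0 \, e^{-\gamma c}
```

where v_0 is baseline vulnerability, c is dollars invested in security, and gamma captures the diminishing marginal return of each additional dollar.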
Also, in our update we explored the effects of incomplete information. Different authors in the deterrence theory literature suggest different things. Some suggest deterrence is most effective when both parties share a common estimate of the other’s intentions (for example, see Moran, 200228) whereas others suggest ambiguity might actually enhance deterrence (for example, see Chilton and Weaver, 2009).29 Furthermore, the game theory literature distinguishes incomplete information from imperfect information. The former means that players may not know all the rules that define the game, such as opponents’ payoffs, even if they can observe opponents’ previous moves. In contrast, imperfect information means that even if players know all the rules of the game, they do not know their opponents’ previous moves.
Results of Notional Case Study
We varied our deterrence games to assume the attacker had incomplete information, and thus we used proxy values to represent what they might estimate the quantitative values of CIKR vulnerability to be, based on attacker OOB. The results showed that defender risk was lower when investments were obfuscated than when they were publicized, for all attacker OOBs, and assuming EUT. However, this was specific to the assumption that the attacker used an intent ratio as their intent proxy, rather than selecting an equilibrium game solution. In circumstances when the attacker was presumed to choose the equilibrium solution and intent was thus 100%, there was no quantifiable advantage of obfuscating deterrence investments over publicizing them, again under EUT assumptions. Quantifiable advantage here meant that unconditional risk was lower after the change in intent was applied. We also found that if we assumed PT held rather than EUT, the defender gained no quantifiable advantage from obfuscating deterrence investments over publicizing them.
Together, biases from PT and biases from incomplete information formed our “cognitive biases.” The implication of our findings was that under circumstances where it would be quantitatively advantageous to obfuscate details of possible deterrence investments, the government would also have to obfuscate other details such as available budgets and estimated reduced CIKR vulnerabilities after deterrence investments were made. We therefore expanded upon our 2012 paper claim:
“In order to generalize these findings, any advantage of a specific information availability circumstance must be robust given utility theory assumptions.”30
To conclude our discussion on the evolution of how threat can be treated in CIKR risk analysis, we return to the Lexicon which states that risk reduction “can be accomplished by reducing vulnerability and/or consequences.”31 However, based on our research, we propose that threat reduction, through deterrence quantification and consideration of cognitive biases, may be another way to analyze risk reduction.
How We Analyze Vulnerability: Systems Approaches and Organic vs Inherited Vulnerability, Or Exploitation Susceptibility
Starting Simple – Transfer Threat Modeling
Our first approach to exploring vulnerability in a new light involved the third paradigm we offered for modeling attacks. We explored how to model the concept of “layered defense” for defending CIKR networks from exploitation. Previous work on CIKR protection had leveraged the concept of fault trees.32 Fault trees showed how a fault, or in the case of CIKR risk analysis, a terrorist attack, could propagate throughout a network of CIKR. The Lexicon annotates a fault tree as a tool to estimate quantitatively the probability of program or system failure by visually displaying and evaluating failure paths.33
However, fault trees only demonstrated what Taquechel (2010) described as “inherited vulnerability” or the probability of fault propagation as governed by De Morgan’s Law and the logic gates (AND or OR) that connected nodes in the fault tree network. In reality, nodes in a CIKR network also have “organic vulnerability” as reflected by their own inherent security measures, or lack thereof.34
Thus, Taquechel reasoned that risk of exploiting a network composed of nodes that had organic security measures must be assessed using a combination of organic and inherited vulnerability terms. For example, a “terrorist transfer network” of overseas and U.S. ports could be rendered as a network of CIKR nodes, with logic gates governing the propagation of illicit material between nodes, but with each node having a quantifiable organic vulnerability inversely proportional to security measures at the node. Returning to the proposed definitions of criticality, perhaps exploitation of this network would depend highly upon one very vulnerable foreign port. Alternatively, it might depend on a more holistic measure of aggregated network failure probability derived from the combination of organic node vulnerabilities and inherited vulnerability of each “layer” of nodes, ports in this case.
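A minimal sketch of combining organic and inherited vulnerability through logic gates, with hypothetical port vulnerabilities:

```python
# Sketch: fault-tree style combination of organic node vulnerabilities.
# AND gate: attacker must exploit every input node; OR gate: any one suffices.
# All vulnerability values are hypothetical.

def and_gate(probs):
    result = 1.0
    for p in probs:
        result *= p
    return result

def or_gate(probs):
    miss_all = 1.0
    for p in probs:
        miss_all *= (1.0 - p)
    return 1.0 - miss_all

foreign_ports = [0.7, 0.5]  # organic vulnerabilities of two foreign ports
us_port = 0.3               # organic vulnerability of a U.S. port layer

# Inherited vulnerability of the pathway: either foreign port (OR),
# then the U.S. port (AND).
foreign_layer = or_gate(foreign_ports)        # 0.85
pathway = and_gate([foreign_layer, us_port])  # 0.255
print(foreign_layer, pathway)
```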
Ultimately, we modified Lewis’ Model Based Risk Assessment (MBRA) network modeling tool to create a logic graph that leveraged fault tree principles, but added an emergence-based algorithm to optimize funding to “harden” ports against terrorist transfer, reducing organic vulnerability and thus reducing overall network vulnerability. We combined the concept of topology from network science with the classic CIKR risk analysis treatment of vulnerability. Topology is a “mapping function” showing the relationship between nodes and links in a network.35 It is the “architecture” of the network, which may change over time if the network is “dynamic.”36
Logic gates in this approach reflected a different type of topology, wherein they represented virtual links between nodes, rather than physical links. The virtual link was a proxy for attacker decision making – whether to transfer illicit materials or people through both nodes to get to the next node (AND gate), or to transfer materials through a single node (OR gate). This extended the existing functionality of the MBRA tool to address a problem of interest to DHS, as depicted in Figure 3.
Figure 3. MBRA adaptation logic graph: optimal budget allocation minimizes network risk37
Preferential attachment undergirds the MBRA algorithm we used to model terrorist transfer networks as depicted in Figure 3. Lewis discusses how preferential attachment is a source of self-organized criticality (SOC), meaning a system is on the verge of collapse due to emergent processes occurring within the system to make it more efficient during steady state functioning, but also more susceptible to failure.38 Essentially, the MBRA algorithm is an emergent algorithm that allocates a dollar to a node to reduce organic vulnerability (or exploitation susceptibility). It documents the reduction in overall system vulnerability and risk. Then, it allocates another dollar at random. If the overall system risk is reduced, the dollars remain allocated as such. The algorithm reflects the system’s “preference” for allocations that reduce overall risk or increase overall system resilience. However, if the risk does not change or is increased, the algorithm “retrieves” the previously allocated dollar and searches for another recipient node. This is similar to how ants or termites “self-organize” in their flocking behavior as discussed in Lewis (2011).39
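A compressed sketch of that emergent allocation heuristic (not MBRA’s actual code; the node values, exponential investment-vulnerability relationship, and budget are all hypothetical):

```python
# Sketch of an emergent "allocate-a-dollar" heuristic: move dollars at random,
# keep moves that reduce overall network risk, retrieve those that do not.

import math
import random

random.seed(1)

# node: (baseline vulnerability v0, consequence c, hardening effectiveness g)
nodes = {"port_A": (0.8, 300.0, 0.05),
         "port_B": (0.6, 500.0, 0.02),
         "port_C": (0.9, 100.0, 0.10)}

def network_risk(alloc):
    # Exponential investment-vulnerability relationship per node.
    return sum(v0 * math.exp(-g * alloc[n]) * c
               for n, (v0, c, g) in nodes.items())

BUDGET = 100
allocation = {n: 0.0 for n in nodes}
for _ in range(BUDGET):                       # spread the budget at random
    allocation[random.choice(list(nodes))] += 1.0

for _ in range(5000):                         # emergent re-allocation
    donor, recipient = random.sample(list(nodes), 2)
    if allocation[donor] < 1.0:
        continue
    before = network_risk(allocation)
    allocation[donor] -= 1.0
    allocation[recipient] += 1.0
    if network_risk(allocation) >= before:    # no improvement: retrieve dollar
        allocation[donor] += 1.0
        allocation[recipient] -= 1.0

print(allocation, round(network_risk(allocation), 1))
```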
Increasing Complexity – WMD Transfer Modeling
With this third paradigm in mind, our initial work treated terrorist transfer threat as a general threat in our layered defense modeling. However, we decided to then focus more specifically on the WMD (weapon of mass destruction) threat for follow-on work. We also decided to merge our concepts of layered defense and deterrence measurement with a network science approach in our 2015 paper on measuring the deterrence value of securing maritime security chains against the WMD threat.40
In this work, we modeled a supply chain that an adversary might try to exploit by transferring a WMD, but we explicitly modeled port “node” vulnerability, or exploitation susceptibility, as a function of notional WMD detection technology in those ports. We modeled probabilities of encounter and detection at notional U.S. ports of debarkation or ports of entry, holding encounter probabilities constant and modifying detection probabilities proportional to the investment necessary to build and operate detection technology. The “elimination fraction” would represent a 95% probability of detecting a WMD within a container in a U.S. port, and the “elimination cost” would represent the investment necessary to build and operate technology with that 95% detection probability. The detection probability was combined with the encounter probability in a U.S. port to produce a notional “organic failure susceptibility” of that port.
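A minimal sketch of that organic susceptibility calculation; the encounter probability, elimination fraction, and elimination cost are hypothetical:

```python
# Sketch: a port's organic exploitation susceptibility as the probability a
# WMD container transits undetected. All parameters are hypothetical.

P_ENCOUNTER = 0.9      # P(container is routed through screening); held constant
ELIM_FRACTION = 0.95   # detection probability at full investment
ELIM_COST = 50.0       # $M required to field that detection capability

def p_detect(investment):
    # Detection probability proportional to investment, capped at the
    # elimination fraction.
    return ELIM_FRACTION * min(investment / ELIM_COST, 1.0)

def organic_susceptibility(investment):
    # Exploited unless the container is both encountered and detected.
    return 1.0 - P_ENCOUNTER * p_detect(investment)

for spend in (0.0, 25.0, 50.0):
    print(f"${spend}M -> susceptibility {organic_susceptibility(spend):.3f}")
```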
Then, we incorporated logic gate principles from the previous layered defense modeling work to proxy attacker “transfer pathways” from foreign ports, through U.S. ports, and ultimately to inland “target cities”. These transfer pathways thus represented “inherited exploitation susceptibility”, as opposed to inherited vulnerability from previous work. Conceptually, this combined technology effectiveness modeling with network theory, as depicted in Figure 4.
We then incorporated concepts from deterrence quantification. Once we could characterize the organic exploitation susceptibility of a port, and incorporate inherited exploitation susceptibility probabilities from logic gates representing transfer pathways, we then could create risk equations, reflecting risk of WMD detonation in a U.S. inland city. These were conditional risk equations that excluded attacker intent probabilities.
These conditional risk equations could change based on the different permutations of transfer pathways an adversary could exploit to transfer a WMD. We converted the equations to utility functions, showing the expected utility an adversary would gain from detonation of a WMD. Doing so allowed us to then create a game theoretic scenario case study wherein the defender had different options to invest in WMD detection technology equipment at U.S. ports, and the attacker had various pathways to exploit. From this game we gleaned proxies for attacker intent, here again an output of risk equations, and created unconditional risk equations for the inland cities. This created a different flavor of “deterrence portfolio” from the portfolios we had created that reflected attacker intent for direct attacks on CIKR in our 2012 work on quantifying deterrence. This allowed us to measure how various investments in WMD detection technology might deter adversary exploitation of supply chains.
Figure 4. Notional MBRA WMD transfer network41
Overall, this work offered an alternative to a claim in the Maritime Commerce Security Plan: that inspecting containers for WMD once they arrive in U.S. ports is too late.42 We suggested that this is not necessarily true if the target is an inland city – after the container is offloaded onto a truck and moved toward a large inland population center. However, we did not claim that it was altogether imprudent to first inspect containers overseas or at U.S. ports of entry.
Results of Notional Case Study
One finding of our case study that applied our methodology was that the best investment in WMD detection technology was against a specific transfer pathway that differed from what traditional attacker-defender modeling efforts might suggest. This was because our methodology did not necessarily rely on the equilibrium output of the deterrence game we analyzed, but instead hedged against the possibility that an adversary might not consider an “optimal” transfer pathway to exploit.
Another finding was that we could put discussion of possible attacker tactics to move WMD into the U.S. into quantitative terms. If a logic gate between a foreign port and a U.S. port was “AND”, this represented that the vessel the WMD was secreted upon would stop at two foreign ports before its voyage to the U.S. If the logic gate in our model was “OR”, this meant the vessel only stopped at one foreign port before its voyage to a U.S. port.
Similarly, an AND gate between the U.S. port node “layer” and target inland U.S. city meant that the attacker intended to “decentralize” the introduction of the WMD by offloading component parts at one U.S. port. Then, the vessel would continue onto another U.S. port and offload the remaining components. Eventually the attacker would arrange for the components to be reunited and continue their transit toward the inland target city. Alternatively, the OR gate would mean the weapon was moved through a US port of debarkation intact and ready to detonate upon arrival at the target city.
A practical implication of this research was that intelligence collection and analysis efforts might focus on attacker preferences for exploiting various U.S. ports. This would help inform decisions on how to invest in WMD detection technology, accounting for foreign port exploitation preferences. To elaborate, if intelligence estimates were confident that multiple U.S. ports would be exploited in a WMD component “decentralized introduction” effort, foreign port exploitation preferences would not be especially valuable in informing investment decisions, per the model’s approach.
Another practical implication was that the costs to create WMD detection technology could be compared to the probabilistic effectiveness of detection, to calibrate a model that compared actual investment to “desirable investment” to maximize detection probabilities.
The Lexicon claims that event trees are used to project forward in time, modeling probabilities of events leading to some future outcome, whereas fault trees look retrospectively at the cause of an event that has already occurred.43 Fault trees leverage logic gates to combine probabilities. Even though fault trees are recommended for retrospective analysis in the Lexicon, we offer that leveraging logic gates as proxies for attacker decision making and thus leveraging the fault tree approach might be a useful alternative for estimating probabilities of future terrorist attacks.
How We Analyze Consequence: Resilience, Exceedence Probability, Antifragility
Resilience
The Lexicon defines resilience as the:
“ability of systems, infrastructures, government, business, communities, and individuals to resist, tolerate, absorb, recover from, prepare for, or adapt to an adverse occurrence that causes harm, destruction, or loss.”44
With this in mind, we refocused our attention on FEMA’s Port Security Grant Program (PSGP). Taquechel had worked in an office that provided technical expertise on port security to FEMA, and thus developed a technical approach to model grant allocation based on a resilience-oriented, network-focused framework. This approach was touted as one option to support a prospective policy decision to convert the PSGP program to a resilience-based program.
Starting Simple – Networks and Resilience
Returning again to the concept of criticality, we claimed maritime supply chains could be modeled as nodes, here ports and inland cities, and links, here means of transportation between those ports/cities. We wanted to show that supply chains might depend on ports to keep running after a disruption, and proposed an approach to reduce the criticality of the ports to the overall supply chain, thereby increasing supply chain network resilience. Resilience funding allocations would reduce the cascading economic disruption effects caused by port shutdown or damage to port facilities.
First, we discussed the idea that we should identify a certain level of supply chain loss to be expected after an attack, but identified challenges with port facilities sharing specific data, for fear of violating proprietary data restrictions or disclosing information that would give their competitors an advantage.
Then we claimed the current theoretical foundation underpinning the FEMA allocation of grant funding, the classic R=TVC equation, might be insufficient if the grant program transitioned to a resilience-based, network-focused approach. This was because this equation did not capture network metrics such as node degree (number of links to other nodes), and instead took an asset-centric focus on risk, rather than a network-based focus on system resilience. Equation 2 below is a risk equation that accounts for node degree, thus incorporating a network metric:
Equation 2. Risk equation for risk to network with i nodes, g=node degree. Threat (T) is generic threat to network.45
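The published equation appeared as an image; a plausible reconstruction consistent with the caption (our rendering, not the verbatim published form) is:

```latex
R = T \sum_{i} g_i \, v_i \, c_i
```

where g_i, v_i, and c_i are the degree, vulnerability, and consequence of node i, so that highly connected nodes contribute more to network risk.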
We also discussed an approach to modeling system resilience that used network interdiction methods, an approach espoused in the OR community, and explained the difference between those models and probabilistic risk-based network science models.
Next, we proposed definitions of quantifiable resilience for both individual maritime supply chain networks and ports, because our modeling approach leveraged quantitative values of risk, thus linking risk and resilience. We also needed our approach to remain fairly consistent with the PSGP principle of allocating money to ports, and then the ports redistributing money to various claimants such as port CIKR. We further proposed that resilience can be organic and maximized with organic CIKR resources, or enhanced/further maximized with PSGP allocations earmarked to rebuild damaged infrastructure after an attack. Enhanced resilience can be further broken down into mathematically optimal or sub-optimal resilience, depending on decision maker preferences for funds allocation.
Our approach integrated aspects of OR “reverse-engineering,” but in a way we did not find elsewhere during our literature review. Instead of reverse-engineering systems to fine-tune performance for steady state operations, our approach would arguably help reverse-engineer maritime supply chain network “performance potential” to return to standards after a perturbation. Also, we proposed how the network science concept of preferential attachment, wherein hubs accumulate increasingly more links to other nodes based on efficiency and optimization of function, can be counteracted by a different “preferential attachment” – the optimization of grant funding towards the most critical hubs to minimize port failure after a perturbation and thus maximize supply chain resilience. The “counteracting” preferential attachment demonstrated during a simulated distribution of resilience funding to network nodes would reduce the economic efficiency-driven SOC that had naturally evolved in that supply chain network.
Throughout our detailed explanation of our model’s equations, we used the phrases “organic failure susceptibility” and “inherited failure susceptibility” instead of “organic vulnerability” and “inherited vulnerability.” We wanted to emphasize that even though the event that precipitated a supply chain network perturbation might be paradigm 2, direct attack to cause cascading downstream effects, the focus of network resilience modeling was susceptibility to failure after the attack had occurred, not the probability the attack would occur in the first place. Thus, we leveraged an approach from our work on layered defense against a terrorist transfer network, but modified it to accommodate probabilities of failure after an attack had already occurred.
First, we formulated detailed equations for supply chain network “expected consequence” that modified maximum consequence by applying the failure susceptibility of the nodes in that network. This approach also incorporated a new way to represent inherited failure susceptibility: node degree, or how many links the supplier node and other nodes had to downstream nodes. Our previous approaches to modeling inherited exploitation susceptibility had treated this probability as a function of attacker preferences as modeled via logic gates.
Organic failure susceptibility was now a function of the probability a CIKR node would fail to resume production after a perturbation, based on reserve raw product, relationships with suppliers, and organic ability to rebuild damaged physical infrastructure onsite. This approach leveraged the Lexicon concept of redundancy:
“additional or alternative systems, sub-systems, assets, or processes that maintain a degree of overall functionality in case of loss or failure of another system, sub-system, asset, or process.”46
Second, we created a network conditional risk equation, which excluded attacker intent to attack that network, specifically the maritime port CIKR. Third, we combined network conditional risk values to create a proxy “port conditional risk value”. Fourth, we developed an equation for “port organic resilience” as a function of port conditional risk, and developed “resilience ratios” for each port to govern the first “macro-distribution” of PSGP funding to individual ports. In an approach that was reminiscent of how we converted risk to utility for deterrence and threat analysis, we converted risk to resilience metrics in this approach.
Fifth, we showed how our approach could accommodate flexibility to distribute funding to ports based on unconditional port risk as an alternative to conditional port risk. This leveraged the principle of intent ratios from our work on quantifying deterrence, and changed the formulation of the port organic resilience equation. Sixth, we proposed an equation for supply chain network organic resilience, to help guide “micro-distribution” of PSGP funding or subsequent redistribution to maritime CIKR claimants within each port. Just as port organic resilience can be based on conditional or unconditional port risk, we showed how to model network organic resilience based on conditional or unconditional network risk.
Seventh, we revisited the MBRA iterative emergence-based algorithm to be used to optimize PSGP funding distribution amongst CIKR nodes in each port’s supply chains. The objective function of this algorithm was now to maximize port resilience, enabling us to convert organic port resilience to enhanced port resilience. Importantly, this approach optimized by allocating to multiple CIKR within a port, rather than allocating all resources to the most “attractive” CIKR. Eighth, we explained how this optimization would create enhanced supply chain network resilience as a function of network conditional risk after optimal allocation. We then summed the new network conditional risk values to get port conditional risk after an equilibrium allocation was achieved, and then created a new enhanced port resilience value.
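A compressed sketch of the macro-distribution step described above (port conditional risk values and the budget are hypothetical, and the resilience ratio is proxied directly from conditional risk):

```python
# Sketch: "macro-distribution" of a PSGP budget to ports in proportion to
# their conditional risk, as a proxy for resilience ratios. Values hypothetical.

BUDGET = 10.0  # $M available for the grant cycle

port_conditional_risk = {"port_A": 120.0, "port_B": 60.0, "port_C": 20.0}

total_risk = sum(port_conditional_risk.values())
macro_allocation = {port: BUDGET * risk / total_risk
                    for port, risk in port_conditional_risk.items()}

print(macro_allocation)  # {'port_A': 6.0, 'port_B': 3.0, 'port_C': 1.0}
# Micro-distribution would then re-run an emergent optimizer (as sketched
# earlier) over each port's supply chain CIKR nodes.
```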
Ultimately, we created an approach to synthesize risk, resilience, network science, performance constraints and tradeoffs, optimization, and quantification of deterrence in a unified modeling/simulation approach to potentially support a paradigm shift in an existing DHS program.
Increasing Complexity – Normal Accident Theory, Self-Organizing Criticality, Topology, Exceedence Probability, and Antifragility
We now return to the concept of self-organized criticality (SOC). SOC reflects the catastrophic failure potential of a tightly coupled system prone to cascading failures. With this in mind, we discuss a related theory. Three key ingredients of Perrow’s normal accident theory are: (1) two failures in a system coming together in an unexpected way; (2) faster cascading of failures when the system is tightly coupled; and (3) “catastrophic potential” in systems subject to normal accidents.47 Lewis then goes on to explain how power laws can be used to model unpredictability in systems, and how coupledness of system components can be modeled using network theory.
Topology can also proxy SOC, as discussed earlier. In previous discussions of approaches to characterizing vulnerability, we discussed how logic gates can be a proxy for attacker transfer pathway preferences, and are thus a proxy for network topology. Alternatively, we showed how the degree of supply chain nodes, node degree being another proxy for topology, can influence resilience in the port security grant reallocation approach. Essentially, topology influences the coupledness of systems.
If the topology is such that one hub in a network has many links and other hubs have significantly fewer, that network may be considered “scale-free”; it likely has a low resilience exponent (explained later) and is a high-risk system. That is, if the hub fails and transmits the failure throughout its many links to other nodes, or other nodes are cut off from supply, the network fails, possibly catastrophically.
Thus, topology is related to network fragility. One way a network becomes fragile is “link percolation” or accumulation of links at a hub, rendering the system more efficient but also more prone to collapse if the hub fails.48 If links percolate at multiple nodes, not just the hub, this may have different implications for network topology, fragility, and SOC.
Network Science Metrics, SOC, and Organic vs Inherited Failure/Exploitation Susceptibility
We can argue that node degree of a network’s hub, or node with highest link percolation, is a way to proxy network inherited vulnerability or inherited exploitation susceptibility. Furthermore, we can propose that transfer pathways as a proxy for network topology are also a proxy for inherited failure or inherited exploitation susceptibility of a network. This dyad of “physical links” vs “virtual links” is now further explained.
Transfer Networks – Exploitation Susceptibility
A WMD transfer network has high organic exploitation susceptibility if the WMD detection equipment at its nodes is poor. Coupled with OR gates between nodes (here meaning terrorists prefer to ship the WMD components from one foreign port to one U.S. port of exploitation, thus reducing opportunities for detection), this network would have a high exploitation susceptibility, would be fragile, and thus would have high SOC. The prominence of OR gates as “virtual links” may have a similar effect to physical link percolation, in the sense that many links increase exploitation susceptibility by creating many opportunities to transfer a contagion throughout a network.
Focusing on organic exploitation susceptibility, Lewis suggests it makes sense to protect highly connected hubs to prevent network failure. By increasing security at these hubs, we can reduce organic vulnerability or exploitation susceptibility. Returning to the WMD modeling approach, increasing WMD detection technology capability at foreign ports reduces organic exploitation susceptibility of those ports. If they are “hubs” for U.S. shipments, meaning a preponderance of container ships flow through that foreign port enroute to U.S. ports, improving security should in theory reduce overall network exploitation susceptibility and reduce risk, the inherited susceptibilities notwithstanding.
Also, networks can be “rewired” to reduce self-organized criticality, thus changing inherited failure susceptibility. If a hub has some links removed and re-wired to other nodes, the inherited failure susceptibility of downstream nodes might be lowered. To wit, if the newly “less connected” node fails, subsequent cascading network failure may be less likely or have less impact since fewer nodes depend on the hub. However, we would have to evaluate the flow of a failure throughout the remainder of the network if other nodes now have higher degree.
In the case of a WMD transfer network, if an attacker’s desired transfer pathway to move a WMD is forced to change to a riskier pathway (e.g. the AND logic gate which means multiple ports are exploited, increasing their chances of detection), in effect we have “de-percolated” the network. De-percolation may mean reducing the overall number of links in a network, but here we suggest it could also mean re-wiring links away from a hub, reducing degree of that hub. The parallel argument here is that we have reduced options available to the attacker, the equivalent of an AND gate, forcing them to exploit multiple U.S. ports rather than just one. We have thereby increased chances of detection, the organic node WMD detection capabilities notwithstanding, and arguably have reduced SOC.
Supply Chain Networks – Failure Susceptibility
If a maritime port CIKR has many transportation links leading outward to downstream nodes, it has a high degree. Moreover, if that hub and its links (e.g. rail transport in and out of a refinery) are poorly protected, that poor security is a proxy for high network organic failure susceptibility. High organic node failure susceptibility (poor security) but few links (low degree) may have little overall effect on network resilience.
Also, if there are many AND logic gates between nodes, meaning a node needs the supply of multiple upstream suppliers, not just one, then that proxy for network topology increases the inherited failure susceptibility of the network. Therefore, high hub node organic failure susceptibility, coupled with a certain network topology of logic gates, may increase overall supply chain network SOC to the point of high likelihood of collapse.
Link Density and Topology?
Is link density a good proxy for network topology, or helpful for estimating SOC of a network? Link density represents the ratio of actual links to possible links in a network.49 Many links may mean a contagion (e.g. a container with a WMD) can spread easily through a network, meaning a terrorist organization has many options to move the weapon from one node to another. However, many links might also mean a network is resilient, meaning if one link that moves a commodity to another node fails, other links exist to shoulder the load. So, it may depend on what kind of network we are analyzing.
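For reference, a minimal sketch of the link density calculation for an undirected network:

```python
# Sketch: link density = actual links / possible links (undirected network).

def link_density(n_nodes, n_links):
    possible = n_nodes * (n_nodes - 1) / 2   # no self-loops, undirected
    return n_links / possible

# A hypothetical 10-node supply chain with 12 links:
print(round(link_density(10, 12), 2))  # 0.27
```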
If we are assessing a WMD transfer network, link density may mean there are many links between nodes, or that terrorists find many different possible transshipment routes between foreign ports, U.S. ports, and inland cities attractive. Therefore, a transfer network with a high link density might naturally be highly exploitable, or have high inherited exploitation susceptibility, notwithstanding the organic security at individual ports of embarkation and debarkation. This network might be said to have high SOC (unless every individual node is highly organically resistant to exploitation). In contrast, a transfer network with low link density might mean very few of the possible transfer pathways are attractive to a terrorist organization. That network would have low SOC.
However, high link density in a supply chain network such as the one we analyzed in our work on the PSGP and resilience might mean something different. If the port “hub” of the network fails or supplier nodes are damaged, downstream cascading effects might be minimized if there are many links. But this would also require high link security or link resilience. Also, it may not matter how many resilient or redundant links exist in the network if CIKR within the port “hub” are the sole sources of supply in the network, but are damaged. Thus, link density may not be a useful metric to ascertain network SOC in this type of CIKR network.
Other Examples of Organic and Inherited Failure/Exploitation Susceptibility
The organic vs inherited failure/exploitation susceptibility dyad appears in other discussions of SOC. For example, Lewis (2011) discusses how to minimize the spread of disease through analysis of a “social network” of people. Prevention of disease is difficult due to the adaptability of microorganisms in response to the evolution of vaccines.50 Therefore, it is difficult to reduce the “organic infection susceptibility”, another way of saying “vulnerability to disease”, of individual humans.
However, the alternative could be to change the topology of the human social network through quarantining measures. This would in effect reduce “inherited infection susceptibility” by increasing the length of the links a disease organism must travel between human “nodes” to propagate the infection. Whereas reduction of the number of links in a network is link depercolation, here one can conceive how increasing the length of links between people could essentially have the same effect as link de-percolation. Conservation of energy means that longer links take energy from shorter links, requiring more expenditure of energy for a disease to propagate, and thus decreasing the likelihood of sustained infection within a population.51 The individual ability of each person to fight infection when exposed is less relevant here; if the disease cannot travel, even the weakest person would be immune.
Lewis summarizes his discussion of de-percolating human social networks by claiming that “inoculation is a form of hardening that reduces vulnerability while depercolation is a form of resiliency that reduces consequence.”52 Here we expand on that concept and offer an alternative interpretation: inoculation reduces organic failure susceptibility, while quarantine and isolation (de-percolation) are a form of hardening that reduces network-inherited failure susceptibility. By making it more difficult for failures to cascade between critical infrastructures, for example by increasing redundant sources of supply for downstream refineries in a petrochemical supply chain network, we might de-percolate CIKR networks by removing or effectively bypassing “infected” links. By doing so, we “isolate” infections, here the spread of supply chain failure. Thus, we minimize inherited failure susceptibility, increase resilience, and minimize network SOC.
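A minimal Monte Carlo sketch can illustrate the effect, assuming a synthetic scale-free network and an illustrative per-link transmission probability; nothing here reproduces Lewis’s actual models.

```python
import random
import networkx as nx

def mean_cascade_size(G, seed, p_transmit, trials=2000):
    """Monte Carlo estimate of the mean cascade ("infection") size starting
    from a seed node, where each link independently transmits failure with
    probability p_transmit."""
    total = 0
    for _ in range(trials):
        failed, frontier = {seed}, [seed]
        while frontier:
            node = frontier.pop()
            for nbr in G.neighbors(node):
                if nbr not in failed and random.random() < p_transmit:
                    failed.add(nbr)
                    frontier.append(nbr)
        total += len(failed)
    return total / trials

# Synthetic scale-free network standing in for a social or supply network.
random.seed(1)
G = nx.barabasi_albert_graph(200, 3, seed=1)
print("before de-percolation:", mean_cascade_size(G, seed=0, p_transmit=0.2))

# De-percolate: remove a random 40% of links, a quarantine-style intervention
# that leaves every node's organic susceptibility unchanged.
H = G.copy()
H.remove_edges_from(random.sample(list(H.edges()), int(0.4 * H.number_of_edges())))
print("after de-percolation: ", mean_cascade_size(H, seed=0, p_transmit=0.2))
```

The second estimate is typically much smaller: the nodes are no hardier, but the contagion has fewer paths to travel.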
Long Links: Better or Worse?
Longer links could be good if we are trying to minimize cascading failures brought on by epidemics or, in the case of CIKR protection, failures brought on by exploiting maritime ports to transship a WMD in a container. However, longer links can also be a burden and increase the SOC of CIKR networks. This can be demonstrated with a study of the evolution of the power sector. Over time, this sector has evolved and approached SOC through a combination of economic and regulatory forces. Essentially, longer transmission lines between generation stations and customers have increased the fragility of the power network, as these lines become subject to failure from excessive load.53 Longer links thus have the opposite effect if we are trying to protect our CIKR from failure: instead of making it more difficult for failures to propagate throughout a system, the links themselves are subject to failure. Link density and length may represent a catch-22 for network protection and resilience.
Exceedence Probability
Another concept to consider in resilience analysis is exceedence probability. The components of the standard DHS risk equation leverage probabilistic risk analysis (PRA) terms that focus on the probability an attack will be successful given that it is attempted. When that probability is multiplied by the consequence of the attack, we get risk. However, what if we instead consider the probability that the magnitude (consequence) of an event will exceed a certain threshold, rather than focusing on the probability the event will occur in the first place?
This might constitute a paradigm shift of a different flavor. OR advocates have warned against static quantifications of threat, claiming that they fail to account for adaptive adversaries. One shift in response to that concern has been to modify the treatment of threat, through deterrence measurement as described earlier. A second shift could be to consider the probability of the consequence exceeding a pre-determined level, hence the term exceedence probability. This way, the issue of static versus dynamic probabilities of attack occurrence may be bypassed.
The insurance industry uses exceedence probability to set premiums. For examples, see Grossi and Kunreuther.54 More recently, Lewis et al. (2011) have used it to classify various hazards CIKR networks face as low risk (high resilience) or high risk (low resilience).55 Exceedence probability is used to create a “resilience exponent” of a network, shown in Equation 3:
Equation 3. Probable maximum loss (PML) as a function of resilience exponent “q”: PML(x) = x · EP(x) ∝ x^(1−q), where EP(x) ∝ x^(−q) is the exceedence probability that system consequence surpasses x.56
Now, instead of PRA, we have PML as an alternative expression of risk for systems of CIKR.57 The resilience exponent q is derived by plotting the probability that system failure consequences exceed a given threshold, which yields a power law. If q > 1, the system is low risk, or high resilience. If q < 1, the system is high risk, or low resilience.
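As a sketch of how q might be estimated in practice, the code below fits a power law to the empirical exceedence curve of synthetic heavy-tailed loss data; the log-log least-squares procedure is a generic choice of ours, not necessarily the calibration used by Lewis et al.

```python
import numpy as np

def resilience_exponent(consequences):
    """Estimate the resilience exponent q by fitting a power law EP(x) ~ x^-q
    to the empirical exceedence probability, i.e., the fraction of observed
    events whose consequence exceeds x (log-log least squares)."""
    x = np.sort(np.asarray(consequences, dtype=float))
    n = len(x)
    ep = 1.0 - np.arange(1, n + 1) / (n + 1.0)  # P(consequence > x)
    slope, _ = np.polyfit(np.log(x), np.log(ep), 1)
    return -slope

# Illustrative synthetic losses drawn from a heavy-tailed (Pareto) distribution
# whose true exceedence exponent is 1.5; real CIKR data would replace this.
rng = np.random.default_rng(7)
losses = rng.pareto(1.5, size=500) + 1.0
q = resilience_exponent(losses)
verdict = "low risk / high resilience" if q > 1 else "high risk / low resilience"
print(f"q = {q:.2f}: {verdict}")
```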
Low risk systems, as characterized by Equation 3, may adapt. High risk systems may collapse and fail, becoming extinct. This distinction between higher and lower-risk systems can be reflected in a feedback loop diagram of “punctuated reality”.
Figure 5. The two major feedback loops of Punctuated Reality58
In this depiction, systems evolve and approach SOC. A “normal accident”, punctuating the equilibrium that existed until that point, will occur, and the system may adapt, increasing SOC even further and establishing a new equilibrium. However, a “black swan” event of much higher consequence but lower probability may occur, driving the system toward extinction rather than adaptation. Low risk (high resilience) systems may be grouped with those that can achieve small adaptations but also withstand black swan events, whereas high risk (low resilience) systems may become extinct after a black swan event cripples them.
Returning to the discussion of supply chain networks and resilience, over time these systems might evolve to become more efficient. However, what happens when an event like Deepwater Horizon occurs? Arguably this was a black swan-type event. This paper does not explore the details of how the petrochemical supply chain in the Gulf of Mexico was affected, but imagine if the system had been optimized such that the Deepwater Horizon platform were the sole source of feedstock for the major Gulf refineries. From an economic standpoint, that might have made sense, but from a redundancy and resilience standpoint, the consequences could have been catastrophic.
SOC can be reduced, and system resilience thus increased, by “increasing the resilience exponent” of a system per Equation 3. How do we do this for CIKR systems? Lewis proposes some ways: adding surge capacity, operating systems below capacity, and redesigning networks altogether.59 But these solutions are not without costs.
The Future – “Antifragility”?
In addition to SOC and exceedence probability, can we extend past resilience and apply the concept of antifragility to CIKR protection? Nassim Nicholas Taleb has written about “antifragility”, which essentially describes systems that actually benefit from disorder rather than suffer from it.60 He emphasizes in his works that antifragility is not the same as resilience: the latter means the ability to return to a pre-perturbation state, whereas the former means the system will exceed pre-perturbation performance levels.
This is an interesting concept for explaining complex systems like the stock market, where Taleb has experience and observed phenomena that influenced his theories and publications, but what are the implications, if any, for CIKR system protection and resilience? Taleb differentiates between “mechanical” systems, which wear from use, and “organic” systems, which actually benefit from stress and (reasonable) perturbations.61 For example, humans, as organic “networks” of organ systems and sub-systems, benefit from strenuous exercise over time, whereas a washing machine will wear over time with strenuous use, even with consistent maintenance. If we believe CIKR networks are “mechanical” systems, it may be futile to hope perturbations are beneficial. However, if we believe the “organic” model can be applied to CIKR networks, perhaps systems of CIKR can improve after shocks.
For example, how will the Gulf coast petrochemical industry network adapt to Deepwater Horizon? It may be too early to tell, but many years from now, we might compare productivity and other appropriate metrics to pre-Deepwater levels and conjecture whether this disaster contributed to long-term improvement in petrochemical supply chain network management.
An example from popular culture may further illustrate. In Forrest Gump, the protagonist’s shrimping vessel was subject to perturbations during the storm but was robust enough to survive, while the rest of the fleet, tied up at the pier, proved brittle and was destroyed by the elements. Gump’s subsequent monopoly on the shrimping industry may reflect a flavor of “antifragility” if we consider the entire shrimping community as a network. In fact, the shrimping business might have improved from pre-storm levels. With less competition, the risk of overexploiting the resource may have diminished, allowing better stock health and improving the overall market. This may be an example of a mitigating effect on the “tragedy of the commons”, where a common resource is overexploited to the eventual detriment of all. Taleb claims that “the antifragility of some comes necessarily at the expense of others.”62
Organic systems supposedly respond better to acute stressors than to chronic stressors.63 As a real-world example, one author has claimed that downtown Manhattan, as an “economic system”, may have benefitted from the tragedy of 9/11.64 It may seem distasteful to claim that long-term benefit is a product of disaster, but if we look at the hard numbers, we may have a case. Arguably, 9/11 was an “acute stressor.”
Also, organic networks tend to be self-healing.65 During 2012, New England and New York petrochemical facilities adapted in the aftermath of Hurricane Sandy. They found feedstock from other sources and, to facilitate recovery, shared information that they might otherwise have treated as proprietary. Does any data support the claim that those networks are stronger now than they were before Sandy?
Returning to the concept of the resilience exponent, Taleb’s arguments might extend the utility of this exponent beyond serving as a proxy for system resilience. Could we hypothesize that q could predict antifragility of CIKR networks? The lower the exponent, the more likely the system could benefit from perturbation, increasing output or other performance metrics. This claim would be subject to validation by modeling/simulation and real-world events.
In our 2013 paper on PSGP resilience, we emphasized that the “desired” post-perturbation system performance level would have to be agreed upon in order to establish a baseline for the resilience modeling effort.66 If stakeholders agree that a goal should be to come back stronger after a perturbation, this would transition the notional model from resilience evaluation to antifragility evaluation. It is unclear how specific resilience investments to rebuild damaged infrastructure would increase productivity beyond pre-perturbation levels, but this is an exercise for future research.
However, there are also arguments against conceptualizing CIKR networks as organic systems. We might claim that if individual CIKR within a network were antifragile, then the system would also be antifragile. For example, as we improve the ability to restore node productivity past pre-perturbation levels, thus improving overall system resilience, we might improve system antifragility. However, Taleb’s claim regarding organic system antifragility is that the individual component is fragile whereas the whole is antifragile. Taleb offers the example of genes: humans are individually fragile and thus die, but we may propagate our genetic information before death, meaning the human race writ large is antifragile.67 Therefore, this concept might not apply to networks if we claim that improving hubs improves the overall network.
Survival of the Fittest?
Taleb discusses the concept of autophagy, wherein weaker cells in an organism are killed but the remaining cells become even stronger.68 Can we apply this concept to the CIKR network discussion? It might suggest that laissez-faire economic policies, letting industry grow unchecked and letting market forces govern, would be the ideal approach. The weaker industries would fail or be subject to merger/acquisition. Taleb advocates against excessive intervention in the markets, citing the concept of “iatrogenics”: interventions to manage complex systems whose long-term deleterious effects exceed their benefits.69
However, Lewis might argue that laissez-faire policies would enable the evolution of SOC in CIKR networks: economic efficiency at the expense of resilience. History has shown that various sectors of the economy tend toward SOC when de-regulated. This would make networks fragile, not antifragile. Therefore, some government regulation might be necessary to ensure antifragility.
A third view could be that it is better for overall antifragility to let SOC evolve and allow weaker CIKR systems to be eliminated during a punctuated equilibrium or black swan event: some low-probability but extremely high-consequence disaster that strikes business networks precisely where SOC has made them vulnerable. The weak networks would collapse; the resilient networks would survive; antifragile systems would “thrive” and benefit from the perturbation. Survival of the fittest at the “national economy” ecosystem level, if not at the individual CIKR system level, could be the best approach. Managing public expectations for the supply of certain commodities would be critical.
Taleb claims “antifragility of higher levels may require the fragility of lower levels within an ecosystem.”70 In other words, local but not global overconfidence is good within the economic ecosystem: we want individuals to take risks and fail, which means systems should improve over time.71 We might extend this argument to claim that individual business systems will take risks and fail, which means the national economy should in theory improve over time as lessons are learned (and, hopefully, heeded).
A final thought on Taleb’s analysis: he discusses “transferring fragility from the collective to the unfit.”72 For example, in 2009 the federal government bailed out failing banks. Did this make them more fragile over the long term because they did not have to bear the consequences of their decisions? Applying this logic to the PSGP, if we subsidize maritime CIKR “hubs” through port security grants, we harden the hubs and increase resilience from a network science perspective, but are we inadvertently harming the system by decreasing self-reliance in those hubs? If left to their own devices but encouraged to be individually antifragile, without government subsidy, would they ignore that encouragement and continue to optimize for economic efficiency while decreasing system resilience?
SOC arguably reflects the reverse argument: transfer of fragility from the individually unfit MCIKR to the collective. As a hub accumulates more influence over a network (e.g., through link accumulation) but fails to increase its security or individual node resilience, the resilience of the entire network may suffer, as a perturbation to that node could have cascading effects throughout the entire system. Again, we are back to a dilemma: do we allow market forces and deregulation to permit SOC and the transfer of fragility from the unfit to the collective, knowing that if a system fails, the next system may or may not be stronger? Or do we transfer fragility from the collective to the unfit and regulate industry such that resilience is increased but economic efficiency may be stifled? Is there a balance between the two goals?
Alternative Futures
The Lexicon defines “alternative futures analysis” as:
“a set of techniques used to explore different future states developed by varying a set of key trends, drivers, and/or conditions.”73
One example is a statistical forecasting technique known as Winters’ method (Holt-Winters exponential smoothing), used in the past by DHS to project anticipated migrant flow in the Caribbean based on political and economic “push-pull” factors. If these alternative futures techniques included forecasting of probabilities, Taleb might object, as the “black swan”, the low-probability, high-consequence event, cannot be predicted by ordinary probability estimates. Therefore, to have credibility in Taleb’s world, alternative futures analysis might predict the range of possible consequences of an outcome, and decision makers could then hedge against the worst-case consequence rather than relying solely on probability estimates. If we adopted this philosophy, we would be well advised to look at the magnitude and reach of previous disasters, optimize systems for those consequences first, and then make refinements for economic efficiency second.
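For readers unfamiliar with Winters’ method, the sketch below fits a Holt-Winters model to a fabricated monthly series standing in for migrant-flow counts; the data and parameters are illustrative assumptions, not DHS figures, and statsmodels is an assumed library choice.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Fabricated ten-year monthly series: linear trend + annual seasonality + noise,
# standing in for migrant-flow counts. Not DHS data.
rng = np.random.default_rng(0)
months = np.arange(120)
flow = 500 + 2 * months + 80 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 25, 120)

# Winters' (Holt-Winters) method: triple exponential smoothing with additive
# trend and additive annual seasonality; fit, then project one year ahead.
model = ExponentialSmoothing(flow, trend="add", seasonal="add",
                             seasonal_periods=12).fit()
print(model.forecast(12).round(0))
```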
Putting It All Together: Implications for CIKR Protection and Resilience?
We have given examples of how to analyze threat, vulnerability, and consequence in different ways. If we use intent as the output of game theoretic modeling, our risk equations may account for “tactical intelligence” as well as “strategic intelligence” and may have implications for deterrence. If we model layered defenses against terrorist transfer of WMD as a network and use logic gates as proxies for attacker preferences, absent more specific intelligence, this approach may provide us with alternate analysis to inform where to invest in WMD detection technology. If we model ports as “hubs” with downstream customer networks, and estimate network resilience, that may have implications for how we allocate funding to protect our port infrastructure through grant programs.
Also, if we calculate exceedence probability and probable maximum loss for CIKR networks instead of performing the traditional PRA calculations, would this have implications for how we allocate resources? Should we allocate prevention-based resources to high-risk/low-resilience systems to try to protect against the “black swans”? For higher-resilience, lower-risk systems, should we allocate resources toward responding to higher-probability but lower-consequence events? Finally, is resilience enough? Are there ways to engineer CIKR systems to come back even stronger after a perturbation, or promote “antifragility”?
The Lexicon defines “risk governance” as:
“actors, rules, practices, processes, and mechanisms concerned with how risk is analyzed, managed, and communicated.”74
If we believe the theories behind CIKR risk analysis, protection, and resilience are evolving, then that naturally influences the “rules, practices, and processes” concerned with how risk is analyzed, managed, and communicated. The DHS Quadrennial Homeland Security Review (QHSR) of 2014 emphasizes deterring terrorists, interdicting WMDs, and safeguarding legal trade.75 It also acknowledges CIKR network interdependencies, and that networked partnership is important to combat terrorism. We hope that the ideas posed in this paper will help inform theory and practice as the homeland security and emergency management enterprise evolves in its understanding of risk.
About the Authors
Eric F. Taquechel is a U.S. Coast Guard officer with experience in shipboard operations, port operations, critical infrastructure risk analysis, contingency planning/force readiness, operations analysis, budget/personnel management, and planning, programming, budgeting, and execution processes. He has authored and coauthored various publications including “Layered Defense: Modeling Terrorist Transfer Threat Networks and Optimizing Network Risk Reduction,” in IEEE Network Magazine; “How to Quantify Deterrence and Reduce Critical Infrastructure Risk,” in Homeland Security Affairs Journal; “Options and Challenges of a Resilience-Based, Network-Focused Port Security Grant Program,” in the Journal of Homeland Security and Emergency Management; “Measuring the Deterrence Value of Securing Maritime Supply Chains against WMD Transfer and Measuring Subsequent Risk Reduction,” in Homeland Security Affairs Journal; and most recently, “More Options for Quantifying Deterrence and Reducing Critical Infrastructure Risk: Cognitive Biases,” in Homeland Security Affairs Journal. Taquechel has taught college courses on critical infrastructure protection and is a FEMA Master Exercise Practitioner. He earned a master’s degree in Security Studies from the Naval Postgraduate School, earned his undergraduate degree at the U.S. Coast Guard Academy, and is currently an MPA candidate at Old Dominion University. Taquechel (corresponding author) may be contacted at etaqu001@odu.edu.
Ted G. Lewis is an author, speaker, and consultant with expertise in applied complexity theory, homeland security, infrastructure systems, and early-stage startup strategies. He has served in government, industry, and academe over a long career, including as Executive Director and Professor of Computer Science, Center for Homeland Defense and Security, Naval Postgraduate School; Senior Vice President of Eastman Kodak; President and CEO of DaimlerChrysler Research and Technology, North America, Inc.; and Professor of Computer Science at Oregon State University, Corvallis, OR. In addition, he has served as Editor-in-Chief of IEEE Computer Magazine and IEEE Software Magazine and as a member of the IEEE Computer Society Board of Governors, and he is currently an advisory board member of ACM Ubiquity and the Cosmos+Taxis Journal (The Sociology of Hayek). He has published more than 35 books, most recently including Book of Extremes: The Complexity of Everyday Things, Bak’s Sand Pile: Strategies for a Catastrophic World, Network Science: Theory and Applications, and Critical Infrastructure Protection in Homeland Security: Defending a Networked Nation. Lewis has authored or co-authored numerous scholarly articles in cross-disciplinary journals such as Cognitive Systems Research, Homeland Security Affairs Journal, Journal of Risk Finance, Journal of Information Warfare, and IEEE Parallel & Distributed Technology. Lewis resides with his wife in Monterey, California.
Acknowledgements
In addition to all the anonymous (and occasionally non-anonymous) referees who helped improve the quality of the work previously published, the authors wish to thank the Center for Homeland Defense and Security, in particular the University-Agency Partnership Initiative, for the invitation to present a summary of this work at the 10th Annual Homeland Defense/Security Education Summit in March 2017.
Disclaimer
The original opinions and recommendations in this work are those of the authors and are not intended to reflect the positions or policies of any government agency.
Notes
1 U. S. Department of Homeland Security, DHS Risk Lexicon (2010), https://www.dhs.gov/xlibrary/assets/dhs-risk-lexicon-2010.pdf, Web accessed February 18, 2017.
2 Ibid.
3 Ibid.
4 Louis A. Cox, “Some Limitations of ‘Risk=Threat x Vulnerability x Consequence’ for Risk Analysis of Terrorist Attacks,” Risk Analysis 28(2008): 1749-1761.
5 U. S. Department of Homeland Security, DHS Risk Lexicon.
6 Zhengyu Yin et al., “Stackelberg vs. Nash in Security Games: Interchangeability, Equivalence, and Uniqueness”, Proceedings of the Ninth International Conference on Autonomous Agents and Multiagent Systems, (2010), http://teamcore.usc.edu/papers/2010/AAMAS10-OBS.pdf, Web accessed February 18, 2017.
7 U. S. Department of Homeland Security, DHS Risk Lexicon.
8 Ibid.
9 Ibid.
10 Ted G. Lewis, Critical Infrastructure Protection in Homeland Security: Defending a Networked Nation (Hoboken, NJ: Wiley Interscience, 2006).
11 Ted G. Lewis, Network Science: Theory and Applications (Hoboken, NJ: Wiley Interscience, 2009).
12 Ted G. Lewis, Bak’s Sand Pile: Strategies for a Catastrophic World (Williams, CA: Agile Press, 2011).
13 U. S. Department of Homeland Security, DHS Risk Lexicon.
14 Eric D. Vugrin et al., “A Framework for Assessing the Resilience of Infrastructure and Economic Systems,” in Sustainable and Resilient Critical Infrastructure Systems: Simulation, Modeling, and Intelligent Engineering, eds. Kasthurirangan Gopalakrishnan and Srinivas Peeta (New York: Springer, 2010), 77–116.
16 Eric F. Taquechel and Ted G. Lewis, “How to Quantify Deterrence and Reduce Critical Infrastructure Risk,” Homeland Security Affairs 8(August 2012), https://www.hsaj.org/articles/226, Web accessed February 18, 2017.
17 Ibid.
18 Eric F. Taquechel, “Validation of Rational Deterrence Theory: Analysis of U.S. Government and Adversary Risk Propensity and Relative Emphasis on Gain or Loss,” Master’s Thesis, Center for Homeland Defense and Security (2010), www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA519012, Web accessed February 18, 2017.
19 Waleed I. Al Mannai and Ted. G. Lewis, “A General Defender-Attacker Risk Model for Networks,” Journal of Risk Finance 9 (2008): 244-261.
20 U. S. Department of Homeland Security, DHS Risk Lexicon.
21 Ibid.
22 Ibid.
23 Ibid.
24 Ibid.
25 Amos Tversky and Daniel Kahneman, “The Framing of Decisions and the Psychology of Choice,” Science 211(1981): 453-458.
26 Daniel Kahneman, Thinking Fast and Slow, (New York: Farrar, Straus, and Giroux, 2011).
27 Eric F. Taquechel and Ted G. Lewis, “More Options for Quantifying Deterrence and Reducing Critical Infrastructure Risk: Cognitive Biases,” Homeland Security Affairs 12(September 2016), https://www.hsaj.org/articles/12007, Web accessed February 18, 2017.
28 Daniel Moran, “Strategic insight: Deterrence and Preemption,” Strategic Insights 1(2008), https://www.hsdl.org/?view&did=1428, Web accessed February 18, 2017.
29 Kevin Chilton and Greg Weaver, “Waging Deterrence in the Twenty-First Century,” Strategic Studies Quarterly (2009): 31–42. http://www.au.af.mil/au/ssq/2009/Spring/chilton.pdf, Web accessed February 18, 2017.
30 Eric F. Taquechel and Ted G. Lewis, “How to Quantify Deterrence and Reduce Critical Infrastructure Risk.”
31 U. S. Department of Homeland Security, DHS Risk Lexicon.
32 Ted G. Lewis, Critical Infrastructure Protection in Homeland Security: Defending a Networked Nation.
33 U. S. Department of Homeland Security, DHS Risk Lexicon.
34 Eric F. Taquechel, “Layered Defense: Modeling Terrorist Transfer Threat Networks and Optimizing Network Risk Reduction,” IEEE Network 24(2010): 30-35.
35 Ted G. Lewis, Network Science: Theory and Applications.
36 Ibid.
37 Eric F. Taquechel, “Layered Defense: Modeling Terrorist Transfer Threat Networks and Optimizing Network Risk Reduction.”
38 Ted G. Lewis, Bak’s Sand Pile: Strategies for a Catastrophic World.
39 Ibid.
40 Eric F. Taquechel, Ian Hollan, and Ted G. Lewis, “Measuring the Deterrence Value of Securing Maritime Supply Chains against WMD Transfer and Measuring Subsequent WMD Risk Reduction,” Homeland Security Affairs 11(February 2015), https://www.hsaj.org/articles/1304, Web accessed February 18, 2017.
41 Ibid.
42 The White House, Maritime Commerce Security Plan for the National Strategy for Maritime Security, (2005), https://www.dhs.gov/xlibrary/assets/HSPD_MCSPlan.pdf, Web accessed February 18, 2017.
43 U. S. Department of Homeland Security, DHS Risk Lexicon.
44 Ibid.
45 Eric F. Taquechel, “Options and Challenges of a Resilience-Based, Network-Focused Port Security Grant Program,” Journal of Homeland Security and Emergency Management 10(2013): 521-554.
46 U. S. Department of Homeland Security, DHS Risk Lexicon.
47 Ted G. Lewis, Bak’s Sand Pile: Strategies for a Catastrophic World.
48 Ibid.
49 Ted G. Lewis, Network Science: Theory and Applications.
50 Ted G. Lewis, Bak’s Sand Pile: Strategies for a Catastrophic World.
51 Ibid.
52 Ibid.
53 Ibid.
54 Patricia Grossi and Harold Kunreuther, Catastrophe Modeling: A New Approach to Managing Risk (New York: Springer, 2005).
55 Ted G. Lewis, Tom Mackin, and Rudy Darken, “Critical Infrastructure as Complex Emergent Systems,” International Journal of Cyber Warfare and Terrorism 1(2011): 1-12.
56 Ibid.
57 Ibid.
58 Ted G. Lewis, Bak’s Sand Pile: Strategies for a Catastrophic World.
59 Ibid.
60 Nassim Nicholas Taleb, Antifragile: Things That Gain from Disorder (New York: Random House, 2012).
61 Ibid.
62 Ibid.
63 Ibid.
64 David Riedman, “Questioning the Criticality of Critical Infrastructure: A Case Study Analysis,” Homeland Security Affairs 12(May 2016), https://www.hsdl.org/?view&did=793055, Web accessed February 18, 2017.
65 Nassim Nicholas Taleb, Antifragile: Things That Gain from Disorder.
66 Eric F. Taquechel, “Options and Challenges of a Resilience-Based, Network-Focused Port Security Grant Program”.
67 Nassim Nicholas Taleb, Antifragile: Things That Gain from Disorder.
68 Ibid.
69 Ibid.
70 Ibid.
71 Ibid.
72 Ibid.
73 U. S. Department of Homeland Security, DHS Risk Lexicon.
74 Ibid.
75 U. S. Department of Homeland Security, The 2014 Quadrennial Homeland Security Review (2014), https://www.dhs.gov/sites/default/files/publications/2014-qhsr-final-508.pdf, Web accessed February 18, 2017.
Copyright © 2017 by the author(s). Homeland Security Affairs is an academic journal available free of charge to individuals and institutions. Because the purpose of this publication is the widest possible dissemination of knowledge, copies of this journal and the articles contained herein may be printed or downloaded and redistributed for personal, research or educational purposes free of charge and without permission. Any commercial use of Homeland Security Affairs or the articles published herein is expressly prohibited without the written consent of the copyright holder. The copyright of all articles published in Homeland Security Affairs rests with the author(s) of the article. Homeland Security Affairs is the online journal of the Naval Postgraduate School Center for Homeland Defense and Security (CHDS).