– Executive Summary –

This thesis examines investigative decision making, cognitive biases, talent sharing, and the relationship between the random nature of lone actor violence and a set of predefined decision-making protocols. Targeted violence presents a paradox for the homeland security enterprise. These single-attacker events, whether assassinations, school shootings, or lone wolf terrorist attacks, are difficult to detect and interdict. Despite their ephemeral nature, many of the most notorious incidents of targeted violence share a common characteristic: the attackers encountered, or were closely observed by, law enforcement before they attacked.

This thesis is predicated on the following assumptions: 1) in many cases of lone actor violence, the most confounding problem is not detection of the actor but the decision of what to do after the suspect is detected; 2) lone actor violence is a random event that does not follow a predictable pattern over time and space; 3) despite the frequency of pre-attack encounters between law enforcement and known lone actors, their actions do not meet the threshold for arrest before they attack; and 4) given the other assumptions, when the decision to continue an investigation is limited to a single organization, agency, or task force, the likelihood of a successful outcome is as random as the attacks themselves.

To demonstrate the random nature of lone actor violence, this research uses two time-series statistical techniques. The results of the runs test and the time series analysis indicated that the emergence of these events was random over time and space. The analysis shows that these events are driven by a wide array of motives and committed by a diverse group of perpetrators against a dispersed set of targets. This statistical treatment suggests that some detection tools may not be effective and buttresses the case that these events are random and independent.
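The runs test mentioned above can be sketched as follows. This is a minimal Wald-Wolfowitz runs test applied to hypothetical inter-attack intervals; the interval data, the median dichotomization, and the 5 percent significance threshold are illustrative assumptions, not the thesis's actual data or procedure:

```python
import math

def runs_test(values):
    """Wald-Wolfowitz runs test for randomness: dichotomize the
    series at its median, count runs of consecutive above/below
    values, and return the z-statistic comparing the observed run
    count with the count expected under randomness."""
    median = sorted(values)[len(values) // 2]
    signs = [v >= median for v in values if v != median]
    n1 = sum(signs)           # observations above the median
    n2 = len(signs) - n1      # observations below the median
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    expected = 2 * n1 * n2 / (n1 + n2) + 1
    variance = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / (
        (n1 + n2) ** 2 * (n1 + n2 - 1))
    return (runs - expected) / math.sqrt(variance)

# Hypothetical inter-attack intervals in days; |z| < 1.96 fails to
# reject randomness at the 5% level.
intervals = [30, 12, 45, 60, 22, 90, 15, 5, 40, 70, 7, 33]
z = runs_test(intervals)
```

A z-statistic near zero indicates the observed number of runs is close to what randomness predicts; large positive or negative values indicate alternation or clustering, respectively.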

To evaluate different decision-making protocols outside of the narratives of actual attacks, the researcher ran four separate simulations using the Monte Carlo technique. These simulations illustrate that with the dedication of additional investigative resources comes a concomitant effect of diminishing returns, opportunity cost, and exposure to liability. The simulations also suggest that regardless of the single investigative agency's decision-making process, the outcome relies on the randomness of the event.
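The diminishing-returns dynamic described above can be illustrated with a small Monte Carlo sketch. Every parameter here (attack rate, base detection probability, per-unit gain) is a hypothetical assumption for illustration only, not drawn from the thesis's four simulations:

```python
import random

def simulate(trials, extra_units=0, base_detect=0.2, gain_per_unit=0.1,
             attack_rate=0.05, seed=1):
    """Monte Carlo sketch: in each trial a suspect attacks with a small
    fixed probability (a random, independent event); the agency
    interdicts with a probability that rises with each added
    investigative unit, but by a shrinking increment each time."""
    rng = random.Random(seed)
    p_detect = 1 - (1 - base_detect) * (1 - gain_per_unit) ** extra_units
    interdicted = missed = 0
    for _ in range(trials):
        if rng.random() < attack_rate:        # an attack would occur
            if rng.random() < p_detect:
                interdicted += 1
            else:
                missed += 1
    return p_detect, interdicted, missed

# Each added unit buys a smaller increase in detection probability
# (diminishing returns), while missed attacks never reach zero.
results = {u: simulate(50_000, extra_units=u) for u in (0, 2, 4, 8)}
```

Because the attack itself remains a random draw, even a heavily resourced agency in this sketch still misses some events, which mirrors the summary's conclusion that the outcome ultimately depends on the randomness of the event.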

These findings suggest that randomness itself may contribute to the decisions investigators make. If an investigator seeks literal "hard" evidence that an attack will occur and does not find it, then there is little wisdom in investigating further if the ultimate goal is arrest, for which due process obligates a high evidentiary standard. If the decision is framed by the outcomes of earlier investigations, then the lack of evidence can provide an immediate reason to end the current investigation. The organizational imperatives of investigative agencies to produce arrests and the consequences of false positives may amplify one another, reducing the impetus to commit resources to a less compelling case. The outcomes of these simulations suggest the need for a more precise decision-making model than those used in the Monte Carlo simulations.

The statistical analysis and Monte Carlo simulations of lone actor violence may indicate that these attacks are unpredictable, but they may still be detectable. The lack of a defined archetypal "profile" of an attacker renders the search for a definitive predictive model futile; however, identifying behaviors that may indicate a propensity toward violence makes detection of an attacker possible.

To demonstrate a prototype for a new method of threat analysis, a "superforecasting" team of multiorganizational and multidisciplinary analysts participated in an experimental survey. Nine participants reviewed five threat scenarios and then assigned each a score based on factors such as potential for violence and immediacy of the threat.
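One common way to combine such a panel's scores, sketched below, is a trimmed mean that damps outlier judgments before the group score is read. The thesis does not specify its aggregation rule, and the ratings here are hypothetical:

```python
import statistics

def pool_scores(scores, trim=1):
    """Pool a panel's ratings by dropping the `trim` highest and
    lowest scores and averaging the rest, so a single extreme
    judgment cannot dominate the group score."""
    ordered = sorted(scores)
    kept = ordered[trim:len(ordered) - trim] if trim else ordered
    return statistics.mean(kept)

# Nine hypothetical participants rating one scenario's potential
# for violence on a 1-10 scale; the low outlier (3) and the high
# outlier (10) are trimmed before averaging.
ratings = [7, 8, 6, 9, 7, 3, 8, 7, 10]
pooled = pool_scores(ratings)
```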

Analysis by the experiment participants was highly predictive for three of the five scenarios and better than chance for a fourth. The experiment was too small to claim that a superforecasting method is an improvement over single decision makers or investigative squads; however, the success of the analysis from this prototype was promising enough to consider a similar experiment on a larger scale.

The survey also measured participants' risk tolerances under uncertainty based on a prospect theory model. The participants answered six questions designed to detect risk-averse or risk-seeking behavior and the "framing effect." Their responses were strongly consistent with prospect theory in some ways and less so in others; however, the pattern that emerged generally favors certain prospects over uncertain ones, despite the greater expected values of the alternative choices. A parallel assessment of decision making through scenarios and hypothetical "prospects" presents a possibility for further research to determine the effects of risk tolerance on case study threat assessments.
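The certainty effect at the heart of that finding can be made concrete with a short expected-value calculation. The question below is a hypothetical example in the style of the survey's six questions, not one of the actual survey items:

```python
def expected_value(prospect):
    """Expected value of a prospect given as (outcome, probability) pairs."""
    return sum(outcome * p for outcome, p in prospect)

# A certain gain versus a gamble with a higher expected value.
# Prospect theory predicts that many respondents still prefer
# the certain option, even though it "costs" expected value.
certain = [(450, 1.0)]
gamble = [(1000, 0.5), (0, 0.5)]

ev_certain = expected_value(certain)   # 450.0
ev_gamble = expected_value(gamble)     # 500.0
```

A respondent who picks the certain 450 over the gamble forgoes 50 in expected value, which is the kind of certainty-favoring pattern the summary reports.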

The model simulations demonstrated that there will always be attacks that are true surprises, and that their proportion may be large enough to support the conclusion that many or all lone-actor attacks are undetectable. The prototype superforecasting experiment, together with the prospect theory results, may help to explain the relative strength of certain threat assessments: the results distinguish what may be detectable from what is statistically unpredictable through the use of a collaborative and multidisciplinary method of analysis.