Operator Driven Policy: Deriving Action From Data Using The Quadrant Enabled Delphi (QED) Method

By Lilian Alessa, Sean Moon, David Griffith & Andrew Kliskey

 

Abstract

To close the gap in operator-driven policy for the homeland security enterprise, we argue for a bottom-up policy process that acknowledges operator knowledge and opinions. We propose a practical approach to enable policy-makers to incorporate operator knowledge and experience, or operator driven policy (ODP), into policy through the Quadrant Enabled Delphi (QED) approach. We set out the theoretical requirements for QED, based on cognitive science. Using the EARTh-X QED workshop as a case-study, we demonstrate the application of QED focused on emerging Arctic security threats, and highlight key lessons for applying QED. Finally, we recommend an appropriate operator-driven policy-making process that incorporates the QED approach as a bottom-up policy process.

Suggested Citation

Alessa, Lilian, Sean Moon, David Griffith & Andrew Kliskey. “Operator Driven Policy: Deriving Action From Data Using The Quadrant Enabled Delphi (QED) Method.” Homeland Security Affairs 14, Article 6 (September 2018). https://www.hsaj.org/articles/14586

 


 

Introduction

Operators are the tip of the spear for the security enterprise; they are the boots on the ground and the hands that bear the brunt of ensuring a mission succeeds. “Operators” in the context of this paper are those personnel working on field-level operations (for example, Coast Guard officers and sailors assigned to sectors or stations, and Customs and Border Protection Officers assigned to Ports of Entry). It is they who willingly go into harm’s way to protect, serve, and safeguard the American people. Despite this, no consistent, structured methodology exists for eliciting operator input at all levels of strategic development.1 Policy that does not include consideration of operator needs is policy that can inadvertently increase the difficulty of mission performance and introduce vulnerabilities into the homeland security enterprise. We seek to change this pattern through the development of Operator Driven Policy (ODP). Toward this end, we offer the Quadrant Enabled Delphi (QED) method as a mechanism for systematically working with operators to better understand their operational needs and construct policies that best enable them to engage in continuous rather than rigid planning. The article first introduces the policy process and the role of operator perspectives; then it considers the strengths and weaknesses of the classic Delphi method as an approach for incorporating operator perspectives; finally, it sets out theoretical underpinnings for an operator driven approach, and details the QED method as an alternative to the standard Delphi approach. We use a case-study to demonstrate the QED method and approach, and to highlight lessons learned in applying QED. Finally, we propose an ODP process for using QED to incorporate operator perspectives within the policy process.

Transforming Policy to Serve Operators

Policy-makers generally define policy as a course of action adopted and pursued by a government, political party, or other body.2 From the standpoint of meeting an organization’s mission objectives in support of national and homeland security, however, a more useful interpretation is that “policy” is the intersection of politics and performance. Put another way, it is the point at which “will” or “guidance” as expressed by law, regulation, or authoritative direction, is interpreted in such a way as to drive strategies, plans, and tactics toward realizing that will or guidance in support of sustained and successful outcomes. One can view policy as the joining of requirements with the abilities, capabilities, and constraints of operations, but frequently policies are viewed as originating from the leadership of an agency or government office.3 ODP is proposed as a less expensive and more effective form of policy-making than current approaches. ODP is based on acquiring and synthesizing information on the needs and concerns of operators (our clients) to frame strategic planning, implementation, and resourcing while also meeting legal and enterprise-wide requirements.4 In short, ODP is a hybridized bottom-up approach to policy-making in contrast to conventional top-down approaches.5 By ensuring that policy and strategic development are fully informed by both political (e.g., societal drivers as expressed through legislative or executive action) and operator levels (e.g., practical constraints on enforcement actions or implementation of strategy), the homeland security enterprise will be more effective in executing its mission while achieving cost-savings. To serve ODP, we developed the QED method as a practical approach to enable policy-makers to solicit and incorporate such information into Operator- and Headquarters-driven policy development.
The premise here is that good policy should incorporate the perspectives of operators in concert with those of experienced policy-makers, rather than valuing one perspective over the other. This premise was demonstrated when President Trump specifically tasked executive departments and agencies with conducting a bottom-up review of all immigration policies, asking law enforcement professionals (i.e., operators) to identify reforms to protect national interests. In his October 2017 letter to House and Senate leaders, he noted that those professionals identified dangerous loopholes, outdated laws, and easily exploited vulnerabilities.6

Operator-Driven Data and the Need for Systematic Policy Development

Operator needs span a continuum from procuring equipment to acquiring, sharing, and acting on information through communication and coordination. Needs in the field can include everything along this continuum, often in a rapidly evolving incident setting where there is little stability. While rear echelon personnel are certainly important to the enterprise, the critical distinction between them and operators is the direct and immediate service of a mission imperative (although rear echelon personnel can also be operators, provided they have experience at the field level). For example, operations centers need to be in constant communication with higher-level unity of command centers to receive regional alerts and requests to respond. Specificity and precision of situational awareness are important to ensure that resources, assets, and responses are utilized in the most efficient manner. This seemingly straightforward process belies the complex (i.e., emergent) and complicated nature of field operations and the policies that enable or constrain them. To address this complexity through a dynamic process of strategic planning, resourcing, and execution, we need to examine the current process by which policy is formed.

The prevalent model of policy development is a top-down process typically driven by senior management in Washington, DC.7 Senior leadership, often politically appointed and lacking in field operations experience, interprets political signals or statutory imperatives to drive planning, programming, budgeting, and execution decisions.8 Leadership reinforces this model in organizational structures where they select primary policy-makers from external sources such as academia or think tanks, or where there is no requirement of relevant field experience.9 To underscore this point, consider that as of June 2018, the “Strategy and Analysis” subunit within DHS Policy has no permanent personnel with operational field experience, being entirely staffed by individuals with academic or headquarters-level civil service backgrounds. This results in a Dunning-Kruger effect in which policy-makers may not understand field conditions and are potentially unaware that they lack such understanding.10 While there are some benefits to creating policy at a high organizational level (e.g., headquarters units tend to have broader viewsheds than field units, are closer to political and economic realities, and have access to the resourcing necessary to implement policy), headquarters units frequently omit operators from the decision-making process, relying instead on working groups of administrators who are readily available in Washington, DC, rather than allocating often limited resources to bring field personnel to the table. A partial exception occurs when subordinate Headquarters units are staffed predominantly by field personnel, as in the U.S. Coast Guard, where Headquarters tours are part of an overall rotation and assignment process that focuses predominantly on field operations. As a result, when resourcing decisions are made, views from the ground are not always included.

Consider the following real-world example of this type of top-down decision-making process and its consequences. In 2006, Congress created a requirement that DHS implement a program in at least three foreign ports to test the feasibility of screening 100% of maritime cargo containers destined for U.S. ports.11 Before the pilot program was fully implemented or the feasibility of the screening approach could be assessed, in 2007 Congress passed and the President signed another law establishing a July 1, 2012 deadline requiring that 100% of all U.S.-bound cargo from foreign ports be scanned.12 The statute authorized the Secretary of Homeland Security to extend the initial 2012 implementation deadline for two additional years every two years, repeatedly, provided that DHS met certain conditions. The implicit policy of the Department, that the conditions preventing implementation exist and remain true, has been consistently communicated to Congress on a biennial cycle. However, the explicit policy of the Department has been that DHS would make every effort to comply with the law. Had field operators been consulted by those preparing the SAFE Port Act or the 9/11 Act, or had they been allowed to provide input during the biennial review cycle, certain fundamental policy failures would have been readily identified. It would have become clear that the cargo screening requirement would produce only a slight increase in security while imposing impossible-to-meet operational burdens, because of the obvious issues with implementing an unenforceable policy requiring significant expenditures by trading partners in their own port facilities. Additionally, the policy required the installation by foreign partners of equipment originally manufactured only in the United States and slowed commerce with every cargo container scanned.
The policy also failed to create the institutional capacity to review the 20 million or more images that successful scanning would produce each year. Instead, due to the divergence between top-down policy and in-the-field reality, the implementation deadline for the law has been extended three times since it came into effect, but the law continues as written despite the recognition by Congress that the requirements are infeasible and would come at an unacceptably high cost both monetarily and in terms of displacement of other effort.13

While there are no guarantees that policy-makers will heed operator insights and inputs, in cases such as the 100% scanning deadline, doing so could have prevented an unenforceable law from being enacted. Addressing why the law and its resultant policies persist is beyond the scope of this article, though it is frequently discussed in trade journals dedicated to global supply chains. The critical factor is that, absent input from people who actually understood field operations and could have immediately identified the challenges, a seemingly reasonable but impractical mandate was issued.

Soliciting Operator Inputs: Re-Thinking the Delphi Method

To solicit systematically the input of operators, the authors examined various methods for acquiring information from experts in a field, starting with the Delphi Method. We found that the Delphi Method, originally developed by the RAND Corporation in the early 1950s to solicit and understand the views of experts related to national defense, is a well-established methodology for gathering and creating consensus among subject matter experts.14 The term originates from Greek mythology – Delphi was the site of the Delphic oracle, the most important oracle in the classical Greek world. The Delphi method can be characterized as a method for structuring information derived from a group of experts to derive a consensus on the best available knowledge with respect to a complex problem.15 In general, the approach utilizes standard survey instruments and the subsequent synthesis of the results to acquire expert opinions.16 The historic strength of the Delphi method was that it provided a structured approach for eliciting and reporting expert knowledge, and in some cases systematically achieving consensus among subject matter experts. It is still used today by the RAND Corporation for its futures-scenario planning.

However, the Delphi method has several shortcomings. Due to the overuse of surveys, individuals are less likely to respond unless they have extremely strong opinions and are more outspoken. There has also been a decline in response rates during iterative surveys, particularly those involving three or four rounds.17 Additionally, use of pre-workshop surveys as a core component of the Delphi approach is not optimal because it creates a strong group bias (among the survey preparation group) that is not necessarily representative of the collective experience or knowledge of the experts. This predisposes subsequent Delphi-based workshop rounds to this bias, rendering the outputs less accurate.18 Finally, the traditional RAND Delphi does not take into account the plurality of cognitive skill-sets of participants involved in the process and often introduces significant bias into the outputs in other ways. Introduced biases can result from: pre-determined scenarios and/or surveys sent to workshop participants that introduce a first-level bias; facilitators who are not trained in cognitive elicitation methods and who may not be able to work with a plurality of individual communication and interpretation styles; lack of standardization in methodology, both across and within the method and its applications, that may produce qualitative data that cannot be compared; and preference for participants who are available over those who are truly expert attendees (see the “Participant Background” section, below). These, among other factors, can yield results that may be inaccurate or misleading.19 To address these shortcomings, we developed the Quadrant Enabled Delphi (QED), which we propose as the means to solicit expert input that serves the development of precise and targeted ODP.20

Theoretical Basis for an Operator-Driven Policy Approach

Two main bodies of knowledge inform the structure of Operator-Driven Policy developed through the Quadrant Enabled Delphi: theories of human organization, specifically agent types, and theories of cognition.

Human Organization: Agent Types

Each of us behaves as an autonomous agent (e.g., an individual) and possesses cognitive elements, which are subject to social influence and infrastructural (e.g., economic) constraints.21 The concept of “agents” and “roles” is not new and occurs in diverse fields of study ranging from artificial intelligence22, to sociology23, to psychology.24 A cognitive agent (i.e., a person) has three key features: cognition (perception), reason (preferences), and purpose (intentions).25 These features encompass the general concept that agents are intelligent actors who respond to social and institutional stimuli in multiple ways including hysteresis (e.g., not responding to a stimulus until it carries risk), learning (e.g., becoming more efficient), and values (e.g., the Prisoners’ Dilemma).26 Roles are carried out by agents based on both their formal and informal situation within a network or group.27 The dynamics of human responses to change are often studied at the scale of institutions and/or populations.28 Agents can be categorized into three broad categories: “initiators” (or alpha agents); “supporters” (or beta agents); and “opportunists” (or gamma agents).29 Agent types can be likened to the theoretical and empirical roles of leaders and followers in a group context concerned with achievement of a common goal.30

In the context of expert workshops, alphas can be outspoken and persuasive, are often articulate, and are generally concerned with moving toward a unity of effort regardless of personal gain or opinions. Betas represent most attendees, who are willing to listen to a range of inputs and form conclusions based on discussion. Gammas are individuals who, like alphas, are often outspoken and persuasive but are focused on advancing a specific agenda regardless of whether it advances unity of effort. Each class of participant requires different interactions with facilitators to ensure their voices are heard and their expertise is equally represented in products. For our purposes, this must be accomplished as part of an operator-driven approach.

Decision Making and Cognition

An extensive body of knowledge provides relatively deep, but highly variable, insights and data regarding the mechanisms of decision-making and cognition. Despite the general thought that “better data equal better decisions,” there is overwhelming evidence that emotions strongly influence cognitive decision processes.31 Quantitative evidence further supports the idea that perception, rather than objective data, often drives decisions.32 As a consequence, we propose incorporating several specific processes into consideration for an operator-driven approach: cognitive traps/biases, cognitive walkthroughs to access both explicit and implicit knowledge possessed by workshop participants, and distinguishing types of knowing and knowledge.

Cognitive Traps (CTs)

For many settings across government, especially at the federal level, decision-making is handled by a group of individuals who come together as representatives of different agencies and services. Such interagency decision-making processes result in a series of “cognitive traps” (CTs, also referred to as “Cognitive Biases”): that is, an inability to mentally explore new strategies or actions beyond those that are well known and familiar. Such cognitive traps must be addressed to enable operator-relevant and effective decision-making.33 CTs help us to understand the resultant processes that can enhance or impede decision making during an operator-driven process, particularly with respect to knowledge of agent types (individuals) and social niches (in professional teams). For groups of participants that have legacy issues (see Addressing Elephants below), repeated actions may be an impediment to listening to alternative viewpoints due to conditioned actions and responses.34

The dynamics resulting from the interactions of these different kinds of cognitive biases in a group produce coordinating cues and signaling behaviors, which are often relayed subconsciously as well as overtly.35 Visual and verbal cues signal to others the status of a given agent in the group.36 These fundamental qualities of individual and group behaviors must be considered in the context of the explicit and implicit knowledge of the participants.

Cognitive Walkthroughs

The use of cognitive walkthroughs is a critical component of an operator-driven approach and is based on observations that daily interactions with computers and other digital technologies are changing the way we perceive, analyze information, and make decisions.37 The use of cognitive walkthroughs ensures that these emergent cognitive patterns are leveraged, particularly among younger participants. For example, Norman’s Human Action Cycle describes the steps a person takes when interacting with a computer system,38 starting with the formulation of the user’s goals through to accomplishment of those goals.

Knowing

Explicit knowledge is generally defined as “knowledge that the knower can make explicit by means of a verbal statement that can be elicited from them by suitable enquiry or prompting.”39 Implicit knowledge is generally defined as any other kind of knowledge, including those often characterized by phrases such as ‘gut feelings’ and ‘I just know,’ or other complex, subjective, and subconsciously driven means of coming to a decision fork and/or conclusion.40 In this construct, implicit knowledge corresponds roughly to what Polanyi called “tacit knowing”: we can know more than we can tell, but such knowledge is difficult to describe linearly and relay.41 Doctors and decision-makers in the military and law enforcement rely on tacit knowledge in decision-making contexts.42 In the military, especially, tacit knowledge is often identified as intuition and cultivated to support rapid decision making in complex and urgent situations.43 The original Delphi Method primarily addressed explicit knowledge, allowing “experts” to provide question-answer information that was compiled into a report. Operator-driven approaches are designed to acquire and enhance both explicit and implicit knowledge, as the latter is often possessed and valued by the most experienced, effective, and expert operators. We propose augmenting these types of knowing with something called “local and place-based knowledge” (LPBK), which can provide location-specific context for decision-making: a critical component of operator-driven policy which can ensure that decisions are relevant to the geographic and temporal scales on which field agents conduct operations. Since collective knowledge constructs institutions, policy development strategies that do not capture the diversity of knowledge available (explicit, implicit, and LPBK) necessarily fail to address real-world scenarios while also generating outcomes based on incomplete or flawed data inputs.44

The QED Methodology

The QED methodology was designed by the University of Idaho’s Center for Resilient Communities to serve ODP and is built on the cognitive science and human organization principles described above. QED replaces the traditional Delphi pre-workshop survey with recursive, in-person social processes. Facilitators bring experts together in person, elicit subject-area knowledge, and build consensus, with iteration occurring through a series of both structured and semi-structured sessions in one- to two-day workshops.

In QED, facilitators initially present a series of carefully crafted challenge questions to a diverse group (the Delphi group) to initiate discussion. These questions are specific to the topic at hand but crafted so as not to introduce bias. Facilitators then oversee responses of the panel of experts, helping them express their opinions concisely and accurately while recording the points raised. Facilitators select members in consultation with the organizing or hosting agency because they are subject matter experts in fields related to the challenge questions; they represent a broad range of relevant organizations and agencies; and they provide a diverse mix of operator experiences. The success of this process depends on the expertise, professionalism, and communication skills of both participants and facilitators. We describe a case study of the implementation of QED below in order to explain the methodology, and we then describe the core elements of QED.

Case Study – Emerging Arctic Threats

The workshop on Emerging Arctic Threats (EARTh-X) convened in Washington, D.C. on February 1 and 2, 2017, and was attended by 85 security and intelligence professionals from over 35 U.S. and Canadian federal and state or provincial agencies. The workshop was held under the Chatham House Rule: individual participants were not named, although the names of agencies and institutions represented were recorded, to ensure anonymity in workshop reporting and to foster open dialog during the meeting. The workshop goal was to establish an understanding of the emerging security landscape in the Canadian-United States (CANUS) Arctic by eliciting the most severe emerging security threats, developing a consensus on the highest priority threats, identifying capability gaps in tackling these threats, and generating consensus recommendations in response to the identified threats and gaps.45 The workshop employed the QED methodology using a sequential and recursive format, a quadrant layout for facilitation, and the 80/20 rule for prioritization of data.

1 – Sequential and Iterative Phases

The QED structure for the EARTh-X workshop entailed four iterative phases of facilitated brainstorming, discussion, rating, and consensus-building.

Phase 1 involved horizon scanning, during which major security threats in the Arctic and the severity of those threats were identified through facilitated brainstorming. The workshop domain experts collectively enumerated 198 threats in the Arctic security sector.

Phase 2 involved prioritization of the major security threats for the Arctic through the Red dot/Green dot exercise (see below). The 80/20 rule dot method resulted in 14 threats ranked as highest severity, ranging from “Dark Target Tracking in Theater” to “Effects of Thawing Permafrost on Critical Infrastructure.” After narrowing the threat list to the most serious threats for the maritime, air, cyber, land, and all security domains, the threats were ranked in terms of overall priority, timescale, and spatial scale.

Phase 3 involved framing the threat continuum by identifying and prioritizing capability gaps in responding to the threats through facilitated rating. Following threat identification, participants identified 47 capability gaps hindering effective security operations in the Arctic domain. In a similar manner to Phase 2, participants ranked the capability gaps. The highest-ranked capability gaps ranged from “[L]ack of Persistent Sensors / Surveillance Frameworks” to “[L]ack of Situational Awareness.” This phase also enabled extraction of serious barriers to effective security operations in the Arctic, for example, the lack of critical information and information-sharing protocols.

Phase 4 involved identifying and prioritizing solutions through facilitated brainstorming and rating of solutions. The final outcome of the QED process at EARTh-X, based on the enumerated threats and the capability gaps, was a set of consensus recommendations on potential solutions to enhancing CANUS Arctic security. Participants achieved consensus through a facilitated group discussion of the group rankings of proposed solutions, that is, as a collective social process between facilitators and participants. For example, the group resolved to “revisit a comprehensive strategy for surveillance as a process” and to “develop tools and processes for better situational awareness for ground operations.”

2 – Quadrant Layout and Facilitation

The meeting venue with seated participants (i.e., operators) was divided into quadrants with a facilitator (or “quadrant manager”) for each quadrant (see Figure 1). The central facilitator (or “room manager”) coordinated the overall discussion and roamed among the different quadrants. Each quadrant facilitator engaged the participants in their quadrant through verbal, visual, and physical cues. Quadrant facilitators also liaised with the central facilitator to adjust information elicitation based on the agent types in their quadrant. During the two-day EARTh-X workshop, the central facilitator role was shared between two people, alternating between primary and secondary roles.

For the EARTh-X workshop, quadrants represented air, maritime, land, and cyber domains. The quadrant facilitator in each quadrant had operational expertise specific to that domain, which allowed them not only to serve as a resource to the participants in their quadrants but also to understand the comments recorded for their quadrants.

Figure 1. Quadrant layout for group facilitation in Quadrant Enabled Delphi (QED) method. Note that quadrants are defined by participant numbers, e.g. seats at the table, not quartering of the workspace

3 – The 80/20 Rule

As a means of leveraging the collective implicit and explicit knowledge of participants, QED uses the 80/20 rule to allow for rapid prioritization and sorting of group inputs. The 80/20 rule stems from the “Pareto Principle,” which has multiple interpretations and applications depending on the context used, but is simply phrased as “80% of the output results from 20% of the input” and has been applied to quality control in electrical circuits, business management, and human decision-making processes.46

During the EARTh-X workshop, reaching a consensus on ranking, for example, the vulnerabilities of critical infrastructure in the Arctic was accomplished through a series of iterative sessions including group generation of vulnerabilities and threats, group rating of those vulnerabilities and threats, and spatial mapping of vulnerabilities and threats. The ranking sessions involved participants placing color-coded adhesive dots on tabulated sheets generated during brainstorming to differentiate items and to assist with prioritizing them.

Key definitions used for the application of the 80/20 rule for ranking results in EARTh-X included:

  • Severity – defined as the impacts and consequences of an event occurring: Severe (Critical) = immediate loss of safety and/or life and/or critical infrastructure to multiple people; Significant (Acute) = predisposes the loss of safety and/or life and/or critical infrastructure within weeks; Moderate = may predispose the loss of safety and/or life and/or critical infrastructure within months.
  • Timescale – defined as the time period in which a threat was most likely to manifest: Horizon 1 (1 to 5 years); Horizon 2 (5 to 10 years); Horizon 3 (10 to 30 years).
  • Spatial scale – defined as the scale of impact of a threat or vulnerability: Local, National, and International.

The 80/20 rule modification guided the number of dots each participant received and the number of cycles of the dot exercise conducted for rating severity, timescale, and spatial scale for the generated vulnerabilities and threats (see Figure 2 for one visualization of results from ranking threats).
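The tally-and-cutoff step of the dot exercise can be sketched in code. The following Python sketch is illustrative only: the threat names, dot counts, and the exact cutoff rule are our assumptions, not values or procedures taken from the EARTh-X workshop.

```python
from collections import Counter

def pareto_cut(dot_votes, share=0.8):
    """Rank items by dot count, then keep the smallest top-ranked set
    whose dots account for at least `share` of all dots cast
    (an 80/20-style cutoff)."""
    tallies = Counter(dot_votes).most_common()  # [(item, dots), ...] descending
    total = sum(dots for _, dots in tallies)
    kept, accumulated = [], 0
    for item, dots in tallies:
        if accumulated >= share * total:
            break
        kept.append((item, dots))
        accumulated += dots
    return kept

# Hypothetical dot placements from one rating cycle (one list entry per dot):
votes = (["dark target tracking"] * 9 + ["thawing permafrost"] * 6
         + ["communications gaps"] * 3 + ["fuel logistics"] * 2)
print(pareto_cut(votes))
```

Lowering `share` tightens the cut: with `share=0.5` only the top two items above survive, which is one way a facilitator could tune how aggressively each cycle narrows the list.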

Figure 2. Cube framework for representing Emerging Arctic Threats from the EyesNorth QED Workshop held in 2017 (Alessa et al. 2017).

Accessible Outputs—Reporting Back and Actionable Data

After the first day of the EARTh-X workshop, the team conducted an analysis of the output. This generated a ranked and weighted score from the first 80/20 rule refinement, resulting in sets of 3-8 consensus “highest priority” threats and capability gaps. Those data were then discussed at the beginning of the second day and used as inputs for the next rounds of 80/20 rule exercises. Once the workshop had concluded, the leadership team produced a draft summary (within two weeks) and distributed it to the participants for comment and to verify and validate the results.47
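A ranked and weighted score of this kind can be sketched as follows. The threat names, dot counts, and criterion weights below are invented for illustration; the workshop's actual weighting scheme is not specified in the text.

```python
# Hypothetical per-threat dot counts from the severity, timescale, and
# spatial-scale rating cycles (all names and numbers invented):
ratings = {
    "dark target tracking": {"severity": 9, "timescale": 7, "spatial": 8},
    "thawing permafrost":   {"severity": 6, "timescale": 9, "spatial": 4},
    "communications gaps":  {"severity": 5, "timescale": 4, "spatial": 6},
}
weights = {"severity": 0.5, "timescale": 0.3, "spatial": 0.2}  # assumed weights

def weighted_rank(ratings, weights):
    """Combine per-criterion dot counts into one weighted score per threat,
    then return threats sorted from highest to lowest score."""
    scores = {
        threat: sum(weights[c] * dots for c, dots in counts.items())
        for threat, counts in ratings.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

for threat, score in weighted_rank(ratings, weights):
    print(f"{score:5.2f}  {threat}")
```

The top 3-8 entries of such a ranking would then seed the next day's 80/20 rounds, as described above.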

Once the participants had validated the report, it was provided to policy-makers (who were also participants in the exercise) engaged in developing strategies related to the Arctic, where it became the basis of strategic risk landscape analysis. This ensured that operator input was embedded in the strategy development process, while also enabling broader issues such as national objectives to be accommodated.

Lessons for the Application of QED

The QED method was developed and refined to ensure that systematic and rigorous data are acquired directly from the operators themselves. A closer examination of the QED methodology reveals the following key components:

  1. careful consideration of participant demographics, identifying and enlisting subject experts;
  2. consistent methodology across QED sessions including operator selection;
  3. establishment of trusted spaces for maximizing free expression of ideas;
  4. rigorous validation and prioritization of information using an 80/20 rule exercise; and
  5. accessible and comparable outputs between and across QED events.

QED can establish settings where operators can express their expert perspectives and field-informed views without fear of reprisal or stigmatization. Additionally, QED leverages cognitive science and principles of human organization to enable data collection at the individual and group level. It also reveals group dynamics that can be used as proxies to understand broader-scale dynamics at agency and department levels. Finally, it reveals the “unknown unknowns” by accessing, structuring, and analyzing the tacit and implicit knowledge of a group of carefully selected expert “human databases” whose collective reasoning leads to insights that are otherwise not visible.

In addition to the facilitated discussion and brainstorming sessions, the QED method includes a written information submission mechanism that increases the number of solutions generated by larger groups, particularly by allowing beta agents, who often have well thought-out opinions but may not be comfortable vocalizing in a larger group, to express them freely. It also allows a larger number of inputs in a much shorter period, leading to concept diversity and a more robust set of decision options. Our experiences are consistent with data suggesting that in response to three different problems requiring creative thinking, the number and quality of inputs produced by “nominal groups” (whose members spent reflective periods working alone after or during group sessions) were of higher utility to decision makers.

Participant Background

A critical starting point for a successful QED process is ensuring that the attendees are representative of the appropriate subject matter expertise and roles that constitute the operational landscape from unity of command to unity of effort. Workshop attendees cannot be random or opportunistic, but should be carefully identified through an “ideal participant paragraph” in the participant solicitation and through leveraging peer networks. Here is an example of an Ideal Participant Paragraph:

Observing and monitoring system designers, data managers, and the operational users of the data generated by these networks are ideal participants for these workshops. Individuals whose responsibilities include processing, analyzing, and disseminating data for resource management, security and defense operations are also encouraged to attend. These include watch-standers for natural disasters such as wildfires, search and rescue, and humanitarian and disaster response. Individuals involved in countering illicit activities such as illegal, undocumented and unreported fishing (IUUF), and local stakeholders whose place-based knowledge can provide context to regional datasets are also welcome to attend.48

Ensuring that participants have appropriate backgrounds also entails ensuring that the room is secure (peers respecting peers) and that new individuals cannot freely enter and engage with the QED cohort.

Quadrants

The “Q” in QED is critical because it structures the room and the participants into four sections or quadrants (Figure 1). This was based on our experience with the EARTh-X workshop and several other QED workshops that were run in 2017 and 2018, in which it emerged that there are typically four domain areas of critical interest, e.g., marine, air, land, and cyber domains in the case of EARTh-X. The use of four quadrants also allows for a straightforward room layout utilizing the four corners of a rectangular meeting space. Nevertheless, it is possible to run the process using, say, three or five domain areas. However, use of a quadrant-based structure is also resonant with whole-brain theory, also known as the Four Quadrant Model, in which a quadrant-setting supports “Creators,” “Investigators,” “Activators,” and “Evaluators,” as sub-sets of agent-based operators.49 The selection and use of quadrant facilitators/managers allows for a diverse team of facilitators who have the knowledge necessary to work the given subject during elicitation. Each quadrant represents a key area of focus for a given challenge in the issue at hand. Facilitator expertise, knowledge, and process understanding are critical to quality assurance and accuracy. If clarification is necessary, facilitators can pose the correct questions. This construction of information inputs is particularly important to the 80/20 rule dot exercise phase of the QED process.

Facilitators – Forming the QED Team

One of the more critical factors determining the success of a QED workshop is the training and quality of the facilitators themselves. Facilitators must have the following qualities:

  1. possess field and/or operator-relevant experience;
  2. can parse information and inputs in the context of domain expertise;
  3. possess outstanding listening and intervention skills; and
  4. have training in QED facilitation.

Beyond content expertise, quadrant managers are systematically trained in the QED approach to acquire specific skills that allow them to engage attendees and ensure that they participate in ways best suited to their personalities. Skilled facilitation may be necessary to ensure all opinions and views are safely elicited, and to maintain the professional decorum of the group. If not carefully managed, strongly expressed opinions or shared experiences may result in confrontations between participants. A quadrant manager uses a range of techniques to interact with attendees, ensuring that the entire suite of agent types is engaged appropriately. These include cognitive and trust-based interventions tailored to each participant that may be detached or overt.50

Rotating the central facilitator role between sessions prevents fatigue and allows each facilitator to leverage individual strengths in group management and content expertise. The quadrant-enabled aspect of QED helps ensure harmonization of participation and improves the quality of information provided by the group. Ultimately, QED provides a structured, trust-based information exchange space where operators can provide honest input in the task of informing superiors within their organizations.

Selection of facilitators for QED is based on nomination and invitation. Trusted, capable facilitators are identified by already qualified personnel and generally vetted by being invited to serve as a quadrant facilitator during a workshop. Once identified and vetted, prospective facilitators undergo a 3-day training program that provides the key background (scientific theory), critical interaction skills (toolbox), and processes (workflows and analytical tools). The course takes place in a neutral location where students can freely explore a range of topics under each of these three areas. The third day is composed of a compressed QED workshop process with actual participants.

Trust Spaces

A critical component of QED is that of trust. We use the term “trust” here to mean the emotional state in which individuals feel secure enough to freely express their professional opinions in a setting free of perceived or actual consequences to personal or professional safety.51 Constructing trust spaces hinges on asking the right questions, those that specifically serve the resolution and illumination of the issue, and providing the means for the respondents to give honest answers without fear of repercussions. Facilitators can utilize several means, outlined below, to ensure that they construct adequate trust spaces.

Chatham House Rule

The Chatham House Rule originated at the Royal Institute of International Affairs, a non-profit, non-governmental organization based in London whose mission is to analyze and promote the understanding of major international issues and current affairs. The Chatham House Rule is designed to provide anonymity to speakers to encourage openness and sharing of information. It is used throughout the world as an aid to free discussion. The Chatham House Rule reads as follows:

When a meeting, or part thereof, is held under the Chatham House Rule, participants are free to use the information received, but neither the identity nor the affiliation of the speaker(s), nor that of any other participant, may be revealed.52

Social Reinforcement and Trust Building in Real Time

Trust spaces rely on several layers of social intervention that can be difficult to balance, depending on the dynamics of the expert group physically present in the room. Ultimately, it begins with a strong and cohesive QED facilitation team whose backgrounds make them capable of empathizing and relating to the attendees themselves. A capable and empathetic, yet disciplined QED team conveys confidence in both their knowledge of procedure (i.e., confidentiality and respect) as well as their content expertise. More functionally, a trust space must be physically constructed by holding the workshop in a secure room where outsiders cannot randomly come and go, by constantly reminding attendees of the gravity of the Chatham House Rule and its origins, and by requesting that digital devices be put away (a request that the facilitation team can reinforce by securing their own devices in an obvious fashion). Photography and social media use are prohibited (except for providing digital records of facilitation tools and products as an aid to later analysis). To reinforce these rules and create a common normative behavior set, facilitators require the attendees to exercise the highest level of their professional responsibilities. In almost all cases, they will respond favorably and collectively create and uphold shared trust both during the QED process and following it.

The “Magic Box”

Communication styles vary among individuals. In general, alphas and gammas are more likely to speak out than are betas. Among betas, there are sub-categories, which generally include active (Bα), obligate (Bο), and passive (Bπ). Active betas are more likely to provide verbal input followed by written comments after adequate information/evidence has been heard from the group. Obligate betas generally will “go with the flow” and provide the bulk of their inputs via written comments once they have assessed the majority sentiments. Passive betas usually either feel inadequate in terms of domain expertise and/or occupy a social niche which is subjugated to other betas and alphas in the broader group network. This does not mean that passive betas do not have opinions, expertise, or inputs, but rather that they are unlikely to express that knowledge verbally and are more likely to provide only written comments. Individuals of all personality types may consider written comments to be a more accurate or considered format as the act of writing allows time to reflect and analyze. Additionally, written comments also engage different cognitive processes than verbal exchange, potentially leading to ideas or concepts that would not be accessible in a discussion. For example, it is not uncommon for written comments to reflect on the nature of the QED workshop, challenge questions, or facilitation team, and this can lead to improvements of execution (or even of the method itself). A secure box (or secure boxes – with one in each quadrant) is provided in QED for all such comments with the understanding that every comment is considered, incorporated, and kept secure in the final written analysis. The safe space is further reinforced by assuring participants that physical written comments will be destroyed once the information is transcribed by the QED team analysts. In the EARTh-X QED Workshop, a total of 412 comments were received in the Magic Box.

Acknowledging Elephants

Among a targeted and carefully considered group of experts, there will be legacy challenges that create tension.53 Having these called out at the beginning of the meeting by a skilled group of facilitators will accomplish three things:

  1. clarification of the scope and details of the problem;
  2. gauging the severity of any disparity between participants’ views and the workshop objectives in impeding the QED process; and
  3. acknowledging that there are multiple viewpoints which will need to be harmonized and that consensus may or may not be achieved.

Acknowledging elephants is not done as a matter of regular practice and requires a high level of skill, knowledge of the issue-scape, and ability to rein in discussions should they become heated. It should only be attempted when the QED team is pre-briefed on thresholds at which hard-stops dictate a transition to a new discussion phase and/or QED activity.

Story Telling

The types of experts that constitute the range of field operators to command center and support personnel bring with them a rich set of experiences, and hence stories. Telling these stories in a safe space is a critical piece of the QED method. In the EARTh-X QED Workshop, an initial story was offered by both Central Facilitators as a way to link the discussions and their content to a real-world experience. This assisted the group to further understand and relate, not only to each other, but also to the reasons behind the workshop and the information elicitation itself. The benefits of team building are well-documented in industry but have not been readily transferred to the homeland security enterprise where a “Unity of Effort” is overly generalized and difficult to tangibly construct.54 In our experience, storytelling has been a powerful unifying force that humanizes attendees, decreases isolation, and builds empathy. In EARTh-X, the empathy and team-building that was generated by storytelling opened several lines of communication and identified key convergence and leveraging points between operators across DHS components involved in securing the Nation’s borders.55

Data Analysis and Cross Validation of Information

Information solicited through QED is acquired in a systematic fashion, ensuring that each phase of the method is consistent and that data from each phase are comparable with one another, with few deviations. A consistent method across QED workshops also ensures that data from diverse topics can be further cross-compared and evaluated for overlaps and/or divergences. Types of analysis can include:

  1. Rank ordered lists of elicited information using numerical normalization techniques;
  2. Iteratively refined inputs using normalized scores (e.g., taking the top five and conducting a further round based on them);
  3. Dimensional frameworks produced by plotting normalized, ranked scores on multiple dimensions (see figure 2); and
  4. Qualitative summarization of the remainder of the data.
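Items 1-3 can be sketched as follows, assuming simple min-max normalization (the article does not specify the exact normalization technique); the threat labels and dot tallies are hypothetical:

```python
def min_max_normalize(scores):
    """Min-max normalize raw dot tallies to the range [0, 1]."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1  # guard against identical scores
    return {k: (v - lo) / span for k, v in scores.items()}

# Hypothetical raw dot tallies for three threats on three dimensions
severity  = {"A": 9, "B": 4, "C": 1}
timescale = {"A": 2, "B": 8, "C": 5}
spatial   = {"A": 6, "B": 6, "C": 3}

# Each threat becomes a point in a three-dimensional framework,
# analogous to the cube representation of Figure 2
coords = {t: (min_max_normalize(severity)[t],
              min_max_normalize(timescale)[t],
              min_max_normalize(spatial)[t])
          for t in severity}
```

Sorting any one normalized dimension yields the rank-ordered lists of item 1, and retaining the top-scoring subset for a further voting round implements the iterative refinement of item 2.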

Analysts can produce quantitative textual analysis by feeding all inputs and comments into open-source software such as AIDA.56 It should be noted that there may not be enough data from a single workshop to produce statistical inference or analysis, but repeated workshops using consistent methodologies can produce enough data to use chi-square or similar statistical methods to compare proportionate responses from workshop to workshop.
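A minimal sketch of the workshop-to-workshop comparison mentioned above, computing the Pearson chi-square statistic by hand for hypothetical response counts from two workshops:

```python
def chi_square_stat(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts of responses per category from two workshops
workshop_a = [24, 31, 10]   # e.g., marine / air / land threat mentions
workshop_b = [18, 29, 21]
stat = chi_square_stat([workshop_a, workshop_b])
# Compare stat against the chi-square critical value with
# (rows - 1) * (cols - 1) = 2 degrees of freedom (5.99 at alpha = 0.05)
```

In practice a statistics library would supply the p-value directly; the point is that consistent category definitions across workshops make such proportionate comparisons possible.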

Supporting an Operator-Driven Policy Process

A key purpose of QED is to provide a way to solicit operator input more effectively, in concert with policy experts, for informing precise and effective ODP. In contrast to a conventional, top-down policy-making process (Figure 3a), an ODP leverages the QED methodology as a means for incorporating operator knowledge and opinion in the policy-making process (Figure 3b), alongside those with policy experience. Where the need for new or updated policy via operator input (i.e., QED) is recognized, or the potential for a strategic initiative is identified, agency personnel with a strategic policy role (the policy champions) will designate a project lead or team who may in turn establish working groups. Where the potential need for a strategic initiative is identified, the process next requires a validation cycle. Traditionally, this validation step is a high-level overview of the topic, its risk environment, potential partners, and so on. As a basis for ODP, inserting a QED approach to provide input from field-level operators ensures that accurate information on the real-world environment is incorporated. The research cycle in strategy development is the primary mechanism for identifying and detailing the operational space and risk environment that the strategy is intended to address. It is an iterative process wherein the policy champions shape concise descriptions of the strategic environment using: open source and/or classified information; national, department, or interagency policy statements; statutes, authorities, and regulations; interagency consultation; and engagement with academia and/or think tanks. It informs the background section of the strategy and guides the strategic principles, partnership identification, and current-efforts-by-partners sections.
Where QED is used, the background statements also form the basis of the QED workshop materials, affording an opportunity for the expert operators to inform the basic substance of the strategy by influencing what would normally be a headquarters staff-level assessment process – thus validating it or adding real-world input.

Figure 3. Diagrams of: (a) Traditional top-down policy-making process, and; (b) Policy-making process modified by the bottom-up Quadrant Enabled Delphi approach incorporating operator-driven input.

It should be noted that the application of QED does not necessarily guarantee that the operational perspectives and voices elicited through the workshops are heeded by policymakers and subsequently translated into policy. Nonetheless, the formalization of an alternative policy process (Figure 3) with QED embedded in it may increase the likelihood of ODP being developed. The gap between knowing what to do and what policy actually can be adopted is not addressed here and is beyond the scope of this article.

Conclusion

The process of policy development is optimized with timely and robust input from operators. Like any other complex system, the homeland security enterprise can be improved by addressing multiple scales of risk, needs, and practices. ODP allows field-scale challenges and opportunities to be meaningfully incorporated into strategy and planning in concert with the expertise of policy champions. This inclusion of a bottom-up policy perspective refines end-products by improving the applicability of the resultant policies as well as cooperation between components that improves Unity of Effort overall. The QED method was developed specifically to accomplish these goals in a manner that is unbiased, transparent, cost-effective, and adaptive. The establishment of a federal QED cohort of facilitators will create the means to rapidly assess demand signals and quickly develop policies that seamlessly integrate with operations on the ground, while the QED needs assessment is a valuable tool for validating the need for strategic planning. Operator Driven Policy through the QED represents an innovation in policy-making and planning while also ensuring that the U.S. government is able to grow its own capacity to respond effectively to the ever-changing global dynamics with which it must contend on a daily basis. The cost-savings and precision of progressing to this approach will ultimately benefit the American people by ensuring that the front-line operators, those who serve to keep this Nation safe, are better able to execute their missions.

About the Authors

Dr. Lilian Alessa is a Defense Intelligence Senior Leader (Intergovernmental Personnel Act) with the National Maritime Intelligence Integration Office (NMIO) and President’s Professor at the University of Idaho’s Center for Resilient Communities (CRC). She also serves as Deputy Chief of Global Strategies in the U.S. Department of Homeland Security Office of Policy, Washington, D.C. She received her Ph.D. in cellular (systems) biology and cognitive psychology from the University of British Columbia and has worked as a maritime field operator. She has advised agencies in both Canada and the United States on designing resilient landscapes for national security and the maritime domain, emphasizing critical gaps, vulnerabilities, and approaches to addressing them. She sits on several national committees including the Science, Technology and Education Advisory Committee for the National Ecological Observing Network (NEON) through Battelle Corporation. She may be reached at lilian.alessa@hq.dhs.gov .

Sean Moon is the Chief of Global Strategies in the U.S. Department of Homeland Security Office of Policy, Washington, D.C. Among other projects in his portfolio, he is the Policy lead for Arctic strategy development. Between 2011 and 2016, he served the Department as Director, Transportation and Cargo Policy and chaired the Asia-Pacific Economic Cooperation Sub-group for Maritime Security. A 1985 graduate of Willamette University in Salem, Oregon, he spent four years in the private sector before joining the U.S. Coast Guard in 1989. Over the course of a 20-year career, he specialized in port operations and emergency management, community engagement, commercial and passenger vessel and facility safety and security programs, waterways management programs, and oil/hazardous materials and natural disaster response operations. He may be reached at sean.moon@hq.dhs.gov .

Dr. David Griffith is an Intelligence Community Postdoctoral Fellow at the Center for Resilient Communities at the University of Idaho. He received his Ph.D. in Environmental Science from the University of Idaho in 2015, with specializations in plant-fungal symbiosis and social-ecological systems science. His subsequent research has focused on community-based observing and the use of indicators to anticipate socio-environmental instability and emergence regimes. Community-based observing networks and systems (CBONS) provide data that are interoperable with other observing systems so that early warning systems can be developed for social conflict, emergent environmental security threats, and transition states. He may be reached at griffith@uidaho.edu .

Dr. Andrew Kliskey is Professor of Social-ecological systems and Director of the Center for Resilient Communities (CRC) at the University of Idaho. Originally from Aotearoa / New Zealand, he trained as a land surveyor, resource planner, landscape behavioral geographer, and landscape ecologist. He has spent the last 18 years working with communities in New Zealand, northwestern and south central Alaska, and in Idaho to co-develop adaptive responses to environmental change. He has co-led several large team-based interdisciplinary research projects funded by the National Science Foundation in Southcentral Alaska, the Bering, Chukchi, and Beaufort Seas, and in the Upper Snake River Basin, Idaho. He may be reached at akliskey@uidaho.edu .

Acknowledgements

The authors are grateful to the National Science Foundation for award ARC 1642847, and to the U.S. Department of Energy and the Office of the Director of National Intelligence for an Intelligence Community Postdoctoral Research Fellowship, which supported this work. Any opinions, findings, or recommendations expressed in this report are those of the authors and do not reflect the views of NSF, DoE, or ODNI.

Notes


1 Ernest Sternberg and George Lee, “Meeting the Challenge of Facility Protection for Homeland Security,” Journal of Homeland Security and Emergency Management 3, no. 1 (2006): https://doi.org/10.2202/1547-7355.1153.

2 Joe Garcea, “Studying Public Policy: Policy Cycles and Policy Subsystems,” Canadian Journal of Political Science 29, no. 1 (1996): 169-170.

3 Martin Alperen, ed., Foundations of Homeland Security: Law and Policy (Hoboken, NJ: John Wiley & Sons, 2017).

4 U.S. Department of Homeland Security, U.S. Department of Homeland Security Strategic Plan for Fiscal Years 2012 – 2016 (Washington, DC: Department of Homeland Security, 2012): https://www.dhs.gov/sites/default/files/publications/DHS%20Strategic%20Plan.pdf.

5 Paul Sabatier, “Top-Down and Bottom-Up Approaches to Implementation Research: A Critical Analysis and Suggested Synthesis,” Journal of Public Policy 6, no. 1 (1986): 21-48.

6 President of the United States of America, “President’s Letter to House and Senate Leaders and Immigration Principles and Policies,” accessed October 8, 2017, https://www.whitehouse.gov/briefings-statements/president-donald-j-trumps-letter-house-senate-leaders-immigration-principles-policies/.

7 Paul Sabatier, “Top-Down and Bottom-Up Approaches to Implementation Research: A Critical Analysis and Suggested Synthesis,” Journal of Public Policy 6, no. 1 (1986): 21-48 ; U.S. Department of Homeland Security, Office of Strategy, Policy, and Plans, “Strategy, Plans, Analysis & Risk,” accessed September 7, 2017, https://www.dhs.gov/strategy-plans-analysis-risk.

8 Martin Alperen, ed., Foundations of Homeland Security: Law and Policy (Hoboken, NJ: John Wiley & Sons, 2017).

9 Ibid; Charles R. Wise, “Organizing for Homeland Security,” Public Administration Review 62, no. 2 (2002): 131-144.

10 David Dunning, “The Dunning-Kruger Effect: On Being Ignorant of One’s Own Ignorance,” Advances in Experimental Social Psychology 44, (2011): 247-296.

11 The Security and Accountability for Every Port Act of 2006 (SAFE Port Act) (P.L. 109-347) §232.

12 Implementing Recommendations of the 9/11 Commission Act of 2007 (9/11 Act) (P.L. 110-53) §1701.

13 U.S. Congress, House, Department of Homeland Security Appropriations Act, 2010, 111th Cong., 1st sess., 2010, H. Rep. 111-298.

14 As well as controversial sociopolitical areas of discourse. See Harold Sackman, Delphi Assessment: Expert Opinion, Forecasting and Group Process (Santa Monica, CA: RAND Corporation, 1974). See also Rodney Custer, Joseph Scarcella, and Bob Stewart, “The Modified Delphi Technique – A Rotational Modification,” Journal of Vocational and Technical Education 15, no. 2, (Spring 1999), http://scholar.lib.vt.edu/ejournals/JVTE/v15n2/custer.html.

15 Olaf Helmer, H.A. Linstone, and M. Turoff, The Delphi Method: Techniques and Applications (Newark, NJ: New Jersey Institute of Technology, 2002). See also Lilian Alessa et al., “The Arctic Water Resources Vulnerability Index: An Integrated Assessment Tool for Community Resilience and Vulnerability with Respect to Freshwater,” Environmental Management 42 (2008): 523-541.

16 Gene Rowe and George Wright, “The Delphi Technique, A Forecasting Tool: Issues and Analysis,” International Journal of Forecasting 15 (1999): 353-375. See also Helmer et al.

17 Chia-Chien Hsu and Brian A. Sandford, “The Delphi Technique: Making Sense of Consensus,” Practical Assessment, Research & Evaluation 12, no. 10 (2007): 1-8.

18 Sinead Keeney, Felicity Hasson, and Hugh P. McKenna, “A Critical Review of the Delphi Technique as a Research Methodology in Nursing,” International Journal of Nursing Studies 38, no. 2 (2001): 195-200.

19 Jon Landeta, “Current Validity of the Delphi Method in Social Sciences,” Technological Forecasting and Social Change 73, no. 5 (2006): 467-482. See also Keeney et al.

20 See Dsam Scheele, “Reality Construction as a Product of Delphi Interaction,” in The Delphi Method: Techniques and Applications, eds. Harold Linstone and Murray Turoff (Reading, MA: Addison Wesley Publishing Company, 1975), 37-71. See also Roy Schmidt et al., “Identifying Software Project Risks: An International Delphi Study,” Journal of Management Information Systems 17, no. 4 (2001): 5-36. See also Celeste Lyn Paul, “A Modified Delphi Approach to a New Card Sorting Methodology,” Journal of Usability Studies 4, no. 1 (2008): 7-30.

21 For examples, see Cristiano Castelfranchi, “Modelling Social Action for AI Agents,” Artificial Intelligence 103, no. 1-2 (1998):157-182. See also Cristiano Castelfranchi, “Engineering Social Order,” in International Workshop on Engineering Societies in the Agent’s World (Berlin: Springer, 2000): 1-18. And see Rosaria Conte et al., “Sociology and Social Theory in Agent Based Social Simulation,” Computational and Mathematical Organization Theory 7, no. 3 (2001): 183-205.

22 John Perry, “Indexicals, Contexts, and Unarticulated Constituents,” in Proceedings of the 1995 CSLI-Amsterdam Logic, Language, and Computation Conference, (Stanford, Calif.: CSLI Publications, 1998).

23 Evan Fales, “The Ontology of Social Roles,” Philosophy of Social Sciences 7, no. 2 (1977):139-161.

24 Bruce J. Biddle, Role Theory: Expectations, Identities, and Behaviors (New York: Academic, 2013).

25 Cristiano Castelfranchi, “The Theory of Social Functions: Challenges for Computational Science and Multi-agent Learning,” Cognitive Systems Research 2, no. 1 (2001): 5-38.

26 Timo Steffens, “Adapting Similarity-Measures to Agent Types in Opponent- Modeling,” in Workshop on Modeling Other Agents From Observations at AAMAS (New York: Autonomous Agents and Multiagent Systems, 2004): 125-128.

27 Evan Fales, “The Ontology of Social Roles,” Philosophy of Social Sciences 7, no. 2 (1977):139-161.

28 David Cash et al., “Scale and Cross-Scale Dynamics: Governance and Information in a Multilevel World,” Ecology and Society 11, no. 2 (2006): 8.

29 Lilian Alessa and Andrew Kliskey, “The Role of Agent Types in Detecting and Responding to Environmental Change,” Human Organization 71, no. 1 (2012): 1-10.

30 Martin M. Chemers, “Leadership Effectiveness: An Integrative Review,” in Blackwell Handbook of Social Psychology: Group Processes (Malden, MA: Blackwell, 2001): 376-399. See also Jeffrey C. Johnson, James S. Bolster, and Lawrence A. Palinkas, “Social Roles and the Evolution of Networks in Isolated and Extreme Environments,” The Journal of Mathematical Sociology 27, no. 2-3 (2003): 89-122. See also Peter G. Northouse, Leadership Theory and Practice, 3rd ed. (Thousand Oaks, CA: Sage Publications, 2018).

31 For reviews, see: Norbert Schwarz and Gerald L. Clore, “Feelings and Phenomenal Experiences,” in Social Psychology, 2nd ed. (New York: The Guilford Press, 2013): 385-407; and Joseph P. Forgas, “Mood and Judgment: The Affect Infusion Model (AIM),” Psychological Bulletin 117, no. 1 (1995): 39-66.

32 Paula Williams et al., “Community-Based Networks and Systems in the Arctic: Human Perception of Environmental Change and Instrumented Data,” Regional Environmental Change 18 (2017): 547-559.

33 Nicolao Bonini and Massimo Egidi, “Cognitive Traps in Individual and Organizational Behavior: Some Empirical Evidence,” Revue D’Economie Industriale 88, no. 1 (1999): 153-186. See also Martin Hilbert, “Toward a Synthesis of Cognitive Biases: How Noisy Information Processing Can Bias Human Decision Making,” Psychological Bulletin 138, no. 2 (2012): 211-237.

34 Tom M. Mitchell, “Mining Our Reality,” Science 326 (2009): 1644-1645.

35 Shrikanth Narayanan and Panayiotis G. Georgiou, “Behavioral Signal Processing: Deriving Human Behavioral Informatics from Speech and Language,” Proceedings of the IEEE 101, no. 5 (2013): 1203-1233.

36 Ibid.

37 Unpublished data from CRC and Proteus, Inc. on “Technologically Induced Environmental Distancing.” Contact authors for more information.

38 Donald Norman, The Psychology of Everyday Things (New York: Basic Books, 1988).

39 Michael Dummett, The Logical Basis of Metaphysics (Cambridge, MA: Harvard University Press, 1991); Bill Brewer, “Mental Causation: Compulsion by Reason.” Proceedings of the Aristotelian Society, Supplementary Volume 64: 237-253.

40 Dianne C. Berry and Donald E. Broadbent, “Interactive Tasks and the Implicit Explicit Distinction,” British Journal of Psychology 79, no. 2 (1988): 251-72.

41 Michael Polanyi, “Sense-Giving and Sense-Reading,” Philosophy 42, no. 162 (1967): 301-325.

42 See Robert Sternberg and Joseph Horvath, Tacit Knowledge in Professional Practice: Researcher and Practitioner Perspectives (Mahwah, NJ: Lawrence Erlbaum Associates, 1999). Also see Jennifer Hedlund et al., “Identifying and Assessing Tacit Knowledge: Understanding the Practical Intelligence of Military Leaders,” The Leadership Quarterly 14, no. 2 (2003): 117-140.

43 Todd B. McCaffrey, “Gut Feel: Developing Intuition in Army Junior Officers,” Strategy Research Project, http://www.dtic.mil/docs/citations/ADA468976.

44 Chrysostomos Mantzavinos, Douglass C. North, and Syed Shariq, “Learning, Institutions, and Economic Performance,” Perspectives on Politics 2, no. 1 (2004): 75-84.

45 Lilian Alessa et al., Report of the Emerging Arctic Security Threats Matrix (EARTh-X) for Improved Canada-United States (CANUS) Arctic Security Workshop, February 1-2, 2017 (Moscow, ID: Center for Resilient Communities, University of Idaho, 2017).

46 Robert Sanders, “The Pareto Principle: Its Use and Abuse,” Journal of Services Marketing 1, no. 2 (1987): 37-40. Also, see Ralph C. Craft and Charles Leake, “The Pareto Principle in Organizational Decision Making,” Management Decision 40, no. 8 (2002): 729-733.

47 The EARTh-X report and data contain sensitive security information, and are only available for Official Use.

48 Lilian Alessa et al., Data Integration and Information Sharing (DIIS) Workshop, February 26-27, 2018 (Moscow, ID: Center for Resilient Communities, University of Idaho, 2018).

49 Ned Herrmann, “The Creative Brain,” Journal of Creative Behavior 25, no. 4 (1991): 275-295.

50 Gediminas Adomavicius and Alexander Tuzhilin, “Toward the Next Generation of Recommender Systems: A Survey of the State-of-the-Art and Possible Extensions,” IEEE Transactions on Knowledge and Data Engineering 17, no. 6 (2005): 734-749.

51 David J. Lewis and Andrew Weigert, “Trust as a Social Reality,” Social Forces 63, no. 4 (1985): 967-985. See also David W. Johnson, Roger T. Johnson, and Karl Smith, “The State of Cooperative Learning in Postsecondary and Professional Settings,” Educational Psychology Review 19, no. 15 (2007).

52 See https://www.chathamhouse.org/about/chatham-house-rule for history and origins of the Chatham House Rule.

53 Welton Chang and Philip E. Tetlock, “Rethinking the Training of Intelligence Analysts,” Intelligence and National Security 31, no. 6 (2016): 903-920.

54 Thad W. Allen, “Confronting Complexity and Creating Unity of Effort: The Leadership Challenge for Public Administrators,” Public Administration Review 72, no. 3 (2012): 320-321.

55 Lilian Alessa et al., Report of the Emerging Arctic Security Threats Matrix (EARTh-X) for Improved Canada-United States (CANUS) Arctic Security Workshop, February 1-2, 2017 (Moscow, ID: Center for Resilient Communities, University of Idaho, 2017).

56 Mark Altaweel, Lilian Alessa, and Andrew Kliskey, “Visualizing Situational Data: Applying Information Fusion for Detecting Social-Ecological Events,” Social Science Computer Review 28, no. 4 (2010).


Copyright © 2018 by the author(s). Homeland Security Affairs is an academic journal available free of charge to individuals and institutions. Because the purpose of this publication is the widest possible dissemination of knowledge, copies of this journal and the articles contained herein may be printed or downloaded and redistributed for personal, research or educational purposes free of charge and without permission. Any commercial use of Homeland Security Affairs or the articles published herein is expressly prohibited without the written consent of the copyright holder. The copyright of all articles published in Homeland Security Affairs rests with the author(s) of the article. Homeland Security Affairs is the online journal of the Naval Postgraduate School Center for Homeland Defense and Security (CHDS). Cover image by Kufoleto (Antonio De Lorenzo and Marina Ventayol).
