– Executive Summary –
The tragic events of 9/11 fundamentally changed how the United States approached personal and homeland security. American citizens questioned how terrorists could operate undetected in the United States for so long and how such attacks could happen on U.S. soil. In congressional testimony after 9/11, Senator Dianne Feinstein responded to those questions: “we could not identify them. We did not know they were here. Only if we can identify terrorists planning attacks on the United States do we have a chance of stopping them.”[1] In response to the attacks, the United States and the Homeland Security Enterprise (HSE) began to seek out, identify, and close gaps in existing security practices that criminals and terrorists could exploit. Feinstein and others believed biometrics could have prevented 9/11, and the nation’s failure to identify the terrorists became the impetus for the widespread development and implementation of biometric systems. That security gap allowed facial recognition technology (FRT) to emerge as a solution for identifying and verifying individuals.
Although FRT has matured into a common and efficient security measure in the private sector over the last decade, the public has not widely accepted the government’s use of the technology. As with any new technology, the public, advocacy groups, and government oversight entities are skeptical and raise concerns about FRT’s purpose and intent, how its accuracy affects the public, and how the technology impinges on privacy and other civil rights. When emerging technology raises such concerns, government decision-makers can explore ethical, legal, and societal issues (ELSI) during decision-making to identify common ground with the public and to resolve or mitigate its concerns.[2] When no regulations exist to guide the development of new technology, policymakers and decision-makers can weigh the technology’s benefits against the public’s interest to determine the best path forward for all parties.
A methodology for thinking through and evaluating ethical dilemmas helps HSE officials make difficult choices that affect society.[3] Ethical frameworks provide a set of standards for behavior that decision-makers can use to determine how to act in a range of situations, how to make decisions, and the reasoning behind those decisions.[4] When decision-makers anticipate and identify problems before implementing novel technology, they can mitigate them, improving public perception and adoption. This research analyzes facial biometrics and their relationship to the public interest through an ethical framework and a real-world case study to determine how FRT can be implemented in a way that is both efficient and ethical.
This research takes a multi-pronged approach to analyzing FRT and outlining steps for responsible use. First, this study explores the decision-making process using the “How to Do It Right” framework, through which the thesis identifies values and their corresponding vulnerabilities, risks, and mitigation measures. Next, the research reviews academic and security industry literature to identify cross-cutting operational principles that can be applied to FRT programs. Finally, this study explores best practices through a case study of U.S. Customs and Border Protection’s (CBP) Biometric Entry-Exit (BEE) program. The goal is to equip homeland security leaders with a framework for identifying issues associated with FRT and for aligning the decision-making process with the adjudication and mitigation of ethical and societal concerns, producing a solution that benefits society.
Mohamed Abomhara et al. developed the “How to Do It Right” framework to analyze biometric technology in border settings. The framework is a four-tiered process: the top tier comprises ethical, social, and legal challenges, followed by a tier for the values affected by the technology, an assessment tier, and a considerations tier at the bottom.[5] (This thesis excludes legal issues from the analysis.) The framework links each challenge to the value(s) the technology affects and supports an impact assessment that mitigates harm to those values.[6] This analysis adapts the “How to Do It Right” framework and applies it to the U.S. government’s use of FRT. It takes the basic framework and incorporates four overarching categories, derived from the literature and criticisms of FRT, into the challenge tier: privacy implications, constitutional protections, data management, and bias and accuracy. Once decision-makers identify issues falling within these four challenge categories, they can assess the corresponding risks, vulnerabilities, and mitigation measures.
Decision-makers promote ethical and efficient programs when they develop and implement safeguards and mitigation measures that correspond to ethical operating principles; these principles appear in the considerations tier, the final tier of the framework. The operating principles in that tier originate as ethical guidelines in the biometrics and security industry literature. The adapted framework allows decision-makers to consider the broad implications of FRT and then derive best practices that can mitigate the challenges and establish responsible biometric collection and use. Government decision-makers can formulate decisions regarding FRT by thinking through the ethical framework before, during, and after technology implementation.
Security industry organizations and watchdog groups have developed ethical principles, referred to as operational guidelines, to govern the use of facial biometrics.[7] Each organization’s proposed guidelines share many common elements, but no accepted, standardized principles exist. A crosswalk of the different principles reveals common patterns that can evolve into standardized best practices for government agencies using FRT. Once a common principle is established as an operating guideline, it can be characterized as an ethical operational practice. The “How to Do It Right” framework incorporates these common themes as mitigation measures. The themes extracted from the crosswalk are privacy by design, transparency, a clear and defined purpose, accurate technology, data security, training and access, and accountability. It is up to each agency to perform its due diligence and implement as many of these principles as possible to balance the security benefits against the impact on the public. Applying these principles and operating guidelines leads to the responsible use of FRT.
The final prong of this research is a case study of CBP’s BEE program, which represents an efficient and ethical FRT program. BEE was selected because it deploys FRT in a border security environment and incorporates ethical operating principles. Although CBP continually improves and enhances the program, BEE exemplifies a program implemented thoughtfully through testing, pivoting approaches, internalizing audit recommendations, and due diligence, and it enhances security while embedding ethical principles. The case study demonstrates how the government can deploy an FRT program that embodies safety and security while considering and addressing public perception.
By implementing safeguards and countermeasures, government agencies can balance the benefits of FRT against public concern. Overall, when government decision-makers adhere to ethical decision-making frameworks and operating principles, FRT can be used responsibly and efficiently. This thesis makes four recommendations for government agencies considering or using FRT, applicable at every phase of the process: technology consideration, development, implementation, and post-implementation assessment or enhancement. The four recommendations are to follow the “How to Do It Right” framework, incorporate ethical operating principles, apply sustainable policy and federal regulations, and explore and implement FRT best practices. Together, these recommendations promote the ethical and efficient use of FRT.
[1] Biometric Identifiers and the Modern Face of Terror: New Technologies in the Global War on Terrorism, Senate, 107th Cong., 1st sess., November 14, 2001, https://www.govinfo.gov/content/pkg/CHRG-107shrg81678/html/CHRG-107shrg81678.htm.
[2] Jean-Lou Chameau, William F. Ballhaus, and Herbert S. Lin, eds., Emerging and Readily Available Technologies and National Security: A Framework for Addressing Ethical, Legal and Societal Issues (Washington, DC: National Academies Press, 2014), 1, https://pubmed.ncbi.nlm.nih.gov/25032403/.
[3] Aaron Nelson, “Ethical Decision Making for Homeland Security” (master’s thesis, Naval Postgraduate School, 2013), https://calhoun.nps.edu/handle/10945/37684.
[4] Sheila Bonde and Paul Firenze, “A Framework for Making Ethical Decisions” (Lecture, Making Choices: Ethical Decisions at the Frontier of Global Science, Brown University, May 2013), https://www.brown.edu/academics/science-and-technology-studies/framework-making-ethical-decisions.
[5] Mohamed Abomhara et al., “How to Do It Right: A Framework for Biometrics Supported Border Control,” in E-Democracy – Safeguarding Democracy and Human Rights in the Digital Age, ed. Sokratis Katsikas and Vasilios Zorkadis (Cham, Switzerland: Springer, 2019), 6.
[6] Abomhara et al., 99.
[7] As seen in the following literature: Security Industry Association, SIA Principles for the Responsible and Effective Use of Facial Recognition Technology (Silver Spring, MD: Security Industry Association, 2020), https://www.securityindustry.org/report/sia-principles-for-the-responsible-and-effective-use-of-facial-recognition-technology/; International Biometrics and Identification Society, Identification Technology & Privacy Policy Principles (Washington, DC: International Biometrics and Identification Society, 2021), https://www.ibia.org/resources/white-papers; Future of Privacy Forum, Summary of Privacy Principles (Washington, DC: Future of Privacy Forum, 2018), https://fpf.org/wp-content/uploads/2019/03/Final-Privacy-Principles-Edits-1.pdf; James Andrew Lewis and William Crumpler, Facial Recognition Technology: Responsible Use Principles and the Legislative Landscape (Washington, DC: Center for Strategic and International Studies, 2021), https://www.csis.org/analysis/facial-recognition-technology-responsible-use-principles-and-legislative-landscape; and World Economic Forum, A Policy Framework for Responsible Limits on Facial Recognition Use Case: Law Enforcement Investigations (Geneva, Switzerland: World Economic Forum, 2021), https://www.weforum.org/whitepapers/a-policy-framework-for-responsible-limits-on-facial-recognition-use-case-law-enforcement-investigations.