Law enforcement faces a problem common in the twenty-first century: an expansive growth of data and limited personnel to extract useful trends and analyses from it. The development of artificial intelligence (AI) systems, such as facial recognition and machine learning, presents useful tools to address this issue. However, AI systems pose a unique problem for users in the law enforcement domain. On one hand, AI systems provide an opportunity for optimization and faster workflows, especially in an environment of growing data. On the other hand, if left unchecked, AI systems have the potential to negatively affect the communities served by law enforcement. These negative effects take the form of bias and inaccuracies within the systems, as well as secondary effects that are not initially obvious when using AI systems. This raises the question, “How does law enforcement usage of artificial intelligence systems impact the communities they serve?”
This thesis examines three current uses of AI systems in the law enforcement domain: facial recognition, predictive risk assessments, and predictive policing. Each case is evaluated for effectiveness, fairness, privacy, transparency, and accountability to determine how law enforcement’s usage impacts the respective communities. Viewed through these criteria, it becomes clear that each type of AI system raises significant considerations. Without proper policies and regulations in place, AI systems can lead to unjust arrests, unfairly target specific classes of people, or simply fail to meaningfully enhance law enforcement operations. Other considerations must also be taken into account, such as the Fourth and Fourteenth Amendments, which protect against unreasonable search and seizure and establish due process, because AI systems have the potential to undermine the constitutional protections afforded to individuals.
These factors do not mean AI systems are doomed or should not be used in the law enforcement domain. However, these findings point to the need for additional research into responsible ways for law enforcement to use this technology without negatively impacting their communities. Additionally, this is an emerging technology, with new developments and discoveries occurring on a regular basis. As such, the government is in a perpetual game of cat and mouse, which has led some municipalities to ban the technology outright. Rather than banning the technology, frameworks should be developed to ensure AI systems are used in a responsible manner. The European Union is currently developing a framework intended to prevent many of the negative components of AI systems. This work can potentially serve as a starting point for similar policies and regulations for law enforcement in the United States.
John Hollywood et al., Addressing Emerging Trends to Support the Future of Criminal Justice: Findings of the Criminal Justice Technology Forecasting Group (Santa Monica, CA: RAND Corporation, 2018), https://doi.org/10.7249/RR1987.
Osonde Osoba and William Welser, The Risks of Artificial Intelligence to Security and the Future of Work, PE-237-RC (Santa Monica, CA: RAND Corporation, 2017), https://doi.org/10.7249/PE237.
High-Level Expert Group on Artificial Intelligence, Ethics Guidelines for Trustworthy AI (Brussels: European Commission, 2019), https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai.