21st Century Crime: How Malicious Artificial Intelligence Will Impact Homeland Security

Kevin Peters

EXECUTIVE SUMMARY

Artificial intelligence (AI) has the potential to dramatically transform how society interacts with information technology, particularly how personal information connects to the hardware and software systems people use on a daily basis. The combination of developing AI systems and a digitally connected society could transform our culture in a manner not seen since the Industrial Revolution. Experts in the field of AI disagree on the pace at which the technology will develop; however, cognitive computing and machine learning are likely to affect homeland security in the coming years. Criminals, motivated by profit, are likely to adapt future AI software systems to their operations, further complicating present-day cybercrime investigations. If the homeland security enterprise is to be prepared for the malicious use of AI technology, it must begin to examine how criminal elements may use the technology and what should be done today to ensure it is ready for tomorrow's threat.
This thesis examines how transnational criminal organizations and cybercriminals may leverage developing AI technology to conduct more sophisticated criminal activities and what steps the homeland security enterprise should take to prepare. One byproduct of ongoing AI research is the possibility that criminals will create malevolent AI. Profit-motivated cybercriminals may attempt to develop proxy AI systems that mask their involvement, reduce their risk, and deflect attribution and responsibility. The malicious use of AI could threaten digital security, and machines could become as proficient at hacking and social engineering as human cybercriminals. Detecting cyberattacks launched by malicious AI depends on examining these technologies and how they apply to existing criminal patterns and activities. Criminals have long demonstrated that they are early adopters of new technologies, and they will almost certainly incorporate AI into their criminal enterprises.
This thesis applied a red-teaming approach, using a future scenario methodology, to project how cybercriminals may use AI systems and what should be done now to protect the United States from the malicious use of AI. The analysis first considered current fields of AI research, likely timelines for technological developments, and AI's perceived impact on daily life in the United States over the next ten years. Next, the analysis examined how future AI systems could enhance present-day cybercrime threats such as remote-controlled aerial systems, fabricated video files, spear-phishing attacks, and social media profiling. The final step was to examine these scenarios and build countermeasures that homeland security officials in the United States could employ to mitigate the potential risks of malicious AI. Because the criminal use of AI will likely affect multiple echelons of government, a strategic review analyzed the policy framework required to confront the threats identified in the AI scenarios. Best practices from foreign partners were examined to find strategies and methodologies that could be applied within the United States. A tactical review analyzed how law enforcement agencies could respond to the attacks in the AI scenarios and which existing law enforcement operations could be adapted to prepare for malicious AI.
The progression of AI is uncertain, and the scenarios highlight the ways cybercriminals could leverage even relatively minor technological developments. Education and awareness of emerging technologies should form the basis of how cybercrime is examined. This thesis recommends that the homeland security enterprise expand outreach programs and partner with the private-sector firms and academic institutions developing AI systems in order to understand the dual-use implications of emerging AI technology. Public security officials also have much to offer the AI research community; perspectives from law enforcement, emergency responders, policymakers, and intelligence officials will be vital to the development of safe and ethical AI systems. Federal agencies with cybercrime enforcement authority should develop strategies that align with existing national cyber and AI strategies; these strategies can form the framework for confronting the challenge of future AI-enabled cybercrime.
This research concludes that the threats posed by cybercriminals' use of AI cannot be mitigated by any one agency. Rather, a coalition of willing partners across multiple echelons of government, private industry, and academia will need to work together to combat future cybercrime. International partnerships with law enforcement agencies and associations that support anti-crime operations will also be critical to tracking, investigating, and prosecuting future cybercrime. This thesis begins the discussion of how to confront the challenge of future AI-enabled cybercrime and seeks to expand awareness of how to combat the misuse of dual-use emerging technologies.
