A Review of Four Battlegrounds: Power in the Age of Artificial Intelligence by Paul Scharre

Suggested Citation

Levenson, Daniel E. “Review of Four Battlegrounds: Power in the Age of Artificial Intelligence by Paul Scharre.” Homeland Security Affairs 20, no. 1 (November 2024). www.hsaj.org/articles23002.


The intersection of artificial intelligence (AI) and national security offers fertile ground for experts, pundits, and fearmongers alike. Such conversations range from sober discussions of practical implications and potential moral harm to fever dreams that make dystopian near-future television programs like Black Mirror appear cozy by comparison. It should come as no surprise, then, that for those wrestling with the implications and actual use of this technology, avoiding the temptation to anchor oneself in either the unlikely promise of world peace courtesy of altruistic algorithms or the nightmarish visions a digital Dante might conjure is no mean feat.

One critical issue at the heart of practically every meaningful discussion on this topic is that those in a position to guide or control the use of AI seem significantly out of sync with the rapid rate at which the technology itself is advancing and becoming pervasive. There are a range of potential reasons for this, and in an ambitious new book entitled Four Battlegrounds: Power in the Age of Artificial Intelligence, author Paul Scharre explores why the relevant figures and the bodies they lead (whether private or public) so often move slowly compared to the pace of technical advances in AI, at times even compounding confusion instead of offering desperately needed clarity. Throughout the book, the author returns to the two essential elements of the discussion: the technology itself and how those who control it may choose to use it as an instrument of power. As Scharre makes clear, it is the nature of this relationship between human and machine that will determine the ways in which AI contributes to or erodes the security of nations.

On the human side of the equation, the disconnect between how AI systems are designed to function and how they actually behave in the real world is particularly concerning. In Scharre’s depiction of the present state of affairs, the degree to which many decision-makers trust the conclusions of AI systems, without really understanding how the underlying algorithms function or even that such systems are fallible, should alone be enough to keep most sensible people up at night. A short step beyond this lies a place where we begin to believe that AI and the broader field of “cyber” are so advanced that they take on the trappings of magic and become worthy of unquestioning faith, and it is there that we encounter some very odd (and, one might reasonably surmise, potentially quite harmful) outcomes. These negative results may be rooted in cognitive biases (anchoring, observational selection, and recency bias all come to mind) on the analytic and decision-making side, or in bad data or design on the input side. Compounding the difficulty further is the development and deployment of systems so inscrutable that even their own designers do not completely understand how an algorithm has arrived at a particular conclusion, rendering the system an opaque “black box.”

In the concluding pages of his book, Scharre drives toward one of the more serious potential outcomes of AI in warfighting: the possibility that, unlike other technological revolutions that changed the manner in which armed conflict was carried out (the invention of the tank, the introduction of the airplane) but not the essence of this human activity, AI may alter not merely the character but the nature of warfare itself. The author argues that it is the “black box” aspects of this technology’s development and deployment that may allow combat to move at a speed beyond human comprehension, a prospect that should give even the most sanguine AI enthusiasts pause. Violence is a terrible thing, but if war until now has been at least partly politics by other means, then handing over the conduct of conflict to inscrutable machines may indeed change the nature of war, removing moral and political calculation from the process and banishing even the most distant strains of humanity from armed conflict.

Scharre’s book also raises the troubling likelihood that bad actors waging low-intensity or asymmetric conflict may be able to neutralize a broad swath of AI-enabled defensive systems in the military and homeland security space with low-tech, low-cost, and simple-to-employ tools and techniques. The concept of numerically or technologically disadvantaged actors exploiting whatever means they can access to “even” the playing field is nothing new, but the scope and scale of what is available in the tech space at present, let alone what is coming down the pike, arguably represents a sea change.

The modification and creative use of off-the-shelf unmanned aerial vehicles (UAVs, or “drones”) in both the Armenia-Azerbaijan conflict and the present war in Ukraine offer recent examples of belligerents taking advantage of the speed with which new technologies enter the marketplace and become available to consumers. In fact, since the book’s publication in February 2023, there have been significant advances not only in the development and diffusion of AI but also in its integration into the conflict in Ukraine, with multiple media outlets in the first half of 2024 reporting the successful development and use of AI-enabled UAVs against Russian targets. Another high-profile example of AI in armed conflict can be seen in the creation and dissemination of synthetic media by Iran following its attack against Israel in April 2024, which reportedly included fake still images and video designed to give the impression that the assault had caused widespread damage. These are just two of the ways AI can be leveraged in war, reinforcing Scharre’s observations about its potential for future exploitation by combatants.

Scharre gives examples of the ways in which AI can be used offensively, but what may be more worrisome is how it will function on the defensive side. In this respect, we might do well to ask whether the AI systems being developed and deployed to protect people and property are less some kind of wondrous forcefield and more akin to a digital Maginot Line: prima facie logical in their construction and intended use, but fundamentally limited by their static nature, in this case the narrowness of their training.

Scharre provides many examples of this in his book, including cases in which AI systems have been fooled by something as simple as a sticker placed on a stop sign or an individual wearing a rudimentary facial disguise. As the author notes, blunting the impact of these simplistic countermeasures requires that the anticipation of potential flaws and vulnerabilities be baked into design and production from the start: “If countries don’t address security vulnerabilities from the beginning, they risk building an AI ecosystem that is highly vulnerable to attack, much as nations have done with computer networks and cybersecurity vulnerabilities over the last decade.”[1]

One of the strengths of this book lies in Scharre’s exploration of the design and performance faults that pop up unexpectedly when AI-enabled systems are put to the test. Here the potentially disastrous intersection of human bias and AI limitations is rendered in sharp relief. One particularly amusing (and concerning) example the author provides is an exercise in which an AI-dependent system, supposedly capable of detecting humans moving through a particular space, spotted participants when they walked normally but failed entirely when they found more creative ways to move through the same space, literally somersaulting past sensors or disguising their physical appearance using items from their immediate environment.

Readers looking for a good overall introduction to the role of AI in international relations and national security will find Scharre’s book useful and accessible, with clear descriptions of the ways in which policymakers and private-sector tech companies are struggling with the impact of AI in multiple critical domains, from combat at the tactical and operational levels to broader questions concerning the global balance of power. Readers working in these spaces, however, and those far more familiar with the field’s overarching challenges, may find an otherwise well-written and well-researched book a bit frustrating: many good points and potentially quite interesting areas for further exploration are only touched upon, rather than given the full consideration that might have increased the book’s utility for professionals.

Looking out over the horizon to the roles that AI is likely to play in economic and strategic competition between nation-states, China looms large in this book. Whether one sees the U.S.-China relationship as rooted in a traditional model of great-power rivalry, with each jockeying for influence and economic advantage on the global stage, or as one of genuine enmity that may one day lead to open conflict, there is an obvious competitive quality to interactions between the two nations. Scharre makes it clear that Chinese and American officials have developed a kind of modus vivendi for the moment which allows each nation to reap some benefits from the relationship. At the same time, he makes a compelling case that China’s clandestine and often illegal efforts to gain competitive economic advantage over American companies are an issue that warrants significant attention from homeland security practitioners. As Scharre notes, Chinese efforts are neither ad hoc nor limited in scope or scale: “China employs a range of methods to gain access to U.S. technology, including via company insiders, Chinese firms partnering with U.S. companies, cyber theft, investment, and academic espionage. China has an estimated over 200 talent recruitment plans to funnel scientific research back to China through both licit and illicit means.”[2]

In addition, China itself plays an important role in the manufacturing of critical hardware components that enable AI systems to function, with implications for American interests at home and abroad. Scharre’s exploration of the topic raises the question of how we came to a point where things are so intertwined and, frankly, whether in a world of so many interdependent elements and systems anything short of a radical policy shift, including the physical separation of American and Chinese technology, can reduce the risks of the present situation.

On the global stage, the clash of armies wielding AI-based weapons is a serious concern, but another critical issue Scharre raises, one that homeland security practitioners should pay close attention to, concerns the ways in which bad actors at all levels can manipulate or neutralize AI-dependent assets. In the long run, the use and misuse of this technology by substate actors may prove equally disruptive. Drug cartels and other transnational criminal organizations, for example, have shown remarkable creativity in protecting distribution routes and shipments of illicit goods, and there is no reason to think they will not similarly attempt to adapt to the introduction of AI-enabled systems into the environment. Likewise, insurgent groups are likely to adopt techniques that have proven highly effective at countering AI-dependent assets, many of which are painfully simple and take advantage of fundamental limitations in current machine learning processes.

In considering the myriad issues around AI and national security, the author paints a broad and complicated picture—one of a technological revolution happening across practically every domain within the military and homeland security enterprise. Up close, it is a tableau that feels more like pointillism than portraiture, and perhaps that’s the most important aspect of this book. Too often we spend our time standing too close to particular problems or opportunities, seeing only parts and never the whole, and in doing so miss the ways in which seemingly disparate elements are profoundly connected and dependent upon one another. This is true in many fields and disciplines and no less so in Scharre’s examination of these critical topics.


About the Reviewer

Daniel E. Levenson is a PhD student in criminology at Swansea University and Director of the Communal Security Initiative at Combined Jewish Philanthropies in Boston, Massachusetts. He holds an MLA in English and American Literature from Harvard University and an MA in Security Studies with a concentration in Homeland Defense from the University of Massachusetts Lowell. Daniel is a member of the FBI Boston Mass Bay Threat Assessment Team and regularly presents on violent extremism, threat assessment, and related topics for a wide variety of law enforcement and other professional audiences.


Notes

[1] Paul Scharre, Four Battlegrounds: Power in the Age of Artificial Intelligence (New York: W.W. Norton, 2023), 239.

[2] Scharre, 164.


Copyright

Copyright © 2024 by the author(s). Homeland Security Affairs is an academic journal available free of charge to individuals and institutions. Because the purpose of this publication is the widest possible dissemination of knowledge, copies of this journal and the articles contained herein may be printed or downloaded and redistributed for personal, research or educational purposes free of charge and without permission. Any commercial use of Homeland Security Affairs or the articles published herein is expressly prohibited without the written consent of the copyright holder. The copyright of all articles published in Homeland Security Affairs rests with the author(s) of the article. Homeland Security Affairs is the online journal of the Naval Postgraduate School Center for Homeland Defense and Security (CHDS).
