“Sleepwalkers” in the Age of Artificial Intelligence

Not coincidentally drowned out by the background noise of secondary topics, the subject of military applications of AI remains confined to researchers and the high-tech industries that fund them, with serious risks to the immediate future of humanity.

Now that negotiations to at least temporarily stop the disastrous war in Ukraine will begin in November – hopefully not too late and not too bad – and the great American electoral Mass will be over, we should begin to open the channels for negotiating a treaty on the military use of artificial intelligence (AI). With whom? Well, for starters, obviously with “authoritarian” China and Russia. For three reasons. First, the two countries expressed in their joint statement (04.02.2022) their intention to strengthen dialogue and contacts on artificial intelligence, as well as to deepen cooperation on international cybersecurity and to promote the construction of an open, secure, sustainable, and accessible ICT (information and communication technology) environment.

Frankly, it is futile to raise pre-emptive doubts about the bona fides of the two countries, including doubts stemming from the ongoing war: not because they are not formidable partners, but because disarmament and arms control treaties are routinely concluded with adversaries or potential adversaries. Moreover, this is an issue of global importance, affecting the future of 8 billion people, current belligerents included.

The second reason goes back to the dawn of the nuclear arms race. Very wisely, less than a year after the horrors of Hiroshima and Nagasaki, the UN General Assembly created a special commission to eliminate all nuclear weapons (1946), and in the same year, with similar wisdom, the USA presented the Baruch Plan, which provided for the unilateral dismantling of the small US arsenal (12 warheads) and international control of atomic energy. The Soviets did not believe it and abstained from the plan, and their Foreign Minister Andrei Gromyko (the famous Mr. Nyet, of whom Sergei Lavrov is the friendliest version) made a counter-proposal in 1947, which was rejected by the Western-majority Security Council. Thirty-eight years later, some Western negotiators suspected that a unique opportunity had been missed, if not to eliminate all nuclear weapons (the Soviet blockade of West Berlin came in 1948), then at least to limit the arsenals to a few bombs rather than reach the roughly 10,000 warheads on each side in a paroxysm of nuclear accumulation. As we will soon see, the implications of AI in the military are highly problematic.

The last reason has to do with the nature of AI development. Not only do these algorithms operate according to a logic that cannot currently be reconstructed, explained, or tracked (it is as if an airplane were flying without our knowing how its engine works), but the research takes place largely outside any real government control. To paraphrase the hardline French statesman Clemenceau: “AI is too serious a matter to entrust to entrepreneurs”. The usual free-market zealots should be reminded that Silicon Valley itself recognizes that economic competition risks fostering innovation at the expense of security, so much so that 1,000 professionals asked for a moratorium to avoid the development of “digital minds that no one – not even their creators – can understand, predict, or reliably control” (March 2023). A year later nothing has happened, beyond some non-binding voluntary initiatives.

Even at the international level, we are still at the stage of declarations of intent and principles: China’s Global AI Governance Initiative; the G7 Hiroshima Code of Conduct; the Bletchley Park Declaration of the AI Safety Summit; and the Governing AI for Humanity report of the UN High-Level Advisory Body (HLAB) – all 2023 initiatives, but often foreshadowed by the work of UNESCO and the OECD two years earlier.

There are also some initial thoughts on how an international AI control regime might take shape, among which a paper (21.03.2024) from the Carnegie Endowment for International Peace by Emma Klein (Special Assistant to the President) and Stewart Patrick (Director of the Order and Global Institutions Program) stands out. After analyzing previous models and areas of application, it envisions a complex control regime based on the interaction of different agencies and actors, similar to the one used for a problem as broad as climate change.

What is proposed here, however, is not a Gaullist “tall order,” but initially a bilateral treaty (USA, China), open within a reasonable time frame to the other recognized nuclear powers (and potentially to the unrecognized ones as well), focused on the use of AI in nuclear deterrence. The race for AI capabilities has already begun, and an agreement must be reached before one side imagines it has achieved lasting superiority.

Nuclear deterrence, which too many take for granted, has been gradually eroding since the 1980s due to a number of factors: the development of anti-missile weapons, the gradual loss of the strategic invisibility of ballistic missile submarines, the development of maneuverable hypersonic nuclear missiles, the likely emergence of quantum-technology radars, and other capabilities that did not exist during the Cold War.

This has led some US nuclear thinkers to envision a new artificial intelligence-based early warning and decision-aid system for the president that, in the event of a nuclear attack, could offer decision-makers a menu of pre-arranged response options matched to the identified attack. In extreme cases, it could respond even if the president were dead or unable to communicate, strengthening deterrence by making retaliation reasonably certain even after a successful enemy surprise attack.

It’s a lot like Dr. Strangelove’s Doomsday Machine.

The negotiation of such a treaty should, in principle, ensure the equal effectiveness of early warning systems for both sides and then for all counterparties (thus increasing the chances of survival of nuclear delivery systems), the inviolability of the IT systems associated with national deterrents, and the integrity and greater reliability of national chains of command and of the communication channels between counterparts. In other words, AI would become a factor making a pre-emptive strategic nuclear attack by anyone against other nuclear powers unlikely.

If prudence and wisdom prevail over the belligerent and brutish madness so fashionable at the moment, a few countries will have the merit, and the advantage, of eliminating or minimizing one of the risk factors that could lead to the extinction of homo sapiens. “By following this trajectory, we are likely on the path to building quite powerful systems that could either become weapons with catastrophic consequences or spiral out of control,” concludes Gladstone AI’s “Defense in Depth” report for the State Department (26.02.2024).

There is no time to lose; we need to start negotiating here and now.

Alessandro Politi

Director of the NATO Defense College Foundation