
A.I. could lead to a nuclear war by 2040, think tank warns

Key Points
  • AI in the future could encourage human actors to make catastrophic decisions, researchers at the nonprofit RAND Corporation said.
  • This could undermine mutually assured destruction as a deterrent against the use of nuclear weapons.
Operation Unicorn nuclear test, May 22, 1970.

Artificial intelligence (AI) could potentially result in a nuclear war by 2040, according to a research paper by a U.S. think tank.

The paper, by the nonprofit RAND Corporation, warns that AI could erode geopolitical stability and undermine the status of nuclear weapons as a means of deterrence.

While peace has been maintained for decades by the understanding that any nuclear attack would trigger mutually assured destruction, the prospect of AI and machine learning shaping military decisions could mean that assurance of stability breaks down, RAND researchers warned.

The researchers, who based their paper on a series of workshops with experts, said that AI in the future could encourage human actors to make catastrophic decisions. Improvements in sensor technology, for instance, could make it possible to destroy retaliatory forces such as submarines and mobile missiles.


AI could also tempt nations to threaten a pre-emptive strike against a rival in order to gain bargaining power, even if they have no intention of carrying out an attack, researchers said.

"Some experts fear that an increased reliance on artificial intelligence can lead to new types of catastrophic mistakes," said Andrew Lohn, co-author of the paper and associate engineer at Rand.

"There may be pressure to use AI before it is technologically mature, or it may be susceptible to adversarial subversion. Therefore, maintaining strategic stability in coming decades may prove extremely difficult and all nuclear powers must participate in the cultivation of institutions to help limit nuclear risk."

The RAND paper highlights the danger of using AI to make military decisions, rather than the threat posed by autonomous drones and other so-called "killer robots."

1983 nuclear false alarm

The researchers pointed to a nuclear false alarm incident in 1983 as a cautionary tale for the post-Cold War development of AI.

In 1983, then-Soviet military officer Stanislav Petrov spotted a computer warning that the U.S. had launched several missiles. The warning was a false alarm, and Petrov, who died late last year, has since been credited as the man who saved the world from nuclear devastation.

"The connection between nuclear war and artificial intelligence is not new; in fact, the two have an intertwined history," said Edward Geist, co-author of the paper and associate policy researcher at RAND.

"Much of the early development of AI was done in support of military efforts or with military objectives in mind."

Many business leaders and experts have warned against the use of AI in a military setting.

Tesla and SpaceX CEO Elon Musk was among 100 AI experts who called on the United Nations to prevent the development of lethal autonomous weapons.

Musk has warned that AI could create an "immortal dictator from which we can never escape" and that the technology could result in a third world war.