Life with A.I.

Harvard psychologist Steven Pinker: The idea that A.I. will lead to the end of humanity is like the Y2K bug

Climate change. Killer robots. Russian bad actors spreading fake news on social networks. School shootings.

It's easy to feel dour about the future of mankind. But constant, widespread doomsday prophecies are not going to help; they will only make matters worse.

That's according to famed Harvard cognitive scientist Steven Pinker, who made the case in an op-ed published Saturday in the Canadian newspaper The Globe and Mail. The op-ed preceded his new book, "Enlightenment Now: The Case for Reason, Science, Humanism, and Progress," released Tuesday.

"Doomsday is hot. For decades, we have been terrified by dreadful visions of civilization-ending overpopulation, resource shortages, pollution and nuclear war. But recently, the list of existential menaces has ballooned," says Pinker.

"We now have been told to worry about nanobots that will engulf us, robots that will enslave us, artificial intelligence that will turn us into raw materials and teenagers who will brew a genocidal virus or take down the internet from their bedrooms."

Billionaire tech titan Elon Musk is one of the loudest voices publicizing the potential threat of AI. He has said that robots will be able to do everything better than humans, that competition for AI at the national level will cause World War III and that AI is a greater risk than North Korea.

"If you're not concerned about AI safety, you should be. Vastly more risk than North Korea," Musk tweeted in 2017.

Renowned physicist Stephen Hawking has said AI could be the best or the worst event in the history of civilization. His warnings about the latter were stark: "It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy," said Hawking.

While preparing, or even over-preparing, for threats may seem harmless, there are risks, says Pinker.

"Apocalyptic thinking has serious downsides. One is that false alarms to catastrophic risks can themselves be catastrophic," says Pinker. "Sowing fear about hypothetical disasters, far from safeguarding the future of humanity, can endanger it."

The Harvard professor points to the nuclear arms race of the 1960s and the 2003 invasion of Iraq as examples of catastrophic thinking doing real harm.

Pinker says constant fear-mongering can make it harder for the human brain to distinguish legitimate threats from false alarms.

"Some of the threats facing us, such as climate change and nuclear war, are unmistakable, and will require immense effort and ingenuity to mitigate," he writes. "Folding them into a list of exotic scenarios with minuscule or unknown probabilities can only dilute the sense of urgency."

If every doomsday scenario feels equally possible, people lose the incentive to take action against any of them, says the cognitive scientist.

"If humanity is screwed, why sacrifice anything to reduce potential risks? Why forgo the convenience of fossil fuels or exhort governments to rethink their nuclear weapons policies? Eat, drink and be merry, for tomorrow we die!" says Pinker, explaining the thinking process that results.

Overblown notions that technology will be our end are nothing new.

"Some threats strike me as the 21st-century version of the Y2K bug," he says, referring to the mistaken panic that because of a flaw, dates with the year 2000 and beyond would cause computers around the world to go haywire.

"This includes the possibility that we will be annihilated by artificial intelligence, whether as direct targets of their will to power or as collateral damage of their single-mindedly pursuing some goal we give them," writes Pinker in The Globe and Mail.

Intelligence does not necessarily translate to evil, says Pinker. And if humans are smart enough to create unbelievably intelligent machines, he argues, they are also smart enough to test the technology before giving it control of the world.

Further, the idea of an AI that is smart enough to take over the world yet dumb enough to do so by accident is self-contradictory, Pinker says.

Other overhyped doomsday threats include mass starvation and resource scarcity, says Pinker.

Even when it comes to realistic threats such as nuclear war and climate change, Pinker is relatively optimistic that they can be mitigated with careful, diligent work.

"Unsolved does not mean unsolvable," says Pinker.

"Pathways to decarbonizing the economy have been mapped out, including carbon pricing, zero-carbon energy sources and programs for carbon capture and storage. So have pathways to denuclearization, including strengthening international institutions, de-alerting nuclear forces, stabilizing systems of deterrence and verifiably reducing (and eventually eliminating) nuclear arsenals," he says.

"The prospect of meeting these challenges is by no means utopian," says Pinker. "But we know that there is one measure that will not make the world safer: moaning that we're doomed."

See also:

Stephen Hawking says A.I. could be 'worst event in the history of our civilization'

Top A.I. experts warn of a 'Black Mirror'-esque future with swarms of micro-drones and autonomous weapons

Steve Wozniak explains why he used to agree with Elon Musk, Stephen Hawking on A.I. — but now he doesn't

This CEO wants to put a computer chip in your brain