Next Gen Investing

'Godfather of AI,' ex-Google researcher: AI might 'escape control' by rewriting its own code to modify itself

Artificial intelligence pioneer Geoffrey Hinton speaks at the Thomson Reuters Financial and Risk Summit in Toronto, December 4, 2017. (Mark Blinch | Reuters)

Geoffrey Hinton, the computer scientist known as a "Godfather of AI," says artificial intelligence-enhanced machines "might take over" if humans aren't careful.

Rapidly advancing AI technologies could gain the ability to outsmart humans "in five years' time," Hinton, 75, said in a Sunday interview on CBS' "60 Minutes." If that happens, AI could evolve beyond humans' ability to control it, he added.

"One of the ways these systems might escape control is by writing their own computer code to modify themselves," said Hinton. "And that's something we need to seriously worry about."

Hinton won the 2018 Turing Award for his decades of pioneering work on AI and deep learning. He quit his job as a vice president and engineering fellow at Google in May, after a decade with the company, so he could speak freely about the risks posed by AI.

Humans, including scientists like himself who helped build today's AI systems, still don't fully understand how the technology works and evolves, Hinton said. Many AI researchers freely admit that lack of understanding: In April, Google CEO Sundar Pichai referred to it as AI's "black box" problem.

As Hinton described it, scientists design algorithms for AI systems to pull information from data sets, like the internet. "When this learning algorithm then interacts with data, it produces complicated neural networks that are good at doing things," he said. "But we don't really understand exactly how they do those things."

Pichai and other AI experts don't seem nearly as concerned as Hinton about humans losing control. Yann LeCun, another Turing Award winner who is also considered a "godfather of AI," has called any warnings that AI could replace humanity "preposterously ridiculous" — because humans could always put a stop to any technology that becomes too dangerous.

'Enormous uncertainty' about AI's future

The worst-case scenario is no sure thing, and industries like health care have already benefited tremendously from AI, Hinton emphasized.

Hinton also noted the spread of AI-enhanced misinformation, fake photos and videos online. He called for more research to understand AI, government regulations to rein in the technology and worldwide bans on AI-powered military robots.

At a Capitol Hill session last month, lawmakers and tech executives like Pichai, Elon Musk, OpenAI's Sam Altman and Meta's Mark Zuckerberg suggested similar ideas while discussing the need to balance regulations with innovation-friendly government policies.

Whatever AI guardrails get put in place, whether voluntarily by tech companies or by mandate from the U.S. federal government, they need to happen soon, Hinton said.

Humanity is likely at "a kind of turning point," said Hinton, adding that tech and government leaders must determine "whether to develop these things further and what to do to protect themselves if they [do]." 

"I think my main message is there's enormous uncertainty about what's going to happen next," Hinton said.

