- Elon Musk was one of the first investors in DeepMind, helped set up the $1 billion OpenAI research lab, and frequently comments on AI.
- WWIII, "more dangerous than nukes," the "scariest problem": several people in the AI community told CNBC that they struggle to take Musk's AI comments seriously.
- The Tesla and SpaceX boss thinks that AI is more dangerous than nukes.
Tech billionaire Elon Musk likes to think he knows a thing or two about artificial intelligence (AI), but the research community thinks his confidence is misplaced.
The Tesla and SpaceX boss has repeatedly warned that AI will soon become just as smart as humans, and that when it does, we should all be scared because humanity's very existence will be at stake.
Multiple AI researchers from different companies told CNBC that they see Musk's AI comments as inappropriate and urged the public not to take his views on AI too seriously. The smartest computers can still only excel at a "narrow" selection of tasks and there's a long way to go before human-level AI is achieved.
"A large proportion of the community think he's a negative distraction," said an AI executive with close ties to the community who wished to remain anonymous because their company may work for one of Musk's businesses.
"He is sensationalist, he veers wildly between openly worrying about the downside risk of the technology and then hyping the AGI (artificial general intelligence) agenda. Whilst his very real accomplishments are acknowledged, his loose remarks lead to the general public having an unrealistic understanding of the state of AI maturity."
An AI scientist who specializes in speech recognition and wished to remain anonymous to avoid public backlash said Musk is "not always looked upon favorably" by the AI research community.
"I instinctively fall on dislike, because he makes up such nonsense," said another AI researcher at a U.K. university who asked to be kept anonymous. "But then he delivers such extraordinary things. It always leaves me wondering, does he know what he's doing? Is all the visionary stuff just a trick to get an innovative thing to market?"
CNBC reached out to Musk and his representatives for this article but has yet to receive a response.
Musk's relationship with AI goes back several years and he certainly has an eye for promising AI start-ups.
He was one of the first investors in Britain's DeepMind, which is widely regarded as one of the world's leading AI labs. The company was acquired by Google in January 2014 for around $600 million, earning Musk and other early investors, including fellow PayPal co-founder Peter Thiel, a tidy return on their investments.
But his motives for investing in AI aren't purely financial. In March 2014, just two months after DeepMind was acquired, Musk warned that AI is "potentially more dangerous than nukes," suggesting that his investment might have been made because he was concerned about where the technology was headed.
The following year, he went on to help set up OpenAI, a new $1 billion AI research lab in San Francisco intended to rival DeepMind, with a particular focus on AI safety.
Musk has another company that's looking to push the boundaries of AI. Founded in 2016, Neuralink wants to merge human brains with AI using a Bluetooth-enabled processor that sits in the skull and talks to a person's phone. Last July, the company said human trials would begin in 2020.
In many ways, Musk's AI investments have allowed him to stay close to the field he's so afraid of.
As one of the most famous tech figures in the world, Musk's alarmist views on AI can potentially reach millions of people.
A number of other tech leaders, including Microsoft's Bill Gates, believe superintelligent machines will exist one day, but they tend to be a bit more diplomatic when they air their thoughts to a public audience. Musk, on the other hand, doesn't hold back.
In September 2017, Musk said on Twitter that AI could be the "most likely" cause of a third world war. His comment was in response to Russian President Vladimir Putin who said that the first global leader in AI would "become the ruler of the world."
Earlier that year, in July 2017, Musk warned that robots will become better than every human at everything and that this will lead to widespread job disruption.
"There certainly will be job disruption," he said. "Because what's going to happen is robots will be able to do everything better than us ... I mean all of us. Yeah, I am not sure exactly what to do about this. This is really the scariest problem to me, I will tell you."
He added: "Transport will be one of the first to go fully autonomous. But when I say everything — the robots will be able to do everything, bar nothing."
Musk didn't stop there.
"I have exposure to the most cutting edge AI, and I think people should be really concerned by it," he said. "AI is a fundamental risk to the existence of human civilization."
The cutting edge AI he refers to is likely being developed by scientists at OpenAI, and possibly some at Tesla too.
Rather awkwardly, OpenAI has tried to distance itself from Musk and his AI comments on numerous occasions. OpenAI employees don't always like to see "Elon Musk's OpenAI" in headlines, for example.
Musk resigned from the board of OpenAI in February 2018 but he continued to share his punchy views on where AI is headed in public forums.
A spokesperson for OpenAI said he left the board to avoid future conflicts with Tesla.
"As Tesla continues to become more focused on AI, Elon chose to leave the OpenAI board to eliminate future potential conflicts. We are very fortunate that he is always willing to advise us."
Some researchers at places like Cambridge University's Centre for the Study of Existential Risk or Oxford's Future of Humanity Institute would agree with at least some of Musk's concerns.
But his comments in July 2017 were the final straw for some people.
In a rare public disagreement with another tech leader, Facebook CEO Mark Zuckerberg accused Musk of fear-mongering and said his comments were "pretty irresponsible."
Musk responded by saying that Zuckerberg didn't understand the subject.
Undeterred by the encounter, in August 2017, Musk called AI a bigger threat than North Korea and said that people should be more concerned about the rise of the machines than they are.
The prolific tweeter told his millions of followers: "If you're not concerned about AI safety, you should be. Vastly more risk than North Korea." The tweet was accompanied by a photo of a gambling poster that reads "In the end, the machines will win."
Zuckerberg isn't the only Facebooker to question Musk's AI views. Edward Grefenstette, a former DeepMind researcher, has criticized Musk's views on multiple occasions. "If you needed any further evidence that @elonmusk is an opportunistic moron who was in the right place at the right time once, here you go," he said on Twitter this month after Musk tweeted "FREE AMERICA NOW" in relation to the coronavirus lockdowns.
Yann LeCun, chief AI scientist at Facebook, has questioned Musk's AI views on more than one occasion. In September 2018, he said it was "nuts" for Musk to call for more AI regulation.
It's not just Facebookers who disagree with Musk on AI. Former Google CEO Eric Schmidt said in May 2018 that Musk is "exactly wrong" on AI.
In March 2018, at the South by Southwest tech conference in Austin, Texas, Musk doubled down on his comments from 2014 and said that he thinks AI is far more dangerous than nuclear weapons, adding that there needs to be a regulatory body overseeing the development of superintelligence.
These relatively extreme views on AI are shared by only a small minority of AI researchers. But Musk's celebrity status means they reach huge audiences, and this frustrates the people doing actual AI research.