Superintelligence — a form of artificial intelligence (AI) smarter than humans — could create an "immortal dictator," billionaire entrepreneur Elon Musk warned.
In a documentary by American filmmaker Chris Paine, Musk said that the development of superintelligence by a company or other organization of people could result in a form of AI that governs the world.
"The least scary future I can think of is one where we have at least democratized AI because if one company or small group of people manages to develop godlike digital superintelligence, they could take over the world," Musk said.
"At least when there's an evil dictator, that human is going to die. But for an AI, there would be no death. It would live forever. And then you'd have an immortal dictator from which we can never escape."
The documentary by Paine examines a number of examples of AI, including autonomous weapons, Wall Street technology and algorithms driving fake news. It also draws from cultural examples of AI, such as the 1999 film "The Matrix" and 2016 film "Ex Machina."
Musk cited Google's DeepMind as an example of a company looking to develop superintelligence. In 2016, AlphaGo, a program developed by the company, beat champion Lee Se-dol at the board game Go. It was seen as a major achievement in the development of AI, following IBM's Deep Blue computer defeating chess champion Garry Kasparov in 1997.
Musk said: "The DeepMind system can win at any game. It can already beat all the original Atari games. It is super human; it plays all the games at super speed in less than a minute."
The Tesla and SpaceX CEO said that artificial intelligence "doesn't have to be evil to destroy humanity."
"If AI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course without even thinking about it. No hard feelings," Musk said.
"It's just like, if we're building a road and an anthill just happens to be in the way, we don't hate ants, we're just building a road, and so, goodbye anthill."
Last year, Musk warned that the global race toward AI could result in a third world war. The entrepreneur has also suggested that the emerging technology could pose a greater risk to the world than a nuclear conflict with North Korea.
Musk believes that humans should merge with AI to avoid the risk of becoming irrelevant. He is a co-founder of Neuralink, a start-up that reportedly wants to link the human brain with a computer interface.
He quit the board of OpenAI, a non-profit organization aimed at promoting and developing AI safely, in February.