Elon Musk, Stephen Hawking and fearing the machine

Allen Wastler
When the man at the forefront of some of the most cutting-edge enterprises in the world warns you about, well, some potentially disastrous technological dangers, you should probably listen, right?

So pay attention to a warning from Elon Musk, the entrepreneur behind Tesla, PayPal and SpaceX. During an interview on CNBC this past week, he warned about artificial intelligence—you know, computers thinking for themselves.

"I think there's things that are potentially dangerous out there. ...There's been movies about this, like 'Terminator,'" he said on CNBC's "Closing Bell." "There's some scary outcomes and we should try to make sure the outcomes are good, not bad."

The comment is a bit ironic coming from him, since he just invested in an artificial intelligence company, Vicarious, a start-up working to enable machines to mimic the human brain.

"It's not from the standpoint of actually trying to make any investment return," he explained. "It's purely I would just like to keep an eye on what's going on with artificial intelligence."

Musk's warning is almost identical to that of another really, really smart guy—renowned physicist Stephen Hawking.

"Success in creating A.I. would be the biggest event in human history," Hawking wrote in a co-authored column in early May. "Unfortunately, it might also be the last, unless we learn how to avoid the risks."

He reiterated the warning recently in a very funny bit with comedian John Oliver on Oliver's new HBO show, "Last Week Tonight." Hawking pointed out that artificial intelligence could design improvements to itself and outsmart humans.

"I know you're trying to get people to be cautious there but why should I not be excited about fighting a robot?" asked Oliver.

"You would lose," said Hawking.

Not all smart folks are worried, however. Roger McNamee, a well-known investor in various technological efforts, poked fun at the notion.

"I don't think just being a billionaire means that the things you think out loud are important," the Elevation Partners co-founder said. "I would like to worry about the problems that are killing us today as opposed to the ones that may kill us in 20 years. If you want something to worry about, you know, just read the newspaper. ... There's a good chance we will have polluted the earth beyond repair way before they can get any of this A.I. stuff to work. So, I'm looking at it and going, 'Seriously? Let's keep our eye on the ball.'"

There's also the question of what anyone could actually do about it. By Musk's own admission, any artificial intelligence problem is likely to arrive, well, as a surprise.

"In the movie 'Terminator' they didn't expect some sort of 'Terminator'-like outcome," he pointed out. "It's like the Monty Python thing, nobody expects the Spanish Inquisition."

Indeed. And even if we, as a society, decided that artificial intelligence was worrisome enough to keep an eye on, how would we do it? Regulation? The fictional Cyberdyne Systems would have certainly sued over restraint of trade if some sort of A.I. commission tried to keep it from developing Skynet.

Jeez, you'd think these really, really smart guys would have some sort of answer. Or a suggestion even.

—Allen Wastler is managing editor of CNBC Digital. Follow him on Twitter @AWastler. You can catch his commentary here and on CNBC Radio. And check out his fiction.
