As man-made robots get smarter, will they eventually outpace man?
A few of the world's smartest technology leaders certainly think so. In recent days, they've taken to sounding the alarm bell about the potential dangers of Artificial Intelligence (AI).
Tesla CEO Elon Musk called AI "our biggest existential threat," while British scientist Stephen Hawking said AI could "spell the end of the human race." In January, Microsoft co-founder Bill Gates sided with Musk, adding, "[I] don't understand why some people are not concerned."
Yet on the other side of the argument are people like Microsoft co-founder Paul Allen. In 2013, he founded the Allen Institute for Artificial Intelligence in Seattle, whose mission is to advance the study of AI. The man who heads the organization thinks the fears are overblown.
"Robots are not coming to get you," said Allen Institute CEO Oren Etzioni. In an interview with CNBC, he said: "We quite simply have to separate science from science fiction."
Etzioni said Elon Musk and others may be missing the distinction between intelligence and autonomy. Intelligence means a computer can perform a narrow task extremely well; autonomy means a machine sets its own goals and acts independently.
Etzioni offered two examples of Artificial Intelligence. In 1997, IBM's Deep Blue chess computer beat then-world champion Garry Kasparov. In 2011, IBM's Watson supercomputer beat two champions on the game show "Jeopardy!"
"These are highly targeted savants," said Etzioni. "They say Watson didn't even know it won. And Deep Blue will not play another chess game unless you push a button."
Etzioni said that the machines "have no free will, they have no autonomy. They're no more likely to do damage than your calculator is likely to do its own calculations."