On the Money

More to fear from cyber crimes, viruses than AI: Etzioni

Video: Rise of the robots

As man-made robots get smarter, will they eventually outpace man?

A few of the world's smartest technology leaders certainly think so. In recent days, they've taken to sounding the alarm bell about the potential dangers of Artificial Intelligence (AI).

Tesla CEO Elon Musk called AI "our biggest existential threat," while British scientist Stephen Hawking said AI could "spell the end of the human race." In January, Microsoft co-founder Bill Gates sided with Musk, adding, "[I] don't understand why some people are not concerned."


Yet on the other side of the argument are people like Microsoft co-founder Paul Allen. In 2013, he founded the Allen Institute for Artificial Intelligence in Seattle, whose mission is to advance the study of AI. The man who heads the organization thinks the fears are overblown.

"Robots are not coming to get you," said Allen Institute CEO Oren Etzioni. In an interview with CNBC, he said: "We quite simply have to separate science from science fiction."

Etzioni said Elon Musk and others may be missing the distinction between intelligence and autonomy. The former describes how capably a computer performs a narrow task, while the latter means machines think and operate independently.

Etzioni offered two Artificial Intelligence examples. In 1997, IBM's Deep Blue chess computer beat then world champion Garry Kasparov. In 2011, IBM's Watson supercomputer beat two champions on the game show "Jeopardy."

"These are highly targeted savants," said Etzioni. "They say Watson didn't even know it won. And Deep Blue will not play another chess game unless you push a button."

Etzioni said that the machines "have no free will, they have no autonomy. They're no more likely to do damage than your calculator is likely to do its own calculations."

More to fear from hacks and viruses?


However, Etzioni is alarmed about the autonomy of dangerous software, specifically cyberweapons and viruses. "These are autonomous systems being sent out over the internet. They can do a lot of damage."


He says a "vigorous discussion and ultimately careful safeguards" are needed to prevent widespread damage.

The Allen Institute for Artificial Intelligence employs about 30 scientists and researchers. Etzioni said, "We're starting to see some preliminary successes and some exciting steps forward."

One project is called Semantic Scholar. Etzioni describes it as a new search engine for scientific literature that can cut through the clutter of "literally hundreds of millions of academic papers." Eventually, the tool could assist doctors and scientists in their work, he said.

"Doctors are overwhelmed, they're busy and they don't have access to the latest information on the drug they're prescribing you," he added. The project will also "direct scientists to the right papers so they're up on the latest advances."

Etzioni also pointed to AI already in use at major tech firms. "We're starting to see bona fide and successful applications." He mentioned Google embedding the technology in its speech recognition systems, and Facebook in its facial recognition software.

Safer roads will be another benefit of Artificial Intelligence, according to Etzioni.

"When you think of driverless cars, there's a huge potential for these cars to save lives," he said. "By preventing accidents and by reducing congestion on highways."

Etzioni said people are focused on what happens if a driverless car hits somebody. "But right now we have people hitting each other." According to the National Highway Traffic Safety Administration, more than 30,000 people are killed in motor vehicle crashes every year.

"We can reduce that number using intelligent technology," he said.

On the Money airs on CNBC Sundays at 7:30 pm, or check listings for airtimes in local markets.