
In the same way there was a nuclear arms race, there will be a race to build A.I., says tech exec

Maschinenmensch (machine-human) on display at a preview of the Science Museum's Robots exhibition.
Getty Images | Ming Yeung

In the decades following World War II, the United States and the Soviet Union raced to stockpile nuclear weapons, each seeking supremacy in a self-protective buildup governed by the logic of mutually assured destruction. In 2017, the world is staring down the beginnings of a similar scenario, but the next-generation arms race will be fought for dominion over artificial intelligence.

So says Ryan Holmes, the founder and CEO of social media management company Hootsuite, which serves over 80 percent of Fortune 1000 companies among its more than 15 million users. Holmes is also an investor and self-described "future enthusiast."

Ryan Holmes, founder and CEO of Hootsuite
Photo courtesy Hootsuite

"In the same way that nuclear [fission] was a game-changer and nuclear weaponry was a game-changer, the actors that harness AI first are going to have an immense amount of power," Holmes tells CNBC Make It.

"We have a nuclear club and we have seen how hard some people want to try to get into the nuclear club. ... There will be a similar, I believe, AI club," says Holmes.

Others share his point of view. "The one who becomes the leader in this sphere will be the ruler of the world," Russian President Vladimir Putin said recently.

Tech titan Elon Musk has also warned that governments "will obtain AI developed by companies at gunpoint, if necessary," and that the global race to stockpile AI technology could spark a third world war.

Just what artificial intelligence will be able to do in the future is almost impossible to know. But there is an urgent sense that its potential could be dangerous.

"Right now we are trying to fathom what the impact of it will be," Holmes says to CNBC Make It. "We need to be very careful in terms of who achieves this and how they harness it and what that looks like," he says.

Private companies are already investing heavily in developing artificial intelligence, and they will continue to do so. "I think there is a strong probability that these AIs fall into the hands primarily of the major technology players, so you know Google will have a flavor of AI, as will Facebook, as will Microsoft, IBM, you know, all have bids in this," says Holmes.

Indeed, Facebook already uses artificial intelligence to determine what to show users in its main news feed and to filter spam out of chat messages. It has also started using artificial intelligence for speech recognition. And Google's People + AI Research (PAIR) program aims to improve interactions between humans and AI.

Regardless of where the different technologies ultimately emerge, it is the responsibility of their inventors to teach AI to make decisions according to a moral code, says Holmes. Robots cannot learn ethics from a data set, even a very large one, he says.

"If we set AI against what is the biggest data set out there right now which is social [media] data — we have billions of people contributing their feelings, thoughts and experiences into social data right now — and if we run AI across that and say, 'This is the human experience. Now, go and be a human.' Or, 'Try to translate what it means to be a human,' I think there is a big risk."


Indeed, without ethical guidelines, AI can adopt the worst parts of humanity. Microsoft's bot "Tay," for example, learned from the comments of online trolls and famously started tweeting racist responses before it had to be shut down, according to reporting from Motherboard. And Google's DeepMind AI bots became aggressive when they were set to compete against each other, according to Quartz. The "smarter" the bots became, the more aggressively they attacked their opponents, Quartz reported.

Consider how a child would develop if it watched only the news, suggests Holmes.

"You have a child, you have a baby, and you lock it in a room and you put food through the door and all you do is let it look at the news channel. What do you think that human would look like? You would think my god that poor baby would be so distorted in its perception of the universe and the world," says Holmes to CNBC Make It.

"It would be like they are an alien visiting from another planet — and how can we have an expectation that if we did the same for AI that AI would just come out and be a well-formed Buddha and understand what humans really mean and what we are all about? I think it would be a very unrealistic in that belief."

Imbuing artificial intelligence with ethics is a process requiring nuance and sophistication.

"We have to train the AI in the same way that we train children. They start with an empty vessel and we put our ethics into them and talk to them about and guide them as to what is appropriate behavior," says Holmes.

"If we don't think about AI in the same way, I think we have the risk of creating a hugely powerful force that doesn't have an ethical grounding or frame work."

See also:

Elon Musk: 'Robots will be able to do everything better than us'

A lesson in leadership: Elon Musk spends weekend responding to Tesla customers, admits 'foolish oversight'

Elon Musk: Governments will obtain AI technology 'at gunpoint' if necessary

If robots take your job, the government might have to pay you to live
