Indeed, without ethical guidelines, AI can adopt the worst parts of humanity. Microsoft's bot "Tay," for example, learning from the comments of online trolls, famously started tweeting racist responses and had to be shut down, according to reporting from Motherboard. And Google's DeepMind AI bots became aggressive when they were set to compete against each other, according to Quartz. The "smarter" the bots became, the more they aimed to attack their opponents, Quartz says.
Consider how a child would develop if they only watched the news, Holmes suggests.
"You have a child, you have a baby, and you lock it in a room and you put food through the door and all you do is let it look at the news channel. What do you think that human would look like? You would think my god that poor baby would be so distorted in its perception of the universe and the world," says Holmes to CNBC Make It.
"It would be like they are an alien visiting from another planet — and how can we have an expectation that if we did the same for AI that AI would just come out and be a well-formed Buddha and understand what humans really mean and what we are all about? I think it would be a very unrealistic in that belief."
Imbuing artificial intelligence with ethics is a process requiring nuance and sophistication.
"We have to train the AI in the same way that we train children. They start with an empty vessel and we put our ethics into them and talk to them about and guide them as to what is appropriate behavior," says Holmes.
"If we don't think about AI in the same way, I think we have the risk of creating a hugely powerful force that doesn't have an ethical grounding or frame work."