Why Elon Musk might be right about his artificial intelligence warnings

Elon Musk: Robots will take your jobs, government will have to pay your wage

Tesla and SpaceX CEO Elon Musk has repeatedly said society needs to be more concerned about safety with the increased use of artificial intelligence.

"If you're not concerned about AI safety, you should be," Musk recently tweeted.

Jenny Dearborn, chief learning officer at software solutions company SAP, agrees.

In fact, she says it's critical to educate ourselves on artificial intelligence and how to best use it.

"Artificial intelligence will be everywhere," she tells CNBC Make It. "It will be the most prevalent aspect of our society that won't be visibly seen. But it will be behind everything."

She says it will touch everything we do, from scanning badges and browsing the internet to using apps and home sensor systems. As a result, AI will constantly collect data and personal information, which companies can then run through algorithms to get a sense of your behavior, explains Dearborn.

"We need to use artificial intelligence to augment people, not replace them," she says. "And we need more people to have a voice in how we use that so we aren't being taken advantage of."

But who would be taking advantage of us and why should we care? Dearborn says that there are three companies at the forefront of artificial intelligence: Google, Facebook and . They "have the most to gain," she says.

"People need to be savvy as to their participation so they don't wake up one day and say, 'I've been a pawn,'" says Dearborn.

She uses health trackers, such as Fitbit, as an example: A health-care company could decide to use the data provided by fitness apps to learn more about people's health habits. "Then what if one day it decides to increase premiums for people who walk less than 10,000 steps per day?" she says.


Dearborn considers this an extreme, but plausible, situation. "Who makes that decision?" she asks. If we don't get involved in the discussion on artificial intelligence, it will be these large companies, she says.

Dearborn worries that people are happily giving away their privacy for convenience's sake. "They say, 'I'm fine with it.' But what happens if [a company] crosses the line?"

"Most companies are now holding more power than the government," says Dearborn. Big discussions about privacy, personal freedom and boundaries lie ahead, she says. Therefore, "education is key."

Dearborn says we should determine how we want this new economy to look, understand our role, participate in the conversation and be informed citizens. As a result, society at large can make informed decisions about how artificial intelligence is being "designed and oriented."

Musk agrees: "AI is a fundamental risk to the existence of human civilization in a way that car accidents, airplane crashes, faulty drugs or bad food were not — they were harmful to a set of individuals within society, of course, but they were not harmful to society as a whole."

Dearborn, who champions young people entering the tech industry, offers a solution: "We should be training young people to be critical of where we're going," she says. In fact, she argues that the path to making the greatest impact runs through tech, not politics, because that is where the major decisions are being made.

"If you want to do health care, major in tech with a health-care focus. If you want to be a neuroscientist, major in tech with that focus," she says.

Dearborn adds that our biggest responsibility is figuring out how society will use artificial intelligence.

She gives this example: Google has been experimenting with self-driving cars. But what if a self-driving car is about to crash and it has to make a mathematical decision between saving the driver and crashing into a crowd, or avoiding a crowd and crashing the car with the driver in it?


Although she admits this is a drastic case, Dearborn says people will soon have to answer these types of questions. Therefore, it's important to have a diverse group of people contributing to the conversation about the way artificial intelligence is made and applied.

"It's like creating a government structure or a social order. You need to ensure you have equal representation and diversity and inclusion in the people who are building these systems," she says.

Right now, she says, the tech space is made up mostly of white men, and they are the ones creating the rules that shape our moral and social principles.

"We can't take a laissez-faire approach," says Dearborn. "These are ethical guidelines and boundaries. Don't let a for-profit company make these decisions on their own."
