What if robots, rather than developing intelligence that destroys the world, developed feelings instead? Would empathy prevent machines from enslaving mankind?
That thoughtful premise lies at the heart of "Chappie," a new science-fiction action film directed by Neill Blomkamp and written by Blomkamp and Terri Tatchell, the Academy Award-nominated screenwriting duo. While artificial intelligence is certainly nothing new to Hollywood—nor is a robot as a main character—"Chappie's" motif puts a new spin on an old trope.
Rather than having an artificially intelligent machine slaughter mankind, à la "The Terminator" or "The Matrix," the title character develops feelings and attaches itself to an unlikely group of parental figures. Movies like "Ex Machina" and Steven Spielberg's noirish 2001 effort "A.I. Artificial Intelligence" have also flirted with the idea of an empathetic robot, with mixed success.
This movie arrives at an auspicious juncture, as the world comes to terms with advances in technology that encroach on everyday human tasks. According to "Chappie's" screenwriters, however, the budding AI debate was largely tangential to the film's development.
Rather than serving as a commentary about robot sentience, "Chappie" philosophizes about how humans are often more depraved than the machines they fear, co-writer Tatchell told CNBC in a phone interview.
"We were speaking more about humanity and how we behave, similar themes as 'District 9'. We do not behave humanely," said Tatchell, who also co-wrote the Oscar-nominated "District 9" with husband Blomkamp. Instead of humanizing the concept of a sentient robot, "Chappie" aims to teach humans about behaving better toward one another. As for the timing of the movie—which hits screens as robotics is becoming a hot topic in business and science—Tatchell suggested that was a coincidence.
"We kind of got lucky with that one," Tatchell laughed.
For a movie with AI at its core, "Chappie" blithely skirts the science at the heart of robot empathy and intelligence. Deon Wilson, played by Dev Patel, writes the robot's operating code in a late-night session fueled by Red Bull and defiance, after his boss, played by Sigourney Weaver, warns him to abandon all efforts to imbue a damaged police robot with feelings. Yet unlike Skynet, "Chappie" tugs at the audience's heartstrings.
Regardless of whether robots display emotions, the idea of independently thinking machines has made prominent figures such as Tesla's Elon Musk, Bill Gates and Stephen Hawking sound alarm bells.
With increasing frequency, however, engineers are pushing back against the idea that autonomous robots pose a threat. Scientists say that franchises such as "The Terminator" and "The Matrix," along with the upcoming superhero blockbuster "Avengers: Age of Ultron," have habituated the public to think of robots as menacing.
Popular culture portrayals of evil robots "are very powerful," said Tim Oates, a professor of computer science at the University of Maryland and chief scientist at CircleBack, which uses AI technology in an application that consolidates and cleans up digital address books.
"The way those movies are, it would be boring to see a machine that plants crops," and engages in mundane tasks, Oates said. "From a Hollywood perspective, they make great movies, but it's not reflective of the way AI works."
In fact, Tinseltown glitz and glamour are largely what give life to the concept of a robot with feelings, the science of which is "very, very primitive" at this juncture, Oates said.
Currently, AI programs exist that can, in a very rudimentary way, mimic the human experience, but not in ways that suggest outright emotion, which is still an expressly mammalian quality.
"AI is based on a set of rules and how to react," said Wolfgang Fink, a professor at the California Institute of Technology and founder of the Visual and Autonomous Exploration Systems Research Laboratory.

"If you look at the system from the outside, it looks like it was intelligent," he said. "But if you are presented with a situation for which there are no rules, the system will not know how to react. For all intents and purposes, it will fall flat. A system governed by AI is not something we need to be afraid of."
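Fink's point can be illustrated with a toy sketch. The following is a hypothetical rule-based "AI" (the rules and names are invented for illustration, not drawn from the film or any real system): it reacts convincingly to situations it has rules for, and falls flat on anything else.

```python
# A toy rule-based system: each known situation maps to a canned reaction.
# The rules and situation names here are invented for illustration.
RULES = {
    "obstacle ahead": "stop",
    "path clear": "advance",
    "low battery": "return to dock",
}

def react(situation: str) -> str:
    """Look up a reaction; with no matching rule, the system 'falls flat'."""
    if situation in RULES:
        return RULES[situation]
    return "no rule found: system does not know how to react"

print(react("obstacle ahead"))    # a covered situation looks "intelligent"
print(react("fire in corridor"))  # an uncovered one exposes the lookup table
```

From the outside, the first call looks like decision-making; the second shows it is only a lookup table, which is the gap Fink describes between rule-following and genuine intelligence.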
In a 2006 paper, researchers at the University of Southern California stated flatly that "machines cannot feel and express empathy." But the scientists went on to suggest that a robot with emotions could ultimately benefit humans, adding that "creating robots capable of emulating empathy is a very important step towards having them [as] part of our daily lives."
That last part is what has the likes of Musk and Gates warning of an impending apocalypse. Even though the idea of a feeling robot is currently more science fiction than fact, some scientists insist that critics are overstating the risks of a self-aware Skynet.
"The people who are in the 'we should fear AI' camp make assumptions about the worst-case scenario and don't do a lot of probability on those things happening," said CircleBack's Oates, who put that probability at very low. At least for the foreseeable future, robot intelligence can't evolve beyond a certain defined set of tasks, he said.
"The machine would have to think 'I am an independent entity' and …that's an enormous step," he said.
"Machines we build like [IBM's] Deep Blue can play chess better than anyone on the planet, but can't recognize how checkers is played. Watson is better than any human at Jeopardy but can't do anything else," he said.