Facebook AI researcher slams 'irresponsible' reports about smart bot experiment

Key Points
  • The research that prompted dramatized reports in the past few days came out in June.
  • While Facebook did refine the system to keep the agents speaking in normal English, the experiment itself ran to completion; it was not shut down.

Artificial intelligence researchers have in recent days been speaking out against media reports that dramatize AI research conducted by Facebook.

An academic paper that Facebook published in June describes a normal scientific experiment in which researchers, after showing two artificial agents conversations of humans negotiating, had the agents negotiate with each other in chat messages. The agents gradually improved their performance through trial and error.

But in the past week or so, some media outlets have published reports on the work that are alarmist in tone. "Facebook shuts down robots after they invent their own language," London's Telegraph newspaper reported. "'Robot intelligence is dangerous': Expert's warning after Facebook AI 'develop their own language,'" as London's Sun tabloid put it.

At times some of the chatter between the agents did deviate from standard English. But that wasn't the point of the paper; the point was to get the agents to negotiate effectively. The researchers completed their experiment, and indeed they noticed that the agents even figured out how to pretend to be interested in something they didn't actually want, "only to later 'compromise' by conceding it," Mike Lewis, Denis Yarats, Yann Dauphin, Devi Parikh and Dhruv Batra of Facebook's Artificial Intelligence Research group wrote in the paper.

On Monday evening Batra weighed in on the situation in a Facebook post:

While the idea of AI agents inventing their own language may sound alarming/unexpected to people outside the field, it is a well-established sub-field of AI, with publications dating back decades.

Simply put, agents in environments attempting to solve a task will often find unintuitive ways to maximize reward. Analyzing the reward function and changing the parameters of an experiment is NOT the same as "unplugging" or "shutting down AI." If that were the case, every AI researcher has been "shutting down AI" every time they kill a job on a machine.
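Batra's point about reward maximization can be illustrated with a toy sketch (hypothetical code, not Facebook's system): when the reward function scores only the outcome of a negotiation and never the fluency of the messages, even a crude search over message strings drifts away from English.

```python
import random

# Hypothetical vocabulary for a toy negotiation message
VOCAB = ["i", "want", "ball", "book", "hat", "you", "."]

def reward(message):
    # Hypothetical reward: score counts how many times the high-value item
    # "ball" is claimed -- English fluency is never part of the reward.
    return message.count("ball")

def hill_climb(steps=300, seed=0):
    # Start from a fluent English-like message and randomly mutate it,
    # keeping any change that does not lower the reward.
    rng = random.Random(seed)
    msg = ["i", "want", "ball", "you", "want", "hat"]
    for _ in range(steps):
        cand = msg[:]
        cand[rng.randrange(len(cand))] = rng.choice(VOCAB)
        if reward(cand) >= reward(msg):  # ties accepted: language can drift
            msg = cand
    return msg

print(hill_climb())  # drifts toward repetitive "ball" tokens
```

Because nothing in the reward penalizes losing fluency, the search has no pressure to keep the message grammatical. That is the sense in which agents "find unintuitive ways to maximize reward," and why adjusting the reward function or parameters, as Facebook's researchers did, is routine tuning rather than "shutting down AI."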

Batra called certain media reports "clickbaity and irresponsible." What's more, the negotiating agents were never used in production; the work was simply a research experiment.

Other researchers have been critical of the fear-mongering reports on social media in recent days.



Researchers at Alphabet and Elon Musk-backed OpenAI are among those who have recently explored the field of agent-to-agent chat -- one of many areas where AI is being applied today -- and at times the agents have developed their own styles of communication, which researchers have been able to subsequently modify.

Notably Facebook released the underlying software and data set for its experiment alongside the academic paper. In other words, if Facebook were trying to do something in secret, this wasn't it.