Facebook's artificial intelligence still has trouble finding hate speech — but it finds a lot of nudity

  • Facebook found 2.5 million pieces of hate speech on its platform during the first three months of the year, but only 38 percent was flagged by its artificial intelligence.
  • It took down 21 million pieces of adult nudity and sexual activity during the period, 96 percent of which was flagged by its artificial intelligence.
Facebook co-founder, Chairman and CEO Mark Zuckerberg testifies before a combined Senate Judiciary and Commerce committee hearing in the Hart Senate Office Building on Capitol Hill April 10, 2018 in Washington, DC.
Zach Gibson | Getty Images

Despite Facebook's aggressive push to improve how it identifies and removes inappropriate content, the company admitted that its artificial intelligence still has a hard time finding hate speech.

In a blog post, the company said Tuesday it removed 2.5 million pieces of hate speech content during the first quarter of the year. However, only 38 percent of the problematic items were identified by its technology.

"It's partly that technology like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so important," Facebook vice president of data analytics Alex Schultz wrote in the blog post. "For example, artificial intelligence isn't good enough yet to determine whether someone is pushing hate or describing something that happened to them so they can raise awareness of the issue."

Facebook also said in the post that it took down 21 million pieces of adult nudity and sexual activity during the period, 96 percent of which was flagged by its artificial intelligence. It estimated that 7 to 9 of every 10,000 content views on the platform included such material. The company also took down, or labeled with a warning, about 3.5 million pieces of violent content, with its technology finding 86 percent of the questionable items.

It deleted 583 million fake accounts during the first three months of the year, most of which were disabled within minutes of being created. About 3 to 4 percent of its monthly active users are fake accounts, it said. In addition, Facebook removed 837 million pieces of spam during the same period, almost all of which was identified before any user reported it.

Facebook has said it will hire 10,000 more people to review content on its platforms by the end of 2018. The Wall Street Journal reported that Facebook's community operations and community-integrity team, which develops technology to find inappropriate items, asked for a budget of $770 million for this year. A source told the publication that CEO Mark Zuckerberg allocated even more than that amount. For comparison, the community operations team had a budget of only $220 million in 2017, according to the Journal.