Covid-19 slowed Facebook's moderation for suicide, self-injury and child exploitation content

Key Points
  • Facebook on Tuesday disclosed that its ability to moderate content involving suicide, self-injury and child exploitation was impacted by the coronavirus from April through June.
  • Covid-19 limited the amount of content involving suicide, self-injury, and child nudity and sexual exploitation that the company reviewed on both Facebook and Instagram.
Facebook co-founder, Chairman and CEO Mark Zuckerberg testifies before the House Energy and Commerce Committee in the Rayburn House Office Building on Capitol Hill April 11, 2018 in Washington, DC.
Yasin Ozturk | Anadolu Agency | Getty Images

Facebook on Tuesday acknowledged that its ability to moderate content involving suicide, self-injury and child exploitation was impacted by the coronavirus from April through June.

Facebook also said it was unable to measure the prevalence of violent and graphic content, as well as adult nudity and sexual activity, on its services during this time, according to its latest Community Standards Enforcement Report. The number of content appeals Facebook was able to review during this period was "also much lower."

The company, which relies on a combination of artificial intelligence and human reviewers for its content moderation, was forced to operate with fewer of its human moderators throughout the early months of quarantine. That reduction in human reviewers limited the amount of content the company was able to act on, it said in the report.

"With fewer content reviewers, we took action on fewer pieces of content on both Facebook and Instagram for suicide and self-injury, and child nudity and sexual exploitation on Instagram," the company said in a blog post. "Despite these decreases, we prioritized and took action on the most harmful content within these categories. Our focus remains on finding and removing this content while increasing reviewer capacity as quickly and as safely as possible."

The company said that while its technology for content moderation is improving, "there will continue to be areas where we rely on people to both review content and train our technology."

Facebook CEO Mark Zuckerberg warned in May that the company's ability to properly moderate content had been impacted by Covid-19. 

Additionally, Facebook disclosed that it removed more than 7 million pieces of harmful Covid-19 misinformation from Facebook and Instagram from April through June. The company also placed warning labels on 98 million pieces of content containing Covid-19 misinformation on Facebook, Guy Rosen, Facebook's vice president of integrity, said on a conference call Tuesday. That's more than double the 40 million warning labels Facebook placed on Covid-19 content during the first quarter of 2020.

Some posts "push fake preventative measures or exaggerated cures that the CDC and other health experts tell us are dangerous," Rosen said.

Despite the pandemic's constraints on its human moderators, Facebook said it was able to improve in other areas through its AI technology. Specifically, the company said it improved its proactive detection rate for hate speech, terrorism, and bullying and harassment content.

The company claims many of its human reviewers are now back online moderating content from their homes. A small number of human reviewers are working in offices to review the most sensitive types of content, such as live video, Rosen said. 

"As the COVID-19 pandemic evolves, we'll continue adapting our content review process and working to improve our technology and bring more reviewers back online," the company said in a statement. 

Video (0:36): Facebook bans ads from a pro-Trump PAC

To get help: Call the National Suicide Prevention Lifeline at 1-800-273-8255 (TALK), 24 hours a day, seven days a week for free and confidential support.