Facebook explains why its A.I. didn't detect the New Zealand mosque shooting video before it was viewed 4,000 times

  • In a blog post Wednesday night, Facebook explained why its artificial intelligence failed to detect a live streamed video of an attack on a mosque in New Zealand.
  • Facebook said its AI requires a large amount of content to learn patterns that would help it detect harmful content, which is difficult to do in the case of relatively rare mass shootings.
  • Facebook said it has already taken steps to beef up its detection process for live videos.
Facebook founder and CEO Mark Zuckerberg arrives to testify following a break during a Senate Commerce, Science and Transportation Committee and Senate Judiciary Committee joint hearing about Facebook on Capitol Hill in Washington, DC.
Saul Loeb | AFP | Getty Images

Facebook explained why its artificial intelligence tools failed to detect the video of the New Zealand mosque shooting, which was livestreamed on its site last week and viewed 4,000 times before being removed. A suspected gunman killed 50 people in attacks on two mosques in Christchurch.

Facebook removed the video after a user first flagged it 29 minutes after the stream began, the company said in a blog post Wednesday night. Several social media platforms removed the original video from their sites, but copies quickly reappeared faster than their moderation systems could keep up. Users also altered the video to evade automatic detection.

Facebook has relied on a mix of AI and human review to assess and remove content that violates its policies, and has largely succeeded at removing porn and terrorist propaganda from its site. But Facebook said in the post that training AI to detect mass shooting videos is harder than training it to detect nudity, because the AI needs a vast amount of content to learn from. On Tuesday, a congressman asked Facebook CEO Mark Zuckerberg and other tech leaders to brief lawmakers on how the New Zealand video spread while other terrorist content has been largely removed.

"[T]his particular video did not trigger our automatic detection systems," Facebook wrote. "To achieve that we will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare. Another challenge is to automatically discern this content from visually similar, innocuous content – for example if thousands of videos from live-streamed video games are flagged by our systems, our reviewers could miss the important real-world videos where we could alert first responders to get help on the ground."
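The data-scarcity problem Facebook describes can be made concrete with a toy sketch (not Facebook's actual system, and the numbers are invented for illustration): when harmful events are extremely rare, a naive classifier that labels everything "benign" scores near-perfect accuracy while catching nothing, which is why large volumes of positive examples matter for training.

```python
# Toy illustration of the rare-event problem (hypothetical numbers,
# not Facebook's system): a trivial "always benign" classifier.

def always_benign(_sample):
    """A degenerate classifier that never flags anything."""
    return "benign"

# Hypothetical stream: 9,990 innocuous videos and only 10 harmful ones.
stream = ["benign"] * 9990 + ["harmful"] * 10

predictions = [always_benign(s) for s in stream]

# Accuracy looks excellent because positives are so rare...
accuracy = sum(p == t for p, t in zip(predictions, stream)) / len(stream)

# ...yet not a single harmful video is detected.
caught = sum(p == t == "harmful" for p, t in zip(predictions, stream))

print(f"accuracy: {accuracy:.1%}")        # high, despite being useless
print(f"harmful videos caught: {caught}")  # zero
```

This is one reason detection systems are evaluated on recall for the rare class rather than raw accuracy, and why the blog post emphasizes needing "large volumes of data of this specific kind of content."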

Facebook said it will take steps to beef up its detection technology. The company said it used an "experimental audio-based technology which we had been building to identify variants of the video." It also said it will explore whether that technology can be applied to videos as they are livestreamed.

Facebook said it will also work to review livestreamed videos more quickly, as it already does for videos reported for depicting suicide. The company will expand its categories for accelerated review to cover videos like the one from New Zealand.

One strategy Facebook said would not be effective is adding a time delay to live videos. Given the sheer volume of daily broadcasts, the company said, a delay would not address the core problem and would only further slow the user reports that help it detect harmful content and alert police to criminal activity.
