Twitter defends itself after people used it to spread false stories about YouTube shooting

  • Twitter defended its current policies and said that it would do more to combat misinformation on the site following the shooting at YouTube's campus on Tuesday
  • As the events unfolded, Twitter accounts maliciously spread hoaxes and a witness's account was hacked
  • Twitter said it is exploring ways to identify malicious accounts and more quickly integrate human review
Jack Dorsey, CEO, Twitter | Getty Images

Twitter defended its current policies while saying it would invest more in combating misinformation, after fielding criticism over the fake accounts and hoaxes that spread during Tuesday's shooting at YouTube's campus.

Twitter has come under fire for fake news before, but the flood of bad actors was particularly severe this week. After documenting numerous examples of abuse, BuzzFeed declared the site "no longer a helpful place to follow breaking news."

  • People on 4chan coordinated to highlight false suspects.
  • Someone hacked and posted from the account of a YouTube employee who was on the scene.
  • Regular people spread unconfirmed reports.

In a blog post titled "Serving the Public Conversation During Breaking Events," Twitter said that in the past few months it has gotten better and faster at responding to these issues, but that it never wants to be an "arbiter of truth."

When it sees accounts "deliberately sharing deceptive, malicious information," it can suspend them or delete tweets under its policies against abusive behavior, hateful conduct, violent threats, and spam. After BuzzFeed published its painstaking documentation of hoaxes, many of the offending accounts and tweets were in fact deleted.

The company highlighted that it also tried to mitigate the problem by creating a Twitter "Moment" of trusted content within 10 minutes of the first tweets. Going forward, it said it would keep improving its technology to catch bad actors, detect automated accounts, and bring in human review more quickly.

Pairing human content moderation with algorithmic flagging has also become one of the main solutions touted by Facebook and YouTube, which have likewise become hotbeds for false information after tragedies.

"This work is ongoing," the company writes. "We are continuing to explore and invest in what more we can do with our technology, enforcement options, and policies – not just in the U.S., but to everyone we serve around the world."