The company said it removed more than 3.2 billion fake accounts between April and September, compared with more than 1.5 billion during the same period last year. Facebook also said it removed 11.4 million pieces of hate speech, compared with 5.4 million in the same six-month period in 2018.
For the first time, Facebook included its enforcement action on Instagram in the report. The company said it made progress in detecting child nudity and sexual exploitation on Instagram, removing more than 1.2 million pieces of content between April and September.
The company also added suicide and self-injury as a new category of harmful content. Between April and September, Facebook said it removed more than 1.6 million pieces of such content on Instagram and more than 4.5 million on the core Facebook app.
Facebook routinely provides updates on how it enforces its Community Standards, the rules that govern what content is allowed on the platform and whose violations can get users banned. Last year, Facebook said it made progress in taking down fake accounts and hate speech, as well as removing harmful content involving bullying and child nudity and sexual exploitation.
Facebook has taken steps to be more transparent about its enforcement decisions in the wake of the 2016 U.S. presidential election. The company has faced criticism for its failure to prevent election interference on the platform, including the spread of misinformation.
More recently, Facebook has come under fire for refusing to fact-check or remove political ads. The decision stood in stark contrast to its competitor Twitter, which banned political ads from its platform. CEO Mark Zuckerberg defended the policy on free speech grounds, while politicians including Sen. Mark Warner criticized the move, saying Facebook should be held to the same standards as local TV broadcasters.