- TikTok has pledged to do more to tackle hateful content and behavior on its platform.
- The commitment follows reports that TikTok has a neo-Nazi and white supremacy problem.
- TikTok has over 10,000 people worldwide working in trust and safety.
In a blog post Wednesday, TikTok said it will clamp down on coded language and symbols that some users adopt to try to spread hateful speech. While TikTok already tries to remove hate speech and hateful ideologies like neo-Nazism and white supremacy, it now plans to remove content around neighboring ideologies such as white nationalism and white genocide theory.
The company also said it will remove statements that have their origins in these ideologies, as well as content linked to movements like "Identitarianism" and male supremacy.
Like Facebook and Twitter, TikTok has already banned content on its platform that denies the Holocaust and other violent tragedies. However, it said it is taking further action to remove misinformation and hurtful stereotypes about Jewish, Muslim and other communities. That includes misinformation about well-known Jewish people and families who are used as proxies to spread anti-Semitism.
In July, the BBC found that TikTok's algorithm had promoted an anti-Semitic death camp meme. The company removed a collection of videos featuring a "sickening" anti-Semitic song, which had received 6.5 million views.
Content that promotes conversion therapy or the idea that no one is born LGBTQ+ will also be removed, TikTok said.
Danny Stone, chief executive of the Antisemitism Policy Trust, said in a statement: "TikTok has a large, and growing audience and an equally big responsibility that those using its platform not be served up hate materials."
"We are therefore pleased that the company is seeking to deepen its understanding and broaden its policies against antisemitism and other forms of racism and welcome the changes being announced today."
TikTok has over 10,000 people worldwide working in trust and safety, with many of them reviewing and moderating content uploaded to its platform. It also uses algorithms to flag and remove offending content.