In a new study mapping how hate travels across the online world, researchers explored how hate groups thrive on social media even when they are banned and offered new solutions to dismantle them.
The study, published Wednesday in the journal "Nature," maps out a phenomenon lawmakers and social media companies have struggled to understand and contain. Shortly after a gunman massacred more than 50 people in two mosques in Christchurch, New Zealand, copies of his first-person video flourished on social media, with new ones popping up across Facebook, Twitter, YouTube and Reddit faster than they could be removed. When their content is removed, extremist groups have also been known to migrate from mainstream platforms to less-moderated sites such as 8chan.
The researchers, a multidisciplinary team from George Washington University and the University of Miami, identified groups they called "hate clusters" on Facebook and its Russia-based counterpart VKontakte. They then traced the paths from those groups to adjacent hate clusters to which users explicitly linked. The team chose to focus on extreme right-wing hate because, they said, it is globally prevalent and has been linked to recent real-world violence, but the method can be replicated for any type of hate group.
The study found that hate clusters often regenerate and spread across platforms and around the world, even when they are banned.
For example, after Facebook banned the KKK, nearly 60 KKK clusters continued to exist on VKontakte, according to the researchers. But after the Ukrainian government banned VKontakte, the clusters "reincarnated" back on Facebook with "KuKluxKlan" written in Cyrillic, which the authors said made it harder for English-language algorithms to catch.
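The evasion the authors describe can be sketched in a few lines: a keyword filter that only knows Latin spellings misses Cyrillic transliterations unless the text is normalized first. The blocklist and transliteration table below are illustrative assumptions, not the study's or any platform's actual detection code.

```python
# Minimal sketch of transliteration-aware keyword matching (illustrative only;
# the mapping covers just the Cyrillic letters needed for this example).

CYRILLIC_TO_LATIN = {
    "К": "K", "к": "k", "у": "u", "л": "l",
    "с": "s", "а": "a", "н": "n",
}

def canonical(text: str) -> str:
    """Transliterate known Cyrillic letters, lowercase, and collapse x/ks
    so Latin and Cyrillic spellings map to the same canonical form."""
    latin = "".join(CYRILLIC_TO_LATIN.get(ch, ch) for ch in text)
    return latin.lower().replace("x", "ks")

# Hypothetical blocklist, stored in canonical form.
BANNED = {"kukluksklan"}

def is_banned(name: str) -> bool:
    return canonical(name) in BANNED

print(is_banned("KuKluxKlan"))    # True: Latin spelling
print(is_banned("КуКлуксКлан"))   # True: Cyrillic spelling, caught after transliteration
```

A Latin-only filter would see the Cyrillic string as unrelated characters; normalizing to one canonical alphabet before matching is the standard remedy, though full coverage requires far larger confusable and transliteration tables than this sketch.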
The researchers proposed a series of actions social media companies can take to stop hate clusters in their tracks. The team is now working independently with an unnamed social media network on the problem, according to Neil Johnson, a physics professor at George Washington University and the lead author of the study. The researchers are also developing software that governments and regulators can use to identify hate clusters.
The proposed solutions focus on removing weaker players from the ecosystem and undermining the hate clusters from within. Johnson and his team suggest that, rather than attacking a highly vocal and powerful player, social media platforms remove smaller clusters and randomly remove individual members. Removing just 10% of members from a hate cluster would cause it to begin to collapse, the researchers say.
"The larger ones have power, they have people, they have money in them, so they'll turn around and sue if they are attacked," Johnson said. "So take care of the smaller ones, because they are the ones that will build in a few years into the larger ones."
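The pruning strategy above can be illustrated with a toy network simulation. The random graph below is a stand-in for a cluster's membership network, not the study's data, and the size and edge probability are arbitrary assumptions; the point is only to show how deleting a random fraction of members shrinks the largest connected group.

```python
import random

def random_graph(n, p, rng):
    """Erdos-Renyi-style graph as an adjacency dict (an illustrative
    stand-in for a hate cluster's membership network)."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def largest_component(adj):
    """Size of the largest connected component, via depth-first search."""
    seen, best = set(), 0
    for start in adj:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            node = stack.pop()
            size += 1
            for nb in adj[node]:
                if nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        best = max(best, size)
    return best

def remove_fraction(adj, frac, rng):
    """Delete a random fraction of nodes and all their edges."""
    doomed = set(rng.sample(sorted(adj), int(frac * len(adj))))
    return {u: {v for v in nbrs if v not in doomed}
            for u, nbrs in adj.items() if u not in doomed}

rng = random.Random(0)
g = random_graph(200, 0.02, rng)
before = largest_component(g)
pruned = remove_fraction(g, 0.10, rng)   # remove 10% of members at random
after = largest_component(pruned)
print(before, after)
```

In sparse networks like this, random removals sever bridging members as often as peripheral ones, so connectivity degrades faster than the 10% headcount loss alone would suggest, which is consistent with the researchers' argument that small, distributed removals are effective (and less legally fraught than banning a large group outright).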
The team also proposed having users engage members of hate clusters in an organic way and exploiting philosophical divisions between adjacent hate clusters. Two white supremacist groups they studied were significantly divided over whether Europe should be unified, for example.
Johnson said his background in physics compelled him to approach the question of how hate travels in a big-picture way. He compared the spread of hate online and the way it grooms violent attackers to the process of water boiling.
"All molecules are equally good and bad," he said. "The bubbles kind of cook up energy locally and some, they create molecules that have much more energy than the others locally. And they're the ones that pop their heads up above the water. And that's exactly what's happening."