Violent extremists, racists and Russian-backed propaganda machines have all used social media platforms to disseminate and perpetuate their ideologies. This year, after revelations that Russian-backed ads and fake news attempting to influence the 2016 presidential election were distributed on these platforms, tech behemoths like Facebook and Google have faced increasing criticism for not monitoring content more closely.
So how did this happen?
"One of the problems in the industry is that ... we came from a, shall we say, a more naive position — right? — that illegal actors and these actors would not be so active," says Eric Schmidt, the billionaire executive chairman of Alphabet, speaking about Google at the Halifax International Security Forum in November.
"But now, faced with the data and what we've seen from Russia in 2016 and with other actors around the world, we have to act" to remove the content, Schmidt says.
But the extent of that naivete is debatable. Case in point: Google-owned YouTube faced calls as far back as 2009 to take down jihadist propaganda videos featuring the well-known (and now-deceased) ISIS recruiter Anwar al-Awlaki, yet heeded them only recently. Google long argued, in part, that it was a technology platform and therefore not responsible for policing content. YouTube made a change this year so that such content would be automatically removed, according to The New York Times.