
Critics slam study claiming YouTube's algorithm doesn't lead to radicalization

Key Points
  • A recent study said YouTube's algorithm favors left-leaning and politically neutral channels, and steers people away from radicalizing influences, in contrast to some media reports.
  • Experts on online radicalization and technology criticized the study, which has not been peer-reviewed, citing several shortcomings in its methodology.
  • One researcher pointed out that experts could do more productive research into how recommendation algorithms behave if companies opened their privately held data to researchers.
YouTube CEO Susan Wojcicki speaks during the opening keynote address at the Google I/O 2017 Conference at Shoreline Amphitheater on May 17, 2017 in Mountain View, California.
Justin Sullivan | Getty Images

A recent study that concluded YouTube's algorithm does not direct users toward radical content drew the ire of experts over the weekend.

In the self-published study, the researchers claim that, as of late 2019, YouTube's recommendation algorithm appears to be designed to benefit mainstream and cable news content over independent YouTube creators. The study, which was published last week and which CNBC previously reported on, also said that the algorithm favors left-leaning and politically neutral channels.

However, experts on online radicalization and technology criticized the study, which has not been peer-reviewed, citing several shortcomings.

While the study's co-authors analyzed a large data set of YouTube recommendations, they did so from the perspective of a user who was not logged in. This means the recommendations were not based on previously viewed videos and thus, experts say, failed to capture the individual experience of the algorithm and the personal nature of online radicalization.

[Embedded tweet from Arvind Narayanan]

Zeynep Tufekci, associate professor at the University of North Carolina School of Information and Library Science, pointed out that experts would be able to do more productive research into the behavior of algorithms if companies opened up the privately held data to researchers.

"Could we do a proper study of, say, the behavior of recommendation algorithms without the participation of the company? Yes," she wrote on Twitter. "It would need to be a panel study, and it would be expensive and difficult. It's doable, but like any complex phenomenon not cheap or simple."

[Embedded Twitter thread from Zeynep Tufekci]

Becca Lewis, a researcher of media manipulation and political digital media at Stanford and Data & Society, emphasized the shortcomings of scientific research into the subject.

[Embedded Twitter thread from Becca Lewis]

One author of the study, independent data scientist Mark Ledwich, said in a Medium post last week that the study shows "YouTube's late 2019 algorithm is not a radicalization pipeline." His co-author, Anna Zaitsev, a researcher at the University of California, Berkeley, said that she contributed to the study but not to last week's Medium post, according to reporter Chris Stokel-Walker.

Ledwich declined to comment but pointed to an essay Zaitsev posted to Medium on Monday, in which she acknowledged the study's limitations while arguing that an empirical examination of the algorithm is nonetheless valuable research.

"We think the purely empirical quantification of YouTubes' recommendations is meaningful and useful," Zaitsev wrote. "We believe that studying the algorithm might help inform more qualitative research on radicalization."

Ledwich went on to take aim at a series of reports from The New York Times. Reporter Kevin Roose pointed out that YouTube has publicly announced many changes to its algorithm since those reports were published. He also defended the reports as a portrayal of the personal experience of online radicalization rather than a quantitative probe of the algorithm.

"Personalized, logged-in, longitudinal data is how radicalization has to be understood, since it's how it's experienced on a platform like YouTube," he wrote on Twitter.

[Embedded Twitter thread from Kevin Roose]