YouTube to use third parties to report ad misplacement in latest brand safety measures

Google-owned YouTube is to use third parties to help marketers prevent their advertising from appearing next to extreme and other "objectionable" content on the site.

"As part of our commitment to provide even more transparency and visibility to our advertising partners, we'll be working with trusted vendors to provide third party brand safety reporting on YouTube," a Google spokesperson said in an emailed statement today.

"We are working with companies that are MRC (Media Ratings Council)-accredited for ad verification on this initiative and will begin integrating these technologies shortly."

YouTube has come under fire from advertisers in the past few weeks after the Times of London reported that ads were appearing against – and therefore potentially funding – videos promoting hate. Several U.K. brands, including Marks & Spencer, HSBC and L'Oréal, pulled their ads from YouTube, and last month Google's EMEA boss Matt Brittin apologized for the misplacement of their advertising. U.S. companies including Johnson & Johnson, AT&T and Lyft also removed ads.

Google has also announced that it will hire "significant numbers of people" to review questionable content, and it has added more controls for brands to manage where their ads appear on YouTube.

In an email to CNBC, a Google spokesperson said YouTube had launched the following:

  • New machine-learning systems which help enforce our revised policies, identifying content that may be objectionable to advertisers.
  • New rapid response path reducing the review time for flagged videos to just a few hours.
  • New default settings which meet a higher level of brand safety for where ads can appear on YouTube.
  • New account-level controls that let advertisers exclude specific sites, channels and videos across all their campaigns to simplify brand safety management.
  • Additional sensitive subject classifiers to make it easier for brands to exclude high risk content and fine-tune where they want their ads to appear.

This is the latest in a series of moves by internet companies to prevent extreme or fake content from appearing on their sites. Facebook is working with fact-checking companies to highlight questionable stories as "disputed" and letting users mark posts as "fake news," while Twitter has changed its default profile image from an egg to a human head silhouette, partly to reduce trolling, it said Friday in a blog post.
