Google's YouTube unit says new software improved its ability to spot objectionable content sixfold within a matter of weeks, according to a report in Fast Company, as the company tries to calm advertisers and European leaders alarmed by some of its online fare.
YouTube and rival Facebook are racing to improve AI-powered technology that spots racist and other hate speech in their online videos, including content used by terrorists for recruiting. But as part of Google, YouTube may have an advantage: Google is making a big bet on AI — enough that CEO Sundar Pichai has called Google an "AI-first" company.
Both companies are moving toward an entertainment-industry business model, paying to produce content that they can use to sell video ads.
As they do, however, they've faced criticism from European leaders whose citizens have been killed in terrorist attacks.
YouTube also suffered a backlash from advertisers worried about the safety of their brands next to distasteful videos.
Facebook said last week it would use its own AI-powered software and hire more terrorism experts after leaders of the U.K. and France threatened new laws to punish companies whose content stays online long enough for terrorists to spread their message.
Facebook is hiring producers and assembling its own slate of shows, following YouTube into a market worth $500 billion a year.
But both are finding their move to ape Hollywood carries risks.