- A shooting last week in Germany was streamed on Twitch, raising further questions about how internet platforms can be made safer.
- Earlier this year, the shooting of more than 50 people at mosques in New Zealand was streamed live on Facebook.
- Advertisers, who fund many of these platforms, say they're working to find solutions.
A shooting last week outside a synagogue in Halle, Germany, was amplified when video of it appeared on the streaming site Twitch and then spread to other sites.
The same thing happened in March after the shooting of more than 50 people at mosques in Christchurch, New Zealand. Facebook, Twitter, Reddit and Google's YouTube rushed to remove the content, but users were still able to find versions of the video hours after the companies had supposedly taken it down.
The apparent inability of the world's biggest tech companies (Twitch is owned by Amazon) to keep scenes of violent rampages from spreading widely is becoming a problem for brands that are spending increasing amounts of their advertising budgets on those very sites. A Twitch representative told CNBC in an email that there were no ads on the shooter's stream from Halle, and added that the company provides controls for advertisers to block content categories.
But even if ads aren't appearing in or alongside specific videos, the video platforms are financed substantially by ad dollars. In other words, ad-supported content helps subsidize all the ad-free stuff.
"Most of the web is funded by advertisers," said John Montgomery, global executive vice president of brand safety at GroupM, WPP's media agency network. "Advertising has a responsibility for the overall web."
Joshua Lowcock, global brand safety officer at media agency UM, a unit of Interpublic Group of Cos., said livestreaming platforms are now being recognized as tools for terror attacks. The viral spread of videos could incentivize violent acts among people looking for their 15 minutes of fame.
Lowcock, who also serves as UM's chief digital and innovation officer, said there are a few ways the platforms could behave differently to prevent that misuse. For example, there might be some automated due diligence performed on the content creator and the location where the action is taking place. If someone is streaming for the first time from a place of worship, school or hospital, there should be a different level of scrutiny on the video, he said.
"Livestreaming platforms need to start designating places as protected venues with different rules for livestreaming in those locations," Lowcock said.
Other controls Lowcock recommends are the use of delays to review certain content, like in the broadcast world, and stricter repercussions for failure to report criminal content.
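Lowcock's proposal can be read as a simple policy check at stream-start time. The sketch below is purely illustrative: the venue categories, function names and review levels are assumptions for the sake of the example, not anything the platforms have said they implement.

```python
# Hypothetical sketch of the venue-based scrutiny Lowcock describes.
# All names and review tiers here are illustrative assumptions.

PROTECTED_VENUES = {"place_of_worship", "school", "hospital"}

def scrutiny_level(venue_type: str, is_first_stream: bool) -> str:
    """Pick a review tier for a new livestream based on where it
    originates and whether the account has streamed before."""
    if venue_type in PROTECTED_VENUES and is_first_stream:
        # First-time stream from a protected venue: hold behind a
        # broadcast-style delay pending human review.
        return "manual_review_with_delay"
    if venue_type in PROTECTED_VENUES:
        # Known streamer, but still a sensitive location.
        return "automated_screening"
    return "standard"
```

A real system would need reliable venue classification (from location data or video analysis), which is itself the hard part; the point of the sketch is only that the policy logic Lowcock proposes is straightforward once that signal exists.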
Twitch said last week that it had shared the video hash, or a type of digital identifier, of the Germany incident with the Global Internet Forum to Counter Terrorism, a group that aims to keep extremist violence off digital platforms. Some credited that move with preventing the video from spreading even more widely.
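Hash sharing works by exchanging a compact fingerprint of the file rather than the video itself, so other platforms can match and block uploads without rehosting the content. The GIFCT's actual hash format is not public, so the sketch below stands in a plain SHA-256 file digest; real deployments may use perceptual hashes that survive re-encoding, which an exact digest does not.

```python
import hashlib

# Illustrative only: uses an exact SHA-256 digest as a stand-in for
# whatever fingerprint the hash-sharing consortium actually exchanges.

def video_hash(path: str) -> str:
    """Compute a SHA-256 digest of a video file, read in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_flagged(path: str, shared_hashes: set[str]) -> bool:
    """Check an upload's fingerprint against the shared blocklist."""
    return video_hash(path) in shared_hashes
```

The limitation this illustrates is also why videos keep resurfacing: trimming, re-encoding or watermarking a clip changes an exact digest, so matching depends on how robust the shared fingerprint is.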
But on Thursday afternoon, more than 24 hours after it had been posted, at least one video of the shooting could still be viewed on Twitter.
A Twitter spokesperson told CNBC that the company is "actively removing perpetrator-created content related to the attack" and is working with other hash-sharing members of the Global Internet Forum to Counter Terrorism. The company is committed to "disrupting the online spread of violent and extremist content," they said.
Eric Feinberg, an internet security researcher and founder of deep web monitoring firm GIPEC Worldwide who had also found the videos on Twitter, said that as of Monday, some online accounts that he'd analyzed no longer included videos from Halle but still had clips from Christchurch.
"There's no control of these things," he said. "Once something gets out to what I call the social media stratosphere, it goes anywhere it wants to go."
The industry isn't sitting still. The World Federation of Advertisers recently created the Global Alliance for Responsible Media to address these sorts of issues alongside the platforms in a way that isn't accusatory but acknowledges that it's tough to solve alone, GroupM's Montgomery said.
He said the big platforms, all of which are investing heavily in artificial intelligence, should by now be able to identify problematic content and act on it more quickly.
"My iPhone can recognize my face with glasses, without glasses, in the dark, in the light, from almost any angle, which is a very complicated thing," Montgomery said. Technology should be able to recognize a live shooter and quarantine the content somehow, he said. "There's work to be done about identifying it and getting it down."
For Twitch, advertiser trust is becoming more important as esports explodes in popularity; NewZoo expects the market to top $1 billion this year for the first time. Twitch grew primarily by letting users livestream video games for others to watch remotely, but the company now hosts sponsored tournaments and offers a premium subscription that includes digital item bonuses in video games.
Luxury brand Louis Vuitton has jumped into the space, partnering with Riot Games' League of Legends to create real-world and in-game items for its 2019 championship. That's just one example of the exploding interest from nonendemic brands in the segment, said Manny Anekal, who writes and consults about the business of esports.
George Popstefanov, founder and CEO of digital agency PMG, said safety at Twitch is a particularly sensitive matter because the site attracts so many kids.
"It's serious just because it's a global, massive audience and it's a younger audience as well," Popstefanov said. "It's kind of the future of our world."
Twitch's ad service has been improving but remains far less mature than Facebook's or YouTube's, he said.
"There's a lot more opportunity to create levers for control," said Popstefanov.