
Trolls will use fake videos and other new tricks to try to sway the 2020 election, warns Alphabet researcher

Key Points
  • Social media companies are doing a good job of ridding their platforms of trolls, says Yasmin Green, director of research and development for Alphabet's Jigsaw subsidiary. But that only means trolls are moving to a far wider spread of online platforms.
  • Deepfakes may be concerning, but foreign influence campaigners have shown they can trick Americans into creating real videos that they later use out of context as part of a wider propaganda effort.
  • Researchers are looking for the most effective ways to "inoculate" consumers against fake videos and false information, Green says, with mixed results.
Yasmin Green, director of research and development for Google's Jigsaw.

There's a race to "inoculate" consumers against false and inflammatory content ahead of the 2020 elections, according to the head of R&D for an Alphabet subsidiary that monitors online disinformation. But two trends are going to be particularly hard to stop: "deepfake" videos, in which footage is altered to make speakers appear to say things they never said, and propagandists using real videos out of context.

Yasmin Green is the director of research and development at Jigsaw, an Alphabet subsidiary created to monitor abuse, harassment and disinformation online. She was speaking on a panel of experts in disinformation at the Aspen Institute Cyber Summit in New York on Wednesday.

Election influence is likely to be pushed through different channels, on different websites and using different techniques than in 2016, Green said. Social media companies and researchers such as those at Jigsaw are working both to pinpoint these new or expanded techniques and to find "interventions" that protect free speech while alerting consumers to questions about the authenticity of what they're consuming.

"I'm not as worried about faked accounts at this time," Green said, referring to the popular fake social media accounts that were started sometimes years in advance of the 2016 election on Twitter and Facebook and were used to sow discord among voters. Social media companies are doing a better job of removing those accounts, and would-be trolls are now having to "start from scratch."

"I do commend Facebook, and I see them doing a lot," she said.

Instead, consumers should expect trolls to use a far wider variety of platforms in the upcoming elections, especially ones run by companies that lack the strong advertising businesses of the social media giants.

'Inoculating' users

Jigsaw and other researchers have been trying out different methods of warning consumers about altered, fake or false content before they view it, she said.

Results of these interventions have been mixed.

In one study, researchers showed a group of participants a "deepfake" video of a comedy routine by actor Walter Matthau that had been altered to superimpose the face of former President Richard Nixon. The researchers told all viewers that the video was fake.


Even after being told, only around one-third of the participants correctly identified it as fake.

Further, 17% of participants answered "yes" to the question, "Were you familiar with Richard Nixon's background in comedy?" Nixon did not have a background in comedy.

Green described another recent research project conducted by Jigsaw, in which one group of participants was told how disinformation campaigns and propaganda work before being shown a propaganda video, while a second group watched the video first and learned how disinformation and propaganda work afterward.

The participants who learned about disinformation first were far less likely to believe the video, she said, suggesting that it's possible to inoculate users against fakes.

Real videos, false pretenses

Green also cited the story of Brooklyn civil rights activist and martial arts instructor Omowale Adewale as an example to consider in advance of the elections.

In that incident, Adewale was approached by a group that said it was involved in charitable initiatives supporting African-American civil rights and wanted to offer free self-defense courses in the community. The organization paid Adewale to provide the free training and sent him "swag," including logo t-shirts he could wear in videos, Green said.

"For him, it felt very much in line with his social justice and activism," she said.

But promised meetings about the goals for the organization didn't materialize. Eventually, Adewale learned the group was a Russian front organization and it had been using the real videos out of context to create propaganda.

Adewale's case has Green particularly concerned about "Americans either knowingly or unknowingly [creating] real videos that are out of context and used to manipulate people at the other end."

Another panelist, reporter Nina Jankowicz, said she also worries that the now-widespread knowledge of how foreign influence campaigns work will crop up domestically in the next election.

Jankowicz pointed to an "astroturfing" technique, the use of fake online profiles to simulate grassroots support, employed by a Senate candidate in Massachusetts, which she reported on for BuzzFeed. The case, she said, reflects a concerning trend of even American candidates and groups deploying tactics similar to those used by foreign influence campaigners in 2016.

"The Russian playbook has been split wide open not only for other foreign actors but also domestic actors," Jankowicz said.
