- A political organization whose members include former U.S. Vice President Joe Biden is concerned that so-called “deepfakes” could be a threat to democracy.
- It developed an online quiz to see whether people found an AI-generated Trump impersonator more convincing than actors and comedians.
- The next step for the commission is building deepfake-detection software, rolling it out to journalists, and educating the public about the technology.
Artificial intelligence (AI) is getting frighteningly close to being able to mimic humans, and advances in the technology could be a major risk for democracies worldwide.
That's the worry held by the Transatlantic Commission on Election Integrity, a U.S.-European organization looking at combating interference in Western elections by hostile foreign actors.
It was set up last year by Anders Fogh Rasmussen, Denmark's former prime minister and ex-secretary general of NATO, and Michael Chertoff, former U.S. secretary of homeland security, and is part of Rasmussen's political foundation, the Alliance of Democracies. Members include former U.S. Vice President Joe Biden and ex-Estonian President Toomas Hendrik Ilves.
Using technology developed by London-based AI firm ASI Data Science, the pro-democracy group focused its attention on a new phenomenon in online communities known as "deepfakes," computer-generated video or audio made to look or sound as though someone is doing or saying something they have not. ASI gained attention earlier this year for its work with the British government on spotting and removing online Jihadist propaganda.
The commission and ASI recently developed an online quiz where users can listen to audio from human impersonators of President Donald Trump — including the voices of comedian Alec Baldwin on "Saturday Night Live" and award-winning Trump impersonator John Di Domenico — alongside algorithm-generated audio mimicking Trump's voice repeating their lines.
The idea is that, if people think computer-generated audio sounds anything like Trump, there's nothing to stop malicious actors exploiting the technology to influence opinion during an election.
Results from the quiz, which was taken by 267 people, showed that respondents consistently rated the AI-powered deepfake audio as closer to Trump than any of the human voice samples. More than 90 percent of people, for instance, found the algorithm-generated version more convincing than Baldwin's Trump impersonation on SNL.
The quiz is part of a campaign to raise awareness of deepfake technology and its ability to create convincing renditions of political figures as big as Trump. While interference in elections believed to be led by the likes of Russia and Iran has mainly taken the form of coordinated political influence campaigns on social media and fake news, the commission thinks that new advances in AI could prove worrisome come 2020, when the U.S. votes for a new leader.
Deepfakes come in the form of visual and audio representations that are meant to replicate a celebrity and put words in their mouth or superimpose them onto the body of another person.
They first gained notoriety through pornography, with users of the forum site Reddit using the technology to replace the faces of adult entertainers with those of famous actresses like Scarlett Johansson and Maisie Williams.
People used an app called FakeApp, built on top of TensorFlow, Google's open-source platform for developing AI algorithms, to create deepfakes for porn. More frivolous examples of the technology's use include inserting actor Nicolas Cage's face into various other movies, including "The Dark Knight Rises" and "Man of Steel."
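Face-swap tools in the FakeApp mold are commonly described as training a single shared encoder alongside one decoder per identity; at swap time, a face is encoded and then decoded with the *other* person's decoder. The sketch below is only a toy illustration of that structure, not a working model: the weights are random, there is no training step, and the layer sizes and names are hypothetical.

```python
import random

random.seed(0)

PIXELS = 64   # stand-in for a flattened face crop (hypothetical size)
LATENT = 8    # hypothetical latent-code size

def make_linear(n_in, n_out):
    # A random linear layer: one weight row per output value.
    return [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

def apply(layer, vec):
    # Matrix-vector product implementing the layer.
    return [sum(w * x for w, x in zip(row, vec)) for row in layer]

encoder   = make_linear(PIXELS, LATENT)   # shared by both identities
decoder_a = make_linear(LATENT, PIXELS)   # would be trained to rebuild person A
decoder_b = make_linear(LATENT, PIXELS)   # would be trained to rebuild person B

face_a = [random.uniform(0, 1) for _ in range(PIXELS)]  # dummy "face"

# The swap: encode A's face, then decode with B's decoder. Once trained,
# this yields A's pose and expression rendered with B's appearance.
latent  = apply(encoder, face_a)
swapped = apply(decoder_b, latent)
```

Because the encoder is shared, both decoders learn to read the same latent description of pose and expression, which is what makes the cross-decoding trick work after training.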
But experts are worried the nascent phenomenon could be a threat to democracy.
"We see deepfakes as the next generation of disinformation," Eileen Donahoe, a member of the commission and former U.S. ambassador to the United Nations Human Rights Council, told CNBC in an interview.
She said that governments were "not prepared" for interference in previous elections, making the risk of deepfakes being used in future votes even more worrying. High-profile votes including the U.K. Brexit referendum and the U.S. presidential election were plagued by claims of Russian meddling.
Donahoe said she was especially concerned by the prospect of Russia utilizing the technology, and by the potential for other "authoritarian-leaning" governments to follow the country's lead.
"The concern is that there will be a growing movement globally to undermine the quality of the information sphere and undermine the quality of discourse necessary in a democracy," she said.
"The endgame is to erode confidence in democracy as a form of governance, and this erosion of quality in the information sphere is the new tactic."
The quiz is an attempt to harness the viral nature of the internet. Social media platforms like Facebook are littered with quizzes and tests aimed at drawing users in; they proved to be part of the reason Facebook became engulfed in a scandal over the improper sharing of user data. The debacle began when the developer of a quiz app shared the data of 87 million people's profiles with controversial political consultancy Cambridge Analytica.
Explaining why the commission chose Trump as the subject of the quiz, Donahoe said: "He already has set the stage for people to be concerned about the erosion of trust in information." She suggested the project "takes advantage" of Trump's protests about fake news and disinformation.
Viral news site BuzzFeed recently tapped into this idea, creating a fake public service announcement using FakeApp that combined altered footage of former President Barack Obama speaking with the voice of comedian and filmmaker Jordan Peele uttering the words.
But that process was no mean feat. Rendering of the footage to make it look smoother "took more than 56 hours of automatic processing," BuzzFeed's David Mack reported at the time.
Creating the audio component of a deepfake is even more arduous, John Gibson, director of data science consulting at ASI Data Science, told CNBC.
"There's a difference between the visual and audio components of this," he said in an interview. "The visual stuff is actually dead easy."
He added: "In order to create sound that is persuasive, you need to create huge quantities of information per millisecond, and the technology there is just a little bit less developed."
The project used two hours of audio of Trump speaking, and took a matter of days to be put together. "It was a little toy that we put together fairly quickly," Gibson said.
The commission and ASI are now building a new Trump-imitating audio model designed to sound even more convincing, he added, and are spending around $650 per day on the computing power behind the improvements.
"Now that the project with the commission is properly rolling, we're going back to scratch and building a much better model that today sounds a lot like a robot, but will rapidly supersede the quality of the first model in the quiz," Gibson said.
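Some rough arithmetic illustrates Gibson's point about audio: even at a modest sample rate, a model must produce many distinct values every millisecond. The 16 kHz rate below is a common speech-model default assumed for illustration; the article does not state what rate ASI used, and the 30-day cost projection is likewise hypothetical.

```python
# Assumed sample rate: 16 kHz is a common default for speech models,
# not a figure reported for ASI's project.
SPEECH_RATE_HZ = 16_000
samples_per_ms = SPEECH_RATE_HZ // 1_000       # 16 values per millisecond

# The training corpus the article describes: two hours of Trump audio.
corpus_samples = 2 * 60 * 60 * SPEECH_RATE_HZ  # 115,200,000 samples

# The reported $650/day compute spend, projected over a hypothetical
# 30-day training run.
cost_30_days = 650 * 30                        # $19,500

print(samples_per_ms, corpus_samples, cost_30_days)
```

Compared with video, where 30 frames per second means a new image only every 33 milliseconds or so, speech synthesis must emit thousands of waveform values in the same interval, which is one way to read Gibson's remark that convincing sound requires "huge quantities of information per millisecond."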
There doesn't yet seem to have been a concrete example of a deepfake being used to imitate a political figure — and certainly not one as high-profile as Trump. But the commission is convinced the phenomenon will become a problem, and is aiming to push awareness, particularly ahead of the U.S. 2020 presidential election.
"You can imagine what the political consequences will be at a time when billions of people are connecting on networks in the palm of their hands," Nina Schick, an advisor at the commission, told CNBC in an interview.
Schick said she was "surprised" the technology was not deployed during this year's U.S. midterm elections. The November contest was affected by some interference, with Facebook claiming that Russia's Internet Research Agency — an organization that has been dubbed a "troll farm" — may have been tied to numerous accounts that tried to sway public opinion ahead of the vote.
She added that, at the moment, the barrier to entry for anyone to create deepfake content is "still relatively high." But the ease of access to open-source platforms and rising computing power mean that powerful foreign states like Russia might not be the only actors with the ability to influence elections.
"Technology is evolving really, really quickly, and without a doubt potentially malign actors will be putting a lot of resources into this, to the extent that in about 12-18 months' time you'll be able to create pretty convincing deepfakes on an app," Schick said.
"This has huge consequences because the ability for anyone to create misinformation means it's no longer just the practice of the state — literally your angry teenager can create it and disseminate it as well."
ASI's Gibson said that, given the "viral" nature of the internet, where inaccurate news can spread like wildfire, and given the greater persuasive power of audiovisual content over written news, the likelihood of deepfakes being used to influence the 2020 presidential election, even in a minor way, was high.
"I'd be really surprised if, in 2020, there wasn't a reasonable amount of background deepfaking going on," he said. "The thing that I'm most worried about is not that Russia produces a picture-perfect Trump video; I think it will be at local level or an important swing state."
The next step for the commission is the creation of a "toolkit" that makes deepfakes easy to spot, giving journalists the ability to detect deepfake material and educating the public about the technology. It is working with U.S. universities Stanford and Harvard, as well as London's UCL, to build the deepfake-detection software, and is looking to roll it out within the next 12 months, Schick said.
"Disinformation is generally not illegal in a democracy, and we don't want to inspire governments to move towards content-based regulation," Donahoe said.
"The most effective tool for combating disinformation's effect is to have the citizenry prepared and resilient to disinformation."