Camille François is a veteran of Alphabet, most recently at Jigsaw, the company's technology incubator focused on geopolitical threats. She left the tech giant eight months ago to lead research and analysis at Graphika, a social network analysis company, where she was this week named chief innovation officer.
One of her first assignments there was to help spearhead a secretive project at the behest of the U.S. Senate Select Committee on Intelligence, alongside Graphika CEO John Kelly and a team of researchers from Oxford University. The report, released in December 2018, was the first to take a data-driven magnifying glass to how the Russians used social media networks like Facebook and Twitter to try to influence the 2016 election.
Inspired by the massive Senate project, François took a separate tack from the formal research and went down the troll-farm rabbit hole. She talked to some of the actual foot soldiers who worked on these influence campaigns to spread misinformation, harass people and magnify divisive news stories in an effort to sway voters. What she discovered could help countries fight foreign-sponsored attempts to influence elections in the future.
Trolls are people, too
"I was obsessively looking for sources and people to talk to, to bring the hacker mindset to the data security problem," she told CNBC in an exclusive interview. "I wanted to find those answers so that the detection system [for influence campaigns] could be improved."
What she found were people with a diverse set of experiences and opinions. But most of them eventually grew concerned with what they were doing.
In some cases, the workers — who were promised anonymity — had taken what they thought was a legitimate job that then changed into something more sinister.
In one case, a worker took a position helping manage the social media campaign of a local political candidate. After that candidate won, the individual stayed on staff, and soon saw their job description change.
"They realized their day-to-day was now harassing journalists and circulating race threats to opponents," she said. "The feeling was 'oh my God, I woke up into a job that I hadn't really signed up for.'"
Some had different motivations for sharing their stories. One "somewhat notorious hacker who is now in jail" sought to "redeem himself, and be sure he was using his skillset for good."
Some troll farm workers didn't hail from Russia, but from other locations, including India, Ecuador and Mexico. Some of them also worked on social media campaigns for legitimate corporate clients while simultaneously attempting to influence the election.
She learned that the trolls typically researched American life so they could more effectively pose as U.S. citizens online. One key trick was to watch American TV shows like House of Cards, she said.
François also obtained documents that essentially described the business model of a social media misinformation campaign.
The troll farm operation was "not unlike a very top-down, controlled social media strategy" of a large company, she said. "The idea is to mimic the diversity of a crowd of people who go onto social media. The manuals and guidelines they would receive — they would say, 'today on this topic, you are going to post the following rebuttals and use the following codes in your comments.'"
But the mimicry of a genuine, organic social movement breaks down when researchers examine the full scope of the data, François said. So the lessons from this research are likely to help identify fake, foreign-influenced social campaigns in the future.
Some of the most successful trolls were the ones with the longest-standing identities, said François. Some online identities, which were likely managed by multiple people or whole teams, had roots going back as long as a decade.
"I was really baffled by the length of the campaign," said François, "with some accounts being opened in 2009 and ending in 2018. You see these accounts, over multiple years, trying to target the American political conversation, and it's a very polarizing conversation."
Data scientists may not be able to get directly inside the heads of voters. But they can track how much American political conversation was swayed by these fake voices.
"The metric that people can sometimes miss is this: are the people that are doing this misinformation really reaching their audience or are they screaming to themselves in a corner? One useful comparison is looking at Russian influence in the [French] Yellow Vest movement versus Russian influence on the [American] presidential campaign. What really matters is, for the fake accounts that are commenting on a topic, how ingrained are they in the fabric of the communities they are trying to manipulate?" she explains.
By that measurement, Russia's 2016 campaign was genuinely successful, she said.
"What's really interesting is those fake accounts for the [American] election were very much integrated into the communities. From there, we can graph the American political conversation and reconstruct where those troll accounts really were. And they were right in the center."
That kind of success is rare. By contrast, Graphika's observations of the French Yellow Vest movement have shown that Russian influence campaigners have established a comparatively shallow presence. Those accounts are not firmly rooted in the middle of the wider movement's social activity, she explained. The political conversation there, she said, is not being redirected at the behest of troll accounts as it was, often, in the U.S.
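The "how ingrained" comparison François describes can be thought of as a network measurement: map who interacts with whom, then ask whether a suspect account sits at the center of multiple organic communities or on the fringe of one. Graphika's actual methodology is proprietary; the sketch below is only a toy illustration of the idea, with invented account names and a deliberately crude embeddedness score.

```python
from collections import defaultdict

# Toy interaction log: (account, counterpart) means the account replied to
# or was amplified by that counterpart. All names here are invented.
interactions = [
    ("troll_central", "a1"), ("troll_central", "a2"),  # embedded in community A
    ("troll_central", "b1"), ("troll_central", "b2"),  # ...and in community B
    ("troll_fringe", "a3"),                            # one tie, one community
]

# Hand-labelled community membership of the organic accounts.
community = {"a1": "A", "a2": "A", "a3": "A", "b1": "B", "b2": "B"}

def embeddedness(account):
    """Crude proxy score: total ties to organic accounts, weighted by how
    many distinct communities those ties span."""
    ties_per_community = defaultdict(int)
    for src, dst in interactions:
        if src == account:
            ties_per_community[community[dst]] += 1
    return sum(ties_per_community.values()) * len(ties_per_community)

print(embeddedness("troll_central"))  # 8: four ties across two communities
print(embeddedness("troll_fringe"))   # 1: one tie in one community
```

By a measure like this, the 2016-era accounts François describes would score high (many ties, spanning communities), while the Yellow Vest accounts would score low (few ties at the edge of one community). Real analyses use far richer graph metrics, but the intuition is the same.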
The insights will be helpful, she said, in Graphika's bid to stop misinformation and foreign influence campaigns in the U.S. and other jurisdictions in the future. "It's difficult to nurture a fake account. Trolls and troll farms invest a lot of time in creating influential accounts. As a result, it's very difficult to detect on first observation," she said. "But if you have a very strong measurement framework, and look into the structure of a coordinated, manufactured campaign, that's when you can ... identify these patterns."