Facebook is making good on its commitment to strip the social network of fake news stories — starting with a handy tool that allows users to flag anything they consider a hoax.
Starting today, Facebook's 1.8 billion users will be able to click the upper right-hand corner of a post to flag its content as fake news. Posts from legitimate news outlets can't be flagged.
Flagged stories will then be reviewed by Facebook researchers and sent on to third-party fact-checking organizations for further verification — or marked as fake.
"We believe in giving people a voice and that we cannot become arbiters of truth ourselves, so we're approaching this problem carefully," Adam Mosseri, Facebook's vice president of News Feed, said in a blog post.
Facebook's secret sauce — the algorithm that decides what gets the most prominence in News Feed — will also be tweaked.
If a story is being read but not shared, Mosseri said that may be a sign it's misleading.
"We're going to test incorporating this signal into ranking, specifically for articles that are outliers, where people who read the article are significantly less likely to share it," he said.
The next step in Facebook's plan to rid the site of fake news involves sending flagged stories to third-party fact-checking organizations, which include Snopes, Politifact, and Factcheck.org.
A group of Facebook researchers will initially have the responsibility of sifting through flagged stories and determining which ones to send to the fact-checking organizations. If a story is determined to be fake, it will be flagged as disputed and include a link explaining why.
These stories can still be shared, but you'll be warned before you do, and they'll be more likely to appear lower in News Feed. They also can't be promoted or turned into advertisements.
Fake news is making some creators a fortune, as a recent NBC News investigation uncovered.
While Facebook hopes these tools will be helpful, it's also aiming to hit purveyors of fake news where it hurts: the pocketbook.
"Spammers make money by masquerading as well-known news organizations, and posting hoaxes that get people to visit their sites, which are often mostly ads," Mosseri said.
"On the buying side we've eliminated the ability to spoof domains, which will reduce the prevalence of sites that pretend to be real publications. On the publisher side, we are analyzing publisher sites to detect where policy enforcement actions might be necessary," he said.
The new tools come a little more than one month after the fake news fiasco reached a boiling point post-election.
Days after Donald Trump was elected president, Facebook CEO Mark Zuckerberg said it was "pretty crazy" to think fake news could have influenced the election and warned Facebook "must be extremely cautious about becoming arbiters of truth."
Less than two weeks later, with the issue still simmering, Zuckerberg shared a more detailed account of projects he said were already underway to thwart the spread of misinformation.
The team at Facebook has made it clear it doesn't want censorship on the site and that these new tools are just one part of an evolving effort to combat misinformation. A Facebook representative said users can expect more testing and continued transparency on what Facebook is doing to tackle the problem.