Facebook says it wants to help fix misinformation running rampant across the internet — a problem it may have helped create in the first place.
Facebook parent Meta announced a new AI-powered tool on Monday, called Sphere. It's intended to help detect and address misinformation, or "fake news," on the internet. Meta claims Sphere is "the first [AI] model capable of automatically scanning hundreds of thousands of citations at once to check whether they truly support the corresponding claims."
The announcement comes after years of criticism over Facebook's own role in allowing online misinformation to thrive and rapidly spread across the globe. Sphere's dataset includes 134 million public webpages, according to Meta's research team. It draws on that collective knowledge of the internet to rapidly scan hundreds of thousands of web citations in search of factual errors.
It's perhaps fitting, then, that Meta is training the AI model using entries on Wikipedia. According to Meta's announcement, Sphere is already scanning pages on the crowd-sourced internet encyclopedia to test its ability to flag sources that don't actually support the claims in the entry.
Meta also says that when Sphere spots a questionable source, it can recommend a stronger one — or a correction — to help improve the entry's accuracy.
"Wikipedia is the default first stop in the hunt for research information, background material, or an answer to that nagging question about pop culture," Meta said in a statement, noting that Wikipedia hosts more than 6.5 million entries in the English language alone and adds roughly 17,000 new entries to its pages each month.
The company also released a video demonstrating how Sphere works.
A Wikipedia spokesperson tells CNBC Make It that the internet encyclopedia is not officially partnering with Meta on its development of Sphere, and that none of Wikipedia's entries are being automatically updated. Meta also told TechCrunch earlier this month that there's no financial compensation going in either direction.
Existing automated systems could already identify pieces of information that lacked any citation. But Meta's researchers say singling out individual claims with questionable sources, and determining whether those sources actually support the claims in question, "requires an AI system's depth of understanding and analysis."
In a statement, Shani Evenstein Sigalov — a Tel Aviv University researcher and vice chair of the Wikimedia Foundation's Board of Trustees — called Sphere's training with Wikipedia "a powerful example of machine learning tools that can help scale the work of volunteers."
"Improving these processes will allow us to attract new editors to Wikipedia and provide better, more reliable information to billions of people around the world," Sigalov said.
Sphere marks Meta's latest effort to address online misinformation — while potentially deflecting criticism over the company's own role in allowing that misinformation to persist.
Meta has faced consistently harsh criticism over the past several years from users and regulators over the spread of misinformation on the company's social media platforms, which include Facebook, Instagram and WhatsApp. Former employees and leaked internal documents have added fuel to claims that the company has valued profits over battling misinformation, and Meta CEO Mark Zuckerberg has been called in front of Congress to discuss the problem.
Last summer, President Joe Biden accused the social media giant of "killing people" by allowing Covid-19 vaccine misinformation on its platforms to spread. The company pushed back, claiming that Facebook and Instagram were providing "authoritative information about COVID-19 and vaccines" to billions of users.
Correction: This story has been corrected to reflect that Meta is currently using Wikipedia as a training tool for Sphere.