Twitter unveiled new rules Tuesday addressing deepfakes and other forms of synthetic and manipulated media as politicians and academics continue to raise concerns about how misinformation could affect the 2020 U.S. presidential election.
Twitter will not allow users to "deceptively share synthetic or manipulated media that are likely to cause harm," its policy now states. Beginning March 5, it will start labeling some tweets containing synthetic or manipulated media to provide more context, the company shared in a blog post.
Altered media has obvious implications in an election year, when candidates' words are frequently parroted in ads and can be used to undermine them. Lawmakers have been searching for ways to hold tech companies accountable for the spread of misinformation. Last month, the House Subcommittee on Consumer Protection and Commerce held a hearing where experts shared warnings of both deepfakes and potential overregulation of tech platforms that host them.
Twitter will apply a three-pronged test to determine whether media violates its new policy and how it should be treated on the platform: Is the media synthetic or significantly altered? Is it shared in a deceptive manner? And is it likely to cause harm?
If the answer to all three questions is yes, Twitter said it is very likely to remove the content, though it refrained from making a blanket statement saying it would always do so.
Content that meets fewer of the criteria is more likely to be labeled as altered or fabricated. Twitter may also direct users to additional context, or show a warning before users retweet or like the post.
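The tiered outcomes described above can be read as a simple decision rule. The sketch below is purely illustrative: the function name, the scoring-by-count simplification, and the outcome labels are assumptions for clarity, not Twitter's actual enforcement logic, which the company said it applies case by case.

```python
# Illustrative sketch only: a simplified reading of the three-pronged test.
# Names and the count-based tiers are hypothetical, not Twitter's implementation.

def moderation_action(is_manipulated: bool,
                      shared_deceptively: bool,
                      likely_to_cause_harm: bool) -> str:
    """Map the three policy questions to a likely enforcement outcome."""
    criteria_met = sum([is_manipulated, shared_deceptively, likely_to_cause_harm])
    if criteria_met == 3:
        return "likely removal"              # all three answers are yes
    if criteria_met >= 1:
        return "label / warn / add context"  # some criteria met
    return "no action"                       # no criteria met

# e.g., a slowed-down video shared deceptively but judged unlikely to cause harm:
print(moderation_action(True, True, False))  # label / warn / add context
```

Note that even when all three answers are yes, Twitter said only that removal is "very likely," so the first branch is a tendency, not a guarantee.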
The language of the new policy is broad enough to let Twitter act on so-called "cheapfakes," relatively low-tech edits meant to deceive other users. The doctored video of Democratic House Speaker Nancy Pelosi that circulated on social media last year falls into this category: the video was simply slowed down to make her speech sound slow and slurred. More sophisticated deepfakes can transpose one person's face onto video of another, for instance, creating false impressions of that person's words or actions.
On a call with reporters, Twitter's head of site integrity Yoel Roth said that under the new policy, the doctored Pelosi video would be labeled as such. The company would then determine whether individual tweets containing the video merit removal based on their contents and likelihood to cause harm, Roth said.
Through another recent policy, Twitter has made exceptions to some of its standards for world leaders, claiming it's important for users to see and be able to debate their messages. But on the press call, Twitter's VP of trust and safety Del Harvey said the manipulated media policy "is really focused on the media itself. So if the media is altered or fabricated, regardless of who the individual is, the policy would still apply."
Twitter solicited comments on an initial draft of the policy last year and said it received 6,500 survey responses worldwide. It also turned to civil society and academic experts to help craft the language. It found respondents wanted Twitter to arm them with information about manipulated media by labeling and providing context around it. About 55% of U.S. survey respondents said it's acceptable to remove all synthetic or manipulated media, according to Twitter, but more than 90% of total respondents said Twitter should remove content when it's clearly meant to cause certain types of harm.
As lawmakers float regulation, tech companies have tightened their policies on manipulated media. Ahead of the January hearing where a Facebook executive testified, Facebook unveiled its own deepfake policy, which prohibits misleading media created by AI that "merges, replaces or superimposes content on to a video, making it appear to be authentic," excluding "parody or satire, or video that has been edited solely to omit or change the order of words." Facebook said at the time that the doctored Pelosi video would not meet the threshold for removal under its new policy.