Researchers warn Congress of the urgent threat of deepfakes and other digital deception

Key Points
  • At a House subcommittee hearing Wednesday, experts advised Congress on steps to combat deepfakes and dark patterns.
  • A Facebook representative testified at the hearing about the company's current policies around manipulation, including its newly released policy on deepfakes.
  • Some of the expert witnesses disagreed on the level of intervention lawmakers should take to ensure tech companies are responsibly handling deceptive posts on their platforms.
Chairwoman Jan Schakowsky, D-Ill., is seen during an Energy and Commerce Subcommittee on Consumer Protection and Commerce hearing in the Rayburn Building titled "Driving in Reverse: The Administration's Rollback of Fuel Economy and Clean Car Standards," on Thursday, June 20, 2019.
Tom Williams | CQ Roll Call | Getty Images

Lawmakers have become increasingly concerned about the potential havoc that manipulative media such as "deepfakes" could wreak on American society. But the steps they're willing to take to address the issue remain unclear.

With the presidential election coming up later this year, researchers have raised alarms about the urgency of addressing deepfakes and other forms of digital deception. On Wednesday, a House panel heard digital experts and a Facebook representative testify about what strategies tech companies are employing to combat deepfakes and what more should be done on the federal level.

Experts warned the Subcommittee on Consumer Protection and Commerce of the societal and national security implications of manipulated digital media, such as the potential to fabricate a remark by a politician or to lure opposing groups to the same real-world events, putting them in direct conflict.

The experts and lawmakers disagreed on the level of involvement needed from Congress to ensure tech companies responsibly patrol deceptive content on their platforms.

In her opening remarks, Chairwoman Jan Schakowsky, D-Ill., lamented Congress' "laissez-faire" approach over the past decade toward digital platform moderation.

"The result is Big Tech failed to respond to the great threats posed by deepfakes ... as evidenced by Facebook scrambling to announce a new policy that strikes me as wholly inadequate," Schakowsky said, referring to the policy Facebook released a day earlier banning highly manipulated videos created by artificial intelligence or machine learning.

Schakowsky noted the new policy would not cover the doctored video of House Speaker Nancy Pelosi that circulated on Facebook and other platforms. That video was simply slowed down to make Pelosi's speech appear slurred, and Facebook declined to remove it even after it had been viewed millions of times.
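The Pelosi clip illustrates why such "cheapfakes" are hard to police under an AI-focused rule: no machine learning is involved, only basic editing. As a rough illustration, the sketch below uses Python's standard subprocess module to drive the common ffmpeg tool, slowing an arbitrary clip to about 75% speed. The filenames and exact speed factor are hypothetical, and this is a generic slow-down sketch, not the actual edit applied to the Pelosi video.

```python
import subprocess

# Hypothetical filenames; assumes ffmpeg is installed and on PATH.
# setpts=1.33*PTS stretches the video timestamps to roughly 75% speed,
# and atempo=0.75 slows the audio to match without shifting its pitch,
# so the slowed speech simply sounds slurred. No AI or ML is involved,
# which is why edits like this fall outside a deepfake-only policy.
subprocess.run([
    "ffmpeg", "-i", "speech.mp4",
    "-filter:v", "setpts=1.33*PTS",
    "-filter:a", "atempo=0.75",
    "speech_slowed.mp4",
], check=True)
```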

WATCH: How easy is it to make a deepfake video?

At the hearing, Monika Bickert, Facebook's vice president of global policy management, confirmed the altered Pelosi video would not fall under the new deepfake policy but said it would still be subject to existing misinformation policies. Facebook previously said it limited the video's distribution in the News Feed and added context after a fact-checking partner rated it false.

Several Republicans, including ranking member Cathy McMorris Rodgers of Washington, urged caution toward legislation, warning of potential repercussions for consumers and the threat of China's rising sophistication in developing AI.

"As we discuss ways to combat manipulation online, we must ensure America will remain the global leader in AI development," McMorris Rodgers said. "There's no better place in the world to raise people's standard of living and make sure this technology is used responsibly."

Justin (Gus) Hurwitz, an associate law professor at the University of Nebraska, took a conservative approach to regulation in his testimony Wednesday. With regard to dark patterns — design choices that can influence a user's behavior, such as making one button large and colorful and another small and dull — Hurwitz said in his written testimony, "we need to be careful in how and why we regulate these practices, including understanding when and whether we should at all. In some cases, regulatory efforts may be better focused on other areas; in some cases, it may make more sense to allow the underlying technology and markets to continue to improve before stepping in with regulatory intervention; and in other cases still beneficial regulatory intervention may simply not be possible."
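For a concrete sense of how small these design choices can be, here is a minimal, hypothetical sketch using Python's built-in Tkinter toolkit: a consent dialog whose accept button is large and colorful while the decline option is small and dull. The dialog text and styling are invented for illustration and are not drawn from any testimony.

```python
import tkinter as tk

# Hypothetical consent dialog illustrating an asymmetric-choice dark
# pattern: both buttons do the same kind of thing (close the dialog),
# but the visual weight nudges users toward one of them.
root = tk.Tk()
root.title("Stay in touch?")

tk.Label(root, text="Get our daily deals newsletter?").pack(padx=24, pady=(24, 12))

# The "accept" path: large, bright and hard to miss.
tk.Button(root, text="Yes, sign me up!",
          bg="#2e7d32", fg="white",
          font=("Helvetica", 16, "bold"),
          padx=30, pady=12,
          command=root.destroy).pack(pady=4)

# The "decline" path: small, grey and visually buried.
tk.Button(root, text="no thanks",
          bg="#eeeeee", fg="#999999",
          font=("Helvetica", 8),
          padx=4, pady=2,
          command=root.destroy).pack(pady=(2, 24))

root.mainloop()
```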

If regulation is found to be necessary, Hurwitz wrote, it should target "specific design practices" or empower an agency like the Federal Trade Commission to identify practices that it believes violate the FTC Act. Rather than jumping to legislation right away, Hurwitz suggested allowing the FTC to use its rulemaking authority to regulate dark patterns, and have the agency tell Congress if further intervention is needed.

The two other experts on the panel, however, urged Congress to more actively rein in digital manipulation.

Joan Donovan, research director of the Technology and Social Change project at Harvard Kennedy School's Shorenstein Center, said it's important to ensure the FTC has access to all the information it needs to effectively investigate and audit tech companies, and that the agency should be able to "assess substantial injuries" in its investigations.

But the FTC alone may not have the capacity to deal with the "exponential" scale and issues of the tech industry, said Tristan Harris, executive director of the Center for Humane Technology and former Google design ethicist.

"This is why I'm thinking about how can we have a digital update for each of our different agencies who already have jurisdiction over ... public health or children or scams or deception, and just have them ask the questions that then are forced upon the technology companies to use their resources to calculate, report back, set the objectives for what they're going to do in the next quarter," Harris said. He also warned that centralizing this power in a new federal agency would take too long as these issues accelerate.

Harris also suggested creating an awareness campaign to "inoculate the public" against deception and misinformation, noting that the government released a propaganda film in the 1940s warning against fascism, though research has since questioned the effectiveness of that particular film. Harris even suggested that tech companies help distribute such a campaign.

Hurwitz said the campaign "runs the risk of being called a dark pattern if the platforms are starting to label certain content in certain ways."

As Schakowsky noted in her opening remarks, Section 230 of the Communications Decency Act, the law that shields tech platforms from legal liability for their users' content, was an undercurrent of the hearing. Bickert, the Facebook representative, emphasized that the law also allows the company to remove harmful content as it sees fit.

Rep. Greg Walden, R-Ore., who also called for Congress to revisit Section 230, said, "This hearing should serve as a reminder to all online platforms that we are watching them closely."

WATCH: The rise of deepfakes and what Facebook, Twitter and Google are doing to detect them
