What Jack Dorsey and Sheryl Sandberg taught Congress and vice versa

Farhad Manjoo
Sheryl Sandberg, chief operating officer of Facebook Inc., left, listens during a Senate Intelligence Committee hearing in Washington, D.C., U.S., on Wednesday, Sept. 5, 2018.
Andrew Harrer | Bloomberg | Getty Images

We are at a consequential juncture in the technology business and in society.

A handful of tech giants have become the guardians of global speech, amplifying certain kinds of voices and limiting others according to their own bespoke, often opaque and shifting standards. Although these companies — Facebook, Google and Twitter above all — are now integral to just about every corner of our lives, they face little regulatory oversight in the United States. It was only after the last presidential election that lawmakers even began expressing interest in the companies' power.

And yet, against the scale of the issues involved, the congressional hearings held so far about the companies and their clout have often felt petty, misinformed and sharpened for political point-scoring. And President Trump's assertion last week that Twitter, Facebook and Google are biased against conservatives only raised the stakes.

So when Jack Dorsey, Twitter's chief executive, took the hot seat in front of the House Energy and Commerce Committee on Wednesday, I worried we would spend the bulk of the time talking about conspiracy theories peddled by the provocateur Alex Jones. Surprisingly, that was not the case.

Even though Mr. Jones did show up to make a spectacle in the halls of Congress, the hearings in the House and the Senate — which, in addition to Mr. Dorsey, included Sheryl Sandberg, Facebook's chief operating officer in the Senate session — were for the most part serious and substantive. They touched on some of the deepest tech policy issues, and allowed executives to explain in depth the difficulties of managing their networks, and where and how they have failed to do so.

The hearings did not devolve into what many feared — a parade of ham-handed apologies from the executives for trying to police their networks in ways that affected certain political viewpoints. Although Mr. Dorsey apologized for an algorithmic bug that resulted in the disappearance of hundreds of thousands of users from Twitter's follower-suggestion box this year, neither he nor Ms. Sandberg was prepared to genuflect toward cheap claims of bias.

A cynic might point out that the hearings arrived at few solutions to the thorny problems they raised. Yet in highlighting the complexities of their systems, Mr. Dorsey's and Ms. Sandberg's testimony helped underscore the difficulties lawmakers will face in figuring out ways to ensure that tech companies are guarding people's political freedoms and privacy while protecting their services from those who would spread misinformation and propaganda.

Contrary to much of what is happening in Washington these days, Wednesday's tech hearings showed a political system earnestly wrestling with issues for which there are no easy answers. They also showed two companies, whether out of embarrassment or fear or brand management, seriously engaging with lawmakers' worries and guiding them through the thicket of issues involved.

But what about the third company, Google? And what about the big question: Are these hearings leading toward anything like a workable tech-policy solution to regulating how tech companies operate?

Let's try to answer them.

Google didn't show up. Big mistake.

Google is the world's largest digital advertising company, and it runs the world's largest search engine and video site, among lots of other things you use all the time. But it was represented in the Senate Intelligence Committee by an empty chair.

The committee had asked for a top executive to testify — either Larry Page, Google's co-founder and the chief executive of its parent company, Alphabet, or Sundar Pichai, the chief executive of Google. Google instead offered its chief lawyer, Kent Walker, who it argued was better versed in the issues. That didn't fly with the committee, hence the spectacle of the empty chair.

Google has spent years building a lobbying operation in Washington, which rests on a foundation of seriousness and good will; while upstarts like Facebook reveled in their break-things disruptive style, Google positioned itself as the grown-up in the room. It also had some positive facts on its side: Google's services were far less vulnerable to Russian misinformation during the 2016 election than Facebook's or Twitter's. And because it does not run a social network, Google could credibly argue that it was a less important vector for propaganda and social unrest. Mr. Pichai, who is no dummy, could have persuasively made that case to lawmakers.

But in declining to participate, Google left a sour mood in the Capitol. A parade of Democrats and Republicans in the Senate hearing noted the company's absence. Many raised questions about Google's recent actions — for instance, its decision to stop working with the military on artificial intelligence projects and its exploration of a censored search engine for the Chinese market — that the company had no way of defending.

In a statement, Google said of offering up Mr. Walker: "We had informed the Senate Intelligence Committee of this in late July and had understood that he would be an appropriate witness for this hearing."

Jack Dorsey overflowed with candor.

One worry among Twitter employees was that Mr. Dorsey would bow too far toward Republican lawmakers' trumped-up charges of bias against conservatives. He did not do that. What he did instead was more useful. In several answers, Mr. Dorsey explained the many ways in which Twitter had failed its users.

My notes from the hearings are full of instances in which Mr. Dorsey admitted serious flaws. "I believe if you were to go to our rules today and sit down with a cup of coffee, you would not be able to understand them," he said at one point.

He also said Twitter's verification system — the way it determines which users get blue check marks next to their names, signaling that they are V.I.P.s on the service — was broken and needed a lot of work. And he said that the company's system for reporting harassment asked too much of victims, and that the company took too long to take down disparaging content, like a doctored photo of Meghan McCain that made the rounds this weekend.

The candor seemed to work. Even some of the most skeptical lawmakers sounded impressed by Mr. Dorsey's willingness to engage. Representative Joe L. Barton, a Texas Republican who focused almost entirely on the theory that Twitter was limiting conservative voices, congratulated Mr. Dorsey on his appearance "without subpoena, and sitting there all by yourself — that's refreshing."

The regulatory future remains a mystery.

At the heart of these hearings was a perplexing question, one that was rarely addressed: What power does Congress have to regulate how tech companies manage their services?

Under Section 230 of the Communications Decency Act, the 1996 law that governs much of online conduct, tech firms enjoy broad immunity from liability stemming from what users post on their services. There's general agreement among people in tech that the law has been crucial to the internet's rise: Because they cannot be sued for defamation or libel for what people put online, companies like Facebook and YouTube were able to achieve planetary scale.

Would it be a good idea, now that these companies are so big and powerful, to limit that kind of immunity? Should Congress try to wrest more control over how these companies manage their services — perhaps to gain transparency and maybe, in the way the Federal Communications Commission once did with television and radio, to ensure some kind of "fairness" in the way they affect political speech?

A few lawmakers broached these ideas on Wednesday, but none wrestled with the central difficulties. Some of the thorniest involve the First Amendment, which prevents the government from dictating or censoring speech. Although tech companies, being private services, are free to limit any speech they want on their networks, it was unclear whether Congress had any basis for requiring certain fairness or speech policies from them.

"I just don't think we've begun to wrestle with the deep constitutional issues here," David Pozen, a constitutional law professor at Columbia Law School, said. He pointed out that tech companies themselves enjoyed a First Amendment right against the government's imposing rules on their services.

Kate Klonick, a professor at St. John's University Law School who has extensively studied tech companies' content policies, said she had consulted with several lawmakers on these questions.

"These are real, serious issues that some of us have been working on for a very long time, and they've been stewing and ripening without resolution," she said. "Maybe you can say, now that people are finally paying attention, that we are maybe stumbling toward some better understanding of what's involved."