
Mark Zuckerberg testifies before Congress Tuesday — here are the tough questions we'd ask

Key Points
  • Mark Zuckerberg testifies before Congress starting Tuesday in a pair of hearings on Facebook's user privacy policies and handling of the Cambridge Analytica data leak.
  • It's the founder and CEO's first appearance before Congress, and it comes amid a firestorm of user concerns and government probes.
  • Zuckerberg has said he's "responsible for what happened," adding, "I started this place, I run it."

Mark Zuckerberg testifies before Congress starting Tuesday in a pair of hearings on Facebook's user privacy policies and handling of the Cambridge Analytica data leak.

It's the founder and CEO's first appearance before Congress, and it comes amid a firestorm of user concerns and government probes. The social media giant is facing questions after reports that research firm Cambridge Analytica improperly gained access to the personal information of as many as 87 million Facebook users.

In prepared testimony released Monday, Zuckerberg said that Facebook made a "big mistake," adding: "I started Facebook, I run it, and I'm responsible for what happens here." At least some lawmakers have publicly agreed.

Zuckerberg will appear at a joint hearing of the Senate Judiciary and Commerce committees Tuesday and at a hearing of the House Energy and Commerce Committee Wednesday.

After reading his testimony, here are some of the tough questions we'd like answers to:

Why didn't Facebook act sooner to address the data leak?

Facebook has said it first learned about Cambridge Analytica's unauthorized access in 2015, more than two years before the leak was made public in a pair of reports by The Observer newspaper in the U.K. and The New York Times.

The company claims it acted then to ensure the data had been deleted, but the news outlets alleged Cambridge Analytica still had the data as recently as last month.

Despite banning the app behind the leak from its platform in 2015, Facebook didn't suspend Cambridge Analytica until last month, and it didn't notify users that their data had been improperly accessed until this week.

So why didn't Facebook do more in the two years when it knew about the leak and we didn't?

How did Facebook ensure the data had been deleted and how will it do so going forward?

It's unclear whether Cambridge Analytica still has access to personal user information. Both Facebook and the research firm have said the data was deleted, but whistleblowers and news reports claim it still exists.

That's a pretty significant discrepancy. If the data still exists, Facebook likely doesn't know it, and its process for ensuring deletion was ineffective.

So, how exactly did Facebook ensure Cambridge Analytica had deleted the data? Does it still believe that process was effective, and will it continue to use it as it begins to audit every third-party app on its platform?

Can Facebook effectively audit all third-party apps?

Facebook outlined a number of steps it would take immediately following the reports of data mishandling, including an audit of all third-party apps to understand what data they were collecting and how that data was being used.

Just this weekend, the company suspended a second data analytics firm, CubeYou, after CNBC discovered it had been engaging in deceptive data practices.

So, does Facebook feel confident it has the resources and capacity to audit every third-party app, and will it be able to continue doing so for new apps that request access to user information?

How many firms are engaging in this sort of deceptive data collection and how might they be using the information? 

CNBC's report over the weekend that a second firm, CubeYou, was using tactics similar to Cambridge Analytica's suggests there may be more companies that have resold data or used it in ways that haven't been disclosed to consumers.

While advertising is the most obvious use for Facebook's user data, advertisers aren't the only ones who could benefit from some type of data collaboration. For example, CNBC has also reported that Facebook explored a project that asked several major U.S. hospitals to share anonymized data about their patients, such as illnesses and prescription information.

Facebook said that project never progressed "past the planning phase," but it raises the question of what other uses Facebook has thought up for user information.

So, how many firms are accessing data without user knowledge and has Facebook considered other data projects that haven't been publicly reported?

Does Facebook still deserve our data? 

For many Facebook users, the Cambridge Analytica leak was a stark realization of just how much information the company collects, and keeps, on them.

(If you haven't yet downloaded your Facebook archive, you can follow CNBC's steps here. And if you're less than comfortable with what you find, you can delete your data by following the steps here.)

COO Sheryl Sandberg has said a version of Facebook that doesn't require so much data would have to be a paid product — something users have begun to ask for but the company hasn't historically been eager to offer.

When Facebook was founded more than a decade ago, it would have been difficult to predict that it would become the steward of data on 2 billion people. The company has since shown, on multiple occasions, that its actions have unintended consequences.

So, should any one entity, let alone one that uses data for profit, be trusted with such a large responsibility? If Facebook can't protect our data, does it still deserve to have it?