
Facebook’s chief security officer let loose at critics on Twitter

Key Points
  • Facebook uses algorithms for everything from determining what you do and don't see in News Feed to finding and removing content like hate speech and violent threats.
  • As chief security officer, Stamos is spearheading the company's investigation into how Kremlin-tied Facebook accounts may have used the service to spread misinformation during last year's U.S. presidential campaign.
  • Most Silicon Valley tech companies are notorious for controlling their own message.
Alex Stamos (Photo: Brendan Moran | Getty Images)

Silicon Valley executives don't usually say much publicly, and when they do, their comments are typically measured and approved by the company's public relations team.

Today was a little different. Facebook's chief security officer, Alex Stamos, took to Twitter to deliver an unusually raw tweetstorm defending the company's software algorithms against critics who believe Facebook needs more oversight.

Facebook uses algorithms for everything from determining what you do and don't see in News Feed to finding and removing content like hate speech and violent threats. The company has been criticized in the past for relying on these algorithms — rather than humans — to monitor its service for things like abuse, violent threats, and misinformation.

The algorithms can be fooled or gamed, and part of the criticism is that Facebook and other tech companies don't always seem to appreciate that algorithms have biases, too.

Stamos says these problems are hard to understand from the outside.

"Nobody of substance at the big companies thinks of algorithms as neutral. Nobody is not aware of the risks," Stamos tweeted. "My suggestion for journalists is to try to talk to people who have actually had to solve these problems and live with the consequences."

Stamos's thread is all the more interesting given his current role inside the company. As chief security officer, he's spearheading the company's investigation into how Kremlin-tied Facebook accounts may have used the service to spread misinformation during last year's U.S. presidential campaign.

The irony in Stamos's suggestion, of course, is that most Silicon Valley tech companies are notorious for controlling their own message. This means individual employees rarely speak to the press, and when they do, it's usually to deliver a bunch of prepared statements. Companies sometimes fire employees who speak to journalists without permission, and Facebook executives are particularly tight-lipped.

This makes Stamos's thread, and his candor, very intriguing. Here it is in its entirety.


  1. I appreciate Quinta's work (especially on Rational Security) but this thread demonstrates a real gap between academics/journalists and SV.
  2. I am seeing a ton of coverage of our recent issues driven by stereotypes of our employees and attacks against fantasy, strawman tech cos.
  3. Nobody of substance at the big companies thinks of algorithms as neutral. Nobody is not aware of the risks.
  4. In fact, an understanding of the risks of machine learning (ML) drives small-c conservatism in solving some issues.
  5. For example, lots of journalists have celebrated academics who have made wild claims of how easy it is to spot fake news and propaganda.
  6. Without considering the downside of training ML systems to classify something as fake based upon ideologically biased training data.
  7. A bunch of the public research really comes down to the feedback loop of "we believe this viewpoint is being pushed by bots" -> ML
  8. So if you don't worry about becoming the Ministry of Truth with ML systems trained on your personal biases, then it's easy!
  9. Likewise all the stories about "The Algorithm". In any situation where millions/billions/tens of Bs of items need to be sorted, need algos
  10. My suggestion for journalists is to try to talk to people who have actually had to solve these problems and live with the consequences.
  11. And to be careful of their own biases when making leaps of judgment between facts.
  12. If your piece ties together bad guys abusing platforms, algorithms and the Manifestbro into one grand theory of SV, then you might be biased
  13. If your piece assumes that a problem hasn't been addressed because everybody at these companies is a nerd, you are incorrect.
  14. If you call for less speech by the people you dislike but also complain when the people you like are censored, be careful. Really common.
  15. If you call for some type of speech to be controlled, then think long and hard of how those rules/systems can be abused both here and abroad
  16. Likewise if your call for data to be protected from governments is based upon who the person being protected is.
  17. A lot of people aren't thinking hard about the world they are asking SV to build. When the gods wish to punish us they answer our prayers.
  18. Anyway, just a Saturday morning thought on how we can better discuss this. Off to Home Depot. FIN

By Kurt Wagner, Recode.net.

This story originally ran Saturday, Oct. 7.

CNBC's parent NBCUniversal is an investor in Recode's parent Vox, and the companies have a content-sharing arrangement.
