
Facebook data privacy scandal has one silver lining: Thousands of new jobs AI can't handle

Key Points
  • The Facebook data privacy and Russian election interference scandals show that artificial intelligence is still not up to many critical jobs in the technology sector.
  • Facebook CEO Mark Zuckerberg said this week the social media company will be adding another 5,000 jobs before the end of the year, on top of 5,000 recently created positions.
  • A significant number of jobs are in content review, where A.I. still needs a lot of human help in protecting the company from what Facebook COO Sheryl Sandberg called "bad actors."
  • Facebook already employs 7,500 content reviewers, roughly as many workers as the entire workforces of Twitter and Snap combined.
Facebook COO Sheryl Sandberg: Data scandal a huge breach of trust

More Americans may be deleting Facebook in the wake of the Cambridge Analytica data privacy scandal, but the social media giant is adding people to its ranks in a way that pays: thousands of new jobs.

Facebook's struggles to contain the data privacy and related Russian election interference headlines underscore a big trend in technology careers, one that major players are still sorting out and spending heavily on. As smart as artificial intelligence gets, it is still not up to many critical job tasks required to protect companies from risks that can destroy not only bottom lines but reputations. The number of new jobs being created by Facebook reflects the scope of the challenge.

Facebook is adding 5,000 jobs in security and community operations before the end of the year, on top of 5,000 new positions already added this year. By year-end, Facebook's headcount in these areas will have doubled from 10,000 to 20,000; the teams involved currently stand at 15,000.

One position, content reviewer, is already staffed by Facebook with 7,500 workers. That's roughly the same number of employees as the entire workforces of Twitter and Snap combined. The 5,000 workers Facebook still plans to add this year outnumber the employees of either of those companies and represent 20 percent of Facebook's reported total of approximately 25,000 employees.

An employee waits for an elevator at a Facebook office.
Toby Melville | Reuters

Facebook COO Sheryl Sandberg told CNBC on Thursday, "We need to do more; we continue to do more. We are massively ramping hiring. We are going to continue to ramp even faster."

It was Facebook CEO Mark Zuckerberg's comments to the New York Times and Wired, though, that laid out the limitations of A.I. and the need for human hiring. "One of the big things we needed to do is coordinate our efforts a lot better across the whole company. It's not all A.I., right? There's certainly a lot that A.I. can do. We can train classifiers to identify content, but most of what we do is identify things that people should look at. So we're going to double the amount of people working on security this year. ... So it's really the technical systems we have working with the people in our operations functions that make the biggest deal," he told the Times.
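A rough way to picture the division of labor Zuckerberg describes: a classifier scores each post and acts alone only when it is confident, routing the uncertain middle to a human review queue. The toy Python sketch below is purely illustrative; the data, labels and thresholds are invented and bear no relation to Facebook's actual systems.

    # Hypothetical sketch: an A.I. classifier flags content, humans review the rest.
    # All training data, labels and thresholds below are invented for illustration.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    posts = [
        "buy followers cheap click this link now",
        "win free money claim your prize here",
        "had a great time at the beach today",
        "congrats on the new job so happy for you",
    ]
    labels = [1, 1, 0, 0]  # 1 = policy-violating, 0 = benign

    vectorizer = TfidfVectorizer()
    model = LogisticRegression().fit(vectorizer.fit_transform(posts), labels)

    def triage(post, block_threshold=0.9, review_threshold=0.5):
        # Probability that the post violates policy, according to the toy model.
        score = model.predict_proba(vectorizer.transform([post]))[0][1]
        if score >= block_threshold:
            return "auto-remove"               # model is confident enough to act alone
        if score >= review_threshold:
            return "queue for human reviewer"  # the uncertain middle goes to people
        return "allow"

    print(triage("claim your free money prize now"))

Lowering the review threshold pushes more posts in front of people, which is one way to see why a cautious platform ends up needing so many reviewers.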

A.I. is not yet 'solved'

The Facebook CEO told Wired that when he started Facebook in 2004, "People shared stuff and then they flagged it and we tried to look at it. But no one was saying, 'Hey, you should be able to proactively know every time someone posts something bad,' because the A.I. tech was much less evolved, and we were a couple of people in a dorm room. ... But now you fast-forward almost 15 years and A.I. is not solved, but it is improving."

He also said, "As A.I. tools get better and better, [they] will be able to proactively determine what might be offensive content or violate some rules, what therefore is the responsibility and legal responsibility of companies to do that?"

Sandberg told CNBC that A.I. can catch "99 percent" of the "bad actors" in some content areas, but internet watchdogs are less sure. And after the worst week for Facebook shares since 2012, with tens of billions of dollars in market capitalization erased daily, the "responsibility" of companies seems to merit more spending on human talent. Simply put, algorithms are not as good as people at context.

"What we are learning is that you can't just throw more A.I. at a lot of these problems. You need people who understand how to train these systems so that they have the insight they need, and then how to monitor them," said Natasha Duarte, a policy analyst at the Center for Democracy & Technology. "They have to review flagged content and see what the tools might be missing, and understand how to go back and add training and data and try to improve the tools accuracy."

Facebook is looking for workers to fill jobs in content moderation — "content reviewers" — who are tasked with taking correct action on a reported piece of content based on the company's Community Standards. The focus in hiring for this type of role is language and market knowledge to ensure the company has the most appropriate cultural context to cover content coming from all over the world, according to information provided to CNBC.

These people work to detect fake accounts, improve authentication, reduce harassment and scams and promote child safety, among other tasks related to safety. Facebook moderators are paid above average in the industry, according to information provided by Facebook to CNBC, and are offered additional benefits, which include resiliency training (stress-related training) and support.


Diversity in hiring is also key, a Facebook spokesperson told CNBC.

Duarte said diversity in hiring is critical. "If you ask me what I think is the most important, whoever tech companies hire, they need people from diverse backgrounds, racially diverse and diverse in terms of gender, and especially when we are talking about content moderation, there is a cultural element to that. You want to make sure people working on automated solutions for content are not just all from one cultural background."

The exact number of hires Facebook will make in content review is not known; they will be part of the 5,000 new jobs spread across security and safety. The broader team includes experts in enforcement areas like child safety, hate speech and counterterrorism, as well as legal specialists, so the new jobs would also include roles such as engineers or those who work directly with law enforcement. Not all of the positions are full-time with Facebook: the 5,000 figure includes a mix of full-time employees, independent contractors and outsourced positions managed by vendor partners, driven at least in part by the need to cover global time zones, languages and markets.

Samantha Wallace, who leads the technology practice for Korn Ferry's division that focuses on mid-level search, said the Facebook hiring drive reflects the difficulty companies are having in figuring out how best to take advantage of A.I. without becoming overly confident in it.

"It is a topic of conversation that comes up with increasing frequency," Wallace said. "It's bleeding edge, A.I. and machine learning and analytics and data scientists, and all of it wraps into this one space with lots of buzzwords, which leads to lots of ambiguity."

Wallace said that she can't speak to Facebook's hiring specifically, but the search firm has found that outside of the core competencies related to A.I., which require advanced degrees in science and math, traits like curiosity, persistence and creativity are important for roles that deal with massive amounts of data. "Harnessing it is one thing, and there's a technical role, but to make it something useful and strategic in decision-making requires non-technical traits within the A.I. world."

Wallace said there is an ocean of data creating security and content issues, and it takes the subjective perspective of a human being to decide which pieces of data or content make it into the smaller pool that is widely shared. Google has been hiring more workers over the past year to filter terrorism-related content, and this isn't just an issue for social media companies, though they may feel it most acutely.

"Across industries the content-management question is real for all organizations," Wallace said. "In social media there is wide audience interacting with the platform on a daily basis, but others could be web-based or product descriptions disseminated through other channels. The content piece is important. ... These challenges are true for any organization, the integrity of data and security."

You can't build an algorithm to police the algorithm

"We're a successful enough company that we can employ 15,000 people to work on security and all of the different forms of community [operations]," Zuckerberg told the Times.

The Korn Ferry recruiter said there is a huge movement to fill these positions, but there is also ambiguity about the responsibilities of the positions because it's an emerging field.

"I think across industries there is a tidal wave of need coming and there is an ongoing shortage of talent coming into the space," Wallace said. Companies aren't only playing catch-up but "over-resourcing" for these positions, she said, because they know they will have to "upscale" certain workers and create new skill sets in the future.

"They know there will be a need and know there will be a shortage, so they build a team ... to make sure they have the resources in place when risks materialize in a critical way," Wallace said. She added, "Organizations are moving to the 'We have to add these people now, even if we don't know what they will do, what we will do with them.'"

"You can't just build an algorithm to police the algorithm," Duarte said.

Companies need to have a diverse human pool with different academic backgrounds to come together and decide what a platform is really about and how a company wants it to serve users. Duarte said that if that task is left to A.I. and the engineers who build the code, they will invariably get it wrong.

"Hate speech and sarcasm can be confused," Duarte said. "It takes humans who understand the full scope and context to spot issues where we should be worried about an [A.I.] classifier going wrong."

She said Zuckerberg's comments about bridging the technical systems with the people in operations speak to the potential divisions that stand in the way of proper decision-making. There are teams that work on content policy and privacy policy and advocate for the policies that govern platforms, and then there are the engineers who build the tools.

"Ideally, you have integration between policy and engineers, so policy goals are then informing tool builds," Duarte said.

Even when companies like Facebook try to get it right in response to failings, the responses have a history of failing themselves. Duarte pointed, for example, to her group's efforts to get Facebook to crack down on affinity targeting in ads, which routed homebuying ads away from minority groups, a social media version of the unfair mortgage and real estate industry practices that made homebuying difficult for African Americans for much of the 20th century.

In 2016 the Center for Democracy & Technology advocated for more rules and guidance around how advertisers could target those ads. Facebook said it was fixing the problem and put out a blog post on its efforts. A subsequent investigation by ProPublica found that the fix didn't work: ProPublica was still able to buy ads targeted in exactly the way that was supposed to be prohibited.

"We can't just throw A.I. at it, and we don't want to have policy team and engineers siloed separately while building tools," Duarte said.

Facebook CEO Mark Zuckerberg
Getty Images

One option for Facebook, suggested this week by one of the New York Times reporters who interviewed Zuckerberg, was to cut down on data and content issues by going smaller: veering away from the free, ad-supported model that makes manipulation by malicious bots more possible. Comments from the company's top brass show that while it continues to consider a paid version of its platform, there are few signs it thinks that is the way to go for the majority of its global audience.

Sandberg's comments on A.I. could seem to express overconfidence in it or, at the least, highlight the tension between man and machine. In addition to "massively ramping" hiring, she told CNBC, "we are massively investing in machine learning and automation. ... There are areas where we've had great success. We take down 99 percent of ISIS or that kind of terrorist content before it even hits the platform. Machines make that possible and our commitment is clear."

"Right now A.I. tools need to be thought of like a student. That is where you get the most value," Wallace said. "But right now many actually operate like a 5-year-old. You are getting some benefit and insight but need to teach them, so they are a ways away from full impact. How we teach them will affect what happens to jobs subsequently."
