"We're a successful enough company that we can employ 15,000 people to work on security and all of the different forms of community [operations]," Zuckerberg told the Times.
The Korn Ferry recruiter said there is a huge push to fill these positions, but also ambiguity about their responsibilities because it's an emerging field.
"I think across industries there is a tidal wave of need coming and there is an ongoing shortage of talent coming into the space," Wallace said. Companies aren't only playing catch-up but "over-resourcing" for these positions, she said, because they know they will have to "upscale" certain workers and create new skill sets in the future.
"They know there will be a need and know there will be a shortage, so they build a team ... to make sure they have the resources in place when risks materialize in a critical way," Wallace said. She added, "Organizations are moving to the 'We have to add these people now, even if we don't know what they will do, what we will do with them.'"
"You can't just build an algorithm to police the algorithm," Duarte said.
Companies need a diverse pool of people with different academic backgrounds who can come together and decide what a platform is really about and how the company wants it to serve users. Duarte said that if that task is left to A.I. and the engineers who write the code, they will invariably get it wrong.
"Hate speech and sarcasm can be confused," Duarte said. "It takes humans who understand the full scope and context to spot issues where we should be worried about an [A.I.] classifier going wrong."
She said Zuckerberg's comments about bridging the technical systems with the people in operations speak to the divisions that can stand in the way of sound decision-making. There are teams that work on content policy and privacy policy and advocate for the rules that govern platforms, and then there are the engineers who build the tools.
"Ideally, you have integration between policy and engineers, so policy goals are then informing tool builds," Duarte said.
Even when companies like Facebook try to get it right in response to failings, their fixes have a history of failing, too. Duarte pointed to her group's efforts to get Facebook to crack down on affinity targeting in ads, which let advertisers route homebuying ads away from minority groups, a social media version of the unfair mortgage and real estate industry practices that made homebuying difficult for African Americans for much of the 20th century.
In 2016, the Center for Democracy & Technology advocated for more rules and guidance around how advertisers could target such ads. Facebook said it was fixing the problem and published a blog post about its efforts. A subsequent ProPublica investigation found that the fix didn't work: ProPublica was still able to place ads targeted in exactly the way that should no longer have been allowed.
"We can't just throw A.I. at it, and we don't want to have policy team and engineers siloed separately while building tools," Duarte said.