Silicon Valley is stumped: A.I. cannot always remove bias from hiring

Key Points
  • AI software designed to eliminate the prejudice of human hiring managers has produced encouraging early results at corporations.
  • But tech executives with experience at Google, Microsoft and Facebook say the algorithmic revolution in hiring is moving too fast.
  • Algorithm auditing firms want to see the code; public policy experts want to press governments to force the algorithms into the open before it's too late.

At a recent MIT event on the future of work, held in New York City for the university's high-achieving alumni network, Andrew McAfee, co-director of MIT's Initiative on the Digital Economy and a principal research scientist at its Sloan School of Management, said leaders are realizing that many of their human resources and human capital practices are simply outdated.

McAfee's view: "If you want the bias out, get the algorithms in."

Silicon Valley is investing in many start-ups selling the idea that they can solve the problem of human bias in job-hiring decisions with artificial intelligence. But a new class of independent algorithm auditing firms and public policy experts — with experience at some of the largest tech companies in the world and educations from elite institutions — say 'algorithmic bias' has already been proven to exist in other areas. The uptake of AI for hiring, they argue, has moved too fast and with too little scrutiny.

Algorithms can help HR professionals make smart hiring decisions, but these algorithms can often be biased against minorities, said speakers on a panel at the MIT event. The biases creep in because human bias influenced the algorithm, and it's up to humans to notice the bias and fix it.

Traditional résumé review puts women and minorities at a 50 percent to 67 percent disadvantage, according to start-up pymetrics, which attempts to go well beyond the résumé in assessing job applicants using neuroscience games and AI.

Companies using AI can reduce those figures dramatically, pymetrics said, as long as the input data is accurate and remains unbiased.

That's a big "if."

AI can work, 'as long as' the input data is accurate

Cathy O'Neil, who also spoke at the MIT future-of-work event, said the hiring algorithms now coming into the human resources field are a perfect test case for her skepticism about the tech utopian movement, and she often uses them as examples in presentations.

O'Neil, an academically trained mathematician who studied and worked at UC Berkeley, Harvard and MIT — and left a job on Wall Street to join the Occupy Wall Street movement and write a book on the dangers of algorithms — often employs a thought experiment in her talks: Imagine what a machine-learning hiring algorithm trained on Fox News data would produce, even if the data science team made reasonable choices. Then she points out that it doesn't have to be an outrageous example like Fox News, because there is no perfect workplace with perfect hiring policies, perfect raise and promotion methods, and a culture that welcomes all people equally.

When we blithely train algorithms on historical data, to a large extent we are setting ourselves up to merely repeat the past. ... We'll need to do more, which means examining the bias embedded in the data.
Cathy O'Neil
author of "Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy"

"It's going to take some real thought," said Dr. Lori Kletzer, an economics professor at Colby College in an interview with CNBC. "It's not just going to happen. And it's important to raise the questions now. ... The implications are societal, so we can't just leave it to the market, because the market only cares about the bottom line."

To start, the makeup of the tech industry creating the hiring algorithms isn't perfect. While Silicon Valley has a long history of encouraging immigrant entrepreneurs and bringing in foreign workers from around the world on skilled-worker visas, it has been criticized for a lack of diversity in its hiring from within the national population.

A report from the federal Government Accountability Office released in November 2017 found that the technology industry is behind other sectors in the diversity of its workforce. "The estimated percentage of minority technology workers increased from 2005 to 2015, but GAO found that no growth occurred for female and black workers, whereas Asian and Hispanic workers made statistically significant increases. Further, female, black and Hispanic workers remain a smaller proportion of the technology workforce — mathematics, computing and engineering occupations — compared to their representation in the general workforce."

"When we blithely train algorithms on historical data, to a large extent we are setting ourselves up to merely repeat the past. If we want to get beyond that, beyond automating the status quo, we'll need to do more, which means examining the bias embedded in the data. The data is, after all, simply a reflection of our imperfect culture," O'Neil, who now runs her own algorithm auditing firm, said via email.

The traditional job application process isn't working

Dr. Frida Polli, pymetrics CEO and co-founder, also has an extensive academic résumé, including an MBA from Harvard and a postdoctoral fellowship in neuroscience from MIT. Yet despite those impressive accomplishments, she feels that simply listing them on her résumé didn't give employers much information about her potential.

Pymetrics is working with companies such as Unilever, Accenture, LinkedIn and Tesla. The company uses behavioral neuroscience and artificial intelligence to help identify candidates in a more predictive and unbiased way. Pymetrics bypasses the résumé, using data generated from brain games to match applicants with roles.

[Video: Now AI is deciding if you're qualified for a job]

HireVue is another start-up in the field, working with corporate industrial psychologists to make sure employer assessment tools are up to industry standards and, by adding AI to the mix, to eliminate bias. It has been around for more than a decade, starting with tech that allowed for video interviews and moving more recently to AI-based job assessments.

"We can measure it, unlike the human mind, where we can't see what they're thinking or if they're systematically biased," Lindsey Zuloaga, director of data science at HireVue, recently told CNBC.

By the time candidates reach a human recruiter, companies using HireVue have reported a much more diverse candidate pool: Unilever has improved the diversity of its talent pool by 16 percent since partnering with HireVue. "If the team does notice a skew in results, it can evaluate the algorithm to see what went wrong and remove the bad data," Zuloaga said.
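
Noticing "a skew in results" corresponds to a standard check: compare selection rates across groups. The snippet below is a hypothetical illustration, not HireVue's actual process; the made-up numbers and the use of the EEOC "four-fifths" rule of thumb are assumptions for the example:

```python
# Hypothetical screening results: (group, advanced_past_screen) pairs.
from collections import Counter

results = ([("A", True)] * 80 + [("A", False)] * 120 +
           [("B", True)] * 25 + [("B", False)] * 75)

passed = Counter(g for g, ok in results if ok)
total = Counter(g for g, _ in results)
rates = {g: passed[g] / total[g] for g in total}   # A: 0.40, B: 0.25

# Adverse-impact ratio: lowest selection rate over highest.
ratio = min(rates.values()) / max(rates.values())
print(rates, round(ratio, 2))
if ratio < 0.8:  # the four-fifths rule of thumb
    print("Skew detected: inspect the model and its training data.")
```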

"AI is not impartial or neutral," said Meredith Whittaker, co-founder of the AI Now Institute at New York University, and founder of Google's Open Research group. AI Now — which Whittaker co-founded with Kate Crawford, an NYU professor and principal researcher at Microsoft Research — aims to move beyond what it describe as "minimal oversight" of AI. Algorithmic bias is one of its core research areas.

"In the case of systems meant to automate candidate search and hiring, we need to ask ourselves: What assumptions about worth, ability and potential do these systems reflect and reproduce? Who was at the table when these assumptions were encoded?" Whittaker asked.

Whittaker said HireVue, for instance, creates models based on "top performers" at a firm, then uses emotion detection systems that pick up cues from the human face to evaluate job applicants based on these models. "This is alarming, because firms that are using such software may not have diverse workforces to begin with, and often have decreasing diversity at the top. And given that systems like HireVue are proprietary and not open to review, how do we validate their claims to fairness and ensure that they aren't simply tech-washing and amplifying longstanding patterns of discrimination?"

In a statement to CNBC, Loren Larsen, CTO of HireVue, said, "It is extremely important to audit the algorithms used in hiring to detect and correct for any bias. ... No company doing this kind of work should depend only on a third-party firm to ensure that they are doing this work in a responsible way. Third parties can be very helpful, and we have sought out third-party data-science experts to review our algorithms and methods to ensure they are state-of-the-art. However, it's the responsibility of the company itself to audit the algorithms as an ongoing, day-to-day process."

The potential 'drastic and harmful' downside of AI

Pymetrics said the biggest hurdle with corporate HR teams is legal concern about bias. That's why pymetrics developed a process to de-bias its algorithms and has open-sourced that methodology on GitHub. It "wants all companies, regardless of industry, to have the tools to detect and remove bias from their algorithms," Polli said. But it does not let third-party algorithm auditing firms, like O'Neil's, review its actual job-hiring code for undetected bias.
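
Pymetrics' actual published methodology is not reproduced here, but one well-known de-biasing technique from the research literature, reweighing (Kamiran and Calders, 2012), gives a flavor of what such open-source tooling can do: weight each training example so that group membership and the hiring label become statistically independent before any model is fit. The sketch below, on synthetic data, illustrates that general technique only:

```python
# Illustrative reweighing: give each example a weight equal to the
# expected frequency of its (group, label) cell under independence,
# divided by the observed frequency of that cell.
import numpy as np

def reweigh(group, label):
    weights = np.empty(len(label), dtype=float)
    for g in np.unique(group):
        for y in np.unique(label):
            cell = (group == g) & (label == y)
            expected = (group == g).mean() * (label == y).mean()
            weights[cell] = expected / cell.mean()
    return weights

# Synthetic biased history: group 1 was hired far less often.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=1000)
label = rng.random(1000) < np.where(group == 0, 0.5, 0.2)

w = reweigh(group, label)
for g in (0, 1):
    m = group == g
    # Weighted hire rates are now equal across groups, so a model
    # trained with these sample weights cannot learn the group gap.
    print(g, round(np.average(label[m], weights=w[m]), 3))
```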

"The algorithms themselves are not the solution, because they could actually make it worse. Audited algorithms that are shown to be free of gender bias ... are the answer to removing bias," Polli said. She added, "If you want to have a third-party auditor, fantastic. But the most critical thing is that it is being done however it is getting done."

Polli said the pymetrics process has now been tested against 50,000 data points, which she said gives the company confirmation that it carries no gender or ethnic bias.

That kind of confidence in an internal review process doesn't sit well with Dipayan Ghosh, a Harvard fellow and former Facebook privacy and public policy official who is now with the New America think tank. He said the use of advanced algorithms and AI in recruiting can create tremendous value for the industry, where discrimination by hiring managers has been rampant, but that if implemented irresponsibly, it can have drastic and harmful effects on job candidates.

"Algorithms discriminate. There have been countless episodes in different contexts that have illustrated this in high resolution in recent years, from social media advertising to creditworthiness decision-making to subsidy dispensations."

He also said companies reviewing their own code is not enough, especially in the corporate sector, where returns are optimized against near-term revenue, forward investment and stock return above all else. "We know of too many past cases where all a company needed to do was self-certify, and it was shown to be perpetuating harms to society and, specifically, certain people. ... The public will have little knowledge as to whether or not the firm really is making biased decisions if it's only the firm itself that has access to its decision-making algorithms to test them for discriminatory outcomes."

There could be a serious risk, and it has the potential to open up the floodgates to something very bad.
Davida Perry
co-founder and managing partner of Schwartz, Perry & Heller LLP, a firm that specializes in employment law

"The hope is that [using technology in recruiting] will save money and take the bias out of the process, but there may be a downside," said Davida Perry, co-founder and managing partner of Schwartz, Perry & Heller LLP, a firm that specializes in employment law, including discrimination cases. "There could be a serious risk, and it has the potential to open up the floodgates to something very bad," Perry said.

Suppose bias is found in a third party's algorithm, leading to discrimination in the hiring process and a corresponding lawsuit. Perry said that instead of one applicant with a lawsuit, you would have many, because of how much the process has scaled. Companies may try to claim the third party is liable for damages, but that may not hold up in court. "If you hire a recruiting company and it has biases [in its algorithm], you're not going to be able to say, 'I'm so sorry, that's the recruiting company's problem.' If you [as the company] hire them to serve as your agent, I believe that you would be on the hook for damages," Perry said.

Whittaker said algorithm audits need to include experts, advocacy groups and academics reviewing them and studying the effects they'll have on different populations. "We think that's not happening today, and it could lead to serious problems as AI takes off."

Ghosh said the start-ups in this field don't face enough pressure to use outside audit firms: It is not required by law, it costs money, and it would require "tremendous levels" of compliance beyond what internal audits likely require. But he does think that recent regulation of the technology sector, such as the new European privacy regulation GDPR, suggests that policy is moving in the right direction. He believes algorithm audits are a critical need for the public, particularly for people who are, and historically have been, marginalized.

"Personal prejudices can quickly become reflected in AI," Ghosh said. "In recruiting — a space in which sensitive and life-changing decisions are made all the time and in which we accordingly have established strong civil rights protections — these forms of vicious algorithmic bias are especially important to detect and act against."

— Additional reporting by CNBC Coordinating Producer Krista Braun and CNBC news interns Chris Crouse and Rick Morgan
