HireVue is another start-up working with corporate industrial psychologists to ensure that employer assessment tools meet industry standards and, by adding AI to the mix, to eliminate bias. The company has been around for more than a decade, starting with technology that enabled video interviews and moving more recently into AI-based job assessments.
"We can measure it, unlike the human mind, where we can't see what they're thinking or if they're systematically biased," Lindsey Zuloaga, director of data science at HireVue, recently told CNBC.
Once the candidate reaches a human recruiter, companies using HireVue have reported a much more diverse candidate pool: Unilever has improved the diversity of its talent pool by 16 percent since partnering with HireVue. "If the team does notice a skew in results, it can evaluate the algorithm to see what went wrong and remove the bad data," Zuloaga said.
"AI is not impartial or neutral," said Meredith Whittaker, co-founder of the AI Now Institute at New York University and founder of Google's Open Research group. AI Now — which Whittaker co-founded with Kate Crawford, an NYU professor and principal researcher at Microsoft Research — aims to move beyond what it describes as "minimal oversight" of AI. Algorithmic bias is one of its core research areas.
"In the case of systems meant to automate candidate search and hiring, we need to ask ourselves: What assumptions about worth, ability and potential do these systems reflect and reproduce? Who was at the table when these assumptions were encoded?" Whittaker asked.
Whittaker said HireVue, for instance, creates models based on "top performers" at a firm, then uses emotion detection systems that pick up cues from the human face to evaluate job applicants based on these models. "This is alarming, because firms that are using such software may not have diverse workforces to begin with, and often have decreasing diversity at the top. And given that systems like HireVue are proprietary and not open to review, how do we validate their claims to fairness and ensure that they aren't simply tech-washing and amplifying longstanding patterns of discrimination?"
In a statement to CNBC, Loren Larsen, CTO of HireVue, said, "It is extremely important to audit the algorithms used in hiring to detect and correct for any bias. ... No company doing this kind of work should depend only on a third-party firm to ensure that they are doing this work in a responsible way. Third parties can be very helpful, and we have sought out third-party data-science experts to review our algorithms and methods to ensure they are state-of-the-art. However, it's the responsibility of the company itself to audit the algorithms as an ongoing, day-to-day process."