Technology Executive Council

Microsoft, Amazon among the companies shaping AI-enabled hiring policy

Key Points
  • Just 12% of hiring professionals report using artificial intelligence in their recruiting or talent management processes, but several new uses of the technology are being adopted by HR.
  • Some uses of AI in the hiring process are relatively uncontroversial, like intelligent interview scheduling and chatbots that help move candidates through the funnel more smoothly.
  • Amazon, Unilever, Koch Industries and Microsoft are among the companies that recently joined together to publish a set of policies for use of AI in hiring and recruiting. 

While further introducing AI across hiring practices could solve some problems, experts say the technology shouldn't be expected to fully transform how companies bring in new workers.

Today, just 12% of hiring professionals report using artificial intelligence in their recruiting or talent management processes, according to the 2023 Hiring Benchmark Report from software and talent success company Criteria. But AI solutions, for everything from streamlining sourcing to making informed selection decisions, are "very actively being marketed," Criteria founder and CEO Josh Millet said.

A critical process like hiring — which is bound by legal, cultural and business implications — must innovate with caution. Introducing, or failing to mitigate, bias in the talent acquisition pipeline comes at a cost. As businesses implement AI in hiring, they must earn and maintain the trust that these processes are working as they should. 

While organizations wait to see whether regulation like New York City's AI bias law spreads elsewhere, the Center for Industry Self-Regulation (CISR), BBB National Programs' 501(c)(3) nonprofit foundation, published a set of principles and protocols for trustworthy AI in hiring and recruiting. 

In partnership with companies including Amazon, Unilever, Koch Industries and Microsoft, the principles address transparency, fairness, non-discrimination, technical robustness, safety, governance and accountability with the use of AI in hiring. Meanwhile, the protocols specify the criteria for third-party AI vendor certification to promote accountability beyond the employer.

How AI is being used in the job market

As it stands, AI is being used in the hiring and recruiting process to develop job descriptions, source talent, create and score assessments, screen new applicants, communicate with candidates and train new employees, Criteria's Hiring Benchmark report states. Tools like OpenAI's ChatGPT, Google's Bard, recruiting chatbots and proprietary solutions help enable this.
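
To make that concrete, a task like drafting a job description can take only a few lines of code against a general-purpose LLM API. The sketch below is illustrative only, assuming the OpenAI Python SDK; the model name, prompt and helper function are placeholders, not any particular vendor's hiring product.

```python
# Minimal sketch: drafting a job description with a general-purpose LLM.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; the model name and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()

def draft_job_description(title: str, requirements: list[str]) -> str:
    """Ask the model for a first-draft posting a recruiter would then edit."""
    prompt = (
        f"Write a concise, inclusive job description for a {title}. "
        f"Requirements: {', '.join(requirements)}. "
        "Avoid gendered or age-coded language."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(draft_job_description("data analyst", ["SQL", "Python", "dashboarding"]))
```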

"When AI tools are well designed, deployed, and monitored properly, the technology has the potential to mitigate discrimination and bias on a broader scale," said Eric Reicin, president and CEO of BBB National Programs.

Reicin said the key objectives of defining and publishing the principles and protocols for trustworthy AI in hiring were to ensure valid and reliable systems, promote transparency and accountability, and strive for systems that are secure, resilient and explainable.

The goal is to enable the promises of AI in hiring while simultaneously managing the inevitable risk. AI purportedly allows organizations to equitably expand their pool of candidates: One organization within the CISR's incubator received 20 million applications last year, according to Reicin. "There's no way that humans could have given a fair shake to all those applicants," he said.

However, Reicin noted that what is illegal without technology is also illegal with technology. "Employers are on the hook, whether they use a vendor or not, to ensure non-discrimination under relevant law," he said.

He pointed to the recent Raines v. U.S. Healthworks Medical Group decision, in which the California Supreme Court expanded the state's Fair Employment and Housing Act liability to certain vendors, not just employers.

A massive bias problem in hiring

Whether or not employers use a vendor to provide hiring assistance through technology, the problem of explicit and implicit bias persists despite efforts to eradicate it. 

"We could almost not do any worse," Millet said about the state of hiring. "Yes, we should be careful, deliberate, measured about implementing [AI] systems. But that shouldn't obscure the fact that there is a massive problem."

Criteria's latest Candidate Experience Report states that 39% of candidates were ghosted in the past year. A 2021 study from the National Bureau of Economic Research found that distinctively Black names reduce the probability of employer contact by 2.1 percentage points relative to distinctively white names. More recently, China's iTutorGroup agreed to pay $365,000 to settle a U.S. Equal Employment Opportunity Commission lawsuit alleging it violated the Age Discrimination in Employment Act by programming its recruitment software to automatically reject older applicants.

"We can use evidence-based approaches to help remove some of the bias from the selection process or from the talent evaluation process," Millet said. "That's the promise." 

But the promised outcomes of AI are not a given, Millet added. "In trying to remove bias, we actually sometimes can have the opposite impact and amplify it," he said.
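
One common way practitioners check whether a screening tool is amplifying bias is the EEOC's four-fifths (80%) rule, which compares each group's selection rate against the highest group's rate. The sketch below is a simplified illustration with made-up numbers, not a legal compliance test.

```python
# Illustrative four-fifths (80%) rule check on a screening tool's outcomes.
# Group labels and applicant counts are invented for the example.
def adverse_impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total applicants)."""
    rates = {g: sel / total for g, (sel, total) in outcomes.items()}
    top = max(rates.values())
    # A ratio under 0.8 is the usual flag for potential adverse impact.
    return {g: rate / top for g, rate in rates.items()}

ratios = adverse_impact_ratios({"group_a": (120, 400), "group_b": (60, 400)})
for group, ratio in ratios.items():
    flag = "FLAG" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```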

Nearly a decade ago, Amazon tried to automate hiring, but scrapped the project because the tool "was not rating candidates for software developer jobs and other technical posts in a gender-neutral way," according to a 2018 Reuters report.

AI holds substantial potential, but whether that potential is realized for good depends on how it's used, experts say. The principles and protocols for trustworthy AI in hiring are just one effort to mitigate risk and maximize promise.

Keeping humans in the recruiting loop

Some uses of AI in the hiring process are relatively uncontroversial, like intelligent interview scheduling and chatbots that help move candidates through the funnel more smoothly, Millet said. However, high-stakes decisions or data-rich use cases are a different ball game.

Reicin recommends regular vendor analysis, which means asking questions like: What debiasing efforts are your vendors making? Who is your vendor certified by and what does that certification mean? How are they collecting, protecting and using data?

For the employer, knowing where human oversight sits in the AI-enabled process is critical, Reicin said. "Do you have the right escape hatch for candidates because they may need, in combination with the Americans with Disabilities Act or for whatever other reason, some human engagement as part of this process?" he added.

Then there are data collection notices for candidates. "As a best practice, employers need to provide enough information about the process for the applicant so they can request an accommodation or opt out of the process, but also have a sense of where their data is going and what actually is being taken," Reicin said.

He cited the potential implications of a video interview, where AI technology could collect data about a candidate's voice, inflection and eye movements. What if the candidate is unaware this is happening?

"If you're using any data on a candidate that they're not aware is being used, that's deeply problematic," Millet said, adding that employers and vendors should be using data for a reasonable and intended purpose. Ultimately, he says communication and consent for reasonable use are two best practices for AI in hiring that employers shouldn't skip.

Millet said companies should become familiar with "glass box algorithms," which are transparent in how they reach conclusions and can combat the problems associated with "black box algorithms," which are opaque and can erode trust, or fail to build trust at all.
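
As a rough illustration of the difference, a "glass box" score can show exactly which inputs drove a decision. The toy scorer below uses a linear model with hand-picked weights; the features and weights are invented for illustration and are not any vendor's actual method.

```python
# Toy "glass box" scorer: a linear model whose per-feature contributions
# can be shown to a candidate or an auditor. Features and weights are
# invented for illustration; a real tool would validate them empirically.
WEIGHTS = {"years_experience": 0.4, "skills_match": 1.2, "assessment_score": 0.8}

def explainable_score(candidate: dict[str, float]) -> tuple[float, dict[str, float]]:
    contributions = {f: WEIGHTS[f] * candidate.get(f, 0.0) for f in WEIGHTS}
    return sum(contributions.values()), contributions

total, parts = explainable_score(
    {"years_experience": 5, "skills_match": 0.7, "assessment_score": 0.9}
)
print(f"total score: {total:.2f}")
for feature, value in parts.items():
    print(f"  {feature}: {value:+.2f}")  # each factor's exact contribution
```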
