Avoiding Bias Risks in AI Hiring Technology

Kevin White, Daniel Butler

August 1, 2022

Artificial intelligence (AI) technology has brought numerous benefits to a wide variety of industries and sectors. Among its applications are employee recruitment and hiring, where AI promises increased efficiency, lower costs and better candidates for the companies that adopt it. However, the technology poses very real risks too: if not implemented carefully, AI tools can lead to unlawful discrimination, making them a liability rather than an asset.

Types of AI Tools Used in Hiring

AI-based technology is available for all parts of the hiring process, from recruiting and interviewing to selecting and onboarding. Some employers use automated candidate sourcing technology to search the internet and determine which job postings should be advertised to particular candidates. Others use complex algorithms to determine which candidates’ resumes best match the requirements of open positions. Some companies use video interview software to analyze facial expressions, body language, and tone to assess whether a candidate exhibits preferred traits. Some organizations even use so-called “brain games” or cognitive assessment tests to filter out certain applicants.

Benefits of AI Hiring Tools

Regardless of the precise product, AI tools are usually marketed to human resources departments as a way to simplify hiring, enhance the quality of candidates, promote efficiency and improve diversity. Perhaps the most obvious promised benefit is time. For example, AI can spare recruiting departments the laborious task of screening resumes against technical requirements, such as degrees or certifications, to filter out unqualified candidates. Particularly for larger companies that receive thousands of applications each year, this can free up considerable time for more productive activities.

AI also can expose companies to new pools of talent, and with a wider range of candidates, employers can expect more diverse and qualified new hires. Moreover, removing or curtailing human decision-making can help remove both intentional and unconscious human biases from hiring and other employment-related decisions.

Risks of AI Technology

Although AI promises significant rewards, it also carries considerable risks. AI tools likely have no intent to discriminate; indeed, many are marketed as a guardrail against biased human decision-making. But that does not automatically insulate the businesses that use them from liability, because the law contemplates both intentional discrimination (disparate treatment) and unintentional discrimination (disparate impact). The larger risk with AI lies in disparate impact claims. In such lawsuits, intent is irrelevant; the question is whether a facially neutral policy or practice disproportionately harms a protected group defined by characteristics such as race, ethnicity, national origin, gender, religion or disability.

Each type of AI tool presents its own potential for discrimination, but a common thread is the potential for input data to create a discriminatory impact. Many algorithms rely on a set of inputs to define their search parameters. For example, resume screening tools are often configured to find candidates whose resumes resemble those of the employer’s existing high-performing employees. If those employees are predominantly of a particular race or gender, the technology can end up reinforcing the existing homogeneity. In 2018, Amazon drew considerable attention for this very problem when an AI-based hiring tool was found to discriminate against female candidates after being trained on resumes from the company’s predominantly male engineering workforce.

Seemingly benign characteristics can also lead to discriminatory outcomes. For example, input data may include employees from zip codes that are home to predominantly one race or ethnicity. Older candidates could be disfavored by an algorithm’s preference for “.edu” email addresses. And workers with disabilities may be unable to complete certain brain games or cognitive tests that have only tenuous connections to the skills required for the open position.
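
To make the proxy problem concrete, consider the simplified sketch below. The data and column names are entirely hypothetical, and the point is narrow: if a facially neutral feature such as zip code strongly predicts a protected characteristic, then a model that weights the feature is effectively weighting the characteristic, even though the characteristic is never an explicit input.

```python
# Illustrative sketch with hypothetical data (not any vendor's actual tool):
# a facially neutral feature can act as a proxy for a protected class.
import pandas as pd

applicants = pd.DataFrame({
    "zip_code":    ["10001", "10001", "10001", "60601", "60601", "60601"],
    "race":        ["White", "White", "White", "Black", "Black", "Black"],
    "screened_in": [1, 1, 1, 0, 1, 0],
})

# If zip code perfectly predicts race, any weight the model places on
# zip code is, in effect, a weight on race.
print(pd.crosstab(applicants["zip_code"], applicants["race"], normalize="index"))

# Selection rates by zip code then reveal the downstream disparity.
print(applicants.groupby("zip_code")["screened_in"].mean())
```

In this toy data, every applicant from one zip code is screened in and only a third from the other, so a tool that never sees race still produces racially skewed results.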

Regulatory Scrutiny Increasing

The Equal Employment Opportunity Commission (EEOC), the federal agency tasked with enforcing the nation’s workplace anti-discrimination laws, has taken note of AI’s potential to discriminate. To that end, the agency published guidance in May 2022 concerning the use of algorithmic software and its potential to discriminate against individuals with disabilities. In that guidance, the EEOC issued a list of “promising practices” that employers should follow, including: 1) informing all applicants who are being rated by algorithms that reasonable accommodations are available; 2) using algorithmic tools that measure only the abilities or qualifications actually necessary for the job; and 3) confirming that the software does not ask questions likely to elicit information about disabilities or medical conditions.

Because candidates often have no knowledge that their application may have been rejected by an AI tool, the EEOC has indicated it intends to use so-called “commissioner charges” to investigate companies’ use of AI technology. Commissioner charges are unique in that they are initiated by the agency itself, not by an individual. As a result, employers should be mindful that the EEOC may decide to launch an investigation into their AI practices, even if there is never a specific complaint from a rejected applicant.

In addition to the EEOC, employers should be aware that various states and localities have enacted laws concerning the use of artificial intelligence. For example, New York City recently passed a law that restricts employers from using automated employment decision tools to screen a candidate or employee for an employment decision unless the employer makes publicly available on its website: 1) a summary of the tool’s most recent bias audit, and 2) the distribution date of the tool. The law goes into effect on January 1, 2023, and penalties for violations range from $500 to $1,500 per occurrence.

Other jurisdictions with laws regulating AI in the workplace include Illinois and Maryland, and the risks are hardly confined to the United States. For example, regulators in the European Union have expressed clear intent to examine the use of AI that can perpetuate inherent bias in a broad range of contexts, and as such technology proliferates, scrutiny will likely increase as well.

Steps to Manage Discrimination Risks

Given the increasing use of AI and the EEOC’s spotlight on such technology, employers using artificial intelligence- or machine learning-based tools should take steps to minimize associated risks.

First, companies should demand that AI vendors disclose sufficient information to explain how their software makes employment decisions. Employers may receive pushback from vendors reluctant to disclose proprietary information. However, employers cannot rely on “the computer did it” as a defense and will ultimately be held accountable for the results of these tools. If a vendor refuses to disclose sufficient information for an employer’s IT department to understand how the tool functions, then the employer should look elsewhere or, at a minimum, negotiate indemnity rights for any lawsuits or investigations related to the use of the vendor’s AI products.

Second, employers should consider auditing any AI tool before initial implementation. To do this, companies need to be able to identify the candidates that the tool rejected, not just those who were accepted. Thus, before onboarding any AI tool, employers should verify with vendors that sufficient data is preserved so that the employer can properly audit the tool and examine results to determine whether there was a negative impact on protected classes. This auditing should not only be conducted before initial use, but also performed regularly or at least whenever input data changes.
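
As a concrete illustration of what such an audit can look like, the sketch below applies the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures: a selection rate for any group that is less than 80% of the highest group’s rate is commonly treated as evidence of potential adverse impact. The data and column names are hypothetical, and the rule is a screening heuristic, not a definitive legal test.

```python
# Minimal adverse-impact audit sketch using the EEOC four-fifths rule.
# An impact ratio below 0.8 flags potential disparate impact; it is a
# rule of thumb, not a conclusive legal standard. Data is hypothetical,
# standing in for the accepted/rejected records the vendor preserves.
import pandas as pd

results = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B", "B"],
    "selected": [1,   1,   1,   0,   1,   0,   0,   0,   1],
})

rates = results.groupby("group")["selected"].mean()
impact_ratios = rates / rates.max()

for group, ratio in impact_ratios.items():
    flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"group {group}: selection rate {rates[group]:.2f}, "
          f"impact ratio {ratio:.2f} -> {flag}")
```

In practice, the same calculation would be run for each protected characteristic the employer tracks and repeated whenever the tool or its input data changes.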

Third, employers should ensure that the input data upon which the tool relies does not reflect a homogenous group. If the input data reflects a diverse workforce, a properly functioning AI tool should mimic that diversity in its results.
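
A quick way to sanity-check input data, sketched below with hypothetical data and column names, is simply to examine the demographic composition of the training set before the tool is deployed; a heavily skewed distribution is a warning sign that the tool may reproduce the skew.

```python
# Composition check on hypothetical training data: if one group
# dominates the inputs, a resume-matching tool trained on them is
# likely to reproduce that skew in its recommendations.
import pandas as pd

training_resumes = pd.DataFrame({
    "gender": ["M", "M", "M", "M", "M", "M", "M", "M", "F", "F"],
})
print(training_resumes["gender"].value_counts(normalize=True))  # M: 0.8, F: 0.2
```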

Finally, because this is an emerging field, companies need to stay apprised of developments in the law and particularly the EEOC’s guidance in this area. When in doubt, companies should err on the side of caution and consult with qualified counsel when deciding whether and how to use AI in the hiring process. 

Kevin White is a partner at Hunton Andrews Kurth LLP and co-chair of the firm’s labor and employment team.


Daniel Butler is an associate at Hunton Andrews Kurth LLP.