Using AI in Employment Decisions

Laura Lapidus | April 3, 2023

The risks of using AI in human resources processes and hiring decisions

Artificial intelligence has been defined as a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. Research by the Society for Human Resource Management found that approximately 25% of organizations use AI for human resources processes, including recruitment, hiring, performance and termination decisions. This has caught the attention of the Equal Employment Opportunity Commission (EEOC), the agency that enforces federal anti-discrimination laws in the United States.

The draft EEOC Strategic Enforcement Plan for 2023 to 2027 indicates that its enforcement priorities include “employment decisions, practices or policies in which the use of technology contributes to discrimination based on protected characteristics, such as the use of software that incorporates algorithmic decision-making or machine learning, including [AI].”

On January 31, 2023, the EEOC held a public hearing to explore the benefits and risks of employer use of AI, software and other emerging technologies that use algorithms in employment decision-making, collectively known as automated decision-making (ADM) tools. The hearing followed the May 2022 EEOC guidance titled “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees.” The hearing and the guidance are part of an EEOC initiative that focuses on ensuring that the use of emerging technologies in employment decisions complies with federal civil rights laws.

Challenges of ADM Tools

Human resources departments use many forms of ADM tools, such as resume scanners, video interviewing software, testing software and monitoring software, to screen or evaluate applicants and employees. During the hearing, proponents argued that ADM tools are more efficient and less costly than human review, and that they remove human bias from employment decisions.

Others argued that the use of ADM tools in employment decisions may mask and perpetuate bias or create new discriminatory barriers. One problem is that decisions about candidates are based upon data that is input into the ADM tool. If the data is biased, the resulting decisions based upon that data may reflect and/or reinforce that bias. For example, some ADM tools try to predict which applicants will be good employees by comparing them to current successful employees. As Amazon found after implementing a now-discontinued AI-based recruiting tool, if a company hires predominantly white men, the algorithm may conclude that white men will be more successful and rate them higher than others, perpetuating bias.
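The feedback loop described above can be illustrated with a deliberately simplified sketch. All of the candidates, features and weights below are hypothetical, and a real resume-screening model would be far more complex; the point is only that a scoring rule "learned" from past hires rewards whatever traits those hires happen to share, so a skewed hiring history produces skewed scores.

```python
from collections import Counter

# Hypothetical historical hires, skewed toward one shared profile.
past_hires = [
    {"degree": "CS", "hobby": "lacrosse"},
    {"degree": "CS", "hobby": "lacrosse"},
    {"degree": "CS", "hobby": "chess"},
]

def learn_feature_weights(hires):
    """Naive 'model': weight each feature value by how often past hires had it."""
    counts = Counter()
    for hire in hires:
        for feature, value in hire.items():
            counts[(feature, value)] += 1
    total = len(hires)
    return {fv: n / total for fv, n in counts.items()}

def score(candidate, weights):
    """Sum the learned weights for the candidate's feature values."""
    return sum(weights.get((f, v), 0.0) for f, v in candidate.items())

weights = learn_feature_weights(past_hires)

# Two equally qualified candidates; only an irrelevant trait differs.
candidate_a = {"degree": "CS", "hobby": "lacrosse"}
candidate_b = {"degree": "CS", "hobby": "debate"}

# Candidate A outscores B purely because past hires happened to share
# A's hobby: the historical skew is reproduced, not merit.
```

If the hobby (or any other feature) correlates with a protected characteristic, the tool penalizes that group even though the characteristic itself never appears in the data, which is exactly the masking effect critics described at the hearing.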

Some of the characteristics for which ADM tools screen could also be proxies for protected classes such as gender, race and age. For example, Black and Latino applicants are often overrepresented in data regarding records of criminal legal proceedings, evictions and credit histories. An algorithm that screens out candidates based on these data points may have a disparate impact on those protected classes and violate Title VII of the Civil Rights Act of 1964.
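Disparate impact of the kind described above is commonly quantified with the "four-fifths rule" from the EEOC's Uniform Guidelines on Employee Selection Procedures: if a group's selection rate is less than 80% of the rate for the most-selected group, the outcome generally warrants further scrutiny. A minimal illustration follows; the applicant counts are hypothetical, and the four-fifths rule is a rule of thumb rather than a legal bright line.

```python
def selection_rate(selected, applicants):
    """Fraction of a group's applicants that the tool advanced."""
    return selected / applicants

def impact_ratios(rates):
    """Compare each group's selection rate to the highest group's rate.

    Ratios below 0.8 suggest possible adverse impact under the
    EEOC's four-fifths rule of thumb.
    """
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}

# Hypothetical screening outcomes for two applicant groups.
rates = {
    "group_a": selection_rate(60, 100),  # 60% advanced
    "group_b": selection_rate(30, 100),  # 30% advanced
}

ratios = impact_ratios(rates)
# group_b's impact ratio is 0.30 / 0.60 = 0.5, well below the 0.8
# threshold, so the screen may have a disparate impact on group_b.
```

An employer running this kind of check would still need to investigate which screening criteria drive the gap and whether they are job-related and consistent with business necessity.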

Best Practices from the EEOC

The EEOC guidance focuses solely on factors that employers should consider when using ADM tools in order to prevent disability discrimination under Title I of the Americans with Disabilities Act of 1990 (ADA). The EEOC notes that ADM tools may violate the ADA when an employer:

  • Fails to provide a reasonable accommodation necessary for an applicant/employee to be rated fairly and accurately by the algorithm
  • Relies on an ADM tool that intentionally or unintentionally screens out an individual with a disability, even though that individual can do the job with a reasonable accommodation
  • Uses an algorithmic decision-making tool for job applicants or employees that violates the ADA’s restrictions on disability-related inquiries and medical examinations

The guidance maintains that an employer must provide a reasonable accommodation to an individual with a disability so that they can be rated fairly and accurately, unless doing so would impose an undue hardship on the employer. For example, when a company administers a knowledge test that uses a keyboard and an applicant with limited dexterity requests an accommodation, the employer should provide an accessible version of the test. If one is not available, it should consider providing an alternative test.

The guidance also states that an ADM tool may violate the ADA if it screens out individuals with disabilities by lowering their performance or by causing them to fail to meet a selection criterion, resulting in a lost job opportunity. For example, a chatbot that screens out applicants with employment history gaps may violate the ADA if the gaps were due to a disability or the need to undergo treatment for a disability.

Another concern is that a disability may reduce the accuracy of an assessment if the circumstances of the disability are not considered. An ADM tool that screens out an individual with a disability because the individual cannot perform a job under typical working conditions fails to account for the possibility that a reasonable accommodation would enable them to do the job.

Importantly, the EEOC advises that employers may be liable for discrimination even if the ADM tools are designed or administered by a vendor or other entity. The guidance also states that employers cannot simply rely on a vendor’s statement that the tool is free of bias, as those assessments may only focus on protected characteristics other than disability. Because each disability is unique, a general assessment for bias may not reveal all the potential ways someone with a disability may interact with the tool, and the ways in which the tool may impact that individual.

The guidance takes the position that any test or tool may violate the ADA if it poses disability-related inquiries or seeks information that is considered a medical examination prior to a conditional offer of employment. The guidance notes that an assessment includes disability-related inquiries if it asks job applicants or employees questions that are likely to elicit information about a disability or directly asks whether an applicant or employee is an individual with a disability. It qualifies as a medical examination if it seeks “information about an individual’s physical or mental impairments or health.”

Looking Ahead

To help mitigate the risk of discrimination while using ADM tools, employers should work with experienced employment counsel to assess the EEOC guidance and consider implementing the practices it outlines. Organizations should also monitor federal, state and local guidance in this area to ensure that any ADM tools they use comply with all anti-discrimination laws. Employers should anticipate additional EEOC guidance and enforcement regarding the use of ADM tools, as well as state and local legislation. For example, New York City enacted a law that will require, among other things, employers to perform a bias audit and to provide notice prior to using artificial intelligence in employment decisions. Although ADM and other emerging technologies may have many advantages, they must be carefully created, implemented and monitored so that the risks do not outweigh the benefits.

Laura Lapidus is management liability (EPL) risk control director at CNA Insurance.