
For many HR teams, the hiring process requires a lot of manual work. Overwhelmed hiring managers need to sort through hundreds or even thousands of resumes, rule out unqualified candidates and perform other time-consuming duties. Integrating artificial intelligence (AI) into the hiring process can present enticing opportunities to cut down on these initial tasks, but organizations need to be aware that using AI-based tools can also introduce potential legal and regulatory risks.
HR professionals and risk managers must navigate an increasingly complex regulatory landscape while ensuring fair and effective hiring practices and mitigating the risks posed by AI. Currently, two states, Colorado and Illinois, have enacted laws to regulate the use of AI in the hiring process, and more states will likely follow. Keeping abreast of the latest changes in the regulatory landscape and knowing the current risks and best practices of AI use in hiring are key to mitigating risk and remaining compliant with evolving regulations.
Current Regulatory Landscape
There is currently no single federal law that regulates how organizations can use AI in hiring practices. However, Illinois and Colorado have emerged as pioneers in AI employment regulation, with Illinois's HB 3773 taking effect on January 1, 2026, and Colorado's SB 24-205 on February 1, 2026. These laws establish regulatory frameworks that other states are expected to follow. HR managers must stay on top of developments to ensure they follow best practices in their employment processes, especially if they have employees in multiple locations across the country, as different laws may apply.
Both laws require employers to be transparent with prospective and current employees, including notifying them when AI is used in employment decisions. In addition, Illinois's HB 3773 prohibits employers from using zip codes as a proxy for protected classes, a provision intended to prevent inadvertent discrimination against specific geographic areas. The bill also prohibits companies from using AI in any way that would subject employees to discrimination based on protected classes, such as building an algorithm to filter out certain individuals.
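To make the proxy concern concrete, the sketch below shows the kind of check a bias audit might run to see whether a feature like zip code effectively stands in for a protected class. The dataset, the column names (zip_code, protected_class) and the 80% threshold are illustrative assumptions, not requirements from either statute:

```python
# A minimal proxy-feature check, assuming a hypothetical applicant dataset
# with columns "zip_code" and "protected_class" (self-reported demographics,
# collected separately for audit purposes only).
import pandas as pd

def proxy_check(df: pd.DataFrame, feature: str, protected: str) -> pd.DataFrame:
    """Show how strongly a feature's values skew toward protected groups.

    If a single zip code is dominated by one protected group, a model that
    weights zip code heavily may be discriminating by proxy.
    """
    # Share of each protected group within each feature value.
    composition = (
        df.groupby(feature)[protected]
        .value_counts(normalize=True)
        .rename("share")
        .reset_index()
    )
    # Flag feature values where one group makes up more than 80% of
    # applicants (an illustrative threshold, not a legal standard).
    return composition[composition["share"] > 0.80]

applicants = pd.DataFrame({
    "zip_code": ["60601", "60601", "60601", "60614", "60614"],
    "protected_class": ["A", "A", "A", "B", "A"],
})
print(proxy_check(applicants, "zip_code", "protected_class"))
# Flags 60601, where group A makes up 100% of applicants.
```

A flagged value does not prove discrimination; it identifies where an auditor should look at how heavily the model relies on that feature.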
Critical Risks HR Managers Should Consider
Algorithmic discrimination is a central concern with AI-driven hiring systems. AI models can encode unintentional biases that put certain groups at a disadvantage, hidden correlations in training data can lead to discriminatory outcomes, and automated decision-making can violate civil rights laws if not properly monitored. Employers should maintain detailed documentation of AI-driven decisions and implement regular system audits to ensure compliance with current regulations.
Using AI is not itself always the problem; how it is applied is, and HR professionals need to monitor that application closely. In August 2023, the Equal Employment Opportunity Commission (EEOC) settled a complaint alleging that a company used AI to automatically reject job applicants based on age, in violation of the Age Discrimination in Employment Act. The problem in that case was the company's violation of discrimination law, not its use of AI as such.
Organizations must also consider data privacy and security when using AI in their hiring processes. Hiring managers retain a considerable amount of candidate information from the job applications they receive, and not every AI platform handles that data securely, so running candidate information through an unvetted platform can put personal information at risk. HR managers should therefore ensure that sensitive candidate information is protected in compliance with data protection laws whenever they use an AI system. Secure storage and handling of personal information becomes increasingly critical as AI systems process more detailed candidate data.
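As a simple illustration (the field names and the set of sensitive keys are assumptions, not a legal standard), the sketch below strips direct identifiers from a candidate record before it is passed to an external AI service:

```python
# A minimal sketch of redacting direct identifiers from a candidate record
# before sending it to an external AI platform. The field names and the
# set of sensitive keys are illustrative, not a compliance checklist.
SENSITIVE_FIELDS = {"name", "email", "phone", "address", "date_of_birth", "ssn"}

def redact_candidate(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}

candidate = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "years_experience": 7,
    "skills": ["payroll", "benefits administration"],
}
print(redact_candidate(candidate))
# {'years_experience': 7, 'skills': ['payroll', 'benefits administration']}
```

Redaction of this kind reduces exposure if a platform is breached, but it does not replace vetting the platform's own security and retention practices.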
Best Practices for Risk Managers
Developing a comprehensive AI audit framework can serve as the foundation for responsible AI implementation. Organizations should regularly assess their AI systems for potential biases, maintain detailed documentation of decision-making processes and validate AI outputs against established diversity and inclusion goals.
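One concrete audit metric is the adverse impact ratio behind the EEOC's four-fifths rule of thumb: each group's selection rate divided by the highest group's rate, with ratios below 0.8 commonly treated as a trigger for closer review. A minimal sketch, assuming a hypothetical results table with group and selected columns:

```python
# Adverse impact ratios for an AI screening tool's outcomes. Column names
# ("group", "selected") and the sample data are illustrative assumptions.
import pandas as pd

def adverse_impact_ratios(df: pd.DataFrame, group: str, selected: str) -> pd.Series:
    """Each group's selection rate divided by the highest group's rate.

    Under the EEOC's four-fifths rule of thumb, a ratio below 0.8 is a
    common signal to investigate further, not proof of discrimination.
    """
    rates = df.groupby(group)[selected].mean()
    return rates / rates.max()

outcomes = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [1, 1, 1, 0, 1, 0, 0, 0],
})
print(adverse_impact_ratios(outcomes, "group", "selected"))
# group A -> 1.00; group B -> 0.33, well below 0.8, so flag for review
```

Running this check on every model update, and documenting the results, supports both the audit trail and the bias-assessment goals described above.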
A thorough implementation strategy requires well-defined policies for AI use in hiring, supported by dedicated oversight committees for AI deployment. HR employees may also adopt AI tools on their own, and unsanctioned use can create larger problems if it goes unmonitored. Employees should be made aware of the regulations around AI and understand when they can and cannot use it in the workplace, regardless of whether the broader company has formally implemented AI. Organizations should also hold regular training programs to keep HR staff current on AI systems and regulatory requirements.
Lastly, risk mitigation measures should be robust and comprehensive. Human oversight of AI decisions and clear appeal processes for candidates remain essential, and regular updates to AI systems based on audit findings help maintain accuracy and fairness. Proper training is also critical for employees to fully understand and use the AI system; scheduling full-team training and keeping the company informed of new policies can help limit employer exposure.
Looking Ahead
Organizations must remain vigilant about the evolving regulatory landscape around AI. To stay ahead of emerging AI risks, employers should conduct regular audits of their AI systems to identify vulnerabilities and implement an action plan to address them. Companies that use an outside firm for hiring should review that firm's policies on AI use to ensure it is also following state regulations.
Employers should actively monitor state and federal regulations around this key emerging risk. Regularly updating risk assessment protocols and ensuring ongoing compliance with evolving standards will remain crucial to successful AI implementation in the hiring process.