The use of AI in hiring could lead to violations of federal law affecting job seekers with disabilities, according to the Equal Employment Opportunity Commission (EEOC).
Artificial intelligence (AI) has become more common in hiring in recent years, prompting the EEOC to issue guidance on the risks employers may face when using it.
The EEOC’s document outlines the use of AI in hiring and specifically notes that these tools “may disadvantage job applicants and employees with disabilities. When this occurs, employers may risk violating federal Equal Employment Opportunity (EEO) laws that protect individuals with disabilities.”
EEOC on AI Usage in Hiring Practices
There are several ways employers can run afoul of federal law when using AI software in hiring (see more below).
At a high level, common potential violations of the Americans with Disabilities Act (ADA) include:
- Not providing the reasonable accommodation necessary for a job applicant or employee to be fairly treated by an algorithm
- Intentionally or unintentionally screening out candidates with a disability through an algorithmic decision-making tool
- Using tools that violate the ADA’s restrictions on disability-related inquiries and medical examinations
The use of AI in hiring could lead to potential biases, the EEOC says, and in October 2021 federal officials announced an effort to ensure artificial intelligence and similar tools used in hiring and other employment decisions comply with federal civil rights laws.
“Artificial intelligence and algorithmic decision-making tools have great potential to improve our lives, including in the area of employment,” EEOC Chair Charlotte A. Burrows said in a statement. “At the same time, the EEOC is keenly aware that these tools may mask and perpetuate bias or create new discriminatory barriers to jobs. We must work to ensure that these new technologies do not become a high-tech pathway to discrimination.”
Meanwhile, it’s worth noting that New York City has passed a law regarding how employers use “automated employment decision tools” in hiring, targeting the potential for bias. Starting Jan. 1, 2023, employers must conduct a “bias audit” of their technology to determine its impact on race, ethnicity or sex. It also includes a notice requirement.
How is AI Used in Hiring? And What Can Go Wrong?
So, a company uses software to help it make employment decisions. How can hiring managers ensure compliance with federal laws and regulations?
Applications used in employment can take many forms, including:
- Automatic resume-screening software
- Chatbot software for hiring and workflow
- Video interviewing software
- Analytics software
- Employee monitoring software
- Worker management software
These programs, whether used internally or through a vendor, may exclude candidates or existing employees in a number of ways:
- Resume scanners that prioritize applications using certain keywords
- Employee monitoring software that rates employees on the basis of their keystrokes or other factors
- “Virtual assistants” or chatbots that ask job candidates about their qualifications and reject those who do not meet pre-defined requirements
- Video interviewing software that evaluates candidates based on their facial expressions and speech patterns
- Testing software that provides “job fit” scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or perceived “cultural fit” based on their performance on a game or on a more traditional test
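To make the first item above concrete, here is a minimal, hypothetical sketch of a naive keyword-based resume screener. The function names, keyword list, and threshold are illustrative assumptions, not any real vendor's implementation:

```python
# Hypothetical sketch of a naive keyword-based resume screener.
# Keyword list and threshold are illustrative assumptions only.

REQUIRED_KEYWORDS = {"python", "sql", "project management"}

def score_resume(resume_text: str) -> int:
    """Count how many required keywords appear in the resume."""
    text = resume_text.lower()
    return sum(1 for kw in REQUIRED_KEYWORDS if kw in text)

def passes_screen(resume_text: str, threshold: int = 2) -> bool:
    """Advance only candidates whose keyword score meets the threshold.

    A rigid rule like this can screen out qualified candidates who
    describe the same skills in different words -- for example, because
    assistive technology reformatted the resume, or because a disability
    led to a nontraditional career path.
    """
    return score_resume(resume_text) >= threshold
```

The point is that the exclusion happens mechanically: no human ever reviews the rejected resumes, so a pattern that correlates with disability can go unnoticed.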
At a more detailed level, these practices can create disadvantages for people with disabilities. For example:
- A resume scanner may reject applicants because of an employment gap that was the result of a disability
- A chatbot might reject an applicant because of speech patterns caused by a speech impediment
- A blind applicant may be unable to score well on a digital memory test administered through screening software
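The resume-gap scenario can be sketched in a few lines. This is a hypothetical illustration only; the 12-month cutoff and the data shape are assumptions, not taken from the EEOC guidance or any real product:

```python
# Hypothetical sketch of an automated employment-gap filter.
# The 12-month cutoff and data shape are illustrative assumptions.
from datetime import date

MAX_GAP_MONTHS = 12

def months_between(end: date, start: date) -> int:
    """Whole months from the end of one job to the start of the next."""
    return (start.year - end.year) * 12 + (start.month - end.month)

def has_disqualifying_gap(jobs: list[tuple[date, date]]) -> bool:
    """Reject any candidate with an employment gap over MAX_GAP_MONTHS.

    A blanket rule like this would reject an applicant whose gap was
    caused by a disability (e.g., medical treatment), even if they could
    perform the job with a reasonable accommodation.
    """
    jobs = sorted(jobs)  # order by start date
    for (_, prev_end), (next_start, _) in zip(jobs, jobs[1:]):
        if months_between(prev_end, next_start) > MAX_GAP_MONTHS:
            return True
    return False
```

Because the rule never asks *why* the gap exists, it treats a disability-related absence the same as any other, which is exactly the kind of unintentional screen-out the guidance warns about.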
“Algorithmic decision-making tools are often designed to predict whether applicants can do a job under typical working conditions. But people with disabilities do not always work under typical conditions if they are entitled to on-the-job reasonable accommodations,” reads a portion of the EEOC guidance.
Lastly, the EEOC provides definitions for both algorithms and AI.
Algorithm
Generally, an “algorithm” is a set of instructions that can be followed by a computer to accomplish some end. Human resources software and applications use algorithms to allow employers to process data to evaluate, rate, and make other decisions about job applicants and employees. Software or applications that include algorithmic decision-making tools may be used at various stages of employment, including hiring, performance evaluation, promotion, and termination.
Artificial Intelligence (AI)
Some employers and software vendors use AI when developing algorithms that help employers evaluate, rate, and make other decisions about job applicants and employees. In the National Artificial Intelligence Initiative Act of 2020 at section 5002(3), Congress defined “AI” to mean a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.” In the employment context, using AI has typically meant that the developer relies partly on the computer’s own analysis of data to determine which criteria to use when making employment decisions. AI may include machine learning, computer vision, natural language processing and understanding, intelligent decision support systems, and autonomous systems.
Clearly, there are a host of considerations for employers using AI in hiring. With more companies using software to vet candidates, employers should be aware that federal officials are watching this area closely.
Find more information in the EEOC’s guidance on the use of AI in hiring.