AI and Hiring: What You Need to Know to Avoid Liability

Artificial intelligence (AI) is arguably the most transformative technology since the advent of the internet. It is poised to play a significant role in many areas of human life, including the workplace, where AI is changing not only how we work but also how we hire. Due to the emergence of AI hiring tools, applicants might not connect with a human being from the hiring company until AI has screened them.

Companies that use AI-automated hiring software should be aware of the associated legal risks and take steps to avoid potential liability.

How Businesses Use AI Hiring Tools

It was not very long ago that job seekers needed to print out their resumes and wear nice clothes to prepare for an in-person interview. Making a good first impression on a hiring manager was often the key to a job seeker getting their foot in the door.

Today, resumes are emailed to companies, and an applicant’s first contact with a company is likely to be an Applicant Tracking System (ATS). An estimated 97 percent of Fortune 500 companies filter candidates through an ATS such as Greenhouse, Lever, Jobvite, or Taleo.[1]

An ATS can be used to help employers find candidates with particular certifications, degrees, skills, job titles, or keywords listed in their resumes. ATS software can eliminate candidates who do not meet an employer’s requirements and speed up the hiring process. In a tight labor market, the ability to quickly vet and hire candidates is a key factor in securing the best workers.[2]

The following are some other ways that businesses use AI tools in hiring:

  • Writing job descriptions
  • Ranking candidates
  • Performing background checks
  • Scheduling interviews
  • Conducting online tests and games
  • Measuring personality traits based on candidate-submitted videos
  • Communicating with candidates

Legal Risks of AI Hiring

AI does not replace human intelligence in the hiring process; rather, it acts as an assistant to human resources professionals to accelerate and streamline hiring.

AI tools show promise with their potential to reduce hiring bias. But in some cases, they could disadvantage certain job seekers and run afoul of antidiscrimination laws. The 2023 Hiring Benchmark Report notes that “some of the earliest uses of AI in recruitment have led to major misses that have had the opposite of the intended effect, increasing bias and legal liability.”[3]

AI tools are trained on data about an employer’s current workforce and hiring processes. That data may therefore reinforce existing institutional and systemic biases, including those related to legally protected characteristics.

EEOC Rules

Laws enforced by the Equal Employment Opportunity Commission (EEOC) make it illegal to discriminate against job seekers and workers on the basis of the following characteristics:

  • Race
  • Color
  • Religion
  • Sex (including gender identity, sexual orientation, and pregnancy)
  • National origin
  • Age (40 and over)
  • Disability
  • Genetic information

Under EEOC regulations, companies are responsible for their hiring decisions, including decisions based on the AI tools they use. Even if a company does not intend to discriminate and does not know why an algorithm selected one candidate over another, it could still be held liable for discriminatory decisions.

As part of its Artificial Intelligence and Algorithmic Fairness Initiative, the EEOC has published guidance focused on how companies can ensure that AI and other software used in employment decisions comply with EEOC-enforced federal civil rights laws, such as Title VII of the Civil Rights Act, the Americans with Disabilities Act, and the Age Discrimination in Employment Act.[4] In September 2023, iTutorGroup paid $365,000 to settle an EEOC discriminatory hiring lawsuit stemming from its use of AI in hiring.[5]

State Laws

Some states have enacted legislation that specifically regulates transparency in the use of AI for employment screening purposes.

The following jurisdictions have these types of laws:

  • Illinois (H.B. 2557)
  • Maryland (H.B. 1202)
  • California (S.B. 1001)
  • New York City (Local Law 144 of 2021)

California, Colorado, and Illinois have also enacted laws designed to protect individuals from discrimination, including situations in which an AI system contributes to discriminatory treatment.[6]

How Employers Can Avoid AI Hiring Discrimination

To prevent legal action, fines, and penalties stemming from discriminatory AI hiring, employers who rely on algorithms in screening or hiring employees should take steps to weed out bias from the process.

The EEOC recommends that employers conduct ongoing self-analysis to determine whether their use of technology could result in discrimination. In recently published guidelines, the EEOC states this assessment can be done by evaluating whether an algorithm causes a selection rate for individuals in a protected group that is “substantially” less than the selection rate for individuals in another group.[7] If an algorithmic decision had an adverse impact on a protected group, it could violate civil rights laws.
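As a rough illustration of this self-analysis, the EEOC's guidance points to the long-standing "four-fifths rule" of thumb: a selection rate for one group that is less than 80 percent of the rate for the highest-selected group may indicate adverse impact. The sketch below uses hypothetical applicant numbers (not drawn from any real employer) to show the arithmetic; it is a simplified screen, not a substitute for the fuller statistical analysis the guidance contemplates.

```python
# Illustration of the EEOC's adverse-impact ("four-fifths") rule of thumb.
# All applicant and selection counts below are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

# Hypothetical screening results for two groups of applicants
rate_group_a = selection_rate(48, 80)   # 0.60 selected
rate_group_b = selection_rate(12, 40)   # 0.30 selected

# Ratio of the lower selection rate to the higher one
impact_ratio = min(rate_group_a, rate_group_b) / max(rate_group_a, rate_group_b)

# Under the four-fifths rule of thumb, a ratio below 0.8 suggests the
# selection rate is "substantially" lower and warrants closer review.
print(f"Impact ratio: {impact_ratio:.2f}")
print("Flag for review" if impact_ratio < 0.8 else "Within rule-of-thumb range")
```

In this hypothetical, the impact ratio is 0.50, well below the 0.8 threshold, so the employer would want to examine why the algorithm selects one group at half the rate of the other.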

The EEOC stresses that employers may be responsible for discriminatory algorithmic decisions even if an AI tool is administered by a third party, such as a software vendor. This makes it important to conduct regular vendor analyses regarding their debiasing efforts.

Human oversight is critical to ensure compliance with federal and state laws. Employers should develop best practices that allow candidates to give informed consent about the AI processes being deployed and to opt out or request an accommodation, as may be required under the ADA (e.g., specialized equipment or alternative tests).

The intersection of technology and law poses novel issues that can be difficult for businesses to navigate. Noncompliance with federal or state nondiscrimination laws in the use of AI for hiring, even if unintentional, can result in lawsuits and substantial penalties. For help crafting legally compliant hiring policies, whether or not you use AI, contact our small business attorneys to schedule an appointment.

[1] Sydney Myers, 2023 Applicant Tracking System (ATS) Usage Report: Key Shifts and Strategies for Job Seekers, Jobscan (Oct. 2, 2023), https://www.jobscan.co/blog/fortune-500-use-applicant-tracking-systems/.

[2] Jennifer Alsever, AI-powered speed hiring could get you an instant job, but are employers moving too fast?, FastCompany (Jan. 6, 2023), https://www.fastcompany.com/90831648/ai-powered-speed-hiring-could-get-you-an-instant-job-but-are-employers-moving-too-fast.

[3] Criteria Corp., 2023 Hiring Benchmark Report, p. 13, https://go.criteriacorp.com/l/945703/2023-09-15/3lylgv/945703/1694808647jrZadWgj/Criteria_Research_2023BenchmarkReport.pdf.

[4] U.S. Equal Emp. Opportunity Comm’n, Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964 (May 18, 2023), https://www.eeoc.gov/laws/guidance/select-issues-assessing-adverse-impact-software-algorithms-and-artificial.

[5] Press Release, U.S. Equal Emp. Opportunity Comm’n, iTutorGroup to Pay $365,000 to Settle EEOC Discriminatory Hiring Suit (Sept. 11, 2023), https://www.eeoc.gov/newsroom/itutorgroup-pay-365000-settle-eeoc-discriminatory-hiring-suit.

[6] The Council of State Governments, Artificial Intelligence in the States: Emerging Legislation (Dec. 6, 2023), https://www.csg.org/2023/12/06/artificial-intelligence-in-the-states-emerging-legislation/.

[7] U.S. Equal Emp. Opportunity Comm’n, Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964 (May 18, 2023), https://www.eeoc.gov/laws/guidance/select-issues-assessing-adverse-impact-software-algorithms-and-artificial.