Employers, Be Wary of Built-in Bias from AI Vendors
Employers beware: you could be liable when a third-party vendor’s applicant screening software results in unlawful discrimination, even if the discrimination is unintentional and occurs without your knowledge. Employers who turn to artificial intelligence programs to streamline hiring and recruitment must exercise caution to avoid potential liability.
Workday Class Action
In February 2023, Derek Mobley filed a class-action lawsuit against Workday Inc., asserting claims for race, age and disability discrimination in violation of Title VII of the Civil Rights Act of 1964 and other federal laws.[i] Workday is perhaps best known for its human resources software products and services for businesses. Among other things, Workday uses AI-powered hiring software to screen job applicants for employers. The complaint alleges that Mobley, described in court documents as a Black man over 40 years old who has been diagnosed with anxiety and depression, was discriminated against on the basis of race, age and disability across more than 100 job applications he submitted to employers using Workday’s hiring software. He claims that these applications included assessments and personality tests that likely revealed his protected characteristics, and that the software screened him out because of them. Mobley’s suit is the first proposed class action, potentially covering hundreds of thousands of people, to challenge the use of AI employment screening software.[ii] Its outcome is expected to set an important precedent on the legal implications of using AI to automate hiring and other employment functions.
EEOC Oversight
Beyond reputational damage from discriminatory practices, employers using AI screening technology could also face significant financial consequences. These include fines from regulatory bodies like the U.S. Equal Employment Opportunity Commission for violations of anti-discrimination laws, as well as increased insurance costs and the operational costs of auditing, overhauling or replacing a discriminatory AI system. Lastly, employers could end up paying compensatory and punitive damages to affected applicants. In EEOC v. iTutorGroup, Inc., et al. (Civil Action No. 1:22-cv-02565), the EEOC alleged that iTutorGroup’s application software was programmed to automatically reject female applicants aged 55 or older and male applicants aged 60 or older, rejecting more than 200 qualified applicants because of their age.[iii] On Aug. 9, 2023, the case settled after iTutorGroup agreed to pay $365,000 to be distributed as back pay and compensatory damages among the applicants who were allegedly unlawfully rejected.[iv]
The EEOC has warned employers that they can be held legally liable if they fail to prevent screening software from having a discriminatory impact. Under EEOC regulations, companies are responsible for their hiring decisions, including decisions based on the AI tools they use.[v] Even if a company does not intend to discriminate and does not know why an algorithm selected one candidate over another, it could still be liable for discriminatory decisions. To guide employers, the EEOC launched an initiative in 2021 to ensure AI-assisted software complies with EEO laws and released a related technical assistance document in May 2023.[vi] The EEOC has repeatedly warned employers that it is watching for AI bias in employment.[vii] But even with these warnings, recruiting officers are not always aware of how vendors use AI in their programs, and growing legal requirements mean that employers need to develop a deeper understanding of the technologies they use and know what questions to ask.
Steps for Employers to Protect Themselves
Oversight and review of AI employment screening programs are crucial: without them, inadvertently discriminatory practices can expose employers to liability to applicants who claim that the program screens based on protected characteristics. That oversight must include regularly assessing AI-driven hiring processes to identify and address potential biases, for example through a selection-rate audit like the one sketched below.
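One widely used screening benchmark for such an audit is the “four-fifths rule” from the Uniform Guidelines on Employee Selection Procedures (29 C.F.R. Part 1607): a selection rate for any group that is less than four-fifths (80%) of the rate for the group with the highest rate is generally regarded as evidence of adverse impact. The Python sketch below illustrates only the arithmetic of that rule; the group labels and applicant counts are hypothetical, invented for demonstration.

```python
# A minimal, hypothetical sketch of the "four-fifths rule" from the Uniform
# Guidelines on Employee Selection Procedures (29 C.F.R. Part 1607).
# Group names and counts are invented for illustration only.

def selection_rates(outcomes):
    """Map each group to its selection rate: selected / screened."""
    return {group: selected / screened
            for group, (selected, screened) in outcomes.items()}

def four_fifths_flags(outcomes):
    """Flag groups whose selection rate is below 80% of the highest group's rate."""
    rates = selection_rates(outcomes)
    highest = max(rates.values())
    return {group: rate / highest < 0.8 for group, rate in rates.items()}

# Hypothetical audit data: group -> (applicants advanced, applicants screened).
audit = {
    "group_a": (48, 120),  # 40.0% selection rate
    "group_b": (30, 110),  # ~27.3% selection rate; 27.3 / 40.0 ≈ 0.68 < 0.8
}

for group, rate in selection_rates(audit).items():
    flag = "  <-- potential adverse impact" if four_fifths_flags(audit)[group] else ""
    print(f"{group}: selection rate {rate:.1%}{flag}")
```

A numeric screen of this kind is a red flag generator, not a legal conclusion; the Uniform Guidelines treat the four-fifths rule as a rule of thumb, and any flagged disparity should be reviewed with counsel.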
Employers can adopt proactive measures to mitigate these risks, including at least the following:
- Involve HR personnel in decision-making to complement AI-generated recommendations, rather than simply adopting recommendations without question.
- Vet third-party vendors. Ensure AI vendors follow ethical guidelines and have measures in place to prevent bias before deploying the tool. This includes understanding the data they use to train their models and the algorithms they employ.
- Develop a formal written policy and standards for collaborating with third-party vendors. The policy should specify how AI screening programs will be used in a non-discriminatory manner and how the employer will verify that vendors adhere to those standards, and to equal employment opportunity laws, when screening applicants.
- Review software licensing agreements to ensure that they include appropriate indemnification, under which the software provider bears some or all of the cost of third-party claims against a customer if a problem arises.
- For companies that lack the leverage to negotiate indemnification into software licensing agreements, consider holding off on AI screening programs until the program has been thoroughly reviewed and its track record across numerous other organizations confirms its credibility and reliability in eliminating unlawful bias.
In summary, while AI can enhance efficiency in candidate selection, it also carries risks of perpetuating biases and discrimination if not properly managed. These risks can expose employers to lawsuits under anti-discrimination laws if the AI tools (inadvertently or intentionally) screen out applicants based on protected characteristics such as race, sex, age and disability. To mitigate these risks, employers should ensure that their AI systems are transparent, regularly audited for bias and compliant with legal standards. Failure to do so could result in significant legal and financial consequences, undermining the benefits that AI technology promises to bring to the hiring process.
Robinson Bradshaw’s Employment & Labor Practice Group will closely monitor and report on the latest developments regarding the Workday lawsuit and government standards and regulations addressing AI tools, such as the screening program challenged by Mobley. For assistance in evaluating whether to use an AI screening program and counsel on best practices to avoid liability, please contact a member of our team.
[i] Mobley v. Workday, Inc., No. 3:23-cv-00770 (N.D. Cal.).
[ii] Daniel Wiessner, Workday Must Face Novel Bias Lawsuit Over AI Screening Software, Reuters (July 15, 2024), https://www.reuters.com/legal/litigation/workday-must-face-novel-bias-lawsuit-over-ai-screening-software-2024-07-15/ (last visited July 24, 2024).
[iii] EEOC v. iTutorGroup, Inc., et al., No. 1:22-cv-02565 (E.D.N.Y.).
[iv] Id.
[v] Title VII of the Civil Rights Act of 1964, the Age Discrimination in Employment Act and the Americans with Disabilities Act.
[vi] Title VII of the Civil Rights Act of 1964; 29 C.F.R. Part 1607 (Uniform Guidelines on Employee Selection Procedures).
[vii] Patrick Thibodeau, Workday’s AI Lawsuit Defense Puts Responsibility on Users, Tripp Scott Attorneys at Law (2024), https://www.trippscott.com/insights/workdays-ai-lawsuit-defense-puts-responsibility-on-users (last visited July 22, 2024).