Workday stands accused of building algorithms that screen out job applicants on the basis of race, age, and disability, according to a lawsuit.
Filed earlier this week in the US District Court for the Northern District of California, the case alleges that the HR and payroll SaaS firm “unlawfully offers an algorithm-based applicant screening system that determines whether an employer should accept or reject an application for employment based on the individual’s race, age, and/or disability.”
The Register has asked Workday to comment.
The class action complaint [PDF] – which seeks to represent other applicants who may have been affected – stems from the alleged experience of Derek Mobley, a Black man over 40 who suffers from anxiety and depression. Court documents state that since 2018, Mobley has applied for 80-100 positions at companies that use Workday as a recruitment screening tool.
He was denied employment on every occasion, even though he holds a bachelor’s degree in finance from Morehouse College, a private historically Black men’s liberal arts college in Atlanta, Georgia, the documents state. He also has an associate’s degree in network systems administration from ITT Technical Institute, Indiana.
“The selection tools marketed by Workday to its customers allows these customers to manipulate and configure them in a discriminatory manner to recruit, hire, and onboard employees. Workday’s products process and interpret an applicant’s qualifications and recommend whether the applicant should be accepted or rejected,” the documents add.
The court papers allege that Workday’s tools rely on subjective practices which result in biases against African-American applicants, those over 40 years of age, and those with disabilities.
The plaintiff claims that Workday’s AI systems and screening tools rely on algorithms and inputs created by humans who often have “built-in motivations, conscious and unconscious, to discriminate.”
Workday has since said in a statement to The Register that the lawsuit is without merit. It said it was “committed to trustworthy AI” and acts “responsibly and transparently in the design and delivery” of its AI solutions “to support equitable recommendations.”
“We engage in a risk-based review process throughout our product lifecycle to help mitigate any unintended consequences, as well as extensive legal reviews to help ensure compliance with regulations,” a spokesperson added. ®