This is how AI recruitment systems keep discrimination alive
Across the US, authorities are grappling with how to regulate the use of AI in labour hiring and guard against algorithmic bias
Los Angeles — After applying in vain for nearly 100 jobs through the human resources platform Workday, Derek Mobley noticed a suspicious pattern.
“I would get all these rejection emails at 2am or 3am,” he told the Thomson Reuters Foundation. “I knew it had to be automated.”
Mobley, a 49-year-old Black man with a degree in finance from Morehouse College in Georgia, had previously worked as a commercial loan officer, among other jobs in finance.
He applied for mid-level jobs across a range of sectors, including energy and insurance, but when he used the Workday platform, he said he did not get a single interview or call-back and was often forced to settle for gig work or warehouse shifts to make ends meet.
Mobley believes he was being discriminated against by Workday's artificial intelligence (AI) algorithms.
In February, he filed what his lawyers describe as a first-of-its-kind class action lawsuit against Workday, alleging that the pattern of rejection he and others experienced pointed to the use of an algorithm that discriminates against people who are Black, disabled or over the age of 40.
In a statement to the Thomson Reuters Foundation, Workday said Mobley’s lawsuit was “completely devoid of factual allegations and assertions”, and said the company is committed to “responsible AI”.
The question of what “responsible AI” might look like goes to the heart of an increasingly robust push-back against the unrestricted use of automation in the US recruitment market.
Mobley’s lawsuit, which is working through California’s court system, is just one skirmish in a bigger battle involving automation in the workplace.
Across the US, state and federal authorities are grappling with how to regulate the use of AI in labour hiring and guard against algorithmic bias.
Around 85% of large US employers, including up to 99% of Fortune 500 companies, now use some form of automated tool or AI to screen or rank candidates for hire, according to recent surveys.
These include resume screeners that automatically scan applicants' submissions, assessment tools that grade an applicant's suitability for a job based on an online test, and facial- or emotion-recognition tools that analyse video interviews.
In May, the Equal Employment Opportunity Commission (EEOC), the federal agency that enforces civil rights law in workplaces, released new guidelines to help employers prevent discrimination when using automated hiring processes.
In August, the EEOC settled its first ever automation-based case, fining iTutorGroup $365,000 for using software to automatically reject applicants over the age of 40. The company, which provides English-language tutoring to students in China, denied wrongdoing in the settlement.
City and state authorities are also weighing in.
A novel law regulating AI in hiring came into force in New York City in July, and legislators from California to Vermont to New Jersey are advancing similar legislation.
“Right now, it's the Wild Wild West out there,” said Matt Scherer, a lawyer with the Center for Democracy and Technology (CDT), a nonprofit advocating for civil rights in a digital age. “But that will change.”
Technology-enabled bias is a risk because AI uses algorithms, data and computational models to mimic human intelligence. It relies on “training data”, and if that data, which is often historical, contains bias, an AI programme can replicate it.
In 2018, for instance, Amazon abandoned an AI resume screening product that had started to automatically downgrade applicants with the word “women’s” on their CVs, as in “women’s chess club captain”.
This was because Amazon’s computer models had been trained to vet applicants by observing patterns in resumes submitted over a decade. Most of those resumes came from men, a reflection of male dominance across the tech industry.
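The mechanism is easy to reproduce in miniature. The sketch below is purely illustrative, not Amazon's or Workday's actual system: a toy resume scorer "learns" word weights from fabricated historical hiring decisions in which resumes containing "women's" were rarely accepted, and the learned weights then penalise any new resume containing that word.

```python
# Hypothetical illustration of training-data bias (fabricated data, not any
# vendor's real system): a scorer learns word weights from past decisions.
from collections import Counter

# Historical training data: (words on resume, was_hired). The bias baked in:
# resumes containing "women's" were never hired in this invented history.
history = [
    ({"python", "finance", "chess"}, True),
    ({"python", "women's", "chess"}, False),
    ({"sql", "finance"}, True),
    ({"sql", "women's", "finance"}, False),
    ({"python", "sql"}, True),
]

def learn_weights(history):
    """Weight each word by the hire rate among resumes that contained it."""
    hired, seen = Counter(), Counter()
    for words, was_hired in history:
        for w in words:
            seen[w] += 1
            hired[w] += was_hired  # True counts as 1
    return {w: hired[w] / seen[w] for w in seen}

def score(resume_words, weights):
    """Average the learned weights of the resume's known words."""
    known = [weights[w] for w in resume_words if w in weights]
    return sum(known) / len(known) if known else 0.5

weights = learn_weights(history)

# Two otherwise-identical candidates differ by a single word:
a = score({"python", "finance", "chess"}, weights)
b = score({"python", "finance", "chess", "women's"}, weights)
```

Here `a > b`: the second resume scores lower solely because of the word "women's", since the model never saw a hire associated with it. Nothing about merit changed; the system simply automated the pattern in its history.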
This is the kind of discrimination that worries Brad Hoylman-Sigal, a state senator in New York. In August, he introduced a bill that would require audits of hiring tools and also ban certain kinds of data collection, including emotion-recognition software.
“Many of these tools have been proven to unduly invade workers’ privacy and discriminate against women, people with disabilities, and people of colour,” he said.
Ifeoma Ajunwa, director of the AI and the Law programme at Emory University, says job applicants often don’t have a choice about whether to submit to automated hiring processes.
She has warned about the possibility of “algorithmic blackballing”, where hiring systems repeatedly reject an applicant based on hidden criteria.
She also called on the Federal Trade Commission (FTC) to step in and ban certain kinds of automated hiring tools.
In April, the FTC and three other federal agencies, including the EEOC, said in a statement that they were looking at potential discrimination arising from data sets that train AI systems and opaque “black box” models that make anti-bias diligence difficult.
Some advocates of AI acknowledge the risk of bias but say this can be controlled.
Amandeep Singh Gill, the UN secretary-general’s envoy on technology, called for more investment in AI and data literacy to mitigate risks such as discrimination in automated hiring.
“We need to lower the barriers to entry to these conversations and build up the literacy around data, AI and how we teach it in schools and government,” he said at the Thomson Reuters Foundation’s annual Trust Conference in London on Friday.
Frida Polli, co-founder and former CEO of pymetrics, which creates AI-powered assessment tools, said programmers could tweak the variables that are considered by an automated system, something that cannot be done to the human brain.
CDT’s Scherer is sceptical.
“The industry says that you can use these tools to increase diversity but I think there’s a real tension there,” he said. “In reality, you are just automating the process of human bias in hiring.”
Taming the tech
That’s what worries legislators such as Californian state assembly member Rebecca Bauer-Kahan, who introduced legislation this year that would allow job applicants to opt out of automated hiring platforms and require those platforms to submit to fairness audits.
Her bill, AB331, would also have made it easier for private citizens to sue hiring platforms if they suspect bias.
That last point proved to be a major obstacle. Businesses and tech groups signed a letter penned by California’s Chamber of Commerce raising concerns about the private right of action, among other issues.
The bill failed to pass out of the assembly, but Bauer-Kahan plans to reintroduce a version in the next session in December.
“The federal government is not doing much these days,” she said. “The states are going to have to move first.”
New York City is leading the way. In July, it became the first jurisdiction in the country to introduce a law to specifically regulate algorithms in hiring. Under the legislation, applicants can petition to be notified when they are subjected to automated tools, and hiring software that relies on AI to choose preferred candidates or eliminate others must be audited for racist or sexist bias.
But many digital privacy advocates say the law does not go far enough: it only applies to AI hiring tools that “substantially assist or replace” humans, and it also does not address biases that might affect disabled applicants.
Cody Venzke, senior policy counsel with the American Civil Liberties Union (ACLU), said he was particularly concerned about “watered-down” regulatory efforts.
“Some proposals would require harmed applicants and employees to prove that an algorithm directly caused discrimination, when many algorithmic hiring tools’ real harm is their strong influence on human decision-making,” he said.
“Other proposals would give employers a second bite at the apple to come up with nondiscriminatory hiring practices, rather than giving applicants harmed by discriminatory technology their day in court.”
As regulators seek to adapt to the ubiquity and complexity of AI in the recruitment market, Mobley hopes his lawsuit will at least lift the lid on the extent of algorithmic bias.
“I know I’m not the only one,” he said. “There are a lot of people out there, applying for jobs they were probably qualified for … but [who] are being unfairly discriminated against.”
Thomson Reuters Foundation