New study says AI recruitment tools don’t solve interview bias

Algorithms could be judging candidates on simple lighting factors, claims a new Cambridge study.
17 October 2022

Could you choose your next star employee from the way they move? Chances are your AI can’t, either.

One of the biggest issues in recruitment is that recruiters are human beings. That means they carry with them the (mostly unconscious) biases – and prejudices – of the society in which they live, and which has shaped them.

Even discounting overt bigotry, misogyny, racial prejudice and the like, the way Western society is built is not the panacea we’re often told it is. It’s a pyramid, with rich, cisgender, heterosexual, white, non-disabled men with good mental health at the top, and everyone else below, operating on a system of social trap doors for each point of difference from that Powerball-winning list of qualities. That means recruiters come to the task of recruiting with that social structure lodged in the way they view the world – and the way they recruit.

That’s why, for instance, schemes like Affirmative Action have been necessary over the years – to attempt to counteract the inherent social biases that see the same sort of people who’ve always been in charge continue in leading roles, because of the notion that they represent what leaders look like.

The perfect job for AI?

It’s also why lots of businesses use AI to sift their initial candidates into shortlists of prospective employees. Remove the human, remove the bias – at least in the initial stages.

It’s a sentiment that should make perfect sense – and would, were our society not so entirely saturated in these prejudices. But a new report from the University of Cambridge in the UK – published in the academic journal Philosophy & Technology – says the claims that AI can equalize all candidates by removing gender and race from the recruitment process, and that doing so is inherently fairer, are little more than “pseudoscience.”

The report outlines four specific objections to the idea of inherently fairer AI recruitment tools:

Firstly, they argue that attempts to “strip” gender and race from AI systems frequently misunderstand what gender and race really are. Treating them as simple, isolatable variables in what is essentially an algorithm takes no account of the broader systems of power in which they operate. Reducing such complex and nuanced social factors to individual variables all but guarantees that their pervasive nature is collapsed into an unrealistically concrete data point – which is likely to introduce flaws into the AI’s processing.
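To see the practical problem, consider the “proxy” effect. Below is a minimal sketch on synthetic data – not the researchers’ code, and every variable in it is hypothetical – of how a model trained with the gender column removed can still reproduce gendered outcomes, because other features in the data remain correlated with the removed attribute:

```python
# Minimal sketch (synthetic data, hypothetical variables): "stripping" a
# protected attribute from the inputs does not strip its influence.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

gender = rng.integers(0, 2, n)              # protected attribute (0 or 1)
# A "neutral-looking" feature that happens to correlate with gender in the
# training data - a proxy (e.g. hobby keywords, career-gap length).
proxy = gender + rng.normal(0, 0.5, n)
skill = rng.normal(0, 1, n)                 # genuinely job-related signal

# Historical labels encode a biased process: gender affected past outcomes.
hired = (skill + 1.5 * gender + rng.normal(0, 0.5, n) > 1.0).astype(int)

# The "fair" model: the gender column is excluded from the inputs...
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# ...yet predicted hire rates still differ sharply by gender, via the proxy.
preds = model.predict_proba(X)[:, 1]
print(f"mean predicted hire rate, group 0: {preds[gender == 0].mean():.2f}")
print(f"mean predicted hire rate, group 1: {preds[gender == 1].mean():.2f}")
```

Even with the protected attribute deleted, the two groups receive very different scores, because the biased historical labels and the correlated proxy carry the pattern forward – which is precisely the researchers’ point about treating race and gender as single removable variables.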

Secondly, they argue against outsourcing the “diversity work” of recruitment to AI-powered hiring tools, because it might unintentionally entrench cultures of inequality and discrimination, by failing to address the systemic problems within organizations. That appears to have been the case in several leading technology-based ventures, including Amazon, so the researchers are not simply pulling arguments out of a clear blue sky on this.

Thirdly, they claim that AI hiring tools’ supposedly “neutral” assessment of candidates’ traits belies the power relationship between the observer and the observed.

And finally, recruitment AI tools skew towards producing an “ideal candidate” – but that ideal is itself constructed from associations between words (on a resume or in a video) and the bodies of the candidates (in pictures or video).

An incomplete picture

Researchers Dr Eleanor Drage and Dr Kerry Mackereth draw attention to a 2020 Gartner, Inc. poll of 334 HR leaders, which found that no less than 86% were implementing AI in their recruitment process. The newer trend for video-based AI recruitment sifting, in particular, has since seen a meteoric rise.

Dr Mackereth explained the flaw in the logic of letting such tools dictate your ideal candidates.

“These tools can’t be trained to only identify job-related characteristics and strip out gender and race from the hiring process, because the kinds of attributes we think are essential for being a good employee are inherently bound up with gender and race,” she said.

In particular, the researchers said there were significant issues with the video recruitment tools that promised to “analyse the minutiae of a candidate’s speech and bodily movements” to see how closely they resembled a company’s supposed ideal employee.

This, Dr Drage said, was little more than technological phrenology, and had no scientific basis. Certainly, the analysis of movement patterns to identify criminal intent has been controversial – and frequently plain wrong – in security systems at airports and high-profile venues, so it should come as no surprise that it’s by no means the best way to find your next star employee, either.

Seeing the ‘Inner You’

“They say that they can know your personality from looking at your face,” explained Dr Drage. “The idea is that, like a lie-detector test, AI can see ‘through’ your face to the real you.”

Testing their theory about the flaws in AI recruitment tools, the researchers built their own – admittedly simplified – AI recruitment tool, and had it rate candidates’ photographs on the “Big Five” personality criteria:

  • agreeableness
  • extroversion
  • openness
  • conscientiousness
  • neuroticism

But the results it produced could be skewed by any number of presentation variables, such as contrast, brightness, and color saturation. Just as social media filters can alter how an image is received, so simple factors like these can alter an AI’s ability to rate candidates equally on these core criteria.
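As a toy illustration of that failure mode – an assumed setup for illustration, not the Cambridge team’s actual tool – here is a “personality” scorer that has latched onto photographic statistics rather than anything about the person. Changing only the brightness or contrast of the same photo changes its score:

```python
# Toy illustration (assumed setup): a scorer sensitive only to photographic
# statistics, so brightness/contrast edits change its "personality" rating.
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a candidate photo: a 64x64 grayscale image, values in [0, 1].
photo = rng.uniform(0.2, 0.8, size=(64, 64))

def trait_score(img: np.ndarray) -> float:
    """Logistic score over mean intensity (brightness) and pixel standard
    deviation (contrast) - i.e. photographic statistics, not the person."""
    z = 3.0 * (img.mean() - 0.5) + 2.0 * (img.std() - 0.15)
    return 1.0 / (1.0 + np.exp(-z))

def adjust(img: np.ndarray, brightness: float = 0.0,
           contrast: float = 1.0) -> np.ndarray:
    """Apply simple brightness/contrast edits, clipped back to [0, 1]."""
    return np.clip((img - 0.5) * contrast + 0.5 + brightness, 0.0, 1.0)

print(f"original photo:    {trait_score(photo):.3f}")
print(f"brighter (+0.15):  {trait_score(adjust(photo, brightness=0.15)):.3f}")
print(f"contrast (x1.5):   {trait_score(adjust(photo, contrast=1.5)):.3f}")
```

The same “candidate,” under different lighting, receives a different score – exactly the kind of superficial skew the researchers observed in their simplified tool’s Big Five ratings.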

Believable bunk?

So is all AI in recruitment simply believable bunk?

Potentially not. Many AI systems are helping companies fill high-volume vacancies in a tense recruitment climate, where a single vacancy can attract 250 or more applicants. Some such tools, like TaTiO, are innovatively designed to use job assessment simulators and performance analysis without any need to consult a resume or video.

But it’s certainly true that the Cambridge work builds on previous cautionary tales, both in academia and in hard business. In 2019, Harvard Business Review published a study showing AI recruitment tools selecting against Black candidates. Meanwhile, in 2018, Amazon was forced to abandon an AI recruitment tool after discovering not only that it was relentlessly rejecting women from top jobs – because it was trained on a data model based on previous top hires, all of whom were men – but that a team of experts sent in to correct the problem had failed to do so.

The persuasive power of techno-phrenology

For now, the Cambridge study is compelling, but potentially not a death-blow for AI in recruitment – the simplified tool the researchers created may prove less sophisticated than some of the more advanced commercial versions. But it certainly argues for choosing your AI recruitment tool with – at the very least – significantly more care than may previously have been the case.

It’s also worth noting that businesses with a lot of vacancies to fill may value the speed and cost benefits of using a single AI tool for early sifting more than they necessarily value the absolute fairness of the process. While such priorities remain within the law, and while our society remains as flawed and hierarchical as it is, the likelihood is that a little techno-phrenology will be seen to beat scientific fact in the hiring process.