Why you shouldn’t let AI choose your next candidate

As Unilever brings AI and facial recognition into the recruitment process, will it add more bias than it removes?
4 October 2019

Can AI present us with ‘perfect’ candidates? Perhaps not. Source: Shutterstock

As the experience economy gathers pace, diversity of thought and authenticity are now playing a crucial role in improving a business’s bottom line. Ensuring that your workforce reflects the audience it serves is a huge step forward in the name of progress. But can tech help us shake our bad habits?

Despite our best attempts to create a more productive work environment where innovative ideas can flourish, some human traits have been holding us back. The financial cost of a bad hire, for example, is estimated at more than US$18,700. Yet when faced with a difficult decision, human nature leads us to put our faith in the candidate who looks like us and has a similar background.

Mirror-image recruiting unwittingly brings bias into the hiring process, which can ironically stifle the very creativity and innovation a business is chasing. How many times have you sat in a large meeting room where everyone agrees with the strongest personality in the room? Your community of customers or clients deserves more than this one-dimensional thinking.

AI recruitment

So, what is the answer? Recognizing the problem is the easy bit. As we enter solution mode, many businesses are turning to artificial intelligence (AI) and facial recognition to bring the recruitment industry into the 21st century. Unilever is one of the first to embrace AI recruitment technology that analyzes each candidate's facial expressions and language as they answer the same set of questions.

Could AI and algorithms decide the future of your career? Interview situations can be stressful enough. But try to imagine a machine examining your facial expressions, use of language, and tone of voice during every answer. US company HireVue developed the interview technology, claiming it could finally remove human bias and provide a more objective way of choosing the right candidate.

Sure, humans have made more than a few disastrous recruitment decisions over the years. Give the machines a chance; they couldn’t do any worse, right? The problem is AI isn’t some magical solution that will right all our wrongs by merely pushing a button. It’s just math.

Remember the old phrase garbage in, garbage out? Sophisticated algorithms can only make sense of the datasets they are given. A business that trains on narrow or unrepresentative data will inevitably inherit the biases we unwittingly feed it. Without understanding the limitations of this technology, businesses also run the risk of discriminating against talented applicants who do not conform to a norm that is itself dictated by the human biases baked into the training data.
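The point can be made concrete with a toy sketch. Everything below is invented for illustration and has nothing to do with HireVue's actual models: a naive scoring model trained on biased historical hiring decisions simply learns and replays that bias, rejecting a highly skilled candidate because similar people were rejected before.

```python
# Toy illustration of "garbage in, garbage out" in hiring data.
# All data is fabricated; "speech" stands in for a superficial trait
# past interviewers favored, "skill" for what actually predicts performance.

history = [
    {"speech": "fast", "skill": 4, "hired": True},
    {"speech": "fast", "skill": 2, "hired": True},
    {"speech": "fast", "skill": 3, "hired": True},
    {"speech": "slow", "skill": 5, "hired": False},
    {"speech": "slow", "skill": 4, "hired": False},
    {"speech": "slow", "skill": 2, "hired": False},
]

def hire_rate(feature, value):
    """Fraction of past candidates with this trait who were hired."""
    matching = [c for c in history if c[feature] == value]
    return sum(c["hired"] for c in matching) / len(matching)

def score(candidate):
    """Naive model: score by how often similar past candidates were hired."""
    return hire_rate("speech", candidate["speech"])

strong_but_slow = {"speech": "slow", "skill": 5}
weak_but_fast = {"speech": "fast", "skill": 1}

print(score(strong_but_slow))  # 0.0 -- rejected despite the top skill score
print(score(weak_but_fast))    # 1.0 -- accepted despite the lowest skill score
```

The model never sees the `skill` field at all; because the historical labels correlate perfectly with speech pace, it has no reason to look further, which is exactly how a biased dataset becomes a biased decision.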

We are living in a digital world where businesses must embrace diversity of thought to create experiences for diverse audiences. Every person reading this will have different interests and motivations. We will perform better at some tasks and worse at others.

Neurodiversity highlights the many different ways the brain can work and interpret information. Using an algorithm to filter out candidates whose speech is slower, whose tone is deemed undesirable, or whose facial expressions differ from the norm sets a dangerous precedent in the recruitment process.

Those who think, learn, and process information differently should play a critical role in every organization. Neurodivergence includes conditions such as autism and dyslexia. Sir Richard Branson once wrote in a blog post: “Once freed from archaic schooling practices and preconceptions, my mind opened up. Out in the real world, my dyslexia became an advantage: it helped me see solutions where others saw problems.”

Game-changing or dangerous?

Peter Thiel famously said that Asperger’s can be a significant advantage in Silicon Valley. There is also increasing evidence that autistic employees can give companies an edge in innovative thinking. Solving the bias and diversity problem in the workplace requires leaders and tech companies to think much bigger.

Using facial recognition to scan people’s language, tone, and facial expressions, then making decisions based on limited datasets, is more likely to eliminate the best candidates from the recruitment process. We are all unique. Introverts and extroverts alike bring different skills to a team. What is exciting is using technology to bring people together, not foolishly searching for a one-size-fits-all candidate.

For the most part, businesses have the best intentions when attempting to leverage emerging technology such as AI to solve problems and create opportunities. However, in some cases, we are behaving like small children with a very powerful and dangerous toy, blissfully unaware of the dangers and responsibility that playing with datasets brings.

Fairness, diversity, privacy, and security should all be at the heart of our efforts. Feeding biases into datasets could run the risk of magnifying the problem rather than providing the solution that we all crave. I believe we will eventually get to the promised land. Just don’t be fooled into thinking it will be a quick journey when reading the 2020 tech trend lists waiting on the horizon.

As we move forward, businesses would be wise to steer clear of algorithms and limited datasets that could reject the next Bill Gates or Sir Richard Branson based on their language, tone, or facial expression.