Computer says no – biased AI programs used in recruitment by major companies

Artificial intelligence (AI) is increasingly used in recruitment processes, with algorithms scanning social media to find potential candidates and analysing the voice and facial expressions of interviewees. This development is claimed to make recruitment faster and cheaper as well as to reduce human bias in candidate selection. However, algorithms have been found to be systematically biased against minority ethnic groups and other underrepresented groups such as queer or disabled people.

The market for AI in recruitment and human resources has been growing steadily, and many companies, such as Amazon and Ikea, actively use these systems to select their employees (Holmes, 2019). AI can be used in several different ways during the recruitment process, from high-tech video interviews in which facial expressions are analysed to algorithms that work through text-based answers to pre-interview questions (Pozniak, 2020). While it is well known that human recruiters can be biased against candidates from minority groups, AI initially seems like a simple solution to this problem. After all, how can a computer be biased? However, cases have been made public in which algorithms were biased against people of colour, non-native English speakers and women.

There are multiple reasons for this, two of which are the design of the program itself and the data it is initially fed to learn its task (Yam & Skorburg, 2021). AI programs are designed by humans, which means that programmers might consciously or unconsciously build their own biases into the algorithm. In recruitment, decisions such as what characterises a good candidate could be biased, meaning that the AI will learn to select candidates in a discriminatory way. Secondly, the data used to train these systems is too often based on white faces or on speakers without regional accents (Balayn, Lofi & Houben, 2021). This leads to issues such as the system failing to recognise the facial expressions of a person from a minority ethnic background, or automatically ruling out applicants who speak with unfamiliar accents. Another issue with the type of data used is that it can be based on the employment history of successful companies, meaning that the algorithm looks for people who resemble previous employees. However, many companies have a history of employing predominantly white men, especially for higher-paying positions. The AI then treats this as the status quo and filters out candidates who do not fit the image.
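To make the training-data problem concrete, the sketch below trains a simple screening model on an invented hiring history in which one group was systematically favoured, and then scores two equally qualified candidates. Everything here is an assumption for illustration: the data is synthetic, the "skill" and "group" features are hypothetical, and scikit-learn is simply used as a convenient classifier; this is not how any particular vendor's system works.

```python
# Minimal sketch: a model trained on biased hiring history reproduces that bias.
# All data is synthetic and the feature names are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic "past employees": a qualification score plus a group label
# (0 = majority group, 1 = minority group). Skill is distributed identically.
group = rng.integers(0, 2, n)
skill = rng.normal(0.0, 1.0, n)

# Historical hiring decisions: skill matters, but the majority group was
# systematically favoured -- the bias this example is meant to expose.
hired = (skill + 1.0 * (group == 0) + rng.normal(0, 0.5, n)) > 1.0

# Train a screening model on the biased history. The group label is included
# directly here for clarity; in practice proxies (postcode, school, word
# choice) can encode it even when it is removed.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Score two new candidates with identical skill but different group labels.
candidates = np.column_stack([np.zeros(2), np.array([0, 1])])
probs = model.predict_proba(candidates)[:, 1]
print(f"Predicted 'hire' probability, majority group: {probs[0]:.2f}")
print(f"Predicted 'hire' probability, minority group: {probs[1]:.2f}")
```

Run as written, the model assigns a noticeably lower "hire" probability to the minority-group candidate even though both candidates have exactly the same qualification score, because the only pattern it could learn from is the skewed history it was given.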

One of the biggest producers of recruitment AI, Modern Hire, has addressed these issues by ensuring that its algorithm is “ethical” and does not use any type of face or voice recognition (Caprino, 2021); such analysis is deemed unfair and discriminatory even within the field of recruitment AI development. Instead, Modern Hire has created an AI program that evaluates candidates based on personality tests, written responses to interview questions and other techniques. The company advertises its product as “proven to be over three times less biased than human interview scorers” (Modern Hire, 2021). Although this removes obvious biases such as judging someone by their name or the colour of their skin, other problems remain. Depending on how the AI is programmed, it may not be familiar with certain accents or may rule someone out for making a grammatical error. Furthermore, several new issues emerge in this type of recruitment. For instance, it is unfair to all candidates that they have no information about how the AI is judging them or what it “wants” them to say. In a traditional interview, the interviewee can pick up cues from the interviewer's body language, what they say or whether they seem interested, and can adjust their behaviour or shift to a different topic accordingly. This is not an option in AI-based recruitment. Additionally, companies like Modern Hire have yet to give a concrete explanation of what makes their AI “ethical”.
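The sketch below shows how even a purely text-based screen can disadvantage candidates who phrase things differently, for example non-native English speakers. It is a deliberately naive, hypothetical scorer that ranks answers by word overlap with a single "ideal" response; it is not Modern Hire's method, and the ideal answer and candidate responses are invented for illustration.

```python
# Hypothetical, simplified text scorer: responses are ranked by how many words
# they share with one "ideal" answer. Ideal answer and responses are invented.
def overlap_score(response: str, ideal: str) -> float:
    """Fraction of the ideal answer's words that also appear in the response."""
    ideal_words = set(ideal.lower().split())
    response_words = set(response.lower().split())
    return len(ideal_words & response_words) / len(ideal_words)

ideal = "i collaborate closely with stakeholders and prioritise tasks under pressure"

# Two candidates describing the same competence in different words.
textbook_phrasing = "I collaborate closely with stakeholders and prioritise tasks under pressure"
different_phrasing = "I work together with the people involved and decide what is most urgent when it is busy"

print(overlap_score(textbook_phrasing, ideal))   # close to 1.0
print(overlap_score(different_phrasing, ideal))  # much lower for the same content
```

The second candidate says essentially the same thing but is scored far lower simply for not using the expected vocabulary, which illustrates how "no faces, no voices" does not by itself make a text-based system fair.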

While it is evident that something must change in recruitment after decades of systemic discrimination, AI as it is currently used is not the answer that will erase biases and lead to diverse workforces. On the surface it may look as though human biases are removed if an algorithm judges “fairly” based on what someone says rather than where they come from or what they look like. But the algorithms are still human-made and can be built in an unfair way or trained on one-sided data. Experts in the field call for testing of recruitment AI that is as rigorous as the pharmaceutical testing of a new drug (McDonald, 2019). This would mean that a program must pass extensive tests with thousands of participants from diverse backgrounds before it can be used in recruitment. At the moment this is not the case, and recruitment AI is developed and used with little to no regulation. More research and testing are needed, as well as agreement on the standards that must be met for AI to count as “ethical” and “fair”.
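One concrete form such testing could take is a routine audit of a system's selection rates across demographic groups. The sketch below is a hypothetical example of that kind of check, using the "four-fifths" threshold from US employment guidance as the flagging criterion; the group labels and pass counts are invented, and a real audit would of course involve far more than this single ratio.

```python
# Hypothetical audit step: compare selection rates across groups and flag the
# system if the ratio of the lowest to the highest rate falls below 0.8
# (the "four-fifths rule"). Group labels and counts below are invented.
from collections import Counter

def selection_rates(decisions):
    """decisions: iterable of (group, was_selected) pairs -> {group: selection rate}."""
    totals, selected = Counter(), Counter()
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest group selection rate to the highest."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Example: 1000 screened candidates per group with different pass rates.
decisions = [("group_a", i < 300) for i in range(1000)] + \
            [("group_b", i < 180) for i in range(1000)]

ratio = disparate_impact_ratio(decisions)
print(f"Disparate impact ratio: {ratio:.2f}")
print("Fails four-fifths check" if ratio < 0.8 else "Passes four-fifths check")
```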

Balayn, A., Lofi, C., & Houben, G. J. (2021). Managing bias and unfairness in data for decision support: a survey of machine learning and data engineering approaches to identify and mitigate bias and unfairness within data management and analytics systems. The VLDB Journal, 1-30.

Caprino, K. (2021). How AI Can Remove Bias From The Hiring Process And Promote Diversity And Inclusion. Forbes.

Holmes, A. (2019). AI could be the key to ending discrimination in hiring, but experts warn it can be just as biased as humans. Insider.

McDonald, H. (2019). AI expert calls for end to UK use of ‘racially biased’ algorithms. The Guardian.

Pozniak, H. (2020). The bias battle: how software can outsmart recruitment prejudices. The Guardian.

Yam, J., & Skorburg, J. A. (2021). From human resources to human rights: Impact assessments for hiring algorithms. Ethics and Information Technology, 1-13.