Oversights in Hiring

Artificial Intelligence (AI) has proven to be a game-changer in the recruitment process. AI-powered tools promise efficiency, objectivity, and the ability to handle large volumes of applications swiftly. However, this advancement brings its own set of challenges, one of the most pressing being algorithmic bias. As organizations increasingly rely on AI for recruitment, it's crucial to understand how these biases manifest and what they mean for the job market.

What is Algorithmic Bias?

“What starts as a human bias turns into an algorithmic bias,” states Gartner. Algorithmic bias occurs when an AI system reflects and perpetuates the biases present in the data it was trained on or in the way it was programmed. The individuals who design AI systems can inadvertently encode their own unconscious biases into the algorithms, in part because the teams building these systems often lack diversity. According to a report by McKinsey, only 15% of the tech workforce is Black or Hispanic, and women hold just 25% of tech roles. Biases can thus stem from assumptions made during the development process or from the absence of diverse perspectives on the development team.

AI systems are trained on historical data, which often includes past hiring decisions. If these decisions were biased against certain groups, the AI system would learn and replicate these biases. For instance, if a company historically hired more men than women, an AI tool might favor male candidates, assuming that gender correlates with job performance.
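
To make this concrete, here is a minimal, hypothetical sketch (synthetic data and invented feature names, not any real hiring system) showing how a model trained on biased historical decisions reproduces that bias:

```python
# A minimal, hypothetical sketch: train a model on synthetic "historical"
# hiring data in which men were favored, then query it for two equally
# skilled candidates. All names and numbers here are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
gender = rng.integers(0, 2, n)          # 1 = male, 0 = female
skill = rng.normal(0, 1, n)             # job-relevant qualification
# Past decisions rewarded skill, but also (unfairly) gender:
hired = (skill + 1.5 * gender + rng.normal(0, 0.5, n) > 1.0).astype(int)

model = LogisticRegression().fit(np.column_stack([gender, skill]), hired)

# Two candidates identical in skill, differing only in gender:
p_male = model.predict_proba([[1, 0.0]])[0, 1]
p_female = model.predict_proba([[0, 0.0]])[0, 1]
print(f"P(hire | male):   {p_male:.2f}")   # the model reproduces
print(f"P(hire | female): {p_female:.2f}") # the historical gender gap
```

The model is never told to prefer men; it simply learns that gender predicted past hiring outcomes, which is exactly the failure mode described above.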

Facial recognition technologies used in interview assessments have also been found to exhibit significant algorithmic bias, particularly against individuals with darker skin tones. A study by the National Institute of Standards and Technology (NIST) found that facial recognition systems have higher error rates for people with darker skin: false positive rates were significantly higher for Asian and African American faces than for Caucasian faces, with some systems up to 100 times more likely to misidentify these individuals.

Gartner describes four types of algorithmic bias: 

  1. Amplified Bias: systemic or unintentional bias in the data used to train machine learning algorithms. 

  2. Algorithm Opacity: black-box decision-making, whether intrinsic or intentional, raises concerns about the integrity of the decisions being made. 

  3. Dehumanized Processes: views on replacing human judgment with ML and AI are highly polarized, especially when these systems make critical, life-changing decisions. 

  4. Decision Accountability: organizations using data science often lack sufficient reporting and accountability around their strategies to mitigate bias and discrimination. 

What Can Be Done?

Addressing algorithmic bias in the recruitment process is critical for fostering a fair and inclusive hiring environment. Companies can take several steps to mitigate these biases effectively.

1. Diversifying Training Data

First, it is essential to diversify the training data used for AI models. AI systems learn from the data they are fed, so a dataset that represents various demographic groups can help reduce bias. This means ensuring the data includes a balanced representation of different races, genders, and backgrounds so the AI does not learn and perpetuate existing biases. One tool that can help is IBM Watson OpenScale, which monitors AI models in production and measures their outcomes for fairness, helping teams detect and correct bias.
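
As a simple illustration, a sketch like the following (with a hypothetical dataset and column names) can surface demographic skew in training data before a model is ever trained:

```python
# A minimal sketch (hypothetical dataset and column names) that surfaces
# demographic skew in a training set before any model is trained.
import pandas as pd

def representation_report(df: pd.DataFrame, column: str) -> pd.Series:
    """Share of training examples in each demographic group."""
    return df[column].value_counts(normalize=True)

train = pd.DataFrame({
    "gender": ["M", "M", "M", "F", "M", "F", "M", "M"],
    "hired":  [1, 0, 1, 0, 1, 0, 1, 1],
})
print(representation_report(train, "gender"))
# A heavy skew (here 75% male) is a cue to rebalance the data --
# e.g., by resampling or collecting more examples -- before training.
```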

2. Regular Audits of AI systems

Regular audits of AI systems are essential for identifying and mitigating bias. These audits should focus on examining the system's outputs for bias and making the necessary adjustments. The process can also involve third-party evaluators who provide an unbiased assessment of the AI's performance and fairness. One example of such an evaluator is AI Ethics Lab, an organization that specializes in evaluating AI systems for ethical concerns and offers actionable recommendations to mitigate any biases it finds.
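
One common statistical check in such an audit is the EEOC's "four-fifths rule": if any group's selection rate falls below 80% of the highest group's rate, the system is flagged for possible adverse impact. A minimal sketch, assuming a hypothetical log of screening decisions:

```python
# A minimal audit sketch over a hypothetical log of screening decisions.
# It applies the EEOC "four-fifths rule": a group whose selection rate
# is below 80% of the highest group's rate is flagged for review.
import pandas as pd

def adverse_impact_ratios(decisions: pd.DataFrame) -> pd.Series:
    rates = decisions.groupby("group")["selected"].mean()
    return rates / rates.max()

log = pd.DataFrame({
    "group":    ["A"] * 10 + ["B"] * 10,
    "selected": [1, 1, 1, 1, 1, 1, 1, 0, 0, 0,   # group A: 70% selected
                 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],  # group B: 30% selected
})
ratios = adverse_impact_ratios(log)
print(ratios)                                    # A: 1.00, B: ~0.43
print("Flagged:", list(ratios[ratios < 0.8].index))  # ['B']
```

The four-fifths rule is a screening heuristic rather than a legal bright line, so a flagged result is a prompt for deeper investigation, not a verdict on its own.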

3. Human Oversight

While AI can help streamline the recruitment process, it should not replace human judgment entirely. Human recruiters should review the AI's decisions, particularly in borderline cases, to ensure that qualified candidates are not unfairly screened out due to algorithmic bias. This dual approach combines the efficiency of AI with the nuance of human judgment.
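
In practice, this can be as simple as routing borderline model scores to a recruiter instead of auto-rejecting. A minimal sketch, with hypothetical thresholds that would need tuning and regular auditing:

```python
# A minimal human-in-the-loop sketch. Thresholds are hypothetical and
# would need tuning and regular auditing against real outcomes.
def triage(score: float, high: float = 0.8, low: float = 0.3) -> str:
    """Route a candidate based on a model score in [0, 1]."""
    if score >= high:
        return "advance"             # clear pass; still logged for audit
    if score < low:
        return "reject_with_review"  # spot-check a sample of rejections
    return "human_review"            # borderline: a recruiter decides

print(triage(0.55))  # -> "human_review"
```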

AI has the potential to transform the hiring process, making it more efficient and manageable for talent acquisition teams. However, without careful consideration of algorithmic bias, it can exacerbate existing inequalities. By acknowledging and addressing these biases early and often, organizations can use the power of AI while promoting fairness and diversity in their hiring practices. The journey toward unbiased AI in hiring is complex, but it is a crucial step toward creating a more equitable job market.
