AI is now being used to screen job applications, and nearly 99% of Fortune 500 companies use some form of AI in their hiring processes. This improves efficiency, and some argue it also reduces discrimination. But according to new research from the University of Washington, using AI in the hiring process can instead increase discrimination based on race and gender. After having LLMs review 550 real-world resumes, the researchers found that the models favored names commonly associated with white people 85% of the time, while favoring names associated with women only 11% of the time. The models also never preferred names associated with Black men over names associated with white men.
A previous study of ChatGPT found that the AI-powered bot shows discrimination based on gender, race, and disability. The researchers behind this new study wanted to extend that analysis to open-source LLMs and how they perceive race and gender. For the study, they gathered 120 names associated with white and Black men and women. They then tested LLMs from three different companies: Salesforce, Mistral AI, and Contextual AI. The resumes were matched against different jobs, such as engineer, resource worker, and teacher.
According to the results, the LLMs chose white-associated names 85% of the time, while Black-associated names were chosen only 9% of the time. Similarly, male-associated names were chosen 52% of the time, while female-associated names were chosen 11% of the time. There were also disparities between white women and white men, and between Black women and Black men. The systems showed only a small difference between white female and white male names, but they preferred Black female names 67% of the time compared with 15% for Black male names, indicating a greater bias against Black men than against Black women.
The researchers noted that as generative AI becomes widely available and is increasingly used for job listings and hiring, developers should work on AI models that show as little bias and discrimination as possible. Even though AI can be helpful in many situations, it can also negatively shape how we perceive race and gender.
Image: DIW-Aigen
Read next: Researchers Say that Robots Can Develop a Sense of Self Which Can Help in Studying Disorders