ChatGPT Is Biased Against Disabled People

The emergence of AI has changed how we perceive and work in the world. One of its biggest implications is augmenting human capability: performing the tasks humans already do, but at higher efficiency. Thanks to language models, neural networks, and machine learning, machines can now reason over information much as humans do, with far greater retention, making AI, more or less, a mind of its own.

Many business owners and institutions want to integrate AI into their organizations to improve output. The argument made by AI proponents is that if these systems can think and perform better than the average person, and sometimes even better than an expert in a given field, then why shouldn’t they be deployed? As a result, AI is now used in law and governance, financial systems, education, and even healthcare.

But there is a dire problem that most people overlook, partly because it exists within every human and is routinely dismissed as unimportant: the problem of bias. Like human beings, AI systems are biased too, because these models are trained largely on human input, and that input, i.e., people’s opinions and perceptions, can be prejudiced toward someone or something. Once these biases enter an AI model in the early phases of its training, they are carried forward through the later stages of its advancement, becoming stronger and more visible.

Over the last few months, this issue has been thrust into the mainstream by Kate Glazko, a graduate student at the University of Washington who, while seeking an internship, noticed that recruiters were using OpenAI’s ChatGPT and several other AI tools to sort and rank candidates’ resumes. Although the use of AI in applicant screening is now common, Glazko, a PhD student at the UW’s Paul G. Allen School of Computer Science & Engineering, found something odd: the AI systems used to screen applicants for jobs and internships were biased against those with disabilities.

To follow up on this finding, UW researchers ran a study on how ChatGPT ranks the resumes of disabled people. They found that GPT ranked resumes carrying disability-related honors, for example the “Tom Wilson Disability Leadership Award,” lower than an otherwise identical resume without those credentials.

When the researchers asked the system to explain how it ranked disabled candidates relative to non-disabled ones, it echoed a bias often found among human recruiters too: that autistic people aren’t good leaders, and that an autism leadership award therefore carries little weight on the subject of leadership.

However, the researchers also found that instructing the AI in writing not to be biased against anyone reduced the overall bias. Five of the six implied disabilities, deafness, blindness, cerebral palsy, autism, and the general term “disability,” improved in the rankings, but only three ranked higher than resumes that omitted disability altogether.
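The article describes these written instructions only loosely (the researchers reportedly embedded them through ChatGPT’s GPTs Editor). As a minimal sketch of the same idea in API terms, anti-bias guidance can be supplied through a system message; the instruction wording and model name below are illustrative assumptions, not the study’s actual text:

```python
# Minimal sketch: supplying written anti-bias instructions via a system message.
# The instruction wording here is hypothetical, not the study's text.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FAIRNESS_INSTRUCTIONS = (
    "When comparing candidates, do not exhibit ableist bias. "
    "Disability-related awards, scholarships, panel seats, and advocacy work "
    "are evidence of skill and leadership, not grounds to rank a resume lower."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": FAIRNESS_INSTRUCTIONS},
        # Placeholder user prompt; the study compared full resumes here.
        {"role": "user", "content": "Rank the following two resumes for this job listing: ..."},
    ],
)
print(response.choices[0].message.content)
```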

The researchers later presented these findings on June 5 at the ACM Conference on Fairness, Accountability, and Transparency (FAccT), held in Rio de Janeiro, Brazil.

According to Glazko, the study’s lead author, AI resume ranking is being adopted at an increasing rate, yet the practice still lacks solid research into whether it is valid and safe. Glazko added that disabled job seekers always face the question of whether to list disability credentials when submitting a resume, even when the recruiters are human.

To probe the AI systems’ bias against disabled people further, the researchers started from a publicly available CV (curriculum vitae). From it, the team created six modified CVs, each implying a different disability through related credentials: a scholarship, an award, a seat on a diversity, equity, and inclusion (DEI) panel, and membership in a student organization.

After making these CVs, the researchers used the GPT-4 model to rank each one against the real version for an actual researcher job listing at a software company. Across 60 trials, GPT-4 ranked the CVs that implied a disability first only a quarter of the time.
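The article does not reproduce the team’s ranking harness, but an experiment of this shape can be sketched with the OpenAI Python client. Everything below, the prompt wording, the placeholder texts, and the helper rank_pair, is a hypothetical illustration of a pairwise ranking trial, not the researchers’ actual code:

```python
# Hypothetical harness for a pairwise resume-ranking trial; an illustrative
# sketch, not the UW team's actual code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

JOB_LISTING = "..."   # placeholder: text of the job listing
CV_ORIGINAL = "..."   # placeholder: the unmodified, publicly available CV
CV_ENHANCED = "..."   # placeholder: same CV plus disability-related credentials

def rank_pair(cv_a: str, cv_b: str) -> str:
    """Ask the model which of two CVs better fits the job; expects 'A' or 'B'."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are screening resumes for a recruiter."},
            {
                "role": "user",
                "content": (
                    f"Job listing:\n{JOB_LISTING}\n\n"
                    f"Candidate A:\n{cv_a}\n\nCandidate B:\n{cv_b}\n\n"
                    "Which candidate is the stronger fit? Answer with exactly 'A' or 'B'."
                ),
            },
        ],
    )
    return response.choices[0].message.content.strip()

# Repeat the trial (the study ran 60) and count how often the
# disability-implying CV is ranked first.
wins = sum(rank_pair(CV_ENHANCED, CV_ORIGINAL) == "A" for _ in range(60))
print(f"Enhanced CV ranked first in {wins}/60 trials")
```

A fuller harness would also randomize which CV appears as Candidate A, to control for any position bias in the model’s answers; the sketch omits that for brevity.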

Once again, when the researchers asked the system to explain its rankings, its answers showed bias, claiming, for instance, that a candidate with depression has an additional focus on DEI and personal challenges, which distract from the research and technical parts of the role.

In fact, the AI’s bias ran so deep that it would discount an entire resume on the basis of a disability, framing it as a DEI matter. For example, it would project the notion of depression-related “challenges” into a resume comparison even when no such challenges were mentioned anywhere in the text, as if it were stereotyping the ideas together.


Image: DIW-Aigen
