The prevalence of racial stereotypes in the responses provided by AI chatbots such as ChatGPT and the like has been a problem for quite some time now. Fixing the issue is a paramount concern because of the fact that this is the sort of thing that could potentially end up derailing the growth of the industry. Some anti-racism training has been incorporated by the companies creating these LLM chatbots, but in spite of the fact that this is the case, they continue to use racial stereotypes based on a recent report.
This reveal comes from a collaboration between researchers at Stanford University, the University of Chicago, as well as the Allen Institute for AI. Their findings indicate that anti-racism training for LLMs is not making them use fewer stereotypes than might have been the case otherwise.
The way that the research worked was that the researchers submitted text documents using AAVE, or African American Vernacular English, and asked the LLM to provide comments regarding the authors. The same test was then conducted with text written in Standard American English, and the replies were then tallied against each other to find any differences between them.
It turned out that the responses received by the text containing AAVE universally contained harmful stereotypes about Black Americans. GPT-4 went so far as to say that the author of the AAVE text would be worthy of suspicion, along with alleging that they would be aggressive and uncouth. These stereotypes are the same ones that human beings tend to have regarding Black Americans, which just goes to show that technology is only as progressive as humanity can have the capacity to be.
The interesting thing is that when the chatbots were asked to speak about African Americans in general, they only had positive things to say. This seems to suggest that the racial bias is implicit as opposed to explicit, and it will be interesting to see if any further studies confirm what is being said in the here and now. As it currently stands, LLMs have a long way to go before they can be considered completely unbiased.
Image: DIW-Aigen
Read next: 10 Skills New Entrepreneurs Need to Make Their Business a Success
This reveal comes from a collaboration between researchers at Stanford University, the University of Chicago, as well as the Allen Institute for AI. Their findings indicate that anti-racism training for LLMs is not making them use fewer stereotypes than might have been the case otherwise.
The way that the research worked was that the researchers submitted text documents using AAVE, or African American Vernacular English, and asked the LLM to provide comments regarding the authors. The same test was then conducted with text written in Standard American English, and the replies were then tallied against each other to find any differences between them.
It turned out that the responses received by the text containing AAVE universally contained harmful stereotypes about Black Americans. GPT-4 went so far as to say that the author of the AAVE text would be worthy of suspicion, along with alleging that they would be aggressive and uncouth. These stereotypes are the same ones that human beings tend to have regarding Black Americans, which just goes to show that technology is only as progressive as humanity can have the capacity to be.
The interesting thing is that when the chatbots were asked to speak about African Americans in general, they only had positive things to say. This seems to suggest that the racial bias is implicit as opposed to explicit, and it will be interesting to see if any further studies confirm what is being said in the here and now. As it currently stands, LLMs have a long way to go before they can be considered completely unbiased.
Image: DIW-Aigen
Read next: 10 Skills New Entrepreneurs Need to Make Their Business a Success