A whole array of AI tools has been thrust into the spotlight for producing sexist and racist content.
The news follows a warning from the UN against using such tools, which are powered by some of the biggest names in AI. In case you haven't guessed by now, it's OpenAI and Meta we're referring to here.
According to a study released last Thursday by UNESCO, the programs display significant prejudice against women.
Big tech giants are investing billions of dollars to train these algorithms on vast amounts of data pulled from the web, enabling tools that can mimic the style of Oscar Wilde or Salvador Dalí.
That output is frequently criticized for reproducing stereotypes that are not only racist but sexist as well, and much of the underlying material is copyrighted and gathered without any kind of permission.
For the study, researchers experimented with Meta's Llama 2 and OpenAI's models, including the technology behind ChatGPT. The results showed that every large language model examined displayed evidence of prejudice.
The research found that the models were consistently unfair to women: all of them produced content that associated female names with terms like home, family, and kids, while male names were more closely linked with terms like salary, business, and career.
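For readers curious about what this kind of probe looks like in practice, here is a minimal, hypothetical sketch of how gendered word associations can be measured with an open model using the Hugging Face transformers library. This is not the study's actual methodology; the names, prompts, and keyword lists below are illustrative assumptions only.

```python
from collections import Counter
from transformers import pipeline

# Any open causal language model will do; GPT-2 is used here only because it
# is small and freely downloadable.
generator = pipeline("text-generation", model="gpt2")

# Hypothetical prompts that differ only in the gendered name.
PROMPTS = {
    "female name": "Maria is best known for her",
    "male name": "James is best known for his",
}

# Illustrative keyword buckets for counting stereotyped associations.
HOME_TERMS = {"home", "family", "children", "kids", "cooking"}
CAREER_TERMS = {"career", "business", "salary", "work", "company"}

for label, prompt in PROMPTS.items():
    # Sample several completions per prompt so counts are less noisy.
    completions = generator(
        prompt,
        max_new_tokens=30,
        num_return_sequences=20,
        do_sample=True,
        pad_token_id=50256,  # GPT-2's end-of-text token, avoids a padding warning
    )
    # Tally every word across all completions for this prompt.
    words = Counter(
        token.strip(".,!?").lower()
        for completion in completions
        for token in completion["generated_text"].split()
    )
    home_hits = sum(words[w] for w in HOME_TERMS)
    career_hits = sum(words[w] for w in CAREER_TERMS)
    print(f"{label}: home/family terms={home_hits}, career terms={career_hits}")
```

A larger-scale version of this idea, run across many names, prompts, and models, is how gaps like the ones UNESCO describes tend to surface.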
Interestingly, ChatGPT produced less biased material than the other models in question. Even so, the authors praised Llama 2 and GPT-2 for their open-source nature, which gives such issues a chance to be scrutinized, unlike the more closed GPT-3.5 variant.
Many AI firms are not giving users what they need, one UNESCO expert told the news agency AFP. The organization's director-general also highlighted how more and more people are using AI tools to carry out everyday tasks.
These new AI applications have the power to gradually shape people's mindsets, meaning even small gender biases in generated content can multiply inequalities in the real world.
Image: DIW-AIgen
Read next: GPT-4 Was the Worst Performing AI in This Copyright Infringement Test