Many people seem to be under the impression that AI can help us make better decisions. After all, what could be more objective than a machine? Yet studies have shown that AI can be just as prone to bias as human beings. Humans are the ones supplying the data that feeds its machine learning algorithms in the first place, so it stands to reason that the AI will pick up many of the biases we operate under.
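To make that mechanism concrete, here is a minimal, purely illustrative sketch. The dataset, labels, and "model" are hypothetical and deliberately simplified (they are not from the Tidio survey or any real system); the point is only that a learner trained on skewed examples will reproduce that skew in its predictions.

```python
from collections import Counter, defaultdict

# Hypothetical, deliberately skewed training data: (occupation, label) pairs.
# 90% of the "nurse" examples are labeled "woman" and 90% of the "CEO"
# examples are labeled "man" -- the skew lives in the data, not the code.
training_data = (
    [("nurse", "woman")] * 90 + [("nurse", "man")] * 10 +
    [("CEO", "man")] * 90 + [("CEO", "woman")] * 10
)

# A toy "model" that memorizes the majority label for each occupation,
# which is roughly what any statistical learner converges to on this data.
counts = defaultdict(Counter)
for occupation, label in training_data:
    counts[occupation][label] += 1

def predict(occupation):
    # Return the most common label seen for this occupation during training.
    return counts[occupation].most_common(1)[0][0]

print(predict("nurse"))  # "woman" -- the model echoes the skew it was fed
print(predict("CEO"))    # "man"
```

Real generative models are vastly more complex than this, but the same principle applies: if the training set over-represents one pattern, the model's outputs will tend to over-represent it too.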
A survey conducted by Tidio revealed that 98% of people think AI is biased. 45% of respondents said that this is the biggest issue with AI, and 40% went on to say that developers are the ones responsible for these biases.
That said, the data sets used to train AI deserve some of the blame. They contain all of our biases, and those biases were always going to show up once AI started learning from them.
Despite this, the more recent iterations of AI developed by OpenAI are showing some promise. When prompted to show an ambitious CEO, for example, the AI depicted a Black man as well as a woman wearing professional clothing. This is very likely the result of a conscious effort by OpenAI to reduce biases in DALL-E.
Stable Diffusion, however, clearly has a lot of work to do, since it depicted only middle-aged white men when given the same prompt. When asked to depict a nurse, it showed only women. This is a clear sign of bias, and it suggests that the people behind Stable Diffusion are not putting in enough effort to address it.
Changes need to be made if AI is to reach the next level. Otherwise, it will just be a more advanced representation of the same old biases we’ve been living with for thousands of years.