Meta recently revealed a new tool built to develop AI programs quickly and efficiently. Just one catch, though: the tool apparently has some racist tendencies.
It’s almost expected that AI, or even AI development systems, built as they are by humans with inherent biases, would ultimately come to reflect some form of them. It’s the ultimate fallacy of machines: no system in the world can be truly free of error and bias, especially since the “unnatural” ones, such as technological devices, are ultimately made by imperfect, “natural” beings. And yes, that’s as philosophical as I intend to get about technology; now, back to our regularly scheduled programming. It is interesting to note, however, that this is the second time in the past few years that we’ve come across racist AI employed by a social media platform, which feels like a phenomenon that either shouldn’t have happened twice or should have happened many, many more times than that.
The example I have in mind is Twitter’s image-cropping AI. The short-form text platform (that’s an indie band name if I’ve ever heard one) deployed an algorithm that automatically cropped photos that didn’t fit Twitter’s display, sparing users the effort of editing photos ahead of time. However, users quickly figured out that when photos of larger groups were posted, Black people kept getting cropped out. Some users even ran tests and concluded that the AI was straight up ignoring faces that weren’t white. So, let’s be real: there’s little chance that the developers were actively trying to make their technology racist, personal beliefs notwithstanding. What it does show is just how effectively racial bias seeps into every social crevice: the AI was probably trained on a reference database of photos, and those photos probably skewed heavily white, since media channels aren’t super keen on showing minorities except to score diversity points every now and then. This is, of course, speculative, and I’m willing to be educated on the actual reason. My point, however, still stands; a rough sketch of the mechanism follows below.
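For the curious, here’s roughly what that kind of cropper looks like under the hood. To be clear, Twitter’s production model was a neural saliency predictor trained on eye-tracking data, not the classical OpenCV routine below, and the file name and crop size here are made up. The sketch only shows the mechanism: the crop is centered on whatever the saliency model scores highest, so if the model learned its notion of “interesting” mostly from photos of white faces, guess who stays in frame.

```python
# Minimal sketch of saliency-based auto-cropping, the general technique
# behind Twitter's cropper. OpenCV's classical spectral-residual saliency
# stands in for Twitter's neural model, purely to illustrate the mechanism.
# Requires: pip install opencv-contrib-python
import cv2
import numpy as np

def saliency_crop(image: np.ndarray, out_w: int, out_h: int) -> np.ndarray:
    """Crop `image` to (out_w, out_h), centered on its most salient point."""
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, sal_map = saliency.computeSaliency(image)
    if not ok:
        raise RuntimeError("saliency computation failed")
    # The crop follows the saliency peak: whoever (or whatever) the model
    # scores highest stays in frame; everything else can get cut.
    _, _, _, (cx, cy) = cv2.minMaxLoc(sal_map)
    h, w = image.shape[:2]
    x = min(max(cx - out_w // 2, 0), max(w - out_w, 0))
    y = min(max(cy - out_h // 2, 0), max(h - out_h, 0))
    return image[y:y + out_h, x:x + out_w]

img = cv2.imread("group_photo.jpg")  # hypothetical input file
cv2.imwrite("cropped.jpg", saliency_crop(img, 600, 335))
```

Nothing in that logic is racist on its face, which is exactly the point: the bias lives entirely in what the saliency model was trained to find salient.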
Meta’s new system, named OPT-175B, was, funnily enough, outed for its less-than-scrupulous tendencies by the company’s own researchers. A report accompanying the system’s test release noted that OPT-175B had a tendency to generate toxic language that reinforced harmful stereotypes about individuals and races. I guess Meta wanted to stay ahead of the curve on this one, and its researchers are still at work ironing out the new generator’s kinks.
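If you want a feel for how researchers catch this sort of thing, here’s a minimal sketch. OPT-175B itself was only available by research request, so this uses facebook/opt-125m, one of the smaller OPT checkpoints Meta released openly, as a stand-in; the prompts are my own illustrative examples, not ones from Meta’s report. The basic idea is to feed the model near-identical prompts that differ only in a demographic term and compare what comes back.

```python
# Hedged sketch of a simple bias probe against an open OPT checkpoint.
# facebook/opt-125m is a stand-in for OPT-175B; prompts are illustrative.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="facebook/opt-125m")

# Audits typically vary only the demographic term across otherwise-identical
# prompts, then score the completions with a toxicity classifier.
prompts = ["The woman worked as a", "The man worked as a"]
for prompt in prompts:
    out = generator(prompt, max_new_tokens=20, do_sample=True)
    print(out[0]["generated_text"])
```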