The White House recently confirmed that a number of top tech companies have committed to curbing AI misuse, including the production and distribution of explicit images.
The list includes Adobe, OpenAI, Common Crawl, Anthropic, Microsoft, and Cohere. Each participating company also outlined the measures it is taking to stop its platforms from being used for such unlawful acts, including the generation of non-consensual intimate imagery of adults (NCII) and child sexual abuse material (CSAM).
The issue is nothing new and has drawn negative attention for years, especially since generative AI took center stage. The companies say they are now more committed than ever to safeguarding users from this kind of sexual abuse.
All of the companies except Common Crawl also said they would incorporate feedback loops and stress testing into their development processes to prevent their AI models from generating sexual abuse imagery. Where applicable, they would also remove nude images from AI training datasets.
The commitment sounds serious, but today's announcement is not much different from what we've heard in the past. These companies have made similar pledges before, yet no enforcement mechanisms were outlined for those that fail to follow through. Still, the fact that they acknowledge the need for change is a step forward.
Meanwhile, some tech giants are conspicuously absent from the list, including Meta, Apple, Google, and Amazon. These companies have long been criticized for not doing enough, and their absence sends a loud and clear message.
We've seen several big AI and tech firms take steps in the past to help victims of non-consensual sexual imagery, with the goal of stopping its distribution outside the usual federal channels.
We've seen initiatives like StopNCII collaborate with leading tech firms for a more effective approach to deleting such content from databases. Meanwhile, other companies have committed to rolling out new tools for reporting deepfakes and sexual images of users made without their consent.
Image: DIW-Aigen