Microsoft Bing Takes Major Step Against Growing Number Of Non-Consensual Intimate Image Incidents

The advent of Generative AI has brought about a huge change in the tech world. However, not all of that change has been for the good.

AI is making it harder for victims of revenge porn to fight back against those who create synthetic nude images without consent. The fakes look so real that it's difficult to tell them apart from genuine photos. Microsoft, however, has been working on a solution.

The software giant just took a big step to give victims a tool to stop these pictures from resurfacing in Bing Search. This comes thanks to its new partnership with StopNCII, which lets victims generate digital fingerprints (hashes) of the images on their own devices.

Those fingerprints are then used to scrub matching images from platforms where they are likely to circulate, such as Meta's Instagram and Facebook. The same goes for TikTok, Snapchat, Pornhub, Reddit, Threads, and OnlyFans, which also partner with StopNCII.
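The key privacy point of this fingerprinting approach is that the image never leaves the victim's device, only a short hash does, and platforms compare that hash against uploads. StopNCII's real system uses perceptual hashing so that slightly altered copies still match; the sketch below uses a plain cryptographic hash purely to illustrate the flow, and the function names and file path are hypothetical, not StopNCII's actual API.

```python
import hashlib
from pathlib import Path

def fingerprint_image(path: str) -> str:
    """Hash the image bytes locally; only this short digest
    would ever leave the device, never the image itself."""
    data = Path(path).read_bytes()
    return hashlib.sha256(data).hexdigest()

def matches(upload_bytes: bytes, reported_hashes: set[str]) -> bool:
    """A platform-side check: hash the uploaded file and see
    whether it appears in the set of reported fingerprints."""
    return hashlib.sha256(upload_bytes).hexdigest() in reported_hashes
```

A cryptographic hash like SHA-256 only matches byte-identical copies, which is why production systems such as StopNCII rely on perceptual hashes instead; the division of labor, though (hash on-device, match on-platform), is the same.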

As per its latest announcement, Microsoft says it stopped 268,000 explicit images from surfacing in Bing Search as part of a pilot of the new feature. The company deemed the pilot a success and hopes the tool will remain a permanent part of its search engine so revenge porn victims can benefit.


Microsoft mentioned in its latest blog post that it has repeatedly heard from victims and stakeholders that reporting alone is not enough. That's why it hopes this new strategy can help the many people targeted online by fabricated and circulated explicit imagery.

Now you can only imagine: if the issue is this big on Bing, how much bigger a matter it is for Google, whose search engine is 100 times more popular than Bing Search.

Google also provides its own set of tools to report and remove explicit pictures from search results. It has faced plenty of criticism in this regard, including from its own employees in the past, for not doing enough. In South Korea alone, nearly 170,000 Search and YouTube links have been reported for explicit content.

We already know how big an issue AI deepfakes are around the globe, and the problem is getting worse. Small steps like these from tech giants are necessary to help bring it to an end. We're already seeing minors being exploited through undressing sites.

It's also worth mentioning that many countries have yet to roll out a law that combats AI deepfakes. Even the US has yet to pass one, so these nations are relying on law enforcement agencies to tackle the issue.

Prosecutors in San Francisco have spoken about a lawsuit aimed at taking down 16 questionable websites that promote such imagery. And while 23 states have passed laws to better combat non-consensual deepfakes, those laws have seen little success so far.

Read next: YouTube Launches New Detection System To Combat Growing AI Deepfakes