Meta’s Oversight Board Rules Against Instagram, Calling Out The App’s Moderation Practices In AI Deepfake Case

An investigation into an AI deepfake case on Instagram has finally come to an end.

Meta’s Oversight Board examined whether the app should have removed an AI-generated image of a real woman immediately. In its ruling, the board called out Instagram’s moderation practices, as the platform failed to delete the image of a leading public figure from India, whose name has not been disclosed.

The board also reviewed a second incident, in which explicit deepfake images of a real user from the US were removed quickly; the board felt this was the right course of action by the company.

The board noted that deleting both of these posts was in line with Meta’s responsibilities concerning human rights.

The deepfake images of the US woman were added to the company’s database of flagged pictures, but surprisingly, the Indian woman’s images were not included and were deleted only after the board intervened and began its investigation.

This approach was deemed inherently reactive, as many feel the tech giant only acts after cases go viral and the damage to its reputation has already been done. Victims who aren’t famous often don’t receive the same attention and action as more prominent individuals, the investigation explained.

The matter is worrisome given the rapid rise of AI and, with it, deepfake images that may temporarily escape public scrutiny but cause significant distress over time.

The board feels that victims are sometimes forced to stay silent and live with the fake content they reported, because Meta does not cooperate in every instance.

Publishing a fake nude image of anyone online without consent is wrong; in this instance, however, Meta’s weak rules and regulations were also called out. The board found that both deepfakes violated Meta’s policy on derogatory sexualized image manipulation.

The board said Meta’s rules would be clearer with a greater focus on consent and on the harm of such material proliferating. It recommended that Meta reconsider its policies and adopt terminology covering a broader range of image-manipulation methods than just Photoshop. This should become the standard, and the rule should not be limited to bullying and harassment contexts.

The board also criticized Meta’s policy of automatically closing user reports within two days if no staff member responds within that timeframe. This is what happened in the Indian woman’s deepfake case, where the picture was deleted only after the board’s investigation began and Meta was flagged for its response.

For now, the board does not have sufficient data on the tech giant’s auto-closing policy, but it believes the practice might significantly affect users’ human rights.

Most of the comments the Oversight Board received about social media apps described these platforms as the first line of defense against perpetrators of such crimes.

The findings regarding Meta’s handling of nude deepfakes are worrisome and come as the US Senate prepares to pass the pending DEFIANCE Act, legislation that would allow victims of deepfake porn to sue those who produce or distribute such content.

Image: DIW-Aigen
