Facebook is used by so many people every day that it is inevitable someone will upload unsavory content to the platform. Such content rarely lasts long because other users report it, but Facebook is trying to make the platform safer still by tweaking its artificial intelligence (AI) so that the algorithm can detect content that violates community standards before users have to report it. After all, reporting a piece of content means having watched at least part of it, so under the current system some users will inevitably be exposed to potentially triggering material.
The new tweaks to Facebook's AI and machine learning (ML) systems are specifically designed to target revenge porn. A surprising amount of revenge porn is uploaded to Facebook and Instagram every day; it has become an epidemic, and Facebook is clearly intent on tackling the problem so that it can continue to market itself as a family-friendly social media platform where such abuse will not be tolerated.
Some users might be concerned about an AI removing content of its own volition, though; algorithms that remove content without human oversight have a poor track record. However, Facebook has assured users that the new system will be highly precise, targeting only content that truly violates its community standards by being hateful, violent, or abusive in some manner. We will have to wait and see whether that holds up in practice, but many would agree that it's better to be safe than sorry.
"We can now proactively detect near nude images or videos that are shared without permission on Facebook and Instagram. This means we can find this content before anyone reports it, which is important for two reasons: often victims are afraid of retribution so they are reluctant to report the content themselves or are unaware the content has been shared." announced Antigone Davis, Facebook's Global Head of Safety. Adding further, "A specially-trained member of our Community Operations team will review the content found by our technology. If the image or video violates our Community Standards, we will remove it, and in most cases we will also disable an account for sharing intimate content without permission. We offer an appeals process if someone believes we’ve made a mistake."