Video-sharing giant YouTube is taking stricter measures to combat the misuse of AI.
The company has announced that users can now report AI-generated content that imitates their face or voice. YouTube already requires creators to disclose when material was produced with AI tools.
Now it is adding a way for people to report AI-generated material that misrepresents them online without their consent or knowledge.
The change appears in an update to the platform's support page, which explains the factors YouTube will weigh when acting on these complaints.
Those factors include whether the content is altered or synthetic, whether it is disclosed to viewers as altered or synthetic, whether the person in it can be uniquely identified, and whether the content is realistic.
Reviewers will also consider whether the content could count as parody or satire, serves the public interest, or carries some other form of value, and whether it depicts a public figure or other well-known individual engaging in sensitive behavior such as violence or criminal activity.
Anyone affected can file a privacy complaint to notify the platform that someone is using AI to produce content in their likeness. At the same time, YouTube stresses that filing a complaint does not guarantee the content, or its uploader, will be removed from the platform.
Content is taken down only if it qualifies for removal under the policy, for example material that depicts a realistic likeness of a real person.
Before filing, YouTube asks complainants to confirm they are uniquely identifiable in the content in question, meaning there is enough information for others to recognize them in it.
Once a complaint is filed, the uploader has 48 hours to act. If the material is not edited or deleted within that window, YouTube begins its formal privacy review.
Note that the platform generally requires complaints to come from the affected person directly, with a few exceptions: minors, vulnerable individuals, and people without internet access.
Last year, bad actors were reported to be using AI-generated content to spread malware through the platform. The new process gives creators, who continue to be targeted by impersonation, a way to protect their audiences and keep uploads on the platform legitimate.
Image: DIW-AIgen