Popular video-sharing platform YouTube is stepping up its crackdown on AI-generated content that realistically simulates the deaths or disappearances of minors.
The tech giant confirmed it will make it harder for such content to appear in the first place by tightening its policies on cybercrime and harassment.
Google has already faced criticism over the devastating impact AI content can have on minors, who are often unaware they are being targeted by cybercriminals exploiting the technology for disturbing ends.
The company added that the harsher policies will be rolled out starting the middle of this month, meaning they could come into effect as early as next week.
The Google-owned platform has observed that many crime-obsessed creators are using AI to recreate realistic depictions of child deaths and disappearances. In some cases, creators have used the technology to give minors tied to high-profile cases an AI-generated voice that narrates their own deaths, sounding exactly as the child involved might have.
Several creators have recently been called out for such disturbing narrations, covering the kidnapping and killing of British victims such as two-year-old James Bulger, a case reportedly described in The Washington Post.
Other common subjects include Madeleine McCann, the British three-year-old who suddenly disappeared from a holiday resort, and Gabriel Fernandez, who was abused and then killed by his own mother and her partner in California.
Under the new rules, the online video powerhouse will remove any content violating the policies, and users who receive a strike will be barred from publishing videos, stories, or livestreams for seven days. Channels that accumulate three strikes will be permanently deleted from the platform.
The changes come roughly two months after the platform began rolling out policies requiring responsible disclosure of AI-generated content, alongside new tools that let people request the removal of deepfakes.
One of those changes requires users to openly disclose when they have created or altered content that appears realistic.
The firm warned that creators who fail to make such disclosures before publishing could face content removal or suspension from the YouTube Partner Program, which helps creators generate revenue, among other penalties.
Additionally, the platform noted at the time that some AI-generated material could be removed if it is used to depict violence realistically, even in cases where it is labeled.
Toward the end of last year, social media giant TikTok rolled out a tool that lets creators add labels to content that is synthetically produced or manipulated. The app has also been taking down AI-generated images that fail to disclose their nature.
Photo: Digital Information World - AIgen