In a recent blog post, YouTube announced that it is starting to use Artificial Intelligence (AI) powered technology to age-restrict more videos, and that users will more frequently be required to sign in to their accounts to verify their age before watching.
YouTube was prompted to take this step by a steady stream of appeals from concerned parents and advocacy groups around the world. Many people believe YouTube is not safe for children, and YouTube has largely accepted this criticism. Time and again, the company has said that the platform is not meant for children under the age of thirteen, which is why YouTube Kids was launched some time ago as a safer alternative. But that hasn't stopped kids from using the main app.
Part of the responsibility lies with some of the most popular YouTube channels, which create content specifically for kids, making this something of a tug of war between YouTube and its youngest viewers.
But YouTube now appears serious about preventing younger children from watching videos on its main app. More videos will be flagged with age-restriction labels, and age verification processes will be tightened.
The AI technology will be used for content moderation: if it comes across a video containing content inappropriate for viewers under 18, the video will be flagged on the spot.
YouTube says that because its machine learning technology will result in more videos being age-restricted, its policy team has taken the opportunity to draw clearer boundaries around which content is appropriate for which age groups. The teams at YouTube also consulted experts who evaluated its policies against global content rating guidelines and pointed out areas that needed adjusting, and that process informed the decision to employ AI-powered technology for content moderation.
YouTube has used artificial intelligence before: in 2017, it deployed the technology to detect and remove videos featuring violent extremism and hateful content. With a similar approach in mind, it will now use AI for content moderation and for applying age restrictions to content that may be inappropriate for people under the age of 18. Users watching YouTube videos embedded on third-party sites will be directed to YouTube itself to sign in and verify their age.
Since age-restricted videos rarely carry ads, this should not create a new monetization problem for creators, who already expect limited earnings on such content.
However, wherever machine learning is involved, mistakes are possible, including incorrect labels and mistaken copyright strikes.
To provide an extra layer of verification, users in the European Union may even be asked to show a valid ID card to confirm their age before signing in to YouTube. Although YouTube says this government-issued information will be deleted, the requirement may unsettle users who are cautious about revealing their identities.
So, let us see how it all turns out and how much content moderation these steps can actually achieve.
Photo: Dado Ruvic / reuters
Read next: An ex-YouTube employee tells how difficult it is to moderate world's largest video search engine