TikTok has recently come under scrutiny for lacking appropriate safeguards for its younger audiences. The company now appears to be taking the issue more seriously, having unveiled a new set of features to limit unwanted content exposure.
For starters, the company recently revealed a new option that lets users automatically filter video content by specifying keywords or hashtags they consider harmful.
The company says that blocking words and hashtags by highlighting them in the ‘Details’ tab can make a real difference in filtering content. For instance, if you block ‘violence’, videos carrying that term should no longer appear in your feed or suggestions. The key point is that the filter works against the terms used in a video’s description.
TikTok acknowledges that these filters are not perfect yet, but they are a clear step toward curbing unwanted exposure for younger users, including videos whose descriptions contain the blocked terms.
TikTok does not promise that you will never see content related to a blocked theme, but the feature adds another way to manage what appears in the app. The functionality is expected to roll out to users over the next few weeks.
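To make the idea concrete, here is a minimal sketch of description-based keyword and hashtag filtering in the spirit of what the article describes. Everything in it (the Video class, the filter_feed function, the sample captions) is a hypothetical illustration, not TikTok’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    description: str  # caption text, hashtags included, e.g. "fun dance #trending"

def filter_feed(videos: list[Video], blocked_terms: set[str]) -> list[Video]:
    """Drop any video whose description mentions a blocked word or hashtag."""
    blocked = {t.lower().lstrip("#") for t in blocked_terms}
    kept = []
    for video in videos:
        words = {w.lower().lstrip("#") for w in video.description.split()}
        if words.isdisjoint(blocked):  # keep only videos with no blocked terms
            kept.append(video)
    return kept

# Blocking "violence" removes the second clip because its caption carries the term.
feed = [Video("clip A", "fun dance #trending"),
        Video("clip B", "street fight #violence")]
print([v.title for v in filter_feed(feed, {"violence"})])  # ['clip A']
```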
Next, TikTok has revealed how it will automatically limit exposure to content on themes it believes are unsuitable for younger viewers, including common ones such as extreme dieting, intense workouts, and depression.
Last year, the company explained how it planned to curb the harm linked to algorithmic amplification, and it aimed to do so by reducing the number of videos from any one sensitive category that get surfaced in the app’s ‘For You’ feed.
Now the company wants to take that further: after a series of tests, it found that the viewing experience improved noticeably when less of that content was shown within a given stretch of the feed.
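As a rough illustration of that idea, the sketch below caps how many videos from a sensitive category can appear back to back in a ranked feed, deferring the rest further down. The function name, the max_run parameter, and the is_sensitive callback are all assumptions made for this example; TikTok has not described its mechanism at this level of detail.

```python
# Hypothetical "dispersion" pass: cap how many sensitive videos can run
# back to back in a ranked feed, pushing the overflow further down.
def disperse(ranked_videos, is_sensitive, max_run=1):
    feed, held, run = [], [], 0
    for video in ranked_videos:
        if is_sensitive(video) and run >= max_run:
            held.append(video)               # too many in a row: hold it for later
            continue
        feed.append(video)
        run = run + 1 if is_sensitive(video) else 0
        if not is_sensitive(video) and held:
            feed.append(held.pop(0))         # slot a held video in after a break
            run = 1
    return feed + held                       # anything still held goes to the end

# Example: three "sad" clips in a row get spread out between neutral ones.
ranked = ["sad1", "sad2", "sad3", "fun1", "fun2"]
print(disperse(ranked, is_sensitive=lambda v: v.startswith("sad")))
# ['sad1', 'fun1', 'sad2', 'fun2', 'sad3']
```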
The task does get tricky at times, since some content is sad and encouraging at once. TikTok admits the challenge is significant, but it believes the results will be worth it.
This area of research is genuinely interesting, if you ask us. Putting a barrier between users and harmful content can have a real, positive effect on their behavior.
Last but certainly not least, TikTok recently announced a new system for rating content. The feature is set to launch in the coming weeks, and the platform hopes it will stop ‘mature’ themed content from being shown to younger audiences in the 13-to-17 age bracket.
When the app finds a video whose themes are too fictional or intense for younger viewers, it will assign the video a maturity score that helps stop the content from being viewed by people under 18.
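Conceptually, that gate could look something like the sketch below: a video’s maturity score is checked against the viewer’s age before it is served. The numeric score and threshold are hypothetical; only the under-18 cut-off comes from the article.

```python
# Hypothetical maturity gate: the 18+ rule comes from the article, the numeric
# score and threshold are assumptions made purely for illustration.
MATURE_THRESHOLD = 0.7

def can_view(viewer_age: int, maturity_score: float,
             threshold: float = MATURE_THRESHOLD) -> bool:
    """Allow a video unless it is rated mature and the viewer is under 18."""
    if maturity_score >= threshold:
        return viewer_age >= 18
    return True

print(can_view(15, 0.9))  # False: mature video withheld from a 15-year-old
print(can_view(21, 0.9))  # True
print(can_view(15, 0.2))  # True: below the mature threshold
```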
In addition, TikTok has introduced new brand safety ratings that advertisers can use to better understand which types of controversial content they should avoid promoting on the app.
We’re curious to see how the app’s systems will detect such content and what impact they will have on audiences.
The systems here appear fairly advanced in their use of AI to keep the most engaging content flourishing on the app. It’s no wonder people have their eyes glued to their screens when recommendations are tuned to their interests and past behavior.
Read next: More Trouble For TikTok As Company Refuses To Adopt Planned Changes In Its Privacy Policy