TikTok has recently introduced warning labels for content containing dubious or unverified information, in an attempt to curb the spread of misinformation on the platform.
With this development, TikTok joins the ranks of other social media apps such as Facebook, Instagram, and Twitter in trying to stem online communities that spread, or form around, unfounded claims. The short-form video-sharing platform has been exploring ways of combating misinformation for a while, such as removing content judged to be false by third-party fact-checkers like SciVerify and PolitiFact. However, as the developers admitted in a statement accompanying the announcement of the warning labels, these review panels can prove inconsistent at times, and other solutions needed to be examined.
The warning labels serve a dual purpose: they highlight TikToks containing potentially harmful or incorrect information, while also giving users the agency to decide not to share those posts forward. Considering the active Gen Z population on such platforms, it is entirely possible that flagged videos will be shared on with the intention of raising awareness about online misinformation, while discrediting and ridiculing the users behind it. Let us not forget, this is the generation whose K-pop fans hijacked the #WhiteLivesMatter hashtag to counter racist posts with posts celebrating the Black Lives Matter movement.
The warning labels work as follows: when a video contains potentially dubious information but has not been removed by TikTok, choosing to share it triggers a pop-up message. The message asks users whether they still want to share the post, explaining that it has been flagged for containing unverified claims. In a rather roundabout way, this even gives users context about what new conspiracy or online campaign is taking shape, bracing them against it.
Then again, nothing would be more effective than actively removing such posts. TikTok's main audience is Gen Z, a generation that, for all its wit and intelligence, is still made up of young, impressionable kids who may fall victim to persuasive online discourse. Be it inflammatory rants against racial minorities, conspiracies about the COVID-19 pandemic, or any other form of fear-mongering, TikTok's young community only strengthens the case for more active and thorough policing. And with COVID vaccines now rolling out, this is, more than ever, a time to combat online misinformation.
Read next: Google, YouTube, Facebook Are the Most Visited Websites in the World (infographic)