It’s no surprise that TikTok has been fighting misinformation for a while now. And just like many other tech giants in the industry, they’re having great difficulty.
So you might not want to trust everything you see in the app’s search results: a new report has some troubling findings on that front.
The report, a new study from NewsGuard, found that roughly 20% of the videos surfaced in searches on the platform for major news topics contained misinformation.
Topics to watch out for include the COVID-19 pandemic, the ongoing Russian invasion of Ukraine, and the January 2021 Capitol riot in Washington, DC.
The researchers also found that even innocuous queries could return misleading suggestions. Climate change was the example given in the study: while it’s a straightforward subject, a search or two quickly surfaces suggestions promoting climate change denial.
The app’s results also skew noticeably more partisan than those of rivals like Google. Take the upcoming US midterms, for example: a search there returns far more content containing partisan statements.
A TikTok spokesperson said the company does not allow harmful misinformation on the app, and claimed that such content is removed from the platform as soon as it is discovered.
TikTok adds that tackling misinformation is routine work for it, noting that it has deleted around 350,000 videos linked to the 2020 US elections so far, and it points to the AI technology it uses to screen videos.
Flagged content is deleted automatically and routed to human moderators for review. Even so, it’s striking that the system still struggles to catch offenders, particularly those who deliberately avoid the keywords the AI-based system is able to detect.
Some experts also feel the timing of this report isn’t great for the app. The platform’s chief operating officer is due to testify shortly before the US Senate alongside a number of other executives.
That hearing concerns allegations from various lawmakers that the app puts the country’s national security at risk. The outcome of the hearing isn’t known yet, but it could mean trouble for TikTok.
Read next: TikTok Is Now Providing Creators With Notifications Whenever Their Videos Are Added To An Individual’s Favorites Tab