The prevalence of bot traffic on the internet is a well-known issue, but several studies have indicated that we don't know its full extent. Lyric Jain, CEO of an AI software company called Logically, recently unveiled findings that reveal how many big companies and firms are contributing to online bot traffic, in part because it allows them to smear competitors.
Firms that use bots are mostly trying to spread misinformation about their competitors. They also use fake accounts for this purpose, frequently deploying these tools to spread exaggerated claims about controversies their competitors might be embroiled in. According to Jain, a majority of the cases he has worked on involved Chinese companies trying to discredit Western competitors, although he admitted that some smaller Western businesses might be using such illicit methods as well.
Logically offers services to companies that want to fight misinformation, which entails monitoring upwards of 20 million individual social media posts on a daily basis. Bot accounts and fake accounts usually get deleted by the host platform once it is made aware of them, but much of the damage will already have been done by that point.
The main issue is that a social media platform takes about two hours on average to act on a report, while it usually takes only a few minutes for fake news and misinformation to start spreading rapidly. This is further exacerbated by the algorithms social media platforms use to surface content, since they tend to push posts that incite outrage and anger, both of which drive astonishing levels of engagement.
The desire of social media platforms to keep users from navigating away from their apps, coupled with shady companies using underhanded methods to disparage competitors, creates the perfect storm for the spread of fake news. Logically is not the only company working in this space, either: Factmata uses upwards of 19 different algorithms in its AI to hunt down targeted misinformation on behalf of corporate entities. Running multiple algorithms simultaneously helps prevent its AI from marking genuine satire or good-humored white lies as misinformation, which can muddle the results.
However, some people such as Oxford professor Sandra Wachter are calling into question the use of technology to fight misinformation. In her opinion, there is too much nuance in social media discourse to allow AI to function optimally. It might take a lot of machine learning before an AI can genuinely tell the difference between satire and outright misinformation.
YouTube's use of algorithms to remove videos that violate copyright is just one example of a situation where AI can do more harm than good, and it remains to be seen whether these companies will be able to deliver different results.
H/T: BBC
Read next: New Study Proves Most People Lie About Their Tech Proficiency At Work