One of the biggest controversies surrounding Meta in recent years concerns the vast amount of misinformation on its platforms. The issue became especially prominent during the Covid-19 pandemic, prompting Meta to adopt a new content moderation policy. However, a new study published in Media International Australia reveals that the strategy has produced mixed results.
Meta's approach relies on two techniques: shadowbanning and content labeling. The former involves algorithmically reducing the visibility of problematic content in users' feeds, but the research shows it is less effective than one might assume.
According to Professor Amelia Johns, one of the study's lead authors, misinformation tends to attract a fair amount of engagement. As a result, the algorithm amplifies it, exposing it to a larger proportion of users.
The main issue appears to be that Meta consistently errs on the side of caution. The company is hesitant to stifle free speech for fear of inciting backlash, and this creates an environment in which misinformation can thrive.
Meta claims that removing content is not an effective way to suppress misinformation, arguing that users will simply find ways to circumvent removal and that shadowbanning and labeling work better. However, suppression has not shown many positive results either, so some might argue that outright removal would have been a more effective way to tackle misinformation. Unless Meta adjusts its moderation policy, the issue is unlikely to go away anytime soon.
Image: DIW-Aigen