Social media platforms need to take content moderation seriously because it makes their services safer for users. Flagging and removing content that violates their policies, along with suspending or banning offending users, are common tactics. Implementing these techniques is often challenging, however, and a suite of new scientifically backed tools may give these platforms a path forward.
Luca Luceri, a researcher at USC, is part of a research project that could make social media content moderation easier. The project is called CARISMA, short for Call for Regulation Support in Social Media.
The project's main goal is to develop a clear, repeatable methodology that can be used to assess how effective content moderation policies actually are.
One interesting finding from the research was that bad actors often create Twitter (now X) accounts during times of geopolitical strife. The researchers analyzed over 270 million tweets published during the Russian invasion of Ukraine and the 2022 French presidential election, which shed light on several behaviors on the platform.
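To give a rough sense of the kind of analysis involved, here is a minimal, illustrative sketch, not the CARISMA pipeline itself, of how one might check whether account creation spikes around a crisis date in a tweet dataset. The column names ("user_id", "account_created_at") and the pandas usage are assumptions made for the example.

```python
import pandas as pd

# Illustrative sketch only, not the CARISMA pipeline: given a table of tweets that
# records each posting account's creation date, count how many distinct accounts
# were created per day and compare the crisis period against a pre-crisis baseline.
# The column names ("user_id", "account_created_at") are assumptions for the example.

def creation_spike(tweets: pd.DataFrame, event_start: str, baseline_days: int = 30) -> float:
    """Ratio of average daily account creations on/after event_start to the
    average over the preceding baseline window (values above 1 suggest a surge)."""
    users = tweets.drop_duplicates("user_id").copy()
    users["day"] = pd.to_datetime(users["account_created_at"]).dt.floor("D")
    created_per_day = users.groupby("day").size()

    start = pd.Timestamp(event_start)
    before = created_per_day[(created_per_day.index >= start - pd.Timedelta(days=baseline_days))
                             & (created_per_day.index < start)]
    after = created_per_day[created_per_day.index >= start]
    return after.mean() / before.mean()
```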
It turns out that Twitter is more likely to ban recently created accounts. However, speedy account removal does not always stem the spread of misinformation: YouTube videos posted by these accounts still spread at a much faster rate, suggesting that additional steps must be taken to truly optimize social media content moderation.
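As a purely illustrative sketch of how the link between account age and suspension could be examined, the snippet below buckets accounts by age and compares suspension rates across buckets. The columns ("created_at", "was_suspended") and the snapshot date are assumptions, not the study's actual schema.

```python
import pandas as pd

# Illustrative sketch only: bucket accounts by their age at a chosen snapshot date
# and compare suspension rates across the buckets. The columns ("created_at",
# "was_suspended") and the snapshot date are assumptions, not the study's schema.

def suspension_rate_by_age(accounts: pd.DataFrame, snapshot: str) -> pd.Series:
    """Share of suspended accounts in each account-age bucket."""
    age_days = (pd.Timestamp(snapshot) - pd.to_datetime(accounts["created_at"])).dt.days
    buckets = pd.cut(age_days,
                     bins=[0, 7, 30, 180, 365, 10_000],
                     labels=["<1 week", "1-4 weeks", "1-6 months", "6-12 months", ">1 year"])
    return accounts.groupby(buckets, observed=True)["was_suspended"].mean()
```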
The CARISMA project is also building a social media moderation simulator that could prove enormously useful in the long run. The simulator can be used to analyze the long-term effects of various policies, helping platforms assess what they need to do.
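The details of the CARISMA simulator are not described here, but the toy model below sketches the general idea of simulating a policy's long-term effect: misinformation posts spread each step, a moderation rule removes detected posts after a delay, and two policies can then be compared on cumulative exposure. All parameters and mechanics are illustrative assumptions, not the project's methodology.

```python
import random

# A toy model, not the CARISMA simulator: misinformation posts spread each step,
# and a moderation policy removes detected posts after a delay. All parameters
# and mechanics here are illustrative assumptions, meant only to show how the
# long-term effects of two policies could be compared in simulation.

def simulate(steps=50, growth=0.3, removal_prob=0.4, detection_delay=2, seed=0):
    """Return cumulative user exposure to misinformation over time."""
    random.seed(seed)
    cohorts = [100.0]          # cohorts[i] = posts created at step i that are still up
    cumulative, exposure = [], 0.0
    for step in range(steps):
        exposure += sum(cohorts)
        cumulative.append(exposure)
        # Moderation: cohorts older than the detection delay may be removed this step.
        for i in range(len(cohorts)):
            if step - i >= detection_delay and random.random() < removal_prob:
                cohorts[i] = 0.0
        # Spread: surviving posts seed a new cohort proportional to their volume.
        cohorts.append(sum(cohorts) * growth)
    return cumulative

# Example: a policy that detects posts quickly versus one that reacts slowly.
fast = simulate(detection_delay=1)
slow = simulate(detection_delay=5)
print(f"Cumulative exposure after 50 steps: fast={fast[-1]:,.0f}, slow={slow[-1]:,.0f}")
```

In a real evaluation the spread and removal behavior would be fit to observed platform data rather than fixed constants; the point of a simulator is that such comparisons can be run before a policy is deployed.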
Social media platforms have long been a hub for misinformation and harmful content. Despite this, they have largely failed to mitigate these risks, which is why the simulator might prove useful.
Sources: 1 / 2.