Elections are taking place around the globe this year, with an estimated four billion voters across various nations set to take part.
With more than 50% of the global population heading to the polls, concerns about misinformation swaying people’s opinions are running high.
This is why social media firms like TikTok, Meta, and YouTube are working around the clock to make sure the right safeguards are in place to stop this from happening.
No one wants to see fabricated content promoted, and the executives in charge of these leading social media apps are on their toes to ensure it does not happen.
Those safeguards have not been applied evenly across every app, however, and that absence is leading researchers at organizations like Mozilla to voice their concerns.
Around 90% of the safety interventions Meta has put in place are tied to just two of its leading platforms: Facebook and Instagram.
Researchers are reminding the tech giant, however, that it needs to make similar changes to its popular messaging platform WhatsApp, which has no public commitment or road map for how elections will be handled and how the app will be protected from circulating misinformation.
Over the past decade, WhatsApp has become the main means of communication for many people outside America. In 2020, the platform announced it had passed two billion users, transforming how communication is carried out.
Despite those numbers, Meta has paid surprisingly little attention to WhatsApp’s potential as a major channel for circulating misinformation, and its election-related safety measures on the app remain very limited.
A recent analysis found that Facebook has made 95 election-related policy changes since the start of 2016.
That was the year the platform first raised eyebrows for not doing enough to stop the circulation of fake news and for harboring extreme political sentiment. WhatsApp, by contrast, accounted for just 14 such changes.
For comparison, Google and YouTube made 35 and 27 changes respectively, while X made 34 and TikTok 21. Meta’s efforts around the current election period, in other words, are heavily skewed toward the Facebook app.
This is likely why researchers at the non-profit Mozilla are unhappy and are calling on the company to make serious changes to how the app works on polling days, months before the elections even begin.
The researchers are calling for simple but effective changes, such as adding disinformation labels to content that goes viral or is forwarded faster than usual. Tags like “please verify” would encourage people to do their own research instead of blindly trusting what they read online.
Mozilla also wants to stop people from blasting messages to multiple community members at once; the new friction would force senders to pause and think about what they are sharing, and if they still feel it is alright after reflecting, so be it.
Mozilla has also asked the platform, in the form of a pledge, to roll out these changes before it is too late. So far the effort appears to be working: it has gathered close to 16,000 signatures, as a representative confirmed to the media outlet Engadget.
Notably, the idea of asking people to pause and reflect stems from features Twitter rolled out to curb misinformation spread through retweets.
Image: DIW-Aigen