Facebook is redoubling its efforts to curb online misinformation by sending notifications to users who may have encountered it on the platform.
The announcement comes shortly after Facebook's initial pledge to combat myths and conspiracy theories propagated against the recently developed COVID-19 vaccines. That pledge included flagging posts, groups, and accounts notorious for spreading misinformation and hysteria. Now, the company is turning to the collateral damage: the people who encountered that information. Notifications will be sent to users who liked, commented on, or shared flagged posts, informing them of the error. The notifications will also highlight which information in the post was incorrect, giving individuals valuable insight regarding the pandemic.
This feature is not a newfangled piece of technology so much as an improvement upon an old idea. That idea first took the form of a notification in one's Facebook News Feed: a text bubble asking users to take steps towards stemming the flow of misconceptions amongst loved ones. It also featured a link to the WHO landing page, which users could share as part of the pushback against the typical fare of anti-vaxxers. However, Facebook soon realised that this was not enough, and that a more direct approach might be warranted.
The notification bubble did not state whether users had actually come into contact with misinformation. Nor did it offer the convenience of identifying which parts of flagged posts were incorrect. It simply asked the online community to be more vigilant. While an important message, it left users with a muddled one: most, never making a connection between themselves and flagged posts, would assume the notification was part of some Facebook campaign or something far less pressing. As a result, Facebook's user base never realised it had been exposed to bad information. Clearly, tactics had to change.
Even this newer method isn't completely foolproof. For starters, while the update corrects information, the original post and its wording are never shown or pointed out. Facebook's stated reason is that banned posts cannot be viewed in any shape or form; the company also wishes to play it safe and avoid publicly humiliating well-meaning individuals who simply didn't know any better. Still, this is a far more proactive measure which, combined with post flagging, may start to effectively push back against online misinformation and persistently active groups such as the anti-vaccination movement. This author only hopes for the best.