Over the last four years, Facebook's facilitation and amplification of misleading information and hate speech have been a critical topic. The pandemic, the #BlackLivesMatter movement, and the Presidential election have all underlined the platform's role in amplifying dangerous narratives. The company is currently facing tough questions over its role in spreading misinformation about the election results, and it is keen to point out that it is working to address hate speech.
For the first time, Facebook's Community Standards Enforcement Report includes a measure of the prevalence of hate speech. The company explained that prevalence estimates the percentage of times users see violating content on Facebook. Facebook estimates that only 10 to 11 of every 10,000 content views on the platform included hate speech, meaning that roughly 0.10% to 0.11% of content views include hate speech.
Although that figure seems low, it is essential to consider the scale. The platform has more than 2.7 billion monthly active users. Even if every user saw only one post per month, 0.10% of those 2.7 billion views would still amount to 2.7 million views of hate speech content.
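To make that back-of-the-envelope arithmetic concrete, here is a minimal sketch using only the figures quoted above; the one-view-per-user-per-month assumption is deliberately conservative:

```python
# Prevalence = share of all content views that contain violating material.
# Figures below come from the article; the view count is an assumption.

views_with_hate_per_10k = 10            # lower bound of Facebook's 10-11 estimate
prevalence = views_with_hate_per_10k / 10_000   # 0.0010, i.e. 0.10%

monthly_active_users = 2_700_000_000    # "more than 2.7 billion active users"
monthly_views = monthly_active_users * 1        # one view per user (assumption)

hate_speech_views = monthly_views * prevalence
print(f"Prevalence of hate speech: {prevalence:.2%}")            # 0.10%
print(f"Hate-speech views per month: {hate_speech_views:,.0f}")  # 2,700,000
```

Even under this unrealistically low usage assumption, millions of views of hate speech remain; real view counts are vastly higher.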
However, the company claims that it is detecting and removing more hate speech content than ever. According to Facebook, its proactive detection rate was 23.6% in Q4 2017, meaning that 23.6% of the hate speech content it removed was deleted before any user reported it. Today, Facebook proactively detects approximately 95% of the hate speech content it removes.
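A minimal sketch of how such a proactive detection rate is computed follows; the item counts are hypothetical, and only the two rates (23.6% and roughly 95%) come from Facebook's report:

```python
# Proactive detection rate: of all hate-speech items removed, what share
# was caught by automated systems before any user reported it?

def proactive_detection_rate(removed_proactively: int, removed_total: int) -> float:
    """Share of removed content that was flagged before a user report."""
    return removed_proactively / removed_total

# Hypothetical quarter: 1,000,000 items removed, 950,000 caught proactively.
rate = proactive_detection_rate(950_000, 1_000_000)
print(f"Proactive detection rate: {rate:.1%}")   # 95.0%, Facebook's current figure
```

Note that this metric says nothing about hate speech that was never detected or removed at all; it only describes the content Facebook did take down.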
The platform has also ramped up removals over the past few months, including Facebook's decision to remove content related to QAnon. The company notes that it has taken further enforcement action against hate-related groups. For instance, Facebook has moved to tackle white nationalism and white separatism and has introduced new rules on content calling for violence against migrants.
While the social media giant is improving its enforcement, it still tends to act too late. In QAnon's case, the company was repeatedly warned of the potential dangers posed by movements related to the conspiracy theory, yet it allowed such groups to thrive on its platform. Facebook changed its approach to QAnon-related groups in August, but by that time much of the damage had already been done.
To combat misinformation, the social media giant is also working to improve its artificial intelligence detection systems so they can better detect false information and weed out near-duplicate versions of the same false stories.
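Facebook has not detailed how these systems work, but the underlying idea of catching reworded copies of the same story can be illustrated with a toy sketch. The similarity check below is a simplistic stand-in (Python's standard-library SequenceMatcher), not the company's actual AI, and the sample strings and threshold are invented for illustration:

```python
# Toy illustration of near-duplicate text detection. Facebook's real systems
# use large-scale AI models; this only shows the general idea.

from difflib import SequenceMatcher

def is_near_duplicate(a: str, b: str, threshold: float = 0.85) -> bool:
    """Flag two texts as near-duplicates if their similarity ratio is high."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

original  = "Vaccine X causes severe side effects in most patients"
reworded  = "Vaccine X causes severe side-effects in many patients!"
unrelated = "The city council approved the new park budget today"

print(is_near_duplicate(original, reworded))   # True:  same false story, reworded
print(is_near_duplicate(original, unrelated))  # False: different content
```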
Read next: Major Tech Companies Join Forces in Battle Against Vaccine Misinformation