All the major social media platforms keep tightening their standards to ensure that community rules are not violated. Facebook, too, keeps improving its detection efforts to remove content that goes against its community guidelines. It has incorporated advanced machine learning tools to keep users safe from misinformation, abuse, hate speech, and various scams.
Facebook claims considerable success with its machine-learned models, which help the platform take down 99.5% of fake Facebook accounts before users ever report them.
However, no matter how advanced these machine-learned models become, human beings are still needed to review the machine-generated decisions. And human reviewers have limitations of their own: sometimes important details slip past their judgment, and harmful content then wreaks havoc on the platform.
Machine models can identify emerging issues, but human input is still necessary. When the COVID-19 pandemic forced many of Facebook's offices to shut down, its review workforce was greatly reduced. In those circumstances, it became imperative to build a system that could estimate the uncertainty in human-generated decisions more accurately.
So, Facebook has now built and deployed a system called CLARA (Confidence of Labels and Raters). CLARA is used at Facebook to reach more accurate decisions while cutting operational resource use. It enhances the human decision-making process by adding a machine learning layer on top of human reviews: it assesses the reliability of individual raters and weighs each content decision by the past accuracy of their ratings.
When a human reviewer decides to take something down, CLARA works as a real-time prediction service: it cross-checks and reassesses that decision against what a machine model would have predicted, weighing both by their past accuracy records.
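The article does not spell out CLARA's actual model, but the idea of weighing a reviewer's vote against a machine prediction by their respective track records can be sketched as a naive Bayes update. Everything below is an illustrative assumption, not Facebook's implementation: the model's score is treated as a prior, and each rater's historical accuracy as a symmetric error rate.

```python
def posterior_violating(model_prob, rater_votes):
    """Combine a model's prior probability that content violates policy
    with rater votes, each weighted by that rater's historical accuracy.

    model_prob: the machine model's estimated probability of a violation
                (used as the prior -- an assumption for this sketch).
    rater_votes: list of (vote, accuracy) pairs, where vote is True if
                 the rater labeled the content as violating, and accuracy
                 is the rater's past agreement rate with ground truth,
                 treated here as a symmetric error rate (a simplification).
    """
    p = model_prob  # prior: the machine model's estimate
    for vote, acc in rater_votes:
        # Likelihood of observing this vote under each hypothesis.
        like_violating = acc if vote else 1 - acc
        like_benign = 1 - acc if vote else acc
        # Bayes update of the probability the content is violating.
        p = p * like_violating / (p * like_violating + (1 - p) * like_benign)
    return p
```

On this toy model, a single vote from a 90%-accurate rater moves an uncertain prior of 0.5 to a posterior of 0.9, while the same vote from a 50%-accurate rater leaves the prior unchanged, which is exactly the intuition behind weighing decisions by past rating accuracy.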
CLARA, now deployed at Facebook, is expected to bring a significant improvement to the machine-learned models, making enforcement results smoother, more efficient, and more accurate. It also uses labeling resources more efficiently than a conventional setup: in a production deployment, Facebook found that CLARA saves around 20% of total reviews compared to simple majority voting.
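The savings over majority voting come from stopping early: instead of always collecting a fixed panel of reviews, the system can stop as soon as its confidence in a decision passes a threshold. The sketch below is a hypothetical illustration of that idea (a sequential naive Bayes stopping rule, assumed for this example; the article does not describe CLARA's actual stopping criterion).

```python
def reviews_needed(model_prob, votes_with_acc, threshold=0.95):
    """Apply rater votes one at a time, stopping once the posterior
    confidence in either decision exceeds `threshold`.

    model_prob:     machine model's prior probability of a violation.
    votes_with_acc: available (vote, rater_accuracy) pairs, in order.
    Returns (decision, number_of_reviews_actually_used).
    """
    p = model_prob
    used = 0
    for vote, acc in votes_with_acc:
        # Stop early if we are already confident either way.
        if p >= threshold:
            return True, used
        if 1 - p >= threshold:
            return False, used
        # Otherwise consume one more review and update the posterior.
        like_violating = acc if vote else 1 - acc
        like_benign = 1 - acc if vote else acc
        p = p * like_violating / (p * like_violating + (1 - p) * like_benign)
        used += 1
    return p >= 0.5, used
```

For example, with a model prior of 0.8 and a panel of three 90%-accurate raters who all vote "violating", this rule decides after a single review, where a plain majority vote would consume all three, which is the kind of saving the 20% figure refers to.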
So, let us see how it works out for Facebook and how much better its enforcement gets with the help of CLARA.
Photo: Rafael Henrique/SOPA Images/LightRocket / Getty Images