We know Facebook has tried its best over the years, but bad actors have always found ways to break its safeguards. With rising concerns over user privacy on the platform, the social media giant is now training an army of bots in an attempt to make its anti-spam walls stronger than ever.
The new system of bots can simulate bad behavior and help test the platform for loopholes and flaws that need to be fixed. Furthermore, these bots are trained to act like real people, using behavior models Facebook has developed from data on over two billion users.
To make sure the experiment doesn't disturb real users, Facebook has built a parallel version of its own social network where the bots have full liberty to run rampant: messaging each other, commenting on fake posts, sending friend requests, visiting pages and much more. Some of these AI bots are also designed to act out extreme scenarios, such as trying to sell drugs or guns on the platform, so Facebook can see how its algorithms would respond to prevent such actions in the real world.
According to Facebook, this new system can host thousands or even millions of bots because it runs on the same code that real users actually experience. As a result, the bots remain faithful to everything a real user encounters on the platform.
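Facebook hasn't published the internals of this system, but the basic idea described above, releasing scripted "bad actor" bots into a sandboxed copy of the platform and measuring how many rule-breaking actions the existing defenses catch, can be sketched roughly like this. Every class, rule, and message below is invented for illustration and is not Facebook's actual code:

```python
import random

# Toy policy rules; a hypothetical stand-in for real integrity checks.
BANNED_PHRASES = {"buy guns", "buy drugs"}

class SpamFilter:
    """Stand-in for the platform's anti-spam defenses."""
    def is_violation(self, message: str) -> bool:
        return any(phrase in message.lower() for phrase in BANNED_PHRASES)

class Bot:
    """A simulated user that behaves normally or as a 'bad actor'."""
    def __init__(self, name: str, bad_actor: bool = False):
        self.name = name
        self.bad_actor = bad_actor

    def act(self) -> str:
        if self.bad_actor:
            # The last message deliberately evades the keyword filter,
            # modeling the loopholes the simulation is meant to surface.
            return random.choice([
                "hey, want to buy guns?",
                "buy drugs cheap",
                "s3lling g*ns, DM me",
            ])
        return random.choice(["nice post!", "happy birthday!"])

def run_simulation(num_bots: int = 100, bad_ratio: float = 0.1, seed: int = 42):
    """Let every bot act once and count caught vs. missed violations."""
    random.seed(seed)
    bots = [Bot(f"bot{i}", bad_actor=(random.random() < bad_ratio))
            for i in range(num_bots)]
    spam_filter = SpamFilter()
    caught = missed = 0
    for bot in bots:
        message = bot.act()
        if bot.bad_actor:
            if spam_filter.is_violation(message):
                caught += 1
            else:
                missed += 1
    return caught, missed

caught, missed = run_simulation()
print(f"violations caught: {caught}, missed: {missed}")
```

The "missed" count is the interesting output: it represents the flaws the parallel network exposes before any real user is ever affected.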
It is still unclear how effective the new simulation environment will be, considering it remains in the research phase and hasn't yet produced a clear outcome that could be used to update the platform. But the project's lead, Mark Harman, is hopeful it will soon help spot the integrity and reliability issues that affect users every day.
Nevertheless, one thing is certain: Facebook is trying its best to fight harassment and spam on the platform, which is why it has invested so much in this artificial-intelligence-based research program. Sooner or later, the results should come through for the benefit of users.
Web-Enabled Simulation is the first method for building realistic, large-scale simulations of complex social networks. We built a WES test environment using production code so we can better detect harmful behavior before it affects people in the real world. https://t.co/Qdhh22atMU pic.twitter.com/OS5a1fwagl — Facebook AI (@facebookai) July 23, 2020
Read next: News Content Dominates Facebook Discussion, Reveals Facebook's Own Data into Most Shared Posts on Its Platform