Digital face-swapping in images and videos using deep learning technology is gaining momentum, and it is easily accessible to anyone through deepfake generators and apps. Deepfakes started out as a toy niche created for Artificial Intelligence conferences, but they have since become easily downloadable software that anyone can use to create convincing but fake videos and images of public figures. Different platforms are now trying to keep an eye on the technology to prevent it from being abused.
Facebook launched its Deepfake Detection Challenge in 2019, and after many months of competition in which thousands of people took part, it has finally revealed the results.
Preventing the misuse of deepfakes is necessary because scammers and malicious actors can use fake videos to influence political movements, distort reality, and malign the reputation of otherwise reputable people.
Facebook began the competition last year by releasing a new database of deepfake footage. Until then, researchers had been working with small collections of manipulated videos, nothing like the huge datasets typically used to evaluate and improve computer vision algorithms.
More than 3,500 paid actors recorded thousands of videos for the dataset, which were presented both as originals and as deepfakes. Some 'distractor' modifications were also added to force the algorithms to spot the fakes by focusing on the faces in the videos.
Researchers from around the globe participated and submitted thousands of models designed to decide whether a video is an original or a deepfake.
After many attempts, changes, and rounds of fine-tuning, the best algorithms reached almost 80% accuracy in identifying deepfakes. However, they managed only about 65% accuracy when run on a reserved set of videos that had not been given to the researchers.
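As a rough illustration of why the two numbers differ (this is a toy example, not Facebook's actual evaluation pipeline, and both video sets and the classifier below are made up), a model tuned against the public test distribution can lose accuracy the moment it is scored on footage drawn from a distribution it never saw:

```python
# Minimal sketch: accuracy on a public set vs. a reserved (held-out) set.
# The "videos", labels, and threshold classifier are toy stand-ins, chosen
# only to show how a model can generalize worse to unseen footage.

import random

def accuracy(predict, videos, labels):
    """Fraction of videos whose predicted label (0 = real, 1 = fake) is correct."""
    correct = sum(1 for v, y in zip(videos, labels) if predict(v) == y)
    return correct / len(videos)

random.seed(0)

def make_set(n, shift):
    """Toy 'videos': one feature value per clip; fakes tend to score higher."""
    labels = [random.randint(0, 1) for _ in range(n)]
    videos = [y + shift * y + random.gauss(0, 0.6) for y in labels]
    return videos, labels

public_videos, public_labels = make_set(1000, shift=0.5)      # distribution the model was tuned on
reserved_videos, reserved_labels = make_set(1000, shift=0.0)  # harder, unseen distribution

# A fixed-threshold classifier tuned against the public distribution.
predict = lambda v: 1 if v > 0.75 else 0

print(f"public accuracy:   {accuracy(predict, public_videos, public_labels):.1%}")
print(f"reserved accuracy: {accuracy(predict, reserved_videos, reserved_labels):.1%}")
```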
A 65% hit rate is better than a guessing game or a coin flip, but not by much. Still, in the field of Artificial Intelligence even small successes count, because it is better to build on something rather than nothing. The competition showed researchers and AI experts that something can be done with these algorithms if intelligent minds work together to develop a system capable of keeping a scrutinizing eye on deepfakes.
The dataset created by Facebook was larger, more representative, and more inclusive than previous ones. For a model to develop a representative understanding, it needs a training set with appropriate variance in the way real people look.
While creating the training dataset, Facebook considered many factors of representation, including self-identified age, ethnicity, and gender, because detection technology has to work for everyone, not just a specific subset of people.
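As a rough sketch of what "appropriate variance" can look like in practice (the attribute names, group sizes, and sampling approach below are assumptions for illustration, not Facebook's published methodology), one simple technique is to stratify the pool of actors by self-identified attributes before sampling, so that no single group dominates the training data:

```python
# Minimal sketch of building a demographically balanced training pool via
# stratified sampling. Attribute names and target counts are hypothetical.

import random
from collections import defaultdict

def stratified_sample(actors, keys, per_group):
    """Group actors by the given self-identified attributes, then take up to
    `per_group` actors from each group so no subset dominates the pool."""
    groups = defaultdict(list)
    for actor in actors:
        groups[tuple(actor[k] for k in keys)].append(actor)
    sample = []
    for members in groups.values():
        random.shuffle(members)
        sample.extend(members[:per_group])
    return sample

# Toy actor records; real metadata would come from the dataset itself.
random.seed(0)
actors = [
    {"id": i,
     "age_band": random.choice(["18-29", "30-49", "50+"]),
     "gender": random.choice(["female", "male", "nonbinary"]),
     "ethnicity": random.choice(["A", "B", "C", "D"])}
    for i in range(3500)
]

balanced = stratified_sample(actors, keys=("age_band", "gender", "ethnicity"), per_group=20)
print(f"selected {len(balanced)} actors across {3 * 3 * 4} demographic groups")
```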
Facebook is continuing to work on deepfake detection methods and technology based on these criteria and lessons. It remains to be seen how the work turns out and how much accuracy the algorithms can ultimately achieve.