While algorithms are changing the world for the better, bad actors are also exploiting them for their own gain. One recent example is the generation of fake faces convincing enough to pass as real photographs.
But there is now a way to check whether the person in an image is real or not - an online tool built by researchers at Sensity, an Amsterdam-based visual threat intelligence company.
The tool's algorithms are designed to spot whether an image was manipulated using a generative adversarial network (GAN). For those of you who don't know, GANs are the technology behind most of the deepfakes that have gradually come to dominate the internet.
Forgeries are detected with a combination of deep learning and visual forensics techniques, according to Giorgio Patrini, CEO and co-founder of Sensity. He explains that engineers first train the deepfake detectors by feeding them hundreds of thousands of deepfake videos and GAN-generated images collected from the internet. For further research and development, the engineers have also crafted training material of their own, so that the algorithms can stand up to more formidable challenges.
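The training approach described above - feed a model labelled real and GAN-generated examples until it learns to separate them - can be sketched as a toy binary classifier. The sketch below is purely illustrative: the feature extractor, the synthetic data, and the logistic-regression model are all stand-ins invented for this example, not Sensity's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(images):
    # Placeholder for a real forensic feature extractor (a production
    # detector might look for frequency artifacts left by GAN upsampling);
    # here we simply flatten the pixels.
    return images.reshape(len(images), -1)

# Synthetic stand-in data: "real" and "GAN-generated" 8x8 images drawn
# from slightly shifted distributions so a classifier can separate them.
real = rng.normal(0.0, 1.0, size=(200, 8, 8))
fake = rng.normal(0.7, 1.0, size=(200, 8, 8))
X = extract_features(np.concatenate([real, fake]))
y = np.concatenate([np.zeros(200), np.ones(200)])  # 1 = GAN-generated

# Fit a logistic-regression "detector" by plain gradient descent.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(300):
    z = np.clip(X @ w + b, -30, 30)       # clip logits for numeric safety
    p = 1.0 / (1.0 + np.exp(-z))          # predicted P(GAN-generated)
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)

def detect(images):
    """Return a confidence in [0, 1] that each image is GAN-generated."""
    z = np.clip(extract_features(images) @ w + b, -30, 30)
    return 1.0 / (1.0 + np.exp(-z))

train_acc = np.mean((detect(np.concatenate([real, fake])) > 0.5) == y)
print(f"training accuracy: {train_acc:.2f}")
```

The key idea mirrors the article: the detector never needs to know how a specific deepfake was made, it only needs enough labelled examples to learn the statistical fingerprints that generators leave behind.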
As of now, Patrini is hopeful that the tool can easily detect any fake face. For further advancement, his team is also working on identifying fake objects in pictures or videos.
The team of engineers tested the tool comprehensively by feeding it photos produced by several different AI face generators. The detection tool recognized the fake faces with 99.9% confidence.
The outcome for face swaps, on the other hand, was not as straightforward. For instance, the team ran the tool on an excerpt from the YouTube show "Sassy Justice," in which the South Park creators used a face swap to turn Mark Zuckerberg into a salesman. The tool recognized that the face was not GAN-generated, but only with 64.7% confidence.
Patrini thinks that when there are clear signs of manipulation or visual artifacts left by a deepfake generator, the confidence level can go up to 90%.
In low-confidence cases, the tool has still found signals of manipulation, but not enough of them to classify the object as a deepfake or not. With further observation and R&D, Patrini is aiming for higher accuracy and confidence in the future.
Sensity initially offered the tool to technical users who integrated it into their own technology, but as deepfakes rapidly become a global problem, there is now a growing need for a deepfake detector in everyday applications as well.
We already saw such a case last July, when Reuters uncovered a fake journalist named Oliver Taylor, who hid behind an AI-generated face and used the persona to lash out at activists. The ploy worked: the persona received plenty of attention from Israeli news channels.
The Financial Times reported a similar story about a Russian campaign that used GAN-generated faces to push pro-Beijing talking points about China.
More recently, in August 2020, UCL published a report ranking deepfakes as the most dangerous AI-enabled crime to date, with the harm expected to spread from shaming and fake revenge porn to faked audio and video content more broadly.
Under such circumstances, the more this tool evolves, the better it will be for the internet community.