Deepfakes represent both the best and the worst of what technology can do. They have pushed machine learning to new heights, yet they have also become a tool of misinformation and propaganda.
Some progress has been made in distinguishing deepfakes from genuine recordings, but Microsoft’s Chief Science Officer, Eric Horvitz, argues that new and greater threats are on the horizon. He recently published a research paper that highlights two distinct classes of deepfake threats: interactive and compositional deepfakes.
Interactive deepfakes are reasonably accurate simulations of individuals that you can hold a conversation with, and many people would struggle to tell whether the person on the other end was real. Compositional deepfakes are even more dangerous: they could allow threat actors to construct entire artificial histories that lend their deepfakes greater credibility and make them harder to dispute.
Horvitz has frequently commented on the threats that deepfakes pose, but advances in the technology are taking those threats to a whole new level. He is not merely sounding the alarm, however. Horvitz has also proposed solutions that private and government enterprises can use to fight deepfakes.
Improved authentication protocols can be a useful tool, but nothing will have more impact than better media literacy among ordinary people. If people are educated and aware that certain types of media could be deepfakes, they will be less likely to take what they see at face value. A discerning, well-informed consumer is better equipped to handle these challenges, and they need to be given the right training.
Photo: Starline/freepik