According to Ajay Almani, president and head of iProov, a biometric authentication company, deep fakes will only become more common as time passes. iProov polled organizations to find out how many had encountered a deep fake in recent months. The poll found that 47% of organizations had encountered a deep fake, 70% believe deep fakes will have a significant impact on them, and 62% said they are taking measures to counter deep fakes created with generative AI.

Almani says deep fakes have been gaining popularity for several years, but they are now at their peak. It is alarming that anyone can create a fictitious person, give it whatever appearance they want, and generate a voice to match. Deep fake images, videos, and voices have become so convincing that they are almost undetectable.
This is a major cause of concern for many governments and organizations. A worker at a multinational financial company paid out $25 million believing the request came from the firm's chief financial officer, when in reality it was a deep fake. KnowBe4, a cybersecurity company, hired an employee who turned out to be a North Korean hacker who had used a deep fake to pass the hiring process.
Exposure to deep fakes also varies from region to region. For instance, 51% of organizations in Asia Pacific and 53% in Europe have fallen victim to deep fakes, and organizations in Latin America (53%) are more exposed than those in North America (34%). The survey also found that deep fakes (61%) rank as the third biggest security concern, behind password breaches (64%) and ransomware (63%).
Deep fakes are improving for a number of reasons, including faster processing speeds, quicker sharing of information, and generative AI. Even though some content can be easily flagged as AI-generated, captchas are making it hard even for human beings to prove that they are human. Biometrics, on the other hand, can solve that problem for people who struggle with captchas. According to iProov, three-quarters of organizations are using facial biometric systems to protect themselves against deep fakes, 67% use some form of biometric tool on their devices, 61% use multi-factor authentication, and 63% are educating their employees on how to detect deep fakes.
iProov also ranked biometric methods by how effective organizations consider them. Fingerprints came out on top (81%), followed by iris (68%), facial (67%), and advanced behavioral biometrics (65%). Voice recognition ranked lowest because deep fakes can easily produce a voice that closely resembles any person's. iProov is also experimenting with an AI tool that uses light from the device screen to flash 10 different colors onto a human face, then analyzes the reflections to determine whether the features belong to a live human. The company is deploying this tool in commercial and government sectors, where it reportedly has an extremely high pass rate.
Read next: ChatGPT Tops AI Tools Among Students, 86% Use AI for Studies: Survey