By now, researchers and consumers largely agree that face unlock is not only more convenient but also reasonably secure. However, an engineer from Brigham Young University (BYU) claims that security could be improved further if companies added distinct facial gestures to their face unlock features.
If you have never owned a device with face unlock, such a system analyzes your face as a whole and generates a unique code on the device to identify you. There have, however, been cases where face unlock worked while the user was asleep, or even when presented with a photograph of the person.
To overcome those shortcomings, BYU computer and electrical engineering professor D.J. Lee has proposed a system called Concurrent Two-Factor Identity Verification (C2FIV).
Lee has stated that the biggest challenge for today's face unlock systems is making identity verification intentional. Nobody should have to rely on a device that can identify its owner while they are unconscious. That scenario is uncomfortably close to the movies, where heroes unlock phones by replicating someone else's face or fingerprints.
Hence, rather than relying on facial features alone, Lee has proposed C2FIV, in which the user records a two-second video while performing a "unique facial action or a lip movement" (preferably mouthing a secret phrase). Facial recognition and the facial gesture together can offer the kind of assurance that today's face unlock systems on mobile devices lack.
C2FIV runs on a neural network that analyzes the facial data and the recordings and matches them against the user's enrolled data. In a test, Lee recorded over 8,000 facial gesture video clips from 50 people, asking them to perform a range of gestures including blinking, dropping the jaw, smiling, and raising the eyebrows; identity verification turned out to be 90% accurate.
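To make the "two concurrent factors" idea concrete, here is a minimal, hypothetical sketch, not Lee's actual implementation, of what verification could look like once a network has reduced both the live face and the recorded gesture to embedding vectors. The function names, thresholds, and 128-dimensional embeddings are assumptions for illustration only.

```python
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def verify(live_face, live_gesture, enrolled_face, enrolled_gesture,
           face_threshold=0.8, gesture_threshold=0.8) -> bool:
    """Grant access only if BOTH the face and the facial gesture
    match the user's enrolled templates (two concurrent factors)."""
    face_ok = cosine_similarity(live_face, enrolled_face) >= face_threshold
    gesture_ok = cosine_similarity(live_gesture, enrolled_gesture) >= gesture_threshold
    return face_ok and gesture_ok


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-ins for embeddings that a trained network would produce.
    enrolled_face = rng.normal(size=128)
    enrolled_gesture = rng.normal(size=128)

    # Genuine attempt: live embeddings close to the enrolled templates.
    live_face = enrolled_face + rng.normal(scale=0.05, size=128)
    live_gesture = enrolled_gesture + rng.normal(scale=0.05, size=128)
    print(verify(live_face, live_gesture, enrolled_face, enrolled_gesture))   # True

    # Spoof attempt: correct face (e.g. a photo) but no matching gesture.
    spoof_gesture = rng.normal(size=128)
    print(verify(live_face, spoof_gesture, enrolled_face, enrolled_gesture))  # False
```

The key point the sketch illustrates is that a static match on the face alone (the photo or sleeping-user case) is not enough: the gesture factor must also match for the device to unlock.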
While 90% accuracy is a promising result, Lee believes significant improvements are possible if the system is trained on a much larger dataset. Better still, the verification system does not need to rely on any server: it runs locally on the device itself, so smartphone, tablet, and laptop users keep their biometric data under their own control.
Lee has filed a patent and is working on further enhancements. So, sooner or later, we may be able to unlock our devices with a wink, a smile, or a mouth movement tied to a unique phrase.