Voice authentication can be a useful tool for adding an extra layer of security to your accounts, but it becomes far less valuable if malicious actors can crack it in just a handful of attempts. A team of researchers at the University of Waterloo recently revealed that voice authentication systems could be bypassed up to 99% of the time within just six tries.
Voice authentication has been on the rise thanks to remote banking and call centers. The process works by asking a user to repeat a certain phrase a few times; the distinctive features of their voice are then compiled into a voiceprint that the system checks against on future calls.
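To make that process a bit more concrete, here is a rough sketch of how a voiceprint check of this kind might work under the hood. It is not the actual system used by any bank or call center: the extract_embedding function, the 256-dimensional vectors, and the 0.75 similarity threshold are all placeholder assumptions standing in for a trained speaker-embedding model.

```python
import hashlib
import numpy as np

def extract_embedding(audio_samples: np.ndarray) -> np.ndarray:
    """Stand-in for a speaker-embedding model. A real system would run the
    audio through a trained neural network that outputs a fixed-length
    vector; here we just derive a deterministic dummy vector instead."""
    seed = int.from_bytes(hashlib.sha256(audio_samples.tobytes()).digest()[:4], "big")
    rng = np.random.default_rng(seed)
    return rng.standard_normal(256)

def enroll(phrase_recordings: list[np.ndarray]) -> np.ndarray:
    """Average the embeddings of several repetitions of the enrollment
    phrase to form the stored voiceprint."""
    embeddings = [extract_embedding(rec) for rec in phrase_recordings]
    return np.mean(embeddings, axis=0)

def verify(voiceprint: np.ndarray, new_recording: np.ndarray,
           threshold: float = 0.75) -> bool:
    """Accept the caller if the cosine similarity between the stored
    voiceprint and the new recording's embedding clears a threshold."""
    emb = extract_embedding(new_recording)
    similarity = np.dot(voiceprint, emb) / (
        np.linalg.norm(voiceprint) * np.linalg.norm(emb))
    return similarity >= threshold
```

The weak point the researchers exploited is exactly that final similarity check: anything that sounds close enough to the stored voiceprint gets through, whether it came from the real person or not.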
A voiceprint is conceptually similar to a fingerprint, but it turns out to be far easier to bypass. Machine learning now allows malicious actors to create deepfakes of other people's voices, which makes relying on voiceprints alone a risky proposition in terms of the security they can actually provide.
When the researchers tested the voiceprint system currently used by Amazon Connect, a single four-second attack yielded a 10% success rate. With 30 seconds of audio to work with, they were able to crack the system 40% of the time.
The 99% success rate was only achieved against less sophisticated voiceprint systems, but it still reveals serious gaps that would need to be patched before the technology can be considered truly secure. Using multiple security protocols in tandem, with proper checks and balances, is critical, since one protocol can then compensate for any weaknesses in the chain introduced by voiceprints and the like.
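As a rough illustration of what layering protocols in tandem might look like, the sketch below only grants access when both the voice match and a second factor succeed. The function names and the hypothetical one-time-code check are assumptions for the sake of the example, not part of any real banking product.

```python
import hmac

def check_one_time_code(submitted_code: str, expected_code: str) -> bool:
    """Second factor: a code sent to the user's registered device.
    Constant-time comparison avoids leaking information via timing."""
    return hmac.compare_digest(submitted_code, expected_code)

def authenticate(voice_match: bool, submitted_code: str,
                 expected_code: str) -> bool:
    """Require every factor to pass, so a deepfaked voice alone
    is not enough to get through."""
    return voice_match and check_one_time_code(submitted_code, expected_code)

# Even a perfect voice deepfake fails without the correct code.
print(authenticate(voice_match=True, submitted_code="123456",
                   expected_code="987654"))  # False
```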
Read next: STEM Might Not Be Male Dominated Anymore, New Study Shows