Fraudsters are financially motivated, adaptable, and increasingly adept at using the latest technologies to execute account takeover fraud. So it’s no wonder that, as biometric authentication becomes ubiquitous in the financial services sector, bad actors are finding opportunities to exploit these systems. But does this mean that technologies like voice recognition have no place in modern banking?
A recent article from VICE showed just how easy it is to bypass telephone banking security steps using AI-generated synthetic voices: the journalist demonstrated the attack by breaking into their own account. Sometimes referred to as voice cloning or voice spoofing, these attacks have sparked an outpouring of privacy concerns about our voices being harvested and used against us.
Since the launch of Lyrebird and WellSaid Labs, AI-generated synthetic voices have evolved to the point where they are indistinguishable from real voices, as reported by MIT, and, according to Google, need only a minute of voice data to produce realistic results. With such technology available, spoofing a voice recognition system is entirely possible once an attacker has access to the victim’s voice data.
Voice recognition systems, like the one used in the VICE exposé, rely on the victim saying something aloud: either a unique passphrase (similar to a password) or a generic statement such as “my voice is my password.” Both are vulnerable to exploitation, the latter especially so, since an attacker knows in advance exactly which phrase to synthesize.
While this is alarming, it is neither unexpected nor a reason to abandon the technology altogether. Generally speaking, banks do not rely on a single form of authentication, so the effectiveness and security of voice recognition depend on the mitigating controls put in place to stop spoofing threats from escalating into full-blown fraud.
Fraud prevention processes typically require banks to apply a higher degree of diligence when making critical changes to an account over the phone, such as updating contact details, resetting a password, adding a new beneficiary, or ordering a replacement card.
Normally, the customer would be asked specific security questions about their transactional and account data to verify their identity; this kind of data is hard to obtain without direct access to the account. That said, a threat actor may still be able to dig out more obscure data, such as transaction history, especially if the victim is known to them.
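As a rough sketch of what this gating logic can look like (the operation names and the verify_security_questions helper below are hypothetical illustrations, not any specific bank’s implementation):

```python
# Hypothetical sketch: critical phone-banking changes require
# knowledge-based verification on top of a voice match.

CRITICAL_OPERATIONS = {
    "update_contact_details",
    "reset_password",
    "add_beneficiary",
    "order_replacement_card",
}

def verify_security_questions(customer_id: str) -> bool:
    """Placeholder: ask questions about account and transaction data
    that are hard to answer without direct access to the account."""
    raise NotImplementedError

def authorize_phone_request(customer_id: str, operation: str,
                            voice_match: bool) -> bool:
    # A voice match alone never authorizes a critical change.
    if not voice_match:
        return False
    if operation in CRITICAL_OPERATIONS:
        # Step up to knowledge-based checks for sensitive operations.
        return verify_security_questions(customer_id)
    return True
```

The point of the extra layer is that the answers depend on account data an attacker is unlikely to hold, even with a convincing voice clone.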
Concerns are heightened for public figures, whose voice data can be harvested from interviews and social media, with platforms like Instagram, TikTok and YouTube opening the floodgates for these kinds of attacks. Even then, obtaining the sensitive account data needed to pass the remaining checks is much more difficult, which makes this kind of attack relatively unscalable for threat actors, who today have access to more efficient means of account takeover fraud.
Scale vs Impact
When it comes to fraud prevention in financial services, prioritizing threats by their financial impact and by how easily they can be executed at scale is key to reducing the threat surface.
Voice recognition spoofing today poses less of a threat to the general public because it is difficult to execute at scale: to succeed, a threat actor needs substantial personal information about a customer to evade the bank’s layered security defenses. That does not mean, however, that such attacks are low-impact when they do succeed.
High-net-worth customers are at particular risk, as their transactional data may be handled by trusted associates such as employees, making them more vulnerable to this kind of attack. They are also more likely to have given interviews or spoken online, making it possible for fraudsters to illegally harvest their voice data.
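To make the scale-versus-impact trade-off concrete, here is a deliberately simplified triage sketch; the threat names and scores are illustrative assumptions, not real measurements:

```python
# Illustrative triage: rank threats by a crude priority score of
# scale (how easily the attack is repeated) times impact (loss per success).
# All numbers below are made-up examples for the sake of the sketch.
threats = {
    # name: (scale 0-1, impact 0-1)
    "credential_stuffing": (0.9, 0.2),
    "phishing":            (0.8, 0.3),
    "voice_spoofing":      (0.1, 0.9),  # hard to scale, costly when it lands
}

ranked = sorted(threats.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
for name, (scale, impact) in ranked:
    print(f"{name}: priority = {scale * impact:.2f}")
```

Under this framing, voice spoofing scores low on scale but high on impact, which is exactly why it is best handled with targeted controls rather than a blanket ban on the technology.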
However, when it comes to protecting against voice spoofing threats, the answer is not to replace biometric technologies altogether, but to complement them with additional security measures.
To mitigate spoofing threats, all biometric authentication solutions, whether voice, face or fingerprint, need robust fallback methods and should be used together with fraud detection engines. When high-risk activity is detected, it is important that banks re-authenticate their customers, even if this adds friction to the customer experience.
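A minimal sketch of this step-up pattern might look as follows; the risk_engine_score function and the one-time-passcode fallback are assumptions chosen for illustration:

```python
from typing import Optional

def risk_engine_score(session: dict) -> str:
    """Placeholder for a fraud detection engine's verdict: "low" or "high"."""
    raise NotImplementedError

def authenticate(session: dict, voice_ok: bool,
                 otp_ok: Optional[bool] = None) -> bool:
    # The biometric check is the first layer, never the only one.
    if not voice_ok:
        return False
    if risk_engine_score(session) == "high":
        # Re-authenticate with an independent factor (e.g. a one-time
        # passcode), accepting the extra friction for high-risk activity.
        return bool(otp_ok)
    return True
```

The design choice worth noting is that the fallback factor is independent of the voice channel, so cloning a voice alone is never enough to pass a high-risk check.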
Compared to passwords and PINs, which can be easily compromised, biometric authentication solutions provide a much higher level of security, but this does not mean they should be used in isolation. By combining biometric solutions, such as voice and facial recognition, with other authentication challenges and fraud detection systems, banks can help protect their customers from the financial impact of identity fraud and account takeover threats.