Speaking The Language Of Cyber Threats: Voice Cloning In Action

Learn about the latest AI voice-cloning threats and how understanding them can help protect your organization
With technology’s rapid pace, cybersecurity threats are evolving, and we at CulperSec are always working to stay one step ahead. Recent developments in voice-cloning technology have allowed scammers to create hyper-realistic copies of real voices, leaving many potential victims unable to tell the difference. But for us, this technology presents a unique opportunity to help organizations gain a better understanding of the threat landscape and to train and prepare their defenses against this new threat vector.
Voice Cloning: An Innovation Potentially More Troubling Than Deepfakes
There has been buzz on the internet for years now surrounding the dangers of so-called deepfake technology, and for good reason: not being able to trust what you see presents many challenges for society. But what if you also can’t trust what you hear? To demonstrate the impact of this new and extremely accessible technology, we have employed voice cloning in a select few of our latest penetration testing campaigns.
The team at CulperSec utilized AI-cloned voices of employees and senior leadership, marking a groundbreaking application of this technology in the penetration testing domain. By extracting only a few minutes of audio from a publicly available 15-minute webinar posted on YouTube, we were able to create a real-time voice-changer model of a senior executive and use that model to trick employees and other members of leadership into performing tasks as part of our social engineering campaign.
Our methodology involved using publicly available, open-source projects to train a model suitable for both Text-To-Speech (TTS) and real-time voice alteration. Using an Nvidia RTX 4090, a gaming GPU accessible to the general public, we trained our model in just under an hour. The results were staggering.
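For readers curious about just how low the barrier has become, below is a minimal sketch of zero-shot voice cloning using the open-source Coqui TTS library and its publicly released XTTS v2 model. To be clear, this is an illustrative example only, not the specific toolchain we used in our engagements; the reference clip path and sample text are placeholders.

# Illustrative sketch only (not CulperSec's exact toolchain): zero-shot voice
# cloning with the open-source Coqui TTS library (https://github.com/coqui-ai/TTS).
# The reference clip path and sample text below are placeholders.
import torch
from TTS.api import TTS

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the publicly released multilingual voice-cloning model (XTTS v2).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to(device)

# A short reference recording of the target speaker, e.g. trimmed from a public webinar.
reference_clip = "speaker_sample.wav"  # placeholder path

# Synthesize arbitrary text in the cloned voice and write it to an audio file.
tts.tts_to_file(
    text="This is a test of synthesized speech in a cloned voice.",
    speaker_wav=reference_clip,
    language="en",
    file_path="cloned_output.wav",
)

Real-time voice alteration, which changes a live microphone feed rather than generating speech from text, requires additional open-source tooling, but the same handful of audio samples is all it takes to get started. That is exactly why defenders should assume any public recording of an executive can be cloned.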
The Impact
These real-time voice models enabled us to convincingly impersonate members of senior leadership during our penetration testing. Through our campaigns, we gained access to systems, acquired sensitive information, and even elevated permissions, all by mimicking the voice of a trusted senior executive. Using the AI voice, we were also able to lend credibility to our existing email phishing campaigns, resulting in more phishing emails being opened, more links being clicked, and more sensitive credentials being entered, despite existing security controls.
The impact of AI voice cloning has unfortunately already been proven in the real world, with attackers using both AI video and voice cloning technologies to steal $25 million.
Why This Matters
Our success with AI voice cloning in penetration testing isn’t just a testament to our own abilities and innovations; it is a dire warning for organizations and individuals alike. Voice cloning technology is no longer limited to large-scale operations; it’s now accessible to small-time crooks, as noted by Subbarao Kambhampati, a computer science professor at Arizona State University, in an interview with NPR. There have already been instances of scammers exploiting this technology. Although some argue for restricting such tools, they are publicly available right now, and regulation alone won’t deter malicious actors. For businesses, proactively addressing this threat before it spreads further is imperative.
What Can Be Done About It?
For businesses and individuals alike, understanding the potential threats posed by voice cloning is paramount. Here are some protective measures:
- Foster a Culture of Security: Now, more than ever, it is imperative to foster a culture of security within your organization. From the C-Suite down, everyone must feel comfortable double-checking and verifying who they are speaking to, even when they are convinced that it’s the CEO on the other end of the line.
- Verify Unexpected Requests: If you receive a call from a known contact asking for money, access, or sensitive information, always feel comfortable hanging up and initiating a new call to that contact to verify the request’s legitimacy. Consider implementing a system of “confirmation words” that lets you challenge the other party and confirm they are who they claim to be.
- Educate Employees: Regular training sessions can keep employees updated on the latest scam techniques, ensuring they remain vigilant against such threats. With advancements in AI moving as rapidly as they are, awareness is key.
- Monitor Your Digital Footprint: Regularly audit and monitor the content you or your employees post online, particularly any audio or video content that could be used maliciously. Attackers can create a fairly convincing AI voice-clone model from as little as five seconds of audio, and near-undetectable versions with even more.
- Implement Multi-Factor Authentication (MFA): MFA adds an extra layer of security, ensuring that even if someone is tricked by a cloned voice, additional verification is still required. Reinforcing that no legitimate caller will ever ask for that code is also important.
- Stay Updated: As cybersecurity experts, we cannot stress enough the importance of staying updated on the latest threats and protective measures. With advancements in all sectors of technology moving so fast, it’s important to have a partner that is continuously in the loop on current events.
In Conclusion
As the first security firm to test the impact of these attacks by incorporating AI voice cloning into our penetration testing methodology, CulperSec aims to shed light on a new dimension of threats that organizations face. While new and creative attack techniques are always emerging, our primary goal remains the same: ensuring organizations are fortified against the ever-evolving world of cybersecurity threats.
Staying aware, vigilant, and educated is currently our best defense against these attacks. Let’s work together to build a safer digital future.