The Landscape of Deepfake Attacks
As deepfakes become more prevalent and sophisticated, the threat they pose extends far beyond entertainment. Cybercriminals use deepfakes to scam businesses, imitating the voices and writing styles of executives to trick employees into transferring money or sharing sensitive information. Deepfakes can also be used for political manipulation, creating false narratives to influence everything from stock prices to elections. To protect against them, organizations must focus on people and behaviors, not just technology. Multi-level authentication procedures and education programs can help reduce risk, and a holistic defensive strategy incorporating AI and ML can detect and mitigate deepfake threats. As these attacks grow harder to detect, it is vital for organizations to include them in their threat modeling activities and security awareness programs. The CIO plays a critical role in developing and implementing a comprehensive strategy to protect against deepfake attacks.
CIO World Asia spoke with me about the landscape of deepfake attacks and how to mitigate them.
Current Landscape of Deepfake Attacks
As the popularity of ‘Tom Cruise’ deepfake videos spreads across the internet, it is becoming increasingly evident that this technology is being employed for more nefarious purposes than just viral entertainment. In fact, deepfakes are now being used for political manipulation, such as in the case of the 2019 UK election, where a video falsely depicted Boris Johnson endorsing his opponent. This is just the beginning, as deepfakes are rapidly becoming the weapon of choice for those who wish to spread misinformation and fear for their own benefit, whether it be in the form of influencing stock prices or election outcomes.
From a business standpoint, deepfakes have proven to be an ideal tool for scammers. Criminals have already begun using deepfake recordings to convincingly impersonate executives’ voices, instructing employees to transfer money to fraudulent bank accounts, a technique known as voice spoofing. Similar methods are now being applied to text, with generative models imitating the writing style and wording of company executives. The result can be phishing emails with fraudulent links that prompt employees to disclose passwords or sensitive information. In the context of corporate fraud, deepfakes represent a more sophisticated form of social engineering that poses a significant reputational risk for organizations.
Until recently, the production of deepfakes required considerable computing skills. However, with the emergence of generative AI models such as ChatGPT, more people can now generate deepfakes even without advanced technical expertise. These models can be trained on large datasets of spoken text, enabling them to generate synthetic scripts that closely match the words and tone of the person being imitated.
The threat posed by deepfakes extends far beyond sophisticated scams. BlackBerry’s latest Global Threat Intelligence Report predicts that cyberattacks on critical infrastructure will continue to rise, with AI increasingly being used not only for attack automation but also to develop advanced deepfake attacks. As such, it is essential that businesses and organizations take proactive steps to protect themselves from these emerging threats.
Safeguarding Against Deepfakes: How Organizations Can Use AI and Awareness to Protect Themselves
There is no universal solution for handling deepfakes, as each situation requires a tailored approach. However, there are steps that can be taken to minimize the risks. Preventing deepfakes involves focusing not only on technology but also on people and their behaviors. Studies show that social engineering is still a major cybersecurity risk, and deepfake attacks will exploit common human vulnerabilities such as curiosity, fear, and cognitive biases.
To mitigate this risk, companies should implement multi-level authentication procedures for data release and transfers, outline them in their internal guidelines, and educate their employees on the dangers of deepfakes. Training and workshops can also help employees detect and prevent deepfake attacks. Since reputational crises can spread quickly, companies should have communication rules and processes in place to manage the damage caused by deepfake attacks.
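The multi-level authentication idea above can be sketched in code. This is a hypothetical illustration, not an actual product or workflow; the class and channel names are invented. The key property is that a single spoofed voice call can never release funds on its own, because release requires two distinct approvers over two distinct channels.

```python
# Illustrative sketch (hypothetical names): a transfer request is released
# only after approval from two distinct people over two distinct channels,
# mirroring the multi-level authentication guidance above.
from dataclasses import dataclass, field

@dataclass
class TransferRequest:
    amount: float
    recipient: str
    approvals: set = field(default_factory=set)  # (approver, channel) pairs

    def approve(self, approver: str, channel: str) -> None:
        self.approvals.add((approver, channel))

    def is_released(self) -> bool:
        approvers = {a for a, _ in self.approvals}
        channels = {c for _, c in self.approvals}
        # Require two different approvers AND two different channels,
        # so one deepfaked voice call alone cannot release funds.
        return len(approvers) >= 2 and len(channels) >= 2

req = TransferRequest(250000, "new vendor account")
req.approve("cfo", "voice")             # could be a deepfaked call...
print(req.is_released())                # still blocked: one approver, one channel
req.approve("controller", "in-person")  # independent second check
print(req.is_released())                # released only now
```

The design choice worth noting is requiring channel diversity, not just approver count: a deepfake compromises one channel (voice or email), so forcing a second, independent channel is what blunts the attack.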
However, education and training alone are insufficient; a comprehensive defense strategy spanning people, technology, and processes is necessary. AI and ML can help detect deepfakes. BlackBerry’s Cylance portfolio, for example, examines the behavior of an organization and its users to detect anomalies and predict the likelihood that a given network behavior is associated with a specific user.
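To make the idea of behavioral anomaly detection concrete, here is a deliberately minimal sketch. It is not BlackBerry's actual method, and the function and data are invented for illustration: it simply flags a requested transfer whose amount deviates sharply from a user's historical pattern, the simplest form of the "does this behavior fit this user?" check described above.

```python
# Illustrative sketch (hypothetical): flag a requested transfer whose
# amount is a statistical outlier relative to the user's history.
from statistics import mean, stdev

def is_anomalous(history, amount, threshold=3.0):
    """Return True if `amount` lies more than `threshold` standard
    deviations from the mean of the user's past transfer amounts."""
    if len(history) < 2:
        return True  # too little history to trust the request
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > threshold

# A user who normally transfers around $5,000 suddenly requests $250,000,
# as in the executive-impersonation scams described earlier.
past_transfers = [4800, 5100, 5000, 4900, 5200]
print(is_anomalous(past_transfers, 250000))  # True: flag for review
print(is_anomalous(past_transfers, 5050))    # False: consistent with history
```

Production systems model many more signals (login times, devices, network paths) with ML rather than a single z-score, but the principle is the same: the anomaly score triggers additional verification rather than blocking outright.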
While deepfake attacks have primarily targeted high-profile users, they are becoming more prevalent, realistic, and difficult to detect. Organizations should consider deepfakes as part of their threat modeling and security awareness programs.
The Role of a CIO in Defending Against Deepfake Attacks
The battle to safeguard cybersecurity is ongoing, and the pace of change is accelerating. When it comes to protecting the organization from deepfake attacks, the CIO has a crucial part to play in creating and executing a thorough strategy that accounts for the particular risks and vulnerabilities the organization faces.
However, given the current threat landscape, more individuals with the expertise to implement effective cyber defense strategies and techniques are needed, regardless of their job title or the organization supporting them. The CIO should partner with other key stakeholders, such as the HR, legal, and communications teams, to ensure that deepfakes are addressed as part of a complete plan tailored to each organization's specific threat model.