BlackBerry Blog

From ChatGPT to HackGPT

CYBERSECURITY / 06.14.23 / John Schaap
(This post, "From ChatGPT to HackGPT," written by John Schaap, BlackBerry Senior Director, South & West Europe, was originally published in the June 13, 2023 edition of WinMagPro Netherlands. Excerpted with permission – access the article and relevant topics here.)

The emergence and continued development of artificial intelligence (AI) is creating endless new possibilities in the world of cybersecurity. Threats and weaknesses can be detected faster and more accurately, and security teams can act faster than ever with AI. This is badly needed, because as with every new resource that helps humanity move forward, new techniques are also eagerly adopted by cybercriminals.

Cybersecurity teams are already using AI and Machine Learning (ML) to keep cybercriminals out. Advanced cybersecurity solutions use such technologies to automatically detect attacks, quickly monitor large volumes of data traffic, detect fraud patterns, and even predict attacks. It's just a selection of AI applications that make life a lot easier for cybersecurity teams.

On the other hand, those same AI capabilities can be used for activities of a more malicious nature, from crafting more convincing phishing emails to generating deepfakes. With the increasing popularity of AI, the cat-and-mouse game between hackers and security professionals has been taken to a new level.

Taking Phishing to the Next Level

Phishing emails used to be easier to identify by their language errors and impersonal tone. With the rise of ChatGPT, attackers can now generate personalized phishing messages modeled on messages that have succeeded in the past, or that include specific details that make the message more credible. Employees must therefore be trained even better to recognize phishing emails and to ask themselves more often, "Is this link safe?" because it is increasingly difficult to tell at first glance.

Deepfakes of Acquaintances

The fact that deepfakes can cause a lot of damage is no longer news. But other variants of deepfakes are popping up these days. AI bots can generate or imitate voices, and even videos. Cybercriminals can now pose as one of a company's executives to convince employees to transfer money or share personal or company information. This technique is the next development of the now well-known WhatsApp fraud, except that people are now called by a computer that fakes the voices of family members, friends, or colleagues.

Widespread Dissemination of False Information

With the help of AI, it is easier than ever for hackers to push false information out into the world and influence public opinion on a large scale. Fake news and misinformation can cause enormous chaos. Suppose you suggest that certain stocks (or cryptocurrencies) are about to rise in value; you can create hype among investors. Companies and individuals then pour money into these stocks, artificially inflating their value. This is a lucrative scam for criminals who have already purchased some of the shares (or cryptocurrency) in advance. In addition, malicious groups have the opportunity to sway public opinion, reputations, and political and social issues.

Reputational Damage Due to Fake Email

Generative AI makes it possible to fabricate highly realistic email exchanges that can cause considerable reputational damage. Suppose an AI model makes it appear that executives are discussing by email how to cover up a financial shortfall. If that email exchange then "leaks out" and is distributed via social media bots, the reputational damage is incalculable – potential consequences include customer and employee turnover and a plummet in business value.

To counter the potential harm of such AI threats, cybersecurity teams must take proactive and preemptive action. By fighting fire with fire – using AI to protect against AI – you level the playing field, ensuring that AI applications are secured and threats are fended off with the same power the attackers wield.

It goes without saying that AI offers great potential to continuously improve cybersecurity. It is essential for cybersecurity teams and business owners to keep in mind that every new development also creates new opportunities for criminals. Relying on your own, sometimes outdated, techniques and manual processes to defend yourself is no longer viable in an age of lightning-fast developments.



About John Schaap

John Schaap is Senior Director, South & West Europe at BlackBerry.