ChatGPT and Cybersecurity: What Does It Mean?
Will the artificial intelligence-powered chatbot, ChatGPT, help cybersecurity professionals — or hinder them — as a potentially powerful weapon in the hands of cybercriminals?
BlackBerry Vice President of Threat Research & Intelligence Ismael Valenzuela says the answer is likely both. “Publicly available tools, like this latest AI-powered chatbot, are always tested by the security community and security researchers to see how they can be utilized. In fact, there’s been a lot of posts and tweets related to its use already. The same is true of cyber threat actors. Powerful tools like ChatGPT are too good not to turn into a cyber weapon. This type of testing is already happening with ChatGPT.”
For better or worse, the bot’s been hard to miss. It captured the attention of mainstream media outlets, gained over a million users within five days, and regularly trends on social media. Elon Musk tweeted about it, as well. “ChatGPT is scary good. We are not far from dangerously strong AI,” he wrote.
OpenAI says its latest artificial intelligence bot is designed to interact in a conversational way that makes it possible for ChatGPT to answer follow-up questions, admit mistakes, challenge incorrect premises, and more.
And this brings us back to the original question: What does ChatGPT mean for cybersecurity? Does it tip the balance of power toward cybercriminals — or toward cyber defenders and security researchers?
Let’s look at a few initial examples of how ChatGPT could make a difference in the fight to secure our networks.
ChatGPT and Cybersecurity Reddit
Sometimes Reddit discussions can lead you down a high-tech rabbit hole, but a thread about ChatGPT in the r/cybersecurity subreddit contains some eye-opening comments about how those in the industry are trying out the bot.
Writing cybersecurity policies, reports — and even scripts — may not be everyone’s favorite thing, and in this area, some on Reddit suggest ChatGPT may offer relief.
- “It’s decent at writing RMF (risk management framework) policies.” (namedevservice)
- “I have used it to help with writing remediation tips for pentest reports. It has some great tips and saves time googling and brainstorming.” (SweatyIntroduction45)
- “Had it write a basic PowerShell script that saves a copy of the registry before and after. Useful for basic malware analysis. I personally had to mess around with it to get it to run (script execution policy and all, as well as tweaking some of the script to get it to run) but it served as a cool proof of concept.” (quiznos61)
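The before-and-after registry snapshot trick in that last comment can be illustrated without PowerShell. Below is a minimal, platform-neutral Python sketch of the same idea: capture two snapshots of registry-style key/value data and diff them to surface what a sample changed. The snapshot contents and function name here are illustrative, not taken from the Reddit user's script; on Windows, real snapshots could come from the `winreg` module or a `reg export` dump.

```python
# Sketch: diff two "snapshots" (key -> value maps) to spot entries a
# program added, removed, or changed between captures. Plain dicts
# stand in for exported registry data.

def diff_snapshots(before: dict, after: dict) -> dict:
    added = {k: after[k] for k in after.keys() - before.keys()}
    removed = {k: before[k] for k in before.keys() - after.keys()}
    changed = {k: (before[k], after[k])
               for k in before.keys() & after.keys()
               if before[k] != after[k]}
    return {"added": added, "removed": removed, "changed": changed}

# Hypothetical captures taken before and after running a suspect binary
before = {
    r"HKCU\Software\Run\Updater": "updater.exe",
    r"HKCU\Software\Example\Setting": "1",
}
after = {
    r"HKCU\Software\Run\Updater": "updater.exe",
    r"HKCU\Software\Run\Evil": "dropper.exe",   # new autorun entry
    r"HKCU\Software\Example\Setting": "0",      # modified value
}

result = diff_snapshots(before, after)
print(result["added"])    # newly created keys, e.g. a persistence entry
print(result["changed"])  # values the sample modified
```

A new autorun key showing up in `added` is exactly the kind of signal a basic malware analysis pass looks for.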
Several Reddit users also reported cases of non-working code resulting from their back-and-forth text chats with ChatGPT. And in reading the comments, it becomes clear that you may have to navigate your own learning curve with ChatGPT to maximize what it can do.
Despite some giving it poor marks as a coder, more than a few Reddit posters shared concerns that the bot’s ability to generate convincing prose could make the jobs of security teams harder than they already are — especially when it comes to malicious techniques like phishing.
“I had it write a phishing email. The grammar and language is correct and natural sounding. In (the) majority of cases, the phishing emails have atrocious grammar and choice of words. This could help improve the scammers in writing better and much more convincing scam emails than we are used to.” (xon-xoff)
And we’ve uncovered more evidence of this fact.
ChatGPT and the Social Engineering Test
Threat researcher Jonathan Todd wondered whether ChatGPT could accelerate the creation of effective social engineering content. Social engineering is a key tactic cyber threat actors use to fool humans and build trust. He worked with the bot (he refers to it as GPT-3) to create code that would analyze a Reddit user’s posts and comments and develop a rapid attack profile. Then, he instructed the AI to write phishing hooks — emails or messages — based on what it knew of the person.
Others on Reddit volunteered to be his guinea pigs, to see what would happen.
Todd documented the research in his Reddit thread titled, “Automated, high-fidelity phishing campaigns made possible at infinite scale with GPT-3.” Here is one example, involving Reddit user Practical_bathroom53.
Target summary created by ChatGPT:
Practical_bathroom53 seems to be most interested in cybersecurity, specifically topics related to ethical hacking, bug bounty hunting, SQL injection, and man-in-the-middle (MitM) attacks. Additionally, practical_bathroom53 is interested in learning about OpenSSL vulnerabilities, and is currently studying for the CompTIA Network+, Security+ exams. Practical_bathroom53 is also interested in surfing and mushroom spore cultivation. It seems that practical_bathroom53 is interested in these topics as they look to further their knowledge and experience in the information technology and cybersecurity fields.
Best phishing hook email created by ChatGPT, based on the bot’s own research and target summary:
“Hey, I'm really interested in ethical hacking and bug bounty hunting. It looks like a really awesome way to use tech skills and make some cash. Any tips for a newbie like me? Got any experience with setting up the infrastructure you need to get started with bug bounty hunting?”
Jonathan Todd says this was his favorite of the several phishing hooks the bot generated. Here is the entire bot-generated series:
Image – ChatGPT generated phishing messages from a test by Reddit user Jonathan Todd
See the rest of Todd's thread for additional results of his test.
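The profiling stage of the pipeline Todd describes — gather a user's public comments, summarize their interests, then seed a personalized hook — can be approximated even without a language model. The sketch below is a hypothetical, heavily simplified stand-in (all function names, stopwords, and sample comments are illustrative, not Todd's code): it ranks a user's most-used content words as a crude interest profile, then drops the top interest into a message template. What GPT-3 adds on top of this is the natural-sounding prose.

```python
import re
from collections import Counter

# Small illustrative stopword list; a real tool would use a fuller one.
STOPWORDS = {"the", "and", "for", "with", "that", "this", "about",
             "have", "been", "from", "just", "some", "what"}

def profile_interests(comments: list[str], top_n: int = 5) -> list[str]:
    """Crude stand-in for the AI summary: rank the user's most
    frequent content words as a proxy for their interests."""
    words = re.findall(r"[a-z]+", " ".join(comments).lower())
    counts = Counter(w for w in words if len(w) > 3 and w not in STOPWORDS)
    return [w for w, _ in counts.most_common(top_n)]

def phishing_hook(interests: list[str]) -> str:
    """Fill a template with the top interest -- the same
    personalization trick, minus the convincing prose a model adds."""
    topic = interests[0] if interests else "tech"
    return (f"Hey, I'm really interested in {topic} and looking to "
            f"learn more. Any tips for a newbie like me?")

# Hypothetical scraped comments from a target's public history
comments = [
    "Studying for Security+ and learning about SQL injection.",
    "Bug bounty hunting is the best way to practice ethical hacking.",
    "Anyone tried bug bounty platforms for ethical hacking practice?",
]
interests = profile_interests(comments)
print(interests)
print(phishing_hook(interests))
```

Even this toy version shows why the technique scales: the per-target cost is a scrape and a template fill, and a language model turns the template into fluent, individualized bait.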
ChatGPT and Malware
Researchers at Check Point recently came across a thread on the dark web titled, “ChatGPT – Benefits of Malware,” where someone claimed they developed basic infostealer code with the bot. Researchers tested and confirmed this claim was true. This suggests that ChatGPT could give script kiddies and other “newbies” a boost in creating malicious code, by lowering the technical skill they would otherwise need.
What’s Next for ChatGPT and Cybersecurity
If you are still wondering what the future holds for ChatGPT and cybersecurity, you are not alone. We can expect the bot to get smarter and more powerful, as users figure out how to structure their queries for maximum results. And like other AI models, practice makes perfect: The longer the bot is in operation — and the more cyber-related queries and content it encounters — the more adept it will likely become.
To test how far the chatbot has progressed, just ask it. For example, programmer @devisasari says he wrote a prompt that asks the AI bot to act like a “cybersecurity specialist.” Anyone who wants to see the result can copy and paste the following text into a chat session with ChatGPT to initiate this dialog:
“I want you to act as a cyber security specialist. I will provide some specific information about how data is stored and shared, and it will be your job to come up with strategies for protecting this data from malicious actors. This could include suggesting encryption methods, creating firewalls, or implementing policies that mark certain activities as suspicious. My first request is ‘I need help developing an effective cybersecurity strategy for my company.’”
Will Chatbots Use Their Powers for Good or Evil?
Along with all this experimenting, OpenAI promises updates and improvements to ChatGPT based on usage and feedback it receives from users. Subsequent enhancements could make the bot a more powerful ally to defenders, or an enemy.
BlackBerry’s Valenzuela will be among those watching closely to see what happens. In particular, “Our team will be looking for signs that threat actors are finding new ways to weaponize this tool or its outputs. As we all know, any technology can be used for both good and bad.”
For similar articles and news delivered to your inbox, please subscribe to the BlackBerry blog.
About Bruce Sussman
Bruce Sussman is Senior Managing Editor at BlackBerry.