AI to Stop Cyberthreats: Why Organizations Can No Longer Wait
As the leader of the BlackBerry and Cylance cybersecurity R&D teams, I see a critical truth emerging from data science: if you need to defend your organization from cyberattacks, you must incorporate machine learning and AI into your cyber defenses, and the time to act is now. Please do not delay.
The urgency stems from the fact that cybersecurity at a “human scale” is increasingly irrelevant. It has been outgunned by what I call the “democratization of AI.” Threat actors are using generative AI and other easy-to-use automation tools to dramatically increase the number of new malware variants.
The evidence of this AI- and automation-fueled malware onslaught is clear. From June 2023 to August 2023, BlackBerry® Cybersecurity solutions stopped more than 3.3 million cyberattacks. Among these attacks, the number of unique malware files encountered rose 70% over the previous reporting period. The BlackBerry Threat Research and Intelligence team recorded an average of 2.9 unique malware samples per minute.
The scope of this mounting malware problem is now beyond human capacity. We must lean into tools built upon proven artificial intelligence to win this fight. Our collective success in cybersecurity boils down to understanding millions of actions in totality, so we can decide quickly and accurately what we must do. This is a job for AI.
Advancing Artificial Intelligence in Cybersecurity
For cybersecurity teams, there are exciting AI-powered advancements underway.
I recently discussed this during an episode of the Unsupervised Learning podcast, hosted by Daniel Miessler. He asked me detailed questions about AI's role in cyber defense. Here are some key points I shared with him.
Analyzing the Success of AI in Cybersecurity
First, judge AI's effectiveness in cybersecurity against one critical question: can the model you’re using withstand the test of time? In other words, it must handle both known and as-yet-unknown threats. Specifically, it should generalize: predict how threats behave, recognize something as a threat, and stop it even if it has never seen that exact threat before. The goal is to stop threats before they execute, not merely detect them afterward. At the same time, your model must also recognize activity that merely looks like a threat but is benign, to limit false positives. And it should do both of these things consistently over time. Don’t be dazzled by claims about how big a model is or how much data it was trained on. Those are interesting data points, but that is all. Efficacy and performance over time are the only metrics that really matter.
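To make "efficacy over time" concrete, here is a minimal sketch of how one might score a detector per reporting period, tracking both detection rate (threats caught) and false-positive rate (benign activity wrongly flagged). The event format, period labels, and thresholds are illustrative assumptions, not any vendor's actual methodology.

```python
# Illustrative sketch: measuring a detector's efficacy per time period.
# Each event is (period, predicted_malicious, actually_malicious) -- hypothetical data.
from collections import Counter

def efficacy_by_period(events):
    """Tally detection outcomes per period and derive the two metrics that
    matter over time: detection rate and false-positive rate."""
    stats = {}
    for period, predicted, actual in events:
        c = stats.setdefault(period, Counter())
        if predicted and actual:
            c["tp"] += 1          # true positive: real threat stopped
        elif predicted and not actual:
            c["fp"] += 1          # false positive: benign activity flagged
        elif not predicted and actual:
            c["fn"] += 1          # false negative: missed threat
        else:
            c["tn"] += 1          # true negative: benign correctly ignored
    report = {}
    for period, c in stats.items():
        threats = c["tp"] + c["fn"]
        benign = c["fp"] + c["tn"]
        report[period] = {
            "detection_rate": c["tp"] / threats if threats else None,
            "false_positive_rate": c["fp"] / benign if benign else None,
        }
    return report
```

A model that "withstands the test of time" keeps its detection rate high and its false-positive rate low as later periods introduce samples it never trained on; a drop across periods signals a model that memorized rather than generalized.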
AI in Cybersecurity Does More Than Ever Before
During our podcast conversation, I also shared some specific examples of mature AI functions, based on advancements from our Data Science and ML team as they build on our extensive portfolio of AI patents and innovations. Cylance was one of the first to introduce AI to protect the endpoint, and we continue to deliver successful outcomes in ever more complex scenarios.
We are continuously adding more “signals” that may appear weak on their own but are extremely valuable collectively. In aggregate, these signals are what we use to train different models that produce different outcomes. Signals from network data, authentication and authorization data, executable data, in-memory data, and more together paint the picture required to make decisions.
Our model examines these signals in isolation, then again in aggregate, and makes a prediction. It also compares the observed actions against those of similar organizations or groups and predicts whether they represent a threat. So, in an instant, AI allows our decision engine to make a final assertion about what needs to happen next.
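The weak-signal idea described above can be sketched in a few lines: individually inconclusive observations are combined into one aggregate risk score, which is then judged both on its own and against what is typical for a peer group. The signal names, weights, thresholds, and peer baseline below are all hypothetical placeholders for illustration, not the actual Cylance model.

```python
# Illustrative sketch of weak-signal aggregation. Every signal name, weight,
# and threshold here is an assumption made up for this example.
WEIGHTS = {
    "rare_parent_process": 0.3,      # executable data
    "new_outbound_domain": 0.25,     # network data
    "off_hours_auth": 0.2,           # authentication data
    "unsigned_memory_region": 0.25,  # in-memory data
}

def risk_score(signals):
    """signals: dict of signal name -> observed strength in [0, 1].
    Each signal is weak alone; the weighted sum is the aggregate view."""
    return sum(WEIGHTS[name] * strength
               for name, strength in signals.items() if name in WEIGHTS)

def assess(signals, peer_baseline, threshold=0.5):
    """Flag a threat only when the aggregate score is high in absolute terms
    AND stands out against the typical score seen across a peer group."""
    score = risk_score(signals)
    return {"score": score,
            "threat": score >= threshold and score > 2 * peer_baseline}
```

For example, four signals each of moderate strength, none alarming alone, aggregate to a score well above a hypothetical peer baseline and trip the decision, while a single off-hours login does not.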
As I said, these are exciting times for defenders who put the power of artificial intelligence into their cybersecurity arsenals.