Artificial intelligence (AI) is positively impacting our world in previously unimaginable ways across many different industries. Its use is particularly interesting in the cybersecurity industry because of its ability to operate at scale and to catch previously unseen, or zero-day, attacks.
But remember: just as drug cartels built their own submarines and cellphone towers to evade law enforcement, and the Joker arose to fight Batman, so too will cybercriminals build their own AI systems to carry out malicious counterattacks.
An August 2017 survey commissioned by Cylance found that 62% of cybersecurity experts believe weaponized AI attacks will start occurring in 2018. AI has been heavily discussed in the industry over the past few years, but most people do not realize that AI is not just one thing; it is made up of many different subfields.
This article will cover what AI is and isn’t, how it works, how it is built, how it can be used for evil and even defrauded, and how the good guys can keep the industry one step ahead in the fight.
What is AI?
We must first develop a basic understanding of how AI technology works. The first thing to understand is that AI comprises a number of subfields. One of these subfields is machine learning (ML), which works much like human learning, except at a far bigger and faster scale.
To achieve this type of in-depth learning, large sets of data must be collected to train the AI, in order to develop a high-quality algorithm: essentially a mathematical model that accurately recognizes an outcome or characteristic. This algorithm can then be applied to text, speech, objects, images, movement, and files. Doing this well takes vast amounts of time, skill, and resources.
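To make that concrete, here is a minimal sketch of the train-then-apply cycle in Python, using scikit-learn's bundled handwritten-digit dataset. The dataset, model choice, and parameters are illustrative assumptions, not a recipe from any particular product:

```python
# A minimal sketch of supervised machine learning. The "algorithm" the
# article describes is the trained model produced at the end: a function
# that maps new, unseen inputs to a predicted label.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Step 1: collect a labeled dataset (here, 8x8 images of the digits 0-9).
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=42
)

# Step 2: fit a model. This is the "training" the article refers to.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# Step 3: apply the trained model to data it has never seen before.
print(f"Accuracy on unseen digits: {model.score(X_test, y_test):.2%}")
```

The same pattern, with different data and models, underlies the text, speech, image, and file classifiers mentioned above.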
So what is it not? “AI” is really a marketing misnomer that sounds awesome and futuristic, which is why the phrase is currently slapped onto everything from cars to automatic juicers in order to boost sales. What it currently is not is a self-motivated, conscious technology, so there is no Matrix or Terminator scenario to fear. (Not at the moment, anyway.)
If someone does create that in the future, we will have to revisit that statement. But for now, each AI product is simply a useful and powerful tool built for a very narrow purpose. And like every tool, AI has the potential to be used for evil as well as good.
Let’s Build Something for Bad
Here are the possible steps that adversaries may take to build their own AI:
Step 1 in creating “AI for bad” is developing the infrastructure. It is hard for adversaries to acquire the hardware necessary to create their own AI solution, because key components such as GPUs, the workhorses of model training, are scarce and expensive.
To bypass this problem, they will likely take the traditional approach and steal computing power from existing hosts and data centers by infecting those machines with malware. From there, they can steal credit card information, take over machines in AWS, or create a botnet.
Step 2 in AI creation is to start developing algorithms. As we discussed earlier, this takes a lot of time, money, and talent. But where the payoff justifies the investment, it will be done. When there is $1.5 trillion at stake, for example, the effort is definitely worth it to the wannabe cybercriminal.
Step 3 is profiting through scale. Now that the bad guys have an algorithm, they can get to work accomplishing their missions by letting their AI creation run constantly. Their goals may be anything from gaining access to an organization to steal trade secrets while pretending to be real human traffic, to million-dollar blackmail, to whatever else is desired and profitable.
Examples of Evil AI
Here are some example scenarios to consider.
Image CAPTCHAs leverage humans to teach a machine what an image is. When you click on the CAPTCHA images and choose the boxes that show letters or contain vehicles, you are actually helping a neural network learn to recognize a letter or a vehicle. Bad actors can apply the same idea on their Dark Web forums, training their own algorithms to recognize what letters and vehicles look like, and then sell CAPTCHA-breaking AI services.
In fact, researchers have created their own CAPTCHA-breaking bot that is up to 90% accurate. This is scalable and profitable because the machine can effectively deceive the CAPTCHA into categorizing it as human, and so easily bypass this type of human-verification check. There are more difficult CAPTCHAs, such as sliding puzzle pieces and pivoting letters, but these are not yet as popular or widespread.
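To illustrate the mechanics, rather than reproduce any specific research system, here is a toy PyTorch sketch of the kind of convolutional classifier such work relies on. The architecture, the 32x32 image size, the 36-class alphabet, and the random stand-in tensors are all assumptions made for illustration; real research systems train on large sets of labeled CAPTCHA character images:

```python
# A toy convolutional network that maps a cropped character image to one
# of 36 classes (A-Z, 0-9). Everything below is illustrative: the data is
# random noise standing in for labeled CAPTCHA crops.
import torch
import torch.nn as nn

class CharNet(nn.Module):
    def __init__(self, num_classes: int = 36):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Stand-in batch: 8 grayscale 32x32 "character" images with random labels.
images = torch.randn(8, 1, 32, 32)
labels = torch.randint(0, 36, (8,))

model = CharNet()
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()  # gradient computation for one illustrative training step
print(f"Toy training loss: {loss.item():.3f}")
```

Trained on enough labeled examples, a classifier of roughly this shape is what lets a bot read image CAPTCHAs faster and more cheaply than a human click farm.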
Another AI-driven attack could be finding vulnerabilities. Vulnerabilities are catalogued with CVE numbers, and each entry describes a weakness in a piece of software or hardware. As mentioned before, reading text like this is a task AI handles well. A bad actor could train an AI to become effective at reading vulnerability details, and from there automate exploiting those vulnerabilities in organizations at scale.
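As a rough illustration of that machine-reading step, here is a hedged Python sketch that triages CVE-style descriptions with a bag-of-words classifier. The tiny corpus and its labels are hypothetical stand-ins written for this example, not real CVE data; a real system would ingest a full vulnerability feed:

```python
# A hedged sketch of automated triage over CVE-style text: a classifier
# that flags vulnerability descriptions likely to be remotely exploitable.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical, hand-written stand-ins for CVE descriptions.
descriptions = [
    "remote attacker can execute arbitrary code via crafted packet",
    "unauthenticated remote code execution in web admin interface",
    "local user may read world-readable temporary file",
    "denial of service via malformed config requires local access",
]
remotely_exploitable = [1, 1, 0, 0]  # illustrative labels only

triage = make_pipeline(TfidfVectorizer(), MultinomialNB())
triage.fit(descriptions, remotely_exploitable)

# Score a new, unseen description.
new = ["remote buffer overflow allows code execution without authentication"]
print(triage.predict(new))  # 1 -> flagged as likely remotely exploitable
```

The same triage logic serves defenders, of course; the difference is whether the flagged entries feed a patching queue or an attack queue.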
AI solutions can also be defrauded if you understand what a particular AI is looking for. For example, there are AI solutions that are very good at determining whether traffic to a site is legitimate human traffic, based on a variety of factors such as browser type, geography, and time distribution. An AI tool built for evil purposes could collect all of this information over time and use it in conjunction with a batch of compromised company credentials.
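To make the idea concrete, here is a minimal Python sketch of the defender's side of that equation: an anomaly detector scoring sessions on the three factors named above. The feature encoding and the synthetic data are illustrative assumptions; production systems use far richer signals:

```python
# A minimal sketch of traffic-legitimacy scoring on [hour_of_day,
# region_id, browser_id]. Data here is synthetic and illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" sessions: most legitimate users in this toy dataset
# log in around midday, from one region, using two common browsers.
normal = np.column_stack([
    rng.normal(13, 2, 500),          # hour of day, clustered midday
    rng.choice([1, 1, 1, 2], 500),   # mostly one geographic region
    rng.choice([0, 1], 500),         # two common browser types
])

detector = IsolationForest(random_state=0).fit(normal)

# A 3 a.m. session from an unusual region and browser scores as anomalous.
suspect = np.array([[3.0, 7, 9]])
print(detector.predict(suspect))  # -1 -> flagged as anomalous traffic
```

An attacker who learns which features such a detector weighs can shape stolen-credential logins to mimic the normal cluster, which is exactly the fraud scenario described above.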
Why There is Hope
The good news is that, for once, the good guys are years ahead of the bad guys, because they already have their own AI solutions ready to meet these threats. This is due to the high barrier to entry in terms of resources and talent. However, those barriers are lower for certain groups, such as organized crime and nation-states.
If you are now wondering how to protect yourself and your business from this type of threat, the first thing to do is start educating yourself on what AI and ML really are, and pledge to look deeper into a product than its marketing brochure. If you do your due diligence, you will quickly learn that there are many security products out there that claim they “have AI.” The question to ask their technical teams is: what type of AI, and how does the product use it? Canoes and battleships could both technically be marketed as boats, but they are not the same thing.
The same word of caution applies to any piece of hardware or software marketed using the words “artificial intelligence” and “machine learning,” which you will find in the product descriptions of many legacy antivirus products. Always do your own research first, read the fine print, ask questions of previous customers, and ultimately, test for yourself using malware samples of your own choosing.
The good guys need to keep creating and improving their AI tools. If we rest on our laurels, the bad guys will not only catch up to us, but they will come out ahead.