Everywhere I look, I’m besieged by news stories using terms like “weapons-grade exploits” and “weaponized malware” to describe the latest malware outbreak. The narrative constructed from this colorful language is detrimental to the progress of securing our computer networks, because it categorically applies the metaphor of military conflict to cybersecurity - and not every incident in cyber conflict qualifies as a military-style “attack.”
I’d like to argue that the words we use in this industry matter; they set the tone and context of our public discussion about cybersecurity. The problem with applying loose and imprecise terminology to the world of cybersecurity is that it constrains our thinking: if everything is a weapon, then its use is an attack, and therefore we must detect the oncoming attack and respond to it.
This is the OODA Loop (observe, orient, decide, act) in action. A poor language choice leads to a bad framework for discussion and ultimately results in poor policy decisions, such as thinking it’s appropriate to respond to a computer network intrusion with a kinetic weapon.
All we’re left with is deterrence, which observers such as Peter Singer note hasn’t been effective and instead has created incentives for other nation-states to break into private networks and steal sensitive information.
To a Man with a Hammer…
Consistently describing all exploits and malware as “weapons” contributes to the mistaken belief that only nation-state-backed actors are capable of developing cyberweapons. We now know this isn’t the case.
For example, the Mirai botnet that caused massive disruption with record-breaking distributed denial of service (DDoS) attacks was initially speculated to be the work of a nation-state. Eventually, it was revealed to be the work of a few college kids. In other words, a botnet with the capability to knock large online providers and websites offline was born of a quarrel over servers for a popular video game, Minecraft.
The bar for creating mischief and mayhem has been lowered because we keep connecting more and more things (read: I(di)oT devices) to our networks and, even worse, to the public Internet. The exponential rate at which code is written and products are brought to market (now with more cameras, microphones, GPS, and other sensors) can only mean we are making ourselves increasingly vulnerable. The bad guys don’t have to advance their tradecraft because we keep making their jobs easier by carpet-bombing our networks with insecure devices and hastily written software applications.
A secondary issue of note is that the information security industry and news media love to fetishize the idea of “weaponized exploits.” At the very least, the term certainly makes for a sexy headline that serves as excellent clickbait.
This obsession results in lazy journalism where every security incident or mishap is depicted as a “cyberattack.” The overuse and misuse of the term “cyberattack” prompted new guidance in the 2017 AP Stylebook, which defines it as “a computer operation carried out over a device or network that causes physical damage or significant and wide-ranging disruption.”
Likewise, James Lewis, at the Center for Strategic & International Studies, defines an attack as a tool “for violence or coercion to achieve political effect” in his Rethinking Cybersecurity report, perhaps in an attempt to add some precision to the way we discuss security.
Breaking the Cyber Kill Chain
The media’s fixation on categorically applying a militaristic view to cyber capabilities leads to an industry dominated by warfare terminology. The cyber landscape is now “a domain” (the fighting domain, not an ICANN-approved .cyber TLD), complete with a Cyber Command, “cyber forces,” and “cyber kill chains.”
This kind of thinking leads us to bombastic metaphors of a “cyber Pearl Harbor” or “cyber 9/11.” (In my opinion, the use of these terms disrespects the memories of those who perished in those tragic events.) Likewise, this view leads to hilariously simplistic ideas that we can simply “isolate malware” and “reengineer it and prep to use it against the same adversary” - as if a piece of executable code were equivalent to an enemy rifle picked up off the battlefield.
It’s mind-boggling to compare the inconvenience of not being able to access your social media or online banking to a real-world kinetic attack that causes mass human casualties. The comparison further breaks down as our society starts to rely more and more upon information technology, leading to consequences such as the hospital not being able to access your electronic health records (EHR) when you’re in the emergency room.
So, what does it really mean for an exploit to be weaponized? It’s not as if I can just rip an exploit out of the Metasploit project, strap it atop an ICBM, and call it weaponized.
Exploits don’t launch with a “boom” or a “bang” but with more of a “pew pew”; labeling them as weapons is such a joke that the sound effects should be just as comical. The way we talk about cybersecurity in terms of weapons and defenses sounds like a bad science fiction movie: “Cybernado,” a tale of ransomware ravaging the Earth that can only be stopped by generating enough cash through Bitcoin mining to pay off the ransom.
In the real world, software vulnerabilities are bugs or unintended computational artifacts that allow an attacker to trick the computer into running attacker-supplied code or into leaking information about the state of the computer (such as passwords, user names, or memory addresses useful for further exploitation). Exploits are computer programs that take advantage of the vulnerabilities in other computer programs. Software updates and patches fix the vulnerability and immunize you against the exploit.
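To make this concrete, here’s a deliberately contrived Python sketch; the function and the “exploit” string are invented for illustration, and real-world exploits more often abuse memory-corruption bugs than an eval() call. The vulnerability is the bug, the exploit is the input that abuses it, and the patch removes the bug, rendering the exploit inert:

```python
import ast

# VULNERABILITY: a bug that lets input control code execution.
# eval() on untrusted input means the "data" can itself be a program.
def parse_setting_vulnerable(user_input):
    return eval(user_input)  # runs whatever the user typed

# EXPLOIT: input crafted to take advantage of the vulnerability.
# Instead of a value, the attacker supplies code to execute.
exploit = "__import__('os').system('echo pwned')"

# PATCH: same feature, bug removed. literal_eval() accepts only data
# literals (numbers, strings, lists, ...) and never executes code.
def parse_setting_patched(user_input):
    return ast.literal_eval(user_input)

print(parse_setting_patched("42"))   # works: prints 42
# parse_setting_patched(exploit)     # raises ValueError: exploit neutralized
```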
The real damage is done by the payload the exploit delivers. The payload can surreptitiously install a backdoor on your machine, allowing the threat actor to return at a later time, or it can take immediate destructive action, such as encrypting all your files and demanding a ransom.
Experts can’t even agree on the definition of a “weaponized exploit.” Some consider it to be a working proof of concept (PoC), in much the same way the first 3D-printed firearm was able to fire a single round before destroying itself. Meanwhile, other experts regard the “weaponization” of exploits to be the refinement of an exploit in such a way that it always works: press <Enter>, get shell. The former is merely the security version of a programmer’s “works on my machine,” while the latter is just software working reliably as intended from the exploit writer’s point of view.
Along the same lines, real-world “weapons” are intended for either offensive or defensive use; although some could (divisively) be categorized as “tools,” proving intent may be tricky. For example, a chef’s knife may be used either to cut a steak or to commit a homicide, but although both actions may be committed intentionally by the knife’s user, only the first use was intended by the knife’s manufacturer/retailer.
In much the same way, it’s difficult to determine intent just by reading code. Did the software program delete all your files maliciously or was it just a bug in the code? When your antivirus software deletes a critical operating system file, is it an attack? I don’t think anyone would argue that these software vendors had malicious intent.
Building Reliable Exploits
Software is complex, and with complexity come bugs. At some point, sophisticated actors will take advantage of this and operate under the cover of buggy software: a “mistake” that introduces a backdoor or accidentally deletes your files. If enough Internet-connected devices are immunized against the bug du jour, we may avoid this potential onslaught of new “bug-veiled” attacks, but in the Wild West climate created by the current market saturation of unsecured devices (all shipped with the same default password), it’s anyone’s guess what the IoT future may bring.
In the meantime, the process of taking a vulnerability proof of concept to a “weaponized exploit” requires an astonishing amount of research, development, and testing. The minutiae of patch levels, runtime settings, language packs, plugins, and third-party software matter in the world of exploit development and must be accounted for when developing a memory-corruption exploit.
Each small deviation from the developer’s test system perturbs the memory layout, and handling those cases gracefully requires extra development effort; the exploit must be robust despite variations in the memory layout. In other words, it’s just standard software engineering, where the goal is to build a reliable software product that can account for erroneous conditions and handle them gracefully.
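To illustrate the bookkeeping involved, here’s a hypothetical target table in Python of the sort exploit frameworks maintain; every version string, offset, and address below is made up:

```python
# Hypothetical target table: each supported build of the vulnerable app
# needs its own offsets because every patch level and language pack
# shifts the memory layout. All values here are invented for illustration.
TARGETS = {
    ("ExampleApp 2.1", "en-US"): {"ret_offset": 0x1048, "gadget": 0x00401A2C},
    ("ExampleApp 2.1", "de-DE"): {"ret_offset": 0x1050, "gadget": 0x00401B10},
    ("ExampleApp 2.2", "en-US"): {"ret_offset": 0x10C8, "gadget": 0x00403F44},
}

def build_payload(version: str, language_pack: str) -> bytes:
    params = TARGETS.get((version, language_pack))
    if params is None:
        # Reliability means failing safely on unknown targets,
        # not crashing the target process and burning the exploit.
        raise RuntimeError(f"unsupported target: {version}/{language_pack}")
    padding = b"A" * params["ret_offset"]
    return padding + params["gadget"].to_bytes(4, "little")
```

Multiply that table by every patch level, plugin, and third-party component in the field, and “weaponization” starts to look a lot like maintaining a compatibility matrix.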
In the words of @halvarflake, “an exploit is ‘just’ a program for the weird machine that leads to a violation of the security properties.” Building reliable exploits is no different from building reliable software. But if that’s the case, maybe we should be “weaponizing” common software (operating systems and browsers) instead. That effort exists, but it goes by a name that isn’t so sexy: hardening.
Now, there’s plenty of work out there (data execution prevention, control flow integrity, address space layout randomization, structured exception handler overwrite protection, control-flow enforcement technology, export address filtering) focused on neutralizing entire classes of exploitation techniques, but it doesn’t receive much fanfare or news coverage because the technical details are difficult to digest and the story isn’t as interesting if nothing blows up. These features are typically opt-in and require independent software vendors (ISVs) to update their compiler infrastructure to enable these protections.
To give an example, we can see the security opt-in rate of various operating systems thanks to the folks at Cyber ITL. In a perfect world, vendors could just acquire new compilers, flip a few switches, and everything would be protected by the latest and greatest anti-exploitation technologies. The bad news is that software incompatibilities exist, and enabling these features requires a significant amount of testing to ensure the software still works, which puts us back at proper software engineering all over again.
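Measuring that opt-in rate is, at bottom, just parsing binaries, which is what the Cyber ITL folks do at scale. As a toy illustration (and only that: it assumes a little-endian ELF and ignores the many other mitigations a real audit checks), here’s a Python sketch that tests whether a Linux binary was built position-independent, the prerequisite for ASLR to randomize its base address:

```python
import struct

ET_EXEC, ET_DYN = 2, 3  # ELF object file types

def is_position_independent(path: str) -> bool:
    """Toy check: an ET_DYN executable is position-independent (PIE),
    so ASLR can relocate it. Assumes a little-endian ELF; note that
    shared libraries are also ET_DYN, so real tools dig deeper."""
    with open(path, "rb") as f:
        header = f.read(18)
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF binary")
    (e_type,) = struct.unpack_from("<H", header, 16)  # e_type field
    return e_type == ET_DYN

# print(is_position_independent("/bin/ls"))  # True on most modern distros
```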
When Updates Attack
At a recent security conference, INFILTRATE 2018, Matt Tait (@pwnallthethings) discussed the idea of engineering software for ease of updates. The ability to push an update that achieves widespread adoption destroys vulnerabilities before threat actors have a chance to retool and exploit them in the typically wide window between patch availability and patch adoption.
Unfortunately, this update capability invites a different problem: the software update infrastructure itself becomes a target for hacking, as we saw with NotPetya, which spread through the compromised update servers of the MeDoc accounting software. I would argue that it’s easier to defend this much smaller group of machines than it is to defend the hundreds of millions of devices they serve, but this leads to an interesting question: should software update servers be considered “critical infrastructure”?
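If we do treat update servers as critical infrastructure, the minimum engineering requirement is that clients never trust the server alone. Here’s a minimal sketch, assuming the pyca/cryptography package and a vendor signing key that is pinned inside the client and kept off the update server; the key bytes and installer hook below are placeholders:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

# Placeholder: a real client embeds the vendor's actual 32-byte public key.
PINNED_VENDOR_KEY = b"\x00" * 32

def install(update_bytes: bytes) -> None:
    # Hypothetical installer hook; a real client would also enforce
    # version monotonicity here to block rollback attacks.
    ...

def apply_update(update_bytes: bytes, signature: bytes) -> None:
    key = Ed25519PublicKey.from_public_bytes(PINNED_VENDOR_KEY)
    try:
        key.verify(signature, update_bytes)  # raises if update was tampered with
    except InvalidSignature:
        raise RuntimeError("update rejected: bad signature")
    install(update_bytes)
```

Pinning the key in the client means a compromised update server alone can’t push malicious code; the attacker would also need the signing key, which is exactly why the build and signing pipeline deserves the same “critical” label as the servers.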
Just because computers work in a world of binaries doesn’t mean our thinking about them has to be binary as well. It’s time we transition away from these military terms when discussing cybersecurity – or face the real-world consequences.