According to Merriam-Webster, the word context is defined as “the interrelated conditions in which something exists or occurs.” Not a very long definition, but in cybersecurity, understanding the meaning of context can mean the difference between chasing ghosts and detecting an ongoing threat or compromise.
It’s not difficult to make the case for context in today’s environment. When over 350,000 new pieces of malware or other unwanted programs are discovered daily, and attackers are making use of standard system tools to carry out their campaigns, anyone can see that it is impossible to protect your environment by investigating each and every suspicious event in isolation.
Unfortunately, many Security Operations Centers (SOCs) today find themselves buried under a sea of security events, many of which lack the context required either to rule them out as benign or to escalate them as critical or severe events needing immediate attention. One of the worst offenders adding to this data explosion in the security stack is a class of technology called Endpoint Detection and Response (EDR).
The concept that drove the creation of EDR technology was simple: antivirus (AV) cannot identify and prevent all threats to the environment, so we need something else that monitors the endpoint for suspicious behavior and, if found, alerts the security analyst.
Not unlike many ideas that can be stated simply, the implementation can get complex quickly. The complexity in the technology starts and ends with data. To identify “everything” AV may have missed, you need to capture and analyze “everything” from “every” endpoint “all the time.” It doesn’t take a mathematician to figure out that this is an enormous amount of data. Where will this data be stored? How will it be collected? How will it be analyzed?
Before you knew it, the average organization that implemented this new technology was staring at a growing monolith of data: gigabytes, terabytes, even petabytes. Each time a user turns on their laptop, checks their email, creates a new Word document, even reads this report online, the EDR is listening and capturing every change. All day, every day.
It’s quite staggering when you consider that even a small or medium size company may have 500 endpoints running each day; forget about the multinational organization that has several hundred thousand endpoints to monitor. Assuming an acceptable strategy is identified for dealing with this data, the next problem to solve is that of analysis.
Data is just data until you analyze it. This generally means a rules-engine running continuously, looking for the proverbial needle in the haystack. These rules-engines are loaded with hundreds, if not thousands, of rules that some security guru - maybe even the vendor - has created to find threats.
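A context-free rules-engine of this kind can be sketched in a few lines of Python. The event fields and rule names below are hypothetical illustrations, not any vendor's actual schema:

```python
# Minimal sketch of a context-free rules-engine: every rule is a bare
# predicate over a single event, with no surrounding conditions.
# Event fields and rule names are hypothetical.

RULES = {
    "powershell-started": lambda e: e.get("process") == "powershell.exe",
    "office-app-started": lambda e: e.get("process") == "winword.exe",
}

def evaluate(event):
    """Return the name of every rule this single event trips."""
    return [name for name, predicate in RULES.items() if predicate(event)]

# Every PowerShell launch fires the rule, benign or not -- no context applied.
alerts = evaluate({"process": "powershell.exe", "user": "alice"})
```

Because each rule sees only one event in isolation, every match becomes an alert, which is exactly the flood of low-context events described above.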
The only problem is that, as you know, the threat landscape is not a static one. Attackers are constantly refining and modifying their approaches, which, in turn, means any rules designed to identify those threats must be refined and modified on a regular basis. If this does not occur, SOCs quickly see their EDR alert volume dwindle, which does not mean attacks have slowed; it just means that the attackers have successfully subverted the rules and may be running free in the environment.
So, assuming you have the data issue solved and have figured out a way to keep your rules current, the final hurdle to jump brings us back to where we started: context.
The Problem with Context
To understand the issue with context, let’s first think about how machines work. A machine, especially one designed for a specific purpose, has tunnel vision. Take the example of an alarm clock. Let’s say you typically need to wake up each day at 6:00 am. You set your alarm, and each day it plays your favorite song or a tone prompting you to wake up and start your day. Now let’s say that you forget to turn off your alarm and it startles you awake on a non-work day. Did the alarm clock malfunction? No. The alarm worked perfectly. The problem is that it lacked the context of “today I do not need to get up at 6:00 am.”
Bringing this back to the world of security, traditional rules-engines essentially manage a series of alarm clocks, each waiting for the minute, second, even millisecond, to ring. The problem is that - just like with the alarm clock at your bedside - without proper context, these alarm clocks ring in the ears of SOC analysts continuously, with little or no awareness that other alarm clocks are also ringing. It’s an exhausting scenario, and luckily one that can be solved with context.
Context would - just as with your 6:00 am wakeup call - add a “gate” or series of gates to each alarm, instructing the alarm to ring only if all the conditions, or sets of conditions, exist. For example, if there is an alarm looking for use of Command Prompt or PowerShell and there is no “condition” set, be prepared to be swamped with alarms.
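The gating idea can be sketched as follows: the base alarm only rings when every contextual condition also holds. The field names, parent processes, and argument strings below are illustrative assumptions, not a real product's rule language:

```python
# Sketch of a "gated" alarm: the PowerShell rule rings only when all
# contextual gates are also true. Event fields are hypothetical.

def powershell_alarm(event, gates):
    """Fire only if the base condition AND every gate hold."""
    if event.get("process") != "powershell.exe":
        return False
    return all(gate(event) for gate in gates)

gates = [
    lambda e: e.get("parent") in {"winword.exe", "excel.exe"},  # unusual parent
    lambda e: "-enc" in e.get("cmdline", ""),                   # encoded command
]

# Routine admin use of PowerShell: no alarm.
quiet = powershell_alarm(
    {"process": "powershell.exe", "parent": "explorer.exe"}, gates
)
# Word spawning encoded PowerShell: alarm.
loud = powershell_alarm(
    {"process": "powershell.exe", "parent": "winword.exe",
     "cmdline": "powershell -enc AAB..."},
    gates,
)
```

The same alarm that would otherwise ring on every PowerShell launch now rings only when the surrounding conditions make the launch suspicious.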
Now, by providing some context to this alarm, SOCs will see alarm volume decrease for the right reasons, with the benign alarms of old being avoided. Let’s look at a real-world example. The rise of tactics, techniques, and procedures that abuse commonly trusted programs installed by default on an operating system - commonly referred to as “living-off-the-land” attacks - is a perfect example of why a strong set of contextual data is required to provide the highest level of protection without overwhelming the security operators in an environment.
Living-off-the-land attacks are becoming more common because they are difficult to detect - and to respond to automatically - in poorly instrumented environments. These attacks utilize programs that are developed and trusted by the operating system’s own development teams. That implicit level of trust allows the execution of these programs to go unnoticed by antivirus solutions, since quarantining or deleting one of them based on file-level analysis alone would cause system instability.
To further complicate the situation, many of these trusted programs have been, until recently, largely unknown by system administrators and security staff alike. Furthermore, some of these programs provide little to no documentation detailing some of their powerful and unexpected features or capabilities. This, of course, creates a situation where a security team may not even know what to look for or understand various areas of risk that exist in their environment.
A good example of this is the well-known Microsoft Office Dynamic Data Exchange vulnerability that many companies encountered throughout 2018. This vulnerability abused a little-known feature of Microsoft Office applications known as Dynamic Data Exchange (DDE), a feature designed to seamlessly transfer data between Microsoft applications, to essentially execute arbitrary commands on a system.
In common in-the-wild techniques, this vulnerability would be used to force the Microsoft application to spawn a child process, typically PowerShell, which would then download and execute an additional payload (a piece of malware, a script, or other untrusted code) on the targeted system.
When the entire lifecycle of a DDE attack is spelled out, as in the description above, it becomes fairly clear how and why it can be used for malicious purposes. This is because the overall description of the attack provides context. When one deconstructs that description into smaller atomic pieces, the necessity of context becomes even clearer.
The most basic level of context required to detect and prevent this attack gives an analyst access to the processes involved - in this example, a Microsoft Office application (Microsoft Word) and PowerShell. Observing these two programs executing on a system is typically not cause for alarm, as both are commonly used in many environments.
The next level of context required is process heritage - in this case, Microsoft Word spawning PowerShell. In many situations this level of context alone may be enough to warrant an investigation or response, as this process relationship is typically unneeded and anomalous in most environments; however, it is still prone to false alarms in some situations.
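A process-heritage check like the one described can be sketched in a few lines; the process names below are illustrative, and a real detection would draw on telemetry rather than hard-coded strings:

```python
# Hedged sketch of a process-heritage check: flag document editors
# spawning command interpreters. Process name lists are illustrative.

OFFICE_APPS = {"winword.exe", "excel.exe", "powerpnt.exe"}
INTERPRETERS = {"powershell.exe", "cmd.exe"}

def suspicious_heritage(parent, child):
    """True when an Office application spawns a command interpreter."""
    return parent.lower() in OFFICE_APPS and child.lower() in INTERPRETERS

suspicious_heritage("WINWORD.EXE", "powershell.exe")   # Word -> PowerShell: flag it
suspicious_heritage("explorer.exe", "powershell.exe")  # routine launch: ignore
```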
Process details continue to add valuable context for analysts and automated response actions. Some process details, such as command line arguments, can be the ultimate contextual attribute, providing a near-perfect mechanism for determining the maliciousness of a particular action, especially when combined with more foundational layers of context. In the above example, PowerShell could be called with a set of command line arguments indicating that it would attempt to download content from a remote location.
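As a sketch of this command-line layer, a detection could look for substrings commonly associated with PowerShell download cradles. The indicator list below is a small illustrative sample, not an exhaustive or authoritative signature set:

```python
# Sketch of command-line context: check PowerShell arguments for
# remote-download indicators. The token list is illustrative only.

DOWNLOAD_INDICATORS = (
    "downloadstring",
    "downloadfile",
    "invoke-webrequest",
    "net.webclient",
)

def looks_like_download_cradle(cmdline):
    """True when the command line references a remote-download primitive."""
    lowered = cmdline.lower()
    return any(token in lowered for token in DOWNLOAD_INDICATORS)

looks_like_download_cradle(
    "powershell -nop -w hidden -c IEX "
    "(New-Object Net.WebClient).DownloadString('http://...')"
)  # flagged
```

Combined with the process-heritage layer above it (Word spawning PowerShell *with* download arguments), this kind of check becomes far less prone to false alarms than either signal alone.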
The Solution: Intelligent Context
The crown of contextual detail comes with the ability to automatically and intelligently correlate different levels of system activity (e.g., processes starting and files being created) in some logical manner. This level of context allows detection and response logic to be finely tuned to highly specific parameters.
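One minimal way to sketch this correlation is to join two activity streams - process starts and file creations - on process id within a short time window. The event shapes, field names, and 60-second window below are all assumptions made for illustration:

```python
# Sketch of correlating two activity streams by process id and time.
# Event dictionaries and the correlation window are hypothetical.

from collections import defaultdict

def correlate(process_events, file_events, window_seconds=60):
    """Pair each file creation with the process start that produced it."""
    starts = {e["pid"]: e for e in process_events}
    correlated = defaultdict(list)
    for f in file_events:
        p = starts.get(f["pid"])
        if p and 0 <= f["ts"] - p["ts"] <= window_seconds:
            correlated[p["process"]].append(f["path"])
    return dict(correlated)

procs = [{"pid": 42, "process": "powershell.exe", "ts": 100}]
files = [{"pid": 42, "path": "C:\\Users\\a\\payload.exe", "ts": 130}]
correlate(procs, files)  # {'powershell.exe': ['C:\\Users\\a\\payload.exe']}
```

Rather than two unrelated alarms ("PowerShell started" and "an executable was written"), the analyst sees one correlated story: this process wrote that file.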
Breaking an event down to this level of context not only helps explain the intricacies of detecting malicious system behavior, but also allows for a clear understanding of how detection and response logic can be implemented at multiple levels of the contextual stack to fine-tune the types and volume of alerts that a security team will receive about the state of their environment.
Does context solve all your security problems? Of course not; making that assertion would be foolish. Attackers are a determined bunch and will always look for new ways to reach their goals.
But one thing is for sure, if you are given the choice between working in a SOC that has implemented some form of automated contextual analysis and one that hasn’t, choose context. You just might get your nights and weekends back.