The SANS Next Generation Endpoint Security (NGES) test for the Center for Internet Security Critical Security Controls version 6.1, Control 8: Malware Defenses (or simply the SANS NGES Test) is a test conducted by Dean Sapp to evaluate how well a selection of endpoint security products fulfill the anti-malware objectives outlined in CSC 6.1 #8, a cybersecurity standard focused on endpoint security for mitigating breach risk.
Specifically, it checks product compliance with three of the six sub-controls: automatic system monitoring and defense, product updating, and anti-exploit features.
The test report states that the test was designed to focus on “prevention and blocking,” not on detection. It does this by running a mixture of malicious and non-malicious executables against victim virtual machines (VMs) secured by the products under test.
For each test run, a malicious executable file is transmitted to the victim system as though an insider-class attacker had uploaded it to the endpoint via USB. Then, the simulated attacker launches the executable.
The victim’s reaction to the file, malware or not, is then recorded and analyzed.
Non-malicious samples are then used in the same way to test each product’s ability to distinguish actual malware from benign software. A balance must be struck between the rate at which truly malicious programs are correctly blocked and the rate at which benign ones are mistakenly stopped (a mistaken block is known as a false positive).
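The trade-off described above can be expressed as two simple rates. The following is an illustrative sketch only; the counts are hypothetical and are not figures from the NGES test report.

```python
# Hypothetical scan results for one product (illustrative only).
malicious_total = 50    # truly malicious samples scanned
malicious_blocked = 48  # malicious samples the product stopped

benign_total = 50       # benign samples scanned
benign_blocked = 3      # benign samples mistakenly stopped

# Block rate: fraction of real malware correctly stopped.
block_rate = malicious_blocked / malicious_total

# False-positive rate: fraction of benign files wrongly stopped.
false_positive_rate = benign_blocked / benign_total

print(f"Block rate:          {block_rate:.0%}")
print(f"False-positive rate: {false_positive_rate:.0%}")
```

A product tuned to block everything would score a perfect block rate but an unacceptable false-positive rate; the test measures both sides of this balance.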
Samples Used in Test
The malware used in the test was a subset – fifty total samples – of a much larger sample pool amassed from VirusTotal™, as well as from members of the cybersecurity community – not from antivirus (AV) vendors.
The non-malicious samples, used to test false-positive alert rates, were fifty programs selected from Microsoft’s Sysinternals suite. This is a good set of false-positive test programs, because Sysinternals applications are widely used by programmers, systems administrators, and enthusiasts the world over; ideally, no security product under test should detect them.
To simulate previously unseen malware, the fifty malicious samples were modified at the byte level to change their signatures without altering their behavior. This ruled out blocking by, for example, hash recognition. In addition, the samples were copied and then packed, further modifying their structure, and therefore their appearance as seen by the tested products’ anti-malware engines. Packing malware is common practice in the world of cybercrime, and can be a very powerful method of bypassing simple detection procedures.
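The effect of a byte-level mutation on hash-based recognition can be sketched briefly. This is an illustration of the general principle, not the test’s actual mutation procedure; the sample bytes below are made up.

```python
# Illustrative sketch: even a one-byte change gives a file a completely
# different cryptographic hash, so a defense keyed on the original file's
# hash will no longer recognize the mutant. For many executable formats,
# such padding changes do not alter runtime behavior.
import hashlib

original = b"MZ\x90\x00 hypothetical executable bytes"  # made-up file contents
mutated = original + b"\x00"  # byte-level change: append one padding byte

print("original:", hashlib.sha256(original).hexdigest())
print("mutated: ", hashlib.sha256(mutated).hexdigest())
```

Because the two digests differ entirely, only defenses that analyze structure or behavior, rather than exact file identity, can catch the mutated sample.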
All in all, a total of 100 malicious samples were produced: 50 non-packed samples, and 50 packed samples.
A Word About Packing
It is worth noting here that the false-positive sample set was treated in the same way; the 50 Sysinternals samples were also packed, using the same packer as with the malicious samples, to produce a subset of packed false-positive test programs.
The resulting sample set sat uneasily with the real-world premise underpinning the NGES test’s sample selection. The intention behind a tool used to fool anti-malware products matters as much as the tool itself; a threat actor with no motivation to pack a benign file simply would not use the packer that way.
On this point, the overwhelming majority of packed files in existence are malware. Because of this, CylancePROTECT®’s math models associate packed files with maliciousness, and in turn block them.
In spite of this, the NGES test was designed thoughtfully, implemented skillfully, and conducted methodically. While no test is perfect, the NGES test’s ability to highlight the strengths of the products it evaluated merits recognition:
- It identified products highly capable of preventing malware from infecting its victims
- It showed which solutions could block malicious files mutated to evade detection
- It also discovered which products reacted the least to genuinely benign files
NGES test results showed that endpoint security products which utilize artificial intelligence (AI) and machine learning (ML), such as CylancePROTECT, outperformed other products tested.
Among these AI/ML products, CylancePROTECT was the most effective at preventing infection with low system impact – in fact, the report indicates that CylancePROTECT came in first place in almost every measured test.
The tester, Dean Sapp, issued a statement about the results: “After conducting three rounds of independent testing against zero-day, file base malware in our lab, Cylance distinguished itself as the most effective prevention based, next generation endpoint protection product on the market.”