On The Way to Better Malware Testing

By Magnus Kalkuhl

Have you ever found a false positive when uploading a file to a website like VirusTotal? Sometimes not just one scanner detects the file, but several. This leads to an absurd situation where every product that doesn't detect the file automatically looks bad to users who don't understand that these are just false positives.

Sadly, you will find the same situation in a lot of AV tests, especially in static on-demand tests where sometimes hundreds of thousands of samples are scanned. Naturally, validating such a huge number of samples requires a lot of resources, which is why most testers can only verify a subset of the files they use. What about the rest? The only way for them to classify the remaining files is a combination of source reputation and multi-scanning. This means that, as in the VirusTotal example above, every company that doesn't detect samples detected by other companies will look bad – even if those samples are corrupted or perfectly clean.
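To see why this approach is fragile, consider a minimal sketch of consensus-based labeling of the kind a tester might use for unverified samples. All names and the threshold here are invented for illustration; no real tester's methodology is implied.

```python
# Hypothetical sketch: label a sample by majority vote of scanner verdicts.
# This is an illustration of the methodology's weakness, not any real tool.

def consensus_label(verdicts, threshold=0.5):
    """Label a sample 'malicious' if enough scanners flag it.

    verdicts: dict mapping scanner name -> True (detected) / False (clean)
    threshold: fraction of detecting scanners needed for a 'malicious' label
    """
    detections = sum(1 for hit in verdicts.values() if hit)
    return "malicious" if detections / len(verdicts) >= threshold else "clean"

# A single false positive copied by a few engines is enough to mislabel
# a clean file -- and every engine that correctly stays silent then
# counts as a "miss" against it in the test.
sample = {"ScannerA": True, "ScannerB": True, "ScannerC": True,
          "ScannerD": False, "ScannerE": False}
print(consensus_label(sample))  # -> malicious, even if the file is clean
```

The point is that the vote measures agreement between scanners, not ground truth: once a false positive propagates across engines, the consensus confirms it.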

Since good test results are a key factor for AV companies, this has led to the rise of multi-scanner-based detection. Naturally, AV vendors, including us, have been scanning suspicious files with each other's scanners for years. Knowing what verdicts other AV vendors produce is obviously useful: for instance, if 10 AV vendors detect a suspicious file as a Trojan downloader, that tells you where to start. But this is different from what we're seeing now: driven by the need for good test results, the use of multi-scanner-based detection has increased a lot over the last few years. Of course no one really likes this situation – in the end, our task is to protect our users, not to hack test methodologies.

This is why a German computer magazine conducted an experiment, the results of which were presented at a security conference last October: they created a clean file, asked us to add a false detection for it, and finally uploaded it to VirusTotal. Some months later, this file was detected by more than 20 scanners on VirusTotal. After the presentation, representatives from several AV vendors at the event agreed that a solution should be found. However, multi-scanner-based detection is just the symptom – the root of the problem is the test methodology itself.

Read the rest of this editorial at VirusList.

* Magnus Kalkuhl is a senior virus analyst in Kaspersky Lab’s Global Research & Analysis Team.

