The exponential growth of malware in recent years has driven the rise of automated analysis environments - or "Sandboxes" - as an essential means of providing detailed, pertinent information about a sample in a timely manner.
A Sandbox executes a malware sample in an environment that mimics a genuine victim machine, while "sandboxing" any malicious behaviour so that it can do no real harm.
Data is gathered that can help analysts decide if the sample is genuinely malicious, what malware family it may belong to, and any further actionable indicators such as filenames or command and control addresses.
Although this provides a scalable solution, there are drawbacks. In particular, since a Sandbox is an artificial environment, there will inevitably be ways for malware to detect it.
Traditionally, when malware detects that it is not running in a genuine victim environment, it simply exits immediately. Detection is most often achieved by spotting the presence of a virtual machine (VM), although there are other methods, such as checking for installed analysis tools or for the product IDs of known public Sandboxes.
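The checks described above can be sketched as follows. This is an illustrative example only, not code from any of the families discussed: the artifact names are commonly reported VM and analysis-tool process names, and the product ID is an example of the kind of value publicly associated with a known Sandbox.

```python
# Process/tool names commonly associated with VMs and analysis tools
# (illustrative examples, not an exhaustive list).
VM_AND_TOOL_ARTIFACTS = {
    "vboxservice.exe",   # VirtualBox guest additions service
    "vmtoolsd.exe",      # VMware Tools daemon
    "wireshark.exe",     # packet-capture analysis tool
}

# Example Windows product ID of the sort reportedly fingerprinted
# for known public Sandboxes.
SANDBOX_PRODUCT_IDS = {
    "76487-337-8429955-22614",
}

def looks_like_analysis_environment(running_processes, product_id):
    """Return True if any known VM/tool artifact or Sandbox product ID is found."""
    if any(p.lower() in VM_AND_TOOL_ARTIFACTS for p in running_processes):
        return True
    return product_id in SANDBOX_PRODUCT_IDS

# On a positive result, traditionally-evasive malware would simply exit here.
```

For example, `looks_like_analysis_environment(["explorer.exe", "VBoxService.exe"], "...")` would return `True`, whereas the same call with only ordinary processes and an unrecognised product ID would return `False`.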
However, there is a certain subset of malware families that are more cunning when they detect an analysis environment.
At the Virus Bulletin conference this year, I will present a paper detailing several malware families that employ a variety of techniques to throw off researchers or otherwise produce erroneous analysis results.
I will examine:
- How some families, such as Andromeda, display more benign behaviour under a VM than on a real machine;
- How Vundo uses decoy command and control addresses to divert attention and potentially induce false positives;
- How Simda builds a blacklist of researcher IP addresses; and
- How Shylock distributes dummy configuration files to send analysts down divergent paths.
The paper will be presented at VB2014 in Seattle. The full programme is available on the Virus Bulletin website.