Is Amazon hacking our apps? Or doing us all a security favour?

A war of words that started out as a fairly stinging criticism of Amazon has mellowed out into praise for the cloud services behemoth.

At the start of the week, popular online IT publication The Register said:

By mid-week, El Reg had chilled out a little, and instead described the situation like this:

As the developer himself reports on his blog:

It's been stated in various places that I was "lashing out" or otherwise upset with Amazon. That's simply not the case. I was both pointing out our mistake and simultaneously noting how interesting it was that Amazon examined a binary hosted on an app store looking for AWS credentials.

Let’s take a peek behind the scenes of this story.

Reversing Java in particular

Firstly, “reversing” compiled Java code (going from the executable format to a human-readable code format) isn’t quite like disassembling a Windows executable.

Java compiles into bytecode that lends itself fairly well to decompilation.
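One reason decompilation works so well is that string literals survive compilation verbatim in the class file's constant pool. Here's a minimal, self-contained sketch (the "secret" is invented for illustration): the program reads its own compiled `.class` file and shows that the hard-coded string is sitting there as plain bytes, no decompiler required.

```java
import java.io.ByteArrayOutputStream;
import java.io.InputStream;

// Sketch: string literals survive compilation verbatim in the class
// file's constant pool, which is one reason compiled Java is so easy
// to decompile. This program reads its own .class file and finds the
// (made-up) hard-coded secret as plain bytes.
public class ConstantPoolDemo {
    // An invented credential, for illustration only.
    static final String SECRET = "AKIAEXAMPLEKEY123456";

    public static void main(String[] args) throws Exception {
        byte[] cls = readAll(ConstantPoolDemo.class
                .getResourceAsStream("ConstantPoolDemo.class"));
        // ISO-8859-1 maps bytes 1:1 to chars, so an ASCII literal in
        // the constant pool shows up unchanged in the decoded string.
        String raw = new String(cls, "ISO-8859-1");
        System.out.println(raw.contains(SECRET)
                ? "secret visible in class file"
                : "secret not found");
    }

    static byte[] readAll(InputStream in) throws Exception {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        for (int n; (n = in.read(buf)) != -1; ) out.write(buf, 0, n);
        return out.toByteArray();
    }
}
```

A real decompiler does the same walk over the constant pool (plus the bytecode itself), which is why even "compiled" credentials are trivially recoverable.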

Of course, source code itself can be scrambled, minified or obfuscated, as anyone who has tried to get to grips with JavaScript malware will know.

Nevertheless, taking a basic look at decompiled Java (no pun intended) is easy to automate.

Using automated decompilation to look for egregious security blunders is technically no more sophisticated than entry-level spam scanning.
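To see just how unsophisticated such a scan can be, here's a minimal sketch: AWS access key IDs have a well-known shape (historically the literal prefix `AKIA` followed by 16 upper-case letters and digits), so a single regular expression run over the decompiled source will flag them. The sample "decompiled" line and the key in it are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of the kind of automated scan described above: run decompiled
// source past a regex matching the well-known shape of an AWS access
// key ID. The sample text and key below are invented.
public class CredentialScan {
    static final Pattern AWS_KEY = Pattern.compile("AKIA[0-9A-Z]{16}");

    static List<String> findKeys(String source) {
        List<String> hits = new ArrayList<>();
        Matcher m = AWS_KEY.matcher(source);
        while (m.find()) hits.add(m.group());
        return hits;
    }

    public static void main(String[] args) {
        // Pretend this line came out of a decompiler.
        String decompiled = "String accessKey = \"AKIAABCDEFGHIJKLMNOP\";";
        System.out.println(findKeys(decompiled));
    }
}
```

That's the whole trick: decompile, grep, flag. No deep analysis needed, which is exactly why it scales to an entire app store.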

By the same token (no pun intended), looking at Java programs in this way is no more objectionable than reading emails to scan for spam, a type of automated “email surveillance” that we have not only come to tolerate, but even to expect.

Reversing in general

Secondly, reversing apps for compatibility and security tests is an important legal right to maintain.

Otherwise, in an emergency, or in the case of a recalcitrant, inept or out-of-business vendor, you could end up stuck in a security hole, to your own and to other people’s detriment.

In other words, even if Amazon has already taken a step beyond automated decompilation and is passing high-risk apps to human experts for analysis, that’s the sort of effort we should encourage.

You may have heard the open source mantra that “given enough eyeballs, all bugs are shallow.”

It’s not completely true (bugs you might consider obvious have survived in open source code for some time), but there’s some relevance in the saying, especially in computer security.

All other things being equal, the more eyes on a piece of code, the more likely that any latent security problems will be found.

We know the Bad Guys are looking for holes, and finding them, so we should be pleased that more and more Good Guys are, too.

Hard-wired passwords

Thirdly, as the developer in this story has now made abundantly clear on his blog, he did make a serious security blunder.

As we keep reminding you on Naked Security, DON’T HARDWIRE PASSWORDS into your applications.

And especially don’t hardwire passwords that have other uses, like allowing crooks to masquerade as you or to “borrow” your resources for further criminality.
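What does the alternative look like? A minimal sketch: instead of baking a credential into the binary, read it at run time from the environment (or, better still, a secrets service). `AWS_SECRET_ACCESS_KEY` is the conventional environment variable name; the refuse-to-run fallback here is our own assumption for illustration.

```java
// Sketch: fetch the credential at run time rather than hard-wiring it,
// so it never ends up in the shipped binary for anyone to decompile.
public class NoHardwiredSecrets {
    public static void main(String[] args) {
        String secret = System.getenv("AWS_SECRET_ACCESS_KEY");
        if (secret == null || secret.isEmpty()) {
            // Fail closed rather than shipping a fallback credential.
            System.out.println("no credential configured - refusing to run");
            return;
        }
        // ... hand the credential to the AWS client here ...
        System.out.println("credential loaded from environment");
    }
}
```

The point isn't that environment variables are perfect (they aren't), but that the credential stays out of the artefact you distribute to the world.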

If the principle doesn’t convince you, take Amazon’s own advice on what might go wrong if someone recovered and used your AWS password:


You might allow the end user to use your root credentials, that is, the credentials associated with your account. However, for security reasons, you should never embed your root credentials in an application. These credentials provide complete access to all your AWS resources as well as your billing information. You should always keep these credentials as secure as possible and use them only when necessary.

In other words, even if you don’t want to embrace security for everyone else’s sake, do it for your own sake, so you don’t wake up to an Amazon Web Services bill you weren’t expecting.

Security holes might not be shallow when lots of eyes are focused on finding them.

But it seems a truism to say that they’re a lot more likely to be found.