Lessons to learn from the MongoHQ database breach

Cloud-based database services company MongoHQ is in “we’d better fix things” mode this week, following a network intrusion that proves the old adage that once you’ve been breached, all security bets are off.

And if you’re a database provider, any breach anxiety is no doubt greater than usual.

After all, it’s not just your own databases that are put at risk, possibly including data about your customers; it’s your customers’ databases too, probably including data about your customers’ customers.

(Imagine, in Adobe’s recent SNAFU, if the multimedia giant hadn’t merely lost its own source code, but all your source code, too.)

Now, regular readers of Naked Security will know that I’m partial to a good apology when a company deals with a breach, and MongoHQ CEO Jason McCay didn’t disappoint, writing earlier in the week that the company was “deeply sorry this occurred.”

I know apologies don’t actually fix anything, but in the hush-hush world of computer security, unambiguous apologies often act as a signal that things aren’t going to get swept under the carpet.

That, in turn, probably means you’ll actually find out what happened, and instead of hearing platitudes like, “We’re going to fix this so you can trust us next time,” you’re more likely to hear, “This is what we’re doing so you can decide for yourself if you can trust us next time.”

So, what did happen, according to MongoHQ’s own admission?

Sadly, a series of weak security decisions were behind the intrusion:

  • A user’s work account had the same password as one of his personal accounts, and his personal account got pwned. So the crooks were into both accounts at one stroke.
  • MongoHQ’s internal support application was accessible directly over the internet. So the crooks didn’t need to authenticate their way onto a Virtual Private Network (VPN) first.
  • There was no two-factor authentication (2FA). So the personal password known to the crooks was enough all on its own.
  • The support application gave access to customer account information, such as email addresses and hashed passwords.
  • The support application also allowed insiders to pretend to log in as a customer, and to see exactly what that customer would see, “for use in troubleshooting customer problems.” So the crooks could effectively log in to customer accounts without needing to know their passwords.
  • There were no Access Control Lists (ACLs) to prevent support users from getting at data they didn’t need.

The only good news in all of this is that the customer passwords revealed to the crooks were hashed using bcrypt.

Bcrypt is a so-called key-stretching function that ramps up the time it takes for a supplied password to be checked against its stored hash, by requiring various parts of the hash calculation to be repeated thousands or even tens of thousands of times, rather than just once.

That means it takes thousands or tens of thousands of times longer to check each password – not much of an inconvenience when you are validating passwords one-by-one as customers log in, but a giant roadblock when you are a crook wanting to try a dictionary attack using millions of likely passwords.
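Using bcrypt itself requires a third-party library, but the key-stretching principle can be sketched with PBKDF2, a similar iterated construction that ships in Python's standard library. The work factor (`iterations`) is the knob that makes each guess expensive:

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 200_000) -> tuple[bytes, bytes, int]:
    """Key-stretched hash: the core HMAC step is repeated `iterations` times,
    so each verification (and each attacker guess) costs that much work."""
    salt = os.urandom(16)  # per-password random salt defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest, iterations

def check_password(password: str, salt: bytes, digest: bytes, iterations: int) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```

A legitimate login pays the 200,000-iteration price exactly once, which is imperceptible; a crook testing a million dictionary words pays it a million times over.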

As for the not-so-good items listed above: MongoHQ has already started working on addressing them all.

In fact, the company has gone so far as to say (my emphasis below) that it will keep its support application shut down until “we have obtained third-party validation that:

  • we have functioning, enforced two-factor authentication,
  • access to the applications is provided solely through VPN,
  • [we have] a system of graduated permissions, tested thoroughly, that allows only the minimum needed privileges to support personnel based on role.”

That’s a laudable and robust response, and – as it happens – it’s a great checklist for your own network security setup.
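The third item on that checklist – graduated, role-based permissions – amounts to a deny-by-default access control list. A minimal sketch of the idea (the role and permission names here are purely illustrative, not MongoHQ's actual scheme):

```python
# Hypothetical role-to-permission map; every role gets only what its job needs.
ROLE_PERMISSIONS = {
    "support-tier1": {"view_account_email"},
    "support-tier2": {"view_account_email", "view_billing"},
    "support-admin": {"view_account_email", "view_billing", "impersonate_customer"},
}

def is_allowed(role: str, action: str) -> bool:
    """Deny by default: unknown roles and unlisted actions get nothing."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

The crucial property is the default: an action is forbidden unless it is explicitly granted, so a compromised first-tier support account cannot, say, impersonate a customer.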

2FA, VPNs, and ACLs: your computer security allies.