You won’t have missed the “Heartbleed” bug.
Recent versions of OpenSSL – in fact, versions available for two years – have a buffer over-read vulnerability (a missing bounds check) that can cause data leakage.
Ironically, the bug is only exploitable if you are setting up or already using a secure TLS connection, as you would, for example, when browsing to an HTTPS web page.
Why is the bug there at all?
One problem with TLS connections is they’re a bit like aeroplane flights.
Once you’re airborne, you zoom along at an impressive 800km/hr, but only after it has taken ages for everyone to get through the boarding gate, find their seats, stow their bags, rediscover their manners, and get ready for takeoff.
In a similar fashion, TLS connections are as quick as regular connections once they’re set up, but they take much more work to get started.
So it makes sense for TLS to have what’s called a “heartbeat” function to let the encryption layer keep a connection alive, even if the application itself doesn’t have anything to send or receive.
But it turns out that OpenSSL’s implementation of the heartbeat function can be tricked.
By sending a maliciously constructed heartbeat request, you can get OpenSSL to reply with up to 64KB of data that wasn’t supposed to be sent at all.
Where’s the heartbleed?
Every time you send a malicious heartbeat, which could be as often as every second, you get an unstructured (and illicit) peek at some 65,000 bytes sucked out of memory at the other end of the connection.
In short: the heartbeat function can be abused to bleed data from the other end, thus: heartbleed.
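Greatly simplified, the flaw is a missing bounds check: the reply is built from the length *claimed* in the request rather than the actual payload size. This isn’t OpenSSL’s code, just a Python simulation of the pattern, with process memory modelled as one flat buffer:

```python
# Simulation of the Heartbleed pattern, NOT OpenSSL's actual code.
# The heartbeat payload happens to sit next to data that should
# never leave the server.
MEMORY = bytearray(
    b"HELLO"                        # the 5-byte heartbeat payload
    + b"user=admin;pass=hunter2"    # adjacent secrets
    + b"\x00" * 16
)

def heartbeat_response(payload_offset: int, claimed_len: int) -> bytes:
    # The bug: trust the length field from the request and copy that
    # many bytes back, with no check against the real payload size.
    return bytes(MEMORY[payload_offset : payload_offset + claimed_len])

print(heartbeat_response(0, 5))    # honest request: b'HELLO'
print(heartbeat_response(0, 28))   # malicious request: bleeds adjacent memory
```

An honest client sends a 5-byte payload and claims 5 bytes; a crook sends the same payload but claims 28, and gets 23 bytes of neighbouring memory thrown in for free.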
You can’t predict what you’ll get, but in the same way that gold mines succeed by extracting just a few grams of precious metal from every tonne of rock, you might end up with something valuable.
For example, you might discover the username and password of someone who logged in just before you.
Let’s be honest: there’s a lot of hype around “heartbleed”, starting with the name itself, and it isn’t the end of the world as we know it.
But when it comes to the question, “Could someone have used this bug to get hold of my passwords?”, you have to answer, “Maybe, just maybe.”
Understandably, that has led to a frenzy of password resets.
Would 2FA have helped?
Because of the global password reset pandemic, lots of Naked Security readers have asked, “Wouldn’t 2FA have helped?”
2FA is short for Two Factor Authentication; we write about it and promote it a lot.
If you look at any industry commentary about 2FA, you’ll keep bumping across the idea that it involves at least two of: something you know; something you have; and something you are.
Examples of 2FA include:
- An ATM (cashpoint) withdrawal. You have a card issued by the bank. You know a PIN that unlocks the card for use. Neither one on its own gets you anything.
- An immigration check at the US frontier. You have a passport. You are the person with specific fingerprints.
- A secure WordPress login. You know a password. You have possession of a mobile phone that receives a one-off authentication code.
We’re going to focus entirely on the last sort of 2FA above.
We think it’s the easiest and most effective way for web properties and other internet services to raise the bar against stolen passwords.
In the first two examples, the 2Fs of A are constant.
Another guy’s fingerprint won’t work with your passport, and another chap’s bank card can’t be used with your PIN, but all four factors are the same every time you draw money or visit California.
But the last sort of 2FA involves one static factor, your password, and one factor that is different every time.
If the second factor is stolen after it’s been used, the crooks end up with nothing: it only works once.
In fact, you will often see this sort of 2FA referred to as an “OTP system”, and some servers may even prompt you to enter your OTP, short for One Time Password.
Where does the OTP come from?
There are three main technologies used for OTP logins.
You have probably seen or heard of, and possibly used, all three sorts:
- The hardware token.
- The smartphone app.
- The SMS service.
The first two generate and display a stream of random numbers; you need to read off and enter the correct number in order to log in.
Greatly simplified, digit-stream OTPs, both hardware and software, rely on a random starting seed or key, a counter, and a cryptographic hash.
→ A hash is a function that mixes up an arbitrary amount of input, using encryption-like scrambling techniques, to produce a fixed-length output. But you can’t work backwards from the output to the input, even in part. The hash thus acts as a sort of secure digital fingerprint for the input. You can quickly verify that the hash of cat is 77af778b51, but you can’t easily find out from 77af778b51 alone that cat produces it.
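You can reproduce that fingerprint check with Python’s standard hashlib; the 77af778b51 quoted above is the start of the SHA-256 hex digest of cat:

```python
import hashlib

# Hash the three-byte input "cat"; the digest is always the same
# fixed length, however big the input.
digest = hashlib.sha256(b"cat").hexdigest()
print(digest[:10])   # the short form quoted above: 77af778b51

# Verifying is easy; inverting is not: nothing about the digest
# reveals that "cat" produced it.
assert digest.startswith("77af778b51")
```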
These days, because most digital devices include accurate clocks, the time is used as the counter, typically rounded off to the closest half-minute.
This allows your token or smartphone to stay in synch with the server you’re logging into simply by keeping track of the time to within a few seconds.
If you can come up with the right code, the server can deduce that you have the correct hardware token, or that you must know the starting seed for the smartphone app.
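Greatly simplified again, here is what “seed plus counter plus hash” looks like in practice. This sketch follows the published HOTP recipe (HMAC-SHA1 plus “dynamic truncation”, as in RFC 4226); the time-based variant simply substitutes the half-minute count for the counter:

```python
import hashlib
import hmac
import struct
import time

def hotp(seed: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the 8-byte big-endian counter (RFC 4226).
    mac = hmac.new(seed, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the low 4 bits of the last byte select a
    # 4-byte window, reduced to the requested number of digits.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset : offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

def totp(seed: bytes, period: int = 30) -> str:
    # Time-based variant: the counter is just the number of elapsed
    # half-minutes, so token and server stay in step via their clocks.
    return hotp(seed, int(time.time()) // period)

# RFC 4226's own test vector: seed "12345678901234567890", counter 0.
print(hotp(b"12345678901234567890", 0))   # 755224
```

The server runs the same calculation with the same seed; if the digits match, you must hold the device (or know the seed) that produced them.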
The last 2FA system, based on text messages, doesn’t require your phone to perform any cryptographic calculations.
The server generates a random number, usually six or seven digits, and simply sends it to your phone.
You read it off, type it in on your computer, and the server can therefore deduce that you are currently in possession of the phone whose number is logged with the server.
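Server-side, that random code needs nothing more than a cryptographic random number generator. A minimal sketch (the function name is ours, not from any particular service):

```python
import secrets

def make_sms_code(digits: int = 6) -> str:
    # Draw from a cryptographic RNG and zero-pad, so every code from
    # 000000 to 999999 is equally likely and always the same length.
    return f"{secrets.randbelow(10**digits):0{digits}d}"

code = make_sms_code()
print(code)   # e.g. 403917 -- then sent to the phone by SMS
```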
Why the special hardware?
You might be wondering, “Why go to all the fuss with smartphones, SMSes or hardware tokens? Why not just calculate the token code on your computer?”
The reason is simple: so that you have not only an OTP that can never be used again, but also a second, independent, device for computing or receiving it.
The OTP alone helps greatly against heartbleed, because “bleeding” your username and password after you’ve logged in is no longer good enough.
The crooks have to get hold of your OTP code too – and they have to do so before you log in.
(If they “bleed” the OTP code you just used to log in, it’s useless, because it’s already been consumed by the login process; the next OTP will be different, and can’t be predicted.)
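That “consumed by the login process” property is easy to sketch server-side: the pending code is deleted the moment it is checked, so a bled copy of a used code buys the crooks nothing. (A hypothetical illustration, not any real service’s code.)

```python
import hmac
import secrets
import time

PENDING = {}  # username -> (code, expiry timestamp)

def issue_otp(user: str, ttl: int = 300) -> str:
    # One pending code per user, valid for ttl seconds.
    code = f"{secrets.randbelow(10**6):06d}"
    PENDING[user] = (code, time.time() + ttl)
    return code

def verify_otp(user: str, attempt: str) -> bool:
    # Pop unconditionally: right or wrong, the code is consumed
    # and can never be replayed.
    code, expiry = PENDING.pop(user, (None, 0.0))
    if code is None or time.time() > expiry:
        return False
    return hmac.compare_digest(attempt, code)

code = issue_otp("alice")
print(verify_otp("alice", code))   # True -- first use succeeds
print(verify_otp("alice", code))   # False -- replay fails
```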
But if the crooks have malware on your computer, they could interfere with or emulate the entire login process, including the process used to calculate or display the OTP.
With a separate physical device involved – token, smartphone or regular mobile – that becomes much harder.
Your password could be stolen off a remote server, but with SMS authentication, for example, the crook who acquired the password would then also need the SIM card from your mobile phone before he could get anywhere.
→ “Stealing” someone’s SIM card can be done without stealing their physical phone. In a number port or SIM swap attack, the crooks convince a mobile phone shop – where they may have an accomplice – to cancel the old SIM and issue a new one with the old number. But this attack requires physical effort, making it hard to do in volume, and victims have a chance of spotting the attack because their own phones go dead when the old SIM is cut off.
Which is the best sort of OTP?
Our favourite is SMS-based OTP, simply because it doesn’t require a cryptographic seed shared between the token or smartphone and the server; and because the service can be used with the smallest, cheapest and most basic type of mobile phone.
After all, if you are worried that crooks might have heartbled your password, you should probably be worried that they might have heartbled your OTP cryptographic seed as well.
Also, if you get in the habit of carrying a basic mobile phone for emergency calls and OTP SMSes only, you can use it for secure 2FA even when the device you are logging in from is your smartphone or tablet.
However, SMS-based 2FA has some disadvantages: you have to give your phone number to the service provider; SMSes can be delayed, sometimes by hours; and when you are roaming you may end up paying handsomely to receive your login codes.
Hardware tokens are attractive because of their simplicity.
They are also tamper-proof and have a single, specific purpose, so they can’t easily be made to misbehave: they can’t get malware or leak data like a smartphone, and they aren’t vulnerable to SIM swapping.
But tokens are easily lost, and you may end up with a bagful of tokens, one for each service you use.
Smartphone apps are increasingly popular for 2FA, because they reduce the number of special purpose devices you need to carry.
But smartphones aren’t tamper-proof, so if you’re the sort of person who is worrying about systematic, large scale password theft from servers due to heartbleed, you probably also need to worry about your smartphone token app’s cryptographic seed being stolen and exfiltrated.
Does 2FA solve all known login risks?
No.
You can still be phished, for example: the crooks simply ask for your username, password and OTP code.
That means they can’t just put your stolen password in a database and sell or use it later, because they have to initiate a login right away in order to process the OTP in a timely fashion.
But they can still attack your account.
Crooks could also infect your smartphone with malware to intercept 2FA text messages, or to interfere with your authenticator app.
And you may find (if your country’s laws allow it) that using 2FA with your bank paradoxically increases your liability, and weakens your case in the event of a dispute.
A court or ombudsman might believe the bank if it were to argue that 2FA would have prevented the charges you are disputing, if indeed they were fraudulent.
Nevertheless, 2FA does make it harder for the crooks.
And while it wouldn’t have made heartbleed less of a bug, it would have made any passwords harvested by means of the bug much less useful, perhaps even useless.
In short: we recommend 2FA.
Please forgive an old Dinosaur for repeating what we aging techies have been crying in the wilderness for 30 years (or more!), but would you young coders please just search for “boundary checking” (or rather, the lack of it in modern coding), and you have your answer to both the questions “How” and “Why”. Nuff said?
Also, “Guard Pages”.
That’s where a malloc() causes the memory page immediately after the allocated memory to be marked “not there”, so a buffer overflow, whether by write or read, triggers a page fault and is therefore caught by the memory management hardware itself. (Try “man malloc” on OpenBSD.)
I was thinking about this yesterday. Why couldn’t the shared secret used in OTP be leaked via Heartbleed just like private SSL keys may be? The architecture of a particular web service may make this unlikely, but I don’t think we can assume it’s impossible…
That’s why I wrote (leaving our readers to evaluate the likelihood for themselves 🙂) “if you are worried that crooks might have heartbled your password, you should probably be worried that they might have heartbled your OTP cryptographic seed as well.”
Never say never, of course. But it’s one of the things I like about SMS OTPs. Even if the random code for your current login ends up in memory, the crooks have limited time to exploit it (they have to beat its validity period, probably a minute or five, and beat you to using it). There’s no seed or sequence. Just a mechanism for proving that an SMS was sent, and you somehow got to see it.
Also, admittedly in a “security through obscurity” way, I’d guess that your SMS contact details and the SMS-send-auth server are somewhere else, possibly even at a third party SMS auth provider. Not sure how good that is in general, but in this case, it might actually be a benefit 🙂
I have a token for my VPN, but that is dedicated to that purpose. SMS would be good if I didn’t live in a mobile deadzone. If SMS is the only 2FA available, it isn’t practical to use it. Perhaps the best solution would be to allow users a choice between SMS and authentication via a different internet device …
Some providers do just that…you can choose whether to go down the software-based authenticator route, or to have SMSes sent.
Loss of the private key allows the server to be spoofed and traffic captures to be decoded, but if you can’t get into the stream of communication it doesn’t matter. If people are using OTPs then passwords go out of date fast, but session keys can still be stolen to take over an authenticated connection.
If the site’s been patched, then just logging out and back in should deal with that risk. Sites should limit the time that a single authenticated connection lasts (for example, IIRC Outlook.com makes you log back in every 24 hours, in case you forgot to logout).
As an aside, we recommend you explicitly logout anyway (yes, even from Facebook) whenever you are not actively using an account. It’s more hassle that way, but it protects you from yourself as much as from anyone else…you can’t press a Like button, say, by mistake. That mitigates the risk of clickjacking (where you Like something without even realising), amongst other tricks.
Paypal will only let you go ~10min of inactivity before they log you out.
I used to work for a national rail infrastructure organisation and we had a device that gave us an OTP code that we had to use along with a login name and password to access our systems accounts. We could do it from anywhere in the UK (except in a deep tunnel!) and it was fine as long as the small device you had in your pocket kept in step with the central system. Works fine. Most of the time.
But there’s always a catch – not everyone has a mobile phone! But they still use computers online. So why not do what the rail company and several online banks do: use a device to generate an OTP every time?