Imagine that you just checked into a hotel.
You’re in the lift on the way to your room, holding a key.
You get to your room; you wave, swipe or turn the key; and the door opens.
Assuming the door wouldn’t open until you presented the key, it certainly feels like security of a sort, doesn’t it?
But what if your key isn’t unique?
What if your key opens every other door in the hotel (or, for that matter, if every other key opens your door)?
How would you ever know, just for starters?
You could make a habit of trying your key on a random selection of doors every time you stay at a hotel, but even that might not help, because:
- You’ll probably get into trouble, especially if you do manage to open someone else’s door unexpectedly.
- Some hotels only let you access your own floor, so you’ll never know if your key might open doors on other floors.
- Your key might be fine until the hotel next reboots its lock control server.
- Other keys might open your door, even if your key doesn’t open other people’s.
In short, you have to assume that the hotel’s key management software (or its locksmith service) knows what it’s doing.
From hotel keys to router keys
Now imagine you just set up a secure router at home or at work: taking it out of its box, running its first-time setup, and connecting it to the network.
You might similarly assume that tasks related to key setup, such as generating public-private keypairs, were done correctly.
In fact, history suggests that key creation sometimes isn’t done well at all, especially on small, cheap routers, where generating pseudorandom numbers is hard to do properly because there just aren’t many sources of unpredictable data inside the router’s limited hardware.
If the clock resets itself to, say, 01:00 on 01 January 1991 every time you power up, then randomisation routines that rely on the time of day to mix things up at the outset aren’t going to get very mixed up at all.
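To see why a stuck clock is so dangerous, here is a minimal Python sketch (not real router firmware; the function name and the fixed clock value are made up for illustration) of a key generator that seeds its random number source with the boot-time clock:

```python
import random

def keygen_with_clock_seed(boot_time: float) -> bytes:
    # Flawed by design: the PRNG is seeded with the (supposed) current time.
    # If the router's clock resets to the same value at every boot,
    # every device ends up with the same "random" key material.
    rng = random.Random(boot_time)
    return bytes(rng.getrandbits(8) for _ in range(16))

FIXED_BOOT_CLOCK = 662688000.0  # a clock stuck at the same moment every power-up

key_router_a = keygen_with_clock_seed(FIXED_BOOT_CLOCK)
key_router_b = keygen_with_clock_seed(FIXED_BOOT_CLOCK)
print(key_router_a == key_router_b)  # prints True: two "different" routers, one key
```

Two supposedly independent devices produce byte-for-byte identical keys, which is exactly the failure mode the researchers stumbled upon.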
But what if conventional checks showed you that your router’s cryptographic keys looked OK, or at least that on the 10 routers you just deployed, all the keys were different?
You’d probably be reasonably happy that their “room keys” wouldn’t open each other’s doors.
Things are not always what they seem
Researchers at Royal Holloway, a UK institution already well-known for its cryptographic work, found that your happiness might be misplaced.
What’s more, they found out sort-of by mistake, though we don’t mean to denigrate their work by putting it that way.
They decided to use a fast network scanner called Zmap to check up on the FREAK vulnerability, and measure how many servers still needed patching.
FREAK, of course, is a recently-reported security bug that allows an attacker to trick each end of a TLS (secure) internet connection into dropping back to a level of encryption that is rather easy to break these days.
FREAK involves going back to 512-bit RSA keys, a size that was already successfully cracked by civilian researchers with unexceptional equipment back in 1999.
And Zmap is a special type of internet scanning tool that can, in theory, make a probe to almost every active computer on the internet within one hour.
Just the combination for a Friday afternoon experiment in a cryptography lab!
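The shape of that experiment can be sketched, very loosely, in Python: a pool of workers probes a list of hosts in parallel and tallies which ones still accept weak crypto. The `still_freakable` function and the host names below are placeholders, not a real TLS probe:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for one TLS probe: a real version would attempt an
# EXPORT_GRADE handshake and report whether the server accepted it.
def still_freakable(host: str) -> bool:
    return host.endswith(".example")  # placeholder logic only

hosts = ["a.example", "b.test", "c.example", "d.test"]

with ThreadPoolExecutor(max_workers=32) as pool:
    # Probe all hosts concurrently; keep the ones that accepted weak crypto.
    vulnerable = [h for h, bad in zip(hosts, pool.map(still_freakable, hosts)) if bad]

print(f"{len(vulnerable)}/{len(hosts)} hosts still FREAKable")
```

Zmap itself is far faster than this, because it sends stateless probes rather than full connections, but the tally-as-you-scan idea is the same.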
The team quickly measured that just under 10% of servers on the internet were still vulnerable to FREAK.
Useful information.
A little disappointing, perhaps, because by now you might expect the proportion to be much lower; but also somewhat encouraging, because the original FREAK report, released two weeks ago, put the figure at 26%.
If that was all the researchers had found, you might already consider this objective measurement to be “a good return on investment for a Friday afternoon’s work,” to borrow the authors’ own words.
Bugs within bugs
But there’s more.
While they were about it, the researchers noticed that a surprising number of servers that were FREAKable presented exactly the same 512-bit RSA key when they were tricked into falling back to old-style encryption.
As it happens, many servers cheat a little bit with RSA keys, because generating a public-private keypair is a lot slower than merely using that keypair for encryption and decryption.
So, instead of generating a unique keypair for every connection, they only generate a keypair when the server starts up, and keep on using it until the server is restarted.
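That generate-once-at-startup pattern might look something like this Python sketch, where `_generate_keypair` is a made-up stand-in for a real (and slow) RSA keypair routine:

```python
import functools
import secrets

def _generate_keypair() -> bytes:
    # Stand-in for an expensive RSA keypair generation; real code would
    # call into a crypto library here.
    return secrets.token_bytes(64)

@functools.lru_cache(maxsize=1)
def server_keypair() -> bytes:
    # Generated once, the first time a connection needs it, then reused
    # for every subsequent connection until the process restarts.
    return _generate_keypair()

print(server_keypair() is server_keypair())  # prints True: one keypair, many connections
```

Every connection gets the very same keypair, which is the trade-off described above: cheap, and tolerable as long as the key never leaves that one server.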
In theory, someone who grabs your private key could then decrypt every connection that was protected with it, which increases the risk compared to creating a new keypair every time.
But the idea is that if a crook gets into your server and acquires your temporary private key, then all security bets are off anyway, so this can be considered an acceptable risk.
What is definitely not an acceptable risk is sharing keypairs with other servers in other locations belonging to other organisations.
Otherwise, if any one of the others were hacked (or maliciously revealed the private key to cybercrooks), then you’d fall along with them.
The key that opens many doors
In the Royal Holloway paper, the authors found that one particular 512-bit RSA key, exposed during FREAK testing, was repeated a whopping 28,394 times.
That’s an awfully big hotel to have the same key for every door!
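Spotting that sort of repetition in scan data is straightforward: fingerprint each key you see and count the fingerprints. A minimal Python sketch, with made-up hosts and fingerprints standing in for the real scan results:

```python
from collections import Counter

# Hypothetical scan output: (host, fingerprint-of-512-bit-RSA-key) pairs.
scan_results = [
    ("203.0.113.1", "ab12"),
    ("203.0.113.2", "ab12"),
    ("198.51.100.7", "cd34"),
    ("203.0.113.9", "ab12"),
]

# Tally how often each key fingerprint appears across all hosts.
key_counts = Counter(fp for _host, fp in scan_results)
fingerprint, count = key_counts.most_common(1)[0]
print(f"most repeated key {fingerprint} seen on {count} hosts")
```

In the researchers’ real data, of course, the most common fingerprint turned up not three times but 28,394 times.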
Worse still, that repeated key seemed to belong to a VPN router product.
Ironically, a VPN is a Virtual Private Network – a secure, encrypted network “tunnel” that is supposed to let your remote workers connect back to head office in much greater safety than if they were to use the open internet.
Does it matter?
You’re probably thinking, “Why fuss about repeated RSA keys if those keys only show up during a FREAK attack, which is a bug in its own right anyway?”
The point is that the repeated-key bug reveals that, somewhere in those affected VPN routers, there exists an egregious programming mistake to do with cryptographic randomness.
That bug could affect other aspects of the router’s encryption setup code.
Yet the researchers only spotted that bug because they happened to be looking for a completely different one.
Two take-aways
There are two important take-aways here:
- Finding programming mistakes is hard; sometimes it requires serendipitous coincidences.
- If you are the vendor of the affected router (check your own products: you’ll soon know if it’s you), you have a code review to do.
Image of cutaway lock thanks to Wapcaplet at the English language Wikipedia.
Image of router courtesy of Shutterstock.
A perfect example of the big problem with computer security (and crypto in particular) – the sunny day “Does it work” type testing of security functions is at best useless, and creates a false sense of security (Pun not intended, but apt!). But outside specialist crypto/security companies most testing still focusses on that sunny day!
In the deep and distant past I have seen encryption systems where the encryption was fundamentally broken by a bug, so in fact no encryption was taking place at all, but the product was shipped as “it worked” (i.e. traffic successfully passed across the network).
Couldn’t vendors find this problem by simply testing two routers and make sure the keys aren’t identical? Are the vendors really THAT sloppy?
Thing is, unless and until you test in EXPORT_GRADE mode (512-bit RSA keys), the bug doesn’t show up.
And no well-behaved TLS client has bothered to ask for an EXPORT_GRADE connection since the US regulators finally decided that the EXPORT_GRADE rules were silly and ditched them, some time around AD2000.
So, the whole EXPORT_GRADE cipher thing went out of sight and out of mind. But not out of many TLS libraries’ source code…meaning that there were plenty of code paths that could have revealed various bugs (like this one) that probably fell off most people’s testing lists 🙁
Ahhhh. OK, that makes sense.
Still a process hole, though: If a company is not going to test specific scenarios, then it shouldn’t allow those scenarios.
But, you’re right: once it fell off the radar, nobody was going to think of it until it is too late.
“And Zmap is a special type of internet scanning tool that can, in theory, make a probe to almost every active computer on the internet within one hour.
Just the combination for a Friday afternoon experiment in a cryptography lab!”
dunno what country you’re in but that sounds like an awful lot of CMA offences in an hour in the UK
No more than using “ping” many times…
pinging someone doesn’t constitute a section one CMA offence – attempting to gain access does.
and your ISP’s terms and conditions are quite happy with port scanning too, are they, whilst we’re at it ?
nobody gave you the right to break the law you know..
Firstly, I didn’t do the research, so my ISP’s T&Cs on port scanning don’t come into it. I have little doubt that the researchers had clearance from their own service provider – perhaps the UK academic research network? – to proceed.
Secondly, the researchers weren’t trying to gain *unauthorised* access, which is surely where the CMA would come in?
They connected to public-facing TLS servers and conducted a single TLS handshake with each. As far as I can see, they never actually gained unauthorised access, and they never intended to. There was no DDoS, neither in theory nor in practice.
I am not a lawyer, but I am struggling to see how this could fall foul of the Computer Misuse Act…if this were unlawful, surely search engine spiders like the Googlebot would be unlawful, too?
Using fairly innocuous port scans just looking at FREAK from well-known security researchers’ IPs, and then NOT immediately following up with many suspicious exploit attempts from a WAN behind many proxies, probably wouldn’t worry many admins. That is, if said admins even checked their logs for this activity.