Google just announced that its HTTPS web pages will be ditching 1024-bit RSA keys in favour of 2048 bits.
“Pah,” I hear you say. “I have one or two questions about that – three questions, in fact.”
- How is this newsworthy when many other web properties have already made the switch to 2048 bits? (Kim “Big Fella” Dotcom’s mega.co.nz, for example.)
- Why switch if 1024 bits is much bigger than the largest RSA key yet known to have been cracked, at 768 bits?
- Why the fuss about 1024 bits anyway, if just 128 bits is considered more than enough for other encryption algorithms, such as AES?
Let’s start at the end: why thousands of bits of RSA key but only hundreds for AES?
An n-bit symmetric key gives 2^n different possible keys to choose from.
If we assume there is no algorithmic shortcut, we have to try all 2^n of them to be certain of cracking the key.
For obvious reasons, this is called a brute force attack.
A cryptographic attack that requires an equivalent effort to brute-forcing n bits of symmetric cipher key is said to have a security strength of n.
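To make that concrete, here's a toy sketch of a brute force attack against a deliberately tiny key space. Nothing here comes from a real cipher: the hash-based "fingerprint" is just a stand-in for checking whether a trial key decrypts the ciphertext correctly. Scaling N_BITS up to 128 is what makes AES-sized keys impractical to attack this way.

```python
# Toy brute-force attack: recover a deliberately tiny "key" by trying
# every candidate in its 2**n key space. With n = 20 this finishes in
# moments; with n = 128 (AES-sized) it would outlast the universe.
import hashlib

N_BITS = 20                       # toy key size (NOT secure)
secret_key = 0x5A5A5              # the key we pretend not to know (< 2**20)

def fingerprint(key: int) -> bytes:
    """Stand-in for 'does this key decrypt the ciphertext correctly?'"""
    return hashlib.sha256(key.to_bytes(16, "big")).digest()

target = fingerprint(secret_key)

# Try all 2**N_BITS candidates until one matches.
for candidate in range(1 << N_BITS):
    if fingerprint(candidate) == target:
        print(f"key recovered after {candidate + 1} tries: {candidate:#x}")
        break
```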
According to the US National Institute of Standards and Technology (NIST), security strengths of 112 and above are considered OK until the end of 2030. Security strengths below 112 are already considered deprecated, a word that means “used with disapproval.”
From the end of 2013, they’ll move from “deprecated” to “disallowed”, at least if you follow NIST’s playbook. So, you have until the end of the year to get all your cryptosystems up to a security strength of 112 or more.
AES uses keys of 128, 192 or 256 bits in length, so AES implicitly meets this condition.
But RSA encryption is a public/private key cipher, meaning you have one key to lock and another key to unlock.
You make the locking key public, so that anyone can encrypt messages and send them to you; you keep the corresponding unlocking key private, rather obviously, so that only you can decrypt them.
Technically and algorithmically, a 128-bit RSA key isn’t anything like a 128-bit AES key. It’s actually a 128-bit number n that is constructed by multiplying together p and q, two randomly-chosen 64-bit prime numbers. To crack such a key requires you to factorise n back into p and q.
This sort of factorisation is computationally complex, but it doesn't take 2^128 tries to guarantee a result.
Straight off the bat, we know that the smaller of the two prime factors can be no bigger than √n, and √(2^128) is 2^64. So that cuts the maximum number of tries to 2^64. And neither factor can be even (an even number bigger than 2 can't be prime), which instantly halves the maximum number of tries to 2^63. Clearly, the cost of factoring n-bit prime products is well below 2^n.
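To see the principle in miniature, here's a toy sketch using two small well-known primes (65521 and 65537) in place of the 64-bit ones above. The trial-division loop is nothing like the algorithms used against real RSA keys, but it shows why the search stops at √n and skips even numbers:

```python
# Toy illustration: a ridiculously small "RSA modulus" n = p * q, and the
# trial-division attack described above. Only odd candidates up to sqrt(n)
# need checking, which is why the work is far below 2**bits.
from math import isqrt

p, q = 65_521, 65_537          # small primes standing in for 64-bit ones
n = p * q                      # the public modulus

def factor(n: int) -> tuple[int, int]:
    if n % 2 == 0:
        return 2, n // 2
    for candidate in range(3, isqrt(n) + 1, 2):   # odd numbers only
        if n % candidate == 0:
            return candidate, n // candidate
    raise ValueError("n is prime")

print(factor(n))               # recovers (65521, 65537) almost instantly
```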
In fact, a 128-bit RSA key would be absurdly weak by modern standards. In practice, a 1024-bit RSA key is only considered equivalent to an 80-bit symmetric key, and thus has a security strength of 80.
So the fuss about 1024-bit RSA keys is that they too, like AES or similar keys below 112 bits, will become “disallowed” at the end of the year.
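For reference, NIST's published table of comparable strengths (from SP 800-57) fits in a few lines; here it is wrapped in a small, purely illustrative lookup:

```python
# NIST SP 800-57 comparable security strengths (illustrative lookup only):
# an RSA modulus of the given size is credited with roughly the same
# strength as a symmetric key of the listed bit length.
RSA_STRENGTH = {
    1024: 80,     # deprecated; disallowed after 2013
    2048: 112,    # acceptable through 2030
    3072: 128,
    7680: 192,
    15360: 256,
}

def minimum_rsa_bits(required_strength: int) -> int:
    """Smallest listed RSA modulus meeting the required security strength."""
    return min(bits for bits, s in RSA_STRENGTH.items() if s >= required_strength)

print(minimum_rsa_bits(112))   # -> 2048
```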
Now we come to the second question: why the jump from 1024 to 2048 bit keys?
In 2014, symmetric keys will need to go from a minimum of 80 bits to a minimum of 112 bits; in 2031, they’ll go from 112 to 128 bits. Those are key-size increases of 40% and about 15% respectively.
But in 2014, RSA key sizes are required to grow by 100% (1024 to 2048 bits), and in 2031 by 50% (2048 to 3072 bits). Why the discrepancy in the scale-up?
The reason is that the complexity of a brute-force attack against a symmetric key grows exponentially with the number of key bits, so each additional key bit multiplies the strength by a constant factor of 2.
But each additional key bit in an RSA key multiplies the strength by an ever-decreasing amount, so you need a bigger jump in key size for the same increase in resilience to brute force attack.
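If you want a feel for the numbers, the run-time heuristic for the general number field sieve (the best publicly-known factoring algorithm) can be turned into a rough strength estimate. The sketch below ignores the formula's o(1) term and real-world constants, so it only lands within a handful of bits of NIST's official figures:

```python
# Rough security-strength estimate for an RSA modulus, using the heuristic
# GNFS complexity exp((64/9)^(1/3) * (ln n)^(1/3) * (ln ln n)^(2/3)).
# Constants and the o(1) term are ignored, so this only roughly tracks
# NIST's official 80 / 112 / 128-bit figures.
from math import exp, log, log2

def gnfs_strength_bits(modulus_bits: int) -> float:
    ln_n = modulus_bits * log(2)             # ln(n) for a modulus of that size
    work = exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * log(ln_n) ** (2 / 3))
    return log2(work)                        # express the work factor in bits

for bits in (1024, 2048, 3072):
    print(bits, round(gnfs_strength_bits(bits)))
# Prints roughly 87, 117, 139 -- the same shape as NIST's 80 / 112 / 128,
# and clearly growing far more slowly than the modulus itself.
```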
And finally, the first question: is this a big deal only because Google has announced it?
It shouldn’t be a big deal for anyone to make an announcement like this. Indeed, it should be expected and unexceptional, as Google itself explains.
In theory, you should easily be able to change website certificates as a matter of routine, not least because they expire and need refreshing anyway. Everyone’s software should automatically adapt.
But SSL certificates don’t usually stand alone, or else anyone could mint a certificate that claimed to be from sophos.com, or microsoft.com, or anywhere they liked.
So SSL certificates are themselves generally signed by other people who vouch for you – firstly by one or more intermediaries (which might be security teams in your own company) and finally by a root certifier. This creates a so-called “chain of trust,” topped out by a root certificate that is automatically trusted by software on your computer, such as your browser or the operating system itself.
The list of root certificates is often rather long – perhaps alarmingly long when you think about the power and authority it conveys.
Of course, root certificates themselves aren't immune from expiry, or from compromise, or from needing key-size updates. So software that uses its own list of trusted roots must provide a way to update that list, for changes necessitated both by routine (e.g. expiry) and by emergency (e.g. a hack of the certificate authority's network).
Note that Google will be changing its root certificate size to 2048 bits as well, so that all the certificates in its chain of trust are 2048 bits.
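If you're curious about the certificates you rely on, here's a minimal sketch that reports the public key size of the leaf certificate a server presents. It assumes the third-party cryptography package is installed, and it doesn't examine the intermediates or the root:

```python
# Minimal check of the public key size in the leaf certificate a server
# presents. Assumes the third-party 'cryptography' package is installed;
# this inspects only the server certificate, not the rest of the chain.
import socket, ssl
from cryptography import x509

def leaf_key_bits(hostname: str, port: int = 443) -> int:
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            der = tls.getpeercert(binary_form=True)
    cert = x509.load_der_x509_certificate(der)
    return cert.public_key().key_size      # size of whatever key the cert carries

print(leaf_key_bits("www.google.com"))
```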
So Google is warning all of us, in good time, in case any of us have software (or, more challengingly, firmware burned into devices like games consoles, phones, and printers) that relies on hard-wired lists of root certificates.
If you have software that relies inflexibly on hard-wired SSL trust lists, take this as a good time to change the way your code works.
As Google points out, firmly but fairly, in its FAQ:
The only way to do this correctly is to build software that understands that Roots can change, and can adapt to that.
Google does have some self-interest here, as it doesn't want potential customers being put off by certificate warnings as a result of this change. On the other hand, those certificate warnings shouldn't really happen, so Google is acting in the interest of the whole ecosystem by highlighting these issues.
The takeaways?
- Build SSL certificate flexibility into your software.
- Switch to 2048-bit RSA keys before the end of 2013. (A minimal key-generation sketch follows below.)
For the greater good of all!
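On that second takeaway, here's a minimal sketch of generating a fresh 2048-bit key and certificate signing request with the third-party cryptography package. The hostname and file names are placeholders, and your CA's process may differ:

```python
# Minimal sketch (not Google's procedure): generate a fresh 2048-bit RSA key
# and a certificate signing request, ready to submit to your CA.
# The hostname and output file names below are placeholders.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com")]))
    .sign(key, hashes.SHA256())
)

# Write the private key (unencrypted here; protect it properly in practice)
# and the CSR out as PEM files.
with open("server.key", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))
with open("server.csr", "wb") as f:
    f.write(csr.public_bytes(serialization.Encoding.PEM))
```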
Even if 1024-bit RSA keys can't be broken right now (unless the attacker has a really big budget), your encrypted connections can still be recorded and decrypted later, once the key is broken, because SSL doesn't provide forward secrecy by default. That can obviously lead to the disclosure of sensitive data, such as email content, that was transferred over a connection protected by the weak key. Since many users keep the same password for years, an attacker may be able to take over an account using a password recovered from a connection recorded several years earlier. If the attacker recorded the account creation, he can also decrypt the answers to the security questions used for resetting the password. Even if the user regularly changes his password, knowing the answers to the security questions may still allow the account to be taken over.
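One illustrative mitigation (not something the article itself covers) is to prefer forward-secret ECDHE cipher suites on the server, so that a recorded session can't be decrypted later merely by breaking or obtaining the long-term RSA key. A minimal sketch, assuming Python's ssl module with an OpenSSL build that offers those suites, and using placeholder file names:

```python
# Illustrative only: prefer ECDHE (forward-secret) cipher suites in a Python
# TLS server context, so recorded traffic can't be decrypted later just by
# breaking or stealing the server's long-term RSA key.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server.crt", "server.key")   # placeholder file names
ctx.set_ciphers("ECDHE+AESGCM:ECDHE+AES")         # ephemeral key exchange only
```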
“Unless he has a very big budget”: NSA-big budget maybe, but I don’t believe that it is practical right now.
Your attack seems a bit academic. But yes, theoretically this is possible, I guess.
Do you suppose you could explain this in terms the average computer user could understand? How are we to 1) Build SSL certificate flexibility into your software and 2) Switch to 2048-bit RSA keys before the end of 2013?
I'm sure it's a lot of good information for those who understand it, but unfortunately, I'm not one of those people. Can you break it down just a little better, please?
Apologies for that. But the "Anatomy of an XXXX" articles are supposed to be a bit more technical than average, and they tend to be aimed at readers who are on the network or system administration side of the fence.
If you aren't involved in software engineering, or quality assurance, or technical support, you can probably ignore point 1, since you aren't responsible for any SSL key management.
And if you're not running your own web server, you can't directly influence the key size of the secure web pages on your own site, so you can ignore point 2 if you like.
If you have someone else who looks after these things for you…for example you have a cloud-hosted web server…try asking your provider these questions. By all means, forward them a link to this article and say, "Any potential problems coming up for me in this regard?"
Thanks, Paul! I knew the info was way above my head, but since I try and read all articles put out by Sophos, (and I forward quite a few of them too), I went ahead and read it, despite it making me feel like a 5 year old in a college class. 🙂 It might be helpful in the future if you put a statement at the beginning of your articles that states who the intended audience is and/or who we should consider forwarding this information to, if applicable. Just a suggestion! Thanks for getting back to me and so quickly too.
I think if you are just an average user, and not a computer programmer, running a software company, running a web site etc., you shouldn't have to do anything. You may have to update some of your software some time before the end of the year. That means download and install newer versions of the software. On a mac it's usually done automatically. On a PC I don't know, probably the same. Probably you don't have to do anything at all.
But can it be trusted? Has a backdoor of some sort been included to allow govt's to take a peek? I don't know, just asking if this is possible.
What is "it" here?
Do you mean "can Google be trusted," "can SSL be trusted," or "can RSA [the algorithm, not the company] be trusted"?
There is no backdoor in RSA or SSL.
With enough influence you could force, urge, trick or hack a root certifier into signing fake keys, saying (for instance) that a key belonged to example.com when in fact it belonged to you.
That would last until example.com noticed that a key was floating around in its name that it knew nothing about…
Excellent article.
Thank you, kind Sir.
There is some really good information in this article that I was not aware of until now. Your style of writing is really good, it includes the mathematical fundamentals and why they matter but isn’t too hard to understand.
Thanks Paul for writing this article. As Mark said above, you have done an excellent job!
For those not wanting to do the maths themselves, I’d recommend http://www.keylength.com/en/2/
Example: Using Lenstra's equations (2004) and the Double Moore factorising law, a conservative estimate of the equivalent key length of a 256-bit AES key would be a 53516-bit RSA key.
Don’t you just love maths? 🙂
That's all really interesting – but isn't Google's 512-bit DKIM key, which they use to verify email, a problem? Or is that still live as a "honey pot" for developers? http://goo.gl/MMgK6
Google's DKIM key (at least, the one I checked, based on the headers of an email sent to me from Gmail) is 2048 bits.
(FWIW, DKIM has no chain of trust and no concept of a CA. You publish your own public key via DNS every time anyone wants to check the sender of an email.)
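For the curious, here's a rough sketch of checking such a key yourself: fetch the selector's TXT record and decode the p= tag. It assumes the third-party dnspython and cryptography packages, and the selector below is a placeholder – the real one appears in the s= tag of the message's DKIM-Signature header.

```python
# Rough sketch: look up a DKIM public key in DNS and report its size.
# 'selector' is a placeholder -- take the real one from the s= tag of a
# DKIM-Signature header. Requires the dnspython and cryptography packages.
import base64
import dns.resolver
from cryptography.hazmat.primitives.serialization import load_der_public_key

def dkim_key_bits(selector: str, domain: str) -> int:
    name = f"{selector}._domainkey.{domain}"
    answer = dns.resolver.resolve(name, "TXT")
    record = "".join(
        part.decode() for rdata in answer for part in rdata.strings
    )
    # The record looks like "v=DKIM1; k=rsa; p=<base64 public key>"
    tags = dict(item.strip().split("=", 1) for item in record.split(";") if "=" in item)
    key = load_der_public_key(base64.b64decode(tags["p"]))
    return key.key_size

print(dkim_key_bits("selector", "gmail.com"))   # placeholder selector
```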
At some point, will secure RSA become impractical, simply because of the number of bits required to encompass a sufficient number of possible prime pairs or because the computing time to discover the next prime number is an increasing multiple of the time to crack RSA drawn from all smaller primes?
If so, won't this be catastrophic for our (digital) world, or can we depend upon a replacement public-private key method becoming available?
"How is this newsworthy when many other web properties have already made the switch to 2048 bits? (Kim "Big Fella" Dotcom's mega.com.nz, for example.)"
The New Zealand 'company' TLD isn't .com.nz (like Australia), but .co.nz.
The correct domain is mega.co.nz.
Thanks…fixed. "Muscle memory", I suppose 🙂
Google already have the power to crack 1024-bit keys thanks to D-Wave Two. For those who don't know, any public-key encryption system such as RSA that has previously been thought to be practically unbreakable should not be relied upon, as quantum computing has made massive leaps and bounds over the past 2 years. New breeds of quantum supercomputers are less bound by bottlenecks in transistor speeds and are now utilising the massive parallelism from the effect of superposition, where an atom (qubit) can have many states…
Something to think about over the next decade but we need to start rethinking cryptography as a whole.