The security of online transactions is again in the spotlight as a pair of UK cryptographers take aim at TLS.
TLS, or Transport Layer Security, is the successor to SSL, or Secure Sockets Layer.
It's the system that puts the S into HTTPS (that's the padlock you see on secure websites), and provides the security for many other protocols, too.
The name comes from the fact that encrypted TLS packets have thirteen header bytes that are consumed in one of the cryptographic calculations on which TLS relies.
The paper's name is a bit cheeky - the authors wryly note that "in some sense, thirteen is lucky, but twelve would have been luckier," since 12-byte headers would make their attack even more efficient.
To give you some idea of what makes cryptographers tick, and how they manage to extract order even out of carefully-contrived chaos, here's how it all started.
The authors of the paper noticed that the encrypted packets in most TLS sessions:
- Are encrypted with a secure block cipher, usually AES in CBC mode, to keep the contents secret.
- Include a strong cryptographic checksum, usually HMAC-SHA1, to prevent errors and forgeries.
You might assume that using not one but two strong cryptographic primitives would inevitably boost security, but cryptographers don't think that way.
Nadhem AlFardan and Kenneth Paterson of Royal Holloway, a renowned centre for information security research that is part of the University of London, realised they could use these two security features against one another because of the way they are combined in TLS.
Greatly oversimplified, the attack relies on the following details:
- The AES block cipher encrypts 16 bytes at a time. Data packets that aren't a multiple of 16 bytes long must be padded out until they are.
- The TLS packet checksums are stored inside the encryption layer, after the raw data but before any necessary padding.
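The padding detail above can be sketched in a few lines of Python. This is a simplified illustration, not real TLS code (the function name is hypothetical): in TLS, the padding consists of pad_len + 1 bytes, each holding the value pad_len, so that the total length becomes a multiple of the 16-byte AES block size.

```python
def tls_cbc_pad(data: bytes, block_size: int = 16) -> bytes:
    """Append TLS-style CBC padding: pad_len + 1 bytes, each equal to pad_len."""
    # Choose pad_len so that len(data) + pad_len + 1 is a block multiple.
    pad_len = (-(len(data) + 1)) % block_size
    return data + bytes([pad_len] * (pad_len + 1))

# 13 bytes of data gets three bytes of 0x02 appended, reaching 16 bytes.
padded = tls_cbc_pad(b"A" * 13)
assert len(padded) % 16 == 0
assert padded.endswith(bytes([2, 2, 2]))
```

Note that even data which is already a block multiple gets a full extra block of padding, since at least one pad_len byte must always be present.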
When a TLS server receives a packet that is HMAC-SHA1 checksummed and AES-CBC encrypted, it needs to validate it, like this:
- Decrypt the received data, which should be a multiple of 16 bytes in length.
- Identify any padding at the end of the decrypted buffer, and strip it off.
- Remove the last 20 bytes (the length of an HMAC-SHA1 checksum) and store them as the reference checksum.
- Compute the HMAC-SHA1 of what's left in the buffer (less padding and reference checksum).
If the computed HMAC-SHA1 checksum does not match the reference checksum, then there has been an error or a forgery. The server will send back an error message and terminate the connection.
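The validation steps above can be sketched in Python using the standard library's hmac and hashlib modules. This is a hypothetical simplification, not code from any real TLS stack: it works on an already-decrypted record, and real TLS also feeds 13 bytes of header material into the checksum.

```python
import hashlib
import hmac

MAC_LEN = 20  # HMAC-SHA1 produces a 20-byte checksum

def validate_record(plaintext: bytes, mac_key: bytes) -> bool:
    """Naive sketch of the server-side checks on a decrypted TLS record."""
    # Strip the padding: the last byte gives pad_len, so remove pad_len + 1 bytes.
    pad_len = plaintext[-1]
    body = plaintext[: len(plaintext) - pad_len - 1]
    # Split off the 20-byte reference checksum stored before the padding.
    data, ref_mac = body[:-MAC_LEN], body[-MAC_LEN:]
    # Recompute HMAC-SHA1 over what's left. The time this step takes
    # depends on len(data), which is exactly what the attack measures.
    return hmac.new(mac_key, data, hashlib.sha1).digest() == ref_mac
```

Note that the final `==` comparison is itself naive; a careful implementation would also make that comparison constant-time.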
Now imagine that you're a MiTM, or a man-in-the-middle. You can't decrypt and re-encrypt the packets as they fly past, but you can intercept them, shorten them, alter them subtly, and pass on the modified versions.
Every time you do this, you'll break the TLS session you're attacking, but you can do so in a way that may just leak information about what was inside the now-broken session.
That won't immediately give you the crown jewels, but if you can extract anything from the enciphered data stream, you've violated the cryptographic sanctity that TLS is supposed to provide.
The trick is that when you truncate an encrypted packet, there's a chance that the plaintext at the end of the chopped-off data stream will look like the sort of padding bytes that TLS would have added if the original data had been shorter to start with.
In that case, the TLS server will strip off what it thinks is padding, extract what it thinks is the packet checksum, and then verify that the checksum matches.
The checksum won't match, of course, but you don't care about that.
What you care about is how long the TLS server takes to reply with its error message when it realises the packet is invalid.
Because of the way that HMAC-SHA1 works, it takes four units of CPU time to checksum the first 55 bytes of data, plus an extra time unit for every additional 64-byte chunk (or part thereof) at the end.
The authors realised that by making a two-byte tweak to part of an encrypted packet and truncating it, they had a chance of approximately one in 65,000 that the server would end up wrongly thinking the message was padded, stripping the padding and checksumming what was left.
Unless the TLS server code had been carefully engineered to consume the same amount of time regardless of the content of the decrypted packet, this would take four-fifths of the time of processing an untweaked packet.
This would happen because the tweaked packet had tricked the server into checksumming 55 bytes or less of data, not 56 or more.
If the researchers could reliably measure this speedup by timing how fast the server's error response came back, then, based on the tweak, they could now compute two bytes of the original packet.
By systematically trying all possible two-byte tweaks (2¹⁶, or 65536 of them), and assuming perfect timing measurement, they could guarantee to extract two of the encrypted bytes.
Once those first two bytes are cracked, a further 14 bytes can be cracked, one byte at a time, by trying all 256 possible tweaks of each byte, for a total "tweak cost" of 2¹⁶ + 14×2⁸.
And that's the Lucky Thirteen attack, in grotesquely brief operational, if not theoretical, terms.
It's not a terribly practical attack, not least because perfect timing measurements are impossible:
- Each tweak attempt causes a TLS session to terminate, which may be both noticeable and time-consuming.
- Each tweaked session needs to have the same plaintext at the same packet location for the tweaking to be done exhaustively.
- The authors needed 2⁷ repetitions of an exhaustive set of 2¹⁶ tweaks (that's eight million dud TLS sessions!) to produce enough statistically-significant timing data for a reliable result.
Oh, and that was under ideal network circumstances, with the TLS client, server and MiTM computer on the same dedicated LAN.
Nevertheless, it's an important result because, as mentioned above, it punctures some of the cryptographic sanctity that TLS is supposed to provide.
Solutions and mitigations, which designers of future protocols might bear in mind, include:
- Design and write your code so it isn't measurably quicker or slower when errors occur, even if this means performing redundant computations.
- Use a stream cipher, not a block cipher, to avoid the need for plaintext padding.
- Checksum the data after encryption, rather than encrypting the checksum. This ensures that the quantity of data to be checksummed does not depend on the plaintext.
- Use an authenticated encryption algorithm, like AES-GCM, that combines checksumming and ciphering.
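The first mitigation in the list, constant-time error handling, applies to checksum comparison as well. As one small illustration (a sketch, not part of any TLS implementation), Python's standard library provides a comparison that takes the same time whether the inputs differ in the first byte or the last:

```python
import hmac

def checksums_match(computed: bytes, reference: bytes) -> bool:
    # A naive `computed == reference` may return as soon as one byte
    # differs, leaking timing information to a careful attacker;
    # hmac.compare_digest examines every byte regardless.
    return hmac.compare_digest(computed, reference)
```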
Interestingly, the last-listed mitigation above (use authenticated encryption) is already supported in TLS version 1.2, which came out nearly five years ago.
According to the SSL-Pulse project, fewer than 12% of the web sites it surveyed in January 2013 supported TLS 1.2, and most browsers (IE on Windows 7 and 8 being a notable exception) don't support it, either.
Probably time to move forward!