Imagine that you’re a hacker who has taken over a Wi-Fi access point at a coffee shop.
You don’t need to be there in person; you just need to be able to log in to the access point as root (UNIX’s name for the system administrator).
If you can do that, you can almost certainly spy on, and sneakily modify, everyone’s network traffic.
However, you’re out of luck for coffee shop customers who are using HTTPS, because that traffic is encrypted.
At least, that’s the theory.
But by modifying traffic as it passes through your rogue access point, you can interfere with the way HTTPS connections are established.
That means you may be able to trick both ends of the connection – typically a browser and a server – into downgrading their security, falling back to using an old and less secure HTTPS version known as SSL 3.0.
If you can do that, you may then be able to extract confidential data, thanks to a bug known as CVE-2014-3566.
That’s not much of a name, so the Google researchers who worked out this attack have dubbed it POODLE, short for Padding Oracle On Downgraded Legacy Encryption.
Woof!
How SSL 3.0 works
Very simply put, POODLE works because the encrypted data in an SSL 3.0 packet is arranged something like this:
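```
+-----------+-----+---------------+---------------+
| user data | MAC | padding bytes | padding count |
+-----------+-----+---------------+---------------+
|<------ all encrypted with the chosen cipher --->|
```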
The MAC is a Message Authentication Code, or cryptographic checksum, used to ensure that the encrypted data hasn’t been tampered with along the way.
The padding is just that: extra bytes tacked on the end of the data to pad it out so that the encryption algorithm can process it properly.
Padding is not needed if the connection is using what’s known as a stream cipher, such as RC4, because stream ciphers can encrypt any number of bytes at a time, from one to millions. But if the HTTPS session is configured to use SSL 3.0 with a cipher such as AES-CBC or 3DES-CBC, it has to consume its data in fixed-size chunks, because AES and 3DES are block ciphers that can only encrypt one full chunk at a time: 8 bytes for 3DES, or 16 bytes for AES.
SSL 3.0 padding is done by adding zero or more bytes until the data size is one byte shorter than a multiple of the cipher block size.
Then, a final byte is added denoting how many padding bytes were added before.
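As a minimal sketch in Python (illustrative only, not a real TLS implementation; SSL 3.0 doesn’t specify what the padding bytes contain, so we just use zeros):

```python
def ssl3_pad(record: bytes, block_size: int = 16) -> bytes:
    # Pad so the total length is a multiple of the block size, with the
    # final byte recording how many padding bytes came before it.
    pad_len = (block_size - (len(record) + 1)) % block_size
    return record + bytes(pad_len) + bytes([pad_len])

assert len(ssl3_pad(b"A" * 32)) == 48   # 32 bytes of data+MAC -> 15 padding bytes + count byte
```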
Using the last byte to signal the size of the padding is convenient, because it makes it easy for the other end of the connection to get rid of the padding later, by doing this:
- Decrypt all the data (user data, MAC and padding).
- Chop off the last byte to get a count of how many padding bytes there are.
- Chop off the padding bytes.
- Chop off the MAC to leave just the confidential data.
- Verify that the MAC matches the checksum of the confidential data.
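In Python, a sketch of that receive-side logic might look like this (the MAC construction here is simplified to HMAC-MD5 for illustration; real SSL 3.0 uses a different MAC):

```python
import hashlib
import hmac

MAC_SIZE = 16   # matching the 16-byte MAC used in the example that follows

def unpad_and_verify(decrypted: bytes, mac_key: bytes) -> bytes:
    pad_len = decrypted[-1]                          # last byte = number of padding bytes
    body = decrypted[:-1 - pad_len]                  # chop off the count byte and the padding
    data, mac = body[:-MAC_SIZE], body[-MAC_SIZE:]   # split off the MAC
    expected = hmac.new(mac_key, data, hashlib.md5).digest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("bad MAC")                  # tampering with the data is caught...
    return data                                      # ...but the padding bytes were never checked
```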
Note that if the data and MAC already add up to an exact multiple of the cipher block size, you can’t just leave it at that: the count byte always has to be there, even if only to say that you’ve added zero padding bytes, so in that case you end up adding a whole extra block of padding.
So, if we have 16 bytes of data plus a 16-byte MAC, and we are encrypting with AES (i.e. in 16-byte blocks), we’ll need 15 bytes of padding, plus a final byte (set to the value 15) to round out the overall content to 48 bytes, an exact multiple of the AES block size:
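```
|<----- block 1 ----->|<----- block 2 ----->|<--------- block 3 -------->|
|  16 bytes of data   |     16-byte MAC     | 15 padding bytes | count=15|
```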
Cryptographic flaws in SSL 3.0
You probably have an inkling of the flaws in SSL 3.0 from the layout sketched above.
Firstly, the padding bytes and the padding count aren’t included in the MAC, and secondly, the 15 padding bytes can be anything at all and therefore cannot be validated or verified.
So, if you tamper with the last 16 bytes of the encrypted data, replacing them with absolutely anything you like as they pass through your rogue Wi-Fi router, there’s still a 1-in-256 chance that the entire 48-byte record will be accepted by the server.
If the last byte in the modified last block just happens to decrypt to the value 15, then the server will chop off that final byte, and then blindly but correctly chop off the other 15 bytes of padding, blissfully ignorant that the padding bytes are completely different from when they left the user’s browser.
That will leave the MAC, which will match the checksum of the confidential data, and the packet will be accepted even though it was altered.
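You can convince yourself of that 1-in-256 figure with a quick simulation in Python (a toy model that simply treats the decrypted count byte of a tampered block as uniformly random):

```python
import secrets

trials = 100_000
# The record is accepted only if the decrypted count byte happens to be 15.
accepted = sum(1 for _ in range(trials) if secrets.randbelow(256) == 15)
print(f"{accepted}/{trials} accepted (~{accepted / trials:.3%}; 1/256 is ~0.391%)")
```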
This may not sound like much of a cryptographic hole, since only data that is going to be discarded anyway has been tampered with.
But it should be ringing a warning bell: modifications should always be spotted, and received data with any inconsistencies in it should be rejected.
Deliberately devious tweaks
Now imagine that instead of randomly altering those last 16 bytes, you try some deliberately devious tweaks instead.
What if you copy the first 16 encrypted bytes and write them over the encrypted padding bytes, and then pass the data along?
If the server accepts your modified data stream, you now know that the last byte of the block you copied decrypted to the value 15, otherwise the server would have extracted the wrong MAC and the data verification would have failed.
You just tricked the server into telling you something about the encrypted data you copied across!
Likewise, if the server reports an error, you know that the last byte of the first 16-byte block is not 15.
That doesn’t sound like much of an information leak, but in this case, size doesn’t matter: strong encryption isn’t supposed to leak anything at all.
You shouldn’t be able to extract anything about the plaintext in an SSL 3.0 data packet merely by fiddling with the encrypted data stream.
Clearly, we have a problem.
Cipher Block Chaining
Actually, the description above isn’t quite correct, because we omitted one aspect of the way that SSL 3.0 does its block ciphering, namely that it uses Cipher Block Chaining, or CBC.
That’s actually a security enhancement that XORs the previous block of ciphertext with the current block of plaintext before encrypting each block.
The first plaintext block, of course, doesn’t have any previous ciphertext to draw on, so it is XORed with a random starting block known as the Initialisation Vector, or IV.
CBC ensures that even a run of identical blocks, such as a sector’s worth of zeros, won’t encrypt into a recognisably repeating pattern of ciphertext blocks.
That’s because of the random IV mixed in at the start, and the randomness that then percolates through the encryption of each subsequent block.
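Here’s a minimal sketch of CBC in Python, assuming the third-party pycryptodome package (pip install pycryptodome) for the raw per-block AES operation:

```python
import os
from Crypto.Cipher import AES

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    ecb = AES.new(key, AES.MODE_ECB)   # raw, one-block-at-a-time AES
    prev, out = iv, b""
    for i in range(0, len(plaintext), 16):
        block = ecb.encrypt(xor(plaintext[i:i+16], prev))
        out += block
        prev = block                   # this ciphertext block feeds into the next one
    return out

def cbc_decrypt(key: bytes, iv: bytes, ciphertext: bytes) -> bytes:
    ecb = AES.new(key, AES.MODE_ECB)
    prev, out = iv, b""
    for i in range(0, len(ciphertext), 16):
        block = ciphertext[i:i+16]
        out += xor(ecb.decrypt(block), prev)
        prev = block
    return out

key, iv = os.urandom(16), os.urandom(16)
msg = b"sixteen byte blk" * 3          # 48 bytes, like the example above
assert cbc_decrypt(key, iv, cbc_encrypt(key, iv, msg)) == msg
```

Note how each ciphertext block mixes into the encryption of the next, so even a repeated plaintext block never produces a repeated ciphertext block.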
So, when the server unscrambles your sneaky copy of those first 16 encrypted bytes that you wrote over the padding block at the end, it decrypts them and then XORs the result with the previous 16 bytes of ciphertext (which just happen to be the encrypted MAC – the middle block in our 48-byte example above).
Ironically, this actually improves your chances of learning something about that last byte in the copied block.
As described above, with straight decryption only, you could only tell if the last byte was 15 or not.
But with CBC, that last byte is decrypted and then XORed with what is essentially a random number.
In other words, it doesn’t take a plaintext byte of exactly 15 to trigger the POODLE trick and give the byte away: whether the check passes depends on the XOR of the decrypted byte with the preceding ciphertext byte, and that changes with every request.
In fact, if you can somehow persuade the user’s browser to generate the same HTTPS request many times, for example by tricking it into thinking it needs to re-send the data due to an error, you’ll have a fresh 1-in-256 chance every time of working out what that last byte is.
That’s because the random IV means that you’ll get a different stream of encrypted data every time.
Sooner or later, you’ll recover the 16th byte of the original plaintext.
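As a sketch in Python (the names are ours, not from any real implementation), the bookkeeping behind that recovery looks like this:

```python
# SSL 3.0 accepts the tampered record only when:
#     decrypt(C_i)[15] ^ C_n_minus_1[15] == 15
# where C_n_minus_1 is the block just before the final padding block.
# CBC decryption of C_i in its *original* position gives:
#     plaintext_i[15] == decrypt(C_i)[15] ^ C_i_minus_1[15]
# where C_i_minus_1 is the block (or the IV) just before C_i.
# XOR the two equations together and decrypt(C_i)[15] cancels out:
def recover_last_byte(c_i_minus_1_last: int, c_n_minus_1_last: int) -> int:
    # Both inputs are ciphertext bytes the attacker can see on the wire.
    return 15 ^ c_i_minus_1_last ^ c_n_minus_1_last
```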
Recovering plaintext one byte at a time
If you can trick the user’s browser into visiting a sequence of URLs (for example, by using your rogue access point to inject some JavaScript into a web page), you may be able to create a sequence of SSL 3.0 requests with the confidential data shifted along one character.
Then you can recover the new 16th byte in the block, which was the previous 15th byte, and so on.
One obvious way to control the positioning of both known and unknown content in SSL 3.0 packets is by generating a series of download requests in which you ask for a file that is one character longer each time.
You can then predict where the filename will appear in each HTTP request, and thus almost certainly work out where the HTTP headers, including the data you want to decrypt and steal, such as a session cookie, will be located:
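For instance, the requests might look something like this (contents invented for illustration), with the path one byte longer each time so that the cookie slides across the 16-byte block boundaries:

```
GET /aaaaaaaa HTTP/1.1             <-- path (and its length) chosen by your JavaScript
Cookie: session=0123456789ABCDEF   <-- the secret you want to decrypt
Host: example.com
```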
The data determined by the URL in the web request is controlled by you; the confidential content – such as the session cookie inserted by the user’s browser – is what you want to attack using POODLE.
What to do?
The problem is SSL 3.0 and its unauthenticated, unverifiable padding process.
SSL 3.0 is an old version of the Secure Sockets Layer protocol: it dates back to 1996, so it is even older than Windows XP.
It was long ago superseded by safer alternatives such as TLS 1.0, TLS 1.1 and TLS 1.2.
So consider turning it off altogether.
You can tell your browser not to support SSL 3.0 connections, so that no rogue cybercrook in a coffee shop can downgrade your HTTPS sessions to his own advantage.
And you can tell your servers not to offer or to accept SSL 3.0 connections, so that you can never be tricked into accepting a malicious sequence of POODLE downgrade requests.
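For example, the relevant settings look something like this on two popular servers (exact directives vary by version, so check your server’s documentation):

```
# nginx: allow only TLS (inside the http or server block)
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

# Apache httpd with mod_ssl: everything except SSL 2.0 and 3.0
SSLProtocol All -SSLv2 -SSLv3
```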
Please be aware: if you run a web server that refuses SSL 3.0, then any users whose browsers can’t manage anything newer – and those users might be your customers – will no longer be able to access your HTTPS pages.
But when WordPress.com VIP, the service on which Naked Security is hosted, wrote to tell us that it turned off SSL 3.0 on all its sites, including ours, it pointed out that only about 1 in 1000 users make SSL 3.0 connections at all.
Those 1 in 1000 users have much bigger problems than POODLE: most of them are using SSL 3.0 because they’re still running IE 6 on Windows XP.
And that’s a story for another day…
Thanks, Paul – that’s the first explanation of POODLE that I’ve actually understood!
Following another site’s advice, which is the same as yours, I opened IE, went to ‘Internet Options’, then the ‘Advanced’ tab, and unchecked the box for SSL 3.0. I closed the browser and re-opened it and all seemed OK. I opened another tab for another webpage and, just out of curiosity, I checked to see if the box was still unchecked. It was not.
I went back to the original tab that the browser opened in and the box was checked.
I closed all but one tab (for my home page) and unchecked the box, hit Apply then OK. I closed the browser and restarted the computer.
I opened IE to my home page, looked at the check box, and it was checked again.
It seems that one cannot turn this off by merely unchecking the box and then applying the change.
Comment?
Are you in a corporate environment? If so, the settings may be controlled by a Group Policy…
Very nice explanation. Thanks!
Agree. Really nice explanation. Thanks!
Paul, can you please show your readers how to change Google Chrome’s SSL 3.0 default settings in a Mac?
Thank you for showing us how to change the Firefox SSL 3.0 default settings.
I think this should work. I downloaded Chrome specially and tried it, and it *seemed* to do the trick…
Run the app called Automator.
A Choose a type dialog should appear. Pick Application.
Then go wayyyyyy down the giant list on the left and choose Run Shell Script.
In the shell script boxette that appears, put:
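```
open -a "Google Chrome" --args --ssl-version-min=tls1
```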
Then do a File | Save… and the resulting file should be an app bundle that launches Chrome with the above argument.
Be sure to quit fully from any running instance of Chrome before you try the new icon out. If Chrome is already running the Automator script won’t relaunch it, which means the SSL 3.0 deactivation won’t happen.
Thanks, Paul. This is great. I am a computer dummy, so I really appreciate having your instructions.
I followed your instructions. Here is what happened:
* * *
I ran the Automator. From there, I chose and ran Run Shell Script.
By default the word [ cat ] appears in the Run Shell Script text box. I didn’t know what to do with the [ cat ], so I hit the return key and my cursor moved to the line below.
In the line below [ cat ], I copied and pasted your [ open … =tls1 ] script and then saved the file. By default the file saved as a .workflow rather than an .app.
By default the .workflow file appears to run as a Run Shell Script in the Automator program. Double click the file, click the Run button on the upper right hand corner, and then Google Chrome appears.
* * *
Am I doing this right? Is the foregoing what should happen?
How long should I open Google Chrome via my new .workflow file rather than through the Google Chrome.app? Until further notice, perhaps?
Seems to be working for you. The “cat” is a sample command (it’s the UNIX command to print out a file, a bit like TYPE on Windows). It should be overwritten with the “open -a …” line.
Google discovered POODLE and told us all about it. (Without too much hype, IMO. I thought they kept it pretty clean.) You’d imagine they’ll tell us when Chrome has a fix. I think they are planning to ditch SSL 3.0 completely within a few months, so you can probably expect a halfway-house fix before then.
Noted – I’ll overwrite [ cat ] with your script.
Thank you so much, Paul. You have provided an invaluable service and I appreciate that. Cheers!
You can actually redirect HTTPS connections to HTTP, and from then on the data is no longer encrypted.
That’s nowhere near as invisible to the user, though. Firstly, many sites no longer let you use their “logged in” pages via HTTP, so the crook has to redirect you to a clone site with a different domain, which is noticeable. Secondly, the HTTPS telltales in the address bar will vanish, which is also noticeable.
The thing about POODLEing someone is that they still have an HTTPS connection, and it’s still to the genuine site, with no man-in-the-middle or certificate tricks, either.
Thanks for the interesting article. Is there a way to test if Google Chrome is using SSL fallback?
BTW, Google says they’ll patch their client products “in the coming months”:
http://googleonlinesecurity.blogspot.com/2014/10/this-poodle-bites-exploiting-ssl-30.html
Poodle – a worthless dog that leaves crap everywhere and sniffs where it shouldn’t. Perfect name for this. What is the tech industry going to do about this for those of us who don’t have a clue how to type code? Computers as tools for the future? Total joke. If the head fell off the hammer every other hit, we would toss it into the trash. Why do we put up with this just because it’s technology?
With this excellent explanation I finally understand what POODLE really is.

But now I wonder whether POODLE is really easy to exploit in “real life”, because the attack scenario looks complex: the attacker must pull off a combination of network-level manipulation and malicious JavaScript running in the victim’s browser.
It isn’t easy to exploit…but as with many holes of this sort, it only takes one person to sort out the code needed to do it, and to sell it or give it away…
Consider, for example, MP3 sound compression and decompression (which relies on some fairly funky mathematics using Discrete Cosine Transforms) – that isn’t something you could code up in an hour yourself from scratch. Well, not something *I* could code in an hour from scratch.
But thanks to freely available software libraries, it’s something I could code *into* my own software in a few minutes; and thanks to command line tools like FFMPEG, it’s something I can code into my own shell scripts in a few seconds.
Very good article. I have a question. I am trapped in a legacy situation and have to turn off either SSL 3.0 or export-grade TLS. What would you do? Which is worse, POODLE or 56-bit keys?
Get out of the legacy situation. Seriously.
Hello Paul,
thanks for your article. It’s really brilliant.
I’m a student trying to carry out the POODLE attack, but it’s not as easy as I thought. I’ve installed VirtualBox and I’m working with a Linux system. But I still don’t know how to take control of the connection between the client and the server, or how to inject JavaScript code into the victim’s browser. Could you please explain what kind of tools I need to implement this attack?
We’re not in the habit of giving hacking instructions – even when they are widely available elsewhere, we generally leave you to cross the bridge of searching for them yourself – but in this case it seems mostly harmless to say that you need: a browser; a web server (‘php -S’ is a place to start, or else just install a full-blown server like Apache or nginx); some JavaScript to serve up (you’ll have to write that yourself); and something between the browser and the server that can intercept, log and modify packets in transit (mitmproxy for Linux or Fiddler for Windows are worth learning about if you are interested in security research).