An information disclosure vulnerability has been found, and promptly patched, in OpenSSL.
OpenSSL is a very widely used encryption library, responsible for putting the S in HTTPS, and the padlock in the address bar, for many websites.
The bug only exists in the OpenSSL 1.0.1 source code (from version 1.0.1 to 1.0.1f inclusive), because the faulty code relates to a fairly new feature known as the TLS Heartbeat Extension.
The heartbeat extension was first documented in RFC 6520 in February 2012.
TLS heartbeats are used as “keep alive” packets so that the ends of an encrypted connection can agree to keep the session open even when they don’t have any official data to exchange.
Because the heartbeats consist of a request and a matching response, they allow either end to confirm not only that the session is open, but also that end-to-end connectivity is working properly.
Sending heartbeat requests
The RFC 6520 standard explicitly restricts the maximum size of a heartbeat request to 2^14 bytes (16KBytes), but OpenSSL itself generates far shorter requests.
Don’t worry if you don’t understand C; but if you do, the OpenSSL heartbeat request code looks like this:
    unsigned int payload = 18; /* Sequence number + random bytes */
    unsigned int padding = 16; /* Use minimum padding */

    /* Check if padding is too long, payload and padding
     * must not exceed 2^14 - 3 = 16381 bytes in total.
     */
    OPENSSL_assert(payload + padding <= 16381);

    /* Create HeartBeat message, we just use a sequence number
     * as payload to distuingish different messages and add
     * some random stuff.
     * - Message Type, 1 byte
     * - Payload Length, 2 bytes (unsigned int)
     * - Payload, the sequence number (2 bytes uint)
     * - Payload, random bytes (16 bytes uint)
     * - Padding
     */
    buf = OPENSSL_malloc(1 + 2 + payload + padding);
    p = buf;

    /* Message Type */
    *p++ = TLS1_HB_REQUEST;

    /* Payload length (18 bytes here) */
    s2n(payload, p);

    /* Sequence number */
    s2n(s->tlsext_hb_seq, p);

    /* 16 random bytes */
    RAND_pseudo_bytes(p, 16);
    p += 16;

    /* Random padding */
    RAND_pseudo_bytes(p, padding);

    ret = dtls1_write_bytes(s, TLS1_RT_HEARTBEAT, buf,
                            3 + payload + padding);
The reason that the code says that “payload and padding must not exceed 16381 bytes in total” is that the 16KByte (16384 byte) maximum heartbeat request size includes one byte to signal that this is a TLS1_HB_REQUEST, and two bytes to denote the length of the payload data in the request.
As the code stands, the OPENSSL_assert to verify that payload + padding <= 16381 is redundant, because the payload size is hard-wired to 18 bytes and the padding size to 16.
But the programmer has tried to do the right thing: put in the check anyway, in case someone changes those payload or padding sizes in the future without considering the consequences.
The code then transmits a heartbeat request consisting of:
- The single byte 0x01 (denoting that this is a TLS1_HB_REQUEST).
- Two bytes containing the 16-bit representation of 18 (the length of the payload).
- Two bytes of payload consisting of a 16-bit sequence number.
- 16 bytes of random data making up the rest of the 18-byte payload.
- 16 further random padding bytes, required by the standard.
Replying to heartbeat requests
When vulnerable versions of OpenSSL 1.0.1 respond to a heartbeat request, they aren’t quite so careful in processing the received data.
Heartbeat replies are supposed to contain a copy of the payload data from the request, as a way of verifying that the encrypted circuit is still working both ways.
It turns out that you can send a small heartbeat request, but sneakily set your payload length field to 0xFFFF (65535 bytes).
Then, OpenSSL will uncomplainingly copy 65535 bytes from your request packet, even though you didn’t send across that many bytes:
    /* Allocate memory for the response, size is 1 byte
     * message type, plus 2 bytes payload length, plus
     * payload, plus padding
     */
    buffer = OPENSSL_malloc(1 + 2 + payload + padding);
    bp = buffer;

    /* Enter response type, length and copy payload */
    *bp++ = TLS1_HB_RESPONSE;
    s2n(payload, bp);
    memcpy(bp, pl, payload);
    bp += payload;

    /* Random padding */
    RAND_pseudo_bytes(bp, padding);

    r = dtls1_write_bytes(s, TLS1_RT_HEARTBEAT, buffer,
                          3 + payload + padding);
That means OpenSSL runs off the end of your data and scoops up whatever else is next to it in memory at the other end of the connection, for a potential data leakage of approximately 64KB each time you send a malformed heartbeat request.
This bug has been rather melodramatically named “heartbleed,” for reasons that should now be obvious.
According to the Finnish National Cyber Security Centre, the sort of data that “bleeds” when the bug is triggered varies enormously, but may include message contents, user credentials, session keys and even copies of a server’s own private keys.
That’s not good!
Fixing the problem
Fortunately, there’s a fix already: simply upgrade to OpenSSL 1.0.1g.
If you don’t want to or can’t do that, you can rebuild your current version of OpenSSL from source without TLS Heartbeat support, by adding -DOPENSSL_NO_HEARTBEATS at compile time.
Both of these immunise you against this flaw.
The new OpenSSL version includes a bounds check to make sure the payload length you specified isn’t longer than the data you actually sent:
    /* Read type and payload length first */
    if (1 + 2 + 16 > s->s3->rrec.length)
        return 0; /* silently discard */
    hbtype = *p++;
    n2s(p, payload);
    if (1 + 2 + payload + 16 > s->s3->rrec.length)
        return 0; /* silently discard per RFC 6520 sec. 4 */
The -DOPENSSL_NO_HEARTBEATS compile-time option simply omits the buggy code altogether from vulnerable versions.
The lessons to learn
If you’re a programmer, remember: always double-check the data that the other side sent you before you use it to control the operation of your own code.
And remember: buffer overflows should always be treated as serious, even if they don’t lead to remote code execution.
Data leakage bugs can be just as disastrous, as this flaw demonstrates.
For further information…
If you’d like to know more about the main sorts of vulnerability, from RCE (remote code execution) through EoP (elevation of privilege) to Information Disclosure, please listen to our Techknow podcast, Understanding Vulnerabilities:
Your own appliances, including 9.x up-to-date UTMs are vulnerable to this. When will a patch be released?
We are working as quickly as possible to issue patches for the Sophos UTM product. We will announce the release on our corporate blog (http://blogs.sophos.com/) when available. It should be real soon now, but I am not allowed to make promises other team members have to keep.
Thanks for the explanation. One thing that I don’t understand though, is the comment that the data leaked can include “copies of a server’s own private keys.” Isn’t a private key, by definition, never transmitted?
This bug only affects data that is transmitted I think?
I read somewhere that the bug can also affect data that have been ‘recorded’ previously. Perhaps I’ve misunderstood how SSL works but I don’t see how it’s possible? My belief of SSL is that the client downloads the public key from the server, and uses it to encrypt a randomly generated session key (AES) that it sends back to the server. All transmissions now use the session key (since the server can decrypt the session key with its private key).
So the tunnel is protected by a session key that was only ever transmitted in encrypted form with the public key. You still don’t have the private key so still can’t decode the contents of the tunnel. Am I missing something? If so, please advise as this is quite important to understand.
The private key might be in memory temporarily, and it might be in memory near to where OpenSSL is temporarily storing the body of the heartbeat request. (It needs to keep the request so it can copy the payload part and send it back in the reply so the other end can check you replied with what it sent.)
Since the memcpy() highlighted above can copy up to 64KB past the memory location where the request is stored, you’ll get various chunks of not-supposed-to-be-transmitted data in the replies. So that could include all sorts of, ahem, “interesting” stuff. (It could include pointless garbage too, of course. 🙂)
If you steal someone’s private key, you *may* be able to decrypt previously-sniffed encrypted sessions. That’s because the private key is used to agree a symmetric session key, which is part of what you sniff. Decrypt the session key and you can decrypt the rest of the data in the session. Unless you use a technique called “forward secrecy,” where you also generate a one-off public-private key pair *for each session*, keep them only in memory, and discard them after the session ends.
More about this here:
http://nakedsecurity.sophos.com/2013/11/24/twitter-joins-the-forward-secrecy-club-for-added-resistance-to-surveillance/
Techniques for forward secrecy online were supported from the earliest days of HTTPS, but have only started catching on recently because they put extra processing load on the server. (Generating a public-private key pair for every session is much, much more computationally complex than generating a one-time symmetric key, which is just a shortish string of random bytes.)
If you want to be extra safe when developing software that keeps passwords and other sensitive data in memory, you should set those bytes to 0 or random junk when you’re done with them. That way, any attack like this that gets leftover memory garbage would get just that (garbage).
I’m an ignorant PC user. Am I likely to have OpenSSL on my computer? If not, should I change all my dozens of passwords as suggested by media reports?
No and no. It is never a bad idea to change your passwords, but this mostly affects server administrators.
Unfortunately, once someone has a server’s private key they can eavesdrop on communications to the server, and thus see any passwords you sent, and so yes, you should change any passwords you have used in the last two years.
The most likely way this sort of bug might be used against you on a regular PC would be while you browse.
Imagine you got sucked into connecting to a rogue server that sent TLS heartbeats back to your browser in the hope of getting it to spill some of its memory, such as URLs you just visited, webmail text you just typed in, and so on…
…well, there’s good news! Not one of the major browsers – that’s Microsoft IE on Windows, Apple Safari on OS X, Mozilla Firefox and Google Chrome/Chromium on all platforms – uses OpenSSL for its TLS (HTTPS) connections, so none of them are at risk 🙂
On the other hand, lots of the world’s web *servers* do use OpenSSL, and most servers openly invite, even encourage, connections from anyone, anywhere, any time. So, as Chester suggests, this is mainly a problem for the sysadmins who run those OpenSSL-based web servers.
Android uses OpenSSL.
IIRC, Android from version 6 onwards switched to Google’s own security-focused reimplementation of OpenSSL known as BoringSSL.
And, again IIRC, Firefox (mentioned above as “exempt” from OpenSSL) uses Mozilla’s own SSL libraries known as Network Security Services (NSS) that ship as part of the Firefox app.
I believe the heartbeat request is an example of code that was not properly reviewed.
Opening comments in the code could have referenced the standard that was being followed. Perhaps this was mentioned earlier.
Payload, padding and maximum length (16381) could have been #defined at the outset reflecting the standard…a true hard-wire. Declaring them as an ‘int’ is more of a soft-wire even though they did not change throughout the process.
Comment lines 16 and 24 would be in error should payload or padding change.
The value ’16’ in lines 29 and 30 should have been predefined.
Just some of the clean up that could have been done…IMHO
Thanks to Naked Security, I have different, non-dictionary passwords for every site stored in Keepass. I also have two step authentication set up in some places.
The BBC is reporting that everybody should be changing their passwords everywhere, but I am holding back for now. Hopefully by following your advice I am reasonably safe.
I looked at github, but it would be interesting to see what other patches the programmer who created heartbeat has done. See if there are any patterns that would indicate that this was not accidental.
Does no-one else consider it a little strange that the programmer takes care to ensure that the message *he* generates satisfies the standard, but completely omits those same checks from the message he receives.
I guess I still prefer cockup over conspiracy on this one, especially since there is *still* no check on the 2^14 constraint that he so carefully (and unnecessarily) checks on generated packets.
Well, there is a maxim in internet coding that says to be precise in what you send to others, but liberal and accommodating in what you receive.
That might have affected the programmer’s judgement.
(It’s time for that maxim to change. If well-coded apps stopped being so soft on incorrect ones, we might get better attention to detail all round, and we wouldn’t end up stuck with a “backward compatibility” reason to keep on doing the wrong thing…but that is a rant for another time.)
“There is a maxim in internet coding that says to be precise in what you send to others, but liberal and accommodating in what you receive.”
I find it hard to believe that this statement appeared in a security-related blog. For a long time, the advice has been to be highly suspicious of incoming data, and to reject anything that is not explicitly permitted, rather than to create blacklists of things that are known to be potentially harmful and allow the rest.
Your statement does, unfortunately, reflect much of current practice, which is why buffer overrun, SQL injection, and cross-site scripting attacks are still dismayingly common.
Two points, since you seem to be a stickler for accuracy 🙂
Firstly, you *must* believe that the words appeared here, because they did. Disbelief won’t make them go away.
Secondly, the statement was not me endorsing that viewpoint (I think it is unsuitable in the modern age), but saying that “a maxim exists” to encourage it.
That maxim, in fact, is written down in RFC 1122, and you will hear it called “the robustness principle,” though you can argue it delivers exactly the opposite.
So, perhaps RFC 1122 influenced the programmer?
That is all.
“So, perhaps RFC 1122 influenced the programmer?”
If so, then evidently not by much, because the paragraph following the words you quoted (and which is also part of the robustness principle) says “Software should be written to deal with every conceivable error, no matter how unlikely; sooner or later a packet will come in with that particular combination of errors and attributes, and unless the software is prepared, chaos can ensue. In general, it is best to assume that the network is filled with malevolent entities that will send in packets designed to have the worst possible effect. This assumption will lead to suitable protective design, although the most serious problems in the Internet have been caused by unenvisaged mechanisms triggered by low-probability events; mere human malice would never have taken so devious a course!”
Not that I agree with the final clause, which now looks naive.
It would be unfortunate if the maxim has been widely mistaken as advocating policies that lead to the very situations its authors warned against. It is a risk you take whenever you put non-specific advice into a specification.
That’s a great quote!
“Sooner or later a packet will come in with that particular combination of errors and attributes, and unless the software is prepared, chaos can ensue.”
Jon Postel, RIP.
Don’t make ’em like they used to 🙂
As for the last clause, you are probably right about it being naive…but those “low probability events” are no *less* a problem (speaking in absolute terms) just because controlled malevolence has increased.
And, of course, low probability doesn’t necessarily mean low *frequency* any more. A brute-force DES attack is a low-probability venture by any measure (a randomly chosen key has about a 10 in one million million million chance of being right), but can be completed within one day.
Duck wrote “John Postel, RIP.”
Err, that’s Jon.
Fixed in the original post. Thanks. Dunno how that happened.
Thanks for the code description, exactly what I was looking for.
Questions:
1) I know it’s *possible* to leak out something important, but how likely is it that a memcpy of 65K actually has any useful info in there? Especially something like actual key data. Wouldn’t it be using some volatile area of program memory used for storing short-lived variables? Not something like constants or static data.
2) You can actually trace it if you now know what you are looking for: a heartbeat request asking for more than it should, but under 64K.
They should have actually updated the code to trap that request and log it instead of just preventing it.
Also, to be transferring heartbeats you would have to be in a secure SSL link which would already be logging your IP, etc if not tons more with the key handshake.
The stack grows down. If the request was placed on the stack, then the overflow would read part of the current function, and some of the callers. Depending on how many locals the caller needs, you could see all the way up to main.
The buffer that the code reads off the end of is acquired using malloc(), so it’s on the heap, not the stack. As the name suggests, the heap doesn’t grow and shrink like the stack, so your chunk of allocated memory could end up next to apparently unrelated stuff that came from a completely different part of the program.
If the heap’s largely empty, your request data might end up next to 64KB of unimportant zeros…but it might end up next to something much more exciting.
64KB may not sound a lot, but you can keep on firing heartbleed packets at a server, and keep on getting leaked data back…and even if you only ever see variables that were used recently, that still might be data from someone else’s session, or used by OpenSSL itself.
Indeed, some people are claiming to have seen the server’s raw, unencrypted private key in amongst the leaked stuff.
The point is that your TLS connection isn’t supposed to reveal *anything* about the state of the rest of the server, and the heartbeat reply is supposed to contain a copy only of the data *you* originally sent.
Even one leaked byte would be a problem.
64KB as many times as you care to keep asking is a correspondingly bigger problem…
What was the reason given, in OpenSSL ‘g’ version that fixed this error? They must have known at that point that the previous code was vulnerable in a HUGE way, and alerted the community? Surely a gross breach here?
OPENSSL_malloc() should be modified to independently randomize the location of each buffer in a large address space. And all buffers should be zeroed as they are freed.
Or the code should use some kind of managed strings with length checking. (Randomising doesn’t help much here – it might merely make the variety of stuff leaked from a long sequence of heartbleeds even greater, making the bug even worse.)
Gotta love the shocking spread of magic numbers in those small snippets of code. I guess #define is out of the question.
Why? If it’s a constant, it’s never going to change 🙂