At the end of last week, Apple published iOS 7.0.6, a security update for its mobile devices.
The update was a patch to protect iPhones, iPads and iPods against what Apple described as a “data security” problem:
Impact: An attacker with a privileged network position may capture or modify data in sessions protected by SSL/TLS
Description: Secure Transport failed to validate the authenticity of the connection. This issue was addressed by restoring missing validation steps.
CVE-ID CVE-2014-1266
Apple didn’t say exactly what it meant by “a privileged network position,” or by “the authenticity of the connection,” but the smart money – and my own – was on what’s known as a Man-in-the-Middle attack, or a MitM.
MitM attacks against unencrypted websites are fairly easy.
If you run a dodgy wireless access point, for example, you can trick users who think they are visiting, say, http://example.com/ into visiting a fake site of your choice, because you can redirect their network traffic.
You can then fetch the content they expect from the real site (you are the MitM, after all), feed it back to them with tweaks, modifications and malware, and they may be none the wiser.
But if they visit https://example.com/ then it’s much harder to trick them, because your MitM site can’t provide the HTTPS certificate that the official site can.
More precisely, a MitM site can send someone else’s certificate, but it almost certainly can’t produce any cryptographic “proof” that it has possession of the private key that the certificate is meant to validate.
So your visitors end up with a certificate warning that gives away your trickery.
At least, your visitors get a warning if the application they are using actually notices and reports the certificate problem.
→ Recent research suggested that about 40% of mobile banking apps do not check HTTPS certificates properly, or at least do not warn if an invalid certificate is presented. This led us to advise our readers to stick to their desktops or laptops for internet banking. Certificate warnings are important.
What was wrong with Apple’s SSL code?
Apple’s reluctance to give away too much is perhaps understandable in this case, but the result was to send experts scurrying off to fill in the blanks in HT6147, the only official detail about this risky-sounding bug.
The problem soon came to light, in a file called sslKeyExchange.c in version 55471 of the source code for SecureTransport, Apple’s official SSL/TLS library.
The buggy codepath into this file comes as a sequence of C function calls, starting off in SecureTransport’s sslHandshake.c.
The bad news is that the bug applies to both iOS and OS X, and although the bug was patched in iOS, it is not yet fixed in OS X.
If you’d like to follow along, you need to make your way through these function calls:
SSLProcessHandshakeRecord() -> SSLProcessHandshakeMessage()
The ProcessHandshakeMessage function deals with a range of different parts of the SSL handshake, such as:
-> SSLProcessClientHello()
-> SSLProcessServerHello()
-> SSLProcessCertificate()
-> SSLProcessServerKeyExchange()
This last function is called for certain sorts of TLS connection, notably where forward secrecy is involved.
That’s where the server doesn’t just use its regular public/private keypair to secure the transaction, but also generates a so-called ephemeral, or one-off, keypair that is used as well.
→ The idea of forward secrecy is that if the server throws away the ephemeral keys after each session, then you can’t decrypt sniffed traffic from those sessions in the future, even if you acquire the server’s regular private key by fair means (e.g. a subpoena) or foul (e.g. by bribery or cybertheft).
Now the C code proceeds as follows:
SSLProcessServerKeyExchange()
-> SSLDecodeSignedServerKeyExchange()
-> SSLDecodeXXKeyParams()
   IF TLS 1.2  -> SSLVerifySignedServerKeyExchangeTls12()
   OTHERWISE   -> SSLVerifySignedServerKeyExchange()
And the SSLVerifySignedServerKeyExchange function, found in the sslKeyExchange.c file mentioned above, does this:
. . .
hashOut.data = hashes + SSL_MD5_DIGEST_LEN;
hashOut.length = SSL_SHA1_DIGEST_LEN;

if ((err = SSLFreeBuffer(&hashCtx)) != 0)
    goto fail;
if ((err = ReadyHash(&SSLHashSHA1, &hashCtx)) != 0)
    goto fail;
if ((err = SSLHashSHA1.update(&hashCtx, &clientRandom)) != 0)
    goto fail;
if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
    goto fail;
if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
    goto fail;
    goto fail;    /* MISTAKE! THIS LINE SHOULD NOT BE HERE */
if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
    goto fail;

err = sslRawVerify(...);
. . .
You don’t really need any knowledge of C, or even of programming, to understand the error here.
The programmer is supposed to calculate a cryptographic checksum of three data items – the three calls to SSLHashSHA1.update() – and then to call the all-important function sslRawVerify().
If sslRawVerify() succeeds, then err ends up with the value zero, which means “no error”, and that’s what the SSLVerifySignedServerKeyExchange function returns to say, “All good.”
But in the middle of this code fragment, you can see that the programmer has accidentally (no conspiracy theories, please!) repeated the line goto fail;.
The first goto fail is taken if the if condition is true, i.e. if there has been a problem and err is therefore non-zero.
This causes an immediate “bail with error,” and the entire TLS connection fails.
But because of the peccadilloes of C, the second goto fail, which shouldn’t be there, always happens if the first one doesn’t, i.e. if err is zero and there is actually no error to report.
The result is that the code leaps over the vital call to sslRawVerify(), and exits the function.
This causes an immediate “exit and report success”, and the TLS connection succeeds, even though the verification process hasn’t actually taken place.
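The effect can be reproduced in miniature. In this sketch (all names are illustrative stand-ins, not Apple’s code), the verification step is skipped entirely, yet the function still returns 0, meaning success:

```c
/* Stripped-down sketch of the buggy control flow. The stub names
   are invented for illustration; they are not Apple's functions. */

static int verify_was_called = 0;

static int hash_update(void) { return 0; }  /* pretend: always succeeds */
static int raw_verify(void)                 /* would reject the certificate */
{
    verify_was_called = 1;
    return 1;
}

int check_signature(void)
{
    int err;

    if ((err = hash_update()) != 0)
        goto fail;
        goto fail;                 /* the duplicated line: always taken */

    if ((err = raw_verify()) != 0) /* never reached */
        goto fail;

fail:
    return err;                    /* err is still 0, i.e. "no error" */
}
```

Some compilers can flag the unreachable line, but only with non-default warning options.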
What an attacker can do
An attacker now has a way to trick users of OS X 10.9 into accepting SSL/TLS certificates that ought to be rejected, though admittedly there are several steps, and he needs to:
- Trick you into visiting an imposter HTTPS site, e.g. by using a poisoned public Wi-Fi access point.
- Force your browser (or other software) into using forward secrecy, which is possible because the server decides which encryption algorithms it will support.
- Force your browser (or other software) into using TLS 1.1, which is possible because the server decides which TLS versions it will allow.
- Supply a legitimate-looking TLS certificate with a mismatched private key.
Safari on OS X is definitely affected, because it makes use of the buggy version of SecureTransport.
You can work out if an application is affected by this bug by using Apple’s otool program, which is a handy utility for digging version details out of program files and code libraries.
You use the -L option, which displays the names and version numbers of the shared libraries that a program uses:
That’s only a start, since the Safari app is just a wrapper for the Safari.framework, which we need to run through otool in its turn:
Other popular apps that link to the buggy library include, in no particular order: Mail, Numbers, Keynote, Pages, Preview and Calendar.
Clearly, not all of these apps put you in quite the same immediate danger as Safari, but the list presents a good reminder of why shared libraries are both a blessing (one patch fixes the lot) and a curse (one bug affects them all).
Popular apps in which encryption is important but that don’t seem to use the buggy library include: Firefox, Chromium (and Chrome), Thunderbird and Skype.
What to do?
The good news is that Apple has broken its usual code of silence, by which it “does not disclose, discuss, or confirm security issues until a full investigation has occurred and any necessary patches or releases are available.”
Reuters reports that an official Apple spokesperson, Trudy Muller, has gone on the record to say, “We are aware of this issue and already have a software fix that will be released very soon.”
Sadly, she didn’t define “very soon,” but you should watch for this patch and apply it as soon as you can.
→ When you update, be sure to follow the advice below about avoiding insecure networks. The Software Update app uses the buggy Security library!
We suggest that you try any or all of the following:
Avoid insecure networks
Connecting to other people’s Wi-Fi networks, even if they are password protected, can be dangerous.
Even if you trust the person who runs the network – a family member or a friend, perhaps – you also need to trust that they have kept their access point secure from everyone else.
If you are using your computer for business, consider asking your employer to set you up as part of the company’s VPN (virtual private network), if you have one.
This limits your freedom and flexibility very slightly, but makes you a lot more secure.
With a VPN, you use other people’s insecure networks only to create an encrypted data path back into the company network, from where you access the internet as if you were an insider.
Use a web filtering product that can scan HTTPS traffic
Products like the Sophos Web Appliance and Sophos UTM can inspect HTTPS traffic – ironically by decrypting and re-encrypting it, but without any certificate shenanigans like a man-in-the-middle crook might try.
Because the Sophos web filtering products do not use Apple’s libraries, or even Apple’s operating system, they are not vulnerable to the certificate trickery described above, and certificate validation will fail, as it should, when a bogus certificate is presented.
Switch to an alternative browser
Alternative browsers such as Firefox and Chromium (as well as Chrome) use their own SSL/TLS libraries as a way of making the applications easier to support on multiple operating systems.
In this case, that has the effect of immunising them against the bug in Apple’s SecureTransport library.
You can switch back to Safari after Apple’s patch is out.
Try this completely unofficial patch!
(Only kidding. You wouldn’t dream of applying a little-tested hack to an important system library, would you?)
This patch exists only:
- To demonstrate that emergency “fixes” don’t always fix, but often can only work around problems.
- To show what C code looks like when compiled to assembler.
- To give some insight into how unauthorised hacks, for good and bad, can be achieved.
- To introduce the OS X codesign utility and Apple’s code signing protection.
- For fun.
By all means take a look – but for research purposes only, of course.
Enjoyed the details and your patch (for research purposes only).
Where did you learn your assembler?
Back in the day, you wrote in assembler to save space and time. Or because the compiler was untrustworthy 🙂
Would a third party app or web browser (e.g. Mercury or a banking app) rely on the flawed Apple code in iOS?
Frankly still not keen on upgrading to iOS 7, but may be forced by this.
You should assume that third party apps on iOS are affected. IIRC, iOS 6 is also affected, and there is an update.
Heh, no conspiracy theories? 🙂 If a mistake, that’s a pretty conveniently sweet mistake for anyone with an interest in being able to eavesdrop on SSL connections. If a rogue developer, that’s a pretty strange way to insert your little bug, instead of trying to do some remote code executes or backdoors. And it’s something that even basic code reviews should pick up on. If a mistake and code reviews missed it, then what else has been missed?
Just sayin’ this is *very* convenient. I could certainly see some agency/gov demanding that Apple include something to allow eavesdropping, and Apple/devs just doing the most obvious and basic thing to satisfy the demand. Sort of like a kid being asked to clean his room, and he drags feet and shoves everything under into the closet, doing the bare minimum and only exactly as requested by someone who maybe didn’t know better how to do this and hide it. I can totally see a dev team not liking this, doing exactly what is asked, knowing it’s shoddy.
Personally, I’d write the code differently so that the function didn’t return whatever the last value of err was, but used non-zero err values to set, say, a separate variable retval, and I’d make the default value of retval something like ERR_INCOMPLETE.
Also, I *always* use squiggly brackets (braces) after if()s even if they are not necessary, as my opinion is that code written that way is easier to read. Actually, that’s not an opinion. It’s a fact, but I’ll pretend it’s an opinion.
E.g.
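Something along these lines, perhaps (a compilable sketch; the ERR_* names and stub functions are invented for illustration):

```c
/* Sketch of the "default to failure" style: retval can only ever be
   ERR_NONE if we set it explicitly, so a skipped step cannot
   accidentally report success. All names here are illustrative. */

enum { ERR_NONE = 0, ERR_HASH = 1, ERR_VERIFY = 2, ERR_INCOMPLETE = 99 };

static int hash_update(void) { return 0; }  /* stubs: pretend both succeed */
static int raw_verify(void)  { return 0; }

int check_signature(void)
{
    int retval = ERR_INCOMPLETE;   /* assume failure until proven otherwise */

    if (hash_update() != 0) {
        retval = ERR_HASH;
        goto finish;
    }
    if (raw_verify() != 0) {
        retval = ERR_VERIFY;
        goto finish;
    }
    retval = ERR_NONE;             /* success is set explicitly, in one place */

finish:
    return retval;
}
```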
That way I have to set retval to ERR_NONE myself if I want my “return retval” to signal success.
I think your conspiracy theory is a long bow – did you look at the code style overall?
As for code reviews picking it up – it was in Apple’s open source repository for months (“with many eyes, all bugs are shallow”, eh?) and no-one saw.
It happens.
Haha – but what would you do if you wanted it to look like an accident..? 😛
And why did it take so long for you guys to post about this? I had to find out from elsewhere!!!
It didn’t take us “so long.” The news only broke over the weekend, and because most of the articles that came out quickly were just news and opinion, we figured we’d put some effort into the advice and research angle.
You may have found out from elsewhere (e.g from Apple), but then what? I suppose those early articles told you that you might be *in* danger…but I betcha they didn’t offer a patch to get you *out* of danger 🙂
Do you think that proper code testing would have revealed this?
Of course so…if you define “proper testing” as “testing that catches this kind of bug” 🙂
In an ideal world, the code could have been written in a slightly different style so that the forward jumps (goto fail) were much less likely to have resulted in “accidental success” being returned; the layout of the code could have been clearer; the compiler could have been used to warn more stringently about unused code (the skipped-over part); a different language could have been used that makes error handling easier and safer; the coder could have insisted on devising test cases that ensured more combinations of TLS version and encryption mode; and a code reviewer could simply have spotted the repeated line – after all, it sure looks obvious *once you know it’s there*!
This was in Apple’s open source code for months, and no-one noticed (no-one who reported it, anyway). And in the real world, for the bad code to be noticed at all during routine work, you’d need a site that did support forward secrecy but did not support TLS 1.2 – sounds an unusual sort of progressive/regressive combination to me.
My personal feeling is that a lot of coders will be *thinking*, “There but for the grace of God go I, ” even if they are *saying,* “Oooer, yucky code.”
Surely simple code coverage test should have revealed that the finalise and verify lines never get executed?
Coverage tests are rarely “simple,” at least for the last gaps in the code.
If you’re looking for “simple,” then just noticing the weird-looking error in a review might have been a better way 🙂
Note to self: stop overusing the word simple
Coverage tests may require some time and investment to set up, but for security related code I would have thought it important to make that effort. I still think this could and should have been caught by a code coverage test.
On that, we agree. This is TLS, after all, and because the coder has gone to the trouble of splitting out the TLS1.2 and TLS-other verification into two functions…yep, they should both be properly tested, and a coverage test would show the lack of completeness in the functionality test.
I’ll toss in one more “barn door closed” comment. One thing I do treasure is a competent code formatter. Given that, the code might have appeared as…
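Something like this, perhaps – a compilable sketch with invented stubs, with the stray goto re-indented to its true level:

```c
/* Sketch: a formatter re-indents the duplicated goto flush with the
   surrounding statements, showing it is unconditional. The stub
   names are invented for illustration. */

static int final_step_ran = 0;

static int update_hash(void) { return 0; }
static int final_hash(void)  { final_step_ran = 1; return 0; }

int buggy(void)
{
    int err;

    if ((err = update_hash()) != 0)
        goto fail;
    goto fail;                 /* flush left: plainly always executed */
    if ((err = final_hash()) != 0)
        goto fail;

fail:
    return err;
}
```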
making error identification easier.
Back when I used C heavily and followed the goto as exceptions pattern, my solution was to have a flag for completion. I’d default it to 0, and set it only on success.
int complete = 0;
int err;
if ((err=hashfunc()) != 0) {
goto fail;
}
complete = 1;
fail:
if ((complete == 0) && (err == none)) err = unknown;
// rest of error handling
I like this approach a bit more as it’s more explicit, but of course either would work perfectly well.
In my example, retval serves the same purpose as complete…if you only ever let it take the ERR_NONE value when you are explicitly done-and-succeeded.
“Also, I *always* use squiggly brackets (braces) after if()s even if they are not necessary, as my opinion is that code written that way is easier to read. Actually, that’s not an opinion. It’s a fact, but I’ll pretend it’s an opinion.”
Not only that, but it prevents accidentally adding another indented statement thinking it’s part of the if.
I *always* tell my students that they should *never* leave out the curly brackets when programming in C, which is the only language I’m prepared to use with {}s. Any later language – yes, I’m talking to you, Java – should not use them, they are SO 1970s.
Well, the inventors of the “Go” language – including Ken Thompson, the inventor of B, that became C – decided that the “strong guidelines” in the K&R book (braces on the same line and always used) should be mandated by the language.
They weren’t so sure about using braces at all – they’re SO 1970s – but decided they should, because it would cause a level of comfort likely to avoid silly complaints.
(I have seen loads of people refuse to use Lua, an excellent language embedded in many popular tools, like Nmap, Wireshark and WoW, because it expects you to write “if condition then dothis() else dothat() end” and they think THAT looks so 1970s…remember Pascal? 🙂)
Also, by the way, while the patch for iOS is out, OS X is also affected and currently not patched.
Hmmm. I thought the article made that fairly clear…but perhaps I should add the text from the main page of Naked Security into the article to make it clearer? (Not everyone comes from the main page, of course, where I wrote “Apple just patched an SSL/TLS bug in iOS – but the flaw is not yet fixed in OS X.”
Thanks.
Oops, sorry about that. I typically come in from the daily email. 🙂
No need to apologise – I added in a piece to make that point in the article itself. It was a good idea. I also put “OS X” in the headline.
Speaking from an old software coding background: this kind of code mistake is easily avoided by explicitly writing all braces in conditional tests and loops.
If, instead, it had been written with all the braces in place, the duplicated line would have been dead code, but no bug.
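A compilable sketch of the braced version being described (the stub functions are invented for illustration):

```c
/* With braces, the duplicated goto ends up inside the if-block:
   dead code, but harmless, because it is only reachable on error.
   The verification step below still runs. Stub names are invented. */

static int verify_ran = 0;

static int update_hash(void) { return 0; }
static int raw_verify(void)  { verify_ran = 1; return 0; }

int braced(void)
{
    int err;

    if ((err = update_hash()) != 0) {
        goto fail;
        goto fail;                   /* dead code, but no bug */
    }
    if ((err = raw_verify()) != 0) { /* still executed */
        goto fail;
    }

fail:
    return err;
}
```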
In terms of code reliability or security, short notations are *always* a bad idea.
Even more: the label ‘fail’ should not have been called that if jumping to it can still end in success. The name ‘fail’ wrongly suggests that any jump there would raise a failure. Sure, the second goto would still have been an error if jumping to ‘fail’ always ended in an error being raised and processing stopped, but the result would not have been as bad (and the bug would likely have been found much earlier).
“An attacker now has a way to trick users of OS X 10.9 into accepting SSL/TLS certificates that ought to be rejected…”
The mention of *only* OS X 10.9 causes me to infer that other versions of OS X are not affected, but the article (excellent, by the way) doesn’t say so explicitly. Is it indeed true that only OS X 10.9 is vulnerable on non-iOS systems?
Errrr, I have to admit – I don’t know.
Perhaps a reader who is running the last version of 10.8 that came out can use “otool” on his or her copy of:
/System/Library/Frameworks/Security.framework/Versions/Current/Security
and see what version it is? Then we can check that version in Apple’s source code repository and we shall know if it has this bug or not.
The reason I didn’t mention 10.8 is that Apple no longer supports it. At least from a security point of view: if you are still running 10.8, you have *much* more to worry about than this bug.
You know how XP will fall off a cliff in April after its last update? That’s what already happened to 10.8 after Mavericks came out…
Alas, it appears that otool doesn’t exist in Mountain Lion. (I’m running OS X 10.8.5.) Here’s what happened when I tried it:
$ otool -L /System/Library/Frameworks/Security.framework/Versions/Current/Security
-bash: otool: command not found
I found a post on StackExchange stating that in Mountain Lion Apple moved the command line utilities. They have to be installed from inside Xcode (a 2.04 GB download). Fortunately, they can also be downloaded and installed separately. They’re available as a separate 96.50 MB package called “Command Line Tools (OS X Mountain Lion) for Xcode – October 2013” at the following URL:
https://developer.apple.com/downloads/index.action (Apple Developer account needed)
After installing the Command Line Tools, I get the following result:
$ otool -L /System/Library/Frameworks/Security.framework/Versions/Current/Security
/System/Library/Frameworks/Security.framework/Versions/Current/Security:
/System/Library/Frameworks/Security.framework/Versions/A/Security (compatibility version 1.0.0, current version 55179.13.0)
/usr/lib/system/libcorecrypto.dylib (compatibility version 1.0.0, current version 1.0.0)
/usr/lib/libbsm.0.dylib (compatibility version 1.0.0, current version 1.0.0)
/usr/lib/libobjc.A.dylib (compatibility version 1.0.0, current version 228.0.0)
/usr/lib/libauto.dylib (compatibility version 1.0.0, current version 1.0.0)
/usr/lib/libxar.1.dylib (compatibility version 1.0.0, current version 1.0.0)
/usr/lib/libz.1.dylib (compatibility version 1.0.0, current version 1.2.5)
/usr/lib/libstdc++.6.dylib (compatibility version 7.0.0, current version 56.0.0)
/usr/lib/libsqlite3.dylib (compatibility version 9.0.0, current version 9.6.0)
/usr/lib/libpam.2.dylib (compatibility version 3.0.0, current version 3.0.0)
/System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation (compatibility version 150.0.0, current version 744.19.0)
/usr/lib/libOpenScriptingUtil.dylib (compatibility version 1.0.0, current version 1.0.0)
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 169.3.0)
My ability to interpret the result is approximately zero, but there it is.
I wasn’t aware that it had become “official” that Apple no longer supports Mountain Lion, but then I suppose it’s typical that they’d make no formal announcement. If they did, I missed it.
Anyhow, some of us don’t have the option of using Mavericks…at least not yet. For my part, I run many applications and plugins that Mavericks has broken. In fact, I only recently “upgraded” (I use the term metaphorically) from Snow Leopard where everything worked perfectly…er, except continuing support for that version of OS X. (sigh)
I suppose that eventually everything will work in Mavericks, but until then Apple has forced users like me to make a choice between security and productivity. (I believe the technical description for that condition is, “It sucks.”)
In the meantime, staying productive is not optional. All I can do is keep everything else patched and updated, run Sophos AntiVirus for Mac, follow the advice I read in NakedSecurity, back up regularly, and otherwise follow best security practices.
Looks like they just “announced” support for ML by publishing a security update for it. Better late than never.
From my Mac running 10.8.5, here’s the output of otool -L:
/System/Library/Frameworks/Security.framework/Versions/A/Security (compatibility version 1.0.0, current version 55179.13.0)
/usr/lib/system/libcorecrypto.dylib (compatibility version 1.0.0, current version 1.0.0)
/usr/lib/libbsm.0.dylib (compatibility version 1.0.0, current version 1.0.0)
/usr/lib/libobjc.A.dylib (compatibility version 1.0.0, current version 228.0.0)
/usr/lib/libauto.dylib (compatibility version 1.0.0, current version 1.0.0)
/usr/lib/libxar.1.dylib (compatibility version 1.0.0, current version 1.0.0)
/usr/lib/libz.1.dylib (compatibility version 1.0.0, current version 1.2.5)
/usr/lib/libstdc++.6.dylib (compatibility version 7.0.0, current version 56.0.0)
/usr/lib/libsqlite3.dylib (compatibility version 9.0.0, current version 9.6.0)
/usr/lib/libpam.2.dylib (compatibility version 3.0.0, current version 3.0.0)
/System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation (compatibility version 150.0.0, current version 744.19.0)
/usr/lib/libOpenScriptingUtil.dylib (compatibility version 1.0.0, current version 1.0.0)
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 169.3.0)
Looks like you’re in the clear – though Apple just published an update for you anyway. (About time!)
Update: 2014-02-25
I’ve found a couple of sources (on ZDNet) that claim the vulnerability is NOT present in OS X 10.8.5, or in versions of iOS earlier than v6.
Reportedly, OS X Update 10.9.2 (released today, 2014-02-25) fixes the problem, although Apple’s own Knowledge Base article on the 10.9.2 update (http://support.apple.com/kb/HT6114) makes no mention of it.
I’m writing that up as we speak (2014-02-26T00:05Z) – the Combo Update download page, which I used to download the update for my own use, does mention that the “SSL cert validation” problem is gone.
And I decompiled the SecureTransport library again *after* updating – the code looks right now. (The call to sslRawVerify() is there!)
Another update: 2014-02-25
Folks who are still using Lion or Mountain Lion will be relieved to know that Apple evidently has NOT dropped support for those earlier versions of OS X. There are new 2014-001 security updates for OS X 10.7 and 10.8 on the Apple Downloads page as of today:
http://support.apple.com/downloads/
True! Strange way to announce continued support, but there you have it…
“Strange way to announce continued support…”
That line gave me a cynical chuckle. You’d think the norm would be that continued support would be a given until there’s an explicit announcement that support for an OS will be DIScontinued. But no. “Hey…it’s Apple!” might well be a metaphor for the unexpected. 😉
There was no 10.8 or 10.7 update when 10.9.1 came out, and when lots of people wondered what that implied, Apple hid behind the “we don’t comment on security unless we just published an update.”
In my book, if people openly ask, “Will there be any more security updates” and you don’t even say, “Unknown at this stage,” then those asking should take that as a “No”.
Until the 2014-001 updates yesterday, Lion and Mountain Lion users had both absence of evidence *and* evidence of absence of continued support 🙂
(Why you wouldn’t want to upgrade to Mavericks is a giant mystery to me. I sort of understand the people who say, “The update broke my legacy software,” but they might as well say, “My legacy software is holding me back. Time to find something new.” I had two apps that wouldn’t work in Mavericks, IIRC one paid and one free. I binned them and in a well-spent hour, found free replacements that were smaller, faster, better – result! 🙂)
I would be interested in peek into the version control system: who was the first to check in the code version with the double goto? when?…
Could there be a missing, obsolete
” if ((err =…. ”
between the two goto fails? Then the programmer would have deleted one line instead of two, which would be a more likely mistake to make.
Good reason to use squiggly brackets (a.k.a. braces) if so 🙂
I just don’t understand it…
I get that the IF triggers the first “goto fail,” which only happens when an error in the validation process occurs. But the second “goto fail” is always executed because it’s not part of the IF anymore.
But why doesn’t it always see the certificate as invalid? I mean, it ALWAYS skips to the fail part. Fail = Website not valid.
I don’t understand that part…
It’s explained in the article (and the coding style is touched on in one of the comments). Sadly, “goto fail” doesn’t mean “the website is invalid.” It means simply “exit early, returning whatever error code you have in the variable err right now”.
In other words, the way the code is written, “goto fail” only causes the function to report the certificate as invalid *if the value of err is non-zero*.
So if you inadvertently “goto fail” when there hasn’t just been an error, you end up skipping over the validation function, but returning zero (i.e. reporting success) anyway.
and which is why that label is wrongly named, contributing to the bug.
Agreed. It makes those “goto fails” feel as though the worst they can do is to reject a good certificate (fail closed), not to let a crook do his worst (fail open).
Identifier names in code shouldn’t influence code reviewers…but they do.
Ironically, perhaps (check the linked-to Patch article) the bug is actually more obvious in the decompiled assembler code than in the C source.
The compiler has noticed that the code turns into:
and so the conditional jump to fail actually vanishes, and so – rather obviously – does the call to sslRawVerify().
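Rendered back into rough C (a sketch, not Apple’s actual compiled output), the folding amounts to this: since both outcomes of the test end up at fail, the test itself serves no purpose and the compiler can discard it.

```c
/* Sketch: both versions below behave identically, which is why the
   optimiser can fold the first into the second. Stub name invented. */

static int update_hash(void) { return 0; }

int as_written(void)
{
    int err;

    if ((err = update_hash()) != 0)  /* conditional jump... */
        goto fail;
    goto fail;                       /* ...followed by unconditional jump */
fail:
    return err;
}

int as_folded(void)
{
    int err = update_hash();         /* the != 0 test has vanished */
    goto fail;
fail:
    return err;
}
```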
This may be appropriate 🙂 http://xkcd.com/292/
I remember my college C programming lessons from long ago (20+ years). You would get severely marked down at the slightest hint of a goto in your code…
Gotos can be very useful for “bail on error” in C. (Sort of home-made finalisation.)
Problem here is that the label “fail” *isn’t*. You can reach “fail” and still return success. Bloke could have coded it differently, not least because “goto fail” implies a fail-safety that doesn’t exist. See comments from @m above…
I mean something like this:
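A sketch of that style (the ERR_* names and stub functions are invented):

```c
/* Sketch: library error codes are mapped onto the function's own
   error set before jumping to finish, and success is set explicitly
   in exactly one place. All names here are illustrative. */

enum { ERR_NONE = 0, ERR_HASH = 1, ERR_VERIFY = 2, ERR_INCOMPLETE = 99 };

static int lib_hash(void)   { return 0; }  /* returns the library's codes */
static int lib_verify(void) { return 0; }

int check_signature(void)
{
    int retval = ERR_INCOMPLETE;   /* can never return success by accident */

    if (lib_hash() != 0) {
        retval = ERR_HASH;         /* map the library code onto our own */
        goto finish;
    }
    if (lib_verify() != 0) {
        retval = ERR_VERIFY;
        goto finish;
    }
    retval = ERR_NONE;
    goto finish;                   /* redundant, but keeps the flow uniform */

finish:
    return retval;
}
```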
I would put that redundant final “goto finish” in, too 🙂
This style also forces you to map the error codes returned along the way (“err” in the article example) into your own set of legal “retval” values before doing the “goto finish”, thus avoiding potential ambiguity – does a final return value of “2” mean “network socket timeout error” from the networking library, or “hash function ran out of memory” from the crypto library?
Any decent compiler would raise a warning about never executed code here. Though not paying attention to compiler warnings goes along with the coding style of this library…
Actually, Adam Langley, the chap who created the handy test site I link to on the Unofficial Patch page [q.v.], tested your assumption and wrote it up. (There’s a link to his article in this one – click on the text that says “soon came to light.”)
IIRC he found that both the GCC and Clang compilers, when used with -Wall, an option nominally meaning “turn on all compiler warnings”, did *not* warn about the dead code.
He reports that you can add -Wunreachable-code to make Clang complain. But if this is an option that isn’t included by default in “all warnings” mode (strange semantics for “all” 🙂), then you can imagine that it probably produces sheds full of need-to-be-ignored messages in real life.
Thanks for your answer, it’s been many years since I coded in C but I remembered getting such warnings myself… After some research, I found out that GCC’s team disabled these warnings because they were “unstable”: http://gcc.gnu.org/ml/gcc-help/2011-05/msg00360.html and yielded different warnings depending on the optimisation level.
Using a VPN is not going to protect you. The built-in VPN on OS X and iOS uses the same flawed crypto library. On OS X, you can use third-party VPNs which implement their own crypto libraries. Even this does not completely protect you, since someone can still do a man-in-the-middle attack on your VPN endpoint.
On iOS, a VPN will not protect you at all, since all VPNs on iOS are required to use the built-in system libraries (App Store rule). Similarly, Chrome on iOS does not protect you, since all browsers on iOS are required to use the built-in system browser engine.
I probably should have said, “Not using the built-in VPN.” Sorry.
And although it is true that “someone can still do a man-in-the-middle attack on your VPN” if you avoid the OS X one, that’s a truism…but they’d need some other way (not this flaw) to achieve that result.
As for subverting iOS with this flaw – iOS was already patched when I wrote this article. That’s why it’s about securing OS X, which wasn’t patched when I wrote the piece.
Is this Apple’s way of absolutely making me update to iOS 7? I have been holding off updating my iPhone 5 from 6, and I really dislike the new look and feel of 7… Just sayin’.
There is an update for iOS 6 as well.
Head to the HT1222 page: http://support.apple.com/kb/HT1222
Unfortunately, if he has an iPhone 5, it will update to iOS 7.0.4.
Only choice is to downgrade from iOS 6 to iOS 7 … or live with the unpatched SSL risk.
Love to hear if anyone has a third option.
Using goto in C is just proof of incompetence … since at school they said: “the goto statement exists in C (and C++), but please, never use it!”
For me it is a statement for BASIC (e.g. on a Commodore 64) and nothing else.
I’ll disagree. There are places – like error handling – where forward gotos can result in perfectly safe and readable code. So you can have a guideline that says, “If you can avoid using goto, please do.” But a rule that says “you’re incompetent if you use goto in C” is like saying “it is grammatically incorrect in English to start a sentence with ‘but’.”
You can’t see the need for goto in C unless you need to build some solid error handling in your program.
When a function needs to allocate resources and test multiple exit conditions, you can:
- use nested “if/then/else” (unreadable as soon as there are more than a few test conditions);
- use sequential “if”s and, for each test failure, duplicate the cleanup code and use a return (duplicated identical code and several returns in a function are poor coding style and hard to maintain); or
- use a goto to reach the function’s cleanup code, followed by the ‘return’.
In short, goto is the clean way, in C, to keep readability and proper cleanup when handling ‘exceptions’.
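A minimal sketch of that cleanup pattern (the resources and names are illustrative):

```c
#include <stdlib.h>

/* Sketch of the "goto for cleanup" pattern: one cleanup block,
   reached by forward gotos, releases whatever was allocated so far.
   The two allocations stand in for arbitrary resources. */
int process(void)
{
    int retval = -1;          /* assume failure until everything succeeds */
    char *buf = NULL;
    char *scratch = NULL;

    buf = malloc(1024);
    if (buf == NULL)
        goto cleanup;

    scratch = malloc(4096);
    if (scratch == NULL)
        goto cleanup;

    /* ... the real work would go here ... */
    retval = 0;               /* success is set in exactly one place */

cleanup:
    free(scratch);            /* free(NULL) is a safe no-op */
    free(buf);
    return retval;
}
```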
Thanks for the detailed dissection of this problem! Usually, I am frustrated by reports on Mac OS X security problems because they rarely indicate how old the problem is. I am always left wondering whether my old machines running older OSes are affected or not.
In this instance, I was able to use the detailed info you provided to actually compare the source code for older versions of Security.framework to the code you displayed to verify that they are NOT affected. In my case, Mac OS X 10.5.8 uses version 26935.0.0 of Security.framework. I could only find versions 24739 and 28425 on Apple’s open source site, but they were identical and both predate a rewrite that must have introduced the bug.
http://opensource.apple.com/source/libsecurity_ssl/libsecurity_ssl-28425/lib/sslKeyExchange.cpp
It is interesting to compare versions 28425 and 55471: the file was changed from “mixed C/C++” to just C at some point and the code in that block is almost identical in both versions (ignoring added & operators in the C version) except for a deleted line
if ((err = SSLFreeBuffer(hashCtx, ctx)) != 0)
near the end and the reordering of two lines including that one spurious “goto fail;” line. When I do a “diff” in TextWrangler between the two versions, the lines get mismatched in a way that suggests to me that the bug may have been the result of a careless RCS merge.
The “FIXME” and “TODO” comments throughout the files are perhaps interesting as well …
Anyways, thanks again.
Thanks for interesting article and techie comments.
One question that has been nagging me since the day I came to know that iOS has this security flaw…
–> How do I check whether my apps or third party apps are impacted by this bug in the security frameworks?
FYI, my apps connect to their server over an HTTPS connection.
Since your app almost certainly uses the SecureTransport library (assuming it simply makes an HTTPS connection), you should assume it is affected. You could add some additional code – like some sort of certificate pinning, where your app has its own list of SSL certificates it will accept – or just warn your users to upgrade…