A few days ago, my colleague Chester wrote an article with the no-punches-pulled headline Turkish Certificate Authority screwup leads to attempted Google impersonation.
Since then, an online discussion and dissection of what happened – or, more accurately, of what seems to have happened, so far as one can tell – has unfolded, and has reached a conclusion – or, more accurately, an acceptable hypothesis.
Let me try to summarise as briefly as I dare.
SSL and the hierarchy of trust
I’ll use some informal terminology, which will probably offend SSL experts everywhere, and which runs the risk of confusing the situation through oversimplification.
But here goes.
A certificate is, well, a plain old SSL certificate. Supersimplified, it’s a public key people can use to encrypt traffic to your site, combined with a digital signature that identifies it as yours.
A certificate authority (CA) is a company that adds a digital signature to your certificate, supposedly after verifying in some way that you are who you claim to be.
An intermediate certificate is the instrument a CA uses to generate the digital signature on your certificate, so that people can see who vouched for you.
A root certificate is the instrument a CA at the top level of trust uses to add a digital signature to the intermediate certificates at the next level of trust down, which in turn sign your certificate. That means people can see who vouched for the company that vouched for you.
You aren’t expected to verify by hand who vouched for whom in this hierarchy of trust. It all happens automatically when your browser sets up a secure connection.
The CAs at the public root certificate level really are starting points of the tree of trust. Their certificates are pre-loaded in your browser and automatically bestow trust downwards.
So if you have an SSL certificate in the name of EXAMPLE.ORG that is signed by a certificate from, say, GOOD4NE1, and if their certificate is signed by a certificate from, say, TURKTRUST, and if TURKTRUST’s certificate is trusted by your customer’s browser…
…then your customer’s browser (and therefore your customer) automatically trusts your server (and therefore implicitly trusts you).
You vouch for yourself. GOOD4NE1 vouches for you. TURKTRUST vouches for GOOD4NE1. And your browser vendor vouches for TURKTRUST.
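The chain-walk described above can be sketched as a toy model. Real verification checks cryptographic signatures; here the "issuer" links stand in for signatures, and the browser's trust store is just a set of pre-loaded root names (all names below are the illustrative ones from the text).

```python
# Toy model of the hierarchy of trust: trust flows downwards from the
# roots pre-loaded in the browser. Issuer links stand in for signatures.
TRUSTED_ROOTS = {"TURKTRUST"}          # pre-loaded in your browser

# Each certificate records who signed (vouched for) it.
ISSUED_BY = {
    "EXAMPLE.ORG": "GOOD4NE1",         # GOOD4NE1 vouches for you
    "GOOD4NE1": "TURKTRUST",           # TURKTRUST vouches for GOOD4NE1
}

def chain_is_trusted(subject: str) -> bool:
    """Walk up the chain of vouching until we hit a trusted root."""
    seen = set()
    while subject not in TRUSTED_ROOTS:
        if subject in seen or subject not in ISSUED_BY:
            return False               # a loop, or nobody vouched for it
        seen.add(subject)
        subject = ISSUED_BY[subject]
    return True

print(chain_is_trusted("EXAMPLE.ORG"))   # True: a chain leads to TURKTRUST
print(chain_is_trusted("EVIL.EXAMPLE"))  # False: no chain to any trusted root
```

Note that the browser never asks you anything: the walk either reaches a pre-loaded root, in which case trust is bestowed silently, or it doesn't, in which case you get a warning.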
TURKTRUST’s operational blunder
What could possibly go wrong?
In the TURKTRUST case, here’s what:
1. Back in mid-2011, TURKTRUST introduced a flawed business process which made it possible for the company to generate and ship an intermediate certificate by mistake, when a regular certificate had been requested.
(Hats off to TURKTRUST for publicly documenting in some detail what went wrong.)
2. TURKTRUST did indeed make such a blunder, sending two intermediate certificates to organisations that had requested regular certificates.
One of the certificates was revoked at the request of the customer who received it.
The other incorrectly-issued intermediate certificate, issued to EGO, the public transport authority in Ankara, Turkey, remained valid.
What that meant was that EGO now had the ability, whether it realised it immediately or not, to sign SSL certificates for any domain name it chose, apparently with the imprimatur of TURKTRUST.
And any certificate signed by EGO in this way would uncomplainingly be accepted by almost every browser in the world, because TURKTRUST’s root certificate was in every browser’s list of presumed-good CAs.
The next chapter in the story, it seems, didn’t start until the end of 2012, when EGO decided to implement security scanning of HTTPS traffic out of its network.
Scanning encrypted traffic
It’s easy to scan HTTP traffic by using a proxy, but HTTPS traffic is harder to look inside, since the content is supposed to be encrypted end-to-end.
The usual approach is to perform a Man-in-the-Middle (MiTM) attack on your own traffic. The marketing names for this are keybridging or decrypt-recrypt, but it’s really just a MiTM.
You split a user’s SSL connection into two parts, creating two SSL sessions – one from browser to proxy and the other from proxy to the final destination.
You decrypt inside the proxy, examine the contents, and then re-encrypt for the rest of the journey.
→ Keybridging isn’t an attack if you do it on your own company’s outbound traffic, but you ought to let your users know. It violates the sanctity of the end-to-end encryption you expect in an SSL connection.
The operational pain with keybridging is that your users get a certificate warning every time they make a secure connection to a new site. That’s because their SSL connections terminate at your proxy, not at the real sites they intended to visit.
The usual way around this is to create your own private root certificate, upload it to your keybridging proxy, and let the proxy automatically generate, sign and supply placeholder certificates to your own users.
By adding your private root certificate to all the computers inside your network, you suppress the certificate warnings, because your own browsers trust your own proxy as a CA. That means your browsers quietly tolerate the placeholder certificates generated by the proxy.
It’s somewhat impure and ugly, but it’s practical, and it works.
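The decrypt-inspect-recrypt shape of keybridging can be sketched in a few lines. This is a toy model, not TLS: a trivial XOR "cipher" stands in for the two encrypted sessions, and the content check is a placeholder for a real scanner.

```python
# Toy sketch of SSL keybridging: one connection split into two sessions,
# with the proxy decrypting, inspecting, and re-encrypting in the middle.
# XOR with a per-session key stands in for real TLS encryption.
def xor_crypt(data: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in data)

BROWSER_PROXY_KEY = 0x5A   # session 1: browser <-> proxy
PROXY_SERVER_KEY = 0x3C    # session 2: proxy <-> real destination

def keybridge(ciphertext_from_browser: bytes) -> bytes:
    # 1. Terminate the browser's session and decrypt inside the proxy.
    plaintext = xor_crypt(ciphertext_from_browser, BROWSER_PROXY_KEY)
    # 2. Examine the contents (a stand-in for the real content scanner).
    if b"malware" in plaintext:
        raise ValueError("blocked by content scanner")
    # 3. Re-encrypt for the rest of the journey to the real server.
    return xor_crypt(plaintext, PROXY_SERVER_KEY)

sent = xor_crypt(b"GET /index.html", BROWSER_PROXY_KEY)
onward = keybridge(sent)
print(xor_crypt(onward, PROXY_SERVER_KEY))  # the server sees the original request
```

The key point is that the plaintext exists inside the proxy at step 2, which is exactly why the end-to-end guarantee of SSL no longer holds, and why users deserve to be told.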
The trouble with outsiders
Things get really troublesome, as you can imagine, when you have a Bring Your Own Device (BYOD) policy, or if you let contractors onto your network, and want (hopefully with both their knowledge and their consent) to scan their SSL traffic along with that of your regular users.
Until they download and install your private root certificate in their browser, thus accepting you as a top-level CA, they’ll get certificate warnings.
And so those who don’t follow the instructions given by the helpdesk will keep getting certificate warnings, and will keep phoning the helpdesk. Wash, rinse, repeat.
Unless, as luck would have it, you happen to have an intermediate certificate, signed by an already globally-trusted root CA, that you can use for your MiTM.
But that, of course, is never going to happen, not least because any reputable root CA’s business processes would prevent it from inadvertently issuing you with an intermediate certificate for that purpose…
…and you can tell where this is going.
On 21 December 2012, EGO turned on SSL keybridging in its web proxy, using the intermediate certificate it had received back in 2011.
SSL public key pinning
The TURKTRUST palaver surfaced, it seems, a few days later, when one of the users on the EGO network, who was using Google’s Chrome browser, received a warning about an unexpected certificate claiming to represent a google.com web property.
That’s because of a Chrome feature called public key pinning, in which the browser is equipped not only with a list of presumed-good root CAs, but also with a list of known-good Google SSL certificates.
So, even if a presumed-good CA suddenly starts signing certificates claiming to be from *.google.com, the browser will complain.
This helps to protect against the compromise of a root CA, or against deliberately dodgy behaviour by a root CA, or, as in this case, against sloppy business process and buggy behaviour by a root CA.
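The pinning check itself is simple to picture: alongside the root CA list, the browser ships hashes of the public keys it expects for certain hosts. Here is a minimal sketch, assuming SHA-256 hashes as pins; the key bytes and hostnames are made up for illustration.

```python
# Toy sketch of public key pinning: a presented key is rejected for a
# pinned host unless its hash matches a shipped pin, regardless of
# whether the certificate chains to a presumed-good root CA.
import hashlib

def spki_pin(public_key_bytes: bytes) -> str:
    return hashlib.sha256(public_key_bytes).hexdigest()

GOOGLE_REAL_KEY = b"google-real-public-key"        # illustrative bytes
MITM_KEY = b"proxy-generated-placeholder-key"      # illustrative bytes

# Shipped with the browser: known-good pins for google.com.
PINS = {"google.com": {spki_pin(GOOGLE_REAL_KEY)}}

def pin_check(host: str, presented_key: bytes) -> bool:
    """Accept if the host has no pins, or the presented key matches one."""
    pins = PINS.get(host)
    return pins is None or spki_pin(presented_key) in pins

print(pin_check("google.com", GOOGLE_REAL_KEY))  # True
print(pin_check("google.com", MITM_KEY))         # False: warn the user
print(pin_check("example.org", MITM_KEY))        # True: no pins for this host
```

This is why the EGO proxy's placeholder certificate, despite chaining perfectly to TURKTRUST's trusted root, still triggered a warning in Chrome.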
You’ll note that I’ve said “in this case, against sloppy business process.”
Conspiracy theories notwithstanding, I’ll accept that this was a crisis born out of convenience, not an abortive attempt at secret surveillance:
- TURKTRUST shouldn’t have issued the wrong sort of certificates.
- EGO shouldn’t have put the wrongly-issued intermediate certificate to the use it did.
Where to from here?
What happens next?
I’ll let the proposal summarise for itself:
The goal is to make it impossible (or at least very difficult) for a Certificate Authority to issue a certificate for a domain without it being visible to the owner of that domain. A secondary goal is to protect users as much as possible from mis-issued certificates. It is also intended that the solution should be backwards compatible with existing browsers and other clients.
This is achieved by creating a number of cryptographically assured, publicly auditable, append-only logs of certificates. Every certificate will be accompanied by a signature from one or more logs asserting that the certificate has been included in those logs. Browsers, auditors and monitors will collaborate to ensure that the log is honest. Domain owners and other interested parties will monitor the log for mis-issued certificates.
Briefly put, the idea is to maintain a community-policed list that lets you differentiate between certificates that are supposed to be in circulation, and certificates that have been generated through incompetence or for nefarious purposes.
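The tamper-evident quality of such a log can be sketched with a simple hash chain. (Certificate Transparency actually specifies a Merkle tree, which allows efficient inclusion proofs; the chain below is a simplification that shows only the append-only property.)

```python
# Minimal sketch of an append-only, tamper-evident certificate log.
# Each append folds the new entry into a running head hash; altering
# any earlier entry changes every later head, so auditors comparing
# heads can detect a rewritten history.
import hashlib

class AppendOnlyLog:
    def __init__(self):
        self.entries = []
        self.head = hashlib.sha256(b"empty-log").hexdigest()

    def append(self, cert_der: bytes) -> str:
        self.entries.append(cert_der)
        self.head = hashlib.sha256(self.head.encode() + cert_der).hexdigest()
        return self.head   # stands in for the log's promise of inclusion

    @staticmethod
    def replay(entries) -> str:
        """What an auditor computes from the published entries."""
        head = hashlib.sha256(b"empty-log").hexdigest()
        for e in entries:
            head = hashlib.sha256(head.encode() + e).hexdigest()
        return head

log = AppendOnlyLog()
log.append(b"cert-for-example.org")
head = log.append(b"cert-for-google.com")

# Replaying the honest history reproduces the head; a rewritten one does not.
print(AppendOnlyLog.replay(log.entries) == head)                           # True
print(AppendOnlyLog.replay([b"cert-for-evil.example",
                            b"cert-for-google.com"]) == head)              # False
```

A domain owner monitoring such a log would spot a certificate for their domain that they never requested, which is precisely the visibility the proposal is after.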
Of course, certificate transparency will add yet another layer of complexity to an already-complex process, which is a worry.
But it will also inject a layer of enforced honesty, accountability and supervision into the SSL world, which ought to be good for us all.
Update. At 2013-01-08T22:38+11, I corrected a chronological error and made some cosmetic changes. I originally wrote that the bad certificates were generated in late 2012. But that was the date at which the remaining bad certificate was first used in EGO’s firewall. TURKTRUST’s business process wasn’t flawed from mid-2011 until late 2012. The problem existed only for a short time in 2011, and the bad certificates were generated back then. More details can be found in Turkish and in English on TURKTRUST’s website. Thanks to TURKTRUST for helping me get this right.
Update. At 2013-01-09T06:59+11, I corrected an error about the recipients of the bad certificates. They weren’t both issued to EGO, as I originally wrote. So there is no mystery, real or imagined, about why EGO reported one as bad and not the other, since EGO only ever received one of them. Thanks to Atilla and Cagdas at TURKTRUST for their patience in reviewing the article.