You may very well have read about the latest leak supposedly sourced from the secret data stolen by whistleblower Edward Snowden.
The three-bullet version tells approximately this story:
- Intelligence services managed to penetrate the network of a major SIM card manufacturer.
- They got hold of large amounts of cryptographic key material.
- They can therefore eavesdrop on millions, perhaps even billions, of mobile devices.
Actually, there’s a subtle rider to the last item.
According to the story, having copies of the keys doesn’t just let you listen in to present and future calls, but theoretically lets you decrypt old calls, too.
Understandably, a lot of coverage of what The Intercept has boldly entitled “The Great SIM Heist” is focusing on issues such as the audacity of the intelligence services.
There’s also speculation about the possible financial cost to the SIM manufacturer connected with (though not implicated in) the breach.
But we think there’s a more interesting angle to zoom in on, namely, “What is it about SIM cards that made this possible?”
Indeed, according to the story, there wasn’t really a “SIM heist” at all.
No SIM card was ever touched, physically or programmatically.
No SIMs were stolen or modified; no sneaky extra steps were inserted into the manufacturing process; there were no interdictions to intercept and substitute SIMs on the way to specific targets; there was no malware or hacking needed on any handsets or servers in the mobile network.
What was grabbed, if we have interpreted the claims correctly, was a giant list of cryptographic keys for an enormous stash of SIMs.
Many, if not most, of these have presumably (given the age of Snowden’s revelations) already been sold, deployed, used, and in some cases, cancelled and thrown away.
And yet these keys still have surveillance and intelligence-gathering value, both for already-intercepted but still uncracked call data, and for calls yet to be made by SIMs on the list.
How can that be?
The basic purpose of a SIM card is exactly what its name suggests: to act as a Subscriber Identity Module. (That’s why your mobile phone number isn’t tied to your handset – the number goes with your SIM from phone to phone, not the other way around.)
A SIM is a smartcard: it doesn’t just store data, like the magstripe on a non-smartcard does, but is also a miniature computer with secure storage and tamper protection.
That ought to make it ideal for cryptographic purposes, such as:
1. Secure authentication to the mobile network. (This protects the company’s revenue by ensuring it can bill you accurately for calls.)
2. Secure authentication of the network to your phone. (This makes it harder for imposters to man-in-the-middle your calls.)
3. Secure encryption of calls. (This protects you from eavesdropping, which was a real problem with earlier mobile phones.)
4. Resistance to SIM duplication. (This protects you and the network from “phone cloning,” where someone else racks up calls on your dime.)
You’re probably expecting the techniques used for (1) and (2) to involve public-key cryptography.
That’s where you have an encryption algorithm with two keys: one of them locks messages, so you can give that public key to anybody and everybody; the other is the private key that unlocks messages, which you keep to yourself.
This feature – one key to lock and another to unlock – can be used in two splendidly useful ways.
If I lock a message with your public key, I know that only you can unlock it, if you’ve been careful with your private key.
In other words, I can communicate secrets to you without the tricky prospect of securely and secretly sharing a secret key with you first. (Read that twice, just in case.)
On the other hand, if you scramble a message with your private key, anyone can unscramble it with the public key, but when they do, they know that you must have sent it.
So I can satisfy myself that it really is you at the other end, again without needing a secure and secret channel first.
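Both tricks fall out of the same asymmetry. Here’s a toy sketch using the classic textbook RSA numbers (p=61, q=53) – these values are hopelessly small and exist purely to illustrate the “one key locks, the other unlocks” idea, not to be secure:

```python
# Toy public-key demo with textbook-sized RSA numbers. Illustration only --
# real keys are thousands of bits, and real systems use padding schemes.
p, q = 61, 53
n = p * q          # modulus, shared by both keys: 3233
e = 17             # public exponent (give to anybody and everybody)
d = 2753           # private exponent (keep to yourself); e*d == 1 mod (p-1)(q-1)

def transform(message: int, key: int) -> int:
    """Lock or unlock a (numeric) message with one of the two keys."""
    return pow(message, key, n)

secret = 65

# 1. Anyone can lock a message with the public key...
ciphertext = transform(secret, e)
# ...but only the private-key holder can unlock it.
assert transform(ciphertext, d) == secret

# 2. Conversely, only the private-key holder can make this "signature"...
signature = transform(secret, d)
# ...but anyone can check it with the public key, proving who sent it.
assert transform(signature, e) == secret
```

The same pair of operations, run in opposite directions, gives you secrecy in one case and authentication in the other.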
For item (3), you’re probably expecting another use of public-key cryptography, namely something like Diffie-Hellman-Merkle (DHM) key exchange, where each end agrees on a one-time encryption key that can never be recovered from sniffed traffic.
That means that even if someone records your entire call, including the “cryptographic dance” each end does with the other at the start, there isn’t enough data in the intercept alone to decrypt the call later, providing that both ends throw away the one-time key when the call ends.
The property of preventing decryption later on is known as forward secrecy, though it’s probably easier to think of it as “backwards security.”
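The “cryptographic dance” can be sketched in a few lines. The prime and generator below are illustrative stand-ins (real deployments use standardized groups, such as those in RFC 3526, or elliptic curves); the point is that the exponents a and b never cross the wire, and once they’re thrown away, a recorded intercept is useless:

```python
import hashlib
import secrets

# Minimal Diffie-Hellman-Merkle sketch. Parameters are toy choices for
# illustration, not a vetted group.
p = 2**127 - 1                   # a Mersenne prime, standing in for a real group
g = 3

a = secrets.randbelow(p - 2) + 2   # my one-time private exponent (never sent)
b = secrets.randbelow(p - 2) + 2   # your one-time private exponent (never sent)

A = pow(g, a, p)                 # exchanged in the clear -- sniffable
B = pow(g, b, p)                 # exchanged in the clear -- sniffable

# Each end combines its own secret with the other end's public value,
# and both arrive at the same number.
k_mine, k_yours = pow(B, a, p), pow(A, b, p)
assert k_mine == k_yours

# Hash down to a one-time call key, then discard the exponents: with a
# and b gone, the intercepted A and B alone can't reconstruct the key.
call_key = hashlib.sha256(k_mine.to_bytes(16, "big")).digest()
del a, b
```

An eavesdropper who recorded A, B, and the whole encrypted call still holds nothing that yields call_key – that’s forward secrecy in miniature.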
Not so fast
That’s not how SIM cards work.
For both the GSM and UMTS networks (the protocols behind 2G and 3G/4G mobile voice and data), SIM authentication and call encryption rely on a good, old-fashioned shared secret key.
You’ll often see that shared secret referred to as Ki, pronounced, simply, “kay-eye.”
It’s the key by which your SIM proves its identity and prepares to place a call.
When a SIM is manufactured, a randomly-generated Ki is burned into its secure storage.
That key can’t be read back out; it can only ever be accessed by software programmed into the SIM that uses it as a cryptographic input; it never emerges, in whole or in part, in the cryptographic output.
If we assume that the SIM’s tamper protection is perfect, and that there are no cryptographic flaws that leak data about Ki (it seems there were some such flaws in the early days, but they have been fixed now), that ought to be that.
Even if I target you by borrowing your phone and getting the SIM into my own grubby hands, I can’t access that key, not even if I have an electron microscope and millions of dollars up my sleeve.
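In rough terms, the Ki-based challenge-response works like this. (The real algorithms, known as A3 and A8, are operator-specific – COMP128 was a common choice – so HMAC-SHA256 stands in for them here; treat this as the shape of the protocol, not its specification.)

```python
import hmac
import hashlib
import secrets

# Sketch of GSM-style SIM authentication. HMAC-SHA256 is an illustrative
# stand-in for the operator-specific A3/A8 algorithms.

class Sim:
    """Ki goes in at manufacture and never comes back out."""
    def __init__(self, ki: bytes):
        self._ki = ki                              # locked in secure storage

    def respond(self, rand: bytes):
        mac = hmac.new(self._ki, rand, hashlib.sha256).digest()
        sres = mac[:4]       # authentication response (the A3 role)
        kc = mac[4:12]       # session cipher key (the A8 role)
        return sres, kc

# The operator keeps its own copy of Ki -- the crux of the whole story.
ki = secrets.token_bytes(16)
sim = Sim(ki)

rand = secrets.token_bytes(16)                     # operator's random challenge
sres, kc = sim.respond(rand)

# The operator recomputes from its copy of Ki and compares.
expected = hmac.new(ki, rand, hashlib.sha256).digest()
assert sres == expected[:4]                        # SIM is authenticated
session_key = expected[4:12]                       # both ends now share Kc
```

Notice that Ki itself never appears in the exchange – only values derived from it – which is why the scheme is sound right up until someone walks off with the operator’s list of Kis.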
One tiny problem
But there’s one tiny problem: namely that a copy of every Ki for every SIM has to be kept for later, when the SIM is sold to a mobile phone operator and subsequently provided to a subscriber.
And as anyone who has uploaded a dodgy selfie onto a social network and seen it turn up later in unexpected places can tell you, the only way to be sure that no copies of confidential content get into circulation is…
…not to make a copy in the first place.
Sadly, secret-key encryption (also known as symmetric encryption) that involves two different parties, such as you and a mobile phone network, relies on having at least two copies of that secret key: one for you, and one for them.
As far as we’re aware, the primary reason that GSM and UMTS rely on shared secret keys, and don’t support forward secrecy, is performance.
The processing power of SIM cards, and of many of the mobile devices they are plugged into, isn’t quite enough to do things properly.
Public key cryptography is well-known, and can be reasonably efficiently implemented, but it nevertheless isn’t anywhere near as efficient, in terms of CPU power and memory usage, as symmetric encryption.
So SIM authentication and call encryption are done nearly-properly instead.
With an unsurprising, if disappointing, outcome, assuming that The Intercept has this story correct.
The bottom line
We’ll keep it short.
If you’re going to encrypt your own stuff, do it properly.
22 comments on “How the “Great SIM Heist” could have been avoided”
The assertion that the key can’t be got at even with an electron microscope and millions of dollars is not completely true.
Although this is old, an EU project I ran managed to get at the keys with fairly low-tech mechanisms – see http://www.nytimes.com/2002/05/13/technology/13SMAR.html for a brief mention of it. In fact, some of the researchers in the project managed to do a lot more. What we learned was then used to design countermeasures that, at least in part, are in some of today’s systems.
Every now and again, someone will find a low-tech method to crack high-tech systems. The attack above uses the fact that chips are light-sensitive – something that I used to supplement my pocket money as a kid, by scraping the graphite off the outside of OC71 transistors to turn them into OCP71 transistors, which sold for 4x the price. I don’t claim any credit for the attack mechanism – that was Cambridge – just pointing out that those of us who are older than the IC can sometimes use that knowledge differently.
Do you think that any SIMs in use today are specifically vulnerable to harvesting of Ki via the “flashgun” method?
Because it’s Ross Anderson et al., I’m happy to accept it as a probably workable attack, but the article you linked to doesn’t say exactly how much data, and of what sort, could be retrieved. (I assume that things like key length and algorithm implementation would play a fair part in deciding what you could and couldn’t deduce with “optical disruption,” as well as the fabrication size [terminology?] of the electronic part of the chip.)
I was being vaguely tongue in cheek with the mention of an electron microscope…though apparently in the Cambridge attack, an optical microscope was of more use 🙂
iPhones do not have SIM cards, so I would assume this platform would not be affected. Correct?
My iPhone has a Nano-SIM card.
iPhones do have SIM cards.
I think you have to read this article again.
Without a SIM, your iPhone is just a very expensive iPod!
The SIM uniquely identifies the phone; this, plus a subscription of some sort, allows the phone to connect to a mobile phone network, enabling calls to be made and received.
An iPhone certainly does have a SIM card, at least if you want to make or receive mobile calls or do SMS text messaging with it.
You may be thinking of Wi-Fi only iPads, which don’t have any GSM or UMTS hardware (therefore no SIM card), or thinking of the fact that recent iPhones use a special sort of SIM called a NanoSIM, which is technically identical to any other SIM except for its size.
(A NanoSIM is basically an old-style SIM pared back just to the chip – almost no insulation or plastic packaging around it – that fits into a much tinier slot than any other sort of SIM. Electrically and electronically, however, it is identical.)
Do CDMA phones work the same way, Paul? Both ends holding the encryption key? I know we don’t have any in Australia anymore, but some Americans are still stuck on CDMA.
No idea, I’m afraid. I’ve never even seen, let alone used, a CDMA phone. I don’t even know if they have SIM cards 🙂
I probably ought to know about it, but I’ve always put CDMA in the same basket as BeOS and video tape: gone, and as good as forgotten 🙂
(I did try BeOS once. It was kind of weird. It supported most of my hardware…but not my keyboard. Sort of made it hard to go much further than the end of the installation process.)
I think you are confusing SIM cards and SD cards. iPhones do have SIM cards, they do NOT have SD Cards. SD Cards store data (often unencrypted). SIM cards allow you to use the iPhone as an actual phone.
There are many key card implementations on the market that can do public-key cryptography. The answer is not lack of processing power any more.
Well, it is still a problem inasmuch as it was a problem – or was seen as a problem – back when the standards were etched into place.
That’s the tricky part: moving everything forward. (While still producing and selling $6 prepaid mobile phones that continue both to provide useful service and return useful revenue.)
If it’s true that the reason to not use forward secrecy is because of performance, the engineers who developed the protocol must not have understood Diffie-Hellman. It’s a ridiculously simple algorithm. Just look it up on Wikipedia. And it’s actually used to exchange keys for symmetric encryption, so no performance argument there, either.
It might be “ridiculously simple” in your book (please call it DHM, not DH, which is what I think Messrs D and H would prefer :-), but it does require quite a lot of memory and CPU time, at least compared to most symmetric crypto.
It requires “bignum” arithmetic, which is where you work with numbers that require thousands of bits of precision, not merely 16, 32, 64 or even 128 bits.
(I’m not offering this as an excuse, just as an observation. After all, binary search is a “ridiculously simple” algorithm, even for huge data sets, but comes with some huge overheads, e.g.: Step 1, sort your data… 🙂
“How the “Great SIM Heist” could have been avoided”
The explanation in the article didn’t really live up to the title – it doesn’t explain how this could truly be kept from happening again.
This wasn’t some shadowy group or criminal organization – this was done at the behest of, and by, our own governmental organizations. Only when citizens care enough about their own privacy, and are unwilling to give up that privacy for perceived security, will this be avoided.
These organizations should be working to keep such information from being pilfered – not doing it themselves.
I thought it was reasonably clear… Avoiding the need for shared secrets that have to be stored and shipped in bulk, and supporting forward secrecy, wouldn’t so much have prevented what happened here as sidestepped it altogether.
Out of interest, what does this mean for Silent Circle – is their app and/or Blackphone compromised in any way, e.g. if I’m using an iPhone with the SC app?
Great article. This question bugged me all weekend. Thanks for the answer.
Some statements are not completely exact. The Ki is not used to encrypt the communication. It is used to authenticate the terminal to the operator. In that process, a temporary key is created to encipher the communication. This temporary key is based on a random 16-byte string sent from the operator. So… not as simple as you make it seem.
I kept the explanation simple. I still think it’s clear: Ki is the secret key that “is” the SIM’s identity. If you have already sniffed a call, then you should have the random 16-byte nonce used in that call. As you say, that nonce plus Ki are used to derive an authentication string and a session key. So if you have Ki, you can decrypt that call…
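To put that reply in rough code terms: the session key depends only on Ki and the sniffed nonce, so holding both is enough. (As before, HMAC-SHA256 stands in for the real A8 key derivation, and a SHA-256 keystream XOR stands in for the A5 stream cipher – the shapes, not the actual algorithms.)

```python
import hmac
import hashlib
import secrets

# Why a stolen Ki unlocks a recorded call: Kc is derived from Ki plus the
# RAND nonce, and the attacker has both. Stand-in algorithms throughout.

def derive_kc(ki: bytes, rand: bytes) -> bytes:
    return hmac.new(ki, rand, hashlib.sha256).digest()[:8]

def stream_cipher(kc: bytes, data: bytes) -> bytes:
    keystream = hashlib.sha256(kc).digest()   # toy: one 32-byte block only
    return bytes(d ^ k for d, k in zip(data, keystream))

ki = secrets.token_bytes(16)      # burned into the SIM... and kept in the stash
rand = secrets.token_bytes(16)    # sent over the air in the clear, so sniffable

# The call, as encrypted by the handset:
ciphertext = stream_cipher(derive_kc(ki, rand), b"hello operator")

# The attacker replays the derivation with the stolen Ki and sniffed RAND:
recovered = stream_cipher(derive_kc(ki, rand), ciphertext)
assert recovered == b"hello operator"
```

The same replay works on a call intercepted years earlier, which is why the stash keeps its value even for long-retired SIMs.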
I am assuming, in theory, that if you have a SIM fabrication plant of your own, that you may be able to clone a SIM given Ki, too. But I am not 100% sure about that. There might be additional material needed; that might not have been acquired along with the stash of Kis.
There’s a sort of quick explanatory diagram in the 60 Second Security video – in fact, it’s the thumbnail image for the video itself.