When you think of “signal jamming,” you probably imagine some kind of fine steel mesh that blocks out radio transmissions altogether, or a source of electromagnetic noise that interferes enough to make legitimate communication impossible.
But a paper presented by a trio of German researchers at the recent USENIX Security Symposium reveals a much more subtle approach to jamming mobile phone calls.
They were able to convert a single mobile phone into a denial of service (DoS) device that could be turned against another subscriber, perhaps wherever they roamed through a whole town or city.
The paper is quite technical, and unavoidably filled with the jargon of mobile telephony, yet the authors have done an excellent job of making it into a comprehensible read that teaches you a number of useful security lessons.
As they point out very clearly, many of the security decisions taken in the early days of the GSM (Global System for Mobile communications) system were based at least in part on security through obscurity.
The consensus back then seemed to be, “Nobody will ever be able to build their own base station, or make their own handset!”
So why bother going to the trouble of designing in security to protect against the hardware and firmware of the network itself turning hostile?
All that has changed, with open source implementations available for both base stations and handsets.
As a result, security shortcuts that didn’t seem to matter much 20 years ago have come back to haunt us.
How your phone receives a call
Mobile phones aren’t in a perpetual state of readiness to receive calls or SMSes (text messages) instantaneously.
Instead, your phone spends most of its time in a low-power mode, from which it can be signalled to wake up fully to accept a call or message. (That’s why your phone battery may well last for days when you aren’t making or receiving calls, but typically only hours when you are.)
Rather casually simplified, and with apologies to the authors of the USENIX paper, this is what happens when a nearby cell tower decides it’s time for you to get a call:
1. The base station sends out a broadcast page containing an identification code for your phone.
2. Your phone recognises its own identification code.
3. Your phone wakes up and responds to the base station.
4. The base station and your phone negotiate a private radio channel for the call.
5. Your phone authenticates to the base station.
6. Your phone starts ringing (or an SMS arrives).
How an attacker can “jam” your calls
You can probably spot what computer scientists call a race condition in the sequence above, caused by the fact that authentication happens late in the game.
Every device in range can listen in to the broadcast pages inviting your phone to wake up, so a device that’s faster than yours can race you to step 5 and win, causing your phone’s attempt to authenticate to be rejected.
Of course, the “jamming” phone doesn’t know how to authenticate, but that doesn’t matter; in fact, it can deliberately fail the authentication, causing the process to bail out at step 5.
There is no step 6, so the call is lost – invisibly to you, because you lost the race to reply – and service is denied.
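The race above can be sketched as a toy simulation. Everything here is illustrative, not real GSM signalling: the base station simply accepts whichever device answers its page first, and if that device then fails authentication, the call is abandoned.

```python
# Toy model of the paging race condition. All names and numbers are
# made up for illustration; real GSM signalling is far more involved.

def answer_page(responders):
    """The base station takes whichever device replies first."""
    return min(responders, key=lambda r: r["latency_ms"])

def page_phone(tmsi, responders):
    """Page a TMSI and see whether the call ever reaches the ringing stage."""
    winner = answer_page(responders)
    if not winner["can_authenticate"]:
        # The race winner reaches the authentication stage (step 5) and
        # deliberately fails it, so the network bails out: no step 6.
        return "call dropped"
    return "phone rings"

victim   = {"name": "victim",   "latency_ms": 200, "can_authenticate": True}
attacker = {"name": "attacker", "latency_ms": 50,  "can_authenticate": False}

print(page_phone(0x2AE41B7C, [victim, attacker]))  # -> call dropped
print(page_phone(0x2AE41B7C, [victim]))            # -> phone rings
```

The point of the sketch is the ordering problem: because authentication happens only after the race is decided, the fastest responder wins even when it cannot authenticate at all.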
The authors got this attack working with a tweaked open source baseband (mobile phone firmware) that was adapted to ensure that it ran faster than a wide range of commercial handsets, including the Apple iPhone 4s, Samsung Galaxy S2 and Blackberry 9300 Curve.
How an attacker finds your phone
There is no authentication or encryption during the “are you there?” message and the “here I am!” reply, so an attacker doesn’t need any cryptographic cleverness to work out which messages are meant for what devices.
There is a slight complication, however: the attacker probably doesn’t know your phone’s identification code in advance.
To be strictly correct: the code is tied to your SIM card, not to the phone hardware itself, since every SIM has a unique code called an IMSI (International Mobile Subscriber Identity) burned into it, rather like the MAC address in a network card.
But GSM phones deliberately minimise the frequency with which unencrypted IMSIs are visible on the network, in order to provide you with some safety and privacy against being tracked too openly.
Instead, occasional exchanges involving your true IMSI are used to produce a regularly changing TMSI, where T stands for Temporary.
The TMSI is a pseudorandom, temporary identifier that varies as a matter of course as you turn your phone off and on or roam through a network.
The network operator maintains a list to keep track of which TMSI relates to what IMSI at any moment, but that database is unlikely to be accessible to an attacker.
The authors used traffic analysis to get round this problem.
While sniffing all the TMSIs being broadcast on the network, they call your number 10 to 20 times in quick succession, but deliberately drop each call after a few seconds.
The TMSI that suddenly appears 10 to 20 times in quick succession in the sniffer logs, as the network tries to track you down with its broadcast pages, is almost certainly the one they want.
Easy, isn’t it?
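The traffic-analysis step boils down to frequency counting: make a known number of quickly-dropped calls, then look for the TMSI that suddenly appears in the sniffed paging traffic about that many times. A minimal sketch, with made-up sniffer data:

```python
from collections import Counter

def likely_tmsi(sniffed_pages, expected_calls, tolerance=5):
    """Return the TMSIs whose page count is close to the number of
    deliberately-dropped calls we just made."""
    counts = Counter(sniffed_pages)
    return [t for t, n in counts.items()
            if abs(n - expected_calls) <= tolerance]

# Ordinary background paging traffic, plus 15 pages for one TMSI
# (corresponding to 15 quickly-dropped calls to the target's number).
background = [0x11111111, 0x22222222, 0x33333333] * 2
pages = background + [0xDEADBEEF] * 15

print([hex(t) for t in likely_tmsi(pages, expected_calls=15)])
# -> ['0xdeadbeef']
```

On a busy network the background traffic is much noisier, which is why the authors made 10 to 20 calls rather than one: a single extra page would vanish into the noise, but a spike of 15 pages for one TMSI stands out clearly.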
→ As long as they drop the call after the TMSI has sent in a broadcast page but before your phone gets past the authentication stage (step 5 above), your phone won’t ring and the imposter calls won’t show up. That means you won’t be aware that anything dodgy is going on. The authors used trial and error to determine a suitable call-drop delay for the network provider they targeted, finding that 3.7 seconds worked well.
How the attacker finds out which cell you are in
Here’s the thing: he doesn’t need to know more than your general location.
When you receive a call, the mobile network doesn’t page for your phone only in one cell of the network – it pages throughout your location area, which is a cluster of base stations in the vicinity.
This means that the network doesn’t need to keep precise tabs on you all the time, which in turn means that your phone doesn’t have to tell the network exactly where it is from moment to moment, thus extending battery life.
So as long as I know you are somewhere, say, in the City of Sydney, I can sit in a coffee shop at the Opera House and sniff for your TMSI wherever you go in town, because the broadcast pages that go out when I make those 10 to 20 bogus calls are duplicated everywhere in the location area.
The authors did some war-driving mapping trips around Berlin, their home turf, and determined that location areas can be very extensive, ranging from 100km² to 500km².
(For comparison, the City of Sydney, which stretches from the Harbour Bridge south as far as Central Station, is just 25km².)
How the attacker can amplify the attack
Instead of looking out for your TMSI and blocking your calls, what if the attacker wanted to block every call to knock a large metro area out in one go?
One rigged sniffer phone alone couldn’t do it.
The authors found that although their tweaked phone baseband could beat many popular mobile phones in the race to authenticate, it still took about one second to “jam” each broadcast page, limiting each phone to about 60 “jammed” pages per minute.
So they built a rig with eleven tweaked phones, thus allowing them to subvert more than 600 broadcast pages per minute.
Their measurements suggested this would be enough to knock out the service of at least some of the four major German operators across one location area (100km² – 500km²) in metro Berlin.
Remember that the eleven attack phones don’t have to be distributed through the location area, since all broadcast pages are replicated through all cells in the area.
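The sizing arithmetic is simple enough to check on the back of an envelope, using the per-phone rate reported above (roughly one jammed page per second, or about 60 per minute):

```python
import math

def phones_needed(pages_per_minute, per_phone_rate=60):
    """How many tweaked phones are needed to jam a given paging rate,
    assuming each phone handles about 60 broadcast pages per minute."""
    return math.ceil(pages_per_minute / per_phone_rate)

print(phones_needed(660))  # -> 11 phones cover up to 660 pages/minute
```

So eleven phones suffice for any location area paging at up to about 660 calls and SMSes per minute; a busier area simply means adding more phones to the rig.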
The only problem the authors faced was how to allocate the TMSI broadcasts amongst their eleven tweaked phones.
Using a messaging system to hand out each successively sniffed TMSI to the next phone on the list required the use of a serial connection to each phone, which was too slow.
In the end, they simply allowed each phone to select TMSIs by a bit pattern, so that phone 1, for example, might handle TMSIs starting with the bytes 0x00 to 0x1F, and so on.
→ As an amusing side-effect of tuning the partitioning algorithm to ensure that each phone handled about the same quantity of broadcast pages, the authors noticed that the bytes in most TMSIs were far from randomly distributed. Ironically, in this case, the lack of randomness made the partitioning job harder, not easier.
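The static partitioning trick can be sketched in a few lines. The mapping below, splitting the 256 possible leading bytes of a TMSI into contiguous ranges, is an illustrative assumption; the paper tuned the actual ranges so that each phone saw roughly equal load despite the skewed byte distribution.

```python
def phone_for_tmsi(tmsi, n_phones=11):
    """Map a 32-bit TMSI to one of n_phones by its most significant byte,
    so each phone can filter pages locally with no central dispatcher."""
    first_byte = (tmsi >> 24) & 0xFF
    # Split the 256 possible leading bytes into n_phones contiguous ranges.
    return first_byte * n_phones // 256

print(phone_for_tmsi(0x00000000))  # -> 0 (lowest leading bytes)
print(phone_for_tmsi(0xFF000000))  # -> 10 (highest leading bytes)
```

The design point is that the rule is static: every phone can decide for itself, instantly, whether a sniffed TMSI is "its" job, with no slow serial messaging needed to hand out work.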
What about interception, not just jamming?
As the authors note, in some mobile networks, they could go further than just cancelling your calls and knocking you off the network.
They observed that some networks, presumably for performance reasons, cheat a little on step 5, and don’t authenticate every call.
In these cases, an attacker who can win the race to the authentication stage (step 5 above) can do more than cancel your call – he can accept it instead (or receive your SMS), from anywhere in your location area, and you won’t realise.
Also, some networks still use outdated, broken versions of the A5 encryption algorithm that is part of the GSM standard.
On these networks, your calls can be sniffed and decrypted anyway, but in a busy metro area, an attacker is faced with problems of volume: how to home in automatically only on the calls he really wants to intercept, without having to listen to everyone else’s chatter too.
The authors’ “jamming” firmware could be modified to do just that job, used as a call alerting mechanism instead of for a denial of service.
→ Sniffing the call data for later decryption can’t be done from anywhere in the location area, which is a small mercy, so an attacker needs to be in the same cell as you.
What to do about it?
You can probably guess what mitigations the authors proposed, because they are obvious and easy to say; you will also probably wonder if they will ever happen, because they involve change, and potentially disruptive change at that, so they are hard to do.
Defending against the eavesdropping and call hijacking problems is straightforward: perform authentication for every call or SMS, and don’t use broken versions of the GSM cipher.
The system already supports everything that’s needed; all that is required is for it to be turned on and used by every operator.
Defending against the denial of service problem is slightly trickier, as it needs a protocol change: move authentication up the batting order to prevent the race condition.
The authors propose a technically simple way to do this, but it means shifting some of the cryptographic operations from the authentication stage (step 5 above) to the “are you there?/here I am!” stages (steps 1 and 2).
Unfortunately, these mitigations don’t include steps you can take to help yourself; they need changes from the mobile operators.
Will that happen?
Or will backward compatibility, the thorn that is making Windows XP so hard to dislodge, get in the way yet again?