Chip-and-PIN should be “Chip-and-Skim,” say Cambridge card-cloners

Back in 2011, cryptography experts at the University of Cambridge in England were approached by a man from Malta whose bank refused to refund a series of disputed transactions made in Spain.

It seems that the bank presumed that the victim’s own Chip-and-PIN card must have been used, because that sort of card is as good as impossible to clone, so that was that.

To the Cambridgeshire crypto posse, including the redoubtable Ross Anderson, making that sort of inference would have been like throwing down a gauntlet.

History has shown that there are many ways that cryptographic systems fail, and it’s usually not down to the cryptography itself, but to how it was implemented.

So the Cantabrigians set out to answer the question, “Is it reasonable to assume the infallibility of Chip-and-PIN transactions?”

More importantly, they wondered, “Can we find an attack that, however complex it might be to mount, could be considered practicable for real-world cybercrooks?”

As you probably can tell from the headline above, the answers were “No” and “Yes.”

Chip-and-PIN not infallible

Chip-and-PIN is not infallible, and real-world attacks must be considered feasible.

At last, after giving the Chip-and-PIN community two years to study their findings, the quincumvirate has published a fascinating paper that is available to the public:

Click to jump to the paper... [PDF]

They presented the paper at the 2014 IEEE Symposium on Security and Privacy in San Jose, California, and I urge you to read it.

If nothing else, it provides yet another handy piece of expert commentary you can mention when you decide that you really do want to dig your heels in and refuse to hand over PII (personally identifiable information) to someone in a position of authority who keeps arguing that, “We take security really seriously, and all our systems are secure.”

Here, very greatly simplified (though hopefully not to the extent of being fundamentally incorrect), is some of what the researchers found, and what they think we can do about it.

Card verification

An EMV, or Chip-and-PIN, card transaction (in our example, a customer wants to get or spend £150) includes a verification phase that works roughly as described below.

The idea here is to ensure, as the paper puts it, that the customer’s card is “alive, present and engaged in the transaction.”

The card authenticates itself by calculating a cryptographic checksum of the details of the transaction, called an ARQC (Authorisation Request Cryptogram), notably including the amount, the currency, and the date.

You can’t compute the checksum yourself, because it needs a secret encryption key, stored securely in a tamper-proof part of the card. (The terminal doesn’t know the key. Only the card and the issuer do, and this attack presumes that the secret key really is secret.)

You can’t re-use a checksum sniffed from a previous transaction because the card contains a counter, known as the ATC (Application Transaction Counter); the ATC is incremented after each transaction, and is included in the checksum.

And you can’t trick the card into generating a checksum for some potential future transaction by temporarily advancing the ATC, because the terminal sends the card a random salt, or nonce, known as the Unpredictable Number (UN), as part of each transaction.

Each UN is used once only, can’t be guessed, and is included in the checksum.
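
To make the moving parts a little more concrete, here’s a minimal Python sketch of the sort of computation the card performs. The field names, the data layout and the use of HMAC-SHA256 are illustrative assumptions only; real EMV cards use their own MAC algorithms over a precisely specified list of data elements.

```python
# Illustrative sketch only: real EMV cards use their own MAC construction and
# data encoding, but the principle is the same - the checksum (ARQC) binds the
# transaction details, the card's counter (ATC) and the terminal's nonce (UN).
import hmac
import hashlib

CARD_SECRET_KEY = b"known only to the card and the issuer"  # never leaves the chip

def compute_arqc(amount_minor: int, currency: str, date: str, atc: int, un: bytes) -> bytes:
    """Return a MAC over the transaction data, as the card would."""
    data = b"|".join([
        str(amount_minor).encode(),  # e.g. 15000 for GBP 150.00
        currency.encode(),           # e.g. "GBP"
        date.encode(),               # e.g. "2014-05-19"
        atc.to_bytes(2, "big"),      # transaction counter, bumped after each use
        un,                          # 4-byte nonce chosen by the terminal
    ])
    return hmac.new(CARD_SECRET_KEY, data, hashlib.sha256).digest()

# A fresh ATC and a fresh UN every time should make each checksum single-use.
arqc = compute_arqc(15000, "GBP", "2014-05-19", atc=42, un=bytes.fromhex("1a2b3c4d"))
```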

You can see where this is heading, can’t you?

The researchers found that Unpredictable Numbers sometimes weren’t.

And, as Naked Security readers will know all too well, many otherwise safe encryption systems have foundered on the rocks of randomlessness.


Bad random numbers

From the logs acquired in the Maltese case that started this all off, the researchers quoted a sequence of four transactions, together with the UNs recorded for each.

The UNs certainly don’t look random (though, admittedly, a sample size of four is rather small).

With additional log data at their disposal, the researchers formed the opinion that the UNs from this payment terminal were generated from a 15-bit counter, incrementing every few milliseconds, appended to a fixed 17-bit number.

That means the UNs repeat systematically every few minutes – a far cry indeed from “unpredictability.”
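
To see just how weak that is, here’s a toy model of that sort of generator in Python. The constants are invented for illustration; only the structure (a fixed 17-bit prefix with a wrapping 15-bit counter appended) follows the researchers’ description.

```python
# Toy model of the inferred generator: a fixed 17-bit value with a 15-bit
# counter appended. Constants are made up; only the structure matters.
def weak_un(ms_since_boot: int, fixed_17_bits: int = 0x1A2B3) -> int:
    counter_15_bits = (ms_since_boot // 4) % (1 << 15)  # ticks every ~4 ms, wraps every ~2 minutes
    return (fixed_17_bits << 15) | counter_15_bits

# Two transactions a couple of minutes apart can be handed the same "nonce":
print(hex(weak_un(10_000)))                   # some value with a constant prefix...
print(hex(weak_un(10_000 + 4 * (1 << 15))))   # ...and exactly the same value again
```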

The authors went on to find flawed random number generators in several ATMs, using various methods:

  • They “mined” real-world ATMs for UNs, for example by repeatedly requesting account balances on real accounts, using modified ATM cards with logging circuitry embedded. (With permission, of course.)
  • They extracted UNs from ATM logs acquired from banks, and analysed the UNs for randomness.
  • They reverse engineered two second-hand ATMs bought on eBay for £100 each.

Worse to come

Worse still, the researchers found that, even if a terminal generated strong random numbers, there was another common implementation flaw that made the quality of random numbers irrelevant.

The terminal doesn’t have access to the card’s secret key; nor should it, not least because the terminal and the issuer may belong to different companies, and even be in different countries.

Instead, verification of the ARQC from the card is done by the issuer, as described above.

So the terminal must transmit the necessary cryptographic inputs (T, UN and ARQC) to the issuer and await authorisation.
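
In the same hypothetical style as the card-side sketch above, the issuer’s check boils down to recomputing the MAC from whatever the terminal reports and comparing it with the ARQC. Again, the names and the MAC are illustrative assumptions, not the real EMV encoding.

```python
# Sketch of the issuer's side: it holds its own copy of the card key, so it
# recomputes the checksum from the data the terminal reports and compares.
# Note that the only UN it ever sees is the one the terminal *claims* it used.
import hmac
import hashlib

CARD_SECRET_KEY = b"known only to the card and the issuer"

def issuer_authorises(transaction_data: bytes, un: bytes, arqc: bytes) -> bool:
    """Approve if the reported ARQC matches the reported transaction data and UN."""
    expected = hmac.new(CARD_SECRET_KEY, transaction_data + b"|" + un, hashlib.sha256).digest()
    return hmac.compare_digest(expected, arqc)
```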

You can see where this is heading, can’t you?

Many terminals failed to provide any kind of authentication or integrity check for the UN sent to the issuer, making a Man-in-the-Middle (MitM) attack possible.

Loosely speaking, a crook in control of a terminal could:

  • Harvest transaction data from your card, including the UN sent to your card that resulted in the ARQC computed by your card.
  • Replay that transaction data later from a cloned card, ignoring the new UN sent by the terminal and simply re-using the old ARQC.
  • Replace the new UN with the old one in the data sent from the terminal to the issuer.

Thereafter, the logs would show that a new transaction had succeeded, complete with interactions between the terminal and a card, and between the terminal and the issuer.

In fact, the transaction would look identical to the original one that you did with the genuine card.

That’s because a correctly-validated ARQC is assumed to be proof that a genuine card containing the correct secret key was used – in other words, that your card must have been “alive, present and engaged in the transaction.”

But if you can guess when a UN is about to repeat, or force it to repeat by hacking the transmission from the terminal to the issuer, then you can re-use old transaction data and get the ARQC checksum correct without ever knowing the secret key from the victim’s card.
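
Putting the pieces together, here is a condensed sketch of that replay, in the same illustrative style as the snippets above. The encodings and the MAC are stand-ins; the point is simply that if nothing stops the terminal lying about the UN, an old (UN, ARQC) pair verifies just as happily as a fresh one.

```python
# Condensed sketch of the replay described above (illustrative encodings only).
import hmac
import hashlib

CARD_KEY = b"known only to the card and the issuer"

def mac(data: bytes, un: bytes) -> bytes:
    return hmac.new(CARD_KEY, data + b"|" + un, hashlib.sha256).digest()

def issuer_ok(data: bytes, un: bytes, arqc: bytes) -> bool:
    return hmac.compare_digest(mac(data, un), arqc)

# 1. Harvest: a crooked terminal records a genuine transaction.
data = b"GBP|150.00|2014-05-19|ATC=42"
old_un = bytes.fromhex("1a2b3c4d")
old_arqc = mac(data, old_un)                   # computed by the real card

# 2. Replay: later, a cloned card plays back old_arqc, the fresh UN issued by
#    the terminal is ignored, and old_un is reported to the issuer instead.
fresh_un = bytes.fromhex("99887766")           # never reaches the issuer
print(issuer_ok(data, old_un, old_arqc))       # True - the replay is approved
```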

The bottom line

The experts from Cambridge showed that duplicating a Chip-and-PIN transaction without having or using the original card is technically feasible in the real world.

That busts the myth that every Chip-and-PIN transaction can be traced unambiguously back to a specific card.

The experts suggest that this hole can be fixed in two ways:

1. In the long term, by closing the implementation loopholes, but this may take some time and meet financial resistance.

2. In the short term, by shifting the burden of proof on disputed transactions to the so-called acquirer (the organisation operating the terminal).

The research doesn’t mean we should stop using Chip-and-PIN, any more than we stopped using magstripe cards as soon as it was known how to clone them.

Chip-and-PIN simply isn’t infallible, and don’t let anyone tell you otherwise.
