Back in 2011, cryptography experts at the University of Cambridge in England were approached by a man from Malta whose bank refused to refund a series of disputed transactions made in Spain.
It seems that the bank presumed that the victim’s own Chip-and-PIN card must have been used, because that sort of card is as good as impossible to clone, so that was that.
To the Cambridgeshire crypto posse, including the redoubtable Ross Anderson, making that sort of inference would have been like throwing down a gauntlet.
History has shown that there are many ways that cryptographic systems fail, and it’s usually not down to the cryptography itself, but to how it was implemented.
So the Cantabrigians set out to answer the question, “Is it reasonable to assume the infallibility of Chip-and-PIN transactions?”
More importantly, they wondered, “Can we find an attack that, however complex it might be to mount, could be considered practicable for real-world cybercrooks?”
As you probably can tell from the headline above, the answers were “No” and “Yes.”
Chip-and-PIN not infallible
Chip-and-PIN is not infallible, and real-world attacks must be considered feasible.
At last, after giving the Chip-and-PIN community two years to study their findings, the quincumvirate has published a fascinating paper that is available to the public:
They presented the paper at the 2014 IEEE Symposium on Security and Privacy in San Jose, California, and I urge you to read it.
If nothing else, it provides yet another handy piece of expert commentary you can mention when you decide that you really do want to dig your heels in and refuse to hand over PII (personally identifiable information) to someone in a position of authority who keeps arguing that, “We take security really seriously, and all our systems are secure.”
Here, very greatly simplified (though hopefully not to the extent of being fundamentally incorrect), is some of what the researchers found, and what they think we can do about it.
Card verification
An EMV, or Chip-and-PIN, card transaction (in our example, a customer wants to get or spend £150) includes a verification phase something like this:
The idea here is to ensure, as the paper puts it, that the customer’s card is “alive, present and engaged in the transaction.”
The card authenticates itself by calculating a cryptographic checksum of the details of the transaction, called an ARQC, notably including the amount, the currency, and the date.
You can’t compute the checksum yourself, because it needs a secret encryption key, stored securely in a tamper-proof part of the card. (The terminal doesn’t know the key. Only the card and the issuer do, and this attack presumes that the secret key really is secret.)
You can’t re-use a checksum sniffed from a previous transaction because the card contains a counter, known as the ATC; the ATC is incremented after each transaction, and is included in the checksum.
And you can’t trick the card into generating a checksum for some potential future transaction by temporarily advancing the ATC, because the terminal sends the card a random salt, or nonce, known as the Unpredictable Number (UN), as part of each transaction.
Each UN is used once only, can’t be guessed, and is included in the checksum.
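The binding described above can be sketched in a few lines of code. This is an illustrative toy only: real EMV cards compute the ARQC with a 3DES-based MAC over ISO-formatted fields (not HMAC-SHA256), and every name and value below is invented for clarity.

```python
import hmac
import hashlib

# Invented key: in reality this lives in tamper-proof storage on the
# card, shared only with the issuer; the terminal never sees it.
CARD_SECRET_KEY = b"known-only-to-card-and-issuer"

def compute_arqc(amount, currency, date, atc, unpredictable_number):
    """MAC binding the transaction details, the card's transaction
    counter (ATC) and the terminal's nonce (UN). HMAC-SHA256 is a
    stand-in for the real 3DES-based EMV cryptogram."""
    data = f"{amount}|{currency}|{date}|{atc}|{unpredictable_number}".encode()
    return hmac.new(CARD_SECRET_KEY, data, hashlib.sha256).hexdigest()[:16]

# The card bumps its ATC after every transaction, so even identical
# transaction details produce a different checksum next time round:
arqc1 = compute_arqc(15000, "GBP", "2014-05-18", atc=41, unpredictable_number=0x5A33E501)
arqc2 = compute_arqc(15000, "GBP", "2014-05-18", atc=42, unpredictable_number=0x5A33E501)
assert arqc1 != arqc2
```

The point of folding in both the ATC and the UN is that neither replaying an old checksum nor pre-computing a future one should be possible, provided, of course, that the UN really is unpredictable.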
You can see where this is heading, can’t you?
The researchers found that Unpredictable Numbers sometimes weren’t.
And, as Naked Security readers will know all too well, many otherwise safe encryption systems have foundered on the rocks of randomlessness.
Other “problems with random” stories you might like:
- Anatomy of a cryptoglitch – iOS hotspot passphrases crackable in 50sec.
- Has HTTPS finally been cracked?
- Anatomy of a PRNG – visualising Cryptocat’s buggy generator.
- Android random number flaw implicated in Bitcoin thefts.
- Rudest man in Linuxdom rants about randomness.
Bad random numbers
From the logs acquired in the Maltese case that started this all off, the researchers quoted this sequence of transaction data:
The UNs certainly don’t look random (though, admittedly, a sample size of four is rather small).
With additional log data at their disposal, the researchers formed the opinion that the UNs from this payment terminal were generated from a 15-bit counter, incrementing every few milliseconds, appended to a fixed 17-bit number.
That means the UNs repeat systematically every few minutes – a far cry indeed from “unpredictability.”
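A generator of that shape is easy to model. The constant below is made up (the paper describes only the structure, not the terminal's actual fixed bits), but the repetition period follows directly from the 15-bit counter:

```python
# Sketch of the flawed generator the researchers inferred: a 15-bit
# counter, ticking every few milliseconds, appended to a fixed
# 17-bit terminal value. FIXED_17_BITS is an invented example.
FIXED_17_BITS = 0x1A2B3  # fits in 17 bits

def weak_un(tick):
    counter = tick % (1 << 15)            # wraps after 2**15 ticks
    return (FIXED_17_BITS << 15) | counter

# The "Unpredictable" Number repeats exactly every 2**15 ticks --
# at a few milliseconds per tick, that is only a few minutes:
assert weak_un(0) == weak_un(1 << 15)
```

An attacker who can observe a handful of UNs from such a terminal can work out the fixed bits and the counter phase, and so predict (or simply wait for) any UN they need.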
The authors went on to find flawed random number generators in several ATMs, using various methods:
- They “mined” real-world ATMs for UNs, for example by repeatedly requesting account balances on real accounts, using modified ATM cards with logging circuitry embedded. (With permission, of course.)
- They extracted UNs from ATM logs acquired from banks, and analysed the UNs for randomness.
- They reverse engineered two second-hand ATMs bought on eBay for £100 each.
Worse to come
Worse still, the researchers found that, even if a terminal generated strong random numbers, there was another common implementation flaw that made the quality of random numbers irrelevant.
The terminal doesn’t have access to the card’s secret key; nor should it, not least because the terminal and the issuer may belong to different companies, and even be in different countries.
Instead, verification of the ARQC from the card is done by the issuer, as shown in the diagram above.
So the terminal must transmit the necessary cryptographic inputs (T, UN and ARQC) to the issuer and await authorisation.
You can see where this is heading, can’t you?
Many terminals failed to provide any kind of authentication or integrity check for the UN sent to the issuer, making a Man-in-the-Middle (MitM) attack possible.
Loosely speaking, a crook in control of a terminal could:
- Harvest transaction data from your card, including the UN sent to your card that resulted in the ARQC computed by your card.
- Replay that transaction data later from a cloned card, ignoring the new UN sent by the terminal and simply re-using the old ARQC.
- Replace the new UN with the old one in the data sent from the terminal to the issuer.
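The three steps above can be modelled in miniature. As before, the MAC is an HMAC stand-in for the real 3DES-based ARQC, and all the names and values are invented:

```python
import hmac
import hashlib

# Toy shared key: in reality held only by the card and the issuer.
KEY = b"shared-by-card-and-issuer"

def mac(amount, atc, un):
    """Stand-in for the card's (and issuer's) ARQC computation."""
    return hmac.new(KEY, f"{amount}|{atc}|{un}".encode(), hashlib.sha256).hexdigest()

# 1. Harvest: the crooked terminal runs a real transaction, recording
#    the UN it fed the genuine card and the ARQC the card produced.
old_un, atc = 0x00000F1A, 41
harvested_arqc = mac(15000, atc, old_un)

# 2. Replay: later, from a cloned card, the crooked terminal ignores
#    the fresh UN and re-uses the old ARQC...
fresh_un = 0x7C0FFEE5
sent_un, sent_arqc = old_un, harvested_arqc  # 3. ...and swaps in the old UN.

# With no integrity check on the UN field, the issuer verifies the
# ARQC against the UN it was *told* was used -- and it checks out,
# without the card's secret key ever leaking:
assert mac(15000, atc, sent_un) == sent_arqc
```

Note that this sketch replays the same ATC as the harvested transaction, which is why, as discussed in the comments below the article, the crooks have a limited window before the genuine card transacts again.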
Thereafter, the logs would show that a new transaction had succeeded, complete with interactions between the terminal and a card, and between the terminal and the issuer.
In fact, the transaction would look identical to the original one that you did with the genuine card.
That’s because a correctly-validated ARQC is assumed to be proof that a genuine card containing the correct secret key was used – in other words, that your card must have been “alive, present and engaged in the transaction.”
But if you can guess when a UN is about to repeat, or force it to repeat by hacking the transmission from the terminal to the issuer, then you can re-use old transaction data and get the ARQC checksum correct without ever knowing the secret key from the victim’s card.
The bottom line
The experts from Cambridge showed that duplicating a Chip-and-PIN transaction without having or using the original card is technically feasible in the real world.
That busts the myth that every Chip-and-PIN transaction can be traced unambiguously back to a specific card.
The experts suggest that this hole can be fixed in two ways:
1. In the long term, by closing the implementation loopholes, but this may take some time and meet financial resistance.
2. In the short term, by shifting the burden of proof on disputed transactions to the so-called acquirer (the organisation operating the terminal).
The research doesn’t mean we should stop using Chip-and-PIN, any more than we stopped using magstripe cards as soon as it was known how to clone them.
Chip-and-PIN simply isn’t infallible, and don’t let anyone tell you otherwise.
Round icons of card, ATM and bank courtesy of Shutterstock.
Image of gold credit card courtesy of Shutterstock.
Thanks – and thanks for the link to the original paper.
In case you didn’t catch it yet, the transaction flow graphic has a “typo” in the greenish-blue box near the bottom. APC should read ARC.
Twice. APC s/b ARC in both the shaded box and the dotted line box above it.
Fixed the second one too 🙂
Actually…don’t tell anyone, but I found a third glitch in the same part of the diagram, where I put “APRC” instead of “ARPC.”
I hope that’s it. (Please let me know if it isn’t.)
By the way, if I struggled to get that extra-super-simplified diagram right – it looked right at first blush, since P and R look so similar, and it was *nearly* correct, just not actually correct – then just imagine how hard it is to implement the EMV stuff correctly. Four books and numerous supplementary volumes of specifications, documentation, corrections and so on, like this [filenames have been shortened for space]:
189 pages: EMV_v4.3_Book_1_ICC_to_Terminal_Iface
174 pages: EMV_v4.3_Book_2_Security_and_Key_Mgmt
230 pages: EMV_v4.3_Book_3_Application_Spec
154 pages: EMV_v4.3_Book_4_Other_Interfaces
104 pages: EMV_CPS_v1.1
2 pages: 2008 AN-39_CPA Xaction Logging Controls
2 pages: AN-40_CPA Perso of Duplicate Record Data
771 pages: EMV Common Payment Application Spec v1
42 pages: SU-56v2_CPA_Corrections_and_Changes
26 pages: SU-58v4_Editorial Errors in CPA
2 pages: SU-60v2_CPA Logging Data Elements Minimums
2 pages: SU-61_CPA Additional Check Table Error
2 pages: SU-62v2_CPA Perso of Log Entry with EMV-CPS
3 pages: SU-63v3_CPA Update of VLP Available Funds
3 pages: SU-64_CPA Security Limits Status Indicators
3 pages: SU-65_CPA Last Online Xaction Not Completed
Aaargh. Fixing it now. Thanks.
Hey guys,
Shouldn’t the blue box in the diagram be ARC, rather than APC? There’s no indication of what the APC abbreviation means in the text below, but there is an ARC.
Other than that, good post as always.
Sorted…see above 🙂
Hello,
Just don’t see how the same ARQC can be reused, as it should be computed using the ATC too.
Isn’t the issuer supposed to refuse identical ARQCs, as the ATC is supposed to be incremented by every EMV validation?
Supposing the ARQC generated by the card is retained by the crooks, don’t they have only a little time to reuse it before a legitimate EMV validation occurs, which invalidates all intermediate ATCs, and all ARQCs generated with those ATC numbers? Am I wrong about it?
Regards,
AB
I wouldn’t describe this attack as easy. (The authors of the paper carefully list the hoops that the crooks have to jump through, such as having the same transaction amount and currency, same date, and so on.)
So it’s not like keylogging your banking password ten years ago and then draining your account from another computer in another country for days afterwards.
But I think the point the researchers are making is that the crooks already know how to deploy point of sale malware over tens of thousands of terminals at a time and collect the data for re-use almost immediately; how to corrupt the manufacturing process of PoS devices to insert their own hardware modifications; how to have money mule accomplices standing by in dozens of different countries for cashout work at a moment’s notice; and so on.
http://nakedsecurity.sophos.com/target-admits-there-was-malware-on-our-point-of-sale-registers/
http://nakedsecurity.sophos.com/cybercrooks-can-buy-hacked-pos-device-and-money-laundering-bundle-for-2000/
http://nakedsecurity.sophos.com/casher-crew-from-global-cyberheist-busted-in-new-york/
In other words, it would be wise to assume that the crooks already have the sort of “infectious scale” experience needed to pull an attack like this off…
Check the paper – there are some thoughts on how the crooks might cash out from attacks of this sort, and how to prevent that.
This flaw might not sound easy to exploit, and it isn’t, but I betcha ten years ago most people would have laughed off as incredible a PoS attack involving the simultaneous infection of tens of thousands of cash registers – on the private network of a major retailer! – and their apparent systematic reinfection, day after day, for weeks.
And I betcha if you’d suggested, two years ago, that a significant proportion of the world’s web servers could be asked – millions of times each, if you wanted – to give you repeated glimpses at their most secret internal data, right out of memory, that you’d have been written off as “worrying about pointless theoretical problems.”
So, did the “man from Malta” get his money back – or is the bank still arguing that black is white (or vv) in the light of the above evidence?
I don’t know 🙂 The “evidence” in the paper might not actually be relevant to the Maltese case. (The authors kind of imply that it isn’t, by saying that “other insiders have suggested malware was to blame,” as though the loss had nothing to do with the Chip-and-PIN part of the equation in the end. I can imagine that if there were a settlement in the end, it would have been under conditions of silence by both sides.)
I mentioned it just because the authors describe it as an important spur to their research. It seems a poor sort of root cause analysis: “You are to blame because your card has a chip on it.”
Sort of like saying to a speed cop who just pulled you over for doing 80 in a 60 zone, “I couldn’t have been speeding because I’ve got a totally clean licence – look, no demerit points, ever!”
Proves what many have been saying for some years: C&P is not as safe as the banks would have us believe. That it is a complex operation to implement such a ‘scam’ does not mean it will not be, or is not being, tried.
There are many other instances where banks have initially refused to believe that a C&P card as issued to the account holder was not present when a suspicious transaction occurred. Some have subsequently reimbursed but others are still adamant that it “can’t happen” – even in the face of proof such as this.
Every transaction should photograph the user and match the image against a database of faces.
Nothing is infallible. Criminals will always find a way to circumvent technology that is why companies have to continually come up with new and improved security measures. One of the impossible things to get around is human implementation.
Why are you making up words to sound important? If it’s not in the OED it isn’t a real word.
“quincumvirate”? really?
Do you honestly find it hard to work out the meaning of words that follow a well-established pattern?
You know “million,” right (10^6), and “billion” (10^9 in the US flavour)? So you probably don’t need a dictionary to guess that a trillion is 10^12, and so on. You’ve probably never heard of an octillion but it’s a cool-sounding word, and if you encountered it, you could enjoy the coolness AND work out what it meant. Though it’s not a word you need a lot, so you’d have to excuse the dictionary for not having it. (Dictionaries don’t list every possible number, after all, because that would be silly. Try looking up 42. Not there!)
Ergo, if you know what a duumvirate is, and a triumvirate, then quincumvirate shouldn’t be too much of a struggle, eh?
And if you don’t know what a triumvirate is, then even if I’d said “the triumvirate plus the other two blokes” you’d still be lost.
(You’re allowed to stretch and play with language a bit. And in English, a language that has never been regulated by some kind of ill-advised Academy, praise the Lord, you’re allowed to stretch and play a lot. One reason why English has so many more words than any other.)
Personally, I find Paul Ducklin’s literary inventiveness to be one of the most interesting elements of his writing. In fact, I do it myself in my own writing, as much for my own amusement as for that of my readers. If the writer is fully engaged in his writing, his readers are likely to be too.
And what’s with the “sound important” thang? Can’t you differentiate between playing (as in “having fun”) and pseudo-erudition?
For my part, I disapprove of wordplay that obscures the writer’s meaning, but I’ve never found that to be the case with the regular NakedSecurity writers, who usually write to be understood and to inform, not to show how smart they are…or, more accurately, how smart they aren’t, which is what pseudo-eruditors usually manage to accomplish with those perceptive enough to see it.
Stated alternatively, lighten up, dude. It’s a friggin’ security blog, for cryin’ out loud. No one’s bucking for a Nobel prize in literature here.
And speaking of Nobel prizers, you might try reading some of the works of one of them who arguably had as great a command of the English language as anyone who ever lived — namely Winston S. Churchill. He knew how to have fun with the language, while still communicating effectively. And I think his response to a semantic quibbler (who took him to task for ending a sentence with a preposition) is appropriate here:
“That is the kind of arrant pedantry up with which I will not put.”
Kind words, Sir! I thank you.
Clearly I needed to find a criminal when a dodgy merchant terminal managed to corrupt my PIN at Heathrow as I was leaving the country, as the bank could not offer me any help to re-enable my card until I got back to the UK 2 weeks later (after much inconvenience).
But a couple of years ago, when I accidentally stumbled on how to lock every user out of one UK bank’s online banking, it took 15 months and reporting them to the regulator before they would even talk to me.
And when I discovered a data leakage, they insisted it was nothing to do with them because I discovered it through a link that was not from the bank.
I think that the approach of “we are right, our systems are secure” is a completely wrong approach to take to security. I work on the basis that any system I deal with is insecure and it is only a matter of time before it is compromised. By taking this approach, I am always looking for ways of improving security. I even offered these to the bank for free, but couldn’t find anyone to talk to.
Reminds me of an old paraphrase: “Enigma is NOT crackable!” Of course, the original was in German …. And, they said it despite overwhelming evidence to the contrary (as in, lots of U-Boats were being sunk while on “secret” missions).
The problem here is that the banks’ security people actually believe the cards are using 2-factor authentication (2FA), because there are two different “passwords”, if you will.
If you want 2FA, you need at least two “things”. A card and an independent passphrase would be 2FA. But, the key is INDEPENDENT. To be independent, it would have to follow a completely different path to validation, and yet somehow manage to validate the same transaction.
Something you are, something you have, something you know. For 2FA, you need 2 of those 3 (at least). And, it still won’t be uncrackable. But you’ll have the advantage that the crooks will go for the low-hanging fruit, so you’re safer.
Great Information. Thanks for sharing it with us.