Transcript – Sophos Security Chet Chat – Episode 131 – Jan 22, 2014

The Chet Chat is produced weekly in a quarter-hour format, and gives you an informative and entertaining take on the latest security news.

Original article: SSCC 131 – Mac malware, Starbucks security, Apple versus FTC and giant Korean breach [PODCAST]

Presenter: Chester Wisniewski [CW].

Guest: Paul Ducklin [PD].

Duration: 15’16”

Date: Jan 22, 2014

This transcript has been edited for clarity.


(Download to listen offline, or listen on Soundcloud.)


START OF PODCAST

[FX: MODEM SYNC NOISES]

CW. Welcome to Sophos Security Chet Chat, Episode 131, for 22 January 2014.

I’m Chester Wisniewski, and I’m here with Paul Ducklin. Welcome back, Paul.


PD. Hello, Chester.


CW. I was going to start out this week’s Chet Chat talking about Mac malware.

We haven’t really talked about Mac threats in quite a while – it’s been pretty quiet on the Mac front.

I saw you wrote up some details on a new one on Naked Security this week. What’s the scoop?


PD. This was one of those “undelivered courier item” scams – Windows users will probably be quite familiar with these: a courier company has tried to deliver something to you; the delivery was unsuccessful; let’s sort this out so that stuff can be delivered…because it’s quite important, obviously.

What would be the harm in opening what looks like a PDF?

Well, in this case, if you’re on a Windows computer, you get Windows malware, but if you’re on a Mac you get not a PDF file, but an application that has been given an icon that makes it look like a PDF.

That program is the malware – it goes and has a look on your computer; looks around for Microsoft Office files; zips them up and sends them off.

And, like any good bot or remote access Trojan, it also has a feature that makes it go out and get additional stuff. Which, of course, could be whatever the attackers want to do this morning. Or this afternoon.


CW. Right, so you and James Wyke from our Labs talked about botnets in a recent Techknow [podcast]; this particular one isn’t being used in the traditional “immediate monetary gain” way of CryptoLocker, or fake anti-virus, or turning you into a spambot.


PD. This is what most people would probably call a RAT, or a Remote Access Trojan, rather than a bot.

I see those as two sides of the same coin: a program that sits in the background and gives up control of your computer to some outsider.

Generally, to my mind, the difference is that a bot is usually more concerned with cybercrooks who want to use your computer to make money – through click fraud, spamming, stuff like that; whereas a RAT is more about people who want to have a look around on your computer, and actually find what interesting files you’ve got…and steal them!

Perhaps because they’re into identity theft; espionage; intellectual property theft; whatever it might be.


CW. Now, when you downloaded this malware, your Safari browser presented you some sort of warning, letting you know that you were getting an application.

I saw a study from Cambridge University talking about the efficacy of these browser warning messages, whether that be things like SSL certificates not matching the site correctly, or the Internet Explorer SmartScreen filter warning you about the file types you’re downloading, and whether that’s a commonly downloaded file, or if it looks suspicious.

What did these messages look like in Safari? Was it easy to tell that something bad might be happening?


PD. Chester, I thought so.

It’s easy to believe that what you’re about to open probably is a PDF, but the operating system comes up and says, “Hey! By the way, this file you just downloaded is in fact an application. Are you sure you want to open it?”

The big trick here, as far as the attackers are concerned, however, is that the application was digitally signed. This does bypass the warning that OS X would normally give you to say, “Hey! This file comes from an unidentified developer.”

As we see surprisingly frequently in the Windows world, crooks and attackers out there are quite adept at getting hold of digital signatures, by fair means or foul.
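
(Editor’s note: for readers who want to check this sort of thing for themselves, here is a minimal sketch of how you might inspect a downloaded app bundle’s signature on OS X. It simply shells out to Apple’s codesign and spctl command-line tools from Python; the app path is hypothetical, and this is an illustration rather than anything described in the podcast itself.)

```python
import subprocess

APP_PATH = "/path/to/Downloaded.app"  # hypothetical path to the item you just downloaded

def describe_signature(app_path: str) -> None:
    """Show what OS X knows about an app bundle's code signature."""
    # 'codesign -dvv' prints the signing identity and authority chain, if the bundle is signed.
    subprocess.run(["codesign", "-dvv", app_path], check=False)
    # 'spctl --assess' asks Gatekeeper whether it would allow the app to run, and why.
    subprocess.run(
        ["spctl", "--assess", "--verbose", "--type", "execute", app_path],
        check=False,
    )

if __name__ == "__main__":
    describe_signature(APP_PATH)
```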


CW. We’ve even seen, in some of the research that one of our colleagues, Gabor, has done, how legitimately signed apps that truly are from where they say they are – if they have vulnerabilities – can be leveraged to launch malware, as happened with, I think, the Nvidia Control Panel and some other applications that were involved in attacks last year.


PD. The other thing to bear in mind, of course, is that you should never confuse a digital signature with some kind of assessment of the program that got signed.

Ironically, as recently as December, Microsoft published a security blog post with a very uncompromising title: “Be a real security professional – keep your private keys private.”

Which pretty much says it all: if you don’t do that, and crooks get hold of your code signing keys, then they’re basically stealing your imprimatur. They are able to masquerade as you, trading on any legitimacy you may have earned.


CW. What interested me about the Cambridge report – one of the findings in the study was that people are more likely to click things or believe messages when they come from a friend or someone they believe has some sort of vested authority.

Of course, if you have something that says it’s coming from Microsoft, for example, or perhaps a close friend of yours on Facebook, then you’re more likely to believe that message or click links in that message.

So I guess if somebody were to steal your signing certificate, then in essence you’re allowing criminals to borrow your social capital.


PD. And, of course, you do bypass at least one security warning that would otherwise appear.

So in the case of the Mac malware we’ve been talking about, if it were not digitally signed then there would actually be a warning about an “unidentified developer.” You’d then have to right click and go, “Yes, I really do want to run this application.”

Everything would be much harder, and you’d probably be much less likely to do it.


CW. Well, the clarity of messaging was, I think, something that was talked about in the study. And it turns out clarity of messaging is costing Apple at least $32,500,000.

The FTC came to an agreement with Apple about purchases… I’m not a big iPad user, and I don’t buy things from the iTunes store, but when you authorise a payment for an in-game item and that type of thing, your iPad stays unlocked for a while and lets you continue to make purchases.

It turns out to be a pretty expensive mistake for Apple.


PD. Yes – apparently one parent had complained that her daughter managed to spend $2600 in the Tap Pet Hotel during this window of opportunity!

The FTC’s complaint, I think, was very well presented: the “Buy” button and the “Now enter your password to approve the purchase” prompt appear in two completely separate windows, and it’s not obvious that one relates to the other.

So the FTC’s complaint was that, from a workflow point of view, Apple would know that this App would be played by a kid, and that it would be the kid pressing the “Buy” button. Then the child would probably hand the device to one of his or her parents and say, “Mummy/Daddy, can you approve what I’ve just done?”

And it wasn’t really obvious, when you typed in your password, exactly what you were approving. It didn’t remind you, either, if you had turned on the convenience option, that once you’d approved one in-app purchase you didn’t need to put in your password again for 15 minutes.

Apple decided not to contest that, so they’re paying up.


CW. Which is good – it’s nice that they’re acknowledging it and making some changes to the workflow in iOS for App purchases.

One of the things that some of my friends have done, when they hand over a mobile device to a child, is not to tie those accounts – iTunes accounts, for example – to a credit card, but rather to put in a prepaid card, an Apple Store card, or whatever, ahead of time.

Say, “OK, here’s $20 that you can use to buy music, or Angry Birds expansions. That’s all.” You get your $20 for the month, and it’s not tied to a credit card. Once the money’s gone, it’s gone.

I think it teaches the children some good financial discipline, and it also prevents you from ending up in a situation where you’ve bought $2000 in virtual goods.

So that might be another option for people who want to manage these types of things – whether it’s Google Play or iTunes, it makes it a little safer if it’s not tied to a credit card.


PD. Clearly, many parents have been in a position where their children have spent money they didn’t expect, or more than they intended – to the point that the FTC has intervened.

Now, if that’s happening on this sort of scale, you can imagine how many accidents are happening with children who are messing around with iPads also used for work: sending tweets by mistake; deleting e-mails; accidentally locking things so that they can’t be used later.

If you’ve got an iPad that you use for work purposes, then you need to be very cautious about handing it over to your kids so they can play a game – whether or not you intend to allow them to make in-app purchases.


CW. Well, speaking of apps, we talked in the last Chet Chat about the insecurity of some of the mobile banking apps. It turns out that mobile banking isn’t the only place we do financial transactions.

Obviously, there’s iTunes – which we’ve just been talking about – but a lot of folks put their coffee money on their Starbucks App so they can pull out their phone when they’re at the cafe, and purchase a coffee.


PD. I even wrote a little limerick about the problem that Starbucks found itself in. It goes like this [DECLAMATORY VOICE]:

A coffee shop from the North West
Implied it was doing its best 
   To keep you secure, 
   But it managed to store
All its passwords in plaintext at rest.

And that’s exactly what happened.

The Starbucks App had some crash logging. But, of course, they were dumping stuff into that log file that they would never have thought of putting anywhere else – and that included your plaintext username and password.

So anyone who stole your phone could go in, dig out that file, and they’d know the password you’d chosen with Starbucks. Which, if you’d reused the password, might get them into other accounts – and, of course, meant that they could then just spend your money at Starbucks.

You might not lose $1,000,000, but writing plaintext passwords to disk is always a no-no; is never necessary; and…fortunately, Starbucks has now stopped that practice and issued an updated App.
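
(Editor’s note: the fix here is simply never to let credentials reach a log file in the first place. As a rough illustration – a minimal Python sketch, not drawn from the Starbucks App itself – the field names and the redact() helper below are purely hypothetical.)

```python
import json
import logging

# Hypothetical list of field names that must never appear in a log file.
SENSITIVE_KEYS = {"username", "password", "email", "auth_token"}

def redact(payload: dict) -> dict:
    """Return a copy of the payload with sensitive values masked."""
    return {
        key: "***REDACTED***" if key.lower() in SENSITIVE_KEYS else value
        for key, value in payload.items()
    }

def log_crash_context(logger: logging.Logger, context: dict) -> None:
    # Only the redacted copy ever reaches the log handler, so plaintext
    # credentials are never written to disk.
    logger.error("crash context: %s", json.dumps(redact(context)))

if __name__ == "__main__":
    logging.basicConfig(level=logging.ERROR)
    log_crash_context(
        logging.getLogger("crashlog"),
        {"username": "user@example.com", "password": "hunter2", "screen": "checkout"},
    )
```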


CW. Now, I have a bit of a public service announcement for PR people out there, because Starbucks seems to have fallen slightly into the category, with Snapchat, of the “non-apology” apology – one of those “We’re sorry that you feel harmed by this,” which is not really saying you’re sorry at all.

These activities are inexcusable. I don’t care whether it’s my Starbucks card, or whether it’s access to my Tamagotchi pet, or what it is. Storing passwords on the disk, as you say, is never acceptable, especially when organisations understand how much password reuse there is.

We saw, a few months back, Facebook looking at the Adobe passwords and disabling accounts because of password reuse. It isn’t exactly a secret that you might use the same password for your Starbucks App that you use for eleven other things on your phone.

It’s really inexcusable, and a genuine apology is worth its weight in gold when it comes to the press…instead of this nonsense that you’re sorry that we feel like we were harmed.


PD. Ironically, Starbucks could have said a lot less and meant a lot more in how they presented what went wrong and what they did. But, to be fair, it seems they have fixed the problem.

To all app developers out there: all the rest of your app can be compliant with PCI-DSS, with OWASP best practices…and then you go and keep information that might be useful for debugging, and give away the keys to the castle. Not a good idea.


CW. Let’s wrap up the Chet Chat with a final story that seems like a repeat again: the Korea Credit Bureau, a credit rating agency in South Korea, had an employee steal identity information – things like social security numbers and financials – on 20,000,000 South Koreans. Nearly half of the population of the entire country in one go!

And there was nothing suspicious whatsoever about accessing 20,000,000 records at once?


PD. Yes, it’s sort of like Wikileaks all over again, again, isn’t it?


CW. Yes.


PD. One contractor, if you don’t mind! Who retrieved 20,000,000 lines of data out of the database – oh, and wrote it to a USB key. And at no point did an alarm bell even give the slightest tinkle, apparently.

The idea that an organisation holding this sort of “dynamite data” would not have at least some kind of data loss prevention strategy in place – something that would warn that this sort of thing was happening, or at least log it so it could be tracked sooner – kind of beggars belief.


CW. Yes. It reminds me a bit of the Snowden incident, where apparently he sucked down every document in an entire SharePoint database – contractors were given super-admin rights because it was too inconvenient to give them granular rights to just the information they were supposed to have access to.

You’d hope that other similarly sensitive databases would notice someone hoovering up the whole thing. [LAUGHS.] I mean, that’s just so embarrassing!


PD. Yes – even if there’s some kind of warning that he’s allowed to override so he can carry on getting the data, it at least makes a permanent record that he decided to proceed.

That would prevent genuine mistakes – where, you know, he means to retrieve 2000 records, but he’s got a defective SQL query and it sucks down 200,000 records.

And it would also mean there would at least be a fighting chance that you’d realise that this guy was up to no good *around the time that he was up to no good*. Not a year or more later, when he’s actually sold the data on the underground market.
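
(Editor’s note: to make the idea concrete, here is a minimal sketch of the sort of check Paul describes – a hypothetical audit wrapper around a standard Python DB-API cursor that lets the query run, but writes a permanent audit record whenever a single query returns an unusually large number of rows. The threshold and logger names are assumptions, not details from the Korea Credit Bureau story.)

```python
import logging
from datetime import datetime, timezone

# Hypothetical threshold: any single query returning more rows than this gets flagged.
BULK_READ_THRESHOLD = 10_000

audit_log = logging.getLogger("audit")

def run_audited_query(cursor, sql: str, user: str) -> list:
    """Run a query, and leave a permanent audit record if the result set is unusually large."""
    cursor.execute(sql)
    rows = cursor.fetchall()
    if len(rows) > BULK_READ_THRESHOLD:
        # The point isn't to block legitimate work, but to make bulk extraction
        # visible at the time it happens, not a year later.
        audit_log.warning(
            "BULK READ: user=%s rows=%d at=%s sql=%r",
            user,
            len(rows),
            datetime.now(timezone.utc).isoformat(),
            sql,
        )
    return rows
```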


CW. Yes. Hopefully the listeners of the Chet Chat are a little more savvy, and when they hear stories in the news, they look at them and say, “Could this happen to my organisation? What am I doing to prevent it?”

That’s really the purpose of talking about these stories, more than just shaming the people that maybe made a poor decision. There *are* lessons to be learned here, and it *is* the reason we like to talk about these things.


PD. Yes, to lose 1,000,000 records could be a mistake; to lose 2,000,000 records is carelessness; to lose 20,000,000 is, ah…at or beyond the border of incompetence.


CW. Yes! Well, that concludes Sophos Security Chet Chat, Episode 131.

As always, for the latest security news, you can visit us at nakedsecurity.sophos.com, and for all of our podcasts, RSS feed and all of our audio you can go to soundcloud.com/sophossecurity.

Until next time, stay secure!

[FX: MODEM DISCONNECTS.]

END OF PODCAST

