Facebook has just published an article entitled Keeping You Safe from Scams and Spam. It’s all about improving security on its network.
In the past, Facebook has seemed curiously reluctant to do anything which might impede traffic.
After all, Facebook’s revenue doesn’t come from protecting you, the user. It comes from the traffic you generate whilst using the site.
So this latest announcement is a welcome sign, since some of the new security features prevent or actively discourage you from doing certain things on the Facebook network. Let’s hope that everyone at Facebook has accepted that reduced traffic from safer users will almost certainly give the company higher value in the long term.
But do Facebook’s new security features go far enough? Let’s look them over.
* Partnership with Web of Trust (WOT)
WOT is a Finnish company whose business is based around community site ratings. You tell WOT if you think a site is bad; WOT advises you as you browse what other people have said about the sites you visit.
Community block lists aren’t a new idea – they’ve been used against both email-borne spam and dodgy websites for years – and they aren’t perfect. Here’s what I said about them at the VB2006 conference in Montreal:
[C]ommunity-based block lists can help, and it is suggested that they can be very responsive if the community is large and widespread. (If just one person in the entire world reports a [dodgy] site, everyone else can benefit from this knowledge.)
But the [cybercriminals] can react nimbly, too. For example, using a network of botnet-infected PCs, it would be a simple matter to 'report' that a slew of legitimate sites were bogus. Correcting errors of this sort could take the law-abiding parts of the community a long time, and render the block list unusable until it is sorted out. Alternatively, the community might need to make it tougher to get a [site] added to the list, to resist false positives. This would render the service less responsive.
Another problem with a block list based on “crowd wisdom” is that it can be difficult for sites which were hacked and then cleaned up to get taken off the list. Users will willingly report bad sites, but are rarely prepared to affirm good ones.
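To make that trade-off concrete, here’s a minimal sketch of a naive threshold-based block list in Python. It isn’t how WOT, Facebook or any real service works; it simply shows why a low report threshold is responsive but easy to abuse, while a high one resists false positives at the cost of speed.

```python
# Minimal sketch of a naive threshold-based community block list.
# Illustrative only: real services weight reporters and add other checks.
from collections import defaultdict

class CommunityBlockList:
    def __init__(self, report_threshold):
        # Lower threshold = more responsive, but easier to abuse;
        # higher threshold = fewer false positives, but slower.
        self.report_threshold = report_threshold
        self.reports = defaultdict(set)   # site -> set of reporter IDs

    def report(self, site, reporter_id):
        self.reports[site].add(reporter_id)

    def is_blocked(self, site):
        return len(self.reports[site]) >= self.report_threshold

# With a threshold of 1, a single honest report blocks a scam site at
# once, but a single botnet node can also block a legitimate site.
blocklist = CommunityBlockList(report_threshold=1)
blocklist.report("scam.example", "honest-user")
blocklist.report("innocent.example", "botnet-node-1")
print(blocklist.is_blocked("scam.example"))      # True
print(blocklist.is_blocked("innocent.example"))  # True -- a false positive
```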
False positives, in fact, have already been a problem for Facebook’s own bad-link detector, which is also mentioned in the announcement. Naked Security has had its own articles blocked on Facebook simply for mentioning the name of a scam site.
In short, the effectiveness, accuracy and coverage of the WOT partnership remain to be evaluated. But I approve of the deal: it’s a step forward by Facebook. However, Facebook’s own bad-link detector could do with improvement.
* Clickjacking protection
Facebook introduced some anti-clickjacking measures a while ago, and they’re a good idea. If you try to Like a page known to be associated with acquiring Likes through clickjacking, Facebook won’t blindly accept the click. You’ll have to re-confirm it.
Again, I approve of this. But in my opinion, it’s not going far enough. It would be much better if Facebook popped up a confirmation dialog every time you Liked something, so that the “blind Likes” triggered by clickjacking would neither work nor go unnoticed. (Indeed, this popup dialog would be a great place for users to report clickjacks to the WOT community block list!)
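Neither the current behaviour nor my confirm-everything suggestion would be complicated to implement. Here’s a hypothetical sketch of the decision logic in Python, with made-up page names; it isn’t Facebook’s code, just an illustration of the two policies:

```python
# Hypothetical sketch of Like-confirmation logic. Page names are made up
# and this is not Facebook's actual code or data model.
flagged_for_clickjacking = {"free-ipad-survey", "shocking-video-page"}

def handle_like(page_id, user_confirmed, always_confirm=False):
    # always_confirm=True models the stricter "confirm every Like" policy
    # suggested above; the current behaviour only re-confirms flagged pages.
    needs_confirmation = always_confirm or page_id in flagged_for_clickjacking
    if needs_confirmation and not user_confirmed:
        return "show confirmation dialog"
    return "record Like"

print(handle_like("free-ipad-survey", user_confirmed=False))  # show confirmation dialog
print(handle_like("cat-photos", user_confirmed=False))        # record Like
```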
That’s not going to happen. Facebook wants Liking to be easy – really easy – as it helps to generate lots of traffic. A popup for every Like almost certainly wouldn’t get past Facebook’s business development managers. Not yet, at any rate. But if we all keep asking, perhaps they’ll see the value?
* Self-XSS
This is a geeky way of saying “Pasting JavaScript into your own address bar.”
We’ve already reported on the potential danger of doing this. When you put JavaScript in your address bar, you implicitly give it permission to run as if it were part of the page you just visited. That’s always a risky proposition. Facebook is adding protection against this behaviour.
Facebook also says it’s working with browser makers on this problem. That’s good.
Perhaps all browsers should simply disallow JavaScript in the address bar by default? It’s a useful feature, but the sort of user who genuinely needs it would surely be technically savvy enough to turn it on when needed.
* Login approvals
Facebook’s final announcement is what it describes as two-factor authentication (2FA). Facebook will optionally send you an SMS every time someone logs in from “a new or unrecognised device”. (Facebook doesn’t say how it defines “new”, or how it recognises devices.)
This is a useful step, and will make stolen Facebook passwords harder to abuse. In the past, you would only see Facebook’s “login from new or unrecognised device” warning the next time you used the site, by which time it might have been too late.
The new feature means that warnings about unauthorised access attempts will be pushed to you. Furthermore, the crooks won’t be able to log in, because they won’t have the magic code from the SMS which is needed to proceed.
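Facebook hasn’t said how login approvals are implemented behind the scenes, but the general technique is well known: generate a short-lived random code, deliver it out of band by SMS, and refuse the login until the user types it back in. Here’s a minimal sketch, with a placeholder send_sms() function standing in for a real SMS gateway:

```python
# Minimal sketch of SMS login approval. Not Facebook's code; send_sms()
# is a placeholder for a real SMS gateway.
import secrets
import time

CODE_LIFETIME = 5 * 60          # seconds a code stays valid (assumed)
pending_codes = {}              # user_id -> (code, expiry timestamp)

def send_sms(phone_number, message):
    print(f"SMS to {phone_number}: {message}")   # stand-in for a gateway call

def start_login_approval(user_id, phone_number):
    code = f"{secrets.randbelow(10**6):06d}"     # 6-digit random code
    pending_codes[user_id] = (code, time.time() + CODE_LIFETIME)
    send_sms(phone_number, f"Your login code is {code}")

def verify_login_approval(user_id, supplied_code):
    code, expiry = pending_codes.pop(user_id, (None, 0))  # codes are one-shot
    if code is None or time.time() > expiry:
        return False                             # no pending code, or expired
    return secrets.compare_digest(code, supplied_code)  # constant-time compare

start_login_approval("alice", "+15555550123")
print(verify_login_approval("alice", "000000"))  # almost certainly False
```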
It’s a pity Facebook isn’t offering an option to require 2FA every time you log in, not just from unrecognised devices. It would be even nicer if they added a token-based option (and they’d be welcome to charge a reasonable amount for the token) for the more security-conscious user.
A token would also allow users to enjoy the benefits of 2FA without sharing their mobile phone number with Facebook – something they might be unwilling to do after Facebook’s controversial flirtation, earlier this year, with letting app developers get at your address and phone number.
In summary
Where does this leave us?
Good work. I’m delighted that Facebook is getting more visibly involved in boosting the security of its users. But there’s still a long way to go.
In particular, this latest announcement doesn’t address any of the issues in Naked Security’s recent Open Letter to Facebook. Those issues represent more general problems which still need attention: Privacy by default, Vetted app developers, and HTTPS for everything.
(If you use Facebook and want to learn more about spam, malware, scams and other threats, you should join the Sophos Facebook page where we have a thriving community of over 80,000 people.)
Sorry Facebook, too little, way too late. I already deactivated my account and moved on.
Does that even work anymore? I've disabled my account, but if a hacker has access to my password, they can re-activate my FB account and view all of my past information…
I'm actually very happy with the plan for two-factor authentication using SMS (although I would have preferred to receive the security code by email).
BUT – I've been trying to set it up all day and the system seems to refuse to send me the setup code…
Using email for two-factor authentication is a bad idea. You probably run your web browser and your email client on the same PC. In fact, anyone using webmail uses _exactly the same client software_ – the browser – to access both Facebook and email.
Having the two authentication factors so similar makes it _much_ more likely that if the crooks end up with your Facebook login, they'll also be able to access your email, grab the one-time security code, and delete it so you never know.
For example, if they get your Facebook password because of a keylogger, they'll get your Gmail password as well. Suddenly your 2FA is back to 1FA, and you have a false sense of security.
The second factor of authentication in any 2FA implementation should ideally use completely different channels of communication, different technologies, and separate hardware and software, from the first.
Tokens offer that separation. SMS does too, to a lesser but generally satisfactory extent. That is, unless you are using Facebook (or doing your internet banking, or whatever it is) on the very smartphone that receives the SMSes, in which case it really isn't 2FA either.
Thanks for the reply – point taken.
As Graham Cluley says in the linked article explaining 2FA: "I, for one, won't be handing over my mobile phone number to Facebook in exchange for this two-factor authentication system." They can have my email. That's enough, until they can be shown to treat privacy matters seriously.
A couple of notes regarding Web of Trust:
1. The WOT rating system isn't a simple community-based block list, it's a meritocratic reputation rating tool where ratings aren't considered equally reliable. This, among other precautions, makes the rating system very resistant to spamming, including more sophisticated attacks conducted using botnets.
As for the comments from Sophos' Chester Wisniewski in the Financial Times, it would have been nice if he had familiarized himself with the rating system or contacted us for further information before concluding that "pretending to be 10m computers" would be a viable method for manipulating ratings. In our experience, these types of attacks typically create a highly unusual activity pattern that is easy to detect.
2. The ratings in WOT lose weight over time, which means a website's reputation can more easily recover from malware or phishing incidents, for example. We also have a process for site owners to request reviews should they feel their site has been unfairly rated due to a problem that has been resolved.
Note that unlike ordinary block lists, which only determine whether a site is currently infected, a reputation rating may take longer to recover if the site's users no longer find it trustworthy. If a website is compromised several times in a short period, you may not feel as confident about sharing your credit card information with it any more.
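To illustrate the general idea, here is a generic sketch of reliability-weighted, time-decaying ratings. The formula and parameters are invented for illustration; they are not WOT's actual algorithm or numbers:

```python
# Generic sketch of a reliability-weighted, time-decaying reputation
# score. Invented for illustration; not WOT's actual algorithm.
import time

HALF_LIFE = 90 * 24 * 3600   # assumption: a rating loses half its weight in 90 days

def reputation(ratings, now=None):
    """ratings: list of (score 0-100, rater_reliability 0-1, timestamp)."""
    now = now if now is not None else time.time()
    weighted_sum = total_weight = 0.0
    for score, reliability, timestamp in ratings:
        decay = 0.5 ** ((now - timestamp) / HALF_LIFE)  # older ratings count less
        weight = reliability * decay
        weighted_sum += score * weight
        total_weight += weight
    return weighted_sum / total_weight if total_weight else None

# An old rating from a low-reliability rater counts for little next to a
# fresh rating from a trusted one, so a cleaned-up site can recover.
now = time.time()
print(reputation([(10, 0.2, now - 300 * 24 * 3600),   # stale, low reliability
                  (90, 0.9, now - 1 * 24 * 3600)]))   # fresh, high reliability
```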
Something many traditional computer security companies are unable to see is that the real value in reputation ratings comes from user experiences, not from trying to compete with automated tools in detecting malware. Other people can warn you about scams that every automated security tool in the world considers perfectly safe.
I'm not familiar with the way the WOT rating system works; however, I have a few comments:
1. With regard to the statement that "pretending to be 10m computers" would be a viable method for manipulating ratings:
I believe that using CAPTCHA to prevent an automated process from reviewing a website can help alleviate the issue highlighted above. I'm assuming, of course, that the botnet would be unable to thwart a carefully implemented CAPTCHA, but it is one of the few ways to confirm that a human is the one issuing the command to review or rate the website, whether positively (boosting the rating of malware-laden sites) or negatively (impacting "good" sites).
It is also the case that a single human user or a group of users can, using their own legitimate accounts or fake accounts, falsely report a web site through a manual process as well. Is there any way in which WOT can flag users as being malicious and prevent them from using the service any longer? Also, is a user's review of a site considered confidential data similar to a voting process?
> I'm not familiar with the way the WOT rating system works
then how could you intelligently criticize it?
> manipulating ratings http://www.mywot.com/wiki/FAQ#Reputations_are_eas…
> using CAPTCHA http://www.mywot.com/wiki/Comment_restrictions
> It is also the case that a single human user or a group of users can, using their own legitimate accounts or fake accounts, falsely report a web site through a manual process as well.
WOT is a Meritocracy, not all users have the same rating reliability: http://www.mywot.com/wiki/Rating_reliability
Many have tried exactly what you're suggesting, and failed.
> Is there any way in which WOT can flag users as being malicious and prevent them from using the service any longer?
When WOT recognizes an abusing user, their rating reliability is nullified: their ratings carry no weight and are effectively useless, and users are not told what their reliability is. Also, the Community is very good at reporting users suspected of foul play / spamming.
> Also, is a user's review of a site considered confidential data similar to a voting process?
Ratings or testimonies are cast by secret ballot. Comments divulge a user's name but do not reveal how they rated (voted). This is not the place to learn about WOT. I invite you to try the add-on, register a user account and participate on the Forums, but please review the information available on WOT's Wiki first.
It’s great that Facebook is strengthening security by using two-factor authentication. People share so much personal information on Facebook that relying on a single layer of password protection is simply not enough. However, sending a code by SMS text message is not very secure because they are sent in clear text. If the user were to lose their phone or have it stolen, anybody could read that text message and fraudulently authenticate.
More websites need to use two-factor authentication like Facebook is doing, but a more secure and easier-to-use approach is to send an image-based authentication challenge to the user’s phone, like Confident Technologies provides: http://bit.ly/dMNzB5. A grid of pictures is displayed on the user’s smartphone and to authenticate, the user must correctly identify the pictures that fit their pre-chosen, secret categories. Even if someone else had possession of your phone, they wouldn’t be able to authenticate because they wouldn’t know your secret picture categories.
Nice post, Duck.
It's difficult to deal with the issue of people accessing Facebook from pwned computers. But Facebook is so widely used that it could have a significant impact in helping reduce the number of pwned computers if it helped to educate its members.
Suppose FB created honeypots that gathered lists of IPs used for questionable activity within the past 72 hours, alerted users who logged in from one of those IPs, and gave them information to help them determine whether their own computer is compromised and what to do about it?
Also, a significant number of accounts are compromised not because the password was stolen, but because it was guessed. (You may not be allowed too many chances to guess the password for a particular username, but you have lots of chances to guess the username for a common password.) What if, when someone chooses an FB password, they get a message like "This password has also been chosen by x other users"? Many people honestly think "password" is a clever password. If they knew how easy some passwords are to guess, and even better, if young users got into a competition with their friends to see how unique their passwords could be, it would have a ripple effect, improving security on other password-protected sites.
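A rough sketch of how such a popularity count could be kept without storing anyone's plaintext password: use a site-wide keyed hash (separate from the salted per-user hashes used for actual logins) purely to count how many accounts share each password. Everything below is hypothetical; it is not something Facebook has announced.

```python
# Hypothetical sketch of a "chosen by x other users" counter. The keyed
# hash, messages and secret are all invented for illustration.
import hmac
import hashlib
from collections import Counter

SITE_SECRET = b"replace-with-a-real-secret-key"
popularity = Counter()   # keyed hash of password -> number of users using it

def _popularity_key(password):
    # Deterministic keyed hash: the same password maps to the same key for
    # every user, unlike the salted per-user hashes used to check logins.
    return hmac.new(SITE_SECRET, password.encode(), hashlib.sha256).hexdigest()

def register_password(password):
    key = _popularity_key(password)
    count = popularity[key]
    popularity[key] += 1
    if count:
        return f"This password has also been chosen by {count} other user(s)."
    return "No other user has chosen this password."

print(register_password("password"))   # first user: unique so far
print(register_password("password"))   # second user: warned it is shared
```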