There’s been much fiddling around with security warnings to see which versions work best: should they be passive, requiring nothing of users? (Not so good.) Active? (Better.) Where should the dialog boxes be positioned? How much text should they carry, and how much technical gnarliness?
Now, in a systematic attempt to determine what gets people to comply with warning messages, two researchers at the University of Cambridge’s Computer Laboratory have modeled their security warnings on scammers’ messages.
In their recently published paper, “Reading This May Harm Your Computer: The Psychology of Malware Warnings”, David Modic and Professor Ross Anderson describe their efforts to figure out which aspects of computer security warnings are effective.
The researchers assumed that crooks are doing something right, given that their messages are skillfully crafted to lure potential victims into clicking on bogus security messages that lead them to malware downloads.
From the paper:
[W]e based our warnings on some of the social psychological factors that have been shown to be effective when used by scammers. The factors which play a role in increasing potential victims' compliance with fraudulent requests also prove effective in warnings.
According to prior research into persuasion psychology, these factors influence decision making:
- Authority. People tend to comply with warnings if they think they’re coming from a trusted source.
- Social influence. People tend to comply if they think that’s what other members of their communities are doing.
- Risk preference. The researchers figured that giving “a concrete threat [that] clearly describes possible negative outcomes” would increase compliance more than a vague one.
The researchers wrote security warnings for five different conditions, with roughly equal numbers of participants assigned to each:
- Control Group: The researchers used real anti-malware warnings that are currently used in Google Chrome.
- Authority: “The site you were about to visit has been reported and confirmed by our security team to include malware.”
- Social Influence: “The site you were about to visit includes software that can damage your computer. The scammers operating this site have been known to operate on individuals from your local area. Some of your friends might have already been scammed. Please, do not continue to this site.”
- Concrete Threat: “The site you are about to visit has been confirmed to include software that poses a significant risk to you. It will try to infect your computer with malware designed to steal your bank account and credit card details in order to defraud you.”
- Vague Threat: “We have blocked your access to this page. It is possible the page contains software that may harm your computer. Please close this tab and continue elsewhere.”
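The study design above is a standard between-subjects experiment: each participant sees exactly one of the five warning texts. As a rough sketch of how such an even, randomized assignment might be done (the function and seed here are hypothetical, not the authors' actual procedure):

```python
import random
from collections import Counter

# The five warning conditions from the study (texts abridged).
CONDITIONS = [
    "control",           # Google Chrome's real anti-malware warning
    "authority",         # "...confirmed by our security team..."
    "social_influence",  # "...some of your friends might have already been scammed..."
    "concrete_threat",   # "...steal your bank account and credit card details..."
    "vague_threat",      # "...may harm your computer..."
]

def assign_conditions(n_participants, conditions, seed=0):
    """Assign participants as evenly as possible across conditions,
    then shuffle so assignment order carries no information."""
    reps = -(-n_participants // len(conditions))  # ceiling division
    assignment = (conditions * reps)[:n_participants]
    random.Random(seed).shuffle(assignment)
    return assignment

groups = assign_conditions(583, CONDITIONS)
print(Counter(groups))  # each condition gets 116 or 117 of the 583 participants
```

With 583 participants and five conditions, the groups cannot be exactly equal; the sketch leaves at most a one-person difference between any two groups.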
Anderson and Modic recruited 583 men and women through Amazon Mechanical Turk to take their survey.
Some of the findings:
- Respondents said they were more likely to click through if their friends or – even more so – their Facebook friends told them that it was safe. In spite of this, the power of social influence was actually “much less effective than it is fashionable to believe,” the researchers found.
- The warning messages that worked the best were clear and concrete – for example, messages that informed users that their computers would be infected with malware or that a malicious website would steal the user’s financial information.
Anderson and Modic advised software developers who create warnings to follow this advice:
- The text should include a clear and non-technical description of the possible negative outcome.
- The warning should be an informed, direct message given from a position of authority.
- The use of coercion (i.e., threatening people so that they feel like they have no option but to do as told) tends to be counterproductive, whereas persuasion (i.e., getting people to voluntarily change their beliefs or behaviour) tends to get better results.
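The three recommendations above can be illustrated with a small sketch that assembles a warning string. This is a hypothetical helper (the function name and wording are my own, not from the paper), showing one way to combine authority, a concrete non-technical outcome, and persuasion rather than coercion:

```python
def build_warning(threat: str, outcome: str) -> str:
    """Compose a warning following the paper's guidance:
    authoritative, concrete, non-technical, persuasive rather than coercive."""
    return (
        # Authority: a direct, informed statement from a trusted source.
        f"Our security team has confirmed that this site {threat}. "
        # Concrete outcome: plainly describe the possible negative consequence.
        f"If you continue, it {outcome}. "
        # Persuasion, not coercion: recommend, but leave the choice to the user.
        "We recommend that you do not continue to this site."
    )

msg = build_warning(
    "distributes malware",
    "will try to steal your bank account and credit card details",
)
print(msg)
```

Note that the message recommends rather than commands, in line with the finding that coercion tends to backfire while persuasion gets better results.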
In short, knowledge is power, the researchers said:
When individuals have a clear idea of what is happening and how much they are exposing themselves, they prefer to avoid potentially risky situations.
Dr Modic’s work was funded by Google and by the Engineering and Physical Sciences Research Council (EPSRC), United Kingdom.