POPUP ALERT!!!!
Be it the iconic yellow “Yield” sign with an exclamation mark inside or, say, the red lock with a warning “X” meaning “broken!”, popup alerts warn people away from risky business, right?
You would hope so. When we’re talking specifically about SSL (Secure Sockets Layer) warnings, the consequences of ignoring alerts can be pretty dire: eavesdroppers modifying or reading emails or tweets, or attackers intercepting credit card or other sensitive data, for example.
Google’s security team has analysed its own SSL warning interface. The team tinkered with that interface and shipped a simpler, less confusing version in Chrome 37, in the hope of teaching users more secure behaviours.
The results of the redesign weren’t exactly what you’d call breathtaking, the team tells us.
In an Association for Computing Machinery (ACM) presentation and paper, the Chrome security team tells us that the redesign did in fact nudge up secure behaviour a bit, but there’s still plenty of work to be done.
The problem
Prior to Chrome 37, most Google Chrome users would not back out of SSL quicksand. In fact, a mere 30% of Chrome users adhered to SSL warnings.
(For whatever reason, Firefox users are far more compliant with the browser’s SSL warnings: 70% adhere to the alerts.)
Why would most – 70% – of Google Chrome users “proceed anyway” rather than get “back to safety”?
They’re jaded, alert-numbed or confused, so they simply ignore alerts, Google says, particularly when the alert is jargon-filled, isn’t succinct, isn’t specific, or is flat-out wrong.
There are multiple reasons why alerts can be wrong. For example, 20% of HTTP Strict Transport Security (HSTS) errors are caused by clocks being set wrong, Google says.
Other errors are produced when an employer has a deep packet inspection (DPI) box or content filter between a user and the internet, or when a client is missing a root certificate.
In such cases, a user will get an SSL warning that tells them nothing, and serves only to teach them to click “OK” on a dialogue, Google says.
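As an aside, here’s a minimal sketch (Python standard library only, with example.com standing in for any site) of the validity-window check that a skewed clock breaks. Browsers and TLS libraries compare the certificate’s notBefore/notAfter dates against the local clock during the handshake, which is why a clock that’s years out can make a perfectly good certificate look expired or not yet valid:

```python
# Minimal sketch of the clock-versus-certificate check (hostname is just an
# example). Browsers and TLS libraries do this during the handshake, which is
# why a badly wrong system clock produces certificate warnings on valid sites.
import socket, ssl, time

def validity_window_vs_clock(host="example.com", port=443):
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    not_before = ssl.cert_time_to_seconds(cert["notBefore"])
    not_after = ssl.cert_time_to_seconds(cert["notAfter"])
    now = time.time()  # if this clock is years off, the comparison below fails
    print("valid now" if not_before <= now <= not_after else "looks invalid")

validity_window_vs_clock()
```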
Here’s the old warning that everybody loved to ignore:
Confusing warnings only make users more insecure, and they normalise risky behaviour, Google says. So Google tried to make the interface easier to understand, and to restrict itself to giving warnings only when there’s a real risk.
What’s a real risk? Well, on one hand, many warnings are spurious, triggered by nothing more than misconfigured servers or firewalls.
On the other hand, when the warning is genuine, users who ignore it and proceed can face consequences as severe as physical harm or imprisonment.
From the paper:
Browsers display SSL warnings when the encryption is too weak or the server could not be authenticated. The connection is immediately halted, pending the user’s decision about the warning. In some cases, the problem indicates a real attack. Syrian Internet users saw SSL warnings when the Syrian Telecom Ministry allegedly attacked Facebook users. Similarly, SSL warnings alerted Chinese Internet users to attacks on Google and GitHub.
So how did Google seek to change user behaviour?
Google set out to explain SSL dangers in alerts, with an eye to conveying:
- the threat source (the attacker is on the network, as opposed to being on a malicious site, for example),
- the specific data that’s at risk (“passwords, messages, or credit cards”, for example), and
- an emphasis on errors at well-regarded sites, such as bank sites.
The goal was to strip the alerts of jargon, to hit a sixth-grade reading level, to be as brief as possible, to be specific about the risk, and to provide enough information.
To the Google team’s surprise, these text tweaks didn’t affect user behaviour, regardless of how simple, specific, or non-technical they made the wording.
They managed to moderately improve users’ understanding of the risks, but not adherence to the “get me out of here!” behaviour they were after: users kept walking straight into the quicksand during testing.
OK, so what did work?
The one thing that did work was to change the design elements of the warning.
Google used what’s called “opinionated” design, which relies on visual cues to promote the choice that designers think is the safest action.
Here are some of the visual elements of that opinionated design, which is the one Google opted for at the end of testing:
- The safe button is a bright blue colour that stands out against the background: the same blue that Google uses in other properties for primary actions. In contrast, the unsafe choice is a dark grey text link.
- The unsafe choice is hidden, forcing users to jump through hoops to do something unsafe. The Google Chrome malware warning hides the “proceed” button behind an “Advanced” link. For what it’s worth, Firefox users actually have to click four times to proceed through Mozilla Firefox SSL warnings. That one isn’t a complete win: Google notes that this element increases the annoyance factor, given how hard users have to work to ignore false positives.
The results were dramatic: the opinionated design of the new SSL warning nearly doubled user compliance.
Adherence rates went from 31% to 58% in a controlled field experiment, and from 37% to 62% in the field following the release of the new warning.
Still, too many users don’t get it, Google says, and having users understand is even more important than having them mindlessly comply:
Unfortunately, comprehension rates remain lower than desired for all of the SSL warning texts that we tested. This is disappointing, as we view comprehension as more important than adherence. ... We attribute the low comprehension rates to the difficulty of creating an SSL warning that is simultaneously brief, non-technical, simple, and specific.
Readers, where do alerts fail you?
Image of warning courtesy of Shutterstock.
It’s not about design etc., it’s about doubling the number of steps to proceed: earlier you could just go with one click, but now you need two clicks. That’s it.
As a web developer I’d be very annoyed if I had to click more than once to finally get to the “continue” button.
Aside from that, users who have no clue what they’re doing and just want to go back would probably, in 99.9% of all cases, click the blue button rather than the grey text link, and even after that it’s probably another 99% that would click “back to safety” rather than “continue”.
I’m a developer too, and I agree that the new way is the correct one. Proceeding with only one click, for someone who doesn’t understand the risks, is very dangerous!
If someone intercepts the connection and replaces the original certificate (for example, by setting up a fake wireless hotspot near a place where many people are trying to access the internet), the victims will not know that they are being spied on!
Good job, Google!
Yeah, it’s an operation to cash in on dummy clicks at the expense of users’ frustration, and it makes it hard to learn when people install Apache software to achieve this. Shame.
The warnings often aren’t very actionable. You usually go to a site to perform a specific task, and that task cannot be completed by ‘going back to safety’.
I think it would help if users could take an action (like leaving a message or fixing the clock) that would give them the feeling they’ve advanced the task and can, for example, come back in a week when the server is correctly configured.
In practice, this is not really possible, but I think this is the only way to solve this problem.
Yeah, that would be a great feature. I’ve developed several websites, all on GitHub, and obviously on a custom domain, but Chrome gives warnings if anyone goes on the HTTPS version of the site. There used to be a feedback link on the page, but they’ve removed it.
I only ever see this message when I log into our firewall, as we have a self signed certificate. Of course, I always click through…
Perhaps the most effective page would be one with a prominent click-through button labeled, “Checkout now” with the typical credit card and PayPal logos underneath it. Clicking on the button would display a message tersely explaining how there’s a good chance they’ll be hosed if they don’t stop now!
100% of the security warnings I get are because I don’t have a certificate authority signing my personal site’s cert, or because the hostname for another website is off by a character.
It’d be nice if the warnings showed up only for sites I care about, like my bank account.
Hostnames that are off by a character are common among phishing sites, so that’s still worth reporting.
It’s nice to do this, but Chrom(e/ium) devs should also drop RC4 support, which has reportedly been useless for quite some time now…
I’m sure that a large portion of these stats comes from people browsing to internal servers with self-signed certs. A bigger emphasis needs to be placed on ensuring that companies, regardless of size, have a functional PKI system. I try to issue certs for the servers I’m responsible for, but 99% of companies I’ve consulted for completely ignore that aspect, opting to leave the self-signed certs in place and click through.
PKI is very expensive and complex and would be overkill for our organisation and many others. It would prevent click-throughs, sure, but who besides IT would be clicking through to internal servers and devices anyway? You’d hope that IT would take SSL security warnings seriously, even if the end-users don’t.
For enterprise, I hear there’s justification for that kind of investment, but for small business, I need a better reason for spending several thousand pounds.
PKI doesn’t have to be expensive and complex. A small business just needs to set up a single root certificate that it uses to sign everything: sign all the internal services with that certificate, and provision the root certificate to all desktops and desktop browsers.
The big problem tends to be businesses that don’t actually practice IT security at all, and provision desktop computers with the operator as the administrator, with no central software/certificate provisioning source. As these companies often don’t even have an IT manager, let alone an IT department, nobody is aware of how to use the tools they have to secure the information that keeps them in business. So they hire an outside contractor to set up/secure the essentials as a single event, and ignore the rest.
As for a better reason: avoiding being hit by phishing attacks, ransomware, BSA license audits, and other IT events that could cripple your ability to do business are all better reasons.
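To make the suggestion above concrete, here’s a minimal sketch of the client side, using a hypothetical root certificate file (internal-root-ca.pem) and a hypothetical intranet hostname. A client that explicitly trusts the company’s own root gets a clean handshake instead of a warning to click through:

```python
# Minimal sketch, with hypothetical file and host names: trust the company's
# own root certificate explicitly instead of clicking through warnings.
import ssl
import urllib.request

ctx = ssl.create_default_context(cafile="internal-root-ca.pem")  # hypothetical path
with urllib.request.urlopen("https://intranet.example.local/", context=ctx) as resp:
    print(resp.status, resp.reason)
```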
Investing in a public key infrastructure doesn’t mean purchasing a signed key from a public certificate authority (although these days, you can get those for free) — it just means setting up a chain of trust and using it across all your IT assets. It costs a bit of time to set it up, and a bit of inconvenience when you want to change something, but that’s about it.
“It costs a bit of time to set it up, and a bit of inconvenience when you want to change something, but that’s about it…”
So that’s the downside to the company of 15 staff for which I am de-facto IT Manager. What’s the up-side? What’s in it for them? Why should they bother?
One advantage of having your own web servers (including your web filter, if you have one) explicitly trusted by your 15 staff’s browsers is that they won’t keep getting security warnings that they learn (need) to ignore…
Firefox’s security warning bypass process is an acceptable compromise. The only problem is the “Permanently store this exception” check box which is always on by default. If, for example, you want to temporarily bypass the warnings for a trusted website until the server misconfiguration is fixed, you’ll have to clear that check box every time you access the website. One slip of mind and you’ll have permanently stored an unwanted exception without even realizing it.
Interestingly, when accessing Naked Security at work I get SSL invalid certificate warnings from Firefox, but can load the site using Chrome.
FF warning content:
nakedsecurity.sophos.com uses an invalid security certificate. The certificate is not trusted because the issuer certificate is unknown. (Error code: sec_error_unknown_issuer)
So…who is the issuer of the Naked Security certificate you’re receiving? Is it *really* our cert (issuer “GlobalSign Extended Validation CA – SHA256 – G2”, trusted by default by Firefox), or are you getting a “man-in-the-middle” certificate sent by your work’s web filtering gateway?
I’ll guess the latter, and I’ll go on to guess that your web filter’s certificate has been added to Chrome but not to Firefox.
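For readers who want to check for themselves, here’s a minimal sketch of how to print the issuer of whatever certificate your connection actually receives, so you can tell a genuine certificate from a filtering gateway’s man-in-the-middle certificate. It uses the third-party cryptography package to parse the certificate, and the hostname is simply the one under discussion:

```python
# Minimal sketch, not an official tool: fetch whatever certificate the server
# presents (without validating it) and print its issuer, so a genuine cert can
# be told apart from a web filter's man-in-the-middle cert.
# Requires the third-party "cryptography" package.
import ssl
from cryptography import x509

def show_issuer(host="nakedsecurity.sophos.com", port=443):
    # get_server_certificate() fetches the peer certificate as PEM, unverified
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode("ascii"))
    print("Issuer: ", cert.issuer.rfc4514_string())
    print("Subject:", cert.subject.rfc4514_string())

show_issuer()
```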
For several days, about a week ago, I was getting that error in Firefox with every image posted in articles on this site. Firefox would show the broken image icon, I would click to show it and Firefox would give me the warning – only it referred to a WordPress link. It appears you were hosting your images somewhere else that had a problem, but the main site did not have the problem. I have noticed in the last few days the error has not reappeared, so whatever was going on seems to have been fixed.
Wilbur
Our content and images are hosted by WordPress.com VIP. The content should be served with URLs starting https://nakedsecurity.sophos.com/ with a TLS certificate issued for nakedsecurity.sophos.com by GlobalSign Extended Validation CA - SHA256 - G2.
The images come from the WordPress Giant Stash O’ Images at https://news-sophos.go-vip.net/wp-content/uploads/sites/2/ with a TLS certificate issued for *.files.wordpress.com by Go Daddy Secure Certificate Authority - G2.
I can’t see why either of those would cause a warning in Firefox – never has in my Firefox, anyway. Both those issuers are trusted by default by Firefox. (See Preferences | Advanced | Certificates | View Certificates | Authorities.)
I suppose this sort of thing, namely “that shouldn’t happen!” (yet it did), is one of the problems with the whole HTTPS infrastructure. Things go wrong, there’s no easily-found explanation, then they fix themselves, and nothing bad happens that you can see. So, next time, you’re a bit less inquisitive, and so on…
The problem here is that there is still a way to bypass the warning and get to the website. If the browsers completely blocked access, that would force the website owners to step up the security and fix the issues. Then users wouldn’t be constantly bombarded with alerts and would start to take notice of them.
That would be really helpful for IT pros who are setting up the certificates or testing things… /sarcasm
And trust every decision to a simple piece of software? As so many people here are saying, the ability to click through is necessary, especially in the IT field, where I’m working with internal sites hosted on local servers that it would be pointless to pay for a professional certificate for. To completely cut off access to any site that a free piece of software decides is suspicious is, at best, insulting to the user and, at worst, detrimental to the productivity of a person or business.
I get a warning from Google when I access the admin side of my server, located in the USA. Sometimes it’s the only way of doing things, as the backend of the eCommerce package doesn’t let me do ‘everything’.
Needless to say I ignore the warning, I need to perform the admin on my website stuff.
At that point, I want to be able to see the certificate and what appears to be broken. Is it just expired, is my company man-in-the-middling me, or is it something odder? IE doesn’t give the option of looking at certificates without visiting the site, so it’s no better. At my company, where MITM is deployed, Chromium/Chrome makes life too hard; Firefox is easier. Just avoid doing anything private like email or banking.
I really don’t like the notion of my browser teaching me behaviour. Hopefully there will be a plugin to disable this.
What’s with the certificate warning triggered by the HTTPS site www.whitehouse.gov?
Isn’t this exactly what Microsoft did with browser warnings a few years ago?
Sure, it has some design flaws, but the real problems are the advertisements that come with, and are associated with, the pages: advertisements that have nothing to do with logging into your account page or entering sensitive data. Every page I go to has that dang yellow lock instead of the green one. I suspect it’s very much the advertising, since I have a fairly new and clean laptop which has gone through endless Windows updates.
I click through about 99% of the time, since I’m going to my own sites with self-signed certs. I make no money from them, hence I’m not about to spend big bucks for a cert. The prices they charge for what amounts to updating a database are outrageous. They want 3-10 times as much for a cert as I pay to rent a VPS.
Chrome’s certificate handling design is a giant annoyance to web developers like myself and my team. The simple inability to A) ignore cert errors for the local LAN or B) permanently store certificate error exceptions is a pain in the ass.
I’m a technophobe with nothing but an Android phone. There are so many fake warnings that are themselves a trap that I never click “back to safety” – I just use the back button or swipe the page offscreen. At first I used to email formerly trusted sites to ask if they’d had problems, but they always say their certificate is fine and can’t understand why there’s an issue. For simple folks like me it’s all very confusing so now when I see a warning I just exit, fast.
If any of the people not responding are like me, they are probably not able to respond in the correct manner, because they are PC-illiterate enough not to be able to follow all the confusing directions. You folks use a lot of terms that people like me just have no clue about, or don’t know how to follow up on. Try writing your directions for, let’s say, a 3-year-old, or an 83-year-old. Might work better.
I’ve been to a LinuxMusicians.com forum that I frequent and got certificate warnings. Well, get this: my connection is always private and I will ignore it anyway, even though I know it’s dangerous and I could be susceptible to MITM attack…
I find their change to override the SSL option a pain in the backside. I’ve got one website that I regularly update, and the owner has set up the SSL wrong. I notice that on Chrome 53 on Windows 10 there doesn’t appear to be any way to override this problem (maybe I’m not looking hard enough): I can’t see any advanced button, and it doesn’t even give me a proper reason why it’s not allowed. If I try on my Linux laptop it’s fine and I have the option to proceed, but that’s running a very old version of Chrome. On my smartphone (Moto X Play) I also get the old type of warning, so I can ignore it there too. The only problem with the smartphone is that the website I’m updating uses Adobe Flash as an uploader for files (the person who designed this site must have been having a laugh using such decadent technology!), so I can’t even upload files from my smartphone (and, looking towards the end of the year, I possibly won’t be able to upload files via Chrome either). I just wonder how long the main webmaster will keep using Flash before admitting defeat!
Alerts fail me when sites I routinely visit (va.gov, my university’s Moodle site, and just about any other government-run site) kick back an SSL warning. As Google put it nicely, these false alerts only teach me to ignore them, and someday that’ll probably cost me. Thanks, Google! >:[
How about adding an option to ignore all the warnings and go on at your own risk, forever?
These alerts used to be present on sites with no options for credit card use, sites people had been using for years. When most of the sites users had come to trust were deemed untrustworthy, people stopped paying attention to the severity of the warning.
You forgot to say that Chrome never had the option to save fingerprints, like Firefox always did. So every Chrome victim has to go through this EVERY TIME, and remember the key, while Firefox users only ever see those warnings once (four clicks flat versus “two clicks every visit”). Also, Firefox’s way is far better security-wise, since nobody manually compares those hashes in Chrome… if they even let you see them, lol.
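For what it’s worth, here’s a minimal sketch of the fingerprint idea described in the comment above, with placeholder host and digest values: record the SHA-256 of the server’s certificate once, on a visit you trust, then compare it on later visits rather than squinting at a warning each time:

```python
# Minimal sketch of certificate fingerprint pinning (placeholder host and
# digest, not real values): fetch the server's DER-encoded certificate and
# compare its SHA-256 digest with one recorded on a previous, trusted visit.
import hashlib, socket, ssl

PINNED_SHA256 = "00" * 32  # placeholder: the digest you recorded earlier

def fingerprint_matches(host="self-signed.example.internal", port=443):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # cert is self-signed, so skip CA checks
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der_cert).hexdigest() == PINNED_SHA256
```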