By now, most of us have seen privacy notifications from popular web sites and services. These pop-ups appeared around the time that the General Data Protection Regulation (GDPR) went into effect, and they are intended to keep the service providers compliant with the rules of GDPR. The regulation requires that companies using your data are transparent about what they do with it and get your consent for each of these uses.
Facebook, Google and Microsoft are three tech companies that have been showing their users these pop-ups to ensure that they’re on the right side of European law. Now, privacy advocates have analysed these pop-ups and have reason to believe that the tech trio are playing subtle psychological tricks on users. They worry that these tech giants are guilty of using ‘dark patterns’ – design and language techniques that make it more likely that users will give up their privacy.
In a report called Deceived By Design, the Norwegian Consumer Council (Forbrukerrådet) calls out Facebook and Google for presenting their GDPR privacy options in manipulative ways that encourage users to give up their privacy. Microsoft is also guilty to a degree, although it performs better than the other two, the report said. Forbrukerrådet also made an accompanying video.
Tech companies use so-called dark patterns to do everything from making it difficult to close your account through to tricking you into clicking online ads (for examples, check out darkpatterns.org’s Hall of Shame).
In the case of GDPR privacy notifications, Facebook and Google used a combination of aggressive language and inappropriate default selections to keep users feeding them personal data, the report alleges.
A collection of privacy advocacy groups joined Forbrukerrådet in writing to the Chair of the European Data Protection Board, the EU body in charge of the application of GDPR, to bring the report to its attention. Privacy International, BEUC (an umbrella group of 43 European consumer organizations), ANEC (a group promoting European consumer rights in standardization) and Consumers International are all worried that tech companies are making intentional design choices that make users feel in control of their privacy while using psychological tricks to do the opposite. From the report:
When dark patterns are employed, agency is taken away from users by nudging them toward making certain choices. In our opinion, this means that the idea of giving consumers better control of their personal data is circumvented.
The report focuses on one of the key principles of GDPR, known as data protection by design and by default. This means that a service is configured from the outset to protect privacy and be transparent, making this protection the default option rather than something that the user must work to enable. A user’s privacy must be protected even if they never actively opt out of data collection options. As an example, the report states that the option boxes ticked by default when a user is choosing their privacy settings should be the most privacy-friendly ones.
Subverting data protection by default
Facebook’s GDPR pop-up failed the data protection by default test, according to the report. It forced users to select a data management settings option to turn off ads based on data from third parties, whereas simply hitting ‘accept and continue’ automatically turned that advertising delivery method on.
Facebook was equally flawed in its choices around facial recognition, which it recently reintroduced in Europe after a six-year hiatus due to privacy concerns. It turns on this technology by default unless users actively turn it off, making them go through four more clicks than those who just leave it as-is.
The report had specific comments about this practice of making users jump through hoops to select the most privacy-friendly option:
If the aim is to lead users in a certain direction, making the process toward the alternatives a long and arduous process can be an effective dark pattern.
Google fared slightly better here. While it forced users to access a privacy dashboard to manage their ad personalization settings, it turned off options to store location history, device information and voice activity by default, the report said.
The investigators also criticized Facebook for wording that strongly nudged users in a certain direction. If they selected ‘Manage Data Settings’ rather than simply leaving facial recognition on, Facebook’s messaging about the positive aspects of the technology – and the negative implications of turning it off – became more aggressive.
“If you keep face recognition turned off, we won’t be able to use this technology if a stranger uses your photo to impersonate you,” its GDPR pop-up messaging said. “If someone uses a screen reader, they won’t be told when you’re in a photo unless you’re tagged,” it went on.
The report argues that these messages imply that turning the technology off is somehow unethical. The message also contains no information on how else Facebook would use facial recognition technology.
Microsoft drew less heat from the investigators, who tabulated each tech provider’s transgressions in the report.
We would have liked to see Apple included, as the company has long differentiated itself on privacy, pointing out that it sells devices, not users’ data.
If nothing else, this report shows that reading and thinking about privacy options is important. Paying attention to these GDPR notifications and taking the time to think about what they’re asking is worthwhile, even if it means spending a few minutes before accessing your favourite service. If you’ve already shrugged and clicked ‘accept and continue’, you can still go in and change your privacy settings later. Just watch for those dark patterns: forewarned is forearmed.
I find the cookies agreement displayed at the bottom of this article to be subtly guilty of a sort of “dark pattern”… the agreement text appears in #999 on a #fff background, failing WCAG AA for color contrast. The action button fails for contrast as well. The agreement text is panned hard left, and the action button far right, effectively distancing the consequences from the action. While color contrast may not affect your typical user, it does make the cookie agreement less noticeable.
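For anyone who wants to check the numbers, here is a minimal sketch of the WCAG 2.x contrast calculation in TypeScript (the formula comes from WCAG’s published definitions of relative luminance and contrast ratio; the helper names are my own):

function channelToLinear(c8: number): number {
  // Convert one 8-bit sRGB channel to linear light, per the WCAG formula.
  const c = c8 / 255;
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

function relativeLuminance(hex: string): number {
  // Accepts "#999" or "#999999".
  let h = hex.replace("#", "");
  if (h.length === 3) h = h.split("").map((ch) => ch + ch).join("");
  const [r, g, b] = [0, 2, 4].map((i) => parseInt(h.slice(i, i + 2), 16));
  return 0.2126 * channelToLinear(r) +
         0.7152 * channelToLinear(g) +
         0.0722 * channelToLinear(b);
}

function contrastRatio(fg: string, bg: string): number {
  // Contrast ratio is (lighter + 0.05) / (darker + 0.05).
  const [lighter, darker] =
    [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
  return (lighter + 0.05) / (darker + 0.05);
}

console.log(contrastRatio("#999", "#fff").toFixed(2)); // prints 2.85

That works out to roughly 2.85:1 for #999 on #fff – well short of the 4.5:1 minimum that WCAG AA requires for normal-size text, so the agreement text really does fail AA.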
You are right! I didn’t even notice it, unlike the bright green bars on other sites.
I didn’t even notice there was an agreement – I use noscript in my default browser, and don’t typically turn on any scripts unless something breaks.
You know what, we think you’re right. For now we’ve changed the text to make it darker, and right-aligned it on screens wider than 992px, so there isn’t a gap between the button and the text.
The site’s pretty thoroughly cached so it might take a while for you to see it.
Personally, I think the term “Dark Patterns” is an inappropriate term. “Dark” implies “bad”. I suggest a neutral, not negatively-fueled term. Using the phrase “dark patterns” is resorting to the same word “tricks” that Facebook and Google are being accused of. Cheers, …Rowby.
I agree with you. “Dark Patterns” is one researcher’s cool, marketing-friendly jargon name for what I think is much more clearly expressed as “misleading wording and unclear website layout”.
I appreciate the point the Norwegian government is trying to make, but I found the report rather unappealingly unobjective – they should ditch the babbling jargon and write in plain, neutral English, especially when you consider that the report is aimed at an audience consisting mainly of non-native English speakers.
When I see blather like this…
“Dark patterns are considered ethically problematic, because they mislead users into making choices that are not in their interest, and deprive them of their agency. This is particularly problematic given the power imbalances and information asymmetries that already exist between many service providers and their users”
…I wonder why they didn’t just say…
“Misleading wording and unclear website layouts often trick users into making choices that play into the hands of the service provider. This is unacceptable.”
But why say that someone got tricked when you can write that they were deprived of their agency instead?
I believe that’s another characteristic of the self-appointed intellectual elite style of writing favored by academics. Never use a short word when a long word will do.
The freaking government needs to leave this young man the heck alone instead of trying all kinds of bullshit ways to come after his company. Damn. Lol, I need a job if you ever get a company in Colorado.
If you are an idiot, you should stay off the internet. Don’t click ‘I accept’ without reading what you are accepting – it is very simple. If you’re not sure what you are accepting, don’t click OK or ‘I accept’ – again, very simple. Instead, people want to cry about their own stupidity. After all, how can one be expected to actually read what they are accepting? It has all those words and stuff…….
Hey Bad Case, I don’t think that’s fair. In this case, sites are playing with emotional manipulation and using familiar prompts to cause people to autofill what they expect, in the hopes that they didn’t do a thorough read-through that one time. In the case of Facebook saying “Facial recognition is turned off”, a human may (rightfully) skim Facebook’s arguments for turning it on, then click “Accept and Continue” instead of “Review Settings”, not noticing the subtle bit added on right at the end that defines “Accept and Continue” as doing something that “Accept and Continue” does not usually do.
“If you are an idiot you should stay off the internet” So that leaves about 326 people to use the internet. Not going to be much content……
I find this kind of writing extremely self-serving, written to get ratings and raise eyebrows. It’s well written, but the entire premise of the article is to create controversy and score high on engagement. Of course businesses are going to present an offer that works in their favor and promotes their ‘agenda’. We as consumers, if we really care, need to read the terms and understand the options we select. I think that a good portion of us are not too worried, and we are fine with receiving intuitive, targeted content that benefits us.
Congrats on grabbing attention for your article with embellished conclusions and extremist choice of description.
For a “researcher” this author and hundreds of others sure do not understand the fundamental *business model* of social media companies. They exist ONLY to reap their *users’* (not their “customers’”!) data, use data mining and deep learning to build highly specific, detailed *user* profiles, and subsequently offer their SPONSORS (real *customers*) valuable, efficiently targeted and message-tailored advertising with optimal ROI. It’s a BRILLIANT business model and it’s why Facebook and Google are among the largest and most highly capitalized companies in the world. To retain their users, these companies offer valuable and attractive services (social interaction, photo sharing, incredible search and research tools, email, content creation…). Absolutely NO ONE is *forced* to use any of these services. But not agreeing to share one’s data is the equivalent of wanting to watch television or read news with no ads / commercials. It can be arranged, but not sustainably without subscription fees. And no private, non-fraudulent business should be forced to change its transparent business model at the whim of government.
Actually this is a good step toward something good, so we’re very, very thankful to Facebook and Google for thinking about social media security and safety. I think they will be successful.
Manipulating people with ‘Dark Patterns’ (oooh, spooky) is nothing new. Tech companies have been using the methods mentioned in the article for years now. I just had a conversation with a co-worker who had no idea that you could still set up your Windows machine with a local account, for the same sort of reasons that people sometimes unwittingly give more information to Google/Facebook etc. than they mean to. Microsoft intentionally made the ‘Create local account’ option as small and unnoticeable as possible, because linking a Microsoft account to your PC provides improved data collection which, while often used to help improve the operating system and make centralized management easier, can also be used for profiling users, among other things.

I think the most important thing to take away from stories like this is that these businesses are just that – businesses. They will operate within the law, paying ethics only as much attention as is warranted by how much it impacts profits. I don’t think it’s right or wrong; it simply is what it is. When all is said and done, it’s up to consumers to protect themselves and their rights. It’s easy to just click through on these sorts of things but, as Danny mentions at the end of the article, we need to take the time to understand what we’re agreeing to, and whether these companies need to know all of the info that they’re asking for.