Facebook shrugs as ‘emotional contagion’ research outrages its users

Over the weekend, a paper was published in a prestigious journal by Facebook researchers who, for one week, intentionally modulated the news feeds of Facebook users.

Not “passively monitored”, mind you; rather, actively manipulated.

Some saw a dash more positive items in their feeds; some received a more grim daily dose, as the researchers snipped out happy tidings, all of which led to the conclusion that yes, emotional states are contagious, and no, seeing friends post happy news does not necessarily make people want to jump off ledges.

The researchers subsequently discovered that just as emotions are contagious, so too is the outrage that spewed out of internetlandia at the idea of having been toyed with unawares.

Fury spread on Monday, coming from politicians, lawyers, and internet activists who ripped the experiment and its ethical standing to shreds.

Here’s the gist of the stick that stirred up this hornet’s nest:

For one week in January 2012, data scientists tampered with what almost 700,000 Facebook users saw when they logged on.

Some saw content that had mostly happy, positive words; some were served content that analysis showed was sadder than average.

The researchers found that at the end of the week, the manipulated Facebook users – or, as the New York Times has dubbed them, the “lab rats” – were themselves more likely to post using correspondingly extra-positive or extra-negative words.

The research’s conclusion:

We show, via a massive (N=689,003) experiment on Facebook, that emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. We provide experimental evidence that emotional contagion occurs without direct interaction between people (exposure to a friend expressing an emotion is sufficient), and in the complete absence of nonverbal cues.

The Atlantic has been tracking the ethical, legal, and philosophical response to the Facebook experiment.

Some have shrugged, calling the experiment no big deal.

But there are others who think it is, in fact, a very big deal:

Clay Johnson @cjoh · Jun 28
In the wake of both the Snowden stuff and the Cuba twitter stuff, the Facebook "transmission of anger" experiment is terrifying.

Facebook, for its part, reacted with the sensitivity of a rock. Apparently, it doesn’t see what all the fuss is about.

The company just doesn’t get what so many have pointed out: namely, testing whether users’ emotions can be screwed with via selective content curation is creepy.

Here's the statement Facebook put out in response:

This research was conducted for a single week in 2012 and none of the data used was associated with a specific person's Facebook account. We do research to improve our services and to make the content people see on Facebook as relevant and engaging as possible. A big part of this is understanding how people respond to different types of content, whether it's positive or negative in tone, news from friends, or information from pages they follow. We carefully consider what research we do and have a strong internal review process. There is no unnecessary collection of people's data in connection with these research initiatives and all data is stored securely.

Was Facebook’s experiment legal?

Well, Facebook’s data use policy states that users’ information will be used “for internal operations, including troubleshooting, data analysis, testing, research and service improvement,” meaning that any user can become a lab rat.

But that clause was only added in May 2012 – a whole four months after the experiment began – the Guardian reports.

Additionally, institutions such as universities must first run experiments past an ethics board for approval before they can experiment on people.

Cornell University released a statement on Monday saying its ethics board passed on reviewing the study because the part involving actual humans was done by Facebook, not by the Cornell researcher involved in the research.

That researcher did, however, help to design the study, analyze the results, and work with Facebook researchers to prepare the paper.

While Facebook didn’t apologise, one of its researchers did.

On Sunday, Adam D.I. Kramer wrote that the research results weren’t worth the tsunami of anxiety that the project whipped up.

Our goal was never to upset anyone. I can understand why some people have concerns about it, and my coauthors and I are very sorry for the way the paper described the research and any anxiety it caused. In hindsight, the research benefits of the paper may not have justified all of this anxiety.

Some of the study’s defenders are dismissing the outcry, saying that it hasn’t really caused harm to anyone.

But, as Scientific American’s Janet D. Stemwedel writes, we don’t have to judge a study to be as bad as “fill-in-the-blank horrific instance of human subjects research” to judge its researchers’ conduct as unethical.

Nor is it fair to point the finger at the users who submit to Facebook’s terms of service.

There’s a wide gap between the clear-cut language of informed consent documents – which are actually meant to be understood by humans – and end user license agreements, which are written by and for lawyers and which, Stemwedel notes, even lawyers themselves have a hard time understanding.

But the most damning argument against Facebook’s actions is the prospect that manipulating the emotions of people without their consent could cause tragic outcomes.

Is such a fear overblown? We can only hope.

We can, and must, also demand far more thought from Facebook researchers as they design experiments on unwitting users.

Image of comedy and tragedy courtesy of Shutterstock.