Over the weekend, a paper was published in a prestigious journal by Facebook researchers who, for one week, intentionally modulated the news feeds of Facebook users.
Not “passively monitored”, mind you; rather, actively manipulated.
Some saw a dash more positive items in their feeds; some received a grimmer daily dose as the researchers snipped out happy tidings. It all led to the conclusion that yes, emotional states are contagious, and no, seeing friends post happy news does not necessarily make people want to jump off ledges.
The researchers subsequently also found out that just as emotions are contagious, so too is the outrage that spewed out of internetlandia at the idea of having been toyed with unawares.
Fury spread on Monday, coming from politicians, lawyers, and internet activists who ripped the experiment and its ethical standing to shreds.
Here’s the gist of the stick that stirred up this hornet’s nest:
For one week in January 2012, data scientists tampered with what almost 700,000 Facebook users saw when they logged on.
Some saw content that had mostly happy, positive words; some were served content that analysis showed was sadder than average.
The researchers found that at the end of the week, the manipulated Facebook users – or, as the New York Times has dubbed them, the “lab rats” – were themselves more likely to post using correspondingly extra-positive or extra-negative words.
The research’s conclusion:
We show, via a massive (N=689,003) experiment on Facebook, that emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. We provide experimental evidence that emotional contagion occurs without direct interaction between people (exposure to a friend expressing an emotion is sufficient), and in the complete absence of nonverbal cues.
The Atlantic has been tracking the ethical, legal, and philosophical response to the Facebook experiment.
Some have shrugged, calling the experiment no big deal.
But there are others who think it is, in fact, a very big deal:
Clay Johnson @cjoh · Jun 28
In the wake of both the Snowden stuff and the Cuba twitter stuff, the Facebook "transmission of anger" experiment is terrifying.
Facebook, for its part, reacted with the sensitivity of a rock. Apparently, it doesn’t see what all the fuss is about.
The company just doesn’t get what so many have pointed out: namely, testing whether users’ emotions can be screwed with via selective content curation is creepy.
This research was conducted for a single week in 2012 and none of the data used was associated with a specific person's Facebook account. We do research to improve our services and to make the content people see on Facebook as relevant and engaging as possible. A big part of this is understanding how people respond to different types of content, whether it's positive or negative in tone, news from friends, or information from pages they follow. We carefully consider what research we do and have a strong internal review process. There is no unnecessary collection of people's data in connection with these research initiatives and all data is stored securely.
Was Facebook’s experiment legal?
Well, Facebook’s data use policy states that users’ information will be used “for internal operations, including troubleshooting, data analysis, testing, research and service improvement,” meaning that any user can become a lab rat.
But that clause was only added in May 2012 – a whole four months after the experiment began – the Guardian reports.
Additionally, institutions such as universities must first run experiments past an ethics board for approval before they can experiment on people.
Cornell University released a statement on Monday saying its ethics board passed on reviewing the study because the part involving actual humans was done by Facebook, not by a Cornell researcher who was involved in the research.
The researcher did, however, help to design the study, analyze the research results and work with Facebook researchers to prepare the paper.
While Facebook didn’t apologise, one of its researchers did.
On Sunday, Adam D.I. Kramer wrote that the research results weren’t worth the tsunami of anxiety that the project whipped up.
Our goal was never to upset anyone. I can understand why some people have concerns about it, and my coauthors and I are very sorry for the way the paper described the research and any anxiety it caused. In hindsight, the research benefits of the paper may not have justified all of this anxiety.
Some of the study’s defenders are dismissing the outcry, saying that it hasn’t really caused harm to anyone.
But, as Scientific American’s Janet D. Stemwedel writes, we don’t have to judge a study to be as bad as “fill-in-the-blank horrific instance of human subjects research” to judge its researchers’ conduct as unethical.
Nor is it fair to point the finger at the users who submit to Facebook’s terms of service.
There’s a wide gap between the clearcut language of informed consent documents – which are actually meant to be understood by humans – and end user license agreements, which are written by and for lawyers and which, Stemwedel notes, even lawyers themselves have a hard time understanding.
But the most damning argument against Facebook’s actions is the prospect that manipulating the emotions of people without their consent could cause tragic outcomes.
Is such a fear overblown? We can only hope.
We can, and must, also demand far more thought from Facebook researchers as they design experiments on unwitting users.
Hey, companies tweak their product all the time. Facebook users need to remember that they are the product, not the customer.
I feel like this is a little beyond that. Most studies would warn you ahead of time and gain your consent to be monitored. They may not tell you what they plan to do to avoid any effect on the results, but you’re still aware that you are participating in something. Even if they chose not to do that, they should have notified participants AFTER the experiment concluded that their content had been manipulated in some way.
Nothing terrible seems to have happened as a result of this particular study, but imagine if it were slightly longer term, or if someone in an already fragile emotional state did something as a result of a study like this.
erm…imagine a food company adding something that might make the consumer more likely to buy more, as a test. Or a TV show adding subliminal advertising, as a test.
It’s probably harmless, but is it ethical? No. But then Facebook isn’t ethical; since day one Mark Z has thought all the users were idiots, and the company still treats them all with contempt. Unfortunately the average FaceBorg user has Stockholm syndrome.
It is surprising that the paper was even accepted by the journal since it did not adhere to the journal’s own policy. From the journal’s policy: “For experiments involving human participants, authors must also include a statement confirming that informed consent was obtained from all participants.”
We should all praise Facebook for publishing this research: we can now officially add “social media” to the list of effective ways of subliminally altering the mood of the masses, and recognise that the “free state” internet is the myth it always was.
If they had been sinister in intent, the research would not have seen the light of day; now we are forewarned and therefore forearmed. It may be just as unethical, but it does provide a reality check for the everyday user of the internet, one that needs to remain in the collective consciousness for years to come.
Gee… is anyone surprised by any of this? Anyone who trusts Facebook, Google, et al, is deluded. The time to dump social media, etc., was ages ago.
I had to laugh at this…Facebook is the last place I would look for or pay attention to anything posted from anyone besides my “friends”. Everyone knows that Facebook’s sole purpose is to make money, and if happy users click on more ads, then their profits go up.
You don’t understand the study, do you? All the content came from your friends. You were just shown different proportions of positive and negative posts depending on which experimental condition you were in.
I see a bigger problem being that if fb can manipulate your feed for this, then they can do it for whatever they want to influence the users to do, such as keeping posts hidden that are negative about certain politicians or parties or products. What if they hide every feed that says something positive about a rival company? Brainwashing in various forms has been around for a long time. This seems like another way to do it.
A few comments here:
1) Facebook is free and they can do what they want with it; all people can do is threaten never to use it again, but we all know that’s a lie.
2) This research serves an important function: it’s important to be able to recognize when you are being manipulated. Bad people try to manipulate the emotions of others, and it was only a matter of time before someone started doing it on a platform the size of Facebook. If we understand it and can recognize it when it happens, we may be better able to resist it.
3) Why is everyone outraged now? Nobody is irritated or annoyed or just plain angry anymore.
Thank you. That’s exactly it. Facebook is a free service. No one has any right to complain about ethics or behaviors or the “myth of the free state internet”… because you have no right to.
Period.
To those saying that Facebook is a free service, you are wrong. You may not be paying in cash, but you are paying by allowing yourself to be subjected to the mental manipulation called advertising. Just because it is a common business model doesn’t change the fact that you ARE paying.
Irrespective of how they get their money, no organisation has the right to do what they like with other people’s lives.
Worse, this experiment is a terrible precedent. Even if it could be proved that it caused no harm, Facebook have shown that they are happy to deliberately manipulate people. This sends a clear signal to all the less ethical people out there that it is OK to write apps, use Facebook data and so on to deliberately manipulate people for whatever ends they want, and that Facebook will not try to stop them.
Facebook needs to be told in unequivocal terms and very publicly that this is unacceptable behaviour and will not be tolerated. Maybe this class action, if it gets big enough, might be the way to do this.
I think it’s funny.
Let’s not hyperbolize what Facebook did. They used algorithms to determine whether posts contained negative words. This doesn’t mean your feed started looking like all your friends instantly became country singers (my wife done left me, my dog died, etc.).
It simply means that posts with words like “sad” or “mad” or “angry” showed up more often (or rather, the way the study was designed, that positive posts were removed). So a post in which the poster said he or she was angry about a recent Supreme Court decision might be more or less likely to show up. “U mad, bro?” might also be more likely to show up in the negative condition. “Sad is out of the World Cup.” And so on. Some of these probably weren’t even particularly negative in flavor; they just contained negative words.
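To make that concrete, here is a minimal sketch of the kind of word-list scoring involved, assuming a LIWC-style counting approach; the tiny word lists and the classify_post helper below are purely illustrative, not what the researchers actually ran:

    # Toy word-list sentiment scoring, in the spirit of LIWC-style counting.
    # The word lists and the classification rule are illustrative assumptions only.
    POSITIVE = {"happy", "glad", "love", "great", "win"}
    NEGATIVE = {"sad", "mad", "angry", "hate", "lose"}

    def classify_post(text):
        words = [w.strip(".,!?\"'") for w in text.lower().split()]
        pos = sum(w in POSITIVE for w in words)
        neg = sum(w in NEGATIVE for w in words)
        if pos and not neg:
            return "positive"
        if neg and not pos:
            return "negative"
        return "neutral"

    print(classify_post("U mad, bro?"))                   # negative
    print(classify_post("Sad is out of the World Cup."))  # negative, on the word alone

A counter like this only looks at surface words, which is exactly why a post can get flagged as “negative” without being remotely gloomy.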
Keep in mind this is stuff your friends were sharing anyway, and refreshing the news feed might show posts that were previously hidden.
These sorts of priming manipulations are considered minimal risk, and their effects — which are mild — often only last a few minutes. My undergraduate psych research was a priming study, and I managed to temporarily tweak people’s privacy attitudes by having them read brief stories that were fake quotes about the benefits or risks of sharing on social media. Did my study tear the fabric of the universe? No. It just got people to answer noticeably differently immediately after reading the quotes, depending on which set of quotes they read.
Here, they found that people used more positive or negative words depending on which content they viewed. Priming effects from viewing words are common. You’re quicker to recognize the word “doctor” after being presented with the word “nurse” than if you saw “bread”, because the network of words associated with “nurse” is activated. So when negative words show up, other negative words are more likely to be at the top of your mind, more likely to be used in a subsequent post, etc.
There may be some mild emotional effects as well, but the handwringing over potential “tragic outcomes” seems way overblown. People are not that fragile.
The requirement for informed consent can be waived under certain circumstances. Researchers may even gain permission to deceive participants under certain circumstances. These kinds of ethical decisions are always weighed by the proportion of risk to societal benefit, and whether the study can feasibly be done any other way.
700,000 is a pretty big sample, big enough for a reasonable population of outliers to exist. If one of the outliers on the sad side were close enough to suicide, this slight pressure could have pushed them over the edge.
Sure, something else might have done it anyway, but they might also have survived the low patch and gone on to a full recovery. Did I hear someone dismiss this as being a “one in a million chance”? Well, with almost 700,000 people in the sample, a one-in-a-million risk is hardly far-fetched!
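Back-of-the-envelope, and assuming a purely hypothetical one-in-a-million per-person risk (the figure is mine, not anything from the study):

    # Rough sanity check of the "one in a million" intuition.
    # The 1e-6 per-person risk is an assumed figure, not a measured one.
    n = 689_003                        # users in the experiment
    p = 1e-6                           # hypothetical chance of a severe outcome per person
    expected = n * p                   # roughly 0.69 affected people expected
    at_least_one = 1 - (1 - p) ** n    # roughly a 50% chance of at least one
    print(expected, at_least_one)

At that scale, even a one-in-a-million harm is roughly a coin flip across the whole sample.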
Is the risk of even one human life worth it? They say “nothing bad happened”, but how will anyone ever know? It is unlikely that any suicide note will say “Facebook made me do it”.
We have ethics for this sort of study, and just because Facebook is huge doesn’t give them the right to assume they have different rules to the rest of us. It is right that they be brought to task for this.
It sets a really, really bad precedent, and it left a pretty sour taste in my mouth.
Personally, I don’t care that much, but it just seems so completely surreal; Facebook is a publicly traded company, and it is just beyond mind-blowing that someone was like “hey, let’s change what information our users receive from their friends and family to see if it makes them angry/sad/pissy. You know, like, gaslight them, but for science.”
If one of my colleagues suggested we gaslight someone just to see how they react, I would fire them. If I were, say, a big investor in a company that gaslights its clients, I’d either pull out my investment or start clamoring for the head of the guy who thinks it’s OK to manipulate people for a ridiculous experiment without them opting in or agreeing to participate in an active way.
It’s really, really crazy. Experimenting on 700,000 people to see how it affects their emotions is, like, sociopathic. That’s a really, really bad precedent.
Good to know Facebook needed to manipulate and deceive 700,000 of its users to determine that happiness is contagious. Now let’s experiment on Facebook to see if untrustworthiness is contagious!
There is no question that this study should have gone through an IRB (Institutional Review Board), which is exactly what Cornell declined to do. The outcome of the study per se (whether it is useful, or interesting, or prestigious to Fb, or whatever) makes absolutely no difference to the ethical issue at stake: informed consent. The fact that Facebook is able to dodge IRB scrutiny is itself unethical. Cornell’s media statement also attempted to dodge the bullet by blaming the affair on Fb, and it represents an utter failure to take responsibility for their own participation in unethical research. As this author pointed out, a EULA is for lawyers; it is no substitute for a formal evaluation of the ethics of human research. This is another slap in the face to Facebook users by a company that doesn’t give a damn about privacy. Cornell University should formally censure Dr. Guillory and Dr. Hancock for participating in, and formally rebuke Adam Kramer for conducting, an unethical study. PNAS also shares blame for publishing unethical research (but hey, let’s all remember that PNAS articles are really just paid advertisements anyway!). Cornell should also evaluate whether they perhaps need an overhaul of their entire IRB. Really shameful all the way around.