“Report” is a word that kids just don’t like.
Sure, a “report abuse” button may seem like a great idea to help victims of online bullying, but 13-16-year-olds in a Yale focus group shied away from the word, saying it sounded like they’d get a friend in trouble.
That’s just one insight that Facebook took to heart, stripping words like “report” from its newly launched tools and Bullying Prevention Centre, which Facebook premiered in the US last autumn and which, as of Wednesday, is available in the UK and throughout Europe.
The bullying efforts come from years of work by Facebook engineers and partners at the Yale Center for Emotional Intelligence.
In November 2013, Facebook launched a preliminary version, the Bullying Prevention Hub, in the US, marking the first time the site had put a report abuse button at victims’ fingertips.
That early work has now been complemented with a set of reporting tools designed to empower teenagers who might be experiencing a range of emotions caused by all manner of affronts previously lumped under the umbrella of “bullying.”
For example, teenagers and children can be embarrassed, upset or threatened by content posted on the social network. Now, they’ll have more nuanced ways to communicate with people who post the troublesome content.
The semantics of the word “report” is a case in point.
Arturo Bejar, Engineering Director for Facebook’s “compassion research team”, told Wired.co.uk that the teenage focus group’s dislike of the word caused the anti-bullying team to drop it in favor of more informative options – a tweak that led to an immediate improvement in the tool’s use, he said:
We used more descriptive language - 'this post is a problem' or 'I don't want to see this'. The moment we changed that, we tripled the number of people going into the flow and tripled those doing the report. Originally the button did not serve its purpose as far as making people feel aware they could deal with issues.
Jake Brill, Facebook product manager, said in a video on the new Bullying Prevention Centre that more often than not, when we see something posted that we don’t like, it doesn’t actually violate Facebook’s community standards.
Rather, objectionable content can represent a violation of the social contract between two people, he said – something that’s perhaps emotionally disturbing but that doesn’t necessarily merit a report of abuse.
James Mitchell, manager of user operations for Facebook, said that a good example is when we get tagged in photos.
Maybe we just don’t like the way we look. Maybe it’s a disastrous, career-ruining photo we don’t want our employer to see.
The thing is, it’s hard to tell our friends that we don’t like something they did, Mitchell said.
The company started to take a look at research being done around compassionate communication and realized there were “many people in the field doing really great work,” Bejar said.
So in January 2014, Facebook began working with compassionate communications experts, trying to infuse its reporting flow with compassion theories.
Now, a few months later, we can see the results of the work.
One thing that Facebook did was to take the simple “report” function and deepen it into something that more closely approximates the amount of information humans impart when we communicate face to face, with body language, facial expressions and dialogue.
So now, when people say that they don’t like a photo, Facebook asks why: is it because it’s a bad photo? Reputation-damaging? Embarrassing? Offensive?
The better the language people have to engage in these kinds of conversations, the more likely the recipient of a message is to respond, Facebook says.
When it comes to genuine bullying, the social network has a very specific new flow: first, Facebook asks the user what’s happening, then how they feel about it, and then it asks the user about the intensity of the emotion.
This information gets sent to a reporting team that decides how to proceed.
The user, however, receives a message validating his or her feelings, such as: “We’re sorry, no one should say mean things, here are some options for you to resolve”.
It’s fascinating to read about how Facebook and Yale compassion engineers have finessed language in the new reporting tools.
For example, politeness makes a measurable difference – making messages as polite as possible, with phrases like “I would prefer that people don’t see this on Facebook; would you please take it down?”, worked best, Wired.co.uk reports.
In fact, the word “please” performed 4 percent better than “would you mind” – likely because the wording evokes an emotional response in the targeted friend, Bejar told the news outlet.
Another bit of insight: people didn’t respond particularly well to the message “it’s a bit sad” – perhaps that’s overly dramatic when you’re just talking about a photo? – but they did seem to appreciate the word “embarrassing” – who amongst us has never been embarrassed?
As it is, critics charge that the advice given to bullying victims tends to be overly simplistic.
Twitter’s new “mute” button is the type of more socially sensitive tool we need online – a button that allows us to ignore boring or irrelevant people without insulting them with a more abrasive “block”.
Kudos to Yale and Facebook for delivering similarly nuanced reporting flows and tools for children and teens.
Let’s hear it for nuanced messaging and “report” tools that better mirror the intricacies of human relations.
For more tips on helping children and teens who might be cyber-bullied, check out our 10 tips to keep kids and teens safe online.