Toxic comments, how do we detest thee? Let me count the ways.
Sites that have simply given up on scrubbing the nasty from their comment sections now include Vice, The Telegraph, Popular Science, Recode, Mic, The Week, Reuters, The Verge, and USA Today … to name a few.
But the war to establish comment section civility – I would say “reclaim”, but the notion that comment sections were ever less than unnerving is debatable – is far from over.
Google’s latest salvo: on Thursday, it released an artificial intelligence (AI) tool, Perspective, an API that uses machine learning models to identify how troll-like a comment is. The API was developed by Jigsaw – a Google division – and Google’s Counter Abuse Technology team.
Perspective learns by seeing how thousands of online conversations have been moderated. It’s been trained on data collected through online surveys and scored with the “toxicity” model. That toxicity model, in turn, was trained by asking people to rate comments on a scale, from a “very healthy” contribution all the way up to “That was rude, disrespectful, and/or unreasonable, and I’m likely to leave this discussion.”
Jigsaw gave an example of how Perspective organized comments on three topics that get people pretty hot under the collar when they’re discussing them online: climate change, Brexit and the US election. Here’s an example of how it rated climate change-related comments:
The goal is to improve quality of debate, and that’s much more than an abstract concept. Online publishers have financial motivation to get people to stay on their sites, as opposed to closing site windows in disgust.
Researchers have found that rudeness, obscenities and attacks on other commenters create what they’ve dubbed the “nasty effect”. It’s an effect that results in a drop-off of how much readers trust and esteem content.
In other words, sites’ reputations are tinted, or tainted, by whatever’s bubbling up from the comment sections at the bottom of articles. That translates into lost money: it’s hard to get revenue-generating ads if steaming piles of comments scare away readers and sink site traffic numbers.
A number of online publishers are working with Jigsaw on Perspective and other tools, all of which are being developed to automate detection of toxic comments using machine-learning models. Jigsaw cites experiments being run by the Wikimedia Foundation, the New York Times, the Economist and the Guardian.
It’s hard to argue with AI that scores a given comment against similar comments that people have rated as being toxic. But if you want to argue with the AI logic, Google’s welcoming input, saying that these are still the early days, and it expects it will get things wrong.
You can give the tool a try for yourself: go to the Perspective page and scroll down to the Writing Experiment section. There, you can type in a comment, and Perspective will rate how similar it is to comments that others have dubbed toxic.
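For developers who'd rather score comments programmatically than via the demo page, here's a minimal sketch of what a Perspective API request might look like. The endpoint URL, payload shape, and response structure below are assumptions based on Jigsaw's public developer documentation, and the sample response is fabricated for illustration – check the current API reference (and get your own API key) before relying on any of it.

```python
import json

# Assumed endpoint from Jigsaw's public docs; verify against the current
# Perspective API reference. A real call also needs ?key=YOUR_API_KEY.
ANALYZE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_request(comment_text):
    """Construct the JSON body for an AnalyzeComment-style request,
    asking only for the TOXICITY attribute."""
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def toxicity_score(response):
    """Pull the summary toxicity probability (0.0 to 1.0) out of a
    response dict in the assumed documented shape."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# An illustrative response in the assumed shape (not real API output):
sample_response = {
    "attributeScores": {
        "TOXICITY": {
            "summaryScore": {"value": 0.68, "type": "PROBABILITY"}
        }
    }
}

print(json.dumps(build_request("Liar!")))
print(f"Toxicity: {toxicity_score(sample_response):.0%}")
```

In practice you'd POST the request body to the endpoint over HTTPS with your API key and parse the JSON that comes back; the helper above just shows the shape of that exchange.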
I put in some variations of a comment, plus some insults penned by master insulter William Shakespeare. Their toxicity scores:
- Might there be a possibility that he’s not telling us the truth? 8%
- She is spherical, like a globe. I could find out countries in her. 9%
- I do believe he’s lying. 19%
- Thou cream-faced loon. 26%
- He’s lying. 29%
- Pants on fire! 35%
- Thou dost infect my eyes. 46%
- Out, you green-sickness carrion! 48%
- Lying blowhard. 55%
- Bald-faced liar. 62%
- Liar! 68%
- Better a witty fool than a foolish wit. 69%
- He is a liar. 70%
- Out, you baggage! You tallow face! 80%
So, yes, Capulet’s a troll, by the bot’s estimation. Sorry, WS: Romeo and Juliet has been deemed the Reddit of your day.