YouTube bans dangerous and harmful pranks and challenges

Driving while blindfolded is stupid. Ingesting laundry detergent pods is stupid. Asking your girlfriend to shoot you through an encyclopedia is stupid. And, in the case of Pedro Ruiz III, it was lethal.

These are all so-called “pranks” that have been filmed and posted on YouTube. After reports of people getting hurt or even killed, YouTube has explicitly called it quits on the genre.

On Tuesday, Google announced that it had updated its enforcement guidelines on dangerous challenges and pranks.

Specifically, Google updated its external guidelines to clarify that such challenges “have no place on YouTube.” Examples include the Tide pod challenge, in which teens dare each other to bite into laundry pods – something that can and has led to poisoning – and the fire challenge, which involves pouring flammable liquid onto your skin and setting it alight, and which has resulted in multiple cases of kids giving themselves second- and third-degree burns.

A history of violence and/or stupidity

Dangerous pranks and challenges may not have a place on YouTube now, but the content has certainly made itself at home before this. Some examples:

  • In 2016, four members of the YouTube channel TrollStation – known then as the septic tank of prankster sites – were jailed for staging and filming fake robberies and kidnappings. Their aggressive and/or violent public antics included staged brawls and smashing each other over the head with bottles made out of sugar.
  • In 2017, a couple in the US reportedly lost custody of two of their five children, whom they had filmed while screaming profanities at them, breaking their toys as a “prank” and blaming them for things they didn’t do. Some of the videos, posted to their DaddyOFive YouTube channel, showed the kids crying and being pushed, punched and slapped.
  • In February 2018, Australian YouTube prankster Luke Erwin was fined $1,200 for jumping off a 15-meter-high Brisbane bridge in the viral “silly salmon” stunt.
  • US YouTube prankster Pedro Ruiz III was killed last year by his girlfriend and the mother of his children after insisting that she shoot a .50 caliber bullet through an encyclopedia he was holding in front of his chest. She was sentenced to 180 days in jail.

How will YouTube yank the pranks?

It’s a step in the right direction to say that this type of material “has no place” on YouTube. But what exactly is YouTube going to do about it? Its previous moderation efforts haven’t exactly been stellar, after all.

In April 2018, during the earnings call for Google parent Alphabet, Google CEO Sundar Pichai pointed to the success of automatic flagging and human intervention in the removal of violent, hate-filled, extremist, fake-news and/or other violative YouTube videos. According to YouTube’s first-ever quarterly report on removed videos, between October and December 2017, it removed a total of 8,284,039 videos. Of those, 6.7 million were first flagged for review by machines rather than humans, and 76% of those machine-flagged videos were removed before they received a single view.

It was an impressive number, but if YouTube’s ongoing problems with getting bestiality off the platform are any indication, it won’t be easy. BuzzFeed News told YouTube back in April that searching for the word “girl” along with “horse” or “dog” was returning dozens of videos with thumbnails suggesting women having sex with those animals.

Well, that won’t do, YouTube said, removing the videos and emphasizing that the “abhorrent” content was in violation of its policies. But what exactly did it do to enforce that no-horse-and-pony-show policy?

Not much, apparently. After BuzzFeed published an article on Tuesday about such content still being plentiful on YouTube, the company said that it’s gone after the culprits by way of throttling ad revenue, “aggressively [enforcing] our monetization policies to eliminate the incentive for this abuse.”

YouTube also said in its statement that it’s beefing up enforcement against abusive thumbnails and trying to “get it right.”

We recognize there’s more work to do and we’re committed to getting it right.

Are those steps better than nothing? Perhaps, but you can’t punish content producers via ad revenue if they haven’t actually monetized their videos. As a senior YouTube employee told BuzzFeed last year, the graphic thumbnails may well have been coming from a content farm that keeps videos ad-free until their views spike and it can cash in big-time.

Could artificial intelligence (AI) help YouTube moderate its vast reams of content?

BuzzFeed talked to one AI expert who thinks it could spot bestiality imagery, though training AI on human-on-animal content wouldn’t be much fun. Bart Selman, a Cornell University professor of AI:

It is definitely possible for AI to detect bestiality-related porn, but it would need to be trained on images related to that. So, it requires a special effort to do that kind of training and it’s not fun to work on. Another issue is that the content spreading mechanisms may actually push this stuff widely, going around content safety checks.

Training AI to recognize pranks that veer into dangerous territory seems more difficult still.

To echo what Netflix said after people, inspired by its movie “Bird Box,” started coming up with “do-such-and-such-while-blindfolded” dares, which led to at least one car crash…

…winding up in the hospital with meme-related injuries isn’t the best way to start the new year.

Let’s hope that Google figures out substantive ways to moderate this dangerous content, be it through throttling ad revenue or by training AI to be a lot smarter than the humans who are drawn like moths to the flame… or to the blindfold… or to the sugar bottles crashed over their skulls.

You can’t legislate people out of stupidity, but perhaps you can strip them of stupidity-derived internet glory.