Twitter threatens revenge porn posters with account locking and suspension

Twitter. Image courtesy of 360b/Shutterstock.

Twitter has joined the war against revenge porn.

On Wednesday, its rules were amended to ban intimate images posted without subjects’ permission.

Twitter says it will lock the accounts of those who post revenge porn until the offending material is deleted, and will go further and suspend accounts if the content was posted with the intent to harass.

The updated rules now read:

You may not post intimate photos or videos that were taken or distributed without the subject's consent.

The micro-blogging platform also moved to put revenge porn in the same category as threats of violence against others on the basis of race, ethnicity, national origin, religion, sexual orientation, gender, gender identity, age, or disability.

Its updated abuse policy now uses the same language as its new rules, outlawing the posting of intimate images without the subject’s consent.

Content deemed to be in violation of the new policy will be hidden from public view. The users who post it will have their accounts locked until they delete the objectionable content.

If Twitter finds that the content was posted with the intent of harassment, perpetrators will be subject to suspension.

Twitter also gave BuzzFeed a new FAQ regarding stolen nudes and revenge porn.

The FAQ is based on 12 questions BuzzFeed had posed to Reddit after that site banned what it called “involuntary porn” two weeks earlier.

Twitter says that as part of the reporting process, users will be asked to confirm that photos or videos in question were posted without consent. Its agents will review the complaints to confirm whether the reported content actually violates its policy.

In an interview with National Public Radio, Christina Gagnier, a board member of Without My Consent, noted that Twitter’s new policy covers a range of private content beyond intimate images, including taxpayer IDs, dates of birth, and addresses:

What they're trying to tackle is information that was disclosed in a private context and there was no permission for that information to be put out publicly. And then that information is being used to victimize someone.

If Twitter’s agents determine that images do not in fact violate the company’s policy, the company will refrain from taking action. That includes, for example, when the reported content was previously made publicly available with permission.

Twitter has been criticized for dragging its feet with regard to cyberbullying, user safety and harassment.

That criticism has stung. It’s obvious that cleaning up its act is now a priority for the company.

In December, Twitter introduced new anti-trolling tools and promised quicker abuse investigation.

In February, it put out a new tool, TweetDeck Teams, to help stop password sharing and thereby help to stop account hijacking.

Also in February, in a leaked memo, CEO Dick Costolo outlined the impetus behind these moves.

In the memo, Costolo acknowledged that Twitter “sucks at dealing with abuse and trolls”, confessed to being “frankly ashamed” at “how poorly” the company’s done at dealing with the issue, and promised “to start kicking these people off right and left and making sure that when they issue their ridiculous attacks, nobody hears them.”

That pledge fell on deaf ears in some quarters.

A month after Costolo’s memo leaked, trolls threatened to violate baseball star Curt Schilling’s daughter with a baseball bat, among other violent, vicious threats.

Obviously, the work to make Twitter a safe place is an ongoing process. Here’s hoping that the new policies help to stiffen Twitter’s backbone all the more.

The resolve it’s shown to fix this problem is heartening. May other sites follow suit.

Image of Twitter bird courtesy of 360b / Shutterstock.com.