Facebook’s real-name policy requires people “to provide the name they use in real life; that way, you always know who you’re connecting with.”
But what if a pod person’s using your face and your name?
Facebook’s now working on a feature that will automatically pick up on people using your name and profile picture, sending an alert if somebody’s put up a doppelganger account.
According to Mashable, the feature’s been in development since November and is now live for 75% of the world.
Facebook Head of Global Safety Antigone Davis told Mashable that the feature will be expanded further soon.
How it works: after Facebook gives you a heads-up about a suspicious profile, it will prompt you either to provide personal information proving the account is a fraud, or to confirm that it belongs to somebody legitimate.
Mashable reports that Facebook’s notification process is automated, but flagged profiles are manually reviewed.
Davis said that impersonation isn’t necessarily widespread on the platform, but it is a source of harassment: that’s just one takeaway from a series of roundtables Facebook’s held around the world to discuss women’s safety on social media.
Mashable quotes her:
We heard feedback prior to the roundtables and also at the roundtables that this was a point of concern for women.
And it’s a real point of concern for some women in certain regions of the world where [impersonation] may have certain cultural or social ramifications.
What’s in a name? A rose by any other name would smell as sweet, but a troll using somebody’s name and photo can impersonate that person to smear their reputation.
That’s a tool in the sextortionist’s bag. Threatening to use women’s photos to associate them with prostitution was one trick used by Michael C. Ford, the former US Embassy worker who on Monday was sentenced to nearly 5 years in jail after pleading guilty to sextorting, phishing, breaking into email accounts, stealing explicit images and cyberstalking hundreds of women around the world – some of them minors.
So kudos to Facebook for automating the discovery of some of these trolling, pod-people identity abusers. Even though it sounds like the impersonation feature will only pick up on profile pictures, that’s a start.
In addition to the impersonation detection feature, which Facebook plans to roll out further after getting more feedback, it’s also testing two other safety features as a result of the roundtable talks.
One is a new way to report nonconsensual porn. Nonconsensual intimate images have been banned on Facebook since 2012, but the company’s testing a new reporting flow that should make the process more compassionate for abuse victims, Davis said.
On top of allowing people to report a photo as inappropriate, they’ll also be able to identify themselves as the subject.
Beyond just triggering the review process, that would also prompt Facebook to offer links to outside resources, including support groups for victims of abuse and information about possible legal options, Mashable reports.
Finally, Facebook’s also working on a photo checkup feature similar to its privacy dinosaur, that blue cartoon which urges users to check their privacy settings.
Davis said that in spite of Facebook having put things like the blue dinosaur and plain English explanations in place to help explain privacy, some users – particularly in India and other countries where it’s testing the new photo-centric privacy feature – don’t always get how to use privacy controls.
The photo checkup feature is meant to help by walking users through a step-by-step review of the privacy settings for their photos. It’s now live in India, as well as some countries in South America, Africa and southeast Asia.