Facebook’s real-name policy requires people “to provide the name they use in real life; that way, you always know who you’re connecting with.”
But what if a pod person’s using your face and your name?
Facebook’s now working on a feature that will automatically pick up on people using your name and profile picture, sending an alert if somebody’s put up a doppelganger account.
According to Mashable, the feature’s been in development since November and is now live for 75% of the world.
Facebook Head of Global Safety Antigone Davis told Mashable that the feature will be expanded further soon.
How it works: after Facebook gives you a heads-up about a suspicious profile, it will prompt you either to provide personal information proving the account is a fraud, or to confirm that it belongs to somebody legitimate.
Mashable reports that Facebook’s notification process is automated, but flagged profiles are manually reviewed.
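The reporting suggests a simple matching rule: flag a candidate profile when its display name matches an existing user's and its profile photo is near-identical. Purely as an illustrative sketch (this is not Facebook's actual system; every function, name and threshold below is invented for the example), such a check could combine normalized name comparison with a perceptual image hash:

```python
# Hypothetical doppelganger check: match on normalized name plus a
# near-identical profile photo. Illustrative only -- NOT Facebook's
# real implementation.

def average_hash(pixels):
    """Average-hash a small grayscale image (a 2D list of 0-255
    values): each bit is 1 if that pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(a, b):
    """Count differing bits between two equal-length bit lists."""
    return sum(x != y for x, y in zip(a, b))

def normalize_name(name):
    """Lowercase and collapse whitespace so 'John  Doe' == 'john doe'."""
    return " ".join(name.lower().split())

def looks_like_impersonation(existing, candidate, max_distance=3):
    """existing/candidate are (name, pixels) tuples. Flag only when the
    names match AND the photo hashes are within max_distance bits."""
    if normalize_name(existing[0]) != normalize_name(candidate[0]):
        return False
    dist = hamming(average_hash(existing[1]), average_hash(candidate[1]))
    return dist <= max_distance
```

A perceptual hash (rather than an exact byte comparison) matters here because impostors typically re-upload a photo that has been recompressed or slightly cropped; and per Mashable, any automated match would still go to a human reviewer rather than triggering action on its own.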
Davis said that impersonation isn’t necessarily widespread on the platform, but it is a source of harassment: that’s just one takeaway from a series of roundtables Facebook’s held around the world to discuss women’s safety on social media.
Mashable quotes her:
We heard feedback prior to the roundtables and also at the roundtables that this was a point of concern for women.
And it’s a real point of concern for some women in certain regions of the world where [impersonation] may have certain cultural or social ramifications.
What’s in a name? A rose by any other name would smell as sweet, but a troll using somebody’s name and photo can impersonate that person and smear their reputation.
That’s a tool in the sextortionist’s bag. Threatening to use women’s photos to associate them with prostitution was one trick used by Michael C. Ford, the former US Embassy worker who on Monday was sentenced to nearly 5 years in jail. Ford pleaded guilty to sextortion, phishing, breaking into email accounts, stealing explicit images, and cyberstalking hundreds of women around the world – some of them minors.
So kudos to Facebook for automating the discovery of some of these trolling, pod-people identity abusers. Even though it sounds like the automated impersonation feature will only pick up on profile pictures, that’s a start.
In addition to the impersonation detection feature, which Facebook plans to roll out further after getting more feedback, it’s also testing two other safety features as a result of the roundtable talks.
One is a new way to report nonconsensual porn. Nonconsensual intimate images have been banned on Facebook since 2012, but the company’s testing a new feature that should make it more compassionate for abuse victims, Davis said.
On top of allowing people to report a photo as inappropriate, they’ll also be able to identify themselves as the subject.
Beyond just triggering the review process, that would also prompt Facebook to offer links to outside resources, including support groups for victims of abuse and information about possible legal options, Mashable reports.
Finally, Facebook’s also working on a photo checkup feature similar to its privacy dinosaur, that blue cartoon which urges users to check their privacy settings.
Davis said that despite Facebook having put things like the blue dinosaur and plain-English explanations in place, some users – particularly in India and other countries where it’s testing the new photo-centric privacy feature – don’t always understand how to use privacy controls.
The photo checkup feature is meant to help by walking users through a step-by-step review of the privacy settings for their photos. It’s now live in India, as well as in some countries in South America, Africa and Southeast Asia.
Image of man in disguise courtesy of Shutterstock.com
7 comments on “Facebook’s testing a feature that alerts you if someone’s impersonating you”
it sounds like the automated impersonation feature will only pick up on profile pictures
This is going to create a lot of headaches for identical twins.
From the top of the story: “Facebook’s now working on a feature that will automatically pick up on people using your name and profile picture…”
In other words, both name and profile picture will be used to trigger this. Identical twins may look alike but they won’t have identical names.
It seems that this would flag what are commonly called cloned accounts, yet that is not mentioned here. I have seen quite a few accounts of Friends that have been cloned by people who are presumably doing so in order to collect personal information from Friends of the person whose account was cloned, who are tricked into Friending the cloned account.
It’s about time. Any idiot fb user can accuse you of impersonation just to get your account shut down. Then it’s up to you to prove you are who you say you are. Completely ridiculous. Even with homeland security documentation it took the brilliant minds (machines) at fb a week to reinstate my account.
It doesn’t go nearly far enough… there are plenty of people using fake profiles with stolen photos trying to lure in and trick lonely people, especially widows etc., out of their money. It would be great if Facebook could go against that, too, and automatically discover when someone uses photos from others.
I’m following up on this point (paragraph formatting removed to make it all one section in my quote):
“One is a new way to report nonconsensual porn. Nonconsensual intimate images have been banned on Facebook since 2012, but the company’s testing a new feature that should make it more compassionate for abuse victims, Davis said. On top of allowing people to report a photo as inappropriate, they’ll also be able to identify themselves as the subject.”
So – Not only is the user embarrassed, but they can call themselves out by name to announce they’re embarrassed? Doesn’t that defeat the purpose??
The problems with trying to prove or disprove consent make the entire point moot. Usually you’ll just have one person’s word against another’s, and determining whether or not the act was consensual (and consequently, whether or not a criminal act took place) is way out of Facebook’s purview and ability. They aren’t a law enforcement agency; it’s absurd to think that they can get to the truth in such matters, or that they should even try. Their answer should just be “report it to your local police”.
That sort of language in Facebook’s rules sounds nice, but it’s ultimately unenforceable without a court’s involvement, and it just duplicates their existing responsibility to comply with law enforcement when such things do occur. It might, though, make it a little easier internally for them to justify removing such content no matter which jurisdiction claims the act was non-consensual.