The online activist group Fight for the Future is calling for a Federal ban on facial recognition surveillance.
Evan Greer, deputy director of Fight for the Future, compared facial recognition to nuclear or biological weapons: while we can’t go back in time to ban the development of those technologies, we still have time to stop facial recognition before we get to the point where we’re living in what the campaign calls a nation with “automated and ubiquitous monitoring of an entire population.”
This surveillance technology poses such a profound threat to the future of human society and basic liberty that its dangers far outweigh any potential benefits. We don’t need to regulate it, we need to ban it entirely.
This is the latest campaign from the group that led a targeted internet blackout in 2015, in which thousands of participating sites blocked visits from Congressional IP addresses and redirected them to a Patriot Act protest page. Then, in 2017, Fight for the Future launched a last-ditch attempt to save net neutrality with its Break the Internet campaign.
Its latest call to action, BanFacialRecognition.com, offers visitors a form that connects them to their Congressional and local lawmakers in order to ask them to ban this “unreliable, biased” technology, which the group calls “a threat to basic rights and safety.”
Fight for the Future charges Silicon Valley lobbyists with “disingenuously calling for light ‘regulation’” of facial recognition so they can continue to profit from the rapid spread of this “surveillance dragnet,” thereby ducking the real debate: namely, should this technology even exist?
Industry-friendly and government-friendly oversight will not fix the dangers inherent in law enforcement’s use of facial recognition: we need an all-out ban.
The campaign includes a laundry list of the criticisms that stick to facial recognition technology like so many civil rights burrs. One of its many problems is a high error rate. For example, as the Independent reported last year, freedom of information requests show that the facial recognition software used by the UK’s biggest police force – London’s Metropolitan Police – generated false positives in more than 98% of its alerts. The UK’s biometrics commissioner, Professor Paul Wiles, told the news outlet that the technology is “not yet fit for use”.
As we reported in 2017, the Met’s use of facial recognition fell flat on its face two years in a row. Its “top-of-the-line” automated facial recognition (AFR) system, which it trialled at London’s Notting Hill Carnival, couldn’t even tell the difference between a young woman and a balding man. One man was wrongfully detained after being erroneously tagged as being wanted on a warrant for a rioting offense.
Multiple studies have found that AFR is an inherently racist technology: facial recognition algorithms have been found to be less accurate at identifying black faces.
During a scathing US House oversight committee hearing on the FBI’s use of the technology in 2017, it emerged that 80% of the people in the FBI’s facial recognition database don’t have any sort of arrest record. Yet the system’s recognition algorithm inaccurately identifies them during criminal searches 15% of the time, with black women most often being misidentified.
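Figures like the 98% false-alert rate are less paradoxical than they sound: when genuine matches are rare in a scanned crowd, even a matcher that looks accurate per-comparison produces mostly false alerts. A minimal back-of-the-envelope sketch of that base-rate effect (all numbers here are hypothetical, not from the Met’s system):

```python
# Base-rate sketch: why most alerts from crowd scanning can be false
# positives even when the matcher itself looks accurate.
# All numbers are hypothetical illustrations.

def alert_stats(crowd_size, watchlisted, true_positive_rate, false_positive_rate):
    """Return (true_alerts, false_alerts, share_of_alerts_that_are_false)."""
    true_alerts = watchlisted * true_positive_rate
    false_alerts = (crowd_size - watchlisted) * false_positive_rate
    return true_alerts, false_alerts, false_alerts / (true_alerts + false_alerts)

# 100,000 faces scanned, only 10 of them actually on a watchlist;
# the matcher catches 90% of real targets and misfires on 1% of everyone else.
tp, fp, false_share = alert_stats(100_000, 10, 0.90, 0.01)
print(f"true alerts: {tp:.0f}, false alerts: {fp:.0f}, "
      f"false share: {false_share:.1%}")
```

Even with a seemingly respectable 1% per-face error rate, almost every alert raised is a false one, simply because innocent faces vastly outnumber watchlisted ones.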
From the Ban Facial Recognition site:
These errors have real-world impacts: wrongful imprisonment, deportation, or worse.
Even though facial recognition has proved to be highly error-prone, it’s being widely deployed by law enforcement in multiple countries. And once governments have our biometric information in their databases, it’s “an easy target for identity thieves or state-sponsored hackers,” Fight for the Future says.
In fact, our biometric data has already been ripped off, the group said, pointing to last month’s theft of a US Customs and Border Protection (CBP) database full of travelers’ photos and license plates.
Fight for the Future points out that “Law enforcement officers frequently search facial recognition databases without warrants – or even reasonable suspicion that you’ve done anything wrong.”
And it’s not just facial image databases that police put to non-sanctioned uses: officers have also trawled criminal-history and driver databases for romantic partners, business associates, neighbors, journalists and others, for reasons that have nothing to do with official police work. We’ve seen multiple cases of cops treating their state’s driver license databases like a kind of Facebook, using them to look up and ogle female colleagues’ images hundreds of times – a hobby that has cost taxpayers hefty sums when those women have sued over breach of privacy.
‘It threatens our future’
Fight for the Future says that facial recognition is uniquely Orwellian, and that we’ve got to stop its spread before we’re living under an authoritarian state:
Facial recognition is unlike any other form of surveillance. It enables automated and ubiquitous monitoring of an entire population, and it is nearly impossible to avoid. If we don’t stop it from spreading, it will be used not to keep us safe, but to control and oppress us – just as it is already being used in authoritarian states.
The backlash is growing
As Fight for the Future pointed out in a press release about the campaign to ban facial recognition, police use of the technology has already been banned in San Francisco and Somerville, Massachusetts.
The group said that Axon, which makes tasers and body cams for police officers, has said that it wouldn’t commercialize facial recognition because it currently can’t “ethically justify” its use.
Fight for the Future also cited recent revelations that the FBI and Immigration and Customs Enforcement (ICE) are reportedly using driver’s license photos for facial recognition searches without license holders’ knowledge or consent. Doing so gives them access to millions of Americans’ driver’s license photos, creating what critics have called an “unprecedented surveillance infrastructure.”
Both Democrats and Republicans have been dumbfounded by law enforcement’s audacity – no elected officials gave permission for 18 state DMVs to share their driver’s license databases – and some have moved to ban the practice, given the absence of rules governing its use by law enforcement and government agencies.
Rep. Jim Jordan, R-Ohio, said during a House Oversight Committee hearing that it’s “scary.”
It doesn’t matter what side of the political spectrum you’re on. This should concern us all.
Fight for the Future:
We’re joining this outcry to call for a complete ban on facial recognition. It’s time the federal government take a stand now to prevent this technology from proliferating across the country.
But while there well may be bipartisan support for banning facial recognition, there’s also bipartisan support for keeping it, propped up by strong tech lobbying efforts. During the House Oversight Committee hearing, “Facial Recognition Technology (Part 1): Its Impact on our Civil Rights and Liberties,” which took place in May, Rep. Alexandria Ocasio-Cortez, D-N.Y., had this to say:
The consensus on this issue I think is bipartisan, but also the opposition is bipartisan as well. You know, big tech is a very strong lobby that has captured a lot of members of both parties.
Comments on “Facial recognition surveillance must be banned, says Fight for the Future”
It’s bizarre that all the money invested in a technology still in its infancy (“infancy” is an understatement for a 98% failure rate) isn’t put towards getting police back on the beat. Who on earth authorises this?
No. In China, if you cross the road against the lights, a big screen immediately shows your face, your name and the offence! I think Western countries are just hiding these facts from ordinary people.
Facial recognition AND Facebook—they both should be banned. Regulating or splitting up won’t work. They will always find workarounds, eventually.
Hmm… Yet, it was able to find the two Russian assassins who tried to kill Sergei Skripal among piles of faces in multiple cities. Go ask your kids how facial recognition lets them tag their friends in photos. I think it works a lot better than described by these guys.
Is there any evidence to confirm that facial recognition led to this?
I’m not claiming that FR has no successes, but as far as I remember, “super recognisers” were used in place of facial recognition for this: teams of people with an exceptional ability to recognise faces, who trawl through heaps of CCTV footage after obtaining intelligence about where to start searching.
Perhaps that’s why it’s not mentioned in this article…
Sorry for double-post
Also, to reflect on your reference to Facebook’s facial recognition: remember that Facebook has its own dataset of faces that are effectively ‘verified’ by every member who ‘tags’ them. So false tagging within FB can easily be detected as more data is added, and the FR improves accordingly.
Good point on the extra learning cycles that FB system gets and the limited set of faces.
The programs you mention aren’t solely based on facial recognition. For the tagging, there’s a ton of analytics on location, shared interests/contacts/age, and links to other people established through earlier tagging. As for the assassins, several people likely inspected the images after the AI produced hundreds of hits – but I’m not familiar with that case.
Which is more likely? All the research into face recognition is wrong, or an actual spy didn’t reveal their source?
To be fair, “tag your friends” is a feature that only has to identify which friends they are out of a list of friends an individual has. A “tag your friends” feature that has to identify which friends to tag out of the entire population of Facebook, or even just the population of Facebook users in New York, may be significantly less successful.
Good point on limited set of targets.
If I upload a picture of one of my friends to Facebook, then Facebook suggests a name tag with, in my experience, about 90% accuracy. The other 10% – its mistakes – are hilarious. Facebook only has to pick among a few hundred friends. If the police pick my face out of a crowd and match it against a database of mug shots, a false positive – resulting in a false arrest – is anything but hilarious. We are at risk when authorities treat as gospel truth the output of a program whose results should, at best, be considered a suggestion for further investigation.
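The point about search-space size can be made concrete: if each individual comparison has some small chance of a false match, the chance of at least one false match grows rapidly with the number of candidate faces. A hypothetical sketch (the per-comparison rate below is an illustrative assumption, not a measured figure for any real system):

```python
# How search-space size drives false matches: with a per-comparison false
# match rate f, the chance of at least one false match among n candidates
# is 1 - (1 - f)^n. The rate below is a hypothetical assumption.

def p_any_false_match(n_candidates, per_comparison_fmr):
    """Probability of at least one false match across n independent comparisons."""
    return 1 - (1 - per_comparison_fmr) ** n_candidates

fmr = 1e-5  # assumed one-in-100,000 false match rate per comparison
for n in (300, 30_000_000):  # a friends list vs. a nationwide mug-shot database
    print(f"{n:>11,} candidates -> P(false match) = {p_any_false_match(n, fmr):.3f}")
```

With a few hundred friends, a false match is a rare curiosity; against tens of millions of database entries, at that same per-comparison rate, a false match somewhere becomes a near-certainty.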
I’ve had a lot of fun with Facebook consistently tagging pictures of me (usually where my hairline is obscured with a hat or a bandanna) with the name of a friend whose hair has gone completely white. Mostly this is because until very recently I never tagged my own face in pictures I post, while my friend did so often.
We are all now self tagging our faces as we cross borders. Fake passport names or not, the assassins probably had their faces registered when they entered the EU.
Too little, too late…
If, rather than banning the technology outright, its output were ruled inadmissible as evidence in court or for a warrant, much of the abuse could be addressed. Police could still use it to generate a list of suspects, but they would have to gather evidence through regular police work before seeking a warrant, instead of acting on false matches. It should also be treated like other illegally obtained evidence: anything used for a warrant or in court must be obtained by independent, legitimate means.
Like any technology, for better or worse, the genie is out of the bottle and cannot be put back. Rather than focusing on individual techniques, we need significant criminal justice reform.