California outlaws facial recognition in police bodycams

On Tuesday, California passed into law a three-year ban on the use of facial recognition in police bodycams, technology that would turn the cameras into biometric surveillance devices.

This isn’t surprising, coming as it does from the state behind the impending, expansive California Consumer Privacy Act (CCPA), a privacy law that’s terrifying data mongers.

In May, San Francisco became the first major US city to ban facial recognition. It might well be a tech-forward metropolis, in a state that’s the cradle of massive data-gobbling companies, but lawmakers have said that this confers a degree of responsibility for reining in the privacy transgressions of the companies headquartered there.

When facial recognition gets outlawed, lawmakers tend to point to the many tests that have found high misidentification rates. San Francisco pointed to the ACLU’s oft-cited test that falsely matched 28 members of Congress to mugshots.

The ACLU of Northern California repeated that test in August, finding that the same technology misidentified 26 state lawmakers as criminal suspects.

One of the misidentified was San Francisco Assemblyman Phil Ting, the lawmaker behind the bill, AB1215, which Gov. Gavin Newsom signed into law on Tuesday.

The law, which goes into effect on 1 January 2020 and which expires on 1 January 2023, prohibits police from “installing, activating, or using any biometric surveillance system in connection with an officer camera or data collected by an officer camera.”

The law cites the threat to civil rights posed by the pervasive surveillance of facial recognition bodycams:

The use of facial recognition and other biometric surveillance is the functional equivalent of requiring every person to show a personal photo identification card at all times in violation of recognized constitutional rights. This technology also allows people to be tracked without consent. It would also generate massive databases about law-abiding Californians, and may chill the exercise of free speech in public places.

…and noted the technology’s tendency to screw up:

Facial recognition and other biometric surveillance technology has been repeatedly demonstrated to misidentify women, young people, and people of color and to create an elevated risk of harmful ‘false positive’ identifications.

There are many cases in point when it comes to this error-prone technology. Here’s one: after two years of pathetic failure rates at Notting Hill Carnival, London’s Metropolitan Police finally threw in the towel in 2018. In 2017, the “top-of-the-line” automatic facial recognition (AFR) system they’d been trialling couldn’t even tell the difference between a young woman and a balding man.

Facial recognition failure hasn’t stopped the UK from signing up with Singapore to collaborate on developing a digital identity, mind you. As part of its Verify scheme, the UK Government Digital Service launched a system of biometric payment for government services earlier this year. For its part, France is set to implement a nationwide facial recognition ID program next month, in spite of protests from privacy groups and from its independent data regulator, CNIL.

London’s history of failure with the technology is underscored by an oft-cited study from Georgetown University’s Center on Privacy & Technology, which found that AFR is an inherently racist technology: black faces are over-represented in face databases to begin with, and FR algorithms themselves have been found to be less accurate at identifying black faces.

In another study, published earlier this year by the MIT Media Lab, researchers confirmed that the popular FR technology they tested has gender and racial biases.

The ACLU’s recent test, which labelled him a potential suspect, confirmed all of these findings, Ting said. Besides, pervasive, error-prone surveillance that could, and has, led to the arrest of innocent people isn’t what police bodycams were intended for, he said: they’re supposed to be a tool for police accountability. The San Francisco Chronicle quoted the lawmaker following the passage of his bill in the legislature:

Let’s not become a police state. [Police bodycams should be used] as they were originally intended – to provide police accountability and transparency.

Why just 3 years?

The bill originally called for a permanent ban, but Ting scaled it back to three years, a period after which it may be renewed, in response to protests from law enforcement groups. They argued that an outright ban on the technology would rob them of a vital crime-solving tool: one that could be used to identify repeat offenders, to solve cold cases, and to deter future crime.

Police groups said that facial recognition could help identify criminals at large events, similar to how China has used it to pick out suspects as they travel during the Lunar New Year, for example. The San Francisco Chronicle quoted what the California Police Chiefs Association told lawmakers:

Prohibiting the use of biometric surveillance systems severely hinders law enforcement’s ability to identify and detain suspects of criminal activity.

…while critics of facial recognition say that the same technology puts the public in jeopardy. From ACLU technology and civil liberties attorney Matt Cagle:

Unleashing this inaccurate and racially biased technology on police body cameras …would undoubtedly lead to unjust arrests and even death.