Football fans have not taken kindly to being treated like a bunch of criminals by law enforcement grabbing their mugshots via automated facial recognition (AFR).
Hibs fans protest against SPFL plans to bring in facial recognition cameras for football supporters. pic.twitter.com/3LutNMFWk3
— FCKolektivoVictoria (@fckv2010) February 7, 2016
Nonetheless, and despite the technology’s failings, British police are planning to scan each and every fan’s face with a new surveillance system at the UEFA Champions League final on June 3. Their images will be compared to those in a police database that contains some 500,000 persons of interest.
The final will take place at the National Stadium of Wales, in Cardiff. According to a government contract issued by South Wales Police, the force plans to pilot the use of AFR in and around the stadium and Cardiff's main train station on the day of the game.
The pilot will build on the London Metropolitan Police's trial of AFR at the Notting Hill Carnival in August 2016.
Tony Porter, the UK government’s surveillance camera commissioner, told Motherboard that the recent bombing of a Borussia Dortmund team bus before a Champions League quarter-final match is an example of why law enforcement is looking at AFR.
Porter said that his office has been in touch with South Wales Police to ensure they comply with a code of practice when using AFR:
I have seen the use of AFR increase [over] the past few years and a recent report by the National Institute of Standards and Technology indicated that facial recognition is a difficult challenge. Getting the best, most accurate results for each intended application requires good algorithms, a dedicated design effort, a multidisciplinary team of experts, limited-size image databases, and field tests to properly calibrate and optimize the technology.
He was referring to NIST's Face In Video Evaluation program report (PDF), which found that AFR accuracy drops when subjects don't face the camera or when their faces are obscured.
How accurate is facial recognition?
Not very. Besides the NIST report, studies have found that black faces are disproportionately targeted by facial recognition. They're over-represented in face databases to begin with: according to a study from Georgetown University's Center on Privacy and Technology, in certain states black Americans are arrested at up to three times the rate their share of the population would predict. That over-representation in the database means that whatever error rate a facial recognition system has will be multiplied for that demographic.
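The multiplication at work here is straightforward. A rough sketch, with illustrative made-up numbers (the search volume, misidentification rate per search, and database shares below are assumptions for the example, not figures from the studies cited):

```python
# Illustrative only: assumes false matches land on a demographic roughly
# in proportion to its share of the database, which is the mechanism the
# over-representation argument describes.

def expected_false_matches(searches, false_match_rate, share_of_database):
    """Expected number of false matches falling on one demographic."""
    return searches * false_match_rate * share_of_database

SEARCHES = 10_000   # hypothetical number of searches run against the database
RATE = 0.15         # hypothetical per-search misidentification rate

# A group that is 10% of the general population but 30% of the database
# absorbs three times the false matches its population share would predict.
over_represented = expected_false_matches(SEARCHES, RATE, 0.30)
proportionate = expected_false_matches(SEARCHES, RATE, 0.10)

print(over_represented)                  # false matches hitting the group
print(over_represented / proportionate)  # → 3.0
```

The point the sketch makes is that even a fixed, demographic-neutral error rate produces a skewed burden once the database itself is skewed.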
Beyond that over-representation, facial recognition algorithms themselves have been found to be less accurate at identifying black faces.
During a recent House oversight committee hearing in the US that was scathing about the FBI's use of the technology, it emerged that 80% of the people in the FBI's database have no arrest record of any kind. Yet the system's recognition algorithm misidentifies people during criminal searches 15% of the time, with black women misidentified most often.
That’s an enormous number of people wrongly identified as persons of interest to law enforcement. According to a Government Accountability Office (GAO) report from August 2016, the FBI’s massive face recognition database has 30m likenesses.
In the UK, controversy has arisen recently over the police's retention of images. According to Biometrics Commissioner Paul Wiles, the UK's Police National Database holds 19m images: a number that doesn't even include all police forces. Most notably, it lacks images from the largest force, the Metropolitan Police. A Home Office review offered no statistics on how those databases are being used, or to what effect, Wiles said.
As Motherboard notes, the limitations of facial recognition are borne out by how little it contributed to the Notting Hill Carnival operation: of the 454 people arrested, not one had been identified in advance as a person of interest by the technology, as the Met Police reported.