Governments, retailers and social networks are pushing facial recognition toward ubiquity, a technology that has moved out of the realm of fiction and Hollywood (George Orwell’s novels, or Mission: Impossible, The Bourne Ultimatum, Minority Report and The Matrix Reloaded) into the realm of everyday acceptance.
There are applications and tools that “suggest” the identity of a given person, ranging from naming the actor on the screen to the much more problematic and frankly creepy ability to collect information on the stranger across the aisle from me on the subway using an app on my smartphone.
No country is having more success in adopting facial recognition technologies than China – which isn’t too surprising given the state’s years of authoritarian intrusion into the lives of its citizens.
Do China’s residents really find the facial recognition capabilities useful? Absolutely – there is little pushback, given the different perspective on privacy that exists in China. MIT’s April/May 2017 Technology Review highlighted the myriad ways facial recognition is being used in China:
- Making sure that the driver behind the wheel of the vehicle-for-hire you are about to get into is legitimate;
- Picking up your rail tickets by showing your face;
- Visiting tourist attractions without need for a ticket, your face authenticating you instead;
- Walking into a retailer where you are greeted by name.
And who can forget the success enjoyed by the Chinese app Baby Come Home, built to connect missing children with their parents? The technology, created with Microsoft, reunited a father with his child, who had been missing for four years.
Lest we think China is the only country embracing the technology successfully, many others are, too – and more than a few implementations are not sitting well with people.
Fancy football (soccer)? For the Champions League Final in the UK in early June 2017, a contract was let to scan faces at the stadium and the central rail station – a scenario of real-time review of unknown individuals, matching them against a database of known individuals, presumably bad guys.
The US NIST, in collaboration with the Department of Homeland Security, conducted a multi-year project called Face in Video Evaluation (FIVE). The purpose of the project was to determine whether algorithms could “correctly identify or ignore persons appearing in video sequences”. The identified scenarios included high-volume screening within crowded venues, forensic screening (crime scene), crime video review (such as bank video of a robbery), video-conferencing, and individuals appearing in video footage (television).
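The FIVE task – decide whether a face in video matches someone on a watchlist or should be ignored – is, at its core, open-set identification. A minimal sketch of that decision, assuming face embeddings compared by cosine similarity against a threshold (the random embeddings, names and threshold here are illustrative stand-ins, not NIST’s data or method):

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(v):
    return v / np.linalg.norm(v)

# Hypothetical watchlist: identity -> unit-length face embedding. A real
# system would produce these vectors with a trained face-recognition model.
watchlist = {name: normalize(rng.normal(size=128))
             for name in ["person_a", "person_b", "person_c"]}

def identify(probe, gallery, threshold=0.6):
    """Return (best_match, score), or (None, score) when no gallery entry
    clears the threshold -- i.e. the probe face should be ignored."""
    probe = normalize(probe)
    scores = {name: float(probe @ emb) for name, emb in gallery.items()}
    best = max(scores, key=scores.get)
    if scores[best] >= threshold:
        return best, scores[best]
    return None, scores[best]

# A probe that is person_b's embedding plus a little noise should match;
# an unrelated face (a fresh random vector) should be ignored.
noisy_b = watchlist["person_b"] + 0.05 * rng.normal(size=128)
print(identify(noisy_b, watchlist))               # matches "person_b"
print(identify(rng.normal(size=128), watchlist))  # (None, low score)
```

The threshold is the whole game: set it low and the system floods human reviewers with false alerts; set it high and it ignores genuine matches – the trade-off FIVE was built to measure.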
Fly much? Airports around the globe have been using facial recognition for a number of years (Brazil, UAE, and US), but only recently have we seen airlines rolling out use scenarios. Both JetBlue and Delta are experimenting with facial recognition: JetBlue as part of the boarding process, and Delta for passengers to self-check their luggage.
Then there is the Norwegian digital signage company ProntoTV, which surreptitiously collected data on visitors to its clients’ locations. The software, using artificial intelligence, provided data on the individuals within “scan range”.
What is acceptable in China may not be in the US, as the FBI learned. In May 2016, the FBI was reminded of the E-Government Act of 2002, which requires government agencies to publish a “privacy impact assessment”, as it rolled out its Next Generation Identification-Interstate Photo System (NGI-IPS).
The FBI and its facial analytics teams attracted censure from the US Government Accountability Office, which took the FBI to task for failing to test the technology appropriately, levying six recommendations on the FBI: complete the privacy impact assessment, improve transparency, audit law enforcement’s use of the NGI-IPS capability, test the accuracy of the NGI-IPS technologies, conduct an annual review of the NGI-IPS, and determine whether each system used by the FBI is sufficiently accurate.
The FBI disagreed with some, agreed with most, and then went on its way, only to be called out by the House of Representatives in March 2017, for many of the same issues highlighted by the GAO.
Social networks and search engines have shown us the results of their algorithms in various ways. Facebook users can tag individuals in photographs – and with each tag another piece of the facial recognition jigsaw is provided to Facebook. The result: Facebook can now suggest individuals for tagging to you when you share photos.
Is Facebook alone? No. Snapchat created a positive bump for privacy advocates last year when it filed a patent for an “apparatus and method for automated privacy protection in distributed images” – in other words, automatically assigning privacy settings to an image that matches that of a Snapchat user.
Is facial recognition good enough?
It’s getting there.
In March 2017, NIST published the results of its Face Recognition of Non-Cooperative Subjects project. The perhaps unsurprising findings: facial recognition is hard to do, and none of the technologies “attained peak performance”. It went on to note that candidate alerts by systems and humans contain errors, and that the “overall rates of the hybrid machine-human system must be understood and planned for”.
In other words, false positives and misses will occur. To increase the accuracy of the hybrid machine-human system, perhaps the solution lies in employing those with “superior facial recognition skills”. These are “super-recognisers” – people with a superior ability to remember faces, who, according to a study by Bournemouth University, comprise about 2% of the population. Perhaps human intelligence might yet trump AI when it comes to faces after all.
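NIST’s warning about overall error rates can be made concrete with some base-rate arithmetic: when targets are rare in a crowd, even an accurate system hands its human reviewers a queue that is mostly false alarms. All the numbers below are illustrative assumptions, not FIVE results:

```python
# Back-of-the-envelope sketch of why the "overall rates of the hybrid
# machine-human system" matter: the alert queue composition at a venue.

def alert_stats(crowd, prevalence, hit_rate, false_alarm_rate):
    """Expected make-up of the alert queue a human reviewer must work."""
    targets = crowd * prevalence
    non_targets = crowd - targets
    true_alerts = targets * hit_rate            # persons of interest caught
    false_alerts = non_targets * false_alarm_rate  # innocent faces flagged
    precision = true_alerts / (true_alerts + false_alerts)
    return true_alerts, false_alerts, precision

# A 50,000-person stadium, 10 persons of interest, a system that catches
# 90% of them and falsely flags 1 in 1,000 passers-by (all assumed figures).
true_alerts, false_alerts, precision = alert_stats(
    crowd=50_000, prevalence=10 / 50_000,
    hit_rate=0.90, false_alarm_rate=0.001)
# ~9 true alerts against ~50 false ones: only ~15% of alerts are real,
# so the human stage -- ideally a super-recogniser -- dominates accuracy.
```

Under these assumed figures the machine does its job well, yet five out of six alerts are still wrong, which is exactly why NIST says the combined system’s rates must be “understood and planned for”.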