A spokesman confirmed a report in Dubai’s 7 Days newspaper that police have developed software that will connect a Glass wearer to a database of wanted people.
Once the device matches a suspect with a face print in the database, it alerts the officer wearing the gadget.
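The matching step described here – compare a captured face against a database of face prints and alert on a hit – can be sketched in a few lines. Everything below is hypothetical: the identities, embedding vectors, and threshold are toy values, and a real system would derive face prints from a trained recognition model rather than hand-written lists.

```python
import math

# Hypothetical face "prints": fixed-length embedding vectors.
# A real system would produce these from a neural network.
DATABASE = {
    "suspect_a": [0.1, 0.9, 0.3],
    "suspect_b": [0.8, 0.2, 0.5],
}

MATCH_THRESHOLD = 0.25  # assumed distance cutoff; real thresholds are tuned


def match_face(probe):
    """Return the closest database identity if within threshold, else None."""
    best_id, best_dist = None, float("inf")
    for identity, face_print in DATABASE.items():
        dist = math.dist(probe, face_print)  # Euclidean distance
        if dist < best_dist:
            best_id, best_dist = identity, dist
    return best_id if best_dist <= MATCH_THRESHOLD else None


# A probe close to a stored print raises an alert; an unknown face does not.
print(match_face([0.12, 0.88, 0.31]))  # suspect_a
print(match_face([0.5, 0.5, 0.5]))     # None
```

The essential design point is the threshold: too loose and innocent passers-by trigger alerts, too strict and genuine suspects walk past unnoticed – the same false-positive/false-negative trade-off any deployed recognition system has to tune.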
In Phase 1 of the project, the devices will be used to fight traffic violations and track vehicles suspected of involvement in offences, the spokesman said.
Detectives will get their crack at the technology in phase 2 of the rollout.
Well, it will be handy for keeping their gun hands free. But is Dubai experiencing a crime wave that would justify this lavish gadget investment?
If it seems a lavish expense for a place with such a low rate of (reported) crime, bear in mind that last year Dubai said it would outfit its police with $400,000 Lamborghini sports cars for use at major tourist sites – in keeping, according to its deputy police chief, with Dubai’s image of unparalleled luxury.
For its part, Google has consistently said that it won’t add new face recognition features to its services unless it has strong privacy protections in place.
In May 2013, Google announced that it wouldn’t allow developers to distribute facial recognition software through its Glassware app store for Glass.
That hasn’t stopped third parties, though.
It’s just meant that software has had to be “sideloaded” onto the device: i.e., installed manually with developer tools rather than through official channels such as Google Play (the former Android Market).
Dubai isn’t the first city to outfit Robocops.
The New York Police Department, for example, in February 2014 began testing Glass for use in investigations.
It’s all in keeping with the trend of US cities increasingly gobbling up data on residents through surveillance technology: gunshot-detection sensors, license plate readers, data-mining of social media posts for signs of criminal activity, tracking of toll payments when drivers use electronic passes, and even at least one police purchase of a drone in Texas.
Much of this has been done in spite of concerns about violations of the Wiretap Act and the Fourth Amendment’s protection against unreasonable search.
At any rate, concerns about facial recognition extend beyond Glass.
The US National Security Agency (NSA), for one, has been collecting millions of images from the web and storing them in a database that experts say can be mined by facial recognition software for identifying surveillance targets.
A freedom of information request in April revealed that the US Federal Bureau of Investigation (FBI) is building a massive facial recognition database that could contain as many as 52 million images by 2015, including 4.3 million non-criminal images.
Where is this all heading?
If you want to get really paranoid, go read The Atlantic’s write-up on facial-recognition technology that, at least in lab conditions, is better than humans at reading facial movements – and hence at deciphering when somebody is telling even a white lie.
Then, read George Orwell’s definition of facecrime from 1984:
It was terribly dangerous to let your thoughts wander when you were in any public place or within range of a telescreen. The smallest thing could give you away. A nervous tic, an unconscious look of anxiety, a habit of muttering to yourself – anything that carried with it the suggestion of abnormality, of having something to hide. In any case, to wear an improper expression on your face (to look incredulous when a victory was announced, for example) was itself a punishable offence. There was even a word for it in Newspeak: facecrime, it was called.
Facial recognition technology can be used to develop wonderful, human-positive applications.
Developers have, for example, described applications that will help Alzheimer’s patients identify the faces of loved ones.
Should we be hopeful, paranoid, or both?