Police bodycams get tech that can identify “faces and people”

Body cameras aimed at police and other “public safety users” are being outfitted with new abilities to identify stolen bicycles, missing children and other “objects of interest”.

On Monday, Motorola Solutions – formerly Motorola Inc. – announced a new partnership with machine learning startup Neurala to use object recognition AI in its Si500 body cameras.

Yesterday, in the face of suggestions to the contrary, Motorola said that no, what was featured in its news release wasn’t face recognition. Rather, it was object recognition.

In a statement sent to Verge reporter Russell Brandom, a Motorola spokesperson stressed that the child featured in a released photo was found, after he went missing, by searching on a description of what he was wearing and the color of his hair, not by face recognition.

The Motorola statement published by Brandom doesn’t rule out the use of face recognition (in fact it states specifically that Neurala’s technology can be taught to identify “faces and people”) but makes it clear that face recognition wasn’t used in the company’s announcement:

One of the reasons we chose Neurala to explore public safety applications of AI is because they provide a flexible AI engine that we can teach to identify various things – including faces and people. For the “missing person” use case described in the news release, it’s focused on finding a person matching a specific description (e.g., blue shirt, black hair) rather than recognizing a specific face.
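Motorola hasn’t published how that matching actually works, but the distinction it’s drawing – searching on a description rather than on a face – can be sketched as a simple attribute filter. Everything in the snippet below (the attribute names, the shape of the detector output) is hypothetical, not Motorola or Neurala code:

```python
# Hypothetical sketch of description-based person search.
# A detector is assumed to emit attribute labels for each detected person;
# matching a "missing person" query is then plain attribute comparison --
# no face data is involved at any point.

def matches_description(detected_attributes: dict, query: dict) -> bool:
    """Return True if every queried attribute matches the detection."""
    return all(detected_attributes.get(key) == value
               for key, value in query.items())

# The example query from Motorola's statement: blue shirt, black hair.
query = {"shirt_color": "blue", "hair_color": "black"}

detections = [
    {"shirt_color": "red", "hair_color": "black"},
    {"shirt_color": "blue", "hair_color": "black"},  # would flag a match
]

hits = [d for d in detections if matches_description(d, query)]
print(len(hits))  # 1
```

Face recognition, by contrast, would compare a biometric template of the face itself – which is exactly the capability the statement says the engine *can* be taught, just not the one used here.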

The technology behind the announcement is a deep learning-based neural network that Neurala says allows network edge devices to learn in a streamlined fashion, without the need for servers running in the cloud. The company’s founder is Massimiliano Versace, a neuroscientist who described his patent-pending image recognition and machine learning technology in a 2010 article for IEEE Spectrum.

In that article, Versace describes how the AI consists of a chip that mimics the brain’s neurons and synapses. That’s what’s got to be done to push AI beyond the severe limitations of Boolean logic and get it to approach the computing power of, say, a rat, or rather, a rat’s brain…

…whose networks of millions of neurons and billions of synapses are distributed across many brain areas – a brain that weighs no more than 2 grams and can operate on the power budget of a Christmas-tree bulb.

How to mimic and package that kind of processing power into a tiny bodycam? By setting up tiny constellations of processors to do the work done by different parts of the brain:

…computation that can be divided up between hardware that processes like the body of a neuron and hardware that processes the way dendrites and axons do.

Versace claims his research shows that AIs can learn using much less code in such a structure. Less code means less processing, which means smaller computers using less power to perform sophisticated tasks. That includes recognizing an image a tiny camera has been instructed to search for.
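Neurala hasn’t detailed its architecture, but the basic idea of lightweight on-device learning – teach from a handful of examples, then classify with cheap arithmetic, no cloud round-trip – can be illustrated with a toy nearest-centroid classifier. This is an illustration of the principle only, not Neurala’s algorithm, and the feature vectors stand in for whatever embedding the camera’s network would produce:

```python
# Toy illustration of on-device "learning" with no cloud server (not
# Neurala's algorithm). Each class is stored as the running mean of its
# example feature vectors; classifying a new frame costs one distance
# computation per known class -- cheap enough, in principle, for
# low-power edge hardware.
import math

class NearestCentroid:
    def __init__(self):
        self.centroids = {}  # label -> (sum_vector, example_count)

    def teach(self, label, features):
        """Add one labeled example; updates the class centroid in place."""
        total, count = self.centroids.get(label, ([0.0] * len(features), 0))
        self.centroids[label] = (
            [t + f for t, f in zip(total, features)], count + 1)

    def classify(self, features):
        """Return the label whose centroid is closest to the input."""
        best, best_dist = None, math.inf
        for label, (total, count) in self.centroids.items():
            centroid = [t / count for t in total]
            dist = math.dist(centroid, features)
            if dist < best_dist:
                best, best_dist = label, dist
        return best

cam = NearestCentroid()
cam.teach("bicycle", [0.9, 0.1])  # "taught" from single examples
cam.teach("person", [0.1, 0.9])
print(cam.classify([0.8, 0.2]))  # bicycle
```

The point of the sketch is the cost profile: teaching is a vector addition and classifying is a few subtractions and squares, which is the sort of budget Versace’s “less code, less processing” argument is about.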

Motorola Solutions Chief Technology Officer Paul Steinberg:

This can unlock new applications for public safety users. In the case of a missing child, imagine if the parent showed the child’s photo to a nearby police officer on patrol. The officer’s body-worn camera sees the photo, the AI engine ‘learns’ what the child looks like and deploys an engine to the body-worn cameras of nearby officers, quickly creating a team searching for the child.

Of course, facial recognition technology concerns privacy advocates, for good reason, particularly given how ga-ga law enforcement is about it. For years, the trend has been for US cities to gobble up data on residents using surveillance technology such as gunshot-detection sensors, license plate readers, data-mining of social media posts for criminal activity, tracking of toll payments when drivers use electronic passes, and even at least one police purchase of a drone in Texas.

Much of this has been done in spite of concerns about violations of the Wiretap Act and the Fourth Amendment’s protection against unreasonable search.

And while Versace has been quick to play down such concerns, it’s certainly not as if the technology can’t do it or that police haven’t used facial recognition in the past. In 2014, at the height of the Google Glass privacy kerfuffle, Dubai police added facial recognition to the gadgets, and New York police began testing Glass for use in investigations.

In a discussion with Silicon Angle, Versace noted that Neurala’s software doesn’t record any data or images it scans. Rather, it only looks for a matching face, which he said makes privacy violations simply impossible:

We truly believe that the many benefits made possible by this technology, including the ability to more easily find a missing child, will alleviate any misplaced concerns about privacy.

At any rate, it seems that smart bodycams are still a way off. The first goal is to build a prototype that “allows for real-time learning for a person of interest search.”

Good. That gives us plenty of time to get paranoid about what could be done with this muscular technology besides find lost kids.

For the dystopian world view, I’d suggest rereading The Atlantic’s 2014 write-up about facial-recognition technology that’s better at reading human facial movements – and hence better at deciphering when somebody is telling even a white lie – than humans themselves, at least in lab conditions. And that, mind you, was state of the art three years ago.