An API that will enable developers to program facial recognition into Google Glass apps is due to be released this week by Lambda Labs, a San Francisco startup.
Company co-founder Stephen Balaban said that the API will be available to any interested developer, according to TechCrunch’s Sarah Perez.
TechCrunch says that Lambda Labs’ facial recognition API went into beta last year and is now in use by 1,000 developers, including several major international firms.
The API is now seeing 5 million calls per month and is growing at 15 percent month-over-month, Perez writes, with Lambda Labs now on the brink of releasing a version of the API that will recognize faces and objects in Google Glass apps.
In the upcoming Glass version, Balaban says, the technology will enable apps such as “remember this face”, “find your friends in a crowd”, “networking event interest matching”, “intelligent contact books”, and more.
Searching Twitter for #ihackglass reveals more detailed descriptions of the apps that developers are planning to create.
Those apps cover a wide spectrum, many laudable: helping those with Alzheimer’s to recognize loved ones, filling in the blanks for people who just can’t seem to remember names, automatically snapping your children’s photos when they smile at you, or building an opt-in facial-recognition social network, for example.
A sampling:
West Jones (@westHjones): "I will use @LambdaAPI to integrate with existing medical records and provide a tool for doctors to access patient files & info #ihackglass"

Jamie Houston (@silvyn), 24 May: "I will use @LambdaAPI to build a frequent customer option for http://www.smartstubs.com (for example: checking in for a bus) #ihackglass"

Jeffrey Yeung (@jbyeung): "I will use @LambdaAPI to build alzheimer's pt assistant to recognize loved ones so they can be cared for #ihackglass (have glass soon)"

Russell Holly (@russellholly), 24 May: "I will use @LambdaAPI to build digital nametags, since I can't remember a name to save my life. #ihackglass"

Nathan Waters (@nathanwaters), 23 May: "Opt-in facial recognition social network. Imagine walking down the street, knowing the people around you and meeting cool peeps #ihackglass"
This last app is notable for mentioning the term “opt-in”: a concept that might help soothe a hornet’s nest of privacy angst about Google Glass, the discreet, photo-snapping/video-grabbing, internet-enabled eyeglasses that have already earned their wearers the sobriquet of “glassholes.”
As it is, Lambda Labs’ announcement comes fast on the heels of the US Congress having sent Google a list of pointed questions about just what the company plans to do about privacy in Glass, which is still under development.
Some have pointed out that those worried about privacy might not be aware that Glass apps that conduct facial recognition won’t do so in real time.
Perez writes:
"...you couldn’t just walk around automatically recognizing people you see through Glass. The way Google’s Mirror API works right now is that you first have to snap a photo, send it to the developer’s servers, then get the notification back. The lag time on that would be several seconds at least, and would depend on how fast you could take a photo and share it."
But just because potentially creepy stalker types can’t instantly recognize those they look at, take photos/videos of whomever they choose, and search for their subjects on, for example, Facebook, doesn’t mean they won’t ever be able to do just that.
As Balaban puts it:
"There is nothing in the Glass Terms of Service that explicitly prevents us from doing this. However, there is a risk that Google may change the ToS in an attempt to stop us from providing this functionality... This is the first face recognition toolkit for Glass, so we’re just not sure how Google, or the privacy caucus, will react."
Steve Lee, Google’s director of product management for Google Glass, has put out a statement saying that privacy protection is coming before facial recognition debuts in Glass:
"We’ve consistently said that we won’t add new face recognition features to our services unless we have strong privacy protections in place."
But as pointed out by CBC News’ Dan Misener, in the case of Lambda Labs, it’s the third-party developers building on the Lambda Labs platform who’ll be offering face recognition, not Google, per se.
Misener asks who, then, is responsible for protecting privacy: Google, facial recognition service providers such as Lambda Labs, or the third-party app developers who build on the platforms provided by Lambda Labs and others?
Balaban told Misener that Lambda Labs, for its part, will take on the responsibility of controlling facial recognition data, and thus of shepherding privacy protection, via voluntary opt-out:
"And in the beginning, we will be creating an opt-out privacy protection, to make sure that people who don't want to be recognized by this have an ability so their face will never be linked with any social profiles online."
Opt-out? Really? That strikes me as offering weak privacy protection.
Opt-out means that all those who are concerned about privacy will have to research the multiple developers behind face recognition technologies and opt out with each one.
Misener puts it this way:
"Lambda Labs isn't the only company working in this space. And they're not a consumer-facing company; they provide toolkits for developers. I wonder how many consumers will know about Lambda Labs at all, let alone their opt-out setting."
I asked Lambda Labs why it wasn’t instead putting in place opt-in for facial recognition, as at least some app developers have mentioned. The company hadn’t responded by the time this article was published.
Lambda Labs, set an example. Don’t force us to sniff out hidden-away facial recognition providers and their opt-out requirements.
Step into the light. Let us, in turn, step forward if in fact we want Glass wearers to use your platform to track us down.
Don’t make us jump through hoops if we don’t.
Wonder if law enforcement will use it to identify previous offenders to help prevent crime. Parents could use this information in areas like parks to identify registered sex offenders.
Equally likely: corrupt law enforcement agents use this technology to further harass victims. People who have enraged criminals or abusive partners find it harder to escape. People using social engineering to talk their way into buildings now have more targeted information to exploit trust (they could sit near a building and observe, if facial recognition isn't instant: oh look, Bob Guyman likes to hold the door open for people, "Thanks Bob. How're the kids?").
I'm not sure this technology itself is good or bad, but it certainly enhances both the good and bad things people are capable of. The real trick is to augment the good and temper the bad. I'm not sure we have the will to really do that.
People with similar faces could be mis-identified: hilarious, until you find out your face is remarkably close to that of a sex offender, and someone is using "Vigilante V0.6", which flags you up as a target.
I have several look-alikes; I once even shared a hairdresser with one. She couldn't tell us apart until we spoke. Would AI facial recognition be guaranteed to do any better? If not, there's clearly room for major problems.
"creepy stalker types … instantly recognize those they look at, take photos/videos of whomever they choose, and search for their subjects on, for example, Facebook" – I think knowing that the person stalking you was frequently relaying his (or her) field of view back to some independent image-analysis service should make you feel safer, actually.