“Faception” software claims it can spot terrorists, pedophiles, great poker players

In April 2008, Sophos announced ground-breaking technology: the use of facial recognition to identify and stop virus writers even before they managed to write malware.

Sophos solicited readers’ images of facial obfuscation with which to train the technology because, well, of COURSE it was a joke. Happy April Fools’ Day!

What comes next, however, is not a joke. It is, in fact, enough to wipe that grin off our faces.

A “facial personality profiling” company has dragged the Victorian affection for physiognomy out of the mothballs and packaged it in an application called Faception.

The Israeli startup says it can take one look at you and recognize facial traits undetectable to the human eye: traits that help to identify whether you’ve got the face of an expert poker player, a genius, an academic, a pedophile or a terrorist.

The startup sees great potential in machine learning to detect the bad guys, claiming that it’s built 15 classifiers to evaluate certain traits with 80% accuracy.

Eighty percent? Is that really good enough to arrest somebody under suspicion of terrorism or pedophilia? As in, “Excuse me, you’d best come with us: your philtrum has a suspicious shape to it, and we’re averse to the symmetry of your lips.”
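Worse, “80% accuracy” sounds far better than it is when the trait you’re screening for is rare. A back-of-the-envelope base-rate check makes the point; the figures below (80% sensitivity and specificity, 100 actual terrorists in a city of 1,000,000) are illustrative assumptions, not Faception’s numbers:

```python
# Hypothetical base-rate sketch: all numbers here are invented for
# illustration, not taken from Faception's claims.
population = 1_000_000
actual_positives = 100                       # people the system should flag
actual_negatives = population - actual_positives

sensitivity = 0.80   # chance a real positive is correctly flagged
specificity = 0.80   # chance an innocent person is correctly cleared

true_positives = sensitivity * actual_positives           # 80 people
false_positives = (1 - specificity) * actual_negatives    # ~199,980 people

# Of everyone the system flags, what fraction is actually a real positive?
precision = true_positives / (true_positives + false_positives)
print(f"{precision:.4%}")   # well under 0.1%: thousands of innocents per real hit
```

Under those assumptions, roughly 2,500 innocent people get flagged for every genuine hit, which is why accuracy alone tells you almost nothing about a rare-event screen.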

The Washington Post quotes Faception chief executive Shai Gilboa:

We understand the human much better than other humans understand each other. Our personality is determined by our DNA and reflected in our face. It’s a kind of signal.

Gilboa, who told the newspaper that he’s also the company’s chief ethics officer, said the company would never make public the classifiers that supposedly predict negative traits.

But making it available to government agencies is apparently A-OK: Faception has reportedly signed a contract with a homeland security agency in the US to help identify terrorists.

In fact, according to The Mirror, the company claims its technology correctly identified 9 of the 11 Paris massacre jihadists as terrorists, without being given any information about their involvement.

The company also recently demonstrated its technology at a poker tournament. To predict who would wind up as finalists, it analyzed photos of 50 players, comparing them with a database of professional poker players.

Faception placed its money, so to speak, on four players. Two of them wound up among the event’s three finalists.

If you’re wondering, well, maybe that database contained photos of some of the players at the tournament, making that prediction little more than a “Duh,” you may be onto something.

Pedro Domingos, a professor of computer science at the University of Washington and author of “The Master Algorithm,” brought up an example of how using artificial intelligence systems to draw conclusions can be tricky. As he told The Washington Post, a colleague of his had trained a computer system to tell the difference between dogs and wolves.

It did great. It achieved nearly 100% accuracy. But as it turned out, the computer wasn’t sussing out barely perceptible canine distinctions. It was just looking for snow. All of the wolf photos featured snow in the background, whereas none of the dog pictures did.

A system, in other words, might come to the right conclusions for all the wrong reasons. Because of such limitations, Gilboa doesn’t envision Faception being used on its own to identify terrorists; rather, he imagines governments taking its findings into account alongside other sources.

But the use of “facial profiling” is still troubling to some. How do you approach this subject without falling foul of Godwin’s Law? The law holds that in any online discussion that goes on long enough, sooner or later, someone will compare someone or something to Hitler or Nazism.

Game over, conversation done, Hitler-bringer-upper declared the loser.

Well, here goes. Sorry, Mr. Godwin, but this isn’t hyperbole: the Nazis really did like to talk about the use of physiognomic features to determine one’s Jewishness or lack thereof.

Beyond the Third Reich, there are reasons why profiling should send shivers up our spines. Physiognomy has long been dismissed as a pseudoscience, along with its sibling pseudoscience of phrenology: the notion that you can determine personality, and perhaps morality, from the morphology of somebody’s skull.

You can see why it was so popular back in its heyday, during the first half of the 19th century, when parlors popped up with automated phrenology machines for self-diagnosis (in case you couldn’t figure out on your own if you were a nice guy or a mass murderer).

It’s such an appealing notion: that you can figure out somebody’s personality and morality by mapping out itty-bitty “brain organs” on, in and around the skull, or that you can detect “cunning, deceit, evasion, worldly wisdom, lying, trickery, guile, and sensuality,” all supposedly “characteristics of the Evil One,” because of a fleshy eyelid.

The Washington Post quoted Alexander Todorov, a Princeton psychology professor whose research includes facial perception, on the dubiousness of Faception’s physiognomy claims:

The evidence that there is accuracy in these judgments is extremely weak. Just when we thought that physiognomy ended 100 years ago. Oh, well.

Make no mistake about it: facial recognition is real.

It’s real enough to be used to strip porn stars of anonymity.

It’s real enough that Microsoft’s been training facial recognition apps to recognize our emotions.

But determining whether somebody’s a terrorist by facial traits?

We can only hope that most, if not all, government agencies don’t buy into this dubious form of profiling. It sounds like a slippery slope to me, but please do share your own thoughts about it below.