“Turing Test” allegedly defeated – is it time to welcome your robot overlords?

I’m sure you have heard of, and indeed at some time faced up to and solved, a CAPTCHA.

All over the web, you’ll see people telling you that CAPTCHA is a pun on “capture,” because it’s meant to catch out automated software, but that it actually stands for Completely Automated Turing Test for Telling Computers and Humans Apart.

That’s nonsense, of course, or else the acronym would be CATTTCHA, which would be a perfectly good play on words itself.

CAPTCHA is better expanded as Completely Automated Procedure for Telling Computers and Humans Apart.

Briefly put, a CAPTCHA falls a long way short of a real Turing Test, which sets much higher human-like behavioural standards on computers that attempt it.

The Turing Test, as you can probably guess, is named after British computer pioneer Alan Turing.

Turing proposed his now famous test back in a seminal paper published in 1950, entitled Computing Machinery and Intelligence.

The test was presented as a way of answering the question, “Can machines think?”

To bypass the complexity of defining “thinking,” and of deciding through philosophical argument that an entity was engaging in it, Turing proposed a practicable systematic alternative in the form of a test.

He based it on an imaginary contest called The Imitation Game.

A man and a woman are sitting in separate rooms, each in front of a teleprinter, so they can’t be seen or their tone of voice heard.

One of them is denoted by X and the other by Y; a questioner gets to interrogate them, directing each question at either X or Y.

That means he can group all of X’s answers together, and all of Y’s answers together; at the end, he has to work out who’s who.

But here’s the tricky part: the man must convince the questioner he’s the woman, and so must the woman. (You could do it the other way around, but one person is being themselves, and the other is trying to imitate someone they aren’t.)

The idea is that if the questioner can tell them apart, the man hasn’t played a convincing enough role.

Since the woman’s job is to convince the questioner that she is, indeed, female, thus exposing the man as a fraud, her best approach is to be as truthful and accurate as possible.

She is effectively on the questioner’s side, so misleading him won’t help.

It sounds like a parlour game – it might even have been a 1940s parlour game – but once you think about the sort of tactics the man would need to adopt, you can see where Turing was going.

Replace the man in the game with a computer, and see if the questioner can distinguish the computer from the woman. (Or from a man. This time the differentiation is not gender-based: it’s computer versus human.)
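If you like to see things reduced to code, here’s a minimal sketch of that machine version of the game in Python. It’s purely illustrative: the respond_human and respond_machine functions are hypothetical stand-ins for a person at a teleprinter and a chat program.

```python
import random

def turing_test(questions, respond_human, respond_machine):
    """One round of the machine version of the Imitation Game.

    The questioner only ever sees typed text, so the labels X and Y
    are assigned at random and all answers are grouped per label.
    """
    responders = {"X": respond_human, "Y": respond_machine}
    if random.random() < 0.5:  # coin toss: the machine may end up as X or Y
        responders = {"X": respond_machine, "Y": respond_human}

    transcript = {"X": [], "Y": []}
    for label, question in questions:        # e.g. ("X", "Write me a sonnet")
        answer = responders[label](question) # text in, text out, nothing else
        transcript[label].append((question, answer))

    # The questioner now studies transcript["X"] and transcript["Y"]
    # and must decide which label hides the computer.
    return transcript
```

If the questioner’s final guess is no better than a coin toss, then the machine has, in Turing’s terms, survived the test.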

Turing’s suggestion was that if you can’t tell the computer from the human, then you have as good as answered the question, “Can computers think?” with the word, “Yes.”

In other words, given the right sort of questions, the human participant would have to perform what we call “thinking” in order to answer.

So, if a computer could give sufficiently human-like answers, you’d have to concede it was “thinking,” too.

Clearly, to pass a proper Turing Test, a computer program would need a much broader set of skills than it would need merely to read a modern CAPTCHA.

Make no mistake: programming the sort of software that can read modern CAPTCHAs is a serious challenge in its own right.
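To give you a feel for why, here’s what a naive first attempt might look like in Python. It’s a sketch only: it assumes the Pillow and pytesseract libraries, plus a hypothetical captcha.png, and modern CAPTCHAs are distorted, noisy and squashed together precisely so that this sort of off-the-shelf OCR fails.

```python
# A naive attempt at machine-reading a CAPTCHA with off-the-shelf OCR.
# Assumes Pillow and pytesseract are installed and "captcha.png" exists.
from PIL import Image, ImageFilter, ImageOps
import pytesseract

img = Image.open("captcha.png").convert("L")      # greyscale
img = ImageOps.autocontrast(img)                  # boost contrast
img = img.filter(ImageFilter.MedianFilter(3))     # strip speckle noise
img = img.point(lambda p: 255 if p > 128 else 0)  # binarise to black and white

guess = pytesseract.image_to_string(img).strip()
print("OCR guess:", guess)   # usually wrong on a real, deliberately-warped CAPTCHA
```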

You might even decide to refer to a computer that could do it as “clever,” but it still wouldn’t be thinking.

Turing predicts the future

Interestingly, in the paper in which he introduced the Imitation Game, Turing estimated that by the year 2000, computers would be able to survive his eponymous test for five minutes at least 30% of the time.

→ Generally speaking, the longer the questioning goes on, the more likely the questioner is to tell the human and the computer apart, because he has more opportunities to catch the computer out. So the longer a computer can last, the more readily we should accept that it is “thinking.”

Furthermore, Turing guessed that his fin de siècle test-beating computers would need about 128MB (1Gbit) of memory to do the job.
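(For the record, Turing’s paper actually talks about a storage capacity of “about 10⁹” binary digits; a quick back-of-the-envelope conversion shows how that squares with the figure above.)

```python
# Turing (1950) estimated a storage capacity of about 10^9 binary digits.
bits = 10**9

print(bits / 8 / 10**6)   # 125.0  -> about 125MB in decimal megabytes
print(bits / 8 / 2**20)   # ~119.2 -> about 119MB in binary megabytes (MiB)

# Either way, the result sits close to the "128MB (1Gbit)" quoted above.
```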

He was a trifle optimistic, but nevertheless surprisingly close.

It actually took until 07 June 2014 for a serious claim to surface that a computer, or more precisely a program, had passed a Turing Test.

It happened in a contest organised by the University of Reading in England, and the “thinking software” was called Eugene Goostman.

Just how seriously the world of computer science will take the claim remains to be seen: Reading University’s machine intelligence experts are no strangers to controversy.

Indeed, the spokesman in Reading’s latest press release is none other than Professor Kevin Warwick, a media-savvy cyberneticist who promotes himself as the man who “became the world’s first Cyborg in a ground breaking set of scientific experiments.”

And University of Reading research fellow Mark Gasson proudly announced, in 2010, that he was the first human to infect himself with a computer virus.

→ What Gasson actually did, as far as we can see, was to inject himself with an RFID chip containing executable code that could, in theory, be considered an exploit against a vulnerable RFID reader, if Gasson were to find (or build) a vulnerable RFID reader to match his “infected” chip.

The Eugene Goostman software was developed in Saint Petersburg, Russia, by a team including Vladimir Veselov, an American born in Russia, and Eugene Demchenko, a Russian born in Ukraine.

This year’s competition took place, fittingly if slightly sadly, on the 60th anniversary of Turing’s death.

Eugene, reports the University of Reading, tricked 33% of the judges into thinking he was human in a series of five-minute conversations, nudging just past the 30% rate that Turing predicted.

Fans of TV sci-fi shows will enjoy the fact that one of the judges was Robert Llewellyn, the actor who played the intelligent robot Kryten in the cult comedy series Red Dwarf.

Is this the dawn of the robot overlord?

Will 07 June 2014 become, as one of my Naked Security colleagues joked (at least, I assume he was joking), the day we first welcomed our robot overlords?

I’m saying, “No.”

One trick the programmers used was to make Eugene a 13-year-old boy, which almost certainly gave them much more leeway for “believable mistakes” than if they had simulated a person of adult age.

As Veselov pointed out:

Eugene was 'born' in 2001. Our main idea was that he can claim that he knows anything, but his age also makes it perfectly reasonable that he doesn't know everything. We spent a lot of time developing a character with a believable personality.

As Turing Tests go, this one feels a bit more like getting a learner’s permit for a moped than qualifying for your unrestricted car licence.

Eugene has a few years to go before he can do that.

So Naked Security’s message to our new robot overlord is, “Stop showing off on the internet and go and tidy your bedroom!”

That’s what it told me to say, anyway.