Were he still alive, **Alan Mathison Turing OBE FRS** would be 100 years old today.

Turing is probably best known to the public for his cryptanalytical derring-do at Bletchley Park, UK, during the Second World War.

Turing took on the German navy's tricky Enigma cipher machine, which was a more secure variant of the devices used elsewhere in the Nazi military. Turing won.

But Turing's influential brilliance was evident even before the war started.

As a freshly-minted graduate of King's College, Cambridge, Turing took on a famous mathematical problem about computability.

Consider, for example, the mathematical formulae known as Diophantine equations. These are polynomials, meaning that their variables are only ever combined by addition and multiplication (which includes raising to whole-number powers), as in `x + y = 1` or `x^2 + y^2 = z^2`.

Additionally - which you might think would simplify things rather a lot - Diophantine variables can only ever be whole numbers like `-2` or `1`. Numbers like `0.5` or π aren't allowed.

We can easily prove that some Diophantine equations can be solved. For example, if `x=0` and `y=1`, then `x + y = 1`. Done! And if `x=3`, `y=4` and `z=5`, then `x^2 + y^2 = z^2`.
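If you want to convince yourself of this by machine, a brute-force search will do. This is just an illustration, not a proof: it simply tries every combination of small positive whole numbers and keeps the ones that satisfy `x^2 + y^2 = z^2`.

```python
# Brute-force search for positive whole-number solutions of x^2 + y^2 = z^2.
# Purely illustrative: it only checks a small range of values.

def pythagorean_triples(limit):
    """Return all (x, y, z) with 1 <= x <= y < z <= limit and x^2 + y^2 == z^2."""
    return [(x, y, z)
            for x in range(1, limit + 1)
            for y in range(x, limit + 1)
            for z in range(y + 1, limit + 1)
            if x * x + y * y == z * z]

print(pythagorean_triples(15))
# The familiar (3, 4, 5) triple shows up, along with (5, 12, 13) and friends.
```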

We can also prove that some Diophantine equations cannot be solved. For example, there isn't any way to solve `x^3 + y^3 = z^3` using positive whole numbers, even though we can trot out solutions when `x`, `y` and `z` are squared, not cubed.
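The same sort of brute-force search comes up empty-handed for cubes. Of course, checking a small range rules out nothing in general; it took Fermat, Euler, Wiles and friends to settle the matter for *all* whole numbers. But the contrast with the squared case is suggestive:

```python
# Search for positive whole-number solutions of x^3 + y^3 = z^3.
# Finding none here proves nothing by itself - it just illustrates the claim.

def cube_solutions(limit):
    """Return all (x, y, z) with 1 <= x <= y < z <= limit and x^3 + y^3 == z^3."""
    return [(x, y, z)
            for x in range(1, limit + 1)
            for y in range(x, limit + 1)
            for z in range(y + 1, limit + 1)
            if x**3 + y**3 == z**3]

print(cube_solutions(100))  # an empty list - no solutions in this range
```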

Note, as an aside, that the general result for `x^n + y^n = z^n` - that there are no positive whole-number solutions for any `n > 2` - didn't come easily.

It was claimed, very cheekily, as a theorem by Pierre de Fermat in 1637, but only actually proved in 1995 by British mathematician Andrew Wiles. But it was solved in the end.

So there are Diophantines we can solve, and ones we cannot. Known knowns, and known unknowns, in the vernacular of the twenty-first century.

But can we decide - regardless of how long it might take - which is which? Or will there always be unknown knowns and unknown unknowns?

Turing knocked this problem on the head in 1936 in a famous paper entitled **On Computable Numbers, with an Application to the Entscheidungsproblem**.

(*Entscheidung* is German for *decision*. It was the word used by German mathematician David Hilbert for the general challenge of deciding, by a fixed procedure, questions such as the solvability of Diophantine equations.)

In particular, Turing showed that there *are* uncomputable numbers, and thus, almost as a casual side-effect, that there will always be unknown knowns and unknown unknowns amongst the Diophantines.

**There are things which cannot be worked out - and which we cannot work out whether we can work them out.**

In computer science terms, this result is as stark as you might like.

There are algorithms which will run for ever, never producing a usable result, and algorithms which will conclude successfully.

But we can't reliably compute in advance which is which - so we can't write a program which will correctly determine the behaviour of all other programs.

This is known, rather neatly, as the **halting problem**, and it is a vital result to keep in mind when you deal with computer security.
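The contradiction at the heart of the halting problem can be sketched in a few lines of Python. Suppose someone hands you a function `claimed_halts(prog)` that supposedly decides whether `prog()` will ever finish. You can always build a "troublemaker" program that does the opposite of whatever the decider predicts about it, so the decider must be wrong somewhere. (The names here are our own, and the demo uses a deliberately hopeless decider that always answers "no", so that the demonstration itself terminates.)

```python
def make_troublemaker(claimed_halts):
    """Given any claimed halting-decider, build a program it gets wrong."""
    def troublemaker():
        if claimed_halts(troublemaker):
            while True:      # decider said "halts", so loop forever
                pass
        # decider said "loops forever", so just halt
    return troublemaker

def always_says_no(prog):
    """A hopeless 'decider' that predicts nothing ever halts."""
    return False

t = make_troublemaker(always_says_no)
t()  # returns immediately - so always_says_no was wrong about t
```

Swap in any other candidate decider and the same construction defeats it: if it says the troublemaker halts, the troublemaker loops; if it says the troublemaker loops, the troublemaker halts.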

For example, it means that you can't write an anti-virus program which never needs updating. All those criticisms about the imperfection of anti-virus are true!

But the halting problem applies to everyone.

Not just to anti-virus, but to code analysers, behaviour blockers, intrusion detection systems, network flow correlators, and so forth.

No security solution can be perfect, because no solution can decide all the answers.

That's why defence in depth is really important, and why you should run a mile from any security vendor who still makes claims like "never needs updating."

(By the way, Turing's result can be turned around to make it a bit more optimistic: you can't write a virus which will be undetectable by all possible anti-virus programs. So the good guys always win in the end.)

Alan Turing, of course, worked all this stuff out before digital computers existed. No wonder he went on to be a pioneer in inventing them!

Based on this theorem ...... No End.... change is constant... good guys become bad... bad guys become good..... So therefore the statement claiming that the Good Guys always win in the end... is false.... but business as usual...

The halting problem doesn't say that good guys have to _become_ bad...

But you are right that the crooks might choose to say that they always win, because they define their "finishing line" as the point when they're ahead.

It is an arms race, and whoever has played the most recent card will inevitably have had the most recent chance to win, since they now know how the last defence/attack worked. (That doesn't guarantee victory, of course.)

It's also my article, so in this case _I_ get to position the finishing line :-)

Lol! Funny but very wrong conclusions.

Don't know how much you care, but you spelled the tag version of "Entscheidungsproblem" without the 's' toward the beginning.

I care enough to fix it - thanks!