The Defense Advanced Research Projects Agency (DARPA) is looking for a superhero who can take on one of the trickiest problems in computer security.
Human applicants need not apply.
The skunkworks team that brought you (amongst many, many other things) the Internet, Deep Web search engines, robot snakes, thought control, bionic exoskeletons and armed drones has set its sights on “making software safety the expert domain of machines.”
One of the fundamental challenges of cybersecurity is what’s known as the Fortification Principle; the economics are stacked in favour of an attacker because a defender must guard against every possible attack vector whilst an attacker only has to succeed once.
The patching problem is already significant – there aren’t enough bodies to go around, so bugs like Heartbleed and Shellshock can sit unobserved in critically important software for years.
According to DARPA, we’re “building [a] connected society on top of a computing infrastructure we haven’t learned to secure” and it’s about to get a lot worse thanks to the so-called Internet of Things (IoT).
As we add light bulbs, fridges, thermostats, cars and electricity grids to our global computer network, insecurity is “making its way into devices we can’t afford to doubt.”
DARPA’s answer to the Fortification Principle’s fundamental asymmetry is, unsurprisingly, automation and Artificial Intelligence (AI):
Today’s attackers have the upper hand due to the problematic economics of computer security. Attackers have the concrete and inexpensive task of finding a single flaw to break a system. Defenders on the other hand are required to anticipate and deny any possible flaw – a goal both difficult to measure and expensive to achieve. Only automation can upend these economics.
The robot-loving boffins want supercomputers that can analyse billions of lines of hitherto unseen code, find its most deeply hidden security flaws, and fix them without their creators having to so much as mop their AI’s sweatless brows.
We’re a long way from that point right now.
On the other hand, if we weren’t, then DARPA wouldn’t be getting its hands dirty – because that’s what DARPA does, trying to close the gap between where we are and where we need to go.
DARPA’s plan to speed up this particular journey is called the Cyber Grand Challenge – a series of competition events that started last year and will culminate in the world’s first all-computer Capture The Flag contest in 2016.
The competitors will duke it out in each event with no human involvement.
The tournament “where automated systems may take the first steps towards a defensible, connected future” will be held alongside the 2016 DEF CON Conference in Las Vegas, and the winning system will net its creators a prize of $2,000,000.
And what then?
The DARPA cybersecurity program manager who’s running the contest gave an interview last year to the New York Times in which he made an analogy with the progress of computer chess.
Deep Blue became the first computer to defeat a world chess champion in a six-game series in 1997, forty-seven years after Claude Shannon first outlined a plan for a competitive chess program.
If automated cybersecurity is on the same path then right now it’s still somewhere very near the start line.
So we’re a long way from fully autonomous, adaptive cyber-defense, but DARPA is doing no harm to its reputation as the organisation most likely to usher in our new robot overlords by accident.
Image of Giant evil robot destroying the city courtesy of Shutterstock.
5 comments on “DARPA’s plan to make software security “the domain of machines””
Regardless of what anyone else thinks, I believe that the heart, lungs, arteries, and muscles of our current electro-mechanico-digital society have very little need to do with “software” (as currently described in this article).
Unless the morons in charge decide to put remotely managed, rarely maintained and never parsed (i.e. never split, duplicated, compared/tested and then functionally re-aligned) software regimes in charge.
Yes, it will be much faster. But let’s just sneak back a few years and note that this is still true: Speed Kills.
Once the system is “fully functionalized”, i.e. managed digitally, there will never again be the opportunity for a human brain to say, “What the…!” and hit the abort button.
Evolution sux when it has passed you by.
I’ll be watching for the smoke signals from my tepee out in the wilderness.
Take human review out of code or make it impossible to read and you can hide all sorts of wonderful “features” in it.
@jimvs you are right. This step worries me. Self modifying code is never a good idea. Time to watch Terminator again, maybe this should be compulsory viewing for all those in charge?
Sounds exactly like Skynet!
I had to learn Formal Specification and Formal methods at university. They specify what software is supposed to do and define mathematical frameworks for proving whether it does or doesn’t do it.
Personally, I hated the subject, but if you can define functionally what software is supposed to do, and simply get a machine to do the hard work of making sure that it does, I can’t see that being a bad, or dangerous thing.
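The idea the commenter describes can be sketched in a few lines: state what the software is supposed to do as a checkable property, then have a machine verify it mechanically. This is only an illustrative sketch in Python – real formal methods (tools such as Z, Coq or Dafny) prove a property for *all* possible inputs, whereas this example just checks a bounded set; the function names `my_sort` and `meets_spec` are invented for illustration.

```python
# Sketch of specification-driven checking: the spec says WHAT the output
# must look like, independent of HOW the implementation produces it.
from collections import Counter
from itertools import product

def my_sort(xs):
    """The implementation under test - here simply Python's built-in sort."""
    return sorted(xs)

def meets_spec(xs, ys):
    """Specification: ys is ordered and is a permutation of xs."""
    ordered = all(ys[i] <= ys[i + 1] for i in range(len(ys) - 1))
    permutation = Counter(xs) == Counter(ys)
    return ordered and permutation

# Mechanically check every input list of length <= 3 over a small domain.
# A formal-methods tool would instead prove the property for all inputs.
for n in range(4):
    for xs in product(range(3), repeat=n):
        assert meets_spec(list(xs), my_sort(list(xs)))

print("specification holds on all checked inputs")
```

The point is the division of labour: a human writes the (hard to get wrong) specification, and the machine does the tedious work of confirming the implementation honours it – which is roughly what DARPA wants automated at scale.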
There’s software that controls lots of dangerous things already – like power stations, aircraft and petrol engines.
Wouldn’t you rather that they weren’t hackable?