Could deliberately adding security bugs make software more secure?

The best way to defend against software flaws is to find them before the attackers do.

This is the unshakeable security orthodoxy challenged by a radical new study from researchers at New York University. The study argues that a better approach might be to fill software with so many false flaws that black hats get bogged down working out which ones are real and which aren’t.

Granted, it’s an idea likely to earn you a few incredulous stares if suggested at the water cooler, but let’s do it the justice of trying to explain the concept.

The authors’ summary is disarmingly simple:

Rather than eliminating bugs, we instead add large numbers of bugs that are provably (but not obviously) non-exploitable.

By carefully constraining the conditions under which these bugs manifest and the effects they have on the program, we can ensure that chaff bugs are non-exploitable and will only, at worst, crash the program.

Each of these bugs is called a ‘chaff’, presumably after the British WW2 tactic of the same name: confusing German radar by filling the sky with clouds of aluminium strips.

Arguably, it’s a distant cousin of the security-by-obscurity principle, which holds that something can be made more secure by embedding a secret design element that only the defenders know about.

In the case of software flaws and aluminium chaff clouds, the defenders know where and what they are but the attackers don’t. As long as that holds true, the theory goes, the enemy is at a disadvantage.

The concept has its origins in something called LAVA, co-developed by one of the study’s authors to inject flaws into C/C++ software to test the effectiveness of the automated flaw-finding tools widely used by developers.

Of course, attackers also hunt for flaws, which is why the idea of deliberately putting flaws into software to consume their resources must have seemed like a logical jump.

To date, the researchers have managed to inject thousands of non-exploitable flaws into real software using a prototype setup, which shows that the tricky engineering of adding flaws that don’t break programs is at least possible.

Good idea, mad idea?

Now to the matter of whether this idea would work in what humans loosely refer to as the real world.

The standout objection is that the concept is a non-starter for the growing volume of the world’s software that is open source (secret code and open source are incompatible ideas).

The next biggie is that even applied to proprietary software, adding bogus flaws would tie down legitimate researchers who take the time to find and report serious security vulnerabilities.

While it’s true that attackers would also be bogged down, adding the same layer of inconvenience to the job of the good guys might negate this benefit.

The worst-case scenario is that attackers eventually fine-tune their flaw-hunting rigs to spot the bogus code and you end up back at square one. At that point, injecting fresh chaff to stay ahead would become a full-time job.

Nor would the presence of chaff be hard for anyone to detect – all they’d have to do is compare the size of a new version with an old one and make an educated guess about how much was new features and how much was chaff.

More likely, developers would run a mile for fear that the process of injecting chaff would itself risk creating new and possibly real flaws, even if those amounted only to denial-of-service conditions caused by a crashing program.

In the end, intriguing though the chaff concept is, the best way to cope with security flaws remains the proven method – find them, then efficiently mitigate or patch them before the attackers find them too.