Security has to strike a balance

"Rich Baldry, a product manager based in our Vancouver offices, has put his pen to paper (actually that’s a lie, he’s used his fingers and a keyboard) and written the following guest blog post. Over to you Rich.."

Rich Baldry
IBM’s X-Force recently revealed that 55% of all vulnerabilities discovered in software last year were in web applications.

Not a big surprise – web apps are complex, tying together a smorgasbord of software components with an all-you-can-eat buffet of different programming languages.

Web apps are also by definition public-facing. They are open to anyone with a sense of curiosity, or perhaps more malicious motives, to prod, probe and poke.

More worryingly, IBM also revealed that 74% of those vulnerabilities have still not been fixed. Many of these can be abused by hackers to penetrate web sites and either extract confidential information or leave behind malware or links to virus distribution sites.

The failure of websites to keep their own house in order leaves us all vulnerable. The unpredictability of vulnerability discovery and exploitation means that we can never really be sure which sites to trust.

So how can we protect ourselves?

Airport security

Events at Google last weekend exposed an unusual problem where a programmer error caused an excess of security: a mistaken entry in the company’s malware blocklist briefly flagged every single search result as potentially harmful. It is a standing joke that if you want a really secure computer system, just don’t connect it to the internet, but I never expected Google to be a proponent of this.

Security has to strike a balance. Take cross-site scripting (XSS) attacks, for example. They are a particularly nasty technique in which the bad guys abuse our trust in known web sites to run code of their choosing in our browsers.
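To make the mechanism concrete, here is a minimal sketch (in Python with Flask, chosen purely for illustration – the route and parameter names are hypothetical, not from any real site) of the kind of programming slip that creates a reflected XSS hole, alongside the escaping fix:

```python
# Hypothetical sketch of a reflected XSS bug and its fix (Python/Flask).
from flask import Flask, request
from markupsafe import escape

app = Flask(__name__)

@app.route("/search")
def search_vulnerable():
    # BUG: user input is echoed straight into the page. A crafted link like
    #   /search?q=<script>steal(document.cookie)</script>
    # runs the attacker's script with the trusted site's credentials.
    q = request.args.get("q", "")
    return f"<h1>Results for {q}</h1>"

@app.route("/search-fixed")
def search_fixed():
    # FIX: escape user input before embedding it in HTML, so the payload
    # is rendered as harmless text instead of executed as script.
    q = request.args.get("q", "")
    return f"<h1>Results for {escape(q)}</h1>"
```

The crafted link appears to point at the trusted site, which is precisely why the attack works – the victim has no obvious reason to be suspicious.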

The popular ‘NoScript’ plugin for Firefox takes a very aggressive approach to this, which provides great protection. Unfortunately it also prevented me from winning $43 million in the provincial lottery last weekend, because it blocked the link from the web site’s payment page to my bank’s ‘Verified by Visa’ site.

Microsoft’s Internet Explorer 8 also provides XSS prevention, using more complex rules to avoid overblocking, but reports suggest it errs on the side of allowing potentially suspicious requests.
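As a rough illustration of the trade-off (this is not how either product is actually implemented – the function names and patterns below are assumptions made for the sketch), an aggressive NoScript-style policy denies anything not explicitly trusted, while a permissive IE8-style heuristic only steps in when a script-like parameter is reflected back in the page:

```python
# Hypothetical sketch contrasting two XSS-filter philosophies.
# Neither reflects the real NoScript or IE8 code; all names are illustrative.
import re

SCRIPT_PATTERN = re.compile(r"<\s*script|javascript:|on\w+\s*=", re.IGNORECASE)

def noscript_style_allow(request_origin: str, whitelist: set[str]) -> bool:
    # Aggressive: deny any cross-site request unless the origin is trusted.
    # Great protection, but it also blocks legitimate hand-offs such as a
    # payment page redirecting to a bank's verification site.
    return request_origin in whitelist

def ie8_style_allow(params: list[str], response_body: str) -> bool:
    # Permissive: let the request through, and only intervene when a
    # script-like parameter is reflected verbatim in the response.
    # Fewer false alarms, but cleverly encoded payloads may slip through.
    for p in params:
        if SCRIPT_PATTERN.search(p) and p in response_body:
            return False  # reflected script detected; block it
    return True
```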

Which is the correct approach? I suspect your answer would vary depending on whether you’ve just been the victim of identity theft or are simply trying to buy a last-minute lottery ticket online.

The boundaries of trust and mistrust, good and bad, real and fake are severely blurred by the internet and the way we experience it. The increasing complexity of web applications and the vulnerabilities in them mean that very few sites can be fully trusted. Databases of good and bad web sites just can’t keep up with the changes. Simple content rules or signatures will either overblock or underblock.

Web security needs to combine URL filtering with content analysis, behavioural prediction (reactive and proactive) and control of unnecessary content to even begin to be effective. Combining gateway and endpoint security is also vital to get a full picture of the threat.
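As a hedged sketch of how those layered verdicts might be combined (the signals, weights and threshold here are invented for illustration, not taken from any real product):

```python
# Hypothetical sketch of layered web filtering: each check alone is fallible,
# so the verdicts are combined. All names and numbers are illustrative.

def url_reputation(url: str, blocklist: set[str]) -> float:
    # Reactive: known-bad URL databases lag behind newly compromised sites.
    return 1.0 if url in blocklist else 0.0

def content_analysis(body: str) -> float:
    # Inspect the page itself, e.g. for obfuscated script or hidden iframes.
    suspicious = ("eval(", "unescape(", "<iframe", "document.write(")
    return min(1.0, sum(body.count(s) for s in suspicious) / 4)

def behavioural_score(redirect_count: int, spawns_downloads: bool) -> float:
    # Proactive: judge what the page does, not just what it contains.
    return min(1.0, 0.3 * redirect_count + (0.5 if spawns_downloads else 0.0))

def allow(url: str, body: str, redirects: int, downloads: bool,
          blocklist: set[str], threshold: float = 0.6) -> bool:
    # No single signal decides; the weighted combination is what sets the
    # balance between overblocking and underblocking.
    score = (0.5 * url_reputation(url, blocklist)
             + 0.3 * content_analysis(body)
             + 0.2 * behavioural_score(redirects, downloads))
    return score < threshold
```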

Any one of those techniques alone will always fail us in one way or another.

* Image source: Stephen Witherden’s Flickr photostream (Creative Commons 2.0)