If you’re in charge of IT security, keeping users safe on the web is one of the biggest problems you face. But there are some outdated notions about threats that can get in the way of effective security.
Protecting users on the web requires you to think about all the ways users access it, and the different weapons cybercriminals have in their arsenals for getting around traditional anti-virus security.
To catch the bad stuff, keep your users productive, and cut down on the amount of time you spend cleaning up compromised computers, take a look at the following security myths and read our recommendations for getting web security right.
Myth #1: A strict browsing policy that simply blocks unsavoury sites keeps users safe
Web security used to be pretty straightforward: you merely had to block out certain websites in categories like adult content, gambling, P2P, and violent or extremist content.
You may have some sound reasons for blocking those kinds of websites – they probably violate your HR policies, present legal liabilities, harm worker productivity, and can contribute to an unhealthy work environment.
But blocking dodgy sites won’t keep users safe from web-borne threats.
The reality is that the vast majority of threats come from legitimate websites that have been compromised by cybercriminals.
In fact, SophosLabs detects between 20,000 and 40,000 new malicious URLs every single day, and over 80% of them are legitimate websites – including some with very high traffic, like news, business or government agency sites.
Modern websites tend to be built from a huge number of components. Some of these are likely delivered by third-party sites and the bad guys have become expert at targeting those, which are often not as well protected.
Thus, even if a site has done a good job of securing its own infrastructure, it could still unwittingly be serving up malware. Malware delivered through ad networks – known as malvertising – is a common example.
Some attacks – called drive-by downloads – can infect a user's computer with malicious code simply because they visited a compromised website. The user doesn't even need to click on anything: the infection happens automatically, without them even realising it. Your staff are particularly at risk of this kind of attack if their browsers and all associated plugins aren't kept up to date with the latest security patches.
Security Tip: In addition to a URL filtering solution, you also need to make sure you: 1) perform deep scanning of web traffic as it’s accessed and 2) keep your endpoints well-patched.
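The two layers in that tip can be sketched in a few lines of Python. This is an illustration only – the category map, blocklist, and "signature" below are hypothetical placeholders, whereas a real secure web gateway relies on vendor-maintained URL databases and anti-malware engines:

```python
import re

# Hypothetical data for illustration only.
BLOCKED_CATEGORIES = {"adult", "gambling", "p2p"}
URL_CATEGORIES = {"casino.example": "gambling", "news.example": "news"}
MALWARE_SIGNATURES = [re.compile(rb"eval\(unescape\(")]  # crude drive-by marker

def allow_request(host: str) -> bool:
    """Layer 1: URL filtering - block known-bad categories up front."""
    return URL_CATEGORIES.get(host, "uncategorised") not in BLOCKED_CATEGORIES

def scan_response(body: bytes) -> bool:
    """Layer 2: deep-scan returned content, because legitimate
    (allowed) sites can still serve a malicious payload."""
    return not any(sig.search(body) for sig in MALWARE_SIGNATURES)

# A compromised but "legitimate" news site passes URL filtering...
assert allow_request("news.example")
# ...but its injected payload is caught by the content scan.
assert not scan_response(b"<script>eval(unescape('%6d...'))</script>")
```

The point of the two functions is exactly the article's: neither layer alone is enough, since the first would wave the compromised news site through and the second does nothing about sites you'd rather users never reached at all.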
Myth #2: Network layer scanning alone will keep you secure
Filtering web content as it traverses the network is a good start, but it’s not the only control you’ll need to keep your users safe.
While increased use of encryption, particularly TLS, is great for the security of the internet as a whole, it does cause some headaches for sysadmins.
Well-implemented TLS is extremely hard for anyone to intercept, whether for good or bad reasons. When you visit an HTTPS site, a network appliance or service can only see where you are going – it can't inspect the contents of your traffic. To fall back on the oft-used postal analogy, the appliance can see the envelope but not the contents. This is known as being "blinded by encryption."
The problem is getting harder for two reasons:
- HTTPS is fast becoming the default for everyone, including the bad guys; and
- Intercepting it for legitimate reasons is getting trickier.
The common way round the second issue was to install a special certificate authority (CA) on all your managed endpoints. This CA had the power to pretend to be any website, allowing an appliance to decrypt the traffic, then re-encrypt before sending it on – think of it as an “authorised” man-in-the-middle attack.
While this technique may still work for some organisations, consumer demand for privacy is fast eroding its effectiveness.
One challenge is certificate pinning, which prevents your own special CA from generating a certificate your clients will consider valid. Another is that devices increasingly treat unknown CAs (including your legitimate "special" one) as suspicious and warn the user accordingly.
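A minimal sketch of why pinning defeats the "authorised" man-in-the-middle: the client ships with the fingerprint of the genuine certificate, and any re-signed certificate from the interception CA hashes to a different value. The byte strings below are placeholders standing in for real DER-encoded certificates, and real pinning schemes usually hash the SubjectPublicKeyInfo rather than the whole certificate:

```python
import hashlib

def cert_fingerprint(der_bytes: bytes) -> str:
    """SHA-256 fingerprint of a certificate's DER encoding (simplified)."""
    return hashlib.sha256(der_bytes).hexdigest()

# Placeholder bytes, not real certificates.
genuine_cert = b"\x30\x82...genuine site certificate..."
proxy_cert = b"\x30\x82...certificate minted by the filtering CA..."

PINNED = cert_fingerprint(genuine_cert)  # pin shipped with the client

# The interception appliance's certificate chains to an installed CA,
# so ordinary validation passes - but the pin check does not, and the
# client refuses the connection.
assert cert_fingerprint(genuine_cert) == PINNED
assert cert_fingerprint(proxy_cert) != PINNED
```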
Security Tip: Web protection works best in conjunction with strong, layered defences including good endpoint security controls and egress controls (looking at traffic on the way back out).
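The egress-control side of that tip – looking at traffic on the way back out – can also be sketched simply. The blocklist and policy below are hypothetical; real egress filtering would draw on threat-intelligence feeds and your own firewall policy:

```python
# Hypothetical command-and-control blocklist for illustration only.
KNOWN_C2_HOSTS = {"c2.badguys.example"}
ALLOWED_PORTS = {80, 443}

def allow_egress(dst_host: str, dst_port: int) -> bool:
    """Decide whether an outbound connection should be permitted."""
    if dst_host in KNOWN_C2_HOSTS:
        return False  # likely an infected endpoint phoning home
    if dst_port not in ALLOWED_PORTS:
        return False  # default-deny unusual outbound ports
    return True
```

Even when inbound scanning misses an infection, a rule like this can catch the compromise at the moment the malware tries to call home.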
Myth #3: The only way to protect offsite users is by routing their traffic back through HQ
Branch offices, employees travelling outside the network, and off-site workers all need secure internet access.
This is commonly achieved by routing all traffic through head office.
Users rarely like this: it adds latency and they lose localisation (have you ever ended up with Google in the wrong language?). You may also end up paying for web traffic twice – once when it traverses the WAN and again when it breaks out onto the internet.
It’s worth looking at options to scan traffic locally. A UTM (unified threat management) appliance can be a cost-effective way to provide a protected local internet breakout at branch sites.
For your true road warriors you really need to pick an endpoint security solution that integrates web policy enforcement and web content scanning directly into the network layer on your laptops.
Security Tip: Look into web protection solutions which don’t hamper or slow down your users. Not only will you keep them happy, but they’ll be less likely to look for a (likely insecure) workaround.
“Delinquent web filtering” is one of Sophos’s 7 Deadly IT Sins.
You can find resources – including videos and whitepapers – about these common security sins on the Sophos website.