It used to be easy. Everyone had a desktop on a fixed IP range and your perimeter firewall had half a dozen carefully managed holes. A simple ACL of “ALLOW 10.0.0.0/8”, or equivalent, was a fairly robust access control.
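To make that concrete, the old model amounted to little more than a subnet membership test. A minimal Python sketch (the addresses are illustrative):

```python
import ipaddress

# The classic perimeter model: trust anything on the internal range.
TRUSTED_NET = ipaddress.ip_network("10.0.0.0/8")

def is_trusted(client_ip: str) -> bool:
    """Old-style access control: 'inside the perimeter' means trusted."""
    return ipaddress.ip_address(client_ip) in TRUSTED_NET

print(is_trusted("10.42.7.1"))    # an office desktop: True
print(is_trusted("203.0.113.9"))  # anything outside: False
```

The check says nothing about who is using the machine or what state it is in, which is exactly why it no longer holds up.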
A decade or two later and it’s a very different picture. Clients move around non-stop and expect constant access when they are on the road. The tenuous link between a user and a “trusted IP” has been completely severed.
The perimeter, traditionally demarcated by a corporate firewall, probably doesn’t even encompass half the servers you care about. With such fuzzy, soft and shifting edges, the cloud analogy is particularly apt.
Even the traditional model carries serious risk. If your organisation still has only desktops sitting behind a network firewall, you remain vulnerable to attack. Unfortunately, attackers long ago started focusing on the few holes they know you have to allow: web and email in particular.
Web and mail filters are crucial, of course, but defence in depth is required. All it takes is a single successful drive-by download or malicious mail attachment and an attacker has full control of a machine inside your perimeter. Your network firewall, focused on blocking attacks from the outside world, won’t offer much help.
Those in charge of security in large organisations have been fretting about this for years, but the trend is accelerating and all organisations need to take a strategic approach to combatting the risks.
The first step is to stop trusting your client devices. All of them. This can be quite liberating, as it’s a great opportunity to focus on what is really important to your organisation and ensure security resources are focused appropriately.
Adopting an untrusted approach doesn’t mean you shouldn’t try to protect your clients. It just means considering what to do if the protection fails.
Damage control is an important first consideration. If a client has full access to all your servers, and those servers are not well hardened, any compromise is going to spread rapidly.
Defending your servers means isolating them from your clients. The perimeter firewall is no longer enough. Minimising the attack surface between your clients and servers limits the risk of a server compromise via an insecure management port, for example. Likewise, if all your clients access applications through a web interface, leaving the database server directly reachable creates unnecessary exposure.
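One way to think about that isolation is as a default-deny flow policy between network zones. A toy sketch, with zone names and port numbers purely illustrative:

```python
# Default-deny segmentation sketch: clients reach the web tier only;
# only the web tier reaches the database. Anything unlisted is denied.
ALLOWED_FLOWS = {
    ("clients", "web", 443),   # users browse the application over HTTPS
    ("web", "db", 5432),       # app servers query the database
}

def flow_permitted(src_zone: str, dst_zone: str, port: int) -> bool:
    """Permit only explicitly whitelisted (source, destination, port) flows."""
    return (src_zone, dst_zone, port) in ALLOWED_FLOWS

print(flow_permitted("clients", "web", 443))   # True: the intended path
print(flow_permitted("clients", "db", 5432))   # False: clients never reach the DB directly
```

The point of the default-deny shape is that a compromised client gains no route to the database tier, even before any server-side hardening comes into play.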
Server-side endpoint security is also crucial. Simply isolating them from the internet is not sufficient – they should be able to withstand attacks from the inside. The good news is that it can actually be easier than client-side endpoint security.
Servers tend to perform a less diverse range of tasks, so it’s easier to define and lock down behaviour. They’re rarely used for web browsing, for example, and they’re not subject to frequent software installation, which can cause problems with heuristic anti-virus detections. Chances are you’ll also have more success tuning an IDS.
Internal controls and segmentation generally buy you time and, as a consequence, visibility is an important consideration. Spotting malicious activity before it compromises valuable resources gives you the chance to react before real damage is done.
Returning to the earlier examples: how easily would you spot a server attempting to access an external website? Would your firewall spot and log a client port-scanning your network for open database ports? Log and security event management tools that make sense of all that data can really help.
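As an illustration of the kind of analysis such tools perform, a detection for the port-scanning example might simply count distinct destination ports per source. The log format and threshold here are invented for the sketch:

```python
from collections import defaultdict

# Hypothetical firewall log entries: (source IP, destination IP, destination port).
# One client probes many database ports; another just talks HTTPS repeatedly.
events = [
    ("10.0.5.9", "10.1.0.20", p) for p in (1433, 3306, 5432, 1521, 27017, 6379)
] + [("10.0.5.7", "10.1.0.20", 443)] * 50

def scan_suspects(events, threshold=5):
    """Flag sources probing an unusually wide range of destination ports."""
    ports_by_src = defaultdict(set)
    for src, _dst, port in events:
        ports_by_src[src].add(port)
    return {src for src, ports in ports_by_src.items() if len(ports) >= threshold}

print(scan_suspects(events))  # {'10.0.5.9'}: many distinct DB ports from one client
```

Note that volume alone isn’t the signal: the busy-but-benign client generates far more traffic than the scanner, which is why distinct-port breadth is the useful measure.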
Authentication and authorisation also need to be taken into consideration. “Coming from a known IP address” or “on the corporate LAN” is still a useful authentication factor but should certainly not be relied upon on its own. Although challenging to achieve, a nuanced approach to trusting authentication claims is required.
Device type is one factor to consider. Separate VPNs and wireless networks for managed and unmanaged devices will allow you to tailor application access based on device trust. VPNs, proxies and applications themselves can do something similar for user authentication. Requiring an additional certificate or token when accessing sensitive applications, data or functionality is a sensible strategy.
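To illustrate the idea of tiered trust, a hypothetical policy function might combine network location, device management status and a second factor. The fields and rules below are illustrative, not any particular product’s API:

```python
# Sketch of tiered authorisation: network location is a weak signal;
# device management status and a second factor gate sensitive access.
def access_decision(resource_sensitive: bool, on_corporate_net: bool,
                    managed_device: bool, has_second_factor: bool) -> bool:
    if not resource_sensitive:
        # Routine resources: a known network or a managed device will do.
        return on_corporate_net or managed_device
    # Sensitive resources: never rely on network location alone.
    return managed_device and has_second_factor

# An unmanaged device on the corporate LAN still can't reach sensitive data.
print(access_decision(resource_sensitive=True, on_corporate_net=True,
                      managed_device=False, has_second_factor=False))  # False
```

The design choice worth noting is that “on the corporate LAN” only ever relaxes access to routine resources; for anything sensitive it carries no weight at all.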
Shifting from a binary in-or-out trust model requires some fundamental network architecture changes. It’s a strategy, not a task, but it’s a necessary one which could really help enable your users. If you know your critical assets have good, multi-layered protections, safely allowing access to them from a diverse range of devices and locations becomes much easier.