We’ve recently seen some pretty high-profile vulnerabilities in Java and Internet Explorer.
In both cases, evidence of in-the-wild exploitation by criminals meant the issues became widely publicised before a patch was available.
As someone looking after IT for your company, how do you react to reports like this?
The easiest response is to cross your fingers and hope the vendor gets a patch out quickly.
In many cases this is a sensible strategy. Nobody likes unplanned change, least of all the IT operations team. Forcing through a risky emergency change every time a buffer overflow is discovered in an application you run is guaranteed to create apathy when you really need to mobilise the team.
Two key metrics for assessing severity are exploitability and exposure.
Turning a proof-of-concept vulnerability into a reliable, working exploit is, luckily, often very challenging for a would-be attacker. Vendor-issued bug reports rarely take this into account appropriately.
Software vendors can lack the security expertise to understand how an attacker would turn a bug into an exploit and they almost certainly don’t have adequate insight into emerging real-world threats. A reliable source of vulnerability information which takes known in-the-wild threats into account is crucial.
Exposure is specific to your organisation.
Ensuring you can assess the risk quickly and accurately is the first step towards a defensible infrastructure. Given a critical vulnerability in a specific version of an application (perhaps one only exploitable on certain OS versions), how quickly and easily can you assess how many endpoints are affected?
Context is also very useful – do your reports also include things like the owners of the affected machines? An issue disproportionately affecting a board member may warrant prioritisation.
Are they desktops or laptops? Laptops are much more likely to be plugged into hostile networks and may not always be protected by perimeter gateways.
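To make that concrete, here's a minimal sketch of the kind of question you want to be able to answer in minutes rather than days. It assumes a CSV export from your asset or patch management system with hypothetical columns (hostname, app, app_version, os_version, owner, device_type); the vulnerable versions and OS list are purely illustrative.

```python
# Rough sketch: count endpoints exposed to a hypothetical Java vulnerability.
# Assumes an inventory export (inventory.csv) with columns such as hostname,
# app, app_version, os_version, owner, device_type -- adjust to whatever your
# asset or patch management system actually provides.
import csv
from collections import Counter

VULNERABLE_APP = "java"
VULNERABLE_VERSIONS = {"7u10", "7u11"}       # illustrative version strings
AFFECTED_OS = {"Windows 7", "Windows XP"}    # e.g. only some OS builds exploitable

affected = []
with open("inventory.csv", newline="") as f:
    for row in csv.DictReader(f):
        if (row["app"].lower() == VULNERABLE_APP
                and row["app_version"] in VULNERABLE_VERSIONS
                and row["os_version"] in AFFECTED_OS):
            affected.append(row)

print(f"{len(affected)} endpoints affected")
# Context for prioritisation: are they laptops, and who owns them?
print(Counter(r["device_type"] for r in affected))
print(Counter(r["owner"] for r in affected).most_common(10))
```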
When investigating a server-side issue, how easily can you assess the access required to exploit it? Cross-referencing with firewall rules or external vulnerability scanning data could help you quickly establish whether a vulnerable application is exposed only to a well-protected internal management network or to the entire internet.
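As a rough illustration of that cross-referencing, the sketch below assumes two exports you would have to produce yourself: a list of hosts running the vulnerable service, and a simplified dump of firewall rules with source zone, destination host and destination port columns. The file formats, field names and port are all hypothetical.

```python
# Rough sketch: flag vulnerable servers whose service port is reachable from
# the internet, according to a simplified firewall rule export.
import csv

VULN_PORT = "8080"  # port the vulnerable application listens on (illustrative)

with open("vuln_hosts.csv", newline="") as f:
    vulnerable_hosts = {row["hostname"] for row in csv.DictReader(f)}

internet_exposed = set()
with open("firewall_rules.csv", newline="") as f:
    for rule in csv.DictReader(f):
        if (rule["source_zone"] == "internet"
                and rule["dest_host"] in vulnerable_hosts
                and rule["dest_port"] == VULN_PORT):
            internet_exposed.add(rule["dest_host"])

print("Exposed to the internet:", sorted(internet_exposed))
print("Internal-only:", sorted(vulnerable_hosts - internet_exposed))
```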
The accessibility of the data is just as important as its availability. Agile response requires having the data at your fingertips. Pestering a busy SQL guru every time you need answers isn’t going to be sustainable.
Confidence in the data will also help decision making. You might be getting great-looking reports out of a package or patch management system, but does it have good enough coverage? It might be worth cross-referencing the data with the web logs of a widely used corporate application, for instance.
Does the breakdown of user agents in your logs correspond with your reports? If you have dark corners of your network which aren't so well managed, it's better to know now than to base a decision on incorrect data.
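One rough way to run that sanity check, assuming your application sits behind a web server writing standard combined-format logs (where the user agent is the last quoted field), is something like the sketch below; the MSIE token is just an example of what you might tally against your patch reports.

```python
# Rough sketch: tally browser versions seen in the access logs of a widely
# used internal web app, as a sanity check against patch-management reports.
import re
from collections import Counter

ua_pattern = re.compile(r'"([^"]*)"\s*$')  # last double-quoted field on the line
versions = Counter()

with open("access.log") as f:
    for line in f:
        match = ua_pattern.search(line)
        if not match:
            continue
        ua = match.group(1)
        # Count the MSIE token as an example; adapt for whatever you track.
        msie = re.search(r"MSIE [\d.]+", ua)
        versions[msie.group(0) if msie else "other"] += 1

for version, count in versions.most_common():
    print(f"{version:<12} {count}")
```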
Armed with good data you’ll be in a much better place to get the buy-in necessary to execute some appropriate mitigation.
An understanding of your patching capabilities will also be crucial. There’s not much difference between a vulnerability with no patch and a vulnerability you’re unable to patch.
In fact, the latter is often more dangerous. Patch Tuesday may be followed by Exploit Wednesday: it's a safe bet that someone will be behind on their patching, so reverse engineering newly released patches to create exploits makes a lot of sense for a cybercriminal.
Don’t forget to cover applications which silently patch themselves. This is increasingly common and can really help ease the burden on IT. However, are your security measures shooting you in the foot? Blocking these updates at the gateway is likely counterproductive. Likewise, do these systems still work when users don’t have admin rights on their workstations?
Speed of patch deployment should also affect your response. The longer it takes to roll out a patch, the more value there is in temporary mitigation. It's really worth knowing this in advance so you have the data when you need to make an important decision.
Tracking a routine rollout can help you understand this and, again, cross-referencing with any other relevant data sources can give you confidence in the results. It's worth knowing if your shiny new package management system is only managing your shiny new clients! If you run a client firewall, its logs might be a good additional data source for the application versions running across your systems.
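As a sketch of that kind of cross-check, suppose you can export a list of hostnames known to the package management system and another list of hostnames seen by some independent source (client firewall logs, DHCP leases, whatever you trust). The file names and one-hostname-per-line format below are hypothetical.

```python
# Rough sketch: find the "dark corners" -- hosts seen on the network but
# unknown to the package management system.
def hostnames(path):
    """Load a one-hostname-per-line export, normalised to lower case."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

managed = hostnames("package_mgmt_hosts.txt")
seen_on_network = hostnames("firewall_log_hosts.txt")

unmanaged = seen_on_network - managed
coverage = 100 * len(seen_on_network & managed) / max(len(seen_on_network), 1)

print(f"{coverage:.0f}% of hosts seen on the network are managed")
print(f"{len(unmanaged)} unmanaged hosts, e.g. {sorted(unmanaged)[:10]}")
```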
The next step is to develop a thorough understanding of your control points and their limitations.
To pick a relevant example, how do you go about blocking Java-based browser attacks? Can you reliably uninstall Java from all your clients? Can you assess the impact of doing so? If you choose to block it at the gateway, will roaming clients be protected? What about threats delivered via HTTPS?
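Answering even the first couple of those questions needs data. As a rough sketch, a tiny check like the one below could be pushed out through whatever management tooling you have to report which clients still have a Java runtime on the PATH. It only spots a JRE, not specifically the browser plugin, so treat it as an assumption-laden starting point rather than a complete answer.

```python
# Rough sketch: report whether a Java runtime is present on this client.
import subprocess

try:
    result = subprocess.run(
        ["java", "-version"], capture_output=True, text=True, timeout=10
    )
    # "java -version" prints its banner to stderr on most JREs.
    lines = (result.stderr or result.stdout).strip().splitlines()
    print("java present:", lines[0] if lines else "unknown version")
except (FileNotFoundError, subprocess.TimeoutExpired):
    print("java not found on PATH")
```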
In anything beyond the most homogeneous, tightly controlled environments, the chances are that no strategy will be 100% effective. You may need to deploy a few to be adequately covered or you may choose to accept some risk. If a quick and easy strategy covers 90% of your infrastructure with little effort, that’s a pretty good reduction in overall risk. Spending days on the remaining 10% just might not be worth it.
A brainstorming exercise based on plausible sample scenarios is a good way to explore your options in advance. Imagine a problem with some widely used software and discuss how you would react. This discussion needs to involve people outside the security team: there are likely some very useful controls which are not purely security-focused and may otherwise be overlooked.
In particular, reliable, well-deployed configuration management tools such as SCCM or Puppet are extremely useful for reacting operationally to security threats. Helping an operations team justify investment in this area could come in very useful next time you need to rapidly roll out a config change to mitigate a new threat.
Sounds like a lot of effort? Yes! Unfortunately, without a thorough understanding of the risk to your organisation and the cost of mitigation, threat response is reduced to guesswork.
The key point to understand is that you can't go through a long-winded process every time a new vulnerability report hits your desk. Instead, review your assessment and response capabilities, understand their limitations and have a strategy for improvement.
Luckily, a defensible infrastructure is also generally a well-managed infrastructure. You'll likely find lots of common ground with your IT operations team, so it's a great opportunity to pool resources and demonstrate the value, not just the cost, of effective security capabilities.