There’s no best way to handle disclosure of zero-day vulnerabilities


Earlier this week, the headlines flashed with news that Google had publicly disclosed a vulnerability in Windows 10 that allows local privilege escalation. The vulnerability is a zero-day, meaning no fix was available at the time, so by making it public, Google theoretically gave attackers information they could use to their own advantage.

A lesser-discussed note to this story is that Google had disclosed this vulnerability privately to Microsoft a week prior, at the same time it had made a similar private disclosure to Adobe about its own actively exploited vulnerability in Flash (CVE-2016-7855). In response to Google’s disclosure, Adobe made a fix to Flash about five days later, with the security update already available to the public.

This is good news for Adobe, as Google has a published disclosure policy for what it defines as actively exploited critical vulnerabilities: it will privately notify the vendor and give them seven days to make a fix, otherwise it goes public with the news. Google’s reasoning?

We believe that more urgent action—within seven days—is appropriate for critical vulnerabilities under active exploitation. The reason for this special designation is that each day an actively exploited vulnerability remains undisclosed to the public and unpatched, more computers will be compromised.

Google certainly had reason for concern: Microsoft later identified that the Russian APT group Fancy Bear had been exploiting the zero-day in its own attacks even before Google’s disclosure.

Microsoft hasn’t yet issued a fix for the Windows vulnerability, but in a blog post response it says one will be available on the next Patch Tuesday, November 8.

So what’s a normal vulnerability disclosure policy?

There’s no industry-wide agreement on a standard vulnerability disclosure policy. Many abide by what they call “responsible disclosure”: privately reaching out to the vendor with the details of a discovered vulnerability, giving them time to fix it, and only making the details public once the issue has been patched.

Often whoever discovered the vulnerability works with the affected vendor to fix the problem. As long as the vendor is making a good-faith effort, the timeline for fixing a vulnerability can be weeks or even months, although many set a deadline to get things fixed before they go public. Google recommends 60 days, though others will give more time.
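The timeline arithmetic behind these policies is simple enough to sketch. The function below is a hypothetical illustration, not any vendor’s actual tooling; it assumes the two deadlines mentioned above (seven days for actively exploited critical bugs, 60 days otherwise).

```python
from datetime import date, timedelta

def disclosure_deadline(reported: date, actively_exploited: bool) -> date:
    """Return the public-disclosure date under a hypothetical policy:
    7 days for actively exploited critical bugs, 60 days otherwise."""
    grace = timedelta(days=7 if actively_exploited else 60)
    return reported + grace

# A bug privately reported on 2016-10-21:
print(disclosure_deadline(date(2016, 10, 21), actively_exploited=True))   # 2016-10-28
print(disclosure_deadline(date(2016, 10, 21), actively_exploited=False))  # 2016-12-20
```

Under the seven-day variant, Adobe’s roughly five-day turnaround on the Flash fix lands inside the window; Microsoft’s Patch Tuesday schedule did not.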

Not everyone in the security industry agrees with the principles of responsible disclosure. Though it may strike many as reckless, some believe the best course of action is to go public as soon as possible, especially if the vulnerability is being actively exploited. While Google gave both Adobe and Microsoft seven days for a fix, others might have gone public straight away, without giving the vendors any chance to respond.

The idea behind immediately disclosing an actively exploited vulnerability is that public pressure can make all the difference in getting the vendor to take it seriously, especially if the vendor has been unresponsive or uncooperative in past disclosure efforts. The approach can backfire, though: it publicizes a new avenue of attack before a fix exists. It can also backfire on the vendor if they fail to respond quickly.

Do you think immediately disclosing vulnerabilities is too reckless, or the right thing to do? Or was Google right?


Image by drserg / Shutterstock.com