GitHub launches Security Lab to boost open source security

When it comes to open source software security, nobody could accuse Microsoft-owned development platform GitHub of thinking small with its new Security Lab.

Launched last week at its GitHub Universe developer conference, the idea sounds simple enough – create a global platform for reporting and fixing security vulnerabilities in open source projects before they do serious damage.

It sounds so obvious that it’s surprising nobody thought of it before. That might have something to do with the size of the job, admitted GitHub’s vice president of security product management in Security Lab’s launch blog:

Securing the world’s open source software is a daunting task.

The JavaScript ecosystem alone encompasses more than a million projects, not helped by a lopsided 500:1 ratio of developers to security experts with the knowledge of how to fix things.

Lots of developers crank out vulnerable code, leaving a tiny clean-up squad to pick up the mess of a problem that sprawls across thousands of companies.

Feeling depressed yet? Don’t be – that’s where GitHub’s Security Lab steps in.

To boost credibility, GitHub has already signed up big companies – namely Google, Oracle, Mozilla, Intel, Uber, VMware, J.P. Morgan, F5, NCC Group, IOActive, Trail of Bits, HackerOne, as well as Microsoft and LinkedIn.

This has already borne fruit, with these companies collectively finding more than 100 CVE-level security vulnerabilities in open source code. Anyone who joins them will qualify for bug bounties of up to $3,000, GitHub said.


The list of goodies goes on, such as Security Lab making available a free-to-use analysis engine, CodeQL, which GitHub acquired when it bought Semmle in September:

If you know of a coding mistake that caused a vulnerability, you can write a query to find all variants of that code, eradicating a whole class of vulnerabilities forever.
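To give a flavour of what such a query looks like, here is a minimal illustrative CodeQL sketch – not one of Security Lab’s own queries, just a hypothetical example – that flags every call to JavaScript’s notoriously risky eval():

```ql
// Illustrative variant-analysis query: find all calls to eval()
// anywhere in a JavaScript codebase.
import javascript

from CallExpr call
where call.getCalleeName() = "eval"
select call, "Call to eval() may allow arbitrary code execution."
```

Once a query like this exists, it can be run across every project in a codebase (or, in principle, across GitHub), which is what turns a single bug report into the eradication of a whole class of vulnerabilities.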

Perhaps the simplest innovation of all is that Security Lab will operate as a CVE Numbering Authority (CNA) – a critical piece of security architecture for a project that aims to shine a wider light on security problems in open source projects.

Currently, GitHub says at least 40% of security flaws affecting open source don’t receive a CVE when they’re announced, which means they are excluded from public databases that tell customers they have something to patch.

Security Lab will sort this with security advisories for users of affected projects, backed by automated security updates when patches are available and a Security Advisory API to integrate the flaw database into third-party tools.

There’s even a neat token-scanning system to spot hard-coded credentials in the formats used by 20 different cloud providers:

When we detect a match, we notify the appropriate service provider and they take action, generally revoking the tokens and notifying the affected users.
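In spirit, the scanner matches commit contents against credential formats registered by each provider. The sketch below is a heavily simplified illustration of that idea – the provider names and token patterns are made up for the example, not GitHub’s actual formats:

```python
import re

# Hypothetical token formats for illustration only -- in the real system,
# each cloud provider registers the format of its own credentials.
TOKEN_PATTERNS = {
    "example-cloud": re.compile(r"\bEXAMPLE_[A-Za-z0-9]{32}\b"),
    "example-ci": re.compile(r"\bci_tok_[a-f0-9]{40}\b"),
}

def scan_for_tokens(text):
    """Return (provider, token) pairs for every credential-shaped match,
    so each provider can be notified to revoke the leaked token."""
    hits = []
    for provider, pattern in TOKEN_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((provider, match.group(0)))
    return hits
```

The real system obviously does far more – it runs on every public commit and closes the loop with the provider – but the core mechanic is this kind of format matching.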

Will it work?

Let’s return to the sheer scale of the open source security problem and the difficulty of enrolling enough of this base to make a difference.

Open source is, and always has been, a world of the long and hard-to-reach tail. GitHub is hopeful its Security Lab will hack off a chunk of this but that might still leave a lot of barely monitored projects in the wild.

There’s also the small issue of whether open source developers will trust something that is a collaboration between Microsoft and lots of other makers of big-brand proprietary software.

The optimistic argument is that the real innovation here isn’t simply the setting up of a single open source vulnerability management platform, but the way it might embed the use of scanning tools and methodologies.

The best way to secure anything is to make it secure before it is released and to accelerate the process of finding, publicising and fixing flaws when they are found.

On a separate front, GitHub also announced its Arctic Code Vault, a sort of cold storage for open source code located in an underground Arctic bunker.

Just like lifeforms, it turns out that code can go extinct too. If developers can’t find every flaw today, at least in years to come they’ll know where to look.