Bugzilla, Mozilla’s free and popular bug tracking program, has just been updated to patch a number of security holes.
One of them is not only interesting and important, but also ironic.
In fact, if we are allowed to smile at security holes, this bug is wryly amusing.
Many Bugzilla implementations are publicly accessible on the internet, as a way of encouraging anyone who’s interested in helping out with bug fixing.
Open and closed source projects alike use Bugzilla for just this purpose.
If you make it easy for your users to give you feedback about things that aren’t working properly, you can build an engaged online community and improve your software engineering at the same time.
The idea is simple: register for an account, report your bug, track its progress…
…and perhaps you’ll feel enough pull from, and interest in, the community to stay involved in the future.
Of course, if you do run a public-facing Bugzilla server, you might not want just anybody to see all the details of every bug currently reported in your software.
Projects run on so-called full disclosure principles might not mind, because full disclosure is all about everyone knowing everything all the time.
That way, there are no more secrets, Marty, and so no reason to skip fixing a bug on the assumption that the Bad Guys don’t know about it yet.
But if your project follows responsible disclosure, where you give yourself a reasonable time to fix bugs such as security holes before you reveal to the world exactly how to exploit them, you will typically have bug details on record that aren’t yet globally visible.
That’s because the bug report might include sufficient detail to act as a sort of proof-of-concept (PoC) that would allow a cybercriminal to exploit the vulnerability revealed in the bug report.
→ A vulnerability is a bug that could theoretically allow a crook to get unauthorised access to your computer or online service; an exploit is a working attack that puts a vulnerability into practical use. Weaponising a vulnerability to turn it into a working exploit is often very difficult and may require detailed information about the bug that causes the vulnerability.
For that reason, many Bugzilla implementations are configured so that insiders can see (and receive email notifications about) all the gory details of every bug and its progress through the system, but outsiders can’t.
One simple way to do that is to differentiate based on email address, on the assumption that if outsiders deliberately sign up with insider email addresses to try to get at insider-level details, they won’t be able to receive the revealing emails, or even complete the sign-up, so no harm will be done.
The Bugzilla bug-revealing bug was caused by what you might call a programmatic slip ‘twixt cup and lip, because it turns out you could give a different email address in the final stages of signup than you gave at the start.
Bugzilla would register you to receive email at the first address, but decide your access rights based on the second.
In other words, you could convince Bugzilla that you were an insider, yet sign up and receive your privileged bug notifications as an outsider.
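To see how that sort of two-stage signup mix-up can happen, here’s a minimal sketch in Python (Bugzilla itself is written in Perl, and all the names, the domain check and the function signatures below are hypothetical illustrations, not Bugzilla’s actual code):

```python
# Hypothetical sketch of the flaw: a two-stage signup in which the
# address used to decide privileges and the address actually
# registered for email are allowed to differ.

INSIDER_DOMAIN = "example.org"  # assumption: insiders identified by email domain


def start_signup(email):
    # Stage 1: the server records the address and would normally
    # send a confirmation token to it.
    return {"email": email}


def finish_signup(token, confirmed_email):
    # BUG: access rights are decided from the address supplied at
    # the *final* stage...
    is_insider = confirmed_email.endswith("@" + INSIDER_DOMAIN)
    # ...but bug notifications still go to the address from stage 1.
    return {"notify": token["email"], "insider": is_insider}


# An outsider starts signup with their own address, then claims an
# insider address at the last step:
token = start_signup("attacker@outsider.net")
account = finish_signup(token, "someone@example.org")
print(account)
# Result: notifications delivered to the attacker's real mailbox,
# with insider-level privileges attached.
```

The conceptual fix is simply to decide privileges from the same address the confirmation email was actually sent to, so that an address you can’t read mail at can never earn you insider rights.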
That creates a risk of turning privately disclosed vulnerabilities into as-yet-unpatched, publicly known exploits, better known as zero-days.
In short, if you run Bugzilla and rely on email addresses to determine privileges, you want this fix.