It isn’t often that good news makes headlines, especially in the security world. Dovecot bucked the trend earlier this month, with a cybersecurity audit that praised the mail server as “near-impenetrable”. Commendations like that are rarer than hens’ teeth. What can we learn from it?
Dovecot is a mail server. Not the kind that sends and receives mail, mind you. It’s an IMAP server – the kind that stores email received by another program (called a mail transfer agent) and gives it to programs like Outlook when they ask for it.
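To make that division of labour concrete, here is a minimal sketch of the client side of the conversation, using Python's standard imaplib module. The host name and credentials are placeholders, and any IMAP server, Dovecot included, speaks the same protocol:

```python
import imaplib

def fetch_inbox_subjects(host, user, password):
    """Connect to an IMAP server (such as Dovecot) and return the
    subject header of each message in the inbox."""
    subjects = []
    # IMAP4_SSL speaks IMAP over TLS, on port 993 by default.
    with imaplib.IMAP4_SSL(host) as conn:
        conn.login(user, password)
        conn.select("INBOX", readonly=True)
        # Ask the server for every message number in the mailbox...
        _, data = conn.search(None, "ALL")
        for num in data[0].split():
            # ...then fetch just the Subject header for each one.
            _, msg = conn.fetch(num, "(BODY[HEADER.FIELDS (SUBJECT)])")
            subjects.append(msg[0][1].decode().strip())
    return subjects
```

Note that Dovecot's job only begins once a mail transfer agent such as Postfix or Exim has delivered the message; a client like the sketch above then retrieves it over IMAP.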
Cure53, the German security team that audited Dovecot, did a thorough job. Four experts spent 20 days poking around the program. First, they pored over the code manually looking for holes in it. Then the team ran a series of automated penetration tests, using programs that tried to find exploitable flaws such as memory leaks.
‘A’ for effort and results
The results were impressive. Cure53, which found only three minor issues, said:
“Despite much effort and thoroughly all-encompassing approach, the Cure53 testers only managed to assert the excellent security standing of Dovecot. For a complex piece of software that Dovecot constitutes, it is an extremely rare result to stand strong with so few problems.”
Verifying security in a product like this is particularly important, for two reasons. The first is ubiquity. Dovecot is extremely popular. The most recent scan by Open Email Survey, using the online discovery tool Shodan, found that 68% of IMAP servers ran Dovecot. The next most popular accounted for 17%.
Whenever a product becomes that popular, it gets close to creating a monoculture, and monocultures are bad for security. We know this is true because saying so publicly has made companies angry and got people fired. You can’t mandate diversity in software usage, though, so you’d better make sure that if there is a dominant program, it’s pretty watertight.
Luckily, Dovecot was written with security in mind, and its primary author Timo Sirainen offers a €1,000 bug bounty out of his own pocket to anyone finding security holes. This is a guy who walks the walk, and really cares about the security of the software.
The problem with many eyes
The second reason that Dovecot’s stellar report card is important is that it is open-source software. One of the biggest myths in open source was voiced by one of its biggest advocates. Eric Raymond wrote The Cathedral and the Bazaar, a seminal book on open source. In it, he coined “Linus’s law”, named for Linus Torvalds, the developer of Linux:
“Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone.”
Or, less formally: “Given enough eyeballs, all bugs are shallow”.
It’s not entirely true, though. Some of the biggest and most popular open-source projects have harboured bugs that lay dormant for years. Naked Security has reported on a few. In 2014, a bug that made Linux rootable was patched after existing for five years. Last November, Cure53 uncovered several bugs in curl, a program that underpins hundreds of others.
Perhaps most damning was Kees Cook’s analysis of Linux kernel bugs last October. He found 34 high-severity bugs that had been hanging around for six years each on average, and some for more than a decade.
And let’s not forget the granddaddy of all open-source bugs: Heartbleed. That bug, spotted in 2014, hit OpenSSL, the most widely used TLS library on the planet, sending institutions scrambling for a fix.
Show me the money
The “many eyes” concept isn’t complete snake oil. Having lots of people looking at your code certainly can’t hurt, and has produced quantifiably positive results. Vulnerability scans have revealed a disparity between defect densities (that’s errors per thousand lines of code) in open source versus commercial software – and open source won. But that doesn’t mean that errors don’t still lurk in open-source software, or that they don’t include some absolute howlers.
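As a back-of-the-envelope illustration of the metric itself (the figures below are invented for the example, not drawn from any particular scan):

```python
def defect_density(defects, lines_of_code):
    """Defect density: defects per thousand lines of code (KLOC)."""
    return defects / (lines_of_code / 1000)

# Hypothetical numbers: 590 defects found in a million-line
# open-source codebase versus 720 in a comparable commercial one.
print(defect_density(590, 1_000_000))  # 0.59 defects per KLOC
print(defect_density(720, 1_000_000))  # 0.72 defects per KLOC
```

Lower is better; the scans mentioned above made comparisons of this shape, with open source coming out ahead.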
The problem is that while many eyes may scour open-source code, they aren’t all well-trained or experienced eyes, and they won’t always look at the mundane parts of the code. That’s why a deep dive by seasoned experts is a vital complement to community scrutiny.
That’s all very well, but who’s going to pay for it? Open-source software is generally a labour of love. Sirainen offers commercial contracts for his open-source code, which no doubt help with his bug bounties, but he didn’t pay for the Cure53 scan. Mozilla did.
In October 2015, the Mozilla Foundation launched its Mozilla Open Source Support (MOSS) program with an initial allocation of $1m, identifying important open-source projects that it could fund and support.
Today, it has a budget of around $3m per year, and includes a Secure Open Source initiative that supports audits for open-source software projects. It funded the Cure53 curl audit, among others including phpMyAdmin and PCRE.
Mozilla is just one organization, though. There can never be too much financing for deep security audits in open source. Where else could the money come from? How about some of the biggest companies that use it?
Banks, retailers, utilities, and IT infrastructure companies all take advantage of open-source software. If each of them allocated even a small slice of its IT budget to financing coordinated audits, they could go a long way toward improving the overall security of the internet.
But they don’t have to do that. They get to use the software without giving back. When companies use free things without considering what they cost to produce and maintain, those costs become what economists call “externalities”: a broader bill that everyone ends up paying in the end.