Linux reached the entirely respectable age of 23 this week, more or less.
More or less 23, that is, not more or less respectable.
25 August 1991 was the day that a Finnish student named Linus Torvalds announced that he was working on a hobby operating system, and invited people to say what they'd like to see in it.
Ironically, because his project didn't have a name yet, he posted his announcement to the newsgroup comp.os.minix with the subject line "What would you like to see most in minix?"
Hello everybody out there using minix -
I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones. This has been brewing since april, and is starting to get ready... I'd like to know what features most people would want. Any suggestions are welcome, but I won't promise I'll implement them :-)
Andrew Tanenbaum, the creator of the MINIX operating system to which that post was addressed, is now 70, and will give his farewell lecture in October 2014 after 43 years at the Vrije Universiteit in Amsterdam, so you can say he knows a thing or two about computer science.
Torvalds certainly realised that.
He took numerous ideas from MINIX (which was, after all, initially created as a teaching tool) and used a MINIX computer for his early development.
What turned into Linux, however, ended up very different from MINIX indeed - possibly as different as you can get from an OS point of view.
What's the difference?
The big difference between Tanenbaum's OS and Linus's is right at the heart of the software.
Many modern operating systems, including Microsoft Windows, Apple's OS X, the various flavours of BSD, and Linux and its derivatives, have a giant lump at their core known as a monolithic kernel.
The bulk of the operating system, including the code for scheduling processes, understanding and accessing data on disk, keeping track of your mouse, managing network connections and much, much more, is done inside the highly-privileged kernel.
This means, very loosely speaking, that if a malware writer can insinuate his code into the kernel, you're in trouble.
Most, if not actually all, security bets are off, because malware inside a monolithic kernel can do things that are anathema to security.
For example, kernel malware can typically:
- Read and write arbitrary disk sectors, without any access control limitations.
- Peek into and alter any running program's memory.
- Sneakily modify the behaviour of already-active programs.
- Snoop on everybody's network traffic.
- Lie to higher-level software to give a false sense of security.
It certainly sounds as though you'd want to keep the kernel as tiny as possible, and implement things like file systems and device drivers in what's called userland, the less privileged environment that is managed by the kernel.
That way, a small amount of code known as a microkernel (about 10,000 lines in the latest release of MINIX), compact enough to audit thoroughly, can be made responsible for managing and enforcing the security of the rest of the system.
In contrast, even though Linux started life with about 10,000 lines, the kernel now consists of more than 15 million lines of code (although not all of them are always used).
Imagine how long it might take you to find a bug in something that big.
What about performance?
Microkernels, it turns out, are surprisingly unusual in mainstream systems, largely for performance reasons.
Passing data and program flow between userland and a microkernel takes longer than it does inside a monolithic kernel - because of all those pesky security checks you need to do.
And so most laptops and many of the mobile devices we use today ended up with monolithic, or macrokernel-based, OSes.
(One notable mainstream exception to the path that Linus took is QNX, a microkernel Unix-like OS that was famously adopted by BlackBerry for its version 10 release.)
Tanenbaum and Torvalds famously clashed online over macro-versus-micro early in 1992, with Tanenbaum suggesting that:
LINUX is a monolithic style system. This is a giant step back into the 1970s. That is like taking an existing, working C program and rewriting it in BASIC. To me, writing a monolithic system in 1991 is a truly poor idea.
When a story emerged in 2004 suggesting that Linux was tainted because Torvalds had stolen code from Tanenbaum, Tanenbaum himself came out strongly in support of Torvalds to say that it simply wasn't so.
But Tanenbaum's evidence-in-chief was of an amusingly ironic sort:
...[T]he code was [Linus's]. The proof of this is that he messed the design up... Instead of writing a new file system and a new memory manager, which would have been easy, Linus rewrote the whole thing as a big monolithic kernel, complete with inline assembly code :-( . The first version of Linux was like a time machine. It went back to a system worse than what he already had on his desk.
What of the next 23 years?
Today, Linux still has a public image of being not much more than a "geek thing," at least as a desktop OS, just like it was back in the early 1990s.
But Linux is the brand, if you want to call it that, behind brands as big as Google, which uses Linux not only to run its data centres but also as the heart of its Android product.
So that macrokernel implementation decision, in what was just a hobby project, has ended up with enormous implications.
That's the thing about time machines: they have a way of changing history.