You enter stormy waters when you compare security at the core of Linux with security inside Windows.
So, hold your breath and hang onto your hat.
Here’s a kernel bug in Linux that turned out to have been sitting there, Heartbleed-style, awaiting discovery and exploitation for several years.
The vulnerability allows a buffer overflow in the Linux pty driver, where pty is short for “pseudoteletype” or “pseudoterminal.”
Ptys are, in the words of the relevant man page, “a pair of virtual character devices that provide a bidirectional communication channel.”
You can use ptys for all sorts of interprocess communication, but they are typically intended for use by software such as ssh, which gives you a secure shell; script, which keeps a transcript of your current terminal session (a sort-of honest-to-goodness keylogger); and screen, which lets you split a single terminal window into multiple virtual terminals.
In short, if you’re a Linux user, you probably use ptys a lot of the time.
Race conditions
The bug is what’s called a race condition, where two processes compete to get access to a resource, but end up clashing over it in a way that corrupts either or both accesses.
Here’s an example in the form of an analogy.
When I was a youngster, we were supposed to chant, when crossing the road, “Look right, look left, look right again.” (In a country that drives on the left, you are in more imminent danger from traffic to your right as you step off the kerb.)
That ditty always worried me, because of the race condition.
What if, while you’re looking right again, a car comes into range on the left that you didn’t see before?
Ideally, what you want is a Pelican crossing, where you press a button that toggles a traffic light so that traffic is required to stop, and pedestrians are allowed to cross.
In computer science, a Pelican crossing is known as a mutex, short for mutual exclusion.
That’s a global resource, managed by the operating system, that brackets a section of code so that it can only be entered by a single thread of execution at a time.
Once you get hold of the mutex (the little green man), you’re free to access the resource; when you’re done with it, you give it back to the operating system (and your little man goes red) so the next guys can have a go at crossing the road in their turn.
Pseudocode
Here’s some pseudocode to show the sort of problem that happened in the Linux kernel.
Imagine two concurrent threads of execution, sharing the same memory space, so that the variables buffer (a storage array for eight objects) and bcount (how many objects are already in the buffer) are common to both threads.
If they hit the same piece of code, where the buffer gets updated, at about the same time, then the instructions they execute will be interleaved and we run the risk of a race condition on the buffer.
We might be lucky: one thread’s check-and-write might complete in full before the other’s begins, so both objects land safely in the buffer and bcount comes out correct.

Or we might not: both threads might test bcount while it still says 7, each conclude there’s room for one more object, and each go ahead with its write – with the second write landing past the end of the buffer.
In regular usage, even heavy usage, the “bad luck” scenario might never show up, but with clever timing (or simply with many repeated attempts), an attacker could find a way to hit the jackpot by overwriting memory.
In the best case, the code will crash; in the worst case, it will begin to misbehave or actually give control to the attacker.
The bottom line
An attacker who can run code of his choice in the kernel can easily promote himself to the all-powerful Linux user called root.
A proof-of-concept exploit is available online, so this is a bug to patch as soon as you can.
If you’re in the habit of building your own kernel, you can apply the patch now; it makes the pty code use a Pelican crossing – a mutual exclusion lock – when writing to its output buffer.
Or you can wait until your Linux distro publishes an update and then get busy.
Just don’t say, “This would never have happened on Windows.”
In both pseudocodes, in the right hand thread, there should be “if bcount < 8 then" instead of "if bcount < N then". Regardless, great post as usual.
Arrrgh! I used “N” to start with, but then figured that since it’s not real code, getting rid of N (is it a variable? a constant?) and just making it a single digit would still make the point but be simpler. Forgot to change all occurrences of N.
Fixed, thanks.
And thanks for the kind words.
Sure it could happen on Windows, and anyone who says it couldn’t doesn’t know what they are talking about. Plenty of code on Windows runs in device drivers and kernels that can be exploited if the same vulnerability is present. I wonder what Linus Torvalds thinks of this, knowing he’s so fastidious about changes. Was this in a particular distro or in the main Linux trunk?
Maybe Linus should have a rant at himself? Threaten to cut the brake lines on his own car?
AFAIK the original bug was in the mainline code.
Okay, this is an important fix, but it ain’t nowhere in the same league as the Heartbleed horror.
I don’t think anyone’s saying it’s in the same league as Heartbleed.
OTOH, this fault was *in the kernel itself*, not merely a userland library, and it survived for years unnoticed, like Heartbleed. So I am happy with my mention of Heartbleed in the article.
from what i understand you can exploit this bug only if you already have access to the server, or if a malicious program runs
This is an EoP, or “elevation of privilege” vulnerability. In the scheme of things, those usually get flagged as “Important” rather than “Critical,” because you can’t ride in on them.
Nevertheless, with a proof-of-concept exploit floating around, it means that any crook who does get in, no matter how limited his user rights, _might_ be able to acquire root. You still need to know some addresses in the kernel in order to call the system functions that give you root, usually done with commit_creds(prepare_kernel_cred(0)); – but you may well be able to work those out in advance.
Linux still doesn’t have ASLR in the kernel (unlike Windows, but don’t say that too loudly, you’ll sound like a fanbuoy 🙂) so once you know the addresses of handy kernel calls in my kernel, which may be a stock kernel, you know them indefinitely. On Windows, kernel addresses are chosen at boot time so they vary from computer to computer and over time.
“I don’t think anyone’s saying it’s in the same league as Heartbleed.”
then why did you mention it at all?
I’ve explained why above (see reply to Jacques) and below (see reply to Anonymous).
The bugs are different in scope, but surprisingly similar in how they came about and escaped notice.
In terms of the side effects of exploitation – Heartbleed gives very many people a small chance of fragmentary glimpses into your private activities on a server. This bug gives a small number of people complete and total access to all your private activities on a server.
Different? Sure. Comparable? Definitely.
I think it is disingenuous to compare this with heartbleed, and has more to do with drawing readers than it does to fair comparison. Trying to compare the threat model of each is like comparing shoplifting to genocide.
Sorry…but I can’t agree with anything you said in this comment.
The two bugs (along with Apple’s recent “goto fail” bug in its SSL engine) are perfectly suited for comparison: all are very simple to fix (two or three lines of missing code); ought to have been obvious during coding; can be considered serious vulnerabilities; yet survived unnoticed in open source code for years. If nothing else, they are well-matched reminders that the aphorism “with many eyes, all bugs are shallow” is false, and that it’s dangerous to rely upon it.
As for the “threat model” of each (whatever that means – I presume you mean “threat”) being unsuitable for comparison, I wouldn’t dismiss this one so lightly.
Anyone who has ever used a shell account on a shared server in the past four years, and wants to work from a worst-case assumption (e.g. like assuming all their passwords were compromised because of Heartbleed), would be sensible to assume that everything they did with that shell account in the past four years – including opening SSH tunnels, for example – was under root-level scrutiny by any and all other users of the system.