Xen fixes another “virtual machine escape” bug

The Xen Project has just issued a rather important patch for its widely-used hypervisor.

Here’s why it’s important.

A hypervisor is software that lets you carve up a physical server into multiple virtual computers, or VMs (short for virtual machines).

One very common use for Xen is to divide up racks of super-powerful servers in your operations centre into an even larger number of VMs for flexibility, redundancy and scalability.

In a rather obvious metaphor, the real servers are known as hosts, and the VMs as guests.

Each guest VM will run more slowly than the host it’s running on, because the hypervisor adds a layer of overhead, but those guests are a lot more flexible.

You can run a different operating system in each guest; you can squeeze in a few more guests in an emergency; and you can stop or start individual guests at will without affecting any of the others.

You can even move guests from one physical server to another to make maintenance easier in your operations centre, or to take advantage of different power costs in different centres.

Of course, the security of each guest depends heavily on the correctness of the hypervisor software, which needs to fling a sort of “security blanket” around every VM.

Blissful ignorance

For security reasons, each guest must remain in blissful ignorance of all the others that might be running on the same server, even though they share the same physical CPU, memory and peripheral devices such as network cards.

The secure separation of VMs is especially important in a hosted service provider’s environment, where different customers – perhaps even competitors – might end up running guests on the same physical host.

Sometimes, however, the separation breaks down due to security holes, and what’s known as a VM escape becomes possible.

We wrote about a Xen escape dubbed the “Venom” bug back in May 2015, where an attacker could exploit vulnerabilities in the guest’s floppy disk drive support code.

In fact, even if you configured a VM not to have a virtual floppy disk drive at all, the vulnerable drive emulation code was nevertheless loaded on the host and could be triggered from inside the guest.

So, at least in the case of Venom, patching the Xen software running on the host operating system was the only practical way around the problem.

This latest patch fixes a similar bug in the guest CD-ROM support.

A vulnerable host could be taken over by an attacker who is supposed to be confined to an individual guest.

The dangers of a VM escape

Generally speaking, if you’re a crook who can escape from the guest into the host itself, you score a sort of double victory:

  • You’re directly on the host server’s local network, typically inside an operations centre, instead of on a guest virtual network that “skips over” the internal network and emerges directly on the internet.
  • You can probably get back into all the other guests on the same server, and see what they’re up to.

A guest-to-host escape is a bit like a prisoner who breaks out of his cell but, instead of emerging into the prison car park, ends up in the guardhouse where the keys to all the other cells are stored.

He can not only see what the prison authorities are up to, but also let all his fellow inmates out of their cells at will.

What to do?

In short, this is an important bug to patch, because VM escapes undermine the fundamental promise of virtualisation, namely the secure separation of all guests on a host.

Unlike with the Venom bug, however, there is a workaround: if you start a guest without a virtual CD-ROM drive installed, the vulnerability is not exposed, and therefore isn’t exploitable inside that guest.
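As a rough sketch of what that workaround looks like in practice, here is a hypothetical Xen guest configuration file for the xl toolstack. The guest name, paths and sizes are illustrative inventions, not details from the advisory; the point is simply that the emulated CD-ROM is attached (or not) via a `:cdrom` entry in the guest’s `disk` list:

```
# Hypothetical /etc/xen/example-guest.cfg -- all names and paths
# here are illustrative, not taken from the advisory.

name   = "example-guest"
memory = 2048
vcpus  = 2

# Risky configuration: an emulated CD-ROM is attached, e.g. so an
# installer ISO can be booted from "outside" the guest:
#
# disk = [ 'file:/srv/xen/example-guest.img,xvda,w',
#          'file:/srv/iso/installer.iso,hdc:cdrom,r' ]

# Workaround: leave the :cdrom entry out entirely, so the vulnerable
# CD-ROM emulation is never exposed to this guest.
disk = [ 'file:/srv/xen/example-guest.img,xvda,w' ]
```

In other words, the mitigation is a matter of what you attach at guest creation time, which is why it helps only if your provisioning process can do without a virtual CD-ROM in the first place.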

However, in many hosted environments, VMs are routinely provisioned with CD-ROM support as a simple way of installing an operating system and application software stack from “outside”.

In other words, if you run guests on Xen that include emulated CD-ROMs, you need the patch.