Howto: QEMU w/ Ubuntu Xenial host + UEFI guest

At this point, running a UEFI virtual machine with QEMU is still a fairly obscure use-case for which it can be difficult to find good examples. And to complicate things, changes in the ovmf and qemu-system-x86 packages in Ubuntu Xenial mean that the examples you found in the past might no longer work (when using Xenial as the host).

I recently had to sort this out, so I thought it worthwhile to share what I learned.

In short, here’s the deal: As of Xenial, the OVMF firmware (a UEFI implementation for QEMU) has been split into two files. OVMF_CODE.fd contains the actual UEFI firmware, and OVMF_VARS.fd is a “template” used to emulate persistent NVRAM storage. All VM instances can share the same system-wide, read-only OVMF_CODE.fd file from the ovmf package, but each instance needs a private, writable copy of OVMF_VARS.fd.

To work through all the examples below, you’ll need to be running Ubuntu Xenial as your host operating system. I believe these examples should likewise work fine on Debian testing, although I haven’t personally confirmed this.

First, you’ll need to install the qemu-system-x86 and ovmf packages like this:

sudo apt-get install qemu-system-x86 ovmf

You’ll also need an installation ISO for your favorite Linux distro. I used the latest Ubuntu Xenial daily desktop ISO, but a recent-ish version of any distro should work fine as long as it supports UEFI well enough.

BIOS

If you’re new to QEMU (or need a refresher), you can install a BIOS-mode VM with something like this:

qemu-img create -f qcow2 example.qcow2 16G
qemu-system-x86_64 -m 1G -enable-kvm -vga qxl \
    -drive if=virtio,file=example.qcow2 \
    -cdrom xenial-desktop-amd64.iso

And after the above VM has been shut down, you can subsequently boot the installed OS like this:

qemu-system-x86_64 -m 1G -enable-kvm -vga qxl \
    -drive if=virtio,file=example.qcow2

(The only difference is that the -cdrom argument is dropped when subsequently powering up your VM.)

UEFI (the old way)

A few years ago, when I first started experimenting with UEFI VMs under QEMU, the examples I found all used the -bios option to specify the path of the (unified/legacy) OVMF firmware image file, thereby putting the VM into UEFI mode.

So prior to switching to Xenial as my daily driver, I would install a UEFI-mode VM with something like this:

qemu-img create -f qcow2 example.qcow2 16G
qemu-system-x86_64 -m 1G -enable-kvm -vga qxl \
    -bios /usr/share/ovmf/OVMF.fd \
    -drive if=virtio,file=example.qcow2 \
    -cdrom xenial-desktop-amd64.iso

And after the above VM had been shut down, I would subsequently boot the installed OS like this:

qemu-system-x86_64 -m 1G -enable-kvm -vga qxl \
    -bios /usr/share/ovmf/OVMF.fd \
    -drive if=virtio,file=example.qcow2

This works great on hosts running Ubuntu 14.04 (Trusty) through Ubuntu 15.10 (Wily).

This kinda sorta almost works on a host running Ubuntu Xenial, save for one huge problem: you can’t non-interactively boot into your VM on subsequent invocations of qemu-system-x86_64, because you haven’t provided a mechanism for storing the (emulated) NVRAM state.

I don’t understand all the details of why this doesn’t work from a Xenial host. The ovmf package in Xenial still seemingly contains an equivalent unified/legacy OVMF firmware image, so my guess is that changes in QEMU are also at play.

UEFI (the new way)

As I mentioned, all VM instances can share the same system-wide, read-only OVMF_CODE.fd file from the ovmf package, but each instance needs a private, writable copy of OVMF_VARS.fd.

So a key change from the previous example is that we first need to copy the /usr/share/OVMF/OVMF_VARS.fd file from the ovmf package to a private, writable file that our VM will use.

On a Xenial host, you can install a UEFI-mode VM with something like this:

cp /usr/share/OVMF/OVMF_VARS.fd example_OVMF_VARS.fd
qemu-img create -f qcow2 example.qcow2 16G
qemu-system-x86_64 -m 1G -enable-kvm -vga qxl \
    -drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd \
    -drive if=pflash,format=raw,file=example_OVMF_VARS.fd \
    -drive if=virtio,file=example.qcow2 \
    -cdrom xenial-desktop-amd64.iso

And after the above VM has been shut down, you can subsequently boot the installed OS like this:

qemu-system-x86_64 -m 1G -enable-kvm -vga qxl \
    -drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd \
    -drive if=pflash,format=raw,file=example_OVMF_VARS.fd \
    -drive if=virtio,file=example.qcow2

Several things in the above commands are worth drawing attention to:

  • The -drive arguments for OVMF_CODE.fd and OVMF_VARS.fd must come one after another and in that order

  • The -drive argument for OVMF_CODE.fd must include the readonly option, otherwise QEMU will try to open the OVMF_CODE.fd file in read-write mode, causing QEMU to exit with an error

  • The same private OVMF_VARS.fd copy should be used throughout the lifetime of a VM instance

  • The -bios argument is no longer used, and is considered deprecated
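If you find yourself typing all of that often, the whole sequence can be wrapped in a small launcher. This is just a sketch of my own making: the function name, the 1G/16G sizing, and the OVMF_CODE / OVMF_VARS_TEMPLATE override variables are my conventions, not anything QEMU or the ovmf package defines (the default paths are the Xenial ones from above).

```shell
# launch_uefi_vm NAME [INSTALL_ISO]
# Boots (or installs) a UEFI-mode VM, seeding the private NVRAM copy
# and creating the disk image on first use.
launch_uefi_vm() {
    name=$1
    iso=${2:-}

    code=${OVMF_CODE:-/usr/share/OVMF/OVMF_CODE.fd}
    template=${OVMF_VARS_TEMPLATE:-/usr/share/OVMF/OVMF_VARS.fd}
    vars=${name}_OVMF_VARS.fd
    disk=${name}.qcow2

    # Each VM instance gets its own writable NVRAM copy, created once
    # from the read-only template shipped in the ovmf package.
    if [ ! -e "$vars" ]; then
        cp "$template" "$vars"
    fi

    # Create the disk image on first run.
    if [ ! -e "$disk" ]; then
        qemu-img create -f qcow2 "$disk" 16G
    fi

    # The CODE and VARS pflash drives must come in this order,
    # with CODE marked readonly.
    set -- qemu-system-x86_64 -m 1G -enable-kvm -vga qxl \
        -drive if=pflash,format=raw,readonly,file="$code" \
        -drive if=pflash,format=raw,file="$vars" \
        -drive if=virtio,file="$disk"

    # Pass -cdrom only when an installation ISO was given.
    if [ -n "$iso" ]; then
        set -- "$@" -cdrom "$iso"
    fi

    "$@"
}
```

Installing is then launch_uefi_vm example xenial-desktop-amd64.iso, and every later boot is just launch_uefi_vm example.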

Thanks!

Of course I need to give credit to the people who helped me figure this all out.

  • Huge thanks to rharper in #ubuntu-devel for pointing me in the right direction and helping me work through my initial goofs

  • Thanks to everyone who contributed to the conversation in Debian bug #764918

  • Thanks to Laszlo Ersek for authoring the excellent OVMF whitepaper and for his work on OVMF itself

  • Thanks to Steve Langasek for maintaining the ovmf package in Debian and Ubuntu, which means I’ve never had to build OVMF myself just to use it!

One more thing

If you’re like me, you’ll probably constantly forget the locations of the files installed by the ovmf package.

Remember that dpkg -L is your friend:

$ dpkg -L ovmf
/.
/usr
/usr/share
/usr/share/qemu
/usr/share/doc
/usr/share/doc/ovmf
/usr/share/doc/ovmf/copyright
/usr/share/doc/ovmf/changelog.Debian.gz
/usr/share/ovmf
/usr/share/ovmf/OVMF.fd
/usr/share/OVMF
/usr/share/OVMF/OVMF_CODE.fd
/usr/share/OVMF/OVMF_VARS.fd

Oh the mixed case directory names!
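And if even that listing is too much scrolling, you can filter it down to just the firmware images. (The "|| true" is only there to keep the one-liner harmless on hosts where the ovmf package isn’t installed.)

```shell
# Show only the firmware image files shipped by the ovmf package.
dpkg -L ovmf 2>/dev/null | grep -i '\.fd$' || true
```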