Friday, April 25, 2008

I/O: Maintainability vs Performance

I/O performance is of great importance to a hypervisor. I/O is also a huge maintenance burden, due to the large number of hardware devices that need to be supported, numerous I/O protocols, high availability options, and management for it all.

VMware opted for the performance option, by putting the I/O stack in the hypervisor. Unfortunately the VMware kernel is proprietary, so VMware has to write and maintain the entire I/O stack. That means a slower development rate, and that it may take a while for your hardware to be supported.

Xen took the maintainability route, by doing all I/O within a Linux guest, called "domain 0". By reusing Linux for I/O, the Xen maintainers don't have to write an entire I/O stack. Unfortunately, this eats away at performance: every interrupt has to go through the Xen scheduler so that Xen can switch to domain 0, and everything has to go through an additional layer of mapping.

Not that Xen solved the maintainability problem completely: the Xen domain 0 kernel is still stuck on the ancient Linux 2.6.18 release (whereas 2.6.25 is now available). These problems have led Fedora 9 to drop support for hosting Xen guests, leaving kvm as the sole hypervisor.

So how does kvm fare here? Like VMware, I/O is done within the hypervisor context, so full performance is retained. Like Xen, kvm reuses the entire Linux I/O stack, so kvm users enjoy the latest drivers and I/O stack improvements. Who said you can't have your cake and eat it?
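To make the "reuse the Linux I/O stack" point concrete, here is a hedged sketch of launching a kvm guest whose disk and network go through ordinary Linux files and drivers via paravirtual virtio devices (the image path and memory size are made-up examples, not from the post):

```shell
# Illustrative only: a kvm guest booted with virtio disk and network.
# The guest's "disk" is just a file on the host, serviced by the
# regular Linux block layer and filesystem code.
qemu-system-x86_64 \
    -enable-kvm \
    -m 1024 \
    -drive file=/var/lib/images/guest.img,if=virtio,cache=none \
    -net nic,model=virtio -net user
```

Because the host side of virtio is serviced by unmodified Linux drivers, any improvement to the host kernel's I/O stack benefits every guest automatically.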

4 comments:

Vikash Kumar Roy said...

So are you saying that since the VMware hypervisor is proprietary, it may not give better performance? I agree with you that I/O is a major factor in a hypervisor. I have added this article to my blog, and I can remove it if you don't want.

Avi Kivity said...

Vikash Kumar Roy: VMware ESX and KVM have very similar architecture, so they can have similar I/O performance. What I'm saying is that VMware will have to do a lot more work since they cannot reuse the open source I/O stack.

Anonymous said...

It seems to me that the hypothesis (esx IO performance ~= kvm IO performance) does not hold.

Today, I installed Kubuntu 10.10 with KVM and Windows 7.

I downloaded virtio 1.1.11-0 from Redhat, created a Windows 7 machine with --os-variant=virtio26 and installed Windows 7 using the two ISOs (the virtio driver ISO and the Microsoft ISO). Writeback caching in KVM was disabled, but other than that performance should be well tuned.

Installation progressed with about 2 MiB/sec, while raw disk write speed (sequential) is ~88MiB/sec on this laptop.

It seems that IO in KVM is extremely slow.

Compare VMware and VirtualBox which both do a very rapid Win7 (= mostly sequential IO) install on this box.

It seems there are still some IO issues to be taken care of in KVM :-/.

similar experience, more details

Avi Kivity said...

Anonymous, are you sure you have kvm configured correctly? 2 MB/s is ridiculously low.

See for example the recent SPECvirt_sc2010 submission, where kvm tops the results. It's not a pure I/O benchmark, but it's certainly a factor.