Check out our technical paper on the ESX CPU scheduler in vSphere 4.1. It has been revised from the previous version to reflect a new feature: wide-VM NUMA support.
This paper attempts to answer the following questions:
- How is CPU time allocated between virtual machines? How well does it work?
- What is the difference between “strict” and “relaxed” co-scheduling? What is the performance impact of recent co-scheduling improvements?
- What is the “CPU scheduler cell”? What happened to the scheduler cell in ESX4?
- How does the ESX scheduler exploit underlying CPU architecture features such as multi-core, hyper-threading, and NUMA?
The following provides a brief summary of the paper:
ESX 4.1 introduces wide-VM NUMA support, which improves memory locality for memory-intensive workloads. Based on testing with microbenchmarks, the performance benefit can reach 11–17 percent.
In ESX 4, many improvements were introduced in the CPU scheduler, including further relaxed co-scheduling, lower lock contention, and multicore-aware load balancing. Co-scheduling overhead has been further reduced by accurately measuring co-scheduling skew and by allowing more scheduling choices. Lower lock contention is achieved by replacing the scheduler-cell lock with finer-grained locks. By eliminating the scheduler cell, a virtual machine can get higher aggregate cache capacity and memory bandwidth. Lastly, multicore-aware load balancing achieves high CPU utilization while minimizing the cost of migrations.
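To give a feel for the relaxed co-scheduling idea, here is a minimal sketch of per-vCPU skew tracking. This is not ESX code: the function name, data layout, and the threshold value are all hypothetical. It only illustrates the principle the paper describes, namely that skew is measured per vCPU against the slowest sibling, so only a vCPU that runs too far ahead needs to be descheduled, rather than stopping the whole virtual machine.

```python
# Hypothetical sketch of per-vCPU skew tracking for relaxed co-scheduling.
# Not ESX code; names and the threshold are illustrative assumptions.

SKEW_LIMIT_MS = 3.0  # hypothetical skew threshold

def too_far_ahead(progress_ms, vcpu):
    """Return True if `vcpu` should be descheduled because it has run
    SKEW_LIMIT_MS or more ahead of its slowest sibling vCPU."""
    slowest = min(progress_ms.values())
    return progress_ms[vcpu] - slowest >= SKEW_LIMIT_MS

# Guest progress (ms of CPU time) for a 3-vCPU virtual machine:
progress = {"vcpu0": 12.0, "vcpu1": 8.0, "vcpu2": 10.5}
# vcpu0 leads the slowest sibling (vcpu1) by 4 ms, beyond the limit,
# so only vcpu0 is stopped; vcpu1 and vcpu2 keep running.
```

Under a strict co-scheduling policy, exceeding the skew limit would instead stop all sibling vCPUs together, which is the overhead the relaxed approach avoids.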
Experimental results show that the ESX 4 CPU scheduler faithfully allocates CPU resources as specified by users. While maintaining the benefits of a proportional-share algorithm, the improvements in the co-scheduling and load-balancing algorithms are shown to benefit performance. Compared to ESX 3.5, ESX 4 significantly improves performance in both lightly loaded and heavily loaded systems.
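The core of a proportional-share policy can be sketched in a few lines. This is an illustrative toy, not the ESX scheduler: the `VM` and `pick_next` names are made up, and real schedulers add reservations, limits, and many other refinements. The idea is simply that each VM carries a share value, and the scheduler always runs the VM whose consumed CPU time per share lags furthest behind, so long-run CPU time converges to the share ratio.

```python
# Toy sketch of proportional-share CPU allocation; not ESX code.
# Names (VM, pick_next, simulate) are hypothetical.
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    shares: int          # user-specified entitlement weight
    used_ms: float = 0.0 # CPU time consumed so far

def pick_next(vms):
    # The VM with the lowest consumed-time-per-share runs next.
    return min(vms, key=lambda vm: vm.used_ms / vm.shares)

def simulate(vms, quantum_ms, slices):
    for _ in range(slices):
        pick_next(vms).used_ms += quantum_ms
    return {vm.name: vm.used_ms for vm in vms}

vms = [VM("a", shares=2000), VM("b", shares=1000)]
usage = simulate(vms, quantum_ms=10, slices=300)
# With a 2:1 share ratio and both VMs always runnable, "a" accumulates
# roughly twice the CPU time of "b": {'a': 2000.0, 'b': 1000.0}
```

Note that shares govern allocation only under contention; an idle VM's unused time is redistributed, which is why the paper stresses that the scheduler allocates resources "as specified by users" rather than as fixed partitions.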
The paper can be downloaded from http://www.vmware.com/resources/techresources/10131.