Two more entries from my visit to the Intel Developer Forum:
To recap, here are some things you should know about Virtual Machine Device Queues (VMDq) from Intel and Netqueue from VMware:
- VMDq is the base hardware technology; Netqueue is the software feature baked into VMware ESX.
- You need this if you want to take advantage of 10 Gigabit Ethernet in your virtual machines. Without it, you max out at about 3-4 Gbps. With Netqueue, Shefali was showing ESX with throughput of 9.3-9.4 Gbps.
- It offloads the work that ESX has to do to route packets to virtual machines, so using Netqueue frees up CPU and reduces latency (a toy sketch of the idea follows this list).
- This is a technology that runs down on the Ethernet controller hardware and exists today, so you don’t need to wait for Nehalem.
- Netqueue is supported on VMware ESX 3.5 Updates 1 and 2.
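To give a sense of what the controller is doing, here is a toy Python sketch of the idea behind VMDq. This is my own illustration, not Intel's or VMware's code: the hardware sorts incoming frames into per-VM receive queues by destination MAC address, so the hypervisor no longer has to demultiplex every packet in software. The MAC addresses and queue IDs are made up.

```python
# Toy model of VMDq-style receive-queue sorting. Purely illustrative;
# the real sorting happens in the Ethernet controller, not in software.
from collections import defaultdict

# Hypothetical mapping of virtual machine MAC addresses to queue IDs.
VM_QUEUES = {
    "00:50:56:aa:00:01": 1,   # queue for VM 1
    "00:50:56:aa:00:02": 2,   # queue for VM 2
}
DEFAULT_QUEUE = 0             # hypervisor handles anything unmatched

def sort_frames(frames):
    """Place each frame into the receive queue of the VM that owns its
    destination MAC, mimicking what VMDq does in hardware."""
    queues = defaultdict(list)
    for frame in frames:
        queue_id = VM_QUEUES.get(frame["dst_mac"], DEFAULT_QUEUE)
        queues[queue_id].append(frame)
    return queues

if __name__ == "__main__":
    incoming = [
        {"dst_mac": "00:50:56:aa:00:01", "payload": b"to vm1"},
        {"dst_mac": "00:50:56:aa:00:02", "payload": b"to vm2"},
        {"dst_mac": "ff:ff:ff:ff:ff:ff", "payload": b"broadcast"},
    ]
    for qid, frames in sorted(sort_frames(incoming).items()):
        print(f"queue {qid}: {len(frames)} frame(s)")
```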
But aside from the networking, given recent conversations on memory overcommit, this is worth noting:
The benefits of this 18:1 consolidation include an 85-90% reduction in
power usage, resulting in $348,000 in savings even without taking into
account reduced cooling costs. What I found especially interesting when I
was talking with Bjoern was that because of the memory page sharing
technologies in ESX, instead of the specified 36GB of RAM usage by the
36 virtual machines, they were seeing only about 20GB used — and again,
all without a perceptible hit on performance.
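For context on how that happens, here is a minimal, purely conceptual Python sketch of content-based page sharing: identical pages across VMs are detected by hashing their contents and backed by a single physical copy, which is how 36 virtual machines' worth of specified RAM can fit in roughly 20GB. The VM names, page counts, and overlap here are invented for illustration.

```python
# Toy model of content-based page sharing. Illustrative only; ESX's
# transparent page sharing works on 4KB pages with copy-on-write.
import hashlib

def physical_pages_needed(vm_pages):
    """Count unique page contents across all VMs; identical pages are
    assumed to be backed by one shared physical page."""
    unique_hashes = set()
    total = 0
    for pages in vm_pages.values():
        for page in pages:
            total += 1
            unique_hashes.add(hashlib.sha256(page).hexdigest())
    return total, len(unique_hashes)

if __name__ == "__main__":
    # Three hypothetical VMs whose pages overlap heavily (e.g. the same
    # guest OS code pages), plus some unique data pages each.
    common = [f"os-page-{i}".encode() for i in range(800)]
    vms = {
        f"vm{n}": common + [f"vm{n}-data-{i}".encode() for i in range(200)]
        for n in range(3)
    }
    total, unique = physical_pages_needed(vms)
    print(f"pages specified: {total}, physical pages needed: {unique}")
```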
And the second blog post: IDF@Intel · I/O pass-through lets us have our virtual cake and eat it, too.
Essentially, emulating a virtual network adapter imposes CPU overhead for
high-speed I/O devices such as 10 Gigabit Ethernet. In 2009, VMware
expects to be able to bypass emulation of the virtual network adapter
and let the virtual machine interact directly with the hardware. This
uses Intel VT-d to do address translation and protection. In the
keynote, they demonstrated a 1.7x performance increase in the virtual
machine using VMDirectPath, because the CPU is no longer doing network
device emulation.
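To make the VT-d part concrete, here is a toy Python sketch, again my own illustration rather than Intel's or VMware's implementation, of what an IOMMU-style remapping table does: DMA addresses issued on behalf of the guest are translated to host-physical addresses, and anything outside the guest's mapped, permitted range is blocked. That translation-plus-protection step is what makes handing real hardware to a VM safe. The page numbers and addresses below are arbitrary.

```python
# Toy model of IOMMU (VT-d style) DMA remapping: translate a guest's DMA
# address to a host-physical address and enforce access permissions.
PAGE_SIZE = 4096

class DmaRemapError(Exception):
    pass

class IommuTable:
    def __init__(self):
        # guest page frame number -> (host page frame number, writable)
        self.mappings = {}

    def map(self, guest_pfn, host_pfn, writable=True):
        self.mappings[guest_pfn] = (host_pfn, writable)

    def translate(self, guest_addr, write):
        pfn, offset = divmod(guest_addr, PAGE_SIZE)
        entry = self.mappings.get(pfn)
        if entry is None:
            raise DmaRemapError(f"DMA to unmapped address {guest_addr:#x} blocked")
        host_pfn, writable = entry
        if write and not writable:
            raise DmaRemapError(f"DMA write to read-only page {pfn:#x} blocked")
        return host_pfn * PAGE_SIZE + offset

if __name__ == "__main__":
    iommu = IommuTable()
    iommu.map(guest_pfn=0x10, host_pfn=0x9a2, writable=True)
    print(hex(iommu.translate(0x10123, write=True)))  # translated OK
    try:
        iommu.translate(0x20000, write=False)          # not mapped: blocked
    except DmaRemapError as e:
        print(e)
```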