To recap, here are some things you should know about Virtual Machine Device Queues (VMDq) from Intel and NetQueue from VMware:
- VMDq is the base technology; NetQueue is the software feature baked into VMware ESX.
- You need this if you want to take advantage of 10 Gigabit Ethernet in your virtual machines. Without it, you max out at about 3-4 Gb/s. With NetQueue, Shefali was showing ESX throughput of 9.3-9.4 Gb/s.
- It offloads the work that ESX has to do to route packets to virtual machines, so using NetQueue frees up CPU and reduces latency.
- This technology runs down on the Ethernet controller hardware and exists today, so you don’t need to wait for Nehalem.
- NetQueue is supported on VMware ESX 3.5 Updates 1 and 2.
But aside from the networking, given recent conversations on memory overcommit, this is worth noting:
The benefits of this 18:1 consolidation include an 85-90% reduction in
power usage, resulting in $348,000 in savings even without taking into
account reduced cooling costs. What I found especially interesting when I
was talking with Bjoern was that because of the memory page sharing
technologies in ESX, instead of the specified 36GB of RAM usage by the
36 virtual machines, they were seeing only about 20GB used — and again,
all without a perceptible hit on performance.
Essentially, emulating virtual network adapters imposes CPU overhead for
high-speed I/O devices, such as 10 Gigabit Ethernet. In 2009, VMware
expects to be able to bypass emulation for the virtual network adapter
and interact directly with the hardware. This uses Intel VT-d to do
address translation and protection. In the keynote, they demonstrated a
1.7x performance increase in the virtual machine using VMDirectPath,
because the CPU is no longer doing network device emulation.
Their list of "common sources of errors and anomalies" is worth a
paper of its own, as you can tell it comes from long experience, but
for this blog post let me just hit the headers of their slides on
"common pitfalls." After reading this, I hope you will think twice
before just firing up a quick timer on a process in a virtual machine.
It’s probably not telling you what you think it’s telling you! (Most
real-world virtualized workloads are not performance-bound, anyway, but
that’s a whole other conversation.)
Today the topic was SRM and our guest was Jon Bock. If you’ve wondered what SRM does, what components it’s actually made up of, what you might use it for, and how to operationalize your DR plan, then check us out this week.
Due to popular demand, I ran the audio file through the magic Levelator, so now you can safely listen to it on headphones without blowing your eardrums out. I’ve also put in some hopefully useful information in the mp3 metadata fields.
As always, listen by clicking over there on the right or by downloading the mp3 (47:05). Feeds: podcast, iTunes.
For spouses traveling to Las Vegas with their VMworld-attending partners, my wife Crystal has volunteered to loosely organize some activities. She’s been working on this for quite some time, and here’s a rough schedule and some additional information.
I’m over at the Intel Developer Forum this week and blogging over at Intel’s IDF blog. Over the next day or two, I’ll also try to touch on some recent VMware-Intel developments, including Extended VMotion and NetQueue.
Rich Brunner from VMware touched on a number of topics, but the basic challenge he talked about is the “all your eggs in one basket” problem as we build the datacenter of the future. As CPUs become more capable, virtual machine density becomes higher. Imagine a future 8-way server, with each processor having 16 cores (I said future), and 8 virtual machines on each core. That’s 8 x 16 x 8 = 1024 virtual machines on this hypothetical future piece of hardware. In this future, one memory error can crash the whole physical server, bringing down 1024 virtual machines. Yes, you will have your second hot server standing by, but it seems like we should be able to do better than crashing all 1024 virtual machines for an itty bitty memory error.
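The density math above is worth making explicit; the counts here are the hypothetical figures from Rich Brunner’s talk, not a real product spec:

```python
# Hypothetical future server from the talk -- all figures are illustrative.
sockets = 8           # an 8-way server
cores_per_socket = 16 # future 16-core processors
vms_per_core = 8      # 8 virtual machines per core

# One uncorrected memory error on the host takes down every one of these VMs.
total_vms = sockets * cores_per_socket * vms_per_core
print(total_vms)  # 1024
```

The point of the multiplication is that the blast radius of a single hardware fault scales with consolidation density, which is why better-than-crash handling of memory errors matters.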
The Open Virtual Machine Format (OVF) has some interesting movement around it in the ecosystem. I’m probably not capturing the subtleties here, but you can think of OVF as a standard packaging format for virtual machines.
Dialing the wayback machine to June, Chris Wolf gives some context around Steve Herrod’s talk at the Burton Group Catalyst conference, and thinks that OVF could evolve from an appliance format into something much more like a vendor-neutral .vmx file. Link: Catalyst Day 2 Virtualization Highlights at ChrisWolf.com.
I think we do a disservice to the OVF standard and the people working on it if we just see OVF as a way for the chess players to move their pieces around the board. I see it as a way to get things done — case in point: importing OVF-based appliances into ESXi via a menu item.
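For the curious, an OVF package is built around an XML descriptor. Here is a minimal sketch of what one looks like; the element names follow the OVF envelope schema, but the file names and IDs below are invented for illustration, and a real descriptor carries considerably more detail (virtual hardware, product info, licensing):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative OVF descriptor sketch; identifiers are made up. -->
<Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1"
          xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1">
  <References>
    <!-- The disk image shipped alongside this descriptor -->
    <File ovf:id="file1" ovf:href="my-appliance.vmdk"/>
  </References>
  <DiskSection>
    <Info>Virtual disks used by the appliance</Info>
    <Disk ovf:diskId="vmdisk1" ovf:fileRef="file1" ovf:capacity="8589934592"/>
  </DiskSection>
  <VirtualSystem ovf:id="my-appliance">
    <Info>A single virtual machine</Info>
    <!-- A VirtualHardwareSection here would describe CPU, memory, devices -->
  </VirtualSystem>
</Envelope>
```

Because the descriptor is vendor-neutral metadata plus disk images, any hypervisor that understands the schema can import the same package, which is exactly what the ESXi import menu item does.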
Virtual appliances represent a streamlined way to develop, deliver,
manage and deploy enterprise software stacks and they have gained a
great deal of traction in the market over the past couple of years.
VMware’s Virtual Appliance Marketplace has grown to 850+ virtual
appliances and VMware’s products are providing greater access to
virtual appliance content with each release. To date, no integration is
more substantial than what has been done in the pairing of VI Client
with ESX/ESXi 3.5.
Today, any user with access to VMware’s free ESXi and VI Client can
import a number of OVF-based virtual appliances directly into their
environment and power on an enterprise workload within minutes of first
boot.
In addition to the express patch and the re-issued ESX/ESXi 3.5 Update
2 release, we now have an alternative installation process for
customers who haven’t applied either to hosts that were affected by the
product expiration issue.
The following message is applicable ONLY for customers who had
installed the impacted release of ESX 3.5 Update 2 (build number
103908), but not yet applied the express patch.
We are aware that you may encounter the following challenges installing the express patches needed to correct the problem:
Internal change control procedures
No available server to VMotion running VMs onto
Unable to schedule a maintenance window
If you experience one of the challenges listed above, please contact
your support provider and indicate you need assistance with the U2
Alternative Install Process (U2 AIP). The support team can assist
customers with this alternative installation procedure.
Anything is possible. Many of you have proven that to be true. VMware
wants to hear your story about how you’ve accomplished the virtually
impossible with VMware virtualization products and solutions. Your
story may be featured throughout the conference in networking lounges,
registration areas, hallways and general sessions.