Continuing the network performance theme…
Boon Seong Ang, Senior Staff Engineer for VMkernel IO, wrote the following response to one of our SEs. The SE’s customer was experiencing some virtual network performance issues with UDP.
Considerations for virtual network performance:
- (1) vNIC types: do not use vlance. vlance is our oldest vNIC and does not perform well. If a VM is configured with a "Flexible" vNIC and VMware Tools is not installed, the vNIC falls back to vlance (pcnet32). (To confirm which vNIC a Linux guest actually ended up with, see the driver-check sketch after this list.)
- (2) In addition, for UDP, use either the e1000 vNIC or, with ESX 4, vmxnet3, so that a larger vNIC Rx ring size can be configured. UDP traffic can be much burstier than TCP because it has no flow control, and a larger Rx ring provides the buffering/elasticity needed to absorb those bursts. Both the e1000 vNIC and our new vmxnet3 vNIC allow the Rx ring to be resized, up to roughly one to two thousand buffers. As a side note, a larger ring does carry some negative performance impact due to its larger memory footprint. The new vmxnet3 vNIC is more efficient than the e1000 vNIC, and in general ESX 4 includes performance improvements over ESX 3.5. (The UDP burst sketch after this list illustrates the buffering effect at the socket level.)
- (3) In the past, we have seen better networking performance with RHEL than with Fedora Core.
- (4) If there are many more virtual CPUs than physical CPUs in a server, they will contend for physical CPU cycles. While a VM's virtual CPU waits for its turn to run, the VM cannot process received packets, which makes network packet drops more likely. A larger Rx ring size may help, but only up to a point, depending on the degree of over-commit. (The back-of-the-envelope calculation after this list shows how ring size bounds the scheduling stall a VM can absorb.)
- (5) Also consider the number of virtual CPUs in a VM. More virtual CPUs can have detrimental effects because of the added coordination required between them; if a uniprocessor VM suffices, it sometimes performs better.
- (6) Finally, if the customer can use the newest processors, e.g. Intel's Nehalem (Xeon 5500 series), the boost from the hardware improvement is quite substantial.
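
For point (1): to check from inside a Linux guest which vNIC it really got (for example, whether a "Flexible" adapter fell back to pcnet32 because VMware Tools is missing), you can look at which kernel driver is bound to each interface. This is a minimal sketch that reads the standard Linux sysfs driver symlink; it is a diagnostic illustration, not a VMware tool, and the pcnet32/vlance mapping is as described above.

```python
#!/usr/bin/env python
"""List the kernel driver bound to each NIC inside a Linux guest.

A "Flexible" vNIC that shows up as pcnet32 means the VM is running on
vlance (VMware Tools not installed); vmxnet/vmxnet3/e1000 are the
faster alternatives discussed above.
"""
import glob
import os

NET_SYSFS = "/sys/class/net"

def nic_drivers():
    """Yield (interface, driver) pairs from sysfs."""
    for path in sorted(glob.glob(os.path.join(NET_SYSFS, "*"))):
        iface = os.path.basename(path)
        drv_link = os.path.join(path, "device", "driver")
        if not os.path.islink(drv_link):
            continue  # loopback and other pseudo-devices have no driver link
        yield iface, os.path.basename(os.readlink(drv_link))

if __name__ == "__main__":
    for iface, driver in nic_drivers():
        note = "  <-- vlance; consider e1000/vmxnet3" if driver == "pcnet32" else ""
        print("%-8s %s%s" % (iface, driver, note))
```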
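For point (2): the vNIC Rx ring itself is sized through the VM and driver configuration, but the same buffering/elasticity principle can be demonstrated at the guest socket layer. The sketch below is only an illustration under that analogy: a sender blasts a burst of datagrams at a receiver that is not yet reading (standing in for a stalled VM), and a larger receive buffer absorbs more of the burst. Exact numbers depend on the OS and on limits such as net.core.rmem_max on Linux.

```python
#!/usr/bin/env python3
"""Illustrate why buffering matters for bursty UDP (no flow control)."""
import socket

BURST = 2000          # datagrams per burst
PAYLOAD = b"x" * 512  # 512-byte datagrams

def run(rcvbuf_bytes):
    """Send a burst into an idle receiver, then count what survived."""
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, rcvbuf_bytes)
    rx.bind(("127.0.0.1", 0))
    addr = rx.getsockname()

    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for _ in range(BURST):          # burst arrives while the receiver is "stalled"
        tx.sendto(PAYLOAD, addr)
    tx.close()

    # Drain whatever the receive buffer managed to hold.
    rx.settimeout(0.2)
    received = 0
    try:
        while True:
            rx.recv(len(PAYLOAD))
            received += 1
    except socket.timeout:
        pass
    rx.close()
    return received

if __name__ == "__main__":
    for size in (32 * 1024, 1024 * 1024):
        got = run(size)
        print("SO_RCVBUF=%7d bytes: received %4d of %d datagrams" % (size, got, BURST))
```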
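For point (4): the limit on what a larger Rx ring can do under CPU over-commit can be made concrete with a rough calculation. At a steady incoming packet rate, the ring fills while the VM is descheduled, so it can only cover a stall of about ring_size / packet_rate before drops begin. The packet rate and ring sizes below are illustrative assumptions, not measurements.

```python
#!/usr/bin/env python3
"""Back-of-the-envelope: how long a vCPU stall an Rx ring can absorb."""

def max_stall_ms(ring_buffers, packets_per_sec):
    """Approximate stall (ms) the ring absorbs before packets are dropped."""
    return 1000.0 * ring_buffers / packets_per_sec

if __name__ == "__main__":
    rate = 50000  # example: 50k incoming UDP packets/sec
    for ring in (256, 1024, 2048):
        print("ring=%4d buffers @ %d pkt/s -> ~%.1f ms of stall absorbed"
              % (ring, rate, max_stall_ms(ring, rate)))
```

At 50k packets/sec, even a 2048-entry ring only rides out roughly 40 ms of scheduling delay, which is why over-commit eventually causes drops no matter how large the ring is made.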