I’ve been involved recently in a couple of situations in which the ‘preferHT’ advanced setting was implemented, but for the wrong reasons, so I want to clarify how and when it should be used. As with many advanced settings, it can be helpful or harmful.
“PreferHT exposes Hyper-Threading to the guest operating system” – False!
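For reference, the setting is applied per virtual machine in the .vmx file (or through the VM's Advanced Configuration Parameters); a minimal sketch:

```
numa.vcpu.preferHT = "TRUE"
```

When set to TRUE, the NUMA scheduler prefers consolidating the VM’s vCPUs onto the logical processors (hyperthreads) of a single NUMA node for better memory locality, rather than spreading them across cores on multiple nodes. It changes scheduler placement only; it does not change what the guest operating system sees.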
The VMware Mobile Knowledge Portal iOS and Android app has recently been updated. It sports a great new look and feel and makes finding the information you need even easier by grouping it by area in our SDDC vision.
More recent versions of Microsoft operating systems can detect whether they are running virtualized. This is done by checking a CPUID hypervisor-present bit presented by the VMware virtual hardware. Since virtual hardware version 7, VMware has implemented this interface, which is required by the Microsoft SVVP program.
However, as Microsoft continues to update its specifications, let’s look at a specific behavior in which virtual machine performance can be impacted by the operating system accessing a time source inefficiently. Continue reading →
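The hypervisor-present bit lives in CPUID leaf 1, where bit 31 of the ECX register is reserved for this purpose. A minimal sketch of the check itself (the ECX values below are illustrative constants, not read from real hardware):

```python
# CPUID leaf 1 returns feature flags in ECX; bit 31 is the hypervisor-present
# bit that VMware virtual hardware (version 7 and later) sets for the guest.
HYPERVISOR_BIT = 1 << 31

def hypervisor_present(ecx: int) -> bool:
    """Return True if the hypervisor-present bit is set in CPUID.1:ECX."""
    return bool(ecx & HYPERVISOR_BIT)

# Illustrative ECX values (hypothetical, not read from a real CPU here):
print(hypervisor_present(0x8000_0000))  # True  - bit 31 set, as inside a VM
print(hypervisor_present(0x0000_0000))  # False - bit clear, as on bare metal
```

A guest OS that sees this bit set knows it is virtualized and can choose virtualization-friendly behaviors, such as which time source to use.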
There is a lot of outdated information regarding a vSphere feature that changes how a virtual machine’s logical processors are presented, as a specific socket and core configuration. This advanced setting is commonly known as corespersocket.
It was originally intended to address licensing issues, where some operating systems limited the number of sockets that could be used but placed no limit on core count.
It’s often been said that this change of processor presentation does not affect performance, but it can, by influencing the sizing and presentation of virtual NUMA to the guest operating system. Continue reading →
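As a sketch of how the setting is expressed, assuming a hypothetical 8-vCPU virtual machine, the .vmx entries below would present 2 sockets with 4 cores each instead of the default 8 single-core sockets:

```
numvcpus             = "8"
cpuid.coresPerSocket = "4"
```

The total vCPU count must be evenly divisible by the coresPerSocket value; the resulting socket count is simply vCPUs divided by cores per socket.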
I previously mentioned performance changes in vSphere Replication 5.5, and in this post I’ll take a look at some of the things our tireless engineers in development have done to make replication much quicker in this newest release.
The changes they’ve made fall into three main areas:
Improved buffering algorithms at the source hosts resulting in better read performance with less load on the host and better network transfer performance
More efficient TCP algorithms at the source site resulting in better latency handling
More efficient buffering algorithms at the target site resulting in better write performance with less load on the host
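The first and third points rest on a general principle that is easy to demonstrate: larger buffers move the same data with far fewer I/O requests, which lowers per-call overhead on the host. The sketch below is a generic illustration of that principle, not vSphere Replication’s actual code:

```python
import os
import tempfile

def count_reads(path, chunk_size):
    """Read the whole file in chunk_size pieces; return the number of read() calls."""
    reads = 0
    with open(path, "rb", buffering=0) as f:  # unbuffered: one syscall per read()
        while f.read(chunk_size):
            reads += 1
    return reads

# Scratch file standing in for 1 MiB of replicated disk data.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(1024 * 1024))
    path = tmp.name

small = count_reads(path, 4 * 1024)    # 4 KiB chunks
large = count_reads(path, 64 * 1024)   # 64 KiB chunks
print(small, large)  # 256 calls vs 16 calls for the same 1 MiB
os.remove(path)
```

Sixteen large reads deliver the same bytes as 256 small ones, so the host spends less time in I/O bookkeeping per megabyte replicated; the same logic applies to write buffering at the target site.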