Author Archives: Deji

RUSH POST: VMware Tools and RSS Incompatibility Issues

UPDATE II:

We have released VMware Tools Version 10.2.0, with more fixes and improvements for VMXNet3 vNICs. Please download and update your VMware Tools, even if only for the additional improvements in the VMXNet3 vNIC drivers.

The recommended driver version for Windows and Microsoft Business Critical applications is 1.7.3.7.

UPDATE:

We have just released a new version of the VMware Tools which fixes the issue described in this post (below).

Please download and install this version of the VMware Tools, especially if you are using the VMXNet3 vNIC type for your Windows VMs.

We thank you very much for your patience and understanding while we worked on fixing this problem.

From the Release Notes:

Receive Side Scaling is not functional for vmxnet3 on Windows 8 and Windows 2012 Server or later. This issue is caused by an update to the vmxnet3 driver that addressed RSS features added in NDIS version 6.30, rendering the functionality unusable. It is observed in VMXNET3 driver versions 1.6.6.0 through 1.7.3.0.
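
If you are unsure whether a given guest is exposed, one quick check is to compare the installed vmxnet3 driver version against the affected range above. Below is a minimal Python sketch of that comparison only; how you obtain the installed version from the guest (for example, from Device Manager) is outside its scope.

```python
# Sketch: flag vmxnet3 driver versions that fall in the RSS-affected range.
# The affected range (1.6.6.0 - 1.7.3.0) and the recommended 1.7.3.7 come from
# the release notes quoted above; obtaining the installed version from the
# guest is not covered here.

AFFECTED_MIN = (1, 6, 6, 0)
AFFECTED_MAX = (1, 7, 3, 0)
RECOMMENDED = (1, 7, 3, 7)

def parse_version(text: str) -> tuple:
    """Turn a dotted version string such as '1.7.3.0' into a comparable tuple."""
    return tuple(int(part) for part in text.split("."))

def rss_affected(version_text: str) -> bool:
    """True if the vmxnet3 driver version falls in the range with broken RSS."""
    version = parse_version(version_text)
    return AFFECTED_MIN <= version <= AFFECTED_MAX

if __name__ == "__main__":
    for candidate in ("1.6.6.0", "1.7.3.0", "1.7.3.7"):
        status = "affected" if rss_affected(candidate) else "not affected"
        print(f"vmxnet3 driver {candidate}: {status}")
```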

Continue reading

RUSH POST: Microsoft Convenience Update and VMware VMXNet3 vNIC Incompatibilities

LATEST UPDATE:
VMware has received confirmation that Microsoft has determined that the issue reported in this post is a Windows-specific issue and unrelated to VMware or vSphere. Microsoft is encouraging customers to follow the directions provided in Microsoft KB3125574 for the recommended resolution. All further updates will be provided directly by Microsoft through the referenced KB. This marks the end of further updates to this blog post.

MINOR UPDATES:

  • Please see Microsoft’s updated guidance and recommended work-around regarding this issue
  • I am removing reference to “VMware VMXNet3” in the title of this post to reflect Microsoft’s latest updates to their KB. Yes, the issue still exists when using VMXNet3 for VMs, but it no longer appears that this issue is specific to “VMXNet3” virtual network adapters.
  • We are still working with Microsoft to conduct a comprehensive Root-Cause Analysis, and we will provide further updates as new information (or a resolution) becomes available.

 
Continue reading

Now Updated: Microsoft Exchange Server on VMware vSphere Best Practices Guide

Microsoft Exchange Server is one of the mission critical applications most commonly virtualized on the vSphere platform. As customers become more comfortable and familiar with virtualization in general and the VMware vSphere virtualization platform in particular, they become confident enough to virtualize their Exchange Server environments. To help customers achieve success as they begin to virtualize their Microsoft Exchange Server infrastructure, VMware provides guidance and recommendations for designing, configuring and managing the infrastructure.

The Microsoft Exchange Server 2016 on VMware vSphere Best Practices Guide contains VMware’s official prescriptive guidance and recommended practices for successfully running Microsoft Exchange Server on the VMware vSphere platform.

Continue reading

Completely Disable Time Synchronization for your VM

Some administrative practices, like bad habits, have more lives than the proverbial cat – they tend to stay around forever. It is, therefore, very comforting when one finds a problematic administrative practice that has not just been universally abandoned by administrators, but is also at the top of any junior administrator’s “configurations sure to get you dis-invited from the next user group meetup” list.

Take the case of the old practice of synchronizing a virtual machine’s clock with its host’s clock in a vSphere environment. That used to be “the thing to do” way back when. It was actually the default configuration option on the ESX platform in those days. Until everyone got wiser and the message went out to every admin far and wide that such a configuration was no longer kosher. Even VMware got religious and stopped making that option the default behavior. Continue reading
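
For readers who want to go beyond unchecking the Tools time-sync option and also suppress the one-off synchronization events, VMware’s timekeeping guidance documents a set of advanced VM settings for that purpose. The pyVmomi sketch below applies the commonly documented set; confirm the exact parameter list against the current KB for your vSphere version, and note that the connection details and VM name are placeholders.

```python
# Sketch (pyVmomi, assumed available): push the advanced settings commonly
# documented for fully disabling guest time synchronization. The parameter
# names below follow VMware's timekeeping KB; confirm them against the KB for
# your vSphere version. Host, credentials, and VM name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

NO_SYNC_OPTIONS = {
    "tools.syncTime": "0",                  # periodic sync by VMware Tools
    "time.synchronize.continue": "0",       # sync after snapshot operations
    "time.synchronize.restore": "0",        # sync after snapshot revert
    "time.synchronize.resume.disk": "0",    # sync after resume / Storage vMotion
    "time.synchronize.shrink": "0",         # sync after disk shrink
    "time.synchronize.tools.startup": "0",  # sync when Tools starts
}

def disable_time_sync(vm: vim.VirtualMachine) -> None:
    """Reconfigure the VM with the settings that stop clock-sync events."""
    spec = vim.vm.ConfigSpec(
        extraConfig=[vim.option.OptionValue(key=k, value=v)
                     for k, v in NO_SYNC_OPTIONS.items()]
    )
    vm.ReconfigVM_Task(spec=spec)  # in a real script, wait for the task

if __name__ == "__main__":
    si = SmartConnect(host="vcenter.example.com",          # placeholder
                      user="administrator@vsphere.local",  # placeholder
                      pwd="***",
                      sslContext=ssl._create_unverified_context())
    try:
        vm = si.content.searchIndex.FindByDnsName(
            datacenter=None, dnsName="my-windows-vm", vmSearch=True)
        if vm:
            disable_time_sync(vm)
    finally:
        Disconnect(si)
```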

A Stronger Case For Virtualizing Exchange Server 2013 – Think “Performance”

VMware is glad to see that the Microsoft Exchange Server (and Performance) teams appear to have identified the prevalent cause of performance-related issues in an Exchange Server 2013 infrastructure. We have been aware for several years that Microsoft’s sizing recommendation for Exchange Server 2013 is the number one cause of the performance issues that have been reported to VMware since the release of Exchange Server 2013, and it is gratifying that Microsoft is acknowledging this as well.

In May of 2015, Microsoft released a blog post titled “Troubleshooting High CPU utilization issues in Exchange 2013” in which Microsoft acknowledged (for the first time, to our knowledge) that CPU over-sizing is one of the chief causes of performance issues on Exchange Server 2013. We wish to highlight the fact that the Exchange 2013 Server Role Requirements Calculator is the main culprit in this state of affairs. One thing we noticed with the release of Exchange Server 2013 and its accompanying “Calculator” is the increase in the compute resources it recommends compared to similar configurations in prior versions of Exchange Server. Continue reading

Say Hello to vMotion-compatible Shared-Disks Windows Clustering on vSphere

As you dive into the inner-workings of the new version of VMware vSphere (aka ESXi), one of the gems you will discover to your delight is the enhanced virtual machine portability feature that allows you to vMotion a running pair of clustered Windows workloads that have been configured with shared disks.

I pause here now to let you complete the obligatory jiggy dance. No? You have no idea what I just talked about up there, do you? Let me break it down for you:
In vSphere 6.0, you can configure two or more VMs running Windows Server Failover Clustering (or MSCS for older Windows OSes), using common, shared virtual disks (RDM) among them AND still be able to successfully vMotion any of the clustered nodes without inducing failure in WSFC or the clustered application. What’s the big deal about that? Well, it is the first time VMware has ever officially supported such a configuration without a third-party solution, a formal exception, or a long list of caveats. Simply put, this is now an official, out-of-the-box feature with no exceptions or special requirements other than the following (a minimal readiness-check sketch follows the list):
  • The VMs must be in “Hardware 11” compatibility mode – which means that you are either creating and running the VMs on ESXi 6.0 hosts, or you have converted your old template to Hardware 11 and deployed it on ESXi 6.0
  • The disks must be connected to virtual SCSI controllers that have been configured for “Physical” SCSI Bus Sharing mode
  • And the disk type *MUST* be of the “Raw Device Mapping” type. VMDK disks are *NOT* supported for the configuration described in this document.
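
For reference, here is a minimal pyVmomi sketch of a readiness check against the three requirements above. It assumes the VM object has already been retrieved from vCenter and only inspects hardware version, SCSI bus sharing, and disk backing type; treat it as a sketch, not a support statement.

```python
# Sketch (pyVmomi, assumed available): report whether a VM meets the three
# requirements listed above for vMotion-compatible shared-disk clustering.
# 'vm' is an already-retrieved vim.VirtualMachine; connection code is omitted.
from pyVmomi import vim

def check_shared_disk_vmotion_readiness(vm: vim.VirtualMachine) -> dict:
    """Check hardware version, SCSI bus sharing mode, and disk backing type."""
    results = {
        # "Hardware 11" corresponds to the "vmx-11" virtual hardware version.
        "hardware_11": vm.config.version == "vmx-11",
        "physical_bus_sharing": False,
        "rdm_disks_only": True,
    }
    for device in vm.config.hardware.device:
        if isinstance(device, vim.vm.device.VirtualSCSIController):
            # The API reports "Physical" bus sharing as "physicalSharing".
            if device.sharedBus == "physicalSharing":
                results["physical_bus_sharing"] = True
        elif isinstance(device, vim.vm.device.VirtualDisk):
            if not isinstance(
                device.backing,
                vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo,
            ):
                # A flat VMDK-backed disk: not supported for this configuration.
                results["rdm_disks_only"] = False
    return results
```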

Virtualizing Microsoft Lync Server – Let’s Clear up the Confusion

We at VMware have been fielding a lot of inquiries lately from customers who have virtualized (or are considering virtualizing) their Microsoft Lync Server infrastructure on the VMware vSphere platform. These inquiries center on certain generalized statements contained in the “Planning a Lync Server 2013 Deployment on Virtual Servers” whitepaper published by the Microsoft Lync Server Product Group. In the referenced document, the writers made the following assertions:

  • You should disable hyper-threading on all hosts.
  • Disable non-uniform memory access (NUMA) spanning on the hypervisor, as this can reduce guest performance.
  • Virtualization also introduces a new layer of configuration and optimization techniques for each guest that must be determined and tested for Lync Server. Many virtualization techniques that can lead to consolidation and optimization for other applications cannot be used with Lync Server. Shared resource techniques, including processor oversubscription, memory over-commitment, and I/O virtualization, cannot be used because of their negative impact on Lync scale and call quality.
  • Virtual machine portability—the capability to move a virtual machine guest server from one physical host to another—breaks the inherent availability functionality in Lync Server pools. Moving a guest server while operating is not supported in Lync Server 2013. Lync Server 2013 has a rich set of application-specific failover techniques, including data replication within a pool and between pools. Virtual machine-based failover techniques break these application-specific failover capabilities.

VMware has contacted the writers of this document and requested corrections to (or clarification of) the statements because they do not, to our knowledge, convey known facts and they reflect a fundamental misunderstanding of vSphere features and capabilities. While we await further information from the writers of the referenced document, it has become necessary for us at VMware to publicly provide a direct clarification to our customers who have expressed confusion at the statements above. Continue reading

Disabling TPS in vSphere – Impact on Critical Applications

Starting with the update releases in December 2014, VMware vSphere will default to a new configuration for the Transparent Page Sharing (TPS) feature. Unlike in prior versions of vSphere up to that point, TPS will be DISABLED by default. TPS will continue to be disabled by default in all future versions of vSphere.

In the interim, VMware has released a patch for vSphere 5.5 that changes the behavior of (and provides additional configuration options for) TPS. Similar patches will also be released for prior versions at a later date.

Why are we doing this?

In a nutshell, independent research indicates that TPS can be abused to gain unauthorized access to data under certain highly controlled conditions. In line with its “secure by default” security posture, VMware has opted to change the default behavior of TPS and provide customers with a configurable option for selectively and more securely enabling TPS in their environment. Please read “Security considerations and disallowing inter-Virtual Machine Transparent Page Sharing (2080735)” for a more detailed discussion of the security issues and VMware’s response. Continue reading
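
As the referenced KB describes, the new behavior is governed by a host-level advanced option (Mem.ShareForceSalting) and a per-VM salt setting (sched.mem.pshare.salt). The minimal pyVmomi sketch below only queries the host option so you can see which salting mode a host is in; confirm the option names against the KB for your build before relying on it.

```python
# Sketch (pyVmomi, assumed available): read the host-level Mem.ShareForceSalting
# advanced option described in KB 2080735 for the new TPS salting behavior.
# 'host' is an already-retrieved vim.HostSystem; connection code is omitted.
from pyVmomi import vim

def get_tps_salting_mode(host: vim.HostSystem):
    """Return the current Mem.ShareForceSalting value for a host, if set."""
    option_manager = host.configManager.advancedOption
    try:
        options = option_manager.QueryOptions("Mem.ShareForceSalting")
    except vim.fault.InvalidName:
        # Older builds without the patch may not expose this option.
        return None
    return options[0].value if options else None
```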

Just Published – Virtualizing Active Directory Domain Services On VMware vSphere®

Announcing the latest addition to our series of prescriptive guidance for virtualizing Business Critical Applications on the VMware vSphere platform.

Microsoft Windows Active Directory Domain Services (AD DS) is one of the most pervasive Directory Services platforms in the market today. Because of the importance of AD DS to the operation and availability of other critical services, applications and processes, the stability and availability of AD DS itself is usually very important to most organizations.

Although the “Virtualization First” concept is becoming a widely accepted operational practice in the enterprise, many IT shops are still reluctant to completely virtualize Domain Controllers. The most conservative organizations have an absolute aversion to domain controller virtualization, while less conservative organizations choose to virtualize a portion of the AD DS environment and retain a portion on physical hardware. Empirical data indicate that the cause of this opposition to domain controller virtualization is a combination of historical artifacts, misinformation, lack of experience with virtualization, and fear of the unknown. Continue reading

Which vSphere Operation Impacts Windows VM-Generation ID?

In Windows Server 2012 VM-Generation ID Support in vSphere, we introduced you to VMware’s support for Microsoft’s new Windows VM-Generation ID feature, discussing how it helps address some of the challenges facing Active Directory administrators looking to virtualize domain controllers.

One of the common requests from customers in response to the referenced article is a list of events and conditions under which an administrator can expect the VM-Generation ID of a virtual machine to change in a VMware vSphere infrastructure. The table below presents this list. This table will be included in an upcoming Active Directory on VMware vSphere Best Practices Guide.

Scenario – VM-Generation ID change
  • VMware vSphere vMotion®/VMware vSphere Storage vMotion – No
  • Virtual machine pause/resume – No
  • Virtual machine reboot – No
  • HA restart – No
  • FT failover – No
  • vSphere host reboot – No
  • Import virtual machine – Yes
  • Cold clone – Yes
  • Hot clone – Yes (Note: hot cloning of virtual domain controllers is not supported by either Microsoft or VMware. Do not attempt hot cloning under any circumstances.)
  • New virtual machine from VMware Virtual Disk Development Kit (VMDK) copy – Yes
  • Cold snapshot revert (while powered off, or while running and not taking a memory snapshot) – Yes
  • Hot snapshot revert (while powered on with a memory snapshot) – Yes
  • Restore from virtual machine level backup – Yes
  • Virtual machine replication (using both host-based and array-level replication) – Yes
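
If you want to observe these transitions yourself, one approach is to record the generation ID before and after an operation and compare the values. The minimal pyVmomi sketch below reads the counter from the VM’s extraConfig; the key names used are an assumption based on how the identifier typically appears in the .vmx file, so verify them in your environment.

```python
# Sketch (pyVmomi, assumed available): read the extraConfig entries in which the
# 128-bit VM-Generation ID is commonly surfaced in the .vmx file. The key names
# "vm.genid" and "vm.genidX" are an assumption -- verify them on your build.
from pyVmomi import vim

def read_vm_generation_id(vm: vim.VirtualMachine) -> dict:
    """Return the vm.genid / vm.genidX extraConfig entries, if present."""
    wanted = {"vm.genid", "vm.genidX"}
    return {opt.key: opt.value
            for opt in vm.config.extraConfig
            if opt.key in wanted}
```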

If you have a specific operation or task that is not included in the table above, please be sure to ask in the comments section.

Thank you.