As enterprise IT teams, leadership, and business owners continuously drive toward service improvements, they invariably look at public cloud infrastructure as a possible target for their mission-critical applications. While virtualization is now generally accepted as the default platform for enterprise-grade applications, businesses looking to leverage the public cloud for most of these applications are still constrained in their ability to do so.
These constraints can be directly attributed to the following (among others):
Performance Concerns – is the target public cloud robust enough to meet the application’s scale and performance requirements?
Vendor Support – is the target cloud platform certified for the application? Will the vendor provide the necessary technical support and assistance when (not if) the enterprise requires it?
Level of Effort – mission-critical applications demand considerable attention to configuration and other considerations beyond those required for lower-tiered applications, and moving from one hosting platform to another may not be a simple or quick undertaking.
This article will address two of these constraints – Performance and Support – in relation to enterprises’ desire to operate their Microsoft Exchange Server workloads on the VMware Cloud on AWS platform. Part II of this article will address the “Level of Effort” aspect, which we feel deserves a stand-alone article.
Microsoft Exchange Server is one of the most prevalent Messaging and Collaboration applications in enterprises today. Microsoft officially supports virtualizing Microsoft Exchange Server (hereafter simply referred to as “Exchange” or “Exchange Server”) on the VMware vSphere virtualization platform. Because VMware has been supporting (and providing guidance for) the virtualization of Exchange Server for more than 10 years (even before official Microsoft support), virtualizing Exchange Server on the vSphere platform has become quite mainstream.
We have released VMware Tools Version 10.2.0, with more fixes and improvements for VMXNet3 vNICs. Please download and update your VMware Tools, even if only for the additional improvements in VMXNet3 vNIC drivers.
The recommended driver version for Windows and Microsoft Business Critical applications is: 126.96.36.199
We have just released a new version of the VMware Tools which fixes the issue described in this post (below).
Please download and install this version of the VMware Tools, especially if you are using the VMXNet3 vNIC type for your Windows VMs.
We thank you very much for your patience and understanding while we worked on fixing this problem.
From the Release Notes:
Receive Side Scaling is not functional for vmxnet3 on Windows 8 and Windows Server 2012 or later. This issue is caused by an update to the vmxnet3 driver, addressing RSS features added in NDIS version 6.30, that rendered the functionality unusable. It is observed in VMXNET3 driver versions 188.8.131.52 through 184.108.40.206.
On Tuesday, March 13th, 2018, Microsoft released a number of Security Updates for Windows Operating Systems. Two of these updates are now confirmed to be problematic: when applied to virtualized Windows Server 2008/R2 and/or Windows 7 instances, these patches replace the existing virtual network card (vNIC) with a new one, hide the pre-existing vNIC, and do not persist or transfer the existing IP address configuration to the new vNIC.
We are updating this post (rather than creating a new one) because the issues are similar and well-known at this time. We expect that Microsoft will release an advisory or updates in due course.
ACTION: If you have been impacted, please note that the following manual fixes have been known to resolve the issue (after the fact):
Note the name of the new network adapter
Open “Device Manager” -> “Show Hidden Devices” and delete ALL hidden/phantom network adapters from the list (be sure to NOT delete the drivers)
Edit the properties of the new NIC and add the applicable IP address configuration. No reboot is required.
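For administrators who prefer the command line, the same cleanup can be sketched roughly as follows; the adapter name and all addresses are placeholders for your environment, and the phantom-adapter deletion itself is still performed in Device Manager:

```
rem Make hidden/phantom devices visible in Device Manager, then launch it
set devmgr_show_nonpresent_devices=1
start devmgmt.msc

rem After deleting the phantom adapters, re-apply the IP configuration
rem ("Ethernet0" and the addresses below are placeholders)
netsh interface ipv4 set address name="Ethernet0" static 10.0.0.10 255.255.255.0 10.0.0.1
netsh interface ipv4 set dnsservers name="Ethernet0" static 10.0.0.53 primary
```

As with the manual steps, no reboot should be required once the new adapter has its configuration re-applied.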
The following video has been created to provide a visual cue for this resolution.
Microsoft has updated the “Known Issues in this Update” section of the applicable KB
VMware has received confirmation that Microsoft has determined that the issue reported in this post is a Windows-specific issue and unrelated to VMware or vSphere. Microsoft is encouraging customers to follow the directions provided in Microsoft KB3125574 for the recommended resolution. All further updates will be provided directly by Microsoft through the referenced KB. This marks the end of further updates to this blog post.
I am removing the reference to “VMware VMXNet3” in the title of this post to reflect Microsoft’s latest updates to their KB. Yes, the issue still exists when using VMXNet3 for VMs, but it no longer appears that this issue is specific to “VMXNet3” virtual network adapters.
We are still working with Microsoft to conduct a comprehensive Root-Cause Analysis, and we will provide further updates as new information (or a resolution) becomes available.
Microsoft Exchange Server is one of the mission-critical applications most commonly virtualized on the vSphere platform. As customers become more comfortable and familiar with virtualization in general, and with the VMware vSphere virtualization platform in particular, they gain the confidence to virtualize their Exchange Server environments. To help customers achieve success as they begin to virtualize their Microsoft Exchange Server infrastructure, VMware provides guidance and recommendations for designing, configuring, and managing the infrastructure.
Some administrative practices, like a bad habit, have more lives than the proverbial cat – they tend to stay around forever. It is, therefore, very comforting when one finds a problematic administrative practice that has not only been universally abandoned by administrators, but also sits at the top of any junior administrator’s “configurations sure to get you dis-invited from the next user group meetup” list.
Take the case of the old practice of synchronizing a virtual machine’s clock with its host’s clock in a vSphere environment. That used to be “the thing to do” way back when. It was actually the default configuration option on the ESX platform in those days. Until everyone got wiser and the message went out to every admin far and wide that such a configuration was no longer kosher. Even VMware got religion and stopped making that option the default behavior.
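For reference, unchecking the periodic-synchronization box in VMware Tools disables only the periodic sync. Per VMware’s timekeeping guidance, the one-off synchronizations (after vMotion, snapshot operations, or resume) are suppressed with additional .vmx options along these lines:

```
tools.syncTime = "FALSE"
time.synchronize.continue = "FALSE"
time.synchronize.restore = "FALSE"
time.synchronize.resume.disk = "FALSE"
time.synchronize.shrink = "FALSE"
time.synchronize.tools.startup = "FALSE"
```

Whether to suppress the one-off syncs as well depends on the guest workload; a sketch only, not a blanket recommendation.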
VMware is glad to see that the Microsoft Exchange Server (and Performance) teams appear to have identified the prevalent cause of performance-related issues in an Exchange Server 2013 infrastructure. We have been aware for several years that Microsoft’s sizing recommendation for Exchange Server 2013 is the number one cause of the performance issues that have been reported to VMware since the release of Exchange Server 2013, and it is gratifying that Microsoft is acknowledging this as well.
In May of 2015, Microsoft released a blog post titled “Troubleshooting High CPU utilization issues in Exchange 2013” in which Microsoft acknowledged (for the first time, to our knowledge) that CPU over-sizing is one of the chief causes of performance issues on Exchange Server 2013. We wish to highlight the fact that the Exchange 2013 Server Role Requirements Calculator is the main culprit in this state of affairs. One thing we noticed with the release of Exchange Server 2013 and its accompanying “Calculator” is the increase in the compute resources it recommends when compared to similar configurations in prior versions of Exchange Server.
As you dive into the inner-workings of the new version of VMware vSphere (aka ESXi), one of the gems you will discover to your delight is the enhanced virtual machine portability feature that allows you to vMotion a running pair of clustered Windows workloads that have been configured with shared disks.
I pause here now to let you complete the obligatory jiggy dance. No? You have no idea what I just talked about up there, do you? Let me break it down for you:
In vSphere 6.0, you can configure two or more VMs running Windows Server Failover Clustering (or MSCS for older Windows OSes) with common, shared virtual disks (RDMs) among them AND still be able to successfully vMotion any of the clustered nodes without inducing failure in WSFC or the clustered application. What’s the big deal about that? Well, it is the first time VMware has ever officially supported such a configuration without any third-party solution, formal exception, or a number of caveats. Simply put, this is now an official, out-of-the-box feature that has no exceptions or special requirements other than the following:
The VMs must be in “Hardware 11” compatibility mode – which means that you are either creating and running the VMs on ESXi 6.0 hosts, or you have converted your old template to Hardware 11 and deployed it on ESXi 6.0
The disks must be connected to virtual SCSI controllers that have been configured for “Physical” SCSI Bus Sharing mode
And the disk type *MUST* be of the “Raw Device Mapping” type. VMDK disks are *NOT* supported for the configuration described in this document.
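The three requirements above lend themselves to a simple pre-flight check. The sketch below encodes them against an illustrative dictionary shape – the keys are hypothetical, not an actual vSphere API:

```python
# Hypothetical pre-flight check for the vMotion-with-shared-disks requirements.
# The dict keys ("hardware_version", etc.) are illustrative only.

def wsfc_vmotion_supported(vm: dict) -> list:
    """Return the list of requirements the VM fails to meet;
    an empty list means the configuration qualifies."""
    problems = []
    if vm.get("hardware_version", 0) < 11:
        problems.append("VM hardware must be version 11 (ESXi 6.0) or later")
    if vm.get("scsi_bus_sharing") != "physical":
        problems.append("shared disks must sit on a 'Physical' bus-sharing SCSI controller")
    if vm.get("shared_disk_type") != "rdm":
        problems.append("shared disks must be Raw Device Mappings, not VMDKs")
    return problems

# Example: a node that still uses a VMDK for its shared disk fails one check
node = {"hardware_version": 11, "scsi_bus_sharing": "physical", "shared_disk_type": "vmdk"}
print(wsfc_vmotion_supported(node))
```

A real implementation would read these properties from the VM’s actual configuration; the point here is simply that all three conditions must hold at once.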
We at VMware have been fielding a lot of inquiries lately from customers who have virtualized (or are considering virtualizing) their Microsoft Lync Server infrastructure on the VMware vSphere platform. The inquiries center on certain generalized statements contained in the “Planning a Lync Server 2013 Deployment on Virtual Servers” whitepaper published by the Microsoft Lync Server Product Group. In the referenced document, the writers made the following assertions:
You should disable hyper-threading on all hosts.
Disable non-uniform memory access (NUMA) spanning on the hypervisor, as this can reduce guest performance.
Virtualization also introduces a new layer of configuration and optimization techniques for each guest that must be determined and tested for Lync Server. Many virtualization techniques that can lead to consolidation and optimization for other applications cannot be used with Lync Server. Shared resource techniques, including processor oversubscription, memory over-commitment, and I/O virtualization, cannot be used because of their negative impact on Lync scale and call quality.
Virtual machine portability—the capability to move a virtual machine guest server from one physical host to another—breaks the inherent availability functionality in Lync Server pools. Moving a guest server while operating is not supported in Lync Server 2013. Lync Server 2013 has a rich set of application-specific failover techniques, including data replication within a pool and between pools. Virtual machine-based failover techniques break these application-specific failover capabilities.
VMware has contacted the writers of this document and requested corrections to (or clarification of) the statements because they do not, to our knowledge, convey known facts and they reflect a fundamental misunderstanding of vSphere features and capabilities. While we await further information from the writers of the referenced document, it has become necessary for us at VMware to publicly provide a direct clarification to our customers who have expressed confusion at the statements above.
Starting with update releases in December 2014, VMware vSphere will default to a new configuration for the Transparent Page Sharing (TPS) feature. Unlike in prior versions of vSphere up to that point, TPS will be DISABLED by default. TPS will continue to be disabled in all future versions of vSphere.
In the interim, VMware has released a Patch for vSphere 5.5 which changes the behavior of (and provides additional configuration options for) TPS. Similar patches will also be released for prior versions at a later date.
Why are we doing this?
In a nutshell, independent research indicates that TPS can be abused to gain unauthorized access to data under certain highly controlled conditions. In line with its “secure by default” security posture, VMware has opted to change the default behavior of TPS and provide customers with a configurable option for selectively and more securely enabling TPS in their environment. Please read “Security considerations and disallowing inter-Virtual Machine Transparent Page Sharing (2080735)” for a more detailed discussion of the security issues and VMware’s response.
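For context, the mechanism the patch introduces is commonly described as “salting”: pages are shared between VMs only when the VMs carry a matching salt value. A sketch of the relevant settings, based on the referenced KB (the salt string below is purely illustrative):

```
# Host-level advanced setting: 2 restricts page sharing to VMs with a
# matching salt (the new default); intra-VM sharing is unaffected
Mem.ShareForceSalting = 2

# Per-VM .vmx option: VMs configured with the same salt value
# may share pages with one another
sched.mem.pts.salt = "exampleSalt"
```

This lets customers re-enable inter-VM sharing selectively, for example within a trust boundary, rather than globally.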
Announcing the latest addition to our series of prescriptive guidance for virtualizing Business Critical Applications on the VMware vSphere platform.
Microsoft Windows Active Directory Domain Services (AD DS) is one of the most pervasive Directory Services platforms in the market today. Because of the importance of AD DS to the operation and availability of other critical services, applications and processes, the stability and availability of AD DS itself is usually very important to most organizations.
Although the “Virtualization First” concept is becoming a widely accepted operational practice in the enterprise, many IT shops are still reluctant to completely virtualize Domain Controllers. The most conservative organizations have an absolute aversion to domain controller virtualization, while less conservative organizations choose to virtualize a portion of the AD DS environment and retain a portion on physical hardware. Empirical data indicate that the cause of this opposition to domain controller virtualization is a combination of historical artifacts, misinformation, lack of experience with virtualization, and fear of the unknown.