
Virtualizing Microsoft Lync Server – Let’s Clear up the Confusion

We at VMware have been fielding a lot of inquiries lately from customers who have virtualized (or are considering virtualizing) their Microsoft Lync Server infrastructure on the VMware vSphere platform. The inquiries center on certain generalized statements in the “Planning a Lync Server 2013 Deployment on Virtual Servers” whitepaper published by the Microsoft Lync Server Product Group. In the referenced document, the writers made the following assertions:

  • You should disable hyper-threading on all hosts.
  • Disable non-uniform memory access (NUMA) spanning on the hypervisor, as this can reduce guest performance.
  • Virtualization also introduces a new layer of configuration and optimization techniques for each guest that must be determined and tested for Lync Server. Many virtualization techniques that can lead to consolidation and optimization for other applications cannot be used with Lync Server. Shared resource techniques, including processor oversubscription, memory over-commitment, and I/O virtualization, cannot be used because of their negative impact on Lync scale and call quality.
  • Virtual machine portability—the capability to move a virtual machine guest server from one physical host to another—breaks the inherent availability functionality in Lync Server pools. Moving a guest server while operating is not supported in Lync Server 2013. Lync Server 2013 has a rich set of application-specific failover techniques, including data replication within a pool and between pools. Virtual machine-based failover techniques break these application-specific failover capabilities.

VMware has contacted the writers of this document and requested corrections to (or clarification of) the statements because they do not, to our knowledge, convey known facts, and they reflect a fundamental misunderstanding of vSphere features and capabilities. While we await further information from the writers of the referenced document, it has become necessary for us at VMware to publicly provide a direct clarification to our customers who have expressed confusion about the statements above.

RESPONSE HIGHLIGHTS:

  • We recommend that customers enable hyper-threading because doing so benefits the ESXi scheduling algorithm and, consequently, the virtualized workloads.
  • We recommend that customers enable NUMA. We recommend sizing a VM’s resources to fit within a single NUMA boundary, and crossing boundaries only when absolutely necessary and with a proper understanding of the physical NUMA topology.
  • Although we generally recommend against over-provisioning resources for critical workloads, it is possible and easy to over-commit resources within a given vSphere cluster and still ensure adequate resource availability for specific workloads.
  • vSphere’s availability and mobility features (vMotion, DRS and vSphere HA) satisfy all of Microsoft’s published requirements for VM portability.

DETAILED RESPONSE:

For the avoidance of any doubt, we are aware that Microsoft fully supports the virtualization of all Microsoft Lync components. See Running Lync Server 2013 on virtual servers, particularly the following statement:

Lync Server 2013 supports virtualization topologies that support all Lync Server workloads, including instant messaging (IM) and presence, conferencing, Enterprise Voice, Monitoring, Archiving, and Persistent Chat.

With regard to the recommendation to disable hyper-threading, the writers did not document the rationale for the recommendation. We infer that the recommendation is based on the following statement contained in the “Hyperthreading” section of the Understanding Processor Configurations and Exchange Performance Guide published by the Microsoft Exchange Server Product team:

Hyperthreading causes capacity planning and monitoring challenges, and as a result, the expected gain in CPU overhead is likely not justified. Hyperthreading should be disabled by default for production Exchange servers and only enabled if absolutely necessary as a temporary measure to increase CPU capacity until additional hardware can be obtained.

We wish to draw the readers’ attention to the fact that the statement above does NOT imply the existence of ANY technical drawback to enabling hyper-threading for a virtualized Microsoft Lync Server workload. Instead, the concern is about capacity planning and monitoring. We share this same concern – this is why we always recommend that our customers size their critical application environments based on the physical processor cores available, not the logical cores exposed by hyper-threading.
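
To make that sizing guidance concrete, the following is a minimal sketch in plain Python. The host values are hypothetical examples chosen purely for illustration; the point is to plan vCPU capacity against physical cores rather than against the logical processors that hyper-threading exposes.

```python
# Sizing sketch: plan vCPU capacity for critical workloads against physical
# cores only, even though enabling hyper-threading doubles the logical
# processor count that ESXi can schedule. Host values below are hypothetical.

def plannable_vcpus(sockets: int, cores_per_socket: int,
                    vcpu_to_pcore_ratio: float = 1.0) -> int:
    """Return the vCPU budget based on physical cores, ignoring hyper-threads."""
    physical_cores = sockets * cores_per_socket
    return int(physical_cores * vcpu_to_pcore_ratio)

if __name__ == "__main__":
    sockets, cores_per_socket = 2, 10                      # 20 physical cores
    logical_processors = sockets * cores_per_socket * 2    # 40 with HT enabled
    print(f"Logical processors seen with HT on: {logical_processors}")
    print(f"vCPUs to plan for Lync VMs: {plannable_vcpus(sockets, cores_per_socket)}")
```

The arithmetic is trivial, but it captures the planning rule we recommend: leave hyper-threading enabled for the benefit of the ESXi scheduler, and count only physical cores when sizing the environment.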

The most alarming-sounding argument against enabling hyper-threading when virtualizing a Microsoft application came from the Exchange Server Product group, in the “Hyperthreading: Wow, free processors!” section of the Ask the Perf Guy: Sizing Exchange 2013 Deployments TechNet entry:

Turn it off. While modern implementations of simultaneous multithreading (SMT), also known as hyperthreading, can absolutely improve CPU throughput for most applications, the benefits to Exchange 2013 do not outweigh the negative impacts….This significant increase in memory, along with an analysis of the actual CPU throughput increase for Exchange 2013 workloads in internal lab tests has led us to a best practice recommendation that hyperthreading should be disabled for all Exchange 2013 servers. The benefits don’t outweigh the negative impact.

The above statement is persuasive, but it is irrelevant to the vSphere virtualization platform and the writer of the TechNet entry was good enough to acknowledge that and accurately clarify the statement thus:

There’s an important caveat to this recommendation for customers who are virtualizing Exchange. Since the number of logical processors visible to a virtual machine is determined by the number of virtual CPUs allocated in the virtual machine configuration, hyperthreading will not have the same impact on memory utilization described above. It’s certainly acceptable to enable hyperthreading on physical hardware that is hosting Exchange virtual machines, but make sure that any capacity planning calculations for that hardware are based purely on physical CPUs…… –Jeff Mealiffe, Principal Program Manager Lead, Exchange Customer Experience

We highly recommend that customers ignore this recommendation to disable hyper-threading for virtualized Microsoft Lync workloads. We have documented the performance benefits that we derive from hyper-threading in the “Hyper-threading” section of our Performance Best Practices for VMware vSphere® 5.5 Guide.

Similarly, there is no disputing the fact that the Microsoft Windows Operating System is sufficiently modern and advanced to recognize and leverage the benefits of the Non-Uniform Memory Access optimization techniques of modern processor hardware. The writers of the referenced document did not advance any technical rationale for the following recommendation:

Disable non-uniform memory access (NUMA) spanning on the hypervisor, as this can reduce guest performance.

The most relevant authoritative source for NUMA discussion that we could find on Microsoft’s website is the Best Practices for Virtualizing and Managing Exchange 2013 Guide which, incidentally, has the following favorable statements regarding the benefits of NUMA:

….In addition, more advanced performance features, such as in-guest Non-Uniform Memory Access (NUMA), are supported by Windows Server 2012 Hyper-V virtual machines. Providing these enhancements helps to ensure that customers can achieve the highest levels of scalability, performance, and density for their mission-critical workloads….NUMA is a memory design architecture that delivers significant advantages over the single system bus architecture and provides a scalable solution to memory access problems. – Page 11

Although Exchange 2013 is not NUMA-aware, it takes advantage of the Windows scheduler algorithms that keep threads isolated to particular NUMA nodes; however, Exchange 2013 does not use NUMA topology information…. -Page 49

The Windows Operating System is NUMA-aware, and the presence of NUMA capabilities in the OS has not been demonstrated to hurt any of the Lync Server components in any of our tests or at any of our customers. The document under discussion does not contain any fact alluding to such an incompatibility. The Non-Uniform Memory Access (NUMA) section of our Performance Best Practices for VMware vSphere® 5.5 Guide contains our rationale for recommending that our customers enable NUMA in their vSphere environments for their virtualized business-critical applications. In the absence of a proven incompatibility with NUMA, we continue to make this recommendation to customers looking to improve performance for their Microsoft Lync Servers hosted on the vSphere platform.
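
For readers who want a quick way to sanity-check the single-NUMA-boundary sizing recommendation from the highlights above, here is a minimal sketch in Python. The host topology and VM sizes are hypothetical examples; replace them with the actual per-node core and memory counts of your ESXi hosts.

```python
# NUMA sizing sketch: verify that a proposed VM's vCPU and memory footprint
# fits within one NUMA node, so the ESXi NUMA scheduler can keep both CPU
# and memory local. Topology and VM values below are hypothetical examples.

def fits_single_numa_node(vcpus: int, mem_gb: int,
                          cores_per_node: int, mem_gb_per_node: int) -> bool:
    return vcpus <= cores_per_node and mem_gb <= mem_gb_per_node

if __name__ == "__main__":
    # Example host: 2 sockets x 10 cores, 256 GB RAM -> 10 cores / 128 GB per node
    cores_per_node, mem_gb_per_node = 10, 128
    proposed_vms = [("lync-fe-01", 8, 32), ("lync-fe-02", 12, 32)]
    for name, vcpus, mem_gb in proposed_vms:
        ok = fits_single_numa_node(vcpus, mem_gb, cores_per_node, mem_gb_per_node)
        print(f"{name}: {vcpus} vCPU / {mem_gb} GB {'fits within' if ok else 'spans'} a NUMA node")
```

A VM that fits within one node lets the ESXi NUMA scheduler keep its vCPUs and memory local; a VM that must span nodes should only be configured with a proper understanding of the physical topology.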

Because it is possible to over-commit resources within a given vSphere cluster while simultaneously guaranteeing resources for select workloads (through the use of reservations, limits, shares or resource pools), the third recommendation contained in the referenced whitepaper is neither accurate nor relevant in a vSphere infrastructure. While we strongly encourage our customers to avoid over-provisioning and over-committing resources for critical applications, vSphere enables our customers to guarantee allocated resources to their Lync Servers while taking advantage of some of the major benefits of virtualization – efficient resource sharing, consolidation and utilization. Critical application workloads such as Lync can be allocated a reserved amount of resources, which are then not available for contention by lower-priority workloads.
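
As an illustration of how such a reservation can be applied programmatically, here is a hedged sketch using the open-source pyVmomi SDK. The vCenter address, credentials, VM name and reservation values are placeholders and error handling is omitted; treat it as a sketch of the approach, not a production script.

```python
# Sketch: guarantee CPU and memory to a Lync VM by setting reservations,
# using the pyVmomi SDK. Connection details and values are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="********",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "lync-fe-01")  # hypothetical VM name

    spec = vim.vm.ConfigSpec()
    spec.cpuAllocation = vim.ResourceAllocationInfo(reservation=8000)      # MHz
    spec.memoryAllocation = vim.ResourceAllocationInfo(reservation=16384)  # MB
    vm.ReconfigVM_Task(spec=spec)  # returns a task; wait on it in real code
finally:
    Disconnect(si)
```

The same reservations can, of course, be set interactively in the vSphere Web Client; the point is simply that reserving resources for Lync VMs and over-committing elsewhere in the cluster are not mutually exclusive.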

On the fourth point, where the writers state that VM portability “breaks the inherent availability functionality in Lync Server pools”, we are unaware of the “breakage” alluded to in the document. VMware’s “portability” feature is vMotion, a feature that has long been used with clustered critical applications like Microsoft Exchange Server (DAG) and Microsoft SQL Server (MSCS or AlwaysOn). We are not aware of any documented incidents of “breakage” attributable to vMotion operations on these workloads, or on Lync.

In the “Host-based failover clustering and migration for Exchange” section of its Exchange 2013 virtualization whitepaper, Microsoft defined the following strict criteria for its support of VM “portability” for Exchange workloads:

  • Does Microsoft support third-party migration technology? Microsoft can’t make support statements for the integration of third party hypervisor products using these technologies with Exchange, because these technologies aren’t part of the Server Virtualization Validation Program (SVVP). The SVVP covers the other aspects of Microsoft support for third-party hypervisors. You need to ensure that your hypervisor vendor supports the combination of their migration and clustering technology with Exchange. If your hypervisor vendor supports their migration technology with Exchange, Microsoft supports Exchange with their migration technology.
  • How does Microsoft define host-based failover clustering? Host-based failover clustering refers to any technology that provides the automatic ability to react to host-level failures and start affected virtual machines on alternate servers. Use of this technology is supported given that, in a failure scenario, the virtual machine is coming up from a cold boot on the alternate host. This technology helps to make sure that the virtual machine never comes up from a saved state that’s persisted on disk because it will be stale relative to the rest of the DAG members.
  • What does Microsoft mean by migration support? Migration technology refers to any technology that allows a planned move of a virtual machine from one host machine to another host machine. This move could also be an automated move that occurs as part of resource load balancing, but it isn’t related to a failure in the system. Migrations are supported as long as the virtual machines never come up from a saved state that’s persisted on disk. This means that technology that moves a virtual machine by transporting the state and virtual machine memory over the network with no perceived downtime is supported for use with Exchange. A third-party hypervisor vendor must provide support for the migration technology, while Microsoft provides support for Exchange when used in this configuration.

vMotion, DRS and vSphere HA satisfy all of those requirements without exception.

Granted, when not properly configured, a vMotion operation can lead to brief network packet loss, which can then interfere with the relationship among clustered VMs. This is a known technical condition in Windows clustering that is not unique to vMotion operations. The condition is well understood within the industry and documented by Microsoft in its Tuning Failover Cluster Network Thresholds Whitepaper.

This is further helpfully documented by Microsoft in the following publication: Having a problem with nodes being removed from active Failover Cluster membership?

Backup vendors have also incorporated these considerations into their publications. See: How do I avoid failover between DAG nodes while the VSS snapshot is being used?

Like most other third-party vendors supporting Microsoft’s Windows Operating System and applications, VMware has incorporated many of the recommended tuning and optimization steps contained in that whitepaper into our guides and recommendations to our customers. See our Microsoft Exchange 2013 on VMware Best Practices Guide for an example.

The Microsoft Exchange 2013 on VMware Best Practices Guide includes several other configuration prescriptions that, when adhered to, minimize the possibility of an unintended failover of clustered Microsoft application VMs, including Lync Server nodes. We wish to stress that our “portability” features do not negate or impair the native availability features of Microsoft Lync Server workloads.
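
One example of the kind of prescription we mean, shown here only as an illustrative sketch, is a DRS anti-affinity rule that keeps members of the same Lync pool on separate ESXi hosts so that a single host event cannot touch multiple pool members at once. The sketch again uses pyVmomi; the cluster name, VM names and rule name are hypothetical.

```python
# Sketch: create a DRS anti-affinity rule so two Lync Front End VMs stay on
# separate ESXi hosts. Uses pyVmomi; all names and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_by_name(content, vimtype, name):
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="********", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    cluster = find_by_name(content, vim.ClusterComputeResource, "lync-cluster")
    fe1 = find_by_name(content, vim.VirtualMachine, "lync-fe-01")
    fe2 = find_by_name(content, vim.VirtualMachine, "lync-fe-02")

    rule = vim.cluster.AntiAffinityRuleSpec(
        name="lync-fe-separate-hosts",
        enabled=True,
        mandatory=False,   # a "should" rule; vSphere HA can still restart VMs on any host
        vm=[fe1, fe2])
    spec = vim.cluster.ConfigSpecEx(
        rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)])
    cluster.ReconfigureComputeResource_Task(spec, modify=True)  # returns a task
finally:
    Disconnect(si)
```

Making the rule non-mandatory (a “should” rule) preserves vSphere HA’s ability to restart the VMs on any surviving host after a failure.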

We are unaware of any technical impediments to combining vSphere’s robust and proven host-level clustering and availability features with Microsoft Lync Server’s application-level availability features, and we encourage our customers to continue to confidently leverage this combination when virtualizing their Lync servers on the vSphere platform. In the absence of any documented and proven incompatibility among these features, we are confident that customers virtualizing their Microsoft Lync Server infrastructure on the vSphere platform will continue to enjoy the full benefits of support to which they are contractually entitled, without any inhibition.

In the unlikely event that virtualizing Lync Server workloads results in a refusal of support from Microsoft, customers can open a support request with VMware Global Support Services, and VMware will leverage the framework of support agreements among members of the TSANet “Multi Vendor Support Community” to provide the necessary support. Both Microsoft and VMware are members of the TSANet Alliance.



32 thoughts on “Virtualizing Microsoft Lync Server – Let’s Clear up the Confusion”

  1. Pingback: Virtualizing Microsoft Lync 2013 « The Lowercase w

  2. Simon Gardner

    So have VMware actually tested a production-like Lync workload with these features enabled? If so, you need to get a whitepaper out with details. If not, please stop guessing.

    Real-time media like audio & video conferences cannot tolerate a “brief network packet loss” while the server hosting the MCU is vmotioned, and maintain perfect call quality. If VMware are saying they can, at typical FE workloads (6600 users, 5% in a conference), please publish your findings.

    1. Deji

      @Simon Gardner – Thank you for the response. VMware uses Lync, including the voice and video components. The employee population is over 13K.

      Every enterprise application can tolerate brief network packet loss, and Lync is no exception. Unless you are operating a “zero packet drop” network infrastructure in your environment, the argument is irrelevant in this context because, as you should know, a vMotion operation is a maintenance task – not a routine activity. We are talking about a once-in-a-while operation that occurs during a specific event, not a matter of daily, steady-state operation. If you find yourself performing a vMotion operation on a daily basis, we would like to hear about it.

      We expect that the Microsoft Lync Server Product Group would let us know if they are convinced that we have misstated anything in this document.

      1. Mickey

        Like politics these days it seems that the discourse is dominated by the voices of those that are heavily invested in the results.

        For those of us simple support folks without the stated giant processing requirements of some of the contributors (or the implied hatchet hanging over our heads), it would be great to read a white paper that actually included some data to review and make up our own minds.

        A good engineer should weigh the costs (MS recommended infrastructure is not cheap) against the risk (likelihood and severity of a problem) and make decisions based on facts and reliable data.

        Deji, since you state that VMware runs Lync, I would kindly request that you create a lab environment, perform full-scale Lync testing using LSS (including some moderately scaled environments as opposed to the all-everything package typically deployed for these papers), and publish all your findings.

        Thanks

  3. DJ Grijalva

    So you’re using Exchange Server documentation to guide customers on Lync? That’s like comparing apples to oranges. The two server products have completely different workloads and end-user functions. This article is adding more confusion instead of clearing it up, as the title suggests it should. The Exchange product group has always been better at getting information out compared to Lync & SharePoint (just compare their blog posts across the teams), and they are the ones getting hammered for the Lync product team’s lack of explanation of their guidance. Maybe you should have waited to write this article until the writers of the Lync documentation responded back to you – then again, maybe you wrote the article to force that response.

    1. Deji

      @DJ Grijalva – Thanks for the response. Your question is a fair one. Why are we using Exchange references to counter erroneous arguments made in a Lync document? Well, the reason we did so is because we are unable to locate ANY Lync-sourced document that provides any technical illumination for the statements made in the referenced Lync document. If you are aware of where the claims in the referenced document are substantiated, we would be very glad if you could point us to that source.

      Why did we not wait for the Lync Product Group to respond before publishing this clarification? We did indeed wait. We decided to publish this clarification publicly after a long wait, with the hope that doing so will spur the Microsoft Lync Product Group to respond with a better-sourced, technical explanation of the disputed assertions.

  4. Amit Panchal

    Great article and really good to see since we also read the guidelines from MS and were shocked. Thanks for the explanations on this. Maybe someone will include it in a book at some point ??

  5. Pingback: Virtualizing Microsoft Lync Server | Microsoft Technology Musings

  6. Daniel

    Really, this article is as unclear as the Microsoft recommendation. So could VMware finally clarify the best settings in VMware for a Lync Server installation? The article only mentions HT and NUMA settings! So doesn’t SR-IOV give any benefits? SR-IOV in VMware doesn’t support DRS; if you use Hyper-V you can easily enable SR-IOV and DRS still works. So there should not be real memory usage of 32 GB from the host server? How do you best produce the two hex-core settings? The article is not serious without giving the entire story and data that compare virtualization vs. 1:1 hardware. We have had multiple system installations where issues could be related to virtualization and were solved through dedicated hardware. If you install Lync there is never a problem getting the customer to invest in more hardware. The problem is the VMware license model. The license model is not an issue for VMware, so no wonder you don’t have the problem. So give us settings and data for a best practice.

    1. Deji Post author

      We have not found a good reason to bring out the SR-IOV sledgehammer for Lync/SfB workloads in a vSphere environment, but the feature is there and available to any customer who desires to leverage it. Yes, it restricts vMotion, but that is a trade-off that should not create significant pain for customers – DRS/vMotion should be a rare occurrence in a well-designed vSphere infrastructure hosting Lync/SfB workloads. Administrators can initiate VM migration on an as-needed basis.

      Since you asked for data, please take a look at this – http://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/business-critical-apps/exchange/vmware-skype-for-business-on-vmware-vsphere-best-practices-guide.pdf

  7. VitohA

    Absolutely agree with DJ Grijalva: how can you use Exchange Server docs to “Clear up the Confusion”? Lync is a real-time multimedia application, absolutely different from Exchange. If you can’t find Lync-related docs, that’s not a reason to use Exchange docs instead…

    1. Deji

      VitohA, we answered this question in the blog – we were unable to find ANY Microsoft Lync document that *explains* WHY customers should disable hyper-threading, for example. We looked, and we couldn’t find anything except in the Exchange documents/blogs/whitepapers from Microsoft. We then asked the authors of the Microsoft Lync whitepaper directly, and we did not receive any Lync-specific reference explaining the rationale for the recommendations included in the whitepaper.

      If you know of a Lync document from Microsoft that explains the reasoning behind the recommendations that we addressed in this blog post, please post it here and we will review and acknowledge it. HTH

      1. VitohA

        Hello Deji,
        Thank you for the reply.
        I understand that this article MUST be present in your blog; it’s a very interesting article. And I don’t have these Lync docs, unfortunately. But my point of view, described in my previous comment, is that it’s not a correct way to take MS technical background from Exchange docs and extrapolate it to Lync. I believe that test results for Lync on ESXi would be more suitable.
        P.S.: We’re also experiencing some issues with this product on ESXi, but currently we don’t have an answer why. I suppose the root cause is in the nature of HT and the sharing of virtual cores that belong to one physical core between different VMs (this parameter is configurable in ESXi).

        1. Deji Post author

          >>>But my point of view described in my previous comment – it’s not a correct way to give MS technical background from Exchange docs and extrapolate it for Lync

          That is exactly what the Lync Best Practices authors from Microsoft had done, and that, my friend, is the crux of the matter.

      2. VitohA

        An additional point on why we should disable HT on the Front End servers – each Front End runs a local SQL Express instance that hosts “hot” data. Due to license limitations this instance can’t use more than 4 cores, and under high SQL Express load (e.g. a lot of conferences) the performance of these 4 cores will be the bottleneck for overall Front End server performance. In this case we can add an additional Front End or increase per-core performance by disabling HT; I suppose that disabling HT is the more profitable and easier solution.

  8. FreddieJ

    I hope everyone reading this has their resume ready…

    So if I follow your advice and it brings down my Lync environment, I can call you for support to bring it back up, not Microsoft, correct? And if it does not come back up, is VMware willing to take full responsibility for the outage and potential money lost by my company? You should probably talk to your legal department before making these claims. I have seen many a Lync 2013 pool die a horrible death due to DRS and live vMotion.

  9. Pingback: The debate about disabling Hyperthreading in virtualized Exchange server is over - Virtualize Business Critical Applications - VMware Blogs

  10. Oderus

    This blog does nothing to clear anything up. It just muddies the water even further.

    A virtual core in a hyper-threaded setup cannot perform a write operation when the physical CPU it’s attached to is also doing a write operation. Surely this will cause some issue with Lync as it’s a real-time protocol.

    Exchange and Lync are so different you’d be better off comparing SQL with Lync. I understand your reasoning for using Exchange to explain Lync but it was a bad idea.

    I would really like to see VMware come up with their own best practice that they’d be willing to support. Much better than dumping on Microsoft for providing a best practice you don’t appreciate.

    You mention VMware uses Lync and someone asked for details. You never replied. I would really like to see how you have everything set up for 13,000 users. Microsoft’s own tests show they can have 11,000 users on just 3 Front End servers with 2 Edge and 2 Backend servers. They didn’t use HT and had 6 non-HT cores per FE.

    You could also publish your KHI reports to prove you’re not having tons of issues.

    1. Laura Williams

      @Oderus – It seems to me that what this blog post is saying is that the authors of the Microsoft Lync whitepaper are letting down their customers by not including the REASONS for their recommendations against certain virtualization features in the whitepaper. Their recommendations are just blanket statements without any explanation. The VMware author says that there is no supporting document or discussion for any of those recommendations. Do you disagree? If yes, then you should include links to disprove the author’s claim.

      The VMware author also says that the recommendations were lifted directly from Exchange Server documents. He/She then includes links to back up the claim. Do you have such links for Lync, where the reasons are explained?

      Since Microsoft recently published this (http://blogs.technet.com/b/exchange/archive/2015/09/15/ask-the-perf-guy-what-s-the-story-with-hyperthreading-and-virtualization.aspx), do you still disagree with the author of this blog?

      The Microsoft Lync group can clear up the confusion by just explaining (in technical terms) why they make the recommendations that they make in the document. If VMware’s statement is not correct, it will be easy for Microsoft to shame them by just saying “this is why we said what we said” and not just “because Microsoft says so”.

    2. kersg

      “I would really like to see VMware come up with their own best practice that they’d be willing to support. Much better than dumping on Microsoft for providing a best practise you don’t appreciate.”

      Amen. It’s been nearly 7 months, and still no VMware best practices paper on Skype for Business.

  11. Vesuvius

    This blog has been the biggest pain in my side concerning Lync/Skype virtualization. People are taking this as gospel when, in fact, Microsoft should be the authority. Also, 13K users is nothing – even a poorly built Lync pool could perform reasonably well. Add an additional 30k, 40k, 100k and you will have issues. I’ve already experienced a Resume Generating Event for almost an entire team because they had to migrate to physical servers due to the virtual environment (couldn’t fix it because the host was shared). That said, VMware does work well with Lync… when it’s built correctly. The last thing anyone wants to hear from Microsoft is “go make X amount of changes and call us back once completed.” That can and will happen if your guidance is all that was followed. I’ve seen and experienced this in multiple environments.

    1. Deji Post author

      Vesuvius, we have demonstrated over and over again that, in the year 2016, there is hardly any Microsoft application that cannot be successfully virtualized with equal (or even better) performance, resilience and reliability. Yes, Skype for Business is a “complex” application; but, that is also the case when you run it on physical hardware. The equation does not change simply because you are virtualizing it.

      This blog entry speaks DIRECTLY to the misleading assertions in a published document from Microsoft. Those assertions are based on incorrect assumptions and generalization, and they were based on Microsoft sources that have since been corrected. We waited long enough for the authors of the Microsoft Skype for Business whitepaper to make the necessary corrections, but since it is clear that this will not be done, we felt that we owe our mutual customers a responsibility to correct the record.

      Hopefully, you have noticed that this post has not been challenged or contradicted (directly or indirectly) by Microsoft. Since many respondents asked us to “put up or shut up”, we decided to put up – http://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/business-critical-apps/exchange/vmware-skype-for-business-on-vmware-vsphere-best-practices-guide.pdf.

      Don’t view this as a VMware vs. Microsoft issue, because both of us agree that there are no technological or technical reasons why customers should not be able to successfully run their Skype for Business infrastructure in a correctly configured virtual infrastructure.

  12. Pingback: FREE Skype for Business Server 2015 Exam 70-334 – Core Solutions of Microsoft Skype for Business 2015 – Exam Prep – Gareth's Blog
