Author Archives: Alex Fontana

Windows Server 2012 VM-Generation ID Support in vSphere

Update 1/25/2013: The vSphere versions required for VM-Generation ID support have been updated below.

Active Directory Domain Services has been one of those applications that, to the naked eye, seems like a no-brainer to virtualize. Why not? In most environments it's a fairly low-utilization workload, rarely capable of efficiently using the resources found in many of the enterprise-class servers that have been available for the past few years. Many organizations have adopted this way of thinking and have successfully virtualized all of their domain controllers. What about the hold-outs? What is it about Active Directory that has left so many AD administrators and architects keeping their infrastructure, or at least a portion of it, on physical servers? Continue reading

Your Guide to Virtualizing Exchange 2010 on vSphere (Part 1)

For a few years now we've been providing guidance on virtualizing Exchange on vSphere. Although not fully supported at the time, customers were looking for guidance on virtualizing Exchange 2003 and 2007, so we created best practices guides and performance studies. Today we continue to provide best practices, design and sizing, and availability guides for Exchange 2010 on vSphere. With all of those resources out there one could ask, "What else is there to cover?" In this first post of two I wanted to take a step back and look at some of the prerequisites for designing an Exchange environment on vSphere. In Part 2 of this series I'll jump forward to some design considerations to keep in mind when virtualizing Exchange on vSphere. What about the technical details in between prerequisites and design considerations? We've got those well covered and I'll provide links at the end of this article.

For most organizations, Microsoft Exchange Server is the central communications platform and as such is critical to the daily operations of the business; in other words, it is a business critical application. Unlike the typical virtualized workloads of days past, these business critical applications usually require dedicated project staff and budgets, and they are highly visible throughout the organization. Because of the importance of such a project to the organization, proper planning is vital to ensure a successful outcome.

To aid in successful planning, consider the following prerequisites when beginning an Exchange design for deployment on vSphere.

Understand the business and technical requirements

Beyond the many technical requirements that must be followed when deploying Exchange on vSphere, it is important to understand any technical and non-technical requirements imposed by the organization. A few of the most common business requirements we encounter when designing a virtualized Exchange environment are listed below.

  • High availability – What level of availability needs to be designed into the environment? Is there an SLA in place that requires services to be restored within a certain time period? With Exchange 2010 this is one of the most important decisions to be made at the beginning of the design process. The use of Database Availability Groups will dictate the amount of storage needed to house active and passive databases, as well as how many mailboxes can be supported per mailbox server. Is disaster recovery in scope?
  • Supportability of the design – Those of us that have worked for large organizations know the importance of designing a supported solution, and more importantly the frustration of having support calls closed due to an unsupported configuration. When building the design keep support in mind and be sure to consult with your hardware and any other software vendors to make sure they will take your calls when problems arise.
  • Scalability – Is the goal of the company to continue to grow, or does the nature of the business keep the organization size stable? If this is a large environment, do we need to consider deploying fewer, larger virtual machines, or is it preferable to scale out with smaller virtual machines? If this is a volatile environment, do we need the capability to scale out or up on demand using templates or hot-add technologies?
  • Mailbox Size – Are there currently quotas in place or will they be introduced as part of this new design? Is there a limit to the amount of data that the proposed solution can handle? Does archiving need to be factored into the solution?
  • Performance – What bolt-on accessories are in use in the current messaging environment and does their functionality need to be carried over? Many of these solutions will add overhead to the environment and their use needs to be accounted for when it comes time to size.

Analyze the current messaging environment

An Exchange 2010 sizing exercise is mostly based on the assumption that there is an understanding of the current usage of the messaging environment. If this is a greenfield environment it will be necessary to estimate what the usage will be, and this can be very difficult. Typically I would recommend beginning with a medium to heavy workload, around 150 messages per mailbox, per day. The beauty of building on vSphere is that if it turns out the environment was over-provisioned, you can easily scale back and regain some compute resources for other projects.

If there is an existing Exchange environment you can download the Exchange Server Profile Analyzer to help understand the current messaging requirements. This tool can look at a single Exchange mailbox database or across an entire organization and report on user activity. Other ways to analyze the messaging activity within an organization include the following (a small parsing sketch follows the list):

  • Exchange message tracking logs
  • Sendmail or Postfix logs
  • Statistics from email anti-virus tools
  • Logs from SPAM gateways
  • Third-party email statistics tools
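
For example, if you can export tracking or gateway data to a CSV file, a few lines of Python can reduce it to the per-mailbox daily averages used for sizing. This is only a hypothetical sketch; the file name and column names below are assumptions, not the format of any particular tool.

    import csv
    from collections import defaultdict

    # Hypothetical export: one row per message with columns
    # 'date' (YYYY-MM-DD), 'sender', and 'recipients' (semicolon-separated).
    sent = defaultdict(int)      # messages sent, keyed by (mailbox, day)
    received = defaultdict(int)  # messages received, keyed by (mailbox, day)

    with open("tracking_export.csv", newline="") as f:
        for row in csv.DictReader(f):
            day = row["date"]
            sent[(row["sender"].lower(), day)] += 1
            for rcpt in row["recipients"].split(";"):
                received[(rcpt.strip().lower(), day)] += 1

    days = {d for _, d in list(sent) + list(received)}
    mailboxes = {m for m, _ in list(sent) + list(received)}
    total_messages = sum(sent.values()) + sum(received.values())
    average = total_messages / (len(mailboxes) * len(days))
    print("Average messages sent + received per mailbox per day: %.1f" % average)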

Regardless of which method is chosen, the desired outcome is to determine the average number of messages sent and received per mailbox per day. This is the primary method used by Microsoft to determine the amount of processor and storage resources each mailbox uses and the recommended amount of physical RAM that should be allocated for mailbox database cache. The table below from TechNet can be used to determine the amount of database cache, IOPS, and CPU estimates based on user activity.

Table 1. IO Profile Resource Utilization Estimate

Messages sent or received per mailbox per day | Database cache per mailbox (MB) | Estimated IOPS per mailbox, single database copy (stand-alone) | Estimated IOPS per mailbox, multiple database copies (mailbox resiliency) | Megacycles for active or stand-alone mailbox | Megacycles for passive mailbox
50 | 3 | 0.06 | 0.05 | 1 | 0.15
100 | 6 | 0.12 | 0.1 | 2 | 0.3
150 | 9 | 0.18 | 0.15 | 3 | 0.45
200 | 12 | 0.24 | 0.2 | 4 | 0.6

Source: TechNet – Mailbox Server Processor Capacity Planning
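
To see how the table is applied, the sketch below runs the numbers for a hypothetical mailbox server hosting 5,000 mailboxes at the 150 messages per day profile with mailbox resiliency. The mailbox count and the assumption that this server also hosts 5,000 passive copies are examples only, not recommendations.

    # Hypothetical example: 5,000 mailboxes at the 150 messages/day profile,
    # multiple database copies (mailbox resiliency), using the Table 1 values.
    mailboxes = 5000
    cache_mb_per_mailbox = 9      # database cache per mailbox (MB)
    iops_per_mailbox = 0.15       # estimated IOPS, mailbox resiliency
    active_megacycles = 3         # per active mailbox
    passive_megacycles = 0.45     # per passive copy hosted on this server (assumed 1:1)

    cache_gb = mailboxes * cache_mb_per_mailbox / 1024.0
    iops = mailboxes * iops_per_mailbox
    megacycles = mailboxes * (active_megacycles + passive_megacycles)

    print("Database cache: %.0f GB" % cache_gb)   # ~44 GB
    print("Estimated IOPS: %.0f" % iops)          # 750 IOPS
    print("Megacycles:     %.0f" % megacycles)    # 17250 megacycles

These raw figures are only a starting point; converting megacycles into vCPU counts and adding headroom for growth and peak usage should follow the Microsoft guidance referenced above.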

Analyze the health of the current messaging and vSphere environments

Before beginning an Exchange migration project, a full health check of the current Exchange environment, the vSphere environment, and any infrastructure dependencies should be performed. Oftentimes a new environment can help bring an underlying issue to the surface. Being able to identify these issues and resolve them before going into production can make a migration much smoother for the implementation team as well as the end users.

A number of tools are available to help make sure there are no glaring issues in the current environment.

  • VMware HealthAnalyzer – captures data from the vSphere environment including configuration and utilization information to provide a health report card. Ask your VMware representative for more details on obtaining a health check using HealthAnalyzer.
  • Exchange Best Practices Analyzer – The ExBPA is installed with the Exchange Management Console and can be used to quickly scan a particular server's or the entire organization's configuration against best practices. The report lists details about the configuration and offers explanations and guidance on how to fix common issues. Running the ExBPA is a must before placing any Exchange server into production, and it is recommended as part of routine maintenance.

Identify support and licensing options

With a business critical application like Exchange it is key to understand support and licensing considerations. Support for virtualizing Exchange has come a long way over the past few years. The Server Virtualization Validation Program provides mainstream support for running Exchange 2007 and 2010 on vSphere. Prior to going down the road of building a design, it is a good idea to walk through the Support Policy Wizard on the Windows Server Catalog web site to validate that the solution you are putting together is supported.

Figure 1. SVVP Support Policy Wizard

No matter the hypervisor used to virtualize Exchange 2010, the support requirements remain the same. The Exchange team at Microsoft outlines the requirements for virtualizing Exchange very well on the Exchange System Requirements TechNet page. These requirements must be reviewed and understood to be sure that the design meets Microsoft's support guidance and to help avoid any confusion during a support request.

Exchange 2010 is licensed per server and per client access license, just as it is in a physical deployment. This is important to note as it may help determine whether you design using a scale-up or scale-out approach. Another licensing consideration is license mobility. Previously, application licenses tied to a physical server could only migrate between physical servers once every 90 days. This was updated to allow application licenses to migrate between physical hosts within a server farm as needed. More information can be found in the Application Server License Mobility document. As always, we suggest you consult with your Microsoft representative to obtain the most accurate licensing information for your situation.

Thanks for making it this far. I hope you found this look at the often overlooked prerequisites for virtualizing Exchange 2010 on vSphere helpful. In Part 2 I will dive into some additional design considerations to keep in mind when virtualizing Exchange 2010 on vSphere. If you've missed any of the great resources we have on virtualizing Exchange 2010 on vSphere, check out our resources page at the link below.

http://www.vmware.com/solutions/business-critical-apps/exchange/resources.html

-alex

This blog is part of a series on Virtualizing Your Business Critical Applications with VMware. To learn more, including how VMware customers have successfully virtualized SAP, Oracle, Exchange, SQL and more, visit vmware.com/go/virtualizeyourapps.

Virtualized Exchange Storage: VMDK or RDM or…?

One of the hottest topics I get into when talking to customers about virtualizing Exchange is storage. That's not surprising considering the number of options available when we virtualize Exchange on vSphere. If you are not familiar with the common methods for provisioning storage in vSphere, a brief description of each follows:

  • VMFS based virtual disk (VMDK) – VMFS is a high performance, clustered file system that allows concurrent access by multiple hosts to files on a shared volume. VMFS offers high I/O capabilities for virtual machines and is optimized for large VMDK files. VMFS volumes can be Fibre Channel or iSCSI attached.
  • Raw-device mappings (RDM) – RDM is a mapping file in a VMFS volume that acts as a proxy for a raw physical device, sometimes called a pass-thru disk. The RDM file contains metadata used to manage and redirect disk access to the physical device. RDMs can be Fibre Channel or iSCSI attached.

In early versions of ESX the virtualization overhead associated with deploying virtual disks (VMDK files) was much higher than it is today, which is why it was considered a best practice to place Exchange data files on physical mode raw-device mappings (RDMs). As ESX and vSphere have evolved, the performance difference between RDMs and virtual disks has become almost nonexistent. This leaves some questioning why we might choose to deploy RDMs for Exchange storage.

Some reasons for deploying RDMs today might include:

  • Backups are being performed using a hardware based VSS solution using array based clones or snapshots – When talking to customers I typically see backups as being the number one reason for deploying RDMs. The ability to take array based backups quickly using hardware VSS makes RDMs very attractive for large organizations with massive amounts of email data. So, if we want to take advantage of array based backups are we limited to only using RDMs? Not quite, but more on that in a minute.
  • Volumes larger than 2TB are required – With Microsoft supporting mailbox databases up to 2TB (when database resiliency is in use), volumes may need to be larger than 2TB. In vSphere 5 only physical mode RDMs support volume sizes up to 64TB; VMDK files are limited to 2TB.
  • Require the ability to swing a LUN between a native Windows host and a virtual machine – Some deployments may choose to deploy on physical mailbox servers and later migrate to virtual machines. This migration could be expedited by swinging the LUNs from the physical mailbox server and attaching them to the Exchange mailbox VM using RDMs. With database portability only the user objects would need to be updated thus avoiding the time to move mailbox data over the network.
  • Management purposes – Some environments may require greater control over the relationship between LUNs and virtual machines. An RDM is assigned to a single VM (unless using a shared-disk cluster) guaranteeing that the I/O capabilities of the LUN are dedicated to a single VM.

The good news is, if you're not limited by any of the reasons above you can deploy on VMDKs with confidence. I tend to prefer VMDKs for their portability, manageability, and scalability. By portability I mean the ability to use features like Storage vMotion, Storage DRS, and vSphere Replication to provide storage load balancing and disaster recovery. Improved management comes with the native tools available in the vSphere client for working with VMDKs. Some storage vendors have very slick plug-ins for the vCenter client if you must use RDMs, but it's always nice to use the native tools. From a scaling point of view, larger VMFS volumes can be used to consolidate VMDKs if dedicated RDMs are pushing the 256 LUN limit in ESXi. vSphere 5 supports VMFS volumes of up to 64TB; VMDK files are limited to 2TB.

Now that we can make better-informed choices for our storage format, let's get back to the backups. If you are looking to deploy a hardware based VSS backup solution, it used to be that the only option was to use physical mode RDMs. Today some storage vendors have made progress in giving customers the ability to deploy on storage other than physical mode RDMs. This comes in the following forms:

  • In-guest iSCSI – Using iSCSI initiators from within the guest operating system, an administrator can directly mount storage LUNs to the virtual machine. Connecting storage in this manner can still provide the ability to back up using array based snapshots and clones. This does put additional load on the virtual machine as it is now doing the storage processing, but it will allow you to avoid using RDMs and can mitigate the 256 LUN limit of ESXi. At VMworld this year (both in the US and Europe) many customers shared their success stories of using in-guest iSCSI with Exchange.
  • NFS based VMDKs – Some storage vendors have added the ability to perform hardware based VSS backups of VMDKs housed on NFS based network-attached storage. I've also had many customers tell me of their success using this solution. My only comment here is that Microsoft has been pretty clear on their lack of support for housing Exchange data files (mailbox and queue databases and transaction logs) on network-attached storage (Premier customers, check with your Microsoft rep). That said, I'm a huge fan of NFS based storage.

Whether to choose VMDK or RDM for your Exchange storage should be based on technical and business requirements and not on any preconceived notions of performance or supportability. All storage protocols discussed here have proven to perform well within the requirements of Exchange and support for each is well documented on Microsoft's TechNet site. I've included some helpful links below for your reading enjoyment. With that I'll wrap up this post which hopefully has given you a bit to think about and maybe presented some new options for your deployment.

As always, we look forward to hearing from you so please join the discussion!

-alex

Alex Fontana, Sr. Solutions Architect

Performance Best Practices for VMware vSphere 5: http://www.vmware.com/pdf/Perf_Best_Practices_vSphere5.0.pdf

Virtualized Exchange Server on NFS, iSCSI, and Fibre Channel: http://www.vmware.com/files/pdf/vsphere_perf_exchange-storage-protocols.pdf

Performance Characterization of VMFS and RDM: http://www.vmware.com/files/pdf/performance_char_vmfs_rdm.pdf

Exchange 2010 System Requirements: http://technet.microsoft.com/en-us/library/aa996719.aspx

Using Virtual Disks for Business Critical Apps Storage

Hello all!

Welcome to the Business Critical Apps Blog. This week we will be publishing a few blogs that focus on virtualizing Microsoft Tier-1 applications. This may include discussion around a specific application like MS SQL or Exchange or some more generalized discussion around topics we get the most questions on when talking to customers. If you're responsible for virtualizing Microsoft Tier-1 apps check in throughout the week and take a look at what we've got going on. If this is your first visit to this blog check out our archives for tips on virtualizing MS Exchange, SQL and even some Oracle, SAP and Java discussions. Join in on the conversation by asking a question or making a comment. On to the first topic for the week: storage!

Virtualization of I/O intensive applications is nothing new. Traditionally the virtualization of these applications involved provisioning raw-device mappings over virtual disk files, whether warranted or not. VMware has proven the performance of VMFS to be on par with that of raw-device mappings as far back as ESX 3.0.1 (Performance Characterization of VMFS and RDM Using a SAN). While technically required for some configurations (MSCS clustering, hardware-based VSS, etc.), deploying raw-device mappings is no longer the de facto standard for virtualized I/O intensive applications.

When creating a new virtual disk (VMDK) there are a few options for how the virtual disk is created and when the space is allocated. Understanding the types of disk provisioning methods available and when to use them can help you provide the best level of performance for your business critical apps. The three types of disk provisioning are described below:

  • Thick provisioned lazy zeroed – The virtual disk is allocated all of its provisioned space and immediately made accessible to the virtual machine. A lazy zeroed disk is not zeroed up front, which makes the provisioning very fast. However, because each block is zeroed out before it is written to for the first time, there is added latency on the first write.
  • Thick provisioned eager zeroed (Recommended for I/O intensive workloads) – The virtual disk is allocated all of its provisioned space and the entire VMDK file is zeroed out before allowing the virtual machine access. This means that the VMDK file will take longer to become accessible to the virtual machine, but will not incur the additional latency of zeroing on first write. For this reason the recommendation when deploying an I/O intensive application on VMFS is to use this provisioning method.
  • Thin provision – This method provides quick access to the virtual disk and increases storage utilization by allocating disk space on demand.

Now that we've established the differences between the provisioning types, let's discuss the various ways we can create an eager zeroed thick disk, how we can check if a virtual disk is eager zeroed thick, and how we can eager zero a disk after the fact.

How can I create a virtual disk as eagerzeroedthick?

  • If using the vSphere 4 client you can check the Support clustering features such as Fault Tolerance box during disk creation. Checking this box won't enable FT for your VM, but it will format the VMDK as eagerzeroedthick, since that is a requirement for FT.

  • If using the vSphere 5 client you are presented with three options during disk creation; select the Thick Provision Eager Zeroed radio button.

  • If you prefer the command line or want to build this into an automated solution you have a couple more options:
    • Console:

      vmkfstools -d eagerzeroedthick -c 10g /vmfs/volumes/datastore1/myVM/myVMData.vmdk

      Note: -c 10g creates a 10GB vmdk file, adjust as needed.

    • vSphere CLI:

      vmkfstools.pl --server <ESXHost> --username <username> --password <passwd> -d eagerzeroedthick -c 10g /vmfs/volumes/datastore1/myVM/myVMData.vmdk

      Note: -c 10g creates a 10GB vmdk file, adjust as needed.

I'm not sure how my virtual disks were created, how can I check?

Fortunately we can check if a virtual disk was created as eagerzeroedthick. To do so we can use vmkfstools -D (capital "D") against the VMDK in question (direct the command to the <vm_name>-flat.vmdk file):

vmkfstools -D <vm_name>-flat.vmdk

The output of this command will look similar to the output below. We're interested in TBZ in the last line, which refers to the number of blocks in the disk file still To Be Zeroed. A TBZ of zero indicates an eagerzeroedthick VMDK; otherwise the disk is zeroedthick (lazy zeroed), as in the example below.

Lock [type 10c00001 offset 9345024 v 30, hb offset 3293184
gen 11, mode 1, owner 4ea9d387-964b13c3-7f81-001a4be8eae0 mtime 49198 nHld 0 nOvf 0]
Addr <4, 2, 19>, gen 25, links 1, type reg, flags 0, uid 0, gid 0, mode 600
len 1073741824, nb 128 tbz 128, cow 0, newSinceEpoch 0, zla 1, bs 8388608

For more information on determining whether or not a VMDK is eagerzeroedthick or zeroedthick refer to VMware KB article Determining if a VMDK is zeroedthick or eagerzeroedthick.
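
If you have more than a handful of virtual disks to check, the same test is easy to script. The following is a hypothetical helper, not a VMware tool; it assumes it runs somewhere both Python and vmkfstools are available (for example, the ESXi shell) and simply parses the tbz value out of the vmkfstools -D output:

    import re
    import subprocess
    import sys

    def is_eagerzeroedthick(flat_vmdk_path):
        """Return True if 'vmkfstools -D' reports tbz 0 for the given -flat.vmdk."""
        # Assumes vmkfstools is in the PATH (e.g., running in the ESXi shell).
        out = subprocess.check_output(["vmkfstools", "-D", flat_vmdk_path],
                                      stderr=subprocess.STDOUT).decode()
        match = re.search(r"\btbz (\d+)", out)
        if not match:
            raise ValueError("no tbz field found in vmkfstools output")
        return int(match.group(1)) == 0

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            state = "eagerzeroedthick" if is_eagerzeroedthick(path) else "zeroedthick (lazy zeroed)"
            print("%s: %s" % (path, state))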

Oops, I didn't eager zero my virtual disk during creation. What can I do?

There are a few ways to zero out your existing virtual disk.

  • With vmkfstools you can use the -k option to zero out un-zeroed blocks while maintaining existing data. This is the best option if you've already started populating the disk with data, as the data will not be touched. Direct the command to the <vm_name>.vmdk file.


    vmkfstools -k <vm_name>.vmdk

Note: This method requires the virtual machine to be powered off or the virtual disk to be removed from the virtual machine.

  • If powering off the virtual machine or using hot-remove to disconnect the virtual disk is not an option and if there is NO data on the virtual disk you may reformat the volume from within Windows by unchecking the Quick Format option. This process removes files from the volume and scans the entire volume for bad sectors, effectively causing all blocks to be touched and zeroed.

  • Enabling Fault Tolerance for a VM requires that the VMDK be zeroed out. If your virtual machine has only one vCPU you can temporarily enable FT and the process will make sure that all virtual disks are eagerzeroedthick. Once the disk has been prepared you can disable FT. This method also preserves existing data.


Hopefully this has given you a better look at why we recommend using eager zeroed thick disks for I/O intensive applications, as well as how to create new disks, check existing disks, and convert existing virtual disks to the eagerzeroedthick format.

As always, we look forward to hearing from you so please join the discussion!

-alex

Alex Fontana, Sr. Solutions Architect

Exchange 2010 on vSphere Customer Case Study

Those of us embarking on a new virtualization project like to learn from others. At the very least we want to be sure that if someone else has done something similar we can learn from any lessons encountered along the way. Over the past year and a half we've had many conversations with customers who were in the process of evaluating Exchange 2010 or designing a logical environment with a decision on whether or not to virtualize still pending. Many of these customers wanted to hear from other customers.

We're now getting to the point where we have full deployments that we can begin to talk about. Some we may not be able to mention by name but can speak to specifics around size and design. Others have allowed us to come in and create case studies based on their success story.

Today we released our latest case study on Raymond James. As a financial company managing about 1.9 million accounts, email is one of the most critical applications the Raymond James IT organization supports. Read how Raymond James successfully virtualized an Exchange 2010 environment on vSphere to support over 18,000 mailboxes, provide high availability without the use of Database Availability Groups, and how they use VMware Site Recovery Manager to provide disaster recovery capabilities and proactively test site failover.

Case Study, Video

-alex

Microsoft Clustering Services on vSphere 5

VMware has released an updated guide to deploying Microsoft Clustering Services on vSphere (link below). The guide provides deployment options and procedures for building MSCS Clusters on vSphere 5. Along with these instructions you will find a checklist to verify that your setup meets the requirements as well as best practices for using vSphere HA and DRS.

In addition to this comprehensive guide for most MSCS scenarios we've also published a KB article which sheds a bit more light onto disk configurations and the differences between "shared disk" and "non-shared disk" configurations and support.

Guide: Setup for Failover Clustering and Microsoft Cluster Service

KB: Microsoft Clustering on VMware vSphere: Guidelines for Supported Configurations

-alex

Alex Fontana, Sr. Solutions Architect

Microsoft Exchange 2010 Performance on vSphere 5

The VMware performance team is constantly working to show how virtualizing tier 1 applications on vSphere can provide comparable performance to physical deployments. With the release of vSphere 5 we can now provide up to 32 vCPUs and 1TB of memory per VM! That kind of scale-up capability means there are very few (if any) workloads that we can't accommodate.

When dealing with Exchange 2010 designs there are recommended maximums that should be followed to achieve the best performance. Those recommendations are published by Microsoft on TechNet. For stand-alone mailbox servers the recommended maximum is 12 CPU cores when working with six-core CPUs. For multi-role servers the recommended maximum is 24 CPU cores, again when working with six-core CPUs. For those customers who prefer to run very large instances (>8 vCPUs) of Exchange servers, vSphere 5 now makes this possible.

In keeping with tradition the VMware performance team has published a whitepaper examining how Exchange 2010 performs on vSphere 5 in terms of scaling up (adding more vCPUs) and scaling out (adding more VMs). This paper shows that vSphere 5 can provide flexibility in deployment while maintaining a positive user experience.

For the full paper see Microsoft Exchange Server 2010 Performance on vSphere 5.

-alex

Alex Fontana, Sr. Solutions Architect

Using VMware HA, DRS and vMotion with Exchange 2010 DAGs

The wave of VMware customers looking to virtualize Exchange 2010 on vSphere continues to accelerate. While there have been customers who have chosen not to virtualize the DAG nodes, the reasons were not those we heard in years past. Today it's not because of performance or high storage I/O; in fact, most customers believe that the majority of their applications can be virtualized, including their business critical applications. VMware customers who chose to postpone virtualization of their Exchange 2010 DAG nodes mostly did so for one reason: lack of support for vSphere advanced features from Microsoft. Those customers will be pleased to know that this is now a thing of the past.

With Microsoft's latest announcement of enhanced hardware virtualization support for Exchange 2010, customers looking to deploy a virtualized Exchange 2010 environment and take advantage of vSphere features such as vMotion, VMware HA, and DRS can do so with full support from Microsoft and VMware. VMware has been officially supporting the use of these vSphere features along with Exchange 2010 DAGs for some time now, as described in our KB article here. The additional support from Microsoft simply validates what we've been promoting since the release of our Exchange 2010 on VMware documentation available here.

As news of this validation and support begins to pick up steam, we anticipate questions about the best practices for making sure your deployments are successful. To help provide our customers with some insight into using these features with their production Exchange 2010 DAG clusters, we've put together a whitepaper outlining testing we performed in our labs earlier this year. The purpose of the testing outlined in Using VMware HA, DRS and vMotion with Exchange 2010 DAGs was to:

  • Validate the use case of combining VMware HA with Exchange DAGs to reduce the time required to re-establish DAG availability.
  • Validate the use of vMotion with Exchange DAGs to allow the use of DRS and proactive workload migration while maintaining database availability and integrity.
  • Provide guidance and best practices for taking advantage of these vSphere features.

The whitepaper can be found here: http://www.vmware.com/files/pdf/solutions/VMware-Using-HA-DRS-vMotion-with-Exchange-2010-DAGs.pdf

It shouldn't come as much of a surprise that the results we came up with and documented are in line with what Microsoft themselves recommend. Additionally, this whitepaper provides guidance for customers looking to use features such as DRS to allow their vSphere environment to efficiently balance workloads and manage resources, and VMware HA to provide even higher levels of availability.

This will no doubt begin driving up the number of virtualized DAGs out there, so help out your fellow vSphere and Exchange administrators. Join the Exchange, Domino and RIM VMware User Community!

-alex

Alex Fontana, Technical Solutions Architect

New KB: Guidelines for Supported Microsoft Clustering Configurations

Microsoft Clustering Services is a topic we get many questions on. Specifically, we spend time talking about the configurations that are possible when deploying MSCS on vSphere and what is and isn't supported. After talking to many customers it became apparent that there was a good amount of confusion as to what was supported by VMware.

To try and clear things up we decided it would be best to lay out, as clearly as possible, the configurations that are supported by VMware. KB 1037959 provides clear guidelines and vSphere support status for running various Microsoft clustering solutions and configurations.

VMware KB: Microsoft Clustering on VMware vSphere: Guidelines for Supported Configurations

 -alex

Alex Fontana, Technical Solutions Architect

Citrix XenApp on VMware Best Practices

Desktop application delivery and management can be tedious and time-consuming. Many organizations have chosen to leverage application virtualization and take a software as a service (SaaS) approach to desktop applications. By deploying software such as Citrix XenApp, IT organizations are able to centralize the management, administration, and delivery of popular Windows-based applications. This model of centralized administration and delivery of services is not new to VMware customers, who have for years used virtualization technology to consolidate server workloads.

This guide provides information about deploying Citrix XenApp in a virtualized environment powered by VMware vSphere™. Key considerations are reviewed for architecture, performance, high availability, and design and sizing of virtualized workloads, many of which are based on current customer deployments of XenApp application servers on VMware. This guide is intended to help IT Administrators and Architects successfully deliver virtualized applications using Citrix XenApp on VMware vSphere.

The following topics are covered in detail:

  • VMware ESX host best practices for Citrix XenApp
  • Virtual hardware, guest operating system, and XenApp best practices
  • Monitoring performance
  • vSphere enhancements for deployment and operations

Download the complete best practices guide at the link below:

www.vmware.com/files/pdf/solutions/vmware-citrix-xenapp-best-practices-EN.pdf

-alex