

Virtualized Exchange Storage: VMDK or RDM or…?

One of the hottest topics I get into when talking to customers about virtualizing Exchange is storage. That's not surprising, considering the number of options available when virtualizing Exchange on vSphere. If you are not familiar with the common methods for provisioning storage in vSphere, a brief description of each follows:

  • VMFS based virtual disk (VMDK) – VMFS is a high performance, clustered file system that allows concurrent access by multiple hosts to files on a shared volume. VMFS offers high I/O capabilities for virtual machines and is optimized for large VMDK files. VMFS volumes can be Fibre Channel or iSCSI attached.
  • Raw-device mappings (RDM) – RDM is a mapping file in a VMFS volume that acts as a proxy for a raw physical device, sometimes called a pass-thru disk. The RDM file contains metadata used to manage and redirect disk access to the physical device. RDMs can be Fibre Channel or iSCSI attached.

In early versions of ESX the virtualization overhead associated with deploying virtual disks (VMDK files) was much higher than it is today, which is why it was considered a best practice to place Exchange data files on physical mode raw-device mappings (RDMs). As ESX and vSphere have evolved, the performance difference between RDMs and virtual disks has become almost nonexistent. This leaves some questioning why we might still choose to deploy RDMs for Exchange storage.

Some reasons for deploying RDMs today might include:

  • Backups are being performed using a hardware based VSS solution using array based clones or snapshots – When talking to customers I typically see backups as being the number one reason for deploying RDMs. The ability to take array based backups quickly using hardware VSS makes RDMs very attractive for large organizations with massive amounts of email data. So, if we want to take advantage of array based backups are we limited to only using RDMs? Not quite, but more on that in a minute.
  • Volumes larger than 2TB are required – With Microsoft supporting mailbox databases up to 2TB (when database resiliency is in use), volumes may need to be larger than 2TB. In vSphere 5, only physical mode RDMs support volume sizes up to 64TB; VMDK files are limited to 2TB.
  • Require the ability to swing a LUN between a native Windows host and a virtual machine – Some deployments may start on physical mailbox servers and later migrate to virtual machines. This migration can be expedited by swinging the LUNs from the physical mailbox server and attaching them to the Exchange mailbox VM as RDMs. With database portability, only the user objects need to be updated, avoiding the time required to move mailbox data over the network.
  • Management purposes – Some environments may require greater control over the relationship between LUNs and virtual machines. An RDM is assigned to a single VM (unless using a shared-disk cluster) guaranteeing that the I/O capabilities of the LUN are dedicated to a single VM.
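The volume-size point above is easy to see with some quick arithmetic. Here's a small Python sketch; the 20% free-space margin is an illustrative assumption for this example, not an official Microsoft figure:

```python
# Illustrative volume sizing for a large Exchange mailbox database.
# The 20% free-space margin is an assumption for illustration only.

TB = 1024**4

def required_volume_bytes(db_bytes, free_space_margin=0.20):
    """Database size plus an illustrative free-space margin."""
    return int(db_bytes * (1 + free_space_margin))

vmdk_limit = 2 * TB   # vSphere 5 VMDK size limit
rdm_limit = 64 * TB   # vSphere 5 physical mode RDM limit

db = 2 * TB           # maximum supported database size with resiliency
volume = required_volume_bytes(db)

print(volume > vmdk_limit)  # True: the volume no longer fits in one VMDK
print(volume <= rdm_limit)  # True: it fits easily on a physical mode RDM
```

In other words, as soon as you pad a 2TB database with any working space at all, the backing volume outgrows a vSphere 5 VMDK and a physical mode RDM becomes the natural fit.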

The good news is, if you're not limited by any of the reasons above you can deploy on VMDKs with confidence. I tend to prefer VMDKs for their portability, manageability, and scalability. By portability I mean the ability to use features like Storage vMotion, Storage DRS, and vSphere Replication to provide storage load balancing and disaster recovery. Improved management comes with the native tools available in the vSphere client for working with VMDKs. Some storage vendors have very slick plug-ins for the vCenter client if you must use RDMs, but it's always nice using the native tools. From a scaling point of view, larger VMFS volumes can be used to consolidate VMDKs if dedicated RDMs are pushing the 256-LUN limit in ESXi. vSphere 5 supports VMFS volumes of up to 64TB; VMDK files are limited to 2TB.
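To put the scaling point in concrete terms, here's a back-of-the-envelope Python sketch; the database count and per-database volume size are made-up figures chosen only to show the effect of consolidation against the 256-LUN ESXi host limit:

```python
# Back-of-the-envelope LUN math: dedicated RDMs vs. consolidated VMDKs.
# The database count and per-database volume size are made-up figures.
import math

TB = 1024**4
ESXI_LUN_LIMIT = 256       # maximum LUNs per ESXi host
VMFS_VOLUME_MAX = 64 * TB  # vSphere 5 VMFS volume limit

databases = 300            # hypothetical: one volume per database copy
per_db = 1 * TB            # hypothetical volume size per database

# With RDMs, every database volume is its own LUN -- over the host limit.
rdm_luns = databases
print(rdm_luns > ESXI_LUN_LIMIT)  # True

# With VMDKs, many volumes share one large VMFS datastore (capacity-bound
# packing only; real designs would also consider I/O and queue depth).
vmdks_per_datastore = VMFS_VOLUME_MAX // per_db
vmfs_luns = math.ceil(databases / vmdks_per_datastore)
print(vmfs_luns)  # 5 -- far fewer LUNs than the 256 limit
```

The capacity-only packing here is deliberately simplistic; a real design would size datastores around I/O as well, but the LUN-count relief is the point.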

Now that we can make better-informed choices about our storage format, let's get back to backups. If you are looking to deploy a hardware-based VSS backup solution, it used to be that the only option was physical mode RDMs. Today some storage vendors have made progress in giving customers the ability to deploy on storage other than physical mode RDMs. This comes in the following forms:

  • In-guest iSCSI – Using iSCSI initiators from within the guest operating system, an administrator can mount storage LUNs directly to the virtual machine. Connecting storage in this manner can still provide the ability to back up using array-based snapshots and clones. This does put additional load on the virtual machine, as it is now doing the storage processing, but it allows you to avoid RDMs and can mitigate the 256-LUN limit of ESXi. At VMworld this year (both in the US and Europe) many customers shared their success stories of using in-guest iSCSI with Exchange.
  • NFS based VMDKs – Some storage vendors have added the ability to perform hardware-based VSS backups of VMDKs housed on NFS-based network-attached storage. I've also had many customers tell me of their success using this solution. My only comment here is that Microsoft has been pretty clear on their lack of support for housing Exchange data files (mailbox and queue databases and transaction logs) on network-attached storage (Premier customers, check with your Microsoft rep). That said, I'm a huge fan of NFS-based storage.

Whether to choose VMDK or RDM for your Exchange storage should be based on technical and business requirements and not on any preconceived notions of performance or supportability. All storage protocols discussed here have proven to perform well within the requirements of Exchange and support for each is well documented on Microsoft's TechNet site. I've included some helpful links below for your reading enjoyment. With that I'll wrap up this post which hopefully has given you a bit to think about and maybe presented some new options for your deployment.

As always, we look forward to hearing from you so please join the discussion!

-alex

Alex Fontana, Sr. Solutions Architect

Performance Best Practices for VMware vSphere 5: http://www.vmware.com/pdf/Perf_Best_Practices_vSphere5.0.pdf

Virtualized Exchange Server on NFS, iSCSI, and Fibre Channel: http://www.vmware.com/files/pdf/vsphere_perf_exchange-storage-protocols.pdf

Performance Characterization of VMFS and RDM: http://www.vmware.com/files/pdf/performance_char_vmfs_rdm.pdf

Exchange 2010 System Requirements: http://technet.microsoft.com/en-us/library/aa996719.aspx

9 thoughts on “Virtualized Exchange Storage: VMDK or RDM or…?”

  1. Schner

    “Some storage vendors have added the ability to perform hardware based VSS backups of VMDKs housed on NFS based networked-attached storage.”
    Which ones are these?

  2. Vaughn Stewart

    Great write up and supporting documents.
    A few points of note. If you are an NFS customer, remember your medium is Ethernet, and as such you can easily add iSCSI or FCoE RDMs to enable this solution.
    Customers interested in deploying Exchange Server on datastores connected via NFS should contact their Microsoft account team and request support. Like all vendors, Microsoft wants to take care of its customers.
    Lastly, with Microsoft announcing support for SMB 2.2 in the forthcoming release of Hyper-V 3.0, I think it’s fair to speculate that NAS is becoming more prominent and hopefully support statements will be revised sooner rather than later.
    http://virtualstorageguy.com/2011/09/20/microsoft-announces-smb-2-2-and-nas-support-for-hyper-v-3-0-in-windows-8/

  3. Josh

    Great article. I’ve deployed a few Exchange servers in development/test environments using VMDK and never had any issues. I still used RDM for production environments, but I’ll certainly be giving VMDK consideration for use there as well from now on.

  4. Alex Fontana

    Thanks for the comments!
    Schner – The customers I’ve talked to who have had great success with Exchange on NFS all seem to use NetApp storage. Vaughn makes a great point if you are concerned with support.
    Vaughn – Thanks for the link. I think you are right; there will be enough demand, and as we have seen with things like vMotion of DAG nodes and support for hypervisor HA, if we keep demanding and questioning the technical reasoning behind some of the “no-support” statements we can help turn the tide.
    Josh – Definitely do! Even if you want to do a bake-off of sorts when validating a storage design with JetStress. The one thing I would stress is to make sure you are comparing apples to apples, i.e. equally sized LUNs (size and number of disks) and the same workload or number of databases housed on each.

  5. Hussain

    Hello,
    I’m in the process of storage migration from EMC AX4 to IBM DS3512.
    Currently my Exchange 2007 Enterprise environment holds 1,800 users, and I have configured all the disks as virtual RDMs, with separate storage groups and databases, each on its own vRDM.
    For the storage migration, I’m using Storage vMotion to move VMs from the old EMC LUNs to the new IBM LUNs.
    When I do the same for Exchange, it will automatically convert the vRDMs to VMDK disks.
    Here is where I have to decide whether to stick with vRDMs or go ahead and move to VMDK.
    If I plan to stick with vRDMs, I have to create all the disks the same as on the EMC, present them to the host where the Exchange VM runs, create new storage groups along with their databases, and then start moving mailboxes from the old databases to the new ones on the IBM storage.
    If I want to get rid of the vRDMs and go back to VMDK, I just need one big RAID-5 LUN to hold the entire Exchange VM along with its disks.
    What’s your thoughts?
    Thanks,
    Hussain

  6. Devin L. Ganger

    Hussain: since you submitted this question on my blog as well, see my answer to it here:
    http://www.thecabal.org/2011/10/exchange-2010-virtualization-storage-gotchas/
    To speak to the primary post, there is one flaw: the VMware paper you link to was NOT a best practices or “how to” guide. It was a pre-release bit of marketing for vSphere 4 to show the work that had gone into making performance more equal across all protocols. Both NetApp’s and VMware’s actual “best practices” guidance for Exchange match Microsoft’s support statement — databases over NFS *are not supported*.
    The technical reasons behind the “no file level protocols” statement have been explained multiple times. The Exchange ESE database engine, in order to prevent data loss as much as possible, relies on block-level read/write semantics. Adding file-level protocols into the mix can cause log files and EDB updates not to be written to disk even though Exchange has been told they were, which can result in data loss or corruption. It may not happen often…but that’s not the same as saying it won’t happen.

