

vSphere 5.1 – VMDK versus RDM

It seems the debate over using a VMDK on VMFS versus an RDM still rages when the question is which one performs better.

The VMware team has published plenty of evidence in the past that the difference is very minor: in fact, it is difficult to measure accurately, probably imperceptible to customers, and definitely not worth giving up the value of encapsulating a VM.

Published Dec 7, 2007 (on ESX 3.0.1)
http://www.vmware.com/resources/techresources/1019

Published Feb 13, 2008 (on ESX 3.5)
http://www.vmware.com/resources/techresources/1040

Let’s take a look at some new internal data for vSphere 5.1 that continues to validate that VMDK is the right default choice.

The Test Bed

This data was generated on vSphere 5.1 using the DVD Store test application running on MySQL 5.5, which simulates an online e-commerce store.  The scale-out tests below were configured so that a number of DVD Store virtual machines were placed side by side on the Dell R910 (2x E5620) vSphere host.  Additionally, a number of client virtual machines were required to drive the high transaction volumes.

[Figure: test bed configuration]

Scaling Results

These graphs outline the scaling capabilities of both VMDK and RDM by measuring orders per minute (OPM).

[Figures: VMDK scaling and RDM scaling]

You can see that OPM is nearly identical between VMDK and RDM.  In fact, OPM throughput on VMDK was approximately 1% faster than RDM using a single DVD Store instance.  The 4-VM test demonstrates near-linear scaling for both technologies.
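To make the comparison concrete, here is a minimal Python sketch of the arithmetic behind these two claims; the OPM figures in it are invented placeholders, not the published results.

```python
# Hypothetical OPM numbers for illustration only; the real results are in the
# graphs above.
vmdk_opm_1vm, rdm_opm_1vm = 26_000.0, 25_750.0  # single DVD Store instance
vmdk_opm_4vm = 101_000.0                        # aggregate across 4 VMs

# Relative throughput difference (positive favors VMDK).
delta_pct = (vmdk_opm_1vm - rdm_opm_1vm) / rdm_opm_1vm * 100
print(f"VMDK vs RDM: {delta_pct:+.2f}%")

# Scaling efficiency: aggregate 4-VM OPM against 4x the single-VM result.
efficiency = vmdk_opm_4vm / (4 * vmdk_opm_1vm) * 100
print(f"4-VM scaling efficiency: {efficiency:.1f}%")
```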

Orders/Minute Performance

These graphs take a closer look at the difference between VMDK and RDM using the same OPM measure.

[Figures: orders per minute, 1 VM and 4 VMs]

Here the application performance achieved is nearly identical (approximately +/- 1%) between VMDK and RDM.

IO Cost

These graphs outline the cycles required per transaction, which can be viewed as a measure of IO cost per transaction (note: lower is better).

[Figures: cycles per transaction, 1 VM and 4 VMs]

Here we can see that the CPU cost of an RDM is actually slightly higher (though less than 1%) than that of a VMDK for both tests.  This dispels the notion that an RDM is significantly more efficient.
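As a rough illustration of how such a cost metric can be derived, here is a small sketch assuming you have average host CPU usage (in MHz) and OPM from the same measurement interval; both inputs are invented for illustration.

```python
# Hypothetical inputs for illustration only.
cpu_used_mhz = 9_500.0  # average host CPU usage during the run
opm = 26_000.0          # orders per minute measured over the same interval

# MHz is millions of cycles per second; divide total cycles per second by
# transactions per second to get cycles per order. Lower is better.
cycles_per_second = cpu_used_mhz * 1e6
orders_per_second = opm / 60.0
print(f"Cost: {cycles_per_second / orders_per_second:,.0f} cycles per order")
```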

(Special thanks to Razvan Cheveresan for his efforts collecting this data)

Summary

What you should take away from this vSphere 5.1 data, as well as all previously published data, is that there is really no performance difference between VMDK and RDM, and this holds true across all versions of our platform.  A difference of +/- 1% is insignificant in today’s infrastructure and can often be attributed to noise.  The decision between VMDK and RDM is now one of architecture or functional requirements, and without a special need, VMDK should be used by default.  It provides all the performance and flexibility you need to virtualize your most critical business applications.


About Mark Achtemichuk

Mark Achtemichuk is a Senior Technical Marketing Architect specializing in Performance within the Cloud Infrastructure Marketing group at VMware. Certified as VCDX #50, @vmMarkA has a strong background in datacenter infrastructures and cloud architectures, experience implementing enterprise application environments, and a passion for solving problems. He has driven virtualization adoption and project success by methodically bridging business with technology. His current challenge is ensuring that performance is no longer a barrier, perceived or real, to virtualizing an organization's most critical applications on their journey to the cloud.

19 thoughts on “vSphere 5.1 – VMDK versus RDM”

  1. Marco Law

    Hi Mark,
    Do you have any test results on ESXi 5.0?
    Have you done any tests with clusters?
    I am quite interested in these topics… :)

    1. Mark Achtemichuk (post author)

      Hi Marco. I don’t have any specific data for vSphere 5.0, but I’m confident in saying I’d expect the data to be identical. What we’ve tried to accomplish through these various tests is to demonstrate that you can now pick the technology based on design considerations rather than performance. If you need to use a cluster service, you may need to use RDMs, but you can be assured they will perform extremely well. If, on the other hand, you run Oracle RAC in VMDKs (which we can do today), again you can be assured it will meet your performance needs.

      1. Mino

        For Oracle RAC, I use VMDKs with simultaneous write protection disabled (the multi-writer setting).

        There is a VMware best practices guide for Oracle DB implementations.
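As context for Mino’s comment, here is a minimal pyVmomi sketch of how that multi-writer setting can be applied through a VM’s extraConfig, as was typical in the vSphere 5.x era. The connection details, VM name, and SCSI address (scsi1:0) are assumptions; follow VMware’s Oracle best practices guide for the supported procedure.

```python
# A sketch only: enable the multi-writer flag on a shared disk at scsi1:0.
# All names and credentials below are hypothetical.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret")  # add an sslContext for self-signed certs
vm = si.RetrieveContent().searchIndex.FindByDnsName(
    None, "rac-node1", True)  # hypothetical RAC node VM

# Setting scsiX:Y.sharing = "multi-writer" disables the simultaneous-write
# protection for that disk; apply while the VM is powered off.
spec = vim.vm.ConfigSpec(extraConfig=[
    vim.option.OptionValue(key="scsi1:0.sharing", value="multi-writer"),
])
vm.ReconfigVM_Task(spec=spec)
Disconnect(si)
```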

  2. Matt

    I’m still looking for the answer to this design question. I use RDMs not for the performance difference between the two formats but to take advantage of SAN multi-pathing. I use the round robin PSP with the path-switching value set to 1 IO. With two HBAs or two iSCSI NICs and 4 to 8 ports in my disk array, this gives 8 to 16 paths for SAN I/O. I then put my C: drives as VMDKs on a shared datastore and put my data drives on RDMs dedicated to the VM. This gives nice, balanced SAN I/O across many paths. I have yet to see a VMDK design that does this. Most of what I see is a handful of datastores with a random bunch of VMs in them. I would like to switch to VMDKs, but I still don’t see a good way to achieve the same balanced I/O. If I dedicated datastores to VMs, I would have to use larger LUNs to have enough extra overhead for snapshots.
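For anyone verifying a path-balancing design like Matt’s, here is a read-only pyVmomi sketch that counts active paths per LUN and shows each LUN’s path selection policy (the round robin IO switching value itself is set and inspected with esxcli, not through this API). Host and connection details are assumptions.

```python
# A sketch only: list path counts and PSP per LUN on one ESXi host.
from pyVim.connect import SmartConnect, Disconnect

si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret")  # hypothetical connection details
host = si.RetrieveContent().searchIndex.FindByDnsName(
    None, "esx01.example.com", False)  # hypothetical ESXi host

mp = host.configManager.storageSystem.storageDeviceInfo.multipathInfo
for lun in mp.lun:
    active = sum(1 for p in lun.path if p.state == "active")
    print(f"{lun.id}: {active} active path(s), policy {lun.policy.policy}")
Disconnect(si)
```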

    1. Mark Achtemichuk (post author)

      Interesting configuration, Matt. With the disclaimer that I don’t have a full view of your environment, platforms involved, or connectivity speeds, I might suggest that you’ve made it more complicated than it needs to be. Multiple paths are good for performance, and sometimes required depending on the storage vendor to ensure FA ports aren’t overloaded, but there are very few scenarios where I see the need for that many paths, especially across two protocols. Weigh configuration complexity against management efficiency. It will all come down to the numbers (i.e. IOPS, block size, latency, etc.) as to what an optimal configuration should be, but I’d personally look toward a simpler configuration. Each storage OEM will also publish its best practices for architecture and connectivity, so I’d encourage you to reach out to the specific OEM. Today’s arrays can service an enormous amount of IO in a timely manner across 8/16Gb FC fabrics.

  3. Alex

    Hi guys,

    I am not a fan of RDMs, but sometimes it is mandatory to use them (e.g. Oracle RAC clusters, or MS clusters across different boxes).
    But now I have to set up a couple of file servers (Win2012) for a total of about 90 TB! In my opinion, the only option that is easy to set up and suitable for my needs is to use pRDMs to allocate 4 to 8 disks attached to the same number of LUNs. I don’t need to take snapshots of those disks, and backup to tape will be handled by an agent installed in the VM. What is your opinion on this scenario? Any other ideas to suggest?

    thanks

    Alex
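For reference, attaching a physical-mode RDM (pRDM) to a VM programmatically looks roughly like the pyVmomi sketch below, repeated once per backing LUN. The VM name, LUN device path, and SCSI slot are all assumptions, and the disk’s capacity comes from the LUN itself.

```python
# A sketch only: attach one physical-mode RDM to an existing VM.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret")  # hypothetical connection details
vm = si.RetrieveContent().searchIndex.FindByDnsName(
    None, "fileserver01", True)  # hypothetical file server VM

backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo(
    deviceName="/vmfs/devices/disks/naa.600...",  # the LUN's device path
    compatibilityMode="physicalMode",
    diskMode="independent_persistent",
    fileName="")  # the mapping file is created next to the VM
disk = vim.vm.device.VirtualDisk(
    backing=backing, key=-1,
    controllerKey=1000,  # first SCSI controller is usually key 1000
    unitNumber=1)        # first free slot after the OS disk (assumed)
spec = vim.vm.ConfigSpec(deviceChange=[vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
    device=disk)])
vm.ReconfigVM_Task(spec=spec)
Disconnect(si)
```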

    1. Preetam

      I think this is the only option. I would suggest the following things:

      1. Note down the LUN IDs and their relationship to the SCSI IDs, and document this properly. Unless and until you have this data, you cannot restore the VM when it crashes (a sketch that dumps this mapping follows this list)
      2. Take a backup of the VMX file frequently
      3. Once the VM is ready with the LUNs attached, test it by migrating it with vMotion to every host. Sometimes LUN details are not updated across the hosts
      4. Make the operations team aware of the Storage vMotion behavior that can convert RDMs into VMDKs on VMFS volumes
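Here is a read-only pyVmomi sketch of Preetam’s first item: dump each RDM’s device path, LUN UUID, and SCSI address so the mapping can be documented and the VM rebuilt later. The VM name and connection details are assumptions.

```python
# A sketch only: print the RDM-to-SCSI mapping for one VM.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret")  # hypothetical connection details
vm = si.RetrieveContent().searchIndex.FindByDnsName(
    None, "fileserver01", True)  # hypothetical VM

rdm_backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo
for dev in vm.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualDisk) and \
            isinstance(dev.backing, rdm_backing):
        print(f"controller {dev.controllerKey}, unit {dev.unitNumber} -> "
              f"{dev.backing.deviceName} (LUN uuid {dev.backing.lunUuid})")
Disconnect(si)
```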

  4. Chris

    For me, Storage vMotion time is the main reason to choose RDMs for large disks. The longer a Storage vMotion runs, the greater the chance of something timing out or going wrong.

  5. Lasse

    What disk format did you use for the VMDK test?

    Can you supply results for:
    Thin disk
    Lazy-zeroed thick disk
    Eager-zeroed thick disk?

    1. Mark Achtemichuk (post author)

      The testing was done using the eager-zeroed thick format.

      Unfortunately the testing is complete, so I’m unable to provide results for the other formats. That said, we’ve published other data in the past demonstrating the capabilities of the other formats here:
      http://www.vmware.com/pdf/vsp_4_thinprov_perf.pdf

      As well, some of our storage OEM partners have published newer results showing comparable performance across the formats.

  6. David

    Thanks for this post, excellent information. In a somewhat similar vein to commenter Matt, I’m thinking about application data residing in a VMDK, instead of sitting in NFS shares or iSCSI LUNs served directly to the guest (I’ve not used RDMs before).

    I’ve been involved in testing that showed an application performing much better when it resided on a VMDK (from an NFS datastore) instead of running on an NFS share mounted directly by the guest OS. What is attractive about serving objects directly to the guest OS is the ability to place application data on the desired storage object instead of being tied to the same datastore the OS VMDK resides on.

    I’d like to move in the direction of placing application data on datastores (in the form of a VMDK), but the biggest limitation I’m dealing with now is the inability to source a single VM’s VMDKs from different datastores within the context of a vApp. Is anyone else aware of this limitation?
    Thanks!
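Outside the vApp context David describes, per-disk datastore placement works in plain vSphere; a minimal pyVmomi sketch of adding a data VMDK from a different datastore than the OS disk might look like this. The VM name, datastore, size, and SCSI slot are assumptions, and eager-zeroed thick matches the format used in the tests above.

```python
# A sketch only: add a 100 GB eager-zeroed thick data disk on another datastore.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret")  # hypothetical connection details
vm = si.RetrieveContent().searchIndex.FindByDnsName(
    None, "appserver01", True)  # hypothetical VM

backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
    fileName="[fast-datastore] appserver01/data.vmdk",  # hypothetical datastore
    diskMode="persistent", thinProvisioned=False, eagerlyScrub=True)
disk = vim.vm.device.VirtualDisk(
    backing=backing, key=-1,
    controllerKey=1000,              # first SCSI controller (usually key 1000)
    unitNumber=1,                    # assumed free slot
    capacityInKB=100 * 1024 * 1024)  # 100 GB
spec = vim.vm.ConfigSpec(deviceChange=[vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
    device=disk)])
vm.ReconfigVM_Task(spec=spec)
Disconnect(si)
```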

    1. Mark Achtemichuk (post author)

      Hi David. In response to your last paragraph: yes, there are a number of design considerations around vApp storage, whether used with a simple vSphere implementation or with the vCloud suite managed by vCloud Director. As this could be quite a discussion on its own, I’d suggest posting in the VMware communities (here: http://communities.vmware.com/community/vmtn/server/vsphere) for a more focused discussion depending on your requirements and configuration. In the event you’re actively experiencing an issue with a current vApp, please reach out to VMware support.

  7. Pingback: RDM versus VMDK (and Backup) in vSphere 5.1

  8. Pingback: ESX / ESXi - Hilfethread - Seite 94

  9. Pingback: Doing It Wrong: Virtualizing SQL Server - SQLRockstar - Thomas LaRock

  10. Pingback: Comparing of SMB performance of using RDM and VMDK on ESXI 5.0 | Peter Luk's Blog

  11. Pingback: VMDK versus RDM: Which One Do I Need for SQL Server?
