

Exchange Performs Well Using Fibre Channel, iSCSI, and NFS on vSphere

In a new whitepaper, a large installation of 16,000 Exchange users was configured across eight virtual machines (VMs) on a single VMware vSphere 4 server.  The storage used for the test was a NetApp FAS6030 array that supported Fibre Channel, iSCSI, and NFS storage protocols.  This allowed for a fair comparison of these three storage protocols on the same hardware.  The test results show that all three protocols performed well, with Fibre Channel leading the way and iSCSI and NFS following closely behind.

 

Similar tests have been done to compare Fibre Channel, iSCSI, and NFS on ESX in the past.  These tests used IOmeter to measure the storage performance.  In this new round of tests, Exchange Load Generator was used as the test tool to simulate an 8-hour work day.

 

The results show that Fibre Channel provided the best performance with the lowest CPU utilization.  Additionally, iSCSI and NFS were relatively close in performance.  The two graphs below summarize the test results, showing the SendMail average latency as reported by LoadGen and the overall CPU utilization of the ESX server.

 

[Figure: ESX server CPU utilization]

 

[Figure: SendMail average latency as reported by LoadGen]

 

The complete whitepaper has all of the configuration details and additional test results.
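
For readers who want to reproduce a similar comparison, the sketch below shows one way to summarize overall host CPU utilization from an esxtop batch-mode capture taken during a LoadGen run. This is not from the whitepaper: the capture command, file name, and counter column are assumptions, and the script is only a minimal illustration.

# Minimal sketch (not from the whitepaper): average the host CPU utilization
# column from an esxtop batch-mode capture, e.g. one taken with
# "esxtop -b -d 15 -n 1920 > loadgen_run.csv" to cover an 8-hour LoadGen run.
import csv

CSV_PATH = "loadgen_run.csv"  # hypothetical capture file name
# Assumed counter name fragment; the exact CSV header depends on the host.
COUNTER_FRAGMENT = "Physical Cpu(_Total)\\% Util Time"

def average_cpu_util(path):
    with open(path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        # Pick the first column whose name contains the assumed counter fragment.
        col = next(i for i, name in enumerate(header) if COUNTER_FRAGMENT in name)
        samples = [float(row[col]) for row in reader if len(row) > col and row[col]]
    return sum(samples) / len(samples)

if __name__ == "__main__":
    print("Average host CPU utilization: %.1f%%" % average_cpu_util(CSV_PATH))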

 

13 thoughts on “Exchange Performs Well Using Fibre Channel, iSCSI, and NFS on vSphere”

  1. Rick Scherer

    I’m curious to know if the iSCSI and NFS options were tested with a single 1GbE connection or 10GbE.

  2. Todd Muirhead

    The answer is that a single 1GbE connection was used for the iSCSI and NFS tests.
    In the spirit of a friendly blogger I would like to say that this info is in the whitepaper along with info regarding a test done with 4 x 1GbE for iSCSI.
    Thanks for the question.

  3. Dave Boone

    It doesn’t seem fair to compare these protocols when using a NetApp box that virtualizes FC and iSCSI LUNs as files within the WAFL file system. Seems like you’re throwing away some of the inherent low-latency, high-bandwidth benefits of FC with the additional overhead of a file system and the possibility of file (LUN) fragmentation.

  4. Jason Blosil

    Dave, a little FUD coming from EMC? Every modern storage system uses virtualized data objects. Whether you call them “Meta LUNs” or files, the real test is the effectiveness of the design to provide the performance, efficiency, and ease of use required by the customer. NetApp delivers on all three.

  5. Mika Ollikainen

    Is there a similar storage protocol comparison somewhere using an EMC Clariion/Celerra combination? Is it possible that NetApp WAFL slows down FC and iSCSI?

  6. Pavel Filin

    I think for NetApp it is normal that the FC option shows such bad results. Remember, an FC device on NetApp is a file. That's one more abstraction level. They must test other vendors' mid-range platforms: EMC, HP, HDS. The results are for NetApp, not for vSphere…

  7. Vaughn Stewart

    @Dave Boone – I realize my follow-up is ‘somewhat’ dated – but I believe your concerns about NetApp virtual LUNs are unfounded.
    In fact, in this test every storage protocol outperformed the same set of tests completed on EMC FC.
    Now, the tests run on EMC were with VI3 (as opposed to vSphere). While vSphere provides increased I/O performance, it does not account for the delta between the tests, nor does it explain why ‘real’ EMC FC LUNs underperform virtual NetApp LUNs and NetApp NFS.

  8. scott owens

    Any chance the type of NIC used is available?
    To see if it does TOE offloading for iSCSI?

  9. Todd Muirhead

    The NICs used were Intel PRO/1000 NICs based on the 82571EB controller. They do not do TOE offloading for iSCSI.
    Thanks – Todd

  10. Lonnie Cingular

    On page 47 of Ken's Virtual Reality (http://kensvirtualreality.files.wordpress.com/2009/12/the-great-vswitch-debate-combined.pdf), he writes that you can get greater iSCSI performance by moving the iSCSI initiators into the VMs themselves instead of letting the ESX host do the work.
    He does point out that this is initially more admin work, but load balancing and performance are improved; so is CPU usage, but that does not seem to be an issue.
    I wonder what kind of improvement that would have shown.

  11. Spud

    Lonnie, that's pointless and avoids half of the benefit of storage virtualisation, and therefore of virtualisation itself. But if performance is more important than ease of use, management, or cost of ownership, then cool.

  12. Alex

    You can get better performance if you use the FCoE data transfer protocol. Running Ethernet at 10Gb and Fibre Channel at 8Gb will improve your infrastructure. We tested it on HP WM.
    You can see FCoE details at http://fcoe.ru

