

Storage Protocol Comparison – A vSphere Perspective

On many occasions I’ve been asked for an opinion on the best storage protocol to use with vSphere. And my response is normally something along the lines of ‘VMware supports many storage protocols, with no preferences really given to any one protocol over another’. To which the reply is usually ‘well, that doesn’t really help me make a decision on which protocol to choose, does it?’

And that is true – my response doesn’t really help customers to make a decision on which protocol to choose. To that end, I’ve decided to put together a storage protocol comparison document on this topic. It looks at the protocols purely from a vSphere perspective; I’ve deliberately avoided performance, for two reasons:

  1.  We have another team in VMware that already does this sort of thing.
  2.  Storage protocol performance can vary considerably depending on the storage array vendor, so it doesn’t make sense to compare iSCSI & NFS from one vendor when another vendor might have a much better implementation of one of the protocols.

If you are interested in performance, there are links to a few performance comparison docs included at the end of the post.

Hope you find it useful.

vSphere Storage Protocol Comparison Guide

 

iSCSI

NFS

Fibre Channel

FCoE

Description

iSCSI presents block devices to an ESXi host. Rather than accessing blocks from a local disk, the I/O operations are carried out over a network using a block access protocol. In the case of iSCSI, remote blocks are accessed by encapsulating SCSI commands & data into TCP/IP packets. Support for iSCSI was introduced in ESX 3.0 back in 2006.

NFS (Network File System) presents file-based storage over the network to an ESXi host for mounting. The NFS server/array makes its local filesystems available to ESXi hosts. The ESXi hosts access the metadata and files on the NFS array/server using an RPC-based protocol.

VMware currently implements NFS version 3 over TCP/IP. VMware introduced support for NFS in ESX 3.0 in 2006.

Fibre Channel presents block devices, like iSCSI. Again, the I/O operations are carried out over a network using a block access protocol. In FC, remote blocks are accessed by encapsulating SCSI commands & data into Fibre Channel frames.

One tends to see FC deployed in the majority of mission-critical environments.

FC is the only one of these four protocols that has been supported on ESX since the beginning.

Fibre Channel over Ethernet also presents block devices, with I/O operations carried out over a network using a block access protocol. In this protocol, the SCSI commands and data are encapsulated into Ethernet frames. FCoE has many of the same characteristics as FC, except that the transport is Ethernet.

 

VMware introduced support for HW FCoE in vSphere 4.x, and SW FCoE in vSphere 5.0 back in 2011.

Implementation Options

1.        NIC with iSCSI capabilities using Software iSCSI initiator & accessed using a VMkernel (vmknic) port

Or:

2.        Dependent Hardware iSCSI initiator

Or:

3.        Independent Hardware iSCSI initiator

Standard NIC accessed using a VMkernel port (vmknic)

Requires a dedicated Host Bus Adapter (HBA) (typically two for redundancy & multipathing)

1.        Hardware Converged Network Adapter (CNA)

Or:

2.        NIC with FCoE capabilities using Software FCoE initiator

Speed/Performance considerations

iSCSI can run over a 1Gb or a 10Gb TCP/IP network.

Multiple connections can be multiplexed into a single session, established between the initiator and target

VMware supports jumbo frames for iSCSI traffic, which can improve performance. Jumbo frames carry payloads larger than 1500 bytes. Support for jumbo frames with IP storage was introduced in ESX 4, but not on all initiators (KB 1007654 & KB 1009473). iSCSI can introduce overhead on a host’s CPU (encapsulating SCSI data into TCP/IP packets).

 

NFS can run over a 1Gb or 10Gb TCP/IP network. The NFS protocol also supports UDP, but VMware’s implementation does not – it requires TCP.

VMware supports jumbo frames for NFS traffic, which can improve performance in certain situations.

Support for jumbo frames with IP storage was introduced in ESX 4.

NFS can introduce overhead on a host’s CPU (encapsulating file I/O into TCP/IP packets)

Fibre Channel can run at 1Gb/2Gb/4Gb/8Gb & 16Gb, but 16Gb HBAs must be throttled to run at 8Gb in vSphere 5.0.

Buffer-to-Buffer credits throttle throughput to ensure a lossless network. (End-to-End credits also exist in the FC standard, but are not used with Class 3, the class of service in common use.)

This protocol typically affects a host’s CPU the least, as the HBAs (required for FC) handle most of the processing (encapsulation of SCSI data into FC frames).

This protocol requires 10Gb Ethernet.

The point to note with FCoE is that there is no IP encapsulation of the data like there is with NFS & iSCSI, which reduces some of the overhead/latency. FCoE is SCSI over Ethernet, not IP.

This protocol also requires jumbo frames, since FC payloads are 2,112 bytes (roughly 2.2K) in size and cannot be fragmented.
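A rough back-of-the-envelope calculation illustrates the jumbo-frame points above. This is a sketch using nominal header sizes (it ignores TCP options, iSCSI digests, VLAN tags and so on), not a performance claim:

```python
# Nominal header sizes; real frames also carry TCP options, iSCSI
# digests, VLAN tags, etc., so treat these figures as illustrative.
ETH_OVERHEAD = 14 + 4      # Ethernet header + FCS
IP_HDR, TCP_HDR = 20, 20
ISCSI_BHS = 48             # iSCSI Basic Header Segment

def iscsi_efficiency(mtu):
    """Fraction of each frame carrying SCSI payload, assuming one
    iSCSI PDU per frame (a simplification)."""
    payload = mtu - IP_HDR - TCP_HDR - ISCSI_BHS
    return payload / (mtu + ETH_OVERHEAD)

print(f"iSCSI payload efficiency at 1500 MTU: {iscsi_efficiency(1500):.1%}")
print(f"iSCSI payload efficiency at 9000 MTU: {iscsi_efficiency(9000):.1%}")

# FCoE has no IP/TCP layer, but a full FC frame (2112-byte data field
# plus 24-byte FC header plus Ethernet encapsulation) cannot fit in a
# standard 1500-byte MTU and cannot be fragmented.
fcoe_frame = 2112 + 24 + ETH_OVERHEAD
print(f"Encapsulated FC frame: {fcoe_frame} bytes (> 1500)")
```

Jumbo frames shave a few percent of header overhead off every iSCSI or NFS frame; for FCoE they are not an optimisation but a requirement.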

 


 


 

iSCSI

NFS

Fibre Channel

FCoE

Load Balancing

VMware’s Pluggable Storage Architecture (PSA) provides a Round-Robin Path Selection Policy which will distribute load across multiple paths to an iSCSI target. Better distribution of load with PSP_RR is achieved when multiple LUNs are accessed concurrently.

There is no load balancing per se on the current implementation of NFS as there is only a single session. Aggregate bandwidth can be configured by creating multiple paths to the NAS array, and accessing some datastores via one path, and other datastores via another.

VMware’s Pluggable Storage Architecture (PSA) provides a Round-Robin Path Selection Policy which will distribute load across multiple paths to an FC target. Better distribution of load with PSP_RR is achieved when multiple LUNs are accessed concurrently.

VMware’s Pluggable Storage Architecture (PSA) provides a Round-Robin Path Selection Policy which will distribute load across multiple paths to an FCoE target. Better distribution of load with PSP_RR is achieved when multiple LUNs are accessed concurrently.
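The round-robin behaviour described above can be sketched as a toy model. This is illustrative only, not VMware’s PSP code; the real VMW_PSP_RR rotates paths after 1,000 I/Os per device by default (a tunable setting), while we use 2 here so the rotation is visible:

```python
class RoundRobinSelector:
    """Toy model of a round-robin path selection policy: issue a fixed
    number of I/Os down one path, then rotate to the next active path."""
    def __init__(self, paths, ios_per_path=1000):
        self.paths = list(paths)
        self.ios_per_path = ios_per_path
        self.index = 0        # current path
        self.count = 0        # I/Os issued on the current path

    def select(self):
        path = self.paths[self.index]
        self.count += 1
        if self.count >= self.ios_per_path:
            self.count = 0
            self.index = (self.index + 1) % len(self.paths)
        return path

rr = RoundRobinSelector(["vmhba2:C0:T0:L0", "vmhba3:C0:T0:L0"], ios_per_path=2)
print([rr.select().split(":")[0] for _ in range(6)])
# → ['vmhba2', 'vmhba2', 'vmhba3', 'vmhba3', 'vmhba2', 'vmhba2']
```

Because each device rotates independently, accessing multiple LUNs concurrently spreads I/O across all paths, which is why PSP_RR balances best with several active LUNs.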

Resilience

VMware’s PSA implements failover via its Storage Array Type Plugin (SATP) for all supported iSCSI arrays. The preferred method for SW iSCSI is to configure iSCSI port binding, but failover can also be achieved by adding multiple targets on different subnets mapped to the iSCSI initiator.

NIC teaming can be configured so that if one interface fails, another can take its place. However, this relies on detecting a network failure, and may not be able to handle error conditions occurring on the NFS array/server side.

VMware’s PSA implements failover via its Storage Array Type Plugin (SATP) for all supported FC arrays.

VMware’s PSA implements failover via its Storage Array Type Plugin (SATP) for all supported FCoE arrays.

Error checking

iSCSI uses TCP, which resends dropped packets.

NFS uses TCP, which resends dropped packets.

Fibre Channel is implemented as a lossless network. This is achieved by throttling throughput at times of congestion using Buffer-to-Buffer credits.

Fibre Channel over Ethernet requires a lossless network. This is achieved by the implementation of a Pause Frame mechanism at times of congestion.
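The credit-based flow control that makes FC lossless can be illustrated with a toy model (a sketch, not real FC stack code): the sender may only transmit while it holds Buffer-to-Buffer credits, and it stalls rather than dropping frames when credits run out.

```python
class BBCreditLink:
    """Toy model of Fibre Channel buffer-to-buffer credit flow control.
    Each frame sent consumes one credit; the receiver returns a credit
    (an R_RDY primitive) once it frees a buffer. When credits hit zero
    the sender pauses -- frames are never dropped due to congestion."""
    def __init__(self, credits):
        self.credits = credits

    def can_send(self):
        return self.credits > 0

    def send_frame(self):
        if not self.can_send():
            raise RuntimeError("no credits: sender must wait, frame is not dropped")
        self.credits -= 1

    def receive_r_rdy(self):   # receiver has freed a buffer
        self.credits += 1

link = BBCreditLink(credits=2)
link.send_frame()
link.send_frame()
print(link.can_send())   # False: sender is throttled until an R_RDY arrives
link.receive_r_rdy()
print(link.can_send())   # True
```

FCoE achieves the same lossless behaviour differently: instead of pre-granted credits, the receiver sends a Pause Frame back when its buffers fill.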

Security

iSCSI implements the Challenge Handshake Authentication Protocol (CHAP) to ensure initiators and targets trust each other.

VLANs or private networks are highly recommended to isolate the iSCSI traffic from other traffic types.

 

VLANs or private networks are highly recommended to isolate the NFS traffic from other traffic types.

Some FC switches support the concept of a VSAN to isolate parts of the storage infrastructure. VSANs are conceptually similar to VLANs.

 

Zoning between hosts and FC targets also offers a degree of isolation.

Some FCoE switches support the concept of a VSAN to isolate parts of the storage infrastructure.

 

Zoning between hosts and FCoE targets also offers a degree of isolation.
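For reference, the CHAP mechanism mentioned in the iSCSI column is defined in RFC 1994: the responder proves knowledge of the shared secret by returning an MD5 digest over the challenge, without ever transmitting the secret itself. A minimal sketch (names and values here are made up for illustration):

```python
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """CHAP response per RFC 1994: the MD5 digest of the one-byte
    identifier, the shared secret and the challenge, concatenated."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# The target issues a random challenge; the initiator returns the digest,
# proving it knows the secret without sending it over the wire.
challenge = os.urandom(16)
initiator_resp = chap_response(1, b"shared-secret", challenge)

# The target computes the same digest from its own copy of the secret:
assert initiator_resp == chap_response(1, b"shared-secret", challenge)
print(initiator_resp.hex())
```

Note that CHAP authenticates the endpoints but does not encrypt the data stream, which is why the VLAN/private-network isolation above is still recommended.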


 


 

iSCSI

NFS

Fibre Channel

FCoE

 

VAAI Primitives

Although VAAI primitives may be different from array to array, iSCSI devices can benefit from the full complement of block primitives:

·          Atomic Test/Set

·          Full Copy

·          Block Zero

·          Thin Provisioning

·          UNMAP

 

These primitives are built-in to ESXi, and require no additional software installed on the host.

Again, these vary from array to array. The VAAI primitives available on NFS devices are:

·          Full Copy (but not with Storage vMotion, only with cold migration)

·          Pre-allocate space (WRITE_ZEROs)

·          Clone offload using native snapshots

 

Note that for VAAI NAS, one requires a plug-in from the storage array vendor.

 

Although VAAI primitives may be different from array to array, FC devices can benefit from the full complement of block primitives:

·          Atomic Test/Set

·          Full Copy

·          Block Zero

·          Thin Provisioning

·          UNMAP

 

These primitives are built-in to ESXi, and require no additional software installed on the host.

Although VAAI primitives may be different from array to array, FCoE devices can benefit from the full complement of block primitives:

·          Atomic Test/Set

·          Full Copy

·          Block Zero

·          Thin Provisioning

·          UNMAP

 

These primitives are built-in to ESXi, and require no additional software installed on the host.
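Of the block primitives listed above, Atomic Test & Set (ATS) is worth a closer look: it lets an ESXi host lock a VMFS on-disk structure by asking the array to compare a lock sector and write a new value in a single atomic operation, instead of reserving the whole LUN. A toy model (illustrative only; the real primitive is a SCSI compare-and-write handled inside the array):

```python
import threading

class ToyATSLun:
    """Toy model of the VAAI Atomic Test & Set primitive: write the new
    value only if the current contents match the expected value, as one
    atomic operation. Other hosts' I/O to the rest of the LUN is not
    blocked, unlike a whole-LUN SCSI reservation."""
    def __init__(self):
        self._sectors = {}
        self._lock = threading.Lock()   # stands in for array-side atomicity

    def atomic_test_and_set(self, lba, expected, new):
        with self._lock:
            if self._sectors.get(lba) == expected:
                self._sectors[lba] = new
                return True
            return False

lun = ToyATSLun()
print(lun.atomic_test_and_set(7, None, "host-A"))   # True: lock acquired
print(lun.atomic_test_and_set(7, None, "host-B"))   # False: already held
```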

ESXi Boot from SAN

iSCSI: Yes

NFS: No

Fibre Channel: Yes

FCoE: SW FCoE – No; HW FCoE (CNA) – Yes

RDM Support

iSCSI: Yes

NFS: No

Fibre Channel: Yes

FCoE: Yes

Maximum Device Size

iSCSI: 64TB

NFS: Refer to the NAS array/server vendor for the maximum supported datastore size. The theoretical size is much larger than 64TB, but it requires the NAS vendor to support it.

Fibre Channel: 64TB

FCoE: 64TB

Maximum number of devices

iSCSI: 256

NFS: Default 8, maximum 256

Fibre Channel: 256

FCoE: 256

Protocol direct to VM

iSCSI: Yes, via an in-guest iSCSI initiator.

NFS: Yes, via an in-guest NFS client.

Fibre Channel: No, but FC devices can be mapped directly to a VM with NPIV. The LUN must first be presented to the host and then mapped to the VM as an RDM, and the hardware (switches, HBAs) must support NPIV.

FCoE: No

Storage vMotion Support

iSCSI: Yes

NFS: Yes

Fibre Channel: Yes

FCoE: Yes

Storage DRS Support

iSCSI: Yes

NFS: Yes

Fibre Channel: Yes

FCoE: Yes

Storage I/O Control Support

iSCSI: Yes, since vSphere 4.1

NFS: Yes, since vSphere 5.0

Fibre Channel: Yes, since vSphere 4.1

FCoE: Yes, since vSphere 4.1

Virtualized MSCS Support

iSCSI: No. VMware does not support MSCS nodes built on VMs residing on iSCSI storage. However, the use of software iSCSI initiators within guest operating systems configured with MSCS, in any configuration supported by Microsoft, is transparent to ESXi hosts, and there is no need for explicit support statements from VMware.

NFS: No. VMware does not support MSCS nodes built on VMs residing on NFS storage.

Fibre Channel: Yes. VMware supports MSCS nodes built on VMs residing on FC storage.

FCoE: No. VMware does not support MSCS nodes built on VMs residing on FCoE storage.


 


 

iSCSI

NFS

Fibre Channel

FCoE

Ease of configuration

Medium – Setting up the iSCSI initiator requires a little know-how; you simply need the FQDN or IP address of the target. Some configuration for initiator mapping and LUN presentation is needed on the array side. Once the target is discovered through a scan of the SAN, LUNs are available for datastores or RDMs.

Easy – Just need the IP or FQDN of the target, and the mount point. Datastores appear immediately once the host has been granted access from the NFS array/server side.

Difficult – Involves zoning at the FC switch level, and LUN masking at the array level once the zoning is complete. More complex to configure than IP Storage. Once the target is discovered through a scan of the SAN, LUNs are available for datastores or RDMs.

Difficult – Involves zoning at the FCoE switch level, and LUN masking at the array level once the zoning is complete. More complex to configure than IP Storage. Once the target is discovered through a scan of the SAN, LUNs are available for datastores or RDMs.

Advantages

No additional hardware necessary – can use already existing networking hardware components and iSCSI driver from VMware, so cheap to implement.

Well known and well understood protocol. Quite mature at this stage.

Admins with network skills should be able to implement.

Can be troubleshot with generic network tools, such as Wireshark.

 

No additional hardware necessary – can use already existing networking hardware components, so cheap to implement.

Well known and well understood protocol.

Also very mature.

Admins with network skills should be able to implement.

Can be troubleshot with generic network tools, such as Wireshark.

Well known and well understood protocol.

Very mature, and trusted.

Found in majority of mission critical environments.

Enables converged networking, allowing the consolidation of network and storage traffic onto the same network via CNA – converged network adapter.

Using DCBX (Data Center Bridging Exchange), FCoE has been made lossless even though it runs over Ethernet. DCBX does other things, like enabling different traffic classes to run on the same network, but that is beyond the scope of this discussion.

Disadvantages

Inability to route traffic when iSCSI port binding is implemented.

Possible security issues, as there is no built in encryption, so care must be taken to isolate traffic (e.g. VLANs).

SW iSCSI can cause additional CPU overhead on the ESX host.

TCP can introduce latency for iSCSI.

Since VMware’s implementation uses only a single connection per session, configuring for maximum bandwidth across multiple paths needs some care and attention.

No PSA multipathing

Same security concerns as iSCSI, since everything is transferred in clear text, so care must be taken to isolate traffic (e.g. VLANs).

NFS is still version 3, which does not have the multipathing or security features of NFS v4 or NFS v4.1.

NFS can cause additional CPU overhead on the ESX host

TCP can introduce latency for NFS.

Still only runs at 8Gb, which is slower than some other networks (16Gb HBAs are throttled to run at 8Gb in vSphere 5.0).

Needs a dedicated HBA, FC switch and FC-capable storage array, which makes an FC implementation rather more expensive.

Additional management overhead (e.g. switch zoning) is needed.

Could prove harder to troubleshoot compared to other protocols.

Rather new, and not quite as mature as other protocols at this time.

Requires a 10Gb lossless network infrastructure which can be expensive.

Cannot route between initiator and targets using native IP routing – instead it has to use protocols such as FIP (FCoE Initialization Protocol).

Could prove complex to troubleshoot/isolate issues with network and storage traffic using the same pipe.


Note 1 – I've deliberately skipped AoE (ATA-over-Ethernet), as we have not yet seen significant take-up of this protocol at this time. Should it gain more exposure, I’ll revisit this article.

Note 2 – As I mentioned earlier, I’ve deliberately avoided getting into a performance comparison. This has been covered in other papers. Here are some VMware whitepapers which cover storage performance comparison:

Get notified of these blog postings and more VMware Storage information by following me on Twitter: @VMwareStorage

24 thoughts on “Storage Protocol Comparison – A vSphere Perspective”

  1. greg schulz

    Nice comparisons, however has shared and switchable SAS support been removed from vSphere, or was it simply left out of the comparison for some reason or the other?
    Cheers gs

  2. greg schulz

    Ok, that makes more sense as they are also the ones that discussed and covered the most as well.
    I routinely encounter people who are surprised to hear that there is such a thing as shared and switched SAS from vendors ranging from Dell, HP, IBM, NetApp, Oracle and others along with support on the vmware hcl site (assuming that has not recently changed of course).
    Granted most of those and other vendors along with their followers tend to focus on what others are talking about or are known (e.g. iSCSI, FC, FCoE, NFS). All of the above different supported interfaces have their place, benefits, caveats, supporters and detractors. The trick is figuring out which is best for your specific environment.
    Oh, fwiw, I like them all when used where they make the most sense (or if you are a vendor the most dollars ;)…
    Cheers gs

  3. Erik Smith

    Hi a couple of points:
    1) It’s Fibre Channel not Fiber Channel
    2) End-to-end (E2E) is not used with class 3 FC (used by the majority of end users). BB_Credit is used with class 3.
    3) Personally I slightly disagree with your assesment of “ease of configuration” and have discussed this topic in detail at http://brasstacksblog.typepad.com/brass-tacks/2012/02/fc-and-fcoe-versus-iscsi-network-centric-versus-end-node-centric-provisioning.html
    Other than these minor points I liked the post. Thanks for the information.

  4. Chogan

    Thanks for the clarifications Erik. The “ease of configuration” is always a matter of conjecture. I’m sure readers will find your post useful in that regard.

  5. Doug B

    I believe i have a correction in the FC/Protocol direct to VM section:
    “FC devices can be mapped directly to the VM with NPIV. This still requires RDM mapping to the VM first…” should indicate that the RDM mapping to the HOST is required first.

  6. Chogan

    Hello Doug,
    You are correct. The LUN must first be mapped to the host, then the RDM must be mapped to the VM before NPIV can be used. I should have clarified that in the posting.
    Cormac

  7. Iñigo

    Nice comparison table, thanks.
    Regarding FCoE support, can you confirm that “ESXi Boot from SAN” and “Virtualized MSCS” are not supported with any FCoE option? With CNA-based FCoE I was expecting the same feature support than native FC.
    And within FCoE, are there differences in supported features between CNA-based and software-based FCoE?
    Thanks

  8. Chogan

    Hi Iñigo,
    Thanks for commenting and good catch. The ‘Boot from SAN’ is available to HW FCoE, but not SW FCoE. I will fix that entry.
    Unfortunately the MSCS restriction is in place, and this is clearly called out in the MSCS configuration guide.

  9. forbsy

    For the VAAI information, there are a number of new constructs for NFS:
    NFS – Extended Stats
    NFS – Space reservation (procure eagerzeroed thick VMDK)
    Also, Protocol direct to VM. I guess it depends on the storage. NetApp has software called Windows SnapDrive. This facilitates FC to be mapped directly to the VM – without NPIV.
    Nice article!

  10. Chogan

    Thanks for the comment Ian.
    Yes, I did omit the extended stats for NFS, just because it is not that visible, and I’m not sure how many partners have implemented it. Good catch though.
    The NFS space reservation is referred to as preallocate space above.
    Cormac

  14. Roberto Neigenfind

    I´m in this market for a while and I´ve seen some discussions from the past that nowadays seems even funny like Ethernet and Token Ring performance discussions. Glad to know that I´m seeing now the history repeating. In few works: iSCSI will win (as Ethernet won in the past) just because will deliver more thoughtput with few dollars. :)

  15. Rolf Bartels

    Can a single VMFS Datastore be presented to 2 seperate hosts, one using FC and another using iSCSI, as long as the storage supports both protocols ?

    1. Cormac Hogan

      You cannot access the same LUN using different protocols from the same host. http://pubs.vmware.com/vsphere-51/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-51-storage-guide.pdf, page 15 states the following:

      Accessing the same storage through different transport protocols, such as iSCSI and Fibre Channel, at the same time is not supported.

      However, accessing from different hosts using different protocols should not be a problem afaik.

  17. Ben Warner

    Hi Cormac,

    Is this document the latest storage performance comparison available or is there something updated for a later vSphere release?

    Thanks,

    Ben Warner

