Chad (of EMC) and Vaughn (of NetApp) posted today a great collaborative blog article (with others from VMware, Dell/EqualLogic and HP/Lefthand) that has a nice backgrounder on iSCSI, talks about some design considerations, links to lots of resources, and then talks about some little-known configuration and performance considerations. If you are not an iSCSI guru, you should read this post:
Virtual Geek: A Multivendor Post to help our mutual iSCSI customers using VMware.
Today’s post is one you don’t often find in the blogosphere:
today’s post is a collaborative effort initiated by me, Chad Sakac
(EMC), which includes contributions from Andy Banta (VMware), Vaughn
Stewart (NetApp), Eric Schott (Dell/EqualLogic), Adam Carter
(HP/Lefthand), David Black (EMC), and various other folks at each of the
companies. Together, our companies make up the large majority
of the iSCSI market, all make great iSCSI targets, and we (as
individuals and companies) all want our customers to have iSCSI success.
I have to say, I see this one often: customers
struggling to get high throughput out of iSCSI targets on ESX.
Sometimes they are OK with that, but often I hear this comment: "…My
internal SAS controller can drive 4-5x the throughput of an iSCSI target…"
Can you get high throughput with iSCSI with GbE on ESX? The answer is YES.
But there are some complications, and some configuration steps that are
not immediately apparent. You need to understand some iSCSI
fundamentals, some Link Aggregation fundamentals, and know some ESX
internals – none of which are immediately obvious… If you’re
interested (and who wouldn’t be interested with a great topic and a
bizarro-world “multi-vendor collaboration”… I can feel the space-time
continuum collapsing around me :-), read on…
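To see why a single GbE link is such a hard ceiling for software iSCSI, here is a back-of-the-envelope sketch of my own (not from the post): it estimates the practical payload rate of one gigabit link once Ethernet, IP, and TCP framing overhead is subtracted, assuming a standard 1500-byte MTU and no jumbo frames.

```python
# Illustrative estimate: practical iSCSI payload ceiling on one GbE link.
# All constants are standard protocol sizes; the result is a rough model,
# not a measurement.

GBE_LINE_RATE_BITS = 1_000_000_000  # 1 Gb/s line rate

# Per-frame Ethernet overhead: header (14) + FCS (4) + preamble (8)
# + inter-frame gap (12), in bytes.
ETH_OVERHEAD = 14 + 4 + 8 + 12
IP_HEADER = 20
TCP_HEADER = 20
MTU = 1500  # standard MTU, no jumbo frames

payload_per_frame = MTU - IP_HEADER - TCP_HEADER  # TCP payload bytes (1460)
frame_on_wire = MTU + ETH_OVERHEAD                # bytes actually on the wire

efficiency = payload_per_frame / frame_on_wire
ceiling_mb_s = GBE_LINE_RATE_BITS / 8 * efficiency / 1e6

print(f"Protocol efficiency: {efficiency:.1%}")
print(f"Practical ceiling:   ~{ceiling_mb_s:.0f} MB/s per GbE link")
```

That works out to roughly 95% efficiency, or just under 120 MB/s of payload per link – which is why a multi-link SAS controller can look 4-5x faster, and why the configuration tricks in the post matter.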
Stephen Foskett gives us the take-home. Essential Reading for VMware ESX iSCSI Users! – Stephen Foskett, Pack Rat.
- Ethernet link aggregation doesn’t buy you anything in iSCSI environments
- iSCSI HBAs don’t buy you much other than boot-from-SAN in ESX, either
- The most common configuration (ESX software iSCSI) is limited to about 160 MB/s per iSCSI target over one-gigabit Ethernet, but that’s probably fine for most applications
- Adding multiple iSCSI targets adds performance across the board, but configurations vary by array
- Maximum per-target performance comes from guest-side software iSCSI, which can make use of multiple Ethernet links to push each array as fast as it can go
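Foskett’s last three bullets describe two different scaling behaviors, which a toy model of my own (illustrative assumptions, not figures from the post beyond the ~160 MB/s quoted above) makes concrete: the ESX software initiator scales only with the number of targets, while a guest-side initiator with multipathing can stripe a single target across every available link.

```python
# Toy model of the two scaling behaviors in Foskett's summary.
# Constants are rough assumptions for illustration only.

ESX_PER_TARGET_MB_S = 160  # approximate per-target ceiling cited above
GBE_LINK_MB_S = 119        # practical payload rate of one GbE link

def esx_sw_iscsi_mb_s(num_targets: int) -> int:
    """ESX software iSCSI: aggregate throughput scales with target
    count only; link aggregation does not help a single target."""
    return num_targets * ESX_PER_TARGET_MB_S

def guest_sw_iscsi_mb_s(num_links: int, array_limit_mb_s: int) -> int:
    """Guest-side software iSCSI + MPIO: one target can use every
    available link, up to whatever the array itself can deliver."""
    return min(num_links * GBE_LINK_MB_S, array_limit_mb_s)

print(esx_sw_iscsi_mb_s(1))   # 160: one target, capped
print(esx_sw_iscsi_mb_s(4))   # 640: four targets scale linearly
print(guest_sw_iscsi_mb_s(4, array_limit_mb_s=1000))  # 476: 4 links striped
```

The design difference is where multipathing lives: in the hypervisor’s storage stack each target rides one session, while a guest initiator can open sessions over several links and let MPIO spread the load.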
More like this, please.
Hmm, so accessing iSCSI directly from guest VMs gives better performance than accessing it via the hypervisor layer? Sounds like there is a world of optimization to do.
I assume the results would be similar for NFS storage too?
Hm… I wonder if RDM can perform faster than mapping the datastore directly on ESXi.