By Mark Ma
With the release of vSphere 6.7, VMware added iSER (iSCSI Extensions for RDMA) as a natively supported storage protocol in ESXi. iSER carries the iSCSI protocol over RDMA, so users can boost their vSphere storage performance simply by replacing regular NICs with RDMA-capable NICs. RDMA (Remote Direct Memory Access) transfers data directly from the memory of one computer to the memory of another, minimizing CPU/kernel involvement. By bypassing the kernel, we get extremely high I/O bandwidth and low latency. (To use RDMA, you must have an RDMA-capable adapter, such as an HCA/Host Channel Adapter, on both the source and destination.) In this blog, we compare standard iSCSI performance vs. iSER performance to see how iSER can unlock the full potential of your iSCSI storage.
The iSCSI/iSER target system is an open source Ubuntu 18.04 LTS server with 2 x E5-2403 v2 CPUs, 96 GB RAM, a 120 GB SSD for the OS, 8 x 450 GB (15K RPM) SAS drives, and a Mellanox ConnectX-3 Pro EN 40 GbE NIC (RDMA capable). The file system is ZFS, with the eight 450 GB SAS drives arranged as 4 x mirrored pairs in one pool (the ZFS equivalent of RAID 10). ZFS's aggressive in-memory caching can produce very good random IOPS and read throughput. We did not add any SSD for caching, since the goal of the test is to compare protocols, not disk drives. The iSCSI/iSER target software is the Linux SCSI target framework (TGT).
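As a rough sketch, a pool layout like the one described above could be built as follows. The device names, pool name, and zvol size here are illustrative assumptions, not details from the original setup:

```shell
# RAID 10 equivalent in ZFS: one pool striped across four two-way mirrors.
# Device names (sdb..sdi), the pool name, and the zvol size are illustrative.
zpool create tank \
  mirror /dev/sdb /dev/sdc \
  mirror /dev/sdd /dev/sde \
  mirror /dev/sdf /dev/sdg \
  mirror /dev/sdh /dev/sdi

# Carve out a block volume (zvol) to export later as the iSCSI/iSER LUN.
zfs create -V 500G tank/lun0
```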
The iSCSI/iSER initiator is ESXi 6.7.0, build 9214924, running on a host with 2 x Intel Xeon E5-2403 v2 CPUs @ 1.8 GHz, 96 GB RAM, a USB boot drive, and a Mellanox ConnectX-3 Pro EN 40 GbE NIC (RDMA capable). This host was used to benchmark the performance boost that iSER enables vs. iSCSI.
Both target and initiator connect to a 40 GbE switch with QSFP cables for optimal network performance.
Both NICs run the latest firmware, version 2.42.5000.
To measure performance, we used VMware I/O Analyzer, which uses the industry-standard benchmark Iometer.
We set the target to use the iSCSI driver to ensure the first test measures the standard iSCSI protocol.
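With TGT, registering the backing store under the plain iSCSI driver might look like the following sketch. The target ID, IQN, and zvol path are hypothetical:

```shell
# Create a target on TGT's standard iSCSI low-level driver
# (tid and IQN are illustrative).
tgtadm --lld iscsi --mode target --op new --tid 1 \
       --targetname iqn.2018-09.lab.local:zfs.lun0

# Attach the ZFS zvol as LUN 1.
tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 \
       --backing-store /dev/zvol/tank/lun0

# Allow any initiator to log in (acceptable for an isolated lab only).
tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address ALL
```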
For the iSCSI initiator, we simply enable the iSCSI software adapter.
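This can also be done from the ESXi command line; the adapter name and target address below are assumptions for illustration:

```shell
# Enable the iSCSI software adapter.
esxcli iscsi software set --enabled=true

# Point it at the TGT target and rescan
# (vmhba name and target IP are illustrative).
esxcli iscsi adapter discovery sendtarget add \
    --adapter=vmhba65 --address=192.168.10.20:3260
esxcli storage core adapter rescan --all
```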
iSCSI test one: Max Read IOPS—this test shows the max read IOPS (4K random read I/Os per second) from the iSCSI storage.
Result: 34,255.18 IOPS
iSCSI test two: Max Write IOPS—this test shows the max write IOPS from the iSCSI storage.
Result: 36,428.26 IOPS
iSCSI test three: Max Read Throughput—this test shows the max read throughput from the iSCSI storage.
Result: 2,740.80 MBPS
iSCSI test four: Max Write Throughput—this test shows the potential max write throughput from the iSCSI storage. (The performance is rather low due to the ZFS RAID configuration and the limited number of disk spindles.)
Result: 112.04 MBPS
We set the target to the iSER driver to ensure the second test measures only iSER connections.
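On TGT, switching to the iSER transport is a matter of registering the target under the iser low-level driver instead of iscsi. As before, the tid, IQN, and zvol path are hypothetical:

```shell
# Same target as before, but created on the RDMA-capable iSER driver.
tgtadm --lld iser --mode target --op new --tid 1 \
       --targetname iqn.2018-09.lab.local:zfs.lun0
tgtadm --lld iser --mode logicalunit --op new --tid 1 --lun 1 \
       --backing-store /dev/zvol/tank/lun0
tgtadm --lld iser --mode target --op bind --tid 1 --initiator-address ALL
```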
For the iSER initiator, we need to verify that an RDMA-capable NIC is installed. For this, we use the command:
esxcli rdma device list
Then we run the following command from the ESXi host to enable the iSER adapter.
esxcli rdma iser add
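After the iSER adapter is created, it still needs a VMkernel port bound to the RDMA uplink, a discovery target, and a rescan. A possible sequence looks like this, where the vmhba and vmk names and the target address are assumptions that will differ per host:

```shell
# Confirm the RDMA device is visible, then create the iSER adapter.
esxcli rdma device list
esxcli rdma iser add

# Bind the new iSER vmhba to the VMkernel NIC on the RDMA uplink,
# add the target, and rescan (names and address are illustrative).
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1
esxcli iscsi adapter discovery sendtarget add \
    --adapter=vmhba64 --address=192.168.10.20:3260
esxcli storage core adapter rescan --all
```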
iSER test one: Max Read IOPS—this test shows the max read IOPS from the iSER storage.
Result: 71,108.85 IOPS, which is 207.59% of the iSCSI result.
iSER test two: Max Write IOPS—this test shows the max write IOPS from the iSER storage.
Result: 69,495.70 IOPS, which is 190.77% of the iSCSI result.
iSER test three: Max Read Throughput—this test shows the max read throughput, measured in megabytes per second (MBPS), from the iSER storage.
Result: 4,126.53 MBPS, which is 150.56% of the iSCSI result.
iSER test four: Max Write Throughput—this test shows the max write throughput from the iSER storage. (The performance is rather low due to the ZFS RAID configuration and the limited number of disk spindles.)
Result: 106.48 MBPS, which is about 5% less than iSCSI.
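The iSER-to-iSCSI ratios quoted with each result can be reproduced directly from the raw numbers:

```shell
# Each line divides the iSER result by the matching iSCSI result
# and expresses it as a percentage of the iSCSI baseline.
awk 'BEGIN {
  printf "read IOPS:  %.2f%%\n", 71108.85 / 34255.18 * 100
  printf "write IOPS: %.2f%%\n", 69495.70 / 36428.26 * 100
  printf "read MBPS:  %.2f%%\n", 4126.53  / 2740.80  * 100
  printf "write MBPS: %.2f%%\n", 106.48   / 112.04   * 100
}'
```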
Random I/O performance roughly doubles for both read and write (about 200% of the iSCSI result), and read throughput improves to about 150% of iSCSI. Write throughput is about the same. The only difference is the storage protocol. We also performed these tests on older hardware, so just imagine what vSphere with iSER could do with state-of-the-art, NVMe-based storage and the latest 200 GbE network equipment.
The results seemed too good to be true, so I ran the benchmarks several times to ensure they were consistent. It’s great to see VMware’s innovation initiative in action. Who would have thought that “not so exciting” traditional iSCSI storage could more than double in performance through the efforts of VMware and Mellanox? It’s great to see VMware continue to push the boundaries of the Software-Defined Datacenter to better serve our customers in their digital transformation journey!
About the Author
Mark Ma is a senior consultant at VMware Professional Services. He is heavily involved with POC, architecture design, assessment, implementation, and user training. Mark specializes in end-to-end virtualization solutions based on Citrix, Microsoft, and VMware applications.
Comments
You mention that “Both target and initiator connect to 40 GbE switch with QSFP cables for optimal network performance.” but there is no discussion of configuration of the switch. I’ve read that Priority Flow Control needs to be enabled and configured correctly on the switch to use RDMA. Did you do any special configurations on the switch in this test case?
Indeed, my experience with iSER (in non-VMware environments) is that without a lossless network (PFC is one way of achieving this), you’ll lose write performance. And in fact, with the network properly configured I usually see a 2-3X performance improvement over iSCSI.
So I expect there may be more performance left on the table…
Which SAN/Storage vendor is on the HCL for iSER?