
SIOC considerations with mixed HBA environments

I’ve been involved in a few conversations recently related to device queue depth sizes. This all came about as we discovered that the default device queue depth for QLogic Host Bus Adapters was increased from 32 to 64 in vSphere 5.0. I must admit, this caught a few of us by surprise as we didn’t have the change documented anywhere. Anyway, various Knowledge Base articles have now been updated with this information. Immediately, folks wanted to know about the device queue depth for Emulex. Well, this hasn’t changed and remains at 32 (although in reality it is 30 for I/O, as two slots on Emulex HBAs are reserved). But are there other concerns?
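If you want to see what your hosts are actually reporting, the per-device maximum queue depth shows up in the output of esxcli. As a rough sketch only (it assumes the "Device Max Queue Depth" field and layout of "esxcli storage core device list", and that the script runs where esxcli is available, so verify against your own hosts), something like this could summarise the values:

```python
# Sketch: summarise the "Device Max Queue Depth" values reported by
# "esxcli storage core device list". Field name and output layout are
# assumptions based on typical esxcli output; check them on your hosts.
import subprocess
from collections import Counter

def max_queue_depths():
    out = subprocess.run(
        ["esxcli", "storage", "core", "device", "list"],
        capture_output=True, text=True, check=True
    ).stdout
    depths = []
    for line in out.splitlines():
        line = line.strip()
        if line.startswith("Device Max Queue Depth:"):
            depths.append(int(line.split(":", 1)[1]))
    return depths

if __name__ == "__main__":
    for depth, count in sorted(Counter(max_queue_depths()).items()):
        print(f"{count} device(s) reporting a max queue depth of {depth}")
```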

The next query I received was how this difference between Emulex & QLogic could impact shared storage, e.g. one host has QLogic HBAs and another host has Emulex HBAs. First off, we do not support mixing HBAs from different vendors in the same host, so we don’t have to worry about that aspect. But mixing hosts that each have HBAs from a single vendor is an interesting question, especially when the hosts are accessing the same datastore.

Overall, there won’t be an issue with hosts whose HBAs come from different vendors and have different queue depths. They can successfully share access to a datastore, behave correctly, and are fully supported.

There is one consideration however, and this is around Storage I/O Control (SIOC). I won’t go into all the details of SIOC in this post (more can be found in this white paper) but suffice to say that SIOC works by throttling the device queue depth across all hosts in order to prevent a noisy neighbour problem (where one VM on one host is impacting other VMs sharing the same datastore). Now, the reason for increasing the QLogic device queue depth from 32 to 64 was to give SIOC more slots to play with when it came to controlling I/O. However, we now have a situation where some hosts may have a device queue depth of 64 and other hosts may have a device queue depth of 32. This could mean that those hosts which have Emulex HBAs are not getting the same fairness as those hosts that have QLogic HBAs when there is no I/O congestion on the shared datastore.
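To make the throttling behaviour a little more concrete, here is a deliberately simplified model of my own (not the actual SIOC algorithm, which is share-aware and far more sophisticated): a host backs its device queue depth off when observed datastore latency crosses the congestion threshold, and grows it back toward the HBA maximum when latency is healthy.

```python
# Toy model of SIOC-style queue depth throttling on a single host.
# Illustrative only; this is not how SIOC is actually implemented.

CONGESTION_THRESHOLD_MS = 30   # SIOC's default congestion threshold is 30 ms

def adjust_queue_depth(current, observed_latency_ms, hba_max, floor=4):
    """Back off when datastore latency exceeds the threshold; otherwise
    grow the queue depth back toward the HBA maximum."""
    if observed_latency_ms > CONGESTION_THRESHOLD_MS:
        return max(floor, current // 2)
    return min(hba_max, current + 4)

depth = 64  # a QLogic host starts at the vSphere 5.0 default of 64
for latency_ms in [10, 45, 50, 35, 20, 10]:
    depth = adjust_queue_depth(depth, latency_ms, hba_max=64)
    print(f"latency={latency_ms:>2} ms -> queue depth {depth}")
```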

If some hosts in your cluster are using QLogic HBAs, others are using Emulex HBAs, and you are also using Storage I/O Control, we don’t think you need to change anything, even at times of I/O congestion. There are two possible scenarios:

1. The hosts with QLogic HBAs installed have VIP VMs that need a larger share of queue depth. If the latency rises above the configured SIOC congestion threshold, SIOC will bring down the device queue depth on the hosts with QLogic HBAs, since the limit is 64 on those hosts. In that case all is good: SIOC has more room to play with on those hosts and will be able to keep a higher queue depth even in periods of high congestion.

2. The hosts with Emulex HBAs installed have VIP VMs that need a larger share of queue depth. Again, in a mixed environment, SIOC will throttle down the QLogic HBAs before throttling down the Emulex HBAs (depending on share values, etc.). The only concern in a mixed environment is that when SIOC increases the queue depth on the Emulex hosts during periods of low congestion, it will get stuck at 32, whereas on the hosts using QLogic, SIOC will be able to go up to 64. This is completely normal and is simply what SIOC does in order to utilize the array effectively; the short sketch below illustrates the effect.
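Applying the same toy throttling policy as the earlier sketch (again, my own illustration rather than the real SIOC behaviour) to two hosts sharing one datastore makes the cap in scenario 2 visible: during a sustained quiet period both hosts grow their queue depth, but the Emulex host stops at 32 while the QLogic host keeps going to 64.

```python
# Same toy policy as the earlier sketch, applied to two hosts sharing one
# datastore: one with a QLogic HBA (max 64), one with an Emulex HBA (max 32).
THRESHOLD_MS = 30

def adjust(qd, latency_ms, hba_max):
    return max(4, qd // 2) if latency_ms > THRESHOLD_MS else min(hba_max, qd + 4)

qlogic_qd, emulex_qd = 32, 32
for latency_ms in [10] * 10:          # a sustained period of low latency
    qlogic_qd = adjust(qlogic_qd, latency_ms, hba_max=64)
    emulex_qd = adjust(emulex_qd, latency_ms, hba_max=32)

print(f"QLogic host settles at {qlogic_qd}, Emulex host settles at {emulex_qd}")
# The QLogic host climbs to 64 while the Emulex host is pinned at 32 --
# the behaviour described in scenario 2, and it is expected.
```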

You might think that we should increase the Emulex queue depth value to match that of the QLogic HBAs and give fairness across the cluster. Unfortunately, at this time Emulex is recommending a device queue depth of 32 as the sweet spot for its HBAs.

Hopefully Emulex will allow us to increase the queue depth of their HBAs up to 64 as well sometime in the future, but in the meantime we can leave the default 64 setting for QLogic and default 32 setting for Emulex as SIOC will work fine with these.

Get notification of these blogs postings and more VMware Storage information by following me on Twitter: @VMwareStorage