In vSphere 5.0, VMware has added a new UI to make it much easier to configure multipathing for Software iSCSI. This is a major change from 4.x, where users needed to use the command-line to get an optimal multipath configuration with Software iSCSI.
The UI allows one to select different network interfaces for iSCSI use, check them for compliance and configure them with the Software iSCSI Adapter. This multipathing configuration is also referred to as iSCSI Port Binding.
Why should I use iSCSI Multipathing?
The primary use case of this feature is to create a multipath configuration with storage that only presents a single storage portal, such as the Dell EqualLogic and the HP/LeftHand. Without iSCSI multipathing, these types of arrays would have only one path between the ESXi host and each volume. iSCSI multipathing allows us to multipath to this type of clustered storage.
Another benefit is the ability to use alternate VMkernel networks outside of the ESXi Management network. This means that if the management network suffers an outage, you continue to have iSCSI connectivity via the VMkernel ports participating in the iSCSI bindings.
Let's see how you go about setting this up. In this example, I have configured a Software iSCSI adapter, vmhba32.
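In case you want to check this from the command-line as well, here is a quick sketch using esxcli (the adapter name vmhba32 is specific to my host, so substitute your own):

~ # esxcli iscsi software set --enabled=true
~ # esxcli iscsi software get
~ # esxcli iscsi adapter list

The first command enables the Software iSCSI adapter if it is not already enabled, the second confirms the enabled state, and the third lists the host's iSCSI adapters, which should now include the software adapter.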
At present, no targets have been added, so no devices or paths have been discovered. Before implementing the iSCSI bindings, I need to create a number of additional VMkernel ports (vmk) for port binding to the Software iSCSI adapter.
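I create the VMkernel ports in the vSphere Client, but for reference, a rough CLI equivalent using esxcli would look something like this (the vSwitch name vSwitch0, the vmk numbering and the 10.10.74.x addressing are simply examples for my environment):

~ # esxcli network vswitch standard portgroup add --portgroup-name=iscsi1 --vswitch-name=vSwitch0
~ # esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iscsi1
~ # esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.10.74.11 --netmask=255.255.255.0 --type=static

Repeat the same steps for the second VMkernel port (iscsi2/vmk2) with its own IP address on the same subnet.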
As you can see from the above diagram, these vmnics are on trunked VLAN ports, allowing them to participate in multiple VLANs. For port binding to work correctly, the initiator must be able to reach the target directly on the same subnet – iSCSI port binding in vSphere 5.0 does not support routing. In this configuration, if I place my VMkernel ports on VLAN 74, they will be able to reach the iSCSI target without the need for a router. This is an important point and needs further elaboration, as it causes some confusion. If I do not implement port binding and simply use a standard VMkernel port, then my initiator can reach the targets through a routed network. This is supported and works just fine. It is only when iSCSI port binding is implemented that a direct, non-routed network between the initiators and targets is required, i.e. initiators and targets must be on the same subnet.
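If you want to double-check this before going any further, you can list the VMkernel addresses and netmasks and confirm that the target portal address falls in the same subnet (the portal address below is just a placeholder for one of my array's iSCSI portals):

~ # esxcli network ip interface ipv4 get
~ # vmkping 10.10.74.20

Note that a successful vmkping only proves the portal is reachable; it is the address/netmask comparison that tells you whether or not a router sits in the path.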
There is another important point to note when it comes to the configuration of iSCSI port bindings. On vSwitches which contain multiple vmnic uplinks, each VMkernel (vmk) port used for iSCSI bindings must be associated with a single vmnic uplink. The other uplink(s) on the vSwitch must be placed into an unused state. See below for an example of such a configuration:
This is only a requirement when you have multiple vmnic uplinks on the same vSwitch. If you are using multiple vSwitches, each with its own vmnic uplink, then this isn't an issue. Continuing with the network configuration, we create a second VMkernel (vmk) port. I now have two vmk ports, labeled iscsi1 & iscsi2. These will be used for my iSCSI binding. Note below that one of the physical adapters, vmnic1, appears disconnected from the vSwitch. This is because both of my VMkernel ports will be bound to vmnic0 only, so vmnic1 has been set to unused across the whole of the vSwitch.
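For those who want to script this, the teaming override can also be applied per port group with esxcli. This is only a sketch based on the configuration described above; adapters not listed as active or standby should end up unused, but do verify the result with the corresponding get command or in the UI:

~ # esxcli network vswitch standard portgroup policy failover set --portgroup-name=iscsi1 --active-uplinks=vmnic0
~ # esxcli network vswitch standard portgroup policy failover set --portgroup-name=iscsi2 --active-uplinks=vmnic0
~ # esxcli network vswitch standard portgroup policy failover get --portgroup-name=iscsi1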
Next, I return to the properties of my Software iSCSI adapter, and configure the bindings and iSCSI targets. There is now a new Network Configuration tab in the Software iSCSI Adapter properties window. This is where you add the VMkernel ports that will be used for binding to the iSCSI adapter. Click on the Software iSCSI adapter properties, then select the Network Configuration tab, and you will see something similar to the screenshot shown below:
After selecting the VMkernel adapters for use with the Software iSCSI Adapter, the Port Group Policy tab will tell you whether these adapters are compliant for binding. If you have more than one active uplink on a vSwitch that has multiple vmnic uplinks, the vmk interfaces will not show up as compliant. Only one uplink should be active; all other uplinks should be placed into an unused state.
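The binding itself can also be done from the CLI using the esxcli iscsi networkportal namespace (again, vmk1 and vmk2 are simply the VMkernel ports from my example, and vmhba32 is my Software iSCSI adapter):

~ # esxcli iscsi networkportal add --adapter=vmhba32 --nic=vmk1
~ # esxcli iscsi networkportal add --adapter=vmhba32 --nic=vmk2
~ # esxcli iscsi networkportal list --adapter=vmhba32

The list command shows each bound VMkernel port along with details such as its Compliant Status, mirroring what the Port Group Policy tab reports.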
You then proceed to the Dynamic Discovery tab, where the iSCSI targets can now be added. Because we are using port binding, you must ensure that these targets are reachable by the Software iSCSI Adapter through a non-routable network, i.e. the storage controller ports are on the same subnet as the VMkernel NICs:
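The CLI equivalent of the Dynamic Discovery step is to add the send target address and then rescan the adapter (10.10.74.20 is again just a placeholder for one of my array's iSCSI portals):

~ # esxcli iscsi adapter discovery sendtarget add --adapter=vmhba32 --address=10.10.74.20:3260
~ # esxcli storage core adapter rescan --adapter=vmhba32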
At this point, I have two VMkernel ports bound to the Software iSCSI Adapter, and 4 targets. These 4 targets all belong to the same storage array, so if I present a LUN on all 4 targets, the 2 bound VMkernel ports multiplied by the 4 targets should give me a total of 8 paths. Let's see what happens when I present a single LUN (ID 0):
So it does indeed look like I have 8 paths to that 1 device. Let's verify by looking at the paths view:
And if I take a look at a CLI multipath output for this device, I should see it presented on 8 different targets:
~ # esxcfg-mpath -l -d naa.6006016094602800c8e3e1c5d3c8e011 | grep "Target Identifier"
Target Identifier: 00023d000002,iqn.1992-04.com.emc:cx.ckm00100900477.a2,t,1
Target Identifier: 00023d000001,iqn.1992-04.com.emc:cx.ckm00100900477.a2,t,1
Target Identifier: 00023d000002,iqn.1992-04.com.emc:cx.ckm00100900477.b3,t,4
Target Identifier: 00023d000001,iqn.1992-04.com.emc:cx.ckm00100900477.b3,t,4
Target Identifier: 00023d000002,iqn.1992-04.com.emc:cx.ckm00100900477.b2,t,3
Target Identifier: 00023d000001,iqn.1992-04.com.emc:cx.ckm00100900477.b2,t,3
Target Identifier: 00023d000002,iqn.1992-04.com.emc:cx.ckm00100900477.a3,t,2
Target Identifier: 00023d000001,iqn.1992-04.com.emc:cx.ckm00100900477.a3,t,2
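Rather than eyeballing the list, you can also let grep do the counting, which should return 8 for this configuration:

~ # esxcfg-mpath -l -d naa.6006016094602800c8e3e1c5d3c8e011 | grep -c "Target Identifier"
8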
This new UI for iSCSI bindings certainly makes configuring multipathing for the Software iSCSI Adapter much easier. But do keep in mind the requirement for a non-routed network between the initiator and target, and the fact that VMkernel ports must have only a single active vmnic uplink on vSwitches that have multiple vmnic uplinks.
Get notification of these blog postings and more VMware Storage information by following me on Twitter: VMwareStorage