
NPIV: N-Port ID Virtualization

Let’s start by describing what this feature is. NPIV stands for N-Port ID Virtualization. It is an ANSI T11 standard that describes how a single Fibre Channel physical HBA port can register with a fabric using several worldwide port names (WWPNs), which might be considered virtual WWNs. Since this gives us multiple virtual HBAs per physical HBA, we can assign WWNs to individual VMs.

At first glance, this might sound kind of cool. But I’m going to be blunt with you here – I still don’t see the value of this feature in its current form. The steps required to set this up are pretty complex and time consuming, and the only advantage I see from VMs having their own WWNs is possibly Quality of Service (QoS) measurement. With each VM having its own WWN, you could conceivably track Virtual Machine traffic in the fabric if you had the appropriate tools. You may also get more visibility of VMs at the storage array level. But you have to use Raw Device Mappings (RDMs) mapped to the VM for NPIV, which means you do not get all the benefits associated with VMFS and VMDKs.

Having said that, I still see queries about how to configure NPIV from time to time. Therefore I decided to put the configuration steps into this post for reference. If there are readers who are considering using this feature (or have already implemented it), I’d really appreciate it if you would leave a comment and share with us the reason why you are using NPIV. Maybe I’m missing something obvious.

As you might see from the screenshots that I am using, the implementation is not being done on vSphere 5.0 but on a much earlier version of vSphere. However the steps haven’t changed one bit since we first brought out support for NPIV back in ESX 3.

Initial Configuration Check

First, you’ll need the following:

  • Administrator access to your vSphere environment
  • Administrator access to your FC Switch(es)
  • Administrator access to your FC Array

Next, you’ll need to make sure that the HBA that you are using supports NPIV. You can find this information on the VMware HCL, but I should think that any HBAs purchased in the last 3 years will support NPIV out of the box. Best to double check though.

Same goes for the FC Switch. Earlier FC switches may not support NPIV either, so best to check that too.

Lastly, ensure that one or more LUNs have been presented from the array to your ESX host(s) over FC, and that these LUNs are visible. This will also prove that your FC zoning is done correctly.
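If you prefer to confirm the LUN visibility from the service console rather than the vSphere Client, something along the following lines should do it. The exact commands vary by ESX release (esxcfg-scsidevs is the 4.x name; 3.x used esxcfg-vmhbadevs), so treat this as a sketch:

    # List all storage paths - the FC LUNs should show up against the vmhba of the FC HBA
    esxcfg-mpath -l

    # Compact list of SCSI devices (ESX 4.x; on ESX 3.x use esxcfg-vmhbadevs instead)
    esxcfg-scsidevs -c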

Virtual Machine Configuration

Now we’re ready to begin the configuration. Select the VM that will use NPIV, go to the VM Properties, select the Options tab, and check the Fibre Channel NPIV setting:

As you can see, there are no WWNs currently assigned. Next, we need to map an RDM to the VM in order to enable NPIV. With the RDM mapped in Physical Compatibility Mode (passthru), return to the Fibre Channel NPIV setting on the VM Properties Options tab and select the option to Generate new WWNs. After doing this step, a WWNN and 4 WWPNs should now be visible in the WWN Assignments view:

If you now examine the VMX file belonging to this VM, you will notice a number of new entries added:

    wwn.node = "2833000c2900000b"
    wwn.port = "2833000c2900000c,2833000c2900000d,2833000c2900000e,2833000c2900000f"
    wwn.type = "vc"

That completes the VM configuration.
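As an aside, the physical compatibility (passthru) RDM itself can also be created from the service console with vmkfstools rather than through the UI. The device path, datastore and VM folder names below are purely illustrative:

    # Create a passthru (physical compatibility mode) RDM pointing at the raw FC LUN
    # (device path and datastore/VM names are hypothetical - substitute your own)
    vmkfstools -z /vmfs/devices/disks/vmhba1:0:15:0 /vmfs/volumes/datastore1/npiv-vm/npiv-vm_rdm.vmdk

The resulting mapping file is then added to the VM as an existing disk.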

FC Switch Configuration

On the FC switch, you will now have to create zones from the VM’s NPIV WWPNs to the storage array port WWPNs. However, the VM’s NPIV WWPNs are not yet active, so they do not appear in the name server on the FC switch and have to be added manually. Once they are manually added to the FC switch, the WWPNs can then be placed in zones with the WWPNs of the storage array ports.

In this example, the WWNN & WWPNs of the VM have all been added to the same alias as the physical Qlogic HBA. This is by no means a best practice, but the advantage of doing it this way is that the NPIV WWNs will automatically be participating in the same zones as the physical WWN from the HBA.

Ideally, you would separate the physical HBA WWPN and the VM’s NPIV WWPNs into separate aliases. This would make management much easier. However, in this case, it was a quick way of allowing the NPIV WWNs to be zoned to the same storage array as the physical QLogic HBA.
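For reference, on a Brocade switch the equivalent CLI steps might look something like the following. The alias, zone and config names here are made up for illustration, the WWPNs are the ones generated for the VM earlier, and you should check the exact syntax against your Fabric OS release:

    # Option 1: add the VM's NPIV WWPNs to the alias already used by the physical HBA
    aliadd "esx_qla_hba", "28:33:00:0c:29:00:00:0c; 28:33:00:0c:29:00:00:0d"

    # Option 2 (tidier): give the NPIV WWPNs their own alias and zone it to the array ports
    alicreate "npiv_vm1", "28:33:00:0c:29:00:00:0c; 28:33:00:0c:29:00:00:0d"
    zonecreate "npiv_vm1_to_array", "npiv_vm1; array_spa_port0"
    cfgadd "fabric_cfg", "npiv_vm1_to_array"
    cfgsave
    cfgenable "fabric_cfg"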

So at this point, the NPIV WWNN and WWPNs from the Virtual Machine are added to the FC switch and are now zoned to the FC ports of the storage array. That completes the FC Switch configuration.

Storage Array Configuration

Things now start to become interesting. This procedure can vary from array to array, but basically the objective is to assign disks from the array to the NPIV WWNs of the Virtual Machine. The examples included here are taken from an EMC Symmetrix array using SYMCLI, and from an EMC Clariion using the Navisphere UI.

a) EMC Symmetrix

In this example, the physical LUNs are known by device IDs 0xF & 0x14. These devices are masked to the NPIV WWNs that were created in the Virtual Machine:

    C:\…\bin>symmask -sid 1121 -wwn 2833000c2900000b -dir 15B -port 0 add devs F,14

    The following devices are already assigned in at least one entry:
    000F 0014
    Would you like to continue (y/[n])? y

 We now need to refresh the masking information on the Symmetrix:

    C:\Program Files\EMC\SYMCLI\bin>symmask -sid 1121 refresh

    Refresh Symmetrix FA directors with contents of SymMask database 000281601121 (y/[n]) ? y
    Symmetrix FA directors updated with contents of SymMask Database 000281601121

Finally, you can review the device masking on the Symmetrix and see devices F & 14 masked to the physical HBAs and the NPIV HBAs of the Virtual Machine:

    C:\Program Files\EMC\SYMCLI\bin>symmaskdb list database -sid 1121 -dir 15B -port 0

    Symmetrix ID            : 000281601121
    Database Type           : Type1
    Last updated at         : 03:44:50 PM on Thu Nov 29,2007

    Director Identification : FA-15B
    Director Port           : 0

                                   User-generated
    Identifier        Type   Node Name        Port Name         Devices
    ----------------  -----  ---------------------------------  ---------
    210000e08b949fef  Fibre  210000e08b949fef 210000e08b949fef  000F
                                                                0014
    210100e08bb49fef  Fibre  210100e08bb49fef 210100e08bb49fef  000F
                                                                0014
    2833000c2900000b  Fibre  2833000c2900000b 2833000c2900000b  000F
                                                                0014
    2833000c2900000c  Fibre  2833000c2900000c 2833000c2900000c  000F
                                                                0014
    2833000c2900000d  Fibre  2833000c2900000d 2833000c2900000d  000F
                                                                0014
    .
    .

b) EMC Clariion

If, on the other hand, you are doing this on an EMC CX array, you first need to open the Navisphere UI, select the object representing the array, and select Connectivity Status:

[Screenshot: Connectivity Status]

We can see the 2 FC HBAs logged in and registered with Storage Processors A & B of the Clariion. We now add the new initiators from the Virtual Machine, i.e. the NPIV WWNs. The Clariion expects that initiators are added in the format <Node WWN>:<Port WWN>. Note that in adding the initiator, we are automatically associating it with a host previously registered with the physical HBA. This has the added advantage of automatically adding the NPIV WWNs from the VM to whatever storage groups the physical HBA is already a member of. This means that the LUN assignments within that storage group will automatically be presented to the NPIV WWNs too.

[Screenshot: Initiator creation]
Once the NPIV WWNs of the VM have been added, all the initiators show up in the Connectivity Status as follows. Note that they are all associated with the same server name for reasons mentioned previously.

[Screenshot: Connectivity Status with initiator groups added]
Note: You might find that you have to remove and re-add the host to the Storage Group for the new initiator groups to be picked up. I don’t know the reason; I’m just adding this in case you run into it.
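As an aside, if you prefer the CLI to the Navisphere UI, I believe the same initiator registration can be scripted with naviseccli along the following lines. The SP address, host name, host IP and the exact flags can vary by FLARE release and are assumptions here, so double-check them before relying on this:

    # Register the VM's NPIV initiator (<Node WWN>:<Port WWN>) against SP A port 0 and
    # associate it with the host already registered for the physical HBA
    # (SP address, host name and host IP below are hypothetical)
    naviseccli -h 10.0.0.1 storagegroup -setpath -o -hbauid 28:33:00:0c:29:00:00:0b:28:33:00:0c:29:00:00:0c -sp a -spport 0 -host esx-host-01 -ip 10.0.0.50 -failovermode 1 -arraycommpath 1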

Now all that remains to be done is to power on your VM. The RDM should now appear mapped to your VM, but using its own set of NPIV WWNs. That completes the configuration.

Troubleshooting

You will want to read this section, believe me!

First, check that all the configuration tasks have completed correctly:

Configuring the Virtual Machine

  • Add an RDM disk
  • Create NPIV Node and Port Names

Configuring the Fibre Channel Switch

  • Add the WWNs to the HBA alias
  • Check the zone and zone set information

Configuring the Storage Array:

  • Create Initiator groups if necessary
  • Present/Mask LUNs to the VM’s NPIV WWNs

If all of these tasks completed correctly, but the LUN is not being successfully presented, the following places should be checked for troubleshooting:

FC Switch – on a Brocade, a simple loop like the one below lets you monitor whether or not the NPIV WWNs are logging in (X & Y are the ports where the physical HBAs log into the switch):

    # Replace X & Y with the switch port numbers; Ctrl-C to stop
    while true; do portloginshow X; portloginshow Y; sleep 5; done

ESX host – Check /proc/scsi/qla2300/X and /var/log/vmkernel for messages pertaining to the NPIV initialization, where X represents an HBA number. The proc node will show both physical and virtual adapter information.
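For example, a couple of quick checks from the service console might look like this (HBA number 1 and the classic ESX log location are assumptions; adjust for your host):

    # Search the vmkernel log for NPIV / virtual port initialization messages
    grep -i npiv /var/log/vmkernel

    # Dump the qla2300 proc node for HBA 1 - both the physical and the virtual (NPIV) ports should be listed
    cat /proc/scsi/qla2300/1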

A useful, if quite old, white paper on NPIV can be found here.

Get notification of these blog postings and more VMware Storage information by following me on Twitter: @VMwareStorage