
Category Archives: From the Trenches

Path failover may not be successful when using Cisco MDS Switches on NX-OS 7.3 and FCoE based HBAs

So I wanted to get this blog post out sooner rather than later, as it might affect a significant number of customers. In a nutshell, if you perform array maintenance that requires you to reboot a storage controller, the probability of successful path failover is low. This is effectively due to stale entries in the Fibre Channel Name Server on Cisco MDS switches running NX-OS 7.3, which is a rather new code release. As the title suggests, this only affects FCoE HBAs, specifically ones that rely on our libfc/libfcoe stack for FCoE connectivity. Such HBAs would be Cisco fnic HBAs as well as a handful of Emulex FCoE HBAs and a couple of others.
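
If you are not sure whether a given host is in that category, a quick way to check which driver sits behind each HBA (run on the ESXi host; both commands exist on 5.5 and later, and the output columns vary slightly by release):

# esxcli storage core adapter list
# esxcfg-scsidevs -a

If the driver column shows fnic, or another FCoE driver that sits on top of libfc/libfcoe, that host is in the affected group.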

Here is an example of a successful path failover after receiving an RSCN (Register State Change Notification) from the array controller after performing a reboot:

2016-07-07T17:36:34.230Z cpu17:33461)<6>host4: disc: Received an RSCN event
 2016-07-07T17:36:34.230Z cpu17:33461)<6>host4: disc: Port address format for port (e50800)
 2016-07-07T17:36:34.230Z cpu17:33461)<6>host4: disc: RSCN received: not rediscovering. redisc 0 state 9 in_prog 0
 2016-07-07T17:36:34.231Z cpu14:33474)<6>host4: disc: GPN_ID rejected reason 9 exp 1
 2016-07-07T17:36:34.231Z cpu14:33474)<6>host4: rport e50800: Remove port
 2016-07-07T17:36:34.231Z cpu14:33474)<6>host4: rport e50800: Port entered LOGO state from Ready state
 2016-07-07T17:36:34.231Z cpu14:33474)<6>host4: rport e50800: Delete port
 2016-07-07T17:36:34.231Z cpu54:33448)<6>host4: rport e50800: work event 3
 2016-07-07T17:36:34.231Z cpu54:33448)<7>fnic : 4 :: fnic_rport_exch_reset called portid 0xe50800
 2016-07-07T17:36:34.231Z cpu54:33448)<7>fnic : 4 :: fnic_rport_reset_exch: Issuing abts
 2016-07-07T17:36:34.231Z cpu54:33448)<6>host4: rport e50800: Received a LOGO response closed
 2016-07-07T17:36:34.231Z cpu54:33448)<6>host4: rport e50800: Received a LOGO response, but in state Delete
 2016-07-07T17:36:34.231Z cpu54:33448)<6>host4: rport e50800: work delete

Here is a breakdown of what you just read:

  1. RSCN is received from the array controller
  2. Operation is now in state 9
  3. GPN_ID (Get Port Name ID) is issued to the switch but is rejected with reason code 9 (see http://lists.open-fcoe.org/pipermail/fcoe-devel/2009-June/002828.html)
  4. LibFC begins to remove the port information on the host
  5. Port enters LOGO (Logout) state from previous state, which was Ready
  6. LibFC Deletes the port information

After this, the ESX host will fail over to the other available ports, which would be on the peer SP:

2016-07-07T17:36:44.233Z cpu33:33459)<3> rport-4:0-1: blocked FC remote port time out: saving binding
 2016-07-07T17:36:44.233Z cpu55:33473)<7>fnic : 4 :: fnic_terminate_rport_io called wwpn 0x524a937aeb740513, wwnn0xffffffffffffffff, rport 0x0x4309b72f3c50, portid 0xffffffff
 2016-07-07T17:36:44.257Z cpu52:33320)NMP: nmp_ThrottleLogForDevice:3298: Cmd 0x2a (0x43a659d15bc0, 36277) to dev "naa.624a93704d1296f5972642ea0001101c" on path "vmhba3:C0:T0:L1" Failed: H:0x1 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0. Act:FAILOVER

A Host status of H:0x1 means NO_CONNECT, hence the failover.
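
If you want to sanity-check the same behavior on your own host after a controller reboot, a couple of quick greps against the vmkernel log (default path shown) are usually enough:

# grep -c "H:0x1" /var/log/vmkernel.log
# grep "Act:FAILOVER" /var/log/vmkernel.log | tail -5

A short burst of H:0x1/FAILOVER entries around the time of the controller reboot, followed by clean I/O on the surviving paths, is what a healthy failover looks like.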

Now here is an example of the same operation on a Cisco MDS switch running NX-OS 7.3 when a storage controller on the array is rebooted:

2016-07-14T19:02:03.551Z cpu47:33448)<6>host2: disc: Received an RSCN event
 2016-07-14T19:02:03.551Z cpu47:33448)<6>host2: disc: Port address format for port (e50900)
 2016-07-14T19:02:03.551Z cpu47:33448)<6>host2: disc: RSCN received: not rediscovering. redisc 0 state 9 in_prog 0
 2016-07-14T19:02:03.557Z cpu47:33444)<6>host2: rport e50900: ADISC port
 2016-07-14T19:02:03.557Z cpu47:33444)<6>host2: rport e50900: sending ADISC from Ready state
 2016-07-14T19:02:23.558Z cpu47:33448)<6>host2: rport e50900: Received a ADISC response
 2016-07-14T19:02:23.558Z cpu47:33448)<6>host2: rport e50900: Error 1 in state ADISC, retries 0
 2016-07-14T19:02:23.558Z cpu47:33448)<6>host2: rport e50900: Port entered LOGO state from ADISC state
 2016-07-14T19:02:43.560Z cpu2:33442)<6>host2: rport e50900: Received a LOGO response timeout
 2016-07-14T19:02:43.560Z cpu2:33442)<6>host2: rport e50900: Error -1 in state LOGO, retrying
 2016-07-14T19:02:43.560Z cpu58:33446)<6>host2: rport e50900: Port entered LOGO state from LOGO state
 2016-07-14T19:03:03.563Z cpu54:33449)<6>host2: rport e50900: Received a LOGO response timeout
 2016-07-14T19:03:03.563Z cpu54:33449)<6>host2: rport e50900: Error -1 in state LOGO, retrying
 2016-07-14T19:03:03.563Z cpu2:33442)<6>host2: rport e50900: Port entered LOGO state from LOGO state
 2016-07-14T19:03:23.565Z cpu32:33447)<6>host2: rport e50900: Received a LOGO response timeout
 2016-07-14T19:03:23.565Z cpu32:33447)<6>host2: rport e50900: Error -1 in state LOGO, retrying
 2016-07-14T19:03:23.565Z cpu54:33449)<6>host2: rport e50900: Port entered LOGO state from LOGO state
 2016-07-14T19:03:43.567Z cpu50:33445)<6>host2: rport e50900: Received a LOGO response timeout
 2016-07-14T19:03:43.567Z cpu50:33445)<6>host2: rport e50900: Error -1 in state LOGO, retrying
 2016-07-14T19:03:43.567Z cpu32:33447)<6>host2: rport e50900: Port entered LOGO state from LOGO state
 2016-07-14T19:04:03.568Z cpu54:33443)<6>host2: rport e50900: Received a LOGO response timeout
 2016-07-14T19:04:03.568Z cpu54:33443)<6>host2: rport e50900: Error -1 in state LOGO, retrying
 2016-07-14T19:04:03.569Z cpu32:33472)<6>host2: rport e50900: Port entered LOGO state from LOGO state
 2016-07-14T19:04:43.573Z cpu20:33473)<6>host2: rport e50900: Received a LOGO response timeout
 2016-07-14T19:04:43.573Z cpu20:33473)<6>host2: rport e50900: Error -1 in state LOGO, retrying
 2016-07-14T19:04:43.573Z cpu54:33443)<6>host2: rport e50900: Port entered LOGO state from LOGO state

Notice the difference? Here is a breakdown of what happened this time:

  1. RSCN is received from the array controller
  2. Operation is now in state 9
  3. GPN_ID (Get Port Name ID) is issued to the switches but is NOT rejected
  4. Since GPN_ID is valid, LibFC issues an Address Discovery (ADISC)
  5. 20 seconds later the ADISC times out, and this continues to occur every 20 seconds

The problem is that the ADISC will continue this behavior until the array controller completes the reboot and is back online:

2016-07-14T19:04:47.276Z cpu56:33451)<6>host2: disc: Received an RSCN event
 2016-07-14T19:04:47.276Z cpu56:33451)<6>host2: disc: Port address format for port (e50900)
 2016-07-14T19:04:47.276Z cpu56:33451)<6>host2: disc: RSCN received: not rediscovering. redisc 0 state 9 in_prog 0
 2016-07-14T19:04:47.277Z cpu20:33454)<6>host2: rport e50900: Login to port
 2016-07-14T19:04:47.277Z cpu20:33454)<6>host2: rport e50900: Port entered PLOGI state from LOGO state
 2016-07-14T19:04:47.278Z cpu57:33456)<6>host2: rport e50900: Received a PLOGI accept
 2016-07-14T19:04:47.278Z cpu57:33456)<6>host2: rport e50900: Port entered PRLI state from PLOGI state
 2016-07-14T19:04:47.278Z cpu52:33458)<6>host2: rport e50900: Received a PRLI accept
 2016-07-14T19:04:47.278Z cpu52:33458)<6>host2: rport e50900: PRLI spp_flags = 0x21
 2016-07-14T19:04:47.278Z cpu52:33458)<6>host2: rport e50900: Port entered RTV state from PRLI state
 2016-07-14T19:04:47.278Z cpu57:33452)<6>host2: rport e50900: Received a RTV reject
 2016-07-14T19:04:47.278Z cpu57:33452)<6>host2: rport e50900: Port is Ready

What is actually happening here is that the Cisco MDS switches are quick to receive the RSCN from the array controller and pass it along to the host HBAs. However, due to a timing issue, the entries for that array controller are still present in the FCNS (Fibre Channel Name Server) database when the host HBAs issue the GPN_ID, so the switches respond to that request instead of rejecting it. If you review the entry at http://lists.open-fcoe.org/pipermail/fcoe-devel/2009-June/002828.html, you can see that code was added to validate that the target is actually off the fabric rather than assuming it is based on the RSCN alone. There are various reasons to do this, but suffice it to say that it is better to be safe than sorry in this instance.
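
If you want to see the stale entries from the switch side while the controller is rebooting, here is a rough sketch of the NX-OS commands I would use (the VSAN number and WWPN pattern are placeholders for your environment):

switch# show fcns database vsan 100
switch# show fcns database detail vsan 100 | include 50:06:01

If the rebooted controller's WWPN is still listed after the RSCN has gone out, the host's GPN_ID will be answered rather than rejected, which is exactly the timing issue described above.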

Unfortunately there is no fix for this at this time, which is why this is potentially so impactful to our customers: it means they are effectively unable to perform array maintenance without the risk of VMs crashing or even data corruption. Cisco is fixing this in NX-OS 7.3(1), which is due out in a few weeks.


Cheers,
Nathan Small
Technical Director
Global Support Services
VMware

Host disconnected from vCenter and VMs showing as inaccessible

Another deep-dive troubleshooting blog today from Nathan Small (twitter account: @vSphereStorage)
 
Description from customer:
 
Host is getting disconnected from vCenter and VMs are showing as inaccessible. Only one host is affected.
 
 
Analysis:
 
A quick review of the vmkernel log shows a log spew of H:0x7 errors to numerous LUNs. Here is a short snippet where you can see how frequently they are occurring (multiple times per second):
 
# cat /var/log/vmkernel.log
 
2016-01-13T18:54:42.994Z cpu68:8260)ScsiDeviceIO: 2326: Cmd(0x412540b96e80) 0x28, CmdSN 0x8000006b from world 11725 to dev “naa.600601601b703400a4f90c3d0668e311” failed H:0x7 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.
2016-01-13T18:54:43.027Z cpu68:8260)ScsiDeviceIO: 2326: Cmd(0x4125401b2580) 0x28, CmdSN 0x8000002e from world 11725 to dev “naa.600601601b70340064a24ada10fae211” failed H:0x7 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.
2016-01-13T18:54:43.030Z cpu68:8260)ScsiDeviceIO: 2326: Cmd(0x4125406d5380) 0x28, CmdSN 0x80000016 from world 11725 to dev “naa.600601601b7034000c70e4e610fae211” failed H:0x7 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.
2016-01-13T18:54:43.542Z cpu67:8259)ScsiDeviceIO: 2326: Cmd(0x412540748800) 0x28, CmdSN 0x80000045 from world 11725 to dev “naa.600601601b70340064a24ada10fae211” failed H:0x7 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.
2016-01-13T18:54:43.808Z cpu74:8266)ScsiDeviceIO: 2326: Cmd(0x412541229040) 0x28, CmdSN 0x8000003c from world 11725 to dev “naa.600601601b7034008e56670a11fae211” failed H:0x7 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.
2016-01-13T18:54:44.088Z cpu38:8230)ScsiDeviceIO: 2326: Cmd(0x4124c0ff4f80) 0x28, CmdSN 0x80000030 from world 11701 to dev “naa.600601601b703400220f77ab15fae211” failed H:0x7 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.
2016-01-13T18:54:44.180Z cpu74:8266)ScsiDeviceIO: 2326: Cmd(0x412540ccda80) 0x28, CmdSN 0x80000047 from world 11725 to dev “naa.600601601b70340042b582440668e311” failed H:0x7 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.
2016-01-13T18:54:44.741Z cpu61:8253)ScsiDeviceIO: 2326: Cmd(0x412540b94480) 0x28, CmdSN 0x80000051 from world 11725 to dev “naa.600601601b70340060918f5b0668e311” failed H:0x7 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.
2016-01-13T18:54:44.897Z cpu63:8255)ScsiDeviceIO: 2326: Cmd(0x412540ff3180) 0x28, CmdSN 0x8000007a from world 11725 to dev “naa.600601601b7034005c918f5b0668e311” failed H:0x7 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.
2016-01-13T18:54:45.355Z cpu78:8270)ScsiDeviceIO: 2326: Cmd(0x412540f3b2c0) 0x28, CmdSN 0x80000039 from world 11725 to dev “naa.600601601b70340060918f5b0668e311” failed H:0x7 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.
2016-01-13T18:54:45.522Z cpu70:8262)ScsiDeviceIO: 2326: Cmd(0x41254073d0c0) 0x28, CmdSN 0x8000002c from world 11725 to dev “naa.600601601b7034000e3e97350668e311” failed H:0x7 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.
2016-01-13T18:54:45.584Z cpu71:8263)ScsiDeviceIO: 2326: Cmd(0x412541021780) 0x28, CmdSN 0x80000067 from world 11725 to dev “naa.600601601b7034000e3e97350668e311” failed H:0x7 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.
2016-01-13T18:54:45.803Z cpu63:8255)ScsiDeviceIO: 2326: Cmd(0x412540d20480) 0x28, CmdSN 0x80000019 from world 11725 to dev “naa.600601601b703400d24fc7620668e311” failed H:0x7 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.
2016-01-13T18:54:46.253Z cpu74:8266)ScsiDeviceIO: 2326: Cmd(0x412540b96380) 0x28, CmdSN 0x8000006f from world 11725 to dev “naa.600601601b7034005e918f5b0668e311” failed H:0x7 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.
 
The Host side error (H:0x7) literally translates to Storage Initiator Error, which makes it sound like there is something physically wrong with the card. One needs to understand that this status is sent up the stack from the HBA driver, so really it is up to those who write the driver to decide which conditions return this status. As there are no accompanying errors from the HBA driver, which in this case is a Brocade HBA, this is all we have to work with without enabling verbose logging in the driver. Verbose logging requires a reboot, so it is not always an option when investigating root cause. The exception would be when the issue is ongoing, in which case rebooting a host to capture this data is a viable option.
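
For reference, you can at least see what logging knobs the bfa driver exposes before committing to a reboot; this is only a sketch, and the parameter names vary by driver version, so check the driver documentation before changing anything:

# esxcli system module parameters list -m bfa

Any change made with "esxcli system module parameters set" only takes effect after the host is rebooted, which is why verbose driver logging is rarely an option mid-incident.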
 
Taking a LUN as an example from ‘esxcfg-mpath -b’ output to get a view of the paths and targets:
 
# esxcfg-mpath -b
 
naa.600601601b703400b6aa124c0668e311 : DGC Fibre Channel Disk (naa.600601601b703400b6aa124c0668e311)
   vmhba0:C0:T3:L20 LUN:20 state:active fc Adapter: WWNN: 20:00:74:86:7a:ae:1c:9a WWPN: 20:01:74:86:7a:ae:1c:9a  Target: WWNN: 50:06:01:60:c7:20:7a:a8 WWPN: 50:06:01:63:47:20:7a:a8
   vmhba1:C0:T3:L20 LUN:20 state:active fc Adapter: WWNN: 20:00:74:86:7a:ae:1c:9c WWPN: 20:01:74:86:7a:ae:1c:9c  Target: WWNN: 50:06:01:60:c7:20:7a:a8 WWPN: 50:06:01:60:47:24:7a:a8
   vmhba0:C0:T1:L20 LUN:20 state:active fc Adapter: WWNN: 20:00:74:86:7a:ae:1c:9a WWPN: 20:01:74:86:7a:ae:1c:9a  Target: WWNN: 50:06:01:60:c7:20:7a:a8 WWPN: 50:06:01:6b:47:20:7a:a8
   vmhba1:C0:T2:L20 LUN:20 state:active fc Adapter: WWNN: 20:00:74:86:7a:ae:1c:9c WWPN: 20:01:74:86:7a:ae:1c:9c  Target: WWNN: 50:06:01:60:c7:20:7a:a8 WWPN: 50:06:01:68:47:24:7a:a8
   vmhba2:C0:T3:L20 LUN:20 state:active fc Adapter: WWNN: 20:00:74:86:7a:ae:1c:32 WWPN: 20:01:74:86:7a:ae:1c:32  Target: WWNN: 50:06:01:60:c7:20:7a:a8 WWPN: 50:06:01:63:47:20:7a:a8
   vmhba3:C0:T3:L20 LUN:20 state:active fc Adapter: WWNN: 20:00:74:86:7a:ae:1c:34 WWPN: 20:01:74:86:7a:ae:1c:34  Target: WWNN: 50:06:01:60:c7:20:7a:a8 WWPN: 50:06:01:60:47:24:7a:a8
   vmhba2:C0:T1:L20 LUN:20 state:active fc Adapter: WWNN: 20:00:74:86:7a:ae:1c:32 WWPN: 20:01:74:86:7a:ae:1c:32  Target: WWNN: 50:06:01:60:c7:20:7a:a8 WWPN: 50:06:01:6b:47:20:7a:a8
   vmhba3:C0:T2:L20 LUN:20 state:active fc Adapter: WWNN: 20:00:74:86:7a:ae:1c:34 WWPN: 20:01:74:86:7a:ae:1c:34  Target: WWNN: 50:06:01:60:c7:20:7a:a8 WWPN: 50:06:01:68:47:24:7a:a8
 
Let's look at the adapter statistics for all HBAs. I would recommend always using localcli over esxcli when troubleshooting, as esxcli requires hostd to be functioning properly:
 
# localcli storage core adapter stats get
 
vmhba0:
   Successful Commands: 844542177
   Blocks Read: 243114868277
   Blocks Written: 25821448417
  Read Operations: 395494703
   Write Operations: 405753901
   Reserve Operations: 0
   Reservation Conflicts: 0
   Failed Commands: 35403
   Failed Blocks Read: 57744
   Failed Blocks Written: 16843
   Failed Read Operations: 8224
   Failed Write Operations: 16450
   Failed Reserve Operations: 0
   Total Splits: 0
   PAE Commands: 0
 
vmhba1:
   Successful Commands: 502595840 <– Far less successful commands than the other adapters
   Blocks Read: 116436597821
   Blocks Written: 16509939615
   Read Operations: 216572537
   Write Operations: 245276523
   Reserve Operations: 0
   Reservation Conflicts: 0
   Failed Commands: 10942696
   Failed Blocks Read: 12055379188 <– 12 billion failed blocks read! Other adapters are all less than 60,000
   Failed Blocks Written: 933809
   Failed Read Operations: 10895926
   Failed Write Operations: 25645
   Failed Reserve Operations: 0
   Total Splits: 0
   PAE Commands: 0
 
vmhba2:
   Successful Commands: 845976973
   Blocks Read: 244034940187
   Blocks Written: 26063852941
   Read Operations: 397564994
   Write Operations: 407538414
   Reserve Operations: 0
   Reservation Conflicts: 0
   Failed Commands: 40468
   Failed Blocks Read: 44157
   Failed Blocks Written: 18676
   Failed Read Operations: 5506
   Failed Write Operations: 12152
   Failed Reserve Operations: 0
   Total Splits: 0
   PAE Commands: 0
 
vmhba3:
   Successful Commands: 866718515
   Blocks Read: 249837164491
   Blocks Written: 26492209531
   Read Operations: 406367844
   Write Operations: 416901703
   Reserve Operations: 0
   Reservation Conflicts: 0
   Failed Commands: 37723
   Failed Blocks Read: 23191
   Failed Blocks Written: 139380
   Failed Read Operations: 7372
   Failed Write Operations: 14878
   Failed Reserve Operations: 0
   Total Splits: 0
   PAE Commands: 0
 
 
Let’s see how often the vmkernel.log reports messages for that HBA:
 
# cat vmkernel.log |grep vmhba0|wc -l
112
 
# cat vmkernel.log |grep vmhba1|wc -l
8474 <– this HBA is mentioned over 8000 times! That doesn't mean every entry is an error, of course, but given the log spew we already know is occurring, most of them likely are
 
# cat vmkernel.log |grep vmhba2|wc -l
222
 
# cat vmkernel.log |grep vmhba3|wc -l
335
 
Now let's take a look at the zoning to see if multiple adapters are zoned to the exact same array targets (WWPN), in an attempt to determine whether the issue is on the array side or the HBA side:
 
# esxcfg-mpath -b
 
naa.600601601b703400b6aa124c0668e311 : DGC Fibre Channel Disk (naa.600601601b703400b6aa124c0668e311)
   vmhba0:C0:T3:L20 LUN:20 state:active fc Adapter: WWNN: 20:00:74:86:7a:ae:1c:9a WWPN: 20:01:74:86:7a:ae:1c:9a  Target: WWNN: 50:06:01:60:c7:20:7a:a8 WWPN: 50:06:01:63:47:20:7a:a8
   vmhba1:C0:T3:L20 LUN:20 state:active fc Adapter: WWNN: 20:00:74:86:7a:ae:1c:9c WWPN: 20:01:74:86:7a:ae:1c:9c  Target: WWNN: 50:06:01:60:c7:20:7a:a8 WWPN: 50:06:01:60:47:24:7a:a8
   vmhba0:C0:T1:L20 LUN:20 state:active fc Adapter: WWNN: 20:00:74:86:7a:ae:1c:9a WWPN: 20:01:74:86:7a:ae:1c:9a  Target: WWNN: 50:06:01:60:c7:20:7a:a8 WWPN: 50:06:01:6b:47:20:7a:a8
   vmhba1:C0:T2:L20 LUN:20 state:active fc Adapter: WWNN: 20:00:74:86:7a:ae:1c:9c WWPN: 20:01:74:86:7a:ae:1c:9c  Target: WWNN: 50:06:01:60:c7:20:7a:a8 WWPN: 50:06:01:68:47:24:7a:a8
   vmhba2:C0:T3:L20 LUN:20 state:active fc Adapter: WWNN: 20:00:74:86:7a:ae:1c:32 WWPN: 20:01:74:86:7a:ae:1c:32  Target: WWNN: 50:06:01:60:c7:20:7a:a8 WWPN: 50:06:01:63:47:20:7a:a8
   vmhba3:C0:T3:L20 LUN:20 state:active fc Adapter: WWNN: 20:00:74:86:7a:ae:1c:34 WWPN: 20:01:74:86:7a:ae:1c:34  Target: WWNN: 50:06:01:60:c7:20:7a:a8 WWPN: 50:06:01:60:47:24:7a:a8
   vmhba2:C0:T1:L20 LUN:20 state:active fc Adapter: WWNN: 20:00:74:86:7a:ae:1c:32 WWPN: 20:01:74:86:7a:ae:1c:32  Target: WWNN: 50:06:01:60:c7:20:7a:a8 WWPN: 50:06:01:6b:47:20:7a:a8
   vmhba3:C0:T2:L20 LUN:20 state:active fc Adapter: WWNN: 20:00:74:86:7a:ae:1c:34 WWPN: 20:01:74:86:7a:ae:1c:34  Target: WWNN: 50:06:01:60:c7:20:7a:a8 WWPN: 50:06:01:68:47:24:7a:a8
 
Let’s isolate the HBAs so they are easier to visually compare the WWPN of the array targets:
 
vmhba1:
 
   vmhba1:C0:T3:L20 LUN:20 state:active fc Adapter: WWNN: 20:00:74:86:7a:ae:1c:9c WWPN: 20:01:74:86:7a:ae:1c:9c  Target: WWNN: 50:06:01:60:c7:20:7a:a8 WWPN: 50:06:01:60:47:24:7a:a8
   vmhba1:C0:T2:L20 LUN:20 state:active fc Adapter: WWNN: 20:00:74:86:7a:ae:1c:9c WWPN: 20:01:74:86:7a:ae:1c:9c  Target: WWNN: 50:06:01:60:c7:20:7a:a8 WWPN: 50:06:01:68:47:24:7a:a8
 
vmhba3:
 
   vmhba3:C0:T3:L20 LUN:20 state:active fc Adapter: WWNN: 20:00:74:86:7a:ae:1c:34 WWPN: 20:01:74:86:7a:ae:1c:34  Target: WWNN: 50:06:01:60:c7:20:7a:a8 WWPN: 50:06:01:60:47:24:7a:a8
   vmhba3:C0:T2:L20 LUN:20 state:active fc Adapter: WWNN: 20:00:74:86:7a:ae:1c:34 WWPN: 20:01:74:86:7a:ae:1c:34  Target: WWNN: 50:06:01:60:c7:20:7a:a8 WWPN: 50:06:01:68:47:24:7a:a8
 
vmhba1 and vmhba3 are zoned to the exact same array ports yet only vmhba1 is experiencing communication issues/errors.
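
With a large number of LUNs this gets tedious to compare by eye. Here is a small sketch that assumes the path-line format shown above, where the target WWPN is the last field on each line:

# esxcfg-mpath -b | awk '/vmhba1:/ {print $NF}' | sort -u
# esxcfg-mpath -b | awk '/vmhba3:/ {print $NF}' | sort -u

If the two lists match, both adapters are talking to the same array ports and the array side becomes a much less likely suspect.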
 
 
Let’s look at the driver information under /proc/scsi/bfa/ by viewing (cat) the node information:
 
Chip Revision: Rev-E
Manufacturer: Brocade
Model Description: Brocade-1741
Instance Num: 0
Serial Num: xxxxxxxxx32
Firmware Version: 3.2.3.2
Hardware Version: Rev-E
Bios Version: 3.2.3.2
Optrom Version: 3.2.3.2
Port Count: 2
WWNN: 20:00:74:86:7a:ae:1c:9a
WWPN: 20:01:74:86:7a:ae:1c:9a
Instance num: 0
Target ID: 0 WWPN: 50:06:01:6b:47:20:7b:04
Target ID: 1 WWPN: 50:06:01:6b:47:20:7a:a8
Target ID: 2 WWPN: 50:06:01:63:47:20:7b:04
Target ID: 3 WWPN: 50:06:01:63:47:20:7a:a8
 
Chip Revision: Rev-E
Manufacturer: Brocade
Model Description: Brocade-1741
Instance Num: 1
Serial Num: xxxxxxxxx32
Firmware Version: 3.2.3.2
Hardware Version: Rev-E
Bios Version: 3.2.3.2
Optrom Version: 3.2.3.2
Port Count: 2
WWNN: 20:00:74:86:7a:ae:1c:9c
WWPN: 20:01:74:86:7a:ae:1c:9c
Instance num: 1
Target ID: 0 WWPN: 50:06:01:60:47:24:7b:04
Target ID: 1 WWPN: 50:06:01:68:47:24:7b:04
Target ID: 3 WWPN: 50:06:01:60:47:24:7a:a8
Target ID: 2 WWPN: 50:06:01:68:47:24:7a:a8
 
Chip Revision: Rev-E
Manufacturer: Brocade
Model Description: Brocade-1741
Instance Num: 2
Serial Num: xxxxxxxxx2E
Firmware Version: 3.2.3.2
Hardware Version: Rev-E
Bios Version: 3.2.3.2
Optrom Version: 3.2.3.2
Port Count: 2
WWNN: 20:00:74:86:7a:ae:1c:32
WWPN: 20:01:74:86:7a:ae:1c:32
Instance num: 2
Target ID: 0 WWPN: 50:06:01:6b:47:20:7b:04
Target ID: 1 WWPN: 50:06:01:6b:47:20:7a:a8
Target ID: 2 WWPN: 50:06:01:63:47:20:7b:04
Target ID: 3 WWPN: 50:06:01:63:47:20:7a:a8
 
Chip Revision: Rev-E
Manufacturer: Brocade
Model Description: Brocade-1741
Instance Num: 3
Serial Num: xxxxxxxxx2E
Firmware Version: 3.2.3.2
Hardware Version: Rev-E
Bios Version: 3.2.3.2
Optrom Version: 3.2.3.2
Port Count: 2
WWNN: 20:00:74:86:7a:ae:1c:34
WWPN: 20:01:74:86:7a:ae:1c:34
Instance num: 3
Target ID: 0 WWPN: 50:06:01:60:47:24:7b:04
Target ID: 1 WWPN: 50:06:01:68:47:24:7b:04
Target ID: 2 WWPN: 50:06:01:68:47:24:7a:a8
Target ID: 3 WWPN: 50:06:01:60:47:24:7a:a8
 
So all HBAs are on the same firmware, which is important for consistency of observed behavior. Had the firmware versions been different, there might have been something to go on, or at least something to verify regarding known issues at that firmware level. Obviously they are all using the same driver as well, since only one is loaded in the kernel.
 
We can see not only from the shared serial numbers above but also from the lspci output that these are two-port physical cards:
 
# lspci
 
000:007:00.0 Serial bus controller: Brocade Communications Systems, Inc. Brocade-1010/1020/1007/1741 [vmhba0]
000:007:00.1 Serial bus controller: Brocade Communications Systems, Inc. Brocade-1010/1020/1007/1741 [vmhba1]
000:009:00.0 Serial bus controller: Brocade Communications Systems, Inc. Brocade-1010/1020/1007/1741 [vmhba2]
000:009:00.1 Serial bus controller: Brocade Communications Systems, Inc. Brocade-1010/1020/1007/1741 [vmhba3]
 
The first set of numbers is read as Domain:Bus:Slot.Function, so vmhba0 and vmhba1 are both on Domain 0, Bus 7, Slot 0, and functions 0 and 1 respectively, which means it is a dual-port HBA.
 
So vmhba0 and vmhba1 are the same physical card, yet only vmhba1 is showing errors. The HBA chips on a dual-port HBA are mostly independent of each other, so at least this means there isn't a problem with the board or circuitry they both share. I say mostly because while the physical ports and HBA chips are independent of each other, they do share the same physical board and the same connection to the motherboard.
 
This host is running EMC PowerPath VE, so we know that in general the I/O load is distributed evenly across all HBAs and paths. I say in general because PowerPath VE is intelligent enough to use paths that exhibit more errors, or higher latency, less frequently than other paths.
 
I believe we may be looking at either a cable issue (loose, faulty, or bad GBIC) between vmhba1 and the switch, or an issue with the switch port that vmhba1 is connected to. Here is why:
 
1. vmhba1 is seeing thousands upon thousands of errors while the other HBAs are very quiet
2. vmhba1 and vmhba3 are zoned to the exact same targets yet only vmhba1 is seeing errors
3. vmhba0 and vmhba1 are the same physical card yet only vmhba 1 is seeing errors
 
My recommendation would be to check the physical switch port error counters and possibly replace the cable to see if the errors subside. It is standard practice to reset the switch counters and then monitor them, so it may be necessary to do that to validate whether CRC errors or other fabric errors are still occurring.
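
The exact commands depend on the fabric switch vendor. As a hedged example, on a Brocade FOS switch the per-port error counters (CRC errors, encoding errors, and so on) can be viewed and cleared like this (port 12 is just an example):

porterrshow
portshow 12
portstatsclear 12

On a Cisco MDS fabric the rough equivalent is "show interface fc1/12 counters".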
 
Cheers,
Nathan (twitter account: @vSphereStorage)

Issues creating Desktops in VDI, and what to do about it

We want to highlight some mitigation techniques and a handy KB article today for those of you managing Horizon View desktops. We're talking about those occasions when no desktop can be created or recomposed in your VDI environment and no tasks submitted from Connection Brokers are acted upon by the vCenter Server.

Our Engineering team has been hard at work fixing many of the underlying causes, and the latest releases of View have all but eliminated these issues. However, if these issues do show up in the latest View releases, we ask everyone to follow the specific steps in this KB: Pool settings are not saved, new pools cannot be created, and vCenter Server tasks are not processed in a Horizon View environment (2082413)

This KB contains several main steps, the first of which is collecting the bundled logs from all Connection Brokers in the VDI environment and recording the time the issue was first observed. Steps 2 to 6 are basic steps that can potentially address the issue, but if the issue persists, step 7 requests opening a support case and submitting the log bundles collected in step 1, along with the recorded time the issue was first observed. You might also include any other useful information, such as whether any recent changes were made to the environment.

When opening your support case, please note this KB article number in the description of the case. That helps us get right on point ASAP.

Step 8 is what should address this issue without any Connection Broker reboot, as it stops all View services on all View Connection Brokers and then restarts them.

If step 8 does not resolve your issue, then the last step (9) involves rebooting all Connection Servers, and this has always addressed the issue in our experience.

Fresh vSphere 6 KB articles!

vSphere 6.0 has been out now for a few weeks and you early adopters have been busy kicking the tires. We've heard some very encouraging things about this release, such as the web client improvements. It's always interesting and top of mind for us to see what issues emerge in everyone's environments, so we monitor incoming support requests as well as social media to see what customers run into.

Here's a fresh list of Knowledge Base articles we've created to address some of these inquiries. Familiarize yourself with the list and, of course, share it with your colleagues using the buttons on this page.

Database compatibility issues during upgrade

Deprecated VMFS volume errors

Backup failures/CBT mem heap issues

Replace certificates for vSphere 6.0

Decommissioning a vCenter Server or Platform Services Controller

When Linked Clones Go Stale

One of the biggest call drivers within our VMware View support centers revolves around linked clone pools. Some of your users may be calling you to report that their desktop is not available. You begin to check your vCenter and View Administrator portal and discover some of the following symptoms:

  • You cannot provision or recompose a linked clone desktop pool
  • You see the error:
    Desktop Composer Fault: Virtual Machine with Input Specification already exists
  • Provisioning a linked clone desktop pool fails with the error:
    Virtual machine with Input Specification already exists
  • The Connection Server shows that linked clone virtual machines are stuck in a deleting state
  • You cannot delete a pool from the View Administrator page
  • You are unable to delete linked clone virtual machines
  • When viewing a pool's Inventory tab, the status of one or more virtual machines may be shown as missing

There are a number of reasons this might happen, and KB 2015112 Manually deleting linked clones or stale virtual desktop entries from the View Composer database in VMware View Manager and Horizon View covers resolving these issues comprehensively, but let's discuss a bit of the background around them.

When a linked clone pool is created or modified, several backend databases are updated with configuration data. First there is the SQL database supporting vCenter Server, next there is the View Composer database, and thirdly the ADAM database. Let’s also throw in Active Directory for good measure. With all of these pieces each playing a vital role in the environment, it becomes apparent that should things go wrong, there may be an inconsistency created between these databases. These inconsistencies can present themselves with the above symptoms.

Recently a new Fling was created to address these inconsistencies. If you’re not acquainted with Flings, they’re tools our engineers build to help you explore and manipulate your systems. However, it’s important to remember they come with a disclaimer:

“I have read and agree to the Technical Preview Agreement. I also understand that Flings are experimental and should not be run on production systems.”

If you’re just in your lab environment though, they are an excellent way to learn and understand the workings of your systems at a deeper level. Here is the Fling: ViewDbChk. For production systems we recommend following the tried and true procedures documented in KB 2015112. The KB includes embedded videos to help walk you through the steps.

Using vSphere ESXi Image Builder to create an installable ISO that is not vulnerable to Heartbleed

Here is a follow-up post from Andrew Lytle, a member of the VMware Mission Critical Support Team. Andrew is a Senior Support Engineer who specializes in vCenter and ESXi related support.

VMware recently released updates to all products affected by the vulnerability dubbed “Heartbleed” (CVE-2014-0160): http://www.vmware.com/security/advisories/VMSA-2014-0004.html

As per KB article: Resolving OpenSSL Heartbleed for ESXi 5.5 – CVE-2014-0160 (2076665), the delivery method for this code change in the VMware ESXi product is through an updated ESXi vSphere Installation Bundle (VIB). VIBs are the building blocks of an ESXi image. A VIB is akin to a tarball or ZIP archive in that it’s a collection of files packaged into a single archive.
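
You can see the VIBs that make up a running host, including the esx-base VIB that carries the OpenSSL fix, with a quick query on the ESXi host:

# esxcli software vib list | grep esx-base

The version reported there is the easiest way to confirm later whether a host already carries the patched esx-base.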

Typically a new ESXi ISO file will be made available only during major revisions of the product (Update 1, Update 2, etc). If you need an ESXi 5.5 ISO which is already protected from Heartbleed, you can make your own ISO easily using vSphere PowerCLI.

The PowerCLI Image Builder cmdlets are designed to make custom ESXi ISOs which have asynchronous driver releases pre-installed, but they can also be used in a situation like this to make an ISO which lines up with a Patch Release instead of a full ESXi Update Release.

In this post we will cover both the ESXi 5.5 GA branch, as well as the ESXi 5.5 Update 1 branch. Choose the set of steps which will provide the ISO branch you need for your environment.

Creating an ISO based on ESXi 5.5 GA (Pre-Update 1)

These steps are for creating an ISO which is based on the ESXi 5.5 "GA" release, which was originally released on 2013-09-22.

Step 1: Download the Required Files

When creating a custom ESXi image through Image Builder, we need to start by downloading the required files:

Install PowerCLI through the Windows MSI package, and copy the zip files to a handy location. For the purposes of this example, I will copy these files to C:\Patches\

Step 2: Import the Software Depot

  • Add-EsxSoftwareDepot C:\Patches\ESXi550-201404020.zip

Step 3: Confirm the patched version (optional)

If you wish to confirm that the esx-base VIB (which includes the code change for the Heartbleed vulnerability) was added correctly, check that the VIB has a Version of 5.5.0-0.15.1746974 and a Creation Date of 4/15/2014.

  • Get-EsxSoftwarePackage -Name esx-base

Step 4: Export the Image Profile to an ISO

  • Export-EsxImageProfile -ImageProfile ESXi-5.5.0-20140401020s-standard -ExportToISO -FilePath C:\Patches\ESXi5.5-heartbleed.iso

Creating an ISO based on ESXi 5.5 Update 1

These steps are for creating an ISO which is based on the ESXi 5.5 “Update 1” release, which was originally released 2014-03-11.

Step 1: Download the Required Files

When creating a custom ESXi image through Image Builder, we need to start by downloading the required files:

Copy the zip files to a handy location. For the purposes of this example, I will copy them to C:\Patches\

Step 2: Import the Software Depot

  • Add-EsxSoftwareDepot C:\Patches\ESXi550-201404001.zip

Step 3: Confirm the patched version (optional)

If you wish to confirm that the esx-base VIB (which includes the code change for the Heartbleed vulnerability) was added correctly, check that the VIB has a Version of 5.5.0-1.16.1746018 and a Creation Date of 4/15/2014.

  • Get-EsxSoftwarePackage -Name esx-base

Step 4: Export the Image Profile to an ISO

  • Export-EsxImageProfile -ImageProfile ESXi-5.5.0-20140404001-standard -ExportToISO -FilePath C:\Patches\ESXi5.5-update1-heartbleed.iso

Installing the ESXi ISO

The ISO file created in these steps can be used in exactly the same manner as the normal VMware ESXi 5.5 ISO. It can be mounted in a remote management console, or burned to a CD/DVD for installation.

Patching ESXi 5.5 for Heartbleed without installing Update 1

On April 19th, VMware released a series of patches for ESXi 5.5 and ESXi 5.5 Update 1 to remediate the OpenSSL vulnerabilities CVE-2014-0076 and CVE-2014-0160, the latter being the one dubbed "Heartbleed".

VMware also recently announced that there was an issue in the newest version of ESXi 5.5 (Update 1 and later), which can cause difficulties communicating with NFS storage. This NFS issue is still being investigated, and customers are encouraged to subscribe to KB article: Intermittent NFS APDs on ESXi 5.5 U1 (2076392) for updates.

Due to the confluence of these two unrelated issues, you might find yourself trying to patch ESXi to protect yourself from the Heartbleed vulnerability, while at the same time trying to avoid installing ESXi 5.5 Update 1.

The VMware Knowledge Base article on the topic lists the relevant Patch Releases for each branch, and the note at the bottom of that article is the key. Stated simply, if you are…

  • Using NFS storage
  • Concerned about patching to Update 1 due to change control
  • Not already running ESXi 5.5 Update 1 (build-1623387)

… then you should patch your installation for the Heartbleed issue while staying at ESXi 5.5 (pre-Update 1) by applying Patch Release ESXi550-201404020, and not ESXi550-201404001.

An Explanation of Patch Release Codes

To better understand the Patching process in a VMware environment, it is valuable to understand the codes which are used in VMware Patch Releases.

When VMware releases a patch, or a series of patches, they are bundled together in what is known as a Patch Release. A Patch Release will have a coded name which is formed using the following structure. I have added braces to demonstrate the different sections better in each example.

[PRODUCT]-[YEAR][MONTH][THREE DIGIT RELEASE NUMBER]

For example, a patch release for ESXi 5.5 released in January 2014 would be coded like this (without the explanatory braces):

[ESXi550]-[2014][01][001]

As part of a Patch Release, there will be at least one Patch. Each Patch is given a Patch (or Bulletin) ID. Patch IDs are structured similarly to Patch Release codes, but they also have a two-letter suffix. For Security Bulletins, the suffix will be SG. For Bug Fix Bulletins, the suffix will be BG.

For example, the two Patch IDs which were released to patch Heartbleed are:

[ESXi550]-[2014][04][401]-[SG]
[ESXi550]-[2014][04][420]-[SG]

Note that the only difference in the Patch IDs here is in the three digit release number (401 vs 420).

Patching with VMware Update Manager

There are a number of methods for patching ESXi hosts, and the most commonly used is VMware Update Manager (VUM). VUM will present a pair of Dynamic Baselines which will be automatically updated when patches are available. The danger in this case is that VUM may show you both the Pre-Update 1 patch, as well as the Post-Update 1 patch. If you are not careful as to which patches you apply, you might accidentally end up patching your host to Post-Update 1.

Here are the patches which were released on April 19th, as seen in VUM: ESXi550-201404401-SG applies on top of Update 1, while ESXi550-201404420-SG is the Pre-Update 1 patch.

Note: VMware also released two other ESXi 5.5 patches on April 19th as part of these Patch Releases, but they are not related to the Heartbleed vulnerability in any fashion (ESXi550-201404402-BG and ESXi550-201404403-BG).

Creating a Fixed Baseline

Patching a host using ESXi550-201404420-SG (Pre-Update 1), while avoiding ESXi550-201404401-SG (Post-Update 1) requires the use of a Fixed Baseline in Update Manager.

  1. Start in the Update Manager Admin view.
  2. Select the Baselines and Groups tab.
  3. Click Create… in the Baselines column.
  4. Give the new Baseline a descriptive Name (and optionally a Description).
  5. Click Next.
  6. For Baseline type, select Fixed.
  7. Use the Search feature to find the only Patch we want to apply. You will need to select the Patch ID option from the dropdown menu to ensure the search scans for the appropriate column.
  8. Enter the Patch ID into the search field: ESXi550-201404420-SG and click Enter to search.
  9. Select the Patch which shows up in the filtered list, and click the Down Arrow to move it into the selected Baselines.
  10. Click Next and confirm that the Patch ESXi550-201404420-SG is the only one selected.
  11. Click Finish.

The Baseline is now created and available for use.

Remediating a Host using the Fixed Baseline

Once the Fixed Baseline has been created, we can use it to Scan and Remediate an ESXi host.

  1. Select the host you wish to patch, and place it into Maintenance Mode.
  2. Click the Update Manager tab.
  3. Make sure that there are no Dynamic Baselines attached to the host you wish to patch. Detach any baselines which are currently attached:
    Critical Host Patches (Predefined)
    Non-Critical Host Patches (Predefined)
    Any other Custom Baselines which you have created
  4. Click the Attach link.
  5. Select the newly created Baseline and click Attach.
  6. Click the Scan link and make sure Patches and Extensions is selected. Click Scan again.
  7. When you are ready to patch the host, select Remediate.
  8. Complete the Remediation wizard.

Once the host is patched, it will reboot automatically.

Patching an ESXi host manually via the command line

Another option to patch an ESXi host is to use the esxcli command line tool. The patch files required are the same. For more information on how to proceed with this route, refer to the vSphere 5.5 Documentation under the heading Update a Host with Individual VIBs.
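
As a rough sketch of that method: copy the Patch Release zip to a datastore the host can reach, place the host in Maintenance Mode, apply the patch, and reboot (the datastore path below is just an example):

# esxcli software vib update -d /vmfs/volumes/datastore1/ESXi550-201404020.zip
# esxcli software vib get -n esx-base

Using "vib update" rather than "vib install" only upgrades VIBs that are already present on the host, which is generally what you want when applying a patch bundle.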


Author: Andrew Lytle
As a member of the VMware Mission Critical Support Team, Andrew Lytle is a Senior Support Engineer who specializes in vCenter and ESXi related support.

New book: Getting Started with VMware Fusion

One of our own, Michael Roy, has just published his first book: Getting Started with VMware Fusion, written to help readers get started running Windows on their Mac the right way.

Michael talks about how to import your physical PC into the virtual world, and provides practical examples of how to keep your new Virtual Machine secure, backed up, and running smoothly.

Going a bit deeper, he teaches you about snapshots and their great uses, as well as how to use Linked Clones in VMware Fusion Professional.

Michael Roy started at VMware working on VMware Fusion version 2 in 2009, where he co-led a world-class global support team, giving customers the help they needed to get the most out of VMware Fusion. He currently specializes in Technical Marketing for Hybrid Cloud Services.

iSCSI Storage and vMotion VLAN Best Practices

We got a question this morning on twitter from a customer asking for our best practices for setting up iSCSI storage and vMotion traffic on a VLAN.

The question caused a bit of a discussion here amongst our Tech Support staff, and the answer, it seems, is too long to fit into a Tweet! Instead, here's what you need to know if you are working on the best design for your VLANs.

iSCSI and vMotion on the same pipe (VLAN) is a big no-no unless you are using multiple teamed 1GbE uplinks or 10GbE uplinks with NIOC to avoid the two stomping on one another.

While vMotion traffic can be turned off/on/reconfigured on the fly, iSCSI traffic does not handle changes to the underlying network on the fly (though great improvements have been made in 5.1/5.5). You will need to take a maintenance window to reconfigure how you want your VLANs to function, especially for the iSCSI network, and then (more than likely) perform a rolling reboot of all hosts. If iSCSI traffic is already on its own VLAN, you should leave it where it is to avoid taking down the whole environment, and just move the vMotion network to a separate VLAN.
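
For the vMotion move itself, the VLAN change on a standard vSwitch portgroup is a one-liner. A sketch, where the portgroup name and VLAN ID are examples for your environment (the physical switch ports must already trunk that VLAN):

# esxcli network vswitch standard portgroup set -p "vMotion" -v 200
# esxcli network vswitch standard portgroup list

The second command simply confirms the VLAN assignment took effect.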

That said, here is our most recent iSCSI Best Practice Setup guide from Cormac Hogan. Also see: vMotion Best Practice Setup guide.

Here are the pertinent pages in our documentation on the subject:

pubs.vmware.com…rking-guide.pdf – Page 187

and

pubs.vmware.com…orage-guide.pdf – Page 75

Installing async drivers in ESXi 5.x

One thing that catches a few customers up is the process of installing async drivers in their ESXi host… We have a KB article on the topic here, but there is more than one method to choose from, and there are preparation steps involved. Since these steps might seem a little tricky, we decided a quick, live video explaining the topic might help many of you.

We called upon Kiwi Ssennyonjo to walk us through the salient points.

Again, the full KB article can be found here: Installing async drivers on ESXi 5.0/5.1/5.5 (2005205)
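
As a quick sketch of the esxcli method covered in the KB: copy the driver's offline bundle zip to a datastore, put the host in Maintenance Mode, install, and reboot (the path and bundle name below are examples):

# esxcli software vib install -d /vmfs/volumes/datastore1/driver-offline-bundle.zip

If the driver ships as a single .vib file rather than an offline bundle, the equivalent is "esxcli software vib install -v" with the full path to the .vib file.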