Category Archives: Datacenter

A look at All Paths Down in vSphere

Today we have a guest post from Karthick Sivaramakrishnan, a three-year veteran at VMware. His primary fields of expertise are vSphere Storage and Site Recovery Manager.

This blog post looks at how ESXi handles unscheduled storage disconnects on vSphere 5.x and 6.x. An unscheduled storage disconnect means some issue in the vSphere environment has led to an All-Paths-Down (APD) condition for a datastore. An APD condition occurs when the ESXi host has no path left to communicate with a LUN on the storage array.

An ESXi host can encounter an APD under several conditions. As a result, VMs running on the affected datastore may go down, the host may disconnect from vCenter Server, and in the worst case ESXi itself can become unresponsive.

From vSphere 5.x onwards, ESXi can discern whether a disconnect is permanent or transient. A transient disconnect leads to the All Paths Down state, and ESXi expects the device to return after a temporary outage. With Permanent Device Loss (PDL), the device is expected to have a non-recoverable issue, such as a hardware error or the LUN being unmapped from the array.
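To illustrate the distinction, here is a minimal Python sketch (not from ESXi itself, just an offline helper) that scans an exported copy of vmkernel.log for the APD and PDL message strings shown in the examples below. The log path and the message fragments are assumptions based on the excerpts in this post.

import re
import sys

# Message fragments taken from the log excerpts later in this post.
APD_MARKER = "has entered the All Paths Down state"
PDL_MARKER = "APD Notify PERM LOSS"
DEVICE_RE = re.compile(r"naa\.[0-9a-f]+")

def classify(log_path):
    """Return device IDs seen in APD-only messages versus PDL messages."""
    apd, pdl = set(), set()
    with open(log_path) as log:
        for line in log:
            device = DEVICE_RE.search(line)
            if not device:
                continue
            if APD_MARKER in line:
                apd.add(device.group())
            elif PDL_MARKER in line:
                pdl.add(device.group())
    # A device that later reports PERM LOSS is PDL; APD alone may be transient.
    return sorted(apd - pdl), sorted(pdl)

if __name__ == "__main__":
    apd_only, pdl = classify(sys.argv[1] if len(sys.argv) > 1 else "vmkernel.log")
    print("APD only (possibly transient):", apd_only or "none")
    print("PDL (permanent):", pdl or "none")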

In the example below, we see that all iSCSI datastores are in an inactive state.

Datastores

To determine what caused this issue, we review the ESXi logs, particularly vmkernel.log and vobd.log. The issue is evident in the vmkernel log.

vmkernel log

2017-01-10T13:04:26.803Z cpu1:32896)StorageApdHandlerEv: 110: Device or filesystem with identifier [naa.6000eb31dffdc33a0000000000000028] has entered the All Paths Down state.

2017-01-10T13:04:26.818Z cpu0:32896)StorageApdHandlerEv: 110: Device or filesystem with identifier [naa.6000eb31dffdc33a000000000000002a] has entered the All Paths Down state.

vobd log

2017-01-10T13:04:26.905Z: [scsiCorrelator] 475204262us: [esx.problem.storage.connectivity.lost] Lost connectivity to storage device naa.6000eb31dffdc33a0000000000000028. Path vmhba33:C0:T1:L0 is down. Affected datastores: "Green".

2017-01-10T13:04:26.905Z: [scsiCorrelator] 475204695us: [esx.problem.storage.connectivity.lost] Lost connectivity to storage device naa.6000eb31dffdc33a000000000000002a. Path vmhba33:C0:T0:L0 is down. Affected datastores: "Grey".

From these logs we understand that the ESXi host has lost connectivity to the datastores. Any virtual machines using the affected datastores may become unresponsive. In this example, while the datastores were mounted on ESXi, the network uplink on the NIC used for the iSCSI connection was lost. This was a transient issue, and the datastores came back up once the network uplink was restored.

In the example below, we see that datastore Black is in an inactive state.

Datastore view missing

If we look into the logs to determine what's going on, we see these events.

vmkernel log

2017-01-09T12:42:09.365Z cpu0:32888)ScsiDevice: 6878: Device naa.6000eb31dffdc33a0000000000000063 APD Notify PERM LOSS; token num:1

2017-01-09T12:42:09.366Z cpu1:32916)StorageApdHandler: 1066: Freeing APD handle 0x430180b88880 [naa.6000eb31dffdc33a0000000000000063]

2017-01-09T12:49:01.260Z cpu1:32786)WARNING: NMP: nmp_PathDetermineFailure:2973: Cmd (0xc1) PDL error (0x5/0x25/0x0) - path vmhba33:C0:T3:L0 device naa.6000eb31dffdc33a0000000000000063 - triggering path evaluation

2017-01-09T12:49:01.260Z cpu1:32786)ScsiDeviceIO: 2651: Cmd(0x439d802ec580) 0xfe, CmdSN 0x4b7 from world 32776 to dev “naa.6000eb31dffdc33a0000000000000063” failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x25 0x0.

2017-01-09T12:49:01.300Z cpu0:40210)WARNING: NMP: vmk_NmpSatpIssueTUR:1043: Device naa.6000eb31dffdc33a0000000000000063 path vmhba33:C0:T3:L0 has been unmapped from the array

After some time passes you will see this message:

2017-01-09T13:13:11.942Z cpu0:32872)ScsiDevice: 1718: Permanently inaccessible device :naa.6000eb31dffdc33a0000000000000063 has no more open connections. It is now safe to unmount datastores (if any) and delete the device.

In this case the LUN was unmapped from the array for this host, which is not a transient issue. Sense data 0x5 0x25 0x0 corresponds to “LOGICAL UNIT NOT SUPPORTED”, which indicates the device is in the Permanent Device Loss (PDL) state. Once ESXi knows the device is in the PDL state, it does not wait for the device to return.

ESXi only checks the ASC/ASCQ values, and if they happen to be 0x25/0x0 or 0x68/0x0, it marks the device as PDL.
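As a rough illustration of that check (an offline sketch, not how ESXi evaluates sense data internally), the following Python snippet pulls the sense data triplet out of a log line in the format shown above and flags the ASC/ASCQ pairs mentioned here. The shortened sample line is for illustration only.

import re

# ASC/ASCQ pairs that, per this post, ESXi treats as PDL.
PDL_ASC_ASCQ = {(0x25, 0x0), (0x68, 0x0)}
SENSE_RE = re.compile(r"Valid sense data: (0x[0-9a-fA-F]+) (0x[0-9a-fA-F]+) (0x[0-9a-fA-F]+)")

def is_pdl_sense(log_line):
    """True if the line carries ASC/ASCQ values that indicate PDL."""
    match = SENSE_RE.search(log_line)
    if not match:
        return False
    _key, asc, ascq = (int(value, 16) for value in match.groups())
    return (asc, ascq) in PDL_ASC_ASCQ

sample = ('2017-01-09T12:49:01.260Z cpu1:32786)ScsiDeviceIO: 2651: failed H:0x0 D:0x2 P:0x0 '
          'Valid sense data: 0x5 0x25 0x0.')
print(is_pdl_sense(sample))  # True: 0x25/0x0 is LOGICAL UNIT NOT SUPPORTED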

VMware KB 2004684 has in-depth information about APD and PDL situations. It also covers planned and unplanned PDL. You can read it here: Permanent Device Loss (PDL) and All-Paths-Down (APD) in vSphere 5.x and 6.x (2004684)

Further on in the hostd logs, you will see additional events that correlate to the storage connection. Look for the event IDs below.

Event ID: esx.problem.storage.connectivity.lost


The “esx.problem.storage.connectivity.lost” event indicates a loss of connectivity to the specified storage device. Any virtual machines using the affected datastore may become unresponsive.

Event ID: esx.problem.scsi.device.state.permanentloss


The “esx.problem.scsi.device.state.permanentloss” event indicates a permanent device loss.
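If you want to pull these events out of an exported log in bulk, a small sketch along these lines can help. The log path is an assumption; the event ID strings come from this post.

# Count occurrences of the two event IDs discussed above in an exported vobd.log copy.
EVENT_IDS = (
    "esx.problem.storage.connectivity.lost",        # connectivity lost (APD-style)
    "esx.problem.scsi.device.state.permanentloss",  # permanent device loss (PDL)
)

def scan_events(log_path="vobd.log"):
    hits = {event_id: 0 for event_id in EVENT_IDS}
    with open(log_path) as log:
        for line in log:
            for event_id in EVENT_IDS:
                if event_id in line:
                    hits[event_id] += 1
    return hits

if __name__ == "__main__":
    for event_id, count in scan_events().items():
        print(f"{event_id}: {count} occurrence(s)")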

Purging old data from the vCenter Server database

This video demonstrates how to purge old data from the SQL Server database used by vCenter Server. You would need to perform this task if your vCenter Server database is full.

When the vCenter Server database is full:

  • You cannot log in to vCenter Server.
  • The VMware VirtualCenter Server service may start and stop immediately.

To resolve this issue, we need to manually purge or truncate the vCenter Server database. Details of how to do this and the script to truncate the database are documented in the KB article: Purging old data from the database used by vCenter Server (1025914)
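As a rough way to see which tables are driving the growth before you run the purge scripts, a sketch like the one below reports row counts from a SQL Server-backed vCenter database. Everything here is an assumption for illustration only: the connection string, the VCDB database name, and the table list (event, task, and rollup statistics tables that commonly grow large). Follow KB 1025914 for the supported procedure.

import pyodbc

# Tables that typically hold event/task/performance history; verify against your own schema.
CANDIDATE_TABLES = ["VPX_EVENT", "VPX_EVENT_ARG", "VPX_TASK",
                    "VPX_HIST_STAT1", "VPX_HIST_STAT2", "VPX_HIST_STAT3", "VPX_HIST_STAT4"]

# Placeholder connection details -- replace with your own server, database, and credentials.
conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                      "SERVER=vcdb-sql.example.local;DATABASE=VCDB;"
                      "UID=vpxuser;PWD=changeme")
cursor = conn.cursor()
for table in CANDIDATE_TABLES:
    try:
        cursor.execute(f"SELECT COUNT(*) FROM {table}")  # illustrative size check only
        print(f"{table}: {cursor.fetchone()[0]} rows")
    except pyodbc.Error as error:
        print(f"{table}: skipped ({error})")
conn.close()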

vSphere 6.5 is here! What you need to know

vSphere 6.5 has been released for all to download. We’re sure you vSphere users are all eager to install a copy and start kicking the tires, and we’re just as eager to see that you get started on the right foot. With this in mind, we have created the following list of Knowledge Base articles that are brand new or have been updated for vSphere 6.5.

First of all: Download VMware vSphere and Get Your vSphere License Key

KB articles recommended by VMware Support before you start your journey:

 

For more details on the release please refer to the vSphere 6.5 announcement.

If you are interested in learning more about vSphere 6.5, there are several options:

Top 20 vCenter Server articles for August 2016

Here is our Top 20 vCenter articles list for August 2016. This list is ranked by the number of times a VMware Support Request was resolved by following the steps in a published Knowledge Base article.

  1. Investigating virtual machine file locks on ESXi/ESX
  2. Using the VMware Knowledge Base
  3. Uploading diagnostic information for VMware through the Secure FTP portal
  4. Correlating build numbers and versions of VMware products
  5. Licensing VMware vCenter Site Recovery Manager
  6. Permanent Device Loss (PDL) and All-Paths-Down (APD) in vSphere 5.x and 6.x
  7. Resetting the VMware vCenter Server 5.x Inventory Service database
  8. Downloading, licensing, and using VMware products
  9. Build numbers and versions of VMware vCenter Server
  10. How to repoint and re-register vCenter Server 5.1 / 5.5 and components
  11. vSphere handling of LUNs detected as snapshot LUNs
  12. Upgrading to vCenter Server 6.0 best practices
  13. How to consolidate snapshots in vSphere 5.x/6.0
  14. ESXi 5.5 Update 3b and later hosts are not manageable after an upgrade
  15. Collecting diagnostic information for VMware vCenter Server 4.x, 5.x and 6.0
  16. How to enable EVC in vCenter Server
  17. Upgrading to vCenter Server 5.5 best practices
  18. VMware End User License Agreements
  19. “Failed to verify the SSL certificate for one or more vCenter Server Systems” error in the vSphere Web Client
  20. VMware vCenter Server 5.x fails to start with the error: Failed to add LDAP entry

Path failover may not be successful when using Cisco MDS Switches on NX-OS 7.3 and FCoE based HBAs

So I wanted to get this blog post out sooner rather than later as it might affect a significant number of customers. In a nutshell, if you perform array maintenance that requires you to reboot a storage controller, the probability of successful path failover is low. This is effectively due to stale entries in the Fibre Channel Name Server on Cisco MDS switches running NX-OS 7.3, which is a rather new code release. As the title suggests, this only affects FCoE HBAs, specifically ones that rely on our libfc/libfcoe stack for FCoE connectivity. Such HBAs would be Cisco fnic HBAs as well as a handful of Emulex FCoE HBAs and a couple of others.

Here is an example of a successful path failover after receiving an RSCN (Registered State Change Notification) from the array controller when a controller reboot is performed:

2016-07-07T17:36:34.230Z cpu17:33461)<6>host4: disc: Received an RSCN event
 2016-07-07T17:36:34.230Z cpu17:33461)<6>host4: disc: Port address format for port (e50800)
 2016-07-07T17:36:34.230Z cpu17:33461)<6>host4: disc: RSCN received: not rediscovering. redisc 0 state 9 in_prog 0
 2016-07-07T17:36:34.231Z cpu14:33474)<6>host4: disc: GPN_ID rejected reason 9 exp 1
 2016-07-07T17:36:34.231Z cpu14:33474)<6>host4: rport e50800: Remove port
 2016-07-07T17:36:34.231Z cpu14:33474)<6>host4: rport e50800: Port entered LOGO state from Ready state
 2016-07-07T17:36:34.231Z cpu14:33474)<6>host4: rport e50800: Delete port
 2016-07-07T17:36:34.231Z cpu54:33448)<6>host4: rport e50800: work event 3
 2016-07-07T17:36:34.231Z cpu54:33448)<7>fnic : 4 :: fnic_rport_exch_reset called portid 0xe50800
 2016-07-07T17:36:34.231Z cpu54:33448)<7>fnic : 4 :: fnic_rport_reset_exch: Issuing abts
 2016-07-07T17:36:34.231Z cpu54:33448)<6>host4: rport e50800: Received a LOGO response closed
 2016-07-07T17:36:34.231Z cpu54:33448)<6>host4: rport e50800: Received a LOGO response, but in state Delete
 2016-07-07T17:36:34.231Z cpu54:33448)<6>host4: rport e50800: work delete

Here is a breakdown of what you just read:

  1. RSCN is received from the array controller
  2. The operation is now in state = 9
  3. GPN_ID (Get Port Name ID) is issued to the switches but is rejected because the state is 9 (See http://lists.open-fcoe.org/pipermail/fcoe-devel/2009-June/002828.html)
  4. LibFC begins to remove the port information on the host
  5. Port enters LOGO (Logout) state from previous state, which was Ready
  6. LibFC Deletes the port information

After this, the ESX host will fail over to the other available ports, which would be on the peer SP:

2016-07-07T17:36:44.233Z cpu33:33459)<3> rport-4:0-1: blocked FC remote port time out: saving binding
 2016-07-07T17:36:44.233Z cpu55:33473)<7>fnic : 4 :: fnic_terminate_rport_io called wwpn 0x524a937aeb740513, wwnn0xffffffffffffffff, rport 0x0x4309b72f3c50, portid 0xffffffff
 2016-07-07T17:36:44.257Z cpu52:33320)NMP: nmp_ThrottleLogForDevice:3298: Cmd 0x2a (0x43a659d15bc0, 36277) to dev "naa.624a93704d1296f5972642ea0001101c" on path "vmhba3:C0:T0:L1" Failed: H:0x1 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0. Act:FAILOVER

A Host status of H:0x1 means NO_CONNECT, hence the failover.
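For quick triage of lines like the one above, here is a small Python helper of my own (an assumption-based sketch, not a VMware tool) that pulls the H:/D:/P: status triplet out of an NMP message and names the host status; only the codes relevant to this post are mapped.

import re

HOST_STATUS = {0x0: "OK", 0x1: "NO_CONNECT"}  # only the codes discussed in this post
STATUS_RE = re.compile(r"H:(0x[0-9a-fA-F]+) D:(0x[0-9a-fA-F]+) P:(0x[0-9a-fA-F]+)")

def decode_host_status(line):
    """Return a readable name for the H: (host) status in an NMP log line."""
    match = STATUS_RE.search(line)
    if not match:
        return "no status triplet found"
    host = int(match.group(1), 16)
    return HOST_STATUS.get(host, f"unmapped host status {hex(host)}")

sample = ('NMP: nmp_ThrottleLogForDevice:3298: Cmd 0x2a to dev "naa.624a93704d1296f5972642ea0001101c" '
          'on path "vmhba3:C0:T0:L1" Failed: H:0x1 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0. Act:FAILOVER')
print(decode_host_status(sample))  # NO_CONNECT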

Now here is an example of the same operation on a Cisco MDS switch running NX-OS 7.3 when a storage controller on the array is rebooted:

2016-07-14T19:02:03.551Z cpu47:33448)<6>host2: disc: Received an RSCN event
 2016-07-14T19:02:03.551Z cpu47:33448)<6>host2: disc: Port address format for port (e50900)
 2016-07-14T19:02:03.551Z cpu47:33448)<6>host2: disc: RSCN received: not rediscovering. redisc 0 state 9 in_prog 0
 2016-07-14T19:02:03.557Z cpu47:33444)<6>host2: rport e50900: ADISC port
 2016-07-14T19:02:03.557Z cpu47:33444)<6>host2: rport e50900: sending ADISC from Ready state
 2016-07-14T19:02:23.558Z cpu47:33448)<6>host2: rport e50900: Received a ADISC response
 2016-07-14T19:02:23.558Z cpu47:33448)<6>host2: rport e50900: Error 1 in state ADISC, retries 0
 2016-07-14T19:02:23.558Z cpu47:33448)<6>host2: rport e50900: Port entered LOGO state from ADISC state
 2016-07-14T19:02:43.560Z cpu2:33442)<6>host2: rport e50900: Received a LOGO response timeout
 2016-07-14T19:02:43.560Z cpu2:33442)<6>host2: rport e50900: Error -1 in state LOGO, retrying
 2016-07-14T19:02:43.560Z cpu58:33446)<6>host2: rport e50900: Port entered LOGO state from LOGO state
 2016-07-14T19:03:03.563Z cpu54:33449)<6>host2: rport e50900: Received a LOGO response timeout
 2016-07-14T19:03:03.563Z cpu54:33449)<6>host2: rport e50900: Error -1 in state LOGO, retrying
 2016-07-14T19:03:03.563Z cpu2:33442)<6>host2: rport e50900: Port entered LOGO state from LOGO state
 2016-07-14T19:03:23.565Z cpu32:33447)<6>host2: rport e50900: Received a LOGO response timeout
 2016-07-14T19:03:23.565Z cpu32:33447)<6>host2: rport e50900: Error -1 in state LOGO, retrying
 2016-07-14T19:03:23.565Z cpu54:33449)<6>host2: rport e50900: Port entered LOGO state from LOGO state
 2016-07-14T19:03:43.567Z cpu50:33445)<6>host2: rport e50900: Received a LOGO response timeout
 2016-07-14T19:03:43.567Z cpu50:33445)<6>host2: rport e50900: Error -1 in state LOGO, retrying
 2016-07-14T19:03:43.567Z cpu32:33447)<6>host2: rport e50900: Port entered LOGO state from LOGO state
 2016-07-14T19:04:03.568Z cpu54:33443)<6>host2: rport e50900: Received a LOGO response timeout
 2016-07-14T19:04:03.568Z cpu54:33443)<6>host2: rport e50900: Error -1 in state LOGO, retrying
 2016-07-14T19:04:03.569Z cpu32:33472)<6>host2: rport e50900: Port entered LOGO state from LOGO state
 2016-07-14T19:04:43.573Z cpu20:33473)<6>host2: rport e50900: Received a LOGO response timeout
 2016-07-14T19:04:43.573Z cpu20:33473)<6>host2: rport e50900: Error -1 in state LOGO, retrying
 2016-07-14T19:04:43.573Z cpu54:33443)<6>host2: rport e50900: Port entered LOGO state from LOGO state

Notice the difference? Here is a breakdown of what happened this time:

  1. RSCN is received from the array controller
  2. The operation is now in state = 9
  3. GPN_ID (Get Port Name ID) is issued to the switches but is NOT rejected
  4. Since GPN_ID is valid, LibFC issues an Address Discovery (ADISC)
  5. 20 seconds later, the ADISC that was sent times out, and this continues to occur every 20 seconds

The problem is that the ADISC will continue this behavior until the array controller completes the reboot and is back online:

2016-07-14T19:04:47.276Z cpu56:33451)<6>host2: disc: Received an RSCN event
 2016-07-14T19:04:47.276Z cpu56:33451)<6>host2: disc: Port address format for port (e50900)
 2016-07-14T19:04:47.276Z cpu56:33451)<6>host2: disc: RSCN received: not rediscovering. redisc 0 state 9 in_prog 0
 2016-07-14T19:04:47.277Z cpu20:33454)<6>host2: rport e50900: Login to port
 2016-07-14T19:04:47.277Z cpu20:33454)<6>host2: rport e50900: Port entered PLOGI state from LOGO state
 2016-07-14T19:04:47.278Z cpu57:33456)<6>host2: rport e50900: Received a PLOGI accept
 2016-07-14T19:04:47.278Z cpu57:33456)<6>host2: rport e50900: Port entered PRLI state from PLOGI state
 2016-07-14T19:04:47.278Z cpu52:33458)<6>host2: rport e50900: Received a PRLI accept
 2016-07-14T19:04:47.278Z cpu52:33458)<6>host2: rport e50900: PRLI spp_flags = 0x21
 2016-07-14T19:04:47.278Z cpu52:33458)<6>host2: rport e50900: Port entered RTV state from PRLI state
 2016-07-14T19:04:47.278Z cpu57:33452)<6>host2: rport e50900: Received a RTV reject
 2016-07-14T19:04:47.278Z cpu57:33452)<6>host2: rport e50900: Port is Ready

What is actually happening here is that the Cisco MDS switches are quick to receive the RSCN from the array controller and pass it along to the host HBAs. However, due to a timing issue, the entries for that array controller in the FCNS (Fibre Channel Name Server) database are still present when the host HBAs issue the GPN_ID, so the switches respond to that request instead of rejecting it. If you review the entry in http://lists.open-fcoe.org/pipermail/fcoe-devel/2009-June/002828.html, you will see that code was added to validate that the target is actually off the fabric instead of assuming it is based on the RSCN alone. There are various reasons to do this, but suffice it to say that it is better to be safe than sorry in this instance.
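If you suspect a host is caught in this loop, a rough sketch like the following counts the repeating LOGO-timeout pattern per rport in an exported copy of vmkernel.log. The threshold and log path are my own assumptions; a healthy failover shows at most one or two LOGO events per rport.

import re
from collections import Counter

LOGO_TIMEOUT_RE = re.compile(r"rport ([0-9a-f]+): Received a LOGO response timeout")

def find_stuck_rports(log_path="vmkernel.log", threshold=3):
    """Return rports whose LOGO timeouts repeat often enough to suggest a stale FCNS entry."""
    timeouts = Counter()
    with open(log_path) as log:
        for line in log:
            match = LOGO_TIMEOUT_RE.search(line)
            if match:
                timeouts[match.group(1)] += 1
    return {rport: count for rport, count in timeouts.items() if count >= threshold}

if __name__ == "__main__":
    for rport, count in find_stuck_rports().items():
        print(f"rport {rport}: {count} LOGO timeouts -- possible stale FCNS entry")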

Unfortunately, there is no fix for this at this time, which is why it is potentially so impactful to our customers: it means they are effectively unable to perform array maintenance without the risk of VMs crashing or even corruption. Cisco is fixing this in 7.3(1), which is due out in a few weeks.

Here are a couple of references regarding this issue:

 

Cheers,
Nathan Small
Technical Director
Global Support Services
VMware

Top 20 vCenter Server articles for July 2016

Here is our Top 20 vCenter articles list for July 2016. This list is ranked by the number of times a VMware Support Request was resolved by following the steps in a published Knowledge Base article.

  1. Uploading diagnostic information for VMware using FTP
  2. Downloading, licensing, and using VMware products
  3. Licensing VMware vCenter Site Recovery Manager
  4. Collecting diagnostic information for VMware vCenter Server 4.x, 5.x and 6.0
  5. Using the VMware Knowledge Base
  6. Best practices for upgrading to vCenter Server 6.0
  7. ESXi hosts are no longer manageable after an upgrade
  8. Permanent Device Loss (PDL) and All-Paths-Down (APD) in vSphere 5.x and 6.x
  9. Consolidating snapshots in vSphere 5.x/6.0
  10. Diagnosing an ESXi/ESX host that is disconnected or not responding in VMware vCenter Server
  11. How to unlock and reset the vCenter SSO administrator password
  12. Resetting the VMware vCenter Server 5.x Inventory Service database
  13. Correlating build numbers and versions of VMware products
  14. Back up and restore vCenter Server Appliance/vCenter Server 6.0 vPostgres database
  15. Build numbers and versions of VMware vCenter Server
  16. Re-pointing and re-registering VMware vCenter Server 5.1 / 5.5 and components
  17. “Deprecated VMFS volume(s) found on the host” error in ESXi hosts
  18. vmware-dataservice-sca and vsphere-client status change from green to yellow
  19. Investigating virtual machine file locks on ESXi/ESX
  20. VMware End User License Agreements

Top 20 vCenter Server articles for June 2016

Here is our Top 20 vCenter articles list for June 2016. This list is ranked by the number of times a VMware Support Request was resolved by following the steps in a published Knowledge Base article.

  1. Purging old data from the database used by VMware vCenter Server
  2. ESXi 5.5 Update 3b and later hosts are no longer manageable after upgrade
  3. Resetting the VMware vCenter Server and vCenter Server Appliance 6.0 Inventory Service database
  4. Unlocking and resetting the VMware vCenter Single Sign-On administrator password
  5. Permanent Device Loss (PDL) and All-Paths-Down (APD) in vSphere 5.x and 6.x
  6. Upgrading to vCenter Server 6.0 best practices
  7. Correlating build numbers and versions of VMware products
  8. Update sequence for vSphere 6.0 and its compatible VMware products
  9. Stopping, starting, or restarting VMware vCenter Server services
  10. In vCenter Server 6.0, the vmware-dataservice-sca and vsphere-client status change from green to yellow continually
  11. Enabling EVC on a cluster when vCenter Server is running in a virtual machine
  12. The vpxd process becomes unresponsive after upgrading to VMware vCenter Server 5.5
  13. Migrating the vCenter Server database from SQL Express to full SQL Server
  14. Reducing the size of the vCenter Server database when the rollup scripts take a long time to run
  15. Consolidating snapshots in vSphere 5.x/6.0
  16. Back up and restore vCenter Server Appliance/vCenter Server 6.0 vPostgres database
  17. Diagnosing an ESXi/ESX host that is disconnected or not responding in VMware vCenter Server
  18. Build numbers and versions of VMware vCenter Server
  19. Increasing the size of a virtual disk
  20. Determining where growth is occurring in the VMware vCenter Server database

Windows 2008+ incremental backups become full backups in ESXi 6.0 b3825889

VMware is actively working to address a recently discovered issue wherein an incremental backup becomes a full backup when backing up Windows 2008 (or later) virtual machines with a VSS-based, application-quiesced snapshot.

This recent CBT (Changed Block Tracking) issue does not cause any data loss or data corruption.

This issue is well understood and VMware engineering is actively working on a fix.

For more details on this issue and the latest status on its resolution, please refer to the KB article: After upgrading to ESXi 6.0 Build 3825889, incremental virtual machine backups effectively run as full backups when application consistent quiescing is enabled (2145895)

Subscribe to the RSS feed for the KB article using this link to ensure you do not miss any updates.

Top 20 vCenter articles for May 2016

Here is our Top 20 vCenter articles list for May 2016. This list is ranked by the number of times a VMware Support Request was resolved by following the steps in a published Knowledge Base article.

  1. Purging old data from the database used by VMware vCenter Server
  2. ESXi 5.5 Update 3b and later hosts are no longer manageable after upgrade
  3. Permanent Device Loss (PDL) and All-Paths-Down (APD) in vSphere 5.x and 6.x
  4. Upgrading to vCenter Server 6.0 best practices
  5. ESX/ESXi host keeps disconnecting and reconnecting when heartbeats are not received by vCenter Server
  6. Unlocking and resetting the VMware vCenter Single Sign-On administrator password
  7. Consolidating snapshots in vSphere 5.x/6.0
  8. Powering on a virtual machine fails after a storage outage with the error: could not open/create change tracking file
  9. Diagnosing an ESXi/ESX host that is disconnected or not responding in VMware vCenter Server
  10. VMware vSphere Web Client displays the error: Failed to verify the SSL certificate for one or more vCenter Server Systems
  11. Deprecated VMFS volume warning reported by ESXi hosts
  12. Resetting the VMware vCenter Server and vCenter Server Appliance 6.0 Inventory Service database
  13. Cannot take a quiesced snapshot of Windows 2008 R2 virtual machine
  14. vCenter Server 5.5 fails to start after reboot with the error: Unable to create SSO facade: Invalid response code: 404 Not Found
  15. Update sequence for vSphere 6.0 and its compatible VMware products
  16. Registering or adding a virtual machine to the Inventory in vCenter Server or in an ESX/ESXi host
  17. Back up and restore vCenter Server Appliance/vCenter Server 6.0 vPostgres database
  18. Updating rollup jobs after the error: Performance data is currently not available for this entity
  19. Configuring VMware vCenter Server to send alarms when virtual machines are running from snapshots
  20. Determining where growth is occurring in the VMware vCenter Server database

Top 20 ESXi articles for May 2016

Here is our Top 20 ESXi articles list for May 2016. This list is ranked by the number of times a VMware Support Request was resolved by following the steps in a published Knowledge Base article.

  1. VMware ESXi 5.x host experiences a purple diagnostic screen mentioning E1000PollRxRing and E1000DevRx
  2. ESXi 5.5 Update 3b and later hosts are no longer manageable after upgrade
  3. Commands to monitor snapshot deletion in VMware ESXi/ESX
  4. Recreating a missing virtual machine disk descriptor file
  5. Determining Network/Storage firmware and driver version in ESXi/ESX 4.x, ESXi 5.x, and ESXi 6.x
  6. Permanent Device Loss (PDL) and All-Paths-Down (APD) in vSphere 5.x and 6.x
  7. Installing patches on an ESXi 5.x/6.x host from the command line
  8. Identifying and addressing Non-Maskable Interrupt events on an ESX/ESXi host
  9. Restarting the Management agents on an ESXi or ESX host
  10. Downloading and installing async drivers in VMware ESXi 5.x and ESXi 6.0.x
  11. Enabling or disabling VAAI ATS heartbeat
  12. ESXi 5.5 or 6.0 host disconnects from vCenter Server with the syslog.log error: Unable to allocate memory
  13. Powering off a virtual machine on an ESXi host
  14. Consolidating snapshots in vSphere 5.x/6.0
  15. Powering on a virtual machine fails after a storage outage with the error: could not open/create change tracking file
  16. Snapshot consolidation in VMware ESXi 5.5.x and ESXi 6.0.x fails with the error: maximum consolidate retries was exceeded for scsix:x
  17. Reverting to a previous version of ESXi
  18. Configuring a diagnostic coredump partition on an ESXi 5.x/6.0 host
  19. Diagnosing an ESXi/ESX host that is disconnected or not responding in VMware vCenter Server
  20. Enabling or disabling simultaneous write protection provided by VMFS using the multi-writer flag