
VMware NSX for vSphere 6.2.4 now available

VMware has made NSX for vSphere 6.2.4 available for download. NSX 6.2.4 provides critical bug fixes identified in previous releases and delivers a security patch for CVE-2016-2079, a critical input validation vulnerability affecting sites that use NSX SSL VPN.

  • For customers who use SSL VPN, VMware strongly recommends a review of CVE-2016-2079 and an upgrade to NSX 6.2.4.
  • For customers who have installed NSX 6.2.3 or 6.2.3a, VMware recommends installing NSX 6.2.4 to address critical bug fixes.

Caution: Before upgrading, consult the NSX 6.2.4 Release Notes available from the NSX Documentation Center and Recommended minimum version for NSX for vSphere with GID, ESXi, and vCenter Server (2144295).

Critical Alert on 6.2.3 and 6.2.3a for DLR users: For more information, see “Fixed issue 1703913: NSX DLR HA nodes remain in a split-brain state” in the NSX for vSphere 6.2.4 Release Notes and VMware Knowledge Base article NSX 6.2.3 DLR HA nodes remain in a split brain state (2146506). This issue will occur after approximately 24 days of BFD uptime and will continue to reoccur every 24 days.

Customers who are using 6.2.3 or 6.2.3a are strongly advised to review KB 2146506, review how to prevent or remediate the issue, and plan to upgrade to NSX 6.2.4.

vShield Endpoint Update

VMware has announced the End of Availability (EOA) and End of General Support (EOGS) of VMware vCloud Networking and Security 5.5.x. The EOGS date for VMware vCloud Networking and Security 5.5.x is September 19, 2016.  For customers using vCNS Manager specifically to manage vShield Endpoint for agentless anti-virus, Technical Guidance is available until March 31, 2017. For more information, see End of Availability and End of General Support for VMware vCloud Networking and Security 5.5.x (2144733).

For more information on additional partner solution availability, see Implementation of VMware vShield Endpoint beyond vCloud Networking and Security End of Availability (EOA) (2110078).

Note: Consult the VMware Compatibility Guide for Endpoint partner solution certification status before upgrading.  If your preferred solution is not yet certified, contact that vendor.

How to track the top field issues

Path failover may not be successful when using Cisco MDS Switches on NX-OS 7.3 and FCoE based HBAs

I wanted to get this blog post out sooner rather than later because it might affect a significant number of customers. In a nutshell, if you perform array maintenance that requires you to reboot a storage controller, the probability of a successful path failover is low. This is caused by stale entries in the Fibre Channel Name Server on Cisco MDS switches running NX-OS 7.3, which is a fairly new code release. As the title suggests, this only affects FCoE HBAs, specifically ones that rely on our libfc/libfcoe stack for FCoE connectivity. Such HBAs include Cisco fnic HBAs as well as a handful of Emulex FCoE HBAs and a couple of others.
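If you want a quick way to see whether a given host even has adapters that ride on this stack, you can check which driver each HBA is claimed by. Below is a minimal sketch in Python, assuming it runs where the esxcli binary is available (for example, the ESXi Shell); the set of driver names to flag is an assumption and should be adjusted for your hardware.

# Minimal sketch: flag HBAs whose driver suggests the libfc/libfcoe stack.
# Assumes the esxcli binary is available (e.g. ESXi Shell); the driver names
# below are illustrative, not an exhaustive list of affected adapters.
import subprocess

SUSPECT_DRIVERS = {"fnic"}  # assumption: add your Emulex FCoE driver name here if applicable

def list_adapters():
    out = subprocess.check_output(
        ["esxcli", "storage", "core", "adapter", "list"],
        universal_newlines=True)
    return out.splitlines()

def flag_fcoe_adapters(lines):
    flagged = []
    for line in lines[2:]:                           # skip the header and separator rows
        fields = line.split()
        if len(fields) >= 2 and fields[1] in SUSPECT_DRIVERS:
            flagged.append((fields[0], fields[1]))   # (vmhba name, driver)
    return flagged

if __name__ == "__main__":
    for hba, driver in flag_fcoe_adapters(list_adapters()):
        print("%s is claimed by %s -- check it against this advisory" % (hba, driver))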

Here is an example of a successful path failover after an RSCN (Registered State Change Notification) is received from the array controller when it is rebooted:

2016-07-07T17:36:34.230Z cpu17:33461)<6>host4: disc: Received an RSCN event
2016-07-07T17:36:34.230Z cpu17:33461)<6>host4: disc: Port address format for port (e50800)
2016-07-07T17:36:34.230Z cpu17:33461)<6>host4: disc: RSCN received: not rediscovering. redisc 0 state 9 in_prog 0
2016-07-07T17:36:34.231Z cpu14:33474)<6>host4: disc: GPN_ID rejected reason 9 exp 1
2016-07-07T17:36:34.231Z cpu14:33474)<6>host4: rport e50800: Remove port
2016-07-07T17:36:34.231Z cpu14:33474)<6>host4: rport e50800: Port entered LOGO state from Ready state
2016-07-07T17:36:34.231Z cpu14:33474)<6>host4: rport e50800: Delete port
2016-07-07T17:36:34.231Z cpu54:33448)<6>host4: rport e50800: work event 3
2016-07-07T17:36:34.231Z cpu54:33448)<7>fnic : 4 :: fnic_rport_exch_reset called portid 0xe50800
2016-07-07T17:36:34.231Z cpu54:33448)<7>fnic : 4 :: fnic_rport_reset_exch: Issuing abts
2016-07-07T17:36:34.231Z cpu54:33448)<6>host4: rport e50800: Received a LOGO response closed
2016-07-07T17:36:34.231Z cpu54:33448)<6>host4: rport e50800: Received a LOGO response, but in state Delete
2016-07-07T17:36:34.231Z cpu54:33448)<6>host4: rport e50800: work delete

Here is a breakdown of what you just read:

  1. RSCN is received from the array controller
  2. Operation is now in state = 9
  3. GPN_ID (Get Port Name ID) is issued to the switches but is rejected because the state is 9 (See http://lists.open-fcoe.org/pipermail/fcoe-devel/2009-June/002828.html)
  4. LibFC begins to remove the port information on the host
  5. Port enters LOGO (Logout) state from previous state, which was Ready
  6. LibFC deletes the port information

After this, the ESX host will fail over to other available ports, which would be on the peer SP:

2016-07-07T17:36:44.233Z cpu33:33459)<3> rport-4:0-1: blocked FC remote port time out: saving binding
2016-07-07T17:36:44.233Z cpu55:33473)<7>fnic : 4 :: fnic_terminate_rport_io called wwpn 0x524a937aeb740513, wwnn0xffffffffffffffff, rport 0x0x4309b72f3c50, portid 0xffffffff
2016-07-07T17:36:44.257Z cpu52:33320)NMP: nmp_ThrottleLogForDevice:3298: Cmd 0x2a (0x43a659d15bc0, 36277) to dev "naa.624a93704d1296f5972642ea0001101c" on path "vmhba3:C0:T0:L1" Failed: H:0x1 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0. Act:FAILOVER

A Host status of H:0x1 means NO_CONNECT, hence the failover.
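If you want to confirm that a host actually performed (or attempted) this kind of failover, the NO_CONNECT events can be pulled out of the vmkernel log. Here is a minimal sketch in Python, assuming the default log location and the NMP message format shown above.

# Minimal sketch: scan a vmkernel log for NMP throttled-command messages and
# report entries whose host status is H:0x1 (NO_CONNECT). The log path and the
# regular expression are assumptions based on the message format shown above.
import re

NMP_LINE = re.compile(
    r'nmp_ThrottleLogForDevice.*dev "(?P<dev>[^"]+)" on path '
    r'"(?P<path>[^"]+)" Failed: H:(?P<hstatus>0x[0-9a-fA-F]+)')

def find_no_connect(logfile="/var/log/vmkernel.log"):
    hits = []
    with open(logfile) as f:
        for line in f:
            m = NMP_LINE.search(line)
            if m and m.group("hstatus") == "0x1":          # H:0x1 == NO_CONNECT
                hits.append((line.split()[0], m.group("dev"), m.group("path")))
    return hits

if __name__ == "__main__":
    for ts, dev, path in find_no_connect():
        print("%s NO_CONNECT on %s via %s" % (ts, dev, path))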

Now here is an example of the same operation on a Cisco MDS switch running NX-OS 7.3 when a storage controller on the array is rebooted:

2016-07-14T19:02:03.551Z cpu47:33448)<6>host2: disc: Received an RSCN event
2016-07-14T19:02:03.551Z cpu47:33448)<6>host2: disc: Port address format for port (e50900)
2016-07-14T19:02:03.551Z cpu47:33448)<6>host2: disc: RSCN received: not rediscovering. redisc 0 state 9 in_prog 0
2016-07-14T19:02:03.557Z cpu47:33444)<6>host2: rport e50900: ADISC port
2016-07-14T19:02:03.557Z cpu47:33444)<6>host2: rport e50900: sending ADISC from Ready state
2016-07-14T19:02:23.558Z cpu47:33448)<6>host2: rport e50900: Received a ADISC response
2016-07-14T19:02:23.558Z cpu47:33448)<6>host2: rport e50900: Error 1 in state ADISC, retries 0
2016-07-14T19:02:23.558Z cpu47:33448)<6>host2: rport e50900: Port entered LOGO state from ADISC state
2016-07-14T19:02:43.560Z cpu2:33442)<6>host2: rport e50900: Received a LOGO response timeout
2016-07-14T19:02:43.560Z cpu2:33442)<6>host2: rport e50900: Error -1 in state LOGO, retrying
2016-07-14T19:02:43.560Z cpu58:33446)<6>host2: rport e50900: Port entered LOGO state from LOGO state
2016-07-14T19:03:03.563Z cpu54:33449)<6>host2: rport e50900: Received a LOGO response timeout
2016-07-14T19:03:03.563Z cpu54:33449)<6>host2: rport e50900: Error -1 in state LOGO, retrying
2016-07-14T19:03:03.563Z cpu2:33442)<6>host2: rport e50900: Port entered LOGO state from LOGO state
2016-07-14T19:03:23.565Z cpu32:33447)<6>host2: rport e50900: Received a LOGO response timeout
2016-07-14T19:03:23.565Z cpu32:33447)<6>host2: rport e50900: Error -1 in state LOGO, retrying
2016-07-14T19:03:23.565Z cpu54:33449)<6>host2: rport e50900: Port entered LOGO state from LOGO state
2016-07-14T19:03:43.567Z cpu50:33445)<6>host2: rport e50900: Received a LOGO response timeout
2016-07-14T19:03:43.567Z cpu50:33445)<6>host2: rport e50900: Error -1 in state LOGO, retrying
2016-07-14T19:03:43.567Z cpu32:33447)<6>host2: rport e50900: Port entered LOGO state from LOGO state
2016-07-14T19:04:03.568Z cpu54:33443)<6>host2: rport e50900: Received a LOGO response timeout
2016-07-14T19:04:03.568Z cpu54:33443)<6>host2: rport e50900: Error -1 in state LOGO, retrying
2016-07-14T19:04:03.569Z cpu32:33472)<6>host2: rport e50900: Port entered LOGO state from LOGO state
2016-07-14T19:04:43.573Z cpu20:33473)<6>host2: rport e50900: Received a LOGO response timeout
2016-07-14T19:04:43.573Z cpu20:33473)<6>host2: rport e50900: Error -1 in state LOGO, retrying
2016-07-14T19:04:43.573Z cpu54:33443)<6>host2: rport e50900: Port entered LOGO state from LOGO state

Notice the difference? Here is a breakdown of what happened this time:

  1. RSCN is received from the array controller
  2. Operation is now in state = 9
  3. GPN_ID (Get Port Name ID) is issued to the switches but is NOT rejected
  4. Since GPN_ID is valid, LibFC issues an Address Discovery (ADISC)
  5. 20 seconds later, the ADISC request times out, and this continues to occur every 20 seconds

The problem is that the ADISC will continue this behavior until the array controller completes the reboot and is back online:

2016-07-14T19:04:47.276Z cpu56:33451)<6>host2: disc: Received an RSCN event
2016-07-14T19:04:47.276Z cpu56:33451)<6>host2: disc: Port address format for port (e50900)
2016-07-14T19:04:47.276Z cpu56:33451)<6>host2: disc: RSCN received: not rediscovering. redisc 0 state 9 in_prog 0
2016-07-14T19:04:47.277Z cpu20:33454)<6>host2: rport e50900: Login to port
2016-07-14T19:04:47.277Z cpu20:33454)<6>host2: rport e50900: Port entered PLOGI state from LOGO state
2016-07-14T19:04:47.278Z cpu57:33456)<6>host2: rport e50900: Received a PLOGI accept
2016-07-14T19:04:47.278Z cpu57:33456)<6>host2: rport e50900: Port entered PRLI state from PLOGI state
2016-07-14T19:04:47.278Z cpu52:33458)<6>host2: rport e50900: Received a PRLI accept
2016-07-14T19:04:47.278Z cpu52:33458)<6>host2: rport e50900: PRLI spp_flags = 0x21
2016-07-14T19:04:47.278Z cpu52:33458)<6>host2: rport e50900: Port entered RTV state from PRLI state
2016-07-14T19:04:47.278Z cpu57:33452)<6>host2: rport e50900: Received a RTV reject
2016-07-14T19:04:47.278Z cpu57:33452)<6>host2: rport e50900: Port is Ready

What is actually happening here is that the Cisco MDS switches are quick to receive the RSCN from the array controller and pass it along to the host HBAs. However, due to a timing issue, the entries for that array controller are still present in the FCNS (Fibre Channel Name Server) database when the host HBAs issue the GPN_ID, so the switches respond to that request instead of rejecting it. If you review the entry at http://lists.open-fcoe.org/pipermail/fcoe-devel/2009-June/002828.html, you will see that code was added to validate that the target is actually off the fabric rather than assuming it is based on the RSCN alone. There are various reasons to do this, but suffice it to say that it is better to be safe than sorry in this instance.
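If you suspect a host is sitting in this loop during array maintenance, counting the LOGO timeouts per remote port makes it obvious. Here is a minimal sketch in Python, assuming the default vmkernel log location and the message format in the excerpts above; the threshold is an arbitrary assumption you can tune.

# Minimal sketch: count repeated "Received a LOGO response timeout" entries per
# remote port. A port cycling through LOGO timeouts roughly every 20 seconds
# (as in the excerpt above) is the signature of this issue.
import re
from collections import Counter

LOGO_TIMEOUT = re.compile(
    r'host\d+: rport (?P<rport>[0-9a-f]+): Received a LOGO response timeout')

def stuck_rports(logfile="/var/log/vmkernel.log", threshold=3):
    counts = Counter()
    with open(logfile) as f:
        for line in f:
            m = LOGO_TIMEOUT.search(line)
            if m:
                counts[m.group("rport")] += 1
    return {rport: n for rport, n in counts.items() if n >= threshold}

if __name__ == "__main__":
    for rport, n in sorted(stuck_rports().items()):
        print("rport %s hit %d LOGO timeouts -- likely stuck in the retry loop" % (rport, n))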

Unfortunately, there is no fix for this at this time, which is why it is potentially so impactful: it means customers effectively cannot perform array maintenance without the risk of VMs crashing or even data corruption. Cisco is fixing this in 7.3(1), which is due out in a few weeks.

Here are a couple of references regarding this issue:

 

Cheers,
Nathan Small
Technical Director
Global Support Services
VMware

NSX for vSphere Field Advisory – July 2016 Edition

This blog has been updated to reflect new information as it was provided. Changes are marked with an *.

VMware NSX for vSphere 6.2.3 Update

  • NSX for vSphere 6.2.3 has an issue that can affect both new NSX customers as well as customers upgrading from previous versions of NSX. The NSX for vSphere 6.2.3 release has been pulled from distribution. The current version available is NSX for vSphere 6.2.2, which is the VMware minimum recommended release. Refer to KB 2144295. VMware is actively working towards releasing the next version to replace NSX for vSphere 6.2.3. *
  • VMware NSX for vSphere version 6.2.3 delivered a security patch to address a known SSL VPN security vulnerability (CVE-2016-2079). This issue may allow a remote attacker to gain access to sensitive information. Customers who use SSL VPN are strongly advised to review CVE-2016-2079 and contact VMware Support to request immediate assistance. *
  • The next version of NSX for vSphere contains fixes for bugs that have been found in NSX 6.2.3.
  • Customers who have already upgraded to 6.2.3 are advised to review the following KB articles:
    • VMware Knowledge Base article 2146227, VMs using Distributed Firewall (DFW) and Security Groups (SG) may experience connectivity issues. A workaround is available. *
    • VMware Knowledge Base article 2146293, Virtual machines lose network connectivity in NSX 6.2.x. *
    • VMware Knowledge Base article 2146413, VMs lose network connectivity in NSX with DLR HA. *

Critical Alert for Edge DLR users on NSX 6.2.3 and 6.2.3a *

  • NSX 6.2.3 DLR HA nodes remain in a split brain state (2146506) *
    • A new issue has been identified that can put both primary and secondary HA nodes into an Active state, causing network disruption.
    • This issue will occur after approximately 24 days of BFD uptime and will continue to reoccur every 24 days.
    • Customers who are using NSX-V 6.2.3 or 6.2.3a are strongly advised to review KB 2146506, review how to prevent or remediate the issue and plan to upgrade to the next version of NSX.

For questions or concerns, contact VMware Support. To contact VMware support, see Filing a Support Request in My VMware (2006985) or How to Submit a Support Request.

Top NSX for vSphere issues for July 2016

NSX for vSphere 6.2.3 other new and changed issues

Notes:

  • vCloud Director 8.0.1 is now interop-tested and supported with NSX 6.2.3. For more information, see the VMware Interoperability Matrix.
  • VMware is working actively with anti-virus solution partners to influence completion of their certification testing efforts with both NSX 6.2.2 and 6.2.3. For more information, see the VMware Compatibility Guide (VCG).

Other trending issues

Known interoperability issues during upgrade to NSX for vSphere 6.2.3

Note: VMware vSphere 6.0 supports VIB downloads over port 443 (instead of port 80). This port is opened and closed dynamically. The intermediate devices between the ESXi hosts and vCenter Server must allow traffic using this port.
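One quick way to verify that the path is open before starting the upgrade is to test the port from the host side. Here is a minimal sketch in Python 3; the vCenter Server address is a placeholder to replace with your own.

# Minimal sketch: confirm that TCP port 443 is reachable before a VIB download.
# The vCenter address below is a placeholder; substitute your own FQDN or IP.
import socket

def port_open(host, port=443, timeout=5):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:       # covers timeouts and refused/unreachable connections
        return False

if __name__ == "__main__":
    vcenter = "vcenter.example.com"   # placeholder
    state = "reachable" if port_open(vcenter) else "blocked or unreachable"
    print("Port 443 to %s: %s" % (vcenter, state))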

How to track Top Field Issues

Top 20 ESXi articles for July 2016

Here is our Top 20 ESXi articles list for July 2016. This list is ranked by the number of times a VMware Support Request was resolved by following the steps in a published Knowledge Base article.

  1. Uploading diagnostic information for VMware through the Secure FTP portal
  2. Support Contracts FAQs
  3. Commands to monitor snapshot deletion in ESXi/ESX
  4. How to purchase and file Pay Per Incident support for VMware products
  5. Uploading diagnostic information for VMware using FTP
  6. Downloading, licensing, and using VMware products
  7. Licensing VMware vCenter Site Recovery Manager
  8. Determining Network/Storage firmware and driver version in ESXi/ESX 4.x, ESXi 5.x, and ESXi 6.x
  9. ESXi 5.x with E1000e adapter fails with purple diagnostic screen
  10. Recreating a missing virtual machine disk descriptor file
  11. Using the VMware Knowledge Base
  12. Product offerings for vSphere 5.x
  13. ESXi hosts are no longer manageable after an upgrade
  14. Installing patches on an ESXi 5.x/6.x host from the command line
  15. Enabling or disabling VAAI ATS heartbeat
  16. Restarting the Management agents in ESXi
  17. Permanent Device Loss (PDL) and All-Paths-Down (APD) in vSphere 5.x and 6.x
  18. Consolidating snapshots in vSphere 5.x/6.0
  19. “maximum consolidate retries was exceeded for scsix:x” error in ESXi
  20. Build numbers and versions of VMware ESXi/ESX

Top 20 Horizon View articles for July 2016

Here is our Top 20 Horizon View articles list for July 2016. This list is ranked by the number of times a VMware Support Request was resolved by following the steps in a published Knowledge Base article.

 

  1. Restart order of the View environment to clear ADLDS (ADAM) synchronization in View 4.5, 4.6, 5.0, 5.1, 5.2, 5.3, 6.0, and 6.1
  2. Provisioning View desktops fails due to customization timeout errors
  3. Linked Clone pool creation and recomposition fails with VMware Horizon View 6.1.x and older releases
  4. Manually deleting replica virtual machines in VMware Horizon View 5.x
  5. Poor virtual machine application performance may be caused by processor power management settings
  6. Removing a standard (replica) connection server or a security server from a cluster of connection/security servers
  7. Recommended restart cycle of the VMware Horizon View environment
  8. Forcing replication between ADAM databases
  9. Confirming that the userinit string is configured properly
  10. Generating a Horizon View SSL certificate request using the Microsoft Management Console (MMC) Certificates snap-in
  11. VMware View SVGA driver reports newer version than the installed View Agent version
  12. Configuring security protocols on components to connect the View Client with desktops
  13. Error attaching to SVGADevTap, error 4000: EscapeFailed reported by PCoIP server
  14. Location of VMware View log files
  15. Using the vdmadmin command to exclude or include a domain on a search list for View Administrator or Security Server
  16. Finding and removing unused replica virtual machines in the VMware Horizon View
  17. Connecting to View Connection Server with SmartCard authentication enabled fails with the error: Smart Card or Certificate authentication is required
  18. View Persona Management features do not function when Windows Client-Side Caching is in effect
  19. Deploying or recomposing View desktops fails when the parent virtual machine has CBT enabled
  20. Performing an end-to-end backup and restore for VMware View Manager

Top 20 NSX articles for July 2016

Here is our Top 20 NSX articles list for July 2016. This list is ranked by the number of times a VMware Support Request was resolved by following the steps in a published Knowledge Base article.

  1. vCenter Server or Platform Services Controller certificate validation error for external VMware Solutions in vSphere 6.0
  2. Licensing VMware vSphere 5.5.x/6.0.x and VMware NSX for vSphere 6.x
  3. Deploying VMware NSX for vSphere 6.x through Auto Deploy
  4. vCenter Server or Platform Services Controller certificate validation error messages for external solutions in environments with an External Platform Services Controller
  5. Troubleshooting NSX Edge High Availability (HA) issues
  6. Slow VMs after upgrading VMware tools in NSX / vCloud Networking and Security
  7. ESXi host fails with purple diagnostic screen in NSX environment
  8. VMs learning the DLR pMac as the VM default gateway
  9. TCP and UDP Ports required to access VMware vCenter Server, VMware ESXi and ESX hosts, and other network components
  10. Windows virtual machines using the vShield Endpoint TDI Manager or NSX Network Introspection Driver (vnetflt.sys) driver fails with a blue diagnostic screen
  11. vCenter Server certificate validation error for external solutions in environments with Embedded Platform Services Controller
  12. Troubleshooting the NSX Manager Web Client Plug-In in NSX for vSphere 6.x
  13. The netcpa agent on an ESXi host fails to communicate with NSX controller(s) in VMware NSX for vSphere 6.x
  14. Migration of Service VM (SVM) may cause ESXi host issues in VMware NSX for vSphere 6.x
  15. ESXi 5.5.x/6.0.x host in a VMware NSX for vSphere 6.2.1 environment fails with a purple diagnostic screen and reports the backtrace: PFFilterPacket and VSIPDVFProcessSlowPathPackets
  16. Duplicate VTEPs in ESXi hosts after rebooting vCenter Server
  17. Networking & Security pages are blank in vSphere Web Client after a downgrade or backed-out upgrade of NSX Manager
  18. Unexpected TCP interruption on TCP sessions during Edge High Availability (HA) failover in VMware NSX for vSphere 6.2.x
  19. NSX Edge is unmanageable after upgrading to NSX 6.2.3
  20. Installation Status appears as Not Ready in NSX

Top 20 vCenter Server articles for July 2016

Here is our Top 20 vCenter articles list for July 2016. This list is ranked by the number of times a VMware Support Request was resolved by following the steps in a published Knowledge Base article.

  1. Uploading diagnostic information for VMware using FTP
  2. Downloading, licensing, and using VMware products
  3. Licensing VMware vCenter Site Recovery Manager
  4. Collecting diagnostic information for VMware vCenter Server 4.x, 5.x and 6.0
  5. Using the VMware Knowledge Base
  6. Best practices for upgrading to vCenter Server 6.0
  7. ESXi hosts are no longer manageable after an upgrade
  8. Permanent Device Loss (PDL) and All-Paths-Down (APD) in vSphere 5.x and 6.x
  9. Consolidating snapshots in vSphere 5.x/6.0
  10. Diagnosing an ESXi/ESX host that is disconnected or not responding in VMware vCenter Server
  11. How to unlock and reset the vCenter SSO administrator password
  12. Resetting the VMware vCenter Server 5.x Inventory Service database
  13. Correlating build numbers and versions of VMware products
  14. Back up and restore vCenter Server Appliance/vCenter Server 6.0 vPostgres database
  15. Build numbers and versions of VMware vCenter Server
  16. Re-pointing and re-registering VMware vCenter Server 5.1 / 5.5 and components
  17. “Deprecated VMFS volume(s) found on the host” error in ESXi hosts
  18. vmware-dataservice-sca and vsphere-client status change from green to yellow
  19. Investigating virtual machine file locks on ESXi/ESX
  20. VMware End User License Agreements

Top 20 vRealize Automation articles for July 2016

Here is our Top 20 vRealize Automation (vRA) articles list for July 2016. This list is ranked by the number of times a VMware Support Request was resolved by following the steps in a published Knowledge Base article.

  1. Unable to add Active Directory users or groups to vCenter Server Appliance or vRealize Automation permissions
  2. Using JXplorer to update the LDAP string for an identity source for VMware vRealize Automation 6.0.x, 6.1.x
  3. Connection between VMware vRealize Automation and embedded vRealize Orchestrator fails with the error: Cannot connect to Orchestrator server
  4. Users are unable to see some infrastructure tabs in vRA 7.0.x
  5. Deleting an endpoint in vRealize Automation fails with the error: This endpoint is being used by # compute resources and # storage paths and cannot be deleted
  6. Connecting to a resource using the Remote Console (VMRC) option in VMware vRealize Automation 6.2.1 fails with the error: Cannot establish a remote console connection
  7. VMware vRealize Orchestrator endpoint data collection fails
  8. Migrating to a new SSO or recovering from a reinstallation of SSO in VMware vRealize Automation
  9. Provisioning multiple virtual machines in VMware vRealize Automation with external workflows fail with Timeout on signal errors
  10. Deploying a virtual machine using a clone workflow in vRealize Automation 6.x fails with the error: The object has already been deleted or has not been completely created
  11. Multi-Machine Blueprint Reported as Partially Succeeded But All the Components Provisioned Correctly
  12. After upgrading the VMware vRealize Automation Identity Appliance to 6.2.3, the vmware-stsd service fails to start
  13. Log locations for VMware vRealize Automation 7.x
  14. Request to destroy a machine fails in VMware vRealize Automation 7.0.x
  15. vRealize Automation services fail when modifying the vIDM database
  16. vRealize Automation 7.X Manager Service extensibility callouts to Event Broker fails
  17. Services take a long time or fail to start in vRealize Automation High Availability environment
  18. Provisioning a machine using VMware vRealize Automation fails with the error: Error executing query usp_SelectHostReservation
  19. Installing or configuring VMware vRealize Automation fails with the error: This implementation is not part of the Windows Platform FIPS validated cryptographic algorithms
  20. Mounting a CD-ROM or ISO in vRealize Automation using vRealize Orchestrator

Top 20 vRealize Operations Manager articles for July 2016

Here is our Top 20 vRealize Operations Manager articles list for July 2016. This list is ranked by the number of times a VMware Support Request was resolved by following the steps in a published Knowledge Base article.

  1. Configure a certificate for use with vRealize Operations Manager
  2. After cancelling the selected alerts in VMware vRealize Operations Manager 6.0.x, one or more of the selected alerts remain
  3. High garbage collection occurs on the VMware vRealize Operations Manager (vROps) Master node
  4. After upgrading to VMware vRealize Operations 6.2.0a, vCenter adapters remain in Starting state
  5. Creating low disk space alerts for the virtual machine guest file systems in VMware vRealize Operations Manager 6.0.x
  6. Understanding Feature Accommodation between VMware vSphere 6.0 and vRealize Operations 5.8.x and 6.0.1 Feature Accommodation
  7. Change the IP address on a vRealize Operations Manager 6.0.x single-node deployment
  8. Manually removing a node from the VMware vRealize Operations Manager 6.x cluster
  9. Restarting VMware vRealize Operations Manager 6.0.x fails with a Waiting for Analytics message in the Admin UI
  10. VMware vRealize Operations Manager 6.x displays the critical alert: FSDB file corrupted for resourceInternalId
  11. Change the IP address of a vRealize Operations Manager 6.1 or 6.2 node in a multiple-node cluster
  12. Certificate errors and failed adapter instances in VMware vRealize Operations Manager 6.1
  13. Ensuring adequate free disk space is available on VMware vRealize Operations Manager 6.x nodes
  14. Searching for any of the migrated objects in the Inventory Explorer displays two copies
  15. Disable TLS 1.0 in vRealize Operations Manager 6.2
  16. vRealize Operations Manager 6.x fails to accept and apply Custom CA Certificate
  17. vRealize Operations Manager Sizing Guidelines
  18. vRealize Operations Manager 6.x is inaccessible, status of all nodes is Waiting for Analytics to Start
  19. Enabling SSH access in VMware vRealize Operations Manager 6.0.x
  20. Management pack compatibility with vRealize Operations 5.x and 6.0

Top 20 vSAN articles for July 2016

Here is our Top 20 vSAN articles list for July 2016. This list is ranked by the number of times a VMware Support Request was resolved by following the steps in a published Knowledge Base article.

  1. Performance Degradation of Hybrid disk groups on VSAN 6.2 Deployments
  2. vSphere 5.5 Virtual SAN requirements
  3. Requirements and considerations for the deployment of VMware Virtual SAN (VSAN)
  4. VMware Virtual SAN 6.1 fulfillment
  5. Considerations when using both Virtual SAN and non-Virtual SAN disks with the same storage controller
  6. “Host cannot communicate with all other nodes in virtual SAN enabled cluster” error
  7. Virtual SAN 6.2 on disk upgrade fails at 10%
  8. Enabling or disabling a Virtual SAN cluster
  9. Network interfaces used for Virtual SAN are not ready
  10. Adding a host back to a Virtual SAN cluster after an ESXi host rebuild
  11. Changing the multicast address used for a VMware Virtual SAN Cluster
  12. Creating or editing a virtual machine Storage Policy to correct a missing Virtual SAN (VSAN) VASA provider fails
  13. VSAN disk components are marked ABSENT after enabling CBT
  14. Cannot see or manually add VMware Virtual SAN (VSAN) Storage Providers in the VMware vSphere Web Client
  15. Creating new objects on a VMware Virtual SAN Datastore fails and reports the error: Failed to create directory VCENTER (Cannot Create File)
  16. Powering on virtual machines in VMware Virtual SAN 5.5 fails with error: Failed to create swap file
  17. Virtual SAN Health Service – Limits Health – After one additional host failure
  18. VMware recommended settings for RAID0 logical volumes on certain 6G LSI based RAID VSAN
  19. Upgrading the VMware Virtual SAN (VSAN) on-disk format version from 1 to 2
  20. Understanding Virtual SAN on-disk format versions