For more information about the issues listed below, see the NSX for vSphere 6.2.4 Release Notes.
A security vulnerability was found in the version of OpenSSL used in VMware NSX for vSphere 6.2.4. For more information, see https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-2107.
** Reminder – vShield Endpoint Update **
VMware has announced the End of Availability (EOA) and End of General Support (EOGS) of VMware vCloud Networking and Security 5.5.x. The EOGS date for VMware vCloud Networking and Security 5.5.x is September 19, 2016. For customers using vCNS Manager specifically to manage vShield Endpoint for agentless anti-virus, Technical Guidance is available until March 31, 2017. For more information, see End of Availability and End of General Support for VMware vCloud Networking and Security 5.5.x (2144733).
Consult the VMware Compatibility Guide for Endpoint partner solution certification status before upgrading. If your preferred solution is not yet certified, please contact that vendor.
Installation and Upgrade Known Issues
Issue 1728633 – Starting in NSX 6.2.3, a third VIB, esx-vdpi, is provided along with the esx-vsip and esx-vxlan NSX VIBs. A successful installation will include all three VIBs.
Issue 1730017: Upgrades from 6.2.3 to 6.2.4 do not show a version change for Guest Introspection.
- As the 6.2.3 Guest Introspection module is the latest version available, the version after a 6.2.4 upgrade remains unchanged. Note that upgrades from earlier NSX releases may show a version change to 6.2.4. This issue does not affect any functionality.
NSX 6.2.4 virtual machines lose network connectivity (2146171)
- Virtual machines lose network connectivity after vMotion under the following conditions:
- Distributed Firewall (DFW) is enabled in the environment, and
- The NSX for vSphere setup was upgraded from an NSX-V 6.1.x release to an NSX-V 6.2.3a/6.2.3b/6.2.4 release, and virtual machines are later migrated between upgraded ESXi hosts.
- For the workaround, see KB 2146171.
NSX Manager Known Issues
Fixed issue 1489648: NSX is unavailable from the vSphere Web Client Plug-in after taking a backup of NSX Manager with quiesced snapshot
- Note the following important points about NSX backup and restore:
- Backup/restore functionality provided by NSX is the only supported way to backup/restore the NSX Manager.
- Taking a snapshot of the NSX Manager with vSphere is a supported operation. However, VMware does not currently test or support any third-party tool that takes a snapshot of the NSX Manager.
- Restoring the NSX Manager from snapshot (taken in any way) is not supported.
See also Issue 1708769 and Increased latency on SVM (Service VM) after snapshot in NSX (2146769). There is no need to snapshot an SVM as it does not need to move or be replicated.
Security Services Known Issues
Issue 1718726: Cannot force-sync Service Composer after a user has manually deleted the Service Composer’s policy section using DFW REST API
- In a cross-vCenter NSX environment, a user’s attempt to force sync NSX Service Composer configuration will fail if there was only one policy section and that policy section (the Service Composer-managed policy section) was deleted earlier via a REST API call.
- Workaround: Do not delete the Service Composer-managed policy section via a REST API call. (Note that the UI already prevents deletion of this section).
Issue 1707931: Order of distributed firewall rules changes when service policies defined in Service Composer are present, and a firewall rule is modified or published with a filter applied in the Firewall UI
- Changing the order of, adding, or deleting service policies created in Service Composer after one or more publish operations have been made from the Networking & Security > Firewall UI will cause the order of firewall rules to change and may have unintended consequences.
Issue 1717635: Firewall configuration operation fails if more than one cluster is present in environment and changes are done in parallel
- In an environment with multiple clusters, if two or more users modify the firewall configuration continuously in a tight loop (for example, adding or deleting sections or rules), some operations fail.
Issue 1732337/1724222: NSX Manager fails to push firewall rules to ESXi 6.0 P03 host
- NSX Manager fails to push firewall rules to an ESXi 6.0 P03 host, and the NSX Edge health check fails because the vsfwd connection is closed. This is a known issue affecting VMware NSX for vSphere 6.2.x with ESXi 6.0 P03 (Build 4192238). This issue occurs when a call to /dev/random blocks, which affects password generation during NSX operations.
- Workaround: Contact VMware technical support. For more information, see vsfwd connection to the NSX Manager fails (2146873).
Issue 1620460: NSX fails to prevent users from creating rules in Service Composer rules section
- In the vSphere Web Client, the Networking and Security: Firewall interface fails to prevent users from adding rules to the Service Composer rules section. Users should be permitted to add rules above/below the Service Composer section, but not inside it.
- Workaround: Do not use the “+” button at the global rule level to add rules to the Service Composer rules section.
Issue 1682552: Threshold events for CPU/Memory/CPS for Distributed Firewall (DFW) are not reported
- Even when the DFW thresholds for CPU/Memory/CPS are set for reporting, the threshold events are not reported when the thresholds are crossed.
- Log in to each ESXi host and restart the DFW control plane process by running the following command:
/etc/init.d/vShield-Stateful-Firewall restart
- Verify the status using the following command:
/etc/init.d/vShield-Stateful-Firewall status
- A result similar to the following is displayed:
“vShield-Stateful-Firewall is running”
Note: Use caution when performing this operation, as it pushes all DFW rules to all the filters again. If there are a lot of rules, it might take some time to enforce them on all the filters.
Logical Networking Known Issues and NSX Edge Known Issues
Issue 1704540 – High volume of MAC learning table updates with NSX L2 bridge and LACP may lead to out of memory condition
- When an NSX L2 bridge sees a MAC address on a different uplink, it reports a MAC learning table change to controllers through the netcpa process. Networking environments with LACP will learn the same MAC address on multiple interfaces, resulting in a very high volume of table updates and potentially exhausting the memory needed by the netcpa process to do the reporting.
- Workaround – Avoid setting a flow-based hashing algorithm on the physical switch when using LACP. Instead, pin MAC addresses to the same uplinks or change the policy to source-MAC.
Issue 1717369 – When configured in HA mode, both active and standby Edge VMs may be deployed on the same host.
- This issue results from anti-affinity rules not being created and applied on the vSphere hosts automatically during redeploy and upgrade operations. This issue is not seen when HA is enabled on an existing Edge.
- In NSX releases with a fix for this issue, the following is the expected behavior:
- When vSphere HA is enabled, anti-affinity rules for the Edge VMs of an HA pair will be created during redeploy and upgrade operations.
- When vSphere HA is disabled, anti-affinity rules for Edge VMs of an HA pair will not be created.
Issue 1716545 – Changing appliance size of Edge does not affect standby Edge’s CPU and Memory reservation
- Only the first Edge VM created as part of an HA pair is assigned the reservation settings.
- Workaround: To configure the same CPU/Memory reservation on both Edge VMs:
1) Use the PUT API https://<NSXManager>/api/4.0/edgePublish/tuningConfiguration to set explicit values for both Edge VMs.
2) Disable and re-enable Edge HA, which will delete the second Edge VM and redeploy a new one with the default reservations.
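As a sketch, step 1 of the workaround can be expressed as a PUT request built against the documented edgePublish endpoint. The XML element names and the NSX Manager address below are assumptions for illustration; verify the exact schema against the NSX for vSphere API guide.

```python
# Sketch of the tuningConfiguration PUT call from the workaround above.
# Element names (edgeVCpuReservationPercentage, edgeMemoryReservationPercentage)
# are assumptions; check the NSX for vSphere API guide for the exact schema.

NSX_MANAGER = "nsxmgr.example.com"  # hypothetical NSX Manager address

def build_tuning_request(cpu_pct, mem_pct):
    """Return the URL and XML body for the edgePublish tuning PUT."""
    url = "https://%s/api/4.0/edgePublish/tuningConfiguration" % NSX_MANAGER
    body = (
        "<tuningConfiguration>"
        "<edgeVCpuReservationPercentage>%d</edgeVCpuReservationPercentage>"
        "<edgeMemoryReservationPercentage>%d</edgeMemoryReservationPercentage>"
        "</tuningConfiguration>" % (cpu_pct, mem_pct)
    )
    return url, body

url, body = build_tuning_request(100, 100)
# The request itself would then be sent with an HTTP client, for example:
# requests.put(url, data=body, auth=(user, password),
#              headers={"Content-Type": "application/xml"})
```

Because the tuning configuration is global, the same reservation values apply to both Edge VMs of the HA pair after the next publish.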
Issue 1510724: Default routes do not populate on hosts after creating a new Universal Distributed Logical Router (UDLR)
- After changing NSX Manager from Standalone to Primary mode for the purpose of configuring Cross-vCenter in NSX for vSphere 6.2.x, you may experience these symptoms:
- When you create a new UDLR, the default routes are not populated on the host instance.
- Routes are populated on the UDLR Control VM but not on the host instance.
- Running the show logical-router host host-ID dlr Edge-ID route command fails to show default routes.
- Workaround: For information on how to recover from this issue, see Default routes do not populate on the hosts after creating a new UDLR (2145959).
Issue 1733146 – Under certain conditions, creating or modifying LIFs for a Universal DLR fails when no control VM exists
- This issue is known to manifest under the following conditions:
- ECMP with two static default routes.
- Static routes with local egress flag.
- This issue results from a full synchronization being requested instead of a delta update, resulting in the rejection of duplicate entities and a failed operation.
- See the release notes for a workaround.
NSX Edge Load Balancer accepts only approved ciphers as of 6.2.3.
- In earlier releases, customer-defined ciphers are supported for ClientSSL and ServerSSL.
- NSX 6.2.3 introduced an approved cipher list.
- Note the following expected behaviors:
- The cipher value will be reset to “DEFAULT” if the cipher is null, empty, or not in the approved cipher suite.
- Ciphers included in the approved cipher suite are passed to the Edge.
- When upgrading from a pre-6.2.3 release, a cipher value that is null, empty, or not in the approved cipher suite will be reset to “DEFAULT”.
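The reset behavior described above can be sketched as a small validation routine. The approved list shown here is a placeholder, not the actual NSX 6.2.3 approved cipher list:

```python
# Sketch of the cipher-reset behavior described above.
# APPROVED_CIPHERS is a placeholder set; the real NSX 6.2.3 approved list
# is documented in the NSX release notes.
APPROVED_CIPHERS = {
    "ECDHE-RSA-AES256-GCM-SHA384",
    "AES256-SHA",
}

def normalize_cipher(cipher):
    """Return the cipher passed to the Edge, or "DEFAULT" if not approved."""
    if not cipher or cipher not in APPROVED_CIPHERS:
        return "DEFAULT"
    return cipher
```

For example, normalize_cipher(None) and normalize_cipher("UNAPPROVED-CIPHER") both yield "DEFAULT", matching the upgrade behavior described above.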
NSX Controller Issues
Data path issues for VNIs with disconnected NSX Controller (2146973)
- Symptoms – NSX controller shows as disconnected in the vSphere Web Client, leading to data path issues for VNIs handled by the disconnected controller.
- This issue occurs because IPSec re-keying is disabled in the NSX-V 6.1.5, 6.1.6, 6.2, 6.2.1, and 6.2.2 releases to avoid hitting another known IPSec issue.
NSX API now returns XML output by default when Accept header is not provided
Beginning in NSX 6.2.3, if the “Accept:” header is not provided in a REST API call, the NSX API returns XML-formatted output by default. Previously, the NSX API returned JSON-formatted output by default. To receive JSON-formatted output, the API user must explicitly set “application/json” in the “Accept:” header of the request.
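For example, a caller who wants JSON output would set the header explicitly. This sketch only builds the request (the NSX Manager address is a hypothetical placeholder); any HTTP client can then send it:

```python
# Requesting JSON output from the NSX API (NSX 6.2.3 and later returns XML
# when no Accept header is supplied).
NSX_MANAGER = "nsxmgr.example.com"  # hypothetical NSX Manager address

def build_get(path, want_json=True):
    """Return the URL and headers for an NSX API GET request."""
    headers = {}
    if want_json:
        headers["Accept"] = "application/json"  # omit to receive XML output
    return "https://%s%s" % (NSX_MANAGER, path), headers
```

Omitting the header (want_json=False) reproduces the post-6.2.3 default of XML-formatted output.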
** How to track the top field issues **