Product Announcements

vCloud Suite – vSphere Storage Appliance (VSA) 5.1 Considerations for Successful Brownfield Deployments – Part I

The concept of vSphere Storage Appliance Brownfield deployments centers on introducing the vSphere Storage Appliance 5.1 (VSA) solution into environments with existing virtualized infrastructures. The goal is to utilize the existing hardware and deploy the vSphere Storage Appliance 5.1 solution onto that same virtualized infrastructure, hence the term “Brownfield” deployments.

A major requirement of the previous vSphere Storage Appliance (VSA) 1.0 solution was a brand-new deployment of the virtual infrastructure components, meaning a fresh build or rebuild of all of the dependent vSphere components (ESXi 5.0, vCenter Server, Update Manager, etc.), hence the term “Greenfield” deployments.

The implications of having to rebuild any production environment can be severe from a management and operations standpoint. Being able to avoid those implications when introducing a new solution such as the vSphere Storage Appliance (VSA) 5.1 is of great value and convenience.

vSphere Infrastructure Scenarios Compatible with VSA 5.1

During vSphere Storage Appliance 5.1 deployments, the VSA Installer performs a series of validation checks before starting the installation and configuration procedures in order to verify that all of the vSphere Storage Appliance (VSA) 5.1 requirements are met.

Because a functional vSphere infrastructure is already in production, it is reasonable to assume that a working configuration (ESXi hosts, networks, storage, etc.) has been implemented. However, the vSphere Storage Appliance 5.1 has a few specific configuration requirements that are applied automatically in Greenfield deployments but are not applied automatically in Brownfield deployments.

Consider the following recommendations for vSphere Storage Appliance (VSA) 5.1 Brownfield deployments:

ESXi Host VMFS Heap Size – Increase the VMFS Heap Size to its supported maximum of 256 MB. This allows the ESXi hosts to address a larger amount of open virtual disk (VMDK) capacity locally. Read the article VMFS Heap Size Consideration for more in-depth details on this topic. Failing to modify the VMFS Heap Size setting prior to the installation and configuration process of the vSphere Storage Appliance (VSA) 5.1 will result in the ESXi hosts being rebooted.
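
The heap size is controlled by the advanced host setting VMFS3.MaxHeapSizeMB, which can be changed per host in the vSphere Client or scripted ahead of the VSA installation. Below is a minimal sketch using pyVmomi (the vSphere Python SDK); the vCenter hostname and credentials are placeholders, and connection arguments can vary slightly between pyVmomi versions.

```python
# Minimal pyVmomi sketch: raise VMFS3.MaxHeapSizeMB to 256 on every ESXi host.
# The vCenter hostname and credentials are placeholders for your environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()              # lab use only
si = SmartConnect(host='vcenter.example.local', user='administrator',
                  pwd='password', sslContext=context)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
for host in view.view:
    opt_mgr = host.configManager.advancedOption
    # 256 MB is the supported maximum for the VMFS heap in ESXi 5.x.
    opt_mgr.UpdateOptions(changedValue=[
        vim.option.OptionValue(key='VMFS3.MaxHeapSizeMB', value=256)])
    print('Updated VMFS heap size on %s (host reboot needed to take effect)'
          % host.name)

Disconnect(si)
```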

Enhanced vMotion Compatibility (EVC) – An EVC baseline is configured in order to maintain and guarantee vMotion CPU compatibility within the VSA Cluster. In Brownfield deployments, virtual machines such as vCenter Server and others are present and running on the ESXi hosts that will be used by the vSphere Storage Appliance (VSA) 5.1. Because of this, the running virtual machines are already using the ESXi hosts’ CPU features. The default EVC setting for the VSA Manager assumes a Greenfield deployment, which means that there are no running virtual machines on any of the ESXi hosts to be used as members of the VSA HA Cluster.

In Brownfield deployments, you are given two options:

  • to power off all the virtual machines
  • to change the EVC baseline manually to the highest setting in the dev.properties file.

Applying the recommended setting guarantees that the lowest common denominator of EVC baselines is used, which in this case is the highest possible. The dev.properties file is located on the system where vCenter Server is installed, under C:\Program Files\VMware\Infrastructure\tomcat\webapps\VSAManager\WEB-INF\classes.

Failing to change the dev.properties settings will result in a configuration error from the VSA Installer similar to the one illustrated below, which will prevent the configuration process from completing.

NOTE: Do not modify any other setting within this file as doing so may cause problems with the behavior and stability of the vSphere Storage Appliance (VSA) 5.1 solution and may not be supported.
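
For convenience, the change can be scripted along the lines of the sketch below. Note that the property key evc.baseline and the value shown are purely illustrative placeholders (consult the VSA 5.1 documentation for the exact key and supported values), and the file should be backed up before any change.

```python
# Illustrative sketch only: flip one key in the VSA Manager dev.properties file.
# The key 'evc.baseline' and the value below are hypothetical placeholders; use
# the exact key/value documented for VSA 5.1. The file is backed up first.
from pathlib import Path

PROPS = Path(r'C:\Program Files\VMware\Infrastructure\tomcat\webapps'
             r'\VSAManager\WEB-INF\classes\dev.properties')
KEY, NEW_VALUE = 'evc.baseline', 'intel-sandybridge'         # placeholders only

original = PROPS.read_text()
PROPS.with_name(PROPS.name + '.bak').write_text(original)    # keep a backup

updated = []
for line in original.splitlines():
    if line.strip().startswith(KEY + '='):
        updated.append('%s=%s' % (KEY, NEW_VALUE))           # replace the value
    else:
        updated.append(line)

PROPS.write_text('\n'.join(updated) + '\n')
print('Updated %s in %s' % (KEY, PROPS))
```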

Network Configurations – The VSA network configuration requirements are very strict, and there is almost no flexibility in what the validation checks look for. The checks are based on very specific naming conventions and object configuration settings. All ESXi hosts that will be members of a VSA Cluster should have the following:

  • A minimum of one vSphere Standard Switch (vSwitch)
  • Five Port Groups:
    • VSA-Front End – (virtual machine port group)
    • VSA-Back End – (virtual machine port group)
    • VSA-VMotion  – (vmkernel port group)
    • VM Network – (virtual machine port group)
    • Management Network – (vmkernel port group)

If the port groups are not created with the exact names and case listed above, the VSA Installer will produce errors and will not allow you to continue with the installation. The image below illustrates the errors produced by the VSA Installer for each port group check as part of the validation of required configurations.

Five VSA required Port Groups in a two vSwitch per Host Configuration Scenario
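
Where the required port groups still need to be created ahead of the installation, a sketch such as the following can help. It assumes pyVmomi, a host object obtained as in the earlier heap size sketch, and a standard vSwitch named vSwitch0, and it creates only the port group names the installer validates; the VMkernel interfaces for VSA-VMotion and Management Network still have to be attached separately.

```python
# Sketch: create the five port group names the VSA 5.1 installer validates.
# 'host' is a pyVmomi vim.HostSystem (connected as in the heap size sketch);
# the vSwitch name 'vSwitch0' is an assumption.
from pyVmomi import vim

REQUIRED_PORT_GROUPS = [
    'VSA-Front End',        # virtual machine port group
    'VSA-Back End',         # virtual machine port group
    'VSA-VMotion',          # VMkernel port group (vmknic attached separately)
    'VM Network',           # virtual machine port group
    'Management Network',   # VMkernel port group (vmknic attached separately)
]

def ensure_vsa_port_groups(host, vswitch='vSwitch0'):
    """Create any of the required port groups that do not already exist."""
    net_sys = host.configManager.networkSystem
    existing = {pg.spec.name for pg in net_sys.networkInfo.portgroup}
    for name in REQUIRED_PORT_GROUPS:
        if name in existing:
            continue                      # name already present, leave it alone
        spec = vim.host.PortGroup.Specification(
            name=name, vlanId=0, vswitchName=vswitch,
            policy=vim.host.NetworkPolicy())
        net_sys.AddPortGroup(portgrp=spec)
        print('Created port group "%s" on %s' % (name, host.name))
```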

Each port group should be configured with NIC teaming for high availability, using Active/Standby failover policies similar to the ones illustrated below:

NIC Team Policies

  • When a specific NIC is configured as Active for the Management and VM Network port groups, that same NIC should not be configured as Active for the VSA-Front End port group. A secondary NIC should be configured as Active for the VSA-Front End port group, and the NIC configured as Active for the Management and VM Network port groups should be configured as Standby.
  • When a specific NIC is configured as Active for the VSA-Back End port group, that same NIC should not be configured as Active for the VSA-Front End port group. A secondary NIC should be configured as Active for the VSA-VMotion port group, and the NIC configured as Active for the Management and VM Network port groups should be configured as Standby.

Failing to apply the listed configuration settings will result in unsuccessful configuration attempts, as the VSA Installer will prevent completion.
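
As an illustration of what an explicit Active/Standby failover order looks like programmatically, the sketch below rewrites a port group's NIC order with pyVmomi. The vmnic names, the vSwitch name, and the pairing shown in the commented usage are assumptions and must be mapped to your own teaming design.

```python
# Sketch: apply an explicit Active/Standby failover order to a port group.
# 'host' is a pyVmomi vim.HostSystem; vmnic names, vSwitch name and the pairing
# in the commented usage below are illustrative assumptions only.
from pyVmomi import vim

def set_failover_order(host, pg_name, active, standby,
                       vswitch='vSwitch0', vlan=0):
    """Rewrite the port group spec with an explicit NIC failover order."""
    net_sys = host.configManager.networkSystem
    teaming = vim.host.NetworkPolicy.NicTeamingPolicy(
        policy='failover_explicit',
        nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
            activeNic=active, standbyNic=standby))
    spec = vim.host.PortGroup.Specification(
        name=pg_name, vlanId=vlan, vswitchName=vswitch,
        policy=vim.host.NetworkPolicy(nicTeaming=teaming))
    net_sys.UpdatePortGroup(pgName=pg_name, portgrp=spec)

# Example pairing (assumption): vmnic0 active for Management/VM Network,
# vmnic1 active for VSA-Front End, each with the other NIC as standby.
# set_failover_order(host, 'Management Network', ['vmnic0'], ['vmnic1'])
# set_failover_order(host, 'VM Network',         ['vmnic0'], ['vmnic1'])
# set_failover_order(host, 'VSA-Front End',      ['vmnic1'], ['vmnic0'])
```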

In order to protect the vSphere Storage Appliance (VSA) 5.1 against network-related performance and security issues, such as Ethernet broadcast storms and malicious capturing and parsing of Ethernet frames, it is recommended to isolate the traffic between the VSA-Front End and VSA-Back End networks. If jumbo frames are supported by the physical network switches in use, consider using jumbo frames for the VMkernel interfaces (vMotion and IP-based storage).
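
If jumbo frames are adopted, the MTU has to be raised on both the vSwitch and the VMkernel interfaces. The sketch below assumes pyVmomi, a vSwitch named vSwitch0, and a VMkernel interface named vmk1, all of which are placeholders.

```python
# Sketch: enable jumbo frames (MTU 9000) on a standard vSwitch and a VMkernel
# interface. 'host' is a pyVmomi vim.HostSystem; 'vSwitch0' and 'vmk1' are
# placeholder names for the vMotion / IP-storage path.
def enable_jumbo_frames(host, vswitch_name='vSwitch0', vmk_device='vmk1'):
    net_sys = host.configManager.networkSystem

    # Reuse the existing vSwitch specification and change only its MTU.
    for vswitch in net_sys.networkInfo.vswitch:
        if vswitch.name == vswitch_name:
            spec = vswitch.spec
            spec.mtu = 9000
            net_sys.UpdateVirtualSwitch(vswitchName=vswitch_name, spec=spec)

    # Raise the MTU on the matching VMkernel interface as well.
    for vnic in net_sys.networkInfo.vnic:
        if vnic.device == vmk_device:
            spec = vnic.spec
            spec.mtu = 9000
            net_sys.UpdateVirtualNic(device=vmk_device, nic=spec)
```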

Isolating the VSA networks can be achieved with the use of VLANs, and the VLAN IDs can be provided in the VSA Installer configuration wizard. The use of VLANs is not a vSphere Storage Appliance (VSA) 5.1 requirement. The VSA Cluster network must have at least one dedicated Ethernet switch that supports the IEEE 802.1Q VLAN tagging standard.

IP Address Requirements – The VSA Cluster network requires a number of static IP addresses; depending on the number of ESXi hosts used for the VSA Cluster and whether DHCP is available on the vSphere network, the number of static IP addresses changes. All ESXi hosts that are members of the VSA Cluster, including the VSA Cluster Service (VSACS) for two-node VSA Cluster configurations, need to be in the same subnet.

Number of static IP addresses in the same subnet:

  • Two Node Cluster without DHCP – 11 IPs
  • Two Node Cluster with DHCP – 9 IPs
  • Three Node Cluster without DHCP – 14 IPs
  • Three Node Cluster with DHCP – 11 IPs

Number of IP addresses in a private segment for the VSA-Back End network:

  • Two Node Cluster without DHCP – 2 IPs
  • Two Node Cluster with DHCP – 2 IPs
  • Three Node Cluster without DHCP – 3 IPs
  • Three Node Cluster with DHCP – 3 IPs
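
For planning purposes, these counts can be captured in a small helper sketch, for example:

```python
# Static/private IP addresses needed for a VSA 5.1 cluster, per the lists above.
STATIC_IPS  = {(2, False): 11, (2, True): 9, (3, False): 14, (3, True): 11}
BACKEND_IPS = {(2, False): 2,  (2, True): 2, (3, False): 3,  (3, True): 3}

def vsa_ip_requirements(nodes, dhcp_available):
    """Return (static IPs in the management subnet, back-end private IPs)."""
    key = (nodes, dhcp_available)
    if key not in STATIC_IPS:
        raise ValueError('VSA 5.1 clusters use two or three nodes')
    return STATIC_IPS[key], BACKEND_IPS[key]

# Example: a three-node cluster without DHCP needs 14 static IPs plus 3
# private back-end IPs.
print(vsa_ip_requirements(3, dhcp_available=False))   # -> (14, 3)
```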

In Part II, I will cover the process of VSA storage allocation and the migration of virtual machines onto the newly available VSA shared storage, as well as increasing the shared storage capacity.

Get notified of these blog postings and more VMware storage information by following me on Twitter: @PunchingClouds