
VMware’s vSphere Big Data Extensions (BDE) achieves Hortonworks Operations Ready Certification

 

Hortonworks announced on December 17, 2014 that VMware’s Big Data Extensions tool for Hadoop on virtual machines is now both HDP Certified and Operations Ready. HDP is the Hortonworks Data Platform – an open Hadoop platform that is centered on YARN. The Operations Ready designation is a new certification introduced by Hortonworks to focus attention on tools that integrate in an approved way with Apache Ambari by making use of the open Ambari management application programming interfaces. The focus of the program is to certify operational tools for managing a Hadoop/HDP cluster. The Operations Ready program also provides assurance to enterprises adopting Hadoop that the tools they select to run and interact with Hadoop have been tested and validated to work correctly. At VMware we are excited to earn this additional level of certification for BDE and we look forward to continued engineering collaboration with Hortonworks.

Here is the description of the new Operations Ready program from Hortonworks:

http://hortonworks.com/partners/certified-technology-program/ops-ready/

By now you have probably also seen the recent VMware Big Data Extensions 2.1 announcements. Here is a quick summary of the new features in 2.1:

http://blogs.vmware.com/vsphere/2014/10/whats-new-vsphere-big-data-extensions-version-2-1.html

BDE 2.1 became Generally Available in October 2014. One of the central new features in BDE 2.1 is better integration with the de facto Hadoop management tools from the distribution vendors. Chief among those tools is Ambari. This integration with Ambari was the result of a request made to us directly by the VMware BDE user community.

BDE 2.1, with the new application manager construct, can now use the Ambari APIs under the covers to provision the HDP software into the virtual machines that it has created through cloning its template virtual machine. This method of deploying everything through BDE ensures that the resulting new Hadoop cluster is entirely compatible with Ambari. That is important because many of our users would like to use Ambari and VMware vCenter together from the point at which a cluster is provisioned onwards.

  • Ambari is the management tool of choice among HDP users in order to gain insight into what is going on at runtime at the Hadoop level (e.g. checking the status of HDFS, YARN, MapReduce and other services) and to make service changes there.
  • VMware vCenter is the virtualization infrastructure management tool that is in use at tens of thousands of VMware’s customers to view system behavior and performance at the virtual infrastructure level (virtual machines, physical machines, consumed resources and performance data). vCenter with the BDE plug-in is in popular use for deploying user Hadoop clusters today at many enterprises.

The BDE plug-in uses the vCenter APIs as well as the Ambari Blueprint APIs. Combining the two tools together to collaborate on the Hadoop provisioning details simplifies the management of your virtualized Hadoop cluster significantly. Both the Hadoop application architect and the virtualization manager can converse about the components of the HDP cluster and their effect on hardware consumption.
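BDE drives the Ambari Blueprint calls for you under the covers, but if you are curious what those APIs look like, here is a minimal, illustrative sketch of calling them directly from PowerShell. The Ambari server address, credentials, blueprint name, and JSON file below are hypothetical placeholders, and this is not the code path BDE itself uses:

# Hypothetical Ambari server and default credentials, for illustration only
$ambari  = "http://ambari.example.com:8080/api/v1"
$auth    = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("admin:admin"))
$headers = @{
    "Authorization"  = "Basic $auth";
    "X-Requested-By" = "ambari"     # Ambari requires this header on modifying calls
}

# List the blueprints currently registered with Ambari
Invoke-RestMethod -Uri "$ambari/blueprints" -Headers $headers -Method Get

# Register a new blueprint from a JSON definition (hypothetical file name)
$blueprint = Get-Content -Raw -Path ".\hdp-cluster-blueprint.json"
Invoke-RestMethod -Uri "$ambari/blueprints/hdp-cluster" -Headers $headers -Method Post -Body $blueprint -ContentType "application/json"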

Hortonworks’ new Operations Ready program is one of a set of certifications that are currently available from the company. Other certifications available are the YARN Ready, Security Ready and Governance Ready programs. You can read more about the new programs here:  http://hortonworks.com/blog/accelerating-adoption-enterprise-hadoop

You can find the full BDE Administrator’s and User’s Guide and the BDE Command Line Interface Guide, as well as the Release Notes, at: https://www.vmware.com/support/pubs/vsphere-big-data-extensions-pubs.html

 

vSphere Data Protection (VDP) – Removing an External Proxy

It is not particularly clear how to remove a vSphere Data Protection (VDP) external proxy in the vSphere Data Protection (VDP) 5.8 Administration Guide. Before I get into that specifically, I should probably start with what a VDP external proxy is and how it is deployed. The external proxy functionality is currently only available with the Advanced edition of VDP. External proxies are virtual appliances that are typically deployed to locations where the VDP appliance does not have direct access to storage, e.g., another cluster or perhaps even another site such as a branch office or remote office. This reduces the amount of network bandwidth required to transmit backup data across the network. An external proxy will utilize SCSI HotAdd to attach the protected VM’s disk(s) to an external proxy during a backup job. The external proxy will first query the VDP appliance to see if the backup data segment already exists in the VDP appliance’s backup data repository – either in the VDP appliance (GSAN) or on the Data Domain appliance, if Data Domain is being used to store VDP backup data. If the segment does exist, the external proxy will not send it again across the network. Without the external proxy, VDP would have to use the Network Block Device (NBD) protocol to back up remote VMs. In this scenario, all changes to the protected VMs would be sent across the network to the VDP appliance and deduplication would happen within the VDP appliance or on the Data Domain appliance.

Continue reading

Operationalizing VMware Virtual SAN: Automating vCenter Alarm Configuration Using PowerCLI


Welcome to the next installment in our Operationalizing VMware Virtual SAN series. In our previous article we detailed “How to configure vCenter alarms for Virtual SAN”. In today’s article we will demonstrate how to automate that configuration workflow leveraging PowerCLI.

(Many thanks to VMware genius Alan Renouf (@alanrenouf) for his contributions to this topic) [Joe Cook: @CloudAnimal]

The PowerCLI code required to automate the configuration of vCenter Alarms for Virtual SAN is fairly straightforward.

1. Connect to vCenter

Connect-VIServer -Server 192.168.100.1 -User Administrator@vsphere.local -Password vmware

2. Define the Virtual SAN cluster where you would like the alarms to be created

$Cluster = "Cluster Site A"

3. Next we create a hash table with the desired VMware ESXi Observation IDs (VOB IDs) for Virtual SAN and include a description for each VOB ID.

If you are not used to programming, the concept of arrays and hash tables may be a bit confusing. Using variables is generally much easier to understand. One way of understanding variables is to think of them simply as a short amount of text used to represent a larger amount of text in your program or script ($x=”larger amount of text”). Instead of typing “larger amount of text” continually, you can simply type $x and the language interpreter (in our case PowerShell), will substitute the string “larger amount of text” wherever it finds $x in your script. Variables can be used to greatly reduce the amount of code you have to type, make your scripts much easier to read, and have many other uses as well.

If we think of variables as ways to store one value to reference, we can think of arrays as a way to store multiple values to reference. In our example today, we would have to create at least 32 variables to perform the same work that we can with one hash table.

A hash table is a type of array that is also known as a dictionary. It is a collection of name-value pairs (e.g. “name”=”value”). Here is an example of a basic hash table:

$HashTableName = @{
VOB_ID_A="VOB Description";
VOB_ID_B="VOB Description";
VOB_ID_C="VOB Description";
}

Here is a breakdown of the components used to create a hash table:

  • $HashTableName =  –  Replace “HashTableName” with the text you wish to use to reference this list of key-value pairs.
  • @{  –  Indicates the start of the hash table or array.
  • VOB_ID_A=”VOB Description”;  –  A key-value pair stored within the hash table. VOB_ID_A will be the VOB ID from the VMware ESXi Observation Log (VOBD) (e.g. “esx.audit.vsan.clustering.enabled”). “VOB Description” will be the description of the associated VOB ID (e.g. “Virtual SAN clustering service had been enabled”). Make sure to use quotation marks whenever spaces are used and to separate each key-value pair with a semicolon (;). Examine /var/log/vobd.log on your vSphere host to obtain possible VOB IDs. See here for a list of VMware ESXi Observation IDs for Virtual SAN.
  • }  –  Indicates the end of the hash table or array.

Here is an example of a hash table with a single key-value pair representing a single vCenter Alarm for Virtual SAN:

$VSANAlerts = @{
"esx.audit.vsan.clustering.enabled" = "Virtual SAN clustering service had been enabled";
}

Below is the actual hash table that we will use in our example Virtual SAN Alarm Configuration script. It is fully populated with all of the recommended VOB IDs for Virtual SAN along with the description for each. We have labeled this hash table as “$VSANAlerts”. You will see $VSANAlerts referenced further along in the script as we reference the items within our hash table.

$VSANAlerts = @{
 "esx.audit.vsan.clustering.enabled" = "Virtual SAN clustering service had been enabled";
 "esx.clear.vob.vsan.pdl.online" = "Virtual SAN device has come online.";
 "esx.clear.vsan.clustering.enabled" = "Virtual SAN clustering services have now been enabled.";
 "esx.clear.vsan.vsan.network.available" = "Virtual SAN now has at least one active network configuration.";
 "esx.clear.vsan.vsan.vmknic.ready" = "A previously reported vmknic now has a valid IP.";
 "esx.problem.vob.vsan.lsom.componentthreshold" = "Virtual SAN Node: Near node component count limit.";
 "esx.problem.vob.vsan.lsom.diskerror" = "Virtual SAN device is under permanent error.";
 "esx.problem.vob.vsan.lsom.diskgrouplimit" = "Failed to create a new disk group.";
 "esx.problem.vob.vsan.lsom.disklimit" = "Failed to add disk to disk group.";
 "esx.problem.vob.vsan.pdl.offline" = "Virtual SAN device has gone offline.";
 "esx.problem.vsan.clustering.disabled" = "Virtual SAN clustering services have been disabled.";
 "esx.problem.vsan.lsom.congestionthreshold" = "Virtual SAN device Memory/SSD congestion has changed.";
 "esx.problem.vsan.net.not.ready" = "A vmknic added to Virtual SAN network config doesn't have valid IP.";
 "esx.problem.vsan.net.redundancy.lost" = "Virtual SAN doesn't have any redundancy in its network configuration.";
 "esx.problem.vsan.net.redundancy.reduced" = "Virtual SAN is operating on reduced network redundancy.";
 "esx.problem.vsan.no.network.connectivity" = "Virtual SAN doesn't have any networking configuration for use."
 }

(For more information on working with PowerShell hash tables, see this handy Microsoft TechNet article)
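To preview how the script walks this hash table in the next step, here is a small snippet that enumerates every key in $VSANAlerts and looks up the matching description:

$VSANAlerts.Keys | Foreach {
 $vobId = $_                              # e.g. "esx.audit.vsan.clustering.enabled"
 $description = $VSANAlerts.Get_Item($_)  # the matching alarm description
 Write-Host "$vobId --> $description"
 }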

4. Next we use the Get-View cmdlet to retrieve the vCenter Alarm Manager and the cluster object, then loop through each VOB ID listed in step 3.

The Get-View cmdlet returns the vSphere inventory objects (VIObject) that correspond to the specified search criteria.

$alarmMgr = Get-View AlarmManager
 $entity = Get-Cluster $Cluster | Get-View
 $VSANAlerts.Keys | Foreach {
 $Name = $VSANAlerts.Get_Item($_)
 $Value = $_

5. Create the vCenter Alarm specification object

 $alarm = New-Object VMware.Vim.AlarmSpec
 $alarm.Name = $Name
 $alarm.Description = $Name
 $alarm.Enabled = $TRUE
 $expression = New-Object VMware.Vim.EventAlarmExpression
 $expression.EventType = "EventEx"
 $expression.eventTypeId = $Value
 $expression.objectType = "HostSystem"
 $expression.status = "red"
 $alarm.expression = New-Object VMware.Vim.OrAlarmExpression
 $alarm.expression.expression += $expression
 $alarm.setting = New-Object VMware.Vim.AlarmSetting
 $alarm.setting.reportingFrequency = 0
 $alarm.setting.toleranceRange = 0

6. Create the vCenter Alarm in vCenter

 Write-Host "Creating Alarm on $Cluster for $Name"
 $CreatedAlarm = $alarmMgr.CreateAlarm($entity.MoRef, $alarm)
 }
 Write-Host "All Alarms Added to $Cluster"
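For convenience, here is the complete workflow from steps 1 through 6 stitched together into a single script. The vCenter address, credentials, and cluster name are the same placeholders used above, and the hash table is abbreviated here – populate it with the full list from step 3:

# 1. Connect to vCenter (placeholder address and credentials)
Connect-VIServer -Server 192.168.100.1 -User Administrator@vsphere.local -Password vmware

# 2. The Virtual SAN cluster where the alarms will be created
$Cluster = "Cluster Site A"

# 3. VOB IDs and descriptions (abbreviated - use the full hash table from step 3)
$VSANAlerts = @{
 "esx.audit.vsan.clustering.enabled" = "Virtual SAN clustering service had been enabled";
 "esx.problem.vsan.clustering.disabled" = "Virtual SAN clustering services have been disabled."
 }

# 4. Retrieve the AlarmManager and the cluster object, then loop over every VOB ID
$alarmMgr = Get-View AlarmManager
$entity = Get-Cluster $Cluster | Get-View
$VSANAlerts.Keys | Foreach {
 $Name = $VSANAlerts.Get_Item($_)
 $Value = $_

 # 5. Build the vCenter Alarm specification
 $alarm = New-Object VMware.Vim.AlarmSpec
 $alarm.Name = $Name
 $alarm.Description = $Name
 $alarm.Enabled = $TRUE
 $expression = New-Object VMware.Vim.EventAlarmExpression
 $expression.EventType = "EventEx"
 $expression.eventTypeId = $Value
 $expression.objectType = "HostSystem"
 $expression.status = "red"
 $alarm.expression = New-Object VMware.Vim.OrAlarmExpression
 $alarm.expression.expression += $expression
 $alarm.setting = New-Object VMware.Vim.AlarmSetting
 $alarm.setting.reportingFrequency = 0
 $alarm.setting.toleranceRange = 0

 # 6. Create the alarm on the cluster
 Write-Host "Creating Alarm on $Cluster for $Name"
 $CreatedAlarm = $alarmMgr.CreateAlarm($entity.MoRef, $alarm)
 }
Write-Host "All Alarms Added to $Cluster"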

As you can see, the steps to create vCenter Alarms for Virtual SAN are actually pretty straightforward. If you have not yet begun monitoring your Virtual SAN environment, these steps can get you started quickly, and you do not have to be an expert in PowerCLI to do so.

VMware Hands on Labs

Here is a great tip brought to you by our friends at the VMware Hands on Labs. If you would like an excellent shortcut to getting “hands on” with creating vCenter Alarms for Virtual SAN using PowerCLI cmdlets, try out the lab below:

HOL-SDC-1427 – VMware Software Defined Storage: Module 5: Advanced Software Defined Storage With SPBM and PowerCLI (30 minutes)

 

We have many more articles on their way, so share, re-tweet, or use whatever your favorite social media method is. You will not want to miss these!

(Thanks to @millardjk for his keen eye)


Resources

 

VMware Storage Survey

The VMware Storage and Availability team is looking for customer and community feedback regarding some storage technologies and use cases. Please take a few minutes to fill out the brief survey listed in the link below.

Storage Technology & Use case

Thank you for your help and support.

For future updates on Virtual SAN (VSAN), Virtual Volumes (VVols), and other Software-defined Storage technologies as well as vSphere + OpenStack be sure to follow me on Twitter: @PunchingClouds

MySQL VM Backup with vSphere Data Protection

I often get questions regarding backup of a MySQL database server (VM) using vSphere Data Protection (VDP). VDP does not have an agent for MySQL, but you can of course perform an image-level (entire-VM) backup of a VM running MySQL. With VDP and many other backup solutions that use the vSphere APIs for Data Protection (VADP), the backup and recovery of Linux VMs is crash-consistent. In other words, there is no quiescing of the file system and applications running inside the VM when a backup of the VM is performed. When you recover a VM from a crash-consistent backup, it is similar to the state of the server when the power has failed unexpectedly (no graceful shutdown) and then power is restored. MySQL, as with other popular database solutions, has good, built-in protection against data loss and corruption when recovered from a crash-consistent state, but there are no guarantees. The goal is to minimize the chance of corruption and data loss. Here are a few recommendations when using VDP to back up a VM running MySQL:

Continue reading

vCenter Server 5.5 Availability guide

It brings me great pleasure to announce the vCenter Server 5.5 Availability Guide is now available.

Continue reading

Tips for a Successful VMware Virtual SAN Evaluation

One of the biggest advantages of Virtual SAN is that it is so easy to set up and use.  Out of all the evaluation options available to them, many customers have realized that trying it out in their own environment is entirely feasible.

We’ve been keeping track of the many Virtual SAN evaluations to date, and have created a quick guide that should help anyone evaluating Virtual SAN in their environment.  In it, you’ll find a checklist on configurations, verifying compatibility, testing the network, expected behaviors for failure testing, and tips on testing performance.  It’s an essential guide for anyone working with Virtual SAN.

We hope you find this document useful!

Download: Tips for a Successful VMware Virtual SAN Evaluation

Reference Architecture for building a VMware Software–Defined Datacenter

The latest in our series of reference architectures is now available. This is an update to the previous version which brings in additional products and covers the vCloud Suite 5.8 release.

This reference architecture describes an implementation of a software-defined data center (SDDC) using VMware vCloud® Suite Enterprise 5.8, VMware NSX™ for vSphere® 6.1, VMware IT Business Management Suite™ Standard Edition 1.1, and VMware vCenter™ Log Insight™ 2.0. This SDDC implementation is based on real-world scenarios, user workloads, and infrastructure system configurations. The configuration uses industry-standard servers, IP-based storage, and 10-Gigabit Ethernet (10GbE) networking to support a scalable and redundant architecture.

Continue reading

VMware Virtual SAN Operations: Replacing Disk Devices

In my previous Virtual SAN operations article, “VMware Virtual SAN Operations: Disk Group Management”, I covered the configuration and management of the Virtual SAN disk groups, and in particular I described the recommended operating procedures for managing Virtual SAN disk groups.

In this article, I will take a similar approach and cover the recommended operating procedures for replacing flash and magnetic disk devices. In Virtual SAN, drives can be replaced for two reasons: failures and upgrades. Regardless of the reason, whenever a disk device needs to be replaced it is important to follow the correct decommissioning procedures.

Replacing a Failed Flash Device

The failure of a flash device renders an entire disk group, along with its data and storage capacity, inaccessible to the cluster (i.e., the disk group enters the “Degraded” state). One important observation to highlight here is that a single flash device failure doesn’t necessarily mean that the running virtual machines will incur outages. As long as the virtual machines are configured with a VM Storage Policy with “Number of Failures to Tolerate” greater than zero, the virtual machine objects and components will remain accessible. If there is available storage capacity within the cluster, the data resynchronization operation is triggered within a matter of seconds. The time to complete this operation depends on the amount of data that needs to be resynchronized.
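As a side note, the “Number of Failures to Tolerate” setting referenced above comes from the VM Storage Policy assigned to the virtual machine. If you want to create such a policy from PowerCLI, here is a minimal sketch; it assumes the SPBM cmdlets available in recent PowerCLI releases and uses the VSAN.hostFailuresToTolerate capability name, with a hypothetical policy name:

# Build a VM Storage Policy that tolerates one host or disk failure (FTT = 1)
$capability = Get-SpbmCapability -Name "VSAN.hostFailuresToTolerate"
$rule = New-SpbmRule -Capability $capability -Value 1
$ruleSet = New-SpbmRuleSet -AllOfRules $rule
New-SpbmStoragePolicy -Name "VSAN-FTT-1" -Description "Tolerate one failure" -AnyOfRuleSets $ruleSet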

When a flash device failure occurs, before physically removing the device from a host, you must decommission the device from Virtual SAN. The decommission process performs a number of operations in order to discard disk group memberships, delete partitions, and remove stale data from all disks. Follow either of the disk device decommission procedures defined below.

Flash Device Decommission Procedure from the vSphere Web Client

  1. Log on to the vSphere Web Client
  2. Navigate to the Hosts and Clusters view and select the cluster object
  3. Go to the manage tab and select Disk management under the Virtual SAN section
  4. Select the disk group with the failed flash device
  5. Select the failed flash device and click the delete button

Note: In the event the disk claim rule setting in Virtual SAN is set to Automatic, the disk delete option won’t be available in the UI. Change the disk claim rule to “Manual” in order to have access to the disk delete option.

Flash Device Decommission Procedure from the CLI (ESXCLI) (Pass-through Mode)

  1. Log on to the host with the failed flash device via SSH
  2. Identify the device ID of failed flash device
    • esxcli vsan storage list


  3. Delete the failed flash device from the disk group
    • esxcli vsan storage remove -s <device id>


Note: Deleting a failed flash device will result in the removal of the disk group and all of its members.

  4. Remove the failed flash device from the host
  5. Add a new flash device to the host and wait for the vSphere hypervisor to detect it, or perform a device rescan.

Note: These steps are applicable when the storage controllers are configured in pass-through mode and support the hardware hot-plug feature.

Upgrading a Flash Device

Before upgrading the flash device, you should ensure there is enough storage capacity available within the cluster to accommodate all of the currently stored data in the disk group, because you will need to migrate data off that disk group.

To migrate the data before decommissioning the device, place the host in maintenance mode and choose the suitable data migration option for the environment. Once all the data is migrated from the disk group, follow the flash device decommission procedures before removing the drive from the host.
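If you prefer to script this step, more recent PowerCLI releases expose a Virtual SAN data migration option when placing a host into maintenance mode. Here is a minimal sketch; it assumes your PowerCLI version supports the -VsanDataMigrationMode parameter on Set-VMHost, and the host name is a placeholder:

# Place the host into maintenance mode and evacuate all Virtual SAN data first
# (-VsanDataMigrationMode accepts Full, EnsureAccessibility, or NoDataMigration)
Get-VMHost -Name "esxi-01.example.com" | Set-VMHost -State Maintenance -VsanDataMigrationMode Full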

Replacing a Failed Magnetic Disk Device

Each magnetic disk contributes storage capacity to its disk group and to the overall Virtual SAN datastore. As with flash, magnetic disk devices can be replaced because of failures or for upgrade reasons. The impact of a magnetic disk failure is smaller than that of a flash device failure. The virtual machines remain online and operational for the same reasons described above in the flash device failure section. The resynchronization operation is also significantly less intensive than after a flash device failure; however, the time again depends on the amount of data to be resynchronized.

As with flash devices, before removing a failed magnetic device from a host, decommission the device from Virtual SAN first. This allows Virtual SAN to perform the required disk group and device maintenance operations, as well as allowing the subsystem components to update the cluster capacity and configuration settings.

Magnetic Device Decommission Procedure from the vSphere Web Client (Pass-through Mode)

  1. Login to the vSphere Web Client
  2. Navigate to the Hosts and Clusters view and select the Virtual SAN enabled cluster
  3. Go to the manage tab and select Disk management under the Virtual SAN section
  4. Select the disk group with the failed magnetic device
  5. Select the failed magnetic device and click the delete button

Note: It is possible to perform decommissioning operations from ESXCLI in batch mode if required. The use of ESXCLI does introduce a level of complexity that should be avoided unless thoroughly understood. It is recommended to perform these types of operations using the vSphere Web Client until enough familiarity is gained with them.

Magnetic Device Decommission Procedure from the CLI (ESXCLI) (Pass-through Mode)

  1. Log on to the host with the failed magnetic device via SSH
  2. Identify the device ID of failed magnetic device
    • esxcli vsan storage list
  3. Delete the failed magnetic device from the disk group
    • esxcli vsan storage remove -d <device id>
  4.  Add a new magnetic device to the host and wait for the vSphere hypervisor to detect it, or perform a device rescan.

Upgrading a Magnetic Disk Device

Before upgrading any of the magnetic devices, ensure there is enough usable storage capacity available within the cluster to accommodate the data from the device that is being upgraded. The data migration can be initiated by placing the host in maintenance mode and choosing a suitable data migration option for the environment. Once all the data is offloaded from the disks, proceed with the magnetic disk device decommission procedure.

In this particular scenario, it is imperative to decommission the magnetic disk device before physically removing it from the host. If the disk is removed from the host without performing the decommissioning procedure, data cached from that disk will remain permanently in the cache layer. This could reduce the available amount of cache and eventually impact the performance of the system.

Note: The disk device replacement procedures discussed in this article are entirely based on storage controllers configured in pass-through mode. In the event the storage controllers are configured in RAID 0 mode, follow the manufacturer’s instructions for adding and removing disk devices.

– Enjoy

For future updates on Virtual SAN (VSAN), Virtual Volumes (VVols), and other Software-defined Storage technologies as well as vSphere + OpenStack be sure to follow me on Twitter: @PunchingClouds

Infographic – Walk Through of VMware Availability

Here’s a visualization we put together to help people understand the various offerings from VMware that can positively affect your levels of availability.

Hope you like it!

VMware-Availability  <click here for pdf>