
Category Archives: Storage

VMware Virtual SAN and ScaleIO: Fundamentally Two Different Approaches to Software-Defined Storage

There's a lot of buzz and excitement around Software-Defined Storage (SDS) and hyper-converged storage solutions, particularly around VMware's recently introduced product: VMware Virtual SAN.

VMware Virtual SAN has seen tremendous traction in the market since its release. After only two full quarters of availability, we already have many hundreds of customers happily running a variety of applications on Virtual SAN, from VDI to test & development to production applications and databases. Virtual SAN customers love the product's simplicity and integration with the VMware stack.

But along with increased awareness and traction, we are also seeing an increasing level of confusion in the market about the key differences between Virtual SAN and other SDS products. In particular, we have been receiving a great many questions from our customers and partners about the differences between Virtual SAN and EMC ScaleIO. They are asking us where Virtual SAN should be used, where ScaleIO should be used, and whether there's any real difference between the two.

This type of confusion is unfortunate because VMware Virtual SAN and ScaleIO follow two fundamentally different approaches to SDS.

  • VMware Virtual SAN is designed specifically around tight integration with vSphere – with the objective of providing super-simple management and very high levels of performance for vSphere VMs.  Virtual SAN is always deployed in a hyper-converged configuration, where storage is converged with the vSphere compute nodes. Virtual SAN is targeted at the generalist IT professional, not just storage experts.
  • ScaleIO has a different design point – to provide highly scalable server-based storage for heterogeneous platforms, including multiple hypervisors and physical servers. ScaleIO has its own installation, configuration and management workflows which are typically driven by expert storage administrators.

The confusion between VMware Virtual SAN and ScaleIO is partially fueled by recent press articles, which claim full integration of ScaleIO into vSphere's ESX kernel. This claim is not accurate. There are no plans to port the core ScaleIO product into the ESX kernel or integrate it with the rest of the vSphere stack.

More specifically, ScaleIO consists of two components: a) a block storage server that is the core of the ScaleIO product and which serves block storage to its clients through the ScaleIO protocol; b) a client which connects to the server and allows VMs and applications to access storage on ScaleIO clusters. This model is very similar to an iSCSI target server serving data to iSCSI initiators. EMC has written an ESX kernel driver that implements a ScaleIO client module. It 'talks' the ScaleIO protocol and accesses the ScaleIO server(s). It exposes storage to VMs running in vSphere in a way similar to iSCSI volumes. This ScaleIO driver has been written using public kernel APIs that are available to any VMware partner who develops kernel drivers in ESX. The ScaleIO server is not being ported into or integrated with vSphere and the ESX kernel. The ScaleIO server runs on Linux servers, either on bare metal or as a virtual appliance.

This architectural model allows ScaleIO to be a great SDS solution for heterogeneous platforms.

In the case of bare-metal deployments, VM I/O goes through the in-kernel driver and onto the external ScaleIO cluster over an IP network, as is the case with other storage arrays. In the virtual appliance case, a VM I/O operation traverses the ESX storage stack and then passes through the virtual appliance.

In contrast, VSAN and all its components are natively integrated with vSphere. The key functional components of VSAN, including its “server” functionality, run in the ESX kernel. This fundamental difference in architecture allows VSAN to be optimized for vSphere VMs in an unparalleled way.  VSAN is also integrated directly with the ESX control plane, vCenter  and vSphere APIs to provide a simple and effective management experience.  Together, these integrations provide important benefits to vSphere customers:

  • Performance and Overhead: The full kernel integration gives VMware Virtual SAN higher levels of performance and efficiency because Virtual SAN can more efficiently utilize the available memory and CPU cycles. Hence, Virtual SAN's memory footprint and CPU cycles consumed per operation are the lowest in the market. Furthermore, compute and storage operations execute inside the same layer of software, minimizing communication latencies. This efficiency translates to more compelling performance and total cost of ownership for the end user. By contrast, no other hyper-converged solution has its “server” logic integrated in the vSphere kernel, limiting the gains and efficiencies that can be achieved by these solutions.
  • Management integration: Virtual SAN is designed to be managed through vCenter, by any administrator who is familiar with vSphere.  The setup, configuration and ongoing management of the product are simple and  fully integrated with vSphere management workflows.  As a result, there are no separate management consoles and solutions. The required storage properties of each VM and virtual disk are expressed in the form of policies.  Effectively, storage becomes a quality of every VM, not a separate function.
  • Programmatic APIs: The functionality of Virtual SAN's control plane is exposed through new vSphere APIs or extensions to existing ones. These are stable APIs with a wide range of language bindings that VMware customers have been using for years to automate their operational processes.
  • vSphere Features: In addition, since Virtual SAN is embedded within the hypervisor, all vSphere features such as DRS, vMotion, SVMotion, High Availability, vSphere Replication, and others are seamlessly supported with VSAN.

VMware Virtual SAN’s architectural model allows it to be the best storage solution for hyper-converged vSphere environments and for vSphere VMs. It does not address non-vSphere storage needs today.

So what does this mean in terms of where each solution should be used? In practice, things are never as black and white as we would like them to be, but at a high level there are some key aspects to keep in mind when comparing:

  • Use VMware Virtual SAN if you value deep integration with vSphere, both on the data path and control plane. Virtual SAN is deployed in a hyper-converged model, where storage is converged with compute on the same x86 hosts and storage scales in alignment with vSphere clusters (up to 32 nodes per cluster today, soon to become 64). We believe the Virtual SAN approach delivers highly differentiated, unique advantages for customers of all sizes looking for an SDS solution for vSphere.
  • Use ScaleIO when delivering highly scalable shared storage from a single storage pool to different hypervisors or across multiple vSphere clusters. The primary use case for ScaleIO is serving storage for a heterogeneous environment (i.e when storage is served to a diverse set of hypervisor clients or between virtual and physical environments) or when the storage system needs to scale beyond the size of a vSphere cluster.

The picture below should help clarify what these two products are positioned for:

[Figure: positioning of VSAN and ScaleIO]

Gartner Predictions: Storage Integration Leading the Way in 2015

Rowing crews move in sync to keep their craft moving at a steady pace, but that synchronous movement would not exist without an investment in the right hardware and the right training. For businesses, that means investing not only in the right rowers, but also in the right tools to enable those athletes. And for a long time those tools, namely storage for midmarket organizations, have been either too expensive, too resource-heavy, or too complex.

It doesn’t have to be that way. Midmarket teams should be able to afford the right equipment without worry so they can focus on more important tasks. That’s why we’re thrilled to discover Gartner Research included VMware Virtual SAN in their recent report, “Predicts 2015: Midmarket CIOs Must Shed IT Debt to Invest in Strategic IT Initiatives.”

The report investigates how CIOs can best invest resources to give IT teams the simplified tools they need while staying on budget. Often, this excludes “best of breed” solutions. Gartner suggests midsize businesses seek out integrated systems that combine server, storage and network components in a package suitable for their needs instead.

For many, Virtual SAN, VMware's policy-driven storage product designed for vSphere environments, is that solution. Its ease of use, performance, scalability and low total cost of ownership help avoid significant upfront investments. And, with its VM-level storage policies, Virtual SAN automatically and dynamically matches requirements with underlying storage resources. That means less time spent manually managing storage tasks and more time focused on important work.

According to Gartner's predictions, roughly 40% of midsize enterprises will replace all data center services and storage with integrated systems by 2018. We certainly would like to see, and be at the forefront of, that transition.

VMware Virtual SAN stands apart from the competition not just because of its ability to deliver simple software-defined shared storage, but also because of its integrated partner ecosystem. More than 40 Virtual SAN Ready Nodes can be purchased from our system vendor partners.

We’re thrilled to have been a part of software defined storage in 2014, and we can’t wait to push the envelope further in 2015.

Be sure to subscribe to the Virtual SAN blog or follow our social channels at @vmwarevsan and Facebook.com/vmwarevsan for the latest updates.

For more information about VMware Virtual SAN, visit http://www.vmware.com/products/virtual-san.

VMware’s vSphere Big Data Extensions (BDE) achieves Hortonworks Operations Ready Certification

 

Hortonworks announced on 17 December 2014 that VMware’s Big Data Extensions tool for Hadoop on virtual machines is now both HDP Certified and Operations Ready. HDP is the Hortonworks Data Platform – an open Hadoop platform that is centered on YARN. The Operations Ready designation is a new certification introduced by Hortonworks to focus attention on those tools that integrate in an approved way with Apache Ambari by making use of the open Ambari management application programming interfaces. The focus of the program is to certify operational tools for managing a Hadoop/HDP cluster. The Operations Ready program also provides assurance to enterprises adopting Hadoop that the tools they select to run and interact with Hadoop have been tested and validated to work correctly. At VMware we are excited to get this additional level of certification for VMware’s BDE and we look forward to continued engineering collaboration with Hortonworks.

Here is the description of the new Operations Ready program from Hortonworks:

http://hortonworks.com/partners/certified-technology-program/ops-ready/

By now you have probably also seen the recent VMware Big Data Extensions 2.1 announcements. Here is a quick summary of the new features in 2.1:

http://blogs.vmware.com/vsphere/2014/10/whats-new-vsphere-big-data-extensions-version-2-1.html

BDE 2.1 was announced as being Generally Available in October 2014. One of the central new features in BDE 2.1 is better integration with the de-facto Hadoop management tools from the distro vendors. Chief among those tools is Ambari. This integration with Ambari was the result of a request made to us directly by the VMware BDE user community.

BDE 2.1, with the new application manager construct, can now use the Ambari APIs under the covers to provision the HDP software into the virtual machines that it has created through cloning its template virtual machine. This method of deploying everything through BDE ensures that the resulting new Hadoop cluster is entirely compatible with Ambari. That is important because many of our users would like to use Ambari and VMware vCenter together from the point at which a cluster is provisioned onwards.

  • Ambari is the management tool of choice among HDP users in order to gain insight into what is going on at runtime at the Hadoop level (e.g. checking the status of HDFS, YARN, MapReduce and other services) and to make service changes there.
  • VMware vCenter is the virtualization infrastructure management tool that is in use at tens of thousands of VMware’s customers to view system behavior and performance at the virtual infrastructure level (virtual machines, physical machines, consumed resources and performance data). vCenter with the BDE plug-in is in popular use for deploying user Hadoop clusters today at many enterprises.

The BDE plug-in uses the vCenter APIs as well as the Ambari Blueprint APIs. Combining the two tools together to collaborate on the Hadoop provisioning details simplifies the management of your virtualized Hadoop cluster significantly. Both the Hadoop application architect and the virtualization manager can converse about the components of the HDP cluster and their effect on hardware consumption.
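To give a concrete feel for the Ambari Blueprint APIs that BDE drives under the covers, here is a minimal, hypothetical sketch of registering a blueprint against Ambari's REST endpoint using PowerShell. The host name, credentials, stack version, and blueprint content are placeholder assumptions; BDE performs the equivalent calls for you, so this is purely illustrative:

# Hypothetical sketch: register a minimal HDP blueprint with the Ambari REST API.
# Ambari requires the X-Requested-By header on modifying requests.
$ambariUrl = "http://ambari.example.com:8080"
$cred      = Get-Credential                        # Ambari admin credentials
$headers   = @{ "X-Requested-By" = "bde-example" }

$blueprint = @{
    Blueprints  = @{ stack_name = "HDP"; stack_version = "2.1" }
    host_groups = @(
        @{ name        = "master"
           cardinality = "1"
           components  = @( @{ name = "NAMENODE" }, @{ name = "RESOURCEMANAGER" } ) }
    )
} | ConvertTo-Json -Depth 6

Invoke-RestMethod -Method Post -Uri "$ambariUrl/api/v1/blueprints/hdp-sample" `
    -Headers $headers -Credential $cred -Body $blueprint -ContentType "application/json"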

Hortonworks’ new Operations Ready program is one of a set of certifications that are currently available from the company. Other certifications available are the YARN Ready, Security Ready and Governance Ready programs. You can read more about the new programs here:  http://hortonworks.com/blog/accelerating-adoption-enterprise-hadoop

You can find the full BDE Administrator’s  and User’s  Guide and the BDE Command Line Interface Guide, as well as the Release Notes at: https://www.vmware.com/support/pubs/vsphere-big-data-extensions-pubs.html

 

Operationalizing VMware Virtual SAN: Automating vCenter Alarm Configuration Using PowerCLI


Welcome to the next installment in our Operationalizing VMware Virtual SAN series. In our previous article we detailed “How to configure vCenter alarms for Virtual SAN”. In today’s article we will demonstrate how to automate that configuration workflow leveraging PowerCLI.

(Many thanks to VMware genius Alan Renouf (@alanrenouf) for his contributions to this topic) [Joe Cook: @CloudAnimal]

The PowerCLI code required to automate the configuration of vCenter Alarms for Virtual SAN is quite straightforward.

1. Connect to vCenter

Connect-VIServer -Server 192.168.100.1 -User Administrator@vsphere.local -Password vmware

2. Define the Virtual SAN cluster where you would like the rules to be created

$Cluster = "Cluster Site A"

3. Next we create a hash table with the desired VMware ESXi Observation IDs (VOB IDs) for Virtual SAN and include a description for each VOB ID.

If you are not used to programming, the concept of arrays and hash tables may be a bit confusing. Using variables is generally much easier to understand. One way of understanding variables is to think of them simply as a short amount of text used to represent a larger amount of text in your program or script ($x=”larger amount of text”). Instead of typing “larger amount of text” continually, you can simply type $x and the language interpreter (in our case PowerShell), will substitute the string “larger amount of text” wherever it finds $x in your script. Variables can be used to greatly reduce the amount of code you have to type, make your scripts much easier to read, and have many other uses as well.

If we think of variables as ways to store one value to reference, we can think of arrays as a way to store multiple values to reference. In our example today, we would have to create at least 32 variables to perform the same work that we can with one hash table.
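For instance, the difference looks like this in PowerShell (the names and values are made up purely for illustration):

# A variable stores a single value
$clusterName = "Cluster Site A"

# An array stores multiple values that are referenced by position
$clusterNames = @("Cluster Site A", "Cluster Site B", "Cluster Site C")
$clusterNames[1]    # returns "Cluster Site B"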

A hash table is a type of array that is also known as a dictionary. It is a collection of name-value pairs (e.g. "name"="value") that can be looked up by name. Here is an example of a basic hash table:

$HashTableName = @{
VOB_ID_A="VOB Description";
VOB_ID_B="VOB Description";
VOB_ID_C="VOB Description";
}

Here is a breakdown of each component of the code used to create a hash table:

  • $HashTableName = sets the name used to reference this list of key-value pairs; replace "HashTableName" with the text you wish to use.
  • @{ indicates the start of the hash table or array.
  • VOB_ID_A="VOB Description"; is a key-value pair stored within the hash table. VOB_ID_A is the VOB ID from the VMware ESXi Observation Log (VOBD) (e.g. "esx.audit.vsan.clustering.enabled"), and "VOB Description" is the description of the associated VOB ID (e.g. "Virtual SAN clustering service had been enabled"). Make sure to use quotation marks whenever spaces are used, and separate each key-value pair with a semicolon (;). Examine /var/log/vobd.log on your vSphere host to obtain possible VOB IDs. See here for a list of VMware ESXi Observation IDs for Virtual SAN.
  • } indicates the end of the hash table or array.

Here is an example of a hash table with a single key-value pair representing a single vCenter Alarm for Virtual SAN:

$VSANAlerts = @{
"esx.audit.vsan.clustering.enabled" = "Virtual SAN clustering service had been enabled";
}

Below is the actual hash table that we will use in our example Virtual SAN Alarm Configuration script. It is fully populated with all of the recommended VOB IDs for Virtual SAN along with the description for each. We have labeled this hash table as “$VSANAlerts”. You will see $VSANAlerts referenced further along in the script as we reference the items within our hash table.

$VSANAlerts = @{
 "esx.audit.vsan.clustering.enabled" = "Virtual SAN clustering service had been enabled";
 "esx.clear.vob.vsan.pdl.online" = "Virtual SAN device has come online.";
 "esx.clear.vsan.clustering.enabled" = "Virtual SAN clustering services have now been enabled.";
 "esx.clear.vsan.vsan.network.available" = "Virtual SAN now has at least one active network configuration.";
 "esx.clear.vsan.vsan.vmknic.ready" = "A previously reported vmknic now has a valid IP.";
 "esx.problem.vob.vsan.lsom.componentthreshold" = "Virtual SAN Node: Near node component count limit.";
 "esx.problem.vob.vsan.lsom.diskerror" = "Virtual SAN device is under permanent error.";
 "esx.problem.vob.vsan.lsom.diskgrouplimit" = "Failed to create a new disk group.";
 "esx.problem.vob.vsan.lsom.disklimit" = "Failed to add disk to disk group.";
 "esx.problem.vob.vsan.pdl.offline" = "Virtual SAN device has gone offline.";
 "esx.problem.vsan.clustering.disabled" = "Virtual SAN clustering services have been disabled.";
 "esx.problem.vsan.lsom.congestionthreshold" = "Virtual SAN device Memory/SSD congestion has changed.";
 "esx.problem.vsan.net.not.ready" = "A vmknic added to Virtual SAN network config doesn't have valid IP.";
 "esx.problem.vsan.net.redundancy.lost" = "Virtual SAN doesn't haven any redundancy in its network configuration.";
 "esx.problem.vsan.net.redundancy.reduced" = "Virtual SAN is operating on reduced network redundancy.";
 "esx.problem.vsan.no.network.connectivity" = "Virtual SAN doesn't have any networking configuration for use."
 }

(For more information on working with PowerShell hash tables, see this handy Microsoft TechNet article)
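As a quick illustration before we use it, here is how values are read back out of the hash table above, either individually by key or in a loop; the same loop pattern appears in step 4 below:

# Look up a single description by its key
$VSANAlerts["esx.audit.vsan.clustering.enabled"]

# Loop over every key and print each key-value pair
$VSANAlerts.Keys | ForEach-Object {
    "{0} = {1}" -f $_, $VSANAlerts.Get_Item($_)
}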

4. Next we use the Get-View cmdlet to query the vCenter Alarm Manager for each VOB ID listed in step 3.

The Get-View cmdlet returns the vSphere inventory objects (VIObject) that correspond to the specified search criteria.

$alarmMgr = Get-View AlarmManager
 $entity = Get-Cluster $Cluster | Get-View
 # Loop through each VOB ID in the hash table; the loop body continues in steps 5 and 6
 $VSANAlerts.Keys | Foreach {
 $Name = $VSANAlerts.Get_Item($_)   # description text for this VOB ID
 $Value = $_                        # the VOB ID itself

5. Create the vCenter Alarm specification object

 $alarm = New-Object VMware.Vim.AlarmSpec
 $alarm.Name = $Name
 $alarm.Description = $Name
 $alarm.Enabled = $TRUE
 $expression = New-Object VMware.Vim.EventAlarmExpression
 $expression.EventType = "EventEx"   # VOB-based events are surfaced as EventEx events
 $expression.eventTypeId = $Value
 $expression.objectType = "HostSystem"
 $expression.status = "red"
 $alarm.expression = New-Object VMware.Vim.OrAlarmExpression
 $alarm.expression.expression += $expression
 $alarm.setting = New-Object VMware.Vim.AlarmSetting
 $alarm.setting.reportingFrequency = 0
 $alarm.setting.toleranceRange = 0

6. Create the vCenter Alarm in vCenter

 Write-Host "Creating Alarm on $Cluster for $Name"
 $CreatedAlarm = $alarmMgr.CreateAlarm($entity.MoRef, $alarm)
 }
 Write-Host "All Alarms Added to $Cluster"

As you can see, the steps to create vCenter Alarms for Virtual SAN are actually pretty straightforward. If you have not yet begun monitoring your Virtual SAN environment, these steps can get you there quickly, and you do not have to be a PowerCLI expert to do so.

VMware Hands on Labs

Here is a great tip brought to you by our friends at the VMware Hands on Labs. If you would like an excellent shortcut to getting “hands on” creating vCenter Alarms for Virtual SAN, using PowerCLI cmdlets, try out the lab below:

HOL-SDC-1427 – VMware Software Defined Storage: Module 5: Advanced Software Defined Storage With SPBM and PowerCLI (30 minutes)

 

We have many more articles on their way, so share, re-tweet, or use whatever your favorite social media method is. You will not want to miss these!

(Thanks to @millardjk for his keen eye)



VMware Storage Survey

The VMware Storage and Availability team is looking for customer and community feedback regarding some storage technologies and use cases. Please take a few minutes to fill out the brief survey listed in the link below.

Storage Technology & Use case

Thank you for your help and support.

For future updates on Virtual SAN (VSAN), Virtual Volumes (VVols), and other Software-defined Storage technologies as well as vSphere + OpenStack be sure to follow me on Twitter: @PunchingClouds

Tips for a Successful VMware Virtual SAN Evaluation

One of the biggest advantages of Virtual SAN is that it is so easy to set up and use.  Out of all the evaluation options available to them, many customers have realized that trying it out in their own environment is entirely feasible.

We’ve been keeping track of the many Virtual SAN evaluations to date, and have created a quick guide that should help anyone evaluating Virtual SAN in their environment.  In it, you’ll find a checklist on configurations, verifying compatibility, testing the network, expected behaviors for failure testing, and tips on testing performance.  It’s an essential guide for anyone working with Virtual SAN.

We hope you find this document useful!

Download: Tips for a Successful VMware Virtual SAN Evaluation

Reference Architecture for Building a VMware Software-Defined Datacenter

The latest in our series of reference architectures is now available. This is an update to the previous version which brings in additional products and covers the vCloud Suite 5.8 release.

This reference architecture describes an implementation of a software-defined data center (SDDC) using VMware vCloud® Suite Enterprise 5.8, VMware NSX™ for vSphere® 6.1, VMware IT Business Management Suite™ Standard Edition 1.1, and VMware vCenter™ Log Insight™ 2.0. This SDDC implementation is based on real-world scenarios, user workloads, and infrastructure system configurations. The configuration uses industry-standard servers, IP-based storage, and 10-Gigabit Ethernet (10GbE) networking to support a scalable and redundant architecture.


VMware Virtual SAN Operations: Replacing Disk Devices

In my previous Virtual SAN operations article, “VMware Virtual SAN Operations: Disk Group Management”, I covered the configuration and management of Virtual SAN disk groups, and in particular I described the recommended operating procedures for managing them.

In this article, I will take a similar approach and cover the recommended operating procedures for replacing flash and magnetic disk devices. In Virtual SAN, drives can be replaced for two reasons: failures and upgrades. Regardless of the reason, whenever a disk device needs to be replaced, it is important to follow the correct decommissioning procedure.

Replacing a Failed Flash Device

The failure of a flash device renders an entire disk group, along with its data and storage capacity, inaccessible to the cluster (i.e., in the “Degraded” state). One important point to highlight here is that a single flash device failure doesn’t necessarily mean that the running virtual machines will incur outages. As long as the virtual machines are configured with a VM Storage Policy with “Number of Failures to Tolerate” greater than zero, the virtual machine objects and components will remain accessible. If there is available storage capacity within the cluster, the data resynchronization operation is triggered within a matter of seconds. The time this operation takes depends on the amount of data that needs to be resynchronized.
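If you want to confirm ahead of time which storage policy, and therefore which “Number of Failures to Tolerate” value, your virtual machines are using, PowerCLI’s storage policy (SPBM) cmdlets can report it. This is a minimal sketch; the cluster name is a placeholder and the SPBM cmdlets require a reasonably recent PowerCLI release:

# Sketch: report each VM's assigned storage policy and compliance status
Get-Cluster "Cluster Site A" | Get-VM |
    Get-SpbmEntityConfiguration |
    Select-Object Entity, StoragePolicy, ComplianceStatus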

When a flash device failure occurs, before physically removing the device from a host, you must decommission the device from Virtual SAN. The decommission process performs a number of operations in order to discard disk group memberships, delete partitions, and remove stale data from all disks. Follow either of the disk device decommission procedures defined below.

Flash Device Decommission Procedure from the vSphere Web Client

  1. Log on to the vSphere Web Client
  2. Navigate to the Hosts and Clusters view and select the cluster object
  3. Go to the manage tab and select Disk management under the Virtual SAN section
  4. Select the disk group with the failed flash device
  5. Select the failed flash device and click the delete button

Note: In the event the disk claim mode in Virtual SAN is set to automatic, the disk delete option won’t be available in the UI. Change the disk claim mode to “Manual” in order to access the disk delete option.

Flash Device Decommission Procedure from the CLI (ESXCLI) (Pass-through Mode)

  1. Log on to the host with the failed flash device via SSH
  2. Identify the device ID of the failed flash device
    • esxcli vsan storage list


  3. Delete the failed flash device from the disk group
    • esxcli vsan storage remove -s <device id>


Note: Deleting a failed flash device will result in the removal of the disk group and all of its members.

  4. Remove the failed flash device from the host
  5. Add a new flash device to the host and wait for the vSphere hypervisor to detect it, or perform a device rescan.

Note: These steps are applicable when the storage controllers are configured in pass-through mode and support the hardware hot-plug feature.

Upgrading a Flash Device

Before upgrading the flash device, you should ensure there is enough storage capacity available within the cluster to accommodate all of the currently stored data in the disk group, because you will need to migrate data off that disk group.

To migrate the data before decommissioning the device, place the host in maintenance mode and choose the suitable data migration option for the environment. Once all the data is migrated from the disk group, follow the flash device decommission procedures before removing the drive from the host.
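If you prefer to script this step, newer PowerCLI releases expose the Virtual SAN data migration option directly on Set-VMHost. A hedged sketch follows; the host name is a placeholder, and the -VsanDataMigrationMode parameter is not present in older PowerCLI versions:

# Sketch: enter maintenance mode and evacuate Virtual SAN data from the host
# before decommissioning its disk group. Options are Full, EnsureAccessibility,
# or NoDataMigration; pick the one that suits your environment.
Get-VMHost "esx01.example.com" | Set-VMHost -State Maintenance -VsanDataMigrationMode Full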

Replacing a Failed Magnetic Disk Device

Each magnetic disk accounts for the storage capacity it contributes to a disk group and to the overall Virtual SAN datastore. As with flash, magnetic disk devices can be replaced because of failures or for upgrade reasons. The impact of a magnetic disk failure is smaller than that of a flash device failure. The virtual machines remain online and operational for the same reasons described above in the flash device failure section. The resynchronization operation is also significantly less intensive than for a flash device failure, although, again, the time depends on the amount of data to be resynchronized.

As with flash devices, before removing a failed magnetic device from a host, decommission the device from Virtual SAN first. This allows Virtual SAN to perform the required disk group and device maintenance operations, and allows the subsystem components to update the cluster capacity and configuration settings.

vSphere Web Client Procedure (Pass-through Mode)

  1. Log on to the vSphere Web Client
  2. Navigate to the Hosts and Clusters view and select the Virtual SAN enabled cluster
  3. Go to the manage tab and select Disk management under the Virtual SAN section
  4. Select the disk group with the failed magnetic device
  5. Select the failed magnetic device and click the delete button

Note: It is possible to perform decommissioning operations from ESXCLI in batch mode if required. However, using ESXCLI introduces a level of complexity that should be avoided unless it is thoroughly understood. It is recommended to perform these types of operations using the vSphere Web Client until enough familiarity with them is gained.

Magnetic Device Decommission Procedure from the CLI (ESXCLI) (Pass-through Mode)

  1. Log on to the host with the failed magnetic device via SSH
  2. Identify the device ID of the failed magnetic device
    • esxcli vsan storage list
  3. Delete the failed magnetic device from the disk group
    • esxcli vsan storage remove -d <device id>
  4. Add a new magnetic device to the host and wait for the vSphere hypervisor to detect it, or perform a device rescan.
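Note: If the cluster’s disk claim mode is set to manual, the replacement magnetic device may also need to be claimed back into the disk group by hand, for example with esxcli vsan storage add -d <device id>. This is a hedged example; verify the exact options with esxcli vsan storage add --help on your ESXi build.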

Upgrading a Magnetic Disk Device

Before upgrading any of the magnetic devices, ensure there is enough usable storage capacity available within the cluster to accommodate the data from the device that is being upgraded. The data migration can be initiated by placing the host in maintenance mode and choosing a suitable data migration option for the environment. Once all the data is offloaded from the disks, proceed with the magnetic disk device decommission procedure.

In this particular scenario, it is imperative to decommission the magnetic disk device before physically removing it from the host. If the disk is removed without performing the decommissioning procedure, data cached from that disk will remain permanently in the cache layer. This could reduce the amount of available cache and eventually impact the performance of the system.

Note: The disk device replacement procedures discussed in this article are entirely based on storage controllers configured in pass-through mode. In the event the storage controllers are configured in RAID 0 mode, follow the manufacturer's instructions for adding and removing disk devices.

– Enjoy

For future updates on Virtual SAN (VSAN), Virtual Volumes (VVols), and other Software-defined Storage technologies as well as vSphere + OpenStack be sure to follow me on Twitter: @PunchingClouds

Virtual SAN Backup with VDP – New White Paper

Hot off the press: a new white paper that discusses backing up virtual machines running on VMware Virtual SAN (VSAN) using VMware vSphere Data Protection (VDP). These are the main topics that are covered:

  • VDP Architectural Overview
  • Virtual SAN Backup using VDP
  • Factors Affecting Backup Performance

The paper details test scenarios, how backup transport modes affect CPU and memory utilization of the VDP virtual appliance, and how the vSphere hosts' management network is impacted when the Network Block Device over Secure Sockets Layer (NBDSSL) transport mode is utilized. The paper concludes with a summary of observations, recommendations when deploying the VDP virtual appliance to a Virtual SAN datastore, and some discussion around transport modes and running concurrent backups. A special thank you goes to Weiguo He for compiling this data and writing this paper!

Click here to view/download VMware Virtual SAN Backup Using VMware vSphere Data Protection

@jhuntervmware

VMware Configuration Guide for Virtual SAN HCL Component Updates

The Virtual SAN Configuration Guide has been updated with new components. We recently certified 12 SSDs, updated 4 existing SSD certifications, and updated firmware information for 2 HDDs. Make sure to visit the VMware Configuration Guide for Virtual SAN for more details!

Here is a list of changes:

New SSDs
•  HGST HUSML4040ASS600
•  HGST HUSML4020ASS600
•  HGST HUSML4040ASS601
•  HGST HUSML4020ASS601
•  HGST HUSSL4040BSS600
•  HGST HUSSL4020BSS600
•  HGST HUSSL4010BSS600
•  HGST HUSSL4040BSS601
•  HGST HUSSL4020BSS601
•  HGST HUSSL4010BSS601
•  NEC S3700 400GB SATA 2.5 MLC RPQ
•  NEC N8150-712

Updated SSD Certifications
• Samsung SM1625 800GB SAS SSD1
• Cisco UCS-SD800G0KS2-EP
• EMC XtremSF1400 PCIEHHM-1400M
• EMC XtremSF700 PCIEHHM-700M

Updated Diskful Writes per Day (DWPD) for Samsung and Cisco drives
A new firmware, B210.06.04, was certified for EMC PCI-E SSDs

HDD Firmware Information Updates
•  Fujitsu HD SAS 6G 1.2TB 10K HOT PL 2.5” EP
•  Hitachi 6Gbps,900GB,10000r/min,2.5in.