
Category Archives: Storage

Managing Virtual SAN with RVC: Part 1 – Introduction to the Ruby vSphere Console

Allow me to introduce you to a member of the VMware CLI family that you may not yet have met: the Ruby vSphere Console, or RVC for short. The Ruby vSphere Console is a console user interface for VMware ESXi and vCenter Server. You may already know of RVC, as it has been an open source project for the past two to three years and is based on the popular RbVmomi Ruby interface to the vSphere API. RbVmomi was created with the goal of dramatically decreasing the amount of coding required to perform simple tasks and increasing the efficiency of task execution, all while still allowing the full power of the API when needed. The Ruby vSphere Console ships free with, and is fully supported on, both the vCenter Server Appliance (VCSA) and the Windows version of vCenter Server. Most importantly, RVC is one of the primary tools for managing and troubleshooting a Virtual SAN environment.
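
To give a feel for how little code RbVmomi asks of you, here is a minimal sketch that connects to vCenter and lists each virtual machine with its power state. The hostname, credentials, and inventory layout are placeholders for your own environment.

```ruby
require 'rbvmomi'

# Connect to vCenter; host and credentials are placeholders.
vim = RbVmomi::VIM.connect(
  host:     'vcsa.example.com',
  user:     'administrator@vsphere.local',
  password: 'changeme',
  insecure: true  # skips SSL verification; acceptable in a lab, not in production
)

# Grab the first datacenter and list the VMs at the top level of its VM folder
# (a real script would recurse into subfolders).
dc = vim.serviceInstance.find_datacenter or abort 'no datacenter found'
dc.vmFolder.childEntity.grep(RbVmomi::VIM::VirtualMachine).each do |vm|
  puts "#{vm.name}: #{vm.runtime.powerState}"
end
```

RVC wraps exactly this kind of plumbing behind a filesystem-style console, and its vsan.* command namespace (vsan.cluster_info, for example) is what makes it so useful for Virtual SAN; more on those commands later in this series.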

Continue reading

VMware Virtual SAN Design and Sizing Guide for VDI

It’s time for everyone to get up to speed with the latest and greatest VMware Virtual SAN design and sizing guidance for Horizon View Virtual Desktop Infrastructures. In this new white paper, Wade Holmes and I leveraged the previously published guidance from the VMware Virtual SAN Design and Sizing white paper and applied it to the Horizon View Virtual Desktop Infrastructure use case. This new paper provides prescriptive guidance for the sizing and design of all key requirements and components of Horizon View Virtual Desktop Infrastructures on VMware Virtual SAN. Some of the specific items are listed below:

  • System Sizing and Desktop Classification
  • Host Sizing Considerations
  • Host CPU Sizing
  • Host Memory Sizing
  • CPU Sizing Assessment
  • Resource Overhead
  • Object calculations for Horizon View Desktops
  • Disk Group Sizing
  • Magnetic Disk Sizing
  • Flash Capacity Sizing

We highly recommend reviewing the white paper, as inadequate sizing can have a negative impact on the overall performance of a Virtual Desktop Infrastructure. The new design and sizing guide can be found and downloaded from the VMware Virtual SAN product page or directly from the link provided below:

VMware Virtual SAN Design and Sizing Guide for Horizon View Virtual Desktop Infrastructure
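
As a taste of the arithmetic the paper walks through, here is a hedged back-of-the-envelope sketch using the commonly cited Virtual SAN guideline of sizing flash cache at roughly 10% of anticipated consumed capacity (measured before the NumberOfFailuresToTolerate copies are counted). The desktop counts and sizes below are made-up inputs, not recommendations.

```ruby
# Back-of-the-envelope Virtual SAN sizing for a hypothetical VDI pool.
# All inputs are illustrative placeholders; consult the white paper for real guidance.
desktops = 500
vmdk_gb  = 30   # anticipated consumed capacity per desktop
ftt      = 1    # NumberOfFailuresToTolerate => ftt + 1 copies of each object

raw_capacity_gb = desktops * vmdk_gb * (ftt + 1)  # magnetic capacity consumed across the cluster
flash_cache_gb  = desktops * vmdk_gb * 0.10       # ~10% rule, before FTT copies

puts "Raw capacity consumed: #{raw_capacity_gb} GB"                    # => 30000 GB
puts "Flash cache target:    #{flash_cache_gb.round} GB cluster-wide"  # => 1500 GB
```

Treat the 10% figure as a starting point only; the white paper refines it for desktop classes, disk group layout, and resource overhead.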

For future updates on Virtual SAN (VSAN), Virtual Volumes (VVols), and other Software-defined Storage technologies, as well as vSphere + OpenStack, be sure to follow me on Twitter: @PunchingClouds

Build a Business Case for Virtual SAN – Register for the 7/17 Webcast!

Is your business looking to take the next step to software-defined storage?

On July 17th at 10:00 a.m. PDT, we invite you to join us for this VMware Webcast Series session on Building a Business Case for VMware Virtual SAN.

Dive into software-defined storage with VMware Virtual SAN, and learn about the factors that enable this industry-leading solution to deliver lower TCO. The webcast will cover capital and operational expenditure savings, showcase case studies, and outline a framework for building a cost comparison.

If you are looking to build a business case for hardware-independent software-defined storage, complete with built-in failure tolerance and more, register for the webcast today!

For more information on VMware Virtual SAN, visit here.

For future updates, follow us on Twitter at @VMwareVSAN.

Understanding Data Locality in VMware Virtual SAN

Since the release of VMware Virtual SAN, I’ve been involved in numerous customer and field conversations around VMware Virtual SAN’s ability to take advantage of data locality.

I have addressed the question in several of the Virtual SAN presentations I have delivered, but I realized that this was an ongoing topic of discussion and one for which we needed to provide more detail in order to satisfy everyone who has been wondering about it. I figured it was time to put together some form of OFFICIAL collateral providing in-depth details on the topic.

So, like any storage system, VMware Virtual SAN makes use of data locality. Virtual SAN uses a combination of algorithms that take advantage of both temporal and spatial locality of reference to populate the flash-based read caches across a cluster and provide high performance from available flash resources.
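
To make the temporal-locality half of that statement concrete, here is a toy LRU read cache keyed by logical block address. It is an illustration of the concept only, not Virtual SAN’s actual caching algorithm, which also weighs spatial locality and operates across a distributed flash tier.

```ruby
# Toy LRU read cache: recently read blocks stay in flash, so a hot working
# set keeps hitting cache (temporal locality). Not VSAN's real algorithm.
class LruReadCache
  def initialize(capacity)
    @capacity = capacity
    @blocks   = {}   # Ruby hashes preserve insertion order; last key = most recent
  end

  def read(lba)
    if @blocks.key?(lba)
      @blocks[lba] = @blocks.delete(lba)  # refresh position: most recently used
      :cache_hit
    else
      @blocks.delete(@blocks.keys.first) if @blocks.size >= @capacity  # evict LRU
      @blocks[lba] = true
      :cache_miss  # would be read from magnetic disk and then cached in flash
    end
  end
end

cache = LruReadCache.new(2)
p [1, 2, 1, 3, 2].map { |lba| cache.read(lba) }
# => [:cache_miss, :cache_miss, :cache_hit, :cache_miss, :cache_miss]
```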

For more details on this topic, download the new Understanding Data Locality in VMware Virtual SAN white paper from the link below:

Understanding Data Locality in VMware Virtual SAN

- Enjoy

For future updates on Virtual SAN (VSAN), Virtual Volumes (VVols), and other Software-defined Storage technologies, as well as vSphere + OpenStack, be sure to follow me on Twitter: @PunchingClouds

Virtual Volumes (VVols) vSphere APIs & Cloning Operation Scenarios

I’ve been getting a number of questions about Virtual Volumes and the vSphere Storage APIs (VAAI and VASA), and how they interact on arrays that are compliant with both. So, instead of providing an individual answer, I figured it would be best to share with a broader audience, since it’s probably something that a lot of people wonder about.

Virtual Volumes is based on the vSphere Storage APIs – Storage Awareness (VASA), and some of its operations rely on the ability to offload work directly to compatible storage arrays. The vSphere Storage APIs – Array Integration (VAAI) also provide offload capabilities, especially when it comes to cloning and migration operations. The question asked is listed below:

With VVols when a VM is cloned on an array that supports VAAI does VAAI & VASA complement each other or VASA is used for this operation?

That was a loaded question, and I figured it would be better to explain with some illustrations and specific details around what happens, because the way the cloning operation works depends on a few facts and scenarios.

Scenario A

When virtual machines are stored on a VVol container, anytime a virtual machine is cloned onto the same VVol container, the system will use the VASA API cloneVirtualVolume and offload the entire clone operation to the array.


Scenario B

If a clone operation is performed across different storage containers, the operation may or may not be offloaded via the VASA API cloneVirtualVolume. This depends on the vendor implementation and environment constraints. For example:

If there is a VASA Provider managing two different arrays from the same vendor, and each array has a VVol container (VVol-a and VVol-b), then when a clone operation is performed the system will attempt the cloneVirtualVolume VASA primitive, because the source and destination containers are both VVols. Chances are this operation will fail, because the VASA Provider has no way to offload the clone operation from the source array’s VVol container (VVol-a) to the target array’s VVol container (VVol-b).

Another example is an array that exports two VVol containers. Depending on how the containers are configured, the array vendor may or may not be able to perform a VM clone operation across the two containers, due to constraints in the vendor’s implementation. For example, the two containers may belong to independent VVol groups that are not compatible with one another, which prevents the clone operation from being offloaded across them.


For both examples, if the VASA call cloneVirtualVolume fails, the system will then fall back to a host-driven mechanism using the bitmap APIs.

If the target does not support this type of workflow, the system will perform a host-based bitmap copy (making use of the allocatedBitmapVirtualVolume and/or unsharedBitmapVirtualVolume VASA APIs) and use the VMkernel data mover to service the operation request.

Scenario C

Another possible scenario is cloning from a VMFS datastore on a VAAI-enabled array to a VVol container on the same array. In this scenario, the system will use the XCOPY VAAI offload to accelerate the clone. Note that this is a one-way workflow; in other words, a VVol > VMFS clone does not use XCOPY.
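
Pulling the three scenarios together, the decision flow looks roughly like the sketch below. This is my paraphrase of the behavior described above, not VMware’s actual code; the primitive names mirror the VASA and VAAI calls already mentioned.

```ruby
# Hypothetical paraphrase of the clone paths in Scenarios A-C.
def clone_path(source, target, offload_possible: true)
  case [source, target]
  when [:vvol, :vvol]
    if offload_possible
      'array offload via the cloneVirtualVolume VASA primitive'  # Scenario A, and B when it succeeds
    else
      # Scenario B fallback: bitmap VASA APIs + VMkernel data mover
      'host copy via allocatedBitmapVirtualVolume/unsharedBitmapVirtualVolume'
    end
  when [:vmfs, :vvol]
    'XCOPY VAAI offload (same VAAI-enabled array)'               # Scenario C
  when [:vvol, :vmfs]
    'host-based copy (XCOPY is not used in this direction)'      # Scenario C, reverse
  end
end

puts clone_path(:vvol, :vvol)                           # Scenario A
puts clone_path(:vvol, :vvol, offload_possible: false)  # Scenario B fallback
puts clone_path(:vmfs, :vvol)                           # Scenario C
```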


I hope this answers the question and is helpful for everyone else.

- Enjoy

For future updates on Virtual Volumes (VVols), Virtual SAN (VSAN), and other Software-defined Storage technologies, as well as vSphere + OpenStack, be sure to follow me on Twitter: @PunchingClouds

New Virtual SAN Ready Nodes from Cisco and Hitachi!

What is the VMware Virtual SAN team announcing today?

Following the initial launch of the new Virtual SAN Ready Nodes two weeks back, the VMware Virtual SAN product team is launching more Virtual SAN Ready Nodes today, this time from leading OEM vendors Cisco (4 Ready Nodes) and Hitachi (1 Ready Node).


We now have a total of 29 Ready Nodes from leading OEMs including the ones we announced two weeks back from Dell (3 Ready Nodes), Fujitsu (5 Ready Nodes), HP (10 Ready Nodes) and SuperMicro (6 Ready Nodes)!  The more, the merrier!

We also have some exciting updates on the Ready Nodes from the other OEM vendors that we released two weeks back!

Continue reading

Virtual SAN Data Management Operations

Since the release of Virtual SAN, one of the most popular topics of discussion has revolved around solution sizing and performance capabilities. For the most part, the majority of guidance around Virtual SAN designs has focused on capacity sizing and the performance characteristics of virtual machine workloads.

However, there are other aspects of sizing and design criteria for Virtual SAN, specifically those related to system-wide performance and availability during data management operations. The data management operations of Virtual SAN are focused on data resynchronization and rebalancing among all copies of data. The functions and impact of these operations should be part of all Virtual SAN design and sizing exercises for optimal results.

The design of data management operations is intrinsic to the value proposition of Virtual SAN. It is important to know the events that activate them and to understand the impact they introduce during normal operations. Inadequate sizing and design can have an impact on the overall performance expectations and availability capabilities of the solution.

This white paper provides detailed information about the Virtual SAN data management operations, their functions, and the types of events that trigger them, as well as recommendations for achieving performance and recoverability results based on cluster design and sizing. The Virtual SAN Data Management Operations white paper can be downloaded from the VMware Virtual SAN product page as well as directly from the link provided below:

VMware Virtual SAN Data Management Operations

- Enjoy

For future updates on Virtual SAN (VSAN), Virtual Volumes (VVols), and other Software-defined Storage technologies, as well as vSphere + OpenStack, be sure to follow me on Twitter: @PunchingClouds


Official VMware Virtual SAN Blog Index

Introducing the OFFICIAL VMware Virtual SAN Blog Index page. This page will serve as the centralized repository for all official VMware Virtual SAN related and supported information on the following topics and more:

  • Official Announcements
  • Technical Information
  • Interoperability
  • Hardware
  • Performance Benchmark

The page will be frequently updated with all the content being released by the Virtual SAN team. Make sure to bookmark the page to stay up to date with the latest and greatest official and supported Virtual SAN content.

VMware Virtual SAN Blog Index

VMware Virtual SAN Hardware

VMware Virtual SAN Interoperability & Recommendations

VMware Virtual SAN Performance Benchmarks

VMware Virtual SAN White Papers

- Enjoy

For future updates on Virtual SAN (VSAN), Virtual Volumes (VVols), and other Software-defined Storage technologies, as well as vSphere + OpenStack, be sure to follow me on Twitter: @PunchingClouds

vSphere IAAS Interoperability: Virtual SAN, NSX, OpenStack

Just in time, right before everyone heads off for a long 4th of July weekend here in the good old U.S. of A., I wanted to share an integration demo that I’ve been holding on to for some time now. Hopefully everyone can see the fireworks delivered by the demo as well.

In this demonstration we showcase the advanced IaaS features and deep integration of vSphere with Virtual SAN and NSX, using OpenStack as the cloud management portal for a multi-tenant IaaS platform. To prove our point, this is not just some isolated lab environment; this is a real environment running today, and it’s leveraging currently available technologies.

The environment utilized in this demonstration is actually the NSBU internal cloud, which hosts over 200 environments running a mix of KVM and vSphere. Virtual SAN is used for all vSphere datastores, and NSX is used for all tenant connectivity, with OpenStack providing a scalable and secure multi-tenant, multi-hypervisor environment.

This demonstration showcases the agility and flexibility of the integration capabilities of vSphere, NSX, and Virtual SAN. In the demonstration we rapidly stand up a two-tier application and demonstrate the connectivity between all elements of the virtual machines providing the application.

When complete, all instances, networks, and routers are decommissioned and the tenant is returned to an ‘empty state’. The whole process takes less than 10 minutes (as can be seen in the instance uptime section of the OpenStack Horizon UI).

Update on Virtual SAN Hardware Compatibility Guide

VMware is updating the VMware Virtual SAN Compatibility Guide (VCG) as part of our ongoing testing and certification efforts on Virtual SAN compatible hardware.

Specifically, we’re removing low-end IO controllers from the list due to the impact these controllers have on Virtual SAN. The choice of IO controller really matters when it comes to sustained IO performance. Even with a design like Virtual SAN, where a flash device caches IOs, when the flash device sits behind a controller, all IOs go through that controller in each server. Outstanding IOs are managed using a queue on the controller, and IOs are de-staged from the queue to the various storage devices. IO controllers with low queue depths are not well suited for the type of workloads that Virtual SAN is designed to support. These controllers offer very low IO throughput, and hence the probability of the controller queue filling up is high. When the controller IO queue fills up, IO operations time out and the VMs become unresponsive.

The situation is exacerbated during rebuild operations. Although Virtual SAN has a built-in throttling mechanism for rebuild operations, it is designed to always make at least minimal progress, in order to avoid Virtual SAN objects being exposed to double component failures for a long time. In configurations with low-queue-depth controllers, even this minimal progress can saturate the controllers, leading to high latency and IO timeouts.

Given the above, VMware has decided to remove controllers with a queue depth of less than 256 from the Virtual SAN compatibility list. While fully functional, these controllers offer IO throughput that is too low to sustain the performance requirements of most VMware environments.
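
To see why a shallow queue saturates, a rough back-of-the-envelope using Little’s Law (outstanding IOs ≈ IOPS × latency) helps. The workload numbers below are made up for illustration; they are not measurements of any particular controller.

```ruby
# Little's Law: the number of IOs in flight equals throughput times latency.
# If that number exceeds the controller queue depth, IOs back up and time out.
def outstanding_ios(iops, latency_ms)
  (iops * latency_ms / 1000.0).round
end

workloads = [[20_000, 2], [40_000, 5], [25_000, 8]]  # [IOPS, latency in ms]
workloads.each do |iops, lat|
  in_flight = outstanding_ios(iops, lat)
  [32, 256, 600].each do |depth|
    state = in_flight > depth ? 'SATURATED: IOs queue up and time out' : 'ok'
    puts format('%6d IOPS @ %2d ms => ~%3d IOs in flight; queue depth %3d: %s',
                iops, lat, in_flight, depth, state)
  end
end
```

Even a modest mixed workload keeps tens to hundreds of IOs in flight, which a shallow queue cannot absorb, while a queue depth of 256 or more leaves headroom for rebuild-heavy bursts.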

For a complete list of controllers that will be removed from the compatibility list, please refer to this Knowledge Base article.

If you have purchased Virtual SAN for use with these controllers, please contact VMware customer care for next steps.

Going forward, in order to make it easy for our customers and partners to put together the appropriate Virtual SAN solution for their specific scenario, we are working with our partners to list the queue depth of all controllers in the VCG in the coming weeks. For additional hardware guidance on Virtual SAN, please refer to Virtual SAN Hardware Guidance.