
Virtual Volumes (VVols) vSphere APIs & Cloning Operation Scenarios

I’ve been getting a number of questions about how Virtual Volumes interacts with the vSphere Storage APIs (VAAI and VASA) on arrays that are compliant with both. So, instead of answering each question individually, I figured it would be best to share with a broader audience, since it’s probably something a lot of other people wonder about.

Virtual Volumes is built on the vSphere APIs for Storage Awareness (VASA), and much of its functionality depends on the ability to offload operations directly to compatible storage arrays. The vSphere Storage APIs for Array Integration (VAAI) also provide offloading capabilities, especially for cloning and migration operations. Here is the question that was asked:

With VVols, when a VM is cloned on an array that supports VAAI, do VAAI and VASA complement each other, or is VASA alone used for the operation?

That’s a loaded question, so I figured it would be better to explain with some illustrations and specific details, because the way the cloning operation works depends on a few factors and scenarios.

Scenario A

When virtual machines are stored on a VVol container, any time a virtual machine is cloned within the same VVol container, the system uses the VASA API cloneVirtualVolume and offloads the entire clone operation to the array.


Scenario B

If a clone operation is performed across different storage containers, the operation may or may not be offloaded via the VASA API cloneVirtualVolume. This depends entirely on the vendor implementation and environment constraints. For example:

If a single VASA Provider manages two different arrays from the same vendor, and each array has a VVol container (VVol-a and VVol-b), a clone operation between them will use the cloneVirtualVolume VASA primitive, because the source and destination containers are both VVols. Chances are this operation will fail, because the VASA Provider has no way to offload the clone operation from the source array’s VVol container (VVol-a) to the target array’s VVol container (VVol-b).

Another example is an array that exports two VVol containers. Depending on how the containers are configured, the array vendor may or may not be able to perform a VM clone operation across the two containers, due to constraints in the vendor’s implementation; for example, the containers may belong to two independent VVol groups that are not compatible with one another, which prevents the clone operation from being offloaded across the two.


In both examples, if the VASA call cloneVirtualVolume fails, the system falls back to a host-driven mechanism using the bitmap APIs.

If the target does not support this type of workflow, the system uses a host-based bitmap copy (making use of the allocatedBitmapVirtualVolume and/or unsharedBitmapVirtualVolume VASA APIs) and the vmkernel data mover to service the operation request.
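
To make the decision flow for scenarios A and B concrete, here is a minimal Python sketch. The VASA primitive names (cloneVirtualVolume, allocatedBitmapVirtualVolume) come from the text above; everything else, including the function names, objects, and error type, is hypothetical pseudocode rather than actual vSphere internals:

```python
# Hypothetical sketch of the VVol clone path (scenarios A and B).
# Only the VASA primitive names are real; all other names are illustrative.

class VasaOffloadError(Exception):
    """Raised when the VASA provider cannot offload the clone to the array."""

def data_mover_copy(source, dest, extent):
    """Placeholder for the vmkernel software data mover (host-side copy)."""
    dest.write(extent, source.read(extent))

def clone_vvol(vasa, source_vvol, dest_container):
    """Clone a VVol, preferring full array offload over a host-driven copy."""
    try:
        # Scenario A (and the happy path of scenario B): offload the
        # entire clone operation to the array via the VASA provider.
        return vasa.cloneVirtualVolume(source_vvol, dest_container)
    except VasaOffloadError:
        # Scenario B failure case: the provider cannot copy across the two
        # containers (different arrays, or incompatible VVol groups).
        # Fall back to a host-based bitmap copy: the bitmap APIs report
        # which blocks are actually allocated, and the vmkernel data mover
        # copies only those blocks.
        bitmap = vasa.allocatedBitmapVirtualVolume(source_vvol)
        dest_vvol = dest_container.create_vvol(size=source_vvol.size)
        for extent in bitmap.allocated_extents():
            data_mover_copy(source_vvol, dest_vvol, extent)
        return dest_vvol
```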

Scenario C

Another possible scenario is cloning from a VMFS datastore on a VAAI-enabled array to a VVol container on the same array. In this scenario, the system uses the XCOPY VAAI offload to accelerate the clone. Note that this is a one-way workflow; in other words, a VVol > VMFS clone does not use XCOPY.
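
Scenario C adds one more branch to the same decision flow, and the one-way nature of the XCOPY path is easy to capture in a small sketch. As before, XCOPY and cloneVirtualVolume are the real primitives; the function itself is an illustrative simplification:

```python
def choose_clone_mechanism(source_type: str, dest_type: str) -> str:
    """Map (source, destination) datastore types to the expected copy path.
    Illustrative only; XCOPY and cloneVirtualVolume are the real primitives."""
    if source_type == "VVOL" and dest_type == "VVOL":
        return "VASA cloneVirtualVolume (bitmap-copy fallback)"  # scenarios A/B
    if source_type == "VMFS" and dest_type == "VVOL":
        return "VAAI XCOPY offload"                              # scenario C
    return "host data mover (software copy)"                     # incl. VVol > VMFS

print(choose_clone_mechanism("VMFS", "VVOL"))  # VAAI XCOPY offload
print(choose_clone_mechanism("VVOL", "VMFS"))  # host data mover (software copy)
```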


I hope this answers the question and is helpful for everyone else.

- Enjoy

For future updates on Virtual Volumes (VVols), Virtual SAN (VSAN) and other Software-defined Storage technologies as well as vSphere + OpenStack be sure to follow me on Twitter: @PunchingClouds

New Virtual SAN Ready Nodes from Cisco and Hitachi!

What is the VMware Virtual SAN team announcing today?

Following the initial launch of new Virtual SAN Ready Nodes two weeks back, the VMware Virtual SAN product team is launching more Virtual SAN Ready Nodes today, this time from leading OEM vendors Cisco (4 Ready Nodes) and Hitachi (1 Ready Node).


We now have a total of 29 Ready Nodes from leading OEMs including the ones we announced two weeks back from Dell (3 Ready Nodes), Fujitsu (5 Ready Nodes), HP (10 Ready Nodes) and SuperMicro (6 Ready Nodes)!  The more, the merrier!

We also have some exciting updates on the Ready Nodes from the other OEM vendors that we released two weeks back!


Virtual SAN Data Management Operations

Since the release of Virtual SAN, one of the most popular topics of discussion has revolved around solution sizing and performance capabilities. For the most part, the guidance around Virtual SAN designs has focused on capacity sizing and the performance characteristics of virtual machine workloads.

However, there are other aspects of sizing and design criteria for Virtual SAN, specifically those related to system-wide performance and availability during data management operations. The data management operations of Virtual SAN are focused on data resynchronization and rebalancing among all copies of data. The functions and impact of these operations should be part of every Virtual SAN design and sizing exercise for optimal results.

The design of the data management operations is intrinsic to the value proposition of Virtual SAN. It is important to know the events that trigger them and to understand the impact they introduce during normal operations. Inadequate sizing and design can affect the overall performance and availability capabilities of the solution.
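
As a rough illustration of why these operations belong in every sizing exercise, consider a back-of-envelope estimate of resynchronization time after a host failure. The figures below are assumptions picked purely for illustration, not VMware sizing guidance:

```python
# Back-of-envelope resync estimate (all numbers are illustrative assumptions).
# When a host fails, Virtual SAN must rebuild the component copies it held
# on the remaining hosts, and that traffic competes with normal VM I/O.

data_to_resync_gb = 2_000       # assumed: ~2 TB of components on the failed host
rebuild_throughput_mbps = 400   # assumed: MB/s left over after serving VM I/O

resync_hours = (data_to_resync_gb * 1024) / rebuild_throughput_mbps / 3600
print(f"Estimated resync time: {resync_hours:.1f} hours")  # ~1.4 hours

# Until the resync completes, the affected objects run with reduced
# redundancy, which is why rebuild headroom belongs in the design.
```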

This white paper provides detailed information about the Virtual SAN data management operations, their functions, and the types of events that trigger them, as well as recommendations for achieving performance and recoverability results based on cluster design and sizing. The paper can be downloaded from the VMware Virtual SAN product page as well as from the direct link provided below:

VMware Virtual SAN Data Management Operations

- Enjoy

For future updates on Virtual SAN (VSAN), Virtual Volumes (VVols), and other Software-defined Storage technologies as well as vSphere + OpenStack be sure to follow me on Twitter: @PunchingClouds


Official VMware Virtual SAN Blog Index

Introducing the OFFICIAL VMware Virtual SAN Blog Index page. This page will serve as the centralized repository for all official VMware Virtual SAN related and supported information on the following topics and more:

  • Official Announcements
  • Technical Information
  • Interoperability
  • Hardware
  • Performance Benchmark

The page will be updated frequently with all the content released by the Virtual SAN team. Make sure to bookmark the page to stay up to date with the latest and greatest official and supported Virtual SAN content.

VMware Virtual SAN Blog Index

VMware Virtual SAN Hardware

VMware Virtual SAN Interoperability & Recommendations

VMware Virtual SAN Performance Benchmarks

VMware Virtual SAN White Papers

-Enjoy

For future updates on Virtual SAN (VSAN), Virtual Volumes (VVols), and other Software-defined Storage technologies as well as vSphere + OpenStack be sure to follow me on Twitter: @PunchingClouds

vSphere IAAS Interoperability: Virtual SAN, NSX, OpenStack

Just in time, right before everyone heads off for a long 4th of July weekend here in the good old U.S. of A, I wanted to share an integration demo that I’ve been holding onto for some time now. Hopefully everyone can see the fireworks delivered by the demo as well.

In this demonstration we showcase the advanced IaaS features and deep integration of vSphere with Virtual SAN and NSX, using OpenStack as the cloud management portal for a multi-tenant IaaS platform. And to prove the point, this is not just some isolated lab environment; this is a real environment running today, leveraging currently available technologies.

The environment used in this demonstration is actually the NSBU internal cloud, which hosts over 200 environments as a mix of KVM and vSphere. Virtual SAN is used for all vSphere datastores, and NSX is used for all tenant connectivity, with OpenStack providing a scalable and secure multi-tenant, multi-hypervisor environment.

This demonstration showcases the agility and flexibility of the integration capabilities of vSphere, NSX, and Virtual SAN. In the demonstration we rapidly stand up a two-tier application and demonstrate the connectivity between all elements of the virtual machines providing the application.

When complete, all instances, networks, and routers are decommissioned and the tenant is returned to an ‘empty state’. The whole process takes less than 10 minutes (as can be seen in the instance uptime section of the Horizon UI).

VMware vCenter Orchestrator – vCenter Invalid Credentials

There are a few errors I’ve run into over the years that just stump me. Like you, I start doing some web searches and piecing things together. I cross-reference what I find with people I think may have more details for me. Well, I have recently had the “Invalid credentials” error in VMware vCenter Orchestrator (vCO) when viewing my vCenter Server instance in the vCO inventory. I hate to admit that it had me stumped for a while.

When adding my vCenter Server in the vCO plug-ins section, the connection and credentials tested out just fine, so why was the vCO client giving me this error?

Update on Virtual Hardware Compatibility Guide

VMware is updating the VMware Virtual SAN Compatibility Guide (VCG) as part of our ongoing testing and certification efforts on Virtual SAN compatible hardware.

Specifically, we’re removing low-end IO controllers from the list due to the impact these controllers have on Virtual SAN. The choice of IO controller really matters when it comes to sustained IO performance. Even with a design like Virtual SAN, where a flash device caches IOs, if the flash device sits behind a controller, all IOs go through that controller on each server. Outstanding IOs are managed in a queue on the controller and are de-staged from the queue to the various storage devices. IO controllers with low queue depths are not well suited to the type of workloads Virtual SAN is designed to support: they offer very low IO throughput, so the probability of the controller queue filling up is high. When the controller IO queue fills, IO operations time out and VMs become unresponsive.

The situation is exacerbated during rebuild operations. Although Virtual SAN has a built-in throttling mechanism for rebuilds, it is designed to keep making minimal progress so that Virtual SAN objects are not exposed to double component failures for a long time. In configurations with low-queue-depth controllers, even this minimal progress can saturate the controllers, leading to high latency and IO timeouts.

Given the above, VMware has decided to remove controllers with a queue depth of less than 256 from the Virtual SAN compatibility list. While fully functional, these controllers offer IO throughput that is too low to sustain the performance requirements of most VMware environments.
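
A quick application of Little’s Law shows why queue depth caps sustained throughput: the number of IOs in flight divided by the per-IO service time bounds the IOPS a controller can deliver. The 5 ms service time below is an assumption chosen only to illustrate the scale of the difference:

```python
# Why controller queue depth caps throughput (Little's Law: IOPS = QD / latency).
# The 5 ms average per-IO service time is an illustrative assumption.

service_time_s = 0.005  # assumed average device service time per IO

for queue_depth in (25, 256, 600):
    max_iops = queue_depth / service_time_s
    print(f"queue depth {queue_depth:>3}: ~{max_iops:>9,.0f} IOPS sustained")

# queue depth  25: ~    5,000 IOPS sustained
# queue depth 256: ~   51,200 IOPS sustained
# queue depth 600: ~  120,000 IOPS sustained
# Once more IOs are outstanding than the queue can hold, requests back up
# and time out, and VMs appear unresponsive -- the failure mode described above.
```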

For a complete list of controllers that will be removed from the compatibility list, please refer to this Knowledge Base article.

If you have purchased Virtual SAN for use with these controllers, please contact VMware customer care for next steps.

Going forward, in order to make it easy for our customers and partners to put together the appropriate Virtual SAN solution for their specific scenario, we are working with our partners to list the queue depth of all controllers in the VCG in the coming weeks. For additional hardware guidance on Virtual SAN, please refer to Virtual SAN Hardware Guidance.

Virtual Volumes Beta

VMware has officially announced the launch of two beta programs: the vSphere beta and the Virtual Volumes (VVols) beta.

While the beta programs are private, for the first time, and unlike previous beta cycles for vSphere, they are open for everyone to sign up. This approach allows participants to help define the direction of the world’s most widely adopted, trusted, and robust virtualization platform. With Virtual Volumes (VVols), VMware offers a new paradigm, one in which an individual virtual machine and its disks, rather than a LUN, become the unit of storage management for a storage system. Virtual volumes encapsulate virtual disks and other virtual machine files and natively store the files on the storage system.

By using a special set of APIs called vSphere APIs for Storage Awareness (VASA), the storage system becomes aware of the virtual volumes and their associations with the relevant virtual machines. Through VASA, vSphere and the underlying storage system establish a two-way out-of-band communication to perform data services and offload certain virtual machine operations to the storage system. For example, such operations as snapshots and clones can be offloaded.

For in-band communication with Virtual Volumes storage systems, vSphere continues to use standard SCSI and NFS protocols. As a result, Virtual Volumes supports any type of storage: iSCSI, Fibre Channel, FCoE, and NFS.

These are the key benefits of Virtual Volumes:

  • Operational transformation with Virtual Volumes when data services are enabled at the application level
  • Improved storage utilization with granular level provisioning
  • Common management using Policy Based Management

Sign up for the vSphere beta and visit the Virtual Volumes dedicated page to learn more. Here are some of the early demos that have been developed by partners.

Tintri

SolidFire

HP

NetApp

EMC

Nimble Storage

For a larger list of contributing partners and more recent demos, visit the Virtual Volumes beta community page.

- Enjoy

For future updates, be sure to follow me on Twitter: @PunchingClouds


Now Open: VMware vSphere Beta Program

Today we are excited to announce the launch of the vSphere Beta Program. The vSphere Beta is open for everyone to sign up and allows participants to help define the direction of the world’s most widely adopted, trusted, and robust virtualization platform. Future releases of vSphere strive to expand on vSphere 5.5 with new features and capabilities that improve IT’s efficiency, flexibility, and agility and accelerate your journey to the Software-Defined Enterprise. Your participation will help us continue to drive toward this goal.

This vSphere Beta Program leverages a private Beta community to download software and share information. We will provide discussion forums, webinars, and service requests to enable you to share your feedback with us.

You can expect to download, install, and test vSphere Beta software in your environment. All testing is free-form and we encourage you to use our software in ways that interest you. This will provide us with valuable insight into how you use vSphere in real-world conditions and with real-world test cases, enabling us to better align our product with your business needs.

The vSphere Beta Program has no established end date and you can provide comments throughout the program. But we strongly encourage your participation and feedback in the first 4-6 weeks of the program.

Some of the many reasons to participate in this vSphere Beta Program include:

  • Receive early access to the vSphere Beta products
  • Gain early knowledge of and visibility into product roadmap
  • Interact with the vSphere Beta team consisting of Product Managers, Engineers, Technical Support, and Technical Writers
  • Provide direct input on product functionality, configurability, usability, and performance
  • Provide feedback influencing future products, training, documentation, and services
  • Collaborate with other participants, learn about their use cases, and share advice and learnings

Sign up and join the vSphere Beta Program today at: https://communities.vmware.com/community/vmtn/vsphere-beta

Virtual SAN Partner Whitepapers

As Virtual SAN continues to gain adoption within the industry, VMware is working with technology partners to develop and expand Virtual SAN solution guidance across different platforms. A couple of key Virtual SAN whitepapers have been developed in conjunction with our flash vendor partners Fusion-io and SanDisk.

  • VMware Virtual SAN and Fusion-io Reference Architecture

This paper provides a step-by-step reference architecture to simplify the process of deploying VMware’s Virtual SAN technology using Fusion-io as the flash acceleration layer.

http://www.fusionio.com/white-papers/vmware-virtual-san-and-fusion-io-reference-architecture

  • High Performance VDI using SanDisk SSDs, VMware’s Horizon View, and Virtual SAN: A Deployment and Technical Considerations Guide

This whitepaper demonstrates a VMware Horizon View virtual desktop infrastructure (VDI) on Virtual SAN and provides View Planner performance metrics in a 100-desktops-per-node environment.

http://www.sandisk.com/assets/docs/high-performance-vdi-using-sandisk-ssds-vmware-horizon-view-and-virtual-san.pdf