
Category Archives: Storage

VMware Virtual SAN Alarms for vCenter Server with PowerCLI

I was recently involved in a couple of customer conversations where the main topics were focused on monitoring and troubleshooting events in vCenter, particularly for Virtual SAN.

I know that particular topic has been covered a few times in the past, not only on the VMware corporate storage blog but also by other community blogs. To be more specific, one of the VSAN Champions, William Lam, has covered this topic extensively on his personal blog.

The work that we have done on the topic of vCenter Server Alarms and Virtual SAN stems from the findings identified in two articles published by William. For more information on the recommended vCenter Server Alarms for Virtual SAN and how to add and configure them, take a look at the articles listed below:

With vSphere 6.0 and Virtual SAN 6.0 nearing general availability, this script can make things a lot easier for all Virtual SAN customers and provide a simplified way to get all the available vCenter Server alarms for Virtual SAN added and configured within seconds.

I got a chance to work on this little nugget with one of the baddest PowerCLI gurus on the planet and fellow VSAN Champion, Alan Renouf, as well as William Lam, both of whom are members of the VMware virtualization team codenamed #TheWreckingCrew. Here is PowerCLI sample code that can be utilized to add and configure all of the vCenter Server Alarms for Virtual SAN. These alarms are applicable to both Virtual SAN 5.5 and 6.0. Continue reading
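To give a flavor of the approach before you grab the full script, here is a minimal PowerCLI sketch, under stated assumptions, of how an event-based vCenter alarm can be created for Virtual SAN VOB events through the AlarmManager API. The vCenter address, alarm names, and the two event IDs shown are illustrative placeholders only; refer to William's articles for the complete recommended list of events.

# Minimal sketch: create event-based vCenter alarms for a couple of VSAN VOB
# event IDs. The vCenter address and event IDs below are illustrative only;
# consult the referenced articles for the full recommended list.
Connect-VIServer -Server vcenter.lab.local

$vsanEventIds = @(
    "esx.problem.vob.vsan.lsom.diskerror",   # example: VSAN disk error event
    "esx.problem.vob.vsan.pdl.offline"       # example: VSAN device offline event
)

$alarmMgr   = Get-View AlarmManager
$rootFolder = Get-Folder -NoRecursion        # define the alarms at the vCenter root

foreach ($eventId in $vsanEventIds) {
    # Trigger on the ESXi host that logs the event and mark the alarm red
    $expression             = New-Object VMware.Vim.EventAlarmExpression
    $expression.EventType   = "EventEx"
    $expression.EventTypeId = $eventId
    $expression.ObjectType  = "HostSystem"
    $expression.Status      = "red"

    $spec             = New-Object VMware.Vim.AlarmSpec
    $spec.Name        = "VSAN Alarm - $eventId"
    $spec.Description = "Raised when $eventId is observed"
    $spec.Enabled     = $true
    $spec.Expression  = New-Object VMware.Vim.OrAlarmExpression
    $spec.Expression.Expression += $expression

    $alarmMgr.CreateAlarm($rootFolder.ExtensionData.MoRef, $spec) | Out-Null
}

The same loop can simply be pointed at the full list of Virtual SAN VOB event IDs from the referenced articles to get all of the recommended alarms defined in one pass.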

VMware Virtual SAN 6.0: Bootstorm Demonstration

Since the official announcement of the VMware Virtual SAN All-Flash architecture, most of the conversations have focused on the solution's incredible performance capabilities and attributes with regards to IOPS, predictable performance, and sub-millisecond latencies. All of those attributes are great and part of the reason why Virtual SAN 6.0 as a storage platform and its use cases have been expanded to also focus on business-critical applications and large enterprise environments.

I want to turn the spotlight onto one of the many supported use cases for Virtual SAN 6.0 and highlight one of the invaluable capabilities of the new platform with regards to Virtual Desktop Infrastructures (VDI).

Some of the functional requirements for large enterprise infrastructure designs for VDI include the characterization of boot, refresh, and provision times for standard operations and worst case scenarios.

I have seen my fair share of VDI designs and demonstrations of different platforms showcasing bootstorm, refresh, and rebuild times, and they all do a pretty good job. With that said, I wanted to take the opportunity to showcase the powerful capabilities of Virtual SAN 6.0 by demonstrating a bootstorm at the maximum supported scale of the platform. This bootstorm demonstration consists of 6401 desktops on a Virtual SAN 6.0 All-Flash 64-node cluster (BigDaddy).
The key and impressive items showcased as part of the demonstration are the following:

  • BigDaddy – 64 Node All-Flash Virtual SAN Cluster
  • Desktops – booting all 6401 desktops in the cluster at once (in batches of 1024 at a time)
  • Boot Time – about 24 minutes to boot all desktops, plus about 19 minutes for IP address allocation, for a total of about 40 minutes

This demonstration does not use tampered or custom configurations of any of the Virtual SAN settings; it is what we generally call an out-of-the-box experience. Another important thing to point out here is my definition of completed boot time. By complete boot I mean not just when the desktops are powered on, but when all the desktops have successfully acquired an IP address and are truly up, running, and ready to be used.

In the interest of time, the demonstration has been sped up from its original length to about 5 minutes. Pay attention to the timestamp displayed in the command line interface to validate the accuracy of the boot time.

This demonstration successfully highlights one of the many powerful capabilities available in VMware Virtual SAN 6.0.

 

- Enjoy

For future updates on Virtual SAN (VSAN), vSphere Virtual Volumes (VVOLs) and other Software-defined Storage technologies as well as vSphere + OpenStack be sure to follow me on Twitter: @PunchingClouds

What’s All the Buzz About Software-Defined Storage?

By now, you’ve more than likely heard something about Software-Defined Storage. With every mention of the term, you may be wondering, “What does it mean for me?”

Wonder no longer!

The VMware Software-Defined Storage approach enables a fundamentally more efficient operational model, driving transformation through the hypervisor and bringing to storage the same operational efficiency that server virtualization brought to compute. Software-Defined Storage will enable you to better handle some of the most pressing challenges storage systems face today.

During this webcast, Mauricio Barra, Senior Product Marketing Manager at VMware, will discuss the VMware Software-Defined Storage vision, the role of the hypervisor in transforming storage, as well as key architectural components of VMware Software-Defined Storage.

If you are looking to understand how Software-Defined Storage, along with the enhanced VMware Virtual SAN 6 and new VMware vSphere Virtual Volumes, can benefit your organization, now is your chance.

Register today and take the next step toward making Software-Defined Storage a reality.

Be sure to subscribe to the Virtual SAN blog or follow our social channels at @vmwarevsan and Facebook.com/vmwarevsan for the latest updates.

For more information about VMware Virtual SAN, visit http://www.vmware.com/products/virtual-san.

vSphere Virtual Volumes Interoperability: VAAI APIs vs VVOLs

VMware introduced block-based VAAI APIs as part of the vSphere 4.1 release. These APIs helped improve VMFS performance by offloading some of the heavy operations to the storage array. In subsequent releases, VMware added VAAI APIs for NAS, thin provisioning, and T10 command support for the Block VAAI APIs.

Now with Virtual Volumes (VVOLs) VMware is introducing a new virtual machine management and integration framework that exposes virtual disks as the primary unit of data management for storage arrays. This new framework enables array-based operations at the virtual disk level that can be precisely aligned to application boundaries with the capability of providing a policy-based management approach per virtual machine.

The question now is: what happens to the VAAI APIs (NAS and Block), and how will Virtual Volumes co-exist with them? With Virtual Volumes, aside from the data path, the ESX hosts also control the connection path to the storage arrays. The Vendor Provider typically arranges the path to the storage arrays. In this case, Virtual Volumes can be considered a richer extension of the VAAI NAS APIs. In July of last year I published an article, "Virtual Volumes (VVols) vSphere APIs & Cloning Operation Scenarios", in which I discussed the interoperability between VAAI and VVOLs during cloning operations in different scenarios; consider having another look. Now let's go over a set of interaction scenarios between the VAAI APIs and Virtual Volumes.

[Image: VAAI vs. VVOLs interoperability overview]

VAAI Block and VVOLs:

VAAI Block defines basic SCSI primitives, which allow vSphere (primarily VMFS) to offload pieces of its operations to the array. There is still a heavy dependency on VMFS playing the role of an orchestrator and sending individual VAAI Block commands to the storage array.

With VVOLs, the storage arrays are aware of the virtual machine's disks and hence can efficiently perform operations such as snapshots, clones, and zeroing on those disks. Still, the VAAI Block and thin-provisioning primitives co-exist with VVOLs.

  • ATS – All config VVOL objects stored in a Block VVOLs datastore are formatted with VMFS and hence require support for ATS commands. This support is detected based on ATS support for the PE LUN to which the VVOLs are bound.
  • XCOPY – With VVOLs, ESX will always try to use the array-based VVOLs copy mechanism defined by the copyDiffsToVirtualVolume or cloneVirtualVolume primitives. If these are not supported, it will fall back to software copy. Since software copy involves copying data between Protocol Endpoint (PE) LUNs and VMFS LUNs, there is still potential to use the XCOPY command during software data copy. When falling back to software copy, vSphere will use the XCOPY command when moving a virtual machine from a VMFS datastore to a VVOLs datastore or between two VVOLs datastores. In the first release, vSphere will not try to use XCOPY if the virtual machine is moving from a VVOLs datastore to a VMFS datastore. vSphere will detect XCOPY support for individual VVOLs based on the support of VAAI XCOPY on the PE LUN to which they are bound (one way to check a device's VAAI support is sketched after this list).
  • Block Zeroing – The main purpose of this primitive is to initialize thick disks provisioned on VMFS datastores. With VVOLs, the VM provisioning type is specified as part of the profile information passed during VVOL creation, and config VVOLs, which are formatted with VMFS, are "thin" by definition. Also, a config VVOL is very small (4GB by default) and contains only small files such as disk descriptors, VM config files, stats, and log data. So the Block Zeroing primitive is not used for VVOLs by vSphere.
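As a practical aside to the detection behavior described above, the VAAI Block primitive support a host reports for its devices, including Protocol Endpoint LUNs, can be inspected from the command line. Here is a minimal PowerCLI sketch, under stated assumptions, that surfaces it through the esxcli storage core device vaai status get namespace; the vCenter address and host name are hypothetical placeholders, and it assumes a PowerCLI release that provides the -V2 Get-EsxCli interface.

# Minimal sketch: report the VAAI Block primitive support status of the
# devices seen by one host. Host and vCenter names are placeholders, and the
# property names in the returned objects can vary between PowerCLI releases.
Connect-VIServer -Server vcenter.lab.local

$esxcli = Get-EsxCli -VMHost (Get-VMHost -Name "esx01.lab.local") -V2

# Equivalent of running "esxcli storage core device vaai status get" on the
# host; each entry lists the ATS, Clone (XCOPY), Zero, and Delete (UNMAP) status.
$esxcli.storage.core.device.vaai.status.get.Invoke() | Format-List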

VAAI NAS and VVOLs:

Unlike SCSI, NFSv3 is a frozen protocol, which means all VAAI NAS features came via private RPCs issued by vendor plugins. VVOLs extends this model of communicating outside the basic protocol: VVOLs defines a rich set of VASA APIs to allow offload of most vSphere operations. With vSphere 6.0, existing VAAI NAS will continue to work, but VVOL datastores will offer a richer and faster experience than VAAI NAS. Also, VVOLs doesn't need any vendor-specific plugin installation. Another noteworthy point regarding NAS VAAI and Storage vMotion is that NAS VAAI snapshots cannot be migrated; when an attempt is made to migrate a virtual machine with NAS VAAI "snapshots", the snapshot hierarchy is collapsed and all snapshot history is lost. This is not the case with VVOLs, and furthermore snapshot hierarchies can be translated between NFS (non-VAAI), VMFS, VSAN, and VVOLs (any source-to-target combination of the four).

VAAI Thin-Provisioning and VVOLs:

  • Soft Threshold Warnings – Similar to a VMFS datastore with thin-provisioning support, a soft threshold warning for any VVOL virtual machine's I/O will be seen in vCenter, and the corresponding container gets flagged appropriately with a yellow warning icon. This can be potentially confusing for the vSphere admin, as the warning is virtual machine specific but the warning message doesn't provide details on which virtual machine has the problem. This will be corrected in a future update.
  • Hard Threshold Warnings – Hard threshold warning behavior is similar to that on a VMFS datastore. When a VVOL virtual machine's I/O gets a hard threshold warning, the corresponding virtual machine is stunned. The administrator can resume the virtual machine after provisioning more space, or can stop the virtual machine completely.
  • UNMAP – Since there are no disks managed by VMFS, vSphere will not actively use the UNMAP primitive, although it will pass UNMAP through to the backing VVOLs when the guest issues it. As with XCOPY and ATS, vSphere will detect UNMAP support for individual VVOLs based on the support of VAAI UNMAP on the Protocol Endpoint LUN to which they are bound. Also, vSphere will not enforce any alignment criteria when UNMAP is issued by the guest; this behavior is very similar to that of an RDM LUN. With VVOLs, UNMAP commands go from the guest directly to the storage array the same way all I/O is sent, so the array will finally see all of the individual UNMAP commands that guest operating systems issue. For example, Windows Server 2012 will immediately become a source of UNMAP commands. Linux, on the other hand, checks the SCSI version supported by the virtual device and won't issue UNMAP at the current level of SCSI support presented (SCSI-2). That is something that will be addressed in a future release.

Now let’s identify the supported operations and behavior for the different primitives.

Primitives Supported Operations and Behavior

Powered On Storage vMotion without snapshots

For a powered on VM without snapshots, the Storage vMotion driver coordinates the copy. The Storage vMotion driver will use the data mover to move sections of the current running point. The data mover will employ “host orchestrated hardware offloads” (XCOPY, etc) when possible.

Block VAAI & Block VVOLs:
- Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)
- XCOPY will be used to migrate (host orchestrated offload)

NAS VAAI
- No optimizations

NAS VVOL
- Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)

Powered On Storage vMotion with snapshots

For a powered on VM with snapshots, the migration of snapshots is done first, then the Storage vMotion driver will use the data mover to move the current running point.

Block VAAI
- Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)
- XCOPY will be used to migrate snapshots + current running point

Block VVOL
- Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)
- The cloneVirtualVolume and copyDiffsToVirtualVolume VASA APIs will be used to migrate all snapshots (Full hardware offload)
- XCOPY will be used to migrate the current running point (host orchestrated offload)

NAS VAAI
- NAS VAAI cannot migrate snapshots
- No further optimization

NAS VVOL
- Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)
- The cloneVirtualVolume and copyDiffsToVirtualVolume VASA APIs will be used to migrate all snapshots (Full hardware offload)

Powered Off Storage vMotion without snapshots
For a powered off VM, the Storage vMotion driver is not in the picture. So, effectively a Storage vMotion of a powered off VM is a logical move (Clone + Delete Source).

Block VAAI
- Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)
- XCOPY will be used to migrate current running point

Block VVOL
- Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)
- The cloneVirtualVolume VASA API will be used to migrate the current running point (Full hardware offload)
- The copyDiffsToVirtualVolume VASA APIs will be used to migrate all snapshots (Full hardware offload)

NAS VAAI
- NAS VAAI clone offload will be employed to migrate the current running point

NAS VVOL (Same as block VVOL)
- Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)
- The cloneVirtualVolume VASA API will be used to migrate the current running point (Full hardware offload)
- The copyDiffsToVirtualVolume VASA APIs will be used to migrate all snapshots (Full hardware offload)

Powered Off Storage vMotion with snapshots
- Same general idea as above, just with snapshots too…

Block VAAI
- Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)
- XCOPY will be used to migrate current running point + snapshots

Block VVOL
- Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)
- The cloneVirtualVolume VASA API will be used to migrate the current running point + snapshots (Full hardware offload)

NAS VAAI
- NAS VAAI cannot migrate snapshots
- NAS VAAI clone offload will be employed to migrate the current running point

NAS VVol (Same as block VVOL)
- Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)
- The cloneVirtualVolume VASA API will be used to migrate the current running point + snapshots (Full hardware offload)

- Enjoy

For future updates on Virtual SAN (VSAN), vSphere Virtual Volumes (VVOLs) and other Software-defined Storage technologies as well as vSphere + OpenStack be sure to follow me on Twitter: @PunchingClouds

Storage Blog Recap: Top Blogs from January

The third week of every month, we will be compiling a list of the top vSphere Storage posts from the previous month for you to digest.

Here are the top storage blogs from January:

VMware Virtual SAN: File Services with NexentaConnect

Rawlinson Rivera discusses NexentaConnect for Virtual SAN, a software-defined storage solution designed specifically to deliver file service on top of Virtual SAN.

SAP HANA Dynamic Tiering and the VMware Software Defined Data Center

The latest release of SAP HANA has brought the concepts of multiple-temperature data and lifecycle management to a new level. Bob Goldsand talks more about this, as well as native use cases and dynamic tiering with VMware HA and workload management.

Storage and Availability at Partner Exchange 2015

VMware Partner Exchange just wrapped up in San Francisco, California. In this post, Ken Werneburg talks about some key storage and availability sessions that were offered during the conference.

Discover Software-Defined Storage and VMware Virtual SAN at PEX 2015!

The Virtual SAN team highlights some of the can’t-miss sessions that were available to attendees of VMware Partner Exchange 2015.

Performance Unplugged: Demanding Applications

Mark Achtemichuk introduces a new series called “Performance Unplugged”, which showcases a number of talented performance gurus and also covers commonly asked questions and topics.

Be sure to subscribe to the Virtual SAN blog or follow our social channels at @vmwarevsan and Facebook.com/vmwarevsan for the latest updates.

For more information about VMware Virtual SAN, visit http://www.vmware.com/products/virtual-san.

VMware Virtual SAN: All-Flash Configuration

The cat is officially out of the bag, as they say! Everyone in the world should now be aware of the fact that VMware Virtual SAN 6.0 supports an all-flash architecture. I think it's time to discuss a couple of items with regards to the new architecture.

The Virtual SAN 6.0 All-Flash architecture uses flash-based devices for both caching and persistent storage. In this architecture, the flash cache is used entirely as a write buffer. This all-flash architecture introduces a two-tier model of flash devices:

  • write-intensive, high endurance caching tier for the writes
  • read-intensive, cost-effective flash device tier for data persistence

[Image: Virtual SAN 6.0 all-flash architecture]

The new device tiering model not only delivers incredible performance results, but it can also potentially introduce cost savings for the Virtual SAN 6.0 all-flash architecture, depending on the design and hardware configuration of the solution.

Virtual SAN Configuration Requirements

In order to configure Virtual SAN 6.0 for the all-flash architecture, the flash devices need to be appropriately identified within the system. In Virtual SAN, flash devices are identified and categorized for the caching tier by default. In order to successfully enable the all-flash configuration, we need to manually flag the flash devices that will be utilized for data persistence (capacity). This configuration is performed via one of the supported command-line interface tools, such as RVC or ESXCLI.

RVC handles the configuration of the devices at the cluster level. Below you'll find an image illustrating the usable syntax for flagging the flash devices with RVC.

[Image: RVC syntax for flagging flash devices as capacity]

ESXCLI handles it at the per-host level. Below you'll find an image illustrating the usable syntax for flagging the flash devices with esxcli.

[Image: esxcli syntax for flagging flash devices as capacity]
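If you would rather script the tagging than run esxcli by hand on every host, below is a minimal PowerCLI sketch, under stated assumptions, that drives the same esxcli vsan storage tag add operation through Get-EsxCli for each host in a cluster. The vCenter address, cluster name, and the device-selection filter are hypothetical placeholders, and it assumes a PowerCLI release that provides the -V2 EsxCli interface; verify the device list (for example with vdq, covered next) before tagging anything.

# Minimal sketch: flag the flash devices intended for the capacity tier on
# every host in a cluster. The cluster name and the device-selection filter
# are placeholders -- adjust both to your environment before running.
Connect-VIServer -Server vcenter.lab.local

$clusterName = "AllFlash-VSAN"   # hypothetical cluster name

foreach ($vmhost in Get-Cluster -Name $clusterName | Get-VMHost) {
    $esxcli = Get-EsxCli -VMHost $vmhost -V2

    # List the host's devices and naively pick SSDs larger than ~800 GB (size
    # is reported in MB) as capacity devices; replace this filter with one that
    # uniquely identifies your capacity devices.
    $capacityDisks = $esxcli.storage.core.device.list.Invoke() |
        Where-Object { $_.IsSSD -eq "true" -and [long]$_.Size -gt 800000 }

    foreach ($disk in $capacityDisks) {
        # Equivalent of: esxcli vsan storage tag add -d <device> -t capacityFlash
        $tagArgs      = $esxcli.vsan.storage.tag.add.CreateArgs()
        $tagArgs.disk = $disk.Device
        $tagArgs.type = "capacityFlash"
        $esxcli.vsan.storage.tag.add.Invoke($tagArgs)
    }
}

If a device is flagged by mistake, the tag can be removed the same way through the esxcli vsan storage tag remove namespace.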

Another command-line utility worth knowing is the VSAN Disk Query tool (vdq). This utility allows users to identify when flash devices are configured for use in the capacity tier, as well as whether they are eligible to be used by Virtual SAN.
Whenever vdq is used to query the flash devices on a host, as illustrated below, the output will display a new property called "IsCapacityFlash". This property specifies whether a flash device will be utilized for the capacity tier instead of the caching tier.

[Image: vdq output showing the IsCapacityFlash property]

For more in-depth information on the use of vdq, please take a look at a post by one of VMware’s elite engineers and VSAN Champion William Lam.

It’s important to highlight that flagging flash devices to be used for capacity cannot be performed from the vSphere Web Client UI; it has to be performed via the CLI. (wait for it… wait for it)

Once the flash devices have been flagged for capacity, they will be displayed as magnetic devices (HDD) in the disk management section of the Virtual SAN management tab.

That’s about it. After the flash devices have been properly tagged, the rest of the Virtual SAN configuration procedure is as easy as it was in the previous version.

So, in the spirit of making things easy and reducing any friction with getting into the CLI and manually flagging every disk, I've designed a tool along with my good pal and now VSAN Champion Brian Graf that should take care of the disk tagging process for just about everyone.

Here is a demo of how simple it is to configure a Virtual SAN 6.0 all-flash cluster, with a teaser of the Virtual SAN All-Flash Configuration Utility. Oh yeah, I almost forgot to mention… it's a 64-node all-flash cluster (The BigDaddy).

- Enjoy

For future updates on Virtual SAN (VSAN), vSphere Virtual Volumes (VVOLs) and other Software-defined Storage technologies as well as vSphere + OpenStack be sure to follow me on Twitter: @PunchingClouds

What’s New with vSphere Data Protection 6.0 and vSphere Replication 6.0

There are many interesting items coming out of VMware’s 28 Days of February where customers can learn more about “One Cloud, Any Application, Any Device”. A couple of the biggest items are the announcements of vSphere 6.0 and Virtual SAN 6.0. In this article, we will look at what is new with two of the more popular vSphere features: vSphere Data Protection and vSphere Replication. Perhaps the biggest news with these two features is around vSphere Data Protection. Before vSphere 6.0 and vSphere Data Protection 6.0, there were two editions of vSphere Data Protection: vSphere Data Protection, included with vSphere, and vSphere Data Protection Advanced, which was sold separately. With the release of vSphere Data Protection 6.0, all vSphere Data Protection Advanced functionality has been consolidated into vSphere Data Protection 6.0 and included with vSphere 6.0 Essentials Plus Kit and higher editions. Keep reading to learn more about the advanced functionality now included as part of vSphere Data Protection 6.0.

Continue reading

One Cloud, Any Application – #VMW28days


VMware Virtual SAN 6.0 – you heard all about it on February 2nd – you read all about it in our blog post on the vSphere Storage Blog.

 

Still want more?

 

Visit VMware’s One Cloud, Any Application site every day in February to learn more about our products and solutions including software-defined storage, Virtual SAN 6, and Virtual Volumes (VVOLs). With content for IT decision makers and practitioners alike, this site contains everything from technical documentation to infographics, whitepapers, and analyst insights.

 

Stop by today!

 

Also, this Thursday, February 12th, at 11am PST, we would like to invite you to join the software-defined storage CrowdChat! Here, you’ll be able to ask questions directly to VMware storage experts. RSVP today!

 

For more information about VMware Virtual SAN, follow us on Twitter at @VMwareVSAN and Facebook at facebook.com/vmwarevsan.

 

vSphere APIs for IO Filtering

I’ve been fortunate to have one of our super sharp product line managers, Alex Jauch (Twitter @ajauch), spend some time explaining to me one of the new enabling technologies of vSphere 6.0: VAIO. Let's take a look at this really powerful capability, see what types of things it can enable, and get an overview of how it works.

VAIO stands for “vSphere APIs for IO Filtering”

This had for a time colloquially been known as “IO Filters”. Fundamentally, it is a means by which a VM can have its IO safely and securely filtered in accordance with a policy.

VAIO offers partners the ability to put their technology directly into the IO stream of a VM through a filter that intercepts data before it is committed to disk.

Why would I want to do that? What kinds of things can you do with an IO filter?

Well that’s up to our customers and our partners. VAIO is a filtering framework that will initially allow vendors to present capabilities for caching and replication to individual VMs. This will expand over time as partners come on board to write filters for the framework, so you can imagine where this can go for topics such as security, antivirus, encryption and other areas, as the framework matures. VAIO gives us the ability to do stuff to an IO stream in a safe and certified fashion, and manage the whole thing through profiles to ensure we get a view into the IO stream’s compliance with policy!

The VAIO program itself is for partners – the benefit is for consumers who want to do policy based management of their environment and pull in the value of our partner solutions directly into per-VM and indeed per-virtual disk storage management.

When partners create their solutions, their data services are surfaced through the Storage Policy Based Management (SPBM) control plane, just like the rest of our policy-driven storage offerings such as Virtual SAN and Virtual Volumes.

Beyond that, because the data services operate at the VM virtual device level, they can also work with just about any type of storage device, again furthering the value of VSAN and VVOLs, and extending the use of these offerings through these additional data services.

How does it work?

The capabilities of a partner filter solution are registered with the VAIO framework, and are surfaced for user interaction in the SPBM Continue reading

VMware’s 64-node All-Flash VSAN demo at PEX

Another productive VMware Partner Exchange day for our partners, and another great turnout at the Software-Defined Storage (SDS) Pavilion, where we demoed a VDI boot storm on VMware's 64-node All-Flash VSAN and continue to have multiple ecosystem partners showcasing their joint solutions with Virtual SAN and Virtual Volumes.

[Image: SDS Pavilion, February 4]

If you missed it, on Monday, Feb 2nd, VMware launched VMware Virtual SAN 6, the best storage platform for virtual machines, including business-critical applications. Radically simple, VMware Virtual SAN 6 introduces 2x more scalability (yes, 64 nodes!) and up to 4.5x greater performance (that's 90K IOPS per host!) while adding several new enterprise-class data services capabilities. Continue reading