
VMware Virtual SAN 6.0: Bootstorm Demonstration

Since the official announcement of the VMware Virtual SAN All-Flash architecture, most of the conversations have focused on the solution's incredible performance capabilities: IOPS, predictable performance, and sub-millisecond latencies. All of those attributes are great, and they are part of the reason why Virtual SAN 6.0 as a storage platform has expanded its use cases to include business critical applications and large enterprise environments.

I want to turn the spotlight onto one of the many supported use cases for Virtual SAN 6.0 and highlight one of the invaluable capabilities of the new platform with regard to Virtual Desktop Infrastructure (VDI).

Some of the functional requirements for large enterprise VDI infrastructure designs include characterizing boot, refresh, and provisioning times for standard operations and worst-case scenarios.

I have seen a fair share of VDI designs and demonstrations of different platforms showcasing bootstorm, refresh, and rebuild times, and they all do a pretty good job. With that said, I wanted to take the opportunity to showcase the powerful capabilities of Virtual SAN 6.0 by demonstrating a bootstorm at the maximum supported scale of the platform. This bootstorm demonstration consists of 6,401 desktops on a 64-node Virtual SAN 6.0 All-Flash cluster (BigDaddy).
The key and impressive items showcased as part of the demonstration are the following:

  • BigDaddy – 64 Node All-Flash Virtual SAN Cluster
  • Desktops – booting all 6401 desktops in the cluster at once (in batches of 1024 at a time)
  • Boot Time – about 24 minutes to boot all of the desktops, plus about 19 minutes for IP address allocation, for a total of roughly 40 minutes

This demonstration does not involve tampering with or customizing any of the Virtual SAN settings; it is what we generally call an out-of-the-box experience. Another important thing to point out here is my definition of completed boot time. By complete boot, I mean not just when the desktops are powered on, but when all of the desktops have successfully acquired an IP address and are truly up, running, and ready to be used.

In the interest of time, the demonstration has been sped up from its original length to about 5 minutes. Feel free to pay attention to the timestamp displayed in the command line interface to validate the accuracy of the boot time.

This demonstration successfully highlights one of the many powerful capabilities available in VMware Virtual SAN 6.0.

 

- Enjoy

For future updates on Virtual SAN (VSAN), vSphere Virtual Volumes (VVOLs) and other Software-defined Storage technologies as well as vSphere + OpenStack be sure to follow me on Twitter: @PunchingClouds

What’s All the Buzz About Software-Defined Storage?

By now, you’ve more than likely heard something about Software-Defined Storage. With every mention of the term, you may be wondering, “What does it mean for me?”

Wonder no longer!

The VMware Software-Defined Storage approach enables a fundamentally more efficient operational model, driving transformation through the hypervisor and bringing to storage the same operational efficiency that server virtualization brought to compute. Software-Defined Storage will enable you to better handle some of the most pressing challenges storage systems face today.

During this webcast, Mauricio Barra, Senior Product Marketing Manager at VMware, will discuss the VMware Software-Defined Storage vision, the role of the hypervisor in transforming storage, as well as key architectural components of VMware Software-Defined Storage.

If you are looking to understand how Software-Defined Storage, along with the enhanced VMware Virtual SAN 6 and new VMware vSphere Virtual Volumes, can benefit your organization, now is your chance.

Register today and take the next step toward making Software-Defined Storage a reality.

Be sure to subscribe to the Virtual SAN blog or follow our social channels at @vmwarevsan and Facebook.com/vmwarevsan for the latest updates.

For more information about VMware Virtual SAN, visit http://www.vmware.com/products/virtual-san.

vSphere 6 Feature Walkthroughs

The Technical Marketing team has put out a series of vSphere 6 related feature walkthroughs. We’re covering vCenter Server install and upgrades for many different scenarios as well as vSphere Data Protection and vSphere Replication.

Continue reading

vSphere Virtual Volumes Interoperability: VAAI APIs vs VVOLs

With the vSphere 4.1 release, VMware introduced block-based VAAI APIs. These APIs helped improve the performance of VMFS by offloading some of the heavy operations to the storage array. In subsequent releases, VMware added VAAI APIs for NAS and thin provisioning, along with T10 command support for the block VAAI APIs.

Now with Virtual Volumes (VVOLs), VMware is introducing a new virtual machine management and integration framework that exposes virtual disks as the primary unit of data management for storage arrays. This new framework enables array-based operations at the virtual disk level that can be precisely aligned to application boundaries, with the capability of providing a policy-based management approach per virtual machine.

The question now is: what happens to the VAAI APIs (NAS and Block), and how will Virtual Volumes co-exist with them? With Virtual Volumes, aside from the data path, the ESXi hosts also control the connection path to the storage arrays; the Vendor Provider typically arranges that path. In this sense, Virtual Volumes can be considered a richer extension of the VAAI NAS APIs. In July of last year I published an article, “Virtual Volumes (VVols) vSphere APIs & Cloning Operation Scenarios”, in which I discussed the interoperability between VAAI and VVOLs during cloning operations in different scenarios; consider having another look. Now let’s go over a set of interaction scenarios between the VAAI APIs and Virtual Volumes.


VAAI Block and VVOLs:

VAAI Block defines basic SCSI primitives, which allow vSphere (primarily VMFS) to offload pieces of its operations to the array. There is still a heavy dependency on VMFS playing the role of an orchestrator and sending individual VAAI Block commands to the storage array. (A quick way to check which of these primitives a given device reports is sketched after the list below.)

With VVOLs, the storage array systems are aware of each virtual machine’s disks, and hence they can efficiently perform operations such as snapshots, clones, and zeroing on those disks. Still, the VAAI Block and thin-provisioning primitives co-exist with VVOLs.

  • ATS – All config VVOL objects stored in a block VVOLs datastore are formatted with VMFS and hence require ATS command support. This support is detected based on the ATS support of the PE LUN to which the VVOLs are bound.
  • XCOPY – With VVOLs, ESXi will always try to use the array-based VVOL copy mechanisms defined by the copyDiffsToVirtualVolume or cloneVirtualVolume primitives. If these are not supported, it will fall back to software copy. Since software copy involves copying data between Protocol Endpoint (PE) LUNs and VMFS LUNs, there is still potential to use the XCOPY command during a software data copy. When falling back to software copy, vSphere will use XCOPY when moving a virtual machine from a VMFS datastore to a VVOLs datastore or between two VVOLs datastores. In the first release, vSphere will not try to use XCOPY if the virtual machine is moving from a VVOLs datastore to a VMFS datastore. vSphere detects XCOPY support for individual VVOLs based on the VAAI XCOPY support of the PE LUN to which each is bound.
  • Block Zeroing – The main purpose of this primitive is to initialize thick disks provisioned on VMFS datastores. With VVOLs, the provisioning type is specified as part of the profile information passed during VVOL creation, and config VVOLs, which are formatted with VMFS, are “thin” by definition. A config VVOL is also very small (4GB by default) and contains only small files such as disk descriptors, VM config files, stats, and log data. So the Block Zeroing primitive is not used for VVOLs by vSphere.
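As a quick aside, here is a hedged sketch of how to check which VAAI block primitives a given device reports, straight from the ESXi shell. The naa. identifier below is hypothetical, so substitute a device from your own host:

    # List the VAAI block primitive support reported for a device
    # (the naa. identifier is hypothetical):
    esxcli storage core device vaai status get -d naa.55cd2e404b66fe6a
    #   ATS Status: supported      <- hardware-assisted locking
    #   Clone Status: supported    <- XCOPY
    #   Zero Status: supported     <- Block Zeroing (WRITE SAME)
    #   Delete Status: supported   <- UNMAP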

VAAI NAS and VVOLs:

Unlike SCSI, NFSv3 is a frozen protocol, which means all of the VAAI NAS features came via private RPCs issued by vendor plugins. VVOLs extends this model of communicating outside the basic protocol: VVOLs defines a rich set of VASA APIs to allow offload of most vSphere operations. With vSphere 6.0, existing VAAI NAS will continue to work, but VVOL datastores will offer a richer and faster experience than VAAI NAS. Also, VVOLs doesn’t need any vendor-specific plugin installation. Another noteworthy point regarding NAS VAAI and Storage vMotion is that NAS VAAI snapshots cannot be migrated: when an attempt is made to migrate a virtual machine with NAS VAAI “snapshots”, the snapshot hierarchy is collapsed and all snapshot history is lost. This is not the case with VVOLs; furthermore, we can translate snapshot hierarchies between NFS (non-VAAI), VMFS, VSAN, and VVOLs (any source-to-target combination of the four).

VAAI Thin-Provisioning and VVOLs:

  • Soft Threshold Warnings – Similar to a VMFS datastore with thin-provisioning support, a soft threshold warning triggered by any VVOL virtual machine’s I/O will be seen in vCenter, and the corresponding container gets flagged appropriately (it receives the yellow warning icon when a soft threshold warning is issued). This can potentially be confusing for the vSphere admin, because the warning is virtual machine specific yet the warning message doesn’t provide details on which virtual machine has the problem. This will be corrected in a future update.
  • Hard Threshold Warnings – Hard threshold warning behavior is similar to that on a VMFS datastore. When a VVOL virtual machine’s I/O receives a hard threshold warning, the corresponding virtual machine is stunned. The administrator can resume the virtual machine after provisioning more space, or can stop the virtual machine completely.
  • UNMAP – Since there are no disks managed by VMFS, vSphere will not actively use the UNMAP primitive, although it will pass UNMAP through to the backing VVOLs when the guest issues it. As with XCOPY and ATS, vSphere detects UNMAP support for individual VVOLs based on the VAAI UNMAP support of the Protocol Endpoint LUN to which each is bound. Note that vSphere will not enforce any of the alignment criteria when UNMAP is issued by the guest; this behavior is very similar to that of an RDM LUN. With VVOLs, UNMAP commands go from the guest directly to the storage array the same way we send all other I/O, and the array will now finally see all of the individual UNMAP commands that guest operating systems issue. For example, Windows Server 2012 will immediately become a source of UNMAP commands. Linux, on the other hand, checks the SCSI version supported by the virtual device and won’t issue UNMAP at the current level of SCSI support we present (SCSI-2); that is something that will be addressed in a future release.

Now let’s identify the supported operations and behavior for the different primitives.

Primitives: Supported Operations and Behavior

Powered On Storage vMotion without snapshots

For a powered on VM without snapshots, the Storage vMotion driver coordinates the copy. The Storage vMotion driver will use the data mover to move sections of the current running point, and the data mover will employ “host orchestrated hardware offloads” (XCOPY, etc.) when possible.
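If you are curious whether a host will even attempt these offloads, the standard VAAI advanced settings are worth a look. A minimal sketch, assuming the usual setting paths (verify them on your own build):

    # Check whether hardware-accelerated move (XCOPY) is enabled (Int Value 1 = enabled):
    esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove
    # The same pattern applies to the other block primitives:
    #   /DataMover/HardwareAcceleratedInit   (Block Zeroing)
    #   /VMFS3/HardwareAcceleratedLocking    (ATS)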

Block VAAI & Block VVOLs:
- Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)
- XCOPY will be used to migrate (host orchestrated offload)

NAS VAAI
- No optimizations

NAS VVOL
- Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)

Powered On Storage vMotion with snapshots

For a powered on VM with snapshots, the migration of snapshots is done first, then the Storage vMotion driver will use the data mover to move the current running point.

Block VAAI
- Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)
- XCOPY will be used to migrate snapshots + current running point

Block VVOL
- Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)
- The cloneVirtualVolume and copyDiffsToVirtualVolume VASA APIs will be used to migrate all snapshots (Full hardware offload)
- XCOPY will be used to migrate the current running point (host orchestrated offload)

NAS VAAI
- NAS VAAI cannot migrate snapshots
- No further optimization

NAS VVOL
- Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)
- The cloneVirtualVolume and copyDiffsToVirtualVolume VASA APIs will be used to migrate all snapshots (Full hardware offload)

Powered Off Storage vMotion without snapshots
For a powered off VM, the Storage vMotion driver is not in the picture. So, effectively a Storage vMotion of a powered off VM is a logical move (Clone + Delete Source).

Block VAAI
- Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)
- XCOPY will be used to migrate current running point

Block VVOL
- Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)
- The cloneVirtualVolume VASA API will be used to migrate the current running point (Full hardware offload)

NAS VAAI
- NAS VAAI clone offload will be employed to migrate the current running point

NAS VVOL (Same as block VVOL)
- Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)
- The cloneVirtualVolume VASA API will be used to migrate the current running point (Full hardware offload)

Powered off Storage vMotion with snapshots
- Same general idea as above, just with snapshots too…

Block VAAI
- Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)
- XCOPY will be used to migrate current running point + snapshots

Block VVOL
- Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)
- The cloneVirtualVolume VASA API will be used to migrate the current running point + snapshots (Full hardware offload)

NAS VAAI
- NAS VAAI cannot migrate snapshots
- NAS VAAI clone offload will be employed to migrate the current running point

NAS VVOL (Same as block VVOL)
- Bitmap APIs will be used to determine only the relevant blocks to migrate (Space efficiency optimization)
- The cloneVirtualVolume VASA API will be used to migrate the current running point + snapshots (Full hardware offload)

- Enjoy

For future updates on Virtual SAN (VSAN), vSphere Virtual Volumes (VVOLs) and other Software-defined Storage technologies as well as vSphere + OpenStack be sure to follow me on Twitter: @PunchingClouds

Storage Blog Recap: Top Blogs from January

The third week of every month, we will be compiling a list of the top vSphere Storage posts from the previous month for you to digest.

Here are the top storage blogs from January:

VMware Virtual SAN: File Services with NexentaConnect

Rawlinson Rivera discusses NexentaConnect for Virtual SAN, a software-defined storage solution designed specifically to deliver file services on top of Virtual SAN.

SAP HANA Dynamic Tiering and the VMware Software Defined Data Center

The latest release of SAP HANA has brought the concepts of multiple-temperature data and lifecycle management to a new level. Bob Goldsand talks more about this, as well as native use cases and dynamic tiering with VMware HA and workload management.

Storage and Availability at Partner Exchange 2015

VMware Partner Exchange just wrapped up in San Francisco, California. In this post, Ken Werneburg talks about some key storage and availability sessions that were offered during the conference.

Discover Software-Defined Storage and VMware Virtual SAN at PEX 2015!

The Virtual SAN team highlights some of the can’t-miss sessions that were available to attendees of VMware Partner Exchange 2015.

Performance Unplugged: Demanding Applications

Mark Achtemichuk introduces a new series called “Performance Unplugged”, which showcases a number of talented performance gurus and also covers commonly asked questions and topics.

Be sure to subscribe to the Virtual SAN blog or follow our social channels at @vmwarevsan and Facebook.com/vmwarevsan for the latest updates.

For more information about VMware Virtual SAN, visit http://www.vmware.com/products/virtual-san.

Oracle Licensing On VMware Webinar

VMware is announcing its sponsorship of a webinar to address confusion on the subject of licensing Oracle software running on vSphere. Database Trends and Applications (DBTA) will host the webinar; please use the link below to register. Don Sullivan will moderate a discussion between David Welch of House of Brick and Daniel Hesselink of License Consulting. The short description below describes the event perfectly.

Many organizations that run or plan to run their Oracle business critical applications on VMware virtualization do not understand that the Oracle contract is agnostic to VMware, or how that translates to their infrastructure. VMware, in partnership with DBTA, has invited the world’s most recognizable experts on this subject to join us on a webinar for a frank presentation and conversation about licensing Oracle on VMware by the contract.

http://www.dbta.com/Webinars/722-Straight-Talk-on-Oracle-on-VMware-Licensing.htm

VMware Virtual SAN: All-Flash Configuration

The cat is officially out of the bag, as they say! Everyone in the world should now be aware of the fact that VMware Virtual SAN 6.0 supports an all-flash architecture. I think it’s time to discuss a couple of items with regard to the new architecture.

The Virtual SAN 6.0 All-Flash architecture uses flash-based devices for both caching and persistent storage. In this architecture, the flash cache is used entirely as a write buffer. This all-flash architecture introduces a two-tier model of flash devices:

  • a write-intensive, high-endurance caching tier for writes
  • a read-intensive, cost-effective flash device tier for data persistence


The new device tiering model not only delivers incredible performance results, but it can also potentially introduce cost savings for the Virtual SAN 6.0 all-flash architecture, depending on the design and hardware configuration of the solution.

Virtual SAN Configuration Requirements

In order to configure Virtual SAN 6.0 for the all-flash architecture, the flash devices need to be appropriately identified within the system. In Virtual SAN, flash devices are identified and categorized for the caching tier by default, so to successfully enable the all-flash architecture we need to manually flag the flash devices that will be utilized for data persistence, or capacity. This configuration is performed via one of the supported command line interface tools, such as RVC or ESXCLI.

RVC handles the configuration of the devices at the cluster level.

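Here is a hedged sketch of that syntax; the datacenter/cluster path and the device model string are hypothetical, so adjust both to your environment:

    # From an RVC session against vCenter Server, claim all flash devices of a
    # given model as capacity-tier devices on every host in the cluster
    # (paths and model string below are hypothetical):
    vsan.host_claim_disks_differently --model "SSD-MODEL-XYZ" --claim-type capacity_flash /localhost/Datacenter/computers/BigDaddy/hosts/*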

ESXCLI handles it at the per-host level.

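And a similar sketch with esxcli, run against each host individually; the device identifier is again hypothetical:

    # Tag an individual flash device for the capacity tier:
    esxcli vsan storage tag add -d naa.55cd2e404b66fe6a -t capacityFlash
    # If you change your mind, remove the tag the same way:
    esxcli vsan storage tag remove -d naa.55cd2e404b66fe6a -t capacityFlash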

Another command line utility worth knowing is the VSAN Disk Query (vdq). This utility allows users to identify whether flash devices are configured for use in the capacity tier, as well as whether they are eligible to be used by Virtual SAN.
Whenever vdq is used to query the flash devices on a host, as illustrated below, the output displays a property called “IsCapacityFlash”. This property specifies whether a flash device will be utilized for the capacity tier instead of the caching tier.

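A hedged sketch of what that looks like, with a hypothetical device name and abbreviated output:

    # Query all disks on the host for Virtual SAN eligibility:
    vdq -q
    # Abbreviated output for one device:
    #  {
    #     "Name"            : "naa.55cd2e404b66fe6a",
    #     "State"           : "Eligible for use by VSAN",
    #     "IsSSD"           : "1",
    #     "IsCapacityFlash" : "1",
    #  },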

For more in-depth information on the use of vdq, please take a look at a post by one of VMware’s elite engineers and VSAN Champion William Lam.

It’s important to highlight that flagging flash devices to be used for capacity cannot be performed from the vSphere Web Client UI; it has to be performed via the CLI. (wait for it…. wait for it)

Once the flash devices have been flagged for capacity, they will be displayed as magnetic devices (HDD) in the disk management section of the Virtual SAN management tab.

That’s about it. After the flash devices have been properly tagged, the rest of the Virtual SAN configuration procedure is as easy as it was in the previous version.

So, in the spirit of making things easy and reducing any friction around getting into the CLI and manually flagging every disk, I’ve designed a tool along with my good pal, and now VSAN Champion, Brian Graf that should take care of the disk tagging process for just about everyone.

Here is a demo of how simple it is to configure a Virtual SAN 6.0 all-flash cluster, with a teaser of the Virtual SAN All-Flash Configuration Utility. Oh yeah, I almost forgot to mention… it’s a 64-node all-flash cluster (the BigDaddy).

- Enjoy

For future updates on Virtual SAN (VSAN), vSphere Virtual Volumes (VVOLs) and other Software-defined Storage technologies as well as vSphere + OpenStack be sure to follow me on Twitter: @PunchingClouds

vSphere 6 Web Client

With the recent announcement of VMware vSphere 6, I can finally start talking about the improvements we’ve made to the vSphere 6 Web Client. There are over 100 enhancements, with some user actions performing 5x faster. There are Excel sheets and graphs full of performance data, but the best way to see the difference is to experience it yourself. If you’ve been wary of using the vSphere Web Client in the past, you should give it another shot with vSphere 6.

In my time here I’ve heard many tips on using the Web Client that I didn’t learn during training or while using it directly, and I thought it would be helpful to put all of these learnings in one place. I’m sure many of you reading this know some of these tips, but hopefully there are some new ones in there that are helpful to you as well. This is a living document, so if there are tips and tricks not on the list, please share them with the rest of us by adding them to the list. I should stress that this is not an official VMware document:

https://en.wikibooks.org/w/index.php?title=VSphere_Web_Client/UI_Tips

Short url: http://tiny.cc/webclientwiki

 

There are also many enhancements in the vSphere 6 Web Client, some of which are highlighted below:

  • Controlling “All Users’ Tasks” for performance

We know that the All Users’ Tasks view of Recent Tasks is an important feature, but it also turns out to be an incredibly “heavy” feature, which can quickly spiral out of control and impact vCenter Server performance. The focus of this version of the vSphere Web Client was improving performance and giving you more control over customizing your experience. In order to achieve both of these goals, we had to make it a bit harder to get to All Users’ Tasks. This will help ensure that your systems run more smoothly out of the box, with the option to enable the feature if you need it. We are also actively working on a better solution for this feature, but couldn’t get it in time for this release.

You’ll see some instructions when you first select All Users’ Tasks, and more detailed steps are in the Release Notes, but I’ve included them here for reference. Once you’ve enabled this feature, it becomes the default view:

A) Click More Tasks in the Recent Tasks panel to view all users’ tasks.

OR

B) Edit the webclient.properties file and change the “show.allusers.tasks” setting. For large vSphere environments, changing the “show.allusers.tasks” setting can potentially impact performance.

1. Locate the webclient.properties file

For the vCenter Server Appliance, the file is located in the /etc/vmware/vsphere-client/webclient.properties directory.

For vCenter Server on Windows, the file is located in the C:\ProgramData\VMware\vCenterServer\cfg\vsphere-client\webclient.properties directory.

2. Edit the file using a text editor and change show.allusers.tasks=false to show.allusers.tasks=true.

3. That’s it!  No restart of anything should be required.  Go to vSphere Web Client, select “All Users’ Tasks” and it should work.
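For reference, here is a minimal sketch of the Appliance-side edit, using the file path and property named in the steps above:

    # On the vCenter Server Appliance:
    vi /etc/vmware/vsphere-client/webclient.properties
    # Change this line:
    show.allusers.tasks=false
    # to:
    show.allusers.tasks=true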

  • Many performance enhancements

Performance was the primary goal of this release of the vSphere Web Client. Efforts were made to improve the performance of every portion of the interface, and you should see these improvements when you start using vSphere 6. Here are some of the major areas we worked on: login and Home page, Summary pages, Networking pages, Related Objects lists, general navigation, performance charts, Action menus (right-click), and reducing unnecessary data retrieval, which also lightens the load on vCenter Server.

The net result is that the vSphere 6 Web Client is an entirely new experience and easier to use than previous versions of vSphere Web Client.

  • Tasks where they belong

This was shown at VMworld, but is worth another mention: The tasks pane is now back at the bottom, giving you room to see the information you need.


This comes along with the ability to move and resize panes (we call this Dockable UI), allowing you to customize the layout to your liking; for example, Alarms and Work in Progress can be moved to provide a larger workspace.


  • Reorganized Action menus (right click)

Action menus have been reorganized and flattened so that your actions are easier to find and placed more familiarly. It should be much easier to pick up as you transition from the old desktop client to the vSphere Web Client.


  • Home menu navigation

The new and improved home button now shows a navigation menu which allows you to jump from wherever you are to one of the common views.  You can now get back to any of the major inventory trees from anywhere in one click!


I hope this overview encourages you to upgrade your existing vCenter Servers to vSphere 6 so that you can experience these improvements (and more!) that we’ve made.

What’s New with vSphere Data Protection 6.0 and vSphere Replication 6.0

There are many interesting items coming out of VMware’s 28 Days of February where customers can learn more about “One Cloud, Any Application, Any Device”. A couple of the biggest items are the announcements of vSphere 6.0 and Virtual SAN 6.0. In this article, we will look at what is new with two of the more popular vSphere features: vSphere Data Protection and vSphere Replication. Perhaps the biggest news with these two features is around vSphere Data Protection. Before vSphere 6.0 and vSphere Data Protection 6.0, there were two editions of vSphere Data Protection: vSphere Data Protection, included with vSphere, and vSphere Data Protection Advanced, which was sold separately. With the release of vSphere Data Protection 6.0, all vSphere Data Protection Advanced functionality has been consolidated into vSphere Data Protection 6.0 and included with vSphere 6.0 Essentials Plus Kit and higher editions. Keep reading to learn more about the advanced functionality now included as part of vSphere Data Protection 6.0.

Continue reading

One Cloud, Any Application – #VMW28days


VMware Virtual SAN 6.0 – you heard all about it on February 2nd – you read all about it in our blog post on the vSphere Storage Blog.

 

Still want more?

 

Visit VMware’s One Cloud, Any Application site every day in February to learn more about our products and solutions including software-defined storage, Virtual SAN 6, and Virtual Volumes (VVOLs). With content for IT decision makers and practitioners alike, this site contains everything from technical documentation to infographics, whitepapers, and analyst insights.

 

Stop by today!

 

Also, this Thursday, February 12th, at 11am PST, we would like to invite you to join the software-defined storage CrowdChat! Here, you’ll be able to ask questions directly to VMware storage experts. RSVP today!

 

For more information about VMware Virtual SAN, follow us on Twitter at @VMwareVSAN and Facebook at facebook.com/vmwarevsan.