
Implementing VMware Virtual Volumes on HP 3PAR StoreServ

In Q1 of this year we announced the general availability of vSphere 6.0, which includes a key capability of the VMware vision for Software-Defined Storage: Virtual Volumes (VVol). VVol is an integration framework that makes third-party storage systems VM-aware, enabling native storage capabilities to be controlled through the VMware control plane for SDS management: Storage Policy-Based Management (SPBM).
Two parts are needed for customers to embark on the VVol transformation. The first is vSphere 6.0 with the integrated VVol and SPBM features; the other is a VVol-enabled array. The main reason VVol is such a disruptive technology is the wide support from the storage partner ecosystem. HP is a VVol design partner and one of the few partners to deliver Day 1 support for VVol.
Today I’m very pleased to offer a guest article from Eric Siebert, HP Solutions Marketing Manager and our very dear colleague on the VVol partnership.

Continue reading

SDS – The Missing Link – Storage Automation for Application Service Catalogs

Automation technologies are a fundamental dependency of all aspects of the Software-Defined Data Center. The use of automation not only increases the overall productivity of the software-defined data center, but can also accelerate the adoption of today’s modern operating models.

In recent years, some of the core pillars of the software-defined data center have seen a great deal of improvement with the help of automation. The same can’t be said about storage. The lack of management flexibility and capable automation frameworks has kept storage infrastructure from delivering operational value and efficiencies similar to those available with the compute and network pillars.

VMware’s software-defined storage technologies and its storage policy-based management framework (SPBM) deliver the missing piece of the puzzle for storage infrastructure in the software-defined data center.

Continue reading

Virtual Volumes and the SDDC

I saw a question the other day that asked “Can someone explain what the big deal is about Virtual Volumes?” A fair question.

The shortest, easiest answer is that VVols offer per-VM management of storage, which helps deliver a software-defined data center.

That, however, is a pretty big statement that requires some unpacking. Rawlinson has done a great job of showcasing Virtual Volumes already, and has talked about how it simplifies storage management, puts the VMs in charge of their own storage, and gives us more fine-grained control over VM storage. I myself will also dive into some detail on the technical capabilities in the future, but first let’s take a broader look at why this really is an important shift in the way we do VM storage.

Continue reading

What’s New with Virtual SAN 6.0?

Software-Defined Storage is making waves in the storage and virtual infrastructure fields. Data and infrastructure are intertwined, and when both are managed together, companies can cut down on expenses and increase productivity.

Rawlinson Rivera, Principal Architect, Storage and Availability, recently hosted a webinar discussing how VMware is approaching Software-Defined Storage (SDS) and virtualization in its newly announced updates, including VMware Virtual SAN 6.0.

Software-defined storage offers organizations the ability to automate, distribute, and control storage better than ever before. SDS can provision storage for applications on demand and without complex processes. It also allows for standardized hardware, reducing costs for businesses everywhere.

To bring customers the best software-defined storage experience, we had to update VMware® Virtual SAN™. And we did just that. With VMware Virtual SAN 6.0, we introduced several new features with SDS in mind:

  • Software-defined storage optimized for VMs
  • All Flash architecture
  • Broad hardware support
  • The ability to run on any standard x86 server
  • Enterprise-level scalability and performance
  • Per-VM storage policy management (see the sketch after this list)
  • Deep integration with the VMware stack
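
To make the per-VM policy idea concrete, here is a minimal Python sketch of how an SPBM-style policy pairs storage capabilities with required values and is checked per VM rather than per datastore. This is a conceptual model only, not the vSphere API; the class and function names are hypothetical, though hostFailuresToTolerate and stripeWidth are real Virtual SAN policy capabilities.

```python
# Conceptual model of SPBM-style per-VM storage policies (not the vSphere API).
from dataclasses import dataclass, field

@dataclass
class StoragePolicy:
    name: str
    rules: dict = field(default_factory=dict)  # capability -> required value

# hostFailuresToTolerate and stripeWidth are actual VSAN policy capabilities;
# the values here are illustrative.
gold = StoragePolicy("Gold", {
    "hostFailuresToTolerate": 1,  # keep a replica to survive one host failure
    "stripeWidth": 2,             # stripe each replica across two capacity devices
})

def vm_is_compliant(vm_object_caps: dict, policy: StoragePolicy) -> bool:
    """A VM's storage objects comply if they satisfy every rule in the policy."""
    return all(vm_object_caps.get(cap) == val for cap, val in policy.rules.items())

print(vm_is_compliant({"hostFailuresToTolerate": 1, "stripeWidth": 2}, gold))  # True
```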

There’s a lot more to unpack from the latest updates to our VMware solutions. For a more in-depth guide to what’s new and how it affects you, watch the webcast here!

Be sure to subscribe to the Virtual SAN blog or follow our social channels at @vmwarevsan and Facebook.com/vmwarevsan for the latest updates.

For more information about VMware Virtual SAN, visit http://www.vmware.com/products/virtual-san.

vSphere Virtual Volumes (VVols) Interoperability Matrix


Since the official release of vSphere 6.0, Virtual Volumes (VVols) has generated a great deal of interest among customers, field consultants, and the VMware community. Now that VVols is available, customers can begin testing its functionality and capabilities. There have been many questions about which VMware products and vSphere features are compatible and currently interoperate with VVols.

Because VMware’s product portfolio continues to expand rapidly, identifying all of the new products and features that interoperate with VVols can be a tedious and time-consuming task. In the interest of time and efficiency, the need for a centralized Virtual Volumes interoperability guide is evident, so here is one.

Below is a list of VMware products and vSphere 6.0 features that, as of today, March 30th, 2015, are supported and interoperate with VVols. Please keep in mind that the interoperability and supportability of any of these products and features can change with a future patch or product release. It is highly recommended to check the VMware compatibility matrix guide for the official and up-to-date list of products and features that interoperate with VVols.

Continue reading

Virtual SAN VCG Update – New HP, Fujitsu & Quanta Controllers certified for Virtual SAN!

The Virtual SAN product team is pleased to announce that the following controllers are now certified on Virtual SAN and are listed on the Virtual SAN VCG:

6G 6.0 Hybrid:

  • HP P420
  • HP P420i
  • HP P220i
  • HP P822
  • Quanta SAS2308

12G 6.0 Hybrid:

  • HP P440 (without hot plug)

12G 5.5 Hybrid:

  • Fujitsu LSI 3008

We now have a total of 80 I/O controllers certified on Virtual SAN 5.5 and 40 I/O controllers certified on Virtual SAN 6.0.

What about the Ready Nodes for platforms based on the above controllers?

We are working closely with our OEM partners such as HP to publish Ready Nodes for both 6G and 12G platforms for Virtual SAN Hybrid and All Flash configurations for both 5.5 and 6.0 and these should be available soon.

What are the controllers that are currently in the certification queue and when do we expect them to get certified?

Please see the attached list of controllers that are currently undergoing Virtual SAN certification.

Note: In many cases, we rely on our partners to provide driver/firmware fixes for controller issues, so if there are delays in receiving these updates from partners, the certification timelines may get pushed out.

I have follow up questions on the attached controller list. Who do I reach out to?

As always, if you have questions on Hardware Compatibility for Virtual SAN, please email your queries to vsan-hcl@vmware.com and someone will get back to you soon!

Where do I learn more about the VMware Virtual SAN Certification process?

Please refer to the Virtual SAN Certification blog post for more details:

 

VMware Virtual SAN 6.0: Data Encryption with Hytrust DataControl


Customers from different industries and institutions are very interested in Virtual SAN as a storage solution not just because of the technological value it delivers today, but because of the product’s undeniable value around operational efficiency, ease of management, and flexibility.

Some of these customers are financial, healthcare, and government institutions that conduct business in areas governed by regulatory compliance laws such as HIPAA, PCI-DSS, FedRAMP, Sarbanes-Oxley, etc. These laws demand compliance with numerous security measures, one of them being the ability to guarantee data integrity by securing data with some form of encryption.

Today Virtual SAN does not include encryption among its data services; this feature is currently under development for a future release. When considering Virtual SAN as a potential solution wherever data encryption is required by regulatory compliance laws, it’s important to know what options are currently available.

In Virtual SAN, encryption data services are offloaded to hardware-based offerings available through Virtual SAN Ready Nodes. Data encryption is supported exclusively on Virtual SAN Ready Node appliances built from certified, compatible hardware devices that provide encryption capabilities, such as self-encrypting drives and/or storage controllers. Virtual SAN Ready Node appliances are offered by just about every OEM hardware vendor in VMware’s ecosystem.

An alternative to the Virtual SAN Ready Nodes is a software-based solution developed and offered by a company called Hytrust. Hytrust is a member of VMware’s partner ecosystem whose business is focused on delivering data security services for private and public cloud infrastructures. The solution I want to highlight in particular is called Hytrust DataControl.

Hytrust DataControl is a software-based solution designed to protect virtual machines and their data throughout their entire lifecycle (from creation to decommissioning). Hytrust DataControl delivers both encryption and key management services.

This solution is built specifically to address the unique requirements of private, hybrid, and public clouds, combining robust security, easy deployment, exceptional performance, infrastructure independence, and operational transparency. Hytrust DataControl’s ease of deployment and management complies with one of the main principles of Virtual SAN: simplicity and ease of management.

The Hytrust DataControl virtual machine edition is based on a software agent that encrypts data from within the Windows or Linux operating system of a virtual machine, ensuring protection and multi-tenancy of data in any infrastructure. DataControl also allows you to transfer files between VMs, so you can securely migrate stored data from your private cloud to the public cloud.

Deploying the Hytrust DataControl solution and installing and configuring the software takes a couple of easy steps and just a few minutes. Once the software is resident, any data an application writes to storage is encrypted both in motion, as it travels securely through the hypervisor and network, and at rest on the Virtual SAN datastore.

[Figure: HT-deployment]

Continue reading

Virtual SAN Certification & VCG Update

The Virtual SAN product team is pleased to announce that last week we released new certified components (I/O controllers, SSDs and HDDs), new Ready Nodes and a new Hardware Quick Reference Guide for Virtual SAN 6.0 along with a new and improved VCG page.  Please see updated links below:

Updated Virtual SAN VCG

Updated Virtual SAN Hardware Quick Reference Guide

Updated Virtual SAN Ready Nodes

 

How many new components and Ready Nodes do we have listed for Virtual SAN 6.0?

We now have 26 I/O controllers, 170 SSDs and 125 HDDs (and counting) supported on Virtual SAN 6.0.   In addition to the Virtual SAN 5.5 Ready Nodes, we have 8 new Ready Nodes for Virtual SAN 6.0  (Cisco – 4 Hybrid, Dell – 1 Hybrid, Hitachi – 1 Hybrid, Super Micro – 1 All Flash & 1 Hybrid).

We expect this list to grow very quickly.  We have a number of components that are currently getting certified and we plan to add new certified devices and Ready Nodes to the VCG on a weekly basis.

 

How does the Virtual SAN Certification process work?

The VMware Virtual SAN team takes hardware certification very seriously. I/O controllers play a very important part in determining the stability and performance of a Virtual SAN cluster and need to be able to withstand high I/O under stress conditions.

The I/O controllers are put through a rigorous I/O certification process while the HDD, SSD and Ready Nodes  are put through stringent paper qualifications.

We run an I/O controller card through a 3-week-long certification test plan (the certification is done by VMware or by the partner) that stress-tests the card across many dimensions, particularly in high-load and failure scenarios, to ensure the card can withstand the level of I/O pushed down by Virtual SAN even in the most adverse situations (for example, rebuilds and resyncs triggered by host failures).

If there are issues identified, we work closely with our controller vendor/OEM partner to resolve them and re-run the entire test suite after resolution.  Sometimes an updated firmware or driver version addressing the issue is required from the vendors before we can proceed with more testing.

Only controllers that fully pass the test criteria laid out in the above process are listed on the Virtual SAN VCG.

 

Are separate I/O controller certifications required for different releases?

Yes, we require controllers to be recertified whenever any of the following change:

  • Virtual SAN release version (e.g., 5.5 to 6.0)
  • The controller driver version
  • The controller firmware version

We also certify the same controller separately for Virtual SAN All Flash vs Hybrid, since the caching and I/O mechanisms are different for these two configurations and we expect controllers to behave differently with varying levels of I/O.

 

What about certification of PCIe-SSD devices?

PCIe-SSDs are nothing but SSDs with an on-board I/O controller in a PCIe form factor. Therefore, they require the same level of due diligence as standard I/O controllers, and we are putting these devices through the same rigorous certification process.

VMware is working very closely with partners to certify the first set of PCIe-SSDs for Virtual SAN 6.0 over the coming weeks.

 

What are the new updates to the VCG page?

The Virtual SAN VCG page has been enhanced to allow users to easily build or choose All Flash configurations in addition to Hybrid configurations. Since All Flash Virtual SAN requires SSDs with different endurance and performance specs for the caching and capacity tiers (see the updated Virtual SAN Hardware Quick Reference Guide for details), we have enhanced the VCG to help users easily pick SSDs for the tier they are interested in.

We have also introduced a new SSD filter called “Virtual SAN type” to help easily filter All Flash vs Hybrid configurations. Furthermore, we have added a filter called “Tier” to help you filter for the Virtual SAN hybrid caching, Virtual SAN All Flash caching, and Virtual SAN capacity tiers.

The endurance rating for SSDs is now displayed on the VCG in TBW (terabytes written over 5 years) as opposed to DWPD (Drive Writes Per Day), which was used previously.
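
Since many SSD spec sheets still quote DWPD, converting to the TBW figure shown on the VCG is simple arithmetic under the same 5-year assumption. A minimal sketch (the function name is mine):

```python
def dwpd_to_tbw(dwpd: float, capacity_gb: float, years: int = 5) -> float:
    """Convert Drive Writes Per Day into Terabytes Written over `years`.

    TBW = DWPD * capacity (TB) * 365 days * years
    """
    capacity_tb = capacity_gb / 1000.0  # SSD vendors use decimal gigabytes
    return dwpd * capacity_tb * 365 * years

# Example: a 400 GB SSD rated at 10 full drive writes per day
print(dwpd_to_tbw(10, 400))  # 7300.0 TBW over 5 years
```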

 

What are the controllers that are currently in the certification queue and when do we expect them to get certified?

Please see the attached list of controllers that are currently undergoing Virtual SAN certification.

Note: In many cases, we rely on our partners to provide driver/firmware fixes for controller issues, so if there are delays in receiving these updates from partners, the certification timelines may get pushed out.

Having said that, we are making good progress on most of the controllers listed in the attached document and expect them to follow our standard certification process.

On a similar note, Ready Nodes are primarily dependent on their controllers getting certified, so as new controllers are certified on the VCG for 6.0, Ready Nodes including those controllers will follow.

Video: Virtual SAN From An Architect’s Perspective

Have you ever wanted a direct discussion with the people responsible for designing a product?

Recently, Stephen Foskett brought a cadre of technical bloggers to VMware as part of Storage Field Day 7 to discuss Virtual SAN in depth.  Christos Karamanolis (@XtosK), Principal Engineer and Chief Architect for our storage group, went deep on VSAN: why it was created, its architectural principles, and why the design decisions were important to customers.

The result is two hours of lively technical discussion — the next best thing to being there.  What works about this session is that the attendees are not shy — they keep peppering Christos with probing questions, which he handles admirably.

The first video segment is from Alberto Farronato, explaining the broader VMware storage strategy.

The second video segment features Christos going long and deep on the thinking behind VSAN.

The third video segment continues where the second leaves off.  Christos presents the filesystem implementation, and the implications for snapshots and general performance.

Our big thanks to Stephen Foskett for making this event possible, and EMC for sponsoring our session.

 

How To Double Your VSAN Performance

VSAN 6.0 is now generally available!

Among many significant improvements, performance has been dramatically improved for both hybrid and newer all-flash configurations.

VSAN is almost infinitely configurable: how many capacity devices, disk groups, cache devices, storage controllers, and so on.  That brings up a question: how do you get the maximum storage performance out of a VSAN-based cluster?

Our teams are busy running different performance characterizations, and the results are starting to surface.  The case for performance growth by simply expanding the number of storage-contributing hosts in your cluster has already been well established — performance linearly scales as more hosts are added to the cluster.

Here, we look at the impact of using two disk groups per host vs. the traditional single disk group.  Yes, additional hardware costs more — but what do you get in return?

As you’ll see, these results present a strong case that by simply doubling the disk-related resources (e.g. using two storage controllers, each with a caching device and some number of capacity devices), cluster-wide storage performance can be doubled, or more.

Note: just to be clear, two storage controllers are not required to create multiple disk groups with VSAN.  A single controller can support multiple disk groups.  But for this experiment, that is what we tested.

This is a particularly useful finding, as many people unfamiliar with VSAN mistakenly assume that performance might be limited by the host or network.  Not true — at least, based on these results.

For our first result, let’s establish a baseline of what we should expect with a single disk group per host, using a hybrid (mixed flash and disks) VSAN configuration.

Here, each host is running a single VM with IOmeter.  Each VM has 8 VMDKs, and 8 worker tasks driving IO to each VMDK.  The working set is adjusted to fit mostly in available cache, as per VMware recommendations.

More details: each host is using a single S3700 400GB cache device, and 4 10K SAS disk drives. Outstanding IOs (OIOs) are set to provide a reasonable balance between throughput and latency.

[Figure: VSAN_perf_1]

On the left, you can see the results of a 100% random read test using 4KB blocks.  As the cluster size increases from 4 to 64, performance scales linearly, as you’d expect.  Latency stays at a great ~2msec, yielding an average of 60k IOPS per host.  The cluster maxes out at a very substantial ~3.7 million IOPS.

When the mix shifts to random 70% reads / 30% writes (the classic OLTP mix), we still see linear scaling of IOPS, and a modest increase in latency from ~2.5msec to ~3msec.  VSAN is turning in a very respectable 15.5K IOPS per host.  The cluster maxes out very close to ~1 million IOPS.
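
As a quick sanity check on the linear-scaling claim, multiplying the per-host figures above by the cluster size reproduces the chart maxima to within rounding. The per-host numbers are from the charts; the arithmetic is my own illustration:

```python
# Per-host IOPS taken from the single-disk-group results above.
per_host_iops = {"100% random read": 60_000, "70/30 OLTP mix": 15_500}

for workload, iops in per_host_iops.items():
    for hosts in (4, 16, 64):
        print(f"{workload}: {hosts} hosts -> ~{hosts * iops / 1e6:.2f}M IOPS")

# 64 hosts * 60K IOPS   ~= 3.84M (the chart tops out near ~3.7M)
# 64 hosts * 15.5K IOPS ~= 0.99M (matching the ~1M cluster maximum)
```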

Again, quite impressive.  Now let’s see what happens when more storage resources are added.

For this experiment, we added an additional controller, cache and set of capacity devices to each host.  And the resulting performance is doubled — or sometimes even greater!

[Figure: VSAN_perf_2]

Note that now we are seeing 116K IOPS per host for the 100% random read case, with a maximum cluster output of a stunning ~7.4 million IOPS.

For the OLTP-like 70% read / 30% write mix, we see a similar result: 31K IOPS per host, and a cluster-wide performance of ~2.2 million IOPS.

For all-flash configurations of VSAN, we see similar results, with one important exception: all-flash configurations are far less sensitive to the working set size.  They deliver predictable performance and latency almost regardless of what you throw at them.  Cache in all-flash VSAN is used to extend the life of write-sensitive capacity devices, and not as a performance booster as is the case with hybrid VSAN configurations.

In this final test, we look at an 8 node VSAN configuration, and progressively increase the working set size to well beyond available cache resources.  Note: these configurations use a storage IO controller for the capacity devices, and a PCI-e cache device which does not require a dedicated storage controller.

On the left, we can see the working set increasing from 100GB to 600GB, using our random 70% read / 30% OLTP mix as before.

Note that IOPS and latency remain largely constant:  ~40K IOPS per node with ~2msec latency.  Pretty good, I’d say.

On the right, we add another disk group (with dedicated controllers) to each node (flash group?) and instead vary the working set size from an initial 100GB to a more breathtaking 1.2TB.  Keep in mind, these very large working set sizes are essentially worst-case stress tests, and not the sort of thing you’d see in a normal environment.

[Figure: VSAN_perf_3]

Initially, performance is as you’d expect: roughly double of the single disk group configuration (~87K IOPS per node, ~2msec latency).  But as the working set size increases (and, correspondingly, pressure on write cache), note that per-node performance declines to ~56K IOPS per node, and latency increases to ~2.4 msec.
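
One rough way to see why IOPS fall off once the working set outgrows cache is a blended service-time model: assume an I/O hits cache with probability proportional to the cache-to-working-set ratio, and goes to disk otherwise. This is my own simplified illustration with made-up device speeds, not VMware's model; real hybrid VSAN, with its write buffer and destaging, degrades far more gracefully than this naive version suggests:

```python
def effective_iops(cache_gb, working_set_gb, cache_iops, disk_iops):
    """Blend cache-speed and disk-speed I/O by hit rate (uniform access assumed).

    Average service time per I/O is hit/cache_iops + (1 - hit)/disk_iops;
    effective throughput is its reciprocal.
    """
    hit = min(1.0, cache_gb / working_set_gb)
    return 1.0 / (hit / cache_iops + (1.0 - hit) / disk_iops)

# Illustrative parameters only: 400 GB of cache, generic device speeds.
for ws_gb in (100, 600, 1200):
    print(f"{ws_gb} GB working set -> ~{effective_iops(400, ws_gb, 100_000, 5_000):,.0f} IOPS")
```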

What Does It All Mean?

VSAN was designed to be scalable depending on available hardware resources.  For even modest cluster sizes (4 or greater), VSAN delivers substantial levels of storage performance.

With these results, we can clearly see two axes to linear scalability — one as you add more hosts in your cluster, and the other as you add more disk groups in your cluster.

Still on the table (and not discussed here): things like faster caching devices, faster spinning disks, more spinning disks, larger caches, etc.

It’s also important to point out what is not a limiting factor here: compute, memory and network resources.  The limit is the IO subsystem, which consists of a storage IO controller, a cache device and one or more capacity devices.

The other implication is incredibly convenient scaling of performance as you grow — by either adding more hosts with storage to your cluster, or adding another set of disk groups to your existing hosts.

What I find interesting is that we really haven’t found the upper bounds of VSAN performance yet.  Consider, for example, a host may have as many as FIVE disk groups, vs the two presented here.   The mind boggles …

I look forward to sharing more performance results in the near future!

———–

Chuck Hollis

http://chucksblog.typepad.com

@chuckhollis