7 Reasons VMware Virtual SAN is Radically Simple

We often hear technology solutions described as “revolutionary”, “incredible”, or “game-changing”, and the list of adjectives could go on. In some cases, those descriptions are fairly accurate; at other times, they are a bit overstated. We at VMware often use the term “radically simple” to describe Virtual SAN, and I believe it hits the nail on the head. Since the inception of Virtual SAN a couple of years ago, one of the main objectives has been to keep the management of Virtual SAN, and of the VMs running on it, as simple as possible. In my opinion, we have not only met that goal, but far exceeded it. Let’s look at seven examples.

1. For starters, there is plenty of flexibility in what can be used to run a Virtual SAN cluster. Standard x86 hardware serves as the building blocks for the solution. For many customers, Virtual SAN Ready Nodes are an attractive option as these servers come pre-configured with hardware and software certified by the hardware manufacturer and VMware to run vSphere and Virtual SAN. This makes it very easy to rapidly procure and provision the necessary hardware. Want to see just how simple it is? Take a look at the Virtual SAN Ready Node Configurator. If you prefer to build it yourself, that is OK, too. Just be sure to verify your configuration using the VMware Compatibility Guide to ensure components such as controllers, drives, and firmware have been tested and certified to run Virtual SAN. Last, but not least, customers can choose a complete, turn-key hyper-converged appliance solution from EMC and VMware called VxRail. For more details on that, I will simply point you to the VCE product page for VxRail. The key message I am conveying here is that customers have choices in how they can acquire and provision a Virtual SAN cluster. These choices and the tools provided make the process quite simple.


2. Enabling Virtual SAN is also easy. There is no need to deploy virtual appliances for Virtual SAN, which is the case with nearly all other hyper-converged infrastructure (HCI) storage solutions. Virtual SAN is built into vSphere, which is good for both simplicity and performance. Turning on Virtual SAN consists of clicking a checkbox. Seriously – that’s it. Virtual SAN can automatically claim the disks in each host for the cache and capacity tiers or, if you prefer, you can do this manually, which adds only a few more steps. If you are skeptical, I invite you to take a look at “1 Enabling Virtual SAN” in the Virtual SAN Feature Walkthrough.



3. Unlike traditional storage platforms, there is no need to create LUNs or volumes and zone them to the hosts in the cluster. It has been common to create various LUNs or volumes with certain characteristics such as RAID, replication, and deduplication. The main issue with this approach is that every VM on a given LUN inherits the characteristics of that LUN whether we want it to or not. For example, a LUN is created and 10 VMs are provisioned to it. The application owner requests that three of the 10 VMs be replicated for disaster recovery. I enable replication for the LUN, but that means either all 10 VMs get replicated (even though I need only three of them) or I have to create a new LUN and separate the VMs. Either option involves a fair amount of work for just a few VMs. Imagine the amount of work managing hundreds of VMs in this way. I imagine a few readers are thinking “Yes, that is what we do today and there has to be a better way.”

To help solve this challenge and make things radically simple, storage policy-based management (SPBM) is used with Virtual SAN. When migrating an existing VM or provisioning a new VM to a Virtual SAN datastore, a storage policy is assigned to the VM. A storage policy consists of rules that determine characteristics such as availability and performance for the VMs and/or VMDKs to which they are assigned. For example, a storage policy can be created that contains a “Number of failures to tolerate” (FTT) rule. If I set FTT equal to one, this instructs Virtual SAN to place the objects that make up the VM on multiple hosts in the cluster in such a way that the VM can survive the loss of one disk or one host that contains some of those objects and still be available for use. I do not have to calculate the number of objects needed or manually place them. Virtual SAN does this automatically.
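To give a feel for the math Virtual SAN is automating, here is a simplified model of the mirroring (RAID-1) requirements behind the FTT rule. This is a conceptual illustration only, not VMware’s actual placement algorithm: tolerating n failures requires n+1 full replicas, roughly n witness components to break ties, and at least 2n+1 hosts.

```python
# Simplified illustration of the RAID-1 (mirroring) math behind the
# "Number of failures to tolerate" (FTT) rule. Conceptual model only,
# not VMware's actual placement algorithm.

def raid1_requirements(ftt: int) -> dict:
    """Return replica, witness, and host counts for a given FTT value."""
    if ftt < 0:
        raise ValueError("FTT must be >= 0")
    replicas = ftt + 1                 # full copies of the object's data
    witnesses = ftt                    # tie-breaking components (simplified)
    min_hosts = 2 * ftt + 1            # hosts needed to survive ftt failures
    return {"replicas": replicas, "witnesses": witnesses, "min_hosts": min_hosts}

# FTT=1: two replicas plus a witness, spread across at least three hosts.
print(raid1_requirements(1))
```

Assigning the policy is all the administrator does; Virtual SAN computes counts like these and places the components across hosts on its own.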


It gets better. If I want to modify an existing storage policy or assign a different storage policy to a VM, these actions can be performed without taking the datastore or VMs offline – in other words, without downtime. Here is another example: I have a VM that will likely benefit from a performance perspective if I stripe a VMDK across additional disks in the cluster. An existing storage policy can be modified or a new one created that contains the rule “Number of disk stripes per object” with the value set to 3. When the policy is applied to the VM, Virtual SAN makes the necessary changes in object count and placement to satisfy the new rule and all other rules in the policy. It is certainly possible to have multiple storage policies, each containing one or more rules. Policies can be assigned with precision to VMs or even individual VMDKs to meet technical and business requirements.
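When rules combine, the component count grows multiplicatively, which is exactly the bookkeeping Virtual SAN handles when a policy changes. The sketch below is a hypothetical illustration of that arithmetic (again, not the real placement engine): with mirroring, each of the FTT+1 replicas is itself split into the configured number of stripes.

```python
# Hypothetical illustration of how FTT and "Number of disk stripes per
# object" combine. Each replica is divided into `stripes` components,
# so data components (excluding witnesses) multiply.

def data_components(ftt: int, stripes: int) -> int:
    """Data components for RAID-1 mirroring combined with striping."""
    if ftt < 0 or stripes < 1:
        raise ValueError("FTT must be >= 0 and stripes >= 1")
    replicas = ftt + 1
    return replicas * stripes

# FTT=1 with a stripe width of 3: 2 replicas x 3 stripes = 6 components.
print(data_components(1, 3))
```

The point is not that administrators perform this calculation; it is that they never have to, because applying the policy triggers the reconfiguration automatically and without downtime.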


4. Enabling features such as deduplication and compression is also a simple task – a drop-down menu that you change from disabled to enabled. The actual process will take some time as Virtual SAN performs a rolling reformat of all of the disks in the cluster, but it is automated and there is no virtual machine downtime.



5. I often get asked about how easy it is to add capacity to Virtual SAN and, yes, you guessed it – it is simple. There are two ways to scale a Virtual SAN cluster. The first is “scaling up”, which involves adding new drives and/or replacing smaller drives with larger drives in existing hosts. vSphere’s maintenance mode is the best way to prepare a host for adding or replacing disks. When entering maintenance mode, the administrator is prompted for how existing data on the host should be handled. The three options are covered in the Virtual SAN documentation so I will not go into details here. The other option is “scaling out” by adding new hosts with storage to a Virtual SAN cluster. To get a better look at this option, I will again point you to the Virtual SAN Feature Walkthrough “5 Adding Capacity”.
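When sizing a scale-out, it helps to remember that mirroring consumes raw capacity. A rough back-of-the-envelope estimate (my simplification; it ignores slack space, witness overhead, and any deduplication or compression savings) is raw capacity divided by FTT+1:

```python
# Rough usable-capacity estimate for a mirrored (RAID-1) Virtual SAN
# cluster. Simplified: ignores slack space, witness overhead, and
# dedup/compression savings.

def usable_capacity_tb(hosts: int, capacity_per_host_tb: float, ftt: int) -> float:
    """Approximate usable TB: raw capacity divided by (FTT + 1) copies."""
    raw_tb = hosts * capacity_per_host_tb
    return raw_tb / (ftt + 1)

# Four hosts with 10 TB each at FTT=1: 40 TB raw, roughly 20 TB usable.
print(usable_capacity_tb(4, 10.0, 1))
```

Adding a fifth identical host to this example would raise raw capacity to 50 TB and the rough usable estimate to 25 TB, which is the appeal of scaling out: capacity and compute grow together.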


6. Virtual SAN makes health monitoring and alerting radically simple. A health service routinely checks a large number of configuration items such as Virtual SAN object health, hardware compatibility, network configuration and connectivity, physical disk health, and cluster health. This service is on by default and provides alerting if an issue is detected. When an issue is found, Virtual SAN makes it easier to start the troubleshooting process by providing an “Ask VMware” button, which takes you to the relevant VMware Knowledge Base article for the issue.


If a support request (SR) is opened with VMware Global Support Services (GSS), Virtual SAN also makes it easier to upload the necessary logs and attach them to the SR.



7. The performance service introduced with Virtual SAN 6.2 provides information such as throughput, IOPS, and latency at the cluster, host, virtual machine, and even the VMDK levels. This information can be viewed using the vSphere Web Client in a couple of ways – over a recent time window (such as the last several hours) or over a custom date range.


I could go on with more examples, but I am guessing you are seeing the pattern: Virtual SAN is radically simple to implement, configure, and administer. It is no wonder VMware Hyper-Converged Software with Virtual SAN is the leading HCI solution. To get started with Virtual SAN, I have two recommendations:

#KeepITsimple. I have probably offered that advice more than any other in my 8+ years here at VMware and it still holds true.

Visit this page to learn more about Virtual SAN: “Getting Started” section of the Virtual SAN product page on vmware.com.

@jhuntervmware on Twitter

