Cloning virtual machines is an area where VAAI can provide significant advantages. Flash storage arrays already deliver excellent I/O performance, so we wanted to see what difference VAAI makes in virtual machine cloning operations on all-flash arrays.
For the test, a VMDK containing 500GB of random data was created on a Linux virtual machine. This virtual machine was then cloned first with VAAI turned off and then with it turned on to study its impact. The results of the testing truly attest to the big benefits that VAAI brings to massive write operations.
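For readers who want to reproduce a comparison like this, the full-copy (XCOPY) primitive that offloads cloning can be checked and toggled per host with esxcli. This is a minimal sketch of how the "VAAI off" and "VAAI on" runs might be switched, run on the ESXi host itself:

```shell
# Show which VAAI primitives each attached device supports
esxcli storage core device vaai status get

# Disable the full-copy (XCOPY) offload for the "VAAI off" run
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 0

# Re-enable it for the "VAAI on" run
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 1
```

Other primitives (block zeroing, ATS locking) have their own advanced settings; only the full-copy setting matters for clone offload.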
In today’s special webcast event, VMware officially announced the release of VMware Horizon 6.0. This version is designed to meet the demands of today’s mobile workforce and is optimized for Software-Defined Data Center architectures and their operating models.
The announcement was packed with great new features and capabilities for the entire Horizon suite of products, but one of my personal favorites was the support for Virtual SAN storage policies.
This new release delivers an unmatched level of integration with Virtual SAN by leveraging all of the key benefits Virtual SAN has to offer:
Radically simple management and configuration
Storage Policy Based Management framework
A foundation of performance, capacity, and resilience
Linear scalability (scale up or scale out)
By leveraging vSphere’s new policy-driven control plane and the Storage Policy Based Management framework, Horizon 6.0 is able to guarantee performance and service levels by applying VM Storage Policies defined for virtual desktops based on their storage capacity, performance, and availability requirements.
Horizon 6.0 automatically deploys a set of VM Storage Policies for virtual desktops onto vCenter Server. The policies are automatically and individually applied per disk (Virtual SAN objects) and maintained throughout the lifecycle of the virtual desktop. The policies and their respective performance, capacity, and availability characteristics are listed below:
VM_HOME – Number of disk stripes per object: 1; Number of failures to tolerate: 1. This corresponds to the default policy of Virtual SAN.
OS_Disk – Number of disk stripes per object: 1; Number of failures to tolerate: 1. Again, this is the default policy.
REPLICA_DISK – Number of disk stripes per object: 1; Number of failures to tolerate: 1; Flash read cache reservation: 10%. This policy dedicates a portion of the SSD or flash capacity to the replica disk, in order to provide greater caching for the expected level of reads that this disk will experience.
Persistent Disk – Number of disk stripes per object: 1; Number of failures to tolerate: 1; Object space reservation: 100%. This policy ensures that this type of disk is guaranteed all the space it requires.
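These policies are managed for you by Horizon through vCenter and SPBM, but the equivalent host-level Virtual SAN default policies can be inspected (and, if needed, adjusted) with esxcli. A sketch, assuming the Virtual SAN 5.5 policy expression syntax:

```shell
# Inspect the host-level default Virtual SAN policy for each object class
# (cluster, vdisk, vmnamespace, vmswap)
esxcli vsan policy getdefault

# Example: set the default policy for virtual disks to match the values
# above (FTT=1, stripe width 1); SPBM/Horizon normally manages this for you
esxcli vsan policy setdefault -c vdisk \
    -p '(("hostFailuresToTolerate" i1) ("stripeWidth" i1))'
```

Policies assigned through vCenter always take precedence over these host-level defaults; the defaults only apply to objects created without an explicit policy.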
The following video illustrates the new Horizon 6.0 integration with Virtual SAN policies:
The combination of Horizon 6.0 and Virtual SAN provides customers with the ability to deploy persistent and non-persistent virtual desktops without the need for a traditional SAN.
By combining the lower cost of server-based storage with the availability benefits of a shared datastore, plus the added punch of SSD-based performance acceleration, Virtual SAN yields major cost savings for the overall implementation of a VDI solution.
A question that I’ve been asked very often concerns the behavior and logic of the witness component in Virtual SAN. Apparently this is somewhat of a cloudy topic, so I wanted to take the opportunity to answer it here for those looking for more details ahead of the official white paper, where the content of this article is covered in greater depth. So be on the lookout for that.
The behavior and logic I’m about to explain here is 100% transparent to the end user, and there is nothing to be concerned about with regard to the layout of the witness components. This behavior is managed and controlled by the system. This article is intended to provide an understanding of how many witness components you may see and why.
Virtual SAN objects are comprised of components that are distributed across the hosts of a vSphere cluster configured with Virtual SAN. These components are stored in distinct combinations of disk groups within the Virtual SAN distributed datastore. Components are transparently assigned caching and buffering capacity from flash-based devices, with their data “at rest” on the magnetic disks.
Witness components are part of every storage object. Virtual SAN witness components contain object metadata, and their purpose is to serve as tiebreakers whenever availability decisions have to be made in the Virtual SAN cluster, in order to avoid split-brain behavior and satisfy quorum requirements.
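As an illustrative sketch (not Virtual SAN's internal algorithm), the quorum arithmetic for the simplest case looks like this: with "Number of Failures to Tolerate" set to 1, an object has two replica components, so a single witness makes the component count odd and lets a strict majority break any tie:

```shell
# Simplified quorum math for an object with FTT=1. Real VSAN placement may
# add more witnesses depending on component distribution; this only shows
# why an odd total component count is needed to break ties.
ftt=1
replicas=$((ftt + 1))            # mirrored data components
witnesses=1                      # metadata-only tiebreaker component
total=$((replicas + witnesses))  # total components voting on availability
quorum=$((total / 2 + 1))        # strict majority required for availability
echo "components=$total, quorum=$quorum"
```

With three components and a quorum of two, losing any single replica or the witness still leaves a majority, which is exactly what tolerating one failure means.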
For the third article of the Virtual SAN interoperability series, I want to showcase the interoperability between Virtual SAN, vSphere Replication, and vCenter Site Recovery Manager. This demonstration presents one of the many possible ways in which customers can make use of vSphere Replication and vCenter Site Recovery Manager with Virtual SAN.
In the demonstration below, I performed a fully automated planned migration of virtual machines hosted on traditional SAN infrastructure onto a Virtual SAN environment, seamlessly. This example shows in particular how easily this type of operation can be achieved utilizing existing vSphere tools and technologies that possess integration capabilities with Virtual SAN.
For the second article of the Virtual SAN interoperability series, I showcase the interoperability between Virtual SAN and vCloud Automation Center. This demonstration presents one of the many ways in which vCloud Automation Center can be used to provision virtual machines onto a Virtual SAN infrastructure via a service catalog.
In this scenario, I created and published three vCloud Automation Center blueprints to a service catalog. All blueprints are accessible to all users of a private cloud. Each blueprint was created from a virtual machine template configured with a VM Storage Policy assigned at the vSphere level.
A VM Storage Policy is a vSphere construct that stores storage capabilities in order to apply them to virtual machines or individual VMDKs. In this case the capabilities are based on capacity, availability, and performance, which are the core offerings of Virtual SAN. In the demonstration, the focus is on deploying a virtual machine with the highest level of availability. The availability configuration of a virtual machine or VMDK is defined by the “Number of Failures to Tolerate” storage capability. The service catalog contains three different virtual machine offerings, each with a different “Number of Failures to Tolerate” policy, as defined below:
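The capacity and cluster-size cost of that capability can be sketched with simple arithmetic: tolerating n failures requires n+1 full copies of the data on the datastore and a minimum of 2n+1 hosts in the cluster, so that replicas and witnesses can be spread widely enough to survive n host failures:

```shell
# For "Number of Failures to Tolerate" = n, Virtual SAN keeps n+1 replicas
# and needs at least 2n+1 hosts for the object to stay available.
ftt=2
replicas=$((ftt + 1))        # full data copies consumed on the datastore
min_hosts=$((2 * ftt + 1))   # minimum hosts to place replicas and witnesses
echo "FTT=$ftt -> replicas=$replicas, minimum hosts=$min_hosts"
```

This is why the highest-availability blueprint is also the most expensive one in capacity terms: every increment of FTT adds another full copy of the data.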
In an effort to continue providing information about Virtual SAN and its capabilities via recorded demos, I’ve created a new set of Virtual SAN walkthrough demos.
The walkthrough demos are available and accessible online for everyone interested in learning how Virtual SAN works, what its capabilities are, and how it interoperates with other VMware products and solutions. To access the Virtual SAN walkthrough demos, use the link below:
So by now most of you are aware that Virtual SAN 5.5 was released last week, and it came in with a bang. During the launch event, we announced some impressive performance numbers, detailing 2 million IOPS achieved on a 32-node Virtual SAN cluster. One of the most frequent questions since the launch has been about the details of the configuration we used to achieve this monumental task. Well, wait no longer: this is the post that will reveal the details in all their magnificent glory!
The deployment and configuration of Virtual SAN has been deemed “radically simple” because of its two-click configuration capability. Most people, however, have yet to see what the deployment and configuration of a large Virtual SAN cluster looks like and what it actually takes. Virtual SAN requires the configuration of dedicated virtual network interfaces as well as of their physical uplinks.
Configuring large clusters manually can become time-consuming and susceptible to configuration errors. In the interest of Virtual SAN’s interoperability and deep integration with the rest of the vSphere platform and its features, the use of vSphere Distributed Switches complements the overall ease of configuring and deploying Virtual SAN.
The vSphere Distributed Switch configuration “Template Mode” feature can be leveraged to improve the agility of virtual network configuration and to drastically reduce the risk of virtual network misconfigurations. This demonstration showcases the deployment and configuration of a Virtual SAN cluster along with its required network configuration. For those who were not aware, the vSphere Distributed Switch is included as part of the Virtual SAN licensing.
Watch how simple and easy it is to configure and deploy a 16-node Virtual SAN cluster with a vSphere Distributed Switch in just a few minutes.
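Under the covers, whether the port group lives on a standard or a distributed switch, each host ends up with a VMkernel interface tagged for Virtual SAN traffic. For a single host, the equivalent manual steps look roughly like this (vmk2 is a placeholder for whatever VMkernel interface you dedicated to Virtual SAN):

```shell
# Tag an existing VMkernel interface (assumed here to be vmk2) for
# Virtual SAN traffic on this host
esxcli vsan network ipv4 add -i vmk2

# Verify which VMkernel interfaces carry Virtual SAN traffic
esxcli vsan network list
```

Multiply those steps by 16 hosts and it becomes clear why driving the configuration from a distributed switch template is both faster and less error-prone.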
I was recently involved in a conversation regarding Virtual SAN and virtual machine migration capabilities. A few customers have been wondering whether or not all of the vSphere migration operations and functions work with Virtual SAN, given the way the system works. One particular migration operation in question was the ability to migrate virtual machines.