Cloning virtual machines is an area where VAAI can provide significant advantages, and flash storage arrays provide excellent I/O performance. We wanted to see what difference VAAI makes in virtual machine cloning operations on all-flash arrays.
For the test, a VMDK containing 500 GB of random data was created on a Linux virtual machine. This virtual machine was then cloned first with VAAI turned off and then with it turned on to study the impact. The results of the testing clearly attest to the benefits that VAAI brings to massive write operations.
There is no doubt that the barriers to virtualization have been rapidly falling. In fact, with VMware’s Monster VM capabilities and the scalability of vSphere 5.5, many organizations have recognized that practically any application can be virtualized, and they have adopted a virtualization-first policy. But some customers are still hesitant about virtualizing their most critical applications. Often, I/O bottlenecks and high storage latency are the cause of poor application performance. That is where a new breed of storage comes to the rescue.
The latest version of App HA, 1.1, was released last week and is now available for download. This release has a number of new features that will greatly improve the usability of App HA. I will cover these in additional posts over the next few weeks.
We often get requests on how to estimate vSphere Replication network bandwidth utilization. This can be rather difficult, as a few variable factors influence how much traffic is generated by vSphere Replication (VR). Two key items are the data change rate in the virtual machine (VM) and the Recovery Point Objective (RPO) setting in VR for that VM. The data change rate can be difficult to determine and is rarely constant. Fortunately, one of the engineers here at VMware built a virtual appliance that calculates and graphs the amount of replicated data generated by a VM and the bandwidth that would be consumed when using VR to replicate it.
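To make the relationship between change rate and bandwidth concrete, here is a rough back-of-the-envelope estimate, not the appliance itself. The function name, the overhead factor, and the assumption of a flat change rate are all illustrative:

```python
def vr_bandwidth_mbps(changed_gb_per_day: float, overhead: float = 1.1) -> float:
    """Rough average bandwidth (Mbit/s) needed to replicate a VM whose
    blocks change at the given daily rate, padded by a protocol-overhead
    factor. A simplistic estimate that assumes a constant change rate."""
    bits_per_day = changed_gb_per_day * 1024**3 * 8 * overhead
    return bits_per_day / 86400 / 1e6

# Example: a VM that changes 20 GB of data per day needs
# roughly 2.2 Mbit/s of sustained replication bandwidth.
print(round(vr_bandwidth_mbps(20), 1))
```

Real workloads are bursty, and a shorter RPO means more frequent transfers whose peaks this average hides, which is exactly why measuring actual traffic with the appliance beats estimating.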
In today’s special webcast event, VMware officially announced the release of VMware Horizon 6.0. This version is designed to meet the demands of today’s mobile workforce and is optimized for Software-Defined Data Center architectures and their operating models.
The announcement was packed with great new features and capabilities for the entire Horizon Suite of products, but one of my personal favorites was the support for Virtual SAN storage policies.
This new release delivers an unmatched level of integration with Virtual SAN by leveraging all of the key benefits Virtual SAN has to offer:
- Radically simple management and configuration
- Storage policy-based management framework
- A foundation of performance, capacity, and resilience
- Linear scalability (scale up or scale out)
Through vSphere’s new policy-driven control plane and the storage policy-based management framework, Horizon 6.0 can guarantee performance and service levels for virtual desktops by applying VM Storage Policies defined for each desktop based on its storage capacity, performance, and availability requirements.
Horizon 6.0 automatically deploys a set of VM storage policies for virtual desktops onto vCenter Server. The policies are automatically applied per individual disk (as Virtual SAN objects) and maintained throughout the lifecycle of the virtual desktop. The policies and their respective performance, capacity, and availability characteristics are listed below:
- VM_HOME – Number of disk stripes per object: 1; Number of failures to tolerate: 1. This corresponds to the default policy of Virtual SAN.
- OS_Disk – Number of disk stripes per object: 1; Number of failures to tolerate: 1. Again, this is the default policy.
- REPLICA_DISK – Number of disk stripes per object: 1; Number of failures to tolerate: 1; Flash read cache reservation: 10%. This policy dedicates some of the SSD or flash capacity to the replica disk, in order to provide greater caching for the expected level of reads that this disk will experience.
- Persistent Disk – Number of disk stripes per object: 1; Number of failures to tolerate: 1; Object space reservation: 100%. This policy ensures that this type of disk is guaranteed all the space it requires.
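Because every policy above uses Number of Failures to Tolerate = 1, each object is mirrored once, which doubles the raw capacity it consumes on the Virtual SAN datastore. A quick sketch of that capacity math (the function name is illustrative, not a VMware API):

```python
def raw_capacity_gb(size_gb: float, failures_to_tolerate: int = 1) -> float:
    """Raw Virtual SAN capacity consumed by an object: the original copy
    plus one full replica per failure tolerated. Witness overhead and
    metadata are ignored in this simplified sketch."""
    return size_gb * (failures_to_tolerate + 1)

# A 40 GB OS disk with failures-to-tolerate = 1 consumes
# 80 GB of raw datastore capacity.
print(raw_capacity_gb(40.0))  # 80.0
```

This is worth keeping in mind when sizing a Virtual SAN datastore for a desktop pool: the usable-to-raw ratio scales with the failures-to-tolerate setting, not just the desktop disk sizes.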
The following video illustrates the new Horizon 6.0 integration with Virtual SAN policies:
The combination of Horizon 6.0 and Virtual SAN provides customers with the ability to deploy persistent and non-persistent virtual desktops without the need for a traditional SAN.
By combining the lower cost of server-based storage with the availability benefits of a shared datastore, plus an additional boost from SSD-based performance acceleration, Virtual SAN yields major cost savings in the overall implementation of a VDI solution.
For future updates, be sure to follow me on Twitter: @PunchingClouds
Four new demonstrations using the vSphere Big Data Extensions (BDE) have recently been made available on the VMwareTV channel on YouTube. They are all compact (less than 5 minutes in duration) and are described below.
BDE Demo #1: Installing and Configuring the vSphere Big Data Extensions
This demonstration shows the process for installing and configuring the vSphere Big Data Extensions feature. This capability is available as a free download with VMware vSphere Enterprise Plus. It allows you to provision Hadoop clusters onto vSphere virtual machines easily and quickly, and to manage them in a flexible way. It also provides a number of elasticity features for scaling your Hadoop virtual machines up or down, automatically or manually.
As usual, most of my blog posts come from customer or field questions. Here’s a new one that crossed my path recently.
A customer running vSphere 5.1 was finding some anomalies within their VMs. Their belief was that some of the vSphere Hardening Guide settings were causing them. When this was assigned to me, I noticed that they were referencing the vSphere 4.1 hardening guide!
The customer was applying guidelines from the 4.1 guide against a 5.1 system. They believed that the guideline was still relevant because it was referenced in a KB. (I’m going to try and get that fixed!)
The guideline setting is “guest.commands.enabled”. The 4.1 guide said to set this to False. The 4.1 guide AND the KB both state that setting this to False would disable the operation of VMware Consolidated Backup (VCB) and VMware Update Manager (VUM), both of which call the VIX API for guest operations.
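For reference, this setting lives in the virtual machine’s advanced configuration (the .vmx file). The fragment below is shown only to identify the key; as the 4.1 guide and the KB both warn, setting it to FALSE disables the VIX-based guest operations that VCB and VUM depend on:

```
guest.commands.enabled = "FALSE"
```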
Cue the old Henny Youngman “Doc, it hurts when I do this!” so the Doctor says “Don’t do that!” Thanks, I’ll be here all week. Try the veal! <rimshot>
A question that I’ve been asked very often concerns the behavior and logic of the witness component in Virtual SAN. Apparently this is somewhat of a cloudy topic, so I want to take the opportunity to answer it here for those looking for more details ahead of the official white paper, where the content of this article is covered in greater depth. So be on the lookout for that.
The behavior and logic I’m about to explain is 100% transparent to the end user, and there is nothing to be concerned about with regard to the layout of the witness components; this behavior is managed and controlled by the system. The intent here is to provide an understanding of the number of witness components you may see and why.
Virtual SAN objects are comprised of components that are distributed across the hosts in a vSphere cluster configured with Virtual SAN. These components are stored in distinct combinations of disk groups within the Virtual SAN distributed datastore. Components are transparently assigned caching and buffering capacity from flash-based devices, with their data at rest on the magnetic disks.
Witness components are part of every storage object. Virtual SAN witness components contain object metadata, and their purpose is to serve as tiebreakers whenever availability decisions have to be made in the Virtual SAN cluster, in order to avoid split-brain behavior and satisfy quorum requirements.
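The tiebreaker idea is simple: an object remains accessible only while more than half of its components are available, and witnesses exist so that there is always a clear majority. A simplified sketch of that rule follows; this is an illustration of the quorum concept, not the actual Virtual SAN placement algorithm:

```python
def has_quorum(available_components: int, total_components: int) -> bool:
    """An object stays accessible only if more than half of its
    components (data replicas plus witnesses) are still available."""
    return available_components > total_components / 2

# Failures to tolerate = 1 gives 2 data replicas plus 1 witness
# (3 components total).
# Lose one replica: the surviving replica + witness still form a majority.
print(has_quorum(2, 3))  # True
# Lose a replica and the witness: no majority, the object is inaccessible.
print(has_quorum(1, 3))  # False
```

Without the witness, a network partition could leave each side holding exactly one replica, and neither side could safely claim ownership, which is the split-brain scenario quorum is designed to prevent.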
In Q4 2013, Pivotal CF 1.0 was made available to customers, and the vSphere team announced that VMware would begin reselling Pivotal CF. Pivotal CF is a commercially supported hybrid Platform-as-a-Service (PaaS) based on the Cloud Foundry platform and optimized to run on VMware vSphere. It provides a turnkey PaaS experience for development teams to rapidly develop, update, and grow applications on a private cloud, allowing enterprises to operate at Internet scale through continuous delivery.
I am excited to announce the 1.1 release of Pivotal CF. With Pivotal CF 1.1, customers gain useful improvements over version 1.0, such as improved developer debugging, buildpack administration, higher availability, and a simpler experience for adding new services through the service broker.
What’s new with Pivotal CF 1.1?
- Improved app event log aggregation: developers can now go to a unified log stream for full application event visibility
- Buildpack Management: operators can add new runtimes using buildpacks and control the order in which buildpacks are applied
- Higher availability: this release introduces a 3rd generation application health manager for higher system and application availability
- Simple experience for adding new Services: providers can develop and expose new services in the Pivotal CF catalog using a streamlined V2 Service Broker API
- Faster Developer Console: faster, with enhanced usability in managing teams and interacting with services
- Faster CF CLI: faster, with native installers for all modern platforms and versions of Windows, OS X, and Linux
- And more…
For more information on the 1.1 release of Pivotal CF, please click here
The idea of changing a Placeholder Datastore in SRM has come up a few times in internal discussions recently, and there was some confusion around how to deal with it, so I wanted to put something together to clarify things.
As a quick refresher, Placeholder Datastores are used to contain the placeholder VM files at the recovery site. If you intend to do a Planned Migration and Reprotect, you will need Placeholder Datastore(s) at the protected site as well.
Here is the process for changing the Placeholder Datastore(s) at the recovery site. The process is the same for the protected site.