Since VMware’s acquisition of Virsto earlier this year, many customers and folks in the community have expressed a great deal of interest in the product. Given how many requests for more information we have received, I’ve decided to write a series of in-depth blog articles discussing VMware Virsto’s capabilities, benefits, and targeted use cases. VMware Virsto is a software-defined storage solution designed to optimize the use of external block storage in vSphere virtual infrastructures. It enhances the use of external Storage Area Networks (SANs) by accelerating performance and increasing overall storage utilization. Among the storage challenges faced in virtual infrastructures today, the primary concerns revolve around performance and space efficiency: virtualized environments tend to be performance-intensive and present a persistent stream of random I/O.
I mentioned last month that I would be presenting at the Italian VMUG event in Milan. Well, the VMUG guys recorded the session, so if you are interested in seeing me talk about some of the cool storage projects we are working on internally here at VMware (such as Virtual SAN, Virtual Volumes & Virtual Flash), you can watch the video here:
The first few minutes are a little noisy, but that gets sorted out after a while. The one thing that is missing from the video is the disclaimer slide which I showed at the beginning of the presentation. It’s the usual stuff, insofar as we make no guarantees around the delivery of these projects. Hope you find it interesting, and much kudos to the folks at VMUG Italia for making this possible.
Get notified of these blog postings and more VMware Storage information by following me on Twitter: @VMwareStorage
EMC World kicked off today in Las Vegas, and much of this week’s buzz is focused squarely on big data. Specifically, VMware’s CEO Pat Gelsinger is hot on how to build big data solutions into the enterprise as a service. During his keynote, Gelsinger and VMware data architect Michael West showed attendees how smart organizations will deploy and manage Hadoop clusters in the future, dramatically improving time-to-insight and productivity.
I’ve been involved in a few conversations recently related to device queue depth sizes. This all came about as we discovered that the default device queue depth for QLogic Host Bus Adapters was increased from 32 to 64 in vSphere 5.0. I must admit, this caught a few of us by surprise, as we didn’t have this change documented anywhere. Anyway, various Knowledge Base articles have now been updated with this information. Immediately, folks wanted to know about the device queue depth for Emulex. Well, this hasn’t changed and remains at 32 (although in reality it is 30 for I/O, as two slots on the Emulex HBAs are reserved). But are there other concerns?
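Before getting to that: for anyone who wants to check or adjust these defaults themselves, the values are exposed as driver module parameters on the ESXi host. A minimal sketch from the ESXi shell follows; note that the module name (qla2xxx here for QLogic) and the parameter name vary by driver version, so verify them against the relevant Knowledge Base article for your release before changing anything:

    # List the QLogic driver parameters and look for ql2xmaxqdepth
    esxcli system module parameters list -m qla2xxx

    # Example: set the QLogic device queue depth back to 32
    # (a host reboot is required for module parameters to take effect)
    esxcli system module parameters set -m qla2xxx -p ql2xmaxqdepth=32

The effective queue depth of a device can then be confirmed in the DQLEN column of esxtop’s disk device view.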
vSphere 5.1 Update 1 is now available. For those of you running 5.1, there are a lot of critical fixes and enhancements, so I’d urge you to review the release notes and consider scheduling a slot to upgrade your infrastructure to this new release. There are updates for both vCenter and ESXi in this release.
Since this is the storage blog, I wanted to call out a few items addressed in 5.1U1 which are directly relevant to storage; these are features I know a number of our customers have been waiting for.
Many of you will have read various articles related to queue depths, especially in the area of LUN/device queue depths, and how these can be tuned to change the performance of your I/O. However, there are other queue settings internal to the VMkernel which relate to how many I/Os a virtual machine can issue before it has to allow another virtual machine to send I/Os to the same LUN. What follows is some detail around these internal settings and how they are used to achieve fairness and performance for virtual machine I/O.
Warning: These settings have already been pre-configured to allow virtual machines to perform optimally. There should be no reason to change them unless guided to do so by VMware Support staff. This is all about performance vs. fairness: failure to follow this advice can give you some very fast virtual machines in your environment, but also some extremely slow ones.
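To make that warning concrete, the knobs being referred to here are, I believe, the disk scheduler settings Disk.SchedNumReqOutstanding (DSNRO) and Disk.SchedQuantum. Assuming that is the case, you can at least inspect them read-only from the ESXi shell without risk:

    # Show how many outstanding I/Os a VM may have against a LUN
    # when multiple VMs are competing for that LUN
    esxcli system settings advanced list -o /Disk/SchedNumReqOutstanding

    # Show how many consecutive I/Os one VM may issue before the
    # scheduler switches to another VM's I/Os
    esxcli system settings advanced list -o /Disk/SchedQuantum

Again: look, but don’t touch, unless VMware Support advises otherwise.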
Last week I received a question from a customer asking about configuring shares for a Virtual Machine’s virtual disk (VMDK) as well as setting an IOPS limit for a virtual disk. An old script that I had written was shared with the customer as an example, but they were interested in having the functionality provided through a vCenter Orchestrator (vCO) workflow instead.
The resource management team is interested in your opinion about a feature in development: storage reservations.
The current version of Storage I/O Control provides tools to prioritize and provide fairness to the I/O streams of virtual machines. However, it does not provide a way to specify a number of IOPS in order to guarantee a minimum level of service.
Because the storage subsystem can be shared, external workloads can impact the performance capacity (in terms of IOPS) of the datastores, and therefore a guarantee may temporarily not be met. This is one of the challenges that must be taken into account when developing storage reservations, and we must understand how stringent you want the guarantee to be.
One of the questions we are dealing with is whether you would like strict admission control or relaxed admission control. With strict admission control, a virtual machine power-on operation is denied when vSphere cannot guarantee the storage reservation (similar to compute reservations). Relaxed admission control turns storage reservations into a share-like construct, defining relative priority at times when not enough IOPS are available at power-on. For example: the storage reservation on VM1 = 800 and on VM2 = 200. At boot, 600 IOPS are available; therefore VM1 gets 80% of 600 = 480 IOPS, while VM2 gets 20%, i.e. 120 IOPS. When the array is able to provide more IOPS, the correct number of IOPS is distributed to the virtual machines in order to satisfy the storage reservations.
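In general terms, relaxed admission control would hand each virtual machine a share of the available IOPS in proportion to its reservation. Assuming my reading of the example is correct, that is simply:

    IOPS granted to VMi = available IOPS x ( reservation of VMi / sum of all reservations )

which is how the 480/120 split above is arrived at (800/1000 and 200/1000 of the 600 available IOPS).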
In order to decide which features to include and to define the behavior of storage reservations, we are very interested in your opinion. We have created a short list of questions, and by answering them you can help us define our priorities during the development process. I intentionally kept the questions to a minimum so that the survey should not take more than 5 minutes of your time to complete.
As always, this article provides information about a feature that is currently under development. This means the feature is subject to change, and neither VMware nor I promise to deliver on any features mentioned in this article or survey.
Any other ideas about storage reservations? Please leave a comment below.
This is an issue that has come up time and time again. The basic gist of the problem is that when Microsoft Cluster Service (MSCS) virtual machines are deployed across ESXi hosts (commonly referred to as Cluster Across Boxes, or CAB), the virtual machines share access to disks, which are typically Raw Device Mappings (RDMs). RDMs are LUNs presented directly to virtual machines. When an ESXi host is rebooted, one assumes it holds the passive virtual machines/cluster nodes, while the other ESXi host or hosts hold the active virtual machines/cluster nodes. Since the active nodes hold SCSI reservations on the shared disks/RDMs, this slows down the boot process of the rebooting ESXi host as it tries to interrogate each of these disks during storage discovery. So what can you do to alleviate it? Read on and find out.
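Without giving too much of the post away: the mitigation usually recommended for this scenario (documented in VMware KB 1016106) is to flag the shared RDM LUNs as perennially reserved on each ESXi host, so the host skips them during boot-time device discovery rather than waiting for them to time out. A sketch, using a placeholder device identifier in place of your actual RDM LUN:

    # Mark the RDM LUN as perennially reserved so the host does not
    # interrogate it at boot (repeat on every ESXi host in the cluster)
    esxcli storage core device setconfig -d naa.600xxxxxxxxxxxxxxxxx --perennially-reserved=true

    # Verify: the output should report "Is Perennially Reserved: true"
    esxcli storage core device list -d naa.600xxxxxxxxxxxxxxxxx

Note that this setting is applied per host and does not propagate, so it needs to be set everywhere the RDMs are visible.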