By now, you’ve probably heard about VMware Virtual SAN – the industry leader in hyper-converged, software-defined storage for virtual environments. You’ve probably also heard how VMware Virtual SAN can lower TCO by up to 50%, or how it is the first policy-driven storage product designed for vSphere environments, simplifying and streamlining storage provisioning and management.
But in case you haven’t heard, we’d like to invite you to #VSANchat – our inaugural Twitter chat, where our experts and the larger storage community will discuss how to get started with Virtual SAN and what to consider along the way. On December 2nd, 2014, at 11am PT, we invite you to discuss tips, best practices and insights on everything VMware Virtual SAN.
Get your notepads and pens ready, because we’re co-hosting a webinar with Nexenta on November 19, at 8 a.m. PST, detailing our complete software-defined, hyper-converged infrastructure offering. Join this webinar to learn how Virtual SAN and file services will fit in your environment, what Software-Defined Storage has to offer your organization, and how your business can benefit.
VMware’s own Rawlinson Rivera, Senior Technical Marketing Architect, will co-host the webinar with Nexenta’s Michael Letschin, Director, Product Management, Solutions. During this webinar, we’ll discuss:
Storage provisioning and management of VMware Virtual SAN’s hypervisor-converged storage
Merging VMware Virtual SAN with VMware EVO: RAIL into a hyper-converged infrastructure that combines compute, networking and storage resources
How NexentaConnect for VMware Virtual SAN enables better file services, snapshot and self-service file recovery
How Nexenta can support a variety of workloads and business-critical situations through its Software-Defined Storage solutions
Oregon State University, a public institution with more than 26,000 students and growing VDI workloads, wanted a high-performance storage tier for its VDI environment. The university also needed the solution to be up and running before the summer session began, and to be easy to operate and scale on an ongoing basis without requiring large upfront investments.
In this article we will take the next step and illustrate how to leverage vSphere Storage Policies to enhance the provisioning of new VMs. We will walk through a few provisioning examples involving a virtual machine with a single traditional storage array backed datastore, a vsanDatastore, and a multi-vendor mixed datastore environment.
In Part I, Part II and Part III of this blog post series, we reviewed three different methods of running benchmark tests on a Virtual SAN cluster: synthetic I/O tools such as Iometer, pre-created application I/O trace replay files available for download, and custom-created application I/O trace replays. Once you are running benchmark tests, you will need to assess and analyze the performance results of your Virtual SAN cluster and how they meet the needs of the target applications within your environment. In this post, we will review some key concepts in performing a performance analysis of your Virtual SAN solution.
It has been 10 months since we released the first set of Virtual SAN Ready Nodes, which are validated server configurations jointly recommended by VMware and Server OEMs to accelerate Virtual SAN deployment. We have been working closely with multiple Server OEM partners to continuously update the list of Virtual SAN Ready Nodes.
The Virtual SAN Ready Node is another great option besides the DIY/build-your-own approach to deploying Virtual SAN, as we discussed previously, for example in the June 23rd blog post.
We have expanded the list from 24 (in June) to 40 Virtual SAN Ready Nodes from eight Server OEMs.
Why should you care about the Virtual SAN Ready Nodes and how do you use them?
In the previous VMware Virtual SAN Performance Testing blog post we reviewed the benefits of running performance tests utilizing I/O trace files over synthetic workload tools such as Iometer to more accurately characterize the performance of a Virtual SAN cluster. The VMware I/O Analyzer includes pre-created trace files of specific application profiles that allow you to quickly perform scale-out testing utilizing a mix of industry-standard workloads. But what if you want to characterize the performance of your existing vSphere virtualized environment within a new Virtual SAN configuration? This is where the use of custom I/O trace replays can be useful.
In our last article we demonstrated how to use the new vSphere PowerCLI 5.8 SPBM cmdlets to create vSphere Storage Policies. In this article we will demonstrate how to quickly associate a vSphere Storage Policy with a new or existing VM.
Example Provisioning Scenario
To illustrate how to leverage PowerCLI to associate vSphere Storage Policies with VMs, we will continue with the provisioning example from our previous article:
Single virtual disk
Virtual SAN datastore
Follow these links for more information on creating vSphere Storage Policies for Virtual SAN:
Previously, in order to create, manage, and associate vSphere Storage Policies with VMs using PowerCLI, one would need to leverage an intermediary method (e.g. esxcli, RVC, the REST API, etc.). Often this could require the use of third-party applications to bridge the gap in interfacing with the vSphere Storage Policy Based Management service. This resulted in added complexity and additional processing time for workflows that were automated with PowerCLI.
With the new PowerCLI 5.8 cmdlets for vSphere Storage Policy Based Management, we are able to greatly reduce this complexity by working with vSphere Storage Policies using PowerCLI exclusively. In the example below, we will demonstrate how to enhance the VM provisioning process by associating a vSphere Storage Policy with a virtual machine.
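As a minimal sketch of the association step (the policy name "Gold-VSAN" and VM name "Web01" are hypothetical placeholders, and a live vCenter connection is assumed), the SPBM cmdlets can be used as follows:

```powershell
# Connect to vCenter (replace with your server and credentials)
Connect-VIServer -Server vcenter.example.com

# Retrieve the storage policy and the target virtual machine
$policy = Get-SpbmStoragePolicy -Name "Gold-VSAN"
$vm = Get-VM -Name "Web01"

# Associate the policy with the VM home object and all of its hard disks
$vm, (Get-HardDisk -VM $vm) |
    Get-SpbmEntityConfiguration |
    Set-SpbmEntityConfiguration -StoragePolicy $policy

# Verify the association and check compliance status
Get-SpbmEntityConfiguration -VM $vm
```

Because `Set-SpbmEntityConfiguration` accepts entity configurations from the pipeline, the same pattern scales to many VMs at once by simply widening the `Get-VM` query.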
Greetings and welcome to our next article in the PowerCLI 5.8 series for the new vSphere Policy Based Management cmdlets. In today’s article we are going to dive right in and start building our own vSphere storage policies leveraging the new SPBM cmdlets within PowerCLI.
Before we begin though, if you have not yet had an opportunity to familiarize yourself with vSphere Storage Policy Based Management, here are a few key blog articles that can help you build a good foundation.
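For orientation before we dive in, here is a minimal sketch of what policy creation with the new SPBM cmdlets looks like (the policy name, description, and capability value below are illustrative choices, and a live vCenter connection is assumed):

```powershell
# Connect to vCenter (replace with your server and credentials)
Connect-VIServer -Server vcenter.example.com

# Build a rule from a Virtual SAN capability,
# e.g. tolerate one host failure (FTT = 1)
$rule = New-SpbmRule `
    -Capability (Get-SpbmCapability -Name "VSAN.hostFailuresToTolerate") `
    -Value 1

# Group the rule into a rule set, then create the storage policy
$ruleSet = New-SpbmRuleSet -AllOfRules $rule
New-SpbmStoragePolicy -Name "Gold-VSAN" `
    -Description "Tolerate one host failure" `
    -AnyOfRuleSets $ruleSet
```

We will unpack each of these cmdlets, and the capabilities they expose, in the sections that follow.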
Big Data Extensions enables the deployment of Hadoop and HBase clusters in virtual machines on the VMware vSphere platform. This article gives you a brief introduction to the new features in BDE version 2.1. BDE ships as a virtual appliance (an OVA file) and it is a free download for users of vSphere Enterprise or Enterprise Plus.
BDE users are interested in using their favorite management tools from their Hadoop distro vendors, along with BDE and vCenter, to manage their newly created virtualized Hadoop clusters. The 2.1 release of BDE implements this feature in an elegant way!
Now you can use BDE and Cloudera Manager or Ambari together to install and manage your Hadoop clusters without leaving the BDE interface in the vSphere Web Client. You can also use the earlier styles of provisioning a Hadoop cluster, as shown under the "BDE Only" and "BDE 2.0" headings below. The first method, on the left, allows BDE to use a repository to install the Hadoop vendor’s software onto the virtual machines. BDE does the whole job of provisioning everything in this case – hence it is referred to as "BDE Only".
Using BDE 2.0 (shown in the center column), you can create a basic cluster, i.e. one with no Hadoop software in it. Then you can use the Hadoop vendor’s installation and configuration tool to install the Hadoop software on those virtual machines. With BDE 2.1 you don’t have to switch between the different tools; the full Hadoop installation can be done inside BDE’s user interface, using the vendor’s APIs under the covers. The difference between the BDE 2.0 and 2.1 methods is that in 2.1 the management tool from the Hadoop vendor is called by BDE directly.