
Tag Archives: storage

6 Nifty And Versatile Tools To Get You Started With VMware Virtual SAN

I was so excited to see that the Hands-On-Labs (HOL) had finally been updated to Virtual SAN 6.2 that it prompted this blog. I just couldn’t stop at HOL! The VMware storage & availability team offers many other nifty tools to get you started with VMware Virtual SAN. If you haven’t yet, check out these 6 tools to get you on the path to hyper-convergence.


1] Virtual SAN Product Walkthrough

This is a simple and easy way to get your feet wet. This series highlights the features and benefits of Virtual SAN 6.1, including how to enable Virtual SAN, how to create and assign a storage policy, and Virtual SAN’s resiliency to host failures. A major plus is that the interface is very simple to use and navigate.


2] Virtual SAN Hands-On-Labs (HOL) – now updated to Virtual SAN 6.2

Hands-on Labs are the fastest and easiest way to test-drive the full technical capabilities of Virtual SAN. These evaluations are free, up and running in your browser in minutes, and require no installation. We’re excited that the HOL has been updated to Virtual SAN 6.2. This lab covers new features including erasure coding (RAID-5/6), checksum, sparse swap, and dedupe/compression. You can also see the new health check views, performance metric views, and capacity views.

Also included is a workflow that will guide you through configuring Virtual SAN stretched cluster and remote-office/branch-office (ROBO) implementations, and how these features work with HA to restart VMs in the event of a failure.


Continue reading

Top 5 Virtual SAN Posts From 2016 to Date

Time doesn’t slow down for anybody, so it’s understandable if you miss some information here and there. Luckily, we’ve got you covered on Virtual Blocks. Take a look back on some of the most popular Virtual SAN posts from the past few months.

What’s New – VMware Virtual SAN 6.2

On February 10th, we announced VMware Virtual SAN 6.2, updated to include robust space-efficiency features: deduplication and compression, as well as RAID-5/RAID-6 support for all-flash Virtual SAN environments. John Nicholson also discusses new extensions to the Virtual SAN Ready Node program.

The Use of Erasure Coding In VMware Virtual SAN 6.2

Christos Karamanolis proves one size doesn’t fit all. In this blog, he discusses Virtual SAN’s implementation of RAID-5 and RAID-6 and advises customers to evaluate their requirements in order to gain a better understanding of what they need based on their workload.

Introducing The 4th Generation VMware Virtual SAN

Yanbing Li dives into how VMware Virtual SAN continues to build on its principal benefits: simplicity, performance, cost-efficiency, and scalability.

The Road to All-Flash VMware Virtual SAN

John Nicholson talks about why he thinks 2016 will see All-Flash VMware Virtual SAN overtake 10K RPM drive-based hybrid Virtual SAN as the most popular deployment choice.

Virtual SAN Stretch Clusters – Real World Design Practices (Part 1)

Jonathan McDonald gives us a personal account of setting up stretched clusters for Virtual SAN. Here, he provides tips that will help ensure a flawless experience.

Be sure to subscribe to the Virtual Blocks blog, and follow our social channels at @vmwarevsan and Facebook.com/vmwarevsan for the latest updates.

Virtual SAN 6.2 – Deduplication And Compression Deep Dive

Virtual SAN 6.2 introduced several highly anticipated product features, and in this blog we’ll focus on two of the coolest ones: dedupe and compression. These features were requested by VMware customers, and I am glad that we listened. When talking about dedupe and compression, one first needs to understand why an organization would want to use them and what these features actually do. One of the main reasons is lower TCO: with dedupe and compression enabled, a Virtual SAN cluster does not utilize as much storage as it otherwise would, hence saving dollars. It is also important to note that dedupe and compression are supported on all-flash Virtual SAN configurations only.

What are Dedupe and Compression?

The basics of deduplication can be seen in the figure below. Blocks of data stay in the cache tier while they are being accessed regularly; once that access tapers off, the deduplication engine checks whether the block of data in the cache tier has already been stored on the capacity tier, so only unique chunks of data are stored.

[Figure: deduplication basics]
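To make the idea concrete, here is a minimal sketch of content-hash deduplication. This is purely illustrative; VSAN's actual destaging engine is far more sophisticated, and the block size and hash choice here are assumptions for the example:

```python
import hashlib

# Minimal sketch of block-level dedupe on destage (illustrative only;
# not VSAN's actual engine). Blocks leaving the cache tier are hashed,
# and only unique payloads are written to the capacity tier.
capacity_tier = {}   # content hash -> unique block payload
block_map = []       # logical block order -> content hash (a reference)

def destage(block: bytes) -> None:
    digest = hashlib.sha256(block).hexdigest()
    if digest not in capacity_tier:   # store unique chunks only
        capacity_tier[digest] = block
    block_map.append(digest)          # duplicates just add a reference

for blk in (b"A" * 4096, b"B" * 4096, b"A" * 4096):
    destage(blk)

print(len(block_map), "logical blocks ->", len(capacity_tier), "unique blocks stored")
# 3 logical blocks -> 2 unique blocks stored
```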

Continue reading

VMware Virtual SAN: The Technology And Its Future

As discussed in earlier posts, the latest version of Virtual SAN (v6.2), announced on February 10, 2016, is the biggest release of the product since its debut in March 2014. The list of new features is impressive and makes Virtual SAN very competitive against the most sophisticated storage platforms in the market today. Indeed, with more than 3,000 customers overall and more than 20,000 CPU licenses sold in Q4 2015 alone, Virtual SAN is one of the most widely deployed and mature Software-Defined Storage (SDS) products available.

Virtual SAN is a storage platform – the key software component that enables VMware’s Hyper-Converged Infrastructure (HCI) strategy. HCI is a drastically different model of building and operating IT infrastructure. A quick Google search returns the following definition:

Hyper-convergence (hyperconvergence) is a type of infrastructure system with a software-centric architecture that tightly integrates compute, storage, networking and virtualization resources and other technologies from scratch in a commodity hardware box supported by a single vendor. Credit: http://searchvirtualstorage.techtarget.com/definition/hyper-convergence-

This new IT architecture has many benefits for the end customer including:

  • Streamlined procurement, deployment and support. Customers can build their infrastructure in a gradual and scalable way as demands evolve.
  • Adaptable software architecture that takes advantage of commodity technology trends, such as increasing CPU densities, new generations of solid-state storage and non-volatile memories, and evolving interconnects (40Gb and 100Gb Ethernet) and protocols (NVMe).
  • Last but not least, a uniform operational model that allows customers to manage the entire IT infrastructure with a single set of tools.

It is not surprising that according to IDC, hyper-converged infrastructure (HCI) is the fastest growing segment of the converged (commodity-based hardware) infrastructure market.

Continue reading

Virtual SAN 6.2 Licensing Guide

VMware, the market leader in powering Hyper-Converged Infrastructure (HCI), enables the lowest cost and highest performance next-generation HCI solutions through proven VMware Hyper-Converged Software. The natively integrated software combines the market-leading VMware vSphere hypervisor, the VMware vCenter Server unified management solution, and radically simple VMware Virtual SAN storage with the broadest and deepest set of HCI deployment choices.

Virtual SAN is quite capable of running nearly any virtual server and desktop workload. These workloads run in an ever-increasing number of environments: data centers, remote offices, call centers, retail stores, commercial ships, and the list goes on. A one-size-fits-all licensing model does not cover such a wide variety of use cases, so VMware offers Virtual SAN in a few different licensing options.

The Virtual SAN 6.2 Licensing Guide has been created to help customers and partners understand what licensing editions are available, the features included in each edition, the consumption types (per-CPU and per-VM), and the scenarios in which they are used. While it might seem a bit confusing at first, you will hopefully see that the intent was to keep licensing as simple as possible while providing flexible, cost-effective options for a wide variety of implementation scenarios. This guide begins with a quick introduction to Virtual SAN and the license editions available with version 6.2. This is followed by several example scenarios along with a summary that highlights the main items to keep in mind when considering Virtual SAN 6.2 licensing.

Hopefully, this guide clears up any questions you might have around Virtual SAN licensing.


This post originally appeared on the Storage & Availability blog Virtual Blocks and was written by Jeff Hunter. Jeff is a Senior Technical Marketing Architect at VMware focusing on availability solutions. Jeff has been with VMware since 2007. Prior to VMware, Jeff spent several years in a systems engineer role expanding the virtual infrastructures at a regional bank and a Fortune 500 insurance company. Follow him on Twitter: @jhuntervmware

Introducing VMware Hyper-Converged Software

VMware Hyper-Converged Software

Powering the industry’s largest Hyper-Converged Infrastructure ecosystem

Hyper-Converged Infrastructure (HCI) is transforming the way private datacenter infrastructure is being built – see this post for an overview of HCI. It eliminates the traditional hardware silos of compute, storage and networking, to move all the intelligence into a single software layer running on industry-standard x86 servers. By doing so, HCI makes private infrastructure a lot simpler, higher performing, and more cost-effective. In essence, the infrastructure starts looking like the datacenters of web-scale companies such as Google or Amazon. We’re seeing these benefits play out across thousands of VMware customers that have deployed and expanded their HCI deployments over the past year.

Hyper-Converged Infrastructure

Hyper-Converged Infrastructure relies on both great hardware and great software.  The hardware consists of industry-standard x86 building blocks, serving as the foundation for the entire datacenter.  This hardware convergence relies on critical innovations such as flash and faster CPUs.

At the same time, it’s clear that HCI is first and foremost about the software. Software innovation is what makes HCI possible. Compute, storage, networking and management are now delivered as software. For storage specifically, this requires a software-defined, distributed, shared storage model with all the data services typically provided by external SAN or NAS – but all delivered as software on the hypervisor. This distributed software is very hard to build, which is why only a few vendors are able to pull it off.

Let’s introduce you to VMware Hyper-Converged Software

At VMware, we believe we have an incredibly valuable and innovative set of software assets that enables HCI:

  • vSphere is, of course, the most widely deployed and proven hypervisor in the industry. It also delivers basic Virtual Machine networking capabilities with vSphere Distributed Switch.
  • Virtual SAN provides high-performance, enterprise-class shared storage
  • vCenter Server provides unified management across the stack

Continue reading

Oracle U2VL With Virtual SAN And The Batch Processing Use Case

Unix to Virtualized Linux (U2VL) is a critical step toward the SDDC: it aims to migrate applications and data from physical Unix servers to Linux virtual machines running on virtualized x86 infrastructure. These applications are typically business critical; therefore, customers normally take a very cautious approach, running a carefully planned and executed Proof-of-Concept (POC) in order to validate performance, availability, and scalability, among many other areas.

My colleagues in China (a big shout out to Tony Wang and his team!) recently did one such POC with a large local bank, and naturally they chose the Virtual SAN hyper-converged architecture for all of the compute and storage needs. The test results were so illustrative of many of the Virtual SAN benefits that I’d like to share the POC and some of its results here, although I’m not allowed to mention the customer’s name for reasons you probably understand.

Continue reading

VMware Virtual SAN Delivers Enterprise Level Availability

One of the slides we showcased during the VMware Virtual SAN 6.1 Launch that got a lot of attention was the following slide:

[Slide: VSAN delivers 6-9s availability]

A lot of eyebrows went up in the audience, wondering how we came to the conclusion that VSAN delivers a 6-9s availability level (or less than 32 seconds of downtime a year). While Virtual SAN uses software-based RAID, which differs in implementation from traditional storage solutions, the end result is the same: your data objects are mirrored (RAID-1) for increased reliability and availability. Moreover, with VSAN your data is mirrored across hosts in the cluster, not just across storage devices as is the case with typical hardware RAID controllers.

VSAN users can set their goals for data availability by means of a policy that may be specified per VM, or even per VMDK if desired. The relevant policy is called ‘Failures to Tolerate’ (FTT) and refers to the number of concurrent host and/or disk failures a storage object can tolerate. For FTT=n, “n+1” copies of the object are created and “2n+1” hosts are required (to ensure availability even under split-brain situations).

For the end user, it is important to quantify the levels of availability achieved with different values of the FTT policy. With only one copy (FTT=0), the availability of the data equals the availability of the hardware the data resides on. Typically, that is in the range of 2-9s (99%) availability, i.e., about 3.65 days of downtime per year. For higher values of FTT, more copies of the data are created across hosts, which exponentially reduces the probability of data unavailability. With FTT=1 (2 replicas), data availability goes up to at least 4-9s (99.99%, or roughly 53 minutes of downtime per year), and with FTT=2 (3 replicas) it goes up to 6-9s (99.9999%, or about 32 seconds of downtime per year). Put simply, for FTT=n, more than n hosts and/or devices have to fail concurrently for one’s data to become unavailable. Many people challenged us to show them how the math actually works to arrive at these conclusions. So let’s get to it.
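As a back-of-the-envelope check (assuming independent failures and roughly 99% availability per copy, as above), the numbers work out like this:

```python
# Back-of-the-envelope FTT availability math (assumes independent failures
# and ~99% availability per copy; illustrative, not a formal model).
def availability(ftt: int, per_copy: float = 0.99) -> float:
    copies = ftt + 1                      # FTT=n keeps n+1 replicas
    return 1 - (1 - per_copy) ** copies   # unavailable only if all copies fail

for ftt in range(3):
    a = availability(ftt)
    hosts = 2 * ftt + 1                   # hosts required (split-brain protection)
    downtime_s = (1 - a) * 365 * 24 * 3600
    print(f"FTT={ftt}: needs {hosts}+ hosts, {a:.4%} available, "
          f"~{downtime_s:,.0f} s downtime/yr")

# FTT=0: 99.0000% -> ~3.65 days/yr
# FTT=1: 99.9900% -> ~53 min/yr
# FTT=2: 99.9999% -> ~32 s/yr
```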

Continue reading

Architecting Virtual SAP HANA Using VMware Virtual Volumes And Hitachi Storage

VMworld Recap: SAP HANA and VMware Virtual Volumes

This is a follow-up to my earlier VMworld blog, “Virtualizing SAP HANA Databases Greater Than 1TB on vSphere 5.5”, where I discussed SAP Multi-Temperature Data Management strategies and techniques that can significantly reduce the size and cost associated with SAP HANA’s in-memory footprint. This blog focuses on Software-Defined Storage and the need for VMware Virtual Volumes when deploying mission-critical applications and databases like SAP HANA, as discussed in my VMworld session.

Multi-Temperature Data Management Is By Definition Software-Defined Storage

For SAP and VMware customers planning to leverage multi-temperature strategies, where data is classified by frequency of access as hot, warm, or cold, this approach is the essence of Software-Defined Storage. It can also be equated to EMC’s Information Lifecycle Management, which examines the value of data to the business over time. To bring the concept of the Software-Defined Data Center, and more precisely Software-Defined Storage, to reality, see Table 1. This table depicts the various storage options for SAP HANA so customers can create an architecture that aligns with the business and its applications’ demands.

Table 1: Multi-Temperature Storage Options with SAP HANA


Planning Your Journey To Software-Defined Storage

As we get into the various storage options for SAP HANA, VMware has made it very easy to create and deploy software-defined storage in the form of Virtual Volumes. However, I want to stress that defining how the storage should be abstracted is a collaborative task: at a minimum you must involve the storage team, VI admins, application owners, and DBAs in order to create an optimized virtual architecture. This should not be a siloed task.

In my previous post I discussed the storage requirements for the SAP HANA In-Memory, Dynamic Tiering, Near-Line Storage, and archiving components; one last option I did not cover in Table 1 is Data Aging, which is specific to SAP Business Suite. Under normal operations SAP HANA does not preload data into memory; data is loaded upon first access, so the first time you access data it always comes off disk.

With Data Aging you can essentially mark data so it’s never loaded into memory and will always reside on disk. This is not available on all modules of Business Suite, so please check with SAP for availability and roadmap with respect to Data Aging.

Essentially, this is another SAP HANA feature that enables customers to reduce and manage their memory footprint more efficiently and effectively. The use of Data Aging can change the design requirements of your software-defined storage: if Data Aging becomes more prevalent in your SAP landscape, VMware Virtual Volumes can be used to address the application’s changing storage requirements by seamlessly migrating data between different classes of software-defined storage or VMDKs.

VMware Virtual Volumes Transform Storage By Aligning With SAP HANA’s Requirements

Now let’s get into Virtual Volumes and the problems they solve. With Virtual Volumes, the fundamental model centers on provisioning storage based on application needs rather than on the underlying infrastructure. When deploying SAP HANA using the Tailored Data Center Integration model, the storage KPIs can be quite complex, so how do customers translate latency and throughput for reads, writes, and updates at various block sizes to the storage layer?

And how does a customer address the storage requirements for SAP HANA’s entire data life cycle, whether planning on Dynamic Tiering with or without Near-Line Storage, plus the storage requirements of the archiving strategy? Some of the storage requirements also tie back to the compute layer; for example, with Dynamic Tiering, if you plan on using Row-Level Versioning there is a compute-to-memory relationship that comes into play when sizing storage.

Addressing and achieving these design goals using an infrastructure-centric model can be quite difficult because you are tied to physical LUNs, and trust me, with mission-critical databases you will always have database administrators fighting over the lowest-numbered LUNs because of concerns around radial density. This leads to tremendous waste when provisioning storage using an infrastructure-centric model.

VMware Virtual Volumes significantly reduces storage design complexity by using an application-centric model. You are no longer dealing with storage at the LUN level; instead, vSphere admins use policies to express application requirements to the storage array, and the storage array maps storage containers to those requirements.
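To illustrate the application-centric model, here is a purely hypothetical sketch (not the actual vSphere SPBM or VASA API; the policy fields and container names are invented for the example). A policy is a set of requirement rules that placement matches against the capabilities each storage container advertises:

```python
# Hypothetical sketch of policy-to-container matching (not the real
# vSphere SPBM/VASA API). A policy expresses application requirements;
# the array surfaces container capabilities; placement picks a match.
hana_log_policy = {"max_latency_ms": 1.0, "raid": "RAID-1", "drive": "SSD"}

containers = {
    "tier1-ssd": {"max_latency_ms": 0.5, "raid": "RAID-1", "drive": "SSD"},
    "tier2-sas": {"max_latency_ms": 5.0, "raid": "RAID-5", "drive": "10K SAS"},
}

def satisfies(policy: dict, caps: dict) -> bool:
    # A container qualifies when it meets or beats every policy rule.
    return (caps["max_latency_ms"] <= policy["max_latency_ms"]
            and caps["raid"] == policy["raid"]
            and caps["drive"] == policy["drive"])

eligible = [name for name, caps in containers.items()
            if satisfies(hana_log_policy, caps)]
print(eligible)  # -> ['tier1-ssd']
```

The point of the model is exactly this inversion: the admin states what the application needs, and the array decides where the volume lands.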

What are VMware Virtual Volumes?

At a high level, I’ll go over the architecture and components of Virtual Volumes. This blog is not intended to be a deep dive into Virtual Volumes; instead, my goal is to convey that mission-critical use cases for VVOLs and software-defined storage are real. For an excellent white paper on Virtual Volumes, see “VMware vSphere Virtual Volumes Getting Started Guide”.

As shown in Figure 1, Virtual Volumes are a new type of virtual machine object, created and stored natively on the storage array. The Vendor Provider, also known as the VASA Provider, implements the vSphere Storage APIs for Storage Awareness (VASA); it provides the storage awareness services and mediates out-of-the-box communication between vCenter Server and ESXi hosts on one side and the storage system on the other.

Storage containers are pools of raw storage that a storage system can provide to virtual volumes; unlike LUNs and NFS volumes, they do not require pre-configured volumes on the storage side. With virtual volumes you also retain the functionality you would expect when using native VMDKs.

A virtual datastore represents a storage container in a vCenter Server instance, so it’s a 1:1 mapping to the storage system’s storage container. ESXi hosts have no direct access to the virtual volumes on the storage side, so they use a logical I/O proxy called a protocol endpoint; and, as you would expect, VVOLs are compatible with industry-standard protocols: iSCSI, NFS, FC, and FCoE.

The published storage capabilities will vary by storage vendor, depending on which capabilities have been exposed and implemented. In this blog we will be looking at the exposed capabilities of Hitachi Data Systems: latency, throughput, RAID level, drive type/speed, IOPS, and snapshot frequency, to mention a few.

Figure 1: vSphere Virtual Volumes Architecture and Components


VMware HDS: Creating Storage Containers, Virtual Volumes, and Profiles for Virtual SAP HANA

Virtual Volumes is an industry-wide initiative; essentially a who’s who of the storage industry is participating. This next section, however, is representative of the work done with Hitachi Data Systems.

Again, the guidance here is collaboration when architecting software-defined storage for SAP HANA landscapes, and for that matter any mission-critical application or database. The beauty of software-defined storage is that, once created and architected correctly, you can provision your virtual machines in an automated and consistent manner.

So in the spirit of collaboration, I got together with Hitachi’s SAP alliance team, their storage team, and database architects and we came up with these profiles, policies, and containers to use when deploying SAP HANA landscapes.

We had several goals when designing this architecture. One was to use virtual volumes to address the entire data life cycle of SAP HANA: the in-memory component, Dynamic Tiering, Near-Line Storage, and archiving, or any supported combination of the above, when creating a SAP HANA landscape. Second, we wanted to enable rapid provisioning of SAP HANA landscapes, so we created profiles, policies, and containers that could be used to deploy SAP HANA databases whose in-memory component ranges from 512GB to 1TB in size.

I’ll review some of the capabilities HDS exposed that were used for this architecture (summarized in the code sketch after this list):

  • Interestingly enough, we were able to meet the SAP HANA in-memory KPIs using Hitachi Tier 2 storage, which consisted of 10K SAS drives for both log and data files as well as for the operating system and the SAP HANA shared file system; this also simplified the design. We then used high-density SAS drives for the backup areas.
  • We enabled automatic storage-managed snapshots for the HANA data, log, and OS, and set the snapshot frequency based on classifications of Critical, Important, or Best Effort.
  • Snapshots for the data and log were classified as Critical, the OS was classified as Important, and the backup area was not snapshotted at all.
  • We also tagged this storage as certified, capturing the model and serial number, since the SAP HANA in-memory component requires certified storage. We wanted to make sure that when creating HANA VMs you’re always pulling from certified storage containers.
  • The Dynamic Tiering and NLS storage had similar requirements, so they could be provisioned from the same containers. Since these are disk-based columnar databases, we selected Tier 1 SSD storage for the data files based on the random read/write patterns, and stuck with SAS drives for the log files since sequential workloads don’t benefit much from SSDs. Because of the disk-based access, we selected Tier 2 to satisfy the IOPS and latency requirements.
  • Finally, for the archiving containers we used the lowest-cost, highest-density storage: pretty much just a file system.
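For illustration, the tiering decisions above could be captured in a simple mapping like this (a hypothetical recap of the bullets, not an actual HDS or VMware configuration artifact):

```python
# Hypothetical recap of the tiering decisions above (illustrative only;
# not an actual HDS or VMware configuration artifact). None means no
# snapshot policy was called out for that area in the design notes.
hana_storage_profiles = {
    "in-memory data/log":  {"tier": 2, "drive": "10K SAS", "snapshot": "Critical"},
    "OS + HANA shared FS": {"tier": 2, "drive": "10K SAS", "snapshot": "Important"},
    "backup":              {"tier": 2, "drive": "high-density SAS", "snapshot": None},
    "DT/NLS data":         {"tier": 1, "drive": "SSD", "snapshot": None},  # random read/write
    "DT/NLS log":          {"tier": 2, "drive": "SAS", "snapshot": None},  # sequential workload
    "archive":             {"tier": "lowest-cost", "drive": "high-density", "snapshot": None},
}
```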

There’s just too much information to cover in this post, but for those of you interested, VMware and Hitachi will be publishing a co-logo white paper that takes a much deeper dive into how we architected these landscapes so customers can do this almost out of the box.

Deploying VMware Software-Defined Storage With vSphere and Hitachi Command Suite

Example: SAP HANA Dynamic Tiering and Near-Line Storage tiers. The next few screen captures show how simple virtual volumes are to deploy once architected correctly.

Figure 2: Storage Container Creation: SAP HANA DT and NLS Tier


Figure 3: Create Virtual Machine Storage Policies SAP HANA DT/NLS Data/Log File


Figure 4: Create New SAP HANA DT VM Using VVOLS Policies With Hitachi Storage


Addressing Mission Critical Use Cases with VMware Software-Defined Storage

SAP HANA with Multi-Temperature Data Management is the poster child for mission-critical software-defined storage use cases. VMware Virtual Volumes solves the complexities and simplifies storage provisioning by using an application-centric model rather than an infrastructure-centric model.

The SAP HANA in-memory component is not yet certified for production use on vSphere 6.0; however, Virtual Volumes can be used for SAP HANA Dynamic Tiering, Near-Line Storage, and archiving. So my advice to our customers is to start architecting now: get together with your storage admins, VI admins, application owners, and database administrators to create containers, policies, and profiles correctly, so that when vSphere 6.0 is certified you are ready to “Run SAP HANA Simple”.


VMworld Topic: Virtual Volumes (VVOLS) a game changer for running Tier 1 Business Critical Databases

One of the major components released with vSphere 6 this year was support for Virtual Volumes (VVOLS). VVOLS has been gaining momentum with storage vendors, who are enabling its capabilities in their arrays.

When virtualizing business databases, there are many critical concerns that need to be addressed, including:

  1. Database performance to meet strict SLAs
  2. Daily operations, e.g. backup & recovery, completing in a set window
  3. Cutting down time to clone / refresh databases from production
  4. Meeting different IO characteristics and capabilities based on criticality
  5. The never-ending debate with DBAs: file systems vs. raw devices (VMFS vs. RDM)

VVOLS can offer solutions to mitigate these concerns, which impact the decision to virtualize business-critical databases. VVOLS can help with the following:

1. Reduce backup windows for databases
2. Provide the ability to take database-consistent backups
3. Reduce cloning times for multi-terabyte databases
4. Provide capabilities for Storage Policy Based Management

The solutions available with VVOLS and their impact on virtualized Tier 1 business-critical databases will be discussed in detail at VMworld 2015 in session STO4452:

STO4452 – Virtual Volumes (VVOLS) a game changer for running Tier 1 Business Critical Databases
Session Date/Time: 08/31/2015 03:30 PM – 04:30 PM