Category Archives: storage

High throughput iSCSI with VMware: a multi-vendor post

Chad (of EMC) and Vaughn (of NetApp) posted today a great collaborative blog article (with others from VMware, Dell/EqualLogic and  HP/Lefthand) that has a nice backgrounder on iSCSI, talks about some design considerations, links to lots of resources, and then talks about some little-known configuration  and performance considerations. If you are not an iSCSI guru, you should read this post:

Virtual Geek: A Multivendor Post to help our mutual iSCSI customers using VMware.

Today’s post is one you don’t often find in the blogosphere: today’s post is a collaborative effort initiated by me, Chad Sakac (EMC), which includes contributions from Andy Banta (VMware), Vaughn Stewart (NetApp), Eric Schott (Dell/EqualLogic), Adam Carter (HP/Lefthand), David Black (EMC) and various other folks at each of the companies.

Together, our companies make up the large majority of the iSCSI market, all make great iSCSI targets, and we (as individuals and companies) all want our customers to have iSCSI success.

I have to say, I see this one often – a customer struggling to get high throughput out of iSCSI targets on ESX. Sometimes they are OK with that, but often I hear this comment: "…My internal SAS controller can drive 4-5x the throughput of an iSCSI…"
Can you get high throughput with iSCSI with GbE on ESX? The answer is YES. But there are some complications, and some configuration steps that are not immediately apparent. You need to understand some iSCSI fundamentals, some Link Aggregation fundamentals, and know some ESX internals – none of which are immediately obvious…

If you’re interested (and who wouldn’t be interested with a great topic and a bizarro-world “multi-vendor collaboration”… I can feel the space-time continuum collapsing around me :-), read on…

Stephen Foskett gives us the take-home. Essential Reading for VMware ESX iSCSI Users! – Stephen Foskett, Pack Rat.

  • Ethernet link aggregation doesn’t buy you anything in iSCSI environments
  • iSCSI HBAs don’t buy you much other than boot-from-SAN in ESX, either
  • The most common configuration (ESX software iSCSI) is limited to about 160 MB/s per iSCSI target over one-gigabit Ethernet, but that’s probably fine for most applications
  • Adding multiple iSCSI targets adds performance across the board, but configurations vary by array
  • Maximum per-target performance comes from guest-side software iSCSI, which can make use of multiple Ethernet links to push each array as fast as it can go
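The per-target numbers above follow from simple link arithmetic. A minimal sketch (the ~7% allowance for TCP/IP and iSCSI framing overhead is an assumption; real results vary with frame size, flow control, and workload):

```shell
# Rough usable throughput over N gigabit Ethernet links, per direction.
# Assumes roughly 7% of the wire is lost to TCP/IP + iSCSI framing overhead.
links=2
awk -v n="$links" 'BEGIN {
    raw_bits = n * 1000 * 1000 * 1000   # nominal GbE line rate, bits/s
    usable   = raw_bits * 0.93          # subtract assumed protocol overhead
    printf "%d GbE link(s): ~%.0f MB/s usable\n", n, usable / 8 / 1e6
}'
```

With one link this works out to roughly 116 MB/s, which is why guest-side software iSCSI or multiple targets are needed to push much beyond a single link's ceiling.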

More like this, please.

EnableResignature and/or DisallowSnapshotLUN | Yellow Bricks

Our VMware blogs are turning out to be a great resource to dig into topics that, while they might be covered in the docs or white papers, are helped by pulling them out and viewing them in tighter focus — and letting them be indexed by Google.

And now VMware's own Duncan Epping has a nice and thorough look at two storage parameters you should know about: EnableResignature and DisallowSnapshotLUN. From looking at the comments on Duncan's post, I'd say you don't want to have to figure out these specialized options on the fly when something goes wrong.

EnableResignature and/or DisallowSnapshotLUN » Yellow Bricks.

I’ve spent a lot of time in the past trying to understand the settings for EnableResignature and DisallowSnapshotLUN. They had me confused and dazzled a couple of times. Every now and then I still seem to have trouble actually understanding these settings; after a quick scan through the VCDX Enterprise Study Guide by Peter I decided to write this post and took the time to get to the bottom of it. I needed this settled once and for all, especially now that I’m starting to focus more on BC/DR. … I do want to stress that these options should only ever be set temporarily, considering the impact these changes can have! When you set either option, reset it to the default afterwards.
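On ESX 3.x these are LVM advanced settings you can read and set from the service console with esxcfg-advcfg. A sketch of the workflow Duncan describes (the adapter name is a placeholder; check your current values first, and heed his warning to restore the defaults when done):

```shell
# Inspect the current values (ESX 3.x service console).
esxcfg-advcfg -g /LVM/EnableResignature
esxcfg-advcfg -g /LVM/DisallowSnapshotLUN

# Example: allow a snapshot LUN to be resignatured, then rescan for it.
esxcfg-advcfg -s 1 /LVM/EnableResignature
esxcfg-rescan vmhba1    # vmhba1 is a placeholder adapter name

# Per Duncan's warning: set the option back to its default afterwards.
esxcfg-advcfg -s 0 /LVM/EnableResignature
```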

Join us Wednesday – EMC’s Chad Sakac at the VMware Communities Roundtable

Join us on the podcast. Wednesdays noon PST / 3pm EST / 8pm GMT. Connect info. This week with EMC’s Chad Sakac. It should be free-ranging and fun. Some possible topics:

  • What VMware, EMC and Cisco are doing together around the Next Generation Datacenter
  • What’s coming in vStorage
  • Reference Architectures for Tier 1 applications like Exchange, SQL Server, Sharepoint
  • What we’re seeing around Disaster Recovery for VMware

VMFS vs. NFS for VMware Infrastructure? | VMware Storage Blog

Good answer to a frequently asked question on the new VMware Storage Blog. Click through for a nice quick read.

Link: VMware: VMware Storage Blog: VMFS vs. NFS for VMware Infrastructure?.

The dynamic, flexible environment that we call VMware Infrastructure
requires shared, coordinated storage between ESX servers. There are two
families of storage technologies that can meet this requirement today,
SAN-based block storage (e.g. Fibrechannel or iSCSI) and NAS. VMware
supports both forms of storage access for our customers. …

So which to use? The first criteria is to continue to use the type
of storage infrastructure you are familiar with. If your organization
uses block based storage – use VMFS.  If NAS is in use, it may make
more sense to deploy VMware Infrastructure with NFS. Other aspects of
storage management, such as the basic virtualization of storage on
behalf of the VM or the internal structure of the virtual disk files
(VMDK) are handled independently of this choice.  You get the same high
level VI functionality regardless.

For new deployments, there are the traditional storage tradeoffs. …

Benefits of VMFS: new VMware Storage Blog

We welcome the newest blog on the block, the VMware Storage Blog. Scott Davis starts us off with a closer look at VMFS and its benefits.

Link: VMware: VMware Storage Blog: VMware’s "Proprietary" Clustered File System.

  1. VMware’s instant one click provisioning, including storage.
    Quick, easy provisioning of a new VM, OS and application that does not
    require physical storage LUN provisioning.
  2. Mobility/Portability, i.e. VMotion and Storage VMotion. In a virtual
    world, workloads should be abstracted from, not beholden to, physical
    storage. Just like they should be abstracted from physical servers.
  3. Encapsulation and HW Independence. VMs should be entirely
    encapsulated from the physical world. This simple, but critically
    important facet of virtualization unleashes the power of virtual
    infrastructure. For example, look at VMware’s new Site Recovery
    Manager that enables DR solutions that no longer require identical
    hardware (and software) configurations at each site.
  4. Reduced complexity. SAN management is hard, complicated work. Why shouldn’t it be simplified?

The take-home? VMFS eliminates the complexity of physical shared storage while still allowing you to access the physical disk if needed.

The new VMware Storage Blog joins the VMware Networking Blog and VI Team Blog in getting you your regular dose of VI news and helping you gain a greater understanding of virtualization.

VMware is Storage Protocol Agnostic | VI Team Blog

Link: VMware: VI Team Blog: VMware is Storage Protocol Agnostic.

Which storage protocol to choose?

The most common storage related questions we are being asked today are:

  • What is the best choice for running VI3 on shared storage?

  • Should we use Fibre Channel (FC), iSCSI or NFS?

The answer to these questions depends on a number of variables, and as such will not be the same for every environment. VMware currently supports deployment of VI3 on all three of those storage protocol choices, as well as on local ESX server storage, and is focused on enabling customers to be successful at leveraging the benefits each of those choices makes available for the virtualization environment. Although differences exist in which VMware features and functions are available on each, the current approach is to remove as many of those differences as possible so that customers have more choices available to them.

Ask the Expert: Green Storage for the Enterprise Data Center

Over at VMworld.com they have just started the second "Ask the Expert" session, this time featuring Larry Aszmann, CTO of Compellent Technologies. You can view Larry’s presentation online, and then Larry has promised to stick around for a few weeks to answer your questions.

[Update: just finished watching Larry's presentation and it's very interesting. It's really much more about the Green Data Center and how to reduce your spend than an advertisement for Compellent's products. Some factoids: 80% of data center energy is wasted; data center energy consumption is going to double from 2006 to 2011. Here's the kicker: 2/3 of data center energy is on supporting your IT devices -- servers, storage, networking. Data center buildout is extremely capital intensive (and is a gift to your landlord when your lease is up). So every time you increase the energy usage of your servers & storage, your total energy spend goes up 3x as much. Thus, virtualize your servers and look at your storage. 25% of disk space is actually used -- so use thin provisioning. 80% of your data is inactive and rarely accessed, so use ILM -- information lifecycle management -- that puts inactive data on slower, less power hungry devices. Literally cool stuff.]

Link: VMworld: Compellent Expert Session.

February 11-22, 2008

Larry Aszmann, CTO of Compellent Technologies
Lawrence E. Aszmann has served as CTO and Secretary since co-founding
Compellent in March 2002. From July 1995 to August 2001, Mr. Aszmann
served as CTO of Xiotech, which Mr. Aszmann co-founded in July 1995.

Expert Session Overview
Compellent Storage Center is one of the most powerful and easy-to-use SANs in the marketplace. Compellent offers technology independence that allows enterprise customers to mix and match iSCSI and Fibre Channel connectivity and manage multiple tiers of Fibre Channel and SATA disk technologies from one pool of virtual storage. The powerful GUI manages native thin provisioning, hardware snapshots, snapshot replication and automated tiered storage, all from a web browser with no server-side code or agents.

Scalable Storage Performance with VMware ESX Server 3.5 – VMware VROOM!

Link: Scalable Storage Performance with VMware ESX Server 3.5 – VMware VROOM!.

It is clear from Figure 1 that except for sequential read there is no drop in aggregate throughput as we scale the number of hosts. The reason sequential read drops is that the sequential streams coming in from different ESX Server hosts are no longer sequential when intermixed at the storage array, and thus become random. Writes generally do better than reads because they are absorbed by the write cache and flushed to disks in the background.
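The intermixing effect is easy to picture: interleave two streams that are each sequential on their own, and the merged order the array sees is no longer sequential. A toy illustration with hypothetical LBA numbers:

```shell
# Two hosts each issue sequential reads (LBAs 0-4 and 100-104).
# Interleaved at the array, the combined arrival order is not sequential.
seq 0 4     > /tmp/hostA.lbas
seq 100 104 > /tmp/hostB.lbas
paste -d '\n' /tmp/hostA.lbas /tmp/hostB.lbas
# prints 0, 100, 1, 101, 2, 102, ... -- effectively a random pattern
```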

New SAN cookbook hits shelves to good reviews

Alessandro Perilli of virtualization.info called it "remarkable" and said "It’s a worthwhile reading before your first project, the VCP certification exam, and even non-virtualized implementations." Vincent Vlieghe of Virtrix called it "a fine read." Joseph Foran of the new Server Virtualization Blog says "Overall, the paper gets 8 pokers." Magnus of the VMTN Forums says "It looks really good."

What are all these people raving about? It’s the new 219-page cookbook from VMware, the SAN System Design and Deployment Guide. It describes Storage Area Network (SAN) options supported with VMware Infrastructure 3 and also describes the benefits, implications, and disadvantages of various design choices.

Now Joseph does point out one reason why we published this guide:

Most of the reason that VMware published this document can be summed up by this quote from page 130:

“Many of the support requests that VMware receives concern performance optimization for specific applications. VMware has found that a majority of the performance problems are self-inflicted, with problems caused by misconfiguration or less-than-optimal configuration settings for the particular mix of virtual machines, host processors, and applications deployed in the environment.”

I have to admit, that had me laughing. It was the whole “blame the
user” mentality that I found funny – I’m glad VMware put the paper out
there, but really, they had to expect that the 80/20 rule of
troubleshooting would apply to them too – 80% of all problems are human
error. The guide does a good job of helping avoid those pitfalls, and
goes into detail on setting up your SAN to perform well.

Joseph seems to be laughing with us, not at us, but I do want to clarify that this is not "blame the user." Blaming the user would be telling them to go take a long walk off a short pier to the nearest bookstore and get educated on SANs before touching VMware Infrastructure. Blaming the user would be just finger-pointing at their hardware or storage vendor when they call support telling us their virtual infrastructure is slow. This is helping the user.

VMware Infrastructure is a powerful tool and a new architecture for the data center. It’s like any power tool — you can cut down a lot of trees with a chain saw, but you can also slice off your own limbs. Many companies are buying their first shared storage when they go virtual, and others have to rethink how that shared storage is used. That’s why we work with a channel of resellers and consultants to help you succeed. That’s why a VCP exam requires a hands-on class, to make sure we don’t have "paper VCPs" running around. That’s why we offer education and professional services. I was reading our business continuity jumpstart curriculum the other day, and it touches on every single layer of your data center — it’s practically a survey course on the entirety of modern IT. That’s why 9 times out of 10 on the VMTN Forums when somebody’s infrastructure isn’t performing correctly, the expert troubleshooters who hang out there help the poster find out it’s the application or the OS that is misconfigured, not the virtual machine. (The tenth time it’s a workload that should never have been virtualized.)

We want you to succeed and get big raises, all while VMotioning your virtual machines around the data center while you’re eating your lunch at your desk, not at midnight when your spouse is wondering when you’ll be home. And to do that, your SAN needs to be set up correctly, so go read up on it.

Climbing The Mountain: Storage challenges with virtual infrastructure

EMC VP Chuck Hollis details his thoughts on the challenges facing the enterprise with their storage infrastructure and virtualization using VMware. I’ve excerpted a bit, but read the whole thing.

Link: Chuck’s Blog: VMware Virtual Infrastructure 3 – Climbing The Mountain.

The Core Infrastructure Challenge

Simply put, the central infrastructure challenge is that server
virtualization adds another layer in the stack.  As an example, instead
of server / network / storage, it’s now virtual server / physical
server / network / storage.  …

Challenge #1 – Flat Name Space for VMotion

One of the most powerful and sexy features in VMware ESX 3.0 is the
advanced capabilities of VMotion, managed by DRS. … But this presents a new challenge to the storage infrastructure.
You’re going to want the ability for every virtual server image to be
able to see every storage object from every server. …

Challenge #2 – Storage Resource Management

The starting point for enterprise-class SRM is discovery and
visualization.  What do I have, how does it connect, and how is it all
related? … Now, insert server virtualization into this stack. … What happens?  It breaks the connection.  Maybe I can see the
virtual machines.  Or maybe I can see the VMware ESX servers.  But,
unless some heavy lifting is done, I won’t be able to see that
stem-to-stern view that makes enterprise SRM useful.  And you can’t
manage what you can’t see. …

Challenge #3 – Backup and Recovery

Backup and recovery – never a pleasant topic in the physical server
world – gets even more thorny and problematic in a virtual server world. …

Challenge #4 – Managing End-To-End Service Delivery

I’ve made the case before
that we don’t live in a world anymore where one user uses one
application.  What the user sees is a logical combination of
application services that run on an increasingly complex IT
infrastructure stack.  And IT finds it harder and harder to drive back
to a root cause when there’s a performance problem or outage that users are
noticing. …