
Monthly Archives: July 2009

Case Study: Kroll Ontrack

As more of our customers start migrating from VMware ESX to ESXi, we would like to start showcasing some of their deployment experiences on this blog. We recently published a case study on the ESXi deployment of Kroll Ontrack, the world leader in providing legal technologies and data recovery products and services.

Kroll has been a long-time user of VMware ESX, but for their latest virtualization project they decided to deploy the embedded version of VMware ESXi because of its more reliable architecture and simpler management requirements, such as less patching and faster provisioning.

“We chose ESXi to virtualize our file servers based on lessened patching requirements and the ability to get away from using hard drives in the servers,” says Joel Fuller, a technical architect at Kroll. “ESXi is very easy to configure and deploy. You basically take a USB key and push out a scripted configuration to it, and it’s done.”

With a goal of migrating 500 existing physical file servers, Kroll adhered to a schedule of converting approximately 15 file servers per week, with an additional 25 servers converted during monthly maintenance windows. Each file server is built from a template, making it quick and easy to provision new virtual machines. Kroll prepared PowerShell scripts to automate the rollout.
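The case study does not include Kroll’s scripts, but the general shape of template-based provisioning through the vSphere API looks roughly like the sketch below. It uses the open-source pyVmomi Python bindings purely for illustration (Kroll’s own automation was in PowerShell), and the vCenter host, credentials, template, cluster and VM names are all placeholders.

```python
from pyVim.connect import SmartConnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Placeholder connection details.
si = SmartConnect(host='vcenter.example.com', user='administrator', pwd='secret')
content = si.RetrieveContent()

# Find the template and a destination cluster by name (names are hypothetical).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine, vim.ClusterComputeResource], True)
objs = {obj.name: obj for obj in view.view}
template = objs['fileserver-template']
cluster = objs['FileServerCluster']

# Clone the template into a new file server VM and power it on.
spec = vim.vm.CloneSpec(
    location=vim.vm.RelocateSpec(pool=cluster.resourcePool),
    powerOn=True)
WaitForTask(template.CloneVM_Task(folder=template.parent, name='fs-001', spec=spec))
```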

Kroll currently has 145 ESXi hosts, allowing them to consolidate 20 racks down to only 3. They are very satisfied with their ESXi rollout: “VMware ESXi lets us get more life out of our infrastructure while giving us a simpler and more secure operational model,” says Fuller. “That makes us a much more scalable organization.”

Read the full case study. If you have deployed ESXi, please let us know. We would love to feature you on our blog.

The ESX team

vNetwork Distributed Switch—Migration and Configuration

For those who have not yet kicked the tires of the vNetwork Distributed Switch (vDS), or were a little confused when they did, help is at hand. I have written a guide titled, “VMware vNetwork Distributed Switch: Migration and Configuration.”

It’s a lengthy read at 37 pages, but I’ve tried to keep it as complete as possible with diagrams, screen shots and other such things to guide you on your way.

As with all our virtual networking papers for vSphere, it’s located on the Resources page behind the easy-to-remember URL vmware.com/go/networking.

VMware Data Recovery Taking Advantage of vSphere 4

I wanted to explain in more detail why we chose the type of dedupe that we did. As I mentioned in my previous post, we chose to implement block-based, in-line destination deduplication for VMware Data Recovery (VDR). There are a few reasons for this, two of which are due to enhancements in the VMware vSphere 4 platform itself.
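As a mental model (not VDR’s actual on-disk format), block-based destination dedupe amounts to hashing fixed-size blocks at the backup destination and storing each unique block only once. The sketch below, with an arbitrary 4 KB block size, illustrates the idea: identical blocks across backups collapse into a single stored copy.

```python
import hashlib

BLOCK_SIZE = 4096   # illustrative block size, not VDR's actual value


def backup(data, store):
    """Split a byte stream into fixed-size blocks, keep only the unique blocks
    (in-line, i.e. before anything is written out), and return a recipe of
    digests that can rebuild the stream later."""
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha1(block).hexdigest()
        store.setdefault(digest, block)   # store the block only if it is new
        recipe.append(digest)
    return recipe


def restore(recipe, store):
    """Reassemble the original stream from its recipe of block digests."""
    return b''.join(store[d] for d in recipe)


# Two "backups" that share most of their data end up sharing stored blocks.
dedupe_store = {}
first = backup(b'A' * 8192 + b'B' * 4096, dedupe_store)
second = backup(b'A' * 8192 + b'C' * 4096, dedupe_store)
assert restore(first, dedupe_store) == b'A' * 8192 + b'B' * 4096
print(len(dedupe_store), 'unique blocks stored')   # 3, not 6
```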

1) Change block tracking: Any new VM provisioned on vSphere will use virtual hardware version 7 (you can also upgrade your existing version 4 VMs to version 7). With VM version 7, the vmkernel tracks the changed blocks of the VM’s virtual disks. (By the way, this is the same change block tracking functionality that enhances Storage VMotion in vSphere 4.) So, instead of having to scan the VM’s virtual disks to determine which blocks have changed every time a backup occurs, VDR just makes an API call to the vmkernel and gets this information “for free”.

Thus, VDR is able to dramatically cut down the time and CPU cycles needed to calculate the changed blocks on a virtual disk. In addition, change block tracking also helps on the restore side of the equation. For example, if you want to restore yesterday’s VM image, VDR makes the reverse change block API call and transfers just the changed blocks from yesterday’s backup to revert the VM to its previous state. So, given that there is a lot of intelligence in the platform about virtual disk blocks, block-based dedupe seemed like a natural direction for VDR to take.
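For the curious, here is roughly what that “API call to the vmkernel” looks like from a script’s point of view. This is a minimal sketch against the public vSphere API using the open-source pyVmomi bindings, not VDR’s internal code; the vCenter host, credentials and VM name are placeholders, and a real incremental backup would pass the changeId saved from the previous run instead of '*'.

```python
from pyVim.connect import SmartConnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator', pwd='secret')
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == 'fileserver-01')

# Turn on changed block tracking (requires virtual hardware version 7).
WaitForTask(vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(changeTrackingEnabled=True)))

# Snapshot the VM so there is a stable point in time to read from.
WaitForTask(vm.CreateSnapshot_Task(name='backup', description='',
                                   memory=False, quiesce=True))
snapshot = vm.snapshot.currentSnapshot
disk = next(d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualDisk))

# Ask the vmkernel which areas of the disk to read. With changeId='*' this
# returns all allocated areas; with a changeId saved from the previous backup
# it returns only the blocks changed since then.
info = vm.QueryChangedDiskAreas(snapshot=snapshot, deviceKey=disk.key,
                                startOffset=0, changeId='*')
for extent in info.changedArea:
    print('read %d bytes at offset %d' % (extent.length, extent.start))
```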

2) Hot add disk: VDR can “hot add” virtual disk snapshots directly to the VDR virtual appliance. This is accomplished by leveraging capabilities of the vSphere storage stack. This means that VDR can bypass the LAN and stream the data from the snapshots directly to the dedupe destination disk. In addition to reducing load on the LAN and effectively eliminating the need to block out other LAN traffic during the backup window, the streaming of data to the destination dedupe disk on the Data Recovery appliance will be considerably faster.
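Mechanically, “hot adding” a snapshot disk is just a reconfigure operation that attaches an existing VMDK to the running appliance. The sketch below shows the general idea through the public vSphere API (again via pyVmomi, and not VDR’s actual implementation); the appliance object, datastore path and free SCSI slot are all hypothetical.

```python
from pyVim.task import WaitForTask
from pyVmomi import vim


def hot_add_disk(appliance_vm, vmdk_path, unit_number):
    """Attach an existing (snapshot) VMDK to a running backup appliance VM."""
    controller = next(d for d in appliance_vm.config.hardware.device
                      if isinstance(d, vim.vm.device.VirtualSCSIController))
    disk = vim.vm.device.VirtualDisk(
        controllerKey=controller.key,
        unitNumber=unit_number,                     # assumes this slot is free
        backing=vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
            fileName=vmdk_path,
            diskMode='independent_nonpersistent'))  # never write to the source
    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        device=disk)
    WaitForTask(appliance_vm.ReconfigVM_Task(
        spec=vim.vm.ConfigSpec(deviceChange=[change])))


# Hypothetical usage: attach the snapshot delta disk of the VM being backed up.
# hot_add_disk(vdr_appliance,
#              '[shared-ds] fileserver-01/fileserver-01-000001.vmdk', 1)
```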

Note that there are three caveats to enabling hot add disk with VDR:

a. The source virtual disks need to be on shared storage
b. The ESX host where the VDR appliance is running needs to have visibility to this shared storage
c. You will need a vSphere edition that includes Hot Add as a feature

The knock against destination-based (or target-based) dedupe is that it consumes precious network bandwidth with the unnecessary transfer of data that will be discarded as part of the dedupe process. However, given that VDR only transfers changed blocks and can transfer these blocks off-LAN, this concern does not apply, and thus we felt comfortable with a destination-based dedupe architecture.

So does this mean that unless you have both change block tracking and hot add disk enabled in vSphere 4, VDR and its dedupe capability are useless to you? Absolutely not! All data that is protected by VDR will be deduped, so you will enjoy the storage savings regardless of which VM version is being backed up or which vSphere edition you have installed. What change block tracking and hot add disk add are additional efficiency and performance gains that allow even more data to be protected in an ever-shrinking backup window.

Systems Manageability of VMware ESXi on Dell PowerEdge Servers

After people have learned about ESXi and understand all the benefits (less patching, easy deployment and manageability, etc.), one of the first concerns they raise is around hardware management. Many IT shops use management tools from OEMs such as Dell OpenManage Server Administrator (OMSA) to do things like hardware health monitoring, asset inventory, and viewing alert and command logs. Traditionally, this functionality has been provided for ESX by an agent running in the Service Console. Without the Service Console, they ask, how could this be done for ESXi?

Ever since ESXi was released almost one and a half years ago (as ESXi 3.5), VMware and Dell have been working closely together to provide hardware management capabilities via an agentless model, using industry-standard management interfaces such as WS-MAN. With the release of ESXi 4, the management capabilities of Dell servers running ESXi are almost at parity with ESX 4. In particular, the following features are available to OMSA from an ESXi host:

  • View server and storage asset data  
  • View server and storage health information
  • View alert and command logs 
  • Configure hardware (storage, BIOS, etc.)

All this is available via the familiar web-based interface used for servers running ESX. Here is a screenshot of the Power Tracking Statistics page:

[Screenshot: Power Tracking Statistics page]
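The same health and asset data behind those OMSA views is exposed through the standard CIM classes that ESXi’s management broker publishes (CIM-XML over HTTPS on port 5989). As a rough illustration only (the host name and credentials below are placeholders, and OMSA itself does considerably more), a script can read the hardware sensors with the open-source pywbem library:

```python
import pywbem

# Placeholder host and credentials; ESXi exposes CIM-XML over HTTPS on port 5989.
conn = pywbem.WBEMConnection('https://esxi01.example.com:5989',
                             ('root', 'password'),
                             default_namespace='root/cimv2')

# Enumerate the numeric hardware sensors (temperatures, voltages, fan speeds)
# and print each sensor's name and current reading.
for sensor in conn.EnumerateInstances('CIM_NumericSensor'):
    print(sensor['ElementName'], sensor['CurrentReading'])
```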

In addition, we have enhanced VMware vCenter (formerly VirtualCenter) Server to provide fairly extensive hardware-level monitoring as well. With vSphere 4, this capability is fully integrated with the rest of vCenter; for example, you can set alarms on hardware faults. (Note that monitoring functionality is available even for the stand-alone, free version of ESXi 4; simply look in the vSphere Client.) Here is a screenshot of a Dell system being monitored in vCenter:

[Screenshot: hardware health monitoring of a Dell system in vCenter]

To learn more about the management capabilities of ESXi 4 running on Dell PowerEdge servers, see this new joint white paper from VMware and Dell. There is also an online article in the June 2009 edition of Dell Power Solutions that talks about this.

Designing a DMZ on vSphere 4 using the Cisco Nexus 1000V Virtual Switch

DMZs with virtualization seem to elicit opinions from all quarters: security folks, network folks and server folks. A couple of months ago, we updated the DMZ with ESX paper to the vSphere 4 level and reposted it as Network Segmentation in Virtualized Environments. This paper discusses and describes three different trust zone configurations with associated best-practice approaches for secure deployment. (I blogged on this topic last month.)

Last week we added a new paper on this topic, in this case using the Cisco Nexus 1000V virtual switch. This is a co-branded Cisco/VMware paper titled, DMZ Virtualization Using VMware vSphere 4 and the Cisco Nexus 1000V Virtual Switch. You will also find this posted on the Cisco website.

This 19-page paper is a great tutorial on the considerations around DMZs and how the Nexus 1000V can be used for DMZ virtualization.