
Monthly Archives: October 2009

The 3 Major Benefits of vStorage Thin Provisioning


Increase Storage Utilization

Eliminate the need to dedicate full capacity up front while still giving application owners the capacity they need for future growth. VMware vStorage Thin Provisioning lets you allocate more capacity to virtual machines than the underlying datastore physically holds, eliminating the waste caused by storage that is over-allocated but never used. For example, ten virtual machines with 100 GB thin disks can share a 500 GB datastore, a 2:1 over-commitment, so long as actual consumption is monitored.

Enhance Application Uptime for Improved Business Continuity

Eliminate application downtime by simplifying storage provisioning. Managing storage allocations to support dynamic environments can be a time-consuming process that requires extensive coordination between application owners, virtual machine owners and storage administrators, often resulting in downtime for critical applications.

Furthermore, a delay at any layer of the storage-allocation process, from the array up to the application, can prolong application downtime. By eliminating the need to periodically provision more capacity, VMware vStorage Thin Provisioning removes this source of downtime.


Simplify Storage Capacity Management

Let application users proactively manage storage capacity, transparently to storage administrators, and eliminate the manual processes that demand careful planning and coordination among IT management, storage administrators, system administrators, and application administrators. In addition, VMware vSphere provides a single management point for the alarms and alerts required to thin provision storage to virtual machines safely.

Get a single, unified tool for thin provisioning across multiple arrays, including non-intelligent storage, and eliminate the need to provision storage frequently. vStorage Thin Provisioning is a powerful enabling technology that streamlines capacity management for both the storage and server teams.

Using Storage VMotion to Leverage Thin Provisioning

One piece that may not be entirely clear to VMware users is that Storage VMotion (available in vSphere 4.0 as a feature within our GUI, not just from the command line) allows an easy transition from previously thick-provisioned virtual disks to new thin-provisioned ones. Any user who upgrades to vSphere can use this function to save up to 50% of the storage allocated to a virtual disk. A side benefit is that the move to a thin-provisioned virtual disk also defragments the disk.
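For readers who prefer to script the conversion, the same Storage VMotion transform is exposed through the vSphere API. Below is a minimal sketch using pyVmomi, VMware's later open-source Python SDK for that API; the vCenter address, credentials, and the VM and datastore names are placeholder assumptions, not values from this post.

    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim
    import ssl

    # Lab-only connection; validate certificates properly in production.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    def find_by_name(vimtype, name):
        # Walk the inventory for the first managed object with this name.
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vimtype], True)
        try:
            return next(obj for obj in view.view if obj.name == name)
        finally:
            view.DestroyView()

    vm = find_by_name(vim.VirtualMachine, "my-thick-vm")       # assumed name
    target_ds = find_by_name(vim.Datastore, "thin-datastore")  # assumed name

    # transform = "sparse" asks the host to lay the destination disks out
    # in the thin format while the virtual machine keeps running.
    spec = vim.vm.RelocateSpec()
    spec.datastore = target_ds
    spec.transform = vim.vm.RelocateSpec.Transformation.sparse

    WaitForTask(vm.RelocateVM_Task(spec=spec))
    Disconnect(si)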

SRM 4.0 and Windows 2008 Support

Hello Uptime Readers,

We have seen a lot of questions lately about SRM support for Windows 2008, and there seems to be a lot of confusion out there, so it seemed like a good time to write a short blog post to clear things up.

 

When you are working with or implementing SRM 4.0 and looking for information on operating system support, the first thing to understand, which should make things simpler, is the role in which the operating system will be used. There are really two choices:

  1. The operating system on which we install the SRM server (or the SRM client plug-in)

  2. The operating system running inside the virtual machines we want SRM to protect

Let’s take each in turn.

 

SRM Server

When deploying SRM you need two SRM servers, one at each site. The SRM server will in nearly all cases be deployed in a virtual machine itself, but this virtual machine is not classed as a protected virtual machine: its role is simply to run the SRM server at that site. It will not normally be placed on replicated storage, because there is no need to replicate an SRM server when the other site runs its own.

 

More and more customers now want to deploy SRM in Windows 2008 virtual machines. Before you do this, you should review the SRM Compatibility Matrix.

 

Specifically, review the section “SRM Server Operating System Compatibility.” Although both the x86 and x64 editions of Windows 2008 are listed there, read the table carefully and understand that, at the time of writing, the following statements are true:

  • If you want to use Windows 2008 to host your SRM server, note that it is currently supported ONLY on the x86 (32-bit) editions of Windows 2008 running SP1 (support for the R2 editions will be reviewed on an ongoing basis). UPDATE: SP2 x86 support is now available!

  • If you want to use Windows 2008 to run your vSphere Client, and therefore install the SRM vSphere Client plug-in, note that this is supported on both the x86 and x64 editions of Windows 2008 running SP1 (support for the SP2 and R2 editions will be reviewed on an ongoing basis).

[Table: SRM server operating system compatibility]

Protected Virtual Machines

Now that we have covered the SRM server, what about the virtual machines you actually want SRM to protect: the virtual machines running your production workloads and applications, sitting on your replicated storage?

 

As with the SRM server, if you review the SRM Compatibility Matrix you will find the following section:

 

[Image: SRM Compatibility Matrix, guest operating system support sections]
 

If you are looking for clarification of Windows 2008 support (though you could use this example for any guest OS), you need to understand what each of the paragraphs above is telling you. First, I think we could improve the clarity here; this is something we will review internally for the next documentation update. Starting with “Guest Operating System Support,” the statement is: “SRM 4.0 supports all guest operating systems supported by vCenter 4.0.”

 

What does this actually mean? From the SRM perspective, it is telling you that SRM can “protect” any guest operating system that is supported on the vSphere 4.0 platform. You can review the full list of supported guest operating systems for vSphere here by setting:

 

Product Name = ESX

Product Release version = ESX 4.0

OS Use = Guest OS

OS Family = Windows

OS Name = Windows Server 2008

 

All of the guest operating systems in the resulting list can be protected by SRM 4.0, with one additional consideration: do you want to customize the guest OS during recovery (for example, network changes using SRM's IP customizer tool)? If the answer is NO, then any of the Windows 2008 operating systems listed on the HCL page you have just generated can be protected by SRM 4.0.

 

If you DO wish to customize the protected guest operating system during recovery using SRM 4.0's built-in image customization capability (if you know how vCenter VM image customization works, you already understand this technique), then notice in the SRM 4.0 compatibility matrix picture above that there is a second paragraph referring to guest operating system customization support.

 

Although that section indicates that all of the same guest operating systems can be customized, some versions of Windows 2008 are not currently supported by SRM 4.0 for guest customization.

 

Currently, SRM customization support for Windows 2008 does NOT include ANY R2 versions. Windows 2008 R2 is a new release of Windows, considered by many to be the server release of Windows 7, and this is the real source of the supportability differences between the Windows 2008 SP1/SP2 editions and the R2 editions.

 

Customization support for the R2 releases of Windows 2008 will be reviewed as part of our ongoing SRM update program.

 

Hope this helps,

Lee Dilworth

Want to be a vSphere beta participant?

We are looking for beta participants to test out some new vSphere features currently in development. Interested? If so, contact your VMware account team for further details. 

Winning Post from Cycle 3 – Data Recovery

Below is the winning entry from Antone Heyward. His original post and accompanying graphics can be found at:
http://thehyperadvisor.com/?p=540


Test driving VMware Data Recovery (vDR)

Friday, October 16, 2009, by Antone Heyward

One of the most important things, next to virtualizing all those physical servers, is backing them up and restoring them. This is not a great feat with virtualization, since the servers are now basically a group of files. Full-featured backup/restore products have been on the market for some time, but VMware has now released its own virtual appliance, VMware Data Recovery, currently at version 1.0.2, included with vSphere 4 Advanced, Enterprise, and Enterprise Plus. I was genuinely impressed by how functional and easy to set up the product is at version 1.x.

  • Download the appliance and vdrfilerestore.exe from VMware.
  • Import the appliance into vCenter.
  • ***Note: Make sure the BIOS has the correct time. I ended up with backups dated in the future because the time was wrong.
  • ***Note: I also updated the virtual hardware to version 7.
  • ***Note: Since the appliance is CentOS 5, I changed the guest OS type to RHEL 5 and resized the appliance to my needs (CPU/memory).
  • Install the vDR plugin.

Once this is done you will see a new button added to the vCenter Home area under “Solutions and Applications.” Click there and you will have to add the vDR appliance using either its name or its DNS name. ***Note: DNS resolution and network connectivity are key. I had a few issues (such as error -3948) which prevented me from doing backups, all of which were resolved by fixing the network. Make sure the vDR appliance can communicate with your ESX hosts. The look and feel is no different, with your objects on the left and information on the right. At first I thought it was too plain, being used to CA ARCserve.
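Since DNS resolution and network reachability were behind most of the errors above, a quick sanity check helps before digging further. A minimal sketch in plain Python; the appliance name is a placeholder, and the port number is only an assumption for illustration, not a documented vDR port.

    import socket

    def check_appliance(name, port=443, timeout=5):
        # Step 1: does the name resolve at all?
        addr = socket.gethostbyname(name)
        print(f"{name} resolves to {addr}")
        # Step 2: can we open a TCP connection to it?
        with socket.create_connection((addr, port), timeout=timeout):
            print(f"TCP port {port} on {addr} is reachable")

    check_appliance("vdr-appliance.example.com")  # placeholder name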

So before setting up a backup job you'll need to set up a destination location. This location is where your backups are stored, in a deduplicated format that is not directly recognizable, so don't expect to see *.vmdk files in plain sight to do restores from. The destination can be either a local volume (vmdk) or an NFS location, and you can add them as needed. I tested both a local volume and a Windows share with successful results; the network share was slower than the local volume for backups and restores. When backing up to a local volume, unless you have a dedicated datastore, you have to take into account the added I/O, which could affect other VM guests while backups are running. ***Note: I haven't tested this yet, but there is no reason you could not replicate the dedup destination to a remote site for disaster recovery purposes, since a destination location used by vDR is recognized by any vDR appliance. You are then given the option to import the information into the new vDR appliance for restore purposes.

Setting up a backup job was simple. You can select the entire environment or just one VM guest to back up, then add the destination, backup window, and retention policy. Jobs can be edited later, and you can also right-click a VM guest to add it to a new or existing job, or to remove it from a job.

The simple breakdown of what happens when a backup is performed: a snapshot of the VM guest is created; the vDR appliance reconfigures itself and attaches the vmdk it is going to back up to its own virtual hardware; vDR dedups the data and saves it to the destination you chose; the vmdk is then detached from the vDR appliance; and lastly the snapshot is removed.
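vDR's internals are not scriptable, but the first and last steps of that flow, taking and then removing the snapshot, map directly onto standard vSphere API calls. A pyVmomi sketch, assuming vm has been looked up the same way as in the Storage VMotion example earlier on this page:

    from pyVim.task import WaitForTask

    # Step 1 of the flow: snapshot the guest so the base vmdk is frozen.
    WaitForTask(vm.CreateSnapshot_Task(
        name="backup-snap",
        description="snapshot taken before reading the base vmdk",
        memory=False,   # a disk-level backup needs no memory image
        quiesce=True))  # ask VMware Tools to quiesce the guest file system

    # ... the appliance would now hot-add the frozen vmdk, dedup the data,
    # write it to the destination, and detach the vmdk again ...

    # Last step of the flow: remove the snapshot.
    snap = vm.snapshot.currentSnapshot
    WaitForTask(snap.RemoveSnapshot_Task(removeChildren=False))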

Restores are pretty easy as well. You can either right-click the VM guest or use the Restore tab. A good feature to have is the “Restore Rehearsal” option. This lets you restore the entire VM guest, with the option to rename and reconfigure it in vCenter, without downtime for the original, just to make sure the restore process would really work. You can select restore points in the GUI from different points in time, all the way back to your retention limits.

There is also the ability to restore at the file level; at this time it is experimental. I have seen posts where others have had issues, but it works well for me. You have to run vdrfilerestore.exe at a command prompt with the correct switches (“vdrfilerestore.exe -a <vdr server>”) from the VM guest you want to restore files to. You are prompted to select the restore point, and the restore is then mounted to a drive letter in the OS of the VM guest. You then basically browse the drive and copy what you need; once you are done, the drive can be unmounted from the command prompt. See this in action here.

I’d like to see the ability to mount a restore point from vCenter to any VM guest I choose for a file-level restore; sometimes data does not need to be restored to its origin. I’d also like to see more reporting on how much deduplication is saving me, or the percentage of deduplication per job and historically. The addition of alerts, so that I could send an email or run a script when a job fails, would also be nice. Overall, though, I think VMware has done a good job with this virtual appliance at version 1.x.

Give me the skinny on Thin Provisioning with vSphere!

vStorage Thin Provisioning optimizes storage costs through the most efficient use of storage in virtual environments. Storage requests are more often than not overestimated by users, mostly to avoid having to go through the request/approval process again later. With vStorage Thin Provisioning, IT departments can assure business users of storage space availability while deferring the actual cost of purchasing storage until it is really needed. Full reporting and alerting on allocation and consumption ensure that virtual machines don’t actually run out of storage; Storage VMotion and Volume Grow ensure that virtual machines can be migrated to datastores with additional capacity, or that volumes can be grown, when consumption approaches allocation.
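The allocation-versus-consumption reporting mentioned above can also be pulled programmatically from the datastore summary. A small pyVmomi sketch, reusing the content connection from the Storage VMotion example earlier on this page; the over-commit figure is the standard capacity minus free space plus uncommitted (thin-provisioned but not yet consumed) calculation:

    from pyVmomi import vim

    GB = 1024 ** 3
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        s = ds.summary
        # "uncommitted" is space promised to thin disks but not yet used.
        provisioned = s.capacity - s.freeSpace + (s.uncommitted or 0)
        ratio = provisioned / s.capacity
        flag = "  <-- over-committed" if ratio > 1.0 else ""
        print(f"{s.name}: {provisioned / GB:.1f} GB provisioned of "
              f"{s.capacity / GB:.1f} GB ({ratio:.0%}){flag}")
    view.DestroyView()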


Cycle 4 for our vSphere Blog Contest – Thin Provisioning

This week our focus is on the new thin provisioning capabilities of vSphere 4.0. Please send in your blog entries for a chance to win $100.

Don't know the rules? No problem, have a look on our contest page.

Backing up your ESXi host configuration

One of the beauties of a thin hypervisor architecture like VMware ESXi is that the entire state of the system can be described much more compactly than is possible with a general-purpose operating system. This includes areas such as the virtual networking configuration, storage settings, and host infrastructure services such as NTP and logging. All changes to the system occur through well-defined APIs, so it's easy to know what can be modified.

This fact, among other things, makes it easy to back up the entire state of an ESXi host in case you later need to restore the system to the same state. The vCLI (vSphere Command Line Interface) has a command built specially for this purpose: vicfg-cfgbackup. A recent blog posting on vmwaretips.com goes over a real-life situation where this command proved invaluable:

A client had an ESXi host where the USB drive failed… We needed to get this failed ESX host back online and quick!

You can read the rest of the story here.
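If you would rather run the backup on a schedule than from memory, the command wraps easily. A sketch in Python; it assumes the vSphere CLI is installed on the machine running it, and the host name, credentials, and file path are placeholders. The -s flag saves the host configuration to a file, and -l loads it back during a restore.

    import subprocess

    def backup_esxi_config(host, user, password, outfile):
        # vicfg-cfgbackup -s writes the host's configuration to a file;
        # restoring later is the same call with -l instead of -s.
        subprocess.run(
            ["vicfg-cfgbackup", "--server", host,
             "--username", user, "--password", password,
             "-s", outfile],
            check=True)

    backup_esxi_config("esxi01.example.com", "root", "secret",
                       "esxi01-config.bak")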

Winning Post from Cycle 2 – Distributed Switch

Below is the winning entry from Barry Combs. His original post can be found at:
http://virtualisedreality.wordpress.com/2009/10/03/vnetwork-distributed-switches-vds-an-overview/

 

Posted by: Barry | October 3, 2009 

vNetwork Distributed Switches (vDS) an overview

We are now on to the second stage of the VMware vSphere blogging contest. The winner of week one’s FT subject was Hany Michael from http://www.hypervizor.com/; you can read his post here >> http://www.hypervizor.com/2009/09/vsphere-40-fault-tolerance-architecture-diagram-video-and-use-cases/ and you can also get the full rundown from VMware on the vSphere blog here >> http://blogs.vmware.com/vsphere/. Congratulations to Hany; the win was well deserved.

Moving on to the next subject, vNetwork Distributed Switches: there is already a lot of information about them, including a very good white paper by VMware, which I have linked to at the end, and numerous videos and how-to blogs. I have decided to make my blog post more of a guide for potential new users and customers, and to pass on my thoughts on using vNetwork Distributed Switches.

The vNetwork Distributed Switch (vDS for short) allows you to configure a single virtual switch that spans multiple hosts, so you would be correct in thinking that you no longer need to create your virtual machine port groups on every host, saving you time and removing the risk of accidentally misspelling one and causing issues for VMotion/HA. The vNetwork Distributed Switch is a feature available only with vSphere Enterprise Plus licensing.

vDS also introduces a number of other features:

• Private VLANs
• Network VMotion: tracking of VM networking state as a machine moves between hosts, improving monitoring and troubleshooting
• Third-party virtual switch support, with the Cisco Nexus 1000V Series virtual switch
• Bi-directional traffic shaping

You did read that third item right: you can now have a Cisco switch as your virtual switch. The Cisco Nexus 1000V is an optional extra that you can purchase, giving you a Cisco switch inside your virtual infrastructure, a must-have for any large company with Cisco networking throughout. The VMware administrator can now hand the networking back to the networking team, who can manage the virtual networking in exactly the same way as they do the physical, much to the relief of networking teams who were probably always a bit concerned about the virtual aspect of the network. This may also open the world of virtualisation to some customers who have not been able to proceed for this very reason.

Network VMotion allows the counters and statistics for a virtual machine to move with it when it is VMotioned, which makes monitoring and troubleshooting much easier for machines that are moved around by VMotion.

There are two main concepts to understand about vNetwork Distributed Switches:

Distributed Virtual Port Groups (left side of the image below): much like the port groups on your standard virtual switches, these are port groups on the vDS that specify port configuration.

Distributed Virtual Uplinks (right side of the image below): this is a new concept. Distributed Virtual Uplinks contain the physical NICs that act as uplinks on your hosts; from here you can configure NIC teaming, load balancing, and failover policies.

[Image: distributed virtual port groups (left) and distributed virtual uplinks (right)]
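That "configure once, every host sees it" behaviour comes down to a single API call against the switch. A pyVmomi sketch, assuming dvs has already been looked up as a vim.DistributedVirtualSwitch object; the port group name and port count are placeholders:

    from pyVmomi import vim
    from pyVim.task import WaitForTask

    pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
    pg_spec.name = "vm-network-100"   # placeholder port group name
    pg_spec.type = "earlyBinding"     # static binding, the common default
    pg_spec.numPorts = 128

    # One task on the vDS; every member host picks up the new port group,
    # with no per-host configuration and no chance of a typo on host 14.
    WaitForTask(dvs.AddDVPortgroup_Task(spec=[pg_spec]))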

If the configuration differs on one of your hosts for some reason, perhaps due to downtime caused by a fault or some other host issue, you will receive a warning making you aware of it.

[Screenshot: host configuration out-of-sync warning]

When the host becomes available again, the settings will be automatically updated on that host.

When deploying a vDS you are able to automate the configuration of your hosts by using host profiles. This will also allow you to check compliance of your hosts at any time and quickly add new hosts in the future. 

My Preference…

My preference when using a vDS is to run in a hybrid mode, keeping the service console and VMkernel on a standard switch and moving all the virtual machine port groups to a vDS. This means I handle the service console and VMkernel at installation time the same as usual, then add the host to the vDS; when I later need to add a new port group to my hosts, I only have to configure it in one place. In large environments this saves a considerable amount of time and reduces the potential for error.

That said, running with the service console and VMkernel as part of the vDS is a fully supported configuration, and one a number of people would choose; indeed, in an environment with fewer physical NICs it is what I would do.

I have updated my VCP in vSphere Cue Cards with some key information on vSphere networking to assist you with studying towards the new VCP. These can be found here >> http://virtualisedreality.wordpress.com/vcp-in-vsphere-4-0-study-notes/ 

If you are considering using vDSs or would like more information, the following white paper from VMware is a must-read! >> VMware vNetwork Distributed Switch: Migration and Configuration

Meet the SRM 4.0 Engineers!

SRM 4.0 was released on 10/5/09, and we hope you have had a chance to evaluate the new features of this release. The release is the result of the hard work of a group of dedicated VMware software engineers, and we would like them to share their perspectives on the SRM features. Maria and Glenn, both SRM software engineers, have shared their insights on video:

 

Maria provided her insights on the new features of SRM 4.0. In her video, she discussed the following topics:

• vSphere support
  o Fault Tolerance
  o vDS
  o DPM
  o Linked Mode
• NFS support
• Shared Recovery Site
• Enhancements in reliability, robustness, and scalability

 

Glenn, on the other hand, focused on the SRM core features.  In his video, he discussed the following topics:

• Virtual disaster recovery powered by SRM
• Automated recovery workflow
• Testing of recovery plans
• SRM architecture and components
• Test networks
• Audit trail

 

The two videos together give you an overview of the SRM 4.0 features, and we highly recommend watching them if you are interested in learning more about SRM. The links are listed below:

               

Maria: SRM 4.0 Features

Glenn: SRM Core Features

 

Thank you,

Desmond