
Monthly Archives: November 2011

SQL Server Rolling Patch Upgrade using Standby VM

SQL Server patching is a common use case for high availability deployments. When it comes to minimizing downtime during patch upgrades, most people think of SQL Server failover clustering or SQL Server database mirroring, the two SQL Server native availability features that support rolling patch upgrades. Did you know that you can also perform rolling patch upgrades with just a standby virtual machine (VM) if you are running SQL Server on VMware?

With VMware vSphere, a virtual disk can be hot removed from or hot added to a VM without impacting services running in the VM. Given that, you can put together a rolling patch upgrade solution similar to the SQL Server failover cluster shared-everything architecture by using a standby VM. The SQL Server data and log disks are shared between the primary and standby VMs, although they are assigned to only one VM at a time. The primary and standby VMs each run an identical copy of the SQL Server binaries. When you need to apply SQL Server patches to the primary VM, you switch ownership of the SQL Server data and log disks from the primary VM to the standby VM, and the standby SQL Server VM continues servicing application requests. The following steps describe the process flow for the solution.

Step 1: Configure standby VM

  • Create a standby SQL Server VM, if one does not yet exist, using VMware templates or cloning technologies.
  • Confirm that SQL Server logins, jobs, and other instance-level configurations are identical between the standby and primary VMs (a quick comparison sketch follows this list).
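
If you want to sanity-check that the server-level logins line up, a few lines of PowerShell can do a rough comparison. This is a minimal sketch, not part of the original process: it assumes the SQL Server PowerShell tools (Invoke-Sqlcmd) are installed, and the instance names are hypothetical placeholders.

    # Minimal sketch: compare server-level logins between the primary and standby
    # instances. Assumes Invoke-Sqlcmd (SQL Server PowerShell tools) is available.
    # Instance names are hypothetical placeholders.
    $primary = "SQLPRIMARY"
    $standby = "SQLSTANDBY"

    $query = "SELECT name FROM sys.server_principals WHERE type IN ('S','U','G') AND name NOT LIKE '##%'"

    $primaryLogins = Invoke-Sqlcmd -ServerInstance $primary -Query $query | Select-Object -ExpandProperty name
    $standbyLogins = Invoke-Sqlcmd -ServerInstance $standby -Query $query | Select-Object -ExpandProperty name

    # Any output here indicates a login that exists on one instance but not the other
    Compare-Object -ReferenceObject $primaryLogins -DifferenceObject $standbyLogins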

 

Step 2: Patch standby VM

  • Apply service patches to the standby SQL Server VM.

 

Step 3: Hot remove SQL Server resources from the primary VM

  • On the primary VM, stop client connections to the database(s). One way to accomplish this is to disable the virtual machine's network interface; you can still reach the VM through a management interface for Remote Desktop connections or through the vSphere Client console.
  • Detach the database(s) from the primary SQL Server instance by issuing the sp_detach_db T-SQL command.
  • From Windows Disk Management, right-click the data and log volumes and select Offline to prepare them for hot remove.
  • From the vCenter client, remove the SQL Server data and log virtual disk(s) from the running primary SQL Server VM. (A scripted sketch of this step follows below.)
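
For those who want to script this step, here is a rough sketch of what the detach and hot remove could look like using PowerCLI and the SQL Server PowerShell tools. The vCenter, VM, instance, database, and disk names are hypothetical placeholders, and the in-guest volume offline step from Disk Management is assumed to have been done already.

    # Rough sketch of Step 3 using PowerCLI and Invoke-Sqlcmd.
    # Server, VM, database, and disk names are hypothetical placeholders.
    Connect-VIServer -Server "vcenter.example.local"

    $primaryVM = Get-VM -Name "SQLPRIMARY"

    # Cut off application traffic by disconnecting the VM's network adapter(s)
    Get-NetworkAdapter -VM $primaryVM | Set-NetworkAdapter -Connected:$false -Confirm:$false

    # Detach the user database from the primary instance
    Invoke-Sqlcmd -ServerInstance "SQLPRIMARY" -Query "EXEC sp_detach_db @dbname = N'AppDB';"

    # Hot remove the data and log virtual disks from the running VM.
    # "Hard disk 2" and "Hard disk 3" stand in for the data/log disks.
    # Do NOT use -DeletePermanently; the VMDK files must remain on the datastore.
    Get-HardDisk -VM $primaryVM |
        Where-Object { @("Hard disk 2", "Hard disk 3") -contains $_.Name } |
        Remove-HardDisk -Confirm:$false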

 

Step 4: Hot add resources to the SQL Server standby VM

  • From the vCenter client, add the virtual disk(s) containing the SQL Server data and log files to the standby VM.
  • From Windows Disk Management, bring the disks online if needed and confirm that the disk(s) are mounted with the correct drive letter(s) assigned.
  • Attach the SQL Server database(s) by issuing the sp_attach_db T-SQL command(s) (see the sketch following this list).
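
Continuing the same hypothetical sketch, the hot add and attach on the standby side might look like the following. The datastore paths, drive letters, and file names are placeholders you would adapt to your own layout.

    # Rough sketch of Step 4: attach the existing VMDKs to the standby VM
    # and re-attach the database. All paths and names are hypothetical.
    $standbyVM = Get-VM -Name "SQLSTANDBY"

    # Hot add the existing data and log VMDKs (-DiskPath attaches an existing disk)
    New-HardDisk -VM $standbyVM -DiskPath "[Datastore1] SQLPRIMARY/SQLPRIMARY_data.vmdk"
    New-HardDisk -VM $standbyVM -DiskPath "[Datastore1] SQLPRIMARY/SQLPRIMARY_log.vmdk"

    # After bringing the disks online in Windows with the expected drive letters,
    # attach the database on the standby instance
    $attach = "EXEC sp_attach_db @dbname = N'AppDB', " +
              "@filename1 = N'D:\SQLData\AppDB.mdf', " +
              "@filename2 = N'L:\SQLLogs\AppDB_log.ldf';"
    Invoke-Sqlcmd -ServerInstance "SQLSTANDBY" -Query $attach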


Step 5: Switch role

  • On the standby VM, enable application network traffic to the VM.
  • The old standby VM is now the new primary, and SQL Server service resumes for the application(s).
  • The old primary VM is ready for service patching and may be left in the standby role if desired until the next patching cycle.

 

During steps 3 through 5, the application(s) would experience temporary connection issues to SQL Server. As with the failover clustering or database mirroring requirements, reconnection is expected to be handled by the application layer, there is zero data loss, and any in-flight transactions would need to be resubmitted. All operations in steps 3 through 5 are metadata-only operations and are expected to complete almost instantaneously.

If you would like the ability to do rolling patch upgrades but don't want the cost and complexity of maintaining a failover cluster or a mirrored database, this solution provides a viable alternative. For those of you who are into scripting, the process flow can be automated using PowerShell and PowerCLI (the sketches above are a possible starting point).

-Wanda

Wanda He, Technical Solutions Architect

Virtualized Exchange Storage: VMDK or RDM or…?

One of the hottest topics I get into when talking to customers about virtualizing Exchange is storage. That's not surprising, considering the number of options available when we virtualize Exchange on vSphere. If you are not familiar with the common methods for provisioning storage in vSphere, a brief description of each follows:

  • VMFS-based virtual disk (VMDK) – VMFS is a high-performance, clustered file system that allows concurrent access by multiple hosts to files on a shared volume. VMFS offers high I/O capabilities for virtual machines and is optimized for large VMDK files. VMFS volumes can be Fibre Channel or iSCSI attached.
  • Raw-device mapping (RDM) – An RDM is a mapping file in a VMFS volume that acts as a proxy for a raw physical device, sometimes called a pass-thru disk. The RDM file contains metadata used to manage and redirect disk access to the physical device. RDMs can be Fibre Channel or iSCSI attached.

In early versions of ESX, the virtualization overhead associated with deploying virtual disks (VMDK files) was much higher than it is today, which is why it was considered a best practice to place Exchange data files on physical mode raw-device mappings (RDMs). As ESX and vSphere have evolved, the performance difference between RDMs and virtual disks has become almost nonexistent. This leaves some questioning why we might still choose RDMs for Exchange storage.

Some reasons for deploying RDMs today might include:

  • Backups are being performed using a hardware-based VSS solution with array-based clones or snapshots – When talking to customers, I typically see backups as the number one reason for deploying RDMs. The ability to take array-based backups quickly using hardware VSS makes RDMs very attractive for large organizations with massive amounts of email data. So, if we want to take advantage of array-based backups, are we limited to using only RDMs? Not quite, but more on that in a minute.
  • Volumes larger than 2TB are required – With Microsoft supporting mailbox databases up to 2TB (when database resiliency is in use), volumes may need to be larger than 2TB. In vSphere 5, only physical mode RDMs support volume sizes up to 64TB; VMDK files are limited to 2TB.
  • Require the ability to swing a LUN between a native Windows host and a virtual machine – Some deployments may start on physical mailbox servers and later migrate to virtual machines. This migration can be expedited by swinging the LUNs from the physical mailbox server and attaching them to the Exchange mailbox VM as RDMs. With database portability, only the user objects would need to be updated, avoiding the time required to move mailbox data over the network.
  • Management purposes – Some environments may require greater control over the relationship between LUNs and virtual machines. An RDM is assigned to a single VM (unless using a shared-disk cluster), guaranteeing that the I/O capabilities of the LUN are dedicated to that VM.

The good news is that if you're not limited by any of the reasons above, you can deploy on VMDKs with confidence. I tend to prefer VMDKs for their portability, manageability, and scalability. By portability I mean the ability to use features like Storage vMotion, Storage DRS, and vSphere Replication to provide storage load balancing and disaster recovery. Improved management comes with the native tools available in the vSphere client for working with VMDKs. Some storage vendors have very slick plug-ins for the vCenter client if you must use RDMs, but it's always nice using the native tools. From a scaling point of view, larger VMFS volumes can be used to consolidate VMDKs if dedicated RDMs are pushing the 256 LUN limit in ESXi. vSphere 5 supports VMFS volumes of up to 64TB; VMDK files are limited to 2TB.
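
If you want a quick inventory of which disks in your environment are already RDMs versus VMDKs before deciding, a few lines of PowerCLI can report it. This is just a hedged sketch using hypothetical vCenter and VM names; a DiskType of RawPhysical or RawVirtual indicates an RDM, while Flat indicates a regular VMDK.

    # Sketch: report disk type and size for the disks attached to the mailbox VMs.
    # The VM name filter and vCenter name are hypothetical placeholders.
    Connect-VIServer -Server "vcenter.example.local"

    Get-VM -Name "EXCH-MBX*" | Get-HardDisk |
        Select-Object Parent, Name, DiskType, Filename,
            @{N="CapacityGB"; E={[math]::Round($_.CapacityKB / 1MB, 1)}} |
        Format-Table -AutoSize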

Now that we can make some better-informed choices for our storage format, let's get back to the backups. If you are looking to deploy a hardware-based VSS backup solution, it used to be that the only option was physical mode RDMs. Today some storage vendors have made progress in giving customers the ability to deploy on storage other than physical mode RDMs. This comes in the following forms:

  • In-guest iSCSI – Using iSCSI initiators from within the guest operating system, an administrator can mount storage LUNs directly to the virtual machine. Connecting storage in this manner can still provide the ability to back up using array-based snapshots and clones. This does put additional load on the virtual machine, as it is now doing the storage processing, but it allows you to avoid using RDMs and can mitigate the 256 LUN limit of ESXi. At VMworld this year (both in the US and Europe) many customers shared their success stories of using in-guest iSCSI with Exchange.
  • NFS-based VMDKs – Some storage vendors have added the ability to perform hardware-based VSS backups of VMDKs housed on NFS-based network-attached storage. I've also had many customers tell me of their success using this solution. My only comment here is that Microsoft has been pretty clear on their lack of support for housing Exchange data files (mailbox and queue databases and transaction logs) on network-attached storage (Premier customers, check with your Microsoft rep). That said, I'm a huge fan of NFS-based storage.

The choice between VMDK and RDM for your Exchange storage should be based on technical and business requirements, not on any preconceived notions of performance or supportability. All storage protocols discussed here have proven to perform well within the requirements of Exchange, and support for each is well documented on Microsoft's TechNet site. I've included some helpful links below for your reading enjoyment. With that I'll wrap up this post, which hopefully has given you a bit to think about and maybe presented some new options for your deployment.

As always, we look forward to hearing from you so please join the discussion!

-alex

Alex Fontana, Sr. Solutions Architect

Performance Best Practices for VMware vSphere 5: http://www.vmware.com/pdf/Perf_Best_Practices_vSphere5.0.pdf

Virtualized Exchange Server on NFS, iSCSI, and Fibre Channel: http://www.vmware.com/files/pdf/vsphere_perf_exchange-storage-protocols.pdf

Performance Characterization of VMFS and RDM: http://www.vmware.com/files/pdf/performance_char_vmfs_rdm.pdf

Exchange 2010 System Requirements: http://technet.microsoft.com/en-us/library/aa996719.aspx

Using Virtual Disks for Business Critical Apps Storage

Hello all!

Welcome to the Business Critical Apps Blog. This week we will be publishing a few posts that focus on virtualizing Microsoft Tier-1 applications. This may include discussion of a specific application like MS SQL or Exchange, or more general discussion of the topics we get the most questions about when talking to customers. If you're responsible for virtualizing Microsoft Tier-1 apps, check in throughout the week and take a look at what we've got going on. If this is your first visit to this blog, check out our archives for tips on virtualizing MS Exchange, SQL, and even some Oracle, SAP, and Java discussions. Join in on the conversation by asking a question or making a comment. On to the first topic for the week: storage!

Virtualization of I/O-intensive applications is nothing new. Traditionally, virtualizing these applications involved provisioning raw-device mappings instead of virtual disk files, whether warranted or not. VMware has proven the performance of VMFS to be on par with that of raw-device mappings as far back as ESX 3.0.1 (Performance Characterization of VMFS and RDM Using a SAN). While raw-device mappings are still technically required for some configurations (MSCS clustering, hardware-based VSS, etc.), they are no longer the de facto standard for virtualized I/O-intensive applications.

When creating a new virtual disk (VMDK), there are a few options for how the virtual disk is created and when the space is allocated. Understanding the types of disk provisioning available and when to use them can help you provide the best level of performance for your business critical apps. The three types of disk provisioning are described below:

  • Thick provisioned lazy zeroed – The virtual disk is allocated all of its provisioned space and immediately made accessible to the virtual machine. A lazy zeroed disk is not zeroed up front, which makes provisioning very fast. However, because each block must be zeroed out before it is written to for the first time, there is added latency on first write.
  • Thick provisioned eager zeroed (Recommended for I/O-intensive workloads) – The virtual disk is allocated all of its provisioned space, and the entire VMDK file is zeroed out before the virtual machine is allowed access. This means the VMDK file takes longer to become accessible to the virtual machine, but it does not incur the additional latency of zeroing on first write. For this reason, the recommendation when deploying an I/O-intensive application on VMFS is to use this provisioning method.
  • Thin provision – This method provides quick access to the virtual disk and increases storage utilization by allocating disk space on demand.

Now that we've established the differences between the provisioning types, let's discuss the various ways we can create an eager zeroed thick disk, how we can check if a virtual disk is eager zeroed thick, and how we can eager zero a disk after the fact.

How can I create a virtual disk as eagerzeroedthick?

  • If using the vSphere 4 client, you can check the Support clustering features such as Fault Tolerance box during disk creation. Checking this box won't enable FT for your VM, but it will format the VMDK as eagerzeroedthick, as this is a requirement for FT.

  • If using the vSphere 5 client, you are presented with three options during disk creation; select the Thick Provision Eager Zeroed radio button.

  • If you prefer the command line or want to build this into an automated solution, you have a couple more options (a PowerCLI alternative is also sketched after these examples):
    • Console:

      vmkfstools -c 10g -d eagerzeroedthick /vmfs/volumes/datastore1/myVM/myVMData.vmdk

      Note: -c 10g creates a 10GB VMDK file; adjust as needed.

    • vSphere CLI:

      vmkfstools.pl --server <ESXHost> --username <username> --password <passwd> -c 10g -d eagerzeroedthick /vmfs/volumes/datastore1/myVM/myVMData.vmdk

      Note: -c 10g creates a 10GB VMDK file; adjust as needed.
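
If PowerCLI fits your automation better, New-HardDisk exposes the same choice through its -StorageFormat parameter. This is a minimal sketch; the vCenter and VM names are hypothetical placeholders, and the -CapacityGB parameter assumes a recent PowerCLI release (older versions use -CapacityKB instead).

    # Sketch: create a new 10GB eager zeroed thick virtual disk on an existing VM.
    # The vCenter and VM names are hypothetical placeholders.
    # On older PowerCLI versions, use -CapacityKB 10485760 instead of -CapacityGB 10.
    Connect-VIServer -Server "vcenter.example.local"

    New-HardDisk -VM (Get-VM -Name "myVM") -CapacityGB 10 -StorageFormat EagerZeroedThick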

I'm not sure how my virtual disks were created. How can I check?

Fortunately, we can check whether a virtual disk was created as eagerzeroedthick. To do so, we can use vmkfstools -D (capital "D") against the VMDK in question (direct the command to the <vm_name>-flat.vmdk file):

vmkfstools -D <vm_name>-flat.vmdk

The output of this command will look similar to the output below. We're interested in TBZ in the last line, which refers to the number of blocks in the disk file still To Be Zeroed. A TBZ of zero indicates an eagerzeroedthick VMDK; otherwise the disk is zeroedthick (lazy zeroed), as in our example below.

Lock [type 10c00001 offset 9345024 v 30, hb offset 3293184

gen 11, mode 1, owner 4ea9d387-964b13c3-7f81-001a4be8eae0 mtime 49198 nHld 0 nOvf 0]

Addr <4, 2, 19>, gen 25, links 1, type reg, flags 0, uid 0, gid 0, mode 600

len 1073741824, nb 128 tbz 128, cow 0, newSinceEpoch 0, zla 1, bs 8388608

For more information on determining whether or not a VMDK is eagerzeroedthick or zeroedthick refer to VMware KB article Determining if a VMDK is zeroedthick or eagerzeroedthick.
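
If you'd rather check from PowerCLI than from the console, the same information is exposed through the vSphere API as the eagerlyScrub flag on a flat disk's backing. This is a rough sketch, assuming flat (non-RDM) VMDKs and a hypothetical VM name.

    # Sketch: report whether each flat virtual disk on a VM was created eager zeroed.
    # EagerlyScrub = True indicates eagerzeroedthick; empty or False indicates
    # zeroedthick (lazy zeroed) or thin. The VM name is a hypothetical placeholder.
    Get-VM -Name "myVM" | Get-HardDisk |
        Select-Object Name, Filename,
            @{N="EagerlyScrub";    E={$_.ExtensionData.Backing.EagerlyScrub}},
            @{N="ThinProvisioned"; E={$_.ExtensionData.Backing.ThinProvisioned}}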

Oops, I didn't make my virtual disk eagerzeroedthick during creation. What can I do?

There are a few ways to zero out your existing virtual disk.

  • Using vmkfstools with the -k option, you can zero out the un-zeroed blocks while maintaining the existing data. This is the best option if you've already started populating the disk with data, as the data will not be touched. Direct the command to the <vm_name>.vmdk file:

    vmkfstools -k <vm_name>.vmdk

    Note: This method requires the virtual machine to be powered off or the virtual disk to be removed from the virtual machine.

  • If powering off the virtual machine or using hot-remove to disconnect the virtual disk is not an option, and if there is NO data on the virtual disk, you may reformat the volume from within Windows by unchecking the Quick Format option. This process removes files from the volume and scans the entire volume for bad sectors, effectively causing all blocks to be touched and zeroed.

  • Enabling Fault Tolerance for a VM requires that its VMDKs be zeroed out. If your virtual machine has only one vCPU, you can temporarily enable FT, and the process will ensure that all virtual disks are converted to eagerzeroedthick. Once the disks have been prepared, you can disable FT. This method also preserves existing data.


Hopefully this has given you a better look at why we recommend using eager zeroed thick disks for I/O-intensive applications, as well as how to create new disks, check existing disks, and convert existing virtual disks to the eagerzeroedthick format.

As always, we look forward to hearing from you so please join the discussion!

-alex

Alex Fontana, Sr. Solutions Architect

Exchange 2010 on vSphere Customer Case Study

Those of us embarking on a new virtualization project like to learn from others. At the very least, we want to be sure that if someone else has done something similar, we can learn from any lessons encountered along the way. Over the past year and a half we've had many conversations with customers who were in the process of evaluating Exchange 2010 or designing a logical environment with a decision on whether or not to virtualize still pending. Many of these customers wanted to hear from other customers.

We're now getting to the point where we have full deployments that we can begin to talk about. Some we may not be able to mention by name but can speak to specifics around size and design. Others have allowed us to come in and create case studies based on their success story.

Today we released our latest case study on Raymond James. As a financial company managing about 1.9 million accounts, Raymond James considers email one of the most critical applications its IT organization supports. Read how Raymond James successfully virtualized an Exchange 2010 environment on vSphere to support over 18,000 mailboxes and provide high availability without the use of Database Availability Groups, and how they use VMware Site Recovery Manager to provide disaster recovery capabilities and proactively test site failover.

Case Study, Video

-alex