
Monthly Archives: August 2012

New Features of the vSphere Storage Appliance version 5.1

This post highlights the new features of the recently announced vSphere Storage Appliance version 5.1. The major enhancements to VSA 5.1 are twofold: the first is to enhance the VSA for the SMB/SME markets; the second is to move into adjacent markets such as ROBO.

Before we start, I want to make a clarification around the required RAID configuration. Initially, VSA v1.0 required a RAID10 configuration on the local storage of each of the ESXi hosts participating in the VSA cluster. This has since been relaxed, and RAID5 & RAID6 are now also supported configurations. More detail can be found here. Let’s move on to the new 5.1 features.

Support for additional disk drives & Expansion CHASSIS

In VSA 1.0, each ESXi host could have only 4 x 3TB disk drives. In VSA 5.1, we are increasing the number of disks per ESXi host to 8 x 3TB disk drives.

The number of 2TB (or less) disks per host has also been increased. 12 disks can now be supported internally in an ESXi host. In VSA 1.0, this was only 8. One other major enhancement is the support for JBODs (Just a Bunch Of Disks) or disk expansion chassis. An additional 16 disks can now be supported in an expansion chassis attached to an ESXi host. This gives a maximum number of 2TB (or less) physical disks per host of 28.
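To put those numbers in context, here is a quick back-of-the-envelope calculation (plain Python, purely illustrative) of the raw capacity ceiling per host for the two drive-size options above. Note this is raw capacity only; usable VSA capacity will be considerably lower once RAID overhead and the mirroring across cluster nodes are factored in.

```python
# Illustrative only: raw (pre-RAID, pre-mirroring) capacity ceiling per
# ESXi host in a VSA 5.1 cluster, based on the drive counts quoted above.

def raw_capacity_tb(internal_disks, expansion_disks, disk_size_tb):
    """Raw capacity in TB for one host: internal drives plus expansion chassis."""
    return (internal_disks + expansion_disks) * disk_size_tb

# Option 1: 8 x 3TB internal drives, no expansion chassis
print(raw_capacity_tb(8, 0, 3))    # 24 TB raw per host

# Option 2: 12 x 2TB internal drives + 16 x 2TB drives in an expansion chassis
print(raw_capacity_tb(12, 16, 2))  # 56 TB raw per host (28 drives)
```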

Increase Storage Capacity Online

In VSA 1.0, the cluster storage capacity could not be resized after deployment. VSA 5.1 supports growing the storage capacity online.

There is a new UI enhancement in VSA 5.1 to address this. It allows the VSA shared storage to be increased in size after deployment, as long as there is enough free local storage on all nodes to grow.

ROBO Support

This is the most sought after feature of the VSA 5.1 release. There have been many requests to enable VSA for ROBO (Remote Office/Branch Office) solutions. This involved two development efforts:

  • Allow a single vCenter instance to manage multiple VSA clusters
  • Allow vCenter to reside on a different network subnet from the VSA cluster

Both of these features are now in VSA 5.1. VMware will support 150 VSA clusters being managed from a single vCenter server.

vCenter running on the VSA Cluster

Another popular feature request was to allow the vCenter Server to run as a VM on the VSA cluster, something that wasn’t possible in VSA 1.0; vCenter had to be installed somewhere else before a VSA cluster could be deployed. Customers can now deploy vCenter on a local VMFS datastore of one of the ESXi hosts that will participate in the cluster. The cluster can then be created, since we can now build the VSA datastore from a subset of the local VMFS storage rather than requiring all of it, as we did in 1.0. After the shared storage (NFS datastores) is created, vCenter can be migrated onto it.
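As a rough sketch of that final migration step, the snippet below uses pyVmomi (the vSphere Python SDK) to Storage vMotion the vCenter VM onto one of the newly created VSA NFS datastores. The connection details, VM name and datastore name are made-up placeholders, and this is only an illustration of the idea, not the documented VSA deployment procedure.

```python
# Minimal pyVmomi sketch: Storage vMotion the vCenter VM from local VMFS
# onto a VSA NFS datastore. Names and credentials below are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password")  # hypothetical connection details
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Walk the inventory and return the first managed object with this name."""
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

vc_vm = find_by_name(vim.VirtualMachine, "vCenter-Server")  # placeholder name
nfs_ds = find_by_name(vim.Datastore, "VSADs-1")             # placeholder name

# Relocate all of the VM's files to the shared VSA NFS datastore.
spec = vim.vm.RelocateSpec(datastore=nfs_ds)
task = vc_vm.RelocateVM_Task(spec=spec)

Disconnect(si)
```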

Brownfield Install of the VSA Cluster

In VSA 1.0, we required a vanilla installation of ESXi 5.0 on the two or three nodes (what we called a greenfield installation). VSA 5.1 includes a feature called the automatic brownfield install of the VSA, whereby VSA 5.1 can be installed on ESXi hosts that are already in production and may have network portgroups configured as well as running VMs. One of these running VMs can contain your vCenter Server, as we discussed previously.

vSphere 5.1 Specific Enhancements

VSA 5.1 will run on both vSphere 5.1 and vSphere 5.0. Another restriction which we had in VSA 1.0 is also lifted in VSA 5.1. We now support memory overcommit on VMs running on VSA 5.1. This means that you no longer need to allocate a full complement of memory to each VM running on the VSA.

That completes the list of storage enhancements in the 5.1 version of the vSphere Storage Appliance (VSA). Obviously this is only a brief overview of each of the new features. I will be elaborating on all of these new features over the coming weeks and months.

Get notified of these blog postings and more VMware Storage information by following me on Twitter: @VMwareStorage

vSphere 5.1 New Storage Features

vSphere 5.1 is upon us. The following is a list of the major storage enhancements introduced with the vSphere 5.1 release.

VMFS File Sharing Limits

In previous versions of vSphere, the maximum number of hosts which could share a read-only file on a VMFS volume was 8. The primary use case for multiple hosts sharing read-only files is of course linked clones, where clones located on separate hosts all share the same base disk image. In vSphere 5.1, with the introduction of a new locking mechanism, the number of hosts which can share a read-only file on a VMFS volume has been increased to 32. This makes VMFS as scalable as NFS for VDI deployments and vCloud Director deployments which use linked clones.

Space Efficient Sparse Virtual Disks

A new Space-Efficient Sparse Virtual Disk aims to address certain limitations of virtual disks. The first of these is the ability to reclaim stale or stranded data in the Guest OS filesystem/database; SE Sparse disks introduce an automated mechanism for reclaiming stranded space. The other feature is a dynamic block allocation unit size: SE Sparse disks have a new configurable block allocation size which can be tuned to the recommendations of the storage array vendor, or indeed the applications running inside the Guest OS. VMware View is the only product that will use the new SE Sparse disk in vSphere 5.1.


Setting the record straight on VMware vSphere Data Protection

There has been a fair amount of unsubstantiated speculation and noise around the new VMware virtual machine backup and recovery solution called vSphere Data Protection (VDP). Some of the most inaccurate statements I have read were along the lines of – and I am paraphrasing – “EMC embeds its storage technology in vSphere” or “VDP is EMC Avamar Virtual Edition”. I thought it might be good to take a few moments and set the record straight by providing more context as to why VMware introduced VDP, what it is, and what use cases it serves.

Let’s first talk about why VMware is replacing VMware Data Recovery (VDR) with VDP. VDR was a first-generation solution for the rapidly growing backup market; it was first bundled with vSphere 4 and experienced rapid adoption by VMware customers. However, in the constant effort to deliver more value to customers, VMware has been actively working on improving data protection and disaster recovery with enhanced backup and replication solutions. This led VMware to introduce a new, more robust product in the form of VDP. To maximize customer value, VMware decided to collaborate with the EMC Avamar team, which has world-class, industry-leading expertise in backup and recovery technology, to build the underlying foundation for VDP.

Just like VDR, VDP is ideally suited to protecting small environments with enterprise-class backup and de-duplication technology. VDP scales up to 2TB of de-duplicated storage or 100 VMs and leverages a variable-length de-duplication algorithm to deliver de-duplication rates of as much as 99%. VDP is easy to use and is managed directly from the vSphere Web Client, allowing administrators to quickly set up their backup policies and manage backups from a single pane of glass along with their entire virtual infrastructure.
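To give a feel for what de-duplication rates in that range mean against the 2TB destination limit, here is a simple back-of-the-envelope calculation (plain Python, purely illustrative; real-world rates depend entirely on how much the protected VMs have in common).

```python
# Illustrative only: logical (front-end) backup data that could fit in VDP's
# 2TB de-duplicated destination at a given de-duplication rate.

def logical_capacity_tb(dedup_store_tb, dedup_rate):
    """dedup_rate is the fraction of data eliminated by de-duplication (0-1)."""
    return dedup_store_tb / (1.0 - dedup_rate)

print(round(logical_capacity_tb(2.0, 0.90)))  # ~20 TB at a 90% rate
print(round(logical_capacity_tb(2.0, 0.99)))  # ~200 TB at the best-case 99% rate
```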

Now on to the hot question at hand: Is VDP a “re-packaged” version of Avamar Virtual Edition (AVE)?  The answer is no.  VDP is an entirely new VMware product co-developed by VMware and EMC. It was designed specifically to be integrated with vSphere and packaged with vSphere 5.1 (Essentials Plus and above).  VDP does leverage Avamar technology “under the hood” to provide a robust and mature solution, but it is an entirely different product from AVE. VDP is only sold as a VMware product, available in the vSphere platform, and is not sold by EMC.

It is important to highlight that VMware continues to foster innovation in the backup space for the virtual environments market, supporting a broad partner ecosystem. VMware is fully committed to continuing investment in the vSphere Storage APIs for Data Protection (VADP) to enable seamless integration of third-party backup and recovery with VMware vSphere.

So there you have it.  VDP replaces the functionality of VDR with new, robust features and is geared toward protecting small environments. It may not have some of the elements found in other enterprise backup and recovery solutions on the market today, but keep in mind it is bundled with most editions of vSphere 5.1 – i.e. you do not pay extra for it. Please give VDP a try and let us know what you think.

@jhuntervmware

The Best Keep Getting Better – VMware vSphere 5.1 Performance

Today, VMware’s CTO Steve Herrod addressed the 20,000+ attendees at VMworld and announced that VMware has outdone itself again by providing a virtualization platform more than capable of supporting even the most demanding and I/O-intensive mission-critical applications. One impressive performance study that Steve highlighted in his keynote was the 1 Million IOPS from a Single VM study just recently completed by the performance engineering team here at VMware.


SRM 5.1 and vSphere Replication as a Standalone Feature


Posted by
Ken Werneburg
Tech Marketing
Twitter @vmKen

Today at VMworld we announced a bunch of very exciting technologies. In this article I’ll be talking first about Site Recovery Manager 5.1, and then about vSphere Replication as a standalone feature of the vSphere platform.

Protection of your systems is a critical aspect of running a virtual infrastructure, and with these announcements (and that of vSphere Data Protection) we’ve really rounded out the business continuity functions of vSphere.

 

This is a fairly small release with some great features that continue to deliver on the changes we introduced with 5.0. At a high level, the changes are:

  • Improved VSS integration for quiescent applications with vSphere Replication
  • Improved storage handling for quicker and more consistent responsiveness and behaviour
  • Forced recovery with vSphere Replication
  • Reprotect and fallback with vSphere Replication
  • A move to a 64 bit process
  • Support for Essentials Plus environments.

Now a little more detail about a few of these items.

VSS

VMware Tools has the ability to issue commands to the operating system, such as setting up VSS snapshots. With 5.1 we have the ability to do a little more than we have in the past, and ask the OS to flush application writers as well as quiesce the OS itself. This means that for things like databases, messaging platforms, and other applications that have VSS writers, we can ensure a higher level of application recoverability. When using vSphere Replication we can flush all the writers for the apps and the OS, ensuring data consistency for the image used for recovery.

It’s completely transparent to the OS, and is a simple drop-down chosen when setting up replication for a Windows VM.

Forced Recovery

In SRM 5.0.1 we introduced the forced failover ability: if your primary site is down or responding inconsistently, we can hit timeouts and errors waiting for results. This option for failover ensures that only recovery-side operations take place, so we don’t time out waiting for commands to return from the protected site. This was, at the time, only possible with array-based replication. With 5.1 it is now supported for vSphere Replication as well.

Reprotect and Failback

We can now, after failing over, simply click the “reprotect” button and the environment that has moved to the secondary site will be fully protected back to the original site, irrespective of the type of replication you’re using. Reprotect for vSphere Replication is fantastic – it will use the existing replication policies and protection groups, do a full sync back to the primary, and you are then ready to recover or migrate back to the primary location!

Essentials Plus support

One of the most frequent requests we’ve received over the years is to make SRM more accessible to the small and midsize business market. This step to make SRM compatible with Essentials Plus makes disaster recovery more accessible than ever for SMB customers, who have as much need for business continuity as every other customer!

Now on to vSphere Replication

vSphere Replication was introduced with SRM 5.0 as a means of protecting VM data using our in-hypervisor, software-based replication. It was part of SRM 5.0 and continues to be part of SRM going forward, but now we are offering the ability to use this technology in a new fashion.

Today’s announcement about vSphere Replication is a big one:  We have decoupled it from SRM and released it as an available feature of every vSphere license from Essentials Plus through Enterprise Plus.

Every customer can now protect their environment using vSphere Replication as a fundamental protection feature, just like HA.

VR does not include all the orchestration, testing, reporting and enterprise-class DR functions of SRM, but allows for individual VM protection and recovery within or across clusters.  For many customers this type of protection is critical and has been difficult to attain short of buying into a full multisite DR solution with SRM.  Now most of our customers can take advantage of virtual machine protection and recovery with vSphere Replication.

Check out an introduction to vSphere Replication at http://www.vmware.com/files/pdf/techpaper/Introduction-to-vSphere-Replication.pdf

Announcing VMware vCloud Director 5.1!

It’s amazing what can happen in a year!  Just a year ago, VMware announced the release of vCloud Director 1.5.  Today, VMware announced the vCloud Director 5.1 release, and it’s full of new features!

by Tom Stephens, Senior Technical Marketing Architect, VMware

For starters, just check out some of the increases in the scalability:

                             Supported in vCD 5.0   Supported in vCD 5.1 (*)
# of VMs                     20,000                 30,000
# of powered on VMs          10,000                 10,000
# VMs per vApp               64                     128
# of hosts                   2,000                  2,000
# of VCs                     25                     25
# of users                   10,000                 10,000
# of Orgs                    10,000                 10,000
Max # vApps per org          500                    3,000
# of vDCs                    10,000                 10,000
# of Datastores              1,024                  1,024
# of consoles                300                    500
# of vApp Networks           -                      1,000
# of External Networks       -                      512
# of Isolated vDC Networks   -                      2,000
# of Direct vDC Networks     -                      10,000
# of Routed vDC Networks     -                      2,000
# Network Pools              -                      25
# Catalogs                   1,000                  10,000

(Note: the numbers represented here are ‘soft’ numbers and do not reflect hard limits that cannot be exceeded.)

You’ll notice that there are some line items here that refer to ‘vDC Networks’.  This is a new construct in 5.1, which replaces the organization network concept in previous versions.  Organization vDC networks simplify the virtual network topology present in vCloud Director and facilitate more efficient use of resources.

That’s not all the networking changes, though! Major enhancements have also been introduced with the Edge Gateway. Some highlights include:

  • The ability to have two different deployment models (compact and full) to provide users a choice over resource consumption and performance.
  • High availability provided by having a secondary Edge Gateway that can seamlessly take over in the event of a failure of the primary.
  • Multiple interfaces. In previous versions the vShield Edge device supported 2 interfaces. The Edge Gateways in vCloud Director 5.1 now support 10 interfaces and can be connected to multiple external networks.

The networking services that are provided out of the box with vCloud Director 5.1 have also been enhanced.  DHCP can be provided on isolated networks.  NAT services now allow for the specification of SNAT and DNAT rules to provide a finer degree of control.  There’s also support for a virtual load balancer that can direct network traffic to a pool of servers using one of several algorithms.

Additionally, vCloud Director 5.1 introduces support for VXLAN.  This provides ‘virtual wires’ that the cloud administrator can use to define thousands of networks that can be consumed on demand.

Providing the ability to have a L2 domain with VXLAN that encompasses multiple clusters gives rise to the need to support the use of multiple clusters within a Provider VDC.  This is part of the Elastic VDC feature that has now been extended to support the Allocation Pool resource model, along with the Pay-As-You-Go model.

Support for Storage Profiles provides the ability for cloud administrators to quickly provide multiple tiers of storage to the organizations.  Previously, to do this, one had to define multiple Provider VDCs.  For those who have done this, a feature has also been added to allow for the merger of multiple Provider VDCs into a single object.

Numerous changes were also added to increase usability. Top of the list is support for snapshots! The UI has also been updated to make it easier for the end user to create new vApps, reset leases, and find items within the catalog. Support for tagging objects with metadata is also provided through the UI.

I’m sure you’ll agree that this represents a lot of features…  And I haven’t even gotten into the API extensibility features or the support for Single Sign-On (SSO)!  For now, if you want more information, I’d suggest reading the What’s New whitepaper here:

http://www.vmware.com/files/pdf/techpaper/Whats-New-VMware-vCloud-Director-51-Technical-Whitepaper.pdf

If you’re at VMworld 2012, you can come to my session, where I’ll be talking about all of this.

Performance Sessions @ VMworld 2012 San Francisco

VMworld is just around the corner! As always, it is going to be an exciting, action-packed week. If you are going to be at the show, make sure you check out some of the VMware performance sessions being delivered.

vSphere Performance New Features Overview  (you never know there might be some new things ;) )

INF-VSP1622 – Performance New Features and Best Practices for vSphere
Monday, Aug 27, 4:00 PM – 5:00 PM  – Moscone West, Level 3, Room 3002
Wednesday, Aug 29, 3:30 PM – 4:30 PM – Moscone West, Level 2, Room 2016

Some other Performance Sessions that are definitely worth checking out…


My pick of vCenter related VMworld 2012 sessions

Headed to VMworld 2012 in San Francisco and looking to learn about the new features in vCenter Server? I have picked a handful of sessions that will bring you up to speed in no time.

Or come and talk directly with me at one of these sessions.

Interesting Storage Stuff at VMworld 2012 (US)

Posted by Cormac Hogan
Technical Marketing Architect (Storage)

I put together a short article on some of the things that I’ll be checking out from a storage perspective at VMworld 2012 in San Francisco next week. This is by no means a definitive list, and I’m sure we’ll hear more fantastic announcements in the coming weeks. However these are products and vendors which recently caught my eye, and I definitely mean to find out more about them at this year’s conference.

Disclaimer – Once again, the vSphere storage blog has to remain storage-vendor neutral to retain any credibility. VMware doesn’t favour any one storage partner over another, and I’m not personally endorsing any of these vendors’ products either. What I’m posting here are just a few vendors/products that are interesting to me from a storage perspective.


Virtualized vCenter in a Datastore Cluster

By Frank Denneman – Senior Technical Marketing Architect

To virtualize vCenter or not, that has been an age-old question. To control the location of the vCenter virtual machine in case the hosting vSphere server crashes, a couple of best practices were developed, such as disabling DRS, creating host-VM affinity groups, or my favorite, leaving all resource settings at their defaults and simply documenting which datastore the virtual machine is stored on.

Restrict Storage Load Balancing operations?
However, due to the introduction of Storage DRS and its load-balancing mechanisms, this recommendation requires some extra configuration. In my opinion DRS should not be limited in its load-balancing options, as disabling it for a specific virtual machine can affect other virtual machines as well. That concern is purely at the compute level, though: how should we treat the disk structure of vCenter? Should we restrict the load-balancing operations of Storage DRS?

I think it depends on the configuration of the compute layer: if you do not restrict the movement of the VM at the compute cluster layer, it is recommended to restrict the movement of the virtual machine files. By disabling the automation level of the virtual machine in the datastore cluster, Storage DRS will not move the virtual machine. Please be aware that datastore maintenance mode will fail until the virtual machine is migrated out of the datastore manually.

To ensure best performance, the vCenter virtual machine should receive more disk shares than other virtual machines.

 Increase disk shares:

  1. Select the virtual machine and go to edit settings.
  2. Select the Resource Tab and select the option Disk.
  3. Select one of the three predefined Share levels or configure custom shares.
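For those who prefer to script this, here is a hedged pyVmomi (vSphere Python SDK) sketch that performs the same reconfiguration, setting every virtual disk on the vCenter VM to high disk shares. It assumes you already have the VM object from a connected ServiceInstance; the function name is mine, and this is an illustration of the idea rather than an official procedure.

```python
# Sketch only: give every virtual disk on a VM "high" disk shares via pyVmomi.
# 'vm' is assumed to be a vim.VirtualMachine already retrieved from a
# connected ServiceInstance (names here are illustrative, not prescriptive).
from pyVmomi import vim

def set_disk_shares_high(vm):
    device_changes = []
    for device in vm.config.hardware.device:
        if isinstance(device, vim.vm.device.VirtualDisk):
            # Raise the disk shares for this virtual disk to the "high" level.
            device.storageIOAllocation.shares = vim.SharesInfo(level='high')
            device_changes.append(vim.vm.device.VirtualDeviceSpec(
                operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
                device=device))
    spec = vim.vm.ConfigSpec(deviceChange=device_changes)
    return vm.ReconfigVM_Task(spec=spec)

# Usage (assuming 'vcenter_vm' has been looked up in the inventory):
# task = set_disk_shares_high(vcenter_vm)
```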

Disable migration:

  1. Select the datastores and Datastore Cluster view.
  2. Select the Datastore Cluster.
  3. Go to Edit Datastore Cluster.
  4. Select the Virtual Machine Setting menu option.
  5. Select the Manual or Disabled Automation level.
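The same per-VM override can also be applied from a script. The pyVmomi sketch below sets a per-VM Storage DRS automation override on the datastore cluster, the rough equivalent of choosing the Manual or Disabled level in the steps above. The object names are placeholders, and this is an illustration under the assumption that the standard Storage DRS API is used, not an official procedure.

```python
# Sketch only: override Storage DRS automation for a single VM on a datastore
# cluster via pyVmomi. 'si' is a connected ServiceInstance, 'pod' is the
# vim.StoragePod (datastore cluster) and 'vm' the vim.VirtualMachine; all are
# assumed to have been looked up already.
from pyVmomi import vim

def override_sdrs_for_vm(si, pod, vm, enabled=True, behavior='manual'):
    vm_override = vim.storageDrs.VmConfigSpec(
        info=vim.storageDrs.VmConfigInfo(
            vm=vm,
            enabled=enabled,     # pass enabled=False for the "Disabled" level
            behavior=behavior))  # 'manual' keeps SDRS to recommendations only
    spec = vim.storageDrs.ConfigSpec(vmConfigSpec=[vm_override])
    srm = si.RetrieveContent().storageResourceManager
    return srm.ConfigureStorageDrsForPod_Task(pod=pod, spec=spec, modify=True)

# Usage (assuming 'datastore_cluster' and 'vcenter_vm' have been looked up):
# task = override_sdrs_for_vm(si, datastore_cluster, vcenter_vm)
```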

 

VMDK affinity rule
The virtual machine settings window also allows you to configure the VMDK affinity rule setting of the virtual machine. By default a VMDK affinity rule is applied to each virtual machine, and this rule forces Storage DRS to place all disks and files from a VM on a single datastore. I am a strong proponent of using a VMDK anti-affinity rule for all virtual machines in a datastore cluster, as it allows Storage DRS more granularity when it comes to load balancing. But for this particular scenario both configurations have merit.

The datastore where the working directory with the VMX file is placed is deemed the registered datastore. If an anti-affinity rule is applied to the VM, all VMDKs and the working directory are placed on separate datastores. When Storage DRS needs to load balance, it is extremely rare for it to move the working directory of a virtual machine: a working directory is very small compared to a VMDK and, under normal operations, generates very little I/O. When will it move a working directory? When the VM swap file is very large, nearing or exceeding the footprint of other VMDK files, and/or when the swap file is generating a lot of I/O. Take this behavior into account when not selecting a manual or disabled automation level!

Using an affinity rule reduces complexity when it comes to troubleshooting. All files are on the registered datastore.

So next time you set up and configure a virtualized vCenter, do not think only about the DRS settings; think about the Storage DRS settings as well.