
Monthly Archives: August 2012

Protecting Hadoop with VMware vSphere 5 Fault Tolerance

Hadoop provides a platform for building distributed systems for massive data storage and analysis. It has internal mechanisms to replicate user data and to tolerate many kinds of hardware and software failures. However, like many other distributed systems, Hadoop has a small number of Single Points of Failure (SPoFs). These include the NameNode (which manages the Hadoop Distributed Filesystem namespace and keeps track of storage blocks), and the JobTracker (which schedules jobs and the map and reduce tasks that make up each job). VMware vSphere Fault Tolerance (FT) can be used to protect virtual machines that run these vulnerable components of a Hadoop cluster. Recently a cluster of 24 hosts was used to run three different Hadoop applications to show that such protection has only a small impact on application performance. Various Hadoop configurations were employed to artificially create greater load on the NameNode and JobTracker daemons. With conservative extrapolation, these tests show that uniprocessor virtual machines with FT enabled are sufficient to run the master daemons for clusters of more than 200 hosts.
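
As a rough illustration of what enabling this protection looks like programmatically, the sketch below uses pyVmomi (the Python bindings for the vSphere API) to turn on Fault Tolerance for the VM hosting the NameNode by creating its FT secondary. The vCenter address, credentials, and VM name are placeholders, and the usual FT prerequisites (a vSphere HA cluster, an FT logging network, and a uniprocessor VM) are assumed to already be in place.

  # Minimal sketch: turn on vSphere Fault Tolerance for a Hadoop master VM.
  # All names and credentials below are placeholders.
  from pyVim.connect import SmartConnect, Disconnect
  from pyVmomi import vim

  si = SmartConnect(host="vcenter.example.com",
                    user="administrator@vsphere.local",
                    pwd="password")
  try:
      # Locate the VM that runs the NameNode daemon.
      view = si.content.viewManager.CreateContainerView(
          si.content.rootFolder, [vim.VirtualMachine], True)
      vm = next(v for v in view.view if v.name == "hadoop-namenode")
      view.DestroyView()

      # CreateSecondaryVM_Task creates the FT secondary VM; with no host
      # argument, vCenter chooses a compatible host in the HA cluster.
      task = vm.CreateSecondaryVM_Task()
      print("FT enable task started:", task.info.key)
  finally:
      Disconnect(si)

The same call would be made for the JobTracker VM; once FT is on, vSphere keeps a secondary in lockstep on another host and fails over transparently if the primary's host fails.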

A new white paper, “Protecting Hadoop with VMware vSphere 5 Fault Tolerance,” is now available in which these tests are described in detail. CPU and network utilization of the protected VMs are given to enable comparisons with other distributed applications. In addition, several best practices are suggested to maximize the size of the Hadoop cluster that can be protected with FT.

VMware vSphere 5.1 vMotion Architecture, Performance, and Best Practices

vMotion and Storage vMotion are key, widely adopted technologies that enable the live migration of virtual machines on the vSphere platform. vMotion provides the ability to live migrate a virtual machine from one vSphere host to another, with no perceivable impact on the end user. Storage vMotion provides the ability to live migrate the virtual disks belonging to a virtual machine across storage elements on the same host. Together, vMotion and Storage vMotion enable critical datacenter workflows, including automated load balancing with DRS and Storage DRS, hardware maintenance, and the permanent migration of workloads.

Each vSphere release introduces new vMotion functionality, increased reliability, and significant performance improvements. vSphere 5.1 continues this trend by offering new enhancements to vMotion that provide a new level of ease and flexibility for live virtual machine migrations. vSphere 5.1 vMotion now removes the shared-storage requirement for live migration and allows traditional vMotion and Storage vMotion to be combined into one operation. The combined migration copies both the virtual machine's memory and its disks over the network to the destination vSphere host. This shared-nothing unified live migration feature gives administrators significantly more simplicity and flexibility in managing and moving virtual machines across their virtual infrastructures than the traditional vMotion and Storage vMotion migration solutions.
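
To make the combined operation concrete, the following minimal sketch (pyVmomi again, with placeholder VM, host, and datastore names) shows what a unified migration looks like at the API level: a single relocate spec names both the destination host and the destination datastore, and one task moves memory and disks together over the network.

  # Minimal sketch: a combined host + datastore live migration (shared-nothing
  # vMotion) through the vSphere API. All names below are placeholders.
  from pyVim.connect import SmartConnect, Disconnect
  from pyVmomi import vim

  def find_by_name(content, vimtype, name):
      # Return the first managed object of the given type with the given name.
      view = content.viewManager.CreateContainerView(
          content.rootFolder, [vimtype], True)
      try:
          return next(obj for obj in view.view if obj.name == name)
      finally:
          view.DestroyView()

  si = SmartConnect(host="vcenter.example.com",
                    user="administrator@vsphere.local",
                    pwd="password")
  try:
      content = si.content
      vm = find_by_name(content, vim.VirtualMachine, "tier1-app-vm")
      dest_host = find_by_name(content, vim.HostSystem, "esx02.example.com")
      dest_ds = find_by_name(content, vim.Datastore, "esx02-local-datastore")

      # One relocate spec carries both the destination host and datastore, so
      # the VM's memory and virtual disks migrate in a single live operation.
      # (A destination resource pool can also be set via spec.pool if needed.)
      spec = vim.vm.RelocateSpec()
      spec.host = dest_host
      spec.datastore = dest_ds

      task = vm.RelocateVM_Task(spec=spec)
      print("Unified migration started:", task.info.key)
  finally:
      Disconnect(si)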

A new white paper, “VMware vSphere 5.1 vMotion Architecture, Performance and Best Practices”, is now available. In that paper, we describe the vSphere 5.1 vMotion architecture and its features. Following the overview and feature description of vMotion in vSphere 5.1, we provide a comprehensive look at the performance of live migrating virtual machines running typical Tier 1 applications using vSphere 5.1 vMotion, Storage vMotion, and vMotion. Tests measure characteristics such as total migration time and application performance during live migration. In addition, we examine vSphere 5.1 vMotion performance over a high-latency network, such as that in a metro area network. Test results show the following:

  • During storage migration, vSphere 5.1 vMotion maintains the same performance as Storage vMotion, even when using the network to migrate, due to the optimizations added to the vSphere 5.1 vMotion network data path.
  • During memory migration, vSphere 5.1 vMotion maintains performance nearly identical to that of traditional vMotion, due to the optimizations added to the vSphere 5.1 vMotion memory copy path.
  • vSphere 5.1 vMotion retains the proven reliability, performance, and atomicity of the traditional vMotion and Storage vMotion technologies, even at metro area network distances.

Finally, we describe several best practices to follow when using vMotion.

For the full paper, see “VMware vSphere 5.1 vMotion Architecture, Performance and Best Practices”.

 

1millionIOPS On 1VM

Last year at VMworld 2011 we presented one million I/O operations per second (IOPS) on a single vSphere 5 host (link). The intent was to demonstrate vSphere 5's performance by using multiple VMs to drive an aggregate load of one million IOPS through a single server. There has recently been some interest in driving a similar I/O load through a single VM. We used a pair of Violin Memory 6616 flash memory arrays, connected to a two-socket HP DL380 server, for some quick experiments prior to VMworld. vSphere 5.1 was able to demonstrate high performance and I/O efficiency by exceeding one million IOPS with only a modest eight-way VM. A brief description of our configuration and results is given below.

Configuration:
Hypervisor: vSphere 5.1
Server: HP DL380 Gen8
CPU: 2 x Intel Xeon E5-2690, HyperThreading disabled
Memory: 256GB
HBAs: 5 x QLE2562
Storage: 2 x Violin Memory 6616 Flash Memory Arrays
VM: Windows Server 2008 R2, 8 vCPUs and 48GB of memory
Iometer Config: 4K I/O size with 16 workers

Results:
Using the above configuration, we achieved 1,055,896 total sustained IOPS. Check out the following short video clip from one of our latest runs.
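
For a sense of the bandwidth behind that number, a quick back-of-the-envelope calculation (using the 4K request size from the Iometer configuration above) converts the sustained IOPS into aggregate throughput:

  # Rough aggregate throughput implied by the sustained IOPS result above.
  iops = 1055896              # sustained IOPS from the run
  io_size = 4 * 1024          # 4K requests, in bytes
  bytes_per_sec = iops * io_size
  print(round(bytes_per_sec / 1e9, 2), "GB/s")     # ~4.32 GB/s
  print(round(bytes_per_sec / 2**30, 2), "GiB/s")  # ~4.03 GiB/s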

Look out for a more thorough write-up after VMworld.