Hidden benefits of virtualisation – uneven hardware

By guest blogger, Christian Wickham, Technical Account Manager, South Australia and Northern Territory, and Local Government and Councils in Western Australia, Victoria and New South Wales at VMware Australia and New Zealand

Within VMware we often focus on the latest and greatest features and capabilities offered by our newest software. Of course, we are always driving forward, and the next version’s enhancements and benefits are at the forefront of our minds – but there are still some people out there who are just starting on their virtualisation journey. Our premium editions of vSphere, such as Enterprise Plus and the vCloud Suite, offer exceptional advances for businesses and enterprises, but some smaller businesses cannot afford these editions – particularly at the start.

Some benefits of virtualisation, particularly with vSphere, are inherent and included in all versions – and deliver significant savings in both money and time. In this series, I will outline some of the simple benefits that are often not highlighted to new users of virtualisation, but are well known to existing users.

It is definitely a trend within the hardware industry to develop servers that are optimised for virtualisation: high memory density, multiple built-in network cards, support for the latest multi-core CPUs, and many other enhancements. It’s actually quite hard to buy a good-quality server with just 2 CPU cores, 4 GB of RAM, 40 GB of fault-tolerant disk space and a single network card – yet these are often exactly the requirements of server software for small and medium businesses.

Interestingly, in my experience of working with VMware customers (and my history of being a VMware customer for 5 years too), some server software actually consumes fewer resources than even that! It’s common to see a Windows 2008 R2 server actively using less than 256 MB of RAM, 100 MHz (that’s 0.1 GHz) of CPU and, after the installation of Windows, less than 5 GB of disk space. Try to buy a physical server with those specs – you can’t! In discussions I have had with software vendors, they often “bump up” their official minimum hardware specifications to the level of a mainstream standard server, because customers keep contacting them to ask if their ‘powerful’ server is appropriate.

Based on proper analysis (such as through vCenter Operations Manager – vC Ops), or even careful manual ad-hoc analysis of the vCenter performance statistics over a reasonable period, it might become apparent that your servers are over-sized. The recommendation might come back from vC Ops that a server should have 384 MB of RAM, or 3 CPUs. Unusual sizes? Not with vSphere. You can set odd numbers of CPUs (odd as in uneven, not ‘strange’…) and memory sizes in increments of less than 1 GB.
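As a purely illustrative sketch of sizing in non-standard increments, the helper below rounds an observed memory peak (plus a chosen headroom margin) up to vSphere's 4 MB memory granularity. The function name and the 25% default headroom are my own assumptions, not part of any VMware tooling:

```python
def rightsize_memory_mb(observed_peak_mb: float, headroom: float = 0.25) -> int:
    """Suggest a VM memory size: observed peak plus headroom,
    rounded up to vSphere's 4 MB memory granularity."""
    target = observed_peak_mb * (1 + headroom)
    return int(-(-target // 4) * 4)  # ceiling to the nearest multiple of 4 MB

# A server peaking at 256 MB, given 50% headroom, lands on 384 MB -
# a perfectly valid vSphere memory size:
print(rightsize_memory_mb(256, headroom=0.5))  # 384
```

The point is simply that the result does not need to be 512, 1024 or 2048 – any 4 MB multiple will do.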

There is a tiny drawback here, though. If you need to resize your VMs downwards, the operating system would get quite upset if you pulled out memory or CPUs whilst it was running – that’s why the vSphere client(s) prevent you from doing it. Instead, you need to power down the VM, make the changes, and the new specifications take effect when it powers back on.

The upside is, if you have vSphere 5.1 Standard or above, you have hot-add of CPU and memory (vSphere 4.x and 5.0 need Enterprise or above). This needs to be activated on each VM (virtual hardware version 7 or later) whilst it is powered off (so we recommend you set this on your templates), and in usual VMware fashion it is a single mouse-click GUI option to enable. Depending on your Windows edition, hot-added memory is immediately accessible (2003 Enterprise, 2008 Enterprise, 2008 R2, 2012) and hot-added CPU is immediately accessible (2008 R2 Enterprise, 2012), or a reboot is required (for RAM: 2008 Standard; for CPU: all 2008 and 2008 R2 Standard editions). For Linux flavours, hot-add behaviour varies by distro – some recognise the new hardware immediately, and some require kernel commands to recognise the additional CPU(s), or may require a reboot.
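For anyone scripting build standards, the Windows hot-add behaviour above can be captured as a simple lookup. The table below merely restates the editions listed in this post – the structure and names are my own, and unknown combinations are treated conservatively:

```python
# (edition, resource) -> True if hot-added capacity is usable immediately,
# False if the guest needs a reboot first (per the editions listed above).
HOT_ADD_IMMEDIATE = {
    ("2003 Enterprise", "ram"): True,
    ("2008 Enterprise", "ram"): True,
    ("2008 R2", "ram"): True,
    ("2012", "ram"): True,
    ("2008 Standard", "ram"): False,
    ("2008 R2 Enterprise", "cpu"): True,
    ("2012", "cpu"): True,
    ("2008 Standard", "cpu"): False,
    ("2008 R2 Standard", "cpu"): False,
}

def needs_reboot(edition: str, resource: str) -> bool:
    # Anything not listed is assumed to need a reboot, to be safe.
    return not HOT_ADD_IMMEDIATE.get((edition, resource), False)

print(needs_reboot("2012", "cpu"))           # False
print(needs_reboot("2008 Standard", "ram"))  # True
```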

How much can you add? It depends upon your licence – from 8 vCPUs per VM in Essentials all the way up to 64 vCPUs per VM in Enterprise Plus. Memory can be added up to 1 TB (or 1,048,576 MB if you prefer). However, you can’t give an individual VM more virtual CPUs than you have physical CPU cores, and you can’t give a VM more RAM than you physically have inside the host server. As I mentioned above, though, smaller specifications are often all that is needed for most applications used in medium and smaller businesses.
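These constraints are easy to sanity-check in a script. The sketch below is illustrative only – the function name is mine, and the default licence limit uses the Essentials figure of 8 vCPUs quoted above:

```python
def validate_vm_size(vcpus, ram_mb, host_cores, host_ram_mb,
                     licence_max_vcpus=8):
    """Return a list of reasons a requested VM size is invalid
    (an empty list means the size fits the limits described above)."""
    MAX_VM_RAM_MB = 1_048_576  # 1 TB per-VM maximum
    problems = []
    if vcpus > licence_max_vcpus:
        problems.append("vCPUs exceed the licence limit")
    if vcpus > host_cores:
        problems.append("vCPUs exceed the host's physical cores")
    if ram_mb > host_ram_mb:
        problems.append("RAM exceeds what the host physically has")
    if ram_mb > MAX_VM_RAM_MB:
        problems.append("RAM exceeds the 1 TB per-VM maximum")
    return problems

# A modest 3 vCPU / 384 MB VM passes easily on a 16-core, 128 GB host:
print(validate_vm_size(3, 384, 16, 131072))  # []
```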

So, we have covered CPU and RAM being assigned in “unusual” numbers, and the ability to assign these very low and then add to it whilst the VM is running as your needs grow. What about network? What about disks?

New disks can be added to a VM at any time and, depending upon the installed operating system, they will be recognised as an unformatted disk, ready to be initialised and used. In older versions of Windows and Linux, you may need to scan for new disks. It only takes a second or two for a new disk to be added to the VM – and they can be specified in megabytes, up to 2 terabytes. You can also resize an existing disk; when you do this, the extra capacity is seen by the operating system as unallocated space. With Windows 2003, you cannot resize the boot disk (C: drive) or any disk containing a pagefile, but you can resize data disks. In newer versions of Windows you can resize all disks – but with the same restriction as CPU and RAM: you can add, but not take away.

There are some restrictions on maximums with disks too. In vSphere 5.1 and below, an individual virtual disk can be up to 2 TB (minus 512 bytes), and you can have a maximum of 64 such disks per VM (4 IDE and 60 SCSI). You can also directly attach a disk from a SAN to a VM as a raw device mapping (RDM), up to 64 TB. However, there are again many reasons to keep the number and size of your virtual disks small.
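That 2 TB minus 512 bytes ceiling is a number worth keeping handy when planning disk layouts. A trivial sketch (constant and function names are mine):

```python
# Per-disk maximum in vSphere 5.1 and below: 2 TB minus 512 bytes.
MAX_VMDK_BYTES = 2 * 1024**4 - 512

def fits_in_one_vmdk(size_bytes):
    """True if this much data fits within a single virtual disk."""
    return size_bytes <= MAX_VMDK_BYTES

print(MAX_VMDK_BYTES)                 # 2199023255040
print(fits_in_one_vmdk(2 * 1024**4))  # False - a "full" 2 TB is 512 bytes too big
```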

Network cards can be added whilst a VM is running too. Start with a single NIC; if the operating system needs another, more can be added – up to 10. In practice, unless you are using your VM as a router or have other application reasons for multiple virtual NICs, one is often enough. Additional bandwidth and redundancy can be added at the physical host layer.

If you are new to virtualisation, or have never tried running a virtual machine with an “uneven” virtual hardware configuration – give it a try and prove to yourself (and your colleagues) that servers do not always need to have 2 or 4 or 6 (CPUs or GB of RAM). Next time you are purchasing software that has a minimum requirement, have a look at your trial or evaluation of the software and see what it is actually using – this might be the start of a whole new density improvement in your vSphere environment.

If you have got this far and are thinking, “what about large enterprises?”, then consider the overall density that sizing your VMs correctly can achieve. At a density of around 30 VMs per host, even reducing each VM by 100 MB releases (in this example) a further 3 GB per host.
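The arithmetic behind that closing claim is worth making explicit (the function name is mine):

```python
def memory_reclaimed_gb(vms_per_host, mb_per_vm, hosts=1):
    """RAM freed by trimming each VM by mb_per_vm, in GB (1 GB = 1024 MB)."""
    return vms_per_host * mb_per_vm * hosts / 1024

# 30 VMs trimmed by 100 MB each frees roughly 3 GB on a single host:
print(round(memory_reclaimed_gb(30, 100), 2))  # 2.93
```

Across a fleet of hosts, those fractions of a gigabyte per VM add up to whole extra VMs' worth of capacity.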