Category Archives: ESXi

Hadoop moves to 2.0 – Virtualizing the New YARN

Hadoop 2.0, also known as Yet Another Resource Negotiator (YARN), is the newest generation of the Hadoop technology in popular use today for highly distributed processing and management of big data. YARN is now shipped by the Hadoop distributors as part of their Hadoop 2.x distributions. YARN changes the architecture that was inherent in Hadoop 1.0 so that the system can scale to new levels and responsibilities are assigned more clearly to different components. Looking deeper into the functionality that YARN offers, it is clear that there are many good reasons for virtualizing it.

We find that the YARN and vSphere technologies are complementary and that they serve mutually beneficial purposes in building your big data clusters.
Continue reading

vCenter Server 5.5 Update 1b released

Today VMware released an update to its virtualization management solution, vCenter Server. The update brings several fixes, as documented in the release notes, which can be reviewed in full here.

The new versions are as follows:

  • vCenter Server 5.5 Update 1b | 12 JUN 2014 | Build 1891313
  • vCenter Server 5.5 Update 1b Installation Package | 12 JUN 2014 | Build 1891310
  • vCenter Server Appliance 5.5 Update 1b | 12 JUN 2014 | Build 1891314

All of these can be downloaded now from vmware.com.

Continue reading

Virtual SAN Automatic “Add Disk to Storage Mode” Fails (Part II)

In part 1 of this article, we looked at an interesting scenario in which, despite the Virtual SAN disk management setting being set to automatic, Virtual SAN would not form disk groups from the disks present in the hosts. Upon closer examination, we discovered that the server vendor had pre-imaged the drives with NTFS prior to shipping. When Virtual SAN detects an existing partition, it does not automatically erase the partitions and replace them with its own; this protects against accidental data erasure. Since NTFS partitions already existed on the drives, Virtual SAN was awaiting manual intervention. In the previous article, we walked through the manual steps to remove the existing partitions and allow Virtual SAN to build the disk groups. In this article, we will look at how to expedite the process through scripting.

Warning: Removing disk partitions will render data irretrievable. This script is intended for educational purposes only. Please do not use it directly in a production environment.

As promised in part 1 of this article, today we will demonstrate how to create your own utility to remove unlocked/unmounted partitions from disks located within your ESXi host. The aim of the script is to provide an example workflow for removing partitions, one that requires user confirmation before each partition is removed. This example workflow can be adapted and built upon to create your own production-ready utility.
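
As a rough illustration only, and not the utility from the original article, here is a minimal sketch of that confirm-before-delete workflow. It assumes it runs in the ESXi shell under the host's bundled Python 2 interpreter and simply wraps the partedUtil command line tool (one way to remove partitions on an ESXi host); the device path and all names in it are illustrative, and it will destroy data on any partition you confirm.

    #!/usr/bin/env python
    # Minimal sketch: interactively delete partitions from one disk on an
    # ESXi host by wrapping partedUtil. Educational example only --
    # deleting partitions destroys data.
    import subprocess
    import sys

    def run(cmd):
        # Run a command and return its stdout, raising on a non-zero exit.
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
        out, _ = proc.communicate()
        if proc.returncode != 0:
            raise RuntimeError("command failed: %s" % " ".join(cmd))
        return out

    def get_partitions(device):
        # 'partedUtil getptbl' prints the label, then the geometry, then one
        # line per partition whose first field is the partition number.
        table = run(["partedUtil", "getptbl", device])
        return [line.split()[0] for line in table.splitlines()[2:] if line.strip()]

    def main():
        if len(sys.argv) != 2:
            print("Usage: %s /vmfs/devices/disks/<device>" % sys.argv[0])
            sys.exit(1)
        device = sys.argv[1]
        for part in get_partitions(device):
            answer = raw_input("Delete partition %s on %s? [y/N] " % (part, device))
            if answer.lower() == "y":
                run(["partedUtil", "delete", device, part])
                print("Removed partition %s." % part)
            else:
                print("Skipping partition %s." % part)

    if __name__ == "__main__":
        main()

The same pattern could be driven remotely instead, for example through the vSphere APIs; the essential piece is the explicit confirmation before each destructive step.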

Continue reading

Does Enhanced vMotion Compatibility (EVC) Affect Performance?

YES!

Now that I’ve scared you, let’s take a look at these use cases.

Continue reading

Checking the vNUMA Topology

I’ve been asked a few times recently how to determine what virtual topology vNUMA recommended and created for us. Besides looking at the guest OS to see the final result, you can also check the virtual machine’s vmware.log file for more detailed information.

Additional background here.

Examples:

1) “Wide” and “Flat” virtual machine – default configuration

This virtual machine was configured with 20 vCPUs (20 sockets and 1 core per socket) on a host with 4 sockets, 10 cores per socket, and hyper-threading enabled:

numa: Exposing multicore topology with cpuid.coresPerSocket = 10 is suggested for best performance

numaHost: 2 virtual nodes, 20 virtual sockets, 2 physical domains

Here we see vNUMA has automatically set corespersocket = 10, which matches the physical topology, and presented 2 “virtual nodes” aka NUMA nodes.

2) Spanning pNUMA nodes – manually configured

This virtual machine was configured with 20 vCPUs (1 socket and 20 cores per socket) on a host with 4 sockets, 10 cores per socket, and hyper-threading enabled:

numa: Setting numa.vcpu.maxPerVirtualNode=20 to match cpuid.coresPerSocket

numaHost: 1 virtual nodes, 1 virtual sockets, 2 physical domains

Here we see vNUMA has respected the manual configuration and set the vNUMA advanced setting maxPerVirtualNode = 20, which doesn’t match the physical topology. One “virtual node,” aka NUMA node, is presented, and it spans 2 “physical domains,” aka pNUMA nodes.

So searching vmware.log for ‘numa’ and ‘numaHost’ will provide these details. Again, a reminder: let vNUMA provide the optimal configuration when possible.
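
If you want to pull those entries out without reading the whole log, a trivial sketch like the following works from anywhere you can read the virtual machine’s directory; the log path shown is only a placeholder for your own VM’s location.

    #!/usr/bin/env python
    # Print the vNUMA-related entries from a virtual machine's vmware.log.
    # The path below is a placeholder -- point it at the VM's directory
    # on its datastore.
    log_path = "/vmfs/volumes/datastore1/myvm/vmware.log"

    with open(log_path) as log:
        for line in log:
            # The interesting entries are tagged 'numa:' and 'numaHost:'.
            if "numa:" in line or "numaHost:" in line:
                print(line.rstrip())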

SAP HANA Now Supported on VMware vSphere 5.5 for Production Scenarios

Eighteen months ago at SAP SAPPHIRE Madrid 2012, VMware presented impressive performance data showcasing SAP HANA running on vSphere. At that time, vSphere 5.1 received support from SAP to run SAP HANA for non-production scenarios.

After an extended and successful joint testing program with SAP, we’re proud to announce that vSphere 5.5 has achieved production support for SAP HANA. Today, joint customers can run and scale single-node SAP HANA databases up to 1 TB in a virtual environment while taking advantage of all of the vSphere features they know and rely on to achieve high availability and improved Quality of Service for their mission-critical workloads.

Continue reading

vSphere Distributed Switch – Network Monitoring

When users adopt the vSphere Distributed Switch (VDS), a whole new level of monitoring becomes available that isn’t possible with the vSphere Standard Switch: industry-standard tools such as Port Mirroring and NetFlow, as well as our own VDS monitoring feature called Health Check. These powerful tools not only help you troubleshoot issues and monitor traffic, but also help ensure you don’t have issues to begin with.

Continue reading

App HA 1.1 Released – Now available for download

App HA overview

The latest version of App HA, 1.1, was released last week and is now available for download. This release has a number of cool new features that will greatly increase the usability of App HA. I will write additional posts on these in the next few weeks.

Continue reading

vSphere Distributed Switch – Backup and Restore

Continuing on with features found in the vSphere Distributed Switch, the Backup and Restore capability is a feature I rarely saw used when I was in the field. I saw, and still do see, customers going out of their way to make sure they can back up the vCenter database, and even more so SSO, but if you have to rebuild your vCenter or migrate to a new one and don’t have a backup of your Distributed Switch, you’re going to be in for a lot of work.

Continue reading

PVSCSI and Large IOs

Here’s a behavior that a few people have questioned me about recently:

Why is PVSCSI splitting my large guest operating system IOs into smaller blocks?

Continue reading