The hugely popular Performance Troubleshooting for VMware vSphere 4 guide is now updated for vSphere 4.1. This document provides step-by-step approaches for troubleshooting the most common performance problems in vSphere-based virtual environments. The steps discussed in the document use performance data and charts readily available in the vSphere Client and esxtop to aid the troubleshooting flows. Each performance troubleshooting flow has two parts:
- How to identify the problem using specific performance counters.
- Possible causes of the problem and solutions to solve it.
New sections added to the document cover:
- Troubleshooting performance problems in resource pools on standalone hosts and DRS clusters.
- Additional troubleshooting steps for environments experiencing memory pressure (hosts with compressed and swapped memory).
- High CPU ready time on hosts that are not CPU saturated.
- Environments sharing resources such as storage and network.
- Environments using snapshots.
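As one concrete example of the counter-based identification step, high CPU ready time can be spotted in esxtop batch-mode output (`esxtop -b`), which writes perfmon-style CSV. The sketch below is a minimal illustration, not part of the guide: the column names and sample data are invented for demonstration, and the 10% threshold is a commonly cited rule of thumb, not an official VMware limit.

```python
# Hedged sketch: scan esxtop batch-mode CSV for VMs whose average CPU
# ready time exceeds a threshold. SAMPLE and the counter column names
# are illustrative assumptions, not real esxtop output.
import csv
import io

SAMPLE = """\
"Time","\\\\host\\Group Cpu(123:vm-web)\\% Ready","\\\\host\\Group Cpu(124:vm-db)\\% Ready"
"01/01/2011 10:00:00","3.20","12.75"
"01/01/2011 10:00:05","2.90","14.10"
"""

READY_THRESHOLD = 10.0  # % ready; a common rule-of-thumb warning level

def flag_high_ready(csv_text, threshold=READY_THRESHOLD):
    """Return the '% Ready' columns whose average exceeds the threshold."""
    reader = csv.DictReader(io.StringIO(csv_text))
    totals, counts = {}, {}
    for row in reader:
        for col, value in row.items():
            if "% Ready" in col:
                totals[col] = totals.get(col, 0.0) + float(value)
                counts[col] = counts.get(col, 0) + 1
    return {col: totals[col] / counts[col]
            for col in totals if totals[col] / counts[col] > threshold}

print(flag_high_ready(SAMPLE))
```

In this synthetic sample only the vm-db group averages above the threshold, which is the kind of signal that would send you into the high-CPU-ready troubleshooting flow.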
The troubleshooting guide can be found here. Readers are encouraged to provide feedback and comments in the performance community site at this link.
VMmark 2.1 has been released and is available here. We had a list of improvements to VMmark 2.0 even as we finished up the initial release of the benchmark last fall. Most of the changes are intended to improve the usability, manageability, and scale-out capabilities of the benchmark. VMmark 2.0 has already generated tremendous interest from our partners and customers, and we expect VMmark 2.1 to add to that momentum.
Only the harness and vclient directories have been refreshed for VMware VMmark 2.1. The notable changes include the following:
- Uniform scaling of infrastructure operations as tile and cluster sizes increase. Previously, the dynamic storage relocation infrastructure workload was held at a single thread.
- Allowance for multiple Deploy templates as tile and cluster sizes increase.
- Addition of conditional support for clients running Windows Server 2008 Enterprise Edition 64-bit.
- Addition of support for virtual clients, provided all hardware and software requirements are met.
- Improved host-side reporter functionality.
- Improved environment time synchronization.
- Updates to several VMmark 2.0 tools to improve ease of setup and running.
- Miscellaneous improvements to configuration checking, error reporting, debug output, and user-specified options.
All currently published VMmark 2.0 results are comparable to VMmark 2.1. Beginning with the release of VMmark 2.1, any submission of benchmark results must use the VMmark 2.1 benchmark kit.
In other news, Fujitsu published their first VMmark 2.0 result last week.
Also, Intel has joined the VMmark Review Panel. Other members are AMD, Cisco, Dell, Fujitsu, HP, and VMware. Every result published on the VMmark results page is reviewed for correctness and compliance by the VMmark Review Panel. In most cases this means that a submitter's result will be examined by their competitors prior to publication, which enhances the credibility of the results.
That's all for now, but we should be back soon with more interesting experiments using VMmark 2.1.
Do you want to know how many VMware vCloud Director server instances are needed for your deployment? Do you know how to load balance the VC Listener across multiple vCloud Director instances? Are you curious about how OVF File Upload behaves in a WAN environment? What is the most efficient way to import LDAP users? This white paper, VMware vCloud Director 1.0 Performance and Best Practices, provides insight to help you answer all of the above questions.
In this paper, we discuss VMware vCloud Director 1.0 architecture, server instance sizing, LDAP sync, OVF file upload, vApp clones across vCenter Server instances, inventory sync, and adjusting thread pool and cache limits. The following performance tips are provided:
- Ensure the inventory cache size is big enough to hold all inventory objects.
- Ensure JVM heap size is big enough to satisfy the memory requirement for the inventory cache and memory burst so the vCloud Director server does not run out of memory.
- Import LDAP users by groups instead of importing individual users one by one.
- Ensure the system is not running LDAP sync too frequently; the vCloud database is already updated at regular intervals.
- In order to help load balance disk I/O, separate the storage location for OVF uploads from the location of the vCloud Director server logs.
- Have a central datastore to hold the most popular vApp templates and media files and have this datastore mounted to at least one ESX host per cluster.
- Be aware that the latency to deploy a vApp in fence mode has a static cost and does not increase proportionately with the number of VMs in the vApp.
- Deploy multiple vApps concurrently to achieve high throughput.
- For load balancing purposes, it is possible to move a VC Listener to another vCloud Director instance by reconnecting the vCenter Server through the vCloud Director user interface.
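The concurrency tip above — deploying multiple vApps in parallel to raise throughput — can be sketched in a few lines. This is an illustration only: `deploy_vapp` is a hypothetical stand-in for a real vCloud Director API call, and the simulated latency simply makes the serial-versus-concurrent difference visible.

```python
# Hedged sketch: serial vs. concurrent vApp deployment throughput.
# deploy_vapp is a hypothetical placeholder for a real vCloud API call;
# time.sleep stands in for the deploy round-trip latency.
import time
from concurrent.futures import ThreadPoolExecutor

def deploy_vapp(name, latency=0.2):
    time.sleep(latency)  # simulated per-vApp deploy latency
    return f"{name}: deployed"

vapps = [f"vapp-{i}" for i in range(8)]

start = time.time()
serial = [deploy_vapp(v) for v in vapps]          # one at a time
serial_time = time.time() - start

start = time.time()
with ThreadPoolExecutor(max_workers=8) as pool:
    concurrent = list(pool.map(deploy_vapp, vapps))  # all in flight at once
concurrent_time = time.time() - start

print(f"serial: {serial_time:.2f}s, concurrent: {concurrent_time:.2f}s")
```

With eight workers the concurrent pass finishes in roughly one deploy's latency rather than eight, which is the throughput effect the white paper's tip is after.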
Please read the white paper for more performance tips and further details. You can download the full white paper from here.