Home > Blogs > VMware vSphere Blog

Video: Virtual SAN From An Architect’s Perspective


Have you ever wanted a direct discussion with the people responsible for designing a product?

Recently, Stephen Foskett brought a cadre of technical bloggers to VMware as part of Storage Field Day 7 to discuss Virtual SAN in depth.  Christos Karamanolis (@XtosK), Principal Engineer and Chief Architect for our storage group, went deep on VSAN: why it was created, its architectural principles, and why the design decisions were important to customers.

The result is two hours of lively technical discussion — the next best thing to being there.  What works about this session is that the attendees are not shy — they keep peppering Christos with probing questions, which he handles admirably.

The first video segment is from Alberto Farronato, explaining the broader VMware storage strategy.

The second video segment features Christos going long and deep on the thinking behind VSAN.

The third video segment continues the second.  Christos presents the filesystem implementations, and the implications for snapshots and general performance.

Our big thanks to Stephen Foskett for making this event possible, and EMC for sponsoring our session.


Help us improve vSphere!

Are you a vSphere user? If so, we want to hear from you. Attached is our new survey. Help us build a better product and make sure our features are aligned with your business needs.



How To Double Your VSAN Performance


VSAN 6.0 is now generally available!

Among many significant improvements, performance has been dramatically improved for both hybrid and newer all-flash configurations.

VSAN is almost infinitely configurable: how many capacity devices, disk groups, cache devices, storage controllers, etc.  Which brings up the question: how do you get the maximum storage performance out of a VSAN-based cluster?

Our teams are busy running different performance characterizations, and the results are starting to surface.  The case for performance growth by simply expanding the number of storage-contributing hosts in your cluster has already been well established — performance linearly scales as more hosts are added to the cluster.

Here, we look at the impact of using two disk groups per host vs. the traditional single disk group.  Yes, additional hardware costs more — but what do you get in return?

As you’ll see, these results present a strong case that by simply doubling the number of disk-related resources (e.g. using two storage controllers, each with a caching device and some number of capacity devices), cluster-wide storage performance can be doubled — or more.

Note: just to be clear, two storage controllers are not required to create multiple disk groups with VSAN.  A single controller can support multiple disk groups.  But two controllers are what we tested in this experiment.

This is a particularly useful finding, as many people unfamiliar with VSAN mistakenly assume that performance might be limited by the host or network.  Not true — at least, based on these results.

For our first result, let’s establish a baseline of what we should expect with a single disk group per host, using a hybrid (mixed flash and disks) VSAN configuration.

Here, each host is running a single VM with IOmeter.  Each VM has 8 VMDKs, and 8 worker tasks driving IO to each VMDK.  The working set is adjusted to fit mostly in available cache, as per VMware recommendations.

More details: each host is using a single S3700 400GB cache device, and 4 10K SAS disk drives. Outstanding IOs (OIOs) are set to provide a reasonable balance between throughput and latency.


On the left, you can see the results of a 100% random read test using 4KB blocks.  As the cluster size increases from 4 to 64, performance scales linearly, as you’d expect.  Latency stays at a great ~2msec, yielding an average of 60k IOPS per host.  The cluster maxes out at a very substantial ~3.7 million IOPS.

When the mix shifts to random 70% read / 30% write (the classic OLTP mix), we still see linear scaling of IOPS performance, and a modest increase in latency from ~2.5msec to ~3msec.  VSAN is turning in a very respectable 15.5K IOPS per host.  The cluster maxes out very close to ~1M IOPS.
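The linear-scaling arithmetic is easy to sanity-check; here's a quick sketch using the per-host numbers quoted above (figures are illustrative, not a benchmark):

```shell
# Under linear scaling, cluster IOPS is roughly per-host IOPS times host count.
per_host_read=60000   # ~60K IOPS/host, 100% random 4KB reads
per_host_oltp=15500   # ~15.5K IOPS/host, 70/30 OLTP mix
hosts=64

echo "read: $(( per_host_read * hosts )) IOPS"   # 3840000, in line with the measured ~3.7M
echo "oltp: $(( per_host_oltp * hosts )) IOPS"   # 992000, in line with the measured ~1M
```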

Again, quite impressive.  Now let’s see what happens when more storage resources are added.

For this experiment, we added an additional controller, cache and set of capacity devices to each host.  And the resulting performance is doubled — or sometimes even greater!


Note that now we are seeing 116K IOPS per host for the 100% random read case, with a maximum cluster output of a stunning ~7.4 million IOPS.

For the OLTP-like 70% read / 30% write mix, we see a similar result: 31K IOPS per host, and a cluster-wide performance of ~2.2 million IOPS.

For all-flash configurations of VSAN, we see similar results, with one important exception: all-flash configurations are far less sensitive to the working set size.  They deliver predictable performance and latency almost regardless of what you throw at them.  Cache in all-flash VSAN is used to extend the life of write-sensitive capacity devices, and not as a performance booster as is the case with hybrid VSAN configurations.

In this final test, we look at an 8 node VSAN configuration, and progressively increase the working set size to well beyond available cache resources.  Note: these configurations use a storage IO controller for the capacity devices, and a PCI-e cache device which does not require a dedicated storage controller.

On the left, we can see the working set increasing from 100GB to 600GB, using our random 70% read / 30% write OLTP mix as before.

Note that IOPS and latency remain largely constant:  ~40K IOPS per node with ~2msec latency.  Pretty good, I’d say.

On the right, we add another disk group (with its own dedicated controller) to each node and instead vary the working set size from an initial 100GB to a more breathtaking 1.2TB.  Keep in mind that these very large working set sizes are essentially worst-case stress tests, not the sort of thing you’d see in a normal environment.


Initially, performance is as you’d expect: roughly double of the single disk group configuration (~87K IOPS per node, ~2msec latency).  But as the working set size increases (and, correspondingly, pressure on write cache), note that per-node performance declines to ~56K IOPS per node, and latency increases to ~2.4 msec.

What Does It All Mean?

VSAN was designed to be scalable depending on available hardware resources.  For even modest cluster sizes (4 or greater), VSAN delivers substantial levels of storage performance.

With these results, we can clearly see two axes to linear scalability — one as you add more hosts in your cluster, and the other as you add more disk groups in your cluster.

Still on the table (and not discussed here): things like faster caching devices, faster spinning disks, more spinning disks, larger caches, etc.

It’s also important to point out what is not a limiting factor here: compute, memory and network resources.  The bottleneck is just the IO subsystem, which consists of a storage IO controller, a cache device and one or more capacity devices.

The other implication is incredibly convenient scaling of performance as you grow — by either adding more hosts with storage to your cluster, or adding another set of disk groups to your existing hosts.

What I find interesting is that we really haven’t found the upper bounds of VSAN performance yet.  Consider, for example, a host may have as many as FIVE disk groups, vs the two presented here.   The mind boggles …

I look forward to sharing more performance results in the near future!


Chuck Hollis





vSphere 6: Updates to Host Profiles


With the announcement of vSphere 6 becoming Generally Available, I figure it’s a good time to shine some light on the updated features of Host Profiles. Host Profiles allow you to establish standard configurations for your ESXi hosts and to automate compliance with these configurations, simplifying operational management of large-scale environments and reducing errors caused by misconfigurations. In this release we’ve made several improvements which make updating and applying Host Profiles easier and less disruptive.

What’s New

Continue reading

Upgrading to VMware Virtual SAN 6.0

Virtual SAN 6.0 introduced new changes to the structural components of its architecture. One of those changes is a new on-disk format which delivers better performance and capability enhancements. One of those new capabilities allows vSphere Admins to perform in-place rolling upgrades from Virtual SAN 5.5 to Virtual SAN 6.0 without introducing any application downtime.

Upgrading an existing Virtual SAN 5.5 cluster to Virtual SAN 6.0 is performed in multiple phases, and it requires the reformatting of all of the magnetic disks being used in the Virtual SAN cluster. The upgrade is a one-time procedure performed from the RVC command-line utility with a single command.

Upgrade Phase I: vSphere Infrastructure Upgrade

In this phase of the upgrade, all components are upgraded to the vSphere 6.0 release. All vCenter Servers, ESXi hosts, and related infrastructure components need to be upgraded to their respective 6.0 software releases. Any of the vSphere-supported upgrade procedures for the individual components is supported.

  • Upgrade vCenter Server 5.5 to 6.0 first (Windows or Linux based)
  • Upgrade ESXi hosts from 5.5 to 6.0 (Interactive, Update Manager, Re-install, Scripted Updates, etc)
  • Use Maintenance Mode (Ensure accessibility – recommended for reduced upgrade times; Full data migration – not recommended unless necessary)

Upgrade Phase II: Virtual SAN 6.0 Disk Format Conversion (DFC)

This phase is where the previous on-disk format (VMFS-L) is replaced on all of the magnetic disk devices with the new on-disk format (VSAN FS). The disk format conversion procedure reformats the disk groups and upgrades all of the objects to the new version 2. Virtual SAN 6.0 provides support for both the previous on-disk format of Virtual SAN 5.5 (VMFS-L) and its new native on-disk format (VSAN FS).

While both on-disk formats are supported, it is highly recommended to upgrade the Virtual SAN cluster to the new on-disk format in order to take advantage of the performance improvements and new features. The disk format conversion is performed sequentially in a Virtual SAN cluster, one disk group per host at a time. The workflow illustrated below is repeated for all disk groups on each host before the process moves on to another host that is a member of the cluster.
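For reference, a sketch of what the one-command conversion looks like from RVC (vsan.v2_ondisk_upgrade is the command name in the 6.0-era RVC tooling as I recall it, and ~cluster is an example inventory path; verify both against your environment):

```
> vsan.v2_ondisk_upgrade ~cluster    # starts the rolling disk format conversion
> vsan.disks_stats ~cluster          # check per-disk status while disk groups convert
```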


Continue reading

Enhancing User Experience: Customization of vRealize Automation 6.2.x Email Notifications

User Experience (“UX”) focuses on the intimate understanding of your users. What is it that they need or desire, what do they value, what are their abilities, as well as their limitations?

As you embark upon the journey to the software-defined data center (SDDC), think and architect in terms of the user experience in addition to “boxes and arrows.”

  • What are the desired UX outcomes for those consuming the service(s)?
  • Have you considered the UX in terms of its usefulness, usability, desirability, accessibility, credibility, and its value?

In addition to fundamental tenant and business group designs, entitlements and service catalogue designs, one such area for UX consideration is the messages provided to those consuming services of the software-defined data center.

For a moment, imagine you are providing automated infrastructure delivery to multiple business segments of a large media and entertainment organization, each with their own distinct brand. The segments are built upon their individual brand and identity.

  • Do you centrally brand the service that you offer or do you tailor the service to each tenant business segment?
  • How would this change if instead the services were used to provide automated infrastructure delivery only to your IT Operations team and not direct end users?

The messages that appear in the inbox of the user are part of the experience. VMware vRealize Automation can send automatic notifications for several types of events, such as the successful completion of a catalog request or a required approval workflow.  System Administrators can configure global email servers, senders and recipients that process email notifications.

Tenant Administrators can override those defaults, or add their own servers, senders and recipients if no global attributes are specified. They may even select which events will cause notifications to be sent to their users. Each component, such as the service catalog or infrastructure-as-a-service, can define events that can trigger notifications.


Additionally, each user can choose if they wish to receive notifications. Users either receive all notifications configured by the Tenant Administrator or no notifications.

Notifications may also contain links that allow the user to respond interactively. For example, a notification about a request that requires approval can have one link for approving the request and one for rejecting it. When a user clicks one of the links, a new email opens with automatically generated content. The user can send the email to complete the approval.

Messages can be easily and beautifully customized using a simple, powerful template engine. These may be customized per-locale, per-tenant, and per-notification scenario. You have the ability to define and craft the desired user experience for any notification.

Getting Started with Message Templates

vRealize Automation uses a folder structure to determine the appropriate template to use based on the context of the tenant. Template files are written in Apache Velocity, an easy-to-use Java-based template engine. Learn more about the simple directives used in the .vm template files in the Apache Velocity User Guide.

Deploy the Sample Templates

To use message templates in vRealize Automation, you must first obtain the sample templates, customize those templates that you need, and save the templates in the file system of your vRealize Automation appliance(s).

These templates are provided in VMware KB 2088805. Download and copy the vrealize_automation.tar.gz to the root ("/") of the vRealize Automation appliance file system.

  • From Windows, SCP the file to the vRealize Automation appliance with a utility like WinSCP.
  • From Mac or Linux, open a terminal and run the following command.
    scp vcac.tar.gz root@<vRA-VA-FQDN>:/

From either Windows, Mac or Linux, SSH to the vRealize Automation appliance(s) and run the following commands to extract the contents to the root of the file system and set the appropriate permissions on the templates.

ssh root@<vRA-VA-FQDN>
cd /
tar -xvzf vcac.tar.gz
find /vcac -type d -exec chmod o+rx {} \;
find /vcac -type f -exec chmod o+r {} \;

Restart the VMware vRealize Automation appliance services by running this command:

 service vcac-server restart

Before we start crafting the custom messages for the user experience, let’s examine the structure and contents of the loaded sample templates.

There are three main folders in the sample templates:

  • /vcac/templates/email/html/core/:
    Contains core templates for messages.
  • /vcac/templates/email/html/forms/:
    Contains templates for form layouts.
  • /vcac/templates/email/html/extensions/:
    Contains templates used to display fields in IaaS forms.

Below is the file structure:
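A sketch of that structure, reconstructed from the folders described in this post (your extracted KB bundle may differ slightly):

```
/vcac/templates/email/html/
├── core/
│   ├── defaults/          (main.vm, styles.vm, header.vm, links.vm, footer.vm)
│   └── tenants/           (per-tenant overrides, created by you)
├── extensions/
│   └── defaults/
└── forms/
    └── defaults/
```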


In the core, extensions, and forms subfolders, there is a folder called defaults. The templates (.vm files) in this folder are used by vRealize Automation when tenant-specific templates are not specified.

Understanding Contents of /core/defaults/

Inside the /vcac/templates/email/html/core/defaults/ folder there are five template files that define the default message structure.

  • main.vm: Required.
    This is the main message and is defined by including / parsing additional template files.
  • styles.vm: Optional.
    Included in the samples to define the CSS <style> element consumed by main.vm.
  • header.vm: Optional.
    Included in the samples to define the HTML <header> element consumed by main.vm.
  • links.vm: Optional.
    Included in the samples to define a set of URLs presented in main.vm.
  • footer.vm: Optional.
    Included in the samples to define a footer message presented in main.vm.

As a reminder, these files are built with the Apache Velocity template engine. Learn about the available directives in the Apache Velocity User Guide.

Let’s take a look at the structure of main.vm.

In the main.vm you can see the basic standard building blocks of an HTML document similar to:
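As a sketch (reconstructed from the elements described in this post; the actual sample file differs in detail):

```
<html>
  <head>
    ## shared fragments are pulled in with Velocity #parse directives
    #parse( 'core/styles.vm' )
  </head>
  <body>
    #parse( 'core/header.vm' )
    $body                      ## scenario-specific content from forms/
    #parse( 'core/links.vm' )
    #parse( 'core/footer.vm' )
  </body>
</html>
```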


Inside the code blocks, Apache Velocity #parse directives pull in additional .vm template files, for example:

     #parse( 'core/styles.vm' )

Pretty simple, right? Plus, there is tons of room to get creative, as you will see later in this post.

If you can follow standard HTML and basic Apache Velocity directives you are already well on your way. When you create your own .vm files and place them into the defaults folder remember to include the core/ before the template name in the #parse directive.

You will also notice there is a call within the <body> element for $body. This directive pulls in content for the message based on the notification scenario. Content layouts for the $body element are provided in the /vcac/templates/email/html/forms/ directory.

Now, let’s move forward and explain how to provide per-tenant, per-locale and scenario based directory structures.

Customizing Per-tenant Templates

As mentioned earlier, tenant-specific templates can be loaded in the tenants/<tenantName> folder, parallel to the defaults folder.

For example, if your tenant was named “CloudOperations” you would add a /CloudOperations/ folder under the tenants folder.


When a new folder/file is added for customization, you must ensure it has the right permissions by executing these commands on the vRealize Automation appliance(s).

find /vcac -type d -exec chmod o+rx {} \;

find /vcac -type f -exec chmod o+r {} \;

Wait up to 120 seconds for new customizations to be reloaded and reflected in messages.

Customizing Per-locale Templates

Locale specific templates can be specified in defaults or tenants/<tenantName> folders. These folders can contain further sub-folders for locale specific templates.

When searching for a locale-specific template, vRealize Automation searches by country, then language, and then the defaults. For example:

1. /vcac/templates/email/html/core/defaults/fr/CA/<template>.vm
2. /vcac/templates/email/html/core/defaults/fr/<template>.vm
3. /vcac/templates/email/html/core/defaults/<template>.vm

When searching for a tenant-specific template, vRealize Automation searches through the tenants folders before searching the defaults folder. A search through these tenant-oriented folders simultaneously inspects for locale. When no tenant information is defined, the search is confined to the defaults folder alone.

For example, when searching for a <template.vm> in the locale fr_CA (French Canada) under the CloudOperations tenant the following sequence of paths would be checked in this order:

  1. /vcac/templates/email/html/core/tenants/CloudOperations/fr/CA/<template>.vm
  2. /vcac/templates/email/html/core/tenants/CloudOperations/fr/<template>.vm
  3. /vcac/templates/email/html/core/tenants/CloudOperations/<template>.vm
  4. /vcac/templates/email/html/core/defaults/fr/CA/<template>.vm
  5. /vcac/templates/email/html/core/defaults/fr/<template>.vm
  6. /vcac/templates/email/html/core/defaults/<template>.vm
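The lookup order can be sketched as a small shell helper (this function and its names are illustrative only, not anything vRealize Automation itself exposes):

```shell
#!/bin/sh
# Illustrative resolver for the documented search order: most specific
# tenant+locale path first, falling back to the defaults.
resolve_template() {
  base=$1; tenant=$2; lang=$3; country=$4; name=$5
  for p in \
    "$base/tenants/$tenant/$lang/$country/$name" \
    "$base/tenants/$tenant/$lang/$name" \
    "$base/tenants/$tenant/$name" \
    "$base/defaults/$lang/$country/$name" \
    "$base/defaults/$lang/$name" \
    "$base/defaults/$name"; do
    if [ -f "$p" ]; then echo "$p"; return 0; fi
  done
  return 1
}

# Example: only a defaults template exists, so an fr_CA lookup for the
# CloudOperations tenant falls all the way through to the defaults copy.
base=$(mktemp -d)
mkdir -p "$base/defaults"
touch "$base/defaults/main.vm"
resolve_template "$base" CloudOperations fr CA main.vm   # prints the defaults path
```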

Customizing Scenario Based Templates 

vRealize Automation 6.1 allows a scenario ID to be used to customize template content.

For example, in a .vm template you could specify a scenario such as:

#if ($scenario == "csp.catalog.notifications.resource.activated")
     <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit.</p>
#end

In vRealize Automation 6.2 we introduced a new, scalable, and recommended approach to scenario based notifications.

Note: 6.2 still supports the 6.1 customization method, but it is not recommended. ​

To customize a template file for a specific scenario, create a file named in the following format:

<template name>.vm-<scenario ID>
To create a separate main.vm for the above Resource Activated scenario, create a template file with this name:

main.vm-csp.catalog.notifications.resource.activated
A list of scenarios can be found in KB 2088805 or in Administration > Notifications > Scenarios when logged in to vRealize Automation as a Tenant Administrator.

Add Custom Properties

Only available in vRealize Automation 6.2 and later, custom properties that are part of the request form can be added in the email template in this format:


Customizing Scenario Subjects

Only available in vRealize Automation 6.2 and later, the subject line can be customized per notification scenario by creating a template with this naming convention:

subject.vm-<scenario ID>
To define a subject line for the Resource Activation scenario, create a template as:

subject.vm-csp.catalog.notifications.resource.activated
The content of this file must be only one line of text:

[One Cloud] Your Resource Has Been Activated

Putting It All Together

Now you know how to:

  • Load and set the default templates
  • Navigate the template directory structure and content
  • Use the default template structure and directives
  • Customize per-tenant directory structures
  • Customize per-locale directory structures
  • Customize scenario-based templates
  • Customize scenario-based subjects

Let’s take this to the next level with an example, and create a custom template for a tenant based on what you now know.

Tenant:      CloudOperations
Scenario:    Resource Activated
Locale:      Default
Template:    Custom Design
Subject:     Custom Subject

 Let’s get started….

  1. Obtain the sample templates from KB 2088805.
  2. Open a terminal and run the following command.
    scp vcac.tar.gz root@<vRA-VA-FQDN>:/
  3. SSH to the vRealize Automation appliance and run the following commands to extract the contents to the root of the file system and set the appropriate permissions on the sample templates.
    ssh root@<vRA-VA-FQDN>
    cd /
    tar -xvzf vcac.tar.gz
    find /vcac -type d -exec chmod o+rx {} \;
    find /vcac -type f -exec chmod o+r {} \;
  4. Restart the VMware vRealize Automation appliance services by running this command:
    service vcac-server restart
  5. Create a custom folder for the CloudOperations tenant in the directory structure:
    /vcac/templates/email/html/core/tenants/CloudOperations/ <-- My Tenant!
    /vcac/templates/email/html/extensions/defaults/ <-- Will use defaults.
    /vcac/templates/email/html/forms/defaults/ <-- Will use defaults.
  6. Under /vcac/templates/email/html/core/tenants/CloudOperations/, create message and subject templates for the scenario csp.catalog.notifications.resource.activated, to be called when a newly requested resource is activated. First, create the file subject.vm-csp.catalog.notifications.resource.activated and edit its first line as seen below.
  7. Next, under /vcac/templates/email/html/core/tenants/CloudOperations/ create the file main.vm-csp.catalog.notifications.resource.activated. If you choose to use custom images, fonts, etc., place those resources on an easily accessible web server that users can reach when the message renders.
  8. In your template, you may want to set some variables to call within the template, such as:

##     --------------------------------
##      Set variables...
##     -------------------------------- 

#set( $orgName = "VMware, Inc." )
#set( $orgStaff = "Cloud Operations" )
#set( $orgDate = "2015" )
#set( $orgSignOff = "Party on," )
#set( $orgPoweredBy = "Powered by VMware vRealize Automation and energy drinks." )
#set( $orgURL = "http://demo.vmware.com/" )
#set( $orgImages = "${orgURL}images/" )
#set( $orgFonts = "${orgURL}fonts/" )
#set( $orgLogo = "${orgImages}logo.png" )

You can call these variables within your template at any time, like so:

##     --------------------------------
##      Start the content close...
##     --------------------------------

Copyright &copy; $orgDate $orgName 

##     -------------------------------
##      End the content close...
##     --------------------------------

Now, I know what you’re thinking: can I set these as global variables in a single file, simply parse that file, and call the variables when I need them? Unfortunately, not at this time, but I’m looking into a solution.

What you end up with looks similar to the following.




Once again, remember that when a new folder/file is added for customization, you must ensure it has the right permissions by executing these commands on the vRealize Automation appliance(s).

find /vcac -type d -exec chmod o+rx {} \;

find /vcac -type f -exec chmod o+r {} \;

Again, wait up to 120 seconds for new customizations to be reloaded and reflected in your messages.

Now, let’s put it to work and see what happens when a user requests a new resource from the CloudOperations tenant.


Voila! Isn’t that so much better?

Get creative! Define the user experience for messages from vRealize Automation in your software-defined data center and have fun while doing it.





VMware Virtual SAN 6.0 Now Generally Available

Virtual SAN 6 – you heard about it in February… thousands of you read about it in Rawlinson’s blog post… and today you can get your hands on it. Virtual SAN 6 is now generally available (GA) – download your evaluation version today! For those of you who missed Rawlinson’s blog post describing the details around what’s new with Virtual SAN 6 – you can read it here (below). But there’s a better way to get the information – register and attend this week’s webinar on “What’s New with Virtual SAN 6” hosted by Rawlinson Rivera – we’ll see you there!


It is with great pleasure and joy that I announce the official launch of VMware Virtual SAN 6.0, one of VMware’s most innovative software-defined storage products and the best hypervisor-converged storage platform for virtual machines. Virtual SAN 6.0 delivers a wide variety of enhancements and new features, as well as performance and scalability improvements.

Virtual SAN 6.0 introduces support for an all-flash architecture specially designed to provide virtualized applications high performance with predictably low latencies. Now with support for both hybrid and all-flash architectures Virtual SAN 6.0 is ready to meet the performance demands of just about any virtualized application by delivering consistent performance with sub-millisecond latencies.

Hybrid Architecture

  • In the hybrid architecture, server-attached magnetic disks are pooled to create a distributed shared datastore that persists the data. In this type of architecture, you can get up to 40K IOPS per server host.

All-Flash Architecture

In the all-flash architecture, the flash-based caching tier is used intelligently as a write buffer only, while another set of flash devices forms the persistence tier that stores data. Since this architecture utilizes only flash devices, it delivers extremely high IOPS of up to 90K per host, with predictable low latencies.


Virtual SAN 6.0 delivers true enterprise-level scale and performance, doubling the scalability of Virtual SAN 5.5 by scaling up to 64 nodes per cluster for both hybrid and all-flash configurations. In addition, Virtual SAN 6.0 increases the number of virtual machines per host up to 200 for both supported architectures.


The performance enhancements delivered in Virtual SAN 6.0 are partially due to the new Virtual SAN on-disk filesystem (VSAN FS). The new version delivers a new VMDK delta file (vsanSparse) that takes advantage of the new on-disk format’s writing and extended caching capabilities for efficient performance. The result is snapshots and clones whose performance is comparable to SAN snapshots.

Virtual SAN 6.0 now enables intelligent placement of virtual machine objects across server racks for enhanced application availability even in case of complete rack failures. Virtual SAN Fault Domains provide the ability to group multiple hosts within a cluster to define failure domains, ensuring that replicas of virtual machine data are spread across the defined failure domains (racks).


Along with the newly added features, a significant number of improvements enhance the management user experience:

  • Disk/Disk Group Evacuation – introduces the ability to evacuate data from individual disks/disk groups before removing them from Virtual SAN.
  • Disk Serviceability – easily map the location of magnetic disks and flash devices, light the disk LED on failure, and turn disk LEDs on/off from the vSphere Web Client.
  • Storage Consumption Models – adds functionality to visualize Virtual SAN 6.0 datastore resource utilization when a VM Storage Policy is created or edited.
  • UI Resynchronization Dashboard – the vSphere Web Client UI displays virtual machine object resynchronization status and the remaining bytes to sync.
  • Proactive Rebalance – provides the ability to manually trigger a rebalance operation to utilize newly added cluster storage capacity.
  • Health Services – delivers troubleshooting and health reports to vSphere Administrators about Virtual SAN 6.0 subsystems and their dependencies, such as cluster, network, data, limits, and physical disks.


With all the major enhancements and features of this release, Virtual SAN is now enterprise-ready, and customers can use it for a broad range of use cases, including business-critical and tier-1 production applications.  Stay tuned, there is a lot more to come from the world’s greatest software-defined storage platform. For more information visit the Virtual SAN product page.

- Enjoy

For future updates on Virtual SAN (VSAN), vSphere Virtual Volumes (VVOLs) and other Software-defined Storage technologies as well as vSphere + OpenStack be sure to follow me on Twitter: @PunchingClouds

Be sure to subscribe to the Virtual SAN blog or follow our social channels at @vmwarevsan and Facebook.com/vmwarevsan for the latest updates.

For more information about VMware Virtual SAN, visit http://www.vmware.com/products/virtual-san.





vSphere 6.0 Lockdown Modes


Lockdown mode has been around in various forms for many releases. The behaviors have changed a few times since 5.1, with varying levels of usability success. For vSphere 6.0 we are addressing some of these issues. Personally, what I’d love to see with all customers running 6.0 is that you run, at a minimum, the “Normal” lockdown mode.

Continue reading

Downloading a VMware Suite with the Push of a Button using VMware Software Manager

If you’re looking for an easy and simple way to download all of the products and features of a VMware Suite, VMware Software Manager dramatically simplifies the download process.  VMware Software Manager is a free product that:

  • Provides an easy to use interface to find, select & download the content needed to install or upgrade a VMware suite
  • Verifies the suite or product was downloaded without corruption
  • Automatically detects the release of new VMware suites, products and versions and displays them for download


The following VMware suites are available for download using VMware Software Manager:

  • VMware vCloud Suite® 6.0, 5.8, and 5.5
  • VMware vSphere® with Operations Management™ 6.0 and 5.5
  • VMware vSphere® 6.0, 5.5, and 5.1


Additional VMware suites and suite versions will be added in the future and will dynamically show up in VMware Software Manager (for you to download).

To download VMware Software Manager, visit the product information page -


Bob Perugini, Sr. Product Manager, SDDC Install & Update, VMware

VMware Announces General Availability of vSphere 6

Today, we are excited to announce the general availability of VMware vSphere 6 along with a slew of other Software-Defined Data Center (SDDC) products including VMware Integrated OpenStack, VMware Virtual SAN 6, VMware vSphere Virtual Volumes, VMware vCloud Suite 6, and VMware vSphere with Operations Management 6.

vSphere 6 is the latest release of the industry-leading virtualization platform and serves as the foundation of the SDDC. This is the largest ever release of vSphere and is the first major release of the flagship product in over three years.  vSphere 6 is jam-packed with features and innovations that enable users to virtualize any application, including both scale-up and scale-out applications, with confidence. New capabilities include increased scale and performance, breakthrough industry-first availability, storage efficiencies for virtual machines, and simplified management at scale. For more details on the blockbuster features please refer to the vSphere 6 announcement.

If you are interested in learning more about vSphere 6, there are several options: