
Tag Archives: vSphere

App Volumes: Storage Migration for AppStacks

By Jeremy Wheeler

Inside App Volumes you can accomplish a storage migration between different SANs using the ‘Storage Groups’ feature, provided you have storage that is shared between your App Volumes Manager instances. If you don’t, I recommend creating a temporary LUN/volume to accomplish this migration. If you are performing a migration on a large scale, such as across two or more App Volumes Manager instances, you will need to perform steps one through eight on each App Volumes Manager instance.

Conceptual architecture:

JWheeler AppVolumes Manager Conceptual Architecture

 

To achieve a successful migration you will need a LUN/volume that is shared between the source and destination environments. This can be an NFS or iSCSI datastore and will only be used temporarily to complete this process.

JWheeler AppVolumes Migration Setup

Stage 1: Migration Startup

  1. Select ‘Infrastructure’
  2. Select ‘Storage Groups’
  3. Give your storage group a name (my example: migration_temp)
  4. Check ‘Automatically Replicate AppStacks’ and leave ‘Automatically Import AppStacks’ unchecked. If you check the ‘Import AppStacks’ checkbox, you will need to do a lot of cleanup if you were using a temporary LUN to do this migration.
  5. Select ‘spread’ for your distribution strategy.
  6. Select your preferred template storage.
  7. Select ‘direct’ for storage selection.
  8. Select the checkbox of your local shared storage. This field will represent where you currently have the AppStacks you want migrated.
  9. Select the checkbox of your temporary LUN. The temporary LUN is assumed to be empty, or at least not to contain the AppStacks you want to migrate.
  10. Select ‘Create’ 

Once your storage group is created, replication will begin immediately; it might take a while depending on how many AppStacks need to be distributed within the storage group.

Stage 2: Cleanup

  1. After all AppStacks have been evenly distributed in the storage group, you can simply delete the storage group. This will not delete any AppStacks – it simply disassociates the logical bucket of resources. Both the source LUN and temporary LUN will still have the AppStacks.

Load the VMware vSphere® client and move any AppStacks from the temporary LUN to the permanent shared storage LUN, and then to the View Block.

JWheeler AppVolumes View Block

I want to dig further into the process of moving AppStacks off the temporary LUN. App Volumes creates pointers to all AppStacks residing on storage. In our example (shown above), this means that when we replicate an AppStack between two locations, the inventory object in App Volumes Manager treats every one of those locations as a home for that AppStack. It also means that if you decide to delete an AppStack from inventory, ALL pointer locations will be deleted with it. So, if you need to clean up the App Volumes Manager inventory in your source environment, you will need to copy, move, or detach the temporary LUN you created prior to deletion. The process for doing that is explained here.

JWheeler AppVolumes

a)      Move AppStacks from cloudvolumes/apps/* to a temporary folder /cloudvolumes/apps/tmp/* using the vSphere C# client, GUI, or vSphere command-line.
b)      Delete AppStacks from Source inventory.
c)      Move AppStacks from cloudvolumes/apps/tmp/* to a permanent shared storage in the target environment folder /cloudvolumes/apps/* using the vSphere C# client, GUI, or vSphere command-line.
d)      Select ‘Import AppStacks’ in App Volume Manager under Volumes > AppStacks.
e)      Select the LUN you moved all the AppStacks into (step c).
f)       Set the root path of where the AppStacks will live and select ‘Import.’

You can also use ‘vmkfstools’ if you have shell access to a host that can see the shared storage. This process is a lot more manual compared to using App Volumes Storage Groups, but you can still accomplish the migration using this method.

Execute the following syntax:

vmkfstools -i </source/location> </dest/location> 

This copies the VMDK from source to target. Note that AppStack VMDKs are thin provisioned by default, while vmkfstools clones to the default disk format unless you specify one, so you may want to pass ‘-d thin’ explicitly (see the example below).
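
For example, here is a minimal sketch of copying a single AppStack between datastores from the ESXi shell; the datastore and file names are hypothetical placeholders:

# Copy one AppStack VMDK (descriptor plus data) to the temporary shared datastore, keeping it thin
vmkfstools -i /vmfs/volumes/source_datastore/cloudvolumes/apps/AppStack01.vmdk \
           /vmfs/volumes/temp_datastore/cloudvolumes/apps/AppStack01.vmdk -d thin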

Once you have copied the AppStacks you will need to ‘Import AppStacks’ from the App Volume Manager
(Volumes –> AppStacks –> Import AppStacks).

Reference this Knowledge Base for additional information when using the vmkfstools command:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1028042



Jeremy Wheeler is an experienced senior consultant and architect for VMware’s Professional Services Organization, End-User Computing, specializing in the VMware Horizon Suite product line and vRealize products such as vROps and Log Insight. Jeremy has over 18 years of experience in the IT industry. In addition to his past experience, Jeremy has a passion for technology and thrives on educating customers. Jeremy has 7 years of hands-on virtualization experience deploying full life-cycle solutions using VMware, Citrix, and Hyper-V. Jeremy also has 16 years of experience in computer programming in various languages, ranging from basic scripting to C, C++, Perl, .NET, SQL, and PowerShell.

Jeremy Wheeler has received acclaim from several clients for his in-depth and varied technical experience and exceptional hands-on customer satisfaction skills. In February 2013, Jeremy received VMware’s Spotlight award for his outstanding persistence and dedication to customers, and he was nominated again in October 2013.

Virtual SAN: An Ideal Choice for Management Storage

By Martin Hosken

Separating management components onto a dedicated vSphere cluster has become common practice in recent years. Such a cluster should have no dependencies on the production systems: a key requirement is that any incident or outage affecting the production systems does not affect the management cluster, and likewise that a management cluster outage cannot affect the production workload systems.

In the past, providing dedicated out-of-band storage for management-only clusters could be cost-prohibitive. It required purchasing additional, independent storage hardware capable of delivering the performance and availability needed for I/O-intensive, highly available virtual machines, as well as an expensive isolated storage fabric to connect to the management-only array. VMware Virtual SAN provides a highly performant shared storage platform across the vSphere cluster, making it possible to significantly reduce costs whilst maintaining enterprise levels of availability and performance.

Martin Hosken Whitepaper

This makes Virtual SAN the ideal choice to provide true out-of-band management storage to a dedicated management cluster, making this type of truly independent management environment a viable and affordable option for most medium and large organizations.

This white paper provides a detailed rationale and design for utilizing Virtual SAN in a dedicated management environment.


 

Martin Hosken is a Global Cloud Architect, VCDX, and vExpert 2015 in the Global Cloud Practice – vCloud Air Network.

vSphere Datacenter Design – vCenter Architecture Changes in vSphere 6.0 – Part 1

By Jonathan McDonald

As a member of VMware Global Technology and Professional Services, I get the privilege of working with products prior to their release. This not only gets me familiar with new changes, but also allows me to question, and figure out, how the new product will change the architecture of a datacenter.

Recently, I have been working on exactly that with vCenter 6.0 because of all the upcoming changes in the new release. One of my favorite things about vSphere 6.0 is the simplification of vCenter and associated services. Previously, each major service (vCenter, Single Sign-On, Inventory Service, the vSphere Web Client, Auto Deploy, etc.) was installed separately. This added complexity and uncertainty in determining the best way to architect the environment.

With the release of vSphere 6.0, vCenter Server installation and configuration has been dramatically simplified. The installation of vCenter now consists of only two components that provide all services for the virtual datacenter:

  • Platform Services Controller – This provides infrastructure services for the datacenter. The Platform Services Controller contains these services:
    • vCenter Single Sign-On
    • License Service
    • Lookup Service
    • VMware Directory Service
    • VMware Certificate Authority
  • vCenter Services – The vCenter Server group of services provides the remainder of the vCenter Server functionality, which includes:
    • vCenter Server
    • vSphere Web Client
    • vCenter Inventory Service
    • vSphere Auto Deploy
    • vSphere ESXi Dump Collector
    • vSphere Syslog Collector (Microsoft Windows)/VMware Syslog Service (Appliance)

So, when deploying vSphere 6.0 you need to understand the implications of these changes to properly architect the environment, whether it is a fresh installation, or an upgrade. This is a dramatic change from previous releases, and one that is going to be a source of many discussions.

To help prevent confusion, my colleagues in VMware Global Support, VMware Engineering, and I have developed guidance on supported architectures and deployment modes. This two-part blog series will discuss how to properly architect and deploy vCenter 6.0.

vCenter Deployment Modes

There are two basic architectures that can be used when deploying vSphere 6.0:

  • vCenter Server with an Embedded Platform Services Controller – This mode installs all services on the same virtual machine or physical server as vCenter Server. The configuration looks like this:

JMcDonald 1

This is ideal for small environments, or if simplicity and reduced resource utilization are key factors for the environment.

  • vCenter Server with an External Platform Services Controller – This mode installs the platform services on a system that is separate from where vCenter services are installed. Installing the platform services is a prerequisite for installing vCenter. The configuration looks as follows:

JMcDonald 2

 

This is ideal for larger environments, where there are multiple vCenter servers, but you want a single pane-of-glass for the site.

Choosing your architecture is critical, because once the model is chosen, it is difficult to change, and configuration limits could inhibit the scalability of the environment.

Enhanced Linked Mode

As a result of these architectural changes, Platform Services Controllers can be linked together. This enables a single pane-of-glass view of any vCenter server that has been configured to use the Platform Services Controller domain. This feature is called Enhanced Linked Mode and is a replacement for Linked Mode, which was a construct that could only be used with vCenter for Windows. The recommended configuration when using Enhanced Linked Mode is to use an external platform services controller.

Note: Although using embedded Platform Services Controllers and enabling Enhanced Linked Mode can technically be done, it is not a recommended configuration. See List of Recommended topologies for vSphere 6.0 (2108548) for further details.

The following are some recommended options for how to configure Enhanced Linked Mode, and how not to.

  • Enhanced Linked Mode with an External Platform Services Controller with No High Availability (Recommended)

In this case the Platform Services Controller is configured on a separate virtual machine, and then the vCenter servers are joined to that domain, providing the Enhanced Linked Mode functionality. The configuration would look this way:

JMcDonald 3

 

There are benefits and drawbacks to this approach. The benefits include:

  • Fewer resources consumed by the combined services
  • More vCenter instances are allowed
  • Single pane-of-glass management of the environment

The drawbacks include:

  • Network connectivity loss between vCenter and the Platform Service Controller can cause outages of services
  • More Windows licenses are required (if on a Windows Server)
  • More virtual machines to manage
  • Outage on the Platform Services Controller will cause an outage for all vCenter servers connected to it. High availability is not included in this design.

  • Enhanced Linked Mode with an External Platform Services Controller with High Availability (Recommended)

In this case the Platform Services Controllers are configured on separate virtual machines and configured behind a load balancer; this provides high availability to the configuration. The vCenter servers are then joined to that domain using the shared Load Balancer IP address, which provides the Enhanced Linked Mode functionality, but is resilient to failures. This configuration looks like the following:

JMcDonald 4

There are benefits and drawbacks to this approach. The benefits include:

  • Fewer resources are consumed by the combined services
  • More vCenter instances are allowed
  • The Platform Services Controller configuration is highly available

The drawbacks include:

  • More Windows licenses are required (if on a Windows Server)
  • More virtual machines to manage

  • Enhanced Linked Mode with Embedded Platform Services Controllers (Not Recommended)

In this case vCenter is installed as an embedded configuration on the first server. Subsequent installations are configured in embedded mode, but joined to an existing Single Sign-On domain.

Linking embedded Platform Services Controllers is possible, but is not a recommended configuration. It is preferred to have an external configuration for the Platform Services Controller.

The configuration looks like this:

JMcDonald 5

 

  • Combination Deployments (Not Recommended)

In this case there is a combination of embedded and external Platform Services Controller architectures.

Linking an embedded Platform Services Controller and an external Platform Services Controller is possible, but again, this is not a recommended configuration. It is preferred to have an external configuration for the Platform Services Controller.

Here is an example of one such scenario:

JMcDonald 6

  • Enhanced Linked Mode Using Only an Embedded Platform Services Controller (Not Recommended)

In this case there is an embedded Platform Services Controller with vCenter Server linked to an external standalone vCenter Server.

Linking a second vCenter Server to an existing embedded vCenter Server and Platform Services Controller is possible, but this is not a recommended configuration. It is preferred to have an external configuration for the Platform Services Controller.

Here is an example of this scenario:

JMcDonald 7

 

Stay tuned for Part 2 of this blog post where we will discuss the different platforms for vCenter, high availability and different deployment recommendations.


Jonathan McDonald is a Technical Solutions Architect for the Professional Services Engineering team. He currently specializes in developing architecture designs for core virtualization and software-defined storage, as well as providing best practices for upgrades and health checks for vSphere environments.

Overcoming Design Challenges with an Enterprise-wide Syslog Solution

By Martin Hosken

I’ve spent a lot of time helping my customers build a proper foundation for a successful implementation of vRealize Log Insight, and I’ve published a white paper that highlights key design challenges and how to overcome them. I’d like to share a brief overview with you here.

VMware vRealize Log Insight gives administrators the ability to consolidate logs, monitor and troubleshoot vSphere, and perform security auditing and compliance testing.

This white paper addresses the design challenges and key design decisions that arise when architecting an enterprise-wide syslog solution with vRealize Log Insight. It focuses on the design aspects of syslog in a vSphere environment and provides sample reference architectures to aid your design work and provide ideas about strategies for your own projects.

With every ESXi host in the data center generating approximately 250 MB of log file data a day (roughly 25 GB a day for a 100-host environment), the need to centrally manage this data for proactive health monitoring, troubleshooting issues, and performing security audits is something that many organizations continue to face every day.

mhosken 1
Note: A symlink is a type of file that contains a reference to another file in the form of an absolute or relative path.

VMware vRealize Log Insight is a scalable and secure solution that includes a syslog server, a log consolidation tool, and a log analysis tool that work with any device capable of sending syslog data, not just the vSphere infrastructure.
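
As a concrete illustration (not from the white paper), pointing an ESXi host at a central syslog target is a small configuration change; the Log Insight hostname and port below are placeholders:

# Forward ESXi logs to a central syslog/Log Insight instance (address is hypothetical)
esxcli system syslog config set --loghost='udp://loginsight.example.com:514'
esxcli system syslog reload
# Make sure the ESXi firewall allows outbound syslog traffic
esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true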

As with any successful implementation project, planning and designing a solution that meets all the requirements set out by the business is key to success, and developing a design that is scalable, resilient, and secure is fundamental to achieving this. This includes keeping in mind the requirements of your business leaders, system administrators, and security auditors.

To read the entire whitepaper, click HERE.


Martin Hosken is a Senior Consultant, VMware Professional Services EMEA

Running Microsoft SharePoint FAST Search on vSphere

By Girish Manmadkar

I recently worked with an enterprise customer to resolve end user reports of performance issues related to Microsoft SharePoint 2010 and FAST Search deployed on vSphere 5.1. The end users were reporting problems with initial page response and file upload and download. The customer requested architecture guidance, including a performance health check across the entire infrastructure stack. The result of this engagement is the following architectural guidance, designed to help customers with similar deployments achieve maximum performance for Microsoft FAST Search on the VMware platform.

Specifics
The customer deployed the SharePoint FAST Search Farm with the following key components:

Software Resources

  • VMware vSphere 5.1 Update 2
  • Windows 2008 R2
  • SharePoint 2010
  • Microsoft SQL Server 2008 protected with MSCS in a three-node cluster

Hardware (Virtual) Resources

Role                              RAM (GB)   Local Disk             vCPU   NIC   Total VMs   Total vCPU   Total Mem (GB)
SQL 2012 Cluster Nodes A, B & C   32         C: 80 GB, E: 100 GB    4      2     3           12           96
Web Front End Server              8          C: 80 GB, E: 50 GB     2      2     5           10           40
Application Server                16         C: 80 GB, E: 50 GB     4      2     4           16           64
Services Application Servers      16         C: 80 GB, E: 50 GB     4      2     2           8            32
FAST Administration Server        16         C: 80 GB, E: 50 GB     4      2     1           4            16
Query Indexer                     16         C: 80 GB, E: 50 GB     4      2     5           20           80

Allocated Total Memory = 328 GB
Allocated Total vCPU = 70

Sample FAST Servers Architecture

Discovery
During discussions and whiteboard sessions with the customer, we identified the following issues with the deployment:

  • Storage
    • The virtual machines running query and index services were sharing the same LUNs and datastores.
    • Thin provisioning was being used at both the vSphere layer and the EMC storage array layer.
    • The RDMs used for the SQL server MSCS environment were configured with incorrect (MRU/fixed) multi-pathing options.
  • The SQL virtual machines had neither ‘Lock Pages in Memory’ configured nor memory reservations.
  • Various SQL server databases were being deployed as shared SQL instances for the entire FAST Search environment.
  • The networking configurations were set incorrectly for certain SCSI adapters.
  • Guest operating system, vMotion, and backup traffic was not properly separated.
  • There were no anti-affinity rules in place for the application servers within the vSphere farm.
  • The CPU subscriptions across the overall farm seemed unbalanced.

Approach/Recommendations
Throughout a series of discussions we learned more about the architecture and identified the following steps to improve performance:

  1. Reconfigure multi-pathing to round robin, per EMC’s recommendations for vSphere 5.1. (This change showed immediate performance improvement; see the example commands after this list.)
  2. Enable memory reservations and “Lock Pages in Memory” for SQL workloads.
  3. For a write-intensive application like FAST Search, use four (4) vSCSI controllers and separate volumes for the operating system, binaries, data, log, and TempDB disks, using the Windows full format option to avoid an additional first-write penalty.
  4. Absolutely avoid CPU overcommitment in the production environment.
  5. Adopt vSphere best practices to separate the various types of network traffic, including a dedicated backup network, which in this case had previously been sharing the virtual machine network.
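
As a rough sketch of step 1, the path selection policy can be changed from the ESXi shell. The device identifier below is hypothetical, and the correct SATP depends on the specific EMC array, so treat this as illustrative rather than the exact commands used in the engagement:

# Set an individual device to the Round Robin path selection policy
esxcli storage nmp device set --device naa.60000000000000000000000000000001 --psp VMW_PSP_RR
# Optionally make Round Robin the default PSP for the array's SATP so new LUNs inherit it
esxcli storage nmp satp set --satp VMW_SATP_SYMM --default-psp VMW_PSP_RR
# Verify the active policy per device
esxcli storage nmp device list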

Conclusion
For any business-critical application to run with optimum performance, you must put performance ahead of consolidation and avoid overcommitment of CPU and memory. Once you implement these principles in the production environment, performance issues for business-critical applications on vSphere should be alleviated.


Girish Manmadkar is a veteran VMware SAP Virtualization Architect with extensive knowledge and hands-on experience with various SAP and VMware products, including various databases. He focuses on SAP migrations, architecture designs, and implementation, including disaster recovery.

Quick Tip: vSphere Auto Deploy Reverse Web Proxy Caching

By Ryan Johnson, Staff Technical Account Manager, VMware Professional Services
Lately, I’ve been working with a customer on a vSphere 5.1 and 5.5 Auto Deploy environment. The environment is rather large, and each pod of compute/storage requires localized access to the ESXi VIBs during the boot process. This falls in line with the best practice outlined in the VMware vSphere Installation and Setup Guide.


Auto Deploy Load Management Best Practice
Simultaneously booting large numbers of hosts places a significant load on the Auto Deploy server. Because Auto Deploy is a web server at its core, you can use existing web server scaling technologies to help distribute the load. For example, one or more caching reverse proxies can be used with Auto Deploy to serve up the static files that make up the majority of an ESXi boot image. Configure the reverse proxy to cache static content and pass requests through to the Auto Deploy server.

Configure the hosts to boot off the reverse proxy by modifying the Trivial FTP tramp file. When you click Download Trivial FTP ZIP in the vSphere Client, the system downloads the ZIP file that contains the tramp file. See Prepare Your System and Install the Auto Deploy Server. Change the URLs in that file to refer to the address of the reverse proxy.

Kyle Gleed has written a terrific article on how to set up an Apache HTTP reverse web proxy to cache the content from the Auto Deploy server. However, I noticed that he mistakenly did not mention that some additional Apache modules are needed to enable the disk caching directives within Apache HTTP Server. Otherwise, Apache will not cache the content from the Auto Deploy server and the reverse web proxy will only act as a web proxy – with no caching.

Load the Necessary Modules
Specifically, the mod_cache and mod_disk_cache Apache modules need to be loaded to enable these caching directives. Kyle covers these directives in the article.

You might be wondering where the proxy stores the cached content. This is defined by the CacheRoot directive in the Apache HTTP Server httpd.conf file.

For example: CacheRoot /var/cache/AutoDeploy/

Set Cache Timing
In addition, the CacheDefaultExpire directive specifies a default time, in seconds, to cache a piece of content (“document”) if neither an expiry date nor last-modified date is provided with the document. The value specified with the CacheMaxExpire directive does not override this setting.

For example: CacheDefaultExpire 86400

NOTE: The CacheMaxExpire directive specifies the maximum number of seconds for which cachable HTTP documents will be retained without checking the origin server. Thus, documents cached from the Auto Deploy server will be out of date in, at most, this number of seconds. This maximum value is enforced even if an expiry date was supplied.
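
Putting the pieces together, a minimal httpd.conf sketch for the caching reverse proxy might look like the following. The module names are Apache 2.2 style (in Apache 2.4, mod_disk_cache became mod_cache_disk), and the Auto Deploy hostname and port are placeholders for your own environment:

# Load the proxy and disk-caching modules
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule cache_module modules/mod_cache.so
LoadModule disk_cache_module modules/mod_disk_cache.so

# Pass requests through to the Auto Deploy server (address and port are hypothetical)
ProxyRequests Off
ProxyPass / http://autodeploy.example.com:6501/
ProxyPassReverse / http://autodeploy.example.com:6501/

# Cache the static boot image content on local disk
CacheEnable disk /
CacheRoot /var/cache/AutoDeploy/
CacheDefaultExpire 86400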

Verify VIBs are Loading
Lastly, if you need to verify that an ESXi host is loading VIBs from a reverse web proxy, you will need to either review the Apache logs (e.g., /var/log/httpd/access_log) on the reverse web proxy for GET requests from the ESXi host’s IP address, or quickly catch the address during the boot process (which moves pretty fast).
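
For instance, a quick check on the proxy itself (the ESXi host IP and log path are placeholders):

# Look for boot-image GET requests from a specific ESXi host in the proxy's access log
grep "GET" /var/log/httpd/access_log | grep "192.0.2.50"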

By following these steps at setup, you’ll ensure caching will work as expected.

Ryan Johnson, an avid husband, father, runner, and technologist, is based in Orlando, Florida, USA. Ryan is also a Staff Technical Account Manager in the Professional Services Organization at VMware. As an accomplished enterprise architect and technologist, he enables VMware’s largest customers and VMUG community members in Central and North Florida to accelerate and simplify their infrastructure services and organizations through VMware’s software-defined data center and hybrid cloud solutions.
Follow him on Twitter at @tenthirtyam. This post originally appeared on his blog, tenthirtyam.org.

Go for the Gold: See vSphere with Operations Management In Action

If there’s anything we’ve learned from watching the recent Winter Olympics, it’s that world-class athletes are focused, practice endless hours, and need to be both efficient and agile to win gold.

When it comes to data centers, what sets a world-class data center apart is the software. A software-defined data center (SDDC) provides the efficiency and agility for IT to meet exploding business expectations so your business can win gold.

The VMware exclusive seminar is here! Join us to learn about the latest in SDDC.

Now through March 19, VMware TechTalk Live is hosting free, interactive half-day workshops in 32 cities across the U.S. and Canada. Attendees will get to see a live demo of vSphere with Operations Management.

The workshops will also provide a detailed overview of the key components of the SDDC architecture, as well as results of VMware customer surveys explaining how the SDDC is actually being implemented today.

Check out the TechTalk Live event information to find the location closest to you and to reserve your spot.

Creating Purpose-Built vSphere Clusters

By Sunny Dua, Senior Technology Consultant at VMware 

I recently had an opportunity to present at vForum 2013 in Mumbai, the Financial Capital of India. With more than 3,000 participants and two days of events, it was definitely one of the biggest customer events in India. Along with my team, I represented VMware Professional Services and presented on the following topic: “Architecting vSphere Environments – Everything you wanted to know!”

When we finalized the topic, I realized that covering it in 45 minutes is next to impossible. With the amount of complexity that goes into architecting a vSphere environment, one could easily write an entire book. However, the task at hand was to keep it to the length of a presentation.

As I started planning the slides, I decided to look at the architectural decisions, which in my experience are the Most Important Ones, since they can make or break the virtual infrastructure. My other criterion was to ensure I talk about the Grey Areas where I always see uncertainty. This uncertainty can transform a good design into a bad one.

In the end I was able to come up with a final presentation that was very well received by the attendees. I thought I would share the content with the entire community through this blog post. This is part 1, where I will give you some key design considerations for designing vSphere clusters.

Before I begin, I also want to give credit to a number of VMware experts in the community. Their books, blogs, and the discussions I have had with them in the past helped me create this content. This includes books and blogs by Duncan, Frank, Forbes Guthrie, Scott Lowe, and Cormac Hogan, and some fantastic discussions with Michael Webster earlier this year.

Before we begin here is a small graphical disclaimer:

And here are my thoughts on creating vSphere Clusters.

The message behind the slide above is to create vSphere Clusters based on the purpose they need to fulfill in the IT landscape of your organization.

Management Cluster

The management cluster here refers to a 2- to 3-host ESXi cluster used by the IT team primarily to host all the workloads that are used to build up a vSphere infrastructure. This includes VMs such as vCenter Server, database server, vCOps, SRM, vSphere Replication appliance, vMA appliance, Chargeback Manager, etc. This cluster can also host other infrastructure components such as Active Directory, backup servers, anti-virus, etc. This approach has multiple benefits, such as:

  • Security due to isolation of management workloads from production workloads. This gives complete control to the IT team on the workloads, which are critical to manage the environment.
  • Ease of upgrading the vSphere Environment and related components without impacting the production workloads.
  • Ease of troubleshooting issues within these components since the resources such as compute, storage, and network are isolated and dedicated for this cluster.

Quick Tip: Ensure that this cluster is a minimum 2-node cluster for vSphere HA to protect workloads in case one host goes down. A 3-node management cluster would be ideal, since you would have the option of running maintenance tasks on ESXi servers without having to disable HA. You might want to consider using VSAN for this infrastructure as this is the primary use case that both Rawlinson & Cormac suggest. Remember, VSAN is in beta right now, so make your choices accordingly.

Production Clusters

As the name suggests this cluster would host all your production workloads. This cluster is the heart of your organization as it hosts the business applications, databases, and web services. This is what gives you the job of being a VMware architect or a virtualization admin.

Here are a few pointers to keep in mind while creating production clusters:

  • The number of ESXi hosts in a cluster will impact your consolidation ratios in most cases. As a rule of thumb, you will always set aside one ESXi host in a 4-node cluster for HA failover, but you could do the same on an 8-node cluster, which effectively frees up one ESXi host for running additional workloads. Yes, the HA calculations matter, and they can be based either on slot size or on a percentage of cluster resources. For example, reserving two hosts in a 16-node cluster with percentage-based admission control means setting aside roughly 2/16, or about 13 percent, of cluster CPU and memory as failover capacity.
  • Always consider at least one host as a failover limit per 8 to 10 ESXi servers. So in a 16-node cluster, do not stick with only one host for failover; take this number to at least two. This ensures you cover the risk as much as possible by providing an additional node for failover scenarios.
  • Setting up large clusters comes with benefits, such as higher consolidation ratios, but they can have a downside as well if you do not have enterprise-class or rightly sized storage in your infrastructure. Remember, if a datastore is presented to a 16-node or a 32-node cluster, and the VMs on that datastore are spread across the cluster, chances are you might run into contention from SCSI locking. If you are using VAAI, this will be reduced by ATS; however, try to start small and grow gradually to confirm that your storage behavior is not being impacted.

Having separate ESXi servers for DMZ workloads is OLD SCHOOL. This was done to create physical boundaries between servers. This practice is a true burden carried over from the physical world to the virtual one. It’s time to shed that load and make use of mature technologies, such as VLANs, to create logical isolation zones between internal and external networks. In the worst case, you might want to use separate network cards and a separate physical network fabric, but you can still run on the same ESXi servers, giving you better consolidation ratios while ensuring the level of security required in an enterprise.

Island Clusters

They sound fancy, but the concept of island clusters is fairly simple: run islands of ESXi servers (small groups) that can host workloads with special license requirements. Although I do not appreciate how some vendors try to apply illogical licensing policies to their applications, middleware, and databases, this is a great way of avoiding all the hustle and bustle created by sales folks. Some examples of island clusters would include:

  • Running Oracle Databases/Middleware/Applications on their dedicated clusters. This will not only ensure that you are able to consolidate more and more on a small cluster of ESXi hosts and save money but also ensures that you zip the mouth of your friendly sales guy by being in what they think is license compliance.
  • I have customers who have used island clusters of operating systems such as Windows. This also helps you save on those datacenter, enterprise, or standard editions of Windows OS.
  • Another important benefit of this approach is that it helps ESXi use the memory management technique of Transparent Page Sharing (TPS) more efficiently, since chances are these VMs spawn a lot of duplicate pages in the physical memory of your ESXi servers. I have seen the savings reach 30 percent; the figure can be pulled from a vCenter Operations Manager report if you have it installed in your virtual infrastructure (see the note after this list).
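
If you want a quick host-level view of how much memory page sharing is reclaiming, outside of a vCenter Operations Manager report, the interactive esxtop memory view is one place to look; a minimal sketch:

# On the ESXi host, open esxtop and press 'm' for the memory view;
# the PSHARE/MB line reports the shared, common, and saving values for the host.
esxtop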

With this, I will close this article. I was hoping to give you just a quick scoop on each of these parts, but the article is now four pages long. I hope it helps you make the right choices for your virtual infrastructure when it comes to vSphere clusters.


This post originally appeared on Sunny Dua’s vXpress blog, where you can find follow-up posts 2 and 3. Sunny Dua is a Senior Technology Consultant for VMware’s Professional Services Organization, focused on India and SAARC countries.

What Did You Miss? Best Blog Posts for 2013

When you consider the constant flow of information we are submerged in on a daily basis, it’s no surprise that great insights occasionally escape our notice. As we reflect this week on the last year, we thought we’d share a few of our most-read and most-shared posts from 2013, just in case you missed one. We hope they’ll help you step into 2014 with confidence, knowing you have these helpful tips in your back pocket (and that you can check back any time for new ones). Enjoy!


Four Commonly Missed and Easy to Implement Best Practices (Horizon View)
– By Nathan Smith, VMware EUC Consultant

It All Starts Here: Internal implementation of Horizon Workspace at VMware
– By Jim Zhang, VMware Professional Services Consultant

4 Ways To Overcome Resistance to the Cloud
– By Brett Parlier, Solutions Architect, VMware Professional Services

Quickly Calculate Bandwidth Requirements with New vSphere ‘fling’
– By Sunny Dua, Senior Technology Consultant at VMware

Quickly Calculate Bandwidth Requirements with New vSphere ‘fling’

By Sunny Dua, Senior Technology Consultant at VMware 

Across a number of my recent consulting engagements, I have seen increasing demand for host-based data replication solutions. In a few of my recent projects, I have implemented VMware Site Recovery Manager in combination with VMware vSphere Replication.

I have written about vSphere Replication (VR) in the past and I am not surprised that a number of VMware customers are shifting focus from a storage-based replication solution to a host-based replication solution due to the cost-benefit and flexibility that comes with such a solution.

In my projects I started with replicating simple web servers to the DR site using VR; now customers are discussing replicating database servers, Exchange, and other critical workloads using vSphere Replication. With out-of-the-box integration with a solution such as VMware Site Recovery Manager, building a DR environment for your virtualized datacenter has become extremely simple and cost effective.

The configuration of the replication appliance and SRM is as easy as clicking NEXT, NEXT, FINISH; however, the most common challenge has been estimating the bandwidth requirements from the Protected Site to the Recovery Site for the replication of workloads. One of the most commonly asked questions is: “How do I calculate the bandwidth requirements for replication?” Continue reading