
Tag Archives: VM

Troubleshooting VMs Connectivity with vRealize Network Insight

By Julienne Pham

Troubleshooting is hard when you don’t know where in the stack a network issue lies.

In today’s datacentre, deploying business applications in minutes is necessary to keep the business ahead of the game. It is even more crucial to keep security and rapid network configuration under control. So how can a network administrator get to the bottom of a network issue in a matter of seconds in this new cloud era?

Network virtualization brings operational and management flexibility and simplicity, but it adds complexity to troubleshooting and to pinpointing the root cause of a network issue.

The traditional way would be to check network activity from vCenter at the ESXi host level, on the physical NIC, and watch the packets as you run your test. You would also check physical network activity, compare it with the virtual network traffic, and deduce where the bottleneck is.

If you now need to cross-check between multiple vCenter Servers and multiple sites, how long will it take to find the configuration issue? With vRealize Network Insight, the required information can be gathered in a single search.


Creating Purpose-Built vSphere Clusters

By Sunny Dua, Senior Technology Consultant at VMware 

I recently had an opportunity to present at vForum 2013 in Mumbai, the Financial Capital of India. With more than 3,000 participants and two days of events, it was definitely one of the biggest customer events in India. Along with my team, I represented VMware Professional Services and presented on the following topic: “Architecting vSphere Environments – Everything you wanted to know!”

When we finalized the topic, I realized that presenting this topic in 45 minutes is next to impossible. With the amount of complexity that goes into Architecting a vSphere Environment, one could easily write an entire book. However, the task at hand was to keep it to the length of a presentation.

As I started planning the slides, I decided to look at the architectural decisions, which in my experience are the Most Important Ones, since they can make or break the virtual infrastructure. My other criterion was to ensure I talk about the Grey Areas where I always see uncertainty. This uncertainty can transform a good design into a bad one.

In the end I was able to come up with a final presentation that was received very well by the attendees. I thought of sharing the content with the entire community through this blog post. This is part 1, where I will give you some key design considerations for designing vSphere clusters.

Before I begin, I also want to give credit to a number of VMware experts in the community. Their books, blogs, and the discussions I have had with them in the past helped me create this content. This includes books and blogs by Duncan, Frank, Forbes Guthrie, Scott Lowe, and Cormac Hogan, and some fantastic discussions with Michael Webster earlier this year.


And here are my thoughts on creating vSphere Clusters.

The message behind the slide above is to create vSphere Clusters based on the purpose they need to fulfill in the IT landscape of your organization.

Management Cluster

The management cluster here refers to a 2- to 3-host ESXi cluster used by the IT team primarily to host the workloads that build up the vSphere infrastructure. This includes VMs such as vCenter Server, its database server, vCOps, SRM, the vSphere Replication appliance, the vMA appliance, Chargeback Manager, etc. This cluster can also host other infrastructure components such as Active Directory, backup servers, anti-virus, etc. This approach has multiple benefits:

  • Security due to isolation of management workloads from production workloads. This gives the IT team complete control over the workloads that are critical to managing the environment.
  • Ease of upgrading the vSphere Environment and related components without impacting the production workloads.
  • Ease of troubleshooting issues within these components since the resources such as compute, storage, and network are isolated and dedicated for this cluster.

Quick Tip: Ensure that this cluster is at least a 2-node cluster so vSphere HA can protect workloads if one host goes down. A 3-node management cluster would be ideal, since you could then run maintenance tasks on ESXi servers without having to disable HA. You might want to consider using VSAN for this infrastructure, as this is the primary use case that both Rawlinson and Cormac suggest. Remember, VSAN is in beta right now, so make your choices accordingly.

Production Clusters

As the name suggests, this cluster hosts all your production workloads. It is the heart of your organization, hosting the business applications, databases, and web services. This is what gives you the job of being a VMware architect or a virtualization admin.

Here are a few pointers to keep in mind while creating production clusters:

  • The number of ESXi hosts in a cluster will impact your consolidation ratios in most cases. As a rule of thumb, you would reserve one ESXi host in a 4-node cluster for HA failover, but you could reserve that same single host in an 8-node cluster, which effectively frees up one ESXi host for running additional workloads. Yes, the HA calculations matter, and they can be based either on slot size or on a percentage of cluster resources.
  • Always consider at least one host as a failover limit per 8 to 10 ESXi servers. So in a 16-node cluster, do not stick with only one host for failover; take that number to at least two. This ensures you cover the risk as much as possible by providing an additional node for failover scenarios.
  • Setting up large clusters comes with benefits, such as higher consolidation ratios, but it can have a downside as well if you do not have enterprise-class or rightly sized storage in your infrastructure. Remember, if a datastore is presented to a 16-node or 32-node cluster, and the VMs on that datastore are spread across the cluster, chances are you will run into contention from SCSI locking. If you are using VAAI, this is reduced by ATS; even so, start small and grow gradually to confirm your storage behavior is not being impacted.
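The failover rule of thumb above can be sketched as a quick calculation. This is an illustrative Python sketch, not any VMware tool; the function name and the one-host-per-8 divisor are assumptions taken directly from the rule of thumb:

```python
import math

def ha_failover_plan(num_hosts, hosts_per_failover=8):
    """Illustrative sizing helper: reserve roughly one failover host
    per 8-10 ESXi hosts, per the rule of thumb above."""
    failover_hosts = max(1, math.ceil(num_hosts / hosts_per_failover))
    # Equivalent "percentage of cluster resources" admission-control reservation
    reserved_pct = 100 * failover_hosts / num_hosts
    usable_hosts = num_hosts - failover_hosts
    return failover_hosts, reserved_pct, usable_hosts

for n in (4, 8, 16, 32):
    print(n, ha_failover_plan(n))
```

A 16-node cluster thus reserves two hosts (12.5 percent of resources), matching the advice above, while a 4-node cluster sacrifices a full quarter of its capacity to the same N+1 protection.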

Having separate ESXi servers for DMZ workloads is OLD SCHOOL. This was done to create physical boundaries between servers, and it is a burden carried over from the physical world to the virtual one. It’s time to shed that load and use mature technologies, such as VLANs, to create logical isolation zones between internal and external networks. In the worst case, you might want to use separate network cards and physical network fabric, but you can still run on the same ESXi server, which gives you better consolidation ratios while ensuring the level of security required in an enterprise.

Island Clusters

They sound fancy, but the concept of island clusters is fairly simple: run islands of ESXi servers (small groups) that host workloads with special licensing requirements. Although I do not appreciate how some vendors apply illogical licensing policies to their applications, middleware, and databases, island clusters are a great way of avoiding all the hustle and bustle created by sales folks. Some examples of island clusters include:

  • Running Oracle databases/middleware/applications on their own dedicated clusters. This not only lets you consolidate heavily on a small cluster of ESXi hosts and save money, but also zips the mouth of your friendly sales guy by keeping you in what they consider license compliance.
  • I have customers who have used island clusters for operating systems such as Windows. This also helps you save on those datacenter, enterprise, or standard editions of the Windows OS.
  • Another important benefit of this approach is that it helps ESXi use the memory management technique of Transparent Page Sharing (TPS) more efficiently, since these VMs are likely to spawn a lot of duplicate pages in the physical memory of your ESXi servers. I have seen the savings reach 30 percent, and the figure can be pulled from a vCenter Operations Manager report if you have it installed in your virtual infrastructure.
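To put that 30 percent figure in perspective, here is a back-of-the-envelope estimate in plain Python. The function name, the example VM counts, and the single-OS assumption are mine; actual TPS savings are workload-dependent, so treat the default fraction as an observed upper end, not a guarantee:

```python
def tps_savings_gb(total_vm_memory_gb, shared_fraction=0.30):
    """Rough estimate of physical memory reclaimed by Transparent Page
    Sharing on a single-OS island cluster. The default 30% is the upper
    end the author reports observing, not a guaranteed figure."""
    return total_vm_memory_gb * shared_fraction

# e.g. 40 Windows VMs at 8 GB each on one island cluster: ~96 GB reclaimed
print(tps_savings_gb(40 * 8))
```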

With this I will close this article. I was hoping to give you a quick scoop on all of these topics, but the article is already four pages long. I hope it helps you make the right choices for your virtual infrastructure when it comes to vSphere clusters.


This post originally appeared on Sunny Dua’s vXpress blog, where you can find follow-up posts 2 and 3. Sunny Dua is a Senior Technology Consultant for VMware’s Professional Services Organization, focused on India and SAARC countries.

Don’t Miss Our PS Consultants at VMworld

This year’s VMworld in San Francisco is fast approaching: August 25–29. Are you ready? Have you been perusing the list of sessions to decide which breakouts and panels you can’t miss?

With 350+ sessions this year, we imagine you’ll be carefully planning your schedule in the coming weeks. We’d hate for you to miss the great sessions led by our VMware Professional Services Consultants and Architects, so we’ve included two on Virtualization below. Plus, don’t miss our run-down of End-User Computing sessions from last week.

***

Strategic Reasons for Classifying Workloads for Tier 1 Virtualization. Why Classify?

With David Gallant (VMware) and Denis Larocque (MolsonCoors)

Virtualizing business-critical applications can be a daunting exercise. It’s not just another application you’re putting on the virtual infrastructure. In most cases, it’s the system of record or the major finance application, the app that runs the supply chain, etc. You need to get it correct—the first time.

Workload classification of the existing environment is key to the success of virtualizing business-critical applications. Workload classification determines sizing for performance and capacity as well as application dependency.

“Getting rid of our costly UNIX environment was a good reason to virtualize, but SAP was a critical part of our portfolio, and we had to guarantee performance and reliability of the new system,” explains MolsonCoors Virtualization Architect Denis Larocque. “Having deep understanding of current state to be able to classify the workload and make a projection is the secret. It is not that difficult when you have the right information available.”

At this session you’ll hear from Larocque and VMware Virtualization Architect David Gallant, and discuss the who, what, when, why, where, and how of classifying workloads for virtual environments.

***

How SRP Delivers More Than Power to Their Customers

With Girish Manmadkar (VMware) and Sheldon Brown (SRP)

SRP, the third-largest public power and water company in the country, with over 1,000,000 customers, has completely virtualized its entire SAP landscape, database included. Since completing the production environment build in December, SRP has been busy stress testing, load testing, performance testing, and monitoring and tweaking the environment to ensure an excellent customer experience on go-live day.

In this session you’ll hear from VMware (Consulting Architect, BCA/SAP practice, North America) and SRP (hands-on SAP Technology Manager) about their reasons for virtualizing the SAP environment and migrating SAP workloads to elastic yet optimal virtual environments. You’ll also find out how they resolved earlier challenges such as SAP BI performance, quick resource allocation, Oracle licensing, and much more.

***

It All Starts Here: Internal Implementation of Horizon Workspace at VMware

By Jim Zhang, VMware Professional Services Consultant

VMware has had a dogfooding tradition since former CEO Paul Maritz instilled the practice of having VMware IT deploy VMware products for production use internally. As a VMware employee, I can understand some of the criticism of this practice, but I firmly believe it helps us build and deliver a solid, quality product to the market.

Prior to the release of VMware’s Horizon Suite, VMware IT rolled out Horizon Workspace to employees in the production environment. It’s very exciting! Right now, I can use my iPhone and iPad to access my company files without being tied to my desk. It is also very easy to share folders and files with colleagues, expanding our ability to collaborate and to track file versions. Additionally, with Workspace, I can access internal applications without further authentication after I log in to the Horizon portal. Even my entitled virtual desktops are there!

While Mason and Ted discuss the IT challenges of mobile computing in this blog, we at VMware understand these challenges because ‘we eat our own dogfood.’ In this post I’d like to share some key sizing concepts for each of the Horizon components, along with the sizes VMware IT used to deploy Horizon Workspace for its 13,000+ employees.

Horizon Workspace is a vApp that generally contains five virtual machines (VMs) by default.

Let’s go through each VM and see how to size it in each case:

1.  Configurator VA (virtual appliance): This is the first virtual appliance to be deployed. It is used to configure the vApp from a single point and deploy and configure the rest of the vApp. The Configurator VA is also used to add or remove other Horizon Workspace virtual appliances. There can only be one Configurator VA per vApp.

  • 1x Configurator VA is used: 2 vCPU, 2 GB memory

2.  Connector VA: Enterprise deployments require more than one Connector VA to support different authentication methods, such as RSA SecurID and Kerberos SSO. To provide high availability when deploying more than one Connector VA, you must front-end the Connector VAs with a load balancer. Each Connector VA can support up to 30,000 users. Specific use cases, such as Kerberos, ThinApp integration, and View integration, require the Connector VA to be joined to the Windows domain.

  • 6x Connector VAs are used: 2 vCPU, 4 GB memory each

3.  Gateway VA: The Gateway VA is the single namespace for all Horizon Workspace interaction. For high availability, place multiple Gateway VAs behind a load balancer. Horizon Workspace requires one Gateway VA for every two Data VAs, or one Gateway VA for every 2,000 users.

  • 4x Gateway VAs are used: 2 vCPU, 8 GB memory each

4.  Management VA (aka Service VA): Enterprise deployments require two or more Service VAs. Each Service VA can handle up to 100,000 users.

  • 2x Service VAs are used: 2 vCPU, 6 GB memory each (1 for HA)

5.  Data VA: Each Data VA can support up to 1,000 users. At least three Data VAs are required. The first Data VA is the master data node; the others are user data nodes. Each user data node requires its own dedicated volume. In proof-of-concept or small-scale pilot scenarios, you can use a Virtual Machine Disk (VMDK). For production, you must use NFS.

  • 11x Data VAs are used: 6 vCPU, 32 GB memory each

6.  Database: Workspace supports only Postgres. For enterprise deployments, the best practice is to use an external Postgres database.

  • 2x Postgres servers are used: 4 vCPU, 4 GB memory each (1 for replication)

7.  MS Office Preview Server: performs real-time conversion of documents. Requires Windows 7 Enterprise or Windows 2008 R2 Standard; MS Office 2010 Professional, 64-bit; an admin account with permissions to create local accounts; and UAC disabled.

  • 3x MS Office Preview Servers are used: 4 vCPU, 4 GB memory each
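The per-appliance limits quoted above can be turned into a back-of-the-envelope sizing sketch. This is illustrative Python (the function and field names are mine, not a VMware tool); VMware IT’s actual counts differ from the raw math because real deployments size on concurrent users, HA headroom, and the number of authentication methods in play:

```python
import math

def workspace_sizing(users):
    """Naive VA counts derived from the per-appliance limits quoted above."""
    # Data VAs: 1,000 users each, plus one master data node, minimum three
    data = max(3, math.ceil(users / 1000) + 1)
    # Gateways: one per two Data VAs, or one per 2,000 users
    gateway = max(math.ceil(data / 2), math.ceil(users / 2000))
    # Connectors: 30,000 users each; at least two for HA / multiple auth methods
    connector = max(2, math.ceil(users / 30000))
    # Service VAs: 100,000 users each; enterprise minimum of two
    service = max(2, math.ceil(users / 100000))
    return {"data": data, "gateway": gateway,
            "connector": connector, "service": service}

print(workspace_sizing(13000))
```

For the 13,000-employee case this naive math suggests 14 Data VAs and 7 Gateways, whereas VMware IT deployed 11 and 4, which illustrates that the quoted limits are per-appliance ceilings rather than a mechanical sizing formula.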

 

If you want to learn more about real-world deployment experience and best practices for deploying the Horizon Suite, please contact your local VMware Professional Services team. They have the breadth of experience and technical ability to help you achieve your project goals, from planning and design to implementation and maintenance. Also, be on the lookout for upcoming Horizon reference guides from VMware. Good luck!

Jim Zhang joined VMware in November 2007 as a quality engineering manager for VMware View.  In 2011, he moved to Professional Services as consultant and solution architect.  Jim has extensive experience in desktop virtualization and workspace solution design and delivery.