
Monthly Archives: November 2014

Celebrating Eight Years at VMware

By Andrea Siviero, VMware Senior Solutions Architect

How fascinating!

When you are having fun, you don’t realize how fast time passes. This has never been truer than for the eight years I have spent at VMware. On the personal side, I have gained two children, changed houses a couple of times, lost 20 kg and found a new passion for running. On the professional side, I’ve changed roles, from a pre-sales system engineer of the “Virtualization 1.0 Era” to an architect of “What’s Next.”

VMware acknowledges every four years of service with an award. When I celebrated four years, the award was a VASA sculpture comprising these three cubes, recalling the old-style VMware logo:


VMware 4 Years Award

(To read more about the VASA sculptures and how Diane Greene got the idea, click here.)

At eight years, it was a brand new kind of VASA sculpture. There are no cubes anymore, but the design still recalls them in colors and shapes seen from different perspectives. Moreover, the sculpture actually contains eight small squares, one for each year of the award. An incrementally evolved idea, isn’t it? After all, that’s the essence of VMware.


VMware Eight Years Award: I was so pleased! 

Then: the Virtualization 1.0 Era and the “Compute Plant”

Of course, more has changed over the past eight years at VMware than just the awards. Eight years ago—in the “Virtualization 1.0 Era”—one of the biggest customer challenges was data center resource optimization and cost savings, in the face of an increasing number of separate components needed for evolved application architectures (e.g., service-oriented architecture) and x86 power relentlessly following Moore’s law.


VMware, with x86 virtualization, began to solve the problem by decoupling the hardware from the operating system and applications in a simple and disruptive approach that promised to deliver immense benefits.


Historical picture from 2007 EMEA TSX

There were three basic ways customers approached virtualization at this time, which led to vastly different outcomes:

– Reluctant to change: These customers were informed about new IT trends but, not considering virtualization a serious alternative for production environments, continued to allocate dedicated hardware for each new project, with IT budget demands increasing year over year without real business benefits.

– Taking a tactical approach: These customers invested in virtualization using a project-specific approach to virtual infrastructure, creating different non-standardized silos with virtual machine sprawl.

– Making strategic moves to a shared virtual infrastructure: These customers took a big-picture view, aggregating budgets from multiple projects to build a shared virtual infrastructure that allowed easy redistribution of compute resources while maintaining high levels of governance, increasing availability and agility, and lowering costs.


2008 Customer Virtualization adoption strategies

Over the years, VMware introduced new approaches to managing virtual infrastructure, transforming it into a “Compute Plant” where customers could dynamically manage resources. This introduced agility, automation and governance.


2008 VMware historical picture: vSphere as a “Compute Plant”

Now: Transforming the Ways IT Provides Services

Now, in the mobile/cloud era, VMware has continued to be the catalyst for the evolution of IT, building disruptive advantages for managing, automating and orchestrating computing, networking, storage and security. This has transformed IT into a provider of services that can be delivered on-premise, off-premise and in a hybrid combination of the two.


VMware vRealize Suite

What about customer approaches of today? IT goals haven’t changed much over the years, and neither have the three types of organizational approaches to new technologies:

– Reactive – With IT exhausting resources to maintain existing systems, it is challenged to support future business results. The need for rapid innovation has driven users outside of traditional IT channels. As a result, cloud has entered the business opportunistically, threatening to create silos of activity that cannot satisfy mandates for security, risk management and compliance.

– Proactive – IT has moved to embrace cloud as a model for achieving innovation through increased efficiency, reliability and agility. Shifts in processes and organizational responsibilities attempt to bring structure to cloud decisions and directions. More importantly, IT has embraced a new role: that of a service broker. IT is now able to leverage external providers to deliver rapid innovation within the governance structure of IT, balancing costs, risks and service levels.

– Innovative – IT has fully implemented cloud computing as the model for producing and consuming computing, shifting legacy systems to a more flexible infrastructure. These organizations have invested in automation and policy-based management for greater efficiency and reliability, enabling a broad range of stakeholders to consume IT services via self-service. They have also developed detailed measurement capabilities that quantify the financial impact of sourcing decisions, allowing them to redirect resources and drive new services and capabilities that advance business goals.

Moving Beyond a Reactive State of IT

At every stage of the virtualization evolution, there have been strategic early adopters and those who take a “wait and see” attitude. But as workloads and end users become more demanding, even the most reticent IT departments will need to shift away from a reactive environment, taking steps to redefine the way IT operates and the technology it leverages for its foundation. I believe that, to move beyond a “reactive state” in the near future, enterprise customers will have to:

  • Continue to invest in private cloud to build the foundation for an efficient, agile, reliable infrastructure.
  • Identify processes that can be automated, involving our technology consulting services to create, expand or optimize their environments while their teams gain hands-on knowledge.
  • Establish a self-service environment to deliver IT services to stakeholders on demand across every business unit.
  • Begin to identify the true costs of IT services.
  • Embrace third-party providers as a source of innovation.

Get ready for more bumps and fun

“It is not the strongest or the most intelligent who will survive but those who can best manage change.” C. Darwin

Evolution of any kind doesn’t happen without bumps and fun. We live and work in a constantly changing landscape, and with VMware we have opportunities every day to influence and be part of the exciting changes that are taking place today and shaping the IT of tomorrow.

Which is what makes it all so fascinating.

See more at: http://www.vmware.com/products/vrealize-business/

Andrea Siviero is an eight-year veteran of VMware and a senior solutions architect in Professional Services Engineering (PSE) for the Software-Defined Datacenter (SDDC), part of the Global Technical Solutions (GTS) team. Prior to PSE, Andrea spent three years as a pre-sales system engineer and three years as a post-sales consultant architect for cloud computing and desktop virtualization solutions, focusing on very large and complex deployments, especially for service providers in the finance and telco sectors.

VDI Current Capacity Details

By Anand Vaneswaran

In my previous post, I provided instructions on constructing a high-level “at-a-glance” VDI dashboard in vRealize Operations for Horizon, one that would aid in troubleshooting scenarios. In the second of this three-part blog series, I will talk about constructing a custom dashboard that takes a holistic view of the vSphere HA clusters running my VDI workloads, in an effort to understand current capacity. The ultimate objective is not only to understand current capacity, but also to identify trends that help me forecast future capacity. In this example, I’m going to try to gain information on the following:

  • Total number of running hosts
  • Total number of running VMs
  • VM-LUN densities
  • Usable RAM capacity (in an N+1 cluster configuration)
  • vCPU to pCPU density (in an N+1 cluster configuration)
  • Total disk space used, as a percentage

You can either follow my lead and recreate this dashboard step by step, or simply use this as a guide and create a dashboard of your own for the capacity metrics you care about most. In my environment, I have five (5) clusters of full-clone VDI machines and three (3) clusters of linked-clone VDI machines. I have decided to incorporate eight (8) “Generic Scoreboard” widgets in a two-column custom dashboard. I’m going to populate each of these “Generic Scoreboard” widgets with the relevant stats described above.


Once my widgets have been imported, I will rearrange my dashboard so that the left side of the screen shows the full-clone clusters and the right side shows the linked-clone clusters. As part of this exercise, I determined that I needed to create super metrics to calculate the following metrics:

  • VM-LUN densities
  • Usable RAM capacity (in an N+1 cluster configuration)
  • vCPU to pCPU density (in an N+1 cluster configuration)
  • Total disk space used in percentage

With that being said, let’s begin! The first super metric I will create will be called SM – Cluster LUN Density. I’m going to design my super metric with the following formula:

sum(This Resource:Deployed|Count Distinct VM)/sum(This Resource:Summary|Total Number of Datastores)


In this super metric I will attempt to find out how many VMs reside on my datastores on average. The objective is to make sure I’m abiding by the recommended configuration maximums, which allow a certain number of virtual machines to reside on a VMFS volume.
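The arithmetic behind this super metric can be mirrored in plain Python; the function name and example numbers below are illustrative sketches, not part of vRealize Operations:

```python
def vm_lun_density(deployed_vm_count: int, datastore_count: int) -> float:
    """Mirrors sum(Count Distinct VM) / sum(Total Number of Datastores)."""
    if datastore_count == 0:
        raise ValueError("cluster has no datastores")
    return deployed_vm_count / datastore_count

# Example: 240 VMs spread across 16 datastores -> 15 VMs per LUN on average.
print(vm_lun_density(240, 16))  # 15.0
```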

The next super metric I will create is called SM – Cluster N+1 RAM Usable. I want to calculate the usable RAM in a cluster in an N+1 configuration. The formula is as follows:

(((sum(This Resource:Memory|Usable Memory (KB))/sum(This Resource:Summary|Number of Running Hosts))*.80)*(sum(This Resource:Summary|Number of Running Hosts)-1))/1048576


Okay, so clearly there is a lot going on in this formula. Allow me to try to break it down and explain what is happening under the hood. I’m calculating this stat for an entire cluster. So what I will do is take the usable memory metric (installed) under the Cluster Compute Resource Kind. Then I will divide that number by the total number of running hosts to give me the average usable memory per host. But hang on, there are two caveats here that I need to take into consideration if I want an accurate representation of the true overall usage in my environment:

1) I don’t want my hosts running at more than 80 percent capacity when it comes to RAM utilization; I always want to leave a little buffer. So my utilization factor will be 80 percent, or .8.

2) I always want to account for the failure of a single host in my cluster design (in some environments, you might want to factor in the failure of two hosts), so that compute capabilities for running VMs are not compromised in the event of a host failure. I’ll want to incorporate this N+1 cluster configuration design in my formula.

So, I will take the overall usable, or installed, memory (in KB) for the cluster, divide that by the number of running hosts on said cluster, then multiply the result by the .8 utilization factor to arrive at a number – let’s call it x – the amount of real usable memory I have per host. Next, I’m going to multiply x by the total number of hosts minus 1, which will give me y; this takes my N+1 configuration into account. Finally, I’m going to take y, still in KB, and divide it by 1,048,576 (1024×1024) to convert it to GB and get my final result, z.
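The steps above can be sketched as a short Python function. The names, constants and the 256 GB example are mine, mirroring the formula’s arithmetic rather than quoting any vRealize Operations API:

```python
KB_PER_GB = 1024 * 1024          # 1,048,576 KB in a GB
UTILIZATION_FACTOR = 0.80        # leave a 20 percent buffer per host

def usable_ram_n_plus_1_gb(cluster_usable_memory_kb: float,
                           running_hosts: int) -> float:
    """Usable cluster RAM in GB under an N+1 design, mirroring the super metric."""
    per_host_kb = cluster_usable_memory_kb / running_hosts  # average per host
    x = per_host_kb * UTILIZATION_FACTOR                    # real usable per host
    y = x * (running_hosts - 1)                             # survive one host failure
    return y / KB_PER_GB                                    # z, converted to GB

# Example: 4 hosts with 256 GB each -> 3 usable hosts at 80 percent = ~614 GB.
print(usable_ram_n_plus_1_gb(4 * 256 * KB_PER_GB, 4))
```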

The next super metric I will create is called SM – Cluster N+1 vCPU to Core Ratio. The formula is as follows:

sum(This Resource:Summary|Number of vCPUs on Powered On VMs)/((sum(This Resource:CPU Usage|Provisioned CPU Cores)/sum(This Resource:Summary|Total Number of Hosts))*(sum(This Resource:Summary|Total Number of Hosts)-1))
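As a plain-Python sketch of what this ratio formula computes (the function name and example figures are hypothetical, not an actual API):

```python
def vcpu_to_pcpu_n_plus_1(vcpus_powered_on: int,
                          total_cores: int,
                          total_hosts: int) -> float:
    """vCPU:pCPU ratio measured against the cores of N-1 hosts (N+1 design)."""
    cores_per_host = total_cores / total_hosts
    usable_cores = cores_per_host * (total_hosts - 1)  # set one host's cores aside
    return vcpus_powered_on / usable_cores

# Example: 600 vCPUs on 4 hosts x 32 cores -> 600 / 96 usable cores.
print(vcpu_to_pcpu_n_plus_1(600, 128, 4))  # 6.25
```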



This ratio formula divides the number of vCPUs on powered-on VMs by the physical cores that remain after setting one host’s worth of cores aside, again reflecting the N+1 design.

The last super metric calculates total disk space used as a percentage, and its formula is fairly self-explanatory: I’m taking the total space used for the datastore cluster and dividing it by the total capacity of that datastore cluster. This gives me a number greater than 0 and less than 1, so I multiply it by 100 to produce a percentage output.
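That used-over-capacity percentage can be mirrored in a few lines of illustrative Python (names and numbers are mine):

```python
def disk_space_used_pct(space_used_gb: float, total_capacity_gb: float) -> float:
    """Used-space fraction of a datastore cluster, scaled to a percentage."""
    return (space_used_gb / total_capacity_gb) * 100

# Example: 7,500 GB used of a 10,000 GB datastore cluster.
print(disk_space_used_pct(7500, 10000))  # 75.0
```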

Once I have created the super metrics I want, I will attach them to a package called SM – Cluster SuperMetrics.


The next step would be to tie this package to current Cluster resources as well as Cluster resources that will be discovered in the future. Navigate to Environment > Environment Overview > Resource Kinds > Cluster Compute Resource. Shift-select the resources you want to edit, and click on Edit Resource.


Click the checkbox to enable “Super Metric Package,” and from the drop-down select SM – Cluster SuperMetrics.


To ensure that this SuperMetric package is automatically attached to future Clusters that are discovered, navigate to Environment > Configuration > Resource Kind Defaults. Click on Cluster Compute Resource, and on the right pane select SM – Cluster SuperMetrics as the Super Metric Package.


Now that we have created our super metrics and attached the super metric package to the appropriate resources, we are ready to begin editing our “Generic Scoreboard” widgets. I will explain how to edit two widgets (one for a full-clone cluster and one for a linked-clone cluster) with the appropriate data and show their output. We will then replicate the same procedure to cover every unique full-clone and linked-clone cluster. Here is an example of what the widget for a full-clone cluster should look like:


And here’s an example of what a widget for a linked-clone cluster should look like:


Once we replicate the same process and account for all of our clusters, our end-state dashboard should resemble something like this:


And we are done. A few takeaways from this lesson:

  • We delved into the concept of super metrics in this tutorial. Super metrics are awesome resources that give you the ability to manipulate metrics and display just the data you want. In our examples we created some fairly involved formulas, but a very simple example of why a super metric can be particularly useful is memory: vRealize Operations Manager displays memory metrics in KB, but how do we get it to display GB? Super metrics are your solution here.
  • Obviously, every environment is configured differently and therefore behaves differently, so you will want to tailor the dashboards and widgets according to your environment needs, but at the very least the above examples can be a good starting point to build your own widgets/dashboards.

In my next tutorial, I will walk through the steps for creating a high-level “at-a-glance” VDI dashboard that your operations command center team can monitor. In most organizations, IT issues are categorized by severity and then assigned to the appropriate parties by a central team that runs point on issue resolution, coordinating with different departments. What happens if a Severity 1 issue afflicts your VDI environment? How are these folks supposed to know what to look for before placing that phone call to you? This upcoming dashboard will make it very easy. Stay tuned!

Anand Vaneswaran is a Senior Technology Consultant with the End User Computing group at VMware. He is an expert in VMware Horizon (with View), VMware ThinApp, VMware vCenter Operations Manager, VMware vCenter Operations Manager for Horizon, and VMware Horizon Workspace. Outside of technology, his hobbies include filmmaking, sports, and traveling.

Overcoming Design Challenges with an Enterprise-wide Syslog Solution

By Martin Hosken

I’ve spent a lot of time helping my customers build a proper foundation for a successful implementation of vRealize Log Insight, and I’ve published a white paper that highlights key design challenges and how to overcome them. I’d like to share a brief overview with you here.

VMware vRealize Log Insight gives administrators the ability to consolidate logs, monitor and troubleshoot vSphere, and perform security auditing and compliance testing.

This white paper addresses the design challenges and key design decisions that arise when architecting an enterprise-wide syslog solution with vRealize Log Insight. It focuses on the design aspects of syslog in a vSphere environment and provides sample reference architectures to aid your design work and provide ideas about strategies for your own projects.

With every ESXi host in the data center generating approximately 250 MB of log file data a day, the need to centrally manage this data for proactive health monitoring, troubleshooting issues and performing security audits is something that many organizations continue to face every day.
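To get a feel for the scale, here is a rough back-of-the-envelope sketch in Python based on that 250 MB-per-day figure; the host count and retention period are hypothetical examples, not recommendations:

```python
MB_PER_HOST_PER_DAY = 250  # approximate figure cited in the post

def log_volume_gb(host_count: int, retention_days: int) -> float:
    """Rough raw syslog storage estimate for a fleet of ESXi hosts, in GB."""
    return host_count * MB_PER_HOST_PER_DAY * retention_days / 1024

# Example: 100 hosts retained for 30 days -> roughly 732 GB of raw log data.
print(round(log_volume_gb(100, 30), 1))
```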

Note: A symlink is a type of file that contains a reference to another file in the form of an absolute or relative path.

VMware vRealize Log Insight is a scalable and secure solution that includes a syslog server, a log consolidation tool and a log analysis tool, and it works with any type of device that can send syslog data, not only the vSphere infrastructure.

As with any successful implementation project, planning and designing a solution that meets all the requirements set out by the business is key to ensuring success, and developing a design that is scalable, resilient and secure is fundamental to achieving this. This includes keeping in mind the requirements of your business leaders, system administrators and security auditors as well.

To read the entire whitepaper, click HERE.

Martin Hosken is a Senior Consultant, VMware Professional Services EMEA