
Category Archives: Support

Platform Services Controller (PSC) and vCenter Server 6 Maximums

By Petr McAllister

One of my customers successfully completed the VMware vSphere: Fast Track [V6] class. The customer gave a lot of positive feedback about the class and about the new functionality in vSphere 6. However, one thing was unclear: the instructor stated there is a maximum of 10 VMware solutions per vCenter. So the question was, “When we run a complex environment with multiple vCenter servers, vRealize Operations servers, vRealize Automation, vRealize Orchestrator, Site Recovery Manager and backup appliances, how can we fit all those solutions under the 10-solution limit?”

Finding the correct answer was straightforward: VMware has published a document called “Configuration Maximums vSphere 6.0,” and the information is right there. The document addresses exactly what my customer was asking:

“A VMware Solution is defined as a product that creates a Machine Account and one or more Solution Users (a collection of vSphere services) … At this time, only vCenter Server is defined as a fully integrated solution and counts against these maximums. Partially integrated solutions, such as vCenter Site Recovery Manager, vCloud Director, vRealize Orchestrator, vRealize Automation Center, and vRealize Operations, do not count against these defined maximums.”

It would be easy to conclude this blog post here, but the real point of my topic is slightly different. The PSC section of the “Configuration Maximums vSphere 6.0” document can be somewhat confusing: you’ll notice maximums specified “per vSphere Domain,” “per site,” or “per single PSC.”


The best way to understand PSC maximums is via a diagram found in the VMware Knowledge Base (KB) article, “List of recommended topologies for VMware vSphere 6.0.x,” which is a brilliant source of information on its own.

[Diagram: Site A and Site B with four vCenter Servers in a single SSO domain]

Assume User A has access to all four vCenter servers. When User A is authenticated in the Single Sign-On domain (also known as the vSphere domain), the user can:

  • Log in to Site A or Site B using the same credentials
  • See all four vCenter servers in the environment (because these vCenter servers are members of the same SSO domain)
  • Accomplish any task on any of the vCenter servers the user has permissions on, and perform operations that involve multiple vCenter servers, such as cross-vCenter vMotion (see the short sketch after this list)
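To make the “same credentials anywhere in the domain” behavior concrete, here is a minimal sketch using the open-source pyVmomi library. The hostnames and account below are hypothetical placeholders, and the certificate handling is suitable for a lab only.

```python
# Minimal pyVmomi sketch: the same SSO credentials authenticate against any
# vCenter Server in the vSphere (SSO) domain. Hostnames/credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

VCENTERS = ["vcenter-a1.example.com", "vcenter-b1.example.com"]  # hypothetical names
SSO_USER = "usera@vsphere.local"
SSO_PASS = "********"

context = ssl._create_unverified_context()  # lab use only; trust real certificates in production

for host in VCENTERS:
    si = SmartConnect(host=host, user=SSO_USER, pwd=SSO_PASS, sslContext=context)
    print(f"{host}: {si.content.about.fullName}")  # same user, different vCenter Server
    Disconnect(si)
```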

Just to be clear, here is another example: if User B has access to only one vCenter server, he or she can still log in with the same credentials at any site in the same SSO domain and perform any operation User B has permissions for, but only on that permitted vCenter server.

Now let’s move on to the “per single PSC” definition. The PSC can be installed embedded, on the same server as the other vCenter components, but in that case the embedded PSC serves only one vCenter server. For any multi-vCenter and/or multi-site configuration, the PSC has to be installed as an external module on a separate machine so it can serve multiple vCenter servers. The external PSC has its own maximums, specified in the “Configuration Maximums vSphere 6.0” document; these limits were introduced to ensure your infrastructure performs well.

The final term to explain here is “vSphere Site,” which is partially self-explanatory, but it would help to be a little bit more specific. The KB article “VMware Platform Services Controller 6.0 FAQs” has the best definition of a vSphere 6 site:

“A site in the VMware Directory Service is a logical container in which we group Platform Services Controllers within a vSphere Domain. You can name them in an intuitive way for easier implementation. Currently, the use of sites is for configuring PSC High Availability groups behind a load balancer.”

In other words, we expect the best possible connection between sites in terms of latency and bandwidth; however, in case of connectivity issues, every site can operate autonomously for a while with full functionality, except for operations that require a connection to another site. The PSCs synchronize again once the connection is restored. You might have more questions on the PSC in vSphere 6; if you do, the KB article noted above will answer most of them, and reading it is certainly a good investment of your time.


Petr McAllister is a VMware Technical Account Manager based out of Vancouver, British Columbia, Canada.

Code Stream: Bridging the Gap Between Development and Operations

By Kelly Dare

When our customers start automating their infrastructure, some of the first internal customers or users of their automation tools are almost always the software developers in their organization. Infrastructure-as-a-Service and software developers are a natural fit, since software developers need a high level of autonomy to get their machines created on their timelines and to their specs. The business typically supports this wholeheartedly, since the software they are developing is often crucial to the business and/or generates revenue.

However, there is a fundamental conflict between the goals of the development (Dev) and operations (Ops) groups. Dev wants to release software fast and often, integrating small changes into the code base. Ops wants slower, well-tested releases, because more churn means more chance for things to go wrong. A great deal of the time, the software development and release process includes some combination of automated and manual steps in a complex workflow. That works well for a slower release model, but when you attempt to move to an accelerated release pace, these complexities and manual steps become bottlenecks, and the process ends up straining your organization.

VMware understands these issues, and has created Code Stream—an automated DevOps tool—as part of the VMware vRealize Automation suite. It enables our customers to release their software more frequently and efficiently, with a high level of collaboration among Dev and Ops teams. If your organization has a Continuous Delivery or DevOps initiative, Code Stream can significantly accelerate your progress in those areas. Using Code Stream does not require you to change anything about your current process. You can begin by modeling your current process, and Code Stream will mature—along with your processes—all the way to a fully automated release cycle if you so choose.
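Conceptually, modeling your current process means describing it as ordered stages made up of automated tasks and manual gates. The sketch below is purely illustrative Python for that modeling idea; it is not Code Stream’s actual object model or API.

```python
# Purely illustrative: a release process expressed as ordered stages, each with
# tasks that are either automated or manual approval gates. The manual gates are
# typically what you automate over time as the pipeline matures.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    name: str
    automated: bool = True  # manual gates are the usual bottlenecks

@dataclass
class Stage:
    name: str
    tasks: List[Task] = field(default_factory=list)

pipeline = [
    Stage("Development", [Task("Build"), Task("Unit tests")]),
    Stage("Test",        [Task("Deploy to test"), Task("Regression suite")]),
    Stage("Production",  [Task("Change approval", automated=False), Task("Deploy to prod")]),
]

manual_gates = [(s.name, t.name) for s in pipeline for t in s.tasks if not t.automated]
print("Manual steps to automate over time:", manual_gates)
```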


With vRealize Automation, you can leverage just about any existing automation processes you already have by moving them into the extensible framework of vRealize Automation. Similarly, you can take advantage of nearly any software lifecycle tools you have already invested in by connecting them into the extensible framework of Code Stream. Your current source control system, testing frameworks, and build/continuous integration tools can remain the same; you simply begin to access them via Code Stream rather than multiple interfaces. Code Stream includes Artifactory for intelligent storage of all your binary artifacts, which allows for the use of nearly any provisioning and configuration management tool. You can bring along all your existing development tools such as Puppet, Chef, SaltStack, or even plain old scripts, and continue to build on them in Code Stream.
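Because Artifactory exposes a REST API, a deployment step can resolve an already-tested binary instead of rebuilding it. The sketch below shows that idea with the Python requests library; the server, repository, and artifact path are hypothetical.

```python
# Sketch: resolve a build artifact from Artifactory before a deploy step.
# Server, repository, and path are hypothetical; requires the "requests" package.
import requests

BASE = "https://artifactory.example.com/artifactory"            # hypothetical server
REPO_PATH = "libs-release-local/myapp/1.4.2/myapp-1.4.2.war"    # hypothetical artifact

# Artifact metadata (size, checksums) via the storage API
meta = requests.get(f"{BASE}/api/storage/{REPO_PATH}", timeout=30)
meta.raise_for_status()
print("Checksums:", meta.json().get("checksums"))

# Download the binary itself for the deployment step
artifact = requests.get(f"{BASE}/{REPO_PATH}", timeout=30)
artifact.raise_for_status()
with open("myapp-1.4.2.war", "wb") as fh:
    fh.write(artifact.content)
```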

[Screenshot: Code Stream]

Once your existing model is in Code Stream, you can continue to further automate your software delivery pipeline as much as you please, up to a fully automated model. Users can reference the Release Dashboard at any time to view the current status of any release, as well as drill down into the details of each deployment if needed.

For more information about Code Stream, follow the links below or ask your VMware account team!


Kelly is a Technical Account Manager for VMware based in Austin, Texas and serving accounts in the Austin and San Antonio areas. She has worked in many capacities in the technology field, and enjoys drawing on those varied experiences to assist her customers. When not working, she stays very busy with reading, cooking, crafts, and most of all lots of family time with her husband and three kids – one infant, one preschooler, and one high-schooler!

 

 

vRealize Operations Manager – Architecture

By Carl Olafson

vRealize Operations Manager v6.x is a completely redesigned operations management tool. From an architectural standpoint, vRealize Operations Manager is a major step up from vCenter Operations Manager, which was a two-VM vApp and could only scale up. vRealize Operations Manager v6.x is built on GemFire cluster technology and, as such, can also scale out for additional capacity. In addition, the Advanced and Enterprise editions allow vRealize Operations Manager High Availability (not to be confused with vSphere HA) to be enabled for fault tolerance. The remainder of this article covers key concepts and architecture terminology.

Cluster Technology and Scale-Up/Scale-Out Capacity

As mentioned, GemFire is a cluster technology, and for vRealize Operations Manager there is a cluster limit of 8 nodes in v6.0.x and 16 nodes in v6.1.x. This gives vRealize Operations Manager a scale-out capacity of 8 to 16 nodes, depending on version. In addition, each node/VM can be scaled up, from 4 vCPUs/16 GB vRAM (small) to 16 vCPUs/48 GB vRAM (large). From a best-practices standpoint, this brings up a few items that must be adhered to (a small planning sketch follows the list):

  1. For a multi-node cluster, all nodes must be the same scale-up size (small, medium or large). GemFire assumes all nodes are equal and distributes load across the cluster equally, so performance problems will occur if you have different-sized nodes in your vRealize Operations Manager cluster. You can adjust node size after the initial implementation as your environment grows.
  2. For a multi-node cluster, all nodes must have Layer 2 (L2) adjacency. GemFire cluster technology is latency-sensitive, so from a VMware supportability standpoint, placing the nodes of a cluster across a WAN or Metro Cluster is not supported.
  3. Proper sizing of the cluster and utilization of remote collectors are key to a successful implementation. The next article will cover this in detail.
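As a planning aid only, the sketch below checks a proposed analytics cluster against the constraints just described, using the node limits and sizing rules quoted in this article; it is not an official sizing tool.

```python
# Planning aid only: validate a vROps analytics cluster design against the
# constraints discussed above (numbers taken from this article).
MAX_NODES = {"6.0": 8, "6.1": 16}   # analytics cluster node limit per version

def check_cluster(version, node_sizes, same_l2_segment):
    """node_sizes: one entry per node, e.g. ["large", "large", "large"]."""
    problems = []
    limit = MAX_NODES.get(version)
    if limit is None:
        problems.append(f"Unknown version {version}")
    elif len(node_sizes) > limit:
        problems.append(f"{len(node_sizes)} nodes exceeds the {limit}-node limit for v{version}.x")
    if len(set(node_sizes)) > 1:
        problems.append("All cluster nodes must be the same size (GemFire balances load equally)")
    if not same_l2_segment:
        problems.append("Cluster nodes must share Layer 2 adjacency (no WAN/Metro stretch)")
    return problems or ["Design is consistent with the documented constraints"]

print(check_cluster("6.1", ["large"] * 4, same_l2_segment=True))
```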

Node Types

For vRealize Operations Manager there are two primary types of nodes: cluster nodes and remote collectors.

Cluster Nodes

The cluster nodes participate in the vRealize Operations Manager cluster. There are three distinct sub-types.

  • Master node, which is the first node assigned to the cluster. The master node is also responsible for managing all the other nodes in the cluster.
  • Data nodes, which would make up the remaining nodes of a non-HA cluster.
  • Replica node, which is a backup to the master node should the master node fail. This assumes vRealize Operations Manager HA is enabled.

[Diagram: examples of vRealize Operations Manager cluster architectures]

Remote Collectors

Remote collectors do not participate in the vRealize Operations Manager cluster analytic process. However, the remote collector is an important node when you have a multi-site implementation or are using specific management packs that cannot be assigned to a cluster node. The remote collector only contains the Admin UI and the REST API component that allows it to talk to the vRealize Operations Manager cluster.

Although your cluster is limited to 8 or 16 nodes (depending on version), which determines your overall object collection capacity, you can add 30 to 50 remote collectors: 30 in version 6.0 and 50 in version 6.1. The remote collector’s object count applies against the cluster’s capacity, but remote collectors do not diminish the size or number of cluster nodes. With the release of vRealize Operations Manager v6.1, remote collectors can also be clustered, and an emerging best practice is to move all management packs/adapters to clustered remote collectors. This reduces the load on the analytics cluster and, because the remote collectors are clustered, provides a higher level of fault tolerance and efficiency.

The remote collector is an important design consideration if you are using management packs (like MPSD) or have vCenters across a WAN/Metro Cluster. If your vRealize Operations Manager cluster is going to collect from multiple vCenters over a WAN or utilize management packs, consult a qualified SME on your design for cluster nodes, remote collectors and level of fault tolerance. VMware Professional Services (PSO) provides vRealize Operations services ranging from Architecture to Operational Transformation.

[Diagram: multi-node vRealize Operations Manager cluster]

Load Balancer

A load balancer is another important design consideration for a multi-node cluster. vRealize Operations Manager v6.x does not currently ship with a load balancer, but it can utilize any third-party stateful load balancer. A load balancer ensures UI traffic is properly balanced across the cluster for performance, and it also simplifies access for users: instead of accessing each node individually, the user needs only one URL to reach the entire cluster and does not have to worry about which node is available.

[Diagram: multi-node cluster behind a load balancer]

 


Carl Olafson is a VMware Technical Account Manager based out of California.

vROPS 6.1 and the New End Point Operations Monitoring Feature

By Vegard Sagbakken

In the 6.1 release of vRealize Operations, VMware merged the Hyperic Monitoring solutions into vROPS. This makes it a lot easier to get a full holistic view through the vROPS management interface all the way down to services, processes and the application layer.

To use the OS Monitoring feature described here you need vRealize Operations Advanced licensing.

Currently we support the following OSs with this End Point Operations agent:

Operating System                              | Processor Architecture | JVM             | Scaling Considerations
Red Hat Enterprise Linux (RHEL) 5.x, 6.x, 7.x | x86_64, x86_32         | Oracle Java SE7 |
CentOS 5.x, 6.x, 7.x                          | x86_64, x86_32         | Oracle Java SE7 |
SUSE Enterprise Linux (SLES) 11.x, 12.x       | x86_64                 | Oracle Java SE7 |
Windows 2003 Server R2                        | x86_64, x86_32         | Oracle Java SE7 |
Windows 2008 Server, 2008 Server R2           | x86_64, x86_32         | Oracle Java SE7 |
Windows 2012 Server, 2012 Server R2           | x86_64, x86_32         | Oracle Java SE7 |
Solaris 10 or higher                          | x86_64, x86_32         | Oracle Java SE7 |
HP-UX 11.11 or higher                         | PA-RISC                | Oracle Java SE7 |
AIX 6.1, 7.1                                  | Power PC               | IBM Java SE7    |
Ubuntu 10.11                                  | x86_64, x86_32         | Oracle Java SE7 | For development environments only.

Here is an excellent example of what you can do with this feature: a dashboard showing the status of a Windows vCenter Server and of all the services running on it. This gives your operations team a great way of making sure all vCenter services are up and running, so they can act on any anomalies early, before they escalate into bigger issues. This example was created by Peter Tymbel, Sr. Consultant, PSO, VMware.
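As a rough stand-in for what such a dashboard surfaces, the sketch below polls Windows service state with the psutil Python library; the service names are examples only and vary between vCenter versions.

```python
# Rough stand-in for the dashboard idea: report the state of selected
# Windows services (Windows only; requires the "psutil" package).
# The service names below are examples and differ between vCenter versions.
import psutil

WATCHED = ["vpxd", "vmware-vpostgres", "VMwareSTS"]  # example service names

for name in WATCHED:
    try:
        svc = psutil.win_service_get(name)
        print(f"{svc.display_name()}: {svc.status()}")
    except psutil.NoSuchProcess:
        print(f"{name}: not installed on this host")
```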

Some of my colleagues at VMware have already documented the installation and use of this new functionality within vROPS. Please see the following blogs to get started, and don't forget to use the official documentation as well.

vROPS 6.1 – EPO Agents Installation Guide

http://www.vmignite.com/2015/10/vrops-6-1-epo-agents-installation-guide/

vRealize Operations 6.1 End Point with existing JRE

http://virtual-red-dot.info/vrealize-operations-6-1-end-point-with-existing-jre/

vROPS 6.1 – How to Monitor any Windows Service

http://www.vmignite.com/2015/10/vrops-6-1-how-to-monitor-any-windows-service/

vRealize Operations 6.1 End Point: how to add metrics

http://virtual-red-dot.info/vrealize-operations-6-1-end-point-how-to-add-metrics/

If you would like to dig further into this, I suggest you head over to the VMworld Hands-On Labs and launch “HOL-SDC-1601 Cloud Management with vRealize Operations Insight.” There you can play around in a pre-installed environment and see how the End Point Operations agents work.


Vegard Sagbakken is a Senior Technical Account Manager working out of Oslo, Norway. He currently holds multiple VMware certifications, including VCP 2-6 and VCAP-DCD 4-5.