
Monthly Archives: December 2015

Adding More Benefits for Our Valued Customers: 2015 TAM Service Enhancements

By Steve Kemp

Our global TAM community works relentlessly to improve TAM Services and to provide even more value to you, our VMware TAM customers. As 2015 draws to a close, we’re taking a moment to reflect on the value-added services and programs we’ve introduced this past year. The following is just a sampling of what we’ve put into play in 2015:

Expanded service offerings: Two new TAM services, NSX TAM and vRB TAM, focus on helping you succeed with new and advanced technologies, namely VMware network virtualization (NSX) and business intelligence (VMware vRealize Business Suite) offerings.

Direct access to specialized expertise: We have established a team of TAMs who are VMware product and technology experts in specific areas, and gave all of our TAMs a direct line to them. This structured, instant line of access makes all of our TAMs better at fielding requests and providing advice. Plus, it’s an ideal way for your TAMs to quickly get your feedback and requests to the right VMware product team.

On-demand access to high-value information: Three popular new communication tools give you instant access to high-value information about programs, education, break/fixes, and current alerts:

  • The TAM Source Newsletter is not your average newsletter. For starters, you don’t subscribe to it. Your TAM personally curates a custom newsletter just for you from a long list of articles, product announcements and other items, sending you only the subset that is most relevant to your needs. Be sure to ask your TAM about this cool new twist on the traditional newsletter.
  • The VMware TAM Blog was created in response to customer demand, and provides a steady stream of the latest news and information. You can bookmark it, sign up for the RSS feed, or follow us on Twitter (@vmwaretam).
  • We recently began hosting a series of TAM Customer Webcasts covering a variety of topics. Most of these webcasts are available on-demand after they are presented, so you can access them whenever it is convenient. From that same link, you can also view the upcoming live webinar events.

Expanded TAM offerings at VMworld: In case you missed it, the turnout at this year’s VMworld far exceeded expectations. And as usual, TAM Day was one of the most popular events. This year, in response to attendee feedback, we included an expanded, two-hour version of our Ask the Experts luncheon. TAM customers often tell us that this luncheon is the most impactful session they attend at VMworld because it gives them direct access to multiple experts in various product areas. Here, they can influence our product direction and/or get questions answered directly from the experts who are creating our solutions. We’re looking forward to building on these kinds of events to generate even more informative customer conversations at VMworld 2016 in Las Vegas (August 28 – September 1). We are planning to add more small-group sessions and one-on-one meetings with engineers, where you can hear about NDA programs, product roadmaps, and other insider developments. Hope to see you there!

We’re proud to have been a part of your success this year, and we look forward to creating even more value with new and enhanced programs and services in 2016. If you have suggestions for how we can improve our service offerings, please leave a comment or share them with your TAM.


Steve Kemp joined VMware in April 2006 and helped grow the TAM Program from 12 to over 300 TAMs globally. As Director of the US West TAM Program, he leads VMware’s premier customer success program for most of the private sector, drawing on 20 years of high-tech industry experience spanning software and hardware infrastructure, networking, and storage.

Code Stream: Bridging the Gap Between Development and Operations

By Kelly Dare

When our customers start automating their infrastructure, some of the first internal customers or users of their automation tools are almost always the software developers in their organization. Infrastructure-as-a-Service and software developers are a natural fit, since software developers need a high level of autonomy to get their machines created on their timelines and to their specs. The business typically supports this wholeheartedly, since the software they are developing is often crucial to the business and/or generates revenue.

However, there is a fundamental conflict between the goals of the Development (Dev) and Operations (Ops) groups. Dev wants to release software fast and often, integrating small changes into the code base. Ops wants slower, well-tested releases, because more churn means more chances for things to go wrong. Much of the time, the software development and release process combines automation and manual steps in a complex workflow. That works well for a slower release model, but when you attempt to move to an accelerated release pace, those complexities and manual steps become bottlenecks, and the process ends up straining your organization.

VMware understands these issues, and has created Code Stream—an automated DevOps tool—as part of the VMware vRealize Automation suite. It enables our customers to release their software more frequently and efficiently, with a high level of collaboration among Dev and Ops teams. If your organization has a Continuous Delivery or DevOps initiative, Code Stream can significantly accelerate your progress in those areas. Using Code Stream does not require you to change anything about your current process. You can begin by modeling your current process, and Code Stream will mature—along with your processes—all the way to a fully automated release cycle if you so choose.

[Screenshot: software manager download]

With vRealize Automation, you can leverage just about any automation processes you already have by moving them into its extensible framework. Similarly, you can take advantage of nearly any software lifecycle tools you have already invested in by connecting them to the extensible framework of Code Stream. Your current source control system, testing frameworks, and build/continuous-integration tools can remain the same; you simply begin to access them through Code Stream rather than through multiple interfaces. Code Stream includes Artifactory for intelligent storage of all your binary artifacts, which allows for the use of nearly any provisioning and configuration management tool. You can bring along existing tools such as Puppet, Chef, SaltStack, or even plain old scripts, and continue to build on them in Code Stream.
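As an illustration of that extensibility, an existing CI job can hand a finished build off to Code Stream over its REST API. The endpoint path, payload fields, and token handling below are assumptions made for the sketch, not taken from the product documentation:

```python
import json

# Hypothetical base URL; Code Stream exposes a REST API, but the exact
# paths and payload fields here are illustrative, not from the docs.
CODESTREAM_URL = "https://vra.example.com/release-management-service/api"

def build_execution_request(pipeline_id, properties):
    """Build an illustrative JSON body for triggering a pipeline run."""
    return {
        "pipelineId": pipeline_id,
        "description": "Triggered from CI",
        # Properties flow into the pipeline's stages (e.g. the build
        # number produced by your existing Jenkins job).
        "properties": {k: str(v) for k, v in properties.items()},
    }

body = build_execution_request("pipeline-42", {"buildNumber": 1138})
print(json.dumps(body, indent=2))
# A real call would then look something like:
# requests.post(f"{CODESTREAM_URL}/executions", json=body,
#               headers={"Authorization": f"Bearer {token}"})
```

The point is that the CI tool keeps doing what it does today; Code Stream simply becomes the single interface that ties the stages together.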

[Screenshot: Code Stream]

Once your existing model is in Code Stream, you can continue to further automate your software delivery pipeline as much as you please, up to a fully automated model. Users can reference the Release Dashboard at any time to view the current status of any release, as well as drill down into the details of each deployment if needed.
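Conceptually, the progression works like this: you model the pipeline as-is, manual gates included, and automation grows stage by stage. A minimal sketch of that idea (an illustration only, not Code Stream’s actual object model):

```python
# Conceptual sketch: a release pipeline as an ordered list of stages,
# each either automated or a manual gate. Maturing the pipeline just
# means flipping manual stages to automated ones over time.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    automated: bool
    status: str = "NOT_STARTED"  # NOT_STARTED | WAITING | COMPLETED

def run_pipeline(stages):
    """Run automated stages in order; pause at the first manual gate."""
    for stage in stages:
        if not stage.automated:
            stage.status = "WAITING"  # needs human approval, like today
            return stage.name
        stage.status = "COMPLETED"
    return None  # a fully automated run finished end to end

pipeline = [
    Stage("Build", automated=True),
    Stage("Unit Test", automated=True),
    Stage("UAT Sign-off", automated=False),  # today's manual step, modeled as-is
    Stage("Deploy to Prod", automated=True),
]
paused_at = run_pipeline(pipeline)
print(paused_at)  # "UAT Sign-off"
```

A dashboard view then reduces to reading each stage’s status, which is essentially what the Release Dashboard surfaces for every run.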

For more information about Code Stream, follow the links below or ask your VMware account team!


Kelly is a Technical Account Manager for VMware based in Austin, Texas and serving accounts in the Austin and San Antonio areas. She has worked in many capacities in the technology field, and enjoys drawing on those varied experiences to assist her customers. When not working, she stays very busy with reading, cooking, crafts, and most of all lots of family time with her husband and three kids – one infant, one preschooler, and one high-schooler!


vRealize Operations Manager – Architecture

By Carl Olafson

vRealize Operations Manager v6.x is a completely redesigned operations management tool. From an architectural standpoint, it is vastly superior to vCenter Operations Manager, which was a two-VM vApp that could only scale up. vRealize Operations Manager v6.x is built on GemFire cluster technology, so it can also scale out for additional capacity. In addition, the Advanced and Enterprise editions allow vRealize Operations Manager High Availability (not to be confused with vSphere HA) to be enabled for fault tolerance. The remainder of this article covers the key concepts and architectural terminology.

Cluster Technology and Scale-Up/Scale-Out Capacity

As mentioned, GemFire is a cluster technology; vRealize Operations Manager has a cluster limit of 8 nodes in v6.0.x and 16 nodes in v6.1.x, which gives it a scale-out capacity of 8–16 nodes, depending on version. In addition, each node/VM has scale-up capacity ranging from 4 vCPUs/16 GB vRAM (small) through 8 vCPUs/32 GB vRAM (medium) to 16 vCPUs/48 GB vRAM (large). From a best-practices standpoint, this brings up a few items that must be adhered to:

  1. For a multi-node cluster, all nodes must be the same scale-up size (small, medium, or large). GemFire assumes all nodes are equal and distributes load across the cluster equally, so performance problems will occur if the nodes in your vRealize Operations Manager cluster are different sizes. You can, however, adjust node size after the initial implementation as your environment grows.
  2. For a multi-node cluster, all nodes must have Layer 2 (L2) adjacency. GemFire cluster technology is latency sensitive, and from a VMware supportability standpoint, placing cluster nodes across a WAN or Metro Cluster is not supported.
  3. Proper sizing of the cluster and utilization of Remote Collectors is key to a successful implementation. The next article will cover this in detail.
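The first two rules lend themselves to a simple pre-deployment sanity check. A sketch using the limits quoted above (verify the exact figures for your build against the official sizing guidance):

```python
# Node sizes and cluster maximums as quoted in this article; treat them
# as illustrative and confirm against the sizing guidance for your build.
NODE_SIZES = {"small": (4, 16), "medium": (8, 32), "large": (16, 48)}  # (vCPUs, GB vRAM)
MAX_NODES = {"6.0": 8, "6.1": 16}

def validate_cluster(version, node_sizes):
    """Enforce the two hard rules: node-count limit and uniform node size."""
    if len(node_sizes) > MAX_NODES[version]:
        raise ValueError(f"v{version} supports at most {MAX_NODES[version]} nodes")
    if len(set(node_sizes)) > 1:
        # GemFire distributes load equally, so mixed sizes cause hot spots.
        raise ValueError("all cluster nodes must be the same size")
    vcpus, ram = NODE_SIZES[node_sizes[0]]
    return len(node_sizes) * vcpus, len(node_sizes) * ram  # total vCPUs, total GB

print(validate_cluster("6.1", ["large"] * 4))  # (64, 192)
```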

Node Types

For vRealize Operations Manager there are two primary types of nodes: cluster nodes and remote collectors.

Cluster Nodes

The cluster nodes participate in the vRealize Operations Manager cluster. There are three distinct sub-types.

  • Master node, which is the first node assigned to the cluster. The master node is also responsible for managing all the other nodes in the cluster.
  • Data nodes, which would make up the remaining nodes of a non-HA cluster.
  • Replica node, which serves as a backup to the master node should the master fail; this requires vRealize Operations Manager HA to be enabled.

Examples of vRealize Operations Manager cluster architectures.

[Figure: vRealize Operations Manager cluster]
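The three roles can be modeled conceptually; the sketch below (illustrative only, not product code) shows why enabling HA matters: without a replica, loss of the master leaves the cluster with no manager until recovery:

```python
# Conceptual model of cluster-node roles; not the actual implementation.
class Cluster:
    def __init__(self, ha_enabled):
        self.master = "node-1"                           # first node assigned
        self.replica = "node-2" if ha_enabled else None  # only exists with HA
        self.data_nodes = ["node-3", "node-4"]           # remaining nodes

    def master_failed(self):
        """Promote the replica to master, mirroring HA failover."""
        if self.replica is None:
            raise RuntimeError("no replica node: cluster unmanaged until the master is restored")
        self.master, self.replica = self.replica, None
        return self.master

cluster = Cluster(ha_enabled=True)
print(cluster.master_failed())  # node-2 takes over as master
```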

Remote Collectors

Remote collectors do not participate in the vRealize Operations Manager cluster analytic process. However, the remote collector is an important node when you have a multi-site implementation or are using specific management packs that cannot be assigned to a cluster node. The remote collector only contains the Admin UI and the REST API component that allows it to talk to the vRealize Operations Manager cluster.

Although your cluster is limited to 8–16 nodes (based on version), which determines your overall object collection capacity, you can have an additional 30–50 remote collectors: 30 in version 6.0 and 50 in version 6.1. A remote collector’s object count applies against the cluster’s capacity, but it does not diminish the size or number of cluster nodes. With the release of vRealize Operations Manager v6.1, remote collectors can also be clustered, and an emerging best practice is to move all management packs/adapters to clustered remote collectors. This reduces the load on the analytics cluster and provides a higher level of fault tolerance and efficiency.

The remote collector is an important design consideration if you are using management packs (like MPSD) or have vCenters across a WAN/Metro Cluster. If your vRealize Operations Manager cluster is going to collect from multiple vCenters over a WAN or utilize management packs, consult a qualified SME on your design for cluster nodes, remote collectors and level of fault tolerance. VMware Professional Services (PSO) provides vRealize Operations services ranging from Architecture to Operational Transformation.
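The placement guidance above reduces to a simple rule of thumb, sketched here (the limits are the ones quoted in this article; the decision function is an illustration, not a product algorithm):

```python
# Remote collector maximums as quoted in this article (verify per build).
MAX_REMOTE_COLLECTORS = {"6.0": 30, "6.1": 50}

def place_adapter(same_site_as_cluster, is_management_pack):
    """Rule of thumb: remote sites and management packs go on remote collectors."""
    if not same_site_as_cluster or is_management_pack:
        return "remote collector"
    return "cluster node"

# A vCenter across the WAN should never feed the analytics cluster directly.
print(place_adapter(same_site_as_cluster=False, is_management_pack=False))
# "remote collector"
```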

[Figure: Multi-node cluster with remote collectors]

Load Balancer

A load balancer is another important design consideration for a multi-node cluster. vRealize Operations Manager v6.x does not currently include a load balancer, but it can utilize any third-party stateful load balancer. A load balancer ensures UI traffic is balanced across the cluster for performance, and it simplifies access for users: instead of accessing each node individually, users need only a single URL to reach the cluster and need not be concerned with which node is available.
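What the load balancer provides for the UI tier can be sketched in a few lines: one entry point, rotation across nodes, and skipping any node that fails a health check. This is a conceptual illustration only; a production deployment would use a stateful third-party appliance with session persistence:

```python
# Toy round-robin selector with health checks; real load balancers add
# session persistence (stateful), SSL handling, and richer health probes.
import itertools

class RoundRobin:
    def __init__(self, nodes, is_healthy):
        self._cycle = itertools.cycle(nodes)
        self._is_healthy = is_healthy
        self._count = len(nodes)

    def pick(self):
        """Return the next healthy node, trying each node at most once."""
        for _ in range(self._count):
            node = next(self._cycle)
            if self._is_healthy(node):
                return node
        raise RuntimeError("no healthy vRealize Operations Manager nodes")

nodes = ["vrops-01", "vrops-02", "vrops-03"]
lb = RoundRobin(nodes, is_healthy=lambda n: n != "vrops-02")  # node 2 is down
print([lb.pick() for _ in range(3)])  # ['vrops-01', 'vrops-03', 'vrops-01']
```

Users see only the one virtual address; the unhealthy node is quietly bypassed.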

[Figure: Multi-node cluster behind a load balancer]



Carl Olafson is a VMware Technical Account Manager based out of California.