
Monthly Archives: April 2014

Configuration Management in the Cloud

by Kai Holthaus

“The Cloud” is one of the biggest paradigm shifts in the IT world. Instead of provisioning physical hardware in a physical data center and then managing applications running on that hardware, virtualization allows IT organizations to decouple logical infrastructure from physical infrastructure, and thereby deliver new-found flexibility to provide and manage value-add services.

Additionally, IT organizations can now provision infrastructure in the cloud alongside the infrastructure they already run themselves. Infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS) are now available to supplement or even replace traditional IT offerings.

One common use case is to monitor the performance of a web server. Once performance exceeds a certain threshold, a new server is automatically provisioned in the cloud. The cloud-based server is used as long as performance demands require it, and once demand drops below a certain threshold, it is decommissioned. All of this can happen in a matter of minutes and can be fully automated.
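
To make the pattern concrete, here is a minimal sketch of that burst-to-cloud loop in Python. The monitoring and provisioning clients (monitor, cloud), the image name, and the thresholds are illustrative placeholders, not a specific product API:

```python
import time

# Illustrative thresholds -- real values would come from capacity planning.
SCALE_UP_CPU = 80.0    # percent utilization that triggers a cloud burst
SCALE_DOWN_CPU = 30.0  # percent utilization below which the burst server is released
CHECK_INTERVAL = 60    # seconds between polls

def autoscale(monitor, cloud):
    """Poll web server utilization and provision/decommission a cloud server.

    `monitor` and `cloud` are hypothetical clients: monitor.cpu_percent()
    returns current utilization, and cloud.provision()/cloud.decommission()
    manage an IaaS instance. Both stand in for whatever tooling is in use.
    """
    burst_server = None
    while True:
        cpu = monitor.cpu_percent()
        if cpu > SCALE_UP_CPU and burst_server is None:
            burst_server = cloud.provision(image="web-server")   # takes minutes, fully automated
        elif cpu < SCALE_DOWN_CPU and burst_server is not None:
            cloud.decommission(burst_server)                     # release the resource when demand drops
            burst_server = None
        time.sleep(CHECK_INTERVAL)
```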

Different cloud-based offerings need to be managed differently, in terms of configuration management, to ensure that the needs of the customer can be met.

Configuration Management Principles
According to ITIL, the purpose of the service asset and configuration management process is to ensure that the assets required to deliver services are properly controlled, and that accurate and reliable information about those assets is available when and where it is needed. This information includes details of how the assets have been configured and the relationships between assets.

The objectives of configuration management are to:

  • Ensure that assets under the control of the IT organization are identified, controlled and properly cared for throughout their lifecycle
  • Identify, control, record, report, audit, and verify services and other configuration items (CIs), including versions, baselines, constituent components, their attributes and relationships
  • Account for, manage and protect the integrity of CIs through the service lifecycle by working with change management to ensure that only authorized components are used and only authorized changes are made
  • Ensure the integrity of CIs and configurations required to control the services by establishing and maintaining an accurate and complete configuration management system (CMS)
  • Maintain accurate configuration information on the historical, planned and current state of services and other CIs
  • Support efficient and effective service management processes by providing accurate configuration information to enable people to make decisions at the right time – for example, to authorize changes and releases, or to resolve incidents and problems

In summary, configuration management supports the management of services by providing information about how the services are being delivered. This information is crucial to other service management processes, especially change management, incident management, and problem management. It is also crucial to meeting all agreed-to service levels.
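
As a rough illustration of what a CMS holds, the sketch below models a CI with attributes and relationships in Python. The field names and the in-memory "CMS" are assumptions made for the example, not an ITIL-prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class ConfigurationItem:
    """One CI record: identity, version, attributes, and relationships to other CIs."""
    ci_id: str
    ci_type: str                      # e.g. "virtual-machine", "database", "service"
    version: str
    attributes: dict = field(default_factory=dict)
    relationships: list = field(default_factory=list)   # (relation, target ci_id) pairs

# A toy CMS: in practice this would be a controlled, audited system of record.
cms = {}

def register_ci(ci: ConfigurationItem):
    cms[ci.ci_id] = ci

def relate(source_id: str, relation: str, target_id: str):
    cms[source_id].relationships.append((relation, target_id))

# Example: a web service that runs on a virtual machine.
register_ci(ConfigurationItem("vm-042", "virtual-machine", "1.0", {"os": "Linux", "vcpus": 4}))
register_ci(ConfigurationItem("svc-web", "service", "2.3", {"owner": "web team"}))
relate("svc-web", "runs-on", "vm-042")
```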

Configuration Management and the Cloud
Let’s take a look at three common cloud-based offerings and the configuration management aspects to keep in mind.

[Figure: Service Asset and Configuration Management (SACM) in the Cloud]

Infrastructure as a service (IaaS) — IaaS offers computing resources, such as virtual machines, virtual networks, or virtual storage, as a service to customers. The consumer of IaaS services usually has control over the configuration aspects of the resource, such as which operating system to run on a virtual machine, or how to utilize the storage resource.

This means that the resources provisioned in an IaaS model would be CIs that should be managed in the traditional way, as if they were physical CIs. IaaS offers customers a lot of control over the configuration of these resources.

Platform as a service (PaaS) — PaaS is a service that offers a computing platform to its customers. A computing platform could include an operating system, programming language, execution environment, database, and web server, so that developers have a ready-made platform for their development tasks that can quickly be deployed in various environments. The management of the components of the PaaS is left to the service provider, who will need to meet service level agreements (SLAs).

With PaaS, configuration management could be performed on the individual components of the platform, such as the virtual machine, the operating system, and the database. It could also be performed at the service level, meaning there would be only one service-type CI for the platform, entered into and managed in the CMS.

Software as a service (SaaS) — SaaS provides entire application environments, such as HR or procurement applications, as a service. The service provider must meet SLAs, so that customers of the service will be able to use the software when and where they choose. Such service levels can include all aspects of utility and warranty, as well as incident resolution, problem resolution, or promised delivery time frames for specific service requests, such as a new login for the application. With SaaS, there should be only a single service-type CI to manage in the CMS.
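
To illustrate the difference in CI granularity across the three models, here is a hedged sketch of what the corresponding CMS records might look like. All identifiers, attributes, and SLA values are invented for the example:

```python
# CI granularity by delivery model (illustrative records, not a real CMS schema).

# IaaS: each provisioned resource is its own CI, managed as if it were physical.
iaas_cis = [
    {"ci_id": "vm-1001", "ci_type": "virtual-machine", "attributes": {"os": "Linux", "vcpus": 2}},
    {"ci_id": "vnet-07", "ci_type": "virtual-network", "attributes": {"cidr": "10.0.0.0/24"}},
]

# PaaS, option A: component-level CIs (VM, database, ...) with relationships between them.
paas_component_cis = [
    {"ci_id": "vm-2001", "ci_type": "virtual-machine"},
    {"ci_id": "db-2001", "ci_type": "database", "relationships": [("runs-on", "vm-2001")]},
]

# PaaS, option B: one service-type CI for the whole platform; components stay with the provider.
paas_service_ci = {"ci_id": "svc-dev-platform", "ci_type": "service",
                   "attributes": {"sla": "99.9% availability"}}

# SaaS: only a service-type CI, with the agreed service levels recorded as attributes.
saas_service_ci = {"ci_id": "svc-hr-app", "ci_type": "service",
                   "attributes": {"sla": "99.5% availability",
                                  "request_fulfillment": "new login within 1 business day"}}
```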

In summary, the need for good configuration management practices does not end when services (or parts of services) are moved to the cloud. It is still the service provider’s responsibility to ensure that services are being delivered as agreed to with the customers. Different cloud-based services, such as IaaS, PaaS, or SaaS, will require different levels of configuration management.

—-
Kai Holthaus is a transformation consultant with VMware Accelerate Advisory Services and is based in California. Follow @VMwareCloudOps on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags.

SDDC: Changing Organizational Cultures

By Tim Jones

I like to think of SDDC as “service-driven data center” in addition to “software-defined data center.” The vision for SDDC expands beyond technical implementation, encompassing the transformation from IT shop to service provider and from cost center to business enabler. The idea of “service-driven” opens the conversation to include the business logic that drives how the entire service is offered. Organizations have to consider the business processes that form the basis of what to automate. They must define the roles required to support both the infrastructure and the automation. There are financial models and financial maturity necessary to drive behavior on both the customer and the service provider side. And finally, the service definitions should be derived from use cases that enable customers to use the technology and define what the infrastructure should support.

When you think through all of the above, you’re really redefining how you do business, which requires a certain amount of cultural change across the entire organization. If you don’t change the thinking about how and why you offer the technology, then you will introduce new problems alongside the problems you were trying to alleviate. (Of course, the same problems will happen faster and will be delivered automatically.)

I compare the move to SDDC with the shift that occurred when VMware first introduced x86 virtualization. The shift to deploying multiple virtual machines, making more efficient use of resources that were previously wasted on physical servers, gathered momentum very quickly. But based on my experiences, the companies that truly benefited were those that implemented new processes for server requisitioning. They worked with their customers to help them understand that they no longer needed to buy today what they might need in three years, because resources could be easily added in a virtual environment.

The successful IT shops actively managed their environments to ensure that resources weren’t wasted on unnecessary servers. They also anticipated future customer needs and planned ahead. These same shops understood the need to train support staff to manage the virtualized environment efficiently, with quick response times and personal service that matched the technology advances. They instituted a “virtualization first” mentality to drive more cost savings and extend the benefits of virtualization to the broadest possible audience. And they evangelized. They believed in the benefits virtualization offered and helped change the culture of their IT shops and the business they supported from the bottom up.

The IT shops that didn’t achieve these things ended up with VM sprawl and over-sized virtual machines designed as if they were physical servers. The environment became as expensive as, or more expensive than, the physical-server-only environment it replaced.

The same types of things will happen with this next shift from virtualized servers to virtualized, automated infrastructure. The ability for users to deploy virtual machines without IT intervention requires strict controls around chargeback and lifecycle management. Security vulnerabilities are introduced because systems aren’t added to monitoring or virus scanning applications. Time and effort—which equate to cost—are wasted because IT continues to design services without engaging the business. Instead of shadow IT, you end up with shadow applications or platforms that self-service users create because what they need isn’t offered.

The primary way to avoid these mistakes is to remake the culture of IT—and by extension the business—to support the broader vision of offering ITaaS and not just IaaS.

Tim Jones is a business transformation architect with VMware Accelerate Advisory Services and is based in California. Follow @VMwareCloudOps on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags.

Forensic IT: Discover Issues Before Your End Users Do

by Paul Chapman, VMware Vice President Global Infrastructure and Cloud Operations

If you’ve ever watched five-year-olds playing a soccer game, there is very little strategy: all the kids swarm the field and chase the ball trying to score a goal.

Most IT departments take a similar sort of “swarming” approach to service incidents and problems when they occur.

For most of my career, IT has been a reactive business: we waited until there was a problem and then scrambled, often very effectively, to solve it. We were tactical problem solvers working in a reactive mode, and monitoring focused on availability and capturing service degradation rather than on being proactive and predictive, analyzing patterns to stay ahead of problems. In the new world of IT as a service, where expectations are very different, that model no longer works.

New and emerging forensics tools and capabilities allow IT to be proactive and predictive—to focus on quality of service and end-user satisfaction, which is a must in the cloud era.

Forensics: A new role for IT
As an example, new network forensics tools monitor and analyze network traffic, so it may seem a natural fit for network engineers to use them, but at VMware we found the required skillsets to be quite different. We need people who have an inquisitive mindset — a sort of “network detective” who thinks like a data analyst and can look at different patterns and diagnostics to find problems before they’re reported or before users feel the impact.

Those in newly created IT forensic roles may have a different set of skills than a typical IT technologist. They may not even be technology subject matter experts, but they may be more like data scientists, who can find patterns and string together clues to find the root of potential problems.

Adding this new type of role to the IT organization most definitely presents challenges, as it goes against the way IT has typically been done. But this shift to a new way of delivering service, moving from the traditional swarm model to a more predictive and forensics-driven model, means a new way of thinking about problem solving. Most importantly, forensics has the potential to create a significant reduction in service impact and to maintain a high level of service availability and quality.
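
As a rough illustration of the kind of pattern analysis such a role might automate, the sketch below flags response-time samples that deviate sharply from a rolling baseline. The data, window size, and deviation rule are illustrative assumptions, not a description of the specific tools VMware uses:

```python
from statistics import mean, stdev

def find_anomalies(samples, window=30, sigmas=3.0):
    """Flag response-time samples that deviate sharply from the recent baseline.

    `samples` is a list of (timestamp, response_ms) pairs in time order.
    A reading more than `sigmas` standard deviations above the rolling
    baseline is flagged for investigation -- before users start complaining.
    """
    anomalies = []
    for i in range(window, len(samples)):
        baseline = [ms for _, ms in samples[i - window:i]]
        mu, sd = mean(baseline), stdev(baseline)
        ts, ms = samples[i]
        if sd > 0 and ms > mu + sigmas * sd:
            anomalies.append((ts, ms, mu))
    return anomalies

# Example: a sudden latency spike is surfaced proactively rather than via a ticket.
history = [(t, 120 + (t % 5)) for t in range(60)] + [(60, 480)]
for ts, ms, baseline in find_anomalies(history):
    print(f"t={ts}: {ms:.0f} ms vs baseline {baseline:.0f} ms -- open a proactive investigation")
```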

Quality of service and reducing end user friction
Every time an end user has to stop and depend on another human to fix an IT problem, it’s a friction point. Consumers have come to expect always-on, 100 percent uptime, and they don’t want to take the time to open a ticket or pause and create a dependency on another human to meet their need. As IT organizations, we need to focus more on the user experience and quality of service—today’s norm of being available 100 percent of the time is table stakes.

With everything connected to the “cloud,” it’s even more important for IT to be proactive and predictive about potential service issues. Applications pull from different systems and processes across the enterprise and across clouds. Without the right analysis tools, IT can’t understand the global user experience and where potential friction points may be occurring. Too often, IT finds out about poor quality of service when users complain — perhaps even publicly on their social networks. Unless we get in front of possible issues and take an outside-in, customer-oriented view, we’re headed for lots of complaints around quality of service.

At VMware, we have seen a significant reduction in overall service impact since using network forensics, and we’re keeping our internal customers productive. Focusing on quality of service and finding people with the right skillsets to fill the associated roles has us unearthing problems long before our end users experience so much as a glitch.

———-
Follow @VMwareCloudOps and @PaulChapmanVM on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.