Monthly Archives: January 2014

Using vCloud Suite to Streamline DevOps

By: Jennifer Galvin

A few weeks ago I was discussing mobile app development and deployment with a friend. This particular friend works for a company that develops mobile applications for all platforms on a contract-by-contract basis. It’s a good business. But one of their key challenges is the time and effort required to set up a client’s development and test environment before development can start. Multiple platforms need to be provisioned, and development and testing tools that may be unique to each platform must be installed. This often means maintaining large teams with specialized skills, as well as a broad range of dev/test environments.

I have always been aware that VMware’s vCloud Suite can speed up the deployment of applications (even complex application stacks), but I didn’t know whether long setup times were common in the mobile application business. So I started to ask around:

“What is the shortest possible time it would take your development teams to make a minor change to a mobile application, on ALL mobile platforms – Android, iPhone, Windows, BlackBerry, etc.?”

The answers ranged between “months” and “never”.

Sometime later, after presenting VMware’s Software Defined Datacenter vision to a tech meetup in Washington, D.C., a gentleman approached me to discuss the question I had posed. While he liked the SDDC vision, he wondered if I knew of a way to use vCloud Suite and software-controlled everything to speed up mobile development. So I decided to sketch out how the blueprints and automated provisioning capabilities of the vCloud Suite could help speed up application development on multiple platforms.

First, let’s figure out why this is so hard in the first place – after all, mobile development SDKs are frameworks, and while it takes a developer to write an app, the SDK is still doing a lot of the heavy lifting. So why does this still take so long? As it turns out, there are some major obstacles to deal with:

  • Mobile applications always need a server-side application to test against: mobile applications interact with server-side applications, and unless your server-side application is already a stable, multi-tenant application that can deal with the extra performance drain of 32 developers running amok (and you don’t mind upsetting your existing customers), you’re going to need to point them at a completely separate environment.
  • The server-side application is complex and lengthy to deploy: Deploying a 3-tier web application with its infrastructure (including networking and storage), scrubbed production database data to provide some working data, and front-end load balancing is the same kind of deployment you did when the application initially went into production. You’re not going to start development on your application any time soon unless this process speeds up (and gets more automated).

Let’s solve these problems by getting a copy of the application (and a copy of production-scrubbed data) out into a new testing area so the developers can get access to it, fast. vCloud Suite gives server-side application developers a framework to express the application’s deployment as a blueprint – one that captures not just the code, but all the properties needed to automate the deployment, and that can consume capacity from on-premises resources as well as the public cloud. That means that when it comes time to deploy a new copy (with the database refreshed and available), it’s as easy as a single click of a button.
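To make that concrete, here is a minimal, purely illustrative sketch in Python of what such a blueprint might capture. The field names, the three-tier layout, and the deploy function are my own assumptions for illustration – not vCloud Suite’s actual blueprint schema or API – but they show the idea: code, infrastructure, data refresh, and placement policy all live in one machine-readable definition that can be deployed on demand.

```python
import json

# Hypothetical blueprint for the server-side application the mobile teams test against.
# Field names are illustrative only, not the actual vCloud Suite blueprint schema.
blueprint = {
    "name": "mobile-backend-test",
    "placement": ["on-premises", "public-cloud"],  # capacity can come from either
    "tiers": [
        {"role": "load-balancer", "count": 1, "image": "lb-template"},
        {"role": "web",           "count": 2, "image": "web-template",
         "properties": {"app_package": "backend-latest.war"}},
        {"role": "database",      "count": 1, "image": "db-template",
         "properties": {"data_source": "scrubbed-prod-snapshot"}},  # refreshed working data
    ],
    "networks": ["app-net", "db-net"],
}

def deploy(bp):
    """Stand-in for the 'single click': in practice the blueprint would be handed
    to the provisioning engine; here we simply print the deployment request."""
    print("Requesting deployment of:", json.dumps(bp, indent=2))

if __name__ == "__main__":
    deploy(blueprint)
```

Because everything the deployment needs is declared up front, spinning up a fresh copy for a new contract (or a new team of contractors) is a repeat of the same request, not a new engineering project.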

Since the underlying infrastructure is virtualized, compute resources can be reallocated, or workloads migrated, to make room for the new server-side application. Other testing environments can even be briefly powered down so that this testing (which is our top priority) can proceed.

Anyone can deploy the application, and what used to take hours and teams of engineers can now be done by one person. However, we are still aiming to deploy this on all mobile platforms. In order to put all of our developers on this challenge, we first need to ensure they have the right tools and configurations. In the mobile world, that means more than just installing a few software packages and adjusting some settings. In some cases, that could mean you need new desktops, with entirely different operating systems.

Not every mobile vendor offers an SDK on all operating systems, and in fact, there isn’t one operating system that’s common to the top selling mobile phones today.

For example, you can only develop iOS applications using Xcode, which runs only on Mac OS X. Both Windows and Android development rely on SDKs that run on Windows, and each has dependencies on external libraries to function (especially Android). Many developers favor MacBooks running VMware Fusion to accommodate all of these different environments, but what if you decide that, to re-write the application quickly, you need some temporary contractors? Those contractors are going to need those development environments, with the right SDKs and testing tools.

This is also where vCloud Suite shines. It provides Desktop as a Service to those new contractors. The same platform that allowed us to provision the entire server-side application allows us to provision any client-side resources they might need.

By provisioning all of the infrastructure at once, we are now ready to redevelop our mobile app. We can spend developer time on development and testing, making it the best app possible, instead of wasting resources on deploying work environments.

Now, let’s think back to that challenge I laid out earlier. Once you start deploying your applications using VMware’s vCloud Suite, how long will it take to improve your mobile applications across all platforms? I bet we’re not measuring that time in months any longer. Instead, mobile applications can be improved in just a week or two.

Your call to action is clear:

  • Implement vCloud Suite on top of your existing infrastructure and public cloud deployments.
  • Streamline your application development process by using vCloud Suite to deploy both server- and client-side applications, dev and test environments, dev and test tools, and sample databases – for all platforms – at the click of a button.

Follow @VMwareCloudOps on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.

A Critical Balance of Roles Must Be in Place in the Cloud Center of Excellence

By: Pierre Moncassin

There is a rather subtle balance required to make a cloud organization effective – and, as I was reminded recently, it is easy to overlook it.

One key requirement to run a private cloud infrastructure is to establish a dedicated team, i.e. a Cloud Center of Excellence. As a whole, this group acts as an internal service provider in charge of all technical and functional aspects of the cloud, but it also deals with the user-facing aspects of the service.

But there is an important dividing line within that group: the Center of Excellence itself is split into Tenant Operations and Infrastructure Operations. Striking a balance between these teams is critical to a well-functioning cloud. If that balance is missing, you may encounter significant inefficiencies. Let me show you how that happened to two IT organizations I talked with recently.

First, where is that balance exactly?

If we look back at the Cloud Operating Model (described in detail in ‘Organizing for the Cloud’), we have not one, but two teams working together: Tenant Operations and Infrastructure Operations.

In a nutshell, Tenant Operations own the ‘customer-facing’ role. They want to work closely with end-users. They want to innovate and add value. They are the ‘public face’ of the Cloud Center of Excellence.

On the other side, Infrastructure Ops only have to deal with one customer – Tenant Operations. They also handle hardware, vendor relationships and, generally, the ‘nuts and bolts’ of the private cloud infrastructure.

Cloud Operating Model

But why do we need a balance between two separate teams? Let’s see what can happen when that balance is missing, with two real-life IT organizations I met a little while back. For simplicity I will call them A and B – both large corporate entities.

When I met Organization A, it had only a ‘shell’ Tenant Operations function. In other words, their cloud team was almost exclusively focused on infrastructure. The result? Unsurprisingly, they scored fairly high on standardization and technical service levels. End users either accepted a highly standardized offering, or had to jump through hoops to negotiate exceptions – neither option was quite satisfactory. Overall, Organization A struggled to add recognizable value for their end users: “we are seen as a commodity”. They lacked a well-developed Tenant Operations function.

Organization B had the opposite challenge. They belonged to a global technology group that specializes in large-scale software development. Application development leaders could practically set the rules about what infrastructure could be provisioned. Because each consumer group wielded so much influence, there was practically a separate Tenant Operations team for each software unit.

In contrast, there was no distinguishable Infrastructure Ops function. Each Tenant Operations team could dictate separate requirements. The overall infrastructure architecture lacked standardization – which risked defeating the purpose of a cloud approach in the first place. With the balance tilted towards Tenant Operations, Organization B probably scored highest on customer satisfaction – but only as long as customers did not have to bear the full cost of non-standard infrastructure.

******

In sum, having two functionally distinct teams (Tenant and Infrastructure) is not just a convenient arrangement, but a necessity to operate a private cloud effectively. There should be ongoing discussion and even negotiation between the two teams and their leaders.

In order to foster this dual structure, I would recommend:

  1. Define a charter for both teams that clearly outlines their respective roles and ‘rules of engagement.’
  2. Make clear that each team’s overall objectives are aligned, even though their roles are different. That could be reflected through management objectives for the leaders of each team. However, this also requires some governance to be in place to give them the means to resolve their disagreements.
  3. To help customers realize the benefits of standardization, consider introducing service costing (if not already in place) – so that consumers see the cost of customization.

Follow @VMwareCloudOps and @Moncassin on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.

Assembling Your Cloud’s Dream Team

By: Pierre Moncassin

Putting together a Cloud Center of Excellence (COE) is not about recruiting ‘super-heroes’ – but a matter of balancing skills and exploiting learning opportunities. 

On several occasions, I’ve heard customers who are embarking on the journey to the cloud ask: “How exactly do you go about putting together a ‘dream team’ capable of launching and delivering cloud services in my organization?” VMware Cloud Operations’ advice is fairly straightforward: put together a core team, known as the Cloud Center of Excellence, as early as possible. However, these team members will need a broad range of skills across cloud management tools, virtualization, networking and storage, as well as solid process and organizational knowledge – not to mention sound business acumen, as they will be expected to work closely with the business lines (far more so than in traditional IT silos).

This is why, at first sight, the list of skills can seem daunting. It need not be. The good news is that there is no need to try to recruit ‘super-heroes’ with impossibly long resumes. The secret is to balance skills, and to take advantage of several important opportunities to build them.

***

First let’s have a closer look at the skills profiles for the Cloud COE as described in our whitepaper ‘Organizing for the Cloud’. I won’t go into the specifics of each role, but as a starter, here are some of the core technical skills required (the list is not exhaustive):

  • Virtualization technologies: vSphere
  • Provisioning: vCAC and/or vCD
  • Workflow automation: vCO
  • Configuration and compliance: VCM
  • Monitoring/event management: vC Ops
  • Applications, storage, and virtual networking

But the team also needs members with a broad understanding of process and systems engineering principles, customer-facing service development skills, and a leader with sound knowledge of financial and business principles.

Few organizations will have individuals with all these skills ready from day one – but fortunately they do not need to. A Cloud COE is a structure that will grow over time, and the same applies to its skills base. For example, vCO scripting skills might be required at some stage in order to develop advanced automation scripts – but that level of automation might not be needed until the second or third phase of the cloud implementation, after the workflows are established. However, some planning is needed to have the skills available when they are required.

Make the most out of on-site experts:

Organizations usually start their cloud journey with a project team as a transitional structure. They generally have consultants from VMware or a consultancy partner on-site working alongside them. This offers an excellent opportunity both to accelerate the cloud project and to allow internal hires to absorb critical skills from those experts. However – and this is an important caveat – the knowledge transfer needs to be intentional. Organizations can’t expect a transfer of knowledge and skills to happen entirely unprompted. Internal teams may not always have the availability or training to absorb cloud-related skills ‘spontaneously’ during the project. Ad hoc team members often have emergencies from their ‘day job’ (i.e. their business-as-usual responsibilities) that interrupt their work with the on-site experts. So I advise planning knowledge exchanges early in the project. That will ensure that external vendors and consultants train the internal staff, and in turn, project team members can transfer their knowledge to the permanent cloud team.

Get formal training:

Along with informal on-site knowledge transfer, it is a good idea to plan formal classroom-based VMware training and certifications. Compared to a project-based knowledge exchange, formal training generally provides a deeper understanding of the fundamentals, and is also valuable to employees from a personal development point of view. Team members may have additional motivation to attend courses that are recognized in the industry, especially when they lead to a qualification such as VMware Certified Professional (VCP).

Build skills during the pilot project:

Many cloud projects begin with a pilot phase, where a pilot (i.e. a prototype installation) is deployed with cut-down functionality. This is a great opportunity to build skills in a ‘safe’ environment. Core team members get the chance to familiarize themselves with both the new technology and the stakeholders. For example, a Service Catalog becomes far more real once potential users and administrators can see and touch the provisioning functions in a tool like vCloud Automation Center. For technical specialists, the pilot can be a chance to learn new technologies and overcome any fear of change. Building a prototype early in the cloud project also gives teams the opportunity to play around with the tools and explore their capabilities.

A summary of how your IT organization can structure its Cloud Center of Excellence and prepare it for success:

  1. Plan to build up skills over time. Not all technical skills are required in depth from day one. Rather, look for a blend of technical skills that will grow and evolve as the cloud organization matures. The Cloud team is a ‘learning organization’.
  2. Plan ahead. Schedule formal and informal knowledge transfers – including formal training – between internal staff and external vendors and consultants.
  3. Make the most out of a pilot project. Create a safe learning environment where team members and stakeholders can acquire skills at their own pace.

Follow @VMwareCloudOps and @Moncassin on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.

From “Routine” Patch Management to Compliance-as-a-Service: How Cloud Transforms a Costly Maintenance Function into an Innovative Value-Adding Service

By: Pierre Moncassin

You might not think of Patch Management as the coolest part of running an IT operation (cloud or non-cloud). It is a challenging, often time-consuming part of keeping infrastructure secure. And in many industries, it is a critical dependency for meeting regulatory requirements. As a result, IT managers can be forgiven for thinking of it as hard work that just needs to get done.

But recent discussions with global vCloud customers have given me a new perspective on the Patch Management function. These customers’ service owners have quickly grasped some new possibilities that come with an automated cloud infrastructure, one of which involves exploring how to offer patch management services to their end users. While this is not a new concept for managed providers, it’s a step change for an internal IT department – and one that turns Patch Management into a pretty exciting role.

Let us look at the key elements of that change:

To start with, we now see the patching toolset as a key component of the end-user service. There is no point in offering patching services in a cloud environment without reliable automation to deliver them – the core toolset for patch reporting is of course VMware’s vCenter Configuration Manager (VCM), complemented by vCenter Orchestrator for automation of specific remediation workflows.

But beyond automated patch-level checking and deployment, VCM also opens up the possibility of adapting compliance reports for each user. Without going into too much detail here, it’s relatively straightforward to configure VCM in a ‘service-aware’ structure. VCM allows administrators to define virtual machine groups that match, for example, the virtual servers assigned to a specific division. Filters can be further set to extract only the patch information relating to that end user. Your VCM reports can then be exported into customer-facing reporting tools to provide customized compliance reports.
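As a simple illustration of that per-tenant filtering, here is a hedged sketch in Python. It assumes the patch status has already been exported from VCM into a CSV file with columns vm_name, patch_id and status, and that the tenant-to-VM mapping mirrors the machine groups the administrator defined; the file name, column names and group names are all hypothetical, and none of this is a VCM API.

```python
import csv
from collections import defaultdict

# Hypothetical mapping of virtual machines to consumer groups, mirroring the
# machine groups an administrator would define in VCM.
TENANT_GROUPS = {
    "finance":   {"fin-web-01", "fin-db-01"},
    "logistics": {"log-app-01", "log-app-02"},
}

def per_tenant_patch_report(csv_path):
    """Split an exported patch-status report (assumed columns: vm_name,
    patch_id, status) into one report per tenant group."""
    reports = defaultdict(list)
    with open(csv_path, newline="") as fh:
        for row in csv.DictReader(fh):
            for tenant, vms in TENANT_GROUPS.items():
                if row["vm_name"] in vms:
                    reports[tenant].append(row)
    return dict(reports)

if __name__ == "__main__":
    # "vcm_export.csv" is a placeholder for whatever export the reporting tool consumes.
    for tenant, rows in per_tenant_patch_report("vcm_export.csv").items():
        missing = [r for r in rows if r["status"].lower() != "installed"]
        print(f"{tenant}: {len(missing)} of {len(rows)} checked patches missing")
```

In practice the same filtering could be done inside the reporting tool itself; the point is simply that once machine groups map to consumers, customer-facing compliance views fall out almost for free.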

Make that switch to a service mindset. Of course, bringing the idea of ‘service’ into the patching activity means a mindset shift. It implies service levels and well-defined expectations on both sides. And it invites a financial perspective, too: the cost of delivering the services can be evaluated and potentially communicated back to the consumer. As the cloud organization matures, it may consider charging for those specific services.

Moving to a service approach for patching, then, can be a stepping-stone towards delivering further value-added services to the end user: not just routine patching, but other compliance services that can become more and more visible. Again, patching has moved well beyond its traditional role.

Next, integrate patching with Self-Service capabilities. As with any on-demand service, patching-as-a-service will need to be published in the Service Catalog. In all likelihood, patch management would be offered as an option complementing another service (e.g. server or application provisioning). There are many ways to publish such a service – on the service portal, for example (if there is such a dedicated portal), or directly within vCloud Automation Center (vCAC). In vCAC, a patch management service could be made available either at provisioning time or, potentially, at runtime when a machine is already running (vCAC, for example, can raise a ticket with the service desk to make the request).

Beyond the Service Catalog, there is also interesting integration potential if patching requirements are ‘pre-built’ into the vCAC blueprint. In a nutshell, vCAC can be configured to select the patching option that will be applied by vCenter Configuration Manager at runtime. In my view, that type of integration has considerable potential – potential I’ll explore in a separate blog in the near future.
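To show the kind of integration I have in mind, here is a small, hypothetical sketch: each blueprint carries a patching choice as a custom property, and provisioning merges it with any per-request override so the configuration-management tool can act on it once the machine is built. The property name patch.policy, the blueprint names and the provision function are illustrative assumptions, not the actual vCAC/VCM integration.

```python
# Illustrative only: a provisioning request that carries a patch-policy choice
# as a custom property pre-built into the blueprint. Property and blueprint
# names are hypothetical, not the real vCAC/VCM integration.

BLUEPRINT_DEFAULTS = {
    "web-server-standard": {"patch.policy": "baseline-monthly"},
    "pci-app-server":      {"patch.policy": "pci-strict"},
}

def provision(blueprint_name, overrides=None):
    """Merge the blueprint's pre-built patching option with any per-request
    override, producing the properties the patching tool would later act on."""
    properties = dict(BLUEPRINT_DEFAULTS[blueprint_name])
    properties.update(overrides or {})
    # In a real integration, the configuration-management tool would read this
    # property after the machine is deployed and apply the matching policy.
    print(f"Provisioning {blueprint_name} with patch policy {properties['patch.policy']!r}")
    return properties

if __name__ == "__main__":
    provision("web-server-standard")
    provision("pci-app-server", {"patch.policy": "pci-strict-plus-hotfixes"})
```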

Lastly, communicate. It’s an obvious part of a service mindset, but also one that’s easy to overlook: if you are adopting this new way of looking at patch management, you need to ensure that two-way communication takes place with your end users, whether to define the service, to publish it, or during its delivery. This is an extended, if not new function for the service owners.

In sum:

Patch Management – often associated with ‘hard work’ done in the background – is transformed with Cloud Operations:

  • ‘Hard work’ becomes ‘service design work’ – a common theme across VMware Cloud Operations
  • Team focus shifts from ‘keeping the servers running’ to a more creative activity closely engaged with consumers – offering a new, value-adding service
  • Patching services set the foundation for more comprehensive services under the umbrella of Compliance-as-a-Service
  • Technical integrations between self-service provisioning and patch management can be leveraged to open new avenues for self-service automation

Follow @VMwareCloudOps and @Moncassin on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.

Aligned Incentives – and Cool, Meaningful New Jobs! – In the Cloud Era

By: Paul Chapman, VMware Vice President Global Infrastructure and Cloud Operations

Transforming IT service delivery in the cloud era means getting all your technical ducks in a row. But those ducks won’t ever fly if your employees do not have aligned incentives.

Incentives to transform have to be aligned from top to bottom – including service delivery strategy, operating model, organizational construct, and individual job functions. Otherwise, you’ll have people in your organization wanting to work against changes that are vital for success, and in some cases almost willing them to fail.

This can be a significant issue with what I call ‘human middleware.’ It’s that realm of work currently done by skilled employees that is both standard and repeatable: install a database; install an operating system; configure the database; upgrade the operating system; tune the operating system; and so on.

These roles are prime candidates for automation and/or digitization – allowing the same functions to be performed more efficiently, more predictably, and game-changingly faster, and giving the IT organization the flexibility it needs to deliver IT as a Service.

Of course, automation also offers people in these roles the chance to move to more meaningful and interesting roles – but therein lies the aligned incentive problem. People who have built their expertise in a particular technology area over an extended period of time are less likely to be incentivized to give that up and transition to doing something ‘different.’

Shifting Roles – A VMware Example

Here’s one example from VMware IT – where building out a complete enterprise SDLC instance for a complex application environment once took 20 people 3-6 weeks.

We saw the opportunity to automate the build process in our private cloud and, indeed, with blueprints, scripting, and automation, what took 20 people 3-6 weeks now takes 3 people less than 36 hours.

But shifting roles and aligning incentives was also critical to making this happen.

Here was our perspective: the work of building these environments over and over again was not hugely engaging. Much of it involved coordinating efforts and requesting task work via ticketing systems, but people were also entrenched in their area of expertise and years of gained experience, so they were less inclined to automate their own role in the process. The irony was that in leveraging automation to significantly reduce the human effort and speed up service delivery, we could actually free people up to do more meaningful work – work that in turn would be much more challenging and rewarding for them.

In this case, employees went from doing standard, repeatable tasks to higher-order blueprinting, scripting, and managing and tuning the automation process. In many cases, though, these new roles required new but extensible skills. So in order to help them be successful, we made a key decision: we would actively help (in a step-wise, non-threatening, change-management-focused way) the relevant employees grow their skills. And we’d free them up from their current roles to focus on the “future” skills that were going to be required.

Three New Roles

So there’s the bottom line incentive that can shift employees from undermining a transformation to supporting it: you can say, “yes, your role is changing, but we can help you grow into an even more meaningful role.”

And as automation frees people up and a number of formerly central tasks fall away, interesting new roles do emerge – here, for example, are three new jobs that we now have at VMware:

  •  Blueprint Designer – responsible for designing and architecting blueprints for building the next generation of automated or digitized services.
  •  Automation Engineer – responsible for engineering scripts that will automate or digitize business process and or IT services.
  •  Services Operations Manager – responsible for applications and tenant operation services in the new cloud-operating model.

The Cloud Era of Opportunity

The reality is that being an IT professional has always been highly dynamic. Of the dozen or so different IT positions that I’ve held in my career, the majority don’t exist anymore. Constant change is the steady state in IT.

Change can be uncomfortable, of course. But given its inevitability, we shouldn’t – and can’t – fight it. We should get in front of the change and engineer the transformation for success. And yet too frequently we don’t – often because we’re incented to want to keep things as they are. Indeed, misaligned incentives remain one of the biggest impediments to accelerating change in IT.

We can, as IT leaders, shift those incentives, and with them an organization’s cultural comfort with regular change. And given the positives that transformation can bring both the organization and its employees, it’s clear that we should do all we can to make that shift happen.

Major Takeaways:

  • Aligning incentives is a key part of any ITaaS transformation
  • Automation will eliminate some roles, but also create more meaningful roles and opportunities for IT professionals
  • Support, coaching, and communication about new opportunities will help accelerate change
  • Defining a change-management strategy that gives employees the freedom and support they need for their transition is critical for success

Follow @VMwareCloudOps and @PaulChapmanVM on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.