
Tag Archives: IT Admins

Aligned Incentives – and Cool, Meaningful New Jobs! – In the Cloud Era

By: Paul Chapman, VMware Vice President Global Infrastructure and Cloud Operations

Transforming IT service delivery in the cloud era means getting all your technical ducks in a row. But those ducks won’t ever fly if your employees do not have aligned incentives.

Incentives to transform have to be aligned from top to bottom – including service delivery strategy, operating model, organizational construct, and individual job functions. Otherwise, you’ll have people in your organization wanting to work against changes that are vital for success, and in some cases almost willing them to fail.

This can be a significant issue with what I call ‘human middleware.’ It’s that realm of work currently done by skilled employees that is both standard and repeatable: install a database; install an operating system; configure the database; upgrade the operating system; tune the operating system, and so on.

These roles are prime candidates for automation and/or digitization – allowing the same functions to be performed more efficiently, more predictably, and dramatically faster, and giving the IT organization the flexibility it needs to deliver IT as a Service.

Of course, automation also offers people in these roles the chance to move to more meaningful and interesting roles – but therein lies the aligned incentive problem. People who have built their expertise in a particular technology area over an extended period of time are less likely to be incentivized to give that up and transition to doing something ‘different.’

Shifting Roles – A VMware Example

Here’s one example from VMware IT – where building out a complete enterprise SDLC instance for a complex application environment once took 20 people 3-6 weeks.

We saw the opportunity to automate the build process in our private cloud and, indeed, with blueprints, scripting, and automation, what took 20 people 3-6 weeks, now takes 3 people less than 36 hours.
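The shift from manual, ticket-driven builds to blueprint-driven builds can be sketched roughly as follows. This is a minimal illustration only: the blueprint structure, tier names, and `provision()` function are hypothetical, not the actual VMware tooling behind the example above.

```python
# Minimal sketch of blueprint-driven environment provisioning.
# The blueprint format and all names here are illustrative assumptions.

BLUEPRINT = {
    "name": "enterprise-sdlc",
    "tiers": [
        {"role": "db",  "os": "rhel6", "count": 2, "post_install": ["install_db.sh", "tune_db.sh"]},
        {"role": "app", "os": "rhel6", "count": 4, "post_install": ["install_app.sh"]},
        {"role": "web", "os": "rhel6", "count": 2, "post_install": ["install_web.sh"]},
    ],
}

def provision(blueprint):
    """Expand a blueprint into the concrete build tasks that were
    previously coordinated by hand via ticketing systems."""
    tasks = []
    for tier in blueprint["tiers"]:
        for i in range(tier["count"]):
            vm_name = f"{blueprint['name']}-{tier['role']}-{i}"
            tasks.append((vm_name, tier["os"], tier["post_install"]))
    return tasks

for vm_name, os_image, scripts in provision(BLUEPRINT):
    print(vm_name, os_image, scripts)
```

The point of the sketch: once the environment definition lives in a blueprint rather than in people's heads, the same three-person team can regenerate the whole stack on demand instead of coordinating twenty people for weeks.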

But shifting roles and aligning incentives was also very critical to making this happen.

Here was our perspective: the work of building these environments over and over again was not hugely engaging. Much of it involved coordinating efforts and requesting task work via ticketing systems, but people were also entrenched in their area of expertise and years of gained experience, so they were less inclined to automate their own role in the process. The irony was that in leveraging automation to significantly reduce the human effort and speed up service delivery, we could actually free people up to do more meaningful work – work that in turn would be much more challenging and rewarding for them.

In this case, employees went from doing standard repeatable tasks to higher-order blueprinting, scripting, and managing and tuning the automation process. In many cases, though, these new roles required new but extensible skills. So in order to help them be successful, we made a key decision: we would actively help (in a step-wise, non-threatening, change-management-focused way) the relevant employees grow their skills. And we’d free them up from their current roles to focus on the “future” skills that were going to be required.

Three New Roles

So there’s the bottom line incentive that can shift employees from undermining a transformation to supporting it: you can say, “yes, your role is changing, but we can help you grow into an even more meaningful role.”

And as automation frees people up and a number of formerly central tasks fall away, interesting new roles do emerge – here, for example, are three new jobs that we now have at VMware:

  •  Blueprint Designer – responsible for designing and architecting blueprints for building the next generation of automated or digitized services.
  •  Automation Engineer – responsible for engineering scripts that will automate or digitize business processes and/or IT services.
  •  Services Operations Manager – responsible for applications and tenant operation services in the new cloud-operating model.

The Cloud Era of Opportunity

The reality is that being an IT professional has always been highly dynamic. Of the dozen or so different IT positions that I’ve held in my career, the majority don’t exist anymore. Constant change is the steady state in IT.

Change can be uncomfortable, of course. But given its inevitability, we shouldn’t – and can’t – fight it. We should get in front of the change and engineer the transformation for success. And yet too frequently we don’t – often because we’re incented to want to keep things as they are. Indeed, misaligned incentives remain one of the biggest impediments to accelerating change in IT.

We can, as IT leaders, shift those incentives, and with them an organization’s cultural comfort with regular change. And given the positives that transformation can bring both the organization and its employees, it’s clear that we should do all we can to make that shift happen.

Major Takeaways:

  • Aligning incentives is a key part of any ITaaS transformation
  • Automation will eliminate some roles, but also create more meaningful roles and opportunities for IT professionals
  • Support, coaching, and communication about new opportunities will help accelerate change
  • A defined change-management strategy, with time freed up for employees and support through their transition, is critical for success

Follow @VMwareCloudOps and @PaulChapmanVM on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.

The Top 10 CloudOps Blogs of 2013

What a year it’s been for the CloudOps team! Since launching the CloudOps blog earlier this year, we’ve published 63 items and have seen a tremendous response from the larger IT and cloud operations community.

Looking back on 2013, we wanted to highlight some of the top performing content and topics from the CloudOps blog this past year:

1. “Workload Assessment for Cloud Migration Part 1: Identifying and Analyzing Your Workloads” by Andy Troup
2. “Automation – The Scripting, Orchestration, and Technology Love Triangle” by Andy Troup
3. “IT Automation Roles Depend on Service Delivery Strategy” by Kurt Milne
4. “Workload Assessment for Cloud Migration, Part 2: Service Portfolio Mapping” by Andy Troup
5. “Tips for Using KPIs to Filter Noise with vCenter Operations Manager” by Michael Steinberg and Pierre Moncassin
6. “Automated Deployment and Testing Big ‘Hairball’ Application Stacks” by Venkat Gopalakrishnan
7. “Rethinking IT for the Cloud, Pt. 1 – Calculating Your Cloud Service Costs” by Khalid Hakim
8. “The Illusion of Unlimited Capacity” by Andy Troup
9. “Transforming IT Services is More Effective with Org Changes” by Kevin Lees
10. “A VMware Perspective on IT as a Service, Part 1: The Journey” by Paul Chapman

As we look forward to 2014, we want to thank you, our readers, for taking the time to follow, share, comment, and react to all of our content. We’ve enjoyed reading your feedback and helping build the conversation around how today’s IT admins can take full advantage of cloud technologies.

From IT automation to patch management to IT-as-a-Service and beyond, we’re looking forward to bringing you even more insights from our VMware CloudOps pros in the New Year. Happy Holidays to all – we’ll see you in 2014!

Follow @VMwareCloudOps on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.

The Case for Upstream Remediation: The Third Pillar of Effective Patch Management for Cloud Computing

By: Pierre Moncassin

Patch Management fulfills an essential function in IT operations: it keeps your multiple software layers up to date, as free of vulnerabilities as possible, and consistent with vendor guidelines.

But scale that to an ever-dynamic environment like a VMware-based cloud infrastructure, and you have an extra challenge on your hands. Not only do the patches keep coming, but end users keep provisioning and amending their configurations. So how do you keep track of all these layers of software?

In my experience there are three pillars that need to come together to support effective patch management in the Cloud. The first two, policy and automation, are fairly well established. But I want to make a case for a third: upstream remediation.

As a starting point, you need a solid patching policy. This may sound obvious, but the devil is in the details. Such a policy needs to be defined and agreed across a broad spectrum of stakeholders, starting with the security team. This is typically more of a technical document than a high-level security policy, and it’s far more detailed than, say, a simple rule of thumb (e.g. ‘you must apply the latest patch within X days’).

A well-written policy must account for details such as exceptions (e.g. how to remedy non-compliant configurations); security tiers (which may have different patching requirements); reporting; scheduling of patch deployment, and more.

The second pillar is Automation for Patch Management. While the need for a patching policy is clearly not specific to Cloud Infrastructure, its importance is magnified in an environment where configurations evolve rapidly and automation is pervasive. And such automation would obviously make little sense without a well-defined policy. For this, you can use a tool like VMware’s vCenter Configuration Manager (VCM).

VCM handles three key aspects of patching automation:

  1. Reporting – i.e. verifying patch levels on selected groups of machines
  2. Checking for bulletin updates on vendor sites (e.g. Microsoft)
  3. Applying patches via automated installation

In a nutshell, VCM will automate both the detection and remediation of most patching issues.
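The detect-and-remediate loop above can be reduced to a tiny sketch. To be clear, this is not VCM's API: the machine inventory, the `required` patch set, and both functions are illustrative assumptions standing in for steps 1 and 3.

```python
# Hedged sketch of the detect/remediate loop described above.
# Patch IDs and the inventory are made-up illustrative data.

machines = {
    "web-01": {"KB001", "KB002"},
    "web-02": {"KB001"},
    "db-01":  {"KB001", "KB002", "KB003"},
}

# Step 2 (checking vendor bulletins) would feed this set; hard-coded here.
required = {"KB001", "KB002", "KB003"}

def compliance_report(machines, required):
    """Step 1: report which machines are missing which patches."""
    return {name: sorted(required - applied) for name, applied in machines.items()}

def remediate(machines, report):
    """Step 3: apply missing patches (modeled as marking them applied)."""
    for name, missing in report.items():
        machines[name].update(missing)

report = compliance_report(machines, required)
print(report)  # db-01 is already compliant; the web servers are not
remediate(machines, report)
```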

However, one other key step is easily overlooked – upstream remediation. In a cloud infrastructure, we want to remediate not just the ‘live’ configurations, but also the templates used for provisioning. This will ensure that the future configurations being provisioned are also compliant. Before the ‘cloud’ era, administrators who identified a patching issue might make a note to update their standard builds in the near future – but there would rarely be a critical urgency. In cloud environments where new machines might be provisioned, say, every few seconds, these updates need to happen much faster.

As part of completing any remediation, you also need to be sure to initiate a procedure to carry out updates to your blueprints, as well as to your live workloads (see the simplified process view above).

You need to remember, though, that remediating the images will depend on different criteria from the ‘live’ workload and, depending on the risk, may require a change request and related approval. You need to update the images, test that the updates are working, and then close out the change request.

In sum, this approach reflects a consistent theme across Cloud Operations processes: that the focus of activity is shifted upstream towards the demand side. This also applies to Patch Management: remediation needs to be extended to apply upstream to the provisioning blueprints (i.e. images).
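The upstream half of that remediation can be sketched in a few lines. All names and versions here are hypothetical, and the change-request/test/close steps are compressed into a comment; the point is only that templates get patched alongside live workloads so future machines are born compliant.

```python
# Sketch of downstream vs upstream remediation (illustrative names/versions).

live_vms  = {"app-01": "v1.0", "app-02": "v1.0"}
templates = {"app-template": "v1.0"}   # the provisioning blueprint images
patched   = "v1.1"

def remediate_live(vms, version):
    """Downstream: fix the running workloads."""
    for name in vms:
        vms[name] = version

def remediate_upstream(templates, version):
    """Upstream: raise a change request, update the image, test, close the CR
    (those surrounding steps are elided in this sketch)."""
    for name in templates:
        templates[name] = version

remediate_live(live_vms, patched)
remediate_upstream(templates, patched)

# A machine provisioned *after* upstream remediation starts compliant:
new_vm_version = templates["app-template"]
```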

Key takeaways:

  • Policy and automation are two well-understood pillars of patch management;
  • A less well-recognized third pillar is upstream remediation;
  • Upstream remediation addresses the compliance and quality of future configurations;
  • This reflects a common theme in Cloud Ops processes: that focus shifts to the demand side.

Follow @VMwareCloudOps and @Moncassin on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.

5 Key Steps to Optimizing Service Quality for Cloud-Based Services

By: Pierre Moncassin

Freebies can be hard to come by on budget airlines – but I recently received one in the form of a free lesson about designing service quality.

It was a hot day and I was on one of these ‘no-frills’ regional flights. This was obviously a well-run airline. But my overall perception of the service quickly changed after I asked for a glass of water from the attendant – who appeared to be serving refreshments generously to everyone on the flight. The attendant asked for my ticket and declared very publicly that I had the ‘wrong category’ of airfare: no extras allowed – not even a plastic cup filled with plain water.

Looking past the clichés about the headaches of no-frills airline travel, it did offer me a real lesson in service quality. The staff probably met all of their operational metrics – but that wasn’t enough to ensure an overall perception of even minimally acceptable quality. That impression was shaped by how the service had been designed in the first place.

The same paradox applies directly to cloud services. When discussing newly established cloud services with customers, I often hear that quality is one of their top three concerns. However, quality of service is often equated with meeting specific service levels – what I would call the delivery ‘effort’. I want to argue, though, that you can make all the effort you like and still be perceived as offering poor service, if you don’t design the service right.

Traditional Service – Effort Trumps Architecture

Both budget airlines and cloud-based services are based on a high level of standardization and economies of scale, and consumers are generally very sensitive to price/quality ratios. But if you offer customers a ‘cheap’ product that they regret buying, all of your efforts at driving efficiencies can be wasted. Design, in other words, impacts perception.

So how do you build quality into a cloud service without jacking up the price at the same time? The traditional approach might be to add ‘effort’ – more stringent SLAs, more operational staff, higher-capacity hardware resources. All of those will help, but they will also ‘gold-plate’ the service more than optimize its design – the equivalent of offering champagne to every passenger on the budget flight.

A Better Way

There is a more efficient approach – one that’s in line with the principles of VMware’s Cloud Operations: build quality upstream, when the service is defined and designed.

Here, then, are five recommendations that can help you Design First for Service Quality:

  1. From the outset, design the service end-to-end. In designing a service, we’re often tempted to focus on a narrow set of immediately important metrics (which might also be the easiest to measure) and ignore the broader perspective. But in the eyes of a consumer, quality hardly ever rests on a single metric. As you plan your initial design, combine ‘hard’ metrics (e.g. availability) with ‘soft’ metrics (e.g. customer surveys) that are likely to impact customer satisfaction down the line.
  2. Map your service dependencies. One common challenge with building quality in cloud services is that cloud infrastructure teams typically lack visibility into which part of the infrastructure delivers which part of the end user service. You can address this with mapping tools like VMware’s vCenter Infrastructure Navigator (part of the vCenter Operations Management Suite).
  3. Leverage key business-focused roles in your Cloud Center of Excellence. Designing a quality service requires close cooperation between a number of roles, including the Customer Relationship Manager, Service Owner, Service Portfolio Manager, and Service Architect (more on those roles here). In my view, Service Architects are especially key to building quality into the newly designed services, thanks to their ‘hybrid’ position between the business requirements and the technology. They’re uniquely able to evaluate the trade-offs between costs (i.e. infrastructure side) and perceived quality (business side). To go back to my airline, a good Service Architect might have decided at the design stage that a free glass of tap water is very much worth offering to ‘economy’ passengers (while Champagne, alas, is probably not).
  4. Plan for exceptions. As services are increasingly standardized and offered directly to consumers (for example, via VMware vCAC for self-provisioning), you’ll face an increasing need to handle exceptions. Perception of quality can be dramatically changed by how such user exceptions are handled. Exception handling can be built into the design, for example, via automated workflows (see this earlier blog about re-startable workflows); but also via automated interfaces with the service desk.
  5. Foster a true service culture. One major reason to set up a Cloud Center of Excellence as recommended by VMware Cloud Operations is to build a team totally dedicated to delivering high-quality services to the business. For many organizations, that requires a cultural change – moving to a truly consumer-centric perspective. From a practical point of view, the cultural change is primarily a mission for the Cloud Leader, who might, for example, want to set up frequent exchanges between the other Tenant Operations roles and lines of business.
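Recommendation 1 – blending ‘hard’ and ‘soft’ metrics from the outset – can be made concrete with a simple weighted score. The metrics, normalization, and weights below are entirely hypothetical; the sketch only shows how a soft metric like a survey can be given real weight alongside availability at design time.

```python
# Illustrative composite quality score mixing hard and soft metrics.
# All metric values and weights are assumptions, not VMware guidance.

def quality_score(metrics, weights):
    """Weighted blend of normalized metrics (each on a 0-1 scale)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[name] * metrics[name] for name in weights)

metrics = {
    "availability":  0.999,  # hard: fraction of time the service is up
    "provision_sla": 0.95,   # hard: fraction of requests met within SLA
    "survey":        0.70,   # soft: normalized customer-satisfaction score
}
weights = {"availability": 0.4, "provision_sla": 0.3, "survey": 0.3}

print(round(quality_score(metrics, weights), 3))
```

With these weights, near-perfect availability cannot mask a poor survey score – which is exactly the ‘glass of water’ lesson: the designed-in experience drags perceived quality down even when the hard metrics look fine.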

In conclusion, designing quality in cloud services relies on a precise alignment between people (organization), processes, and technologies – and on ensuring that alignment from the very start.

Of course, that’s exactly the ethos of Cloud Operations, which shifts emphasis from effort at run time (less significant, because of automation) to effort at design time (only needs to be done once). But that shift, it’s important to remember, is only possible with a cultural change.

Key Takeaways:

  • Service quality is impacted by your initial design;
  • Greater delivery effort might make up for design issues, but this is an expensive way to ‘fix’ a service after the fact;
  • A Cloud Ops approach lets you design first for service quality;
  • Follow our recommended steps for optimizing service quality;
  • Never under-estimate the cultural change required to make the transition.

Follow @VMwareCloudOps and @Moncassin on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.


Outside-In Thinking: A Simple, But Powerful Part of Delivering IT as a Service

By: Paul Chapman, VMware Vice President Global Infrastructure and Cloud Operations

Moving to deliver IT as a Service can seem like a complex and challenging undertaking. Some aspects of the move do require changing the organization and adopting a radically different mindset. But, based on my experience helping lead VMware IT through the IT as a Service transition, there are also straightforward actions you can take that are simple and provide lasting and significant benefits.

Using outside-in thinking as a guiding principle is one of them.

Thinking Outside-In Versus Inside-Out

Here’s just one example that shows how outside-in thinking led us to a very different outcome than we otherwise would have achieved.

Until fairly recently, there was no way for a VMware ERP application user to self-serve a password reset. Raising a service request or calling the helpdesk were the only ways to do it. Like most organizations, we have a lot of transient and irregular users who would forget their passwords, and this in turn created an average of 500+ password reset requests a month.

Each ticket or call, once received, took an elapsed time of about 15 minutes to resolve. That equated to one and a half people on our team tied up every day doing nothing but resolving ERP login issues, and, even more importantly, to unhappy users being placed in a holding pattern waiting to log in and perform a function.

As the VMware employee base grew, so did the number of reset requests.

The traditional, brute force IT approach to this problem would have been to add more people (volume-based hiring) to handle the growing volume of requests. Another, more nuanced, approach would be to use task automation techniques to reduce the 15 minutes down to something much faster. In fact, the initial IT team response was an approach that leveraged task automation to reduce the resolution time from 15 minutes to 5. From an inside-out perspective, that was a 66% reduction in process time. By any measure, a big improvement.

However, from the user – or outside-in – perspective, elapsed time for password reset includes the time and trouble to make the request, the time the request spends in the service desk work queue, plus the resolution time. Seen that way, process improvement yielded a shift from hours plus 15 minutes, to hours plus 5 minutes. From an outside-in perspective, then, reducing reset task time from 15 minutes to 5 minutes was basically irrelevant.
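A quick back-of-envelope check makes the contrast vivid. The post only says the request spends “hours” in the queue, so the 4-hour figure below is an assumed value for illustration.

```python
# Inside-out vs outside-in view of the password-reset improvement.
# queue time of 4 hours is an assumption; the post says only "hours".

QUEUE_MINUTES = 4 * 60  # assumed wait in the service-desk queue

def elapsed(resolution_minutes, queue=QUEUE_MINUTES):
    """User-perceived elapsed time: queue wait plus resolution work."""
    return queue + resolution_minutes

before, after = elapsed(15), elapsed(5)
inside_out_gain = (15 - 5) / 15           # the IT team's view of the task
outside_in_gain = (before - after) / before  # the user's view of the wait

print(f"inside-out: {inside_out_gain:.0%}, outside-in: {outside_in_gain:.0%}")
```

Under this assumption the task looks two-thirds faster from inside IT, yet only a few percent faster from the user's seat – which is why eliminating the task entirely beat automating it.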

Moving to Single Sign-On

Adopting that outside-in perspective, we realized that we were users of this system too and that eliminating the need for the task altogether was a far better approach than automating the task.

In this case, we moved our ERP application to our single sign-on portal, where VMware employees log on to dozens of business applications with a single set of credentials.

With single sign-on, those 500 plus IT support requests per month have gone away. IT has claimed back the time of 1.5 staff, and, more importantly, we’ve eliminated wait time and IT friction points for our users.

It’s a very simple example – but it illustrates how changing thinking can be a powerful part of delivering IT as a Service. Even before you reach anything like full game-changing digitization of IT service delivery, a shift in perspective can let you gain and build on relatively easy quick-wins.

Key Takeaways:

  • You can make big gains with small and simple steps en route to IT as a Service;
  • Take an outside-in perspective to IT;
  • Drive for new levels of self-service (a ‘zero touch,’ customer-centric world);
  • Think about operating in a “ticket-less” world where the “help-desk phone” should never ring;
  • Measure levels of agility and responsiveness in seconds/minutes, not hours/days;
  • Adopt the mindset of a service-oriented and change-responsive organization;
  • And understand that the transition is evolutionary – make step-wise changes to get there.

To learn more about outside-in thinking for IT, view this webcast with Paul Chapman and Ian Clayton.

Follow @VMwareCloudOps and @PaulChapmanVM on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.

Implementing a Cloud Infrastructure Is About Changing Mindsets: Three Ways Cloud Operations Can Help

By: Pierre Moncassin

A few weeks ago, I had the privilege of attending the first in a series of cloud operations customer roundtables in Frankfurt, Germany. The workshop was expertly run by my colleague Kevin Lees, principal consultant at VMware and author of “Organizing for the Cloud” as well as numerous VMware CloudOps blog posts.

Customer participation in the round table exceeded our expectations – and was highly revealing. It quickly became obvious that process and organization challenges ranked at the top of everyone’s priorities. They needed no convincing that a successful cloud deployment needs operations transformation in addition to leading-edge tools.

Even so, I was amazed how rapidly the conversation turned from technical strategy to organizational culture and, most importantly, changing mindsets.

I remember one customer team in particular outlining for us the challenge they face in operating their globally-distributed virtual infrastructure. They were acutely aware of the need to transform mindsets to truly leverage their VMware technology – and of how difficult that was proving to be.

For them, changing mindsets meant looking beyond traditional models, such as the monolithic CMDB (an idea deeply entrenched in physical IT). It also meant handling the cultural differences that come with teams based in multiple locations around the world: and, more than ever, the need to align teams with different functional objectives to common goals and gain commitments across boundaries.

To state the obvious, changing organizational mindsets is a vast topic, and many books have been written about it (with many more to come, no doubt). But here I want to explore one specific question: How can cloud operations help IT leaders, like our customer above, in their journeys to mindset change?

For them, I see three main areas where cloud operations can bring quick wins:

1) Create Opportunities to Think Beyond ‘Classic’ IT Service Management

Part of the journey to cloud operations is to look beyond traditional frames of reference. For some of our customer teams, the CMDB remains an all-powerful idea because it is so entrenched in the traditional ITSM world. In the world of cloud infrastructure, the link between configuration items and physical locations becomes far less rigid.

It is more important to create a frame of reference around the service definition and everything needed to deliver the service. But adopting a service view does require change, and that’s not something that we always embrace.

So how do you encourage teams to “cross the chasm?” One simple step would be to encourage individuals to get progressively more familiar with VMware’s Cloud Operations framework (by reading ‘Organizing for the Cloud,’ for example).

After that, they could take on a concrete example via a walk-through of some key tools. For example, a VMware vCenter Operations Manager demo can illustrate how a cloud infrastructure can be managed in a dynamic way. It would show how dashboards automatically aggregate multiple alerts and status updates. Team members would see how built-in analytics can automatically identify abnormal patterns (signaling possible faults) in virtual components wherever they are physically located. A demo of vCloud Automation Center’s use of blueprints to automate provisioning of full application stacks would show how new tools that leverage abstraction can help break through process-bound procedures that were developed for more physical environments.

All of this would build familiarity with, and likely excitement at, the possibilities inherent in cloud-based systems.

2) Break Down Silos with the Organizational Model

A key principle of VMware’s cloud operations approach is to break down silos by setting up a Center of Excellence dedicated to managing cloud operations. You can read more about how to do that in this post by Kevin Lees.

The main point, though, is that instead of breaking processes up by technology domain (e.g. Windows, UNIX) or by geography, Cloud Operations emphasizes a consistency of purpose and focus on the service delivered that is almost impossible to achieve in a siloed organizational structure.

Simply by creating a Cloud Infrastructure Operations Center of Excellence, you are creating a tool with which you can build the unity that you need.

3) Boost Team Motivation

Lastly, although a well-run cloud infrastructure should in itself add considerable value to any set of corporate results, don’t forget the influence held by individual team members facing a change in their work practice.

In particular, consider their likely answer to the question “What’s in it for me?”

Factors that might positively motivate team members include:

  • Acquiring new skills in leading-edge technologies and practices (including VMware certifications, potentially)
  • Contributing to a transformation of the IT industry
  • Being part of a well-defined, well-respected team, e.g. a Center of Excellence.

So, remember to make that case where you can.

Here, then, are three key ways in which you can leverage cloud operations to help change mindsets:

  1. Understand that moving to cloud is a journey. Every person has their own pace. Build gradual familiarity both with new tools and concepts. Check out more of our CloudOps blog posts and resources!
  2. Build a bridge across cultural differences with the Center of Excellence model recommended by VMware CloudOps.
  3. Explain the benefits to the individual of making the jump to cloud, e.g. being part of a new team, gaining new skills – and a chance to make history!

Follow @VMwareCloudOps on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.

The Changing Role of the IT Admin – Highlights from #CloudOpsChat

Last Thursday, we hosted our inaugural #CloudOpsChat on “The Changing Role of the IT Admin.” Special thanks to everyone who participated for making it an informational and thought-provoking conversation. We also wanted to thank Nigel Kersten (@NigelKersten) and Andrea Mauro (@Andrea_Mauro) for co-hosting the chat with us.

We kick-started #CloudOpsChat with the question, “Is increasing automation and virtualization good or bad for your career?”

Our co-host @Andrea_Mauro was the first to answer, making the point that IT is always evolving and you can’t realistically stay static in knowledge and skills. @KurtMilne agreed with Andrea, adding that more standardization and automation will help to foster the Industrial IT era and move away from the “artisanal” IT era, which is good for IT careers. Co-host @NigelKersten emphasized that IT needs to automate or prepare to be in an evolutionary dead-end in ops roles, adding that the business demands of today are too great not to do so. @andrewsmhay echoed Nigel’s thoughts, saying that the increase in automation and virtualization is good, taking a “survival of the fittest” standpoint – IT needs to evolve or perish. @ckulchar added to both Kurt and Andrea’s points, noting that IT needs to shift the focus to enabling app teams to effectively use cloud and not just port existing apps. @jakerobinson also joined the conversation, saying that increasing automation and virtualization is necessary in order to balance IT cost with capability.

With the discussion in full swing, we took to our next question: “How exactly does increasing automation change your job?”

@NigelKersten stated that increasing automation changes many roles, not just IT operations. @KurtMilne chipped in as well, saying that an increase in automation frees up your time to work on things that really matter, providing more value to your business. @jakerobinson had a similar opinion, explaining that automation eliminates human error, which means less unplanned work that he would have to take care of at a later time. @randwacker added that automation also allows businesses to move faster and be more innovative, which is a key value of Infrastructure-as-a-Service and cloud. @lamw offered a great analogy in answering this question, saying that not automating your infrastructure is like ignoring the existence of the assembly line in manufacturing.

We then asked our audience, “Do you think abstraction and better tools decrease the need for deep expertise?”

@DuncanYB thought so, but also added that abstraction does not result in a decrease of deep expertise, as you still need to build a strong foundation. @randwacker agreed with Duncan, as long as the tools package expertise with it. @KurtMilne added that automation and abstraction will definitely reduce the need for everyone to read 2-inch thick manuals. He made a point to say that someone will still need to read the manual in order to set up the automation, but from there others will be able to use the automation without reading it. @wholmes noted that deep expertise is needed in the development lifecycle of a solution, regardless of abstraction. He added that abstractions lessen the need for deep expertise in the operational phase of a solution. Both @NigelKersten and @KurtMilne agreed with @wholmes, saying that automation pushes expertise earlier in the service lifecycle.

Next, we asked our participants, “Do you think today’s cloud administrators need programming skills?”

@randwacker answered yes – cloud admins do need programming skills, but that’s quickly getting packaged. @DuncanYB hoped that they would not need programming skills, as he thought scripting was already difficult enough as it is. @NigelKersten pointed out to Duncan that programming could be easier than scripting, as better tools and interfaces make it easier to use the work of others. @jakerobinson said that cloud admins definitely need software development skills – from consuming APIs, as well as understanding agile methods. @ckulchar agreed, and added if cloud admins don’t learn the fundamentals of development, developers will learn cloud admins’ skills, resulting in a need to differentiate themselves. @wholmes said he hoped that cloud admins wouldn’t be required to have programming skills, but it all depends on the cloud.

From there, we asked participants, “Is PowerCLI better than your average scripting language?”

Both @lamw and @wholmes had similar viewpoints, saying that it may or may not be better, depending on your background, a view our co-host @Andrea_Mauro shared. @lamw added that you have to use the right tool for the job, and that the key point is: if there is an API, you can automate against it using a variety of tools, an idea that both @virtualirfan and @jakerobinson supported.

Staying with tools, we then asked: “What are the advantages of managing compute, storage and network resources from a single tool?”

Our co-host @Andrea_Mauro answered that one of the main advantages would be having complete control of all the resources. @NigelKersten added that attaching network and storage configuration to services allows for easier workload migration. @KurtMilne asked if it is reasonable to expect a single admin to effectively manage compute, storage and network, to which @wholmes said yes for provisioning, but not for end-to-end management. However, @kix1979 said that in the current IT environment, no single tool can manage compute, storage and network resources.

We concluded our discussion by asking, “What do you think is the one skill all IT admins should learn this quarter?”

@lamw offered a short and sweet answer: automation. @maishsk said that IT admins should learn Puppet or Chef, or even both. Co-host @Andrea_Mauro echoed William’s sentiment by saying that they should learn automation with a good framework, which @wholmes and @KurtMilne agreed with. Both @kix1979 and @jakerobinson believed it would be important for IT to learn the business value and costs of running IT services.
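The "learn a framework" advice points at the desired-state, idempotent model behind tools like Puppet and Chef: you declare the end state, and converging twice changes nothing. As an illustration of that idea only (this is not Puppet or Chef, just a plain-Python sketch against a throwaway temp file):

```python
import os
import tempfile

def ensure_line(path, line):
    """Idempotently ensure `line` is present in the file at `path`.

    Mirrors the desired-state model of tools like Puppet/Chef:
    describe the end state; re-running a converged run is a no-op.
    """
    existing = []
    if os.path.exists(path):
        with open(path) as f:
            existing = f.read().splitlines()
    if line in existing:
        return False          # already converged, no change made
    with open(path, "a") as f:
        f.write(line + "\n")
    return True               # a change was made

# Demo against a throwaway temp file.
cfg = os.path.join(tempfile.mkdtemp(), "sshd_config")
print(ensure_line(cfg, "PermitRootLogin no"))  # True: line added
print(ensure_line(cfg, "PermitRootLogin no"))  # False: nothing to do
```

Real configuration-management tools add dependency ordering, reporting, and a declarative language on top, but the safe-to-re-run property shown here is the core habit worth learning.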

Thanks again to everybody who listened or participated in our #CloudOpsChat, and stay tuned for details about our next #CloudOpsChat! Feel free to tweet us at @VMwareCloudOps with any questions or feedback, and join the conversation by using the #CloudOps and #SDDC hashtags.

A New Kind of Sys Admin

I’m going out on a limb: I predict that demand for IT professionals who keep complex systems running will grow in the next 5 years. Or 10 years. Or forever. Or at least until people and businesses realize that tech is a fad and start relying LESS on technology to do good work, connect with people, and make life better.

For this topic, let’s accept the claim that new technologies that abstract and automate resources in the data center or the cloud simultaneously reduce costs AND improve IT responsiveness.  Double value.  Good for business.

But what about people? Are new technologies good for careers in Infrastructure and Ops? More importantly, are they good for YOUR career?

Assume that a growing global population, coupled with a bigger global “tech footprint,” means an ever-growing IT industry and more jobs overall. More specifically, for IT admins pondering the impact of “the cloud” on their future, here’s how the job prospects look:

  • Single system specialists – cool
  • Multi-function generalists – warm
  • Admins who can program a little, and get things done with tools that abstract away system details – hot hot hot

Bottom line: even though much of the savings derived from more dynamic and distributed service delivery models (read “the cloud”) is Opex savings, there are, and will continue to be, exciting opportunities to work in IT.

There will be more opportunity to focus on adding business value, and less need to manage the fine-grained details of compute, storage, and network functions.

Here are a couple links for further reading:

Luke from Puppet Labs describes, “The rise of a new kind of administrator.”
Jasmine McTigue discusses, “IT Automation – good for business and IT careers.”

IT Joke – what people say about the water glass picture:

  • Optimist – sees a glass half full with lots of opportunity
  • Pessimist – sees a glass half empty with lots of waste
  • IT Engineer – sees a glass that is twice as big as the required capacity

Follow us on Twitter at @VMwareCloudOps for future updates, and join the conversation using the #CloudOps and #SDDC hashtags.