
Monthly Archives: June 2010

Don’t Stop Believin’ – A Virtualization Journey

Tim Stephan here, Sr. Director of Product Marketing for vSphere. 

I was just reading Journey to the Cloud via Virtualization – How Virtualization Helps Companies Transform their Approach to IT, an excellent article on Forbes.com by VMware VP of Desktop Products Vittorio Viarengo. The article reminded me of a conversation I recently had with a VMware customer – a division of a major manufacturer based in the Midwestern United States – that was going through its own, unique, virtualization-driven IT transformation. I might file this instance under the “don’t try this at home” category, but it does serve as a great example of how, once a customer realizes the value of VMware virtualization, it can spread very quickly throughout an organization – so Hold On!

The Journey Begins With Server Consolidation…

To give you some background, this customer had more than 50 virtualized servers running vSphere 4.0, primarily for the purpose of saving money through server consolidation. But, as the customer told me, word of these great savings started spreading through the business and reached upper management. And of course when that happens, meetings and conference calls get scheduled, and task forces and tiger teams are created, to figure out how the rest of the business can benefit – and so that each business manager can claim that bringing in VMware was their idea 🙂

…Momentum Builds…

So what happened over the course of these meetings was that the business got to see for itself not only the great benefits of server consolidation, but also the other application and infrastructure services provided by vSphere – vMotion, DRS, Distributed Power Management, High Availability, Disaster Recovery, Data Recovery, Storage vMotion, and so on. Management saw that applications were actually performing better on vSphere, with much higher levels of uptime, than they ever did in a physical environment.

…The IT Guy Becomes a Hero

Now, as my new buddy told it, he suddenly found himself in high demand. And I think he rather enjoyed his newfound celebrity – I mean, how often does the IT guy get to be the hero? (Actually, VMware virtualization is making the “IT guy as hero” scenario a much more common occurrence!) In his own words: “Now I had every business unit coming to me asking if they could further cut not only capital costs, but also operational costs, by virtualizing their enterprise applications. They were looking to virtualize SAP or SQL Server or Exchange or Siebel environments, and they wanted to know if virtualization would help them be more efficient and reduce downtime – of course, without impacting application performance. I told them, ‘Yes’. And, of course, their next question was always ‘How soon?’”

I’ve Already Virtualized You

See, here’s where the story gets interesting, because our friend the IT hero had already virtualized the majority of those applications months prior – without telling anyone. “I knew they’d never notice. If anything they’d just wonder why their apps were running so smoothly, and it made my job easier, so I figured, why not?” So, the business units were already realizing the benefits of cost savings, increased agility, and improved application uptime – they were already virtual and had no idea, because the transition was seamless. How did our IT hero break this news to the business units? “I told them it might take a few months to virtualize 🙂 – and they were thrilled that it could be done so quickly!”

Off to the Cloud b/w vSphere Can Take Us There

Of course now, a year later, the business units are all talking “cloud” and are calling meetings and creating tiger teams, and our IT guy is again in high demand. “No, I haven’t already moved everyone to the cloud without them knowing,” our friend said, laughing. “But IT and business are now working together to determine where Cloud Computing makes sense. With vSphere as our IT infrastructure, I think we’re in a good position, whether we choose to build out a private cloud in our own datacenter, or to leverage cloud services from an external service provider – or both. Either way I am looking forward to continuing our journey – this is exciting stuff!”

So this customer’s journey hasn’t happened exactly according to Vittorio’s framework, but it did follow the same steps. The customer first virtualized what it owned and realized the cost savings of server consolidation – the “IT Production” phase. They then made virtualization a company-wide strategy and virtualized their business-critical applications — the “Business Production” phase. And now, they’re on their way to IT as a Service – whether their IT department has told them or not!

You can reach me on Twitter at @VMwarevSphere. I look forward to hearing about your journeys through VMware virtualization.

The Power of Policy and the Danger of Drift

Hi, I’m Bob Quillin. I recently joined VMware from the EMC Ionix team, where I focused on product marketing for the Ionix product line, including Application Discovery Manager, Server Configuration Manager, and Application Stack Manager, and on how these products help customers on their virtualization journey.

The Journey Has Begun

Brittle and static – words often used to describe physical infrastructures architected for an era where applications and workloads were bound to the infrastructure silo below them and change was carefully controlled and often avoided.  And rightly so.  Studies pointed out that change contributed to the majority of incidents in physical datacenters – and many of these problems were self-inflicted.   IT management tools have mirrored this environment by focusing on managing these silos and carefully controlling change – trying to orchestrate every minute detail.

Fluid and elastic – new words in play as virtualization redesigns the architecture of the datacenter from the bottom up, turning silos on their sides – creating shared resource pools and freeing applications and workloads from the infrastructure they’ve been bound to.  That’s the good news.  The bad news is that existing management approaches need to adapt quickly to this changing world.  My colleague Martin Klaus wrote in more depth about those changes in a previous post.  And change is the operative concept, because with virtualization you can no longer expect to control change; change is inherent in the platform (vMotion, High Availability, dynamic allocation and balancing).

Change, Change, Change

If we agree that changes in the infrastructure are going to happen at an ever-accelerating rate, and accept that as both a good thing and a fact, then the next step is to figure out what this means for your management strategy.  There is a wide-ranging set of implications – I’ll address a few today and more in the future.  Today I want to focus on the power of policy, because it really all starts there.

Policies allow you to define desired or expected states for your infrastructure and applications.  Instead of reacting to every change that occurs – which will become virtually impossible with the ever-accelerating rate of change – policy-driven automation approaches allow you to compare discovered changes to their expected state and deal only with the exceptions or violations to policy.

It’s Midnight, Do You Know Where Your VMs Are?

Having a clear understanding of your current state is the first step.  And dynamic, agentless discovery is the only way to accurately track where your applications are running at any time.  An agentless, network-based approach has several advantages: it’s easy to deploy (no agents), it’s dynamic as it can detect applications and dependencies in real-time, all the time (critical to support virtualization’s high rate of change), and it works across physical and virtual environments (spanning physical and/or virtual switch traffic for complete visibility). Policy-based management requires a two-step, closed-loop system: always comparing discovered state to desired state and detecting exceptions to policy.  Without real-time visibility into current state – configuration and dependencies – you risk the danger of drift.  Dynamic discovery and configuration policy together make a dynamic duo.

Compliance and Remediation, Closing the Loop

As changes occur, policies determine whether you are compliant to any range of controls that you want or need to track.  Compliance policies range from corporate best practices and gold standards for your systems, servers, VMs, and applications to governance and regulatory standards you have to follow for your industry and auditors.  To close the loop, once drift from policy has been detected and the system flags a compliance violation, you need the ability to automatically remediate and bring the infrastructure back into compliance.  Then discovery continues, new changes are compared to policy, violations are flagged, and drift is managed – and so on and so on —the loop perpetuates.
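
To make the closed loop concrete, here is a deliberately simplified sketch in Python. It is not the API of ADM, SCM, or any other product; the policy rules, the discovery stub, and the remediation hook are hypothetical placeholders meant only to illustrate the discover-compare-remediate cycle described above.

    # Hypothetical sketch of policy-driven drift management: discover current state,
    # compare it to desired state, flag exceptions, and remediate. Not a real product API.
    import time

    # Desired state: each rule maps a configuration setting to the value policy expects.
    POLICY = {
        "ssh.root_login": "disabled",
        "ntp.server": "ntp.corp.example.com",
        "firewall.enabled": True,
    }

    def discover(system_name):
        """Stand-in for agentless discovery: return the system's current configuration."""
        return {
            "ssh.root_login": "enabled",           # has drifted from policy
            "ntp.server": "ntp.corp.example.com",
            "firewall.enabled": True,
        }

    def find_violations(current, policy):
        """Compare discovered state to desired state and return only the exceptions."""
        return {key: (current.get(key), wanted)
                for key, wanted in policy.items() if current.get(key) != wanted}

    def remediate(system_name, key, desired):
        """Stand-in remediation hook: push the setting back to its desired value."""
        print(f"{system_name}: resetting {key} -> {desired!r}")

    def compliance_loop(systems, interval_seconds=300):
        """Closed loop: keep discovering, flagging drift, and bringing systems back."""
        while True:
            for name in systems:
                for key, (found, wanted) in find_violations(discover(name), POLICY).items():
                    print(f"VIOLATION on {name}: {key} is {found!r}, expected {wanted!r}")
                    remediate(name, key, wanted)
            time.sleep(interval_seconds)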

Stay tuned at VMware as we continue to expand our capabilities in the areas of dynamic discovery, policy, configuration, and compliance.  VMware’s recent acquisition of select products from the EMC Ionix product line included Application Discovery Manager (ADM), which came originally from nLayers, and Server Configuration Manager (SCM) from Configuresoft.  These products leverage the power of policy and were designed to deal with the demands of dynamic virtual datacenters while helping customers manage their physical systems on their journey to cloud computing and IT as a service.

Paul Maritz at Structure 2010: transforming the IT landscape

Paul Maritz spoke this week at GigaOM Networks’ Structure 2010 Conference in a conversation with Om Malik. Their conversation started off with “The Big Shift in IT.” Paul pointed to two forces coming together that are transforming the landscape of our industry (slightly paraphrased):

  1. Within enterprises, there is a tremendous demand to do more with less. (70% of the IT spend goes to things that don’t differentiate one business from another.) The challenge is they can’t rewrite applications overnight. There is a tremendous demand to figure out how to run those applications more efficiently, not only from an infrastructure point of view but from an operational point of view. It turns out that virtualization is one of the few ways that you can really take existing applications on a journey, start severing some of the tentacles of complexity, and allow businesses to operate, whether it be internally or through external resources, in a cloud-like manner.

  2. There is a new world of new applications that come from various sources. Those applications, because they are new, can be written in fundamentally different ways, to make them available to users in different ways and to use the infrastructure in different ways.

This introduced what I think was one of the more interesting themes of the conversation: VMware’s “Open PaaS” strategy with acquisitions like SpringSource and Zimbra. Paul talked about how application frameworks are becoming important to reduce complexities and raise the level of the abstraction in the cloud; how these frameworks can help open the “black box” of the VM that will improve efficiencies in virtualization and cloud management and operations; and how these new apps are also shifting industry purchasing models.

The press picked up on Paul’s comments on the changing role of operating systems and layers of abstraction in the cloud, but I don’t think you can reduce what’s going on to a simple A vs. B scenario. 

Check it out for yourself. Here’s the video of Paul Maritz’s interview with Om Malik at GigaOM Network’s Structure 2010 as they talk about trends in IT, operating systems and the role of the hypervisor and the application framework, application and cloud portability, Salesforce.com, Google, Amazon, Microsoft, and more. It’s well worth 25 minutes of your time:

[Video: Paul Maritz in conversation with Om Malik, streamed by gigaomtv on livestream.com]

Can Applications Manage Themselves?

John Gilmartin here, Director of Product Marketing for Private Cloud products and solutions.  I just left the Gartner Infrastructure Operations and Management conference, where I attended a fascinating session on Cameron Haight’s research into public cloud providers.  He contrasted their architecture and management methods with those of the typical enterprise IT organization.  He covered a lot of ground, but here are some key ideas that stuck with me:

  • Cloud providers achieve server-to-admin ratios of 1,000:1 or more, while most enterprises are at less than 100:1. 
  • They achieve this through a focus on removing complexity and by empowering small teams.  They foster close interaction between operators and developers with “infrastructure as code” as a critical concept – applications are built with the infrastructure in mind. Public cloud operators experiment often and focus on improving those processes with the highest impact.  This all reminds me of the Toyota Production System, a highly efficient, team-oriented improvement methodology, focused on continuous learning and improvement. 
  • For Enterprise IT, systems management is usually an afterthought with management systems bolted on after applications are already deployed, adding complexity on top of complexity.  Meanwhile, managers enforce top down processes around “best practices” – think ITIL.  Cameron pointed out that these methods are rooted in Frederick Taylor’s early 20th century approach to management so popular with American car manufacturers until the 80’s.  (Remember what happened to American car manufacturing in the 80’s?).

As a practical example, it occurred to me that the complexity of implementing traditional Application Performance Management (APM) systems illustrates this well.  To build an APM solution, you map out the components of an application, define the dependencies, and collect a whole series of indirect measurements like processor utilization.  You populate all of this into a service model with a big historical database, spend a long time tuning the thresholds, and then dig through a bunch of charts to try and solve problems. 

Great in concept, but this service model replicates the complexity of the underlying application, yet is not intrinsically tied to the application.  When the business service changes (which happens often) the model must be updated.  Either you experience “model drift” as people forget to update the model in their haste to support the business, or you put in place bureaucratic change processes that slow the speed of business. This approach falls short in the traditional IT environment, so how can it possibly work in the exponentially more dynamic cloud environment?

Is there a better way?  In manufacturing, you have the concept of “design for manufacturability” (DFM).  Do enterprise developers need to “develop for management” by explicitly designing for operational management when building the application?  Can modern application frameworks like Spring make this easy for developers?  Can we at least assign outcome-based policies to applications and allow an intelligent infrastructure to automatically manage to those policies?
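
As a thought experiment, here is a small, hypothetical sketch of what an outcome-based policy might look like if an application declared it directly. The OutcomePolicy class, its thresholds, and the reconcile function are illustrative inventions, not part of Spring or any VMware product.

    # Hypothetical sketch of "develop for management": the application declares the
    # outcomes it needs, and an (imagined) intelligent infrastructure enforces them.
    from dataclasses import dataclass

    @dataclass
    class OutcomePolicy:
        max_response_ms: int   # the outcome the business actually cares about
        min_instances: int     # availability floor
        max_instances: int     # cost ceiling

    # The application ships with its policy instead of relying on a hand-built service model.
    CHECKOUT_POLICY = OutcomePolicy(max_response_ms=250, min_instances=2, max_instances=10)

    def reconcile(observed_response_ms, running_instances, policy):
        """Return the instance count that keeps the application within its declared outcomes."""
        if observed_response_ms > policy.max_response_ms:
            return min(running_instances + 1, policy.max_instances)   # scale out toward the outcome
        if observed_response_ms < policy.max_response_ms // 2:
            return max(running_instances - 1, policy.min_instances)   # scale back, keep the floor
        return running_instances

    # Example: latency is 400 ms with 3 instances running, so one more instance is added.
    print(reconcile(400, 3, CHECKOUT_POLICY))   # -> 4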

Share your opinions and let’s start a discussion on the future of management.

Three types of applications that are best suited for the Private Cloud

I had an experience recently that helped me understand how companies should think about deploying Private Clouds in their organizations.

I went to my local bank branch to deposit a check I had received. As I was about to enter the teller line, a bank employee asked me if I had used their new ATM station that supported automated check deposits. She walked me outside and trained me on how I could make my deposit without having to talk to a bank teller.

How brilliant was that! The bank had come up with a cost-effective way to service my request through standardization and automation! But to do this, they must have identified which types of requests could be serviced through the ATM and which ones would still be best served by the teller.

One of the questions we at VMware get a lot is, “If I want to take advantage of the rapid provisioning and self-service capabilities of a Private Cloud, what types of applications would benefit most from this model?”

We’ve found from the customers we’ve talked to that three types of applications tend to be the best suited for the Private Cloud:

  • Transient apps – applications that will be rapidly cloned or re-allocated, like staging or pre-production environments, are great candidates for automation, since there are very few configuration differences between environments.
  • Elastic apps – applications where resource demand will vary greatly over time, as with scientific computation, are also good choices, since resource capacity can be extended very easily.
  • The “Long Tail” of apps – applications that never get prioritized by IT, like a customized web farm for an extranet, are also a great fit, since a Private Cloud may finally allow these applications to be provisioned at a cost that justifies their relatively smaller value.

Of course, there’s no reason to lose focus on virtualizing all of your business-critical applications to get the benefits of higher utilization and lower cost.

But as IT thinks about shifting specific workloads to the Private Cloud to reduce costs and improve service, much like my bank shifted my request to the ATM, these types of applications are a perfect place to start. If your organization has applications that are transient, elastic, or considered “long tail,” I’d recommend accelerating your virtualization efforts with these apps and starting a dialogue with users about moving to a self-service Private Cloud model. In future posts, I’ll talk more specifically about how customers have actually gone about evolving virtualized architecture into Private Clouds and what types of learning and practice they gained in the process.

Operations Management in the Virtualized Environment – What’s different?

Hi, I’m Martin Klaus, a member of the vCenter product marketing team. Growing up in a small town in Europe, I often spent time in the kitchen watching my mom bake and cook meals for our family. Especially during the holiday season, I was amazed by her incredible skill at transforming sugar, butter, flour and spices into delicious cookies, cakes and tarts. She rarely needed to look up a recipe, and when she did, they were mostly hand-written, passed on by her mom, our grandmother. Even though my dad had bought her an electric mixer and other kitchen gadgets, she always made our favorite strudel completely by hand because it would taste better that way. It seemed to take forever, but when the warm scent of cinnamon filled the air, we knew she was done and it was time to devour the treats we loved. When we asked how she knew she had blended the right amount of ingredients into fluffy dough, she always said, “This is how grandma used to do it.”

What does this little story have to do with operations management, you might ask? Just like microwave ovens, ready-bake cake mixes and 30-minute meals have simplified cooking and shortened the time for food preparation for the home chef with a busy work schedule, virtualization has fundamentally changed how IT services can be delivered to the organization.

As a result, we need to revisit operations management and ask ourselves if the “old way” is still the most efficient way of delivering IT services in virtual environments.

Virtualization is as much about people and processes as it is about technology. Our most successful customers, those with more than 80% or 90% of their infrastructure virtualized, have adapted their IT processes and use virtualization for so much more than server consolidation. These IT departments can now complete more projects in the same amount of time, have more virtualized applications protected by disaster recovery plans, and adapt to change more rapidly.

So what has changed, and what are highly virtualized organizations doing differently? Let me frame the conversation with the following picture:

[Figure: IT Transformation]

Traditionally, IT operations management is done in silos. Every application is contained in its own hardware, OS, middleware and application stack. You have specialized teams that own and operate “their” application. Unless a major hardware refresh or software upgrade is needed, the application lives and dies with the hardware. As one major retailer told me, it would take them 18 months to release a new application into production, and all systems are completely locked down from change during the Q4 holiday season.

In this model, more applications require more specialized skills, processes and people who know how to operate the environment. Changes must be carefully planned because the time and cost to recover from a failed update is high. ITIL has emerged as a result of the need to document and make repeatable processes for problem, change and incident management.

Moving to the right side of the diagram above, the architecture of the Private Cloud is quite different from the traditional model because it is designed to deliver IT services to end users in a more scalable fashion.

As the foundation for the Private Cloud, virtualization enables server, storage and networking resources to be shared very efficiently across applications. Virtualization also allows you to standardize your service offerings. Templates for your corporate Windows or Linux images can be provisioned as virtual machines in minutes. Even higher-level server configurations with complete web, application and database server stacks can become building blocks for your Enterprise Java environments or SharePoint instances, further simplifying the provisioning process and lessening the need for one-off admin tasks. Automated backup, patch and update processes are additional benefits that are easy to realize with virtualized infrastructure.
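
To make the template idea a bit more concrete, here is a minimal sketch of cloning a virtual machine from a template using pyVmomi, the open-source Python SDK for the vSphere API. The vCenter hostname, credentials, and inventory names (template, cluster, datastore, new VM name) are hypothetical placeholders, and error handling and certificate verification are omitted for brevity.

    # Minimal sketch: provision a new VM by cloning a vSphere template (pyVmomi).
    # All names and credentials below are hypothetical placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    def find_by_name(content, vimtype, name):
        """Return the first inventory object of the given type with a matching name."""
        view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
        return next(obj for obj in view.view if obj.name == name)

    ctx = ssl._create_unverified_context()  # lab use only; verify certificates in production
    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    template = find_by_name(content, vim.VirtualMachine, "corp-win-template")
    cluster = find_by_name(content, vim.ClusterComputeResource, "prod-cluster")
    datastore = find_by_name(content, vim.Datastore, "shared-ds01")

    # Place the clone in the cluster's root resource pool, on shared storage.
    relocate = vim.vm.RelocateSpec(pool=cluster.resourcePool, datastore=datastore)
    clone_spec = vim.vm.CloneSpec(location=relocate, powerOn=True, template=False)

    # Kick off the clone task; vCenter handles the rest.
    task = template.Clone(folder=template.parent, name="web01", spec=clone_spec)
    print("Clone task submitted:", task.info.key)

    Disconnect(si)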

It is pretty clear that in the Private Cloud, the rate of change will increase rapidly as business teams request more applications and use external application and service providers as a benchmark against corporate IT. We hear from customers that anywhere between 10% and 30% of business applications already run outside the corporate firewall. It is mostly hosted HR, Sales and Marketing applications like Salesforce.com that we’re talking about today, but this trend is likely to accelerate as Infrastructure-, Platform- and Application-as-a-Service offerings become more viable from a security and compliance standpoint.

On the flipside, staffing levels and IT budgets will not increase, and IT organizations will need to do things differently to keep up with the demand and cost pressures — as customers with highly virtualized environments have already discovered. IT will need to transition into a new role as a focal point for the central administration of all infrastructure and application services — regardless of how they’re sourced.

In highly virtualized environments and the private cloud, operations management must focus on three questions:

  1. How do we automate tasks and do more with less?
  2. How do we manage the service levels of infrastructure and applications?
  3. How do we optimize our resource utilization to get more return on our investments?

In my next few blog posts I’ll examine each of these areas in more detail. I’ll share with you what I’ve been hearing from customers that excel in these categories, and I’ll also talk about some of the work we’re doing to support the people and process transformation that will simplify operations management in the private cloud.

In the meantime, please post a comment on how virtualization has impacted operations management in your organization.


Cloud computing: it’s an approach, not a destination

Hi, my name is Murthy Mathiprakasam. I joined VMware last week as a product marketing manager.

The first few weeks at any job are a blur of ramping up, learning the lingo, and generally drinking from the fire hose.  At VMware, this is all in hyperdrive.  Not only is there a lot of exciting stuff going on over here, there is also a dizzying number of tech terms and acronyms.  Through it all, one word has come up in almost every meeting and conversation I’ve had: Cloud.  

Now, I have to admit, I’ve always found the term and the buzz around it kind of confusing.  It seems like everyone has their own definition of Cloud, and every technology company has a Cloud angle.  Isn’t VMware all about virtualization?

So, I set out to understand VMware’s definition of Cloud, and the first thing my new boss said was:   “It’s an approach, not a destination.”

He expanded on this, offering the following perspective:  Cloud Computing is an approach to computing that leverages the efficient pooling of on-demand, self-managed virtual infrastructure, consumed as a service.

And then, it all came together for me.   

We live in a world of getting “exactly what I want, exactly how I want it, exactly when I want it.”  People buy shoes, apps, car insurance whenever they want.  They buy music and movies however they want (a single song, a 2 day “rental,” a custom channel).  In all these cases, it doesn’t matter where the product is coming from – only that we get what we want, when we need it.

This is the promise of cloud computing: bringing the “what I want, how I want it, and when I want it” to the business.

And here is where I began to understand VMware’s vision.  There is no quick answer for “going to The Cloud.”  The approach is for IT departments to take an evolutionary path from virtualizing their data centers to developing a Private Cloud architecture that can be bridged with external Public Cloud resources to get the full flexibility of a Hybrid Cloud.  For IT departments to go down this evolutionary path, they are going to have to change the way they work. 

This is Cloud Computing.

It will take me a few more weeks to understand some of the concepts, the changes, and the technologies that will make all of this possible.  I do understand that virtualization is central to this approach. Some of the words I’m picking up are agility, self-service, security, policy-driven… but even if I don’t understand the details, I understand the potential.  Today’s datacenter isn’t built to deliver “what, how, when.”  At VMware, we are helping customers transform the way IT operates  – transform the datacenter  – in order to better serve the line of business (and us business consumers) in a way that is streamlined, responsive, and cost-effective.  

I’m looking forward to learning more about this approach and hope you are too.

Avoiding the 100 VM Hangover

Hi, I’m David Friedlander, product marketing manager for our virtualization management products. When virtualization first started to go mainstream 5 or 6 years ago, most customers were only using it in limited, non-production environments. In those days, you might have had 20 or 30 VMs in a test environment. Today, many enterprises have anywhere from a few hundred to more than 25,000 VMs, running everything from file and print servers to complex enterprise applications and databases. As the number of VMs has grown, so too has the need to manage the environment effectively.

There’s no magic point where you suddenly need to pay a lot more attention, but many customers tell us that once they have more than 100 VMs, keeping tabs on the environment the old way begins to get tough. Think of it this way – if you own a small vineyard and only produce an average of 25 to 50 bottles a month, you can probably do all the bottling by hand in your kitchen, and track your sales and inventory in a spreadsheet. When your business grows, you may find yourself suddenly producing 100 or 200 bottles a month. Being in demand is a good thing. But, suddenly you’re spending hours every day bottling, packing, selling and shipping. You have so much inventory in your garage that there’s no space for the cars. And then maybe some wine magazine gives your little-known wine an astounding rating, and you’re overwhelmed.

That’s what has happened with virtualization. You had a few small successes, and before you knew it, everyone was asking you to virtualize everything. After all, it looks so simple; it’s no wonder people assume it’s both easy and free to get a VM. But there are two problems (other than the fact that you know the VMs aren’t free): the sheer volume of requests and the number of VMs make it hard to keep up, and the old tools you have for taking orders, creating the product, tracking the inventory and all the rest aren’t cutting it. It’s kind of like trying to bottle 10,000 cases of wine a year in your kitchen – it’s not really the right setup.

With virtualization supporting an ever-growing cast of applications in the datacenter, management has taken on increasing importance and become a key area of investment for VMware. Over the past year, we’ve introduced solutions to help manage capacity, cost and even application performance in the virtual infrastructure. This is only the beginning. Once you pass the 100 VM milestone, things start to get really interesting. Virtualization is the foundation for a new approach to IT. Whether you call it cloud computing or IT-as-a-service (we use both terms here at VMware), it really is all about a completely different model for IT delivery. And with this new world, comes the opportunity to rethink (or vThink) management. And that is what I’ll be doing (sometimes over a glass of wine) – and writing about – in the coming months.

We think it’s time to rethink

Bogomil Balkansky here, vice president of product marketing for virtualization and cloud platforms at VMware. Our team is responsible for bringing to market all of VMware’s datacenter products, including vSphere and the vCenter product family.

Welcome to our new blog.

Our driving mission at VMware is to reduce the complexity of IT. Our customers are on a journey. For many, it started with server consolidation initiatives. Now, on the back of the success and value these projects have achieved, virtualization is becoming the foundation for a more flexible, scalable IT infrastructure that can be delivered as a service. AKA Cloud Computing.

“The Cloud,” IT-as-a-service, utility computing – whatever you call it – represents the same thing: the next era in computing. It’s a very exciting time to be in this industry. We are on the precipice of great change, and this is a time of market disruption, technology innovation, emerging models, opportunity, promise and confusion.

I am very lucky to be part of a team – a company – that is passionate about its products, its customers and the process of pioneering the next era of IT. With this blog, we hope to give you a window into this experience, sharing our stories, offering perspective, and diving deep into the issues, challenges, and technologies involved in the journey from the physical to the virtual data center and onto the cloud. Some of the topics we’re passionate about:

  • Virtualization “Phase 2.” Server consolidation most certainly delivers benefits. However, it’s really only the beginning. VMware did not invent disaster recovery – we just make it better and more reliable. VMware did not invent provisioning – we just make it faster. Business continuity and improved agility – these are the benefits our customers in “Phase 2” of the journey are realizing today.
  • The virtual data center as a disruptive force. Virtualization and cloud computing represent a new approach to IT, and this means new approaches to many of its disciplines. Management and security are two that immediately come to mind. How will these areas change in the “new world?”
  • The path to the private cloud. At VMware, we define cloud computing as an approach that leverages the efficient pooling of virtual infrastructure, consumed as a service. We believe that virtualization is the way forward, and we’ll explore the technology that will enable the next era of IT.

Through our posts, we’ll share what we think, and we’d like to invite you to join the conversation. Have a topic you’d like to hear more about? An experience you’d like to share? A perspective or idea? Leave us a comment.