
Monthly Archives: November 2006

Enterprise Software 2.0?


Posted by Srinivas Krishnamurti
Director of Virtual Appliances

Dan Chu, Raghu Raghuram and Steve Herrod have all talked about virtual appliances and the changing role of the operating system in their recent blogs.  I’m going to add a couple of points about recent announcements and explore the new trend that virtual appliances are driving in enterprise software.

Virtual appliances are pre-built, pre-configured, ready-to-run enterprise software applications packaged along with an operating system inside virtual machines. Virtual appliances are fundamentally changing the application stack and how it is packaged and distributed, enabling ISVs to develop self-contained, optimized application stacks that are easy to deploy, run on any hardware, and are more secure and reliable than traditionally installed applications.  With the new VMware Virtual Appliance Certification Program, VMware will work closely with ISVs to ensure virtual appliances are optimized to run on VMware products.  Let’s talk a bit about the benefits of virtual appliances for both ISVs and customers.
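
Concretely, a virtual appliance is little more than a directory of files: one or more virtual disks holding the pre-installed OS and application, plus a small text configuration file. The sketch below generates such a configuration; the keys follow the general style of VMware .vmx files, but this exact set of keys and values is illustrative, not an authoritative template.

```python
# Sketch: generating the configuration file for a hypothetical virtual
# appliance. The keys follow the general style of VMware .vmx files, but
# treat this exact set of keys and values as illustrative, not a template.

appliance_vmx = {
    "config.version": "8",
    "virtualHW.version": "4",
    "displayName": "Example Mail Security Appliance",  # hypothetical product
    "guestOS": "otherlinux",          # a stripped-down Linux picked by the ISV
    "memsize": "256",                 # MB, sized by the ISV, not the customer
    "scsi0:0.present": "TRUE",
    "scsi0:0.fileName": "appliance.vmdk",  # pre-installed OS + application disk
    "ethernet0.present": "TRUE",
}

with open("appliance.vmx", "w") as f:
    for key, value in appliance_vmx.items():
        f.write(f'{key} = "{value}"\n')
```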

Benefits to ISVs:

  1. By picking one OS to work with, ISVs no longer have to worry about the idiosyncrasies of every supported operating system; they have to consider only one set, which reduces code complexity.
  2. From a QA perspective, the testing matrix can be vastly simplified: QA can focus strictly on the virtual appliance, because that is exactly what customers will deploy.
  3. ISVs can now package up their software along with an OS of their choice.  They can optimize the application for the selected OS to ensure higher performance and usability out of the box, and they can remove all the unnecessary components of the OS, leaving a much thinner and more secure operating system.  All of this makes the application more stable and reliable than ever before.
  4. By distributing virtual appliances, ISVs can reduce the cost of supporting evaluations, proofs of concept and production deployments.  Virtual appliances are packaged as simple files – if you can copy files and click ‘power on’ in an intuitive UI, you can get an application running almost instantly.

Benefits to customers:

  1. Easy-to-deploy enterprise software: if you can copy files and click ‘power on,’ your application is working immediately (see the sketch after this list).  This reduces time to value.
  2. Every application is configured correctly by the ISV, who knows the application best.  If something doesn’t work, the ISV immediately knows the configuration, without the customer having to explain the OS patch level, which services are enabled or disabled, and so on.  The buck stops with the ISV.
  3. With a certified virtual appliance, the ISV asserts that the appliance is optimized for all VMware products and that the ISV fully supports it when deployed on VMware virtualization.
  4. Optimized, efficient hardware utilization and availability: you can easily run multiple applications on the same hardware because virtualization provides robust isolation between the systems, so if one virtual appliance crashes, it doesn’t bring down the other virtual appliances running on the physical box.
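
Here is the promised sketch of what deployment amounts to. All paths are hypothetical, and the final power-on happens from the VMware product’s UI or command-line tools, so no installer ever runs.

```python
# Sketch: "deploying" a virtual appliance is essentially a file copy.
# All paths here are hypothetical. After the copy, the appliance is powered
# on from the VMware product's UI or command-line tools; no installer runs.

import shutil
from pathlib import Path

download = Path("~/downloads/mail-appliance").expanduser()  # unpacked appliance
target = Path("/vmfs/appliances/mail-appliance")            # hypothetical datastore

shutil.copytree(download, target)  # this is the entire "installation"
print(f"Appliance staged at {target}; power it on to start the application.")
```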

With virtual appliances, we have a way to streamline the packaging and distribution of enterprise software that is more stable, reliable and secure than ever before, and optimized for the deployment platform.  Let me digress for a quick second….

Being the product manager for our Mac initiative, I recently upgraded my home hardware from an older iMac to a new, sleek-looking MacBook.  There are so many things to love – it’s faster, but perhaps the coolest thing is that the system came with a camera, iChat, iPhoto, iThisAndThat – everything I would ever need was already there.  It saves me from having to find and install third-party applications, which may or may not work as nicely as what comes with the system.  It is so much better when the platform already comes with a set of essential tools and features.

With that in mind, let’s examine what VMware Infrastructure provides as a platform.  (Read Raghu Raghuram’s blog to fully understand the power of VMware Infrastructure.)  VI provides capabilities such as agent-less backup, failover support with VMware High Availability, automatic load balancing with VMware Distributed Resource Scheduler, and the ability to move workloads without downtime with VMotion technology. ISVs do not have to spend precious resources building HA, failover and other systems management features, which is good since that is not their core differentiation anyway.  From a customer standpoint, deploying virtual appliances on VMware Infrastructure means instantly leveraging not only the consolidation benefits but also others such as high availability, the ability to back up without buying expensive third-party solutions and, more generally, efficient management of the datacenter.  Over time, there will be even more features available in the platform – virtual infrastructure then becomes the new design center.  The combination of virtual appliances and VMware Infrastructure completely changes the way enterprise software can be developed, deployed and managed.

I continue to be amazed at how Apple streamlined its user experience and buying process.  For example, the combination of iPods and iTunes has completely revolutionized the way consumers think about buying and listening to music.  iTunes offers thousands of songs in a central location and allows users to “evaluate” a song before buying it.  Buying a song, loading it onto your iPod and listening to it is so quick and easy that anyone can do it.  There is something for enterprise IT vendors to learn from this.  Why can’t enterprises buy software as easily as buying a song from iTunes?  Why can’t enterprises deploy software as easily as copying a downloaded song from iTunes onto an iPod?

That’s where the Virtual Appliance Marketplace comes into the picture.  The VAM is a library of over 330 virtual appliances spanning collaboration, email security, enterprise applications, firewalls, intrusion detection/prevention, operating systems, and traffic management.  Simply download a software package to evaluate it, and if you like it, you can buy it right from the marketplace.  One-stop shopping for enterprise software!

With virtual appliances as songs, VMware Infrastructure as iPods and the new Virtual Appliance Marketplace as iTunes, the whole model of developing, buying, deploying and managing enterprise software is changing completely.  ISVs will build an application stack that includes a lightweight OS and is ready to run; customers will buy it from the Virtual Appliance Marketplace and deploy it on virtual infrastructure, which, like the Mac, already comes with an essential set of capabilities.

Is this the start of enterprise software 2.0?

Virtualization and Licensing: What Customers Need


Posted by Dan Chu
Vice President, Emerging Products and Markets

We depend on software licensing policies to enable the use and unlock the benefits of new technology.   This is especially true when the new technology – like virtualization – provides transformational benefits in leveraging resources, gaining new efficiencies, and enabling new processes that substantially improve on the old. 

Vendors can evolve their licensing to allow customers to take advantage of new technology, or conversely vendors can hold back and seek to inhibit and restrict how customers can use new technology because they feel threatened by it.  Customers have adopted virtualization broadly and made it mainstream, and have been able to drive some significant changes and improvements in licensing and openness.  However, there are also a growing number of areas where specific vendors (Microsoft in particular) are threatening to use licensing to restrict and undercut the benefits that customers and the industry are gaining from virtualization. 

Virtualization and Licensing:  What’s Been Addressed/Improved

Licensing Based on Virtual CPUs/Sockets: We’ve seen customers drive substantial changes to virtualization licensing to accommodate the new efficiencies that virtualization enables.  IBM Software, BEA Systems, and Microsoft are among the major vendors who’ve moved to licensing based on the number of virtual processors or sockets that an application instance uses, as opposed to the number of physical processors or sockets. 

For the customer who is running a SQL Server database or BEA WebLogic application server instance in a one- or two-CPU virtual machine on a four-socket, eight-core machine, this makes a great deal of sense (and economic difference).
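
A quick worked example makes the difference plain. The sketch below compares the two licensing models for exactly this scenario; the per-license price is hypothetical, purely for illustration.

```python
# Illustrative comparison of per-physical-socket vs. per-virtual-CPU
# licensing for the scenario above: a 2-vCPU virtual machine on a
# four-socket, eight-core host. The license price is hypothetical.

PRICE_PER_LICENSE = 10_000  # USD per socket/CPU; purely illustrative

physical_sockets = 4  # sockets in the host the VM happens to run on
virtual_cpus = 2      # vCPUs the application instance actually uses

cost_by_physical = physical_sockets * PRICE_PER_LICENSE
cost_by_virtual = virtual_cpus * PRICE_PER_LICENSE

print(f"Licensed per physical socket: ${cost_by_physical:,}")
print(f"Licensed per virtual CPU:     ${cost_by_virtual:,}")
print(f"Difference:                   ${cost_by_physical - cost_by_virtual:,}")
```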

Open Virtual Machine Disk Formats: We’ve also seen customers and vendors drive virtual machine disk formats to be made open and freely usable.   The virtual machine disk format specification describes and documents the virtual machine environment and how it is encapsulated, and this specification is critical to how virtual environments are provisioned, manipulated, patched, updated, scanned and backed up by ISVs and customers.   

In April of this year, VMware announced that we were making our virtual machine disk format, VMDK, openly available and freely usable by anyone who wants to use it.  Since then, over 2000 vendors and developers have requested to review and use our VMDK specification.
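
For a feel of what the open specification covers, here is a minimal sketch of the text descriptor that sits at the heart of a VMDK disk, and of how a third-party tool might read it. The field values shown are illustrative, not copied from the specification.

```python
# Sketch: the small text descriptor at the heart of a VMDK virtual disk,
# per the openly published specification. Field values are illustrative.

descriptor = """\
# Disk DescriptorFile
version=1
CID=fffffffe
parentCID=ffffffff
createType="monolithicSparse"

# Extent description
RW 4192256 SPARSE "appliance.vmdk"

# The Disk Data Base
ddb.adapterType = "lsilogic"
ddb.virtualHWVersion = "4"
"""

# A third-party backup or provisioning tool can read the key=value lines
# to learn how the disk is laid out before touching the data extents.
fields = {}
for line in descriptor.splitlines():
    if "=" in line and not line.startswith("#"):
        key, value = (part.strip() for part in line.split("=", 1))
        fields[key] = value.strip('"')

print(fields["createType"], fields["parentCID"])
```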

Last month Microsoft announced that it too is moving to make its virtual machine disk format, VHD, more open.  Previously VHD had been covered by a much more restrictive license.   The ecosystem has invested broadly in VMDK, but it is good that the VHD format is now much more accessible. 

Virtualization and Licensing:  What’s Being Threatened

Licensing Restrictions on Which Operating Systems can be Virtualized: Microsoft has also recently announced a prohibition on virtualizing the less expensive versions of Vista.  Their explanation is that virtualization is not broadly usable by consumers or other mass market users, and therefore should be restricted only to the more expensive versions of Vista. 

This contention stands in stark contrast to the several million users of software like VMware Workstation and VMware Player who have adopted virtualization for their general-purpose desktops.  There has been broad criticism of this policy from customers and industry observers (such as David Berlind of ZDNet), who have been clear that such moves to arbitrarily inhibit the use of operating systems are unacceptable.

Licensing Restrictions on Which Vendor’s Product a Virtual Machine Can Be Run On: One area we are concerned about is that Microsoft has begun to put restrictive terms on the use of published VHDs.  Specifically, Microsoft is starting to restrict use of their VHDs to MS Virtual Server and Virtual PC only (an example is the EULA that accompanies this download).   

In contrast, there are over 300 virtual appliances available on VMware Technology Network (ranging from Oracle databases to CRM packages to firewalls to email security solutions to operating systems) that are freely downloadable and usable by any user regardless of platform or product. 

Microsoft recently published 30-day evaluation VHDs for Exchange, SQL Server, and Windows Server.  We have been told directly by Microsoft that users are allowed to run these VHDs with VMware products that can run VHDs (which includes the broad range of VMware products from VMware Player to VMware Workstation to VMware Infrastructure). 

But it is still troubling to see language from Microsoft that seemingly restricts VHD usage to only Microsoft products.  Customers and partners have been very clear that a closed system based on licensing restrictions that locks customers into one vendor’s products and formats is not acceptable, and we look forward to Microsoft changing its published guidelines.

Proprietary APIs and Lock-In for Communication between the Virtual Machine Operating System and the Hypervisor: Microsoft is implementing proprietary APIs, called Enlightenments, between the Windows Server Longhorn operating system and the new hypervisor product that Microsoft is developing.  APIs between the operating system and the hypervisor, generally referred to as paravirtualization, are a key lever for communication and optimization for virtualized environments. 

There has been strong cross-vendor work in the Linux community, including IBM, VMware, XenSource, Red Hat, and others, toward a public, open approach to Linux paravirtualization.   This looks like it is progressing nicely with lots of third-party support. 

Last week Microsoft and Novell announced that they would work to leverage proprietary Microsoft paravirtualization APIs, with Novell paying Microsoft a share of the revenue from Novell’s open source Linux operating system.  While on the surface this announcement seems to promote interoperability between Linux and Windows, it is actually not so good for customers: it ties Linux into proprietary Windows APIs and allows Microsoft to impose a tax on open source software.  It is in the best interests of customers to run their operating systems, both Windows and Linux, on hypervisors that use open standards and APIs and do not lock them in with proprietary interfaces.
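
To see why an open paravirtualization interface matters, consider a toy model (sketched in Python purely for readability; real paravirt interfaces are tables of C function pointers inside the kernel): a guest OS written to a published, vendor-neutral set of operations runs unmodified on any conforming hypervisor, while a guest tied to one vendor’s private calls does not.

```python
# Toy model of a paravirtualization interface, purely illustrative (real
# paravirt interfaces are C function tables in the kernel, not Python).
# A guest coded against an open, published interface runs unmodified on
# any conforming hypervisor.

from abc import ABC, abstractmethod

class OpenParavirtOps(ABC):
    """A published, vendor-neutral set of guest<->hypervisor operations."""
    @abstractmethod
    def set_page_table(self, root: int) -> None: ...
    @abstractmethod
    def flush_tlb(self) -> None: ...

class HypervisorA(OpenParavirtOps):
    def set_page_table(self, root: int) -> None:
        print(f"A: page-table root -> {root:#x}")
    def flush_tlb(self) -> None:
        print("A: TLB flushed")

class HypervisorB(OpenParavirtOps):
    def set_page_table(self, root: int) -> None:
        print(f"B: page-table root -> {root:#x}")
    def flush_tlb(self) -> None:
        print("B: TLB flushed")

def guest_kernel_boot(ops: OpenParavirtOps) -> None:
    """Guest code sees only the open interface, never the vendor behind it."""
    ops.set_page_table(0x1000)
    ops.flush_tlb()

guest_kernel_boot(HypervisorA())  # the same guest code...
guest_kernel_boot(HypervisorB())  # ...boots on a different vendor's hypervisor
```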

Hypervisors, Operating Systems and Virtual Infrastructure


Posted by Raghu Raghuram
Vice President of Datacenter and Desktop Platforms

It is widely acknowledged that in a couple of years, most, if not all, new servers will be virtualized with the help of a hypervisor.

The average server in 2009 will overflow with compute, memory, and communication capacity – numerous CPU cores in every socket, DIMMs stocked with tens, maybe hundreds, of gigabytes, and a 10x increase in networking bandwidth. Virtualization is the most practical and obvious way to efficiently harness this capacity.  Especially as concurrent advances in processor and system hardware facilitate near zero-overhead virtualization, there is little reason not to virtualize.

With hypervisors on every server, the question becomes: how do you put virtualization to best use? There are two points of view – the ‘operating system extension’ view and the ‘virtual infrastructure’ view.

Operating system vendors consider the hypervisor to be an extension to the operating system, providing mere single-machine partitioning capability while the OS continues to serve as the center of the world for such tasks as managing the hardware resources, providing system availability, governing security, and serving applications. The argument for this approach, of course, is that only incremental changes are required to existing practices for OS, hardware management and systems infrastructure in order to realize the partitioning benefits of virtualization. This is fine if you believe your current systems infrastructure is highly reliable, flexible and secure and needs only the additional benefit of higher resource utilization.

On the other hand, if you believe that today’s infrastructure is significantly complex, fragile and inflexible then you will be better served by fully exploiting additional fundamental advances that are enabled by virtualization. This is the virtual infrastructure viewpoint. Virtual infrastructure exploits the following:

Separation of the OS from hardware resource management: For the first time in two decades, virtualization gives customers the opportunity to lift the OS cleanly off the hardware and have it manage primarily the application and its end-users. Unlocking the hardware from the OS has already proven to simplify infrastructure management, and it provides a clear separation between managing applications and managing infrastructure. Tying the OS to the application as a preconfigured virtual appliance has proven to be a remarkably powerful way of deploying and managing software faster.  In contrast, treating the hypervisor as an extension to the OS ignores this aspect of virtualization and continues the tight coupling between the OS and the hardware, with all the attendant change-management complexity; it also leaves the OS as a single point of failure for the entire server and all its virtual machines.

Aggregation of resources and virtual resource pools: Second, these ubiquitous hypervisors distributed on every server can be orchestrated or clustered together through global resource managers to aggregate and create flexible, virtual pools of server, storage and network resources that can be freely allocated on demand. Hardware resources can be added to or removed from these pools as needed. Power consumption can be orchestrated across these pools as needed. Failures of any hardware component are automatically and easily overcome using other available resources. Virtual resource pools may be dynamically and flexibly offered up to individual applications, groups of applications, business units or even discrete companies from a single shared infrastructure. This level of flexible, dynamic capacity management atop a shared infrastructure is only possible with virtual infrastructure, because orchestrating dozens, if not hundreds, of resource managers on disparate, commercial OSes is a daunting, if not impossible, task.
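
As a rough illustration of the resource-pool idea, here is a toy global resource manager that treats a cluster’s memory as one pool and places virtual machines wherever capacity exists. Host names, sizes and the greedy policy are all hypothetical; this is not how VMware DRS actually works.

```python
# Toy global resource manager: treat every host's free memory as one pool
# and place each VM wherever capacity exists. Host names, sizes and the
# greedy policy are hypothetical, not how VMware DRS actually works.

from dataclasses import dataclass

@dataclass
class Host:
    name: str
    free_mem_gb: int

@dataclass
class VM:
    name: str
    mem_gb: int

def place(vms, pool):
    """Greedy placement: each VM lands on the host with the most free memory."""
    placement = {}
    for vm in vms:
        host = max(pool, key=lambda h: h.free_mem_gb)
        if host.free_mem_gb < vm.mem_gb:
            raise RuntimeError(f"pool exhausted while placing {vm.name}")
        host.free_mem_gb -= vm.mem_gb
        placement[vm.name] = host.name
    return placement

pool = [Host("esx01", 16), Host("esx02", 16), Host("esx03", 16)]
vms = [VM("crm", 4), VM("mail", 8), VM("web", 2), VM("db", 12)]
print(place(vms, pool))  # if one host fails, its VMs can be re-placed elsewhere
```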

Separation of systems infrastructure services from the OS: Third, the operating system has traditionally struggled with two conflicting demands – the need to be as open and services-rich as possible in order to support all applications, and the need to be simultaneously as closed as possible to deliver a high degree of reliability, availability and security.  In practice, the latter has suffered at the hands of the former.  The goal of enabling a single OS to run all applications has resulted in unabated OS code growth from millions to tens or hundreds of millions of lines of code, mostly for application support.  As a result, customers pay a hefty price in expensive band-aids (clustering, agents on every server, frequent patches for bugs and security holes, redundant identical hardware, etc.) and in the complexity of working around this inherent design conflict.

With virtualization, there is now an opportunity to implement security, availability and reliability outside the OS, through the virtualization layer. Implementing these services outside the OS delivers significant benefits.  First, the implementation is global in scope – independent of any OS or any application. Second, implementing these capabilities once at the virtualization layer benefits every guest OS and application on every VM. You no longer have to implement and manage agents or software for availability or security or system protection per application. Third, since the implementation is not dependent on the OS, it is inherently less susceptible to attacks on the OS and therefore leads to a simpler, more robust infrastructure.

When you combine these three capabilities – applications deployed together with simplified OSes as virtual appliances; virtualization of distributed infrastructure to create virtual resource pools; and built-in, OS-independent systems infrastructure services to simplify infrastructure availability, reliability, and resource management – you get virtual infrastructure. 

Which model do you prefer? The old model of an integrated OS presiding over a monolithic systems infrastructure, where the hardware, hypervisor, OS and applications are all bound together inflexibly, or the new model of virtual infrastructure, which cleanly separates application requirements from hardware management and delivers shared, flexible, fault-resistant system services universally to all your applications and operating systems?

Changing Role of the OS


Posted by Karthik Rau
Vice President of Product Management

There has been a lot of discussion about operating systems in the past few weeks, first with Oracle’s Unbreakable Linux announcement and then news of the Microsoft/Novell alliance. It all points to significant change in the operating system world, but what has gone unnoticed so far is that the role of the OS itself is undergoing a fundamental transformation.

The OS became the center of the IT universe with the move to distributed systems. It used to be no more than an application container in the mainframe days, but in the move to distributed systems the OS took over the two most significant interfaces in software – the device driver interface and the application interface – and intertwined them in a way that has locked in customers for the past 25 years. The device driver interface became critical in the move to distributed systems because you no longer had a few fully integrated hardware platforms; instead you had a layered approach where commodity pieces got assembled together in many different permutations. Application developers would only write to APIs on platforms that had significant device coverage, which in turn drove more device vendors to write drivers and add support for those specific platforms. This marked the rise of the general purpose OS.

We are beginning to see another transformation, one that strips away the interlock between the application interfaces and the driver interfaces and will give customers far more choice, flexibility, and control over their infrastructure. The shift began in the 1990s, as application developers moved away from traditional, proprietary client/server architectures and started to employ OS-neutral development frameworks like Java or open-source development platforms that afforded them more control over application interfaces. Yet despite running in these smaller, more flexible application containers, customers still needed to run the software on a full-service, general purpose OS.  They may have regained some control over the application interfaces, but they were still reliant on a fully functional OS to provide all the device compatibility and the accompanying certifications and qualifications.

Virtualization provides the missing piece to break the interlock, and as it becomes pervasive, the role of the OS will fundamentally change. Once you have a pervasive virtualization layer that focuses exclusively on managing all the underlying hardware and can run any OS, developers will finally be able to adapt and integrate the operating system as a part of their application, ship both of those together as a virtual machine, and be confident it can run in any environment. Instead of having a general purpose OS underneath their applications, ISVs can strip the OS of all its excess functionality (and the corresponding security holes), make whatever modifications they need to better support their applications, and simply inherit all the hardware qualifications of the virtualization layer. This, in many ways, is what appliance vendors do when they ship a packaged hardware solution with a custom OS for a custom application – the model provides a simple solution with a low cost of management, but it also requires purchasing custom hardware. As virtualization becomes pervasive, any ISV can deliver these same benefits by shipping their software as virtual appliances.

Customers and the software industry benefit enormously from this bifurcation – they can finally mix and match the best OS to a given application. And because the hardware management layer is completely separate, there is no artificial lock-in that ties them to a specific OS. As standards emerge for the virtualization layer, customers will be able to easily run any operating system on any virtualization layer and finally have the choice they rightfully deserve.

As the market for virtualization rapidly evolves over these next few years, customers need to ask themselves the following key question: Is it really simpler to have virtualization integrated into the OS and follow the same pattern of lock-in that has dominated the past 20 years of computing, or do I want a world where I have choice and can focus on running a best-of-breed technology stack for each of my applications?

The Console’s Greatest Hits

Welcome to the new home of The Console, the blog from VMware Management. (From our first post: "Just as the service console is the dashboard for VMware ESX Server, think of this blog, ‘The Console’, as your dashboard for understanding how VMware is driving the virtualization revolution.")

In case you’re just stopping by for the first time, here are a few of the Console’s Greatest Hits:

Open Virtual Machine Disk Formats and Licensing

One highly related area we are concerned about is that we’ve seen Microsoft beginning to put restrictive terms on the use of published VHDs. Specifically, it seems that Microsoft is starting to restrict use of their VHDs to MS Virtual Server and Virtual PC only. In contrast, there are over 300 VMDK-based virtual appliances available on VMware Technology Network (ranging from Oracle databases to CRM packages to firewalls to email security solutions) that are freely usable by all regardless of platform or product.

Power and cooling savings with VMware Infrastructure

One of the mainstay use cases of virtualization – server consolidation and containment – allows customers to “squeeze” multiple workloads onto the same server. There is a flow-through effect from needing fewer physical servers: VMware customers need less space in the datacenter, and less electricity and cooling. We estimate conservatively that for every workload moved from a physical to a virtual environment, customers can save about $290 in electricity costs and about $360 a year in cooling costs. The more important thing is that these savings accrue year after year. For example, VMware customer Provident Bank reports cutting power consumption by 13,000 watts.
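
To make the arithmetic concrete, here is a minimal sketch using the estimates above; it treats both figures as annual (consistent with the point that the savings accrue year after year), and the workload counts are hypothetical examples, not customer data.

```python
# Back-of-the-envelope consolidation savings using the post's estimates:
# ~$290/year electricity and ~$360/year cooling per workload moved from a
# physical server into a virtual machine. Workload counts are hypothetical.

ELECTRICITY_PER_WORKLOAD = 290  # USD per year (estimate from the post)
COOLING_PER_WORKLOAD = 360      # USD per year (estimate from the post)

def annual_savings(workloads_virtualized: int) -> int:
    """Recurring yearly power-and-cooling savings, accruing year after year."""
    return workloads_virtualized * (ELECTRICITY_PER_WORKLOAD + COOLING_PER_WORKLOAD)

for n in (10, 100, 500):
    print(f"{n:>3} workloads virtualized -> ${annual_savings(n):,} per year")
```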

On Benchmarking Virtual Infrastructure

As virtualization becomes commonplace in the industry, there is increasing interest in measuring the performance of virtualized platforms. Plenty of benchmarks exist to measure the performance of physical systems, but they fail to capture essential aspects of virtual infrastructure performance. We need a common workload and methodology for virtualized systems so that benchmark results can be compared across different platforms.
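
As a sketch of what such a methodology could look like (an illustration, not a published VMware benchmark): run an identical mix of workloads in virtual machines on each platform, normalize each workload’s throughput to a reference system, and combine the ratios with a geometric mean so that no single workload dominates the score.

```python
# Illustrative scoring only (not a published VMware benchmark): run the
# same fixed mix of workloads in VMs on each platform, normalize each
# workload's throughput to a reference system, and combine the ratios
# with a geometric mean so no single workload dominates the result.

from math import prod

reference = {"mail": 100.0, "db": 250.0, "web": 900.0}   # ops/sec, hypothetical
candidate = {"mail": 120.0, "db": 240.0, "web": 1080.0}  # same workloads in VMs

ratios = [candidate[w] / reference[w] for w in reference]
score = prod(ratios) ** (1 / len(ratios))  # geometric mean of the ratios
print(f"Composite score vs. reference: {score:.2f}")
```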