
Changing Role of the OS


Posted by Karthik Rau
Vice President of Product Management

There has been a lot of discussion about operating systems in the past few weeks, first with Oracle’s Unbreakable Linux announcement and then news of the Microsoft/Novell alliance. It all points to major change in the operating system world, but what has gone unnoticed so far is that the role of the OS itself is undergoing a significant transformation.

The OS became the center of the IT universe with the move to distributed systems. In the mainframe days it was little more than an application container, but with distributed systems the OS took over the two most significant interfaces in software, the device driver interface and the application interface, and intertwined them in a way that has locked in customers for the past 25 years. The device driver interface became critical because distributed systems no longer consisted of a few fully integrated hardware platforms; instead, commodity components were assembled in many different permutations. Application developers would only write to APIs on platforms with significant device coverage, which in turn drove more device vendors to write drivers and add support for those specific platforms. This marked the rise of the general-purpose OS.

We are beginning to see another transformation, one that strips away the interlock between the application interfaces and the driver interfaces and will give customers far more choice, flexibility, and control over their infrastructure. The shift began in the 1990s, as application developers moved away from traditional, proprietary client/server architectures and started to employ OS-neutral development frameworks like Java, or open-source development platforms that afforded them more control over application interfaces. Yet despite running in these smaller, more flexible application containers, customers still needed to run the software on a full-service, general-purpose OS. They may have regained some control over the application interfaces, but they were still reliant on a fully functional OS to provide all the device compatibility and the accompanying certifications and qualifications.

Virtualization provides the missing piece to break the interlock, and as it becomes pervasive, the role of the OS will fundamentally change. Once a pervasive virtualization layer focuses exclusively on managing the underlying hardware and can run any OS, developers will finally be able to adapt and integrate the operating system as part of their application, ship the two together as a virtual machine, and be confident it will run in any environment. Instead of relying on a general-purpose OS underneath their applications, ISVs can strip the OS of its excess functions (and the corresponding security holes), make whatever modifications they need to better support their applications, and simply inherit all the hardware qualifications of the virtualization layer. This, in many ways, is what appliance vendors do when they ship a packaged hardware solution with a custom OS for a custom application: the model delivers a simple solution with a low cost of management, but it also requires purchasing custom hardware. As virtualization becomes pervasive, any ISV can deliver these same benefits by shipping its software as a virtual appliance.
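To make the virtual appliance idea concrete, here is a sketch of what an appliance’s descriptor might look like, using a few common keys from VMware’s .vmx configuration format. The names and values below are illustrative only, not taken from any real product:

```
# appliance.vmx -- hypothetical virtual appliance descriptor
config.version = "8"
virtualHW.version = "4"
displayName = "Example Mail Filter Appliance"
guestOS = "otherlinux"                      # stripped-down Linux tuned for the app
memsize = "256"                             # only what the application requires
scsi0.present = "TRUE"
scsi0:0.present = "TRUE"
scsi0:0.fileName = "appliance-disk.vmdk"    # OS + application, pre-installed
ethernet0.present = "TRUE"
```

The ISV ships the OS and application together as a handful of files like these, and the appliance inherits its hardware compatibility from whatever virtualization layer runs it.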

Customers and the software industry benefit enormously from this bifurcation: they can finally match the best OS to a given application. And because the hardware management layer is completely separate, there is no artificial lock-in tying them to a specific OS. As standards emerge for the virtualization layer, customers will be able to easily run any operating system on any virtualization layer and finally have the choice they rightfully deserve.

As the market for virtualization rapidly evolves over the next few years, customers need to ask themselves one key question: Is it really simpler to have virtualization integrated into the OS, following the same pattern of lock-in that has dominated the past 20 years of computing, or do I want a world where I have choice and can run a best-of-breed technology stack for each of my applications?