The rapid pace of change in end-user computing is affecting more than just the way users work with technology. It is also changing the language and the units of measure in what was, until recently, a very stable market environment. That creates ample scope for confusion, misunderstanding and, in some cases, creative license in how things are described.
Take the word “desktop” for example. Until recently, most people would have translated this as “PC”. Measures of the desktop market described how many PCs were shipped or sold. It was easy – everyone was counting the same, physical objects. A desktop today might be virtual, with all the same software elements, but located somewhere completely different and accessed through a tablet or smartphone. The physical implications have changed.
For those measuring markets, this creates challenges: without consistent measures there can be no meaningful comparisons over time, and the growth rates and forecasts against which business performance is judged risk losing their meaning. So when language and measures are changing, it is essential to be clear about exactly what we are counting.
The (hosted) virtual desktop market presents its own measurement challenges. With their roots in the counting of hardware, many market analysts naturally want to count the access device, but that makes no sense when access devices are increasingly diverse. Can a smartphone, a thin client, a tablet and a PC all be the same thing? To complicate matters further, multiple devices might be used to access the same virtual desktop, artificially inflating the numbers. Counting users won’t always work either: what if the same virtual desktop in a call centre is used by several users working in shifts across each 24-hour period?
A few years ago I wrote a research paper describing how the new language of end-user computing was “bubbles” and “footprints”. For virtual desktops, it is certainly the bubble – the virtual machine containing the desktop software – that must be counted. Everyone is (thankfully) in agreement on this, but there is still scope for misinterpretation. What about other approaches that minimize the (computational) footprint requirement for the access device, like server-based computing – are they the same thing? There may be some cases in which they achieve nearly the same functional result (when all applications are published and there is no desktop operating system), but the answer has to be no – unless there is a desktop bubble to be counted.
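To make the counting question concrete, here is a small, purely illustrative sketch in Python (the session records, device names and user names are all invented for the example): counting distinct access devices or distinct users over-states the market, while counting the desktop bubbles themselves does not.

    # Purely illustrative: invented session records for two desktop VMs.
    # One call-centre VM (vm-001) is reached from two devices and by two
    # users working in shifts, so counting devices or users inflates the figure.
    sessions = [
        {"desktop_vm": "vm-001", "device": "thin-client-07", "user": "alice"},
        {"desktop_vm": "vm-001", "device": "tablet-12",      "user": "alice"},
        {"desktop_vm": "vm-001", "device": "thin-client-07", "user": "bob"},
        {"desktop_vm": "vm-002", "device": "laptop-03",      "user": "carol"},
    ]

    devices  = {s["device"] for s in sessions}      # 3 distinct access devices
    users    = {s["user"] for s in sessions}        # 3 distinct users
    desktops = {s["desktop_vm"] for s in sessions}  # 2 desktop bubbles (VMs)

    print(len(devices), len(users), len(desktops))  # prints: 3 3 2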
In allowing us to separate the physical from the various logical layers, virtualization has created many such issues. Add to this the natural marketing zeal of organizations keen to include the “v-word” in their descriptions, and the scope for confusion is understandable. Faced with this, it always helps to fall back on first principles.
Virtualization is a process of separating the layers in the computing stack so that their configurations become independent of (or at least broadly tolerant of) changes in the configurations of the layers around them. When talking about virtualization, we have to focus on the layers in play: hardware, operating systems and applications. Virtualization creates virtual machines or virtual applications that are easier to move around. In some cases, we might make a virtual workspace, but the key is that the result of virtualization is always a bubble of software with something inside. This is not the same as accessing something remotely.
So application virtualization, where the application runs locally in a package that’s isolated from the configuration of the desktop OS, is most certainly virtualization – even if the application is delivered from somewhere else. But just delivering the application through publishing is not. Of course there are gray areas: if the same application package can be used for publishing, streaming or local execution, then it is still a virtualized application. A virtualized application can be published, but not all published applications are virtualized.
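If it helps, that last distinction can be reduced to a simple test, sketched below in Python with invented application names: ask whether there is an isolated package (a bubble), not how the application reaches the user.

    # Rough sketch with invented names: virtualization hinges on the isolated
    # application package (the bubble), not on the delivery mechanism.
    from dataclasses import dataclass

    @dataclass
    class App:
        name: str
        isolated_package: bool  # packaged independently of the desktop OS configuration
        delivery: str           # "local", "streamed" or "published"

    def is_virtualized(app: App) -> bool:
        return app.isolated_package  # the delivery mode is irrelevant to the test

    apps = [
        App("office-suite", True,  "streamed"),   # virtualized
        App("crm-client",   True,  "published"),  # virtualized and published
        App("legacy-tool",  False, "published"),  # published but not virtualized
    ]
    for app in apps:
        print(app.name, is_virtualized(app))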
With a hosted virtual desktop, the user accesses all their applications remotely, so there is no virtualization happening on the access device: using a thin client is not virtualization in itself. The virtualization takes place on the server, where it allows the same efficient resource management we see in the modern data center to be applied to the operational overhead of the desktop.
We can be clear in our language and hopefully stem the flow of confusion, but that still doesn’t always simplify counting. With the hosted virtual desktop, the platform supporting the desktop VMs and the brokering layer through which the user connects might be provided by different suppliers. We could give a long list of reasons why we believe using a single provider for both drives down the operational costs and delivers better results, but this is not just a post about View 5.
Although we’re focused on both sides of the (hosted) virtual desktop equation, we must still be clear about what is being counted in any market estimates we proffer or quote. Our server platform is by far the most widely deployed to support virtual desktops: the largest implementations selected VMware at the back end because so many organizations already know they can count on our technology for their most business-critical workloads. When it comes to the brokered protocol connections to access devices, we are in a much more open market and our leadership is less pronounced. Most external commentators choose to measure the virtual desktop market by counting the connections made by the broker, and we will not muddy those waters.
Where these reflections become more difficult is in the market evaluation of our future vision for the post-PC era. What exactly will we be counting there, and how will we measure the market? On this we don’t yet have all the answers, but we will continue to be open in our evaluation. We all need clarity as we go through these significant (and exciting) changes in end-user computing, and we will continue to make every effort to focus on the most relevant metrics.