
Category Archives: virtualization futures

A week in virtualization

Weekly virtualization news, as featured on the Community Roundtable podcast.

VMware Fusion team in Palo Alto is looking for an intern. If you want to apply, check out the Fusion page on Facebook.

As John has written on the VMTN blog, we have received an overwhelming response to the vExpert call for nominations, so we're a little backed up and our judges will need two more weeks to go through all the applications with the attention they deserve. John will be announcing the newest crop of vExperts on the blog, and on the Community forums.

On Thursday, I was at the VMware Forum in Anaheim, and recorded a bunch of video interviews with attendees and partners. I tweeted from the event, so those of you who follow me got the scoop and some pictures. I'm @VmwareCommunity on Twitter, if you don't follow me yet. 

We will be making a little YouTube video or two out of the footage we shot at this VMware Forum and the others. Keep an eye out for them in our YouTube channel.

The world of virtualization has a few exciting things in store for us over the next week or so.

The VMware Forum season is in full swing, and we have more events taking place in May and early June in Budapest, Madrid, Atlanta, Chile, Johannesburg, Houston, Kiev, and Dublin.

The web page with details is linked straight from vmware.com – in the box that says “VMware Forum 2011” on the lower left.

We also have two webinars today: one on When Java EE Is Overkill: Lightweight Application Server Use Cases, and a Spanish-language session on Physical IT Infrastructure Control with vCenter Configuration Manager.

Tomorrow, you can learn to Enhance Productivity with Collaborative Workspace from VMware and Cisco. There's also a Portuguese-language version of the vCenter Configuration Manager webinar.

To find out more and register, head over to webcasts.vmware.com.

Two full-day regional VMUG conferences are coming up next month, one in Western Pennsylvania, on the 7th of June, and the other in Vancouver on the 21st.

Additionally, the following VMUGs are meeting over the next seven days: Tasmania, Atlanta, East Germany, Buffalo, and Wellington. You can find more details and registration links for all the VMUG meetings at myvmug.org by clicking on “Events.”

Chad’s big picture: where this train is headed

EMC’s Chad Sakac follows up his multi-vendor iSCSI mega-post with the view from 50,000 feet.

Virtual Geek: So… What’s the BIG picture stuff going on under the covers?

Customers are telling me consistently that they are looking to transform to a new datacenter and IT model. They use different words when describing it. Here are some variants:

  • “I want to make IT a service back to the business – literally with an SLA model”
  • “I see technologies coming together to enable something… I don’t know what to call it except ‘global datacenter optimization’”

They know they need to do it for all those reasons, but no-one wants to undertake one of those “here’s a vision – now stay with me and one day you will benefit” efforts – particularly these days.

  • They need to save money at every step – constant capital expense saving.
  • They need to become faster, more flexible at every step – constant refinement in both operational expense and speed.
  • They need to use less power and become more green at every step.
  • We’ve all been around the block enough to know it’s got to be open, built on standards (Scott Lowe was totally right on that here)

Introducing Replay Debugging: the end of the heisenbug?

Record/Replay, the technology that allows you to reproduce what’s going on in a virtual machine at the machine-instruction level, has been shown off at VMworlds past, but is just now coming into its own. You could experiment with it a bit in Workstation 6.0, and it is now available in a useful way in VMware Workstation 6.5 (currently in beta, with a new Release Candidate). Let’s let E Lewis introduce it on his new blog. Link: Better Software Development with Replay Debugging: VMware Workstation 6.5: Reverse and Replay Debugging is Here!

I’m proud to announce that VMware Workstation 6.5 includes new experimental features that provide replay debugging for C/C++ developers using Microsoft Visual Studio. Replay debugging allows developers to debug recordings of programs running in virtual machines, and it is valuable for finding, diagnosing, and fixing bugs that are not easily reproduced, a particularly challenging class of bugs. Once the manifestation of a bug has been recorded, it can be replayed (and debugged) over and over again, and it is guaranteed to have instruction-by-instruction identical behavior each time. In addition, Workstation includes a feature that simulates reverse execution of the program, making it easier to pinpoint the origin of a bug.

Aside from being insanely cool and perhaps the end of the heisenbug, I think this shows how VMware’s 10 years of experience manifests itself in innovation. Virtualization is about more than server consolidation, and once you are virtualized, the really interesting things can start to happen.
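The core idea is easier to grasp with a toy model. Here's a conceptual sketch in Python — emphatically not VMware's implementation, which works at the virtual-machine instruction level — showing the principle: log every nondeterministic input on the first run, then feed the log back so each re-run behaves identically.

```python
# Conceptual sketch only -- NOT VMware's implementation, just an
# illustration of the record/replay idea: log every nondeterministic
# input on the first run, then feed the log back so a re-run behaves
# identically, as many times as you need to chase the bug.
import random

class Recorder:
    """Wraps a nondeterministic source and logs each value it hands out."""
    def __init__(self):
        self.log = []

    def rand(self):
        value = random.random()
        self.log.append(value)  # capture the nondeterministic input
        return value

class Replayer:
    """Feeds previously recorded values back in the same order."""
    def __init__(self, log):
        self._values = iter(log)

    def rand(self):
        return next(self._values)  # deterministic re-run

def flaky_routine(source, n=3):
    # Stand-in for a program whose behavior depends on inputs that
    # differ from run to run (timing, scheduling, random data...).
    return [source.rand() for _ in range(n)]

recorder = Recorder()
original_run = flaky_routine(recorder)            # the "recorded" execution
replayed_run = flaky_routine(Replayer(recorder.log))
assert replayed_run == original_run               # identical every time
```

A heisenbug that showed up in the recorded run will show up in every replay too, which is exactly what makes it debuggable.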

Here’s E demonstrating how this works. I think the UI has changed a bit since we filmed this. We’re running Visual Studio on the host, outside the VM, and attaching to a process inside the VM and putting in triggers and whatnot in the debugger as it replays until we track down the bug we’re looking for. If we go too far, we can always hit rewind.

Oh, and there’s a Lenovo laptop to be won: VMware Record and Replay Challenge

Raghu Raghuram on the hypervisor and the next big opportunity

VMware VP Raghu Raghuram at Redmond Magazine. Link: Redmond | Redmond Report Article: Driving VMware.

Redmond: What are the major differences between VMware and Microsoft in how each company views hypervisors?

Raghuram: There are some stark differences. Our view is that the core virtualization layer belongs in the hardware. It also has to be much smaller in order to reduce its surface area for attacks. This is why we introduced the 3i architecture, which will become mainstream over the course of this year.

Our product will be less than 32MB, but will still have all the functionality. Our sense is if you turn on the server, you turn on virtualization at the same time. Our approach is similar to that of mainframes and big Unix machines where there’s no separate virtualization software as part of the operating system. Our architecture enables this notion of a plug-and-play data center. So, if they need more capacity for the data center, then they just roll in a new server, which is automatically virtualized.

The Microsoft approach is to have virtualization be an adjunct to the OS. With the Virtual Server architecture, it’s explicitly a separate layer that relies on the OS. With the Hyper-V architecture, they’re still maintaining the same dependency on the OS, so it’s not fundamentally different than Virtual Server in that respect. The downside for customers is the Virtual Server architecture is still tied to a commercial OS, which is fairly vulnerable to attacks and has a big footprint.

Everybody ‘gets’ server consolidation. The math is easy, the ROI immediate. Do you ‘get’ business continuity? For many organizations, virtualization can be the difference between a notion of a plan and having a real, operational capability. More from the interview:

Some of these products also address what you are calling IT Service Continuity. How important is this to your strategy going forward?

Very important. Business continuity is the silver bullet for virtualization beyond consolidation. In fact, two-thirds of all our customers are already trying to do business continuity using virtualization. These [products are] designed to automate all processes so that if your data center fails, you can automatically failover to another data center and then fail back. One of the interesting things about business continuity is because it’s so complex to do, people have business continuity plans on paper, but they are hardly ever tested. The products we announced enable the automated testing of those sorts of plans.

Kusnetzky on virtualization velocity and (r)evolution

Dan Kusnetzky, who has a blog here: Virtually Speaking on ZDNet, has written a number of thought pieces with his consulting/analyst hat on over here: Recent Publications from the Kusnetzky Group at his website. He’s usually exploring the interface between virtualization technology and its operationalization in business processes.

I like this recent one: Virtualization: Evolution not Revolution (pdf link). In this short 3-pager, his basic point is that things move slowly in the enterprise data center, because IT managers must be risk averse.

The Golden Rules of IT

1) If it’s not broken, don’t fix it. Most organizations simply don’t have the time, the resources or the funds to re-implement things that are currently working.

I think paradoxically this has been one driver for VMware’s successful adoption. It is so easy to get started with VMware — download VMware Server or a VI3 eval, then convert [warning: sound] some necessary but little-used old servers that are just sucking up electricity, and go. You don’t need a special paravirtualized kernel, just whatever you were running (Windows, Linux, Solaris, etc.); don’t need to recompile your app; don’t need to get special hardware; and you don’t even really need a SAN or other fancy enterprise storage to get started — just virtualize, no re-implementation needed. The key point you need to realize at this level is that you treat a virtual machine just like its physical counterpart — although try not to have every antivirus and backup job in every virtual machine on an ESX Server fire off at the same time.

Now when that works great and you do want to see how to take more advantage of the opportunities afforded by virtual infrastructure, then you do have to do some more planning — maybe get more storage, certainly get some expertise and evaluation of your current infrastructure, and start to figure out how this affects your processes when a new server can be provisioned in a few minutes and your DR plan is finally something more than just a fantasy.

Ultimately you do end up with a data center that looks, acts, and is managed quite differently than what you started with. So was that by evolution or revolution?

(Anyway, Dan has a lot of great stuff there; read up, then go forth and virtualize carefully but with great ultimate success.)

[Update: enterprise software is sexy when it is innovative. The relevance of this article to the current discussion is left as an exercise to the reader.]

Wide Finder, Stacks of Lamps, and Virtualization

Sun’s Tim Bray has kicked off an interesting cross-blog conversation recently. He calls it the Wide Finder Project, and the basic issue is this: we’re moving toward a future of dealing with many CPUs with many cores but with (relatively) low clock rates. What are the interesting computer science and software development challenges this raises? How can we take advantage of architectures like this when dealing with parallelism is just so … painful using today’s paradigms and tools?

One thing I find fascinating about the discussion is how it’s coming from a strategic and futures-based motivation, but it’s taking place with a real roll-up-your-sleeves hacking ethic.  Tim postulated a simple, almost a toy, problem — parsing Apache log files. Tim and others are exploring this simple problem and how currently-available languages and language features affect how easy it is to take advantage of multi-CPU, multi-core architectures to rip through the file like a chainsaw through wet cardboard.

Tim started the ball rolling with Erlang (conclusion: wicked cool, but the I/O and regexp libraries aren’t up to snuff — likely a solvable problem) and others have run with it from there.
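To give a flavor of the exercise, here's a rough Python sketch of the toy problem — the log format and names are my own illustration, not one of the actual Wide Finder entries: split the log into slices, tally requested paths on several cores at once, then merge.

```python
# A rough sketch of the Wide Finder toy problem: tally the most-requested
# paths in an Apache access log, fanning the work out across CPU cores.
import re
from collections import Counter
from multiprocessing import Pool

# Matches the request path in a common-format access log line, e.g.:
# 127.0.0.1 - - [...] "GET /index.html HTTP/1.1" 200 2326
REQUEST = re.compile(r'"GET ([^ ]+) HTTP')

def count_chunk(lines):
    """Count requested paths in one slice of the log."""
    hits = Counter()
    for line in lines:
        match = REQUEST.search(line)
        if match:
            hits[match.group(1)] += 1
    return hits

def wide_find(log_path, workers=4, top=10):
    """Split the log into one slice per worker and merge the tallies."""
    with open(log_path) as f:
        lines = f.readlines()
    size = len(lines) // workers + 1
    chunks = [lines[i:i + size] for i in range(0, len(lines), size)]
    with Pool(workers) as pool:
        partial_counts = pool.map(count_chunk, chunks)
    total = Counter()
    for partial in partial_counts:
        total.update(partial)
    return total.most_common(top)
```

A serious entry would stream the file rather than read it all into memory; the interesting question Tim is asking is how painless (or painful) that fan-out-and-merge step is in each language.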

So why the Wide Finder problem on a virtualization blog? I ran across Kevin Johnson’s blog entry A Pile of Lamps.

He starts off in an earlier entry by scaring himself:

At the risk of sounding like a pessimist, I think we’ll end up with thousands of little SOA web services engines. Each one handling a single piece. Each one with its own HTTP stack. Each one using PHP/Perl/Ruby/etc to implement the service functions. Each one sitting on top of a tiny little mysql database. Eeeep! I just scared myself – better drop this line of thought. I’ll have nightmares for weeks.

Kevin points to Andrew Clifford’s The Dismantling of IT, which is not talking about v12n per se, but certainly fits into the picture we’re drawing here:

The most obvious change is that the new architecture would remove technical layers, such as databases and middleware. These capabilities would of course still exist, but they could be standardised and hidden inside the systems. They would not need so much management, and we would need fewer specialists.

Mark Masterson urges him to reconcile with our future world of cooperating tiny little machines, all busy message-passing and presumably acting somewhat autonomously to avoid the nightmare management burden. Sounds a bit like a job for … virtualization and resource pools? Or as Kevin puts it:

Is the answer a combination of LAMP, embedded computing, cluster management, and virtualization?

A virtualization koan: redshift or blueshift?

A student asked a zen master "What is the value of virtualization — redshift or blueshift?" The zen master answered "Mu."

Ben Rockwood at Joyent:

I was asked to co-present with an engineer from Sun at an upcoming conference in October. I asked him to do his slides and then shoot me over the presentation so I could fill in my half. I noticed that his view of virtualization and mine were very different. To put it into jargon speak, there is a difference between Redshift virtualization and Blueshift virtualization. …

Essentially it says that there are two different classes of business: “blueshift” companies that grow according to GDP and are essentially over-served by Moore’s Law that computing power doubles every two years, and “redshift” companies that grow off the charts, and which are grossly under-served by Moore’s Law.

i.e., blueshift is server consolidation, and redshift is dynamically bringing new servers up quickly on your virtual infrastructure. Zen blade master Martin MacLeod says that is the wrong answer — the value of virtualization is really in transforming your business process in both slow- and fast-growth businesses.

Martin MacLeod at Blade Watch:

But where virtualization really brings benefits is in the non-technical arena. The ability to turn it all around, to be a real business enabler, that the IT infrastructure can grow and adapt in line with the business need, that we move to a system of service provisioning where IT handle everything and provide the business with the virtual instance, a world where I can request a server for a month to test that .NET framework 3.0 works ok with my application, then give it back, where I can be allocated more processing power or memory in minutes not weeks due to the purchasing process needing sign off, processing and parts delivery.

Mendel Rosenblum: Operating systems are old and busted | The Register

Link: Operating systems are old and busted | The Register.

USENIX – Operating systems aren’t so great. They lounge like bloated monarchs on a database server — getting far more credit than they’re worth. Clutched in their sausage fingers are the keys to a kingdom far too vast to properly manage.

But Stanford professor Mendel Rosenblum believes virtualization may be the guillotine that cuts the OS reign down to size. Rosenblum, who is also a founder of VMware, called for heads to roll during his opening keynote at the USENIX conference in Santa Clara…Virtually roll, of course.

Will Microsoft sunset VMware?

Massimo re Ferre’ with a long think-piece on VMware, Microsoft, market forces, value-add, and paradigm shifts in the data center: Will Microsoft sunset VMware? All his points are good, but I particularly like this one:

Maniacally focused: you need to consider that for Microsoft this is one of the many battle-grounds. Windows Virtualization is a line-item feature in a new OS release. This has nothing to do with the fact that, for them, this is very important or not. It remains a fact that their overall efforts will be diluted across a number of markets that span from OS dominance to databases, from mail systems to development tools etc etc. For VMware this is "THE" market. They are laser focused to provide the best x86 virtualization experience and solutions. That’s what they do and they can afford to run full steam towards that result. Whether they will succeed is another matter but it’s important to notice.

And in his conclusion he invokes a paradigm shift coming in how we manage complexity in the data center. Massimo is particularly excited by Virtual Appliances (as am I), but his vision of the future data center isn’t dependent on that.

These are the reasons for which I don’t think Microsoft is going to sunset VMware. Clearly they will pose a challenge to them (a very tough one) but I don’t see VMware as being kicked out so easily. And the number one reason is because I really think that our datacenters need to be re-designed from the ground up. Let me quote myself: "This is a fascinating scenario and as you can imagine it involves more than just developing a hypervisor with a management interface: it involves creating a new culture on how we deal with IT, taking all the pieces apart and rebuilding our datacenters in a much more efficient way". Now if we agree that Microsoft is making a lot of money out of this "legacy" model (this is a fact) but that we need to change it (the legacy model) to become more efficient anyway … do you think that Microsoft itself could be the agent of change in this case? If they are not pushed they will try to maintain the status quo (well, status quo with license upgrades as new product versions come along).

I remember 5 years ago I went to Microsoft asking them what they were doing about virtualization since this little company called VMware was having brilliant ideas on how to consolidate servers, and they told me that their response was Itanium and Windows 2000 Datacenter.

See also the responses at the VMTN Forums.

New York Times: licensing, OS lock-in, and, yes, competition

From the Saturday, February 24, 2007 edition of the New York Times, A Software Maker Goes Up Against Microsoft. As the title implies, the story hook is competition between VMware and Microsoft. But the real issues are how customers are affected by hypervisor lock-in and licensing limits.

In a meeting with corporate customers in New York last month, Steven A. Ballmer, Microsoft’s chief executive, said, “Everybody in the operating system business wants to be the guy on the bottom,” the software that controls the hardware. … When quizzed on Microsoft’s plans, Mr. Ballmer replied, “Our view is that virtualization is something that should be built into the operating system.” …

VMware, however, points to license changes on Microsoft software that it says limit the ability to move virtual-machine software around data centers to automate the management of computing work. A white paper detailing VMware’s concerns will be posted Monday on its Web site (www.vmware.com), the company said.

“Microsoft is looking for any way it can to gain the upper hand,” said Diane Greene, the president of VMware.

The white paper will be available next week, but in the meantime, if you need to catch up, go check out our blog entries from last November, Freedom from OS lock-in.

Given the subject of the New York Times article, it must of course quickly bring up the ghost of Netscape. The article explains virtualization, the benefits of server consolidation, and gives the basic history of the company and the upcoming IPO. The real issues are touched on lightly — the article explains well the relationship of virtualization and the OS (inside or underneath?), and it mentions that VMware thinks licensing changes will affect customers and prevent many people from fully utilizing their virtual infrastructure. The article ends back on competition.

Virtual Iron and XenSource take opposing views on Microsoft’s recent moves. “Microsoft sees VMware coming between them and their customers,” said John Thibault, president of Virtual Iron. “So Microsoft is manipulating its license terms to see if it can freeze the market and slow down the trend.” …

VMware, according to Microsoft, should see the wisdom of the path XenSource chose. In his meeting with corporate customers recently, Mr. Ballmer sketched out a future in which Microsoft would put fundamental virtual-machine software in its operating systems, and “VMware builds on top.”

VMware is leery of such an accommodation, fearing it would prove to be a one-sided bargain. “We will not sign agreements that give Microsoft control of this layer,” Ms. Greene said.

See you Monday for more on the issues.