Monthly Archives: March 2008

What’s in a name?

The eagle-eyed amongst us may notice a few changes on the VMware.com site today. And no, I’m not talking about the general availability of VMware Lifecycle Manager or even the release of VMware Server Beta 2. Care to guess? I will send one "I <3 VMware" sticker to the first person who gets it in the comments.

[Update: kudos to Duncan and Frank -- ESX and ESXi are now the foundation for your dynamic and automated data center. There is no longer a Server installed on your Server.]

Best practices for securing virtual networks

Hezi Moore, co-founder and CTO of Reflex Security, has a nice 3-part primer on how to start thinking about your virtual networks as a guest post on VMblog. While Hezi does mention virtual appliances, he avoids turning this into an ad for Reflex.

Best Practices for Securing Virtual Networks – Part One of Three 

However, virtualized environments face unique network security challenges that can affect the entire organization. Adding
security to your virtual network, such as a virtual security appliance,
can protect critical resources from intrusion, theft, service denial,
regulatory compliance conflicts or other consequences. 

Fortunately, by combining prudent security measures with advancing virtualization technologies, organizations can adopt
and deploy “defense in depth” best practices without the traditional
high costs and complexities associated with physical infrastructure
and enjoy the benefits of a virtualized architecture while avoiding excessive risks. …

Virtualized environments are difficult to visually
inspect, and due to virtual server mobility and related issues they
often have dynamic configurations and server populations. In this context, threats can easily spread, devices can be overlooked, and inappropriate activity can be concealed. To
prevent configuration oversights, rogue devices, auditing omissions and
other issues, the security system should maintain persistent awareness
of all virtualized devices, services and communications. 

Best Practices for Securing Virtual Networks – Part Two of Three

Primarily, organizations have four alternative or
complementary approaches to secure virtualized environments: physical
network security devices, physical device / VLAN configurations, host
intrusion prevention systems and virtualized network security systems. 

Best Practices for Securing Virtual Networks – Part Three of Three

Leverage virtualization platform to enable security

Though
virtualization can present new security challenges, it is a powerful
technology that can have a significant impact on an organization’s
ability to become more efficient, effective and productive. Organizations
should determine not only what business applications can benefit from
virtualization but also what IT applications can benefit from
virtualization and use this trusted platform as an enabler. Determine
which physical devices make the most sense to deploy in virtualization and
utilize complementary software like virtual security appliances to
provide the following capabilities in the virtual environment:

  • Security
  • Visibility
  • Control
  • Manageability
  • Policy enforcement
  • Deployment

(And thanks, Dave, for getting this kind of original article out alongside the comprehensive industry and blog news you can find at VMblog.com)

Virtualization is Easy Enough for an 11 Year Old

From Mike D. Link: From the VMware Field: Mike D’s Virtualization Blog: Virtualization is Easy Enough for an 11 Year Old.

This is a story about perhaps our youngest customer to date at
VMware. The story really doesn’t talk much about Jon’s experience with
VMware until the very end but it does show how VMware (and
virtualization in general) can be used in pretty much any environment
by just about anyone with any kind of budget.

I’d like to wish Jon good luck in his future endeavors in the IT world!


As for Jon, he says he loves testing virtualization
software like VMware and wants to obtain “A+ certification” by passing
the computer-technician exam by that name developed by trade group
CompTIA. “Hopefully, I can do that this summer,” he says.


[From NetworkWorld ]

The philosophy of clustering vs HA

IBM’s Massimo Re Ferre’ with another long thought-piece on the philosophical differences between traditional clustering (application- and OS-dependent, complicated) and approaches like VI3’s High Availability (HA) (treats the workload as a virtual appliance, simpler). Massimo works directly with customers, so although he recognizes that paradigms are changing, he looks at the strengths and weaknesses of both approaches, and alludes to some of the organizational and operational changes you’ll have to make to get there.

Link: IT 2.0 Main Blog : VMware HA Vs Microsoft Cluster Server: we are at the inflection point.

If you stop for a minute and think about what is happening in this
x86 virtualization industry, you’ll notice that many infrastructure
services that were typically loaded within the standard Windows OS are
now being provided at the virtual infrastructure layer. An easy example
would be network interface fault tolerance: nowadays in virtual
environments you typically configure a virtual switch at the
hypervisor level, comprised of a bond of two or more Ethernet adapters
and you associate virtual machines to the switch with a single virtual
network connection. What you have done in this case is delegate the
handling of Ethernet connectivity problems to the virtual
infrastructure. This is a very basic example and there are many
others like this such as storage configuration/ redundancy/ connectivity.  …

We are clearly at an inflection point now where many customers that
used to do standard cluster deployments on physical servers (which was
the only option to provide high availability) are now arguing how to do
that. They now have the choice to either continue to do so in virtual
servers as opposed to physical servers (thus applying the same rules,
practices, and with little disruption as far as their IT organization
policies are concerned) or turning to a brand new strategy to provide
the same (or similar) high availability scenarios (at the cost of
heavily changing the established rules and standards). The reason I am
saying we are at an inflection point is because I really believe that
the second scenario is the future of x86 application deployments, but
obviously as we stand today there are things that you cannot
technically do or achieve with it. Plus, there is a cultural problem
in moving from an established scenario to the other.
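The delegation Massimo describes can be sketched in a few lines: the vSwitch owns a bond of physical adapters and routes through a healthy one, so the VM's single virtual connection never needs to know a cable failed. This is a toy model, not VMware's implementation; every class and name in it is made up for illustration.

```python
# Toy model of hypervisor-level NIC teaming: the vSwitch hides uplink
# failures from the VMs attached to it. Illustrative only -- not
# VMware's implementation; all names here are invented.

class Uplink:
    def __init__(self, name):
        self.name = name
        self.healthy = True

class VSwitch:
    def __init__(self, uplinks):
        self.uplinks = uplinks          # the "bond" of physical NICs

    def active_uplink(self):
        # Fail over to the first healthy adapter in the bond.
        for nic in self.uplinks:
            if nic.healthy:
                return nic
        raise RuntimeError("all uplinks down")

    def send(self, frame):
        return f"{frame} via {self.active_uplink().name}"

# A VM sees one virtual connection; the vSwitch handles redundancy.
vswitch = VSwitch([Uplink("vmnic0"), Uplink("vmnic1")])
print(vswitch.send("pkt"))              # goes out vmnic0
vswitch.uplinks[0].healthy = False      # simulate a cable/NIC failure
print(vswitch.send("pkt"))              # transparently fails over to vmnic1
```

The VM's virtual NIC configuration never changes; only the vSwitch's choice of physical uplink does, which is exactly the delegation Massimo is pointing at.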

Raghu Raghuram on the hypervisor and the next big opportunity

VMware VP Raghu Raghuram at Redmond Magazine. Link: Redmond | Redmond Report Article: Driving VMware.

Redmond: What are the major differences between VMware and Microsoft in how each company views hypervisors?

Raghuram: There are some stark differences. Our view
is that the core virtualization layer belongs in the hardware. It also
has to be much smaller in order to reduce its surface area for attacks.
This is why we introduced the 3i architecture, which will become
mainstream over the course of this year.

Our product will be less than 32MB, but will still have all
the functionality. Our sense is if you turn on the server, you turn on
virtualization at the same time. Our approach is similar to that of
mainframes and big Unix machines where there’s no separate
virtualization software as part of the operating system. Our
architecture enables this notion of a plug-and-play data center. So, if
they need more capacity for the data center, then they just roll in a
new server, which is automatically virtualized.

The Microsoft approach is to have virtualization be an adjunct
to the OS. With the Virtual Server architecture, it’s explicitly a
separate layer that relies on the OS. With the Hyper-V architecture,
they’re still maintaining the same dependency on the OS, so it’s not
fundamentally different than the Virtual Server in that respect. The
downside for customers is the Virtual Server architecture is still tied
to a commercial OS, which is fairly vulnerable to attacks and has a big
footprint.

Everybody ‘gets’ server consolidation. The math is easy, the ROI immediate. Do you ‘get’ business continuity? For many organizations, virtualization can be the difference between a notion of a plan and having a real, operational capability. More from the interview:

Some of these products also address what you
are calling IT Service Continuity. How important is this to your
strategy going forward?

Very important. Business continuity is the silver bullet
for virtualization beyond consolidation. In fact, two-thirds of all our
customers are already trying to do business continuity using
virtualization. These [products are] designed to automate all processes
so that if your data center fails, you can automatically failover to
another data center and then fail back. One of the interesting things
about business continuity is that, because it’s so complex to do, people have
business continuity plans on paper, but they are hardly ever tested.
The products we announced enable the automated testing of those sorts
of plans.

Building the home lab: VMguru

Scott Herold over at the newly revivified VMguru is laying out his home lab setup for us post by post. Make sure you have plenty of power!

VMGuru Lab, Coming Soon

VMGuru Lab: Network Infrastructure

Needless to say, I was able to power through with minimal cursing and
no thrown or kicked components. What ended up being the most
challenging aspect of the entire process was digging through my boxes
of old computer junk that I refuse to throw away to find my null modem
cable. I’m glad I was able to find it because I truly question the
ability to walk into a retail store and buy one nowadays.

VMGuru Lab: Storage Server

My operating system of choice was Ubuntu 7.10 Server. Let me start by
saying it would have absolutely been 10X easier to build this server
had I used Ubuntu 6.06 Server. The iSCSI Enterprise Target software is
not available in the universe repositories and had to be compiled, and
even then only after modifying the makefile.

One thing that some people may notice about the configuration is that
I have specified a ScsiSN value for each LUN.
While poking around and trying to get the stupid thing to properly
build I saw that there was a README.vmware file in the build directory.
I figured it might actually apply to what I was doing so decided to
open it up. As I expected, it absolutely applied and made sense of some
weird issues I had seen in the past.
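For reference, an ietd.conf entry with per-LUN serial numbers might look roughly like this. This is a sketch only: the target name, device paths, and serial strings are made up, and the exact option spellings should be checked against the IET documentation for your version.

```
# /etc/ietd.conf -- illustrative fragment; names and serials are invented
Target iqn.2008-03.lab.local:storage.disk1
    Lun 0 Path=/dev/sdb1,Type=fileio,ScsiSN=VMLAB0000000001
    Lun 1 Path=/dev/sdc1,Type=fileio,ScsiSN=VMLAB0000000002
```

Giving each LUN a stable serial number is the point of the README.vmware advice: ESX uses it to identify LUNs consistently across reboots and paths.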

Make sure you check out the comments from VMware Communities regular Jason Boche, who has his own home lab. [via]

Rich Brambley at VM /ETC has also been posting about white box and on-the-cheap VI setups. See this post: Cheap ESX solutions for testing, where he points to some great threads at VMware Communities — this has been a rolling discussion for years. See also
ESX home lab hardware shopping list. (Actually, take a look at VM /ETC for the whole month of March — resource pools, VDM, small business P2V, monitoring, and more. Rich is kicking @$$ over there.)

How’s your home lab spec’ed out? Post something on your blog or drop me a line. I’m always jtroyer [at] vmware.

VMworld sessions online

VMworld Europe 2008 session materials are now up at VMworld.com. Some are offered as a streaming recording; other sessions have only the presentation materials at this time. More recordings will be published over time.

You must use the same login you used for the conference registration and session builder. More information on access. Some sessions are available to non-attendees, and we will continue to release access to more sessions over time.

Remaindered links for March 21, 2008

Chuck Hollis on VDI — “I’ve never seen anything like this in the industry”

EMC’s Chuck Hollis on VDI. Link: Chuck’s Blog: VDI — The Red Hot Discussion.

I’ve never seen anything like this in the industry. … Just when you thought the server-oriented ESX party was raging, over
the last 6-12 months the VDI discussion has become extremely
interesting, especially to larger organizations who are seeing the
potential to save money, deliver better user experiences, improve
security and so on.

If you’ve been around the industry for any length of time,
periodically the thin client discussion comes around.   Please, set
aside your cynicism for just a moment — this time it’s different.

Previously, it’s been an IT-driven thing.  All the benefits accrued
to IT, and few (if any) to the knowledge workers who had to use the
stuff.  There were some nasty compromises that limited thin-client
effectiveness.

With VDI, users get clear benefits. 

It’s a full experience with no compromises – an XP Pro desktop is an
XP Pro desktop — it’s very hard to detect any meaningful differences.
They get the ability to potentially work on any device (home, office,
etc.) and get a full and consistent desktop experience — no schlepping
files around, etc. …

Don’t over-optimize the environment for cost savings.  I’ve talked
to more than a few IT organizations that were trying to get the very
last pennies out of cost savings, at the expense of an improved user
experience. …


And, surprisingly, many of the policies around desktop usage might
be re-thought at the same time.  Like access from outside the firewall,
for example.  Or supporting consultants and other business partners
using internal applications.  A lot can potentially change here — and
for the good.

Quite humbly, I’ve never seen anything like this before …

He also talks about some of the case studies on our site from a storage guy’s perspective. A good read, as is his whole blog. Also check out the VMware Communities VDI Community for more discussion.

Memory Overcommitment in the Real World

We really think VMware Virtual Infrastructure gives a huge amount of value and features compared to other virtualization solutions on the market. Our customers tell us the ROI is high, the time to recoup their costs is small, and a virtualization-first policy can increase your IT agility and even transform your business.

If you (or your boss) can’t get beyond the price, our new Virtual Reality blog had a post last week doing a back-of-the-envelope calculation on price per VM. The point was not to present a full ROI calculation (go over to our VMware TCO-ROI Calculator for a more comprehensive view), but to point out that the sticker price doesn’t tell the whole story. In this case, because VMware VI3 can share memory pages between VMs, you can actually use more memory than is physically resident on the system, and consequently you can consolidate more VMs per physical server. Fewer servers = less hardware & fewer software licenses = lower cost per VM. That’s the reader’s digest version — read both of Eric’s pieces for more information: Cheap Hypervisors: A Fine Idea — If You Can Afford Them and More on VMware Memory Overcommit, for Those Who Don’t Trust the Numbers.

Well, you would have thought that we said Bill Gates wasn’t curing malaria or Citrix killed a man in Reno just to watch him die. The fuss was incredible. You’d begin to suspect that memory overcommit wasn’t on the product roadmap of either Microsoft or Citrix.

Everything we say here on our blogs doesn’t mean a damn thing compared to what’s going on in your data center. This stuff works in real life. (That doesn’t mean you can’t misconfigure it, but it’s probably working right now in your own data center.)

Mike DiPetrillo comes back with a real-world story. First off, he points to this great comment from a VMware customer about memory overcommit and VDI:

One virtualized Citrix server is handling 50-85 sessions and it’s not
full yet. Each of the sessions is running one of three published
applications that all share the same base PowerBuilder code and .DLLs
(about an 80MB memory footprint for each session). Because each of the
50-85 sessions shares the same code, VMware’s Content-Based Page
Sharing consolidates many of the identical pages into single read-only
reference points and discards the redundant copies. The net result is
significant ESX host memory savings. As an example, what I’m seeing
inside the Citrix VM is nearly 4GB of RAM being used, but from the ESX
host perspective, 1GB or less physical RAM is being utilized, leaving
the additional 3GB of physical RAM for another VM in the cluster to
use. Now multiply this memory savings by the number of virtualized
Citrix servers and the memory savings adds up quickly.
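A toy model of what Content-Based Page Sharing is doing here: hash each guest memory page and keep one physical copy per unique hash. The page counts below are invented purely to show the shape of the savings; they are not the customer's numbers.

```python
# Toy model of content-based page sharing: the hypervisor hashes guest
# memory pages and keeps a single read-only copy of identical ones.
# Page counts below are invented for illustration only.

import hashlib

PAGE_SIZE_KB = 4

def host_memory_kb(pages):
    """KB of physical RAM needed once identical pages are shared."""
    unique = {hashlib.sha256(p).digest() for p in pages}
    return len(unique) * PAGE_SIZE_KB

# 50 sessions all running the same code: 20 identical pages per session.
shared_code = [bytes([i]) * (PAGE_SIZE_KB * 1024) for i in range(20)]
all_pages = [page for _session in range(50) for page in shared_code]

allocated = len(all_pages) * PAGE_SIZE_KB   # what the guests think they use
actual = host_memory_kb(all_pages)          # what the host actually spends
print(f"guest-visible: {allocated} KB, host: {actual} KB")  # 4000 KB vs 80 KB
```

Fifty copies of twenty identical pages collapse to twenty physical pages, which is why the Citrix VM can report 4 GB in use while the ESX host spends 1 GB or less.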

And then he goes in detail on another real-world situation with VDI at a bank. Link: Memory Overcommitment in the Real World.

This customer configured their standard Windows XP environment for
their call centers to run in a virtual machine. Each virtual machine is
granted 512 MB of memory and 1 virtual CPU. Each VM runs a series of
applications including Marimba, Microsoft Office, a call recording
application, a customer database application, and a BPO (business
process off-shoring) application. …

Below is a screenshot of their environment showing a total of 178 VMs
running on the system. You can also see in the screenshot that less
than 20 GB of RAM out of the total 64 GB of RAM is being used on the
system. With a total of 178 VMs configured for 512 MB of RAM each they
are currently allocating 89 GB of memory to running VMs which means
they are oversubscribed on the host. …

In order to run the same setup with a competitive solution we would
need to have a server configured with at least 89 GB of RAM – the total
allocated to all of the running virtual machines. …
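The arithmetic checks out; a quick sketch using only the figures Mike quotes:

```python
# Overcommit arithmetic from the screenshot: 178 VMs at 512 MB each on
# a host with 64 GB of physical RAM, of which under 20 GB is in use.

vms = 178
mb_per_vm = 512
host_gb = 64
used_gb = 20    # observed on the host, per the screenshot

allocated_gb = vms * mb_per_vm / 1024
print(f"allocated to VMs: {allocated_gb:.0f} GB on a {host_gb} GB host")
print(f"overcommit vs physical RAM: {allocated_gb / host_gb:.2f}x")
print(f"physical RAM still free: {host_gb - used_gb} GB")
# A solution without overcommit would need a host fitted with >= 89 GB.
```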

Compared to the original configuration this is a difference of
$11,800.00 just to add more memory to the server to support a solution
that does not have memory overcommit. The cost of a 4 socket VMware VI3
Enterprise license is $11,500.00 list price. As you can see the cost of
a VMware license is actually $300 less than the cost of adding more
memory. Not much of an advantage on the cost side but it still drives
home the point that the VMware solution is not more than the
competitive "free" solution. What’s more, you now get all of the
enhanced functionality of the VMware solution that the competitive
solutions are lacking.
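And the cost comparison, worked through with the post's own numbers:

```python
# The cost comparison from the post: extra RAM for a solution without
# memory overcommit vs. the VI3 license that makes the RAM unnecessary.

extra_memory_cost = 11800   # add enough RAM to reach 89 GB, per the post
vi3_license_list = 11500    # 4-socket VI3 Enterprise, list price

difference = extra_memory_cost - vi3_license_list
print(f"the VMware license costs ${difference} less than the extra memory")
```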

We can keep on coming up with more of these, or just ask your colleagues over at VMware Communities if this works in real life. Feel free to chime in over at Mike’s post on memory overcommit examples if you want to share your story.