Monthly Archives: January 2007

VMmark digs in with Woodcrest

Bruce Herndon does some scaling tests with VMmark on an HP DL380 G5 server with the new Intel Woodcrest dual-core processors. The different workloads (mail server, Java server, database, web
server, and file server) have different scaling characteristics,
depending on their need for CPU, memory, disk, and network capacity. He also looks at the scaling of realistic workloads across two different configurations: six disks in a single LUN, and two three-disk LUNs.

Link: VROOM! Trying VMmark on Some New Hardware

These two different disk configurations highlight some interesting tradeoffs and tuning opportunities exposed by VMmark. The single LUN configuration utilizing six disks has the benefit of providing high disk throughput for one VM at the expense of scalability if multiple disk-intensive VMs are running. On the other hand, creating multiple LUNs provides both good predictability and excellent scaling but limits the total throughput of any single VM by providing only a subset of the hardware resources to each one. From a benchmarking perspective, the multi-LUN approach is clearly better since it results in a higher overall score. In practice, the proper approach depends upon the needs and goals of each user. I am excited by the ability VMmark gives us to study these types of performance tuning tradeoffs in a representative multi-VM environment. I feel that building performance tuning expertise in these complex situations and getting that information to our customers along with the ability to evaluate hardware and software platforms for virtualization should make VMmark an extremely valuable tool. Please stay tuned as we work to make that a reality.

Despite some claims to the contrary, VMmark is available to beta testers now (email vmmark-info@vmware.com), and will be released for general availability.

Appliances on demand for the startup

John Sequeira ponders the question: "Why on-demand appliances?" He gets virtualization, but the ‘resource pool’ approach of something like Amazon’s EC2 does require a shift in thinking and a comfort level with IT as a utility. I think John’s a-ha here is more about the usefulness of virtual appliances, whether they’re in the cloud or on your ESX Server in the data center. I personally see the most need for on-demand computing around capacity management (unexpected DoS attacks or planned seasonal surges) and capital management (why buy when you can lease?).

Link: John Sequeira’s Weblog.

Why is this cool? Well, consider the difference between your typical
startup and a mature web enterprise: to really run a web hosted
application according to best practices, you should have


  • staging setup
  • production setup
  • hot standby / DR plan
  • version control repository / bug tracker
  • integrated authentication
  • distributed file system
  • load balancer
  • firewall / intrusion detection
  • etc.

And no one does initially because it takes a lot of time, money and
expertise to put all these pieces in place. But what if you could have
it all initially and it didn’t cost an arm and a leg? The idea of a
vendor (like, say, Novell or RH) pre-provisioning all the machines
required to pull the above off, and offering them via the Amazon EC2
Control Panel, is quite compelling. Imagine the options:


  • Stateful firewall with mod_security? Check.
  • Dedicated image server pre-configured with optional Akamai CDN support? Check.
  • Web analytics reporting server? Check.
  • Offline BI/OLAP database with real-time replication? You get the idea.

Each check on that control panel is the equivalent of days or weeks of work on your hand-rolled data center.

Compatibility Guides, served 4 ways

The Hardware Compatibility Guides (colloquially known as the HCLs) now have a blog-style page for updates and a directory listing to accompany the existing RSS feed.

The Community-Supported Hardware and Software List
is up and running, a place for vendors and members of the community to
report working configurations for VMware Infrastructure that aren’t
in the official compatibility guides. Submissions do go through moderators, but we only check for appropriateness; we don’t validate these ‘reported to work’ configurations.

Performance Tuning Best Practices for ESX Server 3

From Aravind Pavuluri on the VMware performance blog, VROOM!

Link: VROOM!: Performance Tuning Best Practices for ESX Server 3

Over time a number of customers have asked us for a single,
comprehensive ESX performance tuning guide that would encompass
CPU, memory, storage, networking, and resource management (DRS)
component optimizations. Finally we have the Performance Tuning Best Practices for ESX Server 3 guide. …

Some customers will want to carefully benchmark their ESX
installations, as a way to validate their configurations and determine their
sizing requirements. In order to help such customers with a systematic
benchmarking methodology for their virtualized workloads, we’ve added a
section in the paper called "Benchmarking Best Practices". It covers the
precautions that have to be taken and things to be kept in mind during such
benchmarking. …

The strength of the paper is that it succinctly
(in 22 pages) captures the performance best practices and benchmarking tips
associated with key components.

Virtualization … Wars?

David Marshall, who has been virtualizing for years (he was even an external alpha tester for ESX Server), has written a good overview of where we are in the adoption of virtualization, and how we got here over the last few years.

Link: Virtual Strategy Magazine – Virtualization Wars.

  • Maximize resources – Perhaps the most common problem being
    solved with virtualization today – applications are running on their
    own dedicated servers, which results in low server utilization rates
    across the server environment. Server consolidation is used to help
    maximize the compute capacity on each physical server which therefore
    increases ROI on existing and future server expenditures.
  • Test and development optimization – Test and development
    servers can be rapidly provisioned by using pre-configured virtual
    machines. By leveraging virtualization, development scenarios can be
    standardized and quickly executed upon in a repeatable fashion. It also
    allows for increased collaboration, and ultimately helps with
    delivering a product to market faster and with fewer bugs.
  • Quickly respond to business needs – Deployment processes are
    becoming more difficult to manage in a complex environment and IT is
    unable to adapt as quickly to changing business requirements. Moving to
    a virtual environment helps with procurement, setup and delivery,
    giving IT the efficiency needed for rapid deployment.
  • Reduce business continuity costs – Virtualization
    encapsulation (capturing an entire system in a single file) and
    abstraction (decoupling the system from the underlying physical
    hardware) help to reduce the cost and complexity of business
    continuity by offering high availability and disaster recovery
    solutions where a virtual machine can easily be replicated and
    moved to any target server.
  • Solve security concerns – In an environment where systems
    are required to be isolated from each other through complex networking
    or firewalls, these systems can now reside on the same physical server
    and yet remain in their own sandbox environment, isolated from each
    other using simple virtualization configurations.

It’s a good article; recommended. In the first part, David lays out where we are as an industry, and the current drivers and speed bumps on the way to virtual infrastructure; it would be a good intro for anyone. In the second part, David covers the last few years in the marketplace. This is interesting as context for the current players: Microsoft originally saw virtualization as a migration tool for moving to new versions of Windows, which is very different from our current view of virtualization as freeing us from the rigid coupling of compute resources to physical hardware.

I have a bit of a problem with the title; perhaps it’s a reaction to the current world situation, but I have a hard time seeing what we’re doing as a "War," even metaphorically. This is not a mature market, like the RDBMS market in the ’90s with Oracle, Sybase, and Informix all slugging it out. It’s certainly a Race, with VMware in the lead, building value on top of the hypervisor while others are still building their core technology. We actually do a lot of teaching in the field, as customers try to figure out where they should be using hardware virtualization vs. other technologies. "War" has an unfortunate focus on the vendors themselves and the competition between them; I’d rather be listening to customers and how they’re solving problems.


Latest blog entries from VMTN

A few recent updates from us:

  • Let me know what you think of the new VMTN Front Page and if it is useful to you. I have it set as my browser start page, but I may be biased since I built it.
  • "We’ve been busy…" from Steve Herrod. Steve is looking for feedback: given that he can’t really talk about future products or features, what would you like to hear from a VP of R&D at VMware?

  • "Shrinking the VMmark Tile" from Bruce Herndon on the new performance blog VROOM! on scaling back some workloads in the VMmark standard "Tile" from 2GB to 1GB.

    Given that we initially sized our VMs based upon various industry and customer surveys, I am led to wonder if there aren’t lots of servers over-configured with not only CPU but also memory. As a final series of tests, I reran the newly modified VMmark on several systems for which I already had data for the existing 7GB tile size. Overall I saw very little effect on the benchmark scores. It looks like the 5GB VMmark tile is a go.

The end of the monolithic firewall?

Here’s a new thought on a known aspect of appliances. Appliances, being purpose-built for a single task, are usually simpler to configure and maintain than a generic compute server. Virtual appliances (1) are easier to deploy but (2) in some cases may have a reduced performance profile because, well, they aren’t on dedicated network hardware. You can make lemonade out of any performance hit, though: disaggregating may simplify your network and reduce its interdependencies. Instead of one complicated config file on your firewall with all application traffic flowing through it, just fire up one virtual firewall per app and configure your network accordingly. There are both commercial and open source firewalls in the Virtual Appliance Marketplace, most with a very small footprint.

Link: Replicate Technologies » Network appliances go virtual.

None of these will run as fast in a VM as they will in an engineered hardware appliance, where they could conceivably achieve wire speed of 100 Mbps or even 1 Gbps, instead of a VM’s more typical 25-50 Mbps. But then again, it’s rare that most applications ever see that much demand for their services; under 20 Mbps is more typical. In fact, there are cases where the traffic from many applications is forced through a single hardware appliance “because it’s there,” when a more logical network topology would separate the traffic and give each application its own appliance. For example, firewalls sometimes have extremely complex configurations because they manage security for many different applications in a single box, when they could be more easily managed with one firewall per application. Disaggregate the traffic and you may reduce complexity and configuration errors, while lowering the traffic rates to levels more suitable for a virtual appliance. As cores become more numerous in servers, it may become more appealing to use them for network functions, replacing hardware and cabling with software.
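To make the idea concrete, here is a minimal sketch of what a per-app virtual firewall’s entire ruleset might look like. The interface names and port are hypothetical, not taken from the linked post; the point is that a monolithic firewall would interleave rules like these for dozens of applications, while the per-app appliance keeps just its own few lines:

```shell
#!/bin/sh
# Hypothetical per-app firewall VM fronting a single web application.
# eth0 faces the outside network; eth1 faces the app's private subnet.
iptables -P FORWARD DROP   # default-deny: nothing crosses unless allowed

# Allow inbound HTTP to this one app, and the reply traffic back out.
iptables -A FORWARD -i eth0 -o eth1 -p tcp --dport 80 -j ACCEPT
iptables -A FORWARD -i eth1 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
```

Because the VM serves exactly one application, the whole policy stays a few lines long; adding a second application means cloning the appliance, not growing this file.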

Keep up to date with the new VMTN

The new VMTN front page gives
a dynamic view into the activity on the site and in the VMware
community. Keep up to date on the latest in VMTN News, Virtual
Appliances, Technical Resources, Discussions, Knowledge Base,
Compatibility Guides, Security Alerts, VMware Blogs, and Virtualization
Blogs. The page is updated throughout the day, and new RSS feeds will
be coming soon.

Who updates the appliances?

Red Hat’s David Lutterkort is on the money in this posting. The concept of a virtual appliance is seductive, but when the rubber hits the road, somebody has to keep it updated. That’s why we’re seeing the production-ready virtual appliances come from established appliance vendors who have the business and technical processes in place to do this.

Package management has come a long way in the past 10 years, and I expect that we’ll be seeing functionality to do unattended, automatic security updates built into our OSes and applications more and more over the next decade. This changes the role of the vendor or open source project into a service provider, but from my perspective, that’s a good thing. I’m looking forward to seeing how folks like David and Red Hat move the ball forward.
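As a sketch of what that service-provider model can look like mechanically on a Red Hat-style appliance (the package and service names here are assumptions about the vendor’s build, not anything from David’s post), the vendor could ship the image with unattended nightly updates already switched on:

```shell
#!/bin/sh
# Hypothetical appliance build step: enable unattended nightly updates.
# Assumes a Red Hat-style guest where the yum-cron package is available.
yum -y install yum-cron   # runs "yum update" nightly from cron
chkconfig yum-cron on     # start the updater service at boot
service yum-cron start
```

The appliance user never runs an update by hand; the vendor’s repository becomes the ongoing service that keeps the box patched.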

watzmann.blog – What would you like your appliance to do?

A decent system for handling appliances therefore needs to take the plight of the typical (which means grumpy) sysadmin into account, and needs to be geared towards almost arbitrary site-specific customizations, since sysadmins will still need to do a lot of the things they do to systems today to the appliances of tomorrow.

Instead of focusing on minimizing the footprint of general-purpose appliances, or marginally improving how the binaries making up the appliance are selected and built, we should be focused on delivering appliances that fit into a manageable ecosystem made up of virtual and nonvirtual systems. Which means that good appliance tools should be focused on producing appliances that can be managed well; at a minimum, let’s make sure that users have a reasonable way to upgrade the appliance and preserve their customizations at the same time. In other words: appliances are a new way to deliver software, but to run that software maintainably, we need to get down and dirty with old management problems like package management, config management, monitoring etc.

Grokking VMWare: SQUID/SARG appliance

Link: Grokking VMWare | Jon Watson’s Tales from the Motherboard.

I also modded a SQUID VM today by adding SARG reporting to it. This is a good example of where the true value comes in for us. …

The SQUID/SARG VM is a great example. It’s configured with two NICs:
one bridged to the 10. network, and the other set to host-only, which
provides the Internet access. I can now drop this VM into any of our
sites, and simply by setting the proxy in the Windows clients I get
full proxying complete with daily spy-reports.
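As a rough sketch of that two-NIC layout in VMware terms (the file name is hypothetical, but the keys follow the standard .vmx convention for NIC connection types), the proxy VM’s configuration pairs a bridged NIC for the clients with a host-only NIC for the upstream side:

```shell
#!/bin/sh
# Hypothetical sketch: record the two-NIC layout in the proxy VM's .vmx.
# ethernet0 is bridged onto the site's 10.x LAN (where the clients live);
# ethernet1 is host-only, with the host providing the Internet path.
cat >> squid-proxy.vmx <<'EOF'
ethernet0.present        = "TRUE"
ethernet0.connectionType = "bridged"
ethernet1.present        = "TRUE"
ethernet1.connectionType = "hostonly"
EOF
```

Because everything the VM needs is self-contained, dropping the same image at another site only requires pointing the Windows clients’ proxy setting at it.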

Some apps aren’t really suited for a VM, but things like this proxy
VM are a great example. Anything that is self-contained and provides
some significant functionality is a good candidate for VM’ing.