
16,000 Exchange Mailboxes, 1 Server

We recently finished a large Exchange 2007 capacity test on VMware
ESX Server 3.5. How large? Well, larger than anything ever done before on a
single server. And we did it from start to finish in about two weeks.

We did this test because we had felt for a while that advances in processor and server technology were about to leave yet another widely-used and important application unable to fully utilize the hardware that vendors were offering. Microsoft publishes guidelines on the environments that work well with Exchange, and a system with more than eight CPUs and/or 32GB of RAM is beyond the recommended maximums.

Hardware vendors are now offering commodity servers with 16 cores (four sockets with four cores each) and enough memory slots to hold 256GB of RAM. Within a year or two we expect this to go even higher, with commodity x86 systems being built with 32 cores. Microsoft Exchange deployments typically follow the ‘scale out’ model, which works well but leads to server proliferation and underutilized hardware, especially as systems get this large. VMware ESX Server lets us make more effective use of the hardware and increase capacity.

Using VMware ESX Server 3i version 3.5 we created eight virtual machines, each with two vCPUs and 14GB of memory, and configured 2,000 mailboxes on each one. We chose 2,000 users based on Microsoft’s recommendation of 1,000 mailboxes per core, and we selected 14GB of memory in accordance with the recommendation to use 4GB + 5MB per mailbox. We followed the hardware recommendations for Exchange Server in a multi-role configuration because each virtual machine was running the Hub Transport, Client Access (CAS), and Unified Messaging components in addition to hosting the mailboxes.
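
To make the sizing arithmetic concrete, here is a minimal sketch (Python, purely illustrative; the helper name and defaults are ours, not part of the test setup) that reproduces the numbers above: two vCPUs at 1,000 mailboxes per core gives 2,000 mailboxes, and 4GB plus 2,000 × 5MB works out to roughly 14GB per virtual machine.

    # Illustrative sizing check only; the guideline figures (1,000 mailboxes per
    # core, 4GB + 5MB per mailbox) are the Microsoft recommendations cited above.
    def size_exchange_vm(vcpus, mailboxes_per_core=1000, base_gb=4, mb_per_mailbox=5):
        mailboxes = vcpus * mailboxes_per_core
        memory_gb = base_gb + (mailboxes * mb_per_mailbox) / 1024.0
        return mailboxes, memory_gb

    mailboxes, memory_gb = size_exchange_vm(vcpus=2)
    print(mailboxes, round(memory_gb, 1))   # 2000 mailboxes, ~13.8GB -> rounded up to 14GB per VM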

We ran this test on an IBM x3850 M2 server with 128GB of RAM. The virtual machines ran Microsoft Windows Server 2003 R2 Datacenter x64 Edition with Service Pack 2 and Microsoft Exchange Server 2007 (version 8) with Service Pack 1.

The storage used for these tests was an EMC CX3-40 with 225 disks (15 drawers of 15 disks each). Each virtual machine was configured to use two LUNs of 10 disks each for the Exchange database and a three-disk LUN for logs.
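
As a rough tally (our own arithmetic, assuming every virtual machine used the same 10+10 database / 3-disk log layout and that the remaining drives were not part of the test), that layout consumes 23 spindles per virtual machine, or 184 of the array’s 225 disks across all eight virtual machines:

    # Disk count for the layout described above (Python, illustrative only).
    vms = 8
    db_luns_per_vm, disks_per_db_lun = 2, 10
    log_disks_per_vm = 3

    disks_per_vm = db_luns_per_vm * disks_per_db_lun + log_disks_per_vm   # 23 spindles per VM
    total_disks = vms * disks_per_vm                                      # 184 of the CX3-40's 225 disks
    print(disks_per_vm, total_disks)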

We used the Microsoft Exchange Load Generator (LoadGen) tool to drive the load on the mailboxes, and ran with the heavy user profile. Here are the LoadGen settings:

  • Simulated day – 8 hours
  • Test run – 8 hours
  • Stress mode – disabled
  • No distribution lists or dynamic distribution lists for internal messages
  • No contacts for outgoing messages
  • No external outbound SMTP mail
  • Profile used: Outlook 2007 Online, Heavy, with Pre-Test Logon

We ran the tests using both ESX Server 3.5 and ESX Server 3i version 3.5, and the performance was the same across both versions. Tests were run with one through eight virtual machines, and even in the eight-virtual-machine case about half of the server’s CPU resources were still available.

Disk latencies were around 6ms across our runs. The IOPS rate started off at about 0.65 IOPS/mailbox in the first hour but stabilized at 0.37 IOPS/mailbox in the last hour (once the cache was warmed up). Over the duration of the run the average rate was 0.45 IOPS/mailbox. The read/write ratio observed was approximately 4:1.
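
To put the per-mailbox rates into absolute terms (a back-of-the-envelope conversion on our part, not a separately measured figure), the sketch below scales them up to the full 16,000 mailboxes:

    # Convert the per-mailbox I/O rates quoted above into aggregate figures (Python, illustrative).
    mailboxes = 16000
    rates = {"first hour": 0.65, "steady state": 0.37, "run average": 0.45}

    for phase, iops_per_mailbox in rates.items():
        print(f"{phase}: ~{mailboxes * iops_per_mailbox:,.0f} IOPS across the array")
    # first hour ~10,400; steady state ~5,920; run average ~7,200 total IOPS
    # With the observed ~4:1 read/write ratio, roughly four-fifths of that I/O is reads.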

Sendmail latency is an important measure of the responsiveness of the Exchange Server. Figure 1 shows how it changed as more virtual machines were added to the system.

Figure 1. Sendmail Latency

A 1000ms response time is considered the threshold at which user experience starts to degrade. As can be seen from the 95th percentile response times in Figure 1, there’s still a significant amount of headroom on
this server, even at our highest tested load level.

These tests ran smoothly and demonstrated what we expected. This should come as no
surprise. As new hardware becomes available, the scalability of ESX Server allows us to easily make productive use of the additional capacity.

It took many hours and creative hardware “repurposing” from our lab personnel to put this setup together within a couple of days, and it’ll probably take them even longer to get everything back to its original place. I’d like to acknowledge that without their efforts, we wouldn’t have been able to get this done.

Summary

The many companies already running Microsoft Exchange Server on VMware ESX Server are experiencing improved resource utilization and better manageability, as well as lower space, power, and cooling costs. New servers with greater processing power make the transition to Exchange on ESX Server even more compelling.