
Monthly Archives: April 2012

Protecting Exchange 2010 with vShield 5.0

by Jeff Szastak

Enhancing Exchange 2010’s Security Profile

In this post we will discuss using vShield to bolster the protection profile of Exchange 2010. We will start with a brief overview of vShield, then discuss the Exchange 2010 architecture, and finally cover how we implemented vShield around Exchange 2010.

vShield 5.0 Overview

The VMware vShield product family is the foundation for trusted cloud infrastructures. vShield enables adaptive and cost-effective security services within a single management framework. vShield is a suite of products comprising vShield Edge, vShield App, vShield Data Security, and vShield Endpoint. For the purposes of this post, we will focus on two of the four products: vShield Edge and vShield App.

vShield Edge provides network edge security and gateway services to isolate VMs in a port group, vDS port group, or Cisco Nexus 1000V. vShield Edge is a stateful inspection firewall that can provide NAT, DHCP, IPsec site-to-site VPN, and web load balancing services for the virtual data center.

vShield App is a layer 2/layer 3, virtualization-aware, hypervisor-based firewall that protects applications in the virtual datacenter from network-based attacks. A major benefit of vShield App is that access control policies are based on logical as well as physical constructs, versus the purely physical constructs a traditional firewall leverages. An example would be the ability to create rules based on a vApp (logical) versus an IP address (physical).

Exchange 2010 Architecture Overview

We built Exchange 2010 within the construct of a vApp. A vApp allows you to group VMs together and perform management functions against those VMs, such as power-on and power-off operations. vApps also provide the ability to create 'nested' vApps. We leveraged this ability to create a multi-tier vApp for Exchange.

We created a root vApp labeled Exchange and then nested three different containers, based on Exchange 2010 roles (CAS, HUB, Mailbox). We then explicitly configured boot order within the CAS, HUB, and Mailbox vApps and at the Exchange Level.
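To make the nesting concrete, here is a minimal sketch (not VMware tooling) that models the nested vApp layout described above and flattens it into a power-on sequence. The VM names and the Mailbox-first ordering are illustrative assumptions, not the exact inventory from our lab.

```python
# Illustrative model of a nested vApp hierarchy; names and boot order
# are assumed for the example, not taken from the actual environment.
exchange_vapp = {
    "name": "Exchange",
    # Children listed in explicit boot order within the root vApp.
    "children": [
        {"name": "Mailbox", "children": [{"name": "MBX01"}, {"name": "MBX02"}]},
        {"name": "HUB",     "children": [{"name": "HUB01"}]},
        {"name": "CAS",     "children": [{"name": "CAS01"}]},
    ],
}

def power_on_order(vapp):
    """Depth-first walk returning leaf VMs in boot order."""
    if "children" not in vapp:
        return [vapp["name"]]
    order = []
    for child in vapp["children"]:
        order.extend(power_on_order(child))
    return order

print(power_on_order(exchange_vapp))
# ['MBX01', 'MBX02', 'HUB01', 'CAS01']
```

Because the order lives on the containers rather than on individual VMs, any VM added to a nested vApp later inherits its container's place in the boot sequence.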



We separated the Exchange 2010 roles into individual VMs for the CAS, HUB, and Mailbox roles. We used Exchange 2010 SP1 installed on Windows Server 2008 R2 Standard/Enterprise. We also configured the cluster SameSubnetDelay setting to 2000ms since we are using HA, DRS, and vMotion with DAG. For more information on running a DAG on the vSphere platform, see the whitepaper Using VMware HA, DRS, and vMotion with Exchange 2010 DAGs. The VMware software used in this configuration was vSphere 5.0 and vShield 5.0.


For networking we used the vSphere Distributed Switch with one Port Group for production traffic and a second Port Group dedicated to DAG replication traffic. In addition, we limited the number of ports in the DAG replication network to 2 so we would not have to worry about additional VMs being plugged into this Port Group. In the screenshot below, you can see the HUB01 and MBX01 VMs both using the Production dvPortGroup and the second vNIC on MBX01 using the ExchangeDAG dvPortGroup.


Once we got Exchange up and running, we installed vShield. vShield installs default open, so we were able to leverage the traffic flow reports inside vShield to assist us in creating the rules around Exchange 2010.

 Building the Rules

As stated earlier, vShield installs default open, which allows us to leverage the traffic flow reports to better understand communication activity among systems. We decided to gradually lock down Exchange 2010 by first configuring VM-to-VM rules, and then implementing port-based rules based on the TechNet article detailing ports used by Exchange 2010: http://technet.microsoft.com/en-us/library/bb331973.aspx.

We built our rule sets using logical constructs within vCenter Server.  For example, we built a rule stating the Mailbox vApp is allowed to communicate with the HUB vApp. By creating the rule against these logical constructs, any VMs placed into these containers will inherit the rules of that container.


As we built the rules, we monitored traffic flows between Exchange 2010 systems. This was key in validating that we had configured the rule sets correctly, and it also identified key traffic activities that were not documented in the aforementioned Ports Used by Exchange 2010 article. An example was UDP 139 from the Exchange vApp to our Domain Controller vApp.
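The layered evaluation described here — coarse container-to-container rules checked before port-specific rules, with a configurable default action — can be sketched as follows. This is an illustrative model only, not the vShield API; the container names, ports, and rule tuples are all assumptions for the example.

```python
# Hypothetical rule table modeling the vShield App approach described above:
# (source container, destination container, protocol, port, action).
# A port of None means the rule matches any port (a VM-to-VM container rule).
RULES = [
    ("Mailbox", "HUB", "any", None, "allow"),                # container rule
    ("Exchange", "DomainControllers", "udp", 139, "allow"),  # observed flow
]

# vShield installs default open; flip this to "deny" once rules are validated.
DEFAULT_ACTION = "allow"

def evaluate(src, dst, proto, port):
    """Return the action for a flow: first matching rule wins, else default."""
    for r_src, r_dst, r_proto, r_port, action in RULES:
        if (r_src == src and r_dst == dst
                and r_proto in ("any", proto)
                and r_port in (None, port)):
            return action
    return DEFAULT_ACTION

print(evaluate("Mailbox", "HUB", "tcp", 25))                   # allow
print(evaluate("Exchange", "DomainControllers", "udp", 139))   # allow
```

The point of the model is the ordering: because rules target logical containers, a VM dropped into the Mailbox vApp is covered without touching the rule table, and only flows matching no rule fall through to the default action.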

Closing Remarks

Configure an external syslog server for vShield. As you build your rules, enable logging on each rule in order to validate its enforcement. Start with general rules, like VM-to-VM rules, and if necessary move down to port-specific rules. Both provide better protection; be sure to implement the appropriate level for your environment. Be aware that as the rules become more granular, you must be more diligent to ensure all ports required by the application and OS are available. When you have validated that your configuration is correct, change the default allow rule to deny.


This blog is part of a series on Virtualizing Your Business Critical Applications with VMware. To learn more, including how VMware customers have successfully virtualized SAP, Oracle, Exchange, SQL and more, visit vmware.com/go/virtualizeyourapps.

SAP on VMware – Design Guidelines

We have received some requests from SAP Basis colleagues on how to go about designing SAP systems on VMware. Now that vSphere 5 can support up to 32-way virtual machines, it is possible to fit larger SAP systems into a single virtual machine (VM) — so should we go with 2-tier or 3-tier? Here are some guidelines.


First let’s cover sizing, as this will impact the final VM architecture. SAP sizing is conducted in the SAP metric “SAPS” (http://www.sap.com/solutions/benchmark/measuring/index.epx). All SAP on VMware sizing is officially conducted by the server vendor’s SAP practice. VMware partners with the server vendors, so we can help, but we are not ultimately responsible for sizing. The background is as follows:

  • SAP has officially deferred sizing on physical and virtual systems to their hardware partners (since approximately 1993). SAP’s position is documented on SAP Marketplace https://service.sap.com/sizing (logon credentials are required). As of April 2012:
    • Under “Sizing Responsibilities” it says “The hardware vendors are responsible for providing hardware that will meet the customer’s throughput and response time requirements”
    • Under “Virtualization – Some Statements about Sizing and Virtualization” it says “For the right virtualization strategy you should get in touch with your hardware vendor”.
  • The SAPS rating per vCPU depends only on the processor model. The hardware vendor has the most up-to-date SAPS ratings of their servers, so they can size most accurately. For example, the SAPS rating of a virtual machine with 4 vCPUs will change if it is moved from one server model to another.

 The hardware vendor can conduct the sizing and provide the number of ESX servers required to fulfill the business requirements. Once this is available VMware can work with the hardware vendor and customer to jointly fine-tune the VM size and layout.

We recommend starting conservatively for business critical workloads. An initial sizing option could be to allocate a number of vCPUs equal to the number of cores on the ESX server; we would do this even for hyper-threaded systems.

To achieve higher utilizations, the total number of vCPUs running on an ESX server can be higher than the number of physical cores. The ESX hypervisor is designed to optimally schedule the workload among the available CPUs. Additionally, it can be configured to give more important virtual machines a higher priority. Hardware-supported features like hyper-threading will increase CPU scheduling efficiency. No general statement can be made regarding the optimal CPU over-commitment ratio, as this always depends on the individual utilization patterns of the workload.
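The conservative starting point above can be turned into back-of-the-envelope arithmetic. The numbers below (required SAPS, SAPS per core, cores per host) are assumptions for illustration only — as the text stresses, the authoritative SAPS ratings must come from the hardware vendor's sizing exercise.

```python
import math

# All inputs are assumed example values, not vendor-certified ratings.
required_saps = 40_000   # from the hardware vendor's SAP sizing exercise
saps_per_core = 1_500    # assumed rating for the chosen server model
cores_per_host = 16      # physical cores per ESX server

# Conservative starting point: vCPUs = physical cores (no over-commitment,
# even on hyper-threaded systems), so SAPS capacity maps directly to cores.
cores_needed = math.ceil(required_saps / saps_per_core)
hosts_needed = math.ceil(cores_needed / cores_per_host)

print(cores_needed, hosts_needed)
# 27 2
```

Once this conservative layout is validated, utilization data can justify raising the vCPU:core ratio — but as noted above, there is no general rule for the right over-commitment ratio.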

 2-tier versus 3-tier

The architecture of a single SAP system consists of a database instance, application server instances, and a Central Instance (CI, which includes locking and message services and other SAP processes). In newer SAP releases the Central Instance is replaced by Central Services (CS, locking and messaging only) and the Primary Application Server instance (PAS). 2-tier refers to all these components running in the same guest OS/VM; 3-tier refers to the components of a single SAP system being spread across at least two VMs. Each of the components can be deployed into a separate VM. The advantage of 2-tier systems is that there are fewer VMs to manage and no network latency between the SAP components.

3-tier has the following advantages:

  • Flexibility, better resource management, and better overall high availability: if everything is in one VM and the VM/guest OS/ESX server goes down, you lose every component. If the workload is dynamic, e.g. month-end requires more app-tier resources, you can add and remove application server VMs as required, so 3-tier is better for this (same principles as physical).
  • You can set up an ESX cluster in an “n+1” configuration, i.e. if one ESX server goes down, all the VMs can restart on the remaining ESX servers and continue to perform as before (auto-restart scripts are required for the instances, or enter “Autostart=1” in the instance startup profile). 3-tier setups allow you to spread the VMs for a single SAP system across multiple ESX servers, so if one ESX server goes down it minimizes the impact to a single system (of course, if the DB/CS virtual machine is offline the SAP system is down, but hopefully only one component needs to be restarted).
  • 3-tier setups allow you to size VMs better so they align with NUMA architecture.
    • DB VM – this needs to scale up vertically, so if sizing requires a large DB you can put it in a large VM. Ideally you want the VM to fit inside a NUMA node, but if it can’t, vSphere 5 can support a “wide” VM that crosses NUMA nodes (and you can configure virtual NUMA to take advantage of any NUMA optimizations inside the guest OS).
    • Application server VMs can scale out horizontally – size these in smaller blocks such that they fit inside a NUMA node.
    • For more background, see http://www.vmware.com/pdf/Perf_Best_Practices_vSphere5.0.pdf, pages 39-40.
  • ABAP+JAVA stack: SAP has a policy of preferring to separate the ABAP and JAVA stacks – we can comply with this in virtual by putting the stacks in separate VMs. Check out http://wiki.sdn.sap.com/wiki/display/SI/SAPs+Dual+Stack+Strategy – this recommends single stack except when dual stack is a hard requirement of the SAP product, e.g. Solution Manager. The advantage is that you can tune performance separately in each VM; for example, ABAP does not support large pages but Java does (see SAP Note 1681501 – Configure an SAP JVM to use large pages on Linux). However, if you need to run a dual stack, you can do so in a single VM; just size the VM large enough to handle the memory and CPU of both stacks.
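The NUMA-fit guideline in the list above — scale the DB VM up, split the application tier into VMs no larger than one NUMA node — reduces to simple arithmetic. The node size and app-tier demand below are assumed example values.

```python
import math

# Assumed example values: an 8-core NUMA node and an app tier sized
# (by the vendor's SAPS exercise) at 20 vCPUs in total.
numa_node_vcpus = 8
app_tier_vcpus = 20

# Split the app tier into the fewest VMs that each fit inside one node.
vm_count = math.ceil(app_tier_vcpus / numa_node_vcpus)
vcpus_per_vm = math.ceil(app_tier_vcpus / vm_count)

assert vcpus_per_vm <= numa_node_vcpus  # each app VM stays within a node
print(vm_count, vcpus_per_vm)
# 3 7
```

Three 7-vCPU app server VMs keep every VM's memory and CPU local to one NUMA node, whereas a single 20-vCPU VM would span nodes and rely on vSphere 5's wide-VM/virtual-NUMA support.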

In the physical world some customers run batch jobs on the CI, which is on the same physical server as the DB instance. The advantage is that the jobs run quicker, as there is no network hop between the app and the DB. In virtual, a similar setup would require a large VM with the DB and app server/CI instance installed in the same guest OS. The downside is that if there are long periods where the batch jobs are not run, we end up with an oversized VM with low utilization. Some datacenters may have a security requirement to separate the DB into its own guest OS, in which case your options are limited. VMware supports hot-add of vCPUs for the latest Linux and Windows versions, but hot-remove is not supported. One solution, if the batch job is designed to run in parallel threads (many SAP ABAP batch jobs have this ability), is to increase the degree of parallelism and distribute the batch workload across more app server VMs to decrease the overall runtime of the job (this assumes you have available CPU); those app server VMs can be provisioned and de-provisioned based on the cyclical nature of the workload.



Matthias Schlarb (SAP Technical Alliance Engineer)

Michael Hesse (SAP Technical Alliance Manager)

Vas Mitra (SAP Solutions Architect)

Oracle VM – 4x More Marketing, 4x Fewer Substantiated Facts

by Avinash Nayak

The good folks in Oracle’s marketing department deserve a raise for their efforts around promoting the latest release of the company’s virtualization solution, Oracle VM (OVM) 3. They certainly are aiming high, claiming OVM 3 is four times more scalable than VMware, four times cheaper to deploy than VMware, and is architected for efficiency while VMware is prone to inefficiencies. Not bad for a product that did not even exist until 2009 and is only on its second release (why the second release is called OVM 3, I don’t know). Unfortunately for Oracle Marketing, there’s a problem, namely – the FACTS. The facts show that VMware vSphere 5 delivers much higher scalability, greater value and unmatched performance compared to OVM.

Let’s take a closer look at Oracle’s claims and compare them with the facts:

Is OVM 3 four times more scalable than VMware? Switch the order of the products and it's perfect.

Oracle bases this claim on the fact that OVM 3 supports 128 virtual CPUs (vCPUs) per VM, whereas vSphere 5 VMs support 32 vCPUs.

Wait. Did someone change the definition of scalability when we weren’t looking? Since when is the scalability of a virtualization platform defined only by the number of VM vCPUs supported?

Surely, a better measure of the scalability of a platform is the number of VMs doing useful work that the platform can support (and manage) on a host or a cluster. It’s not the only measure, but one that’s far more insightful for showing scalability than just comparing vCPUs. Oracle’s documentation shows that OVM 3 is only able to run up to 128 VMs per host, compared to 512 for vSphere 5 (four times more than OVM 3, what a coincidence). So, it looks like Oracle got the “four times” part right. They just got the products mixed up. Simple mistake.

In addition, VMware’s testing has shown that a 32-vCPU guest can deliver 92-97% of native performance. Oracle has yet to provide any evidence that a 128-vCPU guest can scale linearly on OVM 3.0.

Is VMware four times more expensive than OVM 3? NO. Maybe Oracle meant vSphere has four times more functionality than OVM 3.

Oracle makes this claim solely based on virtualization software costs. But virtualization software cost is only one component of the total cost of deploying an application. The other components are the hardware costs (server, storage and networking), guest OS licensing costs, power and datacenter space costs. You need to take into account all of these when calculating the total costs for deploying an application.

Thanks to the advanced features provided by vSphere, customers are able to realize significant savings from reduction in hardware necessary to deploy an application environment relative to OVM.  Through the use of multiple advanced memory management features  (transparent page sharing, ballooning, memory compression, and hypervisor swapping), vSphere is able to achieve much greater VM density per host than OVM, meaning you need fewer hosts to deploy the same number of VMs. Independent tests have shown that vSphere 5 consistently delivers higher VM density compared to competing platforms, such as Xen based OVM.

Let’s take a simple example and compare the TOTAL cost of deploying 100 Linux VMs on vSphere 5 Enterprise Plus (VMware’s highest vSphere edition) vs. OVM 3. We assume a conservative 25% density advantage for vSphere over OVM. This means that if we deploy 12 VMs per host on OVM, we can deploy 15 VMs per host on vSphere 5.
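The host-count side of that density arithmetic works out as follows (the 12 and 15 VMs-per-host figures are the assumptions stated above; hardware and licensing prices are left out here because they vary by vendor):

```python
import math

# Density assumptions from the example above: a conservative 25%
# consolidation advantage for vSphere 5 over OVM 3.
vms_to_deploy = 100
vms_per_host_ovm = 12
vms_per_host_vsphere = 15

hosts_ovm = math.ceil(vms_to_deploy / vms_per_host_ovm)
hosts_vsphere = math.ceil(vms_to_deploy / vms_per_host_vsphere)

print(hosts_ovm, hosts_vsphere)
# 9 7
```

Two fewer hosts means two fewer servers to buy, power, cool, and license the guest OS on — which is why the total-cost gap narrows to single digits even though the hypervisor licenses alone differ much more.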


We see that even the highest edition of vSphere 5 is less than 6% more expensive than OVM when you take into account TOTAL cost. So it appears that Oracle’s cost claims are exaggerated by 400/6 = 66.67 times.

So what do you get for a premium of less than 6%? Here’s a subset of the features found in vSphere 5 that are absent from OVM:



Feature | VMware vSphere 5 | Oracle VM 3
------- | ---------------- | -----------
Clustered file system | VMFS 5, purpose-built and tested for virtualization | Built on OCFS2, not built for virtualization
Thin, bare-metal hypervisor | Yes, ESXi has a small 144MB footprint for better reliability and security | No, OVM 3’s Xen hypervisor requires a large Linux management partition, making it four times larger
Logical resource pools | Yes, divide and assign cluster resources to hierarchical groups | No, users share all resources across the entire server pool
Role-based access controls | Extensive customizable user roles and permissions | No, single user account used for all hosts managed by OVM Manager
Storage live migration | Storage vMotion | No
Storage array support | Supports over 1,200 arrays; vSphere Storage APIs for Array Integration supported by 175 arrays | Storage Connect API supported by fewer than 20 arrays
Auto storage tiering | Profile-Driven Storage | No
Thin disks | Fully supported | No
Broad guest OS support | Over 88 guests | Only 13 guests
Complete resource balancing | DRS and Storage DRS balance memory, CPU, and storage load | DRS feature considers only CPU and network load
“Noisy neighbor” protection | Yes, Network and Storage I/O Controls | No
HA policy enforcement | HA supports admission controls, VM-VM affinity/anti-affinity controls, and restart priority | HA feature supports only basic restart; only anti-affinity controls
Memory overcommit | Yes, enables greater VM density and lower costs | No


But Oracle should be intimately familiar with how a “free” solution does not necessarily provide better value. After all, isn’t MySQL a free product (like OVM) for which you only pay for support? Does it deliver the same capabilities as Oracle’s enterprise database solutions? I wonder what Oracle has to say about that.

Is OVM 3 more efficient than vSphere? We’d like to see the evidence.

The basis for this claim is a four-year-old Oracle VM Benchmark Performance Report from The Tolly Group, done using OVM 2 (the first version of the product). The performance tests compare OVM 2 (note – not OVM 3) to physical servers, not to vSphere. Oracle has not provided any evidence to support the claim that OVM 3 is more efficient or has a performance advantage over vSphere.

Like other Xen-based hypervisors, Oracle VM requires guest operating systems with extensive Xen paravirtualization modifications to get acceptable performance. Instead, vSphere uses unmodified guest OSs together with optimized device drivers and full support for virtualization hardware assist features in modern processors to deliver unmatched performance. This approach allows customers to use standard operating systems that are fully supported by ISVs. And with a disk footprint of only 144MB, the vSphere hypervisor represents a far smaller attack surface. An OVM 3 server’s disk footprint is swollen to 588MB (four times larger than vSphere) by the Linux management operating system installed in the Dom0 partition.

Also, without advanced features like Network and Storage I/O Controls, OVM is unable to guarantee service levels for business critical applications (for example, large databases.) vSphere is the only platform that delivers capabilities to ensure that your most important applications have access to the resources they need to meet required SLAs.

Grading the claims made by Oracle with regards to OVM 3:

Marketing: PASS; Delivering on the Marketing Claims: FAIL

So it looks like VMware will not be shutting shop just yet. Despite what Oracle says, vSphere 5 is well ahead of OVM 3 in terms of performance, features and value.

This blog is part of a series on Virtualizing Your Business Critical Applications with VMware. To learn more, including how VMware customers have successfully virtualized SAP, Oracle, Exchange, SQL and more, visit vmware.com/go/virtualizeyourapps.