
Hyper-threading Impact on Virtual SAP Sizing and Performance – Part 1 of 2

This is part 1 of a 2-part blog series covering how hyper-threading impacts virtual SAP sizing and performance. Many virtual SAP deployments leverage Intel’s hyper-threading (HT) technology. For each processor core that is physically present, the hypervisor sees two logical processors and shares the workload between them when possible. A vCPU can be scheduled on a logical processor of a core while the other logical processor of that core is idle; in this blog this is referred to as one vCPU scheduled per core. Alternatively, two vCPUs can be scheduled on the two logical processors of the same core; this is referred to as two vCPUs scheduled per core. For more background on vSphere scheduling functionality, please see the whitepaper The CPU Scheduler in VMware vSphere.

I will show three different sizing scenarios.

Scenario 1

The first scenario shows:

  • 14 physical cores with HT enabled (28 logical CPUs).
  • A virtual machine (VM) with 14 vCPUs.
  • vSphere will schedule each vCPU on a logical CPU on a separate dedicated physical core (default behavior). The scheduler prefers a whole idle core, where both logical CPUs of the core are idle, over a partial idle core, where one logical CPU is idle while the other is busy.
  • There is spare capacity for more performance as not all the logical CPUs are utilized.

Scenario 2

This scenario shows:

  • A virtual machine with 28 vCPUs.
  • vSphere schedules the vCPUs across all the logical CPUs – both logical CPUs of each physical core are utilized. This can be achieved in a number of ways:
    • Setting manual CPU affinity in the VM to force the vCPUs to be scheduled on specific logical CPUs.
    • Provisioning more vCPUs than the number of physical cores on the host.
    • Deploying a VM with twice the number of vCPUs as cores in a socket and setting the VM-level parameter “Numa.PreferHT” to true. All the vCPUs will then be scheduled across the logical CPUs within the socket/NUMA node.
  • Utilizing all the logical CPUs in Scenario 2 provides, on average, a 15% boost in SAP performance/transaction throughput compared to Scenario 1. In SAP sizing, transaction throughput and performance are measured in the metric “SAPS” (SAP Application Performance Standard), so Scenario 2 provides about 15% more SAPS than Scenario 1.
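For reference, the third option is set in the VM’s advanced configuration (.vmx) file or via Edit Settings > VM Options > Advanced. Note that VMware’s documentation commonly spells the per-VM key as numa.vcpu.preferHT, with Numa.PreferHT as the host-wide equivalent; verify the exact spelling against your vSphere version. An illustrative fragment:

```
numa.vcpu.preferHT = "TRUE"
```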

Scenario 3

This scenario shows:

  • 16 physical cores with HT enabled (32 logical CPUs)
  • A virtual machine with 16 vCPUs. vSphere will schedule each vCPU on a logical CPU on a separate dedicated physical core (default behavior) – same as Scenario 1.
  • The performance/SAPS throughput is approximately the same as Scenario 2 (based on 15% HT benefit).
    • As we linearly scale up vCPUs and cores in Scenario 1, adding an extra 15% vCPU (and cores) will provide us equivalent performance to Scenario 2.
    • Scaling up vCPUs in Scenario 1 by 15% = 1.15 x 14 ≈ 16 vCPUs (on 16 cores) – this is Scenario 3.

Comparing Scenarios

SAP sizing involves calculations in SAPS. You can see an example at https://blogs.vmware.com/apps/2017/06/awg_s4hana_part1.html#more-2217 . The methodology and example shown here enable you to calculate the number of vCPUs required for business requirements provided in SAPS. You then have the option to design the VMs like Scenario 2 or 3:

  • If we need 16 vCPUs on 16 cores (Scenario 3), an alternative configuration with fewer cores and equivalent SAPS performance is Scenario 2 (28 vCPUs on 14 cores). The calculation is 16 / 1.15 ≈ 14, i.e.

M = # of cores utilized (either 2 vCPUs or 1 vCPU scheduled per core)

SAPS of [M cores with 1 vCPU per core] = SAPS of [ M/1.15 cores with 2 vCPUs per core]

  • If we need 28 vCPUs on 14 cores (Scenario 2), an alternative configuration with equivalent SAPS but fewer vCPUs and more cores is Scenario 3 (16 vCPUs on 16 cores). The calculation is 14 x 1.15 ≈ 16, i.e.

SAPS of [ M cores with 2 vCPUs per core] = SAPS of [M x 1.15 cores with 1 vCPU per core]

The above equations are estimates as we assume linear scalability of SAPS with vCPUs in all the scenarios and an average HT benefit of 15%.
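The two conversions above can be sketched numerically. The snippet below is illustrative Python only, hard-coding the 15% average HT benefit and linear-scaling assumptions stated above:

```python
HT_BENEFIT = 1.15  # assumed average SAPS gain from scheduling 2 vCPUs per core

def cores_needed_two_vcpus_per_core(cores_one_vcpu: float) -> float:
    """Cores needed at 2 vCPUs/core to match M cores at 1 vCPU/core."""
    return cores_one_vcpu / HT_BENEFIT

def cores_needed_one_vcpu_per_core(cores_two_vcpus: float) -> float:
    """Cores needed at 1 vCPU/core to match M cores at 2 vCPUs/core."""
    return cores_two_vcpus * HT_BENEFIT

# Scenario 3 -> Scenario 2: 16 cores at 1 vCPU/core ~ 14 cores at 2 vCPUs/core
print(round(cores_needed_two_vcpus_per_core(16)))  # -> 14
# Scenario 2 -> Scenario 3: 14 cores at 2 vCPUs/core ~ 16 cores at 1 vCPU/core
print(round(cores_needed_one_vcpu_per_core(14)))   # -> 16
```

As with the equations, these are estimates; real scaling is not perfectly linear and the HT benefit varies by workload.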


I have shown above that when sizing VMs we have the option to configure the VMs with one vCPU scheduled per core or two vCPUs scheduled per core. An equation shows how these options are numerically related. The following table summarizes the difference between the options.

1 https://blogs.vmware.com/performance/2017/03/virtual-machine-vcpu-and-vnuma-rightsizing-rules-of-thumb.html

2 https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/vmw-vsphere-virtual-saphana-application-workload-guidance-design.pdf

3 https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/whitepaper/sap_hana_on_vmware_vsphere_best_practices_guide-white-paper.pdf

Part 2 of this blog will seek to demonstrate some of the concepts discussed here with an actual SAP workload.

New Architectures for Apache Spark™ and Big Data

Key Trends in Big Data Infrastructure:

Some of the key trends in big data infrastructure over the past couple of years are:

• Decoupling of Compute and Storage Clusters
• Separate compute virtual machines from storage VMs
• Data is processed and scaled independently of compute
• Dynamic Scaling of compute nodes used for analysis from dozens to hundreds
• Spark and other newer Big Data platforms can work with regular filesystems
• Newer platforms store and process data in memory
• New platforms can leverage Distributed Filesystems that can use local or shared storage
• Need for High Availability & Fault Tolerance for master components

Continue reading

Oracle on vSphere – Summary of Storage options

Storage – the final frontier. These are the voyages of any Business Critical Oracle database, its endless mission: to meet the business SLA, to sustain increasing workload demands and seek out new challenges, to boldly go where no database has gone before.


Storage is one of the most important aspects of any IO-intensive workload, and Oracle workloads typically fit this bill. We all know how misconfigured storage or incorrect tuning often leads to database performance issues, irrespective of the architecture on which the database is hosted.

As part of my pre-sales Oracle Specialist role, where I talk to customers, partners, and the VMware field, I always bring up the fact that we can go and procure the biggest and baddest piece of infrastructure on the face of the earth, and all it takes is one incorrect setting or misconfiguration for everything to go to “Hell in a Handbasket”.

Continue reading

On Demand Scaling up resources for Oracle production workloads

The crux of this blog’s discussion is “How to stop hoarding much-needed infrastructure resources and live wisely ever after by scaling up effectively as needed.”

Typically, Oracle workloads running on bare metal (or, for that matter, any environment) are sized very conservatively, given the nature of the workload, with the premise that in the event of any workload spike, the abundant resources thrown at the workload will be able to sustain it. In reality, we need to ask ourselves these questions:

  • How much resource is actually allocated to the workload?
  • How much of that allocated resource is actually consumed by the workload?
  • How often does the workload experience spikes?
  • If spikes are happening regularly then, has proper capacity planning and forecasting been done for this workload?

Proper planning and design, along with capacity planning and forecasting, are the key to managing any Business Critical Application (BCA) workload, and there is no shortcut around this.

Unfortunately, what this means in a physical environment is, for example, static allocation of resources to a BCA workload whose CPU utilization has been flat at 30-40% for 11 months of the year, with utilization at 55-60% for the last month of the year.

Pre-allocating resources to a workload in anticipation of peaks for, say, one month in a whole year leaves those resources underutilized for the rest of the year, starving other workloads of much-needed resources. This ineffective resource allocation leads to a larger server footprint and, in turn, increased CAPEX and OPEX.
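To put rough numbers on that, here is a back-of-envelope sketch using the utilization profile above. The midpoint figures are illustrative assumptions, not measurements:

```python
# Monthly average CPU utilization (%): 11 quiet months, 1 peak month.
# Midpoints of the 30-40% and 55-60% ranges quoted above (assumed).
quiet, peak = 35.0, 57.5
yearly_avg = (11 * quiet + 1 * peak) / 12

print(f"average utilization over the year: {yearly_avg:.1f}%")  # -> 36.9%
# Capacity provisioned for the peak sits largely idle in quiet months:
print(f"idle headroom in a quiet month: {100 - quiet:.0f}%")    # -> 65%
```

In other words, a host sized for the December peak runs at barely over a third of its capacity averaged over the year, which is exactly the waste that on-demand scaling avoids.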

Enter “Hot Plug” – “Hot Plug CPU and Hot Plug Memory” on the vSphere platform: resource allocation on demand, resulting in effective and elastic resource management, working on the principle of “Ask and thou shalt receive”.

Continue reading

Oracle RAC on VMware Cloud on Amazon AWS


With the recent launch of the VMware Cloud on AWS Software Defined Data Center (SDDC) from VMware, many Business Critical Application (BCA) workloads that were previously difficult to deploy in the cloud no longer require significant platform modifications.

This post describes a Better Together demonstration VMware and AWS presented at VMworld 2017 using an Oracle RAC Database for high-availability zero-downtime client connection failover, supporting a Django-Python application running in a Native AWS Elastic Beanstalk environment.

Oracle RAC presents two requirements that are difficult to meet on AWS infrastructure:

  • Shared Storage
  • Multicast Layer 2 Networking.

VMware vSAN and NSX deployed into the VMware SDDC cluster meet those requirements succinctly.

The Django-Python application layer’s end-to-end provisioning is fully automated with AWS Elastic Beanstalk, which creates one or more environments containing the necessary Elastic Load Balancer, Auto Scaling group, security group, and EC2 instances, each complete with all of the Python prerequisites needed to dynamically scale based on demand. From a zip file containing your application code, a running environment can be launched with a single command.

By leveraging the AWS Elastic Beanstalk Service for the application tier, and VMware Cloud on AWS for the database tier, this end-to-end architecture delivers a high-performance, consistently repeatable, and straightforward deployment.  Better Together!





In the layout above, on the right, VMware Cloud on AWS is provided by VMware directly.  For each Software Defined Data Center (SDDC) cluster, the ESXi hypervisor is installed on Bare Metal hardware provided by AWS EC2, deployed into a Virtual Private Cloud (VPC) within an AWS account owned by VMware.

Each EC2 physical host contributes 8 internal NVMe high performance flash drives, which are pooled together using VMware vSAN to provide shared storage.  This service requires a minimum number of 4 cluster nodes, which can be scaled online (via portal or REST API) to 16 nodes at initial availability, with 32 and 64-node support to follow shortly thereafter.

VMware NSX provides one or more extensible overlay logical networks for Customer virtual machine workloads, while the underlying AWS VPC CIDR block provides a control plane for maintenance and internal management of the service.

All of the supporting infrastructure deployed into the AWS account on the right side of the diagram is incorporated into a consolidated hourly or annual rate to the Customer from VMware.

In the layout above, on the left, a second AWS account directly owned by the Customer is connected to the VMware owned SDDC account for optionally consuming Native AWS services alongside deployed vSphere resources (right).

When initially deploying the VMware Cloud on AWS SDDC cluster, we need to provide temporary credentials to login to a newly created or existing Customer managed AWS account.  The automation workflow then creates an Identity and Access Management (IAM) role in the Customer AWS account (left), and grants account permissions for the SDDC to assume the role in the Customer AWS account.

This role provides a minimal set of permissions necessary to create Elastic Network Interfaces (ENIs) and route table entries within the Customer AWS account to facilitate East-West routing between the Customer AWS Account’s VPC CIDR block (left), and any NSX overlay logical networks the Customer chooses to create in the SDDC account for VM workloads (right).

The East-West traffic within the same Availability Zone provides extremely low latency free of charge, enabling the Customer to integrate technology from both vSphere and AWS within the same application, choosing the best of both worlds.

Oracle RAC Configuration

Database workloads are typically IO latency sensitive.  Per VMware KB article 2121181, there are a few recommendations to consider for significantly improving disk IO performance.

Below is the disk setup for the Oracle RAC cluster using the VMware multi-writer setting, which allows disks to be shared between the Oracle RAC nodes.


The Oracle Databases on VMware Best Practices Guide provides best practice guidelines for deploying Oracle Single Instance and Oracle RAC cluster on VMware SDDC.


For the VMworld demo, the cx_Oracle Python library was used with the OCI-compliant Oracle Instant Client and Oracle’s Database Resident Connection Pooling (DRCP). Database connections are initially evenly balanced between the ORCL1 and ORCL2 instances serving a custom database service named VMWORLD.

By failing the database service on a given node, we demonstrate that only 50% of client connections are affected, all of which can immediately reconnect to the surviving instance.

An often overlooked challenge with Oracle RAC is that client connections do not automatically fail back after repairing the failure.  Those client connections must be recycled at the resource pool level, which might require an application outage if only one pool was included in the design.  Multiplexing requests over two connection pools in your application code allows each pool to be iteratively taken out of service without taking the application down.
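The multiplexing pattern can be sketched as follows. This is an illustration only, with hypothetical stand-in pool objects rather than the demo’s actual cx_Oracle session pools:

```python
import itertools

class PoolMultiplexer:
    """Round-robins requests over several pools so any one pool can be
    drained and recycled without taking the whole application down."""
    def __init__(self, pools):
        self.pools = list(pools)
        self.active = set(self.pools)
        self._rr = itertools.cycle(self.pools)

    def acquire(self):
        # Skip pools currently taken out of service.
        for _ in range(len(self.pools)):
            pool = next(self._rr)
            if pool in self.active:
                return pool.acquire()
        raise RuntimeError("no active pools")

    def drain(self, pool):
        self.active.discard(pool)   # stop handing out its connections

    def restore(self, pool):
        self.active.add(pool)

class DummyPool:
    """Hypothetical stand-in for a real session pool."""
    def __init__(self, name): self.name = name
    def acquire(self): return f"conn-from-{self.name}"

a, b = DummyPool("A"), DummyPool("B")
mux = PoolMultiplexer([a, b])
print(mux.acquire())   # -> conn-from-A
mux.drain(a)           # pool A can now be recycled safely
print(mux.acquire())   # -> conn-from-B
```

While pool A is drained, its connections can be recycled against the repaired RAC node; restoring it re-balances traffic, all without an application outage.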

Given that such application design changes often are not tenable post-deployment, AWS Elastic Beanstalk makes quick work of that limitation: simply deploy a GREEN copy of your application environment, validate that it passes health checks, and then transition your customer workload from the BLUE to the GREEN stack. When the GREEN stack boots, its database connections will be properly balanced between instances as desired, after which the BLUE stack can be safely terminated. Similarly, application code changes can be deployed using the same BLUE/GREEN methodology, affording rapid rollback to the original stack if problems are encountered. Additional stacks can be deployed with a single command, “eb create GREEN”, or automated via the REST API.


At VMworld, we ran a live demo continuously failing each database service iteratively followed by an Elastic Beanstalk environment URL swap between BLUE and GREEN every 60 seconds, while monitoring Oracle’s GV$CPOOL_CC_STATS data dictionary view.  The ClassName consists of the database service name VMWORLD, followed by the Beanstalk environment name, and the application server’s EC2 instance identifier.  The second and subsequent columns of the below table indicate the RAC node servicing queries between refresh cycles.




 VMware Cloud on AWS affords many Better Together opportunities to not only streamline operational processes by leveraging Native AWS services, but also enable a cloud-first IT transformation without needing to disruptively re-platform your Enterprise Business Critical Applications.

The cloud based SDDC cluster deployment is simply another datacenter and cluster managed in the same way you manage your on-premises VMware environments today, without needing to retool or retrain staff.

Creating and expanding SDDC clusters can be accomplished in minutes, allowing you to drive utilization to a much higher efficiency without concern for 18-24 month capacity planning cycles that must be budgeted for peak usage.  Release burst capacity immediately after it is no longer needed without any CAPEX overhead, as well as the OPEX overhead of running your own datacenters.


Demo for the “Oracle RAC on VMware Cloud on Amazon AWS” can be found in the url below

All Oracle on VMware SDDC collaterals can be found in the url below

Oracle on VMware Collateral – One Stop Shop

More information on VMware Cloud on AWS can be found at the url below


Performance of SAP Central Services with VMware Fault Tolerance

Many SAP customers on their virtualization journey are considering the option to protect SAP Central Services with VMware Fault Tolerance (FT). Central Services is a single point of failure in the SAP architecture that manages transaction locking and messaging across the SAP system; failure of this service results in downtime for the whole system. It is a strong candidate for VMware FT, and we have conducted a 1000-user test on vSphere 6.x, which is documented in Section 4 of the SAP on VMware Best Practices Guide.

The VMware vSphere 6 Fault Tolerance whitepaper mentions: “One of the most common performance observations of virtual machines under FT protection is a variable increase in the network latency of the virtual machine”. Given this, how do Central Services and VMware FT impact the performance of the SAP application as experienced by the SAP business user? I will demonstrate a basic example here.

A potential validation at the infrastructure level is to run the network “ping” command and the SAP utility “niping”, an SAP network utility used to help analyze network performance. When I ran these commands at the OS command line to test network performance between an SAP application server and Central Services in two separate VMs, the results showed an increase in latency from about 0.3 ms to 1.8 ms when VMware FT was turned on for the Central Services VM. This is expected behavior and does not reflect the performance experience that an SAP business user will see with VMware FT.

My next test was to construct a basic SAP application-level test: a custom SAP program (written in ABAP) that automates the change of a sales order document and, once executed, updates around 50 sales orders automatically in series. For each sales order that is changed, a lock is created and managed by Central Services. The program uses standard SAP techniques based on SAP “BDC” for mass input of data by simulating user inputs in transaction screens. The SAP transaction being called is Change Sales Order (“VA02”). The program is executed in online mode/foreground via the SAP client SAPGUI. After each online interaction, SAPGUI records the response time in milliseconds at the bottom right – this was used as the performance metric.

The following diagram shows the test environment.

The following table shows the results.

The difference in average online response time between VMware FT off and on is around 2%. The tests simulate a single user executing the change-sales-order transaction multiple times very quickly. This is a basic validation which should be followed by a multi-user test with actual users, or with business workloads simulated in a software testing tool. Note that other tests will show different results than shown here, and mileage is expected to vary. In this example the simulated user makes many document changes in a short period of time with no think time. In reality, an online business user will spend more time processing data within a transaction – activity that does not require Central Services, only resources on the application server – so the frequency of lock requests generated by a single user would be lower than in this example.
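A rough model shows why the millisecond-level latency increase barely registers at the application level. The base response time and lock-request count below are illustrative assumptions, not measurements from the test; only the niping latencies come from the text above:

```python
# Assumed per-interaction profile of the change-sales-order transaction
base_response_ms = 300.0    # app-server work per online interaction (assumed)
locks_per_step = 4          # enqueue round-trips to Central Services (assumed)
lat_off, lat_on = 0.3, 1.8  # measured niping latencies without / with FT (ms)

def response_ms(lock_latency_ms):
    # User-visible response time = local work + lock round-trips
    return base_response_ms + locks_per_step * lock_latency_ms

overhead = response_ms(lat_on) / response_ms(lat_off) - 1
print(f"FT overhead on user response time: {overhead:.1%}")  # -> 2.0%
```

The extra 1.5 ms per lock round-trip is diluted by the application-server work that dominates each interaction, which is consistent with the ~2% difference observed.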

The Art of P2V and Oracle ASM

“Come with me if you want to live” – famous words from the Terminator series.

It’s also the very reason IT companies are adopting the ‘Virtualize First’ policy: to reap all of the benefits of virtualization, move away from the soon-to-be-legacy bare metal world, and ‘save a bunch of money’, just as the Gecko said.

As part of the virtualization journey, one of the tools VMware Professional Services (PSO), partners, and customers use to migrate applications from physical x86 servers (Windows and Linux) to VMware virtual machines (VMs) is VMware Converter, in a process known as P2V (Physical to Virtual). It transforms Windows- and Linux-based physical machines and third-party image formats into VMware virtual machines.

One of the most common questions I get talking to the VMware field, partners, and customers as part of my role is: ‘Can I use VMware Converter to migrate Oracle databases from physical x86 running Linux, or Oracle OVM running Linux, to the VMware vSphere platform?’ The answer, in two famous words: ‘it depends!’

Let me explain why I said that.


Database Re-Platforming

Oracle databases being the sophisticated ‘beasts of burden’ they are, there are many key factors to keep in mind when we embark on an Oracle database re-platforming exercise, whether between the same or different system architectures, bare metal to bare metal, or physical to virtual. Some of them include:

  • source and destination system architecture
    • are we moving between like architectures (x86 to x86)?
    • are we moving from a big-endian system to a little-endian system (Solaris / AIX / HP-UX to x86)?
  • size and operating nature of the database (terabytes; production, pre-prod, dev, test, etc.)
  • database storage (file system / Oracle ASM)

More information on endianness can be found in the link below.

So, if your use case is moving Oracle databases from a big-endian system to a little-endian system (Solaris / AIX / HP-UX to x86), stop right here: you cannot use VMware Converter to migrate databases between RISC Unix and Linux x86. You need an Oracle plan-and-design exercise to migrate Oracle databases between these two systems.
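Endianness determines the byte order in which multi-byte values are laid out on disk, which is why datafiles from a big-endian platform cannot simply be copied to x86. A quick illustration:

```python
import struct

value = 0x01020304
big = struct.pack(">I", value)     # big-endian byte order (SPARC, POWER, PA-RISC)
little = struct.pack("<I", value)  # little-endian byte order (x86)

print(big.hex())     # -> 01020304
print(little.hex())  # -> 04030201
# Same logical value, different on-disk byte order. Datafiles therefore need
# a conversion step (e.g. RMAN CONVERT / cross-platform transportable
# tablespaces), not a raw block copy.
```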

Keep reading if you are re-platforming an Oracle database between x86 platforms, i.e. from a physical server or virtual machine (VMware vSphere / Oracle OVM) to a VMware virtual machine.

Continue reading

“RAC” n “RAC” all night – Oracle RAC on vSphere 6.x

“I wanna “RAC” and “RAC” all night and party every day” – mantra of an Oracle RAC DBA.

Much has been written, spoken, and probably beaten senseless 🙂 about the magical “multi-writer” setting and how it helps multiple VMs share vmdks simultaneously for clustering and FT use cases.

I still get questions from customers interested in running Oracle RAC on vSphere about whether we have the ability to add shared vmdks to a RAC cluster online without any downtime. Yes, we do. Are the steps for adding shared vmdks to an extended RAC cluster online without any downtime the same? Yes.



By default, simultaneous-write “protection” is enabled for all .vmdk files, i.e. every VM has exclusive access to its own vmdk files. So, in order for all of the VMs to access the shared vmdks simultaneously, this protection needs to be disabled via the multi-writer setting.

The below table describes the various Virtual Machine Disk Modes:

As we are all aware, Oracle RAC requires shared disks to be accessible by all nodes of the RAC cluster.

KB Article 1034165 provides more details on how to set the multi-writer option to allow VMs to share vmdks. The requirements for a shared disk with the multi-writer flag in an Oracle RAC environment are that the shared disk:

  • has to be Eager Zeroed Thick provisioned
  • need not be set to Independent-Persistent

While Independent-Persistent disk mode is not a hard requirement to enable Multi-writer option, the default Dependent disk mode would cause the “cannot snapshot shared disk” error when a VM snapshot is taken. Use of Independent-Persistent disk mode would allow taking a snapshot of the OS disk while the shared disk would need to be backed up separately by a third-party vendor software.

Supported and Unsupported Actions or Features with Multi-Writer Flag:

**** Important ****
•    SCSI bus sharing is left at the default and not touched at all when using shared vmdks
•    SCSI bus sharing is only used for RAC with RDMs (Raw Device Mappings) as shared disks


Facts about vmdk and multi-writer

Before version 6.0, we had the ability to add vmdks with the multi-writer option to an Oracle RAC cluster online; the only caveat was that this ability was not exposed in the vSphere Web/C# Client. We had to rely on PowerCLI scripting to add shared disks to an Oracle RAC cluster online.

Setting Multi Writer Flag for Oracle RAC on vSphere using Power Cli


With vSphere 6.0 and onwards, we can add shared disks to an Oracle RAC Cluster online using the Web Client.


Key points to take away from this blog:
•    VMware recommends using shared VMDK (s) with Multi-writer setting for provisioning shared storage for ALL Oracle RAC environments (KB 1034165)
•    vSphere 6.0 and onwards, we can add shared vmdk’s to an Oracle RAC Cluster online using the Web Client
•    Prior to version 6.0, we had to rely on PowerCLI scripting to add shared disks to an Oracle RAC Cluster online


Example of an Oracle RAC Setup

As per best practices, the two VMs ‘rac01-g6’ and ‘rac02-g6’, part of the 2-node Oracle RAC setup, were deployed from a template, ‘Template-12crac’.

The template has 10 vCPUs with 64 GB RAM with OEL7.3 as the operating system.

The template has 2 vmdk’s, 50GB each on SCSI 0 controller (Paravirtual SCSI Controller type)
•    Hard disk 1 is on SCSI0:0 and is for root volume (/)
•    Hard disk 2 is on SCSI0:1 and is for oracle binaries (/u01 for Grid and RDBMS binaries)

Hard Disk 1 (OS drive) & Hard Disk 2 (Oracle /u01) vmdk’s are set to
•    Thin Provisioning
•    No Sharing i.e. exclusive to the VM
•    Disk mode is set to ‘Dependent’

Template has 2 network adapters of type VMXNET3.
•    Public adapter
•    Private Interconnect

Public Adapter:

Private Interconnect:

Let’s add a shared vmdk of size, say, 50 GB to both VMs online, without powering down the VMs.

Add shared vmdk to an Oracle RAC online

1. Adding shared disks can be done online without downtime.

2. Add a PVSCSI controller (SCSI 1) to RAC VM ‘rac01-g6’: right-click ‘rac01-g6’, choose ‘Edit Settings’, and add a New Controller of type ‘Paravirtual’.

Leave the SCSI Bus Sharing to ‘None’ (default)

3. The next step is to add a 50 GB shared vmdk to VM ‘rac01-g6’ at the SCSI1:0 bus slot (you can add the new vmdk to any slot on SCSI 1 you want).

Right-click VM ‘rac01-g6’ and choose ‘Edit Settings’. Choose ‘New Hard Disk’, set Sharing to ‘Multi-writer’, leave Disk Mode at ‘Dependent’, and click ‘Add’. Click ‘OK’ and monitor progress.

4. Repeat Step 2 to add new ‘Paravirtual’ Controller SCSI 1 to RAC VM ‘rac02-g6’

5. The new vmdk (the vmdk with the multi-writer option) created on VM ‘rac01-g6’ at the SCSI1:0 bus slot needs to be shared with the ‘rac02-g6’ VM for clustering purposes.

6. Right Click on VM ‘rac02-g6’, Choose ‘Edit Settings’. Choose ‘Existing Hard Disk’ and Click ‘Add’.

7. Navigate to your Datastore [Group06], expand the Datastore contents and click on ‘rac01-g6’ folder. Click on the shared vmdk ‘rac01-g6_2.vmdk’ which was created on ‘rac01-g6’. Click ‘OK’

8. Note that the Sharing attribute for this vmdk needs to be set to ‘Multi-Writer’, and the SCSI controller slot set to the same one used on ‘rac01-g6’, i.e. SCSI1:0. Click ‘OK’ when done.

9. Scan the bus on the OS of both VMs to see the newly added disk, and list the devices:

[root@rac01-g6 ~]# fdisk -lu

Disk /dev/sda: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00098df2

Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200   104857599    51379200   8e  Linux LVM

Disk /dev/sdc: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
[root@rac01-g6 ~]#

[root@rac02-g6 ~]# fdisk -lu
Disk /dev/sda: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00098df2

Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200   104857599    51379200   8e  Linux LVM
Disk /dev/sdc: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
[root@rac02-g6 ~]#

10. Partition-align the shared disk (/dev/sdc) on ‘rac01-g6’ (do this on one node only) using fdisk, parted, or the tool of your choice:

11. After partition alignment:

[root@rac01-g6 ~]# fdisk -lu /dev/sdc
Disk /dev/sdc: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x4402e64c

Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048   104857599    52427776   83  Linux
[root@rac01-g6 ~]#

[root@rac02-g6 ~]# fdisk -lu /dev/sdc
Disk /dev/sdc: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x4402e64c

Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048   104857599    52427776   83  Linux
[root@rac02-g6 ~]#
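The partitions above start at sector 2048; with the 512-byte sectors reported by fdisk, that is a 1 MiB offset, i.e. the partition is properly aligned for the storage stack. A quick check of the arithmetic:

```python
SECTOR_BYTES = 512
start_sector = 2048                        # from the fdisk output above

offset_bytes = start_sector * SECTOR_BYTES
print(offset_bytes)                        # -> 1048576 (1 MiB)
print(offset_bytes % (1024 * 1024) == 0)   # -> True: 1 MiB aligned
```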

12. Create ASM disks using ASMLIB

Installing and Configuring Oracle ASMLIB Software


[root@rac01-g6 ~]# /usr/sbin/oracleasm createdisk DATA_DISK01 /dev/sdc1
Writing disk header: done
Instantiating disk: done
[root@rac01-g6 ~]#

[root@rac01-g6 ~]# /usr/sbin/oracleasm listdisks
DATA_DISK01
[root@rac01-g6 ~]#

[root@rac02-g6 ~]# /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "DATA_DISK01"
[root@rac02-g6 ~]# /usr/sbin/oracleasm listdisks
DATA_DISK01
[root@rac02-g6 ~]#

As we can see, we have now added a shared 50 GB vmdk to both VMs online, without any downtime, and created an ASM disk on the shared disk to be used for an Oracle RAC ASM disk group.

The rest of the steps to create the Oracle RAC cluster are exactly the same as shown in the Oracle documentation.


Best practices need to be followed when configuring an Oracle RAC environment; they can be found in the “Oracle Databases on VMware – Best Practices Guide”.


All Oracle on vSphere white papers, including Oracle licensing on vSphere/vSAN, Oracle best practices, RAC deployment guides, and the workload characterization guide, can be found in the url below:
Oracle on VMware Collateral – One Stop Shop

CenturyLink Transforms SAP Deployment Model with VMware Virtualization

CenturyLink SAP

We recently worked with CenturyLink, one of the largest telecommunications companies in the United States, to optimize their virtual SAP HANA solutions. The outcome is the success story referenced below, where CenturyLink describes how they use the VMware platform to provide a customized private cloud for SAP applications, including SAP HANA, in less than 28 days, with no compromise on performance.

An SAP infrastructure project duration of 28 days may not sound so fast, but remember, this is for a completely customized SAP private cloud solution, not just some standard, simple SAP HANA instance running somewhere in the public cloud as a test or development system. With CenturyLink, customers can deploy new SAP workloads up to four times faster compared to in-house implementations, where these deployments typically take over 100 days!

Deploying a complete SAP landscape includes several systems, such as SAP Solution Manager, SAP gateways, load balancers, several application servers, and finally the SAP HANA database. All these systems need to be configured, patched up to the latest software release level, and connected while maintaining the highest security standards. All of this can be done, if wished, by CenturyLink.

Besides faster time to market, CenturyLink can utilize templates and repeatable processes, which helps it easily standardize and scale its offering while managing costs, complexity, and risks. This all leads to CapEx savings of up to 60 percent, and OpEx savings in a similar range, for CenturyLink customers. For instance, as an SAP HEC partner, CenturyLink previously had to deploy 20 physical server systems to support 20 independent SAP HANA systems without VMware vSphere virtualization. Now they deploy a VMware cluster of 8 hosts to support these 20 SAP HANA instances, including HA – a hardware reduction of 12 hosts, or 60 percent. The resulting 60 percent savings in power and cooling, rack space, and hardware maintenance are only the more obvious gains; in addition, the easier operation of a virtual, software-defined environment is a major long-term cost-saving factor.

These are the reasons why CenturyLink wants to go one step further towards a fully software-defined data center and plans to implement a VMware Virtual SAN™-based hyper-converged infrastructure, ready to run even the more demanding SAP workloads.

For more information please review the success story posted here:

To be “RDM for Oracle RAC”, or not to be, that is the question

Famous words from William Shakespeare’s play Hamlet. Act III, Scene I.

This holds true even in the virtualization world for Oracle Business Critical Applications, where one wonders which way to go when provisioning shared disks for Oracle RAC: Raw Device Mappings (RDMs) or VMDKs?

Much has been written and discussed about RDM and VMDK and this post will focus on the Oracle RAC shared disks use case.

Some common questions I get when talking to customers who are embarking on the virtualization journey for Oracle on vSphere are:

  • What is the recommended approach when it comes to provisioning storage for Oracle RAC or Oracle Single instance? Is it VMDK or RDM?
  • What is the use case for each approach?
  • How do I provision shared RDMs in Physical or Virtual Compatibility mode for an Oracle RAC environment?
  • If I use shared RDMs (Physical or Virtual), will I be able to vMotion my RAC VMs without any cluster node eviction?

Continue reading