
RUSH POST: VMware Tools and RSS Incompatibility Issues

UPDATE:

We have just released a new version of VMware Tools that fixes the issue described in this post (below).

Please download and install this version of VMware Tools, especially if you are using the VMXNET3 vNIC type for your Windows VMs.

We thank you very much for your patience and understanding while we worked on fixing this problem.

From the Release Notes:

Receive Side Scaling is not functional for vmxnet3 on Windows 8 and Windows Server 2012 or later. This issue is caused by an update to the vmxnet3 driver that addressed RSS features added in NDIS version 6.30 but rendered the functionality unusable. It is observed in VMXNET3 driver versions 1.6.6.0 through 1.7.3.0.

Continue reading

On-Demand Scaling Up of Resources for Oracle Production Workloads

The crux of this blog’s discussion is “How to stop hoarding much-needed infrastructure resources and live wisely ever after by scaling up effectively as needed.”

Typically, Oracle workloads running on bare metal environments (or, for that matter, any environment) are sized very conservatively, given the nature of the workload, on the premise that in the event of a workload spike, the abundant resources thrown at the workload will be able to sustain it. In reality, we need to ask ourselves these questions:

  • How much resource is actually allocated to the workload?
  • How much of that allocated resource is actually consumed by that workload?
  • How often does the workload experience spikes?
  • If spikes are happening regularly then, has proper capacity planning and forecasting been done for this workload?

Proper planning and design, along with capacity planning and forecasting, are the key to managing any Business Critical Application (BCA) workload, and there is no shortcut around this.

Unfortunately, what this means in a physical environment is, for example, static allocation of resources to a BCA workload whose CPU utilization has been flat at 30-40% for 11 months of the year, rising to 55-60% only in the last month of the year.

Pre-allocating resources to a workload in anticipation of peaks that last, say, one month in a whole year leaves those resources underutilized for the rest of the year, starving other workloads of much-needed resources. This ineffective way of allocating resources leads to a larger server footprint and, in turn, increased CAPEX and OPEX.

Enter “Hot Plug” – “Hot Plug CPU and Hot Plug Memory” on the vSphere platform: resource allocation on demand, resulting in effective and elastic resource management that works on the principle of “Ask and thou shalt receive”.
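One practical note: CPU and Memory Hot Add must be enabled in the VM’s settings while the VM is powered off, and on most Linux guests the hot-added resources may still need to be brought online before the OS will use them. A minimal guest-side sketch, assuming a Linux VM where a vCPU and a memory block were just hot-added (the device numbers will vary):

# Bring a hot-added vCPU online (here assuming the new vCPU is cpu4)
echo 1 > /sys/devices/system/cpu/cpu4/online

# Bring any offline hot-added memory blocks online
for mem in /sys/devices/system/memory/memory*/state; do
    grep -q offline "$mem" && echo online > "$mem"
done

# Verify the new totals
lscpu | grep '^CPU(s):'
grep MemTotal /proc/meminfo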

Continue reading

Oracle RAC on VMware Cloud on Amazon AWS

Summary

With the recent launch of the VMware Cloud on AWS Software Defined Data Center (SDDC) from VMware, many Business Critical Application (BCA) workloads that were previously difficult to deploy in the cloud no longer require significant platform modifications.

This post describes a Better Together demonstration that VMware and AWS presented at VMworld 2017, using an Oracle RAC database for high-availability, zero-downtime client connection failover, supporting a Django-Python application running in a Native AWS Elastic Beanstalk environment.

Oracle RAC presents two requirements that are difficult to meet on AWS infrastructure:

  • Shared Storage
  • Multicast Layer 2 Networking.

VMware vSAN and NSX, deployed into the VMware SDDC cluster, neatly meet both requirements.

The Django-Python application layer’s end-to-end provisioning is fully automated with AWS Elastic Beanstalk, which creates one or more environments containing the necessary Elastic Load Balancer, Auto-Scaling Group, Security Group, and EC2 instances, each complete with all of the Python prerequisites needed to dynamically scale based on demand.  From a zip file containing your application code, a running environment can be launched with a single command.
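For illustration, a typical workflow with the EB CLI might look like the following sketch (the application and environment names here are hypothetical):

eb init -p python-3.6 rac-demo-app   # register the application and platform
eb create BLUE                       # provision ELB, ASG, Security Group and EC2 instances
eb deploy BLUE                       # push a new application version from the local source

Each command operates on the application code in the current directory, so standing up or updating a full environment really is a one-liner.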

By leveraging the AWS Elastic Beanstalk Service for the application tier, and VMware Cloud on AWS for the database tier, this end-to-end architecture delivers a high-performance, consistently repeatable, and straightforward deployment.  Better Together!

 

Architecture

 

 

In the layout above, on the right, VMware Cloud on AWS is provided by VMware directly.  For each Software Defined Data Center (SDDC) cluster, the ESXi hypervisor is installed on Bare Metal hardware provided by AWS EC2, deployed into a Virtual Private Cloud (VPC) within an AWS account owned by VMware.

Each EC2 physical host contributes 8 internal NVMe high-performance flash drives, which are pooled together using VMware vSAN to provide shared storage.  This service requires a minimum of 4 cluster nodes and can be scaled online (via portal or REST API) to 16 nodes at initial availability, with 32- and 64-node support to follow shortly thereafter.

VMware NSX provides one or more extensible overlay logical networks for Customer virtual machine workloads, while the underlying AWS VPC CIDR block provides a control plane for maintenance and internal management of the service.

All of the supporting infrastructure deployed into the AWS account on the right side of the diagram is incorporated into a consolidated hourly or annual rate to the Customer from VMware.

In the layout above, on the left, a second AWS account directly owned by the Customer is connected to the VMware-owned SDDC account for optionally consuming Native AWS services alongside the deployed vSphere resources (right).

When initially deploying the VMware Cloud on AWS SDDC cluster, we need to provide temporary credentials to log in to a newly created or existing Customer-managed AWS account.  The automation workflow then creates an Identity and Access Management (IAM) role in the Customer AWS account (left), and grants account permissions for the SDDC to assume the role in the Customer AWS account.

This role provides a minimal set of permissions necessary to create Elastic Network Interfaces (ENIs) and route table entries within the Customer AWS account to facilitate East-West routing between the Customer AWS Account’s VPC CIDR block (left), and any NSX overlay logical networks the Customer chooses to create in the SDDC account for VM workloads (right).

The East-West traffic within the same Availability Zone provides extremely low latency free of charge, enabling the Customer to integrate technology from both vSphere and AWS within the same application, choosing the best of both worlds.

Oracle RAC Configuration

Database workloads are typically IO latency sensitive.  Per VMware KB article 2121181, there are a few recommendations to consider for significantly improving disk IO performance.

Below is the disk setup for the Oracle RAC cluster using the VMware multi-writer setting, which allows disks to be shared between the Oracle RAC nodes.

 

The Oracle Databases on VMware Best Practices Guide provides best practice guidelines for deploying Oracle Single Instance and Oracle RAC clusters on the VMware SDDC.

https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/vmware-oracle-databases-on-vmware-best-practices-guide.pdf

For the VMworld demo, the OCI-compliant Oracle Instant Client was wrapped with the cx_Oracle Python library and used Oracle’s Database Resident Connection Pooling (DRCP).  Database connections are initially evenly balanced between the ORCL1 and ORCL2 instances serving a custom database service named VMWORLD.
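As background, DRCP is started on the database side and consumed through a pooled connect string; a minimal sketch, assuming the VMWORLD service above (the SCAN name rac-scan is hypothetical):

# On the database, start the DRCP pool once as SYSDBA
sqlplus / as sysdba <<'EOF'
EXECUTE DBMS_CONNECTION_POOL.START_POOL();
EOF

# Clients then request a pooled server via the :POOLED suffix (EZConnect)
sqlplus appuser/secret@//rac-scan:1521/VMWORLD:POOLED

With cx_Oracle, the same pooled connection is requested by passing this DSN together with a connection class (cclass) to the connect call.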

By failing the database service on a given node, we demonstrate that only 50% of client connections are affected, all of which can immediately reconnect to the surviving instance.

An often overlooked challenge with Oracle RAC is that client connections do not automatically fail back after repairing the failure.  Those client connections must be recycled at the resource pool level, which might require an application outage if only one pool was included in the design.  Multiplexing requests over two connection pools in your application code allows each pool to be iteratively taken out of service without taking the application down.

Given that such application design changes are often not tenable post-deployment, AWS Elastic Beanstalk makes quick work of that limitation: simply deploy a GREEN copy of your application environment, validate that it passes health checks, and then transition your Customer workload from the BLUE to the GREEN stack.  When the GREEN stack boots, its database connections will be properly balanced between instances as desired, after which the BLUE stack can be safely terminated.  Similarly, application code changes can be deployed using the same BLUE/GREEN methodology, affording rapid rollback to the original stack if problems are encountered.  Any number of additional stacks can be deployed with a single command, “eb create GREEN”, or automated via the REST API.
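A sketch of that BLUE/GREEN cycle using the EB CLI (environment names as in the demo; check the current EB CLI documentation for the exact options):

eb clone BLUE -n GREEN     # stand up a GREEN copy of the running BLUE environment
eb swap BLUE -n GREEN      # once GREEN is healthy, swap the environment CNAMEs
eb terminate BLUE          # after verifying traffic on GREEN, retire the old stack

Because the swap exchanges the environment CNAMEs, clients simply resolve to the new stack on their next DNS lookup.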

 

At VMworld, we ran a live demo that continuously failed each database service iteratively, followed by an Elastic Beanstalk environment URL swap between BLUE and GREEN every 60 seconds, while monitoring Oracle’s GV$CPOOL_CC_STATS data dictionary view.  The ClassName consists of the database service name VMWORLD, followed by the Beanstalk environment name and the application server’s EC2 instance identifier.  The second and subsequent columns of the table below indicate the RAC node servicing queries between refresh cycles.
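The monitoring query itself is straightforward; a sketch of the kind of statement polled during the demo (the column selection here is illustrative, and the credentials are hypothetical):

sqlplus -s appuser/secret@//rac-scan:1521/VMWORLD <<'EOF'
SELECT inst_id, cclass_name, num_requests, num_hits, num_misses
FROM   gv$cpool_cc_stats
ORDER  BY cclass_name;
EXIT;
EOF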

 

 

Conclusion

VMware Cloud on AWS affords many Better Together opportunities to not only streamline operational processes by leveraging Native AWS services, but also enable a cloud-first IT transformation without needing to disruptively re-platform your Enterprise Business Critical Applications.

The cloud-based SDDC cluster deployment is simply another datacenter and cluster, managed in the same way you manage your on-premises VMware environments today, without needing to retool or retrain staff.

Creating and expanding SDDC clusters can be accomplished in minutes, allowing you to drive utilization to much higher efficiency without concern for 18-24 month capacity planning cycles that must be budgeted for peak usage.  Burst capacity can be released immediately after it is no longer needed, avoiding both the CAPEX overhead and the OPEX overhead of running your own datacenters.

 

A demo of “Oracle RAC on VMware Cloud on Amazon AWS” can be found at the URL below:
https://www.youtube.com/watch?v=vpU0MW8tkhc

All Oracle on VMware SDDC collateral can be found at the URL below:

Oracle on VMware Collateral – One Stop Shop
https://blogs.vmware.com/apps/2017/01/oracle-vmware-collateral-one-stop-shop.html

More information on VMware Cloud on AWS can be found at the URL below:
https://blog.cloud.vmware.com/s/services-and-products-vmware-cloud-on-aws

 

Performance of SAP Central Services with VMware Fault Tolerance

Many SAP customers on their virtualization journey are considering the option to protect SAP Central Services with VMware Fault Tolerance (FT). Central Services is a single point of failure in the SAP architecture that manages transaction locking and messaging across the SAP system, and failure of this service results in downtime for the whole system. It is a strong candidate for VMware FT, and we have conducted a 1000-user test on vSphere 6.x, which is documented in Section 4 of the SAP VMware Best Practice Guide.

The VMware vSphere 6 Fault Tolerance whitepaper notes that “One of the most common performance observations of virtual machines under FT protection is a variable increase in the network latency of the virtual machine”. Given this, how do Central Services and VMware FT impact the performance of the SAP application as experienced by the SAP business user? I will demonstrate a basic example here.

A potential validation at the infrastructure level is to run the network “ping” command and the SAP utility “niping”, an SAP network utility used to help analyze network performance. When I ran these commands at the OS command line to test network performance between an SAP application server and Central Services in two separate VMs, the results showed an increase in latency from about 0.3 to 1.8 ms when VMware FT was turned on for the Central Services VM. This is expected behavior and does not reflect the performance experience that an SAP business user will see with VMware FT.
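For reference, a minimal niping round-trip test looks roughly like the sketch below; the host name and packet parameters are illustrative, so check SAP’s niping documentation for the exact options:

# On the Central Services VM: start the niping server
niping -s

# On the application server VM: run 100 round trips of 1000 bytes each
niping -c -H cs-host -B 1000 -L 100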

My next test was to construct a basic SAP application-level test. This test is a custom SAP program (written in ABAP) that automates the change of a sales order document; once executed, it updates around 50 sales orders automatically in series. For each sales order that is changed, a lock is created and managed by Central Services. The program uses standard SAP techniques based on SAP “BDC” for mass input of data by simulating user inputs in transaction screens. The SAP transaction being called is the Change Sales Order transaction (“VA02”). The program is executed in online mode/foreground via the SAP client SAPGUI. After each online interaction, SAPGUI records the response time in milliseconds at the bottom right; this was used as the performance metric.

The following diagram shows the test environment.

The following table shows the results.

The difference in average online response time between VMware FT off and on is around 2%. The tests simulate a single user executing the change sales order transaction multiple times very quickly. This is a basic validation which should be followed by a multi-user test with actual users or business workloads simulated in a software testing tool. Note that other tests will show different results than shown here, and mileage is expected to vary. In this example, the simulated user is making many document changes in a short period of time with no think time. In reality, an online business user will spend more time processing data within a transaction, activity that requires resources on the application server rather than Central Services, so the frequency of lock requests generated by a single user would be lower than in this example.

The Art of P2V and Oracle ASM

“Come with me if you want to live” – famous words from the Terminator series.

It’s also the very reason IT companies are adopting the ‘Virtualize First’ policy: to reap all of the benefits of virtualization, move away from the soon-to-be-legacy bare metal architecture world, and ‘save a bunch of money’, just as the Gecko said.

As part of the virtualization journey, one of the tools VMware Professional Services (PSO), Partners, and Customers use to migrate applications from physical x86 servers (Windows and Linux) to VMware Virtual Machines (VMs) is VMware Converter, in a process known as P2V (Physical to Virtual). It transforms Windows- and Linux-based physical machines and third-party image formats into VMware virtual machines.

One of the most common questions I get talking to the VMware field, Partners, and Customers as part of my role is ‘Can I use VMware Converter to migrate Oracle databases from physical x86 running Linux, or Oracle OVM running Linux, to the VMware vSphere platform?’ The answer, in two famous words: ‘it depends!’

Let me explain why I said that.

 

Database Re-Platforming

Oracle databases being the sophisticated ‘beasts of burden’ they are, there are many key factors to keep in mind when we embark on an Oracle database re-platforming exercise, whether between the same or different system architectures, bare metal to bare metal, or physical to virtual. Some of them include:

  • source and destination system architecture
    • are we moving between like architectures (x86 to x86)
    • are we moving from a big endian system to a little endian system (Solaris / AIX / HP-UX to x86)
  • size and operating nature of the database (terabytes / production, pre-prod, dev, test etc)
  • database storage (File system / Oracle ASM)

More information on endianness can be found at the link below:
https://en.wikipedia.org/wiki/Endianness
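As a quick check, on Linux you can confirm a system’s byte order with lscpu:

$ lscpu | grep 'Byte Order'
Byte Order:            Little Endian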

So, if your use case is moving Oracle databases from a big endian system to a little endian system (Solaris / AIX / HP-UX to x86), stop right here: you cannot use the VMware Converter tool to migrate databases between RISC UNIX and Linux x86. You need an Oracle plan and design exercise to migrate Oracle databases between these two systems.

Keep reading if you are re-platforming an Oracle database between x86 platforms, i.e., from a physical server or virtual machine (VMware vSphere / Oracle OVM) to a VMware virtual machine.

Continue reading

“RAC” n “RAC” all night – Oracle RAC on vSphere 6.x

“I wanna “RAC” and “RAC” all night and party every day” – mantra of an Oracle RAC DBA.

Much has been written, spoken, and probably beaten senseless 🙂 about the magical “Multi-writer” setting and how it helps multiple VMs share vmdk’s simultaneously for clustering and FT use cases.

I still get questions from customers interested in running Oracle RAC on vSphere about whether we have the ability to add shared vmdk’s to a RAC cluster online without any downtime. Yes, we do. Are the steps for adding shared vmdk’s to an extended RAC cluster online without any downtime the same? Yes.

 

Introduction

By default, the simultaneous multi-writer “protection” is enabled for all vmdk files, i.e., every VM has exclusive access to its vmdk files. So, in order for all of the VMs to access the shared vmdk’s simultaneously, the multi-writer protection needs to be disabled.

The table below describes the various Virtual Machine Disk Modes:

As we are all aware, Oracle RAC requires shared disks to be accessible by all nodes of the RAC cluster.

KB Article 1034165 provides more details on how to set the multi-writer option to allow VMs to share vmdk’s. The requirements for a shared disk with the multi-writer flag in a RAC environment are that the disk:

  • has to be Eager Zero Thick provisioned
  • need not be set to Independent-Persistent

While Independent-Persistent disk mode is not a hard requirement for enabling the multi-writer option, the default Dependent disk mode will cause a “cannot snapshot shared disk” error when a VM snapshot is taken. Using Independent-Persistent disk mode allows taking a snapshot of the OS disk, while the shared disk would need to be backed up separately by third-party backup software.
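Per KB 1034165, the multi-writer flag can also be set as an advanced configuration parameter directly in the VM’s .vmx file; a sketch assuming the shared disk sits at SCSI1:0:

# .vmx entry enabling multi-writer sharing for the disk at SCSI1:0
scsi1:0.sharing = "multi-writer"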

Supported and Unsupported Actions or Features with Multi-Writer Flag:

**** Important ****
•    SCSI bus sharing is left at the default and not touched at all when using shared vmdk’s
•    SCSI bus sharing is only used for RAC with RDMs (Raw Device Mappings) as shared disks

 

Facts about vmdk and multi-writer

Before version 6.0, we had the ability to add vmdk’s with the multi-writer option to an Oracle RAC cluster online; the only caveat was that this ability was not exposed in the vSphere Web/C# Client. We had to rely on PowerCLI scripting to add shared disks to an Oracle RAC cluster online.

Setting Multi Writer Flag for Oracle RAC on vSphere using Power Cli
https://blogs.vmware.com/apps/2013/10/setting-multi-writer-flag-for-oracle-rac-on-vsphere-without-any-downtime.html#more-864

http://www.virtuallyghetto.com/2015/10/new-method-of-enabling-multiwriter-vmdk-flag-in-vsphere-6-0-update-1.html

With vSphere 6.0 and onwards, we can add shared disks to an Oracle RAC Cluster online using the Web Client.

 

Key points to take away from this blog:
•    VMware recommends using shared VMDK(s) with the multi-writer setting for provisioning shared storage for ALL Oracle RAC environments (KB 1034165)
•    From vSphere 6.0 onwards, we can add shared vmdk’s to an Oracle RAC cluster online using the Web Client
•    Prior to version 6.0, we had to rely on PowerCLI scripting to add shared disks to an Oracle RAC cluster online

 

Example of an Oracle RAC Setup

As per best practices, the 2 VMs ‘rac01-g6’ and ‘rac02-g6’, part of the 2-node Oracle RAC setup, were deployed from a template ‘Template-12crac’.

The template has 10 vCPUs and 64 GB RAM, with OEL 7.3 as the operating system.

The template has 2 vmdk’s, 50 GB each, on the SCSI 0 controller (Paravirtual SCSI controller type).
•    Hard disk 1 is on SCSI0:0 and is for root volume (/)
•    Hard disk 2 is on SCSI0:1 and is for oracle binaries (/u01 for Grid and RDBMS binaries)

Hard Disk 1 (OS drive) & Hard Disk 2 (Oracle /u01) vmdk’s are set to
•    Thin Provisioning
•    No Sharing i.e. exclusive to the VM
•    Disk mode is set to ‘Dependent’

Template has 2 network adapters of type VMXNET3.
•    Public adapter
•    Private Interconnect

Public Adapter:

Private Interconnect:

Let’s add a shared vmdk of, say, 50 GB to both VMs online, without powering down the VMs.

Add shared vmdk to an Oracle RAC online

1. Adding shared disks can be done online without downtime.

2. Add a PVSCSI controller (SCSI 1) to RAC VM ‘rac01-g6’. Right-click on ‘rac01-g6’, choose ‘Edit Settings’, and add a new controller of type ‘Paravirtual’.

Leave SCSI Bus Sharing at ‘None’ (the default).

3. The next step is to add a 50 GB shared vmdk to VM ‘rac01-g6’ at the SCSI1:0 bus slot (you can add the new vmdk to any slot on SCSI 1 you want).

Right-click on VM ‘rac01-g6’ and choose ‘Edit Settings’. Choose ‘New Hard Disk’, set Sharing to ‘Multi-writer’, leave Disk mode at ‘Dependent’, and click ‘Add’. Click ‘OK’ and monitor progress.

4. Repeat Step 2 to add a new ‘Paravirtual’ controller (SCSI 1) to RAC VM ‘rac02-g6’.

5. The new vmdk (the vmdk with the multi-writer option) created on VM ‘rac01-g6’ at the SCSI1:0 bus slot needs to be shared with the ‘rac02-g6’ VM for clustering purposes.

6. Right-click on VM ‘rac02-g6’ and choose ‘Edit Settings’. Choose ‘Existing Hard Disk’ and click ‘Add’.

7. Navigate to your datastore [Group06], expand the datastore contents, and click on the ‘rac01-g6’ folder. Click on the shared vmdk ‘rac01-g6_2.vmdk’ which was created on ‘rac01-g6’. Click ‘OK’.

8. Note that the Sharing attribute for this vmdk needs to be set to ‘Multi-Writer’ and the SCSI slot set to the same one used on ‘rac01-g6’, i.e., SCSI1:0. Click ‘OK’ when done.

9. Rescan the SCSI bus on the OS of both VMs to see the newly added disk, and list the devices.
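The rescan can be done from inside the guest without a reboot; a minimal sketch assuming a typical Linux guest (host adapter numbers vary):

# Rescan all SCSI host adapters for newly added devices
for host in /sys/class/scsi_host/host*; do
    echo "- - -" > "$host/scan"
done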

[root@rac01-g6 ~]# fdisk -lu

Disk /dev/sda: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00098df2

Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200   104857599    51379200   8e  Linux LVM

….
Disk /dev/sdc: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
[root@rac01-g6 ~]#

[root@rac02-g6 ~]# fdisk -lu
Disk /dev/sda: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00098df2

Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200   104857599    51379200   8e  Linux LVM
….
Disk /dev/sdc: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
[root@rac02-g6 ~]#

10. Partition-align the shared disk (/dev/sdc) on ‘rac01-g6’ (do this on one node only) using fdisk, parted, or the tool of your choice:
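For example, a single aligned partition spanning the disk can be created with parted as sketched below (adjust the device name to your environment):

# Create a DOS label and one partition starting at sector 2048 (1 MiB aligned)
parted -s /dev/sdc mklabel msdos mkpart primary 2048s 100%

A start sector of 2048 gives the 1 MiB alignment reflected in the fdisk output that follows.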

11. After partition alignment:

[root@rac01-g6 ~]# fdisk -lu /dev/sdc
……
Disk /dev/sdc: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x4402e64c

Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048   104857599    52427776   83  Linux
[root@rac01-g6 ~]#

[root@rac02-g6 ~]# fdisk -lu /dev/sdc
…..
Disk /dev/sdc: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x4402e64c

Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048   104857599    52427776   83  Linux
[root@rac02-g6 ~]#

12. Create ASM disks using ASMLIB

Installing and Configuring Oracle ASMLIB Software
https://docs.oracle.com/database/122/LADBI/installing-and-configuring-oracle-asmlib-software.htm#LADBI-GUID-79F9D58F-E5BB-45BD-A664-260C0502D876

 

[root@rac01-g6 ~]# /usr/sbin/oracleasm createdisk DATA_DISK01 /dev/sdc1
Writing disk header: done
Instantiating disk: done
[root@rac01-g6 ~]#

[root@rac01-g6 ~]# /usr/sbin/oracleasm listdisks
DATA_DISK01
[root@rac01-g6 ~]#

[root@rac02-g6 ~]# /usr/sbin/oracleasm scandisks
[root@rac02-g6 ~]# /usr/sbin/oracleasm listdisks
DATA_DISK01
[root@rac02-g6 ~]#

As we can see, we have now added a 50 GB shared vmdk to both VMs online without any downtime, and created an ASM disk on this shared disk to be used for an Oracle RAC ASM disk group.

The rest of the steps to create the Oracle RAC cluster are exactly the same as shown in the Oracle documentation:
https://docs.oracle.com/database/122/CWSOL/title.htm

Summary
Key points to keep in mind:

  • VMware recommends using shared VMDK(s) with the multi-writer setting for provisioning shared storage for ALL Oracle RAC environments (KB 1034165)
  • From vSphere 6.0 onwards, we can add shared vmdk’s to an Oracle RAC cluster online using the Web Client
  • Prior to version 6.0, we had to rely on PowerCLI scripting to add shared disks to an Oracle RAC cluster online

Best practices, which can be found in the “Oracle Databases on VMware – Best Practices Guide”, need to be followed when configuring an Oracle RAC environment.

https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/vmware-oracle-databases-on-vmware-best-practices-guide.pdf

All Oracle on vSphere white papers, including Oracle licensing on vSphere/vSAN, Oracle best practices, RAC deployment guides, and the workload characterization guide, can be found at the URL below:
Oracle on VMware Collateral – One Stop Shop
https://blogs.vmware.com/apps/2017/01/oracle-vmware-collateral-one-stop-shop.html

CenturyLink Transforms SAP Deployment Model with VMware Virtualization

CenturyLink SAP

We recently worked with CenturyLink, one of the largest telecommunications companies in the United States, to optimize their virtual SAP HANA solutions. The outcome is the success story referenced below, in which CenturyLink describes how they use the VMware platform to provide a customized private cloud for SAP applications, including SAP HANA, in less than 28 days, with no compromise on performance.

An SAP infrastructure project duration of 28 days may not sound that fast, but remember, this is for a completely customized SAP private cloud solution and not just some standard, simple SAP HANA instances running somewhere in the public cloud as a test or development system. With CenturyLink, customers can deploy new SAP workloads up to four times faster compared to in-house implementations, where these deployments typically take over 100 days!

Deploying a complete SAP landscape includes several systems, such as SAP Solution Manager, SAP Gateways, load balancers, several application servers, and finally the SAP HANA database. All these systems need to be configured, patched to the latest software release level, and connected while maintaining the highest security standards. All of this can be done, if desired, by CenturyLink.

Besides faster time to market, CenturyLink can utilize templates and repeatable processes, which helps it easily standardize and scale its offering while managing costs, complexity, and risks. This all leads to CapEx savings of up to 60 percent, and OpEx savings in a similar range, for CenturyLink customers. For instance, as an SAP HEC partner, CenturyLink previously had to deploy 20 physical server systems to support 20 independent SAP HANA systems without vSphere virtualization. Now they deploy a VMware cluster of 8 hosts to support these 20 SAP HANA instances, including HA, which is a hardware reduction of 12 hosts, or 60 percent. The corresponding 60 percent savings in power and cooling, rack space, and hardware maintenance are only the most readily quantifiable cost savings; in addition, the easier operation of a virtual, software-defined environment is a major long-term cost-saving factor.

These are the reasons why CenturyLink wants to go one step further towards a fully software-defined datacenter and plans to implement a VMware Virtual SAN™ based hyper-converged infrastructure, ready to run even the most demanding SAP workloads.

For more information, please review the success story posted here:

To be “RDM for Oracle RAC”, or not to be, that is the question

Famous words from William Shakespeare’s play Hamlet. Act III, Scene I.

This is true even in the virtualization world for Oracle Business Critical Applications, where one wonders which way to go when it comes to provisioning shared disks for Oracle RAC: Raw Device Mappings (RDMs) or VMDKs?

Much has been written and discussed about RDMs and VMDKs, and this post will focus on the Oracle RAC shared disks use case.

Some common questions I get talking to our customers who are embarking on the virtualization journey for Oracle on vSphere are:

  • What is the recommended approach when it comes to provisioning storage for Oracle RAC or Oracle Single instance? Is it VMDK or RDM?
  • What is the use case for each approach?
  • How do I provision shared RDM(s) in Physical or Virtual Compatibility mode for an Oracle RAC environment?
  • If I use shared RDM(s) (Physical or Virtual), will I be able to vMotion my RAC VMs without any cluster node eviction?

Continue reading

Streamlining Oracle on SDDC – VMworld 2017

Interested in finding out how to streamline your Business Critical Applications on the VMware Software-Defined Datacenter (SDDC) seamlessly?

Come attend our session at VMworld 2017 Las Vegas on Thursday, Aug 31, 1:30 p.m. – 2:30 p.m., where Amanda Blevins and Sudhir Balasubramanian will talk about the end-to-end life cycle of an application on the VMware SDDC.

This includes provisioning, management, monitoring, troubleshooting, and cost transparency with the vRealize Suite. The session will also include best practices for running Oracle databases on the SDDC including sizing and performance tuning. Business continuity requirements and procedures will be addressed in the context of the SDDC. It is a formidable task to ensure the smooth operation of critical applications running on Oracle, and the SDDC simplifies and standardizes the approach across all datacenters and systems.

Sign up for our session here:
https://my.vmworld.com/scripts/catalog/uscatalog.jsp?search=virt1625bu&showEnrolled=false

Oracle on vSAN HCI – VMworld 2017

Interested in finding out how the VMware vSAN HCI solution provides the high availability, workload balancing, seamless site maintenance, stability, resilience, performance, and cost-effective hardware required to meet critical business SLAs for running mission-critical workloads?

Come attend our session at VMworld 2017 Las Vegas on Wednesday, Aug 30, 2:30 p.m. – 3:30 p.m., where Sudhir Balasubramanian and Palanivenkatesan Murugan will talk about the VMware vSAN HCI solution for mission-critical Oracle workloads.

This session will showcase the deployment of Oracle clustered and non-clustered databases, along with running IO-intensive workloads on vSAN, and will also cover seamlessly running database day-2 operations such as backup and recovery, database cloning, data refreshes, and database patching using vSAN capabilities.

Sign up for our session here:
https://my.vmworld.com/scripts/catalog/uscatalog.jsp?search=STO1167BU&showEnrolled=false

Application Workload Guidance and Design for Virtualized SAP S/4HANA® on vSphere (Part 4/4)

In part 1 we introduced the concept of SAP HANA application workload guidance and used example business requirements to come up with a workload and vSphere cluster design for the SAP environment. In part 2 we looked at storage, network, and security design for the proposed customer environment. In part 3 we looked at monitoring and management, backup/recovery, and disaster recovery for SAP S/4HANA. In this final part, we look at validating the design we built over the past three parts and conclude the four-part blog series.

SAP S/4HANA Design Validation

Validation of an SAP design is often difficult because of the absence of publicly available validation and performance tools. This design utilizes best practices derived from vendor testing conducted in SAP labs. The SAP HANA database tier is critical to the infrastructure and must be validated. So as part of this SAP S/4HANA VVD solution, some SAP standard validation tools were used to exercise the designed infrastructure.

Continue reading