
PVSCSI Controllers and Queue Depth – ASM SAME and Oracle Workloads

The purpose of the previous blog post, ‘PVSCSI Controllers and Queue Depth – Accelerating performance for Oracle Workloads’, was to raise awareness of the importance of using PVSCSI adapters with adequate queue depth settings for Oracle workloads.

 

This blog focuses on the implementation of Oracle SAME (Stripe and Mirror Everything) technology using Oracle ASM, combined with multiple VMware PVSCSI controllers with queue depths for the PVSCSI controllers and VMDKs set to maximum, to achieve maximum performance for Oracle workloads.
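
As a concrete example of what ‘queue depth set to maximum’ means here, the sketch below shows the guest-side Linux settings for the vmw_pvscsi driver covered in the previous post (see VMware KB 2053145); the modprobe.d approach is one of several ways to apply them.

```
# Raise the PVSCSI queue depths inside the Linux guest (VMware KB 2053145):
# cmd_per_lun raises the per-device (VMDK) queue depth to its maximum of 254,
# ring_pages raises the adapter queue (ring) to its maximum of 32 pages.
cat <<'EOF' > /etc/modprobe.d/pvscsi.conf
options vmw_pvscsi cmd_per_lun=254 ring_pages=32
EOF

# Rebuild the initramfs so the options take effect at boot, then reboot.
dracut --force
reboot
```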


This blog

  • is not meant to be a performance study or bake-off of any sort
  • contains results that I got in my lab running the SLOB load generator against my workload, which will differ greatly from any real-world customer workload; your mileage may vary
  • is a reminder that any performance data is a result of the combination of hardware configuration, software configuration, test methodology, test tool, and workload profile used in the testing.


Oracle ASM Diskgroups and ASM Disks

 

Oracle ASM is a volume manager and a file system for Oracle Database files that supports single-instance Oracle Database and Oracle Real Application Clusters (Oracle RAC) configurations. Oracle ASM uses disk groups to store data files; an Oracle ASM disk group is a collection of disks that Oracle ASM manages as a unit.

Keep in mind, Oracle ASM provides both mirroring (except with external redundancy, where mirroring is left to the underlying storage) and striping (coarse / fine) right out of the gate.
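
As a minimal sketch, the following shows how a disk group might be created on a setup like this one; the ASMLIB disk names and the choice of external redundancy (mirroring left to the all-flash array) are illustrative assumptions, not the exact commands used in this lab.

```
# Create a disk group from ASMLIB-labeled disks (names are illustrative).
# External redundancy leaves mirroring to the array; ASM still stripes
# extents across every disk in the group.
sqlplus / as sysasm <<'EOF'
CREATE DISKGROUP DATA_DG EXTERNAL REDUNDANCY
  DISK 'ORCL:DATA01', 'ORCL:DATA02', 'ORCL:DATA03', 'ORCL:DATA04';
EOF
```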

More information on Oracle ASM can be found here.


Stripe and Mirror Everything (SAME)

 

The stripe and mirror everything (SAME) methodology has been recommended by Oracle for many years and is a means to optimize high availability, performance, and manageability.

Oracle ASM implements the SAME methodology and adds automation on top of it. To achieve maximum performance, the SAME methodology proposes to stripe across as many physical devices as possible.
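
One way to sanity-check a SAME layout is to list which disks back each disk group and confirm they map to VMDKs on different PVSCSI controllers; the query below is a generic v$asm_disk listing, not output from this lab.

```
# List the disks behind each disk group; for a SAME layout, the paths should
# map back to VMDKs spread across the PVSCSI controllers.
sqlplus -s / as sysasm <<'EOF'
SET LINESIZE 120 PAGESIZE 100
SELECT dg.name AS diskgroup, d.path, d.os_mb
FROM   v$asm_disk d
JOIN   v$asm_diskgroup dg ON dg.group_number = d.group_number
ORDER  BY dg.name, d.path;
EOF
```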

More information on SAME can be found here.


Key points to take away from this blog

 

This blog focuses on the use cases below, which test an Oracle workload with the SLOB load generator using Oracle SAME (Stripe and Mirror Everything) implemented with ASM across multiple VMware PVSCSI controllers, with queue depths for the PVSCSI controllers and VMDKs set to maximum.

The intent of the blog was to find out which layout works best for the given workload we are using; your mileage may vary.

This blog assumes that the reader has read the previous blog, ‘PVSCSI Controllers and Queue Depth – Accelerating performance for Oracle Workloads’.


Test Setup

 

VM ‘Oracle19c-BM-ASM’ was created with 24 vCPUs and 320 GB memory, with storage on an all-flash Pure Storage array, and the Oracle SGA and PGA set to 64G and 6G respectively.

The OS was OEL 7.6 UEK with Oracle 19c Grid Infrastructure and RDBMS installed. Oracle ASM was the storage platform, using Oracle ASMLIB; Oracle ASMFD can also be used instead of Oracle ASMLIB.

SLOB 2.5.2.4 was chosen as the load generator for this exercise; the SLOB SCALE parameter was set to 1530G.
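
For reference, the memory and SLOB settings called out above would look roughly as follows; only the 64G, 6G and 1530G values come from this post, and the SCOPE/restart details are assumptions.

```
# Set the SGA and PGA targets used in this setup (a restart is needed
# for the SPFILE-scoped SGA change to take effect).
sqlplus / as sysdba <<'EOF'
ALTER SYSTEM SET sga_target = 64G SCOPE = SPFILE;
ALTER SYSTEM SET pga_aggregate_target = 6G SCOPE = SPFILE;
EOF

# slob.conf is sourced as shell by SLOB's setup and run scripts;
# the scale used for this exercise:
# SCALE=1530G
```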


Test Cases –

  • Use Case 1 – ASM disks in a SAME configuration using all 4 PVSCSI controllers
    • SCSI 0:0 60G for O/S
    • SCSI 0:1 60G for Oracle RDBMS/Grid Infrastructure binaries
    • ASM disks in a SAME configuration using all 4 PVSCSI controllers, i.e. DATA_DG & FRA_DG striped across all 4 PVSCSI controllers
    • Redo log files are multiplexed between the DATA_DG and FRA_DG ASM diskgroups (see the sketch after this list)
    • No dedicated SCSI controller for redo log files
    • Entire database is SAME across all 4 PVSCSI controllers
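
A minimal sketch of the Use Case 1 redo multiplexing, with one member in each disk group; the group number and size are illustrative assumptions.

```
# Add a redo log group with one member in each disk group, so every redo
# write lands on both DATA_DG and FRA_DG (group number and size assumed).
sqlplus / as sysdba <<'EOF'
ALTER DATABASE ADD LOGFILE GROUP 5 ('+DATA_DG', '+FRA_DG') SIZE 2G;
EOF
```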

 

  • Use Case 2 – Dedicated SCSI controllers for FRA_DG & REDO_DG; DATA_DG across 2 SCSI controllers in a SAME layout
    • SCSI 0:0 60G for O/S
    • SCSI 0:1 60G for Oracle RDBMS/Grid Infrastructure binaries
    • FRA_DG on the SCSI 1 controller
    • DATA_DG in a SAME configuration across the SCSI 1 and SCSI 2 controllers
    • REDO_DG on the SCSI 3 dedicated controller
    • Redo log files are multiplexed on the REDO_DG diskgroup

 

  • Use Case 3 – Dedicated SCSI controller for REDO_DG; DATA_DG & FRA_DG across 3 SCSI controllers in a SAME layout
    • SCSI 0:0 60G for O/S
    • SCSI 0:1 60G for Oracle RDBMS/Grid Infrastructure binaries
    • ASM disks in a SAME configuration using 3 PVSCSI controllers, i.e. DATA_DG & FRA_DG striped across 3 PVSCSI controllers
    • REDO_DG on the SCSI 3 dedicated controller
    • Redo log files are multiplexed on the REDO_DG diskgroup (the ASMLIB disk labeling behind these layouts is sketched after this list)
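
Underpinning all three layouts is the mapping from VMDKs on specific PVSCSI controllers to ASM disks. A hedged example of labeling those devices with ASMLIB follows; the /dev/sd* names are assumptions that depend on how the guest enumerates the controllers, so verify which controller backs each device before labeling.

```
# Label partitioned VMDKs as ASM disks with ASMLIB (device names assumed;
# confirm which PVSCSI controller each /dev/sd* device sits on first).
oracleasm createdisk DATA01 /dev/sdc1   # VMDK on SCSI 1
oracleasm createdisk DATA02 /dev/sdd1   # VMDK on SCSI 2
oracleasm createdisk REDO01 /dev/sde1   # VMDK on SCSI 3 (dedicated redo)
oracleasm scandisks
oracleasm listdisks
```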


Test Results


Compare Use Case 1 v/s Use Case 2

 

We compared the results of Use Case 1 v/s Use Case 2, i.e. SAME everything (AWR Report 1) v/s FRA_DG and REDO_DG on dedicated controllers (redo log files on REDO_DG) with DATA_DG in a SAME layout across 2 controllers (AWR Report 2).

The database AWR reports from running the same workload under the above 2 use case configurations can be found below.

We can see that both configurations come close.


However, the average time of the ‘log file switch completion’ event for Use Case 1 (72.30 ms) is at least 2 times that of Use Case 2 (36.97 ms). The reason is that Use Case 2 has a dedicated SCSI controller for the redo logs, with the queue set to maximum for both the PVSCSI controller and the VMDK, and hence redo log operations do not have to contend with DATA and FRA for IOPS bandwidth.
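
To look at this event outside of an AWR report, the instance-wide wait statistics can be pulled directly; the query below against v$system_event is generic and cumulative since instance startup.

```
# Cumulative wait statistics for the redo event compared above.
sqlplus -s / as sysdba <<'EOF'
SET LINESIZE 120
SELECT event, total_waits,
       ROUND(time_waited_micro / GREATEST(total_waits, 1) / 1000, 2) AS avg_ms
FROM   v$system_event
WHERE  event = 'log file switch completion';
EOF
```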


Compare Use Case 2 v/s Use Case 3

 

We compared the results of Use Case 2 v/s Use Case 3, i.e. FRA_DG and REDO_DG on dedicated controllers (redo log files on REDO_DG) with DATA_DG in a SAME layout across 2 controllers (AWR Report 1) v/s REDO_DG on a dedicated controller (redo log files on REDO_DG) with DATA_DG & FRA_DG in a SAME layout across the remaining 3 controllers (AWR Report 2).

The database AWR reports from running the same workload under the above 2 use case configurations can be found below.

We can see that the Use Case 3 configuration performs better than Use Case 2.


As in the previous case, the average time of the ‘log file switch completion’ event for Use Case 1 (72.30 ms) is greater than that of Use Case 3 (40.11 ms), for the above-mentioned reason.


Summary of Test Results Analysis

 

  • Compared Use Case 1 v/s Use Case 2
    • We can see that both configurations come close
    • However, the average time of the ‘log file switch completion’ event for Use Case 1 (72.30 ms) is at least 2 times that of Use Case 2 (36.97 ms); the reason is that Use Case 2 has a dedicated SCSI controller for the redo logs, with the queue set to maximum for both the PVSCSI controller and the VMDK, and hence does not have to contend with DATA and FRA for IOPS bandwidth

 

  • Compared Use Case 2 v/s Use Case 3
    • We can see that the Use Case 3 configuration performs better than Use Case 2
    • As in the previous case, the average time of the ‘log file switch completion’ event for Use Case 1 (72.30 ms) is greater than that of Use Case 3 (40.11 ms), for the above-mentioned reason

 

This blog is not making recommendations of one design over the other; rather, it simply highlights the test results for the various use case configurations that we had in our lab, and recommends testing your workload with the various configurations before making any decision.

Remember, any performance data is a result of the combination of hardware configuration, software configuration, test methodology, test tool, and workload profile used in the testing, so the performance improvement I got with my workload in my lab is in no way representative of any real production workload, and the results for real-world workloads will differ.

 

Summing up the findings, we recommend

  • using PVSCSI adapters for database workloads, with increased queue depth for both the PVSCSI controllers and the VMDKs
  • in addition, deploying the Oracle SAME layout
  • testing whether Use Case 1 or Use Case 3 fits your performance SLAs for the redo log groups

 

The recommendation is to test your workload, because no 2 workloads are born alike, and every customer has

  • different database workload profiles
  • different workload requirements
  • different business SLAs / RTO / RPO etc.

and so, what works for Customer 1 may not work for Customer 2.


Summary

 

  • This blog focuses on implementing Oracle SAME (Stripe and Mirror Everything) technology with Oracle ASM, together with multiple VMware PVSCSI controllers with queue depths for the PVSCSI controllers and VMDKs set to maximum, to achieve maximum performance for Oracle workloads
  • It is not meant to be a performance study or bake-off of any sort
  • It contains results that I got in my lab running the SLOB load generator against my workload, which will differ greatly from any real-world customer workload; your mileage may vary
  • Remember, any performance data is a result of the combination of hardware configuration, software configuration, test methodology, test tool, and workload profile used in the testing

 

All Oracle on vSphere white papers, including Oracle licensing on vSphere/vSAN, Oracle best practices, RAC deployment guides, and the workload characterization guide, can be found at the URL below.

Oracle on VMware Collateral – One Stop Shop
https://blogs.vmware.com/apps/2017/01/oracle-vmware-collateral-one-stop-shop.html