Oracle vSphere vVols

Oracle workloads on VMware Virtual Volumes (vVols) using the Pure Storage FlashArray X50 and the Broadcom LPe36000 Fibre Channel Adapter – better performance [SCSI-vVols vs. SCSI-FCP]


“He-Man and the Masters of the Universe” – I have the power! Yes, you do… with VMware Virtual Volumes (vVols) using the Pure Storage FlashArray X50 and the Broadcom LPe36000 Fibre Channel Adapter.

 

Business-critical Oracle workloads have stringent IO requirements. Enabling, sustaining, and ensuring the highest possible performance, along with continued application availability, is a major goal for all mission-critical Oracle applications in order to meet demanding business SLAs, all the way from on-premises to VMware Hybrid Clouds.

This blog is only meant to showcase the performance improvements we observed in our lab by deploying Oracle workloads on a VMware Virtual Volumes (vVols) datastore [SCSI-vVols] on ESXi 7.0.3 using the Pure Storage FlashArray X50 and the Broadcom LPe36000 Fibre Channel Adapter; it is by no means a performance benchmark blog.

This blog contains results from running the SLOB load generator against my lab workload, which will differ substantially from any real-world customer workload – your mileage may vary.

Remember, any performance data is a result of the combination of hardware configuration, software configuration, test methodology, test tool, and workload profile used in the testing.


VMware Virtual Volumes (vVols)

 

vVols is a SAN/NAS management and integration framework that exposes virtual disks as native storage objects and enables array-based operations at the virtual-disk level. vVols transforms the data plane of SAN/NAS devices by aligning storage consumption and operations with the VM. In other words, vVols makes SAN/NAS devices VM-aware and unlocks the ability to leverage array-based data services with a VM-centric approach at the granularity of a single virtual disk.

vVols allows customers to leverage the unique capabilities of their current storage investments and to transition, without disruption, to a simpler and more efficient operational model optimized for virtual environments that works across all storage types.

More information on VMware Virtual Volumes can be found here. More information on Pure Storage vVol implementations can be found here.


Test Use case

 

This blog focuses on the specific use cases below for testing Oracle workloads on a VMware Virtual Volumes (vVols) datastore and a VMware Traditional Filesystem datastore on ESXi 7.0.3, using the Pure Storage FlashArray X50 and the Broadcom LPe36000 Fibre Channel Adapter with the SLOB load generator:

  • Compare performance of 1 vmdk on a separate PVSCSI controller on
    • VMware Traditional Filesystem datastore using SCSI-FCP
    • VMware Virtual Volume (vVol) datastore using SCSI-vVols
  • Compare performance of 2 vmdks on 2 separate PVSCSI controllers on
    • VMware Traditional Filesystem datastore using SCSI-FCP
    • VMware Virtual Volume (vVol) datastore using SCSI-vVols


Test Bed

 

ESXi server ‘sc2esx31.vslab.local’, running VMware ESXi 7.0.3 build 19482537, with 2 sockets, 20 cores per socket, and 768GB RAM, was provisioned for this test.


ESXi server ‘sc2esx31.vslab.local’ had access to 2 datastores, as shown in the illustration below:

  • VMware Traditional Filesystem datastore ‘SC2-Pure-Oracle’
  • VMware Virtual Volume (vVol) datastore ‘SC2-Pure-vVOL’


VM ‘Oracle19c_OEL8_Customer’ was created with 20 vCPUs and 256GB of VM memory, with storage on both the VMware Traditional Filesystem datastore ‘SC2-Pure-Oracle’ and the VMware Virtual Volume (vVol) datastore ‘SC2-Pure-vVOL’, backed by the Pure Storage X50 through the Broadcom LPe36000 Fibre Channel Adapter.

The Oracle SGA and PGA were set to 32G and 10G respectively – quite deliberately, as we wanted to avoid logical IO and push as much physical IO as possible.

The OS was OEL 8.6 UEK with Oracle 19.15 Grid Infrastructure & RDBMS installed. Oracle ASM was the storage platform with Oracle ASMLIB. Oracle ASMFD can also be used instead of Oracle ASMLIB.

SLOB 2.5.2.4 was chosen as the load generator for this exercise, and the SLOB SCALE parameter was set to 1000G.
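For illustration, the relevant portion of slob.conf might look like the fragment below. Only SCALE is taken from the text above; the other parameters are placeholders, not our exact test settings:

```
# slob.conf fragment (illustrative; only SCALE is from the text above)
UPDATE_PCT=100          # placeholder: write-heavy workload profile
RUN_TIME=300            # placeholder: seconds per test run
SCALE=1000G             # per-schema active data set size, as stated above
THREADS_PER_SCHEMA=1    # placeholder
```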


The following VM disks were allocated:

  • SCSI 0:0 – 80G for the OS
  • SCSI 0:1 – 80G for the Oracle RDBMS / Grid Infrastructure binaries
  • SCSI 1:0 – 500GB for the Oracle core database

SLOB_DG was the ASM Diskgroup housing the SLOB data.

 

The following VM vmdk allocations were made for the 1-vmdk and 2-vmdk tests, as shown below. Each vmdk was allocated at 1800GB.


For both tests, the SLOB_DG ASM diskgroup was first created with the SCSI-FCP vmdk(s) and then with the vVol vmdk(s), the only difference between the two test cases being 1 vmdk vs. 2 such vmdks.
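As a sketch only (not the exact commands used in the lab – the ASMLIB disk label and guest device path below are hypothetical), presenting a vmdk to ASM with ASMLIB and creating the diskgroup might look like:

```sql
-- Hypothetical names/paths, for illustration only.
-- First, as root in the guest, stamp the partitioned vmdk for ASMLIB:
--   # oracleasm createdisk SLOB_DISK1 /dev/sdc1
-- Then create the diskgroup from the ASM instance:
CREATE DISKGROUP SLOB_DG EXTERNAL REDUNDANCY
  DISK 'ORCL:SLOB_DISK1'
  ATTRIBUTE 'compatible.asm' = '19.0', 'compatible.rdbms' = '19.0';
```

For the 2-vmdk test, a second stamped disk (e.g. 'ORCL:SLOB_DISK2') would simply be added to the DISK clause.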


Test Steps

 


Multiple SLOB test runs were performed for each test case, and 3 test runs were chosen for comparison.

For each test, Oracle AWR reports, Guest OS statistics, and VMware esxtop data were collected, and the following metric comparisons were made at different layers between the SCSI-FCP and vVol runs:

  • ESXi Statistics
    • Physical Disk SCSI Device Writes/sec
    • Physical Disk SCSI Device MBytes Written/sec
    • Virtual Disk Writes/sec
    • Virtual Disk MBytes Written/sec
  • Guest OS
    • wKB/sec
    • aqu-sz
  • Oracle Statistics
    • Executions (SQL) / sec
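The Guest OS metrics above (wKB/sec and aqu-sz) correspond to the `wkB/s` and `aqu-sz` columns of `iostat -x` extended statistics. A minimal sketch of how such samples might be averaged for one device; the column layout assumed here matches recent sysstat releases (as on OEL 8), and the sample text is illustrative, not real test data:

```python
# Average a named iostat -x column across samples for one device.
# Assumes each sample repeats a header row starting with 'Device'.
def average_iostat(samples: str, device: str, column: str) -> float:
    header = None
    values = []
    for line in samples.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "Device":          # header row names the columns
            header = parts
        elif header and parts[0] == device:
            values.append(float(parts[header.index(column)]))
    return sum(values) / len(values)

# Illustrative sample text (two iostat intervals for device 'sdb'):
samples = """\
Device r/s w/s rkB/s wkB/s aqu-sz
sdb 10.0 5000.0 80.0 630000.0 9.5
Device r/s w/s rkB/s wkB/s aqu-sz
sdb 12.0 5100.0 96.0 640000.0 10.5
"""
print(average_iostat(samples, "sdb", "wkB/s"))   # 635000.0
print(average_iostat(samples, "sdb", "aqu-sz"))  # 10.0
```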


Test Case 1 – Compare performance of 1 vmdk on a separate PVSCSI controller on a VMware Traditional Filesystem datastore using SCSI-FCP vs. a VMware Virtual Volume (vVol) datastore

 

ESXi and VM vmdk statistics – we see performance improvements with VMware Virtual Volumes (vVols) over the VMware Traditional Filesystem datastore


ESXi and VM vmdk statistics – details on 1 vmdk on the VMware Virtual Volume (vVol) datastore


ESXi and VM vmdk statistics – details on 1 vmdk on the VMware Traditional Filesystem datastore


Guest OS statistics – we see performance improvements with VMware Virtual Volumes (vVols) over the VMware Traditional Filesystem datastore with respect to the Guest OS wKB/sec statistic:

  • Guest OS: wKB/sec
    • VMware vVol – Average=635,277.76 KB/sec
    • VMware Traditional Filesystem – Average=570,921.02 KB/sec
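In relative terms, the averages above work out to roughly an 11% write-throughput advantage for vVols; a quick back-of-the-envelope check:

```python
# Relative improvement of the vVol datastore over the traditional
# filesystem datastore, from the Guest OS wKB/sec averages above.
vvol_avg = 635_277.76   # KB/sec, vVol datastore
vmfs_avg = 570_921.02   # KB/sec, traditional filesystem datastore

improvement_pct = (vvol_avg - vmfs_avg) / vmfs_avg * 100
print(f"{improvement_pct:.1f}%")  # 11.3%
```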


Oracle Executions (SQL) / sec statistic – we see performance improvements with VMware Virtual Volumes (vVols) over the VMware Traditional Filesystem datastore


Test Case 2 – Compare performance of 2 vmdks on 2 separate PVSCSI controllers on a VMware Traditional Filesystem datastore using SCSI-FCP vs. a VMware Virtual Volume (vVol) datastore

 

ESXi and VM vmdk statistics – we see performance improvements with VMware Virtual Volumes (vVols) over the VMware Traditional Filesystem datastore


ESXi and VM vmdk statistics – details on 2 vmdks on the VMware Virtual Volume (vVol) datastore


ESXi and VM vmdk statistics – details on 2 vmdks on the VMware Traditional Filesystem datastore


Guest OS statistics – we see performance improvements with VMware Virtual Volumes (vVols) over the VMware Traditional Filesystem datastore with respect to the Guest OS wKB/sec statistic:

  • Guest OS: wKB/sec
    • VMware vVol – Average=349,940.25 KB/sec
    • VMware Traditional Filesystem – Average=288,375.78 KB/sec
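In relative terms, the averages above work out to roughly a 21% write-throughput advantage for vVols in the 2-vmdk case; a quick check:

```python
# Relative improvement of the vVol datastore over the traditional
# filesystem datastore, from the Guest OS wKB/sec averages above.
vvol_avg = 349_940.25   # KB/sec, vVol datastore
vmfs_avg = 288_375.78   # KB/sec, traditional filesystem datastore

improvement_pct = (vvol_avg - vmfs_avg) / vmfs_avg * 100
print(f"{improvement_pct:.1f}%")  # 21.3%
```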


Oracle Executions (SQL) / sec statistic – we see performance improvements with VMware Virtual Volumes (vVols) over the VMware Traditional Filesystem datastore


Summary

 

This blog focused on the specific use cases below for testing Oracle workloads on a VMware Virtual Volumes (vVols) datastore and a VMware Traditional Filesystem datastore on ESXi 7.0.3, using the Pure Storage FlashArray X50 and the Broadcom LPe36000 Fibre Channel Adapter with the SLOB load generator:

  • Compare performance of 1 vmdk on a separate PVSCSI controller on
    • VMware Traditional Filesystem datastore using SCSI-FCP
    • VMware Virtual Volume (vVol) datastore
  • Compare performance of 2 vmdks on 2 separate PVSCSI controllers on
    • VMware Traditional Filesystem datastore using SCSI-FCP
    • VMware Virtual Volume (vVol) datastore

 

For each test, Oracle AWR reports, Guest OS statistics, and VMware esxtop data were collected, and the following metric comparisons were made at different layers between the SCSI-FCP and vVol runs:

  • ESXi Statistics
    • Physical Disk SCSI Device Writes/sec
    • Physical Disk SCSI Device MBytes Written/sec
    • Virtual Disk Writes/sec
    • Virtual Disk MBytes Written/sec
  • Guest OS
    • wKB/sec
    • aqu-sz
  • Oracle Statistics
    • Executions (SQL) / sec

 

With both test cases, as seen above, we observed performance improvements with VMware Virtual Volumes (vVols) over the VMware Traditional Filesystem datastore for all metrics across all layers.

This blog is only meant to showcase the performance improvements I observed in my lab by deploying Oracle workloads on a VMware Virtual Volumes (vVols) datastore on ESXi 7.0.3 using the Pure Storage FlashArray X50 and the Broadcom LPe36000 Fibre Channel Adapter; it is by no means a performance benchmark blog.

This blog contains results from running the SLOB load generator against my lab workload, which will differ substantially from any real-world customer workload – your mileage may vary.

Remember, any performance data is a result of the combination of hardware configuration, software configuration, test methodology, test tool, and workload profile used in the testing.


Acknowledgements

This blog was authored by Sudhir Balasubramanian, Senior Staff Solution Architect & Global Oracle Lead – VMware.

Many thanks to the following for providing their invaluable support and input in this effort:

  • Jayamohan Kallickal, Distinguished Engineer, Broadcom
  • Naveen Krishnamurthy, Senior Product Line Manager (Storage), VMware

We would like to thank the following vendors for lending their infrastructure and support in creating this collateral:

  • Pure Storage (FlashArray X50)
  • Broadcom (LPe36000 Fibre Channel Adapter)


Conclusion

Business-critical Oracle workloads have stringent IO requirements. Enabling, sustaining, and ensuring the highest possible performance, along with continued application availability, is a major goal for all mission-critical Oracle applications in order to meet demanding business SLAs, all the way from on-premises to VMware Hybrid Clouds.

All Oracle on VMware vSphere collateral can be found at the URL below:

Oracle on VMware Collateral – One Stop Shop
https://blogs.vmware.com/apps/2017/01/oracle-vmware-collateral-one-stop-shop.html