“RAC” n “RAC” all night – Oracle RAC on VMware vSphere – Storage Options and How to Hot Add Clustered Disks Online

“I wanna “RAC” and “RAC” all nite and party every day” – the mantra of all Oracle RAC DBAs

The parable of the blind men and the elephant has been oft quoted. One of its lessons applies here: unless one correctly understands the concepts of running Oracle RAC on the VMware vSphere platform, there is always a chance of running into issues caused by misconceptions.

Deploying Oracle RAC on physical architecture is subject to the same challenges as running non-RAC Oracle on physical architecture. These challenges include, but are not limited to, hardware failure due to a failed component, power outage, and complete hardware meltdown. Providing high availability in these environments presents a significant challenge for business organizations. Hardware issues negate the inherent value proposition of Oracle RAC, which is to provide application-level high availability backed by sustained infrastructure high availability.

With VMware vSphere, customers have successfully run business-critical, high performance demanding Oracle workloads for many years. VMware vSphere provides high availability natively at the infrastructure level and is completely complementary to the application level high availability that Oracle RAC provides.

This blog describes the anatomy of an Oracle RAC setup and showcases the virtual machine components that are part of that setup.

This blog points to sections of previously published RAC blogs and the “Oracle VMware Hybrid Cloud High Availability Guide – REFERENCE ARCHITECTURE” in order to explain various concepts.

This blog also showcases how to hot add clustered/shared VMDKs to an Oracle RAC cluster online.

This is purely a technical blog and is not related to any RAC support discussion.

Concepts and Misconceptions

Much has been written and spoken, and the topic probably beaten senseless, about the magical “multi-writer” setting and how it lets multiple VMs share VMDKs simultaneously for clustering and Fault Tolerance (FT) use cases.

I still get questions from customers interested in running Oracle RAC on vSphere, such as:

– Do we have the ability to add shared VMDKs to a RAC cluster online, without any downtime? Yes, we do.

– Are the steps for adding shared VMDKs to an extended RAC cluster online, without any downtime, the same? Yes.

Introduction

By default, protection against simultaneous multi-writer access is enabled for all VMDK files, i.e., every VM has exclusive access to its own VMDK files. So, in order for multiple VMs to access a shared VMDK simultaneously, the multi-writer protection needs to be disabled for that disk.
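
In .vmx terms, disabling that protection on a disk shows up as a per-disk sharing attribute, as seen in the vmx listing later in this blog (the controller/slot number will vary with your layout):

scsi2:0.sharing = "multi-writer"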

The virtual machine disk modes are:

•    Dependent (default) – the disk is included in snapshots
•    Independent – Persistent – changes are immediately and permanently written to the disk; the disk is not affected by snapshots
•    Independent – Nonpersistent – changes to the disk are discarded when the VM is powered off or reverted to a snapshot

As we are all aware, Oracle RAC requires shared disks that can be accessed by all nodes of the RAC cluster.

KB Article 1034165 for non-vSAN (KB Article 2121181 for vSAN) provides more details on how to set the multi-writer option to allow VMs to share VMDKs.

The requirements for shared disks with the multi-writer flag in a RAC environment are that the shared disk(s):

  • must be Eager Zeroed Thick (EZT) provisioned
  • need not be set to Independent – Persistent

While Independent – Persistent disk mode is not a hard requirement for enabling the multi-writer option, the default Dependent disk mode causes a “cannot snapshot shared disk” error when a VM snapshot is taken.

Use of Independent – Persistent disk mode allows a snapshot to be taken of the OS disk, while the shared disks have to be backed up separately using third-party vendor software.

Supported and Unsupported Actions or Features with Multi-Writer Flag

The matrix of actions and features that are supported or unsupported with the multi-writer flag (snapshots, cloning, Storage vMotion, and so on) is documented in the KB articles referenced below.

Limitations of Multi-Writer Flag

KB Article 1034165 for non-vSAN (KB Article 2121181 for vSAN) provides more details on the multi-writer feature and current limitations.

RAC Deployment Models and Disk Sharing

1) VMware VMFS datastores – using the multi-writer attribute to share the VMDKs for Oracle RAC requires (see the sketch after this list):

  • SCSI bus sharing needs to be set to none
  • VMDKs must be EZT –  thick provision lazy zeroed or thin-provisioned formats are not allowed
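
As a sketch only: one way to create or convert an EZT VMDK is from the ESXi shell with vmkfstools. The path below reuses the [Group06] datastore and VM folder from the example later in this blog, but treat it as illustrative rather than definitive:

# Create a new 50 GB eager-zeroed-thick VMDK:
vmkfstools -c 50G -d eagerzeroedthick /vmfs/volumes/Group06/rac01-g6/rac01-g6_2.vmdk

# Convert an existing lazy-zeroed thick VMDK to EZT in place:
vmkfstools -k /vmfs/volumes/Group06/rac01-g6/rac01-g6_2.vmdk

# Inflate an existing thin VMDK to EZT:
vmkfstools -j /vmfs/volumes/Group06/rac01-g6/rac01-g6_2.vmdk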

2) VMware vVols  – using the multi-writer attribute to share the VMDKs for Oracle RAC requires:

  • SCSI bus sharing needs to be set to none
  • VMDKs must be thin provisioned – thick provision lazy zeroed or EZT formats are not allowed

3) NFS datastores – using the multi-writer attribute to share the VMDKs for Oracle RAC requires:

  • SCSI bus sharing needs to be set to none
  • VMDKs must be EZT – thick provision lazy zeroed or thin-provisioned formats are not allowed

For NFS datastores that do not support vSphere APIs for Array Integration (VAAI), refer to KB 2147691 for the steps to create EZT VMDKs.

4) Virtual RDM(s) – follow the same procedure as for VMDKs above

5) Physical RDM(s) – sharing physical RDM(s) for Oracle RAC requires:

  • SCSI bus sharing set to physical
  • Compatibility mode for the shared RDM set to physical (physical compatibility mode)

Details on using shared RDM(s) for Oracle RAC can be found in the blog To be “RDM for Oracle RAC”, or not to be, that is the question.

6) VMware vSAN – using the multi-writer attribute to share the VMDKs for Oracle RAC requires:

  • SCSI bus sharing needs to be set to none
  • Prior to vSAN 6.7 Patch P01, the virtual disk must be EZT to enable multi-writer mode
  • Beginning with VMware vSAN 6.7 Patch P01 (ESXi 6.7 Patch Release ESXi670-201912001), Oracle RAC on vSAN does not require shared VMDKs to be EZT (OSR=100) for multi-writer mode to be enabled (see the version check below)
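
To check whether a given host is at or beyond that patch level, the ESXi version and build can be read from the host shell. This is just an illustrative check; compare the reported build number against the ESXi670-201912001 release notes:

# Prints the installed ESXi version and build number
vmware -vl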

KB Article 1034165 for non-vSAN (KB Article 2121181 for vSAN) can be consulted for extensive documentation.

An important caveat when using shared VMDK(s) with multi-writer for Oracle RAC (a verification sketch follows this list):

•    SCSI bus sharing is left at the default (None) and is not touched at all when using shared VMDKs with the multi-writer attribute
•    SCSI bus sharing is only changed for RAC clusters that use RDMs (Raw Device Mappings) as shared disks
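
A quick way to sanity-check this is to grep the VM’s .vmx file from the ESXi shell; the path below is illustrative. The shared disk should show sharing = "multi-writer", and the SCSI controller should have no sharedBus entry (or sharedBus = "none"), since bus sharing stays at its default:

# Look at the sharing and bus-sharing settings in the VM configuration
grep -E 'sharing|sharedBus' /vmfs/volumes/Group06/rac01-g6/rac01-g6.vmx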

Important observations to keep in mind

  • SCSI bus sharing ensures that a VM can register keys for SCSI-3 persistent reservations.

  • In order to use physical RDMs as shared storage for Oracle RAC, the multi-writer attribute should not be set, as physical bus sharing already, indirectly, leads to the disk being opened in multi-writer mode.
    • You run the risk of an unsupported configuration if the multi-writer and physical bus sharing settings are used simultaneously. It may have worked, or may be working, but that is NOT a guarantee that it will continue working. More importantly, stressing the point ad nauseam, this is not a supported configuration from a VMware GSS perspective, so if you have to continue down this path, do so at your own risk.

  • Additionally, note the following from the My Oracle Support document RAC: Frequently Asked Questions (RAC FAQ) (Doc ID 220970.1) – a quick in-guest PGR check is sketched after this list:
    • Oracle Clusterware and Oracle RAC do not require nor use SCSI-3 persistent group reservations (PGR) for Oracle Clusterware-only installations.
    • In a native Oracle RAC stack (no third-party or vendor cluster, nor Oracle Solaris Cluster), SCSI-3 PGR is not required by Oracle and should be disabled on the storage (for the disks/LUNs used in the stack).
    • When using a third-party or vendor cluster solution such as Symantec Veritas SFRAC, the third-party cluster solution may require that SCSI-3 PGR be enabled on the storage, as those solutions use SCSI-3 PGR as part of their I/O fencing procedures.
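
As an illustrative check only: from inside a guest, the sg_persist utility (from the sg3_utils package) can confirm that no SCSI-3 PGR keys are registered on a shared disk. /dev/sdc is the shared disk used in the example later in this blog; in a native Oracle RAC stack this should report no registered reservation keys:

# Read any SCSI-3 persistent reservation keys registered on the shared disk
sg_persist --in --read-keys /dev/sdc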

More details can be found in the section ‘VMware Multi-Writer Attribute for Shared VMDKs’ in the “Oracle VMware Hybrid Cloud High Availability Guide – REFERENCE ARCHITECTURE” guide.

VMware recommends using shared VMDK(s) with the multi-writer setting for provisioning shared storage for ALL Oracle RAC environments.

Facts about VMDKs and multi-writer

Before vSphere 6.0, we had the ability to add VMDKs with the multi-writer option to an Oracle RAC cluster online; the only caveat was that this ability was not exposed in the vSphere Web/C# Client. We had to rely on PowerCLI scripting to add shared disks to an Oracle RAC cluster online:

Setting Multi Writer Flag for Oracle RAC on vSphere using Power Cli

http://www.virtuallyghetto.com/2015/10/new-method-of-enabling-multiwriter-vmdk-flag-in-vsphere-6-0-update-1.html

With vSphere 6.0 and onwards, we can add shared disks to an Oracle RAC cluster online using the Web Client.

Key points to take away from this blog

•    VMware recommends using shared VMDK(s) with the multi-writer setting for provisioning shared storage for ALL Oracle RAC environments (KB 1034165)
•    With vSphere 6.0 and onwards, we can add shared VMDKs to an Oracle RAC cluster online using the Web Client
•    Prior to vSphere 6.0, we had to rely on PowerCLI scripting to add shared disks to an Oracle RAC cluster online

Example of an Oracle RAC Setup

As per best practices, the 2 VMs ‘rac01-g6’ and ‘rac02-g6’, which form the 2-node Oracle RAC setup, were deployed from a template ‘Template-12crac’.

The template has 10 vCPUs and 64 GB RAM, with OEL 7.3 as the operating system.

The template has 2 VMDKs of 50 GB each on the SCSI 0 controller (Paravirtual SCSI controller type):
•    Hard disk 1 is on SCSI0:0 and is for the root volume (/)
•    Hard disk 2 is on SCSI0:1 and is for the Oracle binaries (/u01 for Grid and RDBMS binaries)

The Hard Disk 1 (OS drive) and Hard Disk 2 (Oracle /u01) VMDKs are set to:
•    Thin provisioning
•    No sharing, i.e., exclusive to the VM
•    Disk mode ‘Dependent’

The template has 2 network adapters of type VMXNET3:
•    Public adapter
•    Private Interconnect

Public Adapter:

Private Interconnect:

Let’s add a shared VMDK of, say, 50 GB to both VMs online, without powering down the VMs.

Add a shared VMDK to an Oracle RAC online – pre-7.x screenshots

1. Adding shared disks can be done online without downtime.

2. Add a PVSCSI controller (SCSI 1) to RAC VM ‘rac01-g6’. Right-click ‘rac01-g6’, choose ‘Edit Settings’ and add a New Controller of type ‘Paravirtual’.

Leave the SCSI Bus Sharing set to ‘None’ (the default).

3. The next step is to add a 50 GB shared VMDK to VM ‘rac01-g6’ at the SCSI1:0 bus slot (you can add the new VMDK to any slot on SCSI 1).

Right-click VM ‘rac01-g6’ and choose ‘Edit Settings’. Choose ‘New Hard Disk’, set Sharing to ‘Multi-writer’, leave Disk mode set to ‘Dependent’ and click ‘Add’. Click ‘OK’ and monitor progress.

4. Repeat Step 2 to add a new ‘Paravirtual’ controller SCSI 1 to RAC VM ‘rac02-g6’.

5. The new VMDK (the VMDK with the multi-writer option) created on VM ‘rac01-g6’ at the SCSI1:0 bus slot needs to be shared with VM ‘rac02-g6’ for clustering purposes.

6. Right-click VM ‘rac02-g6’ and choose ‘Edit Settings’. Choose ‘Existing Hard Disk’ and click ‘Add’.

7. Navigate to your datastore [Group06], expand the datastore contents and click the ‘rac01-g6’ folder. Select the shared VMDK ‘rac01-g6_2.vmdk’ which was created on ‘rac01-g6’. Click ‘OK’.

8. Note that the Sharing attribute for this VMDK needs to be set to ‘Multi-Writer’ and the virtual device node set to the same slot as on ‘rac01-g6’, i.e. SCSI1:0. Click ‘OK’ when done.

9. Rescan the SCSI bus in the guest OS of both VMs to pick up the newly added disk, then list the devices (a sketch of the rescan follows).
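
A minimal sketch of the in-guest rescan, assuming the standard Linux sysfs interface (run on both nodes; the virtual HBA host numbers vary per VM):

[root@rac01-g6 ~]# for h in /sys/class/scsi_host/host*; do echo "- - -" > ${h}/scan; done
# "- - -" rescans all channels, targets and LUNs on each virtual HBA,
# making the hot-added disk visible without a reboot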

[root@rac01-g6 ~]# fdisk -lu

Disk /dev/sda: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00098df2

Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200   104857599    51379200   8e  Linux LVM

….
Disk /dev/sdc: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
[root@rac01-g6 ~]#

[root@rac02-g6 ~]# fdisk -lu
Disk /dev/sda: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00098df2

Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200   104857599    51379200   8e  Linux LVM
….
Disk /dev/sdc: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
[root@rac02-g6 ~]#

10. Partition-align the shared disk (/dev/sdc) on ‘rac01-g6’ (do this on one node only) using fdisk, parted, or the partitioning tool of your choice:
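
A minimal sketch using parted, assuming a DOS disk label and a single partition starting at sector 2048 (which matches the aligned output shown in step 11):

[root@rac01-g6 ~]# parted -s /dev/sdc mklabel msdos
[root@rac01-g6 ~]# parted -s /dev/sdc mkpart primary 2048s 100%
# Starting the partition at sector 2048 keeps it aligned on a 1 MiB boundary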

11. After partition alignment:

[root@rac01-g6 ~]# fdisk -lu /dev/sdc
……
Disk /dev/sdc: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x4402e64c

Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048   104857599    52427776   83  Linux
[root@rac01-g6 ~]#

[root@rac02-g6 ~]# fdisk -lu /dev/sdc
…..
Disk /dev/sdc: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x4402e64c

Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048   104857599    52427776   83  Linux
[root@rac02-g6 ~]#

12. Create ASM disks using ASMLIB

The process of creating the ASM disks can be found at Installing and Configuring Oracle ASMLIB Software
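
If ASMLIB has not yet been set up on the nodes, a hedged sketch of the one-time initialization (the full steps and prompts are in the Oracle document linked above):

[root@rac01-g6 ~]# /usr/sbin/oracleasm configure -i
# interactive prompts set the ASM disk owner/group and enable start on boot
[root@rac01-g6 ~]# /usr/sbin/oracleasm init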

[root@rac01-g6 ~]# /usr/sbin/oracleasm createdisk DATA_DISK01 /dev/sdc1
Writing disk header: done
Instantiating disk: done
[root@rac01-g6 ~]#

[root@rac01-g6 ~]# /usr/sbin/oracleasm listdisks
DATA_DISK01
[root@rac01-g6 ~]#

[root@rac02-g6 ~]# /usr/sbin/oracleasm scandisks
[root@rac02-g6 ~]# /usr/sbin/oracleasm listdisks
DATA_DISK01
[root@rac02-g6 ~]#

As we can see, we have now added a 50 GB shared VMDK to both VMs online, without any downtime, and created an ASM disk on this shared disk to be used in an Oracle RAC ASM disk group.

The rest of the steps to create the Oracle RAC cluster are exactly the same as shown in the Oracle documentation.
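
For completeness, a hedged sketch of what the eventual ASM disk group creation could look like, using the disk created above (standard Oracle ASM SQL; the disk group and disk names are the ones from this example):

[grid@rac01-g6 ~]$ sqlplus / as sysasm
SQL> CREATE DISKGROUP DATA_DG EXTERNAL REDUNDANCY DISK 'ORCL:DATA_DISK01';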

Add a shared VMDK to an Oracle RAC online – vSphere 7.x

For example, the steps to add a shared VMDK with the multi-writer attribute as an Oracle ASM disk to an Oracle RAC 19c cluster, using an FC-enabled VMFS datastore backed by Pure x50 storage, are shown in the section ‘Oracle RAC Storage on VMFS datastore’ in the “Oracle VMware Hybrid Cloud High Availability Guide – REFERENCE ARCHITECTURE” guide.

Below is an example of a 2-node 19c RAC cluster with 2 VMs, ‘prac19c1’ and ‘prac19c2’. The public and private networking is shown below.

The disk information for the 2-node 19c RAC cluster VMs ‘prac19c1’ and ‘prac19c2’ is shown below.

Below are the contents of the example RAC VMs’ .vmx files; this is provided as an example and is not in any way definitive. The listing shows both VMs side by side: each line contains the setting for ‘prac19c1’ followed by the corresponding setting for ‘prac19c2’.

For purposes of simplicity and illustration, one ASM disk group (DATA_DG) was created, housing all data files, control files, redo log files, archive log files, and the CRS and voting disks.

Separate ASM disk groups for the RAC and database components are recommended as a best practice.

[root@sc2esx31:/vmfs/volumes/5faa0685-b4cf32fa-c4e4-e4434b2d2ca8/prac19c1] cat prac19c1.vmx | sort -k 1 [root@sc2esx32:/vmfs/volumes/5faa0685-b4cf32fa-c4e4-e4434b2d2ca8/prac19c2] cat prac19c2.vmx | sort -k 1
.encoding = “UTF-8” .encoding = “UTF-8”
cleanShutdown = “TRUE” cleanShutdown = “TRUE”
config.version = “8” config.version = “8”
displayName = “prac19c1” displayName = “prac19c2”
ethernet0.addressType = “vpx” ethernet0.addressType = “vpx”
ethernet0.dvs.connectionId = “1104405254” ethernet0.dvs.connectionId = “1127802446”
ethernet0.dvs.portId = “445” ethernet0.dvs.portId = “446”
ethernet0.dvs.portgroupId = “dvportgroup-120” ethernet0.dvs.portgroupId = “dvportgroup-120”
ethernet0.dvs.switchId = “50 15 a5 52 f6 2d 2b ba-5d 0f b4 fd 33 60 f0 22” ethernet0.dvs.switchId = “50 15 a5 52 f6 2d 2b ba-5d 0f b4 fd 33 60 f0 22”
ethernet0.generatedAddress = “00:50:56:80:8c:b8” ethernet0.generatedAddress = “00:50:56:80:9e:93”
ethernet0.pciSlotNumber = “192” ethernet0.pciSlotNumber = “192”
ethernet0.present = “TRUE” ethernet0.present = “TRUE”
ethernet0.shares = “normal” ethernet0.shares = “normal”
ethernet0.virtualDev = “vmxnet3” ethernet0.virtualDev = “vmxnet3”
ethernet1.addressType = “vpx” ethernet1.addressType = “vpx”
ethernet1.dvs.connectionId = “1104408524” ethernet1.dvs.connectionId = “1127804202”
ethernet1.dvs.portId = “580” ethernet1.dvs.portId = “581”
ethernet1.dvs.portgroupId = “dvportgroup-2026” ethernet1.dvs.portgroupId = “dvportgroup-2026”
ethernet1.dvs.switchId = “50 15 a5 52 f6 2d 2b ba-5d 0f b4 fd 33 60 f0 22” ethernet1.dvs.switchId = “50 15 a5 52 f6 2d 2b ba-5d 0f b4 fd 33 60 f0 22”
ethernet1.generatedAddress = “00:50:56:80:24:11” ethernet1.generatedAddress = “00:50:56:80:1f:34”
ethernet1.pciSlotNumber = “1216” ethernet1.pciSlotNumber = “1216”
ethernet1.present = “TRUE” ethernet1.present = “TRUE”
ethernet1.shares = “normal” ethernet1.shares = “normal”
ethernet1.virtualDev = “vmxnet3” ethernet1.virtualDev = “vmxnet3”
floppy0.present = “FALSE” floppy0.present = “FALSE”
ftcpt.ftEncryptionMode = “ftEncryptionOpportunistic” ftcpt.ftEncryptionMode = “ftEncryptionOpportunistic”
guestInfo.detailed.data = “bitness=’64’ distroName=’Oracle Linux Server’ distroVersion=’7.9′ familyName=’Linux’ kernelVersion=’5.4.17-2036.103.3.1.el7uek.x86_64′ prettyName=’Oracle Linux Server 7.9′” guestInfo.detailed.data = “bitness=’64’ distroName=’Oracle Linux Server’ distroVersion=’7.9′ familyName=’Linux’ kernelVersion=’5.4.17-2036.103.3.1.el7uek.x86_64′ prettyName=’Oracle Linux Server 7.9′”
guestOS = “oraclelinux7-64” guestOS = “oraclelinux7-64”
guestOS.detailed.data = “bitness=’64’ distroName=’Oracle Linux Server’ distroVersion=’7.9′ familyName=’Linux’ kernelVersion=’5.4.17-2036.103.3.1.el7uek.x86_64′ prettyName=’Oracle Linux Server 7.9′” guestOS.detailed.data = “bitness=’64’ distroName=’Oracle Linux Server’ distroVersion=’7.9′ familyName=’Linux’ kernelVersion=’5.4.17-2036.103.3.1.el7uek.x86_64′ prettyName=’Oracle Linux Server 7.9′”
guestinfo.vmtools.buildNumber = “15389592” guestinfo.vmtools.buildNumber = “15389592”
guestinfo.vmtools.description = “open-vm-tools 11.0.5 build 15389592” guestinfo.vmtools.description = “open-vm-tools 11.0.5 build 15389592”
guestinfo.vmtools.versionNumber = “11269” guestinfo.vmtools.versionNumber = “11269”
guestinfo.vmtools.versionString = “11.0.5” guestinfo.vmtools.versionString = “11.0.5”
hpet0.present = “TRUE” hpet0.present = “TRUE”
memSize = “131072” memSize = “131072”
migrate.encryptionMode = “opportunistic” migrate.encryptionMode = “opportunistic”
migrate.hostLog = “prac19c1-744c86f6.hlog” migrate.hostLog = “prac19c2-744c8717.hlog”
monitor.phys_bits_used = “45” monitor.phys_bits_used = “45”
networking.skipSnapshot = “true” networking.skipSnapshot = “true”
numa.autosize.cookie = “100012” numa.autosize.cookie = “100012”
numa.autosize.vcpu.maxPerVirtualNode = “10” numa.autosize.vcpu.maxPerVirtualNode = “10”
numvcpus = “10” numvcpus = “10”
nvram = “prac19c1.nvram” nvram = “prac19c2.nvram”
pciBridge0.pciSlotNumber = “17” pciBridge0.pciSlotNumber = “17”
pciBridge0.present = “TRUE” pciBridge0.present = “TRUE”
pciBridge4.functions = “8” pciBridge4.functions = “8”
pciBridge4.pciSlotNumber = “21” pciBridge4.pciSlotNumber = “21”
pciBridge4.present = “TRUE” pciBridge4.present = “TRUE”
pciBridge4.virtualDev = “pcieRootPort” pciBridge4.virtualDev = “pcieRootPort”
pciBridge5.functions = “8” pciBridge5.functions = “8”
pciBridge5.pciSlotNumber = “22” pciBridge5.pciSlotNumber = “22”
pciBridge5.present = “TRUE” pciBridge5.present = “TRUE”
pciBridge5.virtualDev = “pcieRootPort” pciBridge5.virtualDev = “pcieRootPort”
pciBridge6.functions = “8” pciBridge6.functions = “8”
pciBridge6.pciSlotNumber = “23” pciBridge6.pciSlotNumber = “23”
pciBridge6.present = “TRUE” pciBridge6.present = “TRUE”
pciBridge6.virtualDev = “pcieRootPort” pciBridge6.virtualDev = “pcieRootPort”
pciBridge7.functions = “8” pciBridge7.functions = “8”
pciBridge7.pciSlotNumber = “24” pciBridge7.pciSlotNumber = “24”
pciBridge7.present = “TRUE” pciBridge7.present = “TRUE”
pciBridge7.virtualDev = “pcieRootPort” pciBridge7.virtualDev = “pcieRootPort”
sata0.pciSlotNumber = “33” sata0.pciSlotNumber = “33”
sata0.present = “TRUE” sata0.present = “TRUE”
sata0:0.autodetect = “TRUE” sata0:0.autodetect = “TRUE”
sata0:0.clientDevice = “TRUE” sata0:0.clientDevice = “TRUE”
sata0:0.deviceType = “atapi-cdrom” sata0:0.deviceType = “atapi-cdrom”
sata0:0.fileName = “auto detect” sata0:0.fileName = “auto detect”
sata0:0.present = “TRUE” sata0:0.present = “TRUE”
sata0:0.startConnected = “FALSE” sata0:0.startConnected = “FALSE”
sched.cpu.latencySensitivity = “normal” sched.cpu.latencySensitivity = “normal”
sched.cpu.min = “0” sched.cpu.min = “0”
sched.cpu.shares = “normal” sched.cpu.shares = “normal”
sched.cpu.units = “mhz” sched.cpu.units = “mhz”
sched.mem.min = “0” sched.mem.min = “0”
sched.mem.minSize = “0” sched.mem.minSize = “0”
sched.mem.shares = “normal” sched.mem.shares = “normal”
sched.scsi0:0.shares = “normal” sched.scsi0:0.shares = “normal”
sched.scsi0:0.throughputCap = “off” sched.scsi0:0.throughputCap = “off”
sched.scsi0:1.shares = “normal” sched.scsi0:1.shares = “normal”
sched.scsi0:1.throughputCap = “off” sched.scsi0:1.throughputCap = “off”
sched.scsi2:0.shares = “normal” sched.scsi2:0.shares = “normal”
sched.scsi2:0.throughputCap = “off” sched.scsi2:0.throughputCap = “off”
sched.swap.derivedName = “/vmfs/volumes/5faa0685-b4cf32fa-c4e4-e4434b2d2ca8/prac19c1/prac19c1-51c26718.vswp” sched.swap.derivedName = “/vmfs/volumes/5faa0685-b4cf32fa-c4e4-e4434b2d2ca8/prac19c2/prac19c2-51c26719.vswp”
scsi0.pciSlotNumber = “160” scsi0.pciSlotNumber = “160”
scsi0.present = “TRUE” scsi0.present = “TRUE”
scsi0.sasWWID = “50 05 05 6e 39 94 f9 20” scsi0.sasWWID = “50 05 05 68 13 f3 c0 b0”
scsi0.virtualDev = “pvscsi” scsi0.virtualDev = “pvscsi”
scsi0:0.deviceType = “scsi-hardDisk” scsi0:0.deviceType = “scsi-hardDisk”
scsi0:0.fileName = “prac19c1.vmdk” scsi0:0.fileName = “prac19c2.vmdk”
scsi0:0.present = “TRUE” scsi0:0.present = “TRUE”
scsi0:0.redo = “” scsi0:0.redo = “”
scsi0:1.deviceType = “scsi-hardDisk” scsi0:1.deviceType = “scsi-hardDisk”
scsi0:1.fileName = “prac19c1_1.vmdk” scsi0:1.fileName = “prac19c2_1.vmdk”
scsi0:1.present = “TRUE” scsi0:1.present = “TRUE”
scsi0:1.redo = “” scsi0:1.redo = “”
scsi1.pciSlotNumber = “224” scsi1.pciSlotNumber = “224”
scsi1.present = “TRUE” scsi1.present = “TRUE”
scsi1.sasWWID = “50 05 05 6e 39 94 f8 20” scsi1.sasWWID = “50 05 05 68 13 f3 c1 b0”
scsi1.virtualDev = “pvscsi” scsi1.virtualDev = “pvscsi”
scsi2.pciSlotNumber = “256” scsi2.pciSlotNumber = “256”
scsi2.present = “TRUE” scsi2.present = “TRUE”
scsi2.sasWWID = “50 05 05 6e 39 94 fb 20” scsi2.sasWWID = “50 05 05 68 13 f3 c2 b0”
scsi2.virtualDev = “pvscsi” scsi2.virtualDev = “pvscsi”
scsi2:0.deviceType = “scsi-hardDisk” scsi2:0.deviceType = “scsi-hardDisk”
scsi2:0.fileName = “prac19c1_3.vmdk” scsi2:0.fileName = “/vmfs/volumes/5faa0685-b4cf32fa-c4e4-e4434b2d2ca8/prac19c1/prac19c1_3.vmdk”
scsi2:0.mode = “independent-persistent” scsi2:0.mode = “independent-persistent”
scsi2:0.present = “TRUE” scsi2:0.present = “TRUE”
scsi2:0.redo = “” scsi2:0.redo = “”
scsi2:0.sharing = “multi-writer” scsi2:0.sharing = “multi-writer”
scsi3.pciSlotNumber = “1184” scsi3.pciSlotNumber = “1184”
scsi3.present = “TRUE” scsi3.present = “TRUE”
scsi3.sasWWID = “50 05 05 6e 39 94 fa 20” scsi3.sasWWID = “50 05 05 68 13 f3 c3 b0”
scsi3.virtualDev = “pvscsi” scsi3.virtualDev = “pvscsi”
softPowerOff = “FALSE” softPowerOff = “FALSE”
svga.guestBackedPrimaryAware = “TRUE” svga.guestBackedPrimaryAware = “TRUE”
svga.present = “TRUE” svga.present = “TRUE”
svga.vramSize = “8388608” svga.vramSize = “8388608”
time.synchronize.continue = “0” time.synchronize.continue = “0”
time.synchronize.restore = “0” time.synchronize.restore = “0”
time.synchronize.resume.disk = “FALSE” time.synchronize.resume.disk = “0”
time.synchronize.resume.host = “0” time.synchronize.resume.host = “0”
time.synchronize.shrink = “0” time.synchronize.shrink = “0”
time.synchronize.tools.enable = “FALSE” time.synchronize.tools.enable = “FALSE”
time.synchronize.tools.startup = “FALSE” time.synchronize.tools.startup = “FALSE”
toolScripts.afterPowerOn = “TRUE” toolScripts.afterPowerOn = “TRUE”
toolScripts.afterResume = “TRUE” toolScripts.afterResume = “TRUE”
toolScripts.beforePowerOff = “TRUE” toolScripts.beforePowerOff = “TRUE”
toolScripts.beforeSuspend = “TRUE” toolScripts.beforeSuspend = “TRUE”
tools.guest.desktop.autolock = “FALSE” tools.guest.desktop.autolock = “FALSE”
tools.remindInstall = “FALSE” tools.remindInstall = “FALSE”
tools.syncTime = “FALSE” tools.syncTime = “FALSE”
tools.upgrade.policy = “manual” tools.upgrade.policy = “manual”
uuid.bios = “42 00 14 ae 39 94 f9 24-22 e8 df 59 4a 0c d2 bb” uuid.bios = “42 00 79 b8 13 f3 c0 b4-c8 74 6c 33 0f 4b 23 38”
uuid.location = “56 4d 4b 9f 2e e5 10 5f-77 fa 15 93 de fc b5 bc” uuid.location = “56 4d c0 a7 0e 92 cf 4d-c9 af 92 3a 19 78 8b e1”
vc.uuid = “50 00 39 9f 41 ab 63 e7-1e 2f 26 8b 46 f2 17 4e” vc.uuid = “50 00 a7 ad 9e 30 64 32-93 c5 1a fd 04 fb 13 e2”
virtualHW.version = “17” virtualHW.version = “17”
viv.moid = “6b27cda7-dc12-414e-96e4-6d8a86496692:vm-2027:9yq8bZMIOujGhSoyLC+lwn21cqLBwe/CNka2Gxm30k4=” viv.moid = “6b27cda7-dc12-414e-96e4-6d8a86496692:vm-2028:Rags5W2pXWRoEws6UeEGNDID4iPi75+DNbLSfll338Y=”
vm.createDate = “1616092092651652” vm.createDate = “1616091474143247”
vmci0.id = “1242354363” vmci0.id = “256582456”
vmci0.pciSlotNumber = “32” vmci0.pciSlotNumber = “32”
vmci0.present = “TRUE” vmci0.present = “TRUE”
vmotion.checkpointFBSize = “4194304” vmotion.checkpointFBSize = “4194304”
vmotion.checkpointSVGAPrimarySize = “8388608” vmotion.checkpointSVGAPrimarySize = “8388608”
vmotion.svga.graphicsMemoryKB = “8192” vmotion.svga.graphicsMemoryKB = “8192”
vmotion.svga.mobMaxSize = “8388608” vmotion.svga.mobMaxSize = “8388608”
[root@sc2esx31:/vmfs/volumes/5faa0685-b4cf32fa-c4e4-e4434b2d2ca8/prac19c1] [root@sc2esx32:/vmfs/volumes/5faa0685-b4cf32fa-c4e4-e4434b2d2ca8/prac19c2]

Summary

  • VMware recommends using shared VMDK(s) with the multi-writer setting for provisioning shared storage for ALL Oracle RAC environments (KB 1034165)
  • With vSphere 6.0 and onwards, we can add shared VMDKs to an Oracle RAC cluster online using the Web Client. Prior to vSphere 6.0, we had to rely on PowerCLI scripting to add shared disks to an Oracle RAC cluster online
  • Best practices need to be followed when configuring an Oracle RAC environment; these can be found in the “Oracle Databases on VMware – Best Practices Guide”
  • All Oracle on vSphere white papers, including Oracle licensing on vSphere/vSAN, Oracle best practices, RAC deployment guides, and the workload characterization guide, can be found at the “Oracle on VMware Collateral – One Stop Shop”