
Using Jumbo Frames on VMware NSX for Oracle Workloads

 

 

This blog is not a deep dive into VMware NSX or VXLAN concepts; it focuses on Oracle Real Application Cluster (RAC) interconnect MTU sizing using Jumbo Frames on VMware NSX.

 

VMware NSX

 

VMware NSX Data Center is the network virtualization and security platform that enables the virtual cloud network, a software-defined approach to networking that extends across data centers, clouds, and application frameworks.

With NSX Data Center, networking and security are brought closer to the application wherever it’s running, from virtual machines (VMs) to containers to bare metal. Like the operational model of VMs, networks can be provisioned and managed independent of underlying hardware.

More details on VMware NSX can be found here.

 

 

Virtual eXtensible Local Area Network (VXLAN)

 

A blog by Vyenkatesh Deshpande describes the different components of VMware's VXLAN implementation.

Important concepts about Unicast, Broadcast and Multicast can be found here.

VXLAN is an overlay network technology. Overlay network can be defined as any logical network that is created on top of the existing physical networks. VXLAN creates Layer 2 logical networks on top of the IP network.

Using VXLAN adds 50 bytes of additional overhead for the protocol encapsulation.

On top of that, the ICMP/ping implementation does not count the 28-byte header (8-byte ICMP header + 20-byte IP header) as part of the payload size, so we must account for those 28 bytes as well.
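
To see this accounting in practice from inside a guest, one can ping across the interconnect with the don't-fragment bit set, sizing the payload to the adapter's MTU minus the 28-byte ICMP/IP header. The sketch below assumes a Linux guest, a hypothetical private interconnect interface named ens192, and a hypothetical peer address of 192.168.10.2:

    # Read the interconnect adapter's current MTU (hypothetical interface name)
    MTU=$(cat /sys/class/net/ens192/mtu)
    # Ping with the don't-fragment bit set (-M do); the payload is the MTU minus
    # the 28-byte ICMP/IP header that ping does not count for us
    ping -c 3 -M do -s $((MTU - 28)) 192.168.10.2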

 

 

 

VMware Cloud on AWS

 

VMware Cloud on AWS is an on-demand service that enables customers to run applications across vSphere-based cloud environments with access to a broad range of AWS services.

Powered by VMware Cloud Foundation, this service integrates vSphere, vSAN and NSX along with VMware vCenter management, and is optimized to run on dedicated, elastic, bare-metal AWS infrastructure. ESXi hosts in VMware Cloud on AWS reside in an AWS Availability Zone (AZ) and are protected by vSphere HA.

The use cases for deploying VMware Cloud on AWS are multi-fold, namely:

  • Data Center Extension & DR
  • Cloud Migration
  • Application modernization & Next-Generation Apps build out

More detail on VMware Cloud on AWS can be found here.

 

 

 

Oracle Net Services

 

Oracle Net, a component of Oracle Net Services, enables a network session from a client application to an Oracle Database server. When a network session is established, Oracle Net acts as the data courier for both the client application and the database.

Oracle Net communicates with TCP/IP to enable computer-level connectivity and data transfer between the client and the database.
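
As a small illustration of that client-to-database connectivity over TCP/IP, below is a minimal tnsnames.ora entry; the host name, port, and service name are hypothetical placeholders rather than values from any particular environment:

    # Minimal tnsnames.ora entry (hypothetical host and service name)
    ORCL =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = orcl.example.com))
      )

A client could then reach the database with a connect string such as sqlplus user@ORCL.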

More details on Oracle Net Services can be found here.

  

 

 

Oracle Real Application Cluster (RAC)

 

Non-cluster Oracle databases have a one-to-one relationship between the Oracle database and the instance. Oracle RAC environments, however, have a one-to-many relationship between the database and instances. An Oracle RAC database can have several instances, all of which access one database. All database instances must use the same interconnect, which can also be used by Oracle Clusterware.

Oracle Clusterware is a portable cluster management solution that is integrated with Oracle Database. Oracle Clusterware is a required component for using Oracle RAC that provides the infrastructure necessary to run Oracle RAC.

More details on Oracle RAC can be found here.

 

 

Oracle Real Application Cluster (RAC) Interconnect

 

All nodes in an Oracle RAC environment must connect to at least one Local Area Network (LAN) (commonly referred to as the public network) to enable users and applications to access the database.

In addition to the public network, Oracle RAC requires private network connectivity used exclusively for communication between the nodes and database instances running on those nodes. This network is commonly referred to as the interconnect. The interconnect network is a private network that connects all the servers in the cluster.
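
On a running cluster, the interfaces that Grid Infrastructure treats as public and interconnect networks can be listed with the oifcfg utility. The sketch below assumes $GRID_HOME points to the Grid Infrastructure home; the interface names and subnets in the sample output are hypothetical:

    # List the network interfaces registered with Oracle Clusterware
    $GRID_HOME/bin/oifcfg getif
    # Example output (hypothetical interfaces and subnets):
    #   ens160  10.0.0.0      global  public
    #   ens192  192.168.10.0  global  cluster_interconnect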

More details on Oracle Real Application Cluster (RAC) Interconnect can be found here.

 

 

Oracle Real Application Cluster (RAC) Networking requirements

 

Per the Oracle documentation for RAC, there are two main requirements, among others, with respect to broadcast and multicast traffic:

  • Broadcast Requirements
    • Broadcast communications (ARP and UDP) must work properly across all the public and private interfaces configured for use by Oracle Grid Infrastructure.
    • The broadcast must work across any configured VLANs as used by the public or private interfaces.
    • When configuring public and private network interfaces for Oracle RAC, you must enable Address Resolution Protocol (ARP). Highly Available IP (HAIP) addresses do not require ARP on the public network, but for VIP failover, you need to enable ARP. Do not configure NOARP.

 

  • Multicast Requirements
    • For each cluster member node, the Oracle mDNS daemon uses multicasting on all interfaces to communicate with other nodes in the cluster.
    • Multicasting is required on the private interconnect. For this reason, at a minimum, you must enable multicasting for the cluster:
      • Across the broadcast domain as defined for the private interconnect
      • On the IP address subnet ranges 224.0.0.0/24 and optionally 230.0.1.0/24
    • You do not need to enable multicast communications across routers.
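
A lightweight way to sanity-check the broadcast and multicast requirements from a cluster node is sketched below, assuming a Linux guest, a hypothetical private interconnect interface ens192, and a hypothetical peer node at 192.168.10.2:

    # Broadcast/ARP check: confirm ARP resolution works on the private interface
    arping -I ens192 -c 3 192.168.10.2
    # Multicast check: list the multicast groups joined on the private interface;
    # groups in the 224.0.0.0/24 (and optionally 230.0.1.0/24) ranges referenced
    # above should appear once Grid Infrastructure is running
    ip maddr show dev ens192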

More information on Oracle RAC networking requirements can be found here.

 

 

 

Oracle Real Application Cluster using VMware NSX

 

Oracle workloads, both Single Instance and RAC, can run seamlessly and transparently on top of VMware NSX without any issues.

With Extended Oracle RAC, both storage and network virtualization need to be deployed to provide high availability, workload mobility, workload balancing and effective site maintenance between sites.

NSX supports multi-datacenter deployments to allow L2 adjacency in software; in simple words, stretching the network allows VMs to utilize the same subnets in multiple sites.

The blog post here showcases the ability to stretch an Oracle RAC solution in an Extended Oracle RAC deployment between multi-datacenter and using VMware NSX for L2 Adjacency.

This topic and the related demo were also featured at VMworld 2016.

VIRT7575 – Architecting NSX with Business Critical Applications for Security, Automation and Business Continuity

 

 

 

Oracle Real Application Cluster using Jumbo Frames on VMware NSX

 

The standard Maximum Transmission Unit (MTU) for IP frames is 1500 bytes. Jumbo Frames are frames with an MTU larger than 1500 bytes; the term usually refers to a frame with an MTU of 9000 bytes.

Jumbo Frames can be implemented for private Cluster Interconnects but require very careful configuration and testing to realize their benefits.

In many cases, failures or inconsistencies can occur due to incorrect setup or bugs in the driver or switch software, which can result in sub-optimal performance and network errors.

In order to make Jumbo Frames work properly for a Cluster Interconnect network, the host's private network adapter must be configured with a persistent MTU size of 9000 bytes – 50 bytes of VXLAN overhead – 28 bytes of ICMP/ping = 8922 bytes.
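
A minimal sketch of applying that MTU on a Linux guest's private interconnect adapter follows; the interface name ens192 and the RHEL/Oracle Linux style ifcfg file are assumptions for illustration:

    # Apply the MTU immediately (not persistent across reboots)
    ip link set dev ens192 mtu 8922
    # Make the change persistent by recording it in the interface's ifcfg file
    # (RHEL / Oracle Linux network-scripts style)
    echo "MTU=8922" >> /etc/sysconfig/network-scripts/ifcfg-ens192
    # Confirm the interface now reports the new MTU
    ip link show dev ens192 | grep mtu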

 

 

Setting MTU to 9000 Bytes to enable Jumbo Frames on VMware SDDC

 

Jumbo frames let ESXi hosts send larger frames out onto the physical network. The network must support jumbo frames end to end, including physical network adapters, physical switches, and storage devices. Before enabling jumbo frames, check with your hardware vendor to ensure that your physical network adapter supports jumbo frames.

You can enable jumbo frames on a vSphere distributed switch or vSphere standard switch by changing the maximum transmission unit (MTU) to a value greater than 1500 bytes. 9000 bytes is the maximum frame size that you can configure.
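
For a vSphere standard switch, the MTU can also be raised from the ESXi command line; the sketch below assumes a hypothetical switch named vSwitch0 (the distributed switch MTU is typically changed through the vSphere Client, as shown further below):

    # Set a 9000-byte MTU on a standard switch (hypothetical name vSwitch0)
    esxcli network vswitch standard set --vswitch-name=vSwitch0 --mtu=9000
    # Confirm the new MTU
    esxcli network vswitch standard list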

More details on Jumbo Frames on VMware vSphere can be found here.

Refer to the blog ‘What’s the Big Deal with Jumbo Frames?’ about Jumbo Frames and VMware SDDC.

For an on-premises setup, use the vSphere Web Client to Edit Settings on the distributed switch and set the MTU size.

 

 

Distributed switch with MTU set to 9000 bytes
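
Once the switch MTU has been raised, jumbo frame connectivity between ESXi hosts can be verified with vmkping from the ESXi shell. The sketch below assumes a hypothetical VMkernel interface vmk1 and a hypothetical peer VMkernel address of 10.10.10.12; at this layer there is no VXLAN overhead, so only the 28-byte ICMP/IP header is subtracted from the 9000-byte MTU:

    # Ping the remote VMkernel port with fragmentation disabled (-d) and an
    # 8972-byte payload (9000 bytes minus the 28-byte ICMP/IP header)
    vmkping -I vmk1 -d -s 8972 10.10.10.12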

 

 

 

On VMware Cloud on AWS, since it is a managed service, customers do not have to set the MTU to 9000 bytes; it is set when the SDDC cluster is provisioned in the first place.

 

 

 

In our lab setup, the standard i3 ESXi servers had a 25 Gb Elastic Network Adapter (PF) attached.

 

 

 

  

 

Oracle Metalink on changing RAC private network MTU size

 

Refer to Oracle Metalink document ‘Recommendation for the Real Application Cluster Interconnect and Jumbo Frames (Doc ID 341788.1)’ for more information on Jumbo frames for Oracle RAC interconnect.

Refer to Metalink document ‘How to Modify Private Network Information in Oracle Clusterware (Doc ID 283684.1)’, which describes how to change the private network MTU only.

For example, the private network MTU is changed from 1500 to 8922 bytes [9000 bytes – 50 bytes of VXLAN overhead – 28 bytes of ICMP/ping = 8922 bytes], while the network interface name and subnet remain the same.

  1. Shut down the Oracle Clusterware stack on all nodes.
  2. Make the required MTU change at the OS network layer and ensure the private network is available with the desired MTU size (in this case 8922 bytes); a ping at that MTU size should work on all cluster nodes (see the command sketch after this list).
  3. Restart the Oracle Clusterware stack on all nodes.
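
A command-level sketch of those steps on one node follows; it assumes a Linux guest, root access, a Grid Infrastructure home referenced by $GRID_HOME, a hypothetical private interface ens192, and a hypothetical peer node address of 192.168.10.2:

    # 1. Stop the Clusterware stack on this node (repeat on every node, as root)
    $GRID_HOME/bin/crsctl stop crs
    # 2. Change the MTU at the OS network layer, then verify it with a
    #    don't-fragment ping (payload = 8922-byte MTU - 28-byte ICMP/IP header)
    ip link set dev ens192 mtu 8922
    ping -c 3 -M do -s 8894 192.168.10.2
    # 3. Restart the Clusterware stack on this node (repeat on every node)
    $GRID_HOME/bin/crsctl start crs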

 

 

 

Conclusion

 

VMware NSX Data Center is the network virtualization and security platform that enables the virtual cloud network, a software-defined approach to networking that extends across data centers, clouds, and application frameworks. With NSX Data Center, networking and security are brought closer to the application wherever it’s running, from virtual machines (VMs) to containers to bare metal.

In addition to the public network, Oracle RAC requires private network connectivity used exclusively for communication between the nodes and database instances running on those nodes. This network is commonly referred to as the interconnect.

In the case of Oracle Real Application Cluster using VMware NSX and Jumbo Frames, Jumbo Frames can be implemented for private Cluster Interconnects but require very careful configuration and testing to realize their benefits.

In order to make Jumbo Frames work properly for a Cluster Interconnect network, the host's private network adapter must be configured with a persistent MTU size of 9000 bytes – 50 bytes of VXLAN overhead – 28 bytes of ICMP/ping = 8922 bytes.

All Oracle on vSphere white papers, including Oracle on VMware vSphere / VMware vSAN / VMware Cloud on AWS best practices, deployment guides, and workload characterization guides, can be found at Oracle on VMware Collateral – One Stop Shop.