
In part 1 of this series we looked at traditional HPC and introduced virtualized HPC components. Part 2 will look at the design and show sample virtual reference architectures for HPC workloads.

vSphere Clusters

High Performance Computing workloads have management and compute components. Most VMware environments already have a separate management cluster, which should be leveraged. HPC compute workloads should run on a cluster dedicated to them. In some situations the number of compute nodes in an HPC cluster exceeds the maximum number of hosts supported in a single vSphere cluster; in these cases, multiple vSphere clusters should be created to accommodate all the HPC nodes.
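As a back-of-the-envelope sketch, the required cluster count follows directly from the host-per-cluster maximum. The sketch below assumes the 64-hosts-per-cluster limit of recent vSphere releases (check the configuration maximums for your version); the node count is a made-up example:

```python
import math

# 64 hosts per cluster is the limit in recent vSphere releases;
# verify against the configuration maximums for your version.
MAX_HOSTS_PER_VSPHERE_CLUSTER = 64

def vsphere_clusters_needed(hpc_node_count: int,
                            max_hosts: int = MAX_HOSTS_PER_VSPHERE_CLUSTER) -> int:
    """Return how many vSphere clusters are needed to hold all HPC compute nodes."""
    return math.ceil(hpc_node_count / max_hosts)

print(vsphere_clusters_needed(200))  # 200 nodes -> 4 clusters of up to 64 hosts
```

For example, an HPC environment of 200 compute nodes would be split across four vSphere clusters.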

Management Cluster

The management cluster runs the virtual machines that manage the virtual High Performance Computing environment. These virtual machines host vCenter Server, vSphere Update Manager, NSX Manager, NSX Controllers, vRealize Operations Manager, and vRealize Automation, as well as administrative virtual HPC services such as the master node and workload schedulers. All management, monitoring, and infrastructure services are provisioned to a vSphere cluster that provides high availability for these critical services. Permissions on the management cluster limit access to administrators only. This limitation protects the virtual machines running the management, monitoring, and infrastructure services from unauthorized access.

Figure 4: HPC Management Cluster

The management components for vSphere and HPC can be combined and deployed in this cluster. If no such cluster already exists, a new management cluster with a minimum of three nodes is recommended, sized for the projected management workload plus headroom for growth. If a management cluster already exists, a capacity analysis should be performed and the cluster adjusted to ensure there is enough capacity to add the HPC management components.
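That capacity analysis can be sketched as a simple headroom check; all figures and the 20% growth headroom below are hypothetical example values:

```python
# Illustrative capacity check for an existing management cluster.
# All inputs are hypothetical example figures, not VMware defaults.
def has_capacity(cluster_capacity_ghz: float,
                 current_usage_ghz: float,
                 hpc_mgmt_demand_ghz: float,
                 headroom: float = 0.20) -> bool:
    """True if the cluster can absorb the HPC management components
    while keeping the requested growth headroom."""
    required = (current_usage_ghz + hpc_mgmt_demand_ghz) * (1 + headroom)
    return required <= cluster_capacity_ghz

# 300 GHz cluster, 180 GHz in use, 40 GHz of new HPC management demand:
print(has_capacity(300, 180, 40))  # (180 + 40) * 1.2 = 264 <= 300 -> True
```

The same check would be repeated for memory and storage before deciding whether to expand the cluster.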

Due to the critical nature of these workloads and their many single points of failure, it is recommended that this cluster be licensed with vSphere Enterprise Plus. vSphere Enterprise Plus provides vSphere HA, DRS, and other advanced capabilities that help reduce downtime for these critical workloads.

Compute Clusters

The compute cluster runs the actual HPC workload components. The vSphere Scale Out licensing can be leveraged for these compute clusters.

Figure 5: HPC compute cluster with vSphere Scale Out

MPI

MPI environments are dedicated because they have unique requirements, chiefly the need for low latency communication between nodes. The nodes are connected via a high speed interconnect and are not very amenable to sharing with other workloads. In virtualized environments, MPI applications can use the full performance of the interconnect by accessing it in pass-through mode. Storage for MPI nodes is usually a parallel file system such as Lustre, also accessed via the high speed interconnect.

Throughput

Throughput workloads are horizontally scalable with little dependency between individual tasks. Job schedulers divvy up the work across nodes and coordinate the activity. NFS is the typical shared storage across the nodes, accessed via TCP/IP networks.
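A minimal sketch of that division of work, assuming fully independent tasks and simple round-robin placement (the node and job names are hypothetical, and real schedulers use far richer policies):

```python
# Round-robin distribution of independent tasks across throughput nodes.
# Node and job names are hypothetical illustration values.
def distribute(tasks: list, nodes: list) -> dict:
    """Assign each task to a node in round-robin order."""
    assignment = {node: [] for node in nodes}
    for i, task in enumerate(tasks):
        assignment[nodes[i % len(nodes)]].append(task)
    return assignment

jobs = [f"job-{n}" for n in range(7)]
plan = distribute(jobs, ["node-a", "node-b", "node-c"])
print(plan)  # node-a: job-0/3/6, node-b: job-1/4, node-c: job-2/5
```

Because the tasks do not depend on one another, any node can pick up any task, which is what makes these workloads scale horizontally.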

Hybrid

Different types of HPC workloads can co-exist and potentially leverage some of each other's capabilities. Both MPI and throughput workloads can require accelerators such as GPUs, which can be shared between them. Another type of hybrid is where the GPUs are used for desktop graphics during the day and for HPC with Deep Learning during nights and weekends, a concept called cycle harvesting.
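A cycle-harvesting policy can be sketched as a simple time-based decision; the weekday 8:00-18:00 desktop window below is an assumed example policy, not a product default:

```python
from datetime import datetime

# Hypothetical cycle-harvesting policy: GPUs serve desktop graphics during
# weekday business hours and deep-learning HPC jobs at all other times.
def gpu_assignment(now: datetime) -> str:
    """Return which workload class the GPUs should serve at this moment."""
    is_weekday = now.weekday() < 5           # Monday..Friday
    business_hours = 8 <= now.hour < 18      # assumed 8:00-18:00 window
    return "desktop-graphics" if (is_weekday and business_hours) else "hpc-deep-learning"

print(gpu_assignment(datetime(2019, 6, 3, 10)))   # Monday 10:00 -> desktop-graphics
print(gpu_assignment(datetime(2019, 6, 8, 23)))   # Saturday 23:00 -> hpc-deep-learning
```

In practice the switch would be driven by the scheduler draining one workload class before admitting the other, but the policy itself reduces to a decision like this.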

Sample Virtualized HPC Architectures

With the HPC components and how they map to virtualization understood, the following sample architectures show how HPC environments can be deployed with virtualization technologies in different scenarios.

Configuration A – Parallel Distributed Applications (MPI)

Parallel distributed application HPC environments are designed for complex research or design simulations, such as processing massive amounts of simulation and machine-generated data, structural analysis, and computational fluid dynamics, all of which require complex algorithms for modeling, rendering, and analysis. To best achieve this, a virtual HPC environment is built on multiple compute virtual machines, offering a denser environment with fast interconnects to keep up with demanding research or manufacturing workloads.

Hardware

  • Computational resource cluster of four or more nodes
  • Management node(s) (node count depends on your management capacity considerations)
  • Existing storage nodes or NFS node(s) for VMDK placement
  • Lustre nodes for application data
  • GPGPU for application acceleration needs (optional)
  • RDMA interconnects for low latency and high bandwidth
  • Ethernet cards with 10/25/50/100 GbE connectivity speeds

Figure 6: Sample Architecture for MPI

Software

  • Existing VMware Infrastructure for Management and Operations resource cluster
  • VMware Management and Operations solutions
  • HPC Management and Operations solutions
  • Parallel file system (Lustre)

VM sizing

  • Single compute virtual machine per host

Configuration B – Throughput Workloads

This type of HPC enables high throughput and fast turnaround of workflows in diverse fields. For a virtual HPC environment for Life Sciences, the following is recommended. Throughput workloads can also potentially be deployed in VMware Cloud on AWS, with vSphere virtual machines providing compute and Amazon EFS providing shared NFS-like storage.

Hardware

  • Computational resource cluster of four or more nodes
  • Management node(s) (node count depends on your management capacity considerations)
  • Existing storage nodes or NFS nodes for VMDK placement
  • GPGPU for application acceleration needs (optional)
  • Ethernet cards with 10/25/50/100 GbE connectivity speeds

Software

  • Existing VMware Infrastructure for Management and Operations resource cluster
  • VMware Management and Operations solutions
  • HPC Management and Operations solutions

VM sizing

  • Multiple compute virtual machines per host

Figure 7: Sample Architecture for Throughput applications

Conclusion

Virtualization offers tremendous benefits for HPC solutions. With a basic understanding of HPC components and technologies, an enterprise organization can follow HPC best practices and guidelines to adopt a VMware virtualized infrastructure. The vSphere Scale Out licensing helps HPC compute nodes leverage virtualization at low cost. Virtualization provides the building blocks for individualized or hybrid clusters for HPC applications.

With the addition of optional VMware management solutions, HPC can benefit from increased security, self-service, multi-workflow environments, and remote console access. With the HPC community eyeing cloud computing, virtualizing HPC is a must to future-proof your infrastructure and tune it for what comes next.