Josh Simons

2013 Predictions: When Worlds Collide

December 18, 2012

It’s time again to dust off the crystal ball, cast the yarrow sticks, read the tea leaves, and share some thoughts about where the IT world is heading over the next twelve months and beyond. Because my focus at VMware is on High Performance Computing (HPC), I’ll confine my prognostications primarily to the growing area of overlap between mainstream Enterprise Computing and HPC — places where each world can benefit from the other in important ways.

Before I start, let’s get one no-brainer prediction out of the way: 2012 will be remembered for Nicira, and not Nibiru. Yes, software-defined networking (SDN) will trump planetary collisions and the entire Mayan Apocalypse. Allwyn has a great blog entry that explains how SDN is just the beginning of a much bigger trend around network virtualization and the software-defined datacenter — it’s well worth reading.

And, now, on to the main predictions, all of which are my opinions based on public information with no wink-wink-nod-nods based on any inside information whatsoever.

Prediction: HPC Public Cloud

This coming year will herald the beginning of much broader use of public clouds for running HPC workloads. This is in part because big players like Amazon (Cluster Compute), Google (Compute Engine), and Microsoft (Big Compute) are all now catering to HPC workloads with their cloud offerings. But a few large players do not an ecosystem make. It is much more interesting — and revealing — to see small companies like Bitbrains in the Netherlands and EngineRoom.io in Australia using virtualized cloud infrastructure to offer not just compute cycles, but a full range of tuning and other services to enable effective and successful use of cloud for HPC workloads.

Bitbrains IT Services designs, builds, and supports Cloud Computing solutions based on VMware products for companies that require high levels of continuity, reliability, and scalability for their complex and business-critical applications. With respect to HPC specifically, Bitbrains engineers and integrates complex risk-calculation clusters for large financial institutions using algorithmic solutions from a variety of partners and offers a risk-management-as-a-service solution that scales to over 1000 cores.

EngineRoom.io focuses on enabling companies to derive insights and revenue from their data by giving them the capacity to aggregate structured and unstructured data, to handle high-velocity streaming data, and to process these large datasets at scale. They are applying their expertise to solve customer problems in a number of HPC verticals, including Media and Entertainment, Life Sciences (bioinformatics & medical image analysis), Financial Services, and Oil & Gas.

I’ve talked recently with the folks at both Bitbrains and EngineRoom.io and was impressed by their energy and their expertise. These companies have a deep understanding of both our platform and the technical requirements of HPC workloads, allowing them to bridge the gap between the two and offer real value to their customers. While the completely self-service, zero-touch offerings from Amazon and others have their place, it’s the full-service offerings like those offered by Bitbrains, EngineRoom.io, and similar companies that will enable cloud computing to address the problems of the so-called missing middle [PDF] and open HPC techniques to a much broader, unserved market.

Prediction: HPC Private Cloud

Much of the discussion to date about uses of cloud computing for HPC has focused on public cloud infrastructure. I suspect this is mostly because people are thinking primarily about CAPEX/OPEX shifts (i.e., use, don’t buy) rather than other benefits of cloud, many of which derive from the use of virtualization infrastructure as the basis of those clouds. The US Department of Energy’s Magellan Project, for example, looked primarily at whether public clouds could replace or augment dedicated DOE HPC resources, but it didn’t address whether DOE resources should become private cloud resources to better serve their customer base. The project’s final report [PDF] does recommend that DOE explore how to better offer some capabilities available in clouds: for example, allowing users to easily deploy custom software stacks rather than forcing them to accommodate a facility-wide choice of a standard software distribution. But it stops short of actually considering whether cloud computing capabilities might be used to advantage by DOE to deliver self-provisioned, elastic, and customized HPC infrastructure to its users.
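To make the “custom software stacks” recommendation concrete, here is a minimal, hypothetical sketch of the kind of self-service provisioning a private HPC cloud could offer: a short Python script that emits a cloud-init user-data document installing a per-project toolchain at boot time. The package names and commands below are illustrative only, not a DOE or VMware recipe.

```python
# Hypothetical sketch: generate a cloud-init user-data document that
# installs a per-project HPC software stack when the instance boots,
# instead of relying on a single facility-wide software distribution.
# Package names below are illustrative only.

import yaml  # requires PyYAML


def make_user_data(packages, extra_commands=None):
    """Return a #cloud-config document as a string."""
    config = {
        "package_update": True,
        "packages": list(packages),
        "runcmd": list(extra_commands or []),
    }
    return "#cloud-config\n" + yaml.safe_dump(config, default_flow_style=False)


if __name__ == "__main__":
    user_data = make_user_data(
        packages=["openmpi-bin", "libopenmpi-dev", "gfortran", "python-numpy"],
        extra_commands=["echo 'custom HPC stack installed' > /etc/motd"],
    )
    print(user_data)  # pass this as user-data when provisioning the guest
```

The same user-data document could be handed to any cloud-init-aware guest image, whether it runs on a departmental private cloud or at a public provider.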

I recently created an audio-annotated slide presentation that explains in detail some of the benefits that can accrue by transforming one’s bare-metal HPC facility into a virtualized private cloud infrastructure. I predict we will see further adoption of private cloud techniques over the coming year as HPC organizations begin to understand that the private cloud approach can add real value and should not be viewed as a marketing attempt to expand the idea of “cloud” to cover all IT deployments, as was the case with grid computing — remember “cluster grids”?

Prediction: RDMA in the Cloud

I sensed a fundamental attitude shift towards cloud computing this year at SC12 in Salt Lake City. Rather than questioning the use of clouds for HPC workloads, many presenters seemed to make the implicit assumption that cloud computing would now be one tool of several in their toolkits, to be incorporated, as appropriate, into their overall workflow. This is due in part (finally!) to a broader understanding within the community that with current hardware virtualization support and with state-of-the-art virtualization software, many single-process (single- or multi-threaded) HPC applications achieve virtualized performance that is within about 5% of that of bare-metal, non-virtualized systems. The virtualization community has been beating this drum for a while (yours truly included), and the publication of the DOE’s Magellan final report [PDF] has further demonstrated the point for this kind of workload.
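For readers who want to run that comparison themselves, the arithmetic is straightforward. The sketch below, using made-up timings rather than measured data, simply shows how the relative overhead figure is computed from paired bare-metal and virtualized wall-clock runs of the same benchmark.

```python
# Minimal sketch: compute relative virtualization overhead from paired
# wall-clock timings of the same benchmark. The numbers below are
# placeholders, not measured results.

def overhead_percent(bare_metal_seconds, virtualized_seconds):
    """Relative slowdown of the virtualized run versus bare metal."""
    return 100.0 * (virtualized_seconds - bare_metal_seconds) / bare_metal_seconds

runs = {
    "benchmark_a": (1200.0, 1238.0),   # (bare metal, virtualized) seconds
    "benchmark_b": (845.0, 880.0),
}

for name, (bare, virt) in sorted(runs.items()):
    print("%s: %.1f%% overhead" % (name, overhead_percent(bare, virt)))
```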

But big challenges remain, not the least of which is the performance of MPI applications in cloud environments. Here the Magellan study did something of a disservice to all concerned by comparing MPI performance on bare-metal systems using QDR InfiniBand to that of EC2 systems using either 1 GbE or 10 GbE with TCP and reporting slowdowns of up to 50X. As an assessment of current cloud computing capabilities, I suppose this is fair. However, it would be incorrect to assume that tomorrow’s cloud will be the same as today’s cloud.

Which brings me to perhaps the most important announcement in 2012 related to HPC in the Cloud: Microsoft’s statement that they will be supporting InfiniBand with Azure. And to drive home the point, they put a machine on the TOP500 list this year. The larger value to the HPC community is obvious: It demonstrates that cloud computing can feasibly be used for more than just throughput/capacity applications. As we’ve shown with our own work at VMware, latencies under 2 µs are achievable today with QDR InfiniBand using vSphere and DirectPath I/O; we are also exploring how to create a paravirtualized RDMA device that will support low latency while maintaining the ability to perform vMotion and other operations. Regardless of the details, enabling RDMA within cloud environments will move the discussion beyond throughput applications into the realm of many HPC applications that are not currently addressable in virtualized environments.
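To put numbers like that in context, the canonical way to measure point-to-point latency is a ping-pong microbenchmark. The sketch below, written against the mpi4py bindings, reports the half-round-trip time for a small message between two ranks; the absolute value you see depends entirely on the fabric, the transport (TCP versus RDMA), and the hypervisor configuration, which is precisely the gap that RDMA support in the cloud is meant to close.

```python
# Minimal MPI ping-pong latency sketch using mpi4py.
# Run with exactly two ranks, e.g.: mpirun -np 2 python pingpong.py
# Reports half the average round-trip time for a small message,
# a rough proxy for point-to-point latency.

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
assert comm.Get_size() == 2, "run with exactly two ranks"

iterations = 10000
buf = np.zeros(8, dtype=np.uint8)   # small 8-byte message

comm.Barrier()
start = MPI.Wtime()
for _ in range(iterations):
    if rank == 0:
        comm.Send(buf, dest=1, tag=0)
        comm.Recv(buf, source=1, tag=0)
    else:
        comm.Recv(buf, source=0, tag=0)
        comm.Send(buf, dest=0, tag=0)
elapsed = MPI.Wtime() - start

if rank == 0:
    latency_us = (elapsed / iterations / 2.0) * 1e6
    print("half round-trip latency: %.2f microseconds" % latency_us)
```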

Amazon’s 10 GbE Cluster Compute instances were a good first step to broadening EC2 to address HPC needs, but the latency and bandwidth aren’t good enough for many distributed applications. I predict Amazon will announce RDMA support (via either InfiniBand or RoCE) in 2013 to further broaden their offering.

Prediction: Virtual Machine Evolution

The virtual machine abstraction serves as a virtualized container in which an operating system, middleware stack, and applications run. The attributes of that container have evolved over time to reflect the changing capabilities of underlying real hardware. For example, the latest VMware virtual hardware version (v9) further increases scalability from earlier versions by supporting virtual machines with up to 64 virtual CPUs and 1 TB of RAM. But it isn’t just scale that needs to evolve; we must also be vigilant about new technologies that may sediment into the industry’s base computing infrastructure and evaluate when or if those technologies should be exposed as first-class objects in future virtual machine versions. Take Virtual Flash, for example, which we showed as a technology preview at VMworld this year. Because it is a tech preview, we aren’t yet committing to whether such a capability will appear in a future product, but it’s a good example of the kind of work we do to anticipate and evaluate future hardware directions.
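For the curious, those scalability limits boil down to a handful of virtual hardware settings. The fragment below is an illustrative sketch, not a complete or supported configuration: it just emits (from Python) the .vmx-style keys that correspond to a hardware version 9 virtual machine configured at its current maximums.

```python
# Illustrative sketch: the .vmx settings that describe a virtual
# hardware version 9 VM at its current scalability maximums.
# This is not a complete .vmx file.

vmx_settings = {
    "virtualHW.version": "9",       # latest virtual hardware version
    "numvcpus": "64",               # 64 virtual CPUs
    "memsize": str(1024 * 1024),    # 1 TB of RAM, expressed in MB
}

for key, value in vmx_settings.items():
    print('%s = "%s"' % (key, value))
```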

In the same vein, we need to keep a close watch on two HPC hardware trends and track their emergence as capabilities to be exploited by mainstream, enterprise IT workloads. The first is the use of accelerators (e.g., GPGPUs from NVIDIA and AMD, and Xeon Phi from Intel) as massively-parallel compute engines, and the second is the use of high-bandwidth, low-latency, CPU-offloaded interconnects to improve the performance of scale-out applications.

Accelerators

Use of accelerators within HPC, primarily with GPGPU, is now a well-established trend that continues to expand as more algorithms are reworked to take advantage of these SIMD engines. There are two developments to watch for when predicting whether these accelerators will move beyond the relatively small HPC market to achieve broader adoption in the Enterprise.

The first such development would be the application of accelerators to workloads that Enterprise customers care about. Well, that’s a funny thing because, as it turns out, there are plenty of “Enterprise customers” who care a lot about HPC workloads in Life Sciences, Financial Services, Digital Content Creation (DCC), etc., where GPGPU techniques are already being used. And then there is Big Data, the poster child for crossover workloads that are important in both HPC and Enterprise. It’s very significant that we are starting to see use of accelerators to boost the performance of fundamental data-mining techniques like K-Means Clustering.
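To see why K-Means lends itself so well to these SIMD engines, look at its inner loop: every point-to-centroid distance can be computed independently. The sketch below expresses one iteration as data-parallel NumPy array operations; GPGPU implementations exploit exactly the same structure, just across thousands of hardware threads. This is an illustration of the algorithm itself, not any particular vendor’s accelerated library.

```python
# Illustrative sketch of one K-Means iteration, written as data-parallel
# array operations. Every point/centroid distance is independent, which
# is exactly the structure SIMD accelerators exploit.

import numpy as np

def assign_and_update(points, centroids):
    """Assign points to their nearest centroid, then recompute centroids."""
    # Pairwise squared distances: shape (n_points, n_centroids).
    diffs = points[:, np.newaxis, :] - centroids[np.newaxis, :, :]
    sq_dist = np.sum(diffs * diffs, axis=2)

    # Each point's nearest centroid -- an embarrassingly parallel reduction.
    labels = np.argmin(sq_dist, axis=1)

    # Recompute each centroid as the mean of its assigned points.
    new_centroids = np.vstack([
        points[labels == k].mean(axis=0) if np.any(labels == k) else centroids[k]
        for k in range(centroids.shape[0])
    ])
    return labels, new_centroids

# Toy usage with random data.
rng = np.random.RandomState(0)
pts = rng.rand(10000, 3)
cents = pts[rng.choice(len(pts), 4, replace=False)]
for _ in range(10):
    labels, cents = assign_and_update(pts, cents)
print("cluster sizes:", np.bincount(labels, minlength=4))
```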

The second development to watch for is whether these accelerators move from being optional, PCI-based devices to integrated, on-die capabilities available on all systems. Should that happen, it becomes much more likely that these compute engines will be harnessed for a wider variety of tasks relevant to the Enterprise, e.g. software RAID calculations or possibly even basic services within the virtualization platform itself, like memory compression. Will this tighter hardware integration happen? AMD’s Heterogeneous System Architecture (HSA) approach suggests that it might.

Interconnects

The case for high-speed, low-latency, CPU-offloaded interconnects in the Enterprise is similar to that of accelerators, though perhaps more developed. Important Enterprise use-cases already exist: InfiniBand is used as the backbone of Oracle’s Exa family of scale-out appliances, and IBM supports the use of RoCE with its line of PureScale database products. In addition, Mellanox has demonstrated how these interconnects can improve MapReduce performance with their Unstructured Data Accelerator [PDF], and research at Ohio State and elsewhere has demonstrated the utility of these interconnects for a variety of scale-out Enterprise components. At VMware, my colleague in the Office of the CTO, Bhavesh Davda, has also shown the potential value of such technologies for accelerating distributed services within our own virtualization platform, most notably vMotion. More generally, as application services and middleware components are re-architected to handle increasing scalability requirements, it will become apparent (as was found in HPC) that interconnect performance will often be in the critical performance path for these scale-out architectures.

As with accelerators, it will be important for us to track the trajectory of the low-latency, high-bandwidth interconnects, which currently exist primarily as PCI devices. There is much speculation in the press, for example, about Intel’s intentions with respect to their recent interconnect-related acquisitions (listed in the next section). If low-latency interconnects move on-chip to become ubiquitously available, then the case for expanding support for such interconnects as a fundamental component of a virtual environment would be greatly strengthened.

Will these changes happen in 2013? My guess is probably not, but these are important issues and they merit close attention over the coming year and beyond.

Prediction: HPC Acquisitions

This year, IBM acquired Platform Computing, provider of the popular LSF distributed job scheduler. In addition, Intel acquired both QLogic’s InfiniBand assets and Cray’s HPC interconnect technology. And two years ago, Oracle made a strategic investment in Mellanox, the primary provider of InfiniBand and RDMA technologies for the HPC market. While it is true that both IBM and Intel are big HPC vendors, it would be a mistake to view these actions as simply consolidation within the HPC ecosystem. Instead, view this as tangible evidence that the convergence of requirements between Enterprise and HPC is accelerating and that savvy Enterprise vendors are positioning themselves to address new Enterprise requirements by drawing on technologies forged within the performance-critical crucible of High Performance Computing.

I predict this trend will continue, with additional acquisitions of traditional HPC assets over the coming year and beyond. These assets will be repurposed and expanded to more quickly solve a variety of mainstream cloud computing challenges (e.g., provisioning, monitoring, management, and scheduling at massive horizontal scale) because these HPC components have been architected to address the difficult challenges of scalability and performance — two critical mainstream cloud computing issues.

Have a different opinion about any of these predictions? Post a comment!

 


Josh Simons

High Performance Computing

With over 20 years of experience in High Performance Computing, Josh currently leads an effort within VMware's Office of the CTO to bring the full value of virtualization to HPC. Previously, he was a Distinguished Engineer at Sun Microsystems with broad responsibilities for HPC direction and strategy.
