
Tag Archives: vSphere

VMware Integrated OpenStack 3.1 GA. What’s New!

VMware announced general availability (GA) of VMware Integrated OpenStack 3.1 on February 21, 2017. We are truly excited about our latest OpenStack distribution, which gives our customers enhanced stability on top of the Mitaka release and a streamlined user experience through Single Sign-On support with VMware Identity Manager. For OpenStack Cloud Admins, the 3.1 release is also about enhanced integrations that let them take further advantage of battle-tested vSphere infrastructure and operations tooling, providing enhanced security, OpenStack API performance monitoring, brownfield workload migration, and seamless upgrade between centralized (compact) and distributed (HA) OpenStack management control planes.

VIO 3.1 is available for download here.  New features include:

  • Support for the latest versions of VMware products. VMware Integrated OpenStack 3.1 supports and is fully compatible with VMware vSphere 6.5, VMware NSX for vSphere 6.3, and VMware NSX-T 1.1. To learn more about vSphere 6.5, visit here; for NSX for vSphere 6.3 and NSX-T, visit here.
  • NSX Policy Support in Neutron. NSX administrators can define security policies that the OpenStack Cloud Admin shares with cloud users. Depending on the policy set by the Cloud Admin, users can either create their own rules within the bounds of predefined ones that cannot be overridden, or use only the predefined rules. The NSX provider policy feature lets Infrastructure Admins insert enhanced security controls and ensure that all workloads are developed and deployed according to standard IT security policies.
  • New NFV Features. Further expanding on the VIO 3.0 capability to leverage existing workloads in your OpenStack cloud, you can now import vSphere VMs with NSX network backing into VMware Integrated OpenStack. The ability to import vSphere VM workloads into OpenStack and run critical Day 2 operations against them via OpenStack APIs lets you quickly move existing development projects or production workloads onto the OpenStack framework. VM import steps can be found here. In addition, full passthrough using VMware DirectPath I/O is now supported.
  • Seamless update from compact mode to HA mode. If you are updating a VMware Integrated OpenStack 3.0 deployment in compact mode to 3.1, you can seamlessly transition to an HA deployment during the update. Upgrade docs can be found here.
  • Single Sign-On integration with VMware Identity Manager. You can now streamline authentication for your OpenStack deployment by integrating it with VMware Identity Manager.  SSO integration steps can be found here.
  • Profiling enhancements.  Instead of writing data into Ceilometer, OpenStack OSprofiler can now leverage vRealize Log Insight to store profile data. This approach provides enhanced scalability for OpenStack API performance monitoring. Detailed steps on enabling OpenStack Profiling can be found here.

Try VMware Integrated OpenStack Today

 

 

Take Advantage of Nova Flavor Extra-specs and vSphere QoS to Make Delivering SLAs Much Simpler

Resource and over-subscription management are among the most challenging tasks facing a Cloud Admin. To deliver a guaranteed SLA, one method OpenStack Cloud Admins have used is to create separate compute aggregates with different allocation / over-subscription ratios. Production workloads that require guaranteed CPU, memory, or storage are placed into a non-oversubscribed aggregate with a 1:1 ratio, while dev workloads may be placed into a best-effort aggregate with an N:1 ratio. While this simplistic model accomplishes its purpose of an SLA guarantee on paper, it comes with huge CapEx and/or high overhead for capacity management and augmentation. Worse yet, because host-aggregate-level over-subscription in OpenStack is simply static metadata consumed by the Nova scheduler during VM placement, not real-time VM state or consumption, large resource imbalances within a compute aggregate and noisy-neighbor issues within a Nova compute host are common occurrences.

New workloads can be placed on a host running close to capacity (in terms of real-time consumption) while the remaining hosts sit idle, simply because of differences in application characteristics and usage patterns. The lack of automated Day 2 resource rebalancing further exacerbates the issue. To provide white-glove treatment to critical tenants and workloads, Cloud Admins must deploy additional tooling just to discover basic VM-to-hypervisor mapping based on OpenStack project IDs. This is both expensive and ineffective in meeting SLAs.

Over-subscription works only if resource consumption can be tracked and balanced across a compute cluster, and noisy-neighbor issues can be solved only if the underlying infrastructure supports quality of service (QoS). By combining OpenStack Nova flavor extra-spec extensions with vSphere's industry-proven per-VM resource controls (expressed as shares, limits, and reservations), OpenStack Cloud Admins can deliver enhanced QoS while maintaining uniform consumption across a compute cluster. It is also possible to deliver QoS through image metadata, but this blog will focus on Nova flavor extra-specs.

The VMware Nova flavor extension to OpenStack was first introduced upstream in Kilo and is officially supported in VIO release 2.0 and above. Additional requirements are outlined below:

  • Requires VMware Integrated OpenStack version 2.0.x or greater
  • Requires vSphere version 6.0 or greater
  • Network Bandwidth Reservation requires NIOC version 3
  • VMware Integrated OpenStack access as a cloud administrator

Resource reservations can be set for the following resource categories:

  • CPU (MHz)
  • Memory (MB)
  • Disk IO (IOPS)
  • Network Bandwidth (Mbps)

Within each resource category, the Cloud Admin has the option to set:

  • Limit – Upper bound; resource utilization will not exceed this value
  • Reservation – Guaranteed minimum reservation
  • Share Level – The allocation level. This can be ‘custom’, ‘high’, ‘normal’ or ‘low’.
  • Shares Share – If ‘custom’ is used, this is the number of shares.

Complete Nova flavor extra-spec details and deployment options can be found here. The vSphere Resource Management guide, which covers these capabilities and configuration guidelines, is a great reference as well and can be found here.
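
For orientation, the sketch below shows how these settings are typically expressed as flavor extra specs in the quota: namespace used by the Nova vCenter driver. The flavor name example.flavor and the values are purely illustrative, and the exact key set should be verified against your VIO release documentation:

    # Keys follow the pattern quota:<resource>_<setting>, where <resource> is
    # cpu (MHz), memory (MB), disk_io (IOPS) or vif (Mbps), and <setting> is
    # limit, reservation, shares_level or shares_share.
    nova flavor-key example.flavor set quota:cpu_reservation=2600       # guarantee 2600 MHz
    nova flavor-key example.flavor set quota:memory_limit=8192          # cap memory at 8 GB
    nova flavor-key example.flavor set quota:disk_io_shares_level=custom
    nova flavor-key example.flavor set quota:disk_io_shares_share=2000  # shares when custom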

Let’s look at an example using Hadoop to demonstrate VM resource management with flavor extra-specs. Data flows from Kafka into HDFS, and every 30 minutes a batch job consumes the newly ingested data. Exact details of the Hadoop workflow are outside the scope of this blog; if you are not familiar with Hadoop, some details can be found here. Resources required for this small-scale deployment are outlined below:

Node Type            Core (reserved – Max)   Memory (reserved – Max)   Disk   Network Limit
Master / Name Node   4                       16 G                      70 G   500 Mbps
Data Node            4                       16 G                      70 G   1000 Mbps
Kafka                0.4 – 2                 2 – 4 G                   25 G   100 Mbps

Based on the above requirements, the Cloud Admin needs to create Nova flavors matching the maximum CPU / memory / disk requirements for each Hadoop component. Most OpenStack Admins should be very familiar with this process.
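
In place of the original screenshot for this step, here is a minimal sketch of what flavor creation could look like with the nova CLI. The flavor names (hadoop.master, hadoop.data, hadoop.kafka) are hypothetical; RAM is given in MB and disk in GB, followed by the vCPU count, matching the maximums in the table above:

    # nova flavor-create <name> <id> <ram-MB> <disk-GB> <vcpus>
    nova flavor-create hadoop.master auto 16384 70 4
    nova flavor-create hadoop.data   auto 16384 70 4
    nova flavor-create hadoop.kafka  auto  4096 25 2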


Based on the reservation amount, attach corresponding nova extra specs to each flavor:
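
The original screenshot is not reproduced here; the sketch below shows how the reservations and limits from the table could be attached with nova flavor-key, using the hypothetical flavor names created above. CPU reservations are in MHz (assuming roughly 2,600 MHz per core; adjust for your hosts), memory in MB, and network limits in Mbps:

    # Data Node: fully reserve 4 vCPU and 16 GB, cap network at 1000 Mbps
    nova flavor-key hadoop.data set quota:cpu_reservation=10400 \
        quota:memory_reservation=16384 quota:vif_limit=1000

    # Name Node: fully reserve 4 vCPU and 16 GB, cap network at 500 Mbps
    nova flavor-key hadoop.master set quota:cpu_reservation=10400 \
        quota:memory_reservation=16384 quota:vif_limit=500

    # Kafka: reserve ~20% CPU (0.4 core) and 50% memory, cap network at 100 Mbps
    nova flavor-key hadoop.kafka set quota:cpu_reservation=1040 \
        quota:memory_reservation=2048 quota:vif_limit=100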


Once the extra specs are mapped, confirm the settings using the standard nova flavor-show command:
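
In place of the original screenshot, a quick way to confirm the settings landed on a flavor (the hypothetical hadoop.data flavor from above is used here):

    # The extra_specs field of the output should list the quota:* settings applied above
    nova flavor-show hadoop.data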


In just three simple steps, the resource reservation settings are complete. Any new VM deployed using the new flavors from OpenStack (API, command line, or Horizon GUI) will have its resource requirements passed down to vSphere (existing VMs can be migrated to the new flavors using the nova rebuild feature).
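
For example, booting a Data Node from the hypothetical flavor above is all it takes for the reservations to be applied on the vSphere side; the image name and network UUID below are placeholders:

    nova boot --flavor hadoop.data --image ubuntu-16.04-server \
        --nic net-id=<hadoop-network-uuid> hadoop-dn-01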

Instead of best effort, vSphere will guarantee resources based on the Nova flavor extra-spec definitions. In our example, 4 vCPU / 16 G / a maximum of 1 Gbps network throughput is reserved for each Data Node, the Name Node gets 4 vCPU / 16 G with a 500 Mbps network cap, and Kafka nodes have 20% of their vCPU and 50% of their memory reserved. Instances boot into an “Error” state if the requested resources are not available, ensuring that existing workload SLAs are not violated. The resource reservations created by the vSphere Nova driver are reflected in the vCenter interface:

Name Node CPU / Memory:


Name Node Network Bandwidth:


Data Node CPU / Memory:


Data Node Network Bandwidth:


Kafka Node CPU / Memory:


Kafka Network Bandwidth:


vSphere enforces strict admission control based on real-time resource allocation and load: new workloads are admitted only if the SLA can be honored for both new and existing applications. Once a workload is deployed, vSphere DRS can automatically rebalance workloads between hypervisors to ensure optimal host utilization (the topic of a future blog) and avoid noisy-neighbor issues. Both features are available out of the box; no customization is required.

By taking advantage of vSphere's VM resource reservation capabilities, OpenStack Cloud Admins can finally enjoy the superior capacity and over-subscription management a cloud environment offers. Instead of deploying excess hardware, Cloud Admins can decide when and where additional hardware is needed based on real-time application consumption and growth. The ability to consolidate, simplify, and control your infrastructure helps reduce power and space requirements and eliminates the need for custom tooling or operational monitoring. I invite you to test out Nova flavor extra-specs in your VIO environment today, or encourage your IT team to try our VMware Integrated OpenStack Hands-On Lab; no installation is required.

 

Apples To Oranges: Why vSphere & VIO are Best Bets for OpenStack Adoption

OpenStack doesn’t mandate defaults for compute, network and storage, which frees you to select the best technology. For many VMware customers, the best choice will be vSphere to provide OpenStack Nova compute capabilities.

 

It is commonly asserted that KVM is the only hypervisor to use in an OpenStack deployment. Yet every significant commercial OpenStack distro supports vSphere. The reasons for this broad support are clear.

Costs for commercial KVM are comparable to vSphere. In addition, vSphere has tremendous added benefits: widely available and knowledgeable staff, vastly simplified operations, and proven lifecycle management that can keep up with OpenStack’s rapid release cadence.

 

Let’s talk first about cost. Traditional, commercial KVM has a yearly recurring support subscription price. Red Hat OpenStack Platform (Standard, 2 sockets) can be found online at $11,611/year, making the 3-year cost around $34,833[i]. VMware vSphere with Operations Management Enterprise Plus (multiplied by 2 to match Red Hat's socket-pair pricing) for 3 years, plus the $200/CPU/year VMware Integrated OpenStack SnS, comes to $14,863[ii]. Even when a customer uses vCloud Suite Advanced, costs are on par with Red Hat. (Red Hat has often compared prices using VMware's vCloud Suite Enterprise license to exaggerate cost differences.)

 

 

When 451 Research[iii] compared distro costs based on a “basket” of total costs in 2015 they found that commercial distros had a cost that was close to regular virtualization. And if VMware Integrated OpenStack (VIO) is the point of comparison, the costs would likely be even closer. The net-net is that cost turns out not to be a significant differentiator when it comes to commercial KVM compared with vSphere. This brings us to the significant technical and operational benefits vSphere brings to an OpenStack deployment.

 

In the beginning, it was assumed that OpenStack apps would build in the resiliency that used to come from a vSphere environment, thus allowing vSphere to be removed. As the OpenStack project has matured, capabilities such as VMware vMotion and DRS (Distributed Resource Scheduler) have risen in importance to end users. Regardless of the application, the stability and reliability of the underlying infrastructure matter.

 

There are two sets of reasons to adopt OpenStack on vSphere.

 

First, you can use VIO to quickly (minutes or hours instead of days or weeks) build a production-grade, operational OpenStack environment with the IT staff you already have, leveraging the battle-tested infrastructure your staff already knows and relies on. No other distro uses a rigorously tested combination of best-in-class compute (vSphere Ent+ for Nova), network (NSX for Neutron), and storage (VSAN for Cinder).

 

Second, only VMware, a long-time (since 2012) and active (consistently a top-10 code contributor) OpenStack community member, provides BOTH the best underlying infrastructure components AND the ongoing automation and operational tools needed to successfully manage OpenStack in production.

 

In many cases, it all adds up to vSphere being the best choice for production OpenStack.

 


[i] http://www.kernelsoftware.com/products/catalog/red_hat.html
[ii] http://store.vmware.com/store/vmware/en_US/cat/ThemeID.2485600/categoryID.66071400
[iii] https://451research.com/images/Marketing/press_releases/CPI_PR_05.01.15_FINAL.pdf


This article was written by Cameron Sturdevant, Product Line Manager at VMware.

Introducing Senlin – a new tool for speedy, load-balanced OpenStack clustering

Senlin is a new OpenStack project that provides a generic clustering service for OpenStack clouds. It's capable of managing homogeneous objects exposed by other OpenStack components, including Nova, Heat, and Cinder, making it of interest to anyone using, or thinking of using, VMware Integrated OpenStack.

VMware OpenStack architect Mark Voelker, along with VMware colleague Xinhui Li and Qiming Teng of IBM, offer a helpful introduction to Senlin in their 2016 OpenStack Summit session, now viewable here.

 

Voelker opens by reviewing the generic requirements for OpenStack clustering, which include simple manageability, expandability on demand, load-balancing, customizability to real-life use cases, and extensibility.

 

OpenStack already offers limited cluster management capabilities through Heat's orchestration service, he notes. But Heat's mission is to orchestrate composite cloud apps using a declarative template format through an OpenStack-native API. While functions like auto-scaling, high availability, and load balancing are complementary to that mission, having them all in a single service isn't ideal.

“We thought maybe we should think about cluster management as a first class service that everything else could tie into,” Voelker recalls, which is where Senlin comes in.

 

Teng then describes Senlin's origins: it started as an effort to build clustering within Heat, but soon moved to offload Heat's autoscaling capabilities into a separate project that could address autoscaling more comprehensively, becoming OpenStack's first dedicated clustering service.

 

Senlin is designed to be scalable, load-balanced, highly-available, and manageable, Teng explains, before outlining its server architecture and detailing the operations it supports. “Senlin can manage almost any object,” he says. “It can be another server, a Heat stack, a single volume or floating IP protocol, we don’t care. We wanted to just build a foundational service allowing you to manage any type of resource.”
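
As a rough illustration of that idea, here is a hedged sketch (assuming the python-senlinclient plugin for the OpenStack CLI is installed; the spec file contents, flavor, image, and network names are placeholders) that defines a profile for Nova servers and builds a small cluster from it:

    # Describe the kind of object Senlin should manage: a Nova server
    cat > web_server.yaml <<'EOF'
    type: os.nova.server
    version: 1.0
    properties:
      flavor: m1.small
      image: ubuntu-16.04
      networks:
        - network: private
    EOF

    # Create a profile from the spec, then a cluster of two such servers
    openstack cluster profile create --spec-file web_server.yaml web_profile
    openstack cluster create --profile web_profile --desired-capacity 2 web_cluster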

To end the session, Li offers a demo of how Senlin creates a resilient, auto-scaling cluster with both high availability and load balancing in as little as five minutes.

 

If you want to learn more about clustering for OpenStack clouds created with VMware Integrated OpenStack (VIO) you can find expert assistance at our product homepage. Also check out our Hands-on Lab, or try VIO for yourself by downloading and installing VMware Integrated OpenStack direct.

OpenStack 2.5: VMware Integrated OpenStack 2.5 is GA – What’s New?

We are very excited about this newest release, VMware Integrated OpenStack 2.5. This release continues to advance VIO as the easiest and fastest route to building an OpenStack cloud on top of vSphere, NSX and Virtual SAN. So, what's in this release? Continue reading to learn more about the latest features in VMware Integrated OpenStack 2.5, which is available for download now.

  1. Seamlessly Leverage Existing VM Templates
  2. Smaller Management Footprint
  3. Support for vSphere Standard Edition with NSX
  4. Troubleshooting & Monitoring Out of the Box
  5. Neutron Layer 2 Gateway Support
  6. Optimized for NFV


VMware Integrated OpenStack Video Series: Security Groups

OpenStack’s security groups capability is a key feature in its support for multi-tenant workloads. Security groups are sets of rules that users apply to control access to their application infrastructure. Access is specified either via a classless inter-domain routing (CIDR) network range or by referencing the name of another security group.

Let’s take a look at how security groups would be applied in a simple three-tier application infrastructure consisting of web, application, and database layers:

OpenStack Security Groups

The application developer has restricted access to the various tiers of her application as follows (a CLI sketch follows the list):

  • Users can only access the Web tier, and that access is restricted solely to TCP 443 for HTTPS
  • Only instances in the Web security group can access instances in the App security group
  • Only instances in the App security group can access instances in the DB security group
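
A minimal sketch of how these rules might be expressed with the OpenStack CLI, assuming security groups named web, app, and db (the group names are illustrative, and the exact client syntax may vary by release):

    # Web tier: allow HTTPS from anywhere
    openstack security group create web
    openstack security group rule create --protocol tcp --dst-port 443:443 \
        --remote-ip 0.0.0.0/0 web

    # App tier: only reachable from instances in the web security group
    openstack security group create app
    openstack security group rule create --protocol tcp --remote-group web app

    # DB tier: only reachable from instances in the app security group
    openstack security group create db
    openstack security group rule create --protocol tcp --remote-group app db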

VMware Integrated OpenStack leverages VMware NSX's own security group functionality to implement this capability for our users. Application developers don't even need to be aware of this underlying advantage; they simply use industry-standard open source APIs to deploy their infrastructure and get the security benefits automatically.

The following video provides a detailed walkthrough of using OpenStack security groups.

 

Stay tuned for the next installment covering OpenStack users and projects! In the meantime, you can learn more on the VMware Product Walkthrough site and on the VMware Integrated OpenStack product page.

VMware Integrated OpenStack Arrives

I know you might have been wondering where we have been the past few months since we last shared some of the positive feedback from our VMware Integrated OpenStack beta program participants. The truth is, we have been very busy working behind the scenes on some very exciting announcements, and we are very happy that we can now share them with you.

Today, our CEO Pat Gelsinger, and CTO Ben Fathi, announced the launch of VMware Integrated OpenStack to thousands of customers, partners and influencers around the globe.  After a very successful Beta program that included more than a thousand applications, as well as tremendous positive feedback from partners, analysts, vExperts and other influencers, we will be delivering our first-ever OpenStack distribution. VMware Integrated OpenStack will be available for use, free of charge, with VMware vSphere® Enterprise Plus, vSphere with Operations Management Enterprise Plus and all editions of vCloud Suite.

As I wrote before, our customers have been asking us to help them deliver powerful OpenStack APIs to their developers. Our goal with VMware Integrated OpenStack has been to make our customers successful with OpenStack. We wanted to help them leverage their existing VMware investments and expertise to confidently deliver production-grade OpenStack, backed by unified support from VMware. With VMware Integrated OpenStack we deliver a solution that maximizes a customer's chances for success. We focused our effort on simplicity and ease of use, or as we love to say, you don't need an OpenStack PhD with VMware Integrated OpenStack.

We believe that VMware Integrated OpenStack is the fastest and easiest way to stand up a production-grade OpenStack environment, and here are a couple of points on why we believe so.

Streamlined Deployment and Upgrade

  • vSphere Web Client Based Deployment: VMware Integrated OpenStack is a downloaded virtual appliance that is deployed using the vSphere Web Client. The vSphere Web Client then deploys all the VMs and components needed to create a highly available, production grade OpenStack infrastructure in a few simple steps.
  • Automated Rolling Changes for OpenStack Deployment: Whether you are applying a security patch to the guest OS or fixing issues in OpenStack services, VMware Integrated OpenStack provides an easy, reliable and automated mechanism to apply changes to OpenStack services without extensive downtime.
  • Power of the Ecosystem: VMware Integrated OpenStack can be deployed on any vSphere supported hardware. VMware Integrated OpenStack leverages any storage solutions supported by vSphere through vSphere datastores to implement Cinder and Glance, the OpenStack block and image storage services.

Optimized for the Software Defined Data Center

  • VMware vSphere®: VMware Integrated OpenStack leverages enterprise-grade VMware vSphere features such as the Distributed Resource Scheduler (DRS) and Storage DRS through Nova, the OpenStack compute service, to achieve optimal virtual machine density. Features such as High Availability and vMotion are used to protect tenant workloads against failures.
  • VMware NSX™: VMware NSX provides a highly scalable network virtualization platform with rich features such as private networks, floating IPs, logical routing and security groups that can be consumed through Neutron, the OpenStack networking service. These are mandatory requirements for creating true three-tier applications in development environments.
  • VMware Virtual SAN™: Virtual SAN uses server disks and flash to create radically simple, high-performance, resilient shared storage for your virtual machines using x86 servers. The scale-out architecture drastically lowers your overall storage TCO while enabling administrators to specify storage attributes such as capacity, performance, and availability in the form of simple policies on a per-VM basis. Virtual SAN features are provided through Cinder and Glance, the OpenStack block and image storage services.

Integrated Operation and Management

  • Simplified Configuration and Operation: Pre-defined workflows automate common OpenStack operations such as adding/removing capacity, configuration changes, patching and upgrading.
  • Integrated Monitoring and Troubleshooting Tools: Out-of-the-box vRealize Operations Manager and vRealize Log Insight integrations can provide faster and easier monitoring and troubleshooting of your OpenStack infrastructure.
  • Cost Visibility, Benchmarking and Financial management: Out of the Box integration with vRealize Business provides per OpenStack tenant/project visibility on capacity, cost and efficiency. vRealize Business also allows IT to accurately model the cost of services. IT can then benchmark the costs relative to peers in the same industry or public cloud offerings.

Free to use with vSphere Enterprise Plus

  • We are very excited to announce that VMware Integrated OpenStack will be available for free for all new and existing vSphere Enterprise Plus customers, including vSphere with Operations Management Enterprise Plus and vCloud Suite. Production support for VMware Integrated OpenStack, which also includes supporting the open source code, is optional and can be purchased separately at $200/CPU (minimum 50 CPU).

What Early Users Are Saying

“The pace of adoption of our cloud is increasing at a substantial rate, and we needed to move to a self-service IT model. VMware Integrated OpenStack will enable us to deliver open API access to our VMware Infrastructure so users can quickly consume the resources they require, on-demand. VMware is providing us with a unified platform of virtualized compute, networking and storage. We can leverage an open cloud framework with VMware Integrated OpenStack, and combine it with vSphere for enterprise-class reliability and availability, and VMware NSX for secure micro-segmentation of our multi-tenant environment.” – Frans Van Rooyen, compute platform architect, Adobe 

“We want the ability for our developers to write applications once and deploy them anywhere, regardless of the underlying infrastructure, which is why the openness of the OpenStack framework is appealing. But to roll our own OpenStack cloud from the infrastructure up would be complicated and time consuming. With VMware Integrated OpenStack, that process literally takes the click of a mouse, which is really impressive. We like the idea of being able to combine our VMware expertise with a rich layer of open APIs to serve our developers.” – Chris Nakagaki, Technical Lead Engineer at Cox Automotive

“Most of our infrastructure is running on VMware, and we wanted to build a test and development environment that supported self-service for developers. We participated in the VMware Integrated OpenStack beta program and had a great experience. During our evaluation, we were able to set up a complete OpenStack environment on our production VMware clusters in less than 30 minutes. We have received great service and support from VMware, so we also like the fact that our OpenStack deployment will be supported by VMware directly.” – Hendrik Nehnes, Group Director IT Operations, Zanox AG

As we have gone through the Beta process, we have consistently heard from customers that they are interested in OpenStack, but not really interested in moving away from VMware as their core infrastructure. If VMware can deliver on the promise of VMware Integrated OpenStack, it could truly be the best of both worlds for them. We have no doubt we're up to the task.

Amr

VMware Infrastructure: The Best Foundation for OpenStack Clouds

This week at VMworld® 2014, we announced VMware Integrated OpenStack, a solution that simplifies the deployment and operation of an OpenStack cloud, enabling IT organizations to quickly and cost-effectively provide developers with open, cloud-style APIs to access VMware infrastructure. The VMware Integrated OpenStack distribution leverages VMware’s proven software-defined data center technologies for compute, network, storage and management to build a powerful OpenStack cloud that your IT team can efficiently manage.

VMware Integrated OpenStack is designed for enterprise customers that want to provide their developers an experience similar to public clouds by offering cloud-style APIs on top of their private VMware infrastructure. Our customers are asking us about OpenStack, and our goal is to make them successful with it. We want to help them leverage their existing VMware investments and expertise to confidently deliver production-grade OpenStack, backed by unified support from VMware. With VMware Integrated OpenStack we deliver a solution that maximizes a customer's chances for success.

Choice and Simplicity for Enterprise OpenStack Adoption

With this announcement, there are now three ways that customers can implement an OpenStack cloud powered by VMware.

To start, any customer can go to the open source repositories and download the code to build an OpenStack deployment with VMware technologies. These are the true do-it-yourself shops. But with a few exceptions, most customers want commercial support for OpenStack. As a result, VMware has been working with distro vendors across the OpenStack ecosystem to make sure VMware vSphere® and VMware NSX™ are compatible with those distros. We have previously announced partnerships with Canonical and Mirantis, and this week at VMworld we announced a new partnership with HP. These partnerships are a great fit for customers who want a loosely-integrated model for how they build clouds, in which a customer buys a cloud layer like OpenStack from one vendor, and slots in compute, network, storage, and management components that are from other vendors or perhaps are built in-house. This model is prevalent among OpenStack early-adopters.

Over time, as we talked to a wider set of VMware customers, we found that many of them place the most value on simplicity, with a goal of providing development teams with OpenStack APIs and tools in the most straightforward manner possible. They want to get to a production environment quickly, and they want to minimize the need to add new headcount with specialty expertise.

These are the customers for which VMware Integrated OpenStack (Beta) is intended. Customers who are already using and familiar with vSphere and NSX can build on that expertise, providing the fastest and most reliable path to get to a production OpenStack environment.

OpenStack Runs Best on VMware

VMware’s strategy of embracing open frameworks rests on a single, simple premise: the innovation VMware delivers across compute, network, storage, and management provides differentiated value to our customers. Despite a lot of hype around “free” clouds, those who design and run large-scale, production-grade IT environments know that the quality and capability of the virtual infrastructure have a direct relationship to:

  • The performance, reliability, and application-visible features (e.g., load-balancing) seen by application developers;
  • The work required to get the environment up and running at production grade, including meeting SLA and security/compliance requirements and quickly resolving end-user issues;
  • The total-cost-of-ownership (CAPEX, OPEX) of the solution.

This is why VMware believes it can help customers build the most powerful OpenStack clouds. VMware stands out in the industry as the company that provides the most advanced virtualization technologies for building an OpenStack cloud. VMware vSphere is the most powerful and widely adopted compute virtualization platform in the world, VMware NSX is widely seen as the most advanced network virtualization solution for OpenStack, and VMware offers both the most advanced ecosystem of storage partners and new hyper-converged storage options such as Virtual SAN, which leverages disks and flash directly in the hypervisor. Furthermore, VMware's portfolio of cloud management tools, such as vCenter Operations Manager, Log Insight, and IT Business Management, fills key gaps ranging from troubleshooting to log analysis to cost visibility and more. The end result is a complete stack of enterprise-grade components, all helping you run the best possible OpenStack cloud.

You can learn more about our new VMware Integrated OpenStack (Beta) and request access to the beta here. We’ll only be accepting a small number of customers to start, but if you are interested in having VMware work with you to quickly deliver an enterprise-grade OpenStack cloud, we’d love to hear from you.

Amr