
Tag Archives: VMware Integrated OpenStack

VMware Integrated OpenStack 5.0: What’s New

VMware today announced VMware Integrated OpenStack (VIO) 5.0. We are truly excited about our latest OpenStack distribution as VMware is one of the first companies to support and provide enhanced stability on top of the newest OpenStack Queens Release.  Available in both Carrier and Data Center Editions, VIO 5.0 enables customers to take advantage of advancements in Queens to support mission-critical workloads, and adds support for the latest versions of VMware products including vSphere, vSAN, and NSX.

For our Telco/NFV customers, VIO 5.0 is about delivering scale and availability for hybrid applications across VM- and container-based workloads using a single VIM (Virtual Infrastructure Manager). VIO 5.0 will also help NFV operators fast-track a path toward edge computing with VIO-in-a-box, secure multi-tenant isolation, and accelerated network performance using the enhanced NSX-T VDS (N-VDS). For VIO Data Center customers, advanced security, a simplified user experience, and advanced networking with DNSaaS have long topped the wish list; we are excited to bring those features to VIO 5.0.

VIO 5.0 NFV Feature Details:

Advanced Kubernetes Support:

Enhanced Kubernetes support:  VIO 5.0 ships with Kubernetes version 1.9.2. In addition to the latest upstream K8S release, integration with the latest NSX-T 2.2 release is also included. VIO Kubernetes customers can leverage the same Enhanced N-VDS via the Multus CNI plugin to achieve significant improvements in container response time, reduced network latencies, and breakthrough network performance. We also support using Red Hat Enterprise Linux as the K8S cluster image.

Heterogeneous Cluster using Node Groups:  Now you can have different types of worker nodes in the same cluster. Extending the cluster node profiles feature introduced in VIO 4.1, a cluster can now have multiple node groups, each mapping to a single node profile. Instead of building isolated special-purpose Kubernetes clusters, a cloud admin can introduce new node groups to accommodate heterogeneous applications such as machine learning, artificial intelligence, and video encoding. If resource usage exceeds a node group's limit, VIO 5.0 supports cluster scaling at the node group level. With node groups, cloud admins can address cluster capacity based on application requirements, allowing the most efficient use of available resources.

Enhanced Cluster Manageability:  vkube heal and ssh allow you to SSH directly into any node of a given cluster and to recover failed cluster nodes based on etcd state or, in the case of complete failure, from a cluster backup.

Advanced Networking:

N-VDS:  Also known as the NSX-T VDS in Enhanced Data-path mode. Enhanced, because N-VDS runs in DPDK mode and allows containers and VMs to achieve significant improvements in response time, reduced network latencies, and breakthrough network performance. With performance similar to SR-IOV while maintaining the operational simplicity of virtualized NICs, NFV customers can have their cake and eat it too.

NSX-V Search domain:  A new configuration setting for NSX-V enables the admin to configure a global search domain. Tenants will use this search domain if no other search domain is set on the subnet.

NSX-V Exclusive DHCP server per Project:  Instead of a DHCP edge shared across multiple projects based on subnet, an exclusive DHCP edge provides the ability to assign dedicated DHCP servers per network segment. An exclusive DHCP server provides better tenant isolation and allows an admin to determine customer impact for maintenance windows and similar operations.

NSX-T availability zone (AZ):  An availability zone is used to make network resources highly available by grouping network nodes that run services like DHCP, L3, NAT, and others. Users can associate applications with an availability zone for high availability. Previous releases supported neutron AZs with NSX-V; we are extending this support to NSX-T as well.
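
For illustration, a tenant can request a specific zone through the standard neutron availability-zone hint at network creation time. A minimal sketch (the zone name is hypothetical):

    neutron net-create app-net --availability-zone-hint edge-az-1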

Security and Metering:

Keystone Federation:   Federated identity provides a way to securely access cloud resources such as servers, volumes, and databases across multiple endpoints in multiple authorized clouds using a single set of credentials. VIO 5.0 supports Keystone-to-Keystone (K2K) federation by designating a central Keystone instance as an Identity Provider (IdP), interfacing with LDAP or an upstream SAML2 IdP. Remote Keystone endpoints are configured as Service Providers (SPs), propagating authentication requests to the central Keystone. As part of the Keystone federation enhancement, we also support 3rd-party IdPs in addition to the existing support for vIDM.
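
As a rough sketch of the K2K topology described above, the central (IdP) Keystone registers each remote cloud as a service provider using the standard OpenStack client; the name and URLs below are hypothetical:

    # On the central Keystone (IdP), register a remote cloud as a service provider
    openstack service provider create remote-cloud-1 \
      --auth-url https://remote1.example.com:5000/v3/OS-FEDERATION/identity_providers/central-idp/protocols/saml2/auth \
      --service-provider-url https://remote1.example.com:5000/Shibboleth.sso/SAML2/ECP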

Gnocchi:   Gnocchi is a TDBaaS (Time Series Database as a Service) project that was initially created under the Ceilometer umbrella. Rather than storing raw data points, it aggregates them before storing them. Because Gnocchi computes all the aggregations at ingestion, data retrieval is exceptionally speedy. Gnocchi resolves performance bottlenecks in Ceilometer's legacy architecture by providing an extremely robust foundation for the metric storage required for billing and monitoring. The legacy Ceilometer API service has been deprecated upstream and is no longer available in Queens; instead, the Ceilometer API and functionality have been broken out into the Aodh, Panko, and Gnocchi services, all of which are fully supported in VIO 5.0.
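
Once Telemetry data is flowing, the pre-computed aggregates can be pulled straight from Gnocchi with its client. A minimal sketch (the resource type and metric ID are illustrative):

    gnocchi resource list --type instance
    # Hourly mean measures for a given metric
    gnocchi measures show --aggregation mean --granularity 3600 <metric-id>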

Default Drop Policy:   Enable this feature to ensure that traffic to a port that has no security groups and has port security enabled is always discarded.

End to End Encryption:  The cloud admin now has the option to enable API encryption for internal API calls in addition to the existing encryption on public OpenStack endpoints.  When enabled, all internal OpenStack API calls will be sent over HTTPS using strong TLS 1.2 encryption.  Encryption on internal endpoints helps avoid man-in-the-middle attacks if the management network is compromised.

Performance and Manageability:

VIO-in-a-box:  Also known as the “Tiny” deployment. Instead of separate physical clusters for management and compute, the VMware Integrated OpenStack control and data planes can now be consolidated on a single physical server. This drastically reduces the footprint of a deployment and is ideal for edge computing scenarios where power and space are a concern. VIO-in-a-box can be preconfigured manually or fully automated with the OMS API.

Hardware Acceleration:  GPUs are synonymous with artificial intelligence and machine learning. vGPU support gives OpenStack operators the same benefits for graphics-intensive workloads as for traditional enterprise applications: resource consolidation, increased utilization, and simplified automation. The video RAM on the GPU is carved up into portions, and multiple VM instances can be scheduled to access the available vGPUs. Cloud admins determine the amount of vGPU each VM can access based on VM flavors. There are various ways to carve up vGPU resources; refer to the NVIDIA GRID vGPU user guide for additional detail on this topic.
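
As a hedged sketch only: upstream Queens Nova requests vGPU capacity through a flavor resource property, as shown below. The flavor name is hypothetical, and the exact mechanism for the vSphere driver may differ; consult the VIO and NVIDIA GRID documentation.

    openstack flavor create --vcpus 4 --ram 8192 --disk 40 vgpu.small
    openstack flavor set vgpu.small --property resources:VGPU=1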

OpenStack at Scale:  VMware Integrated OpenStack 5.0 features improved scale, having been tested and validated to run 500 hosts and 15,000 VMs in a region. This release will also introduce support for multiple regions at once as well as monitoring and metrics at scale.

Elastic TvDC:  Building on the single-cluster Tenant Virtual Datacenters (TvDCs) introduced in VIO 4.0, VIO 5.0 allows a TvDC to span multiple clusters. Cloud admins can create several resource pools across multiple clusters, assigning the same name and project-id and a unique provider-id. When tenants launch a new instance, the OpenStack scheduler and placement engine schedule the VM request to any of the resource pools mapped to the TvDC.

VMware at OpenStack Summit 2018:

VMware is a Premier Sponsor of OpenStack Summit 2018, which runs May 21-24 at the Vancouver Convention Centre in Vancouver, BC, Canada. If you are attending the Summit in person, we invite you to stop by VMware’s booth (located at A16) for feature demonstrations of VMware Integrated OpenStack 5 as well as VMware NSX and VMware vCloud NFV. Hands-on training is also available (RSVP required). The complete schedule of VMware breakout sessions, lightning talks, and training presentations can be found here.

OpenStack and Kubernetes Better Together

Virtual machines and containers are two of my favorite technologies. In today’s DevOps-driven environment, delivering applications as microservices allows an organization to provide features faster. Splitting a monolithic application into multiple portable, container-based fragments is at the top of most organizations’ digital transformation strategies. Virtual machines, delivered as IaaS, have been around since the late 90s; they abstract hardware to offer enhanced capabilities in fault tolerance, programmability, and workload scalability. While enterprise IT shops large and small are scrambling to refactor applications into microservices, the reality is that IaaS is proven and often used to complement container-based workloads:

1). We’ve always viewed the IaaS layer as an abstraction from the infrastructure that provides a standard way of managing and consolidating disparate physical resources. Resource abstraction is one of the many reasons most containers today run inside virtual machines.

2). Today’s distributed applications consist of both cattle and pets. Without overly generalizing, pet workloads tend to be “hand fed” and often have significant dependencies on a legacy OS that isn’t container compatible. As a result, for most organizations, pet workloads will continue to run as VMs.

3). While there are considerable benefits to containerizing NFV workloads, current container implementations are not sufficient to meet 100% of NFV workload needs. See the IETF report for additional details.

4). IaaS offers the ability to “right size” the container host for dev/test workloads where multiple environments are required to perform different kinds of testing.

Rather than being mutually exclusive, the two technologies have proven over time to complement each other. As long as there are legacy workloads and better ways to manage and consolidate diverse sets of physical resources, virtual machines (IaaS) will co-exist with and complement containers.

OpenStack IaaS and Kubernetes Container Orchestration:

It’s a multi-cloud world, and OpenStack is an important part of the mix. From the datacenter to NFV, thanks to the richness of its vendor-neutral API, OpenStack clouds are being deployed to meet organizations’ needs for public-cloud-like IaaS consumption in a private cloud data center. OpenStack is also a perfect complement to K8S, providing underlying services that are outside the scope of K8S. Kubernetes deployments in most cases can leverage the same OpenStack components to simplify the deployment or developer experience:

1). Multi-tenancy:  Create K8S cluster separation leveraging OpenStack projects (see the sketch after this list). Development teams have complete control over cluster resources in their project and zero visibility into other development teams or projects.

2). Infrastructure usage based on HW separation:  IT departments often act as the central broker for development teams across the entire organization. If development team A funded X servers and team B funded Y, the OpenStack scheduler can ensure K8S cluster resources always map to the hardware allocated to the respective development teams.

3). Infrastructure allocation based on quota:  Deciding how much of your infrastructure to assign to different use cases can be tricky, so organizations can also leverage the OpenStack quota system to control infrastructure usage.

4). Integrated user management:  Since most K8S developers are also IaaS consumers, leveraging the Keystone backend simplifies user authentication for K8S clusters and namespace sharing.

5). Container storage persistence:  Since K8S pods are not durable, storage persistence is a requirement for most stateful workloads. When leveraging the OpenStack Cinder backend, storage volumes are re-attached automatically after a pod restart (on the same or a different node).

6). Security:  VMs and containers will continue to co-exist for the majority of enterprise and NFV applications, so providing uniform security enforcement is critical. Leveraging Neutron integration with industry-leading SDN controllers such as VMware NSX-T can simplify container security insertion and implementation.

7). Container control plane flexibility:  K8S HA requires load-balanced multi-master and scalable worker nodes. When integrated with OpenStack, this is as simple as leveraging LBaaSv2 for master node load balancing. Worker nodes can scale up and down using tools native to OpenStack. With VMware Integrated OpenStack, K8S worker nodes can also scale vertically using the VM live-resize feature.
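
As a hedged illustration of the first three points above, a cloud admin might carve out a project, a quota, and persistent storage for one development team before handing it a Kubernetes cluster (names and sizes are hypothetical):

    # One OpenStack project per development team keeps K8S clusters isolated
    openstack project create --description "Team A K8S clusters" team-a
    # Cap the infrastructure the team's clusters can consume
    openstack quota set --cores 64 --ram 262144 --instances 32 team-a
    # A persistent volume for a stateful pod, served by the Cinder backend
    openstack volume create --size 20 team-a-pv-data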

Next Steps:

I will use a VMware Integrated OpenStack (VIO) implementation to provide examples of this perfect match. This blog is part 1 of a 4-part series:

1). OpenStack and Containers Better Together (This Post)

2). How to Integrate your K8S  with your OpenStack deployment

3). Treat Containers and VMs as “equal class citizens” in networking

4). Integrate common IaaS and CI / CD tools with K8S

Leverage OpenStack for your Production Workloads

In my previous blog I wrote about VMware’s involvement in open source. The proliferation of open source projects in recent years has influenced how people think about technology, and how technology is being adopted in organizations, for a few reasons. First, open source is more accessible – developers can download projects from github to their laptops and quickly start using them. Second, open source delivers cutting edge capabilities, and companies leverage that to increase the pace of innovation. Third, developers love the idea that they can influence, customize and fix the code of the tools they’re using.  Many companies are now adopting the “open source first” strategy with the hope that they will not only speed up innovation but also cut costs, as open source is free.

However, while developers increasingly adopt open source, it often doesn’t come easy to DevOps and IT teams, who carry the heavy burden of bringing applications from developer laptops to production. These teams have to think about stability, performance, security, upgrades, patching, and the list goes on. In those cases, enterprises are often happy to pay for an enterprise-grade version of the product in which all those concerns are already taken care of.

When applications are ready to move to production…

OpenStack is a great example. Many organizations are keen to run their applications on top of an open source platform, also known to be the industry standard. But that doesn’t come without deployment and manageability challenges. That’s where VMware provides more value to customers.

VMware Integrated OpenStack (VIO) makes it easier for IT to deploy and run an OpenStack cloud on top of their existing VMware infrastructure. Combining VIO with the enterprise-grade capabilities of the VMware stack provides customers with the most reliable and production-ready OpenStack solution. There are three key reasons for this statement: a) VMware provides best-of-breed, production-ready OpenStack-compatible infrastructure; b) VIO is fully tested for both business continuity and compatibility; and c) VMware delivers capabilities for day 2 operations. Let me go into detail on each of the three.

Best-of-breed OpenStack-compatible infrastructure

First, VMware Integrated OpenStack is optimized to run on top of VMware Software Defined Data Center (SDDC), leveraging all the enterprise-grade capabilities of VMware technologies such as high availability, scalability, security and so on.

  • vSphere for Nova Compute: VIO takes advantage of vSphere capabilities such as Dynamic Resource Scheduling (DRS) to achieve optimal VM density and vMotion to protect tenant workloads against failures.
  • VMware NSX for Neutron: advanced networking services with massive scale and throughput, and with rich set of capabilities such as private networks, floating IPs, logical routing, load balancing, security groups and micro-segmentation.
  • VMware vSAN/3rd party storage for Cinder/Glance: VIO works with any vSphere-validated storage (we have the largest hardware compatibility list in the industry). VIO also brings Advanced Storage Policies through VMware vSAN.

Battle hardened and tested

OpenStack can be deployed on many combinations of storage, network, and compute hardware and software, from multiple vendors. Testing all combinations is a challenge, and customers who choose the DIY route often have to test their particular combination of hardware and software for production workloads. VMware Integrated OpenStack, on the other hand, is battle-hardened and tested against all VMware virtualization technologies to ensure the best possible user experience from deployment to management (upgrades, patching, etc.) to usage. In addition, VMware provides the broadest hardware compatibility coverage in the industry today (tested in production environments).

Key capabilities for Day-2 Operations

VMware Integrated OpenStack brings operations capabilities to OpenStack users.  For example, built-in command line interface (CLI) tools enable you to troubleshoot and monitor your OpenStack deployment and the status of OpenStack services. Pre-defined workflows automate common OpenStack operations such as adding/removing capacity, configuration changes, and patching.

In addition, out-of-the-box integrations with vRealize Operations, vRealize Log Insight, and vRealize Business for Cloud provide monitoring, troubleshooting, and cost visibility for your OpenStack infrastructure.

Finally, to add to all of this, another benefit is that our customers only have one vendor and one support number to call in case of a problem. No finger pointing, no need to juggle different support plans. Easy!

To learn more, visit the VIO web page and product feature walkthrough.

Introducing VMware Integrated OpenStack 4.0

We’re excited to announce the new release of VMware Integrated OpenStack 4.0 today at VMworld US 2017, as part of the VMware SDDC story. You can read more about it here.

VMware Integrated OpenStack (VIO) is an OpenStack distribution supported by VMware, optimized to run on top of VMware’s SDDC infrastructure. In the past few months we have been hard at work, adding additional enterprise grade capabilities into VIO, making it even more robust, scalable and secure, yet keeping it easy to deploy, operate and use.

VMware Integrated OpenStack 4.0 is based on Ocata, and some of the highlights include:

Containers support – users can run VMs alongside containers on VIO. Out-of-the-box container support enables developers to consume Kubernetes APIs, leveraging all the enterprise grade capabilities of VIO such as multi-tenancy, persistent volumes, high availability (HA), and so on.

Integration with vRealize Automation – vRealize Automation customers can now embed OpenStack components in blueprints. They can also manage their OpenStack deployments through the Horizon UI as a tab in vRealize Automation. This integration provides additional governance as well as single-sign-on for users.

Multi vCenter support – customers can manage multiple VMware vCenters with a single VIO deployment, for additional scale and isolation.

Additional capabilities for better performance and scale, such as live resize of VMs (changing RAM, CPU and disk without shutting down the VM), Firewall as a Service (FWaaS), CPU pinning and more.

Our customers use VMware Integrated OpenStack for a variety of use cases, including:

Developer cloud – providing public cloud-like user experience to developers, as well as more choice of consumption (Web UI, CLI or API), self-service and programmable access to VMware infrastructure. With the new container management support, developers will be able to consume Kubernetes APIs.
IaaS platform for enterprise automation – adding automation and self-service provisioning on top of best-of-breed VMware SDDC.
Advanced, programmable network – leveraging network virtualization with VMware NSX for advanced network capabilities.

Our customers tell us (consistently) that VIO is easy to deploy (“it just worked!”) and manage. Since it’s deployed on top of VMware virtualization technologies, they are able to deploy and manage it by themselves, without hiring new people or professional services. Their development and DevOps teams like VIO because it gives them the agility and user experience they want, with self-service and standard OpenStack APIs.

In most cases, in a short amount of time (a few weeks!), customers trust VIO enough to run their business-critical applications, such as e-commerce websites or online travel systems, in production.

VMware Integrated OpenStack will be available as a standalone product later this quarter. For more information go to our website, check out the product walkthrough and try out the hands-on lab.

If you are attending VMworld, please stop by our booth (#1139) to see demos and speak with OpenStack specialists. We’re looking forward to seeing you!

OpenStack Sessions at VMworld 2017 Las Vegas

Don’t Miss Out!

VMworld 2017 Las Vegas is just around the corner, and we can’t wait to meet our customers and partners and explore all the great sessions, workshops, and activities planned for next week. With over 500 sessions across all categories, it may be overwhelming to work out which sessions are most beneficial for you. Here is the list of all the OpenStack-related sessions; make sure you register and mark your calendar in advance so you don’t miss out!

In addition, make sure to stop by the VMware Integrated OpenStack (VIO) booth (#1139) to learn more and see a demo or two.

Monday, Aug 28, 11:30 a.m. – 1:00 p.m. | South Pacific Ballroom, Lower Level, HOL 5

[ELW182001U] VMware Integrated OpenStack (VIO) – Getting Started Workshop
Monday, Aug 28, 1:00 p.m. – 2:00 p.m. | Islander C, Lower Level

[MGT2609BU] VMware Integrated OpenStack: What’s New
It is not OpenStack or VMware; it is OpenStack on VMware.
Come and learn what is new in VMware Integrated OpenStack and our plans for the future of OpenStack on the software-defined data center.
Monday, Aug 28, 2:00 p.m. – 3:00 p.m. | Islander F, Lower Level

[LDT1844BU] Open Source at VMware: A Key Ingredient to Our Success and Yours
Open-source components are part of practically every software product or service today. VMware products are no exception. And increasingly, IT departments are presented with many application roll-out requests that include large open-source components as part of the infrastructure on which they rely. From OpenStack to Docker to Kubernetes and beyond, open source is a reality of the enterprise environment. VMware is investing in open source both as a user of many components (and contributor to many of those projects) and as a creator of many successful open-source projects such as Open vSwitch, Harbor, Clarity, and many more. This session will talk about the what, the why, and the how of our engagement in open source: our vision and strategy and why all this is critically important for our customers.
Monday, Aug 28, 3:15 p.m. – 4:00 p.m. | Meet the Experts, 2nd floor foyer, Table #5
Wednesday, Aug 30, 2:15 p.m. – 3:00 p.m. | Meet the Experts, 2nd floor foyer, Table #5
Thursday, Aug 31, 11:45 a.m. – 12:30 p.m. | Meet the Experts, 2nd floor foyer, Table #5

[MTE4733U] Implementing OpenStack with VIO
Meet Xiao Gao, VMware Integrated OpenStack expert. Bring your questions!
Tuesday, Aug 29, 12:15 p.m. – 1:00 p.m. | Meet the Experts, 2nd floor foyer, Table #7

[MTE4803U] OpenStack in the Enterprise with Marcos Hernandez
Speak with Expert Marcos Hernandez about the benefits of running OpenStack in private Cloud environments.
Tuesday, Aug 29, 4:00 p.m. – 5:00 p.m. | Oceanside D, Level 2

[MGT1785PU] OpenStack in the Real World: VMware Integrated OpenStack Customer Session
More and more customers are looking to leverage OpenStack to add automation and provide open API to their application development teams. In this session, VMware Integrated OpenStack customers will share their OpenStack journey and the benefits VMware Integrated OpenStack provides to development teams and IT.
Tuesday, Aug 29, 4:00 p.m. – 4:15 p.m. | VMvillage – VMTN Community Theater

[VMTN6664U] Networking and Security Challenges in OpenStack Clouds
Decided it’s time to implement OpenStack to build your cloud? Have you tested in the lab, evaluated the various distributions available, and hired a specialized team for OpenStack? But when the time comes to go into production, Neutron doesn’t integrate with your physical network? If this story closely resembles what you have been facing, this TechTalk is critical for you to understand the challenges of networking and security with any OpenStack distribution and what solutions are missing for your cloud to fully work. NOTE: Community TechTalk taking place in VMvillage.
Tuesday, Aug 29, 5:30 p.m. – 6:30 p.m. | Mandalay Bay Ballroom B, Level 2

[NET1338BU] VMware Integrated OpenStack and NSX Integration Deep Dive
OpenStack offers a very comprehensive set of Network and Security workflows provided by a core project called Neutron. Neutron can leverage VMware NSX as a backend to bring advanced services to the applications owned by OpenStack. In this session we will cover the use cases for Neutron, and the various topologies available in OpenStack with NSX, with a focus on security. We will walk you through a number of design considerations leveraging Neutron Security Groups and the NSX Stateful Distributed Firewall integration, along with Service Chaining in NSX for Next Generation Security Integration, all available today.
Wednesday, Aug 30, 8:00 a.m. – 9:00 a.m. | Surf A, Level 2

[FUT3076BU] Simplifying Your Open-Source Cloud With VMware
Open source or VMware? Clearly, you can’t have both, right? Wrong. As open-source, cloud-based solutions continue to evolve, IT leaders are challenged with the adoption and implementation of large-scale deployments such as OpenStack and network function virtualization from both a business and technical perspective. Learn how VMware’s solutions can simplify existing open-source innovation, resulting in new levels of operations, standardization (app compatibility), and delivery of enterprise support.
Wednesday, Aug 30, 2:00 p.m. – 3:00 p.m. | Surf A, Level 2

[FUT1744BU] The Benefits of VMware Integrated OpenStack for Your NFV Platform
Communication Service Providers (CSPs) embracing network functions virtualization (NFV) are building platforms with three imperatives in mind: service agility, service uptime and platform openness. These capabilities require the cloud platform they choose to be able to easily model, deploy and modify a service, to run it on a tightly-integrated robust virtual infrastructure and migrate the service horizontally across cloud platforms when/if needed. Come to this session to learn about VIO, a VMware-supported OpenStack (OS) distribution, at the heart of the VMware NFV platform and how it can help CSPs meet those requirements. We will look in detail at the role of VIO as virtual infrastructure manager as well as its native integration with the other components of the VMware software-defined data center architecture (vSphere, NSX and VSAN).
Thursday, Aug 31, 10:45 a.m. – 11:30 a.m. | Meet the Experts, 2nd floor foyer, Table #8

[MTE4832U] How VMware IT Operates VMware Integrated OpenStack
with Cloud Architect Chris Mutchler
Learn from VMware IT’s implementation of VMware Integrated OpenStack.

OpenStack Boston Summit VMware Sessions Recap

Watch below to experience VMware’s Speaker Sessions at this year’s OpenStack Summit in Boston!


OpenStack & VMware: Getting the Best of Both

Speaker: Andrew Pearce

Come and understand the true value to your organization of combining OpenStack and VMware. In this session you will understand the value of a DefCore-compliant, OpenStack Powered solution that enables your developers to provision IaaS in the way that they want, using the tools that they want. In addition, you will be able to let your operations team continue to use the tools, resources, and methodology they rely on to ensure that your organization has a production-grade environment to support your developers. Deploying OpenStack, and getting the advantages of OpenStack, does not need to be a rip-and-replace strategy. See how other customers have had their cake and eaten it too.


OpenStack and VMware: Enterprise-Grade IaaS Built on Proven Foundation

Speakers: Xiao Hu Gao & Hari Kannan 

Running production workloads on OpenStack requires a rock-solid IaaS running on a trusted infrastructure platform. Think about upgrading, patching, managing the environment, high availability, disaster recovery, security, and the list goes on. VMware delivers a top-notch OpenStack distribution that gives you all of the above and much more. Come to this session to see (with a demo) how you can easily and quickly deploy OpenStack for your dev/test as well as production workloads.


Is Neutron Challenging to You? Learn How VMware NSX is the Solution for Regular OpenStack Network & Security Services and Kubernetes

Speakers: Dmitri Desmidt, Yves Fauser

Neutron is challenging in many aspects. The main ones reported by OpenStack admins are: complex implementation of network and security services, high-availability, management/operation/troubleshooting, scale. Additionally, with new Kubernetes and Containers deployments, security between containers and management of container traffic is a new headache. VMware NSX offers a plugin for all Neutron OpenStack installations for ESXi and KVM hypervisors. Learn in this session with multiple live demos how VMware NSX plugin resolves all the Neutron challenges in an easy way.


 Digital Transformation with OpenStack for Modern Service Providers

Speakers: Misbah Mahmoodi, Kenny Lee

The pace of technological change is accelerating at an exponential rate. With the advent of 5G networks and IoT, Communications Service Providers’ success depends not only on their ability to adapt to changes quickly but to do so faster than competitors. Speed is of the essence in developing new services, deploying them to subscribers, delivering a superior Quality of Experience, and increasing operational efficiency with lowered cost structures. For CSPs to adapt and remain competitive, they are faced with important questions as they explore the digital transformation of their business and infrastructure, and how they can leverage NFV, OpenStack, and open hardware platforms to accelerate change and modernization.


Running Kubernetes on a Thin OpenStack

Speakers: Mayan Weiss & Hari Kannan 

Kubernetes is leading the container mindshare and OpenStack community has built integrations to support it. However, running production workloads on Kubernetes is still a challenge. What if there was a production ready, multi-tenant K8s distro? Dream no more. Come to this session to see how we adapted OpenStack + K8s to provide container networking, persistent storage, RBAC, LBaaS and more on VMware SDDC.


OpenStack and OVN: What’s New with OVS 2.7

Speakers: Russel Bryant, Ben Pfaff, Justin Pettit

OVN is a virtual networking project built by the Open vSwitch community. OpenStack can make use of OVN as its backend networking implementation for Neutron. OVN and its Neutron integration are ready for use in OpenStack deployments.

This talk will cover the latest developments in the OVN project and the latest release, part of OVS 2.7. Enhancements include better performance, improved debugging capabilities, and more flexible L3 gateways. We will take a look ahead at the next set of things we expect to work on for OVN, which includes logging for OVN ACLs (security groups), encrypted tunnels, native DNS integration, and more.

We will also cover some of the performance comparison results of OVN as compared with the original OVS support in Neutron (ML2/OVS). Finally, we will discuss how to deploy OpenStack with OVN or migrate an existing deployment from ML2/OVS to OVN.


DefCore to Interop and Back Again: OpenStack Programs and Certifications Explained

Speakers: Mark Voelker & Egle Sigler

OpenStack Interop (formerly DefCore) guidelines have been in place for 2 years now, and anyone wanting to use the OpenStack logo must pass these guidelines. How are guidelines created and updated? How would your favorite project be added to them? How can you guarantee that your OpenStack deployment will comply with the new guidelines? In this session we will cover the OpenStack Interop guidelines and components, as well as explain how they are created and updated.


Senlin: An ideal Bridge Between NFV Orchestrator and OpenStack

Speakers: Xinhui Li, Ethan Lynn, Yanyan Hu

Resource management is a top requirement in the NFV field. Usually, the orchestrator takes responsibility for parsing a virtual network function into different virtual deployment units (VDUs) to deploy and operate over the cloud. Senlin, positioned as a clustering resource manager since its inception, can be the ideal bridge between an NFV orchestrator and OpenStack: it uses a consolidated model that maps directly to a VDU to interact with different backend services like Nova, Neutron, and Cinder for compute, network, and storage resources per the orchestrator’s demand, and it provides rich operational functions like auto-scaling, load-balancing, and auto-healing. We use a popular vIMS-type VNF to illustrate how to easily deploy a VNF on OpenStack and manage it in a scalable and flexible way.


High Availability and Scalability Management of VNF

Speakers: Haiwei Xu, Xinhui Li, XueFeng Liu

Network function virtualization (NFV) is growing rapidly and is widely adopted by many telecom enterprises. In OpenStack, Tacker takes responsibility for building a generic VNF Manager (VNFM) and an NFV Orchestrator (NFVO) to deploy and operate network services and virtual network functions (VNFs) on the infrastructure platform. For VNFs that work as a load balancer or a firewall, Tacker needs to consider the availability of each VNF to ensure they are not overloaded or out of service. To prevent VNFs from being overloaded or going down, Tacker needs to make them highly available and auto-scaling. So in fact the VNFs serving a given function should not be a single node, but a cluster.

This raises the problem of cluster management. In the OpenStack environment there is a clustering service called Senlin, which provides scalability management and HA functions for nodes; those features exactly fit Tacker’s requirements.

In this talk we will give you a general introduction of this feature.


How an Interop Capability Becomes Part of the OpenStack Interop Guidelines

Speakers: Rochelle Grober, Mark Voelker, Luz Cazares

The OpenStack Interop Working Group (formerly DefCore) produces the OpenStack Powered (TM) Guidelines (a.k.a. Interoperability Guidelines). But how do we decide what goes into a guideline? How do we define these so-called “Capabilities”? And how does the team “score” them? Attend this session to learn what we mean by “Capability”, the requirements a capability must meet, and the process the group follows to grade those capabilities… And, you know what, let’s score your favorite thing live.


OpenStack Interoperability Challenge and Interoperability Workgroup Updates: The Adventure Continues

Speakers: Brad Topol, Mark Voelker, Tong Li

The OpenStack community has been driving initiatives on two sides of the interoperability coin: workload portability and API/code standards for OpenStack Powered products. The first phase of the OpenStack Interoperability Challenge culminated with a Barcelona Summit Keynote demo comprised of 16 vendors all running the same enterprise workload to illustrate that OpenStack enables workload portability across OpenStack clouds. Building on this momentum for its second phase, the multi-vendor Interop Challenge team has selected new advanced workloads based on Kubernetes and NFV applications to flush out portability issues in these commonly deployed workloads. Meanwhile, the recently formed Interop Working Group continues to roll out new Guidelines, drive new initiatives, and is considering expanding its scope to cover more vertical use cases. In this presentation, we describe the progress, challenges, and lessons learned from both of these efforts.

Take Advantage of Nova Flavor Extra-specs and vSphere QoS to Make Delivering SLAs Much Simpler

Resource and over-subscription management are always among the most challenging tasks facing a cloud admin. To deliver a guaranteed SLA, one method OpenStack cloud admins have used is to create separate compute aggregates with different allocation / over-subscription ratios. Production workloads that require guaranteed CPU, memory, or storage would be placed into a non-oversubscribed aggregate with 1:1 over-subscription; dev workloads might be placed into a best-effort aggregate with N:1 over-subscription. While this simplistic model accomplishes its purpose of an SLA guarantee on paper, it comes with huge CapEx and/or high overhead for capacity management and augmentation. Worse yet, because host-aggregate-level over-subscription in OpenStack is simply static metadata consumed by the nova scheduler during VM placement, not real-time VM state or consumption, huge resource imbalances within a compute aggregate and noisy neighbor issues within a nova compute host are common occurrences.

New workloads can be placed on a host running close to capacity (real time consumption), while remaining hosts are running idle due to differences in application characteristics and usage pattern. Lack of automated day 2 resource re-balance(management) further exacerbates the issue. To provide white glove treatment to critical tenants and workloads, Cloud Admins must deploy additional tooling to discover basic VM to Hypervisor mapping based on OpenStack project IDs. This is both expensive and ineffective in meeting SLAs.

Over-subscription works if resource consumption can be tracked and balanced across a compute cluster. Noisy neighbor issues can be solved only if the underlying infrastructure supports quality of service (QoS). By leveraging OpenStack Nova flavor extra-spec extensions along with vSphere’s industry-proven per-VM resource allocation (expressed using shares, limits, and reservations), OpenStack cloud admins can deliver enhanced QoS while maintaining uniform consumption across a compute cluster. It is possible to leverage image metadata to deliver QoS as well, but this blog will focus on Nova flavor extra-specs.

The VMware Nova flavor extension to OpenStack was first introduced upstream in Kilo and is officially supported in VIO release 2.0 and above. Additional requirements are outlined below:

  • Requires VMware Integrated OpenStack version 2.0.x or greater
  • Requires vSphere version 6.0 or greater
  • Network Bandwidth Reservation requires NIOC version 3
  • VMware Integrated OpenStack access as a cloud administrator

Resource reservations can be set for following resource categories:

  • CPU (MHz)
  • Memory (MB)
  • Disk IO (IOPS)
  • Network Bandwidth (Mbps)

Within each resource category, Cloud Admin has the option to set:

  • Limit – Upper bound, not to exceed limit resource utilization
  • Reservation – Guaranteed minimum reservation
  • Share Level – The allocation level. This can be ‘custom’, ‘high’, ‘normal’, or ‘low’.
  • Shares – In the event that ‘custom’ is used, this is the number of shares.

Complete Nova flavor extra-spec details and deployment options can be found here. vSphere Resource Management capabilities and configuration guidelines are a great reference as well and can be found here.

Let’s look at an example using Hadoop to demonstrate VM resource management with flavor extra-specs. Data flows from Kafka into HDFS, and every 30 minutes a batch job consumes the newly ingested data. The exact details of the Hadoop workflow are outside the scope of this blog; if you are not familiar with Hadoop, some details can be found here. The resources required for this small-scale deployment are outlined below:

Node Type            Cores (reserved – max)   Memory (reserved – max)   Disk   Network Limit
Master / Name Node   4                        16 G                      70 G   500 Mbps
Data Node            4                        16 G                      70 G   1000 Mbps
Kafka                0.4 – 2                  2 – 4 G                   25 G   100 Mbps

Based on the above requirements, the cloud admin needs to create Nova flavors to match the maximum CPU / memory / disk requirements for each Hadoop component. Most OpenStack admins should be very familiar with this process.

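The original command screenshot is not reproduced here; the lines below are a hedged reconstruction of this step using the sizes from the table (flavor names are illustrative; RAM is in MB, disk in GB):

    nova flavor-create hadoop-master auto 16384 70 4
    nova flavor-create hadoop-data auto 16384 70 4
    nova flavor-create kafka auto 4096 25 2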

Based on the reservation amount, attach corresponding nova extra specs to each flavor:

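Again in place of the original screenshot, a hedged sketch using the VMware driver's quota:* flavor extra specs (the MHz and MB values are illustrative and assume 2.6 GHz cores):

    # Data node: fully reserve CPU and memory, cap network bandwidth at 1000 Mbps
    nova flavor-key hadoop-data set quota:cpu_reservation=10400     # MHz (4 x 2600)
    nova flavor-key hadoop-data set quota:memory_reservation=16384  # MB
    nova flavor-key hadoop-data set quota:vif_limit=1000            # Mbps
    # Kafka node: 20% CPU and 50% memory reserved, 100 Mbps network limit
    nova flavor-key kafka set quota:cpu_reservation=1040
    nova flavor-key kafka set quota:memory_reservation=2048
    nova flavor-key kafka set quota:vif_limit=100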

Once the extra specs are mapped, confirm the settings using the standard nova flavor-show command:

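In place of the original screenshot, the confirmation step is simply (flavor name as above):

    nova flavor-show hadoop-data   # the extra_specs field lists the quota:* settings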

In just three simple steps, the resource reservation settings are complete. Any new VM created from these flavors through OpenStack (API, command line, or Horizon GUI) will have its resource requirements passed to vSphere (existing VMs can be moved onto the new flavors using the nova rebuild feature).

Instead of best effort, vSphere will guarantee resources based on the nova flavor extra-spec definition. Specific to our example, 4 vCPU / 16 G / max 1G network throughput will be reserved for each DataNode, the NameNode gets 4 vCPU / 16 G / max 500M throughput, and Kafka nodes will have 20% vCPU / 50% memory reserved. Instances boot into the “Error” state if the requested resources are not available, ensuring that existing workload application SLAs are not violated. You can see that the resource reservations created by the vSphere Nova driver are reflected in the vCenter interface:

[vCenter screenshots omitted: CPU/memory and network bandwidth reservations for the Name Node, Data Nodes, and Kafka nodes, as created by the vSphere Nova driver.]

vSphere will enforce strict admission control based on real-time resource allocation and load. New workloads will be admitted only if the SLA can be honored for both new and existing applications. Once a workload is deployed, in conjunction with vSphere DRS, workload rebalancing can happen automatically between hypervisors to ensure optimal host utilization (future blog) and avoid noisy neighbor issues. Both features are available out of the box; no customization is required.

By taking advantage of vSphere VM resource reservation capabilities, OpenStack cloud admins can finally enjoy the superior capacity and over-subscription capabilities a cloud environment offers. Instead of deploying excess hardware, cloud admins can decide when and where additional hardware is needed based on real-time application consumption and growth. The ability to consolidate, simplify, and control your infrastructure helps reduce power and space requirements and eliminates the need for out-of-box customization in tooling or operational monitoring. I invite you to test out nova extra-specs in your VIO environment today, or encourage your IT team to try our VMware Integrated OpenStack Hands-On-Lab, no installation required.

 

VIO Speed Challenge – Can a New Guy Get a Production-Quality OpenStack Set Up and Running in Under Three Hours?

This article describes an OpenStack setup. To put it into context, I recently joined VMware as a Technical Marketing Manager responsible for VMware Integrated OpenStack technical product marketing and field enablement. After a smooth onboarding process (accounts, benefits, etc.), my first task was to get the lab environments I inherited up and running again. I came from a big OpenStack shop where deployment automation was built in house using both Puppet and Ansible, so my first questions were: Where’s the site hiera data? Settings for the Cobbler environment? Which playbook spins up the controller VM nodes? Once a VM is up, which playbooks deploy and configure keystone, nova, cinder, neutron, etc.? How do I seed the environment once OpenStack is configured and running? It was always a multi-person, multi-week effort, and everyone maintained a cheat sheet loaded with deployment caveats and workarounds (a long list, since increased features = increased complexity = increased caveats = slow time to market). Luckily, with VMware Integrated OpenStack all those decision points and complexity are abstracted away from the cloud admin. Really, the only decision to make is:

  • Restore from database backup – Restore from backup processes can be found here.
  • Redeploy and Import.

Redeploy seemed more interesting so I decided to give it a shot (will save restore for a different blog if there is interest).

Note: Redeploy will not clean up vSphere resources. One can re-import VM resources from vCenter once deployment completes; refer to the instructions here.

Lab Topology:

The lab environment consists of 3 ESXi clusters:

  • Management (vCenter, VIO, vRealize Operations Manager, vRealize Log Insight, NSX Manager, NSX Controller, etc.)
  • Edge (OpenStack neutron tenant resources – NSX Edge)
  • Production (where tenant VMs reside)

A single NSX-v instance spans all three clusters from a networking perspective.

[Diagram: lab topology across the three ESXi clusters]

VIO OpenStack Deployment:

VMware Integrated OpenStack can be deployed in two ways: Full HA or Compact. Compact mode was introduced in 3.0 and requires significantly fewer hardware resources and less memory than Full HA mode; all OpenStack components can be deployed in 2 VMs. Enterprise-grade OpenStack high availability can still be achieved in Compact mode by taking advantage of the HA capabilities of the vSphere infrastructure, which makes it useful for multiple small deployments. Full HA mode provides high availability at the application layer; this deployment consists of a dedicated management node (a.k.a. the OMS node), 3 DB nodes, 2 HAProxy nodes, and 2 controller nodes. Ceilometer can be enabled after completion of the VIO base stack deployment. Full HA mode is what I chose to deploy.

Since this environment is new to me, I wanted to avoid playing detective and spending days trying to reverse engineer the entire setup. Luckily VIO has a built-in export configuration capability. Simply navigate to the OpenStack deployment (Home -> VMware Integrated OpenStack -> OpenStack Deployments), all OpenStack settings can be exported with a single click:

[Screenshot: one-click Export Configuration from the OpenStack deployment view]

In some ways the exported configuration file is similar to traditional site hiera data, except much simpler to consume. The data is in JSON format, so it is easy to read, and only information specific to my OpenStack setup is included. I no longer need to worry about kernel / TCP settings, which interfaces to configure for management vs. data, NIC bonding, driver settings, haproxy endpoints, etc. Even the Ansible inventory is automatically created and managed based on deployment data. This is because VIO is designed to work out of the box with the VMware suite of products, allowing cloud admins to focus on delivering SLAs instead of maintaining thousands of hard-to-remember key/value pairs. Advanced users can still look into the inventory and configuration parameter details; the majority of deployment configuration is maintained on the OMS node in the following directories. (Please note: these settings are intended for viewing only and should not be modified without VMware support. The OMS node is primarily used as a starting point for troubleshooting, running viocli commands, and SSHing to the other nodes.)

[Screenshot: deployment configuration directories on the OMS node]

With configuration saved, I went ahead and deleted the old deployment and clicked the Deploy OpenStack link to redeploy:

[Screenshot: the Deploy OpenStack link in vCenter]

The process to re-deploy a VIO OpenStack cluster is extremely simple: one simply selects an exported template to pre-fill the configuration settings.

[Screenshot: selecting an exported template during deployment]

The remaining deployment processes are well documented via other VMware blogs. References can be found here.

The entire Full HA mode deployment process took slightly over 50 minutes because of an unexpected NSX disk error that prevented neutron from starting; with a clean environment, the deployment took 30 minutes (see below). Compact mode users should expect deployment to take as little as 15 minutes.

[Screenshot: completed OpenStack deployment in vCenter]

Create VM and Test External Connectivity:

Once deployment completes, simply create an L2 private network and test that VMs can boot successfully in the default tenant project. Note that in order for a VM to connect externally, an external network needs to be created, the private and external networks must be attached to a NAT router, and finally a floating IP must be requested and associated with the test VM. This is all extremely simple, as VIO is 100% based on DefCore-compliant OpenStack APIs on top of VMware’s best-of-breed SDDC infrastructure. Two neutron commands are all that’s needed to create an external network:

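In place of the original screenshot, a hedged reconstruction of those two commands (network names and addresses are illustrative):

    neutron net-create ext-net --router:external
    neutron subnet-create ext-net 10.114.0.0/24 --name ext-subnet \
      --gateway 10.114.0.1 --disable-dhcp \
      --allocation-pool start=10.114.0.10,end=10.114.0.100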

Floating IP is a fancy OpenStack word for NAT. In most OpenStack implementations, NAT translation happens in the neutron tenant router. A VIO neutron tenant router can be deployed in 3 different modes: centralized exclusive, centralized shared, or distributed. Performance aside, the biggest difference between centralized and distributed mode is the number of control VMs deployed to support the logical routing function. Distributed mode requires 2 NSX control plane router VMs: one router instance (DLR) for optimized east-west traffic between hypervisors, and a second instance to take care of north-south external traffic flow via NAT. A single NSX control VM instance is required for centralized mode, where all routed traffic (N-S, E-W) flows through a central NSX Edge Services Gateway (ESG). Performance and scale requirements will determine which mode to choose; centralized shared is the default behavior. Marcos Hernandez has written an excellent blog on NSX networking and VIO, which can be found here.

An enterprise grade NSX NAT router can be created via three neutron commands, one command to create the router, and two commands to associate corresponding neutron networks.

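Again in place of the screenshot, a hedged sketch of those three commands (router and network names are illustrative):

    neutron router-create tenant-router
    neutron router-gateway-set tenant-router ext-net
    neutron router-interface-add tenant-router private-subnet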

Once the router is created, simply allocate a floating IP, and associate the floating IP to the test VM instance:

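The equivalent CLI steps look like the following sketch (IDs are placeholders):

    neutron floatingip-create ext-net
    neutron floatingip-associate <floatingip-id> <test-vm-port-id>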

The entire process from deployment to external VM reachability took me less than 3 hours in total, not bad for a new guy!

Deploying and configuring a production-grade OpenStack environment traditionally takes weeks and calls for highly skilled DevOps engineers specializing in deployment automation, with in-depth knowledge of repo and package management and strong Linux system administration. Coming up with the right CI/CD process to support new features and stay aligned with upstream within a release is extremely difficult and results in snowflake environments.

The VIO approach changes all that. I’m impressed with what VMware has done to abstract away the traditional complexities involved in deploying, supporting, and maintaining an enterprise-grade OpenStack environment. Leveraging VMware’s suite of products on the backend, an enterprise-grade OpenStack can be deployed and configured in hours rather than weeks. If you haven’t already, make sure to give VMware Integrated OpenStack a try; you will save tremendous amounts of time and meet the most demanding requests of application developers, all while providing the highest SLA possible. Download now and get started, or dare your IT team to try our VMware Integrated OpenStack Hands-On-Lab, no installation required.

How To Efficiently Derive Value From VMware Integrated OpenStack

One of our recent posts on this blog covered how VMware Integrated OpenStack (VIO) can be deployed in less than 15 minutes thanks to an easy and friendly wizard-driven deployment.

That post also mentioned that recent updates to VIO focus on ease of use and consumption, including integration with your existing vSphere environment.

 

This article will explore in greater detail the latter topic and will focus on two features that are designed to help customers start deriving value from VIO quickly.


vSphere Templates as OpenStack Images

 

An OpenStack cloud without any image is like a physical server without an operating system – not so useful!

 

One of the first elements you want to seed your cloud with is images, so users and developers can start building applications. In a private cloud environment, cloud admins will want to expose a list of standard OS (Operating System, not OpenStack…) images to be used to that end; in other words, OS master images.

 

When VIO is deployed on top of an existing vSphere environment, these OS master images are generally already present in the virtualization layer as vSphere templates, and a great deal of engineering time has gone into creating and configuring those images to reflect the needs of a given organization in terms of security, compliance, and regulatory requirements: OS hardening, customization, agent installation, etc.

 

What if you were able to reuse those vSphere templates, turn them into OpenStack images, and hence preserve all of your master OS configurations across all of your cloud deployments?

VIO supports this capability out of the box (see diagram below) and enables users to leverage their existing vSphere templates by adding them to their OpenStack deployment as Glance images, which can then be booted as OpenStack instances or used to create bootable Cinder volumes.

 

The beauty of this feature is that it is done without copying the template into the Glance data-store. The media only exists in one place (the original data-store where the template is stored) and we will actually create a “pointer” from the OpenStack image object towards the vSphere template thus saving us from the tedious and possibly lengthy process of copying media from one location to another (OS images tend to be pretty large in corporate environments).

 

This feature is available through the glance CLI only and here are the high-level steps that need to be performed to create an image:

– First: create an OpenStack image

– Second: note that image ID and specify a location pointing towards the vSphere template

– Third: in the images section of the Horizon dashboard for example, a new image will show up called “corporate-windows-2012-r2” from which instances can be launched.

Note: cloud admins will have to make sure those OS images have the cloud-init package installed on them before they can be fully used in the OpenStack environment. If cloud-init needs to be installed, this can be done either pre- or post- the import process into Glance.
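
A hedged sketch of those steps with the glance CLI (the image name follows the example above; the location URL format is illustrative, so see the configuration guide linked below for the exact syntax):

    glance image-create --name corporate-windows-2012-r2 \
      --disk-format vmdk --container-format bare
    # Point the image at the existing vSphere template instead of copying the media
    glance location-add <image-id> \
      --url "vi://<vcenter-host>/<datacenter>/vm/<template-name>"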

Run the video below for a detailed tutorial on the configuration steps, including CLI commands:

Finally, here’s the section in the official configuration guide: http://tinyurl.com/hx4z4jt


Importing vSphere VMs into OpenStack

 

A frequent request from customers deploying VIO on their existing vSphere implementation is “Can I import my existing VMs into my OpenStack environment?”

 

The business rationale for this request is that IT wants to be consistent and offer a similar level of service and user experience both to new applications deployed through the OpenStack framework and to existing workloads currently running under a vSphere management plane “only”. They basically want users in charge of existing applications to enjoy capabilities such as self-service, lifecycle management, automation, etc., and hence avoid creating a two-tier IT offering.

 

VIO supports this capability by allowing users to quickly import vSphere VMs into VIO and start managing them as instances through standard OpenStack APIs. This feature is also available through CLI only and leverages the newly released VMware DCLI toolset.


Here are the high-level steps for importing an existing VM into OpenStack, with an illustrative command sketch after the list:

– First, list the “unmanaged” VMs in vCenter (i.e., those not managed by VIO)

– Second, import one of those VMs into a specific project/tenant in OpenStack

– Third, the system generates a UUID for the newly created instance, which then shows up in Horizon where it can be managed like any other running instance.
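A minimal sketch of that flow, run from the VIO management server, might look like this – the exact DCLI namespaces and options shown are assumptions based on the toolset's conventions, so consult the VIO administration guide for the precise syntax:

    # 1. List vCenter VMs that are not yet managed by VIO:
    dcli com vmware vio vm unmanaged list --cluster <cluster-name>

    # 2. Import one of them into a specific OpenStack project:
    dcli com vmware vio vm unmanaged importvm --vm <vm-id> --tenant <project-name>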


We hope you enjoyed reading this article and that these features will make you want to go ahead and discover VIO!

If you’re ready to deploy OpenStack today, download it now and get started, or dare your IT team to try our VMware Integrated OpenStack Hands-On-Lab, no installation required.


This article was written by Hassan Hamade, a Cloud Solution Architect at VMware on the EMEA SDDC technology practice team.

16 Partners, One Live Demo – OpenStack Barcelona Interop Challenge


During Wednesday morning’s keynote session at the OpenStack Summit in Barcelona, I will be on stage along with several other vendors to show off some of our latest work on interoperability between OpenStack clouds. We will be demonstrating a single workload running successfully on over a dozen different vendors’ OpenStack products without modification, including VMware Integrated OpenStack 3.0. The idea got started back in April at the last OpenStack Summit, when our friends at IBM challenged vendors to publicly demonstrate that their products were interoperable.


VMware has long been a proponent of fostering interoperability between OpenStack clouds. I currently co-chair the Interop Working Group (formerly known as the DefCore Committee), and VMware Integrated OpenStack 3.0 is an approved OpenStack-Powered product that is compliant with the 2016.08 interoperability guideline, the newest and strictest guideline approved by the OpenStack Foundation Board of Directors. We also helped produce the Interop Working Group’s first-ever report on interoperability issues. So why do we care about interoperability? Shouldn’t everything built on OpenStack behave the same anyhow? Well, to quote the previously mentioned report on interoperability issues:


“OpenStack is tremendously flexible, feature-rich, powerful software that can be used to create clouds that fit a wide variety of use cases including software development, web services and e-commerce, network functions virtualization (NFV), video processing, and content delivery to name a few. Commercial offerings built on OpenStack are available as public clouds, installable software distributions, managed private clouds, appliances, and services. OpenStack can be deployed on thousands of combinations of underpinning storage, network, and compute hardware and software. Because of the incredible amount of flexibility OpenStack offers and the constraints of the many use cases it can address, interoperability between OpenStack clouds is not always assured: due to various choices deployers make, different clouds may have some inconsistent behaviors.  One of the goals of the [Interop Working Group]’s work is to create high interoperability standards so that end users of clouds can expect certain behaviors to be consistent between different OpenStack-Powered Clouds and products.”


Think of it this way: another amazingly flexible, powerful thing we use daily is electricity. Electricity is pretty much the same stuff no matter who supplies it to you or what you are using it for, but the way you consume it might differ for different use cases. The outlet I plug my laptop into at home is a different shape and supplies a different voltage than the one my electric oven is connected to, since the oven needs a lot more juice to bake my cookies than my laptop does to type up a blog post. My home’s air conditioner does not even have a plug: it is wired directly into the house’s circuit breaker. I consume most of my electricity as a service provided by my power company, but I can also generate some of my power with solar panels I own myself, as long as their output can be connected to my power grid. Moreover, to power up my laptop here in Barcelona, I brought along a plug adapter, since Europe’s power grid is based on its own set of standards and requirements. However, even though there are some differences, there are many commonalities: electricity is delivered over metal wiring, terminated at some wall socket, most of the world uses one of a few different voltage ranges, and you pay for it based on consumption. OpenStack is similar: an OpenStack deployment built for NFV workloads might have some different characteristics and interfaces exposed than one built as a public compute cloud.


What makes the Interop Challenge interesting is that it is complementary to the work of the Interop Working Group in that it looks at interoperability in a slightly different light. To date, the Interop Working Group has mostly focused its efforts on API-level interoperability. It does so by ensuring that products bearing the OpenStack-Powered mark pass a set of community-maintained Tempest tests to prove that they expose a set of capabilities (things like booting up a VM with the Nova v2 API or getting a list of available images using the Glance v2 API). Products bearing the OpenStack-Powered logo are also required to use designated sections of upstream code, so consumers know they are getting community-developed code driving those capabilities. While the Interop Working Group’s guidelines look primarily at the server side of things, the Interop Challenge addresses a slightly different aspect of interoperability: workload portability. Rather than testing a particular set of APIs, the Interop Challenge took a client-side approach by running a real workload against different clouds – in this case, a LAMP stack application with a load-balanced web server tier and a database backend tier, all deployed via Ansible. The idea was to take a typical application with commonly-used deployment tools and prove that it “just works” across several different OpenStack clouds.
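To make the API-level side of this concrete: the community’s refstack-client tool can run the Tempest tests associated with a given guideline against a cloud. The invocation below is illustrative only – the flags and test-list URL are assumptions that may vary between releases:

    # Run the required capability tests from the 2016.08 guideline against
    # the cloud described by tempest.conf (illustrative flags):
    refstack-client test -c ~/tempest.conf -v \
        --test-list "https://refstack.openstack.org/api/v1/guidelines/2016.08/tests?target=platform&type=required"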


In other words, the guidelines produced by the Interop Working Group assure you that certain capabilities are available to end users (just as I can be reasonably confident that any hotel room I walk into will have a socket in the wall from which I can get electricity). The Interop Challenge complements that by looking at a real-world use case: it verifies that I can actually plug in my laptop and get some work done.


Along the way, participants also hoped to begin defining some best practices for making workloads more portable among OpenStack clouds, to account for some of the differences that are a natural side effect of OpenStack’s flexibility. For example, we found that the LAMP stack workload was more portable if we let users specify certain attributes of the cloud they intended to use – such as the name of the network the instances should be attached to, the image and flavor that should be used to boot up instances, and the block device or network interface names that would be utilized by that image (as sketched below). Even though we will only be showing one particular workload on stage, that one workload serves as a starting point to help flesh out more best practices in the future.
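As a sketch of that pattern – the playbook and variable names here are hypothetical, not the actual Interop Challenge artifacts – cloud-specific attributes can be supplied as extra variables at run time instead of being hard-coded in the workload:

    # Deploy the same LAMP workload against different clouds by overriding
    # cloud-specific attributes on the command line (names are hypothetical):
    ansible-playbook lampstack.yml \
        -e network_name=private-net \
        -e image_name=ubuntu-16.04-server \
        -e flavor_name=m1.medium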


If you want to learn more about VMware’s work on interoperability or about VMware Integrated OpenStack, see us at the keynote or stop by our booth at the OpenStack Summit. And if you’re ready to deploy OpenStack today, download it now and get started, or dare your IT team to try our VMware Integrated OpenStack Hands-On-Lab, no installation required.


This article was written by Mark Voelker, OpenStack Architect at VMware.