
Author Archives: Xiao Gao

About Xiao Gao

Xiao Hu Gao is the Senior Technical Marketing Manager for OpenStack at VMware. He enjoys speaking to customers and partners about the benefits of using OpenStack with VMware technologies. Xiao has a background in DevOps, Data Center Design, and Software Defined Networking. Xiao holds CCIE certification #3000 and has filed several patents in the areas of Security and Cloud.

VMware Integrated OpenStack 5.0: What’s New

VMware today announced VMware Integrated OpenStack (VIO) 5.0. We are truly excited about our latest OpenStack distribution as VMware is one of the first companies to support and provide enhanced stability on top of the newest OpenStack Queens Release.  Available in both Carrier and Data Center Editions, VIO 5.0 enables customers to take advantage of advancements in Queens to support mission-critical workloads, and adds support for the latest versions of VMware products including vSphere, vSAN, and NSX.

For our Telco/NFV customers, VIO 5.0 is about delivering scale and availability for hybrid applications across VM and container-based workloads using a single VIM (Virtual Infrastructure Manager). VIO 5.0 will also help NFV operators fast-track a path towards Edge computing with VIO-in-a-box, secure multi-tenant isolation, and accelerated network performance using the enhanced NSX-T VDS (N-VDS).  For data center customers, advanced security, simplified user experience, and advanced networking with DNSaaS have been at the top of the wish list, and we are excited to bring those features to VIO 5.0.

VIO 5.0 NFV Feature Details:

Advanced Kubernetes Support:

Enhanced Kubernetes support:  VIO 5.0 ships with Kubernetes version 1.9.2.  In addition to the latest upstream K8S release, integration with the latest NSX-T 2.2 release is also included. VIO Kubernetes customers can leverage the same Enhanced N-VDS via the Multus CNI plugin to achieve significant improvements in container response time, reduced network latencies, and breakthrough network performance.  We also support using Red Hat Enterprise Linux as the K8S cluster image.

Heterogeneous Cluster using Node Groups:  You can now have different types of worker nodes in the same cluster. Extending the cluster node profiles feature introduced in VIO 4.1, a cluster can now have multiple node groups, each mapping to a single node profile. Instead of building isolated special-purpose Kubernetes clusters, a cloud admin can introduce new node groups to accommodate heterogeneous applications such as machine learning, artificial intelligence, and video encoding.  If resource usage exceeds a node group's limit, VIO 5.0 supports cluster scaling at the node group level.  With node groups, cloud admins can address cluster capacity based on application requirements, allowing the most efficient use of available resources.

Enhanced Cluster Manageability:  vkube heal and ssh allow you to directly ssh into any of the nodes of a given cluster and to recover failed cluster nodes based on etcd state, or from a cluster backup in the case of complete failure.

Advanced Networking:

N-VDS:  Also known as the NSX-T VDS in Enhanced Data-path mode.  Enhanced, because N-VDS runs in DPDK mode and allows containers and VMs to achieve significant improvements in response time, reduced network latencies, and breakthrough network performance.  With performance similar to SR-IOV, while maintaining the operational simplicity of virtualized NICs, NFV customers can have their cake and eat it too.

NSX-V Search Domain:  A new configuration setting in the NSX-V plugin enables the admin to configure a global search domain. Tenants use this search domain if no other search domain is set on the subnet.

NSX-V Exclusive DHCP Server per Project:  Instead of a shared DHCP edge serving subnets across multiple projects, an exclusive DHCP edge provides the ability to assign dedicated DHCP servers per network segment. An exclusive DHCP server provides better tenant isolation, and it also allows an admin to determine customer impact for maintenance windows and similar activities.

NSX-T Availability Zone (AZ):  An availability zone is used to make network resources highly available by grouping network nodes that run services like DHCP, L3, NAT, and others. Users can associate applications with an availability zone for high availability.  In previous releases, Neutron AZs were supported with NSX-V; we are extending this support to NSX-T as well.

Security and Metering:

Keystone Federation:   Federated identity provides a way to securely use existing credentials to access cloud resources such as servers, volumes, and databases across multiple endpoints in multiple authorized clouds, all with a single set of credentials.  VIO 5.0 supports Keystone-to-Keystone (K2K) federation by designating a central Keystone instance as an Identity Provider (IdP), interfacing with LDAP or an upstream SAML2 IdP.  Remote Keystone endpoints are configured as Service Providers (SP), propagating authentication requests to the central Keystone.  As part of the Keystone Federation enhancement, we will also support 3rd-party IdPs in addition to the existing support for vIDM.
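For background, the upstream OpenStack client exposes the federation primitives involved. The commands below are a minimal sketch of registering a central Keystone as an identity provider on a Service Provider endpoint; the names, remote ID, and mapping file are placeholders, and they are an upstream reference rather than VIO-specific instructions (VIO may drive this configuration through its own tooling).

# Register the central Keystone as an IdP on the SP endpoint (illustrative values)
openstack identity provider create --remote-id https://central-keystone.example.com/v3/OS-FEDERATION/saml2/idp central-keystone
# Map federated users into local groups/projects using a JSON rules file
openstack mapping create --rules k2k-mapping.json k2k-mapping
# Tie the IdP and the mapping together under the saml2 protocol
openstack federation protocol create saml2 --mapping k2k-mapping --identity-provider central-keystone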

Gnocchi:   Gnocchi is the project name of a TDBaaS (Time Series Database as a Service) project that was initially created under the Ceilometer umbrella. Rather than storing raw data points, it aggregates them before storing them.  Because Gnocchi computes all the aggregations at ingestion, data retrieval is exceptionally fast.  Gnocchi resolves performance bottlenecks in Ceilometer’s legacy architecture by providing an extremely robust foundation for the metric storage required for billing and monitoring.  The legacy Ceilometer API service has been deprecated upstream and is no longer available in Queens.  Instead, the Ceilometer API and functionality have been broken out into the Aodh, Panko, and Gnocchi services, all of which are fully supported in VIO 5.0.
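From an operator's point of view, metrics are now queried through the Gnocchi client rather than the old Ceilometer API. The commands below are a generic gnocchiclient sketch (the metric ID is a placeholder), assuming the client is installed and the usual OpenStack auth environment variables are set.

# List resources Gnocchi knows about (e.g., Nova instances)
gnocchi resource list --type instance
# List the metrics collected for those resources
gnocchi metric list
# Show aggregated measures for a single metric
gnocchi measures show --aggregation mean <metric-id>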

Default Drop Policy:   Enable this feature to ensure that traffic to a port that has no security groups and has port security enabled is always discarded.

End to End Encryption:  The cloud admin now has the option to enable API encryption for internal API calls in addition to the existing encryption on public OpenStack endpoints.  When enabled, all internal OpenStack API calls will be sent over HTTPS using strong TLS 1.2 encryption.  Encryption on internal endpoints helps avoid man-in-the-middle attacks if the management network is compromised.

Performance and Manageability:

VIO-in-a-box:  Also known as the “Tiny” deployment. Instead of separate physical clusters for management and compute, the VMware Integrated OpenStack control and data planes can now be consolidated on a single physical server.   This drastically reduces the footprint of a deployment and is ideal for Edge computing scenarios where power and space are a concern.  VIO-in-a-box can be preconfigured manually or fully automated with the OMS API.

Hardware Acceleration:  GPUs are synonymous with artificial intelligence and machine learning.  vGPU support gives OpenStack operators the same benefits for graphics-intensive workloads as traditional enterprise applications: specifically resource consolidation, increased utilization, and simplified automation. The video RAM on the GPU is carved up into portions.  Multiple VM instances can be scheduled to access available vGPUs.  Cloud admins determine the amount of vGPU each VM can access based on VM flavors.  There are various ways to carve vGPU resources. Refer to the NVIDIA GRID vGPU user guide for additional detail on this topic.  
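To make the flavor-based consumption model concrete, the snippet below is a hedged sketch using the upstream Queens convention of requesting virtual GPUs through a flavor resource extra spec; the flavor name, sizes, image, and network are placeholders, and the exact extra specs supported for vGPU in VIO 5.0 should be confirmed against the VIO and NVIDIA GRID documentation.

# Create a flavor that requests one virtual GPU (upstream Queens convention)
openstack flavor create --vcpus 4 --ram 8192 --disk 40 vgpu.small
openstack flavor set vgpu.small --property "resources:VGPU=1"
# Boot an instance that will be scheduled onto a host with available vGPU capacity
openstack server create --flavor vgpu.small --image ubuntu-16.04 --nic net-id=<net-uuid> ml-training-01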

OpenStack at Scale:  VMware Integrated OpenStack 5.0 features improved scale, having been tested and validated to run 500 hosts and 15,000 VMs in a region. This release will also introduce support for multiple regions at once as well as monitoring and metrics at scale.

Elastic TvDC:  A Tenant Virtual Datacenter (TvDC) can now extend across multiple clusters.  Building on the single-cluster TvDCs introduced in VIO 4.0, VIO 5.0 allows a TvDC to span multiple clusters.  Cloud admins can create several resource pools across multiple clusters, assigning the same name, project-id, and a unique provider-id. When tenants launch a new instance, the OpenStack scheduler and placement engine will schedule the VM request to any of the resource pools mapped to the TvDC.

VMware at OpenStack Summit 2018:

VMware is a Premier Sponsor of OpenStack Summit 2018, which runs May 21-24 at the Vancouver Convention Centre in Vancouver, BC, Canada. If you are attending the Summit in person, we invite you to stop by VMware’s booth (located at A16) for feature demonstrations of VMware Integrated OpenStack 5 as well as VMware NSX and VMware vCloud NFV.  Hands-on training is also available (RSVP required).   The complete schedule of VMware breakout sessions, lightning talks, and training presentations can be found here.

A Deeper Look Into OpenStack Policy Update

Written by Xiao Gao, with valuable feedback and input from Mark Voelker.

While working with customers that are switching over to VMware Integrated OpenStack (VIO) from a different OpenStack distribution, we heard a recurring need to update policies. The reasons were:

  • Backward compatibility with their legacy OpenStack deployment.
  • Internal company process and procedure alignment.

While updating policy is no more complicated on VIO than on other distributions, it is an operation that we have traditionally advised our customers to avoid.  Here are some of the reasons:

1). Upgrades. While many non-default changes can seem trivial and straightforward, VMware can’t guarantee that the upstream implementation will always be backward compatible when moving between releases. Therefore, the responsibility for maintaining day-2 changes lies with the customer.

2). Snowflake avoidance.  Upstream gate tests focus almost exclusively on default policies. The risk of exposing unexpected side effects increases when the security posture of an operation is relaxed or tightened.  Security is also a concern when relaxing policies.  Similarly, most popular OpenStack orchestration/monitoring tools such as Terraform, Gophercloud, or Nagios are implemented assuming default policies. When policies are made more restrictive, it can cause your favorite OpenStack tools to fail.

Snowflakes are not only difficult to support and maintain, they are often the cause of unexpected outages.

3). Leverage an external CMP for enhanced governance and control. An external CMP such as vRA is designed to integrate business processes into IaaS consumption. Instead of maintaining low-level policy changes, leverage the out-of-the-box capabilities of vRA to control what users have access to.

Implementation Options:

We understand there are scenarios where policy changes are required. Our recommendation for those scenarios is to leverage the VIO custom playbook to make the changes.  The basic idea behind the custom playbook:

  1. The customer codes up what has to change using Ansible.
  2. VIO decides when to apply the required changes, so they survive upgrades and other maintenance tasks.

While VIO doesn’t validate the contents of the custom playbook, it’s essential to write the playbook in a manner that is modular and agnostic to the OpenStack version.  The ideal playbook is stateless, grouped by operational action, and keeps everything else aligned with upstream (see the example section for details).  Logging is on by default.

Working Example:

Let’s look at an example.  Say we want regular users to be able to create shared networks.  To do that we need to modify /etc/neutron/policy.json and change:

"create_network:shared": "rule:admin_only"

to:

"create_network:shared": ""

There are a number of ways to accomplish the above task.  You can go down the path of j2 templates and introduce variables for each policy modification.  But this approach requires discipline from the operator to update the entire set of j2 policy templates before any significant upgrade to avoid drift or conflicts with upstream.  On the other hand, if you use the direct file manipulation method, you change only the parameters that are required in your local environment and leave everything else in constant alignment with upstream.

The example below uses the Ansible lineinfile module to manipulate the file directly:

# The custom playbook is run on initial deployment configuration, on a patch,
# or on an upgrade.  It can also be run via the viocli command line:
#   viocli deployment run-custom-playbook
#
# Copy this file and all supporting files to:
#   /opt/vmware/vio/custom/custom-playbook.yml
#
---
- hosts: controller
  sudo: true
  any_errors_fatal: true

  tasks:
    - name: stat check for policy.json
      stat: path=/etc/neutron/policy.json
      register: policy_stat

- hosts: controller
  sudo: true
  any_errors_fatal: true

  tasks:
    - name: backup policy.json
      command: cp /etc/neutron/policy.json /etc/neutron/policy.{{ lookup('pipe', 'date +%Y%m%d%H%M%S') }}.json
      when: policy_stat.stat.exists

- hosts: controller
  sudo: true
  any_errors_fatal: true

  tasks:
    - name: custom playbook - allow users to create shared networks
      lineinfile:
        dest: /etc/neutron/policy.json
        regexp: "^(\\s*){{ item.key }}:\\s*\".*\"(,?)$"
        line: "\\1{{ item.key }}: {{ item.value }}\\2"
        backrefs: yes
      with_dict: {'"create_network:shared"': '""' }

The example uses back references (the parentheses in the regexp line and the \\1 and \\2 in the line value) to preserve the indentation/leading spaces at the beginning of each line and the comma at the end of the line (if it’s present).  Back references make the regex look a tad more complicated, but they keep the formatting in place.

Log Outputs:

Below are sample logs:

[Screenshots: sample log output from the custom playbook run]

Conclusion

This post outlined the thought process involved in updating OpenStack policies.  I would love to hear back from you.

Also, VIO 4.1 is now GA.  You can download a 60-day VIO evaluation now and get started.

VMware Integrated OpenStack 4.1: What’s New

VMware announced general availability (GA) of VMware Integrated OpenStack (VIO) 4.1 on Jan 18th, 2018. We are truly excited about our latest OpenStack distribution, which gives our customers enhanced stability on top of the Ocata release and support for the latest versions of VMware products across vSphere, vSAN, and NSX V|T (including NSX-T LBaaSv2). For OpenStack cloud admins, the 4.1 release is also about enhanced control:  control over API throughput, virtual machine bandwidth (QoS), deployment form factor, and user management across multiple LDAP domains.  For Kubernetes admins, 4.1 is about enhanced tooling:  tooling that enables control plane backup and recovery, integration with Helm and Heapster for simplified application deployment and monitoring, and centralized log forwarding. Finally, VIO deployment automation has never been more straightforward using the newly documented OMS API.

4.1 Feature Details:

  • Support for the latest versions of VMware products – VIO 4.1 supports and is fully compatible with VMware vSphere 6.5 U1, vSAN 6.6.1, VMware NSX for vSphere 6.3.5, and VMware NSX-T 2.1.   To learn more about vSphere 6.5 U1, visit here; for NSX-V 6.3.5 and NSX-T 2.1, visit here.
  • Public OMS API – The management server APIs that can be used to automate deployment and lifecycle management of VMware Integrated OpenStack are now available for general consumption. Users can perform tasks such as provisioning the OpenStack cluster, starting/stopping the cluster, and gathering support bundles using the OMS public API.  Users can also leverage the Swagger UI to check and validate API availability and specs.

API Base URL: https://[oms_ip]:8443/v1

Swagger UI: https://[oms_ip]:8443/swagger-ui.html

Swagger Docs: https://[oms_ip]:8443/v2/api-docs
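As a quick sanity check, you can pull the published API specification from the Swagger docs endpoint listed above; the snippet below uses a placeholder address and assumes a self-signed certificate on the management server (hence -k).

curl -k https://<oms_ip>:8443/v2/api-docs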

  • HAProxy rate limiting – The cloud admin has the option to enable API rate limiting for public-facing API access. If the received API rate exceeds the configured rate, clients receive a 429 error with a Retry-After header that indicates a wait duration.  Update the custom.yml deployment configuration file to enable the HAProxy rate limiting feature.
  • Neutron QoS – Before VIO 4.1, Nova image or flavor extra-specs controlled network QoS against the vCenter VDS.  With VIO 4.1, the cloud administrator can leverage Neutron QoS to create a QoS policy and map it to ports or logical switches. Any virtual machine associated with the port or logical switch inherits the predefined bandwidth policy (see the sketch after this list).
  • Native NSX-T Load Balancer as a Service (LBaaS) – Before VIO 4.1, NSX-T customers had to bring their own Nginx or a third-party LB for application load balancing.  With VIO 4.1, NSX-T LBaaSv2 can be provisioned using either Horizon or the Neutron LBaaS API.  Each load balancer must map to an NSX-T Tier 1 logical router (LR).  A missing LR, or an LR without a valid uplink, is not a supported topology.
  • Multiple domain LDAP backend – VMware Integrated OpenStack 4.1 supports SQL plus one or more domains as identity sources.  Up to a maximum of 10 domains is supported, and each domain can belong to a different authentication backend.  Cloud administrators can create/update/delete domains and grant/revoke domain administrator users.  A domain administrator is a local administrator, delegated to manage resources such as users, quotas, and projects for a specific domain. VIO 4.1 supports both AD and OpenDirectory as authentication backends.
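The Neutron QoS workflow mentioned above looks roughly like the following with the standard OpenStack client; the policy name, bandwidth value, and port ID are placeholders.

# Create a QoS policy and attach a bandwidth-limit rule to it
openstack network qos policy create 100mbit-cap
openstack network qos rule create --type bandwidth-limit --max-kbps 100000 100mbit-cap
# Apply the policy to a port (or to a whole network with 'openstack network set')
openstack port set --qos-policy 100mbit-cap <port-id>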

4.1 NFV and Kubernetes Features:

  • VIO-in-a-box –  AKA the Tiny deployment. Instead of separate physical clusters for management and compute, a VIO deployment can now be consolidated on a single physical server.   VIO-in-a-box drastically reduces the footprint and is suitable for environments that do not have high-availability requirements or large workloads. VIO-in-a-box can be preconfigured manually or fully automated with the OMS API.  It can be shipped as a single-RU appliance to any manned or unmanned data center where space, capacity, or availability of onsite support is the biggest concern.
  • VM Import – Further expanding on VM import capabilities, you can now import vSphere VMs with multiple disks and NICs.  Any VMDK not classified as the VM root disk imports as a Cinder volume.  Existing networks import as provider networks with access restricted to the given tenant.  The ability to import vSphere VM workloads into OpenStack and run critical Day 2 operations against them via OpenStack APIs is the foundation we are setting for future, more sophisticated use cases around availability.  Refer here for VM import instructions.
  • CPU policy for latency-sensitive workloads – Latency-sensitive workloads often require dedicated reservations of CPU, memory, and network.  In 4.1, we introduced CPU policy configuration using the Nova flavor extra spec hw:cpu_policy.  Setting this policy determines how vCPUs are mapped to an instance (see the example after this list).
  • Networking passthrough – Traditionally, Nova flavor or image extra-specs defined the workflow for hardware passthrough, without direct involvement of Neutron.  VIO 4.1 introduces Neutron-based network passthrough device configuration.  The Neutron-based approach allows cloud administrators to control and manage network settings such as the MAC, IP, and QoS of a passthrough network device.   Although both options will continue to be available, the recommendation going forward is to use the Neutron workflow for network devices and Nova extra-specs for all other hardware passthrough devices.  Refer to the upstream and VMware documentation for details.
  • Enhanced Kubernetes support – VIO 4.1 ships with Kubernetes version 1.8.1.  In addition to the latest upstream release, integration with the widely adopted application deployment and monitoring tools Helm and Heapster is standard out of the box.  VIO 4.1 with NSX-T 2.1 also allows you to consume Kubernetes network security policy.
  • VIO Kubernetes support bundle –  Opening support tickets couldn’t be simpler with the VIOK support bundle.  Using a single-line command that specifies the start and end dates, VIO Kubernetes will capture logs from all components required to diagnose tenant-impacting issues within the specified time range.
  • VIO Kubernetes Log Insight integration – The cloud administrator can specify the FQDN of the Log Insight instance as the logging server.  The current release supports a single logging server.
  • VIO Kubernetes control plane backup / restore –  Kubernetes admins can perform cluster-level backups from the VIOK management VM. Each successful backup produces a compressed tar backup file.
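The CPU policy item above maps to a standard Nova flavor extra spec; the sketch below uses placeholder flavor sizing and the upstream hw:cpu_policy=dedicated value for pinned vCPUs.

# Create a flavor and request dedicated (pinned) vCPUs for latency-sensitive workloads
openstack flavor create --vcpus 4 --ram 8192 --disk 20 nfv.medium.dedicated
openstack flavor set nfv.medium.dedicated --property hw:cpu_policy=dedicated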

Try VMware Integrated OpenStack 4.1 Today

Infrastructure as Code: Orchestration with Heat

This blog post was created by Anil Gupta.  Additional comments and reviews: Maya Shiran and Xiao Gao

In this blog post I will talk about the configuration automation and orchestration that can be done with Heat, the orchestration program that comes with OpenStack (and with VIO, VMware Integrated OpenStack).

Perhaps a question on your mind is “Why do I need an orchestration solution such as Heat when I have access to the OpenStack Command Line Interface (CLI)?”. Imagine you are configuring a simple virtual infrastructure that consists of a web server, an application server, and a database server.  You not only have to deploy the three instances, you also need to deploy one network instance per server. You also need to account for the router to connect to the outside world, and the floating IP that will be assigned to the web server so that users can access the application. Making one-off API/CLI calls to deploy these components is fine during development. However, what happens when you’re ready to go to production? What if performance tests show that your deployment requires multiple instances at each infrastructure tier? Managing such an infrastructure using the CLI is not scalable.

This is where Heat comes in. Heat is the main project of the OpenStack orchestration program and allows users to describe deployments of complex cloud applications in text files called Heat Orchestration Templates (HOT). These templates, created in simple YAML format, are parsed and executed by the Heat engine. For example, in your template you can specify the different types of infrastructure resources you will need, such as servers, floating IP addresses, and storage volumes. The template also manages relationships between these resources (such as “this volume is connected to this server”), which allows it to handle complex configurations. The Heat engine then creates all of your infrastructure in the correct order to completely launch your application.

Heat also offers the ability to add, modify, or delete resources of a running stack using the stack update operation.  If I want to increase the memory of a running machine, it is as simple as editing the original template and applying the changes using heat stack-update.
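For example, with the unified OpenStack client the update step is a single command; the template file, parameter, and stack name below are placeholders.

# Apply an edited template to a running stack in place
openstack stack update -t app-infra.yaml --parameter instance_type=m1.large my-app-stack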

As a result, Heat provides a single deployment mechanism to provision the application infrastructure from detailed template files, which minimizes room for error. Due to the simplicity of the YAML format, your template files can also be used as a documentation source for IT operation runbooks. Additionally, Heat templates can be managed by version control tools such as git, so you can make changes as needed and ensure the new template version is used in the future. Finally, the integration of VIO 4.0 with vRA (vRealize Automation) provides enterprise customers the ability to consume VIO resources with governance.

Working Example:

The Heat template consists of two sections. The first part defines the parameters such as image id and instance type. The second part defines the resources that are managed through this template. All of these variables can be parameterized and the template can be made generic. Once you have parameterized these variables, you can specify appropriate values for your environment in the stack-create command without having to edit the template file. This allows you to create fairly complex orchestration scenarios using Heat templates that are reusable.

Below is an example template that shows the two sections – the first section defines the various parameters, as noted above.  The second section creates the configuration for a load-balancer (LB) server, along with the needed router and network configurations.  You will see orchestration at work in this example, because the creation step for the LB server on the private subnet needs to wait for the router interface step to complete; otherwise the LB server creation fails. Please note that the code example is only for illustration purposes and is not intended to run on Heat as-is.  A complete working example can be found here.

[Screenshots: example Heat template – parameters section and LB server, router, and network resources]
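For readers who prefer text to screenshots, the fragment below is a minimal hedged sketch of the same idea (not the post's full example): the LB server resource uses depends_on so it waits for the router interface before being created. All names, the image, and the external network are placeholders.

cat > lb-example.yaml <<'EOF'
heat_template_version: 2016-10-14

parameters:
  image_id:
    type: string
  instance_type:
    type: string
    default: m1.small
  public_net:
    type: string

resources:
  private_net:
    type: OS::Neutron::Net

  private_subnet:
    type: OS::Neutron::Subnet
    properties:
      network_id: { get_resource: private_net }
      cidr: 10.0.0.0/24

  router:
    type: OS::Neutron::Router
    properties:
      external_gateway_info: { network: { get_param: public_net } }

  router_iface:
    type: OS::Neutron::RouterInterface
    properties:
      router_id: { get_resource: router }
      subnet_id: { get_resource: private_subnet }

  lb_server:
    type: OS::Nova::Server
    depends_on: router_iface        # wait for the router interface, as described above
    properties:
      image: { get_param: image_id }
      flavor: { get_param: instance_type }
      networks:
        - network: { get_resource: private_net }
EOF

openstack stack create -t lb-example.yaml --parameter image_id=ubuntu-16.04 --parameter public_net=ext-net lb-stack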

You will see the use of value_specs in the example below, which is a way of providing vendor-specific key/value configuration options.

[Screenshots: Heat template excerpt showing value_specs usage]

This template, when run, invokes the OpenStack orchestration service through the OpenStack APIs, which in turn leverage the core OpenStack services (such as Nova, Cinder, Glance, and Neutron) in VIO for automated creation and management of the specified resources.

Conclusion

This post showed how VIO with Heat allows developers to create their infrastructure configuration as code, and to orchestrate routine steps such as provisioning servers, storage volumes, and networks, along with their dependencies, in a quick and easy manner.

The complete working example of the Heat stack in this post is in the VMware Integrated OpenStack Hands-on Lab; don’t forget to try it out.  You can also download a 60-day VIO evaluation now and get started.

OpenStack and Kubernetes Better Together

Virtual machines and containers are two of my favorite technologies.  In today’s DevOps-driven environment, delivering applications as microservices allows an organization to provide features faster.   Splitting a monolithic application into multiple portable fragments based on containers is often at the top of most organizations’ digital transformation strategies.   Virtual machines, delivered as IaaS, have been around since the late 90s; they are a way to abstract hardware to offer enhanced capabilities in fault tolerance, programmability, and workload scalability.  While enterprise IT shops large and small are scrambling to refactor applications into microservices, the reality is that IaaS is proven and often used to complement container-based workloads:

1). We’ve always viewed the IaaS layer as an abstraction of the infrastructure that provides a standard way of managing and consolidating disparate physical resources. Resource abstraction is one of the many reasons most containers today run inside of virtual machines.

2). Today’s distributed applications consist of both cattle and pets.  Without overly generalizing, pet workloads tend to be “hand fed” and often have significant dependencies on legacy OSes that aren’t container compatible.  As a result, for most organizations, pet workloads will continue to run as VMs.

3). While there are considerable benefits to containerizing NFV workloads, current container implementations are not sufficient to meet 100% of NFV workload needs.  See the IETF report for additional details.

4). The ability to “right size” the container host for dev/test workloads where multiple environments are required to perform different kinds of testing.

Rather than being mutually exclusive, the two technologies have over time proven to complement each other.   As long as there are legacy workloads and better ways to manage and consolidate sets of diverse physical resources, virtual machines (IaaS) will co-exist to complement containers.

OpenStack IaaS and Kubernetes Container Orchestration:

It’s a multi-cloud world, and OpenStack is an important part of the mix. From the data center to NFV, due to the richness of its vendor-neutral API, OpenStack clouds are being deployed to meet organizations’ needs for public-cloud-like IaaS consumption in a private data center.   OpenStack is also a perfect complement to K8S, providing underlying services that are outside the scope of K8S.  Kubernetes deployments in most cases can leverage the same OpenStack components to simplify the deployment and developer experience:


1). Multi-tenancy:  Create K8S cluster separation leveraging OpenStack Projects. Development teams have complete control over cluster resources in their project and zero visibility to other development teams or projects.

2). Infrastructure usage based on HW separation:  IT departments often are the central broker for development teams across the entire organization. If development team A funded X number of servers and team B funded Y, the OpenStack scheduler can ensure K8S cluster resources are always mapped to the hardware allocated to the respective development teams.

3).  Infrastructure allocation based on quota:  Deciding how much of your infrastructure to assign to different use cases can be tricky, so organizations can also leverage the OpenStack quota system to control infrastructure usage.

4). Integrated user management:  Since most K8S developers are also IaaS consumers, leveraging the Keystone backend simplifies user authentication for the K8S cluster and namespace sharing.

5). Container storage persistence:  Since K8S pods are not durable, storage persistence is a requirement for most stateful workloads.   When leveraging the OpenStack Cinder backend, the storage volume is re-attached automatically after a pod restart (on the same or a different node); see the sketch after this list.

6). Security:  VMs and containers will continue to co-exist for the majority of enterprise and NFV applications, so providing uniform security enforcement is critical.   Leveraging Neutron integration with industry-leading SDN controllers such as VMware NSX-T can simplify container security insertion and implementation.

7). Container control plane flexibility: K8S HA requires load-balanced multi-master and scalable worker nodes.  When integrated with OpenStack, it is as simple as leveraging LBaaSv2 for master node load balancing.  Worker nodes can scale up and down using tools native to OpenStack.  With VMware Integrated OpenStack, K8S worker nodes can scale vertically as well using the VM live-resize feature.
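As a hedged illustration of point 5, the manifest below uses the in-tree Cinder provisioner that Kubernetes shipped in this era to back PersistentVolumeClaims with Cinder volumes; the class name, availability zone, claim name, and size are placeholders.

cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cinder-standard
provisioner: kubernetes.io/cinder      # in-tree Cinder provisioner
parameters:
  availability: nova
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: cinder-standard
  resources:
    requests:
      storage: 10Gi
EOF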

Next Steps:

I will use the VMware Integrated OpenStack (VIO) implementation to provide examples of this perfect match made in heaven. This blog is part 1 of a 4-part blog series:

1). OpenStack and Containers Better Together (This Post)

2). How to Integrate your K8S  with your OpenStack deployment

3). Treat Containers and VMs as “equal class citizens” in networking

4). Integrate common IaaS and CI / CD tools with K8S

Infrastructure as Code with VMware Integrated OpenStack

Historically, organizations have “racked and stacked” hardware, and then installed and configured software and applications for their IT needs. With the advent of cloud computing, IT organizations could start taking advantage of virtualization to enable the on-demand provisioning of compute, network, and storage resources.  By using the CLI or GUI, users have been able to manually provision these resources. However, with manual provisioning, you carry the following risks:

  • Inconsistency due to human error, leading to deviations from the defined configuration.
  • Lack of agility by limiting the speed at which your organization can release new versions of services in response to customer needs.
  • Difficulty in attaining and maintaining compliance to corporate standards due to the absence of a repeatable process


Infrastructure as Code (IAC) solutions address these issues by allowing you to automate the entire configuration and provisioning process. In its essence, this concept allows IT teams to treat infrastructure the same way application developers treat their applications – with code. The definition of the infrastructure is in human-readable software code. The code allows you to script, in a declarative way, the final state that you want for your environment; when executed, your target environment is automatically provisioned. A recent blog on this topic by my colleague David Jasso referred to the IAC paradigm as IT As Developer. For additional information on IAC, read the two Forrester reports: How A Sysadmin Becomes A Developer (Chris Gardner and Robert Stroud; Forrester Research; March 2017) and Lead The I&O Software Revolution With Infrastructure-As-Code (Chris Gardner and Richard Fichera; Forrester Research; September 2017).

In this blog post I will show you how, by using Terraform and VMware Integrated OpenStack (VIO), you can describe and execute your target infrastructure configuration as code. Terraform allows developers to define their application infrastructure via editable text files ending in the .tf extension. You can write Terraform configurations in either Terraform format (using the .tf extension) or in JSON format (using the .tf.json extension).  When executed, Terraform consumes the OpenStack API services from VIO (the OpenStack distribution from VMware) to provision the infrastructure as you have defined it.  As a result, you can use these provisioning tools, in conjunction with VIO, to implement infrastructure as code.

For those not familiar with VIO, VIO differentiates itself from upstream distributions by making install, upgrade, and maintenance operations simple, and by leveraging VMware enterprise-grade infrastructure to provide the most stable release of OpenStack in the market.  In addition to the OpenStack distribution, VIO also helps bridge gaps in traditional OpenStack management, monitoring, and logging by making VMware enterprise-grade tools such as vRealize Operations Manager and Log Insight OpenStack-aware with no customization.

  • Standard DefCore Compliant OpenStack Distribution delivered as an OVA
  • End to end support by VMware, OpenStack and SDDC infrastructure.
  • The best foundational Infrastructure for IaaS is available with vSphere Compute (Nova), NSX Networking (Neutron), vSphere Storage (Cinder / Glance)
  • OpenStack endpoint management and logging is simple and easy to perform with VMware vRealize Operations Manager for management, vRealize Log Insight for logging, and vRealize Business for chargeback analysis
  • Best way to leverage existing VMware investment in People, Skills, and Infrastructure

Let’s look at the structure of code that makes IAC possible. The first step in defining the configuration is declaring all the variables a user needs to provide within the Terraform configuration – see the example below. The variables can have default values. Putting as much site-specific information as possible into variables (rather than hardcoding the configuration parameters) makes the code more reusable. Please note that the code below is for illustration only.  A complete example can be downloaded from here.

[Screenshots: Terraform variable definitions]

The next step in defining the configuration is identifying the provider. Terraform leverages multiple providers to talk to services such as AWS, Azure or VIO (OpenStack distribution from VMware).  In the example below we specify that the provider is OpenStack, using the variables that you defined earlier.

[Screenshots: Terraform OpenStack provider configuration]

Next you define the resource configuration.  Resources are the basic building blocks of a Terraform configuration. In the example code below (please use it as an illustration), you use Terraform code, which in turn leverages VIO, to create the compute and network resource instances, and then assign the network ID to the compute instance to stand up a networked compute instance. As you will see in the example, the properties of a created resource may be passed as arguments to the creation of the next resource, such as using the network ID from the ‘network’ resource when creating the ‘subnet’ resource in the code below.

[Screenshots: Terraform resource definitions for the network, subnet, and compute instance]
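The fragment below is a hedged, minimal recreation of that pattern using the Terraform OpenStack provider's network, subnet, and compute resources; it is not the post's original example, it assumes the variables declared in the earlier step, and the credentials, names, image, and flavor are placeholders.

cat > main.tf <<'EOF'
provider "openstack" {
  auth_url    = "${var.auth_url}"
  tenant_name = "${var.tenant_name}"
  user_name   = "${var.user_name}"
  password    = "${var.password}"
}

resource "openstack_networking_network_v2" "app_net" {
  name           = "app-net"
  admin_state_up = "true"
}

resource "openstack_networking_subnet_v2" "app_subnet" {
  name       = "app-subnet"
  network_id = "${openstack_networking_network_v2.app_net.id}"   # pass the network ID downstream
  cidr       = "10.10.10.0/24"
  ip_version = 4
}

resource "openstack_compute_instance_v2" "app_vm" {
  name        = "app-vm-01"
  image_name  = "${var.image}"
  flavor_name = "${var.flavor}"

  network {
    uuid = "${openstack_networking_network_v2.app_net.id}"       # attach the instance to the new network
  }
}
EOF

terraform init && terraform apply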

Infrastructure as code allows you to treat all aspects of operations as software and manage almost everything in code, including servers, storage, networks, log files, automated tests, deployment processes, and so on. The concept extends to making configuration changes as well.  When you want to make an infrastructure configuration change, you check the configuration code files out of your code repository management system such as git, edit them to make the changes you want, and check in the new version. So you can use git to make and track changes to your configuration code – just as developers do.
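In day-to-day terms, and assuming the hypothetical main.tf above lives in a git repository, that change loop might look like the following sketch.

# Branch, edit the configuration, preview, apply, and record the change
git checkout -b increase-web-flavor
# ... edit main.tf (for example, change the flavor variable) ...
terraform plan        # review what will change before touching the environment
terraform apply
git commit -am "Increase web tier flavor"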

Summary:

In this blog post, we have shown how you can implement the IAC paradigm by using Terraform against VIO.  Download a 60-day VIO evaluation now and get started, or try out the VIO 4.0-based VMware Integrated OpenStack Hands-on Lab, no installation required.

Best Practice Recommendations for Virtual Machine Live Resize

As computing demands increase, server resources must “grow” or “scale” to meet those requirements.   There are two basic ways to scale computing resources. The first is to add more VMs or “horizontally scale.” Say a web front end is using 90% of the allocated computing capacity. If traffic to the site increases, the current VM may not have enough CPU, memory, or disk available to keep up.  The site administrator could deploy an additional VM to support the growth in the workload.


Not all applications scale horizontally.  NFV workloads such as virtual routers or gateways may need to “vertically scale”.  For example, a virtual machine with 2 vCPU / 4 GB memory may need to double its vCPU and memory rather than adding a second virtual machine.  While the OpenStack ecosystem offers many tools for horizontal scaling (Heat, Terraform, etc.), options for scaling up are much more limited.  The Nova project has a long-pending proposal for live resize (hot plug).  Unfortunately, this feature still hasn’t been implemented.  Without live-resize, to increase the memory/CPU/disk of an instance, OpenStack must first power down the VM, migrate it to a VM flavor that offers more CPU/memory/disk, and finally power the VM back up.   VM power-down impacts SLAs and can trigger cascading failures for NFV-based workloads (route convergence, loops, etc.).

By leveraging the existing OpenStack resize API and the recommendations introduced in the upstream live-resize specification, VMware Integrated OpenStack (VIO) 4.0 offers the ability to resize any machine, as long as the guest OS supports it, without the need to power down the system. OpenStack users issue the standard OpenStack resize request.  The VMDK driver examines the CPU/memory/disk changes specified by the flavor, and the settings of the virtual machine, to determine whether the operation can be performed. If the guest OS supports live-resize, resources are added without power-down.  If the guest OS cannot support live-resize, then the traditional Nova instance resize operation takes place (which powers off the instance).

Best Practice Recommendations:

When implementing live-resize in your environment, be sure to follow the following recommendations:

  1. Cloud admins or application owners need to indicate that the guest OS can handle live resize for a specific resource using the image metadata os_live_resize=<resource>.  A list of guest OSes that support hot plug / live-resize can be found here.  Available resource options are disk, memory, or vCPU.   You can live-resize the VM based on any combination of the resource types:
    • Add CPU resources to the virtual machine
    • Add memory resource to the virtual machine.
    • Increase virtual disk size of the virtual machine
    • Add CPU and Memory, CPU and Disk, or Memory and Disk
    • Increase CPU, Memory, and Disk
    • Hot removal of CPU/memory is not supported
  2. If a resized VM exceeds the capacity of a host, VMware DRS can move the VM to another host within the cluster where resources are available.  DRS is simple to configure and extremely powerful.  My colleague Mathew Mayer wrote an excellent blog on Load balancing vSphere Clusters with DRS, be sure to take a look.
  3. Image Metadata updates for disk resize:
    • Linked clone must be set to false.  This is because vCenter cannot live-resize linked-clone disks
    • The disk adapter must be non-IDE.  This is because IDE disks do not support hot-swap/add.

See diagram below:

[Diagram: live-resize workflow]

4). VMware supports memory resize of 4 GB and above.  Resizing below 4 GB should work in most cases, but it is not officially supported by VMware.

Live-resize Example Workflow:

Step 1). Upload image:

openstack image create --disk-format vmdk --container-format ova --property vmware_ostype="ubuntu64Guest" --property os_live_resize=vcpu,memory,disk --property img_linked_clone=false --file ./xenial-server-cloudimg-amd64.ova <some name>

Step 2). Disable linked clone (if using the default Ubuntu 16.04 cloud image bundled with VIO 4.0):

openstack image set --property img_linked_clone=false <some name>

Step 3). Boot a VM:

openstack server create --flavor m1.medium --image <some name> --nic net-id=net-uuid resize_vm

Step 4). Resize to the next flavor:

openstack server resize --flavor m1.large <resize_VM>

Step 5). Confirm resize:

openstack server resize --confirm <server>

Step 6). SSH to the VM and run the scripts below to bring the new resources online in the guest OS.

  • Memory online

for i in `grep offline /sys/devices/system/memory/*/state | awk -F / '{print $6}' | awk -F y '{print $2}'`; do echo "bring memory$i online"; echo online > /sys/devices/system/memory/memory$i/state; done

  • CPU online:

https://communities.vmware.com/docs/DOC-10493
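The linked article covers the vSphere side; inside the guest, a generic Linux approach (run as root) is to flip the sysfs online flag for any hot-added vCPUs, as sketched below.

# Bring hot-added vCPUs online in the guest OS
for cpu in /sys/devices/system/cpu/cpu[0-9]*/online; do echo 1 > "$cpu"; done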

Simplify your NFV workloads by leveraging the industry’s most stable and battle-tested OpenStack distribution.  Instead of re-architecting your virtual network and security to enable horizontal scaling, live-resize it!  It’s simple and hitless.   Download a 60-day evaluation now and get started, or try out the VIO 4.0-based VMware Integrated OpenStack Hands-on Lab, no installation required.

VMware Integrated OpenStack 4.0: What is New

VMware announced VMware Integrated OpenStack 4.0 Data Center edition at VMworld in Las Vegas.  We are truly excited about our latest OpenStack distribution, which gives our customers the new features and enhancements included in the Ocata release, along with a bundled container platform option. For OpenStack cloud admins, the 4.0 Data Center edition is also about enhanced platform performance and manageability, increased scale, and advanced networking.


New Features Include:

OpenStack Features available in Newton + Ocata:

VIO 4.0 is based on the upstream Ocata release.  Ocata is the first release in which Cells v2 is the default deployment configuration for OpenStack Nova; a single cell is supported in Ocata.  Cell support enables future scale-out of an OpenStack cloud in a more distributed fashion.  The placement service, introduced in the Newton release, is now an essential part of VIO 4.0 in determining the optimum placement of VMs. Not to be mistaken for VMware DRS, the OpenStack placement service allows a cloud admin to set up pools of resources and then set up allocations for resource providers. VM placement policies can be built on top of those resources for optimal placement of VMs (additional blogs to follow).

New capabilities in OpenStack Horizon include enhanced workload placement, LBaaSv2, and Heat template versions, to name a few. Heat template versions provide users with a list of available template versions and the functions for a particular template version.

Resource tagging, Cinder availability zones, enhanced Cinder snapshots, and Heat templates with conditions are some of the other notable enhancements from the upstream release available in VIO 4.0.

vRealize Automation Integration

This is another great example of how VMware empowers customers to leverage existing investments in infrastructure management and tooling. The integration provides enterprise customers the ability to consume VIO resources with governance. Using vRA XaaS blueprints, a cloud admin can automate OpenStack user and project creation, governance-based Heat template deployment, or other common aspects of VIO consumption through vRA governance. Once OpenStack resources are on-boarded, vRA integration allows cloud admins and users to view the VIO Horizon dashboard directly from the vRA portal using SSO integration with vIDM.

Networking Advanced Capabilities

VIO 4.0 greatly simplifies network addressing and reachability management by leveraging dynamic routing.  Instead of relying on NAT to provide address uniqueness, cloud admins can leverage Neutron address scopes and subnet pools, or the get-me-a-network feature, to define a scope of unique address space.  Tenants needing unique address space can allocate subnets from this pool without worrying about overlapping with another tenant (see the sketch below).  With BGP routing, another new VIO 4.0 feature, cloud admins can enable end-to-end connectivity dynamically without managing low-level static routes.
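In upstream Neutron terms, that shared pool is built from an address scope and a subnet pool; the sketch below uses placeholder names and prefixes.

# Admin: create a shared scope and a subnet pool that draws from it
openstack address scope create --share --ip-version 4 routable-scope
openstack subnet pool create --address-scope routable-scope --share --pool-prefix 10.64.0.0/16 --default-prefix-length 24 tenant-pool
# Tenant: carve a non-overlapping subnet out of the pool for an existing network
openstack subnet create --subnet-pool tenant-pool --network app-net app-subnet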

Enhanced Neutron availability zone support allows OpenStack tenants to place NSX ESG workloads on different physical clusters, across different racks, for increased availability.  Finally, Firewall-as-a-Service and guest VLAN tagging are some of the other major Neutron enhancements.

Enhanced Platform Support

We are extremely proud of the multi-vCenter support in VIO 4.0.  Multi-VC support with NSX-T gives VIO customers the ability to define multiple fault/availability zones, avoiding a single point of failure.  Multi-VC can also be used to scale out VIO by adding more vCenters upon reaching concurrency or total object limits.

Enterprise workloads require both horizontal and vertical scaling.  While horizontal scaling is made simple through Heat or Terraform, vertical scaling often requires a downtime/outage window.  With VIO 4.0, cloud admins can offer Glance images that support live resize: OpenStack tenants can increase the CPU, memory, and disk of their virtual machines without powering them down.  VIO 4.0 also provides increased resiliency with vCenter HA and LVM support on the OMS server to allow flexible storage growth.

Enterprise Grade Container

Finally, VIO offers enterprise-grade Kubernetes with built-in security, HA, and scale (up or down).  Out of the box, VIO provides cloud admins with simplified day-1 deployment automation for Kubernetes with multi-tenancy and user management.  Once deployed, VIO Kubernetes integrates easily with the SDDC vRealize suite of products, solving day-2 operational challenges in container lifecycle management, monitoring, and logging.  Persistent storage, load balancing, and container networking powered by VMware NSX are also standard out of the box.

Adopting agile processes is a key driver to help businesses digitally transform.  It is changing not only the way applications are coded, but also the process by which they are built and operated.  In the new DevOps-driven era, infrastructure admins and developers are solving the same problem – faster time to value.  VIO 4.0 is the answer for any organization looking to digitally transform its business.

VIO 4.0 Data Center edition will enable DevOps teams to build and deliver:

  • Container based micro-services, in addition to traditional VM based workloads
  • End-to-end infrastructure automation leveraging existing tools
  • OpenStack deployment scale out using multi-VC, OpenStack placement API and Cells v2
  • Advanced Neutron and container networking to simplify addressing and reachability while ensuring application security
  • Solving Day 2 operational challenges in Infrastructure life cycle management

Supported by the most rock-solid VMware SDDC infrastructure, VIO enables businesses to achieve faster time to value.

Try VMware Integrated OpenStack Today

Take a free test drive, no installation required, with the VMware Integrated OpenStack Hands-on Lab.  Try out the latest VIO 4.0 HOL if you are attending VMworld Vegas or Barcelona.


VMware Integrated OpenStack Glance Image Best Practices

A production cloud isn’t very useful unless users can run the virtual machine images required by their applications.  A cloud image is a single file that contains a virtual disk with an operating system installed.  For many organizations, the simplest way to obtain a virtual machine image is to download a prebuilt base cloud image with a pre-packaged version of cloud-init to support user-data injection.  Once downloaded, an organization would leverage tools such as Packer to further customize and harden on top of the base image before rolling it to production.  Most operating system projects and vendors maintain official images for direct download.  Openstack.org maintains a list of the most commonly used images here.


Recently we received some queries about the proper way to import prebuilt QCOW2 native cloud images into VMware Integrated OpenStack.  Images imported correctly, but would not boot successfully.  Common symptoms are “no Operating System found” messages generated by the virtual machine’s BIOS, the guest OS hanging during the boot cycle, or DHCP failure when trying to acquire an IP address.  After further analysis, the problems were caused either by older upstream tooling or by simple adjustments required in the cloud image to match the vSphere environment.  Specifically:

  • Some storage vendors need StreamOptimized image format.
  • Guest Images are attempting to write boot log to ttyS0, but the serial interface is not available on the VM.
  • Defects in earlier versions of the qemu-img tool while creating streamOptimized images.
  • DHCP binding failure caused by Predictive Network Interface Naming.

To overcome these issues, we came up with the following set of best practices to help you simplify the image import process.  I thought it would be a good idea to share our recommendations so others can avoid running into similar issues.

1). In VIO 3.x and earlier, serial console output is not enabled.  When booting an image that requires serial console support, use libguestfs to edit grub.cfg and remove all references to “console=ttyS0”.  Libguestfs provides a suite of tools for accessing and editing VM disk images.  Once installed, the guestmount command-line tool can be used to mount qcow2-based images.  By default, the disk image mounts in read-write mode.  More info on libguestfs here.

# guestmount -a xxx-cloudimg-amd64.img -m /dev/sda1 /mnt

# vi /mnt/boot/grub/grub.cfg

# umount /mnt

See the screen capture below:

[Screenshot: editing grub.cfg to remove console=ttyS0 references]

2). VMware vSAN requires all images to be in streamOptimized format.  When converting to VMDK format, use the -o flag to specify the subformat as streamOptimized:

# qemu-img convert -f qcow2 -O vmdk -o subformat=streamOptimized -o adapter_type=lsilogic xxx-server-cloudimg-amd64.img xxx-server-cloudimg-amd64.vmdk ; printf '\x03' | dd conv=notrunc of=xxx-server-cloudimg-amd64.vmdk bs=1 seek=$((0x4))

A few additional items to call out:

  • “lsilogic” is the recommended adapter type.  Although it is possible to set the adapter type during image upload into glance, we recommend as a good practice to always set the adapter type as part of the image conversion process.
  • Older versions of the qemu-img tool contain a bug that causes problems with the streamOptimized subformat.  The following command can be run after converting an image to correct the problem: printf '\x03' | dd conv=notrunc of=xxx-server-cloudimg-amd64.vmdk bs=1 seek=$((0x4)).   It is harmless to execute the printf even if you’re using a version of qemu-img that has the fix: all the command does is set the VMDK version to “3”, which a correct version of qemu-img will already have done.  If you are not sure what version of qemu-img you have, apply the printf command.

3). In the case of CentOS, the udev rule ln -s /dev/null /etc/udev/rules.d/80-net-name-slot.rules that is part of the image bundle is ignored during CentOS image boot, and Predictable Network Interface Naming is enabled as a result.  Our recommendation is to disable predictable naming using GRUB.  You can find more information in my previous blog.

4). Finally, with the CirrOS QCOW2 image, preserve the adapter type as ‘ide’ during the QCOW2 to VMDK conversion process.  There’s currently an upstream bug open.

# qemu-img convert -f qcow2 -O vmdk /var/www/images/cirros-0.3.5-x86_64-disk.img /var/www/images/cirros-0.3.5-x86_64-disk.idk.vmdk

qemu-img defaults to IDE if no adapter type is specified.

Once converted, you can look at the image metadata and validate information such as the disk and image type before uploading into the Glance image repository.  The image metadata can be viewed by displaying the first 20 lines of the VMDK:

# cat xxx-server-cloudimg-amd64.vmdk | head -20

You can add the newly converted image into Glance using the OpenStack CLI or Horizon.  Set the public flag when the image is ready for end-user consumption.

OpenStack CLI:

# openstack image create --disk-format vmdk --public --file ./xxx-server-cloudimg-amd64.vmdk --property vmware_adaptertype='lsiLogic' --property vmware_disktype='streamOptimized' <Image display name>

Horizon:

[Screenshots: creating the image through the Horizon UI]

Your cloud is only as useful as the application and virtual machine images you can support.  By following the simple best practice guidelines above, you will deliver a better experience to your end users by offering more virtual machine varieties with significantly reduced lead time.

Visit us at VMworld in Las Vegas; we have a large number of Demo and speaking sessions planned:

MGT2609BU:  VMware Integrated OpenStack 4.0: What’s New
MGT1785BU:  OpenStack in the Real World: VMware Integrated OpenStack Customer Panel
NET1338BU:  VMware Integrated OpenStack and NSX Integration Deep Dive
FUT3076BU:  Simplifying Your Open-Source Cloud With VMware
LDT2834BU:  Running Hybrid Applications: Mainframes to Containers
SPL182001U:  VMware Integrated OpenStack (VIO) – Getting Started
ELW182001U: VMware Integrated OpenStack (VIO) – Getting Started
SPL188602U: vCloud Network Functions Virtualization – Advanced Topics
LDT1844BU: Open Source at VMware: A Key Ingredient to Our Success and Yours

How to Deal with DHCP Failure Caused by Consistent Network Device Naming (VIO)


While testing out the latest CentOS 7 QCOW2 cloud image, we ran into an issue where the guest operating system wasn’t able to obtain a DHCP IP address after a successful boot.  After some troubleshooting, we quickly realized the NIC name was assigned based on predictable consistent network device naming (CNDN). You can read more about CNDN here.  The network script required to bring up the network interface was missing from /etc/sysconfig/network-scripts; only the default ifcfg-eth0 script was present. The network interface remained in DOWN status since the interface script wasn’t available, so the Linux dhclient couldn’t bind to the interface, hence the DHCP failure.

To fix the symptom, we simply edited and renamed the interface script to reflect the predictable name, then restarted networking.  But since this problem will show up again when booting a new VM, we need a permanent fix in the image template.

It turns out predictable naming was intended to be disabled in the CentOS 7 cloud image, based on the udev rule below:

[Screenshot: the image’s 80-net-name-slot.rules udev rule, symlinked to /dev/null]

The system ignored this setting during bootup and predictive naming was enabled as a result.

There are multiple ways to work around this:

Solution 1 – Update Default GRUB to Disable CNDN:

1). To restore the old naming convention, you can edit the /etc/default/grub file and add net.ifnames=0 and biosdevname=0 at the end of the GRUB_CMDLINE_LINUX variable:

Example:   GRUB_CMDLINE_LINUX="rd.lvm.lv=centos/swap vconsole.keymap=us crashkernel=auto rd.lvm.lv=centos/root vconsole.font=latarcyrheb-sun16 rhgb quiet net.ifnames=0 biosdevname=0"

2) Review the new configuration by printing output to STDOUT

# grub2-mkconfig

3) Update the grub2 configuration after review:

# grub2-mkconfig -o /boot/grub2/grub.cfg


Solution 2: Enable Network Manager

1) Install Network Manager:

# yum install NetworkManager

2) Start Network Manager

# service NetworkManager start

3) Run chkconfig to ensure Network Manager starts after system reboot

# chkconfig NetworkManager on

Solution 3: Create a Custom Udev Rule

We will create a udev rule to override the unintended predictable name.

1) Create a new 80-net-name-slot.rules in /etc/udev/rules.d/

# touch /etc/udev/rules.d/80-net-name-slot.rules

2). Add the line below to the new 80-net-name-slot.rules:

NAME=="", ENV{ID_NET_NAME_SLOT}!="", NAME="eth0"

Final Implementation

All three solutions solved the problem.  Approach #1 involves updating the GRUB config, so handle it with care. Solution #2 is a very hands-off approach, allowing NetworkManager to control interface states.   Most sysadmins have a love/hate relationship with NetworkManager, however. NetworkManager simplifies management of WiFi interfaces but can lead to unpredictable behavior in interface states. The most common concern is interfaces being brought up by NetworkManager when they should be down, because the sysadmin is not ready to turn those NICs up yet. The OpenStack community has reported cloud-init timing-related issues as well, although we didn’t have any problems enabling it on the CentOS 7 cloud image.  Solution #3 needs to align with overall deployment requirements in a multi-NIC environment.

In reality, CNDN was designed to solve NIC naming issues in a physical server environment.  It stops being useful with virtual workloads.  Most cloud workloads deploy with a single NIC, and the NIC is always eth0.  Consequently, disabling CNDN makes sense, and solution #1 is what we recommend.

Once the CentOS VM image is in the desired state, create a snapshot, then refer to the OpenStack documentation to upload it into Glance.  As a shortcut to validate the new image, instead of creating a snapshot, downloading it, and uploading it back into Glance, it is perfectly fine to boot a VM directly from the snapshot.   Please refer to the VIO documentation for recommended steps.

Be sure to test this out on your VMware Integrated OpenStack setup today.  If you don’t have VIO yet, try it on our VMware Integrated OpenStack Hands-on Lab, no installation required.

OpenStack Summit:

We will be at the OpenStack Summit in Boston. If you are attending the conference, swing by the VMware booth or attend one of our many sessions:

OpenStack and VMware – Use the Right Foundation for Containers

Digital Transformation with OpenStack for Modern Service Providers

Is Neutron challenging to you – Learn how VMware NSX is the solution for regular OpenStack Network & Security services and Kubernetes

OpenStack and OVN – What’s New with OVS 2.7 

DefCore to Interop and back again: OpenStack Programs and Certifications Explained

Senlin, an ideal bridge between NFV Orchestrator and OpenStack 

High availability and scalability management of VNF

How an Interop Capability becomes part of the OpenStack Interop Guidelines

OpenStack Interoperability Challenge and Interoperability Workgroup Updates: The Adventure Continues

Lightning Talk:

Openstack and VMware getting the best of both. 

Demos:

Station 1: VMware NSX & VMware Integrated OpenStack

Station 2: NFV & VMware Integrated OpenStack