
The Future For Network Engineers: Where We Are Today

A little over 7 years ago, shortly after joining VMware as a Network Virtualization Engineer, I published a blog post where I speculated on the possible evolution of the Network and Security Engineer roles, both key positions within IT staffs around the world tasked with designing, deploying and maintaining datacenter networks and firewalls.


The post generated some lighthearted controversy, as evidenced by the passionate comments you can read below that article. As a side note, you should know that I am super friendly with most of the folks who opined on my thoughts, as we keep meeting in the field, for various reasons.


Given that a lot has happened since that post, I thought it would be a good idea to reflect on the statements I made back then, check if they were right, or not, and take a guess again in terms of what the future holds. Let’s start with the concept of Network Virtualization itself and then explore some of the other predictions:


Is Network Virtualization a reality today?


Network Virtualization, this notion that one can simulate a network topology different from the physical one in order to provision connectivity for applications a lot faster, WAS a reality even at the time of my post. For years, networking vendors had been providing solutions that leveraged various encapsulation techniques, in order to propagate L2 or L3 traffic over trusted or untrusted transit networks (think “datacenter” vs. “the Internet”). IPsec, MPLS, VXLAN, Nicira’s Stateless TCP Transport (STT), etc., were all means used to achieve this goal, covering a wide array of use cases and justifications.


What we did at VMware was different though, and very revolutionary. We decided to decouple these encapsulation techniques from the network hardware and offer a virtual fabric that would work on top of literally any physical fabric, irrespective of the vendor who provided it. It is because of this decision that today you can use NSX to extend consistent application connectivity across clouds (private, hybrid and public), form factors (VMs running on multiple hypervisors, bare metal, containers and cloud native) and locations (datacenter, remote office or edge). The industry momentum behind SD-WAN, with VMware as one of the leaders in this space, is also proof that Network Virtualization goes beyond the datacenter.


What about security? What is the current state of affairs?


In my post, I imagined a world in which a particular security posture, let’s call it “a micro-segmentation policy”, could be defined and applied to target workloads regardless of cloud, location or format. Has that promise materialized in 2020?


Fortunately, the answer is yes.


Today, you can use NSX to define a security policy that is aligned with your business intent or compliance requirements, and then enforce it without worrying about where the application lives. I routinely demonstrate this multi-cloud support by showing a top-level security ruleset that looks and works the same for on-prem, AWS, Azure and some of the other public cloud offerings. What is more important, we now see technology that leverages the power of Machine Learning (ML) and Artificial Intelligence (AI) to help automate the creation and dissemination of such policies, while providing additional capabilities like malware and anomaly detection, next-gen antivirus and compliance attestation. Examples of these offerings are: vRealize Network Insight (holistic network and security visibility), NSX Intelligence (distributed analytics for providing granular network security policy recommendations), VMware Carbon Black Cloud (cloud-based analytics for providing endpoint security and protection) and VMware Secure State (cloud-native compliance engine).


Even more significant is the fact that this security is intrinsic. This means security is built-in and not bolted-on. These policies are embedded in the infrastructure (they are agentless), and they live, evolve and are decommissioned following the same lifecycle of the application. If an application is created, so is its security posture. If the application changes, or moves, so does its security posture. And finally, when an app is destroyed, the policy is automatically removed. From an operations perspective, this model has proven to be more efficient, less error-prone and obviously, more consistent than the alternatives.


What about the role of the Network Engineer itself? How has that changed?


In terms of how the role of a Network Engineer has evolved, I am going to go ahead and say that I was spot-on. This might sound like a boast, but bear with me.


Network Engineers, in particular Network Virtualization Engineers, have adopted operational models aligned with the core principles of DevOps and have acquired skills that leverage modern instrumentation, which allows them to create, manage and troubleshoot connectivity, security and elasticity policies in a consistent and repeatable manner. Current Network Engineers understand the application geometry and treat the network infrastructure that supports it as code. Furthermore, the rapid adoption of microservices has catapulted the importance of the Network Engineer role. The distributed nature of a microservices architecture means that a network is required to efficiently connect all these disparate services. Who better than an expert in networking to help design and operate the fabric that ties them all together?


VMware has open source and commercial solutions for all of the above: providers for the most popular DevOps frameworks (Terraform, Ansible, PowerShell, vRealize Automation, OpenStack Neutron, Public Cloud IaaS, Kubernetes and several others), and Service Mesh solutions, like Tanzu Service Mesh for automatic service discovery, service-to-service encryption, multi-cloud federation, observability and Service Level Objective (SLO) tracking, all leveraging the revolutionary concept of Global Namespaces.


So, was I right? If so, what’s next?


I think that my predictions were pretty accurate. The reason is very simple: my predictions were not mine alone. I rely on a fantastic team of thought leaders, amazing engineers and sales staff who have all helped forge our own path. When you have access to this talent and this passion for an industry, your hard work pays off. This is why I believe that we have influenced our own destiny. This outcome is, in a way, a self-fulfilling prophecy.


In terms of what’s next, I will leave you with a teaser: come join me and Dr. Bruce Davie at VMworld 2020. For several years now, I have helped build and present the demos that accompany his daring predictions and thoughts regarding our industry. The name of the breakout is, very apropos, “The Future of Networking with VMware NSX”, and you can find it in the VMworld 2020 Content Catalog.


So maybe in another 7 years I will be checking in with you again to see if what we anticipated today becomes a reality then. In the meantime, keep investing in your network expertise, and keep innovating.


Marcos Hernandez

Chief Technologist, Network and Security


Barbican Consumption and Operational Maintenance

VMware Integrated OpenStack (VIO) announced official support for Barbican, the OpenStack secrets manager, in version 5.1. With Barbican, cloud operators can offer Key Management as a Service by leveraging the Barbican API and command line (CLI) to manage X.509 certificates, keys, and passwords. The basic Barbican workflow is relatively simple: invoke the secrets-store plugin to encrypt a secret on store and decrypt it on retrieval. In addition to generic secrets management, some OpenStack projects integrate with Barbican natively to provide enhanced security on top of its base offering. This blog will introduce Barbican consumption and operational maintenance through the use of Neutron Load Balancer as a Service (LBaaS).

Understanding Policies

Barbican scopes the ownership of a secret at the OpenStack project level. For each API call, OpenStack will check to ensure the project ID of the token matches the project ID stored as the secret owner. Further, Barbican uses roles and policies to determine access to secrets. The following roles are defined in Barbican:

  • Admin – Project administrator. This user has full access to all resources owned by the project for which the admin role is scoped.
  • Creator – Users with this role are allowed to create and delete resources. Users with this role cannot delete other users’ resources managed within the same project. They are also allowed full access to existing secrets owned by the project in scope.
  • Observer – Users with this role are allowed access to existing resources but are not allowed to upload new secrets or delete existing secrets.
  • Audit – Users with this role are only allowed access to resource metadata, so they are unable to decrypt secrets.

VIO 5.1 ships with the “admin” and “creator” roles out of the box. A project member must be assigned the creator role to consume Barbican. Based on the above roles, Barbican defines a set of rules or policies for access control. Only operations specified by the matching rule will be permitted.
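To make the model concrete, here is a minimal sketch of how a project-scoped, role-based check like the one described above can be evaluated. This is hypothetical illustration code, not Barbican’s actual implementation; the rule table simply mirrors the role descriptions above.

```python
# Hypothetical sketch of Barbican-style access control: a secret is owned
# by a project, and a per-operation rule decides which roles may act.
SECRET_RULES = {
    "store": {"admin", "creator"},
    "delete": {"admin", "creator"},
    "read": {"admin", "creator", "observer"},
    "read_metadata": {"admin", "creator", "observer", "audit"},
}

def is_allowed(token_project_id: str, token_roles: set,
               secret_project_id: str, operation: str) -> bool:
    # Ownership is scoped at the project level: the token's project
    # must match the project that owns the secret.
    if token_project_id != secret_project_id:
        return False
    # The operation is permitted only if one of the caller's roles
    # matches the rule for that operation.
    return bool(token_roles & SECRET_RULES[operation])

# An observer may read a secret in their own project...
assert is_allowed("proj-a", {"observer"}, "proj-a", "read")
# ...but may not upload new secrets, and audit sees metadata only.
assert not is_allowed("proj-a", {"observer"}, "proj-a", "store")
assert not is_allowed("proj-a", {"audit"}, "proj-a", "read")
# A creator in a different project is denied regardless of role.
assert not is_allowed("proj-b", {"creator"}, "proj-a", "read")
```

The real policy engine adds per-secret ACLs on top of this (covered below), which is exactly the fine-grained control the role/policy framework alone cannot express.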

While the policy framework works well, secrets management is never one size fits all, and there are limitations with the policy framework if fine-grained control is required. Scenarios such as granting a specific user access to a particular secret, or uploading a secret to which only the uploader has access, require OpenStack ACLs. Please refer to the ACL API User Guide for full details.

Supported Plugin

The Barbican key manager service leverages secret-store plugins to allow authorized users to store secrets. VIO 5.1 supports two types of plugins: simple crypto and KMIP-enabled. Only a single plugin can be active for a VIO deployment. Secret stores can be software-based, such as a software token, or hardware devices such as a hardware security module (HSM).

Simple crypto plugin

The simple crypto plugin uses a single symmetric key, stored locally on the VIO controller in the /etc/barbican/barbican.conf file, to encrypt and decrypt secrets. This plugin also leverages the local Barbican database, storing user secrets as encrypted blobs. The reliance on a local text file and database for storage is considered insecure, and therefore the upstream community considers the simple crypto plugin suitable for development and testing workloads only.
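The pattern is easy to picture with a toy sketch: one local symmetric key encrypts everything, and the database holds only encrypted blobs. This is deliberately simplified illustration code, not real cryptography and not Barbican’s implementation (which uses proper authenticated encryption).

```python
import hashlib

# Toy sketch of the simple crypto pattern: ONE local symmetric key (the
# value kept in barbican.conf) encrypts every secret, and the database
# stores only encrypted blobs. Illustration only -- not real crypto.
KEK = b"insecure-local-master-key"

def _keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from the key by chained hashing."""
    out, block = b"", key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

def encrypt(plaintext: bytes) -> bytes:
    ks = _keystream(KEK, len(plaintext))
    return bytes(a ^ b for a, b in zip(plaintext, ks))

decrypt = encrypt  # XOR with the same keystream is its own inverse

database = {}  # stands in for the local Barbican database
database["cert1"] = encrypt(b"-----BEGIN CERTIFICATE-----")
assert decrypt(database["cert1"]) == b"-----BEGIN CERTIFICATE-----"
```

The weakness the paragraph above describes is visible here: anyone who can read the key file and the database can decrypt every secret, which is why this plugin is recommended for development and testing only.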

Secret store KMIP plugins

The KMIP plugin stores secrets securely in an external KMIP-enabled device. The Barbican database, instead of storing encrypted secrets, maintains location references to secrets for later retrieval. Client certificate-based authentication is the recommended approach for integrating the plugin with the KMIP-enabled device.

A cloud operator must use the VIOCLI to specify a plugin:


KMIP:

sudo viocli barbican --secret-store-plugin KMIP \
--host kmip-server --port kmip-port \
--ca-certs ca-cert-file [--certfile local-cert-file --keyfile local-key-file --user kmip-user --password kmip-password]

Simple Crypto:

sudo viocli barbican --secret-store-plugin simple_crypto

Example Barbican Consumption:

One of the most commonly requested use cases specific to VIO is Barbican integration with Neutron LBaaS to offer HTTPS offload. This is a five-step process; we will review each step in detail.

  1. Install KMIP server (Greenfield only)
  2. Integrate KMIP using VIOCLI
  3. ACL update
  4. Workflow to create secret
  5. Workflow to create LBaaSv2

Please note, you must leverage the OpenStack API or CLI for step #4; Horizon support for Barbican is not available.

Install KMIP server

A production Barbican deployment requires a KMIP server. For greenfield deployments, Dell EMC CloudLink is a popular solution that VMware vSAN customers leverage to enable vSAN storage encryption. CloudLink includes both a key management server (KMS) and the ability to control, monitor and encrypt secrets across a hybrid cloud environment. Additional details on CloudLink are available from the VMware Solution Exchange.

Integrate KMIP using VIOCLI

To integrate with CloudLink KMS or any other KMIP-based secret store, simply log in to the VIO OMS server and issue the following VIOCLI command to configure Barbican to use the KMIP plugin:

viocli barbican --secret-store-plugin KMIP \
--user viouser \
--password VMware**** \
--host <KMIP host IP> \
--ca-certs /home/viouser/viouser_key_cert/ca.pem \
--certfile /home/viouser/viouser_key_cert/cert.pem \
--keyfile /home/viouser/viouser_key_cert/key.pem --port 5696

Successful completion of the VIOCLI command performs the following actions:

  • Updates neutron.conf to include a Barbican-specific service_auth account.
  • Applies the Barbican environment-specific settings provided via VIOCLI.
  • Defines the Barbican service endpoints on the HAProxy.

ACL updates based on consumption

Neutron LBaaS relies on a Barbican service account to read and push certificates and keys stored in Barbican containers to a load balancer. The Barbican service user is an admin member of the service project, part of the OpenStack local domain. The default Barbican security policy does not allow an admin or member of one project to access secrets stored in a different project. In order for the Barbican service user to access and push certificates and keys, tenant users must grant access to the service account. There are two ways to allow access:

Option 1:

1). The tenant creator grants the Barbican service user access using the OpenStack ACL command. The cloud administrator needs to supply the UUID of the Barbican service account.

openstack acl user add -u <barbican_service_account UUID> $(openstack secret list | awk '/ cert1 / {print $2}')

Repeat this command with each certificate, key, and container you want to provide Neutron access to.

Option 2:

2). If cloud administrators are comfortable providing Neutron with access to secrets without users granting access to individual objects, they may elect to modify the Barbican policy file. Implementing this policy change means that tenants won’t need to add the Neutron Barbican service_user to every object, which makes the process of creating TERMINATED_HTTPS listeners easier. Administrators should understand and be comfortable with the security implications of this action before implementing this approach. To perform the policy change, use a custom-playbook to change the following line in the Barbican policy.json file:

From:   "secret:get": "rule:secret_non_private_read or rule:secret_project_creator or rule:secret_project_admin or rule:secret_acl_read",

To:   "secret:get": "rule:secret_non_private_read or rule:secret_project_creator or rule:secret_project_admin or rule:secret_acl_read or role:admin",
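The change above can also be expressed programmatically, which is handy inside a custom-playbook task. The sketch below is illustrative (the helper name is hypothetical, and in practice you would load and dump the real /etc/barbican/policy.json file); the key property is idempotence, so re-running the playbook never appends the clause twice.

```python
# Sketch: apply the "secret:get" change above idempotently.
# The rule strings mirror the From/To example in the text.
def grant_admin_secret_get(policy: dict) -> dict:
    rule = policy["secret:get"]
    if "role:admin" not in rule:      # re-running must not append twice
        policy["secret:get"] = rule + " or role:admin"
    return policy

policy = {
    "secret:get": "rule:secret_non_private_read or rule:secret_project_creator"
                  " or rule:secret_project_admin or rule:secret_acl_read"
}
grant_admin_secret_get(policy)
grant_admin_secret_get(policy)  # second run is a no-op

assert policy["secret:get"].endswith("or role:admin")
```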

Please refer to my previous blog on custom-playbook.

Workflow to Create Secret:

This step assumes you have pre-created certificates and keys. If you have not created keys and certificates before, please refer to this blog for details. To follow the steps outlined below, make sure to name your output files accordingly (server.crt and server.key). To upload a certificate:

openstack secret store --name='certificate' \
--payload="$(cat server.crt)"


Most of the options are fairly self-explanatory; the payload is passed as plain text. Repeat the same command for the key:

openstack secret store --name='private_key' \
--payload="$(cat server.key)"


You can confirm by listing all secrets with openstack secret list.
Finally, create a TLS container pointing to both the private key and certificate secrets:

openstack secret container create --name='tls_container' --type='certificate' \
                   --secret="certificate=$(openstack secret list | awk '/ certificate / {print $2}')" \
                   --secret="private_key=$(openstack secret list | awk '/ private_key / {print $2}')"

Workflow to create LBaaSv2

With the Barbican service up and running, and ACLs configured to allow retrieval of secrets, let’s create a load balancer and have it pull the certificate and key from the KMS server. The load balancer creation workflow does not change with Barbican. When creating the listener, be sure to specify TERMINATED_HTTPS as the protocol, along with the URL of the TLS container stored in Barbican.

Please note:  

  1. If you are testing Barbican against NSX-T, NSX-MGR must be running version 2.2 or higher.
  2. The example assumes pre-created test VMs, a T1 router, a logical switch and subnets.
  • Create TLS enabled LB:

neutron lbaas-loadbalancer-create \
$(neutron subnet-list | awk '/ {subnet name} / {print $2}') \
--name lb1

  • Create listener with TLS

neutron lbaas-listener-create --loadbalancer lb1 \
--protocol TERMINATED_HTTPS \
--protocol-port 443 \
--name listener1 \
--default-tls-container=$(openstack secret list | awk '/ tls_container / {print $2}')

  • Create pool:

neutron lbaas-pool-create \
--name pool1 \
--protocol HTTP \
--listener listener1 \
--lb-algorithm ROUND_ROBIN

  • Add members:

neutron lbaas-member-create pool1 \
--address <address1> \
--protocol-port 80 \
--subnet $(neutron subnet-list | awk '/ test-sub / {print $2}')

neutron lbaas-member-create pool1 \
--address <address2> \
--protocol-port 80 \
--subnet $(neutron subnet-list | awk '/ test-sub / {print $2}')

You can associate a floating IP address with the load balancer VIP for services requiring external access.

To test out the new LB service, simply curl the URL using the floating IP:

viouser@oms:~$ curl -k https://<floating-ip>



VMware Integrated OpenStack DNSaaS – Designate Deepdive

Written by Xiao Gao, with valuable feedback from Damon Li.

DNS is essential to nearly every cloud application, and it is almost unimaginable to launch any cloud service without a robust DNS implementation. Booting a VM in OpenStack takes seconds. However, for most OpenStack operators, once a VM is booted, the first step towards production is to manually create a ticket to register the IP address with the corporate DNS. This registration process can often take a few days. Why give users the power to boot a VM only to have them submit an IT support ticket for a DNS entry? Delegating the responsibility for maintaining DNS records to the application owners, entirely based on self-service (DNSaaS), reduces the load on IT teams and gives users the power to do what they want. DNS is so fundamental to any application lifecycle that it should just happen.

One of the most requested features from the VIO user community has been self-service for DNS records, and we are proud to deliver OpenStack Designate as part of VIO 5.0. OpenStack Designate is the OpenStack equivalent of AWS Route 53. There are three ways to consume Designate in VIO 5.0.

Designate Architecture

Architecturally, Designate consists of the following components:

  • API service – The consumption layer of Designate. The API service is also responsible for validating API input.
  • Sink – A notification event listener. It generates simple DNS forward-lookup A records based on Nova and Neutron notification events.
  • Central Process – The business logic handler. The Central Process is responsible for user and permission validation. It also manages access to the Designate database and request dispatch.
  • Pool Manager – Manages the states of the DNS servers. The Pool Manager divides DNS servers into ‘pools’ (PowerDNS, BIND9, etc.) so that zones within Designate can be split across different sets of backend servers. The Pool Manager is also responsible for making sure that backend DNS servers are in sync with the Designate database.
  • Designate-zone-manager – A zone shard is a collection of zones allocated based on the first three characters of the zone UUID. The Zone Manager handles all tasks relating to the zone shard.
  • Backend – Software plugins for common DNS servers (PowerDNS, BIND9, etc.). The Pool Manager loads the corresponding plugin based on the type of backend.
  • MiniDNS – Serves information from the database to DNS servers via notify and zone transfer. Most importantly, do not expose MiniDNS to end users.
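The zone-shard idea above can be sketched in a few lines. This is an illustration built from the description in this post, not Designate’s actual code: taking the first three hex characters of the zone UUID yields 4096 possible shards (0x000–0xfff), so shard-level tasks operate on disjoint sets of zones.

```python
import uuid

def zone_shard(zone_id: str) -> int:
    """Map a zone UUID to its shard via the first three hex characters."""
    return int(zone_id.replace("-", "")[:3], 16)

# Every zone lands in one of 4096 shards, so shard-scoped tasks
# (like those the Zone Manager runs) cover disjoint sets of zones.
assert zone_shard("00000000-0000-0000-0000-000000000000") == 0
assert zone_shard("fff7b1c1-0000-4000-8000-000000000000") == 0xFFF
assert 0 <= zone_shard(str(uuid.uuid4())) <= 4095
```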

Component Mapping in VIO

The table below highlights the mapping of Designate services to VIO control VM(s):

Designate Consumption

VIO 5.0 supports BIND9, PowerDNS (version 4+) and Infoblox backends. Since DNS is so foundational, there’s no such thing as greenfield for Designate. Once Designate is enabled, there are a few strategies for inserting it into your private cloud. Graham Hayes (OpenStack PTL) and a few others gave an excellent talk on this topic. You can find their talk here:


General recommendations are to start small and expand.  One common multi-phase approach is:

  • Phase I – Delegate – Pick a subzone and delegate it to Designate. In the delegation phase, the type of backend can differ from the existing production platform.
  • Phase II – Integrate – Deploy a second pool that mirrors the production DNS, and migrate a subset of users/projects.
  • Phase III – Converge – Migrate production zones onto Designate-owned DNS servers.

Configure Designate

Getting started with Phase I is simple. At a high level, follow these steps:

1). Log in to the vSphere UI and select Home > VMware Integrated OpenStack > Manage > Settings > Configure Designate.

Note: Backend DNS server must be accessible from VIO control plane.

2). Configure your DNS server. The VIO QA team has certified BIND, PowerDNS and Infoblox.

3). Consume. VIO currently does not support provider-network-based DNS registration, only floating IPs. A recordset is created during floating IP association with a VM and deleted after disassociation. Customers running BGPaaS/no-NAT can leverage static records to insert entries into DNS:

openstack recordset create --record <IP> --type A <domain>

I have created a video recording to demonstrate basic consumption of Designate.


There is no one-size-fits-all solution for Phases II and III. Each organization may adopt different implementation strategies based on the operational processes and application availability requirements unique to their organization. Some organizations may never implement beyond Phase I; you are not alone. We recommend consulting with the VMware PSO team to work out the most optimal implementation based on your unique requirements. Also, we welcome your input. Feel free to share your experience on our VIO community page, or leave a note at the end of this blog with your thoughts.

VMware Integrated OpenStack 5.0: What’s New

VMware today announced VMware Integrated OpenStack (VIO) 5.0. We are truly excited about our latest OpenStack distribution as VMware is one of the first companies to support and provide enhanced stability on top of the newest OpenStack Queens Release.  Available in both Carrier and Data Center Editions, VIO 5.0 enables customers to take advantage of advancements in Queens to support mission-critical workloads, and adds support for the latest versions of VMware products including vSphere, vSAN, and NSX.

For our Telco/NFV customers, VIO 5.0 is about delivering scale and availability for hybrid applications across VM- and container-based workloads using a single VIM (Virtual Infrastructure Manager). VIO 5.0 will also help NFV operators fast-track a path towards Edge computing with VIO-in-a-box, secure multi-tenant isolation and accelerated network performance using the Enhanced NSX-T VDS (N-VDS). For VIO Datacenter customers, advanced security, a simplified user experience, and advanced networking with DNSaaS have been at the top of the wish list, and we are super excited to bring those features to VIO 5.0.

VIO 5.0 NFV Feature Details:

Advanced Kubernetes Support:

Enhanced Kubernetes support:  VIO 5.0 ships with Kubernetes version 1.9.2.  In addition to the latest upstream K8S release, integration with latest NSX-T 2.2 release is also included. VIO Kubernetes customers can leverage the same Enhanced N-VDS via Multus CNI plugin to achieve significant improvements in container response time, reduced network latencies and breakthrough network performance.

Heterogeneous Cluster using Node Group:  Now you can have different types of worker nodes in the same cluster. Extending the cluster node profiles feature introduced in VIO 4.1, a cluster can now have multiple node groups, each mapping to a single node profile. Instead of building isolated special purpose Kubernetes clusters, a cloud admin can introduce a new node group(s) to accommodate heterogeneous applications such as machine learning, artificial intelligence, and video encoding.  If resource usage exceeds the node group limit, VIO 5.0 supports cluster scaling at a node group level.  With node groups, cloud admins can address cluster capacity based on application requirements, allowing the most efficient use of available resources.

Enhanced Cluster Manageability:  vkube heal and vkube ssh allow you to directly ssh into any of the nodes of a given cluster and to recover failed cluster nodes based on ETCD state, or from a cluster backup in the case of complete failure.

Advanced Networking:

 N-VDS:  Also known as NSX-T VDS in Enhanced Data-path mode. Enhanced, because N-VDS runs in DPDK mode and allows containers and VMs to achieve significant improvements in response time, reduced network latencies and breakthrough network performance. With performance similar to SR-IOV, while maintaining the operational simplicity of virtualized NICs, NFV customers can have their cake and eat it too.

NSX-V Search domain:  A new configuration setting in the NSX-V will enable the admin to configure a global search domain. Tenants will use this search domain if there is no other search domain set on the subnet.

NSX-V Exclusive DHCP server per Project:  Instead of a DHCP edge shared across multiple projects based on subnet, an exclusive DHCP edge provides the ability to assign dedicated DHCP servers per network segment. An exclusive DHCP server provides better tenant isolation, and also allows an admin to determine customer impact concerning maintenance windows, etc.

NSX-T availability zone (AZ):  An availability zone is used to make network resources highly available by grouping network nodes that run services like DHCP, L3, NAT, and others. Users can associate applications with an availability zone for high availability. In previous releases Neutron AZs were supported against NSX-V; we are extending this support to NSX-T as well.

Security and Metering:

Keystone Federation:   Federated identity provides a way to securely use existing credentials to access cloud resources such as servers, volumes, and databases, across multiple endpoints in multiple authorized clouds, using a single set of credentials.  VIO 5 supports Keystone-to-Keystone (K2K) federation by designating a central Keystone instance as an Identity Provider (IdP), interfacing with LDAP or an upstream SAML2 IdP.  Remote Keystone endpoints are configured as Service Providers (SP), propagating authentication requests to the central Keystone.  As part of the Keystone Federation enhancement, we will also support 3rd-party IdPs in addition to the existing support for vIDM.

Gnocchi:   Gnocchi is the project name of a TDBaaS (Time Series Database as a Service) project that was initially created under the Ceilometer umbrella. Rather than storing raw data points, it aggregates them before storing them.  Because Gnocchi computes all the aggregations at ingestion, data retrieval is exceptionally speedy.  Gnocchi resolves performance bottlenecks in Ceilometer’s legacy architecture by providing an extremely robust foundation for the metric storage required for billing and monitoring.  The legacy Ceilometer API service has been deprecated by upstream and is no longer available in Queens.  Instead, the Ceilometer API and functionality has been broken out into the Aodh, Panko, and Gnocchi services, all of which are fully supported in VIO 5.0.
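Gnocchi’s aggregate-at-ingestion design choice can be illustrated with a toy sketch (illustration only, not Gnocchi’s actual code): each incoming datapoint updates per-window aggregates, so raw points are never stored and reads only touch precomputed values.

```python
# Toy sketch of aggregate-at-ingestion: each incoming datapoint updates
# per-window aggregates, so reads never have to scan raw datapoints.
class AggregatingStore:
    def __init__(self, window_seconds: int = 60):
        self.window = window_seconds
        self.buckets = {}  # window start -> (count, total, minimum, maximum)

    def ingest(self, timestamp: int, value: float) -> None:
        start = timestamp - timestamp % self.window
        count, total, lo, hi = self.buckets.get(start, (0, 0.0, value, value))
        self.buckets[start] = (count + 1, total + value,
                               min(lo, value), max(hi, value))

    def mean(self, window_start: int) -> float:
        count, total, _, _ = self.buckets[window_start]
        return total / count

store = AggregatingStore(window_seconds=60)
for ts, v in [(0, 10.0), (30, 20.0), (61, 5.0)]:
    store.ingest(ts, v)

assert store.mean(0) == 15.0   # two samples fell in the [0, 60) window
assert store.mean(60) == 5.0   # one sample in the [60, 120) window
```

Because each bucket is constant-size regardless of how many points arrive, retrieval cost is independent of the raw ingest rate, which is the performance property the paragraph above describes.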

Default Drop Policy:   Enable this feature to ensure that traffic to a port that has no security groups and has port security enabled will always be discarded.

End to End Encryption:  The cloud admin now has the option to enable API encryption for internal API calls in addition to the existing encryption on public OpenStack endpoints.  When enabled, all internal OpenStack API calls will be sent over HTTPS using strong TLS 1.2 encryption.  Encryption on internal endpoints helps avoid man-in-the-middle attacks if the management network is compromised.

Performance and Manageability:

VIO-in-a-box:  Also known as the “Tiny” deployment. Instead of separate physical clusters for management and compute, the VMware Integrated OpenStack control and data planes can now be consolidated on a single physical server.  This drastically reduces the footprint of a deployment and is ideal for Edge Computing scenarios where power and space are a concern.  VIO-in-a-box can be preconfigured manually or fully automated with the OMS API.

Hardware Acceleration:  GPUs are synonymous with artificial intelligence and machine learning.  vGPU support gives OpenStack operators the same benefits for graphics-intensive workloads as traditional enterprise applications: specifically resource consolidation, increased utilization, and simplified automation. The video RAM on the GPU is carved up into portions.  Multiple VM instances can be scheduled to access available vGPUs.  Cloud admins determine the amount of vGPU each VM can access based on VM flavors.  There are various ways to carve vGPU resources. Refer to the NVIDIA GRID vGPU user guide for additional detail on this topic.  

OpenStack at Scale:  VMware Integrated OpenStack 5.0 features improved scale, having been tested and validated to run 500 hosts and 15,000 VMs in a region. This release will also introduce support for multiple regions at once as well as monitoring and metrics at scale.

Elastic TvDC:  A Tenant Virtual Datacenter (TvDC) can extend across multiple clusters in VIO 5.0.  Extending the single-cluster TvDC support introduced in VIO 4.0, cloud admins can create several resource pools across multiple clusters, assigning the same name, project-id, and a unique provider-id. When tenants launch a new instance, the OpenStack scheduler and placement engine will schedule the VM request to any of the resource pools mapped to the TvDC.

VMware at OpenStack Summit 2018:

VMware is a Premier Sponsor of OpenStack Summit 2018, which runs May 21-24 at the Vancouver Convention Centre in Vancouver, BC, Canada. If you are attending the Summit in person, we invite you to stop by VMware’s booth (located at A16) for feature demonstrations of VMware Integrated OpenStack 5 as well as VMware NSX and VMware vCloud NFV.  Hands-on training is also available (RSVP required).   A complete schedule of VMware breakout sessions, lightning talks and training presentations can be found here.

A Deeper Look Into OpenStack Policy Update

Written by Xiao Gao, with valuable feedback and input from Mark Voelker.

While working with customers that are switching over to VMware Integrated OpenStack (VIO) from a different OpenStack distribution, customers expressed the need to update policies. Reasons were:

  • Backward compatibility with their legacy OpenStack deployment.
  • Internal company process and procedure alignment.

While updating policy is no more complicated on VIO than on other distributions, it is an operation that we have traditionally advised our customers to avoid, for the following reasons:

1). Upgrades. While many non-default changes can seem trivial and straightforward, VMware can’t guarantee that upstream implementations will always be backward compatible when moving between releases. Therefore, the responsibility of maintaining day-2 changes lies with the customer.

2). Snowflake avoidance.  Upstream gate tests focus almost exclusively on default policies. The risk of exposing unexpected side effects increases when the security posture of an operation is relaxed or tightened, and security itself is a concern when relaxing policies.  Similarly, most popular OpenStack orchestration/monitoring tools such as Terraform, Gophercloud, or Nagios are implemented assuming default policies. When policies are made more restrictive, they can cause your favorite OpenStack tools to fail.

Snowflakes are not only difficult to support and maintain, they are often the cause of unexpected outages.

3). Leverage an external CMP for enhanced governance and control. An external CMP such as vRA is designed to integrate business processes into IaaS consumption. Instead of maintaining low-level policy changes, leverage the out-of-the-box capabilities of vRA to control what users have access to.



Implementation Options:

We understand there are scenarios where policy changes are required. Our recommendation for those scenarios is to leverage the VIO custom playbook to make those changes.  The basic idea behind the custom playbook:

  1. The customer codes up what has to change using Ansible.
  2. VIO decides when to make the required changes, so that they survive upgrades and other maintenance tasks.

While VIO doesn’t sanction the contents of the custom playbook, it’s essential to write the playbook in a manner that is modular and agnostic to the OpenStack version.  The ideal playbook is stateless, grouped based on operational action, and not restrictive toward alignment with upstream (see the example section for details).  Logging is on by default.

Working Example:

Let’s look at an example.  Say we want regular users to be able to create shared networks.  To do that we need to modify /etc/neutron/policy.json and change:

"create_network:shared": "rule:admin_only"

to

"create_network:shared": ""

There are a number of ways to accomplish the above task.  You can go down the path of j2 templates and introduce variables for each policy modification.  But this approach requires discipline from the operator to update their entire set of j2 policy templates before any significant upgrade to avoid drift or conflicts with upstream.  On the other hand, if you leverage the direct file manipulation method, you change only the parameters that are required in your local environment, and leave everything else in constant alignment with upstream.
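For illustration, the “change only what you need” idea can be sketched in a few lines of Python (a hypothetical helper, not part of VIO). Note that re-serializing JSON this way loses comments and original formatting, which is one reason the playbook below manipulates lines in place instead:

```python
import json
import os
import tempfile

def set_policy_rule(path, key, new_rule):
    """Hypothetical helper: change one policy rule, leaving every other
    rule aligned with the upstream defaults."""
    with open(path) as f:
        policy = json.load(f)
    policy[key] = new_rule  # only the targeted rule changes
    with open(path, "w") as f:
        json.dump(policy, f, indent=4, sort_keys=True)

# Exercise the helper against a throwaway copy of a minimal policy file.
sample = {"create_network:shared": "rule:admin_only", "create_network": ""}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as tmp:
    json.dump(sample, tmp)

set_policy_rule(tmp.name, "create_network:shared", "")
with open(tmp.name) as f:
    updated = json.load(f)
assert updated["create_network:shared"] == ""   # relaxed
assert updated["create_network"] == ""          # untouched
os.remove(tmp.name)
```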

The example below uses the Ansible lineinfile module to manipulate the file directly:

# The custom playbook is run on initial deployment configuration, on a patch,
# or on an upgrade.  It can also be run via the viocli command line:
#   viocli deployment run-custom-playbook
# Copy this file and all supporting files to:
#   /opt/vmware/vio/custom/custom-playbook.yml
- hosts: controller
  sudo: true
  any_errors_fatal: true
  tasks:
    - name: stat check for policy.json
      stat: path=/etc/neutron/policy.json
      register: policy_stat

- hosts: controller
  sudo: true
  any_errors_fatal: true
  tasks:
    - name: backup policy.json
      command: cp /etc/neutron/policy.json /etc/neutron/policy.{{ lookup('pipe', 'date +%Y%m%d%H%M%S') }}.json
      when: policy_stat.stat.exists

- hosts: controller
  sudo: true
  any_errors_fatal: true
  tasks:
    - name: custom playbook - allow users to create shared networks
      lineinfile:
        dest: /etc/neutron/policy.json
        regexp: "^(\\s*){{ item.key }}:\\s*\".*\"(,?)$"
        line: "\\1{{ item.key }}: {{ item.value }}\\2"
        backrefs: yes
      with_dict: {'"create_network:shared"': '""'}

The example uses back references (the parentheses in the regexp line, and the \\1 and \\2 in the line entry) to preserve the indentation at the beginning of each line and the comma at the end of the line (if present).  Back references make the regex a tad more complicated-looking, but they keep the formatting in place.
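To see exactly what the back references do, here is a minimal Python rendering of the same regexp/line pair (illustrative only; the playbook runs this through Ansible, not Python):

```python
import re

key = '"create_network:shared"'
value = '""'
# Same shape as the playbook's regexp and line entries.
pattern = r'^(\s*)' + re.escape(key) + r':\s*".*"(,?)$'
replacement = r'\1' + key + ': ' + value + r'\2'

# Group 1 captures the leading spaces, group 2 the optional trailing comma.
line = '    "create_network:shared": "rule:admin_only",'
result = re.sub(pattern, replacement, line)
assert result == '    "create_network:shared": "",'
print(result)
```

Both the indentation and the trailing comma survive the substitution, so the file stays valid JSON.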

Log Outputs:

Below are sample logs:

[Screenshot: sample log output from the custom playbook run]
This post outlined the thought process involved in updating OpenStack policies.  I would love to hear back from you.

Also, VIO 4.1 is now GA.  You can download a 60-day VIO evaluation now and get started.

VMware Integrated OpenStack 4.1: What’s New

VMware announced general availability (GA) of VMware Integrated OpenStack (VIO) 4.1 on Jan 18th, 2018. We are truly excited about our latest OpenStack distribution, which gives our customers enhanced stability on top of the Ocata release and support for the latest versions of VMware products across vSphere, vSAN, and NSX V|T (including NSX-T LBaaSv2). For OpenStack cloud admins, the 4.1 release is also about enhanced control: control over API throughput, virtual machine bandwidth (QoS), deployment form factor, and user management across multiple LDAP domains.  For Kubernetes admins, 4.1 is about enhanced tooling: tooling that enables control plane backup and recovery, integration with Helm and Heapster for simplified application deployment and monitoring, and centralized log forwarding. Finally, VIO deployment automation has never been more straightforward, using the newly documented OMS API.

4.1 Feature Details:

  • Support for the latest versions of VMware products – VIO 4.1 supports and is fully compatible with VMware vSphere 6.5 U1, vSAN 6.6.1, VMware NSX for vSphere 6.3.5, and VMware NSX-T 2.1.   To learn more about vSphere 6.5 U1, visit here; for NSX-V 6.3.5 and NSX-T 2.1, visit here.
  • Public OMS API – Management server APIs that can be used to automate deployment and lifecycle management of VMware Integrated OpenStack are now available for general consumption. Users can perform tasks such as provisioning the OpenStack cluster, starting/stopping the cluster, and gathering support bundles using the OMS public API.  Users can also leverage the Swagger UI to check and validate API availability and specs:

API Base URL: https://[oms_ip]:8443/v1

Swagger UI: https://[oms_ip]:8443/swagger-ui.html

Swagger Docs: https://[oms_ip]:8443/v2/api-docs

  • HAProxy rate limiting – Cloud admins have the option to enable API rate limiting for public-facing API access. If the received API rate exceeds the configured rate, clients receive a 429 error with a Retry-After header that indicates a wait duration.  Update the custom.yml deployment configuration file to enable the HAProxy rate limiting feature.
  • Neutron QoS – Before VIO 4.1, a Nova image or flavor extra-spec controlled network QoS against the vCenter VDS.  With VIO 4.1, cloud administrators can leverage Neutron QoS to create a QoS profile and map it to a port or logical switch. Any virtual machine associated with the port or logical switch inherits the predefined bandwidth policy.
  • Native NSX-T Load Balancer as a Service (LBaaS) – Before VIO 4.1, NSX-T customers had to implement a BYO Nginx or third-party LB for application load balancing.  With VIO 4.1, NSX-T LBaaSv2 can be provisioned using either Horizon or the Neutron LBaaS API.  Each load balancer must map to an NSX-T Tier 1 logical router (LR); a missing LR, or an LR without a valid uplink, is not a supported topology.
  • Multiple domain LDAP backend – VMware Integrated OpenStack 4.1 supports SQL plus one or more domains as identity sources, up to a maximum of 10 domains, and each domain can belong to a different authentication backend.  Cloud administrators can create/update/delete domains and grant or revoke domain administrator users.  A domain administrator is a local administrator, delegated to manage resources such as users, quotas, and projects for a specific domain. VIO 4.1 supports both AD and OpenDirectory as authentication backends.
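One of the features above, HAProxy rate limiting, defines a simple client contract: a 429 status plus a Retry-After header giving the wait in seconds. A client can honor it with a small retry helper; the sketch below is illustrative (the helper and the simulated endpoint are hypothetical, not part of VIO or any OpenStack SDK):

```python
import time

def call_with_retry(request, max_attempts=5):
    """Hypothetical client-side helper: back off when the rate limiter
    answers 429, honoring the Retry-After header (seconds)."""
    for attempt in range(max_attempts):
        status, headers, body = request()
        if status != 429:
            return status, body
        wait = int(headers.get("Retry-After", 1))
        time.sleep(wait)
    raise RuntimeError("rate limited after %d attempts" % max_attempts)

# Simulated endpoint: rate limited once, then succeeds.
responses = iter([(429, {"Retry-After": "0"}, ""), (200, {}, "ok")])
print(call_with_retry(lambda: next(responses)))  # -> (200, 'ok')
```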

4.1 NFV and Kubernetes Features:

  • VIO-in-a-box – AKA the Tiny deployment. Instead of separate physical clusters for management and compute, a VIO deployment can now be consolidated on a single physical server.   VIO-in-a-box drastically reduces the footprint and is suitable for environments that have neither high availability requirements nor large workloads. VIO-in-a-box can be preconfigured manually or fully automated with the OMS API, and shipped as a single-RU appliance to any manned or unmanned data center where space, capacity, and availability of onsite support are the biggest concerns.
  • VM Import – Further expanding on VM import capabilities, you can now import vSphere VMs with multiple disks and NICs.  Any VMDK not classified as the VM root disk imports as a Cinder volume.  Existing networks import as provider networks with access restricted to the given tenant.  The ability to import vSphere VM workloads into OpenStack and run critical day-2 operations against them via OpenStack APIs is the foundation we are setting for future sophisticated use cases around availability.  Refer to here for VM import instructions.
  • CPU policy for latency-sensitive workflows – Latency-sensitive workflows often require dedicated reservations of CPU, memory, and network.  In 4.1, we introduced CPU policy configuration using the Nova flavor extra spec "hw:cpu_policy".  The setting of this policy determines vCPU mapping to an instance.
  • Networking passthrough – Traditionally, Nova flavor or image extra-specs defined the workflow for hardware passthrough, without direct involvement of Neutron.  VIO 4.1 introduces Neutron-based network passthrough device configuration, which allows cloud administrators to control and manage network settings such as the MAC, IP, and QoS of a passthrough network device.   Although both options will continue to be available, going forward the recommendation is to leverage the Neutron workflow for network devices and Nova extra-specs for all other hardware passthrough devices.  Refer to the upstream and VMware documentation for details.
  • Enhanced Kubernetes support – VIO 4.1 ships with Kubernetes version 1.8.1.  In addition to the latest upstream release, integration with widely adopted application deployment and monitoring tools, Helm and Heapster, is standard out of the box.  VIO 4.1 with NSX-T 2.1 also allows you to consume Kubernetes network security policy.
  • VIO Kubernetes support bundle –  Opening support tickets couldn’t be simpler with the VIOK support bundle.  Using a single-line command to specify the start and end date, VIO Kubernetes captures logs from all components required to diagnose tenant-impacting issues within the specified time range.
  • VIO Kubernetes Log Insight integration – Cloud administrators can specify the FQDN of Log Insight as the logging server.  The current release supports a single logging server.
  • VIO Kubernetes control plane backup / restore –  Kubernetes admins can perform cluster-level backups from the VIOK management VM. Each successful backup produces a compressed tar backup file.

Try VMware Integrated OpenStack 4.1 Today

Infrastructure as Code: Orchestration with Heat

This blog post was created by Anil Gupta.  Additional comments and reviews: Maya Shiran and Xiao Gao

In this blog post I will talk about the automation and orchestration of infrastructure configuration using Heat, the orchestration program that comes with OpenStack (and with VIO, VMware Integrated OpenStack).

Perhaps a question on your mind is: “Why do I need an orchestration solution such as Heat when I have access to the OpenStack Command Line Interface (CLI)?” Imagine you are configuring a simple virtual infrastructure that consists of a web server, an application server, and a database server.  You not only have to deploy the three instances, you also need to deploy one network instance per server, account for the router that connects to the outside world, and assign the floating IP that lets users access the application. Making one-off API/CLI calls to deploy these components is fine during development. However, what happens when you’re ready to go to production? What if performance tests show that your deployment requires multiple instances at each infrastructure tier? Managing such an infrastructure using the CLI is not scalable.

This is where Heat comes in. Heat is the main project of the OpenStack orchestration program and allows users to describe deployments of complex cloud applications in text files called Heat Orchestration Templates (HOT). These templates, written in simple YAML format, are parsed and executed by the Heat engine. In your template, you can specify the different types of infrastructure resources you will need, such as servers, floating IP addresses, and storage volumes. The template also manages relationships between these resources (such as “this volume is connected to this server”), which allows it to handle complex configurations and to create all of your infrastructure in the correct order to completely launch your application.

Heat also offers the ability to add, modify, or delete resources of a running stack using the stack update operation.  If I want to increase the memory of a running machine, it is as simple as editing the original template and applying the changes using heat stack-update.

As a result, Heat provides a single deployment mechanism to provision the application infrastructure from detailed template files, which minimizes room for error. Due to the simplicity of the YAML format, your template files can also serve as a documentation source for IT operations runbooks. Additionally, Heat templates can be managed by version control tools such as git, so you can make changes as needed and ensure the new template version is used in the future. Finally, the integration of VIO 4.0 with vRA (vRealize Automation) provides enterprise customers the ability to consume VIO resources with governance.

Working Example:

The Heat template consists of two sections. The first part defines parameters such as the image ID and instance type. The second part defines the resources that are managed through this template. All of these variables can be parameterized so that the template is generic. Once you have parameterized these variables, you can specify appropriate values for your environment in the stack-create command without having to edit the template file. This allows you to create fairly complex, reusable orchestration scenarios using Heat templates.

Below is an example template that shows the two sections – the first section defines the various parameters, as noted above; the second section creates the configuration for a load-balancer (LB) server, along with the needed router and network configurations.  You will see orchestration at work in this example, because the creation step for the LB server on the private subnet needs to wait for the router interface step to complete; otherwise the creation step for the LB server fails. Please note that the code example is only for illustration purposes and not intended to run on Heat as-is.  A complete working example can be found here.
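The original template screenshot is not reproduced in this copy of the post. As a stand-in, here is a skeletal HOT-shaped template assembled as a Python dict (JSON is a subset of YAML, so the serialized form is valid template syntax); the resource and parameter names are illustrative, not the post’s actual file:

```python
import json

# Minimal two-section HOT sketch: "parameters" then "resources".
template = {
    "heat_template_version": "2013-05-23",
    "parameters": {
        "image_id": {"type": "string", "description": "Glance image ID"},
        "instance_type": {"type": "string", "default": "m1.small"},
    },
    "resources": {
        "private_net": {"type": "OS::Neutron::Net"},
        "lb_server": {
            "type": "OS::Nova::Server",
            # Orchestration at work: wait for the (not shown) router
            # interface resource before booting onto the private subnet.
            "depends_on": "router_interface",
            "properties": {
                "image": {"get_param": "image_id"},
                "flavor": {"get_param": "instance_type"},
                "networks": [{"network": {"get_resource": "private_net"}}],
            },
        },
    },
}
print(json.dumps(template, indent=2))
```

The depends_on entry is what expresses the wait described above; without it, Heat would happily create the LB server in parallel and the stack would fail.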











You will see the use of value_specs in the example below, which is a way of providing vendor-specific key/value config options.

[Code screenshot: Heat resource using value_specs]
This template, when run, invokes the OpenStack orchestration service using the OpenStack API, which in turn leverages the core OpenStack services (such as Nova, Cinder, Glance and Neutron) in VIO for automated creation and management of the specified resources.


This post showed how VIO with Heat allows developers to create their infrastructure configuration as code, and to orchestrate routine steps such as provisioning servers, storage volumes, and networks, as well as their dependencies, in a quick and easy manner.

A complete working example of the Heat stack in this post is in the VMware Integrated OpenStack Hands-on Lab; don’t forget to try it out.  You can also download a 60-day VIO evaluation now and get started.

OpenStack and Kubernetes Better Together

Virtual machines and containers are two of my favorite technologies.  In today’s DevOps-driven environment, delivering applications as microservices allows an organization to provide features faster.   Splitting a monolithic application into multiple portable, container-based fragments is at the top of most organizations’ digital transformation strategies.   Virtual machines, delivered as IaaS, have been around since the late 90s; they abstract hardware to offer enhanced capabilities in fault tolerance, programmability, and workload scalability.  While enterprise IT shops large and small are scrambling to refactor applications into microservices, the reality is that IaaS is proven and often used to complement container-based workloads:

1). We’ve always viewed the IaaS layer as an abstraction from the infrastructure that provides a standard way of managing and consolidating disparate physical resources. Resource abstraction is one of the many reasons most containers today run inside virtual machines.

2). Today’s distributed applications consist of both cattle and pets.  Without overly generalizing, pet workloads tend to be “hand fed” and often have significant dependencies on a legacy OS that isn’t container compatible.  As a result, for most organizations, pet workloads will continue to run as VMs.

3). While there are considerable benefits to containerizing NFV workloads, current container implementations are not sufficient to meet 100% of NFV workload needs.  See the IETF report for additional details.

4). VMs offer the ability to “right size” the container host for dev/test workloads where multiple environments are required to perform different tests.

Rather than being mutually exclusive, the two technologies have proven over time to complement each other.   As long as there are legacy workloads and better ways to manage and consolidate sets of diverse physical resources, virtual machines (IaaS) will co-exist with and complement containers.

OpenStack IaaS and Kubernetes Container Orchestration:

It’s a multi-cloud world, and OpenStack is an important part of the mix. From the datacenter to NFV, thanks to the richness of its vendor-neutral API, OpenStack clouds are being deployed to meet organizations’ needs for public-cloud-like IaaS consumption in a private cloud datacenter.   OpenStack is also a perfect complement to K8S, providing underlying services that are outside the scope of K8S.  Kubernetes deployments in most cases can leverage the same OpenStack components to simplify deployment and the developer experience:





1). Multi-tenancy:  Create K8S cluster separation leveraging OpenStack projects. Development teams have complete control over cluster resources in their project and zero visibility into other development teams’ projects.

2). Infrastructure usage based on HW separation:  IT departments are often the central broker for development teams across the entire organization. If development team A funded X servers and team B funded Y, the OpenStack scheduler can ensure K8S cluster resources are always mapped to the hardware allocated to the respective development team.

3).  Infrastructure allocation based on quota:  Deciding how much of your infrastructure to assign to different use cases can be tricky.  Organizations can leverage the OpenStack quota system to control infrastructure usage.

4). Integrated user management:  Since most K8S developers are also IaaS consumers, leveraging the Keystone backend simplifies user authentication for K8S clusters and namespace sharing.

5). Container storage persistence:  Since K8S pods are not durable, storage persistence is a requirement for most stateful workloads.   When leveraging an OpenStack Cinder backend, the storage volume is re-attached automatically after a pod restart (on the same or a different node).

6). Security:  VMs and containers will continue to co-exist for the majority of enterprise and NFV applications, so providing uniform security enforcement is critical.   Leveraging Neutron integration with industry-leading SDN controllers such as VMware NSX-T can simplify container security insertion and implementation.

7). Container control plane flexibility: K8S HA requires load-balanced multi-master and scalable worker nodes.  When integrated with OpenStack, it is as simple as leveraging LBaaSv2 for master node load balancing.  Worker nodes can scale up and down using tools native to OpenStack.  With VMware Integrated OpenStack, K8S worker nodes can also scale vertically using the VM live-resize feature.

Next Steps:

I will leverage the VMware Integrated OpenStack (VIO) implementation to provide examples of this perfect match made in heaven. This blog is part 1 of a 4-part series:

1). OpenStack and Containers Better Together (This Post)

2). How to integrate K8S with your OpenStack deployment

3). Treat Containers and VMs as “equal class citizens” in networking

4). Integrate common IaaS and CI / CD tools with K8S

Infrastructure as Code with VMware Integrated OpenStack

Historically, organizations “racked and stacked” hardware, and then installed and configured software and applications for their IT needs. With the advent of cloud computing, IT organizations could start taking advantage of virtualization to enable on-demand provisioning of compute, network, and storage resources.  By using the CLI or GUI, users have been able to provision these resources manually. However, with manual provisioning, you carry the following risks:

  • Inconsistency due to human error, leading to deviations from the defined configuration.
  • Lack of agility, limiting the speed at which your organization can release new versions of services in response to customer needs.
  • Difficulty in attaining and maintaining compliance with corporate standards due to the absence of a repeatable process.






Infrastructure as Code (IAC) solutions address these issues by allowing you to automate the entire configuration and provisioning process. In essence, this concept allows IT teams to treat infrastructure the same way application developers treat their applications – with code. The definition of the infrastructure is in human-readable software code. The code allows you to script, in a declarative way, the final state that you want for your environment; when executed, your target environment is automatically provisioned. A recent blog on this topic by my colleague David Jasso referred to the IAC paradigm as IT As Developer. For additional information on IAC, read the two Forrester reports: How A Sysadmin Becomes A Developer (Chris Gardner and Robert Stroud; Forrester Research; March 2017) and Lead The I&O Software Revolution With Infrastructure-As-Code (Chris Gardner and Richard Fichera; Forrester Research; September 2017).

In this blog post I will show you how, by using Terraform and VMware Integrated OpenStack (VIO), you can describe and execute your target infrastructure configuration as code. Terraform allows developers to define their application infrastructure via editable text files with a .tf extension. You can write Terraform configurations in either Terraform format (using the .tf extension) or in JSON format (using the .tf.json extension).  When executed, Terraform consumes the OpenStack API services from VIO (the OpenStack distribution from VMware) to provision the infrastructure as you have defined it.  As a result, you can use these provisioning tools, in conjunction with VIO, to implement infrastructure as code.

For those not familiar with VIO, it differentiates itself from upstream distributions by making install, upgrade, and maintenance operations simple, and by leveraging VMware enterprise-grade infrastructure to provide the most stable release of OpenStack in the market.  Beyond the OpenStack distribution itself, VIO also helps bridge gaps in traditional OpenStack management, monitoring, and logging by making VMware enterprise-grade tools such as vRealize Operations Manager and Log Insight OpenStack-aware with no customization:

  • Standard DefCore-compliant OpenStack distribution delivered as an OVA
  • End-to-end support by VMware for both OpenStack and the SDDC infrastructure
  • The best foundational infrastructure for IaaS, with vSphere Compute (Nova), NSX Networking (Neutron), and vSphere Storage (Cinder / Glance)
  • Simple OpenStack endpoint management and logging with VMware vRealize Operations Manager for management, vRealize Log Insight for logging, and vRealize Business for chargeback analysis
  • The best way to leverage existing VMware investment in people, skills, and infrastructure

Let’s look at the structure of code that makes IAC possible. The first step is defining all the variables a user needs to provide within the Terraform configuration – see the example below. The variables can have default values. Putting as much site-specific information as possible into variables (rather than hardcoding the configuration parameters) makes the code more reusable. Please note that the code below is for illustration only.  The complete example can be downloaded from here.
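The variables screenshot is not reproduced in this copy of the post. As a stand-in, and since Terraform also accepts JSON (.tf.json, as noted above), here is an illustrative sketch of a variables block assembled in Python; the variable names and defaults are assumptions, not the post’s actual file:

```python
import json

# Illustrative "variable" section of a main.tf.json file.
variables = {
    "variable": {
        "auth_url":    {"description": "VIO Keystone endpoint URL"},
        "user_name":   {},
        "password":    {},
        "tenant_name": {},
        "image_id":    {},
        # A default makes the template reusable without editing it.
        "flavor_name": {"default": "m1.small"},
    }
}
print(json.dumps(variables, indent=2))
```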











The next step is identifying the provider. Terraform leverages multiple providers to talk to services such as AWS, Azure, or VIO (the OpenStack distribution from VMware).  In the example below we specify that the provider is OpenStack, using the variables you defined earlier.

[Code screenshot: Terraform OpenStack provider configuration]
Next you define the resource configuration.  Resources are the basic building blocks of a Terraform configuration. In the example code below (please use it as an illustration), you use Terraform code, which in turn leverages VIO, to create the compute and network resource instances, and then assign the network ID to the compute instance to stand up a networked compute instance. The properties of one resource may be passed as arguments to the creation step of the next resource, such as using the network ID from the ‘network’ resource when creating the ‘subnet’ resource.
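The resource example is likewise a screenshot in the original. As a stand-in, here is an illustrative sketch in Terraform’s JSON format, assembled in Python: the resource type names (openstack_networking_network_v2, openstack_networking_subnet_v2, openstack_compute_instance_v2) are the Terraform OpenStack provider’s types, but the attribute values are assumptions:

```python
import json

# Sketch of provider and resources for a main.tf.json file.
config = {
    "provider": {
        "openstack": {
            "user_name":   "${var.user_name}",
            "password":    "${var.password}",
            "tenant_name": "${var.tenant_name}",
            "auth_url":    "${var.auth_url}",
        }
    },
    "resource": {
        "openstack_networking_network_v2": {
            "tf_net": {"name": "tf_net", "admin_state_up": "true"}
        },
        "openstack_networking_subnet_v2": {
            "tf_subnet": {
                # The network ID from the resource above is passed as an
                # argument here, as described in the text.
                "network_id": "${openstack_networking_network_v2.tf_net.id}",
                "cidr": "10.0.0.0/24",
                "ip_version": 4,
            }
        },
        "openstack_compute_instance_v2": {
            "tf_instance": {
                "name": "tf_instance",
                "image_id": "${var.image_id}",
                "flavor_name": "${var.flavor_name}",
                "network": [
                    {"uuid": "${openstack_networking_network_v2.tf_net.id}"}
                ],
            }
        },
    },
}
print(json.dumps(config, indent=2))
```

Written to main.tf.json, this is the shape of file that terraform plan / terraform apply would consume.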











Infrastructure as code allows you to treat all aspects of operations as software and manage almost everything in code, including servers, storage, networks, log files, automated tests, deployment processes, and so on. The concept extends to making configuration changes as well.  When you want to make an infrastructure configuration change, you check the configuration code files out of your repository management system such as git, edit them to make the changes you want, and check in the new version. So you can use git to make and track changes to your configuration code – just as developers do.


In this blog post, we have shown how you can implement the IAC paradigm by using Terraform running on VIO.  Download a 60-day VIO evaluation now and get started, or try out the VIO 4.0-based VMware Integrated OpenStack Hands-on Lab, no installation required.

Best Practice Recommendations for Virtual Machine Live Resize

As computing demands increase, server resources must “grow” or “scale” to meet those requirements.   There are two basic ways to scale computing resources. The first is to add more VMs or “horizontally scale.” Say a web front end is using 90% of the allocated computing capacity. If traffic to the site increases, the current VM may not have enough CPU, memory, or disk available to keep up.  The site administrator could deploy an additional VM to support the growth in the workload.








Not all applications scale horizontally.  NFV workloads such as virtual routers or gateways may need to “vertically scale”.  For example, a virtual machine with 2 vCPU / 4 GB memory may need to double its vCPU and memory rather than add a second virtual machine.  While the OpenStack ecosystem offers many tools for horizontal scaling (Heat, Terraform, etc.), options for scaling up are much more limited.  The Nova project has a long-pending proposal for live resize (hot plug); unfortunately, this feature still hasn’t been implemented.  Without live-resize, to increase the memory/CPU/disk of an instance, OpenStack must first power down the VM, migrate it to a flavor that offers more CPU/memory/disk, and finally power the VM back up.   VM power-down impacts SLAs and can trigger cascading failures for NFV-based workloads (route convergence, loops, etc.).

By leveraging the existing OpenStack resize API and the recommendations introduced in the upstream live-resize specification, VMware Integrated OpenStack (VIO) 4.0 offers the ability to resize any machine, as long as the guest OS supports it, without the need to power down the system. OpenStack users issue the standard OpenStack resize request.  The VMDK driver examines the CPU/memory/disk changes specified by the flavor, and the settings of the virtual machine, to determine whether the operation can be performed live. If the guest OS supports live-resize, resources are added without a power-down.  If the guest OS cannot support live-resize, the traditional Nova instance resize operation takes place (which powers off the instance).
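The decision just described can be paraphrased in a few lines. This is a hedged sketch of the logic, not VIO’s actual VMDK driver code; the function and its arguments are illustrative:

```python
def resize_strategy(flavor_changes, os_live_resize):
    """Illustrative decision: resize live only when every grown resource
    is one the image metadata (os_live_resize) says the guest OS can
    handle; otherwise fall back to the traditional cold resize."""
    requested = {res for res, delta in flavor_changes.items() if delta > 0}
    if requested <= set(os_live_resize):
        return "live-resize"
    return "cold-resize (power off, resize, power on)"

# Image uploaded with os_live_resize=vcpu,memory,disk:
print(resize_strategy({"vcpu": 2, "memory": 4096}, ["vcpu", "memory", "disk"]))
# A disk grow on an image that only advertises vcpu support:
print(resize_strategy({"disk": 20}, ["vcpu"]))
```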

Best Practice Recommendations:

When implementing live-resize in your environment, be sure to follow these recommendations:

  1. Cloud admins or application owners need to indicate that the guest OS can handle live resize for a specific resource using the image metadata “os_live_resize=<resource>.”  A list of guest OSes that support hot plug / live-resize can be found here.  Available resource options are disk, memory, or vCPU.   You can live-resize the VM based on any combination of the resource types:
    • Add CPU resources to the virtual machine
    • Add memory resource to the virtual machine.
    • Increase virtual disk size of the virtual machine
    • Add CPU and Memory, CPU and Disk, or Memory and Disk
    • Increase CPU, Memory, and Disk
    • Hot removal of CPU/memory is not supported
  2. If a resized VM exceeds the capacity of a host, VMware DRS can move the VM to another host within the cluster where resources are available.  DRS is simple to configure and extremely powerful.  My colleague Mathew Mayer wrote an excellent blog on load balancing vSphere clusters with DRS; be sure to take a look.
  3. Image Metadata updates for disk resize:
    • Linked clone must be set to false.  This is because vCenter cannot live-resize linked-clone disks.
    • The disk adapter must be non-IDE.  This is because IDE disks do not support hot-swap/add.

See the diagram below:

[Diagram: live-resize decision workflow]
4). VMware supports memory resize of 4 GB and above.  Resizing below 4 GB should work in most cases, but is not officially supported by VMware.

Live-resize Example Workflow:

Step 1). Upload image:

openstack image create --disk-format vmdk --container-format ova --property vmware_ostype="ubuntu64Guest" --property os_live_resize=vcpu,memory,disk --property img_linked_clone=false --file ./xenial-server-cloudimg-amd64.ova <some name>

Step 2). Disable linked clone (if using the default Ubuntu 16.04 cloud image bundled with VIO 4.0):

openstack image set --property img_linked_clone=false <some name>

Step 3). Boot a VM:

openstack server create --flavor m1.medium --image <some name> --nic net-id=net-uuid resize_vm

Step 4). Resize to the next flavor:

openstack server resize --flavor m1.large <resize_VM>

Step 5). Confirm resize:

openstack server resize --confirm <server>

Step 6). SSH to the VM and run the scripts below to bring the new resources online in the guest OS.

  • Memory online

for i in `grep offline /sys/devices/system/memory/*/state | awk -F / '{print $6}' | awk -F y '{print $2}'`; do echo "bring memory$i online"; echo online > /sys/devices/system/memory/memory$i/state; done

  • CPU online:

[Script screenshot: bring hot-added vCPUs online]
Simplify your NFV workloads by leveraging the industry’s most stable and battle-tested OpenStack distribution.  Instead of re-architecting your virtual network and security to enable horizontal scaling, live-resize it!  It’s simple and hitless.   Download a 60-day evaluation now and get started, or try out the VIO 4.0-based VMware Integrated OpenStack Hands-on Lab, no installation required.