Home > Blogs > OpenStack Blog for VMware

A Deeper Look Into OpenStack Policy Update

Written by Xiao Gao, with valuable feedback and input from Mark Voelker.

While working with customers switching to VMware Integrated OpenStack (VIO) from other OpenStack distributions, we have repeatedly heard the need to update policies. The reasons given were:

  • Backward compatibility with their legacy OpenStack deployment.
  • Internal company process and procedure alignment.

While updating policy is no more complicated on VIO than on other distributions, it is an operation we have traditionally advised our customers to avoid, for the following reasons:

1). Upgrades. While many non-default changes can seem trivial and straightforward, VMware can’t guarantee that upstream implementations will always be backward compatible when moving between releases. Therefore, responsibility for maintaining day-2 changes lies with the customer.

2). Snowflake avoidance.  Upstream gate tests focus almost exclusively on default policies. The risk of exposing unexpected side effects increases when the security posture of an operation is relaxed or tightened, and security itself is a concern when relaxing policies.  Similarly, the most popular OpenStack orchestration and monitoring tools, such as Terraform, Gophercloud, and Nagios, are implemented assuming default policies. When policies are made more restrictive, your favorite OpenStack tools can start to fail.

Snowflakes are not only difficult to support and maintain, they are often the cause of unexpected outages.

3). Leverage an external CMP for enhanced governance and control. An external CMP such as vRA is designed to integrate business processes into IaaS consumption. Instead of maintaining low-level policy changes, leverage the out-of-the-box capabilities of vRA to control what users have access to.



Implementation Options:

We understand there are scenarios where policy changes are required. Our recommendation for those scenarios is to leverage the VIO custom playbook to make the changes.  The basic idea behind the custom playbook:

  1. The customer codes up the required changes using Ansible.
  2. VIO decides when to apply those changes, so that they survive upgrades and other maintenance tasks.

While the contents of the custom playbook are not vetted by VIO, it’s essential to write the playbook in a manner that is modular and agnostic to the OpenStack version.  The ideal playbook is stateless, grouped by operational action, and not restrictive toward alignment with upstream (see the example section for details).  Logging is on by default.

Working Example:

Let’s look at an example.  Say we want regular users to be able to create shared networks.  To do that, we need to modify /etc/neutron/policy.json and change:

"create_network:shared": "rule:admin_only"

to

"create_network:shared": ""

There are a number of ways to accomplish this task.  You can go down the path of j2 templates and introduce variables for each policy modification, but this approach requires discipline from the operator, who must update the entire set of j2 policy templates before any significant upgrade to avoid drift or conflicts with upstream.  If you instead use the direct file manipulation method, you change only the parameters required in your local environment and leave everything else in constant alignment with upstream.

The example below uses the Ansible lineinfile module to manipulate the file directly:

# The custom playbook is run on initial deployment configuration, on a patch,
# or on an upgrade.  It can also be run via the viocli command line:
#   viocli deployment run-custom-playbook
# Copy this file and all supporting files to:
#   /opt/vmware/vio/custom/custom-playbook.yml
- hosts: controller
  sudo: true
  any_errors_fatal: true
  tasks:

    - name: stat check for policy.json
      stat: path=/etc/neutron/policy.json
      register: policy_stat

- hosts: controller
  sudo: true
  any_errors_fatal: true
  tasks:

    - name: backup policy.json
      command: cp /etc/neutron/policy.json /etc/neutron/policy.{{ lookup('pipe', 'date +%Y%m%d%H%M%S') }}.json
      when: policy_stat.stat.exists

- hosts: controller
  sudo: true
  any_errors_fatal: true
  tasks:

    - name: custom playbook - allow users to create shared networks
      lineinfile:
        dest: /etc/neutron/policy.json
        regexp: "^(\\s*){{ item.key }}:\\s*\".*\"(,?)$"
        line: "\\1{{ item.key }}: {{ item.value }}\\2"
        backrefs: yes
      with_dict: {'"create_network:shared"': '""' }

The example uses backreferences (the parentheses in the regexp line, and the \\1 and \\2 in the line entry) to preserve the indentation at the beginning of each line and the comma at the end of the line (if present).  Backreferences make the regex look a tad more complicated, but they keep the formatting in place.
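Once the playbook has run, a quick way to sanity-check the change (a suggested check, not part of the playbook itself) is to create a shared network as a regular, non-admin user:

openstack network create --share my-shared-net

Before the policy change this request is rejected with an HTTP 403; after the change, the network is created and is visible to other tenants.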

Log Outputs:

Below are sample logs:

[Screenshot: ansible-playbook log output from a custom playbook run]
This post outlined the thought process involved in updating OpenStack policies.  I would love to hear back from you.

Also, VIO 4.1 is now GA.  You can download a 60-day VIO evaluation now and get started.

VMware Integrated OpenStack 4.1: What’s New

VMware announced general availability (GA) of VMware Integrated OpenStack (VIO) 4.1 on Jan 18th, 2018. We are truly excited about our latest OpenStack distribution, which gives our customers enhanced stability on top of the Ocata release and support for the latest versions of VMware products across vSphere, vSAN, and NSX V|T (including NSX-T LBaaSv2). For OpenStack cloud admins, the 4.1 release is also about enhanced control: control over API throughput, virtual machine bandwidth (QoS), deployment form factor, and user management across multiple LDAP domains.  For Kubernetes admins, 4.1 is about enhanced tooling: tooling that enables control plane backup and recovery, integration with Helm and Heapster for simplified application deployment and monitoring, and centralized log forwarding. Finally, VIO deployment automation has never been more straightforward, thanks to the newly documented OMS API.

4.1 Feature Details:

  • Support for the latest versions of VMware products – VIO 4.1 supports and is fully compatible with VMware vSphere 6.5 U1, vSAN 6.6.1, VMware NSX for vSphere 6.3.5, and VMware NSX-T 2.1.   To learn more, visit here for vSphere 6.5 U1, and here for NSX-V 6.3.5 and NSX-T 2.1.
  • Public OMS API – Management server APIs that can be used to automate deployment and lifecycle management of VMware Integrated OpenStack are now available for general consumption. Users can perform tasks such as provisioning an OpenStack cluster, starting/stopping the cluster, gathering support bundles, and so on using the OMS public API.  Users can also leverage the Swagger UI to check and validate API availability and specs (a quick check follows the URLs below).

API Base URL: https://[oms_ip]:8443/v1

Swagger UI: https://[oms_ip]:8443/swagger-ui.html

Swagger Docs: https://[oms_ip]:8443/v2/api-docs
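For example, assuming network reachability to the OMS server, you can pull the machine-readable API spec with a plain HTTP client (-k skips certificate verification, for lab use only):

curl -k https://[oms_ip]:8443/v2/api-docs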

  • HAProxy rate limiting – Cloud admins have the option to enable API rate limiting for public-facing API access. If the incoming API request rate exceeds the configured limit, clients receive a 429 error with a Retry-After header that indicates a wait duration.  Update the custom.yml deployment configuration file to enable the HAProxy rate limiting feature.
  • Neutron QoS – Before VIO 4.1, a Nova image or flavor extra-spec controlled network QoS against the vCenter VDS.  With VIO 4.1, cloud administrators can leverage Neutron QoS to create a QoS profile and map it to ports or logical switches. Any virtual machine associated with the port or logical switch inherits the predefined bandwidth policy (see the sketch after this list).
  • Native NSX-T Load Balancer as a Service (LBaaS) – Before VIO 4.1, NSX-T customers had to bring their own Nginx or a third-party LB for application load balancing.  With VIO 4.1, NSX-T LBaaSv2 can be provisioned using either Horizon or the Neutron LBaaS API.  Each load balancer must map to an NSX-T Tier-1 logical router (LR); a missing LR, or an LR without a valid uplink, is not a supported topology.
  • Multiple domain LDAP backend – VMware Integrated OpenStack 4.1 supports SQL plus one or more domains as identity sources.  Up to 10 domains are supported, and each domain can belong to a different authentication backend.  Cloud administrators can create/update/delete domains and grant/revoke domain administrator rights.  A domain administrator is a local administrator, delegated to manage resources such as users, quotas, and projects for a specific domain. VIO 4.1 supports both AD and OpenDirectory as authentication backends.
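As a sketch of the Neutron QoS workflow described above (policy names and numbers are illustrative), a cloud administrator could create a bandwidth-limit policy and attach it to a port or network with the standard OpenStack client:

# Create a QoS policy and add a 100 Mbps bandwidth-limit rule to it
openstack network qos policy create bw-limit-100m
openstack network qos rule create --type bandwidth-limit --max-kbps 100000 --max-burst-kbits 10000 bw-limit-100m

# Attach the policy to a single port, or to every port on a network
openstack port set --qos-policy bw-limit-100m <port-id>
openstack network set --qos-policy bw-limit-100m <network-id>

Any VM attached to that port or network then inherits the bandwidth policy.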

4.1 NFV and Kubernetes Features:

  • VIO-in-a-box – also known as a tiny deployment. Instead of separate physical clusters for management and compute, a VIO deployment can now be consolidated onto a single physical server.   VIO-in-a-box drastically reduces the footprint and is suitable for environments that have neither high-availability requirements nor large workloads. It can be configured manually or fully automated with the OMS API, and shipped as a single-RU appliance to any manned or unmanned data center where space, capacity, and availability of onsite support are the biggest concerns.
  • VM Import – Further expanding on VM import capabilities, you can now import vSphere VMs with multiple disks and NICs.  Any VMDK not classified as the VM root disk is imported as a Cinder volume.  Existing networks are imported as provider networks with access restricted to the given tenant.  The ability to import vSphere VM workloads into OpenStack and run critical day-2 operations against them via OpenStack APIs is the foundation we are setting for future sophisticated use cases around availability.  Refer here for VM import instructions.
  • CPU policy for latency-sensitive workloads – Latency-sensitive workloads often require dedicated reservations of CPU, memory, and network.  In 4.1, we introduced CPU policy configuration using the Nova flavor extra spec “hw:cpu_policy”.  The setting of this policy determines how vCPUs are mapped to an instance.
  • Networking passthrough – Traditionally, Nova flavor or image extra-specs defined the workflow for hardware passthrough, without direct involvement of Neutron.  VIO 4.1 introduces Neutron-based network passthrough device configuration.  The Neutron-based approach allows cloud administrators to control and manage network settings such as the MAC, IP, and QoS of a passthrough network device.   Although both options will continue to be available, going forward the recommendation is to leverage the Neutron workflow for network devices and Nova extra-specs for all other hardware passthrough devices.  Refer to the upstream and VMware documentation for details.
  • Enhanced Kubernetes support – VIO 4.1 ships with Kubernetes version 1.8.1.  In addition to the latest upstream release, integration with the widely adopted deployment and monitoring tools Helm and Heapster is standard out of the box.  VIO 4.1 with NSX-T 2.1 also allows you to consume Kubernetes network security policy.
  • VIO Kubernetes support bundle –  Opening support tickets couldn’t be simpler with the VIOK support bundle.  Using a single command with a start and end date, VIO Kubernetes captures logs from all components required to diagnose tenant-impacting issues within the specified time range.
  • VIO Kubernetes Log Insight integration – Cloud administrators can specify the FQDN of the Log Insight server as the logging server.  The current release supports a single logging server.
  • VIO Kubernetes control plane backup/restore –  Kubernetes admins can perform cluster-level backups from the VIOK management VM. Each successful backup produces a compressed tar backup file.

Try VMware Integrated OpenStack 4.1 Today

Infrastructure as Code: Orchestration with Heat

This blog post was created by Anil Gupta, with additional comments and reviews by Maya Shiran and Xiao Gao.

In this blog post I will talk about automating and orchestrating infrastructure configuration with Heat, the orchestration program that comes with OpenStack (and with VIO, VMware Integrated OpenStack).

Perhaps a question on your mind is: “Why do I need an orchestration solution such as Heat when I have access to the OpenStack Command Line Interface (CLI)?” Imagine you are configuring a simple virtual infrastructure that consists of a web server, an application server, and a database server.  You not only have to deploy the three instances, you also need to deploy one network instance per server, account for the router that connects to the outside world, and assign the floating IP that lets users access the application. Making one-off API/CLI calls to deploy these components is fine during development. However, what happens when you’re ready to go to production? What if performance tests show that your deployment requires multiple instances at each infrastructure tier? Managing such an infrastructure using the CLI is not scalable.

This is where Heat comes in. Heat is the main project of the OpenStack orchestration program; it allows users to describe deployments of complex cloud applications in text files called Heat Orchestration Templates (HOT). In your template, written in simple YAML, you specify the different types of infrastructure resources you will need, such as servers, floating IP addresses, and storage volumes. The template also manages relationships between these resources (such as “this volume is connected to this server”), which allows it to handle complex configurations. The templates are then parsed and executed by the Heat engine to create all of your infrastructure in the correct order and completely launch your application.

Heat also offers the ability to add, modify, or delete resources of a running stack using the stack update operation.  If I want to increase the memory of a running machine, it is as simple as editing the original template and applying the changes using heat stack-update.
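For instance, assuming a stack named web-stack whose template exposes an instance_type parameter (illustrative names), the update could look like this:

heat stack-update web-stack -f my-template.yaml -P instance_type=m1.large

Heat computes the difference between the running stack and the updated template, and changes only the affected resources.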

As a result, Heat provides a single deployment mechanism to provision the application infrastructure from detailed template files, which minimizes room for error. Due to the simplicity of the YAML format, your template files can also serve as a documentation source for IT operations runbooks. Additionally, Heat templates can be managed by version control tools such as git, so you can make changes as needed and ensure the new template version is used in the future. Finally, the integration of VIO 4.0 with vRA (vRealize Automation) provides enterprise customers the ability to consume VIO resources with governance.

Working Example:

The Heat template consists of two sections. The first part defines parameters such as the image ID and instance type. The second part defines the resources that are managed through the template. All of these variables can be parameterized so that the template is generic. Once you have parameterized the variables, you can specify appropriate values for your environment in the stack-create command without having to edit the template file. This allows you to create fairly complex, reusable orchestration scenarios with Heat templates.

Below is an example template with the two sections: the first defines the various parameters, as noted above, and the second creates the configuration for a load-balancer (LB) server, along with the needed router and network configuration.  You will see orchestration at work in this example: the creation step for the LB server on the private subnet needs to wait for the router interface step to complete, otherwise the creation of the LB server fails. Please note that the code example is for illustration purposes only and not intended to run on Heat as-is.  A complete working example can be found here.
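A minimal sketch along those lines is shown below; the parameter and resource names (image_id, lb_server, and so on) are illustrative rather than taken from the complete working example:

heat_template_version: 2016-04-08

parameters:
  image_id:
    type: string
    description: Image for the load-balancer server
  instance_type:
    type: string
    default: m1.small
  external_network:
    type: string
    description: Name or ID of the external network for the router gateway

resources:
  private_net:
    type: OS::Neutron::Net

  private_subnet:
    type: OS::Neutron::Subnet
    properties:
      network_id: { get_resource: private_net }
      cidr: 10.0.0.0/24

  router:
    type: OS::Neutron::Router
    properties:
      external_gateway_info: { network: { get_param: external_network } }

  router_interface:
    type: OS::Neutron::RouterInterface
    properties:
      router_id: { get_resource: router }
      subnet_id: { get_resource: private_subnet }

  lb_server:
    type: OS::Nova::Server
    # Wait for the router interface; without it the LB server is created
    # before the subnet is routable and the creation step fails.
    depends_on: router_interface
    properties:
      image: { get_param: image_id }
      flavor: { get_param: instance_type }
      networks:
        - network: { get_resource: private_net }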

You will see the use of value_specs in the example below,  which is a way of providing vendor-specific key/value config options.
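A sketch of what that can look like on a Neutron resource (the specific keys are vendor-dependent and illustrative here):

resources:
  lb_router:
    type: OS::Neutron::Router
    properties:
      name: lb-router
      # value_specs passes vendor-specific key/value options straight
      # through to the Neutron plugin; here, requesting an exclusive
      # NSX edge for this router.
      value_specs:
        router_type: exclusive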

This template, when run, invokes the OpenStack orchestration service through the OpenStack APIs, which in turn leverage the core OpenStack services in VIO (such as Nova, Cinder, Glance, and Neutron) for automated creation and management of the specified resources.


This post showed how VIO with Heat allows developers to create their infrastructure configuration as code, and to orchestrate routine steps, such as provisioning servers, storage volumes, and networks, as well as their dependencies, in a quick and easy manner.

A complete working example of the Heat stack in this post is in the VMware Integrated OpenStack Hands-on Lab; don’t forget to try it out.  You can also download a 60-day VIO evaluation now and get started.

OpenStack and Kubernetes Better Together

Virtual machines and containers are two of my favorite technologies.  In today’s DevOps-driven environment, delivering applications as microservices allows an organization to provide features faster.   Splitting a monolithic application into multiple portable, container-based fragments is often at the top of most organizations’ digital transformation strategies.   Virtual machines, delivered as IaaS, have been around since the late 90s; they are a way to abstract hardware to offer enhanced capabilities in fault tolerance, programmability, and workload scalability.  While enterprise IT shops large and small are scrambling to refactor applications into microservices, the reality is that IaaS is proven and often used to complement container-based workloads:

1). We’ve always viewed the IaaS layer as an abstraction of the infrastructure that provides a standard way of managing and consolidating disparate physical resources. Resource abstraction is one of the many reasons most containers today run inside of virtual machines.

2). Today’s distributed applications consist of both cattle and pets.  Without overly generalizing, pet workloads tend to be “hand fed” and often have significant dependencies on legacy OSes that aren’t container compatible.  As a result, for most organizations, pet workloads will continue to run as VMs.

3). While there are considerable benefits to containerizing NFV workloads, current container implementations are not sufficient to meet 100% of NFV workload needs.  See the IETF report for additional details.

4). The ability to “right size” the container host for dev/test workloads where multiple environments are required to perform different tests.

Rather than being mutually exclusive, the two technologies have proven over time to complement each other.   As long as there are legacy workloads and better ways to manage and consolidate sets of diverse physical resources, virtual machines (IaaS) will co-exist to complement containers.

OpenStack IaaS and Kubernetes Container Orchestration:

It’s a multi-cloud world, and OpenStack is an important part of the mix. From the data center to NFV, thanks to the richness of its vendor-neutral API, OpenStack clouds are being deployed to meet organizations’ needs for public-cloud-like IaaS consumption in a private data center.   OpenStack is also a perfect complement to K8S, providing underlying services that are outside the scope of K8S.  Kubernetes deployments can in most cases leverage the same OpenStack components to simplify deployment and the developer experience:

1). Multi-tenancy:  Create K8S cluster separation leveraging OpenStack projects. Development teams have complete control over cluster resources in their project and zero visibility into other development teams or projects.

2). Infrastructure usage based on HW separation:  IT departments are often the central broker for development teams across the entire organization. If development team A funded X servers and team B funded Y, the OpenStack scheduler can ensure K8S cluster resources are always mapped to the hardware allocated to the respective development teams.

3).  Infrastructure allocation based on quota:  Deciding how much of your infrastructure to assign to different use cases can be tricky, so organizations can also leverage the OpenStack quota system to control infrastructure usage.

4). Integrated user management:  Since most K8S developers are also IaaS consumers, leveraging the Keystone backend simplifies user authentication for the K8S cluster and namespace sharing.

5). Container storage persistence:  Since K8S pods are not durable, storage persistence is a requirement for most stateful workloads.   When leveraging an OpenStack Cinder backend, the storage volume is re-attached automatically after a pod restart (on the same or a different node); see the sketch after this list.

6). Security:  VMs and containers will continue to co-exist for the majority of enterprise and NFV applications, so providing uniform security enforcement is critical.   Leveraging Neutron integration with industry-leading SDN controllers such as VMware NSX-T can simplify container security insertion and implementation.

7). Container control plane flexibility: K8S HA requires load-balanced multi-master and scalable worker nodes.  When integrated with OpenStack, it is as simple as leveraging LBaaSv2 for master node load balancing.  Worker nodes can scale up and down using tools native to OpenStack.  With VMware Integrated OpenStack, K8S worker nodes can also scale vertically using the VM live-resize feature.
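To illustrate point 5, here is a minimal sketch of a Kubernetes StorageClass backed by Cinder, assuming the in-tree kubernetes.io/cinder provisioner available in Kubernetes 1.8 (names are illustrative):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cinder-standard
provisioner: kubernetes.io/cinder
parameters:
  # Cinder availability zone for dynamically provisioned volumes
  availability: nova

A PersistentVolumeClaim referencing this class gets a Cinder volume that follows the pod if it is rescheduled to another node.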

Next Steps:

I will leverage the VMware Integrated OpenStack (VIO) implementation to provide examples of this perfect match made in heaven. This blog is part 1 of a 4-part series:

1). OpenStack and Containers Better Together (This Post)

2). How to integrate your K8S with your OpenStack deployment

3). Treat Containers and VMs as “equal class citizens” in networking

4). Integrate common IaaS and CI / CD tools with K8S

Infrastructure as Code with VMware Integrated OpenStack

Historically, organizations “racked and stacked” hardware, and then installed and configured software and applications for their IT needs. With the advent of cloud computing, IT organizations could start taking advantage of virtualization to enable on-demand provisioning of compute, network, and storage resources.  By using the CLI or GUI, users have been able to provision these resources manually. However, with manual provisioning, you carry the following risks:

  • Inconsistency due to human error, leading to deviations from the defined configuration.
  • Lack of agility by limiting the speed at which your organization can release new versions of services in response to customer needs.
  • Difficulty in attaining and maintaining compliance with corporate standards due to the absence of a repeatable process

Infrastructure as Code (IAC) solutions address these issues by allowing you to automate the entire configuration and provisioning process. In essence, this concept allows IT teams to treat infrastructure the same way application developers treat their applications: with code. The definition of the infrastructure is in human-readable software code. The code lets you script, in a declarative way, the final state you want for your environment; when executed, it automatically provisions your target environment. A recent blog on this topic by my colleague David Jasso referred to the IAC paradigm as IT As Developer. For additional information on IAC, read the two Forrester reports: How A Sysadmin Becomes A Developer (Chris Gardner and Robert Stroud; Forrester Research; March 2017) and Lead The I&O Software Revolution With Infrastructure-As-Code (Chris Gardner and Richard Fichera; Forrester Research; September 2017).

In this blog post I will show you how, by using Terraform and VMware Integrated OpenStack (VIO), you can describe and execute your target infrastructure configuration as code. Terraform allows developers to define their application infrastructure via editable text files. You can write Terraform configurations in either Terraform format (using the .tf extension) or in JSON format (using the .tf.json extension).  When executed, Terraform consumes the OpenStack API services from VIO (the OpenStack distribution from VMware) to provision the infrastructure as you have defined it.  As a result, you can use these provisioning tools, in conjunction with VIO, to implement infrastructure as code.

For those not familiar with VIO, it differentiates itself from upstream distributions by making install, upgrade, and maintenance operations simple, and by leveraging VMware enterprise-grade infrastructure to provide the most stable release of OpenStack on the market.  In addition to the OpenStack distribution, VIO also helps bridge gaps in traditional OpenStack management, monitoring, and logging by making VMware enterprise-grade tools such as vRealize Operations Manager and Log Insight OpenStack-aware with no customization.

  • Standard DefCore Compliant OpenStack Distribution delivered as an OVA
  • End-to-end support by VMware for both OpenStack and the SDDC infrastructure.
  • The best foundational infrastructure for IaaS, with vSphere compute (Nova), NSX networking (Neutron), and vSphere storage (Cinder/Glance)
  • Simple and easy OpenStack endpoint management and logging, with VMware vRealize Operations Manager for management, vRealize Log Insight for logging, and vRealize Business for chargeback analysis
  • Best way to leverage existing VMware investment in People, Skills, and Infrastructure

Let’s look at the structure of the code that makes IAC possible. The first step in defining the configuration is declaring all the variables a user needs to provide, as in the example below. The variables can have default values. Putting as much site-specific information as possible into variables (rather than hardcoding the configuration parameters) makes the code more reusable. Please note that the code below is for illustration only; the complete example can be downloaded from here.
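A minimal sketch of such a variables file (Terraform 0.x syntax; the names are illustrative):

# variables.tf -- site-specific values supplied by the user
variable "auth_url" {
  description = "Keystone endpoint of the VIO deployment"
}

variable "user_name" {}
variable "password" {}
variable "tenant_name" {}

variable "image_id" {
  description = "ID of the Glance image to boot"
}

variable "flavor_name" {
  default = "m1.small"
}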

The next step in defining the configuration is identifying the provider. Terraform leverages multiple providers to talk to services such as AWS, Azure, or VIO (the OpenStack distribution from VMware).  In the example below we specify that the provider is OpenStack, using the variables defined earlier.
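A sketch of the provider block, wired to the variables above:

# provider.tf -- tell Terraform to talk to OpenStack (VIO)
provider "openstack" {
  auth_url    = "${var.auth_url}"
  user_name   = "${var.user_name}"
  password    = "${var.password}"
  tenant_name = "${var.tenant_name}"
}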

Next you define the resource configuration.  Resources are the basic building blocks of a Terraform configuration. In the example code below (again, for illustration), you use Terraform code, which in turn leverages VIO, to create compute and network resource instances and then assign the network ID to the compute instance to stand up a networked compute instance. As you will see, the properties of one resource may be passed as arguments to the creation of the next, such as using the network ID from the ‘network’ resource when creating the ‘subnet’ resource.
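A sketch of the resource section; note how the subnet and the instance consume the ID of the network resource:

# main.tf -- a network, a subnet on it, and an instance attached to it
resource "openstack_networking_network_v2" "network" {
  name           = "tf-net"
  admin_state_up = "true"
}

resource "openstack_networking_subnet_v2" "subnet" {
  name       = "tf-subnet"
  network_id = "${openstack_networking_network_v2.network.id}"
  cidr       = "10.0.0.0/24"
  ip_version = 4
}

resource "openstack_compute_instance_v2" "server" {
  name        = "tf-server"
  image_id    = "${var.image_id}"
  flavor_name = "${var.flavor_name}"

  network {
    uuid = "${openstack_networking_network_v2.network.id}"
  }
}

Running terraform plan previews the changes, and terraform apply provisions the three resources in dependency order.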

Infrastructure as code allows you to treat all aspects of operations as software and manage almost everything in code, including servers, storage, networks, log files, automated tests, deployment processes, and so on. The concept extends to making configuration changes as well.  When you want to make an infrastructure configuration change, you check the configuration code files out of your repository management system such as git, edit them to make the change you want, and check in the new version. So you can use git to make and track changes to your configuration code, just as developers do.


In this blog post, we have shown how you can implement the IAC paradigm by using Terraform running on VIO.  Download a 60-day VIO evaluation now and get started, or try out the VIO 4.0-based VMware Integrated OpenStack Hands-on Lab; no installation required.

Best Practice Recommendations for Virtual Machine Live Resize

As computing demands increase, server resources must “grow” or “scale” to meet those requirements.   There are two basic ways to scale computing resources. The first is to add more VMs, or “horizontally scale.” Say a web front end is using 90% of its allocated computing capacity. If traffic to the site increases, the current VM may not have enough CPU, memory, or disk available to keep up.  The site administrator can deploy an additional VM to support the growth in the workload.

Not all applications scale horizontally.  NFV workloads such as virtual routers or gateways may need to “vertically scale.”  For example, a virtual machine with 2 vCPUs / 4 GB of memory may need to double its vCPU and memory rather than adding a second virtual machine.  While the OpenStack ecosystem offers many tools for horizontal scaling (Heat, Terraform, etc.), options for scaling up are much more limited.  The Nova project has a long-pending proposal for live resize (hot plug); unfortunately, this feature still hasn’t been implemented.  Without live-resize, to increase the memory/CPU/disk of an instance, OpenStack must first power down the VM, migrate it to a flavor that offers more CPU/memory/disk, and finally power the VM back up.   Powering down the VM impacts SLAs and can trigger cascading failures for NFV-based workloads (route convergence, loops, etc.).

By leveraging the existing OpenStack resize API and the recommendations introduced in the upstream live-resize specification, VMware Integrated OpenStack (VIO) 4.0 offers the ability to resize any machine, as long as the guest OS supports it, without powering down the system. OpenStack users issue the standard OpenStack resize request.  The VMDK driver examines the CPU/memory/disk changes specified by the flavor and the settings of the virtual machine to determine whether the operation can be performed. If the guest OS supports live-resize, resources are added without a power-down.  If it cannot, the traditional Nova instance resize operation takes place (which powers off the instance).

Best Practice Recommendations:

When implementing live-resize in your environment, be sure to follow these recommendations:

  1. Cloud admins or application owners need to indicate that the guest OS can handle live resize of a specific resource using the image metadata “os_live_resize=<resource>.”  A list of guest OSes that support hot plug/live-resize can be found here.  Available resource options are disk, memory, and vCPU.   You can live-resize the VM based on any combination of the resource types:
    • Add CPU resources to the virtual machine
    • Add memory resource to the virtual machine.
    • Increase virtual disk size of the virtual machine
    • Add CPU and Memory, CPU and Disk, or Memory and Disk
    • Increase CPU, Memory, and Disk
    • Hot removal of CPU/memory is not supported
  2. If a resized VM exceeds the capacity of a host, VMware DRS can move the VM to another host within the cluster where resources are available.  DRS is simple to configure and extremely powerful.  My colleague Mathew Mayer wrote an excellent blog on load balancing vSphere clusters with DRS; be sure to take a look.
  3. Image Metadata updates for disk resize:
    • Linked clone must be set to false, because vCenter cannot live-resize linked-clone disks
    • The disk adapter must be non-IDE, because IDE disks do not support hot-add

See the diagram below:

[Diagram: image metadata settings for disk live-resize]

  4. VMware supports memory resize of 4 GB and above.  Resizing below 4 GB should work in most cases, but is not officially supported by VMware.

Live-resize Example Workflow:

Step 1). Upload image:

openstack image create --disk-format vmdk --container-format ova --property vmware_ostype="ubuntu64Guest" --property os_live_resize=vcpu,memory,disk --property img_linked_clone=false --file ./xenial-server-cloudimg-amd64.ova <some name>

Step 2). Disable linked clone (if using the default Ubuntu 16.04 cloud image bundled with VIO 4.0):

openstack image set --property img_linked_clone=false <some name>

Step 3). Boot a VM:

openstack server create --flavor m1.medium --image <some name> --nic net-id=<net-uuid> resize_vm

Step 4). Resize to the next flavor:

openstack server resize --flavor m1.large <resize_VM>

Step 5). Confirm resize:

openstack server resize --confirm <server>

Step 6). SSH to the VM and run the scripts below to bring the new resources online in the guest OS.

  • Memory online

for i in `grep offline /sys/devices/system/memory/*/state | awk -F / '{print $6}' | awk -F y '{print $2}'`; do echo "bring memory$i online"; echo online > /sys/devices/system/memory/memory$i/state; done

  • CPU online:
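A common approach, mirroring the memory loop above (a sketch; verify against your guest OS documentation):

for i in `grep -l '^0' /sys/devices/system/cpu/cpu*/online | awk -F / '{print $6}'`; do echo "bring $i online"; echo 1 > /sys/devices/system/cpu/$i/online; done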


Simplify your NFV workloads by leveraging the industry’s most stable and battle-tested OpenStack distribution.  Instead of re-architecting your virtual network and security to enable horizontal scaling, live-resize it!  It’s simple and hitless.   Download a 60-day evaluation now and get started, or try out the VIO 4.0-based VMware Integrated OpenStack Hands-on Lab; no installation required.

Leverage OpenStack for your Production Workloads

In my previous blog I wrote about VMware’s involvement in open source. The proliferation of open source projects in recent years has influenced how people think about technology, and how technology is adopted in organizations, for a few reasons. First, open source is more accessible: developers can download projects from GitHub to their laptops and quickly start using them. Second, open source delivers cutting-edge capabilities, and companies leverage that to increase the pace of innovation. Third, developers love the idea that they can influence, customize, and fix the code of the tools they’re using.  Many companies are now adopting an “open source first” strategy with the hope that they will not only speed up innovation but also cut costs, as open source is free.

However, while developers increasingly adopt open source, it often doesn’t come easy to DevOps and IT teams, who carry the heavy burden of bringing applications from the developer laptop to production. These teams have to think about stability, performance, security, upgrades, patching, and the list goes on. In those cases, enterprises are often happy to pay for an enterprise-grade version of the product in which all those things are already taken care of.

When applications are ready to move to production…

OpenStack is a great example. Many organizations are keen to run their applications on top of an open source platform that is also the industry standard. But that doesn’t come without deployment and manageability challenges, and that’s where VMware provides more value to customers.

VMware Integrated OpenStack (VIO) makes it easier for IT to deploy and run an OpenStack cloud on top of their existing VMware infrastructure. Combining VIO with the enterprise-grade capabilities of the VMware stack provides customers with the most reliable and production-ready OpenStack solution. There are three key reasons for this statement: a) VMware provides best-of-breed, production-ready OpenStack-compatible infrastructure; b) VIO is fully tested for both business continuity and compatibility; and c) VMware delivers capabilities for day-2 operations. Let me go into details for each of the three.

Best-of-breed OpenStack-compatible infrastructure

First, VMware Integrated OpenStack is optimized to run on top of VMware Software Defined Data Center (SDDC), leveraging all the enterprise-grade capabilities of VMware technologies such as high availability, scalability, security and so on.

  • vSphere for Nova Compute: VIO takes advantage of vSphere capabilities such as the Distributed Resource Scheduler (DRS) to achieve optimal VM density and vMotion to protect tenant workloads against failures.
  • VMware NSX for Neutron: advanced networking services with massive scale and throughput, and with rich set of capabilities such as private networks, floating IPs, logical routing, load balancing, security groups and micro-segmentation.
  • VMware vSAN/3rd party storage for Cinder/Glance: VIO works with any vSphere-validated storage (we have the largest hardware compatibility list in the industry). VIO also brings Advanced Storage Policies through VMware vSAN.

Battle hardened and tested

OpenStack can be deployed on many combinations of storage, network, and compute hardware and software from multiple vendors. Testing all combinations is a challenge, and oftentimes customers who choose the DIY route have to test their own combination of hardware and software for production workloads. VMware Integrated OpenStack, on the other hand, is battle-hardened and tested against all VMware virtualization technologies to ensure the best possible user experience, from deployment to management (upgrades, patching, etc.) to usage. In addition, VMware provides the broadest hardware compatibility coverage in the industry today (tested in production environments).

Key capabilities for Day-2 Operations

VMware Integrated OpenStack brings operations capabilities to OpenStack users.  For example, built-in command line interface (CLI) tools enable you to troubleshoot and monitor your OpenStack deployment and the status of OpenStack services. Pre-defined workflows automate common OpenStack operations such as adding/removing capacity, configuration changes, and patching.

In addition, out-of-the-box integrations with vRealize Operations, vRealize Log Insight, and vRealize Business for Cloud provide monitoring, troubleshooting, and cost visibility for your OpenStack infrastructure.

Finally, to add to all of this, another benefit is that our customers have only one vendor and one support number to call in case of a problem. No finger pointing, no need to juggle different support plans. Easy!

To learn more, visit the VIO web page and product feature walkthrough.

VMware, Open Source and OpenStack

Last week at VMworld, VMware’s biggest event of the year, I attended a few sessions on various topics related to open source, and was impressed with the number of people who showed interest in those sessions. Our customers are looking to leverage open source products on top of VMware technologies, and VMware is more active in the open source community than one might think.

Source: https://vmware.github.io/

We, at VMware, use open source in our products, make thousands of contributions every year to many upstream projects, and create new open source projects that are being used by many. Some of the open source projects created by VMware include:

And the list goes on. You can learn about additional projects here. VMware’s investment in open source makes a lot of sense when you think about it. First, we would like to influence and engage with our customers, who might be looking at open source projects to improve the way they work (see Clarity, for example). Second, we would like to improve our products and tools based on feedback and support from the community. And lastly, a lot of growth is happening at the edge of technology, and we want to leverage the opportunity.

One of the most important open source projects VMware is involved in is OpenStack. At VMworld last week, we announced the new release of VMware Integrated OpenStack, the OpenStack distribution from VMware. In the last few years we have been working hard to deliver an OpenStack distribution that works seamlessly on the VMware SDDC, without you having to spend hours on customization or professional services.

History of Working with the OpenStack Community

VMware has a history of open source contributions to the OpenStack community starting in 2010.  Initially it was via the Nicira team’s work on Open vSwitch (OVS) (Nicira was acquired by VMware); later, it was via other projects including Nova, Neutron, Cinder, Glance, and Ceilometer. We are the #1 contributor to the Neutron project and the #6 contributor to the Nova project. In addition, we share all of our compute, network, and storage drivers with the community.

Source: http://stackalytics.com/

Compliance with Interop Working Group guidelines

VMware Integrated OpenStack complies with the interoperability guidelines defined by the OpenStack Interop Working Group. This group drafts the guidelines that include a list of capabilities that a “true OpenStack” cloud must expose to end users, a list of tests they must pass in order to prove it, and a list of designated sections of the upstream codebase they must use to provide those capabilities. For example, automation tools that leverage the OpenStack APIs should work on VMware Integrated OpenStack as they would on any other OpenStack distribution. Interoperability prevents vendor lock-in because it allows you to easily switch from your current OpenStack deployment to a different vendor’s distribution.

One area where developers may have been concerned in the past is image formats, since the VMware platform currently utilizes the OVA, VMDK, and ISO disk formats with Glance.  However, tools exist to convert other formats to the formats we have adopted (for example, qemu-img to convert qcow2 to VMDK).  In addition, significant community work in the area of image building, with projects like Diskimage-Builder and Packer, enables users to auto-generate a VMware-compatible image relatively quickly.
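For example, to convert a qcow2 cloud image to a stream-optimized VMDK suitable for upload to Glance (file names are illustrative):

qemu-img convert -f qcow2 -O vmdk -o subformat=streamOptimized ubuntu.qcow2 ubuntu.vmdk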

VMware is committed to keeping VMware Integrated OpenStack open by ensuring all of its drivers are open source, by ensuring vendor interoperability based on the Interop Working Group guidelines, and by being a very active participant in the OpenStack community.

To learn more, visit the VIO web page and product feature walkthrough.

VMware Integrated OpenStack 4.0: What is New

VMware announced VMware Integrated OpenStack 4.0 Data Center edition at VMworld in Las Vegas.  We are truly excited about our latest OpenStack distribution, which gives our customers the new features and enhancements included in the Ocata release, with a bundled container platform option included. For OpenStack cloud admins, the 4.0 Data Center edition is also about enhanced platform performance and manageability, increased scale, and advanced networking.

New Features Include:

OpenStack Features available in Newton + Ocata:

VIO 4.0 is based on the upstream Ocata release.  Ocata is the first release in which Cells v2 is the default deployment configuration for OpenStack Nova; a single cell is supported in Ocata.  Cell support enables future scale-out of an OpenStack cloud in a more distributed fashion.  The placement service, introduced in the Newton release, is now an essential part of VIO 4.0 in determining the optimum placement of VMs. Not to be mistaken for VMware DRS, the OpenStack placement service allows a cloud admin to set up pools of resources and then set up allocations for resource providers. VM placement policies can be built on top of those resources for optimal placement of VMs (additional blogs to follow).

New capabilities in OpenStack Horizon include enhanced workload placements, LBaaSv2, and Heat template versions, to name a few. Heat template versions provide users with a list of available template versions and the functions for a particular template version.

Resource tagging, Cinder availability zones, enhanced Cinder snapshots, and Heat templates with conditions are some of the other notable enhancements available from upstream release in VIO 4.0 release.

vRealize Automation Integration

This is another great example of VMware empowering customers to leverage existing investments in infrastructure management and tooling. The integration provides enterprise customers the ability to consume VIO resources with governance. Using vRA XaaS blueprints, a cloud admin can automate OpenStack user and project creation, governance-based Heat template deployment, and other common aspects of VIO consumption through vRA governance. Once OpenStack resources are on-boarded, the vRA integration allows cloud admins and users to view the VIO Horizon dashboard directly from the vRA portal using SSO integration with vIDM.

Networking Advanced Capabilities

VIO 4.0 greatly simplifies network addressing and reachability management by leveraging dynamic routing.  Instead of relying on NAT to provide address uniqueness, cloud admins can leverage Neutron address pools or the get-me-a-network feature to define a scope of unique address spaces.  Tenants needing unique address space can allocate subnets from this pool without worrying about overlapping with another tenant.  With BGP routing, another new VIO 4.0 feature, cloud admins can enable end-to-end connectivity dynamically without managing low-level static routes.
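As a sketch of that address-pool workflow (names and prefixes are illustrative), an admin publishes a shared subnet pool, and tenants carve subnets out of it:

# Admin: create a shared address scope and a subnet pool inside it
openstack address scope create --share --ip-version 4 routable-scope
openstack subnet pool create --address-scope routable-scope --share --pool-prefix 10.40.0.0/16 --default-prefix-length 24 tenant-pool

# Tenant: allocate a subnet from the pool; no overlap with other tenants
openstack subnet create --subnet-pool tenant-pool --network my-net my-subnet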

Enhanced Neutron availability zone support allows OpenStack tenants to place NSX ESG workloads in different physical clusters, across different racks, for increased availability.  Finally, Firewall-as-a-Service and guest VLAN tagging are some of the other major Neutron enhancements.

Enhanced Platform Support

We are extremely proud of the multi-vCenter support in VIO 4.0.  Multi-VC support with NSX-T gives VIO customers the ability to define multiple fault/availability zones, avoiding a single point of failure.  Multi-VC can also be used to scale out VIO by adding more vCenters upon reaching concurrency or total object limits.

Enterprise workloads require both horizontal and vertical scaling.  While horizontal scaling is made simple through Heat or Terraform, vertical scaling often requires a downtime/outage window.  With VIO 4.0, cloud admins can offer Glance images that support live resize: OpenStack tenants can increase the CPU, memory, and disk of their virtual machines without powering them down.  VIO 4.0 also provides increased resiliency with vCenter HA and LVM support on the OMS server to allow flexible storage growth.

Enterprise Grade Container

Finally, VIO offers enterprise-grade Kubernetes with built-in security, HA, and scale (up or down).  Out of the box, VIO provides cloud admins with simplified day-1 deployment automation for Kubernetes with multi-tenancy and user management.  Once deployed, VIO Kubernetes integrates easily with the SDDC vRealize suite of products, solving day-2 operational challenges in container life-cycle management, monitoring, and logging.  Persistent storage, load balancing, and container networking powered by VMware NSX are also standard out of the box.

Adopting agile processes is a key driver helping businesses digitally transform.  It is changing not only the way applications are coded, but also the process by which they are built and operated.  In the new DevOps-driven era, infrastructure admins and developers are solving the same problem: faster time to value.  VIO 4.0 is the answer for any organization looking to digitally transform its business.

VIO 4.0 Data Center edition will enable DevOps teams to build and deliver:

  • Container based micro-services, in addition to traditional VM based workloads
  • End-to-end infrastructure automation leveraging existing tools
  • OpenStack deployment scale out using multi-VC, OpenStack placement API and Cells v2
  • Advanced Neutron and container networking to simplify addressing and reachability while ensuring application security
  • Solving Day 2 operational challenges in Infrastructure life cycle management

Supported by the most rock-solid VMware SDDC infrastructure, VIO enables faster time to value for businesses.

Try VMware Integrated OpenStack Today

Take a free test drive, no installation required, with the VMware Integrated OpenStack Hands-on Lab.  Try out the latest VIO 4.0 HOL if you are attending VMworld Vegas or Barcelona.


Introducing VMware Integrated OpenStack 4.0

We’re excited to announce the new release of VMware Integrated OpenStack 4.0 today at VMworld US 2017, as part of the VMware SDDC story. You can read more about it here.

VMware Integrated OpenStack (VIO) is an OpenStack distribution supported by VMware, optimized to run on top of VMware’s SDDC infrastructure. In the past few months we have been hard at work, adding additional enterprise grade capabilities into VIO, making it even more robust, scalable and secure, yet keeping it easy to deploy, operate and use.

VMware Integrated OpenStack 4.0 is based on Ocata, and some of the highlights include:

Containers support – users can run VMs alongside containers on VIO. Out-of-the-box container support enables developers to consume Kubernetes APIs, leveraging all the enterprise grade capabilities of VIO such as multi-tenancy, persistent volumes, high availability (HA), and so on.

Integration with vRealize Automation – vRealize Automation customers can now embed OpenStack components in blueprints. They can also manage their OpenStack deployments through the Horizon UI as a tab in vRealize Automation. This integration provides additional governance as well as single-sign-on for users.

Multi vCenter support – customers can manage multiple VMware vCenters with a single VIO deployment, for additional scale and isolation.

Additional capabilities for better performance and scale, such as live resize of VMs (changing RAM, CPU and disk without shutting down the VM), Firewall as a Service (FWaaS), CPU pinning and more.

Our customers use VMware Integrated OpenStack for a variety of use cases, including:

  • Developer cloud – providing a public cloud-like user experience to developers, as well as more choice of consumption (web UI, CLI, or API), self-service, and programmable access to VMware infrastructure. With the new container management support, developers will be able to consume Kubernetes APIs.
  • IaaS platform for enterprise automation – adding automation and self-service provisioning on top of best-of-breed VMware SDDC.
  • Advanced, programmable network – leveraging network virtualization with VMware NSX for advanced network capabilities.

Our customers tell us (consistently) that VIO is easy to deploy (“it just worked!”) and manage. Since it’s deployed on top of VMware virtualization technologies, they are able to deploy and manage it by themselves, without hiring new people or professional services. Their development and DevOps teams like VIO because it gives them the agility and user experience they want, with self-service and standard OpenStack APIs.

In most cases, in a short amount of time (a few weeks!), customers trust VIO enough to run their business-critical applications, such as an e-commerce website or online travel system, in production.

VMware Integrated OpenStack will be available as a standalone product later this quarter. For more information go to our website, check out the product walkthrough and try out the hands-on lab.

If you are attending VMworld, please stop by our booth (#1139) to see demos and speak with OpenStack specialists. We’re looking forward to seeing you!