
Author Archives: Xiao Gao

About Xiao Gao

Xiao Hu Gao is a Technical Marketing Manager at VMware CNABU. He enjoys speaking to customers and partners about the benefits of running Kubernetes on top of VMware technologies. Xiao has a background in DevOps, data center design, and software-defined networking. He is a CCIE (Emeritus) and holds a few patents in the areas of security and cloud.

VMware Integrated OpenStack Glance Image Best Practices

A production cloud isn’t very useful unless users can run the virtual machine images their applications require.  A cloud image is a single file containing a virtual disk with an operating system installed.  For many organizations, the simplest way to obtain a virtual machine image is to download a prebuilt base cloud image with a pre-packaged version of cloud-init to support user-data injection.  Once downloaded, an organization can use tools such as Packer to further customize and harden the base image before rolling it to production.  Most operating system projects and vendors maintain official images for direct download.  Openstack.org maintains a list of the most commonly used images here.


Recently we received some queries about the proper way to import prebuilt QCOW2 native cloud images into VMware Integrated OpenStack.  The images imported correctly but would not boot successfully.  Common symptoms were “no Operating System found” messages from the virtual machine’s BIOS, the guest OS hanging during the boot cycle, or DHCP failure when trying to acquire an IP address.  After further analysis, the problems were caused either by older upstream tooling or by simple adjustments required in the cloud image to match the vSphere environment.  Specifically:

  • Some storage vendors require the streamOptimized image format.
  • Guest images attempt to write the boot log to ttyS0, but the serial interface is not available on the VM.
  • Defects in earlier versions of the qemu-img tool when creating streamOptimized images.
  • DHCP binding failure caused by Predictable Network Interface Naming.

To overcome these issues, we came up with the following set of best practices to help you simplify the image import process.  I thought it would be a good idea to share our recommendations so others can avoid running into similar issues.

1) In VIO 3.x and earlier, serial console output is not enabled.  When booting an image that requires serial console support, use libguestfs to edit grub.cfg and remove all references to “console=ttyS0”.  Libguestfs provides a suite of tools for accessing and editing VM disk images.  Once installed, the guestmount command-line tool can be used to mount QCOW2-based images.  By default, the disk image mounts in read-write mode.  More info on libguestfs here.

# guestmount -a xxx-cloudimg-amd64.img -m /dev/sda1 /mnt

# vi /mnt/boot/grub/grub.cfg

# guestunmount /mnt
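
If you prefer a non-interactive edit, the same change can be scripted.  The sketch below assumes a single root partition on /dev/sda1 and that the GRUB entries reference console=ttyS0 (adjust the device and pattern for your image):

# guestmount -a xxx-cloudimg-amd64.img -m /dev/sda1 /mnt
# sed -i 's/console=ttyS0[^ "]*//g' /mnt/boot/grub/grub.cfg
# guestunmount /mnt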


2) VMware vSAN requires all images to be in streamOptimized format.  When converting to VMDK format, use the -o flag to specify the subformat as streamOptimized:

# qemu-img convert -f qcow2 -O vmdk -o subformat=streamOptimized -o adapter_type=lsilogic xxx-server-cloudimg-amd64.img xxx-server-cloudimg-amd64.vmdk ; printf '\x03' | dd conv=notrunc of=xxx-server-cloudimg-amd64.vmdk bs=1 seek=$((0x4))

A few additional items to call out:

  • “lsilogic” is the recommended adapter type.  Although it is possible to set the adapter type during image upload into Glance, we recommend always setting the adapter type as part of the image conversion process.
  • Older versions of the qemu-img tool contain a bug that causes problems with the streamOptimized subformat.  The following command can be run after converting an image to correct the problem: printf '\x03' | dd conv=notrunc of=xxx-server-cloudimg-amd64.vmdk bs=1 seek=$((0x4)).   It is harmless to execute the printf even if you are using a version of qemu-img that has the fix: all the command does is set the VMDK version to “3”, which a fixed version of qemu-img will already have done.  If you are not sure which version of qemu-img you have, apply the printf command (see the quick check below).
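
To confirm the version byte was set, you can inspect offset 0x4 of the VMDK header.  A minimal check, assuming xxd is available, might look like this (the single byte displayed should be 03):

# xxd -s 4 -l 1 xxx-server-cloudimg-amd64.vmdk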

3) In the case of CentOS, the udev rule ln -s /dev/null /etc/udev/rules.d/80-net-name-slot.rules included in the image bundle is ignored during CentOS image boot-up, and Predictable Network Interface Naming is enabled as a result.  Our recommendation is to disable predictable naming using GRUB.  You can find more information in my previous blog.

4) Finally, with the Cirros QCOW2 image, preserve the adapter type as ‘ide’ during the QCOW2 to VMDK conversion process.  There is currently an upstream bug open.

# qemu-img convert -f qcow2 -O vmdk /var/www/images/cirros-0.3.5-x86_64-disk.img /var/www/images/cirros-0.3.5-x86_64-disk.idk.vmdk

qemu-img defaults to IDE if no adapter type is specified.

Once converted, you can look at the image metadata and validate information such as disk and image type before uploading into the Glance image repository.  Image metadata can be viewed by displaying the first 20 lines of the VMDK:

# cat xxx-server-cloudimg-amd64.vmdk | head -20
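
If the binary framing makes the header hard to read, an alternative sketch is to filter the embedded descriptor for the fields of interest.  This assumes the descriptor sits within the first couple of kilobytes of the file, as it typically does for qemu-img output:

# head -c 2048 xxx-server-cloudimg-amd64.vmdk | strings | grep -Ei 'createType|adapterType|version'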

You can add the newly converted image into Glance using the OpenStack CLI or Horizon.  Set the public flag when the image is ready for end-user consumption.

OpenStack CLI:

# openstack image create --disk-format vmdk --public --file ./xxx-server-cloudimg-amd64.vmdk --property vmware_adaptertype='lsiLogic' --property vmware_disktype='streamOptimized' <Image display name>

Horizon:


Your cloud is only as useful as the application and virtual machine images you can support.  By following the simple best-practice guidelines above, you will deliver a better experience to your end users by offering more virtual machine varieties with significantly reduced lead time.

Visit us at VMworld in Las Vegas; we have a large number of Demo and speaking sessions planned:

MGT2609BU:  VMware Integrated OpenStack 4.0: What’s New
MGT1785BU:  OpenStack in the Real World: VMware Integrated OpenStack Customer Panel
NET1338BU:  VMware Integrated OpenStack and NSX Integration Deep Dive
FUT3076BU:  Simplifying Your Open-Source Cloud With VMware
LDT2834BU:  Running Hybrid Applications: Mainframes to Containers
SPL182001U:  VMware Integrated OpenStack (VIO) – Getting Started
ELW182001U: VMware Integrated OpenStack (VIO) – Getting Started
SPL188602U: vCloud Network Functions Virtualization – Advanced Topics
LDT1844BU: Open Source at VMware: A Key Ingredient to Our Success and Yours

How to Deal with DHCP Failure Caused by Consistent Network Device Naming (VIO)


While testing the latest CentOS 7 QCOW2 cloud image, we ran into an issue where the guest operating system was unable to obtain a DHCP IP address after a successful boot.  After some troubleshooting, we quickly realized the NIC name was assigned based on predictable consistent network device naming (CNDN). You can read more about CNDN here.  The network script required to bring up the network interface was missing from /etc/sysconfig/network-scripts; only the default ifcfg-eth0 script was present. The network interface remained in DOWN status since no interface script was available, so the Linux dhclient could not bind to the interface, hence the DHCP failure.

To fix the symptom, we simply edited and renamed the interface script to reflect the predictable name, then restarted networking.  But since this problem will show up again when booting a new VM, we need a permanent fix in the image template.

It turns out predictable naming was intended to be disabled in the CentOS 7 cloud image by a udev rule that links /etc/udev/rules.d/80-net-name-slot.rules to /dev/null.  The system ignored this setting during boot-up, and predictable naming was enabled as a result.

There are multiple ways to work around this:

Solution 1 – Update Default GRUB to Disable CNDN:

1) To restore the old naming convention, edit the /etc/default/grub file and add net.ifnames=0 and biosdevname=0 at the end of the GRUB_CMDLINE_LINUX variable:

Example:   GRUB_CMDLINE_LINUX="rd.lvm.lv=centos/swap vconsole.keymap=us crashkernel=auto rd.lvm.lv=centos/root vconsole.font=latarcyrheb-sun16 rhgb quiet net.ifnames=0 biosdevname=0"

2) Review the new configuration by printing the output to STDOUT:

# grub2-mkconfig

3) Update the grub2 configuration after review:

# grub2-mkconfig -o /boot/grub2/grub.cfg

 

Solution 2: Enable Network Manager

1) Install Network Manager:

# yum install NetworkManager

2) Start Network Manager

# service NetworkManager start

3) Run chkconfig to ensure Network Manager starts after system reboot

# chkconfig NetworkManager on
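
On a systemd-based CentOS 7 image, the equivalent native commands should achieve the same result (the service and chkconfig wrappers simply forward to systemd):

# systemctl start NetworkManager
# systemctl enable NetworkManager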

Solution 3: Create Custom Udev Rule

We will create a udev rule to override the unintended predictable name.

1) Create a new 80-net-name-slot.rules file in /etc/udev/rules.d/:

# touch /etc/udev/rules.d/80-net-name-slot.rules

2) Add the line below to the new 80-net-name-slot.rules:

NAME=="", ENV{ID_NET_NAME_SLOT}!="", NAME="eth0"

Final Implementation

All three solutions solved the problem.  Approach #1 involves updating the GRUB config, so handle it with care. Solution #2 is a very hands-off approach that allows NetworkManager to control interface states.  Most sysadmins have a love/hate relationship with NetworkManager, however: it simplifies management of WiFi interfaces but can lead to unpredictable behavior in interface states. The most common concern is interfaces being brought up by NetworkManager when they should stay down because the sysadmin is not ready to turn up those NICs yet. The OpenStack community has also reported cloud-init timing-related issues, although we did not have any problems enabling it on the CentOS 7 cloud image.  Solution #3 needs to align with overall deployment requirements in a multi-NIC environment.

In reality, CNDN was designed to solve NIC naming issues in a physical server environment; it stops being useful with virtual workloads.  Most cloud workloads deploy with a single NIC, and that NIC is always eth0.  Consequently, disabling CNDN makes sense, and solution #1 is what we recommend.

Once the CentOS VM image is in the desired state, create a snapshot, then refer to the OpenStack documentation to upload it into Glance.  As a shortcut for validating the new image, instead of creating a snapshot, downloading it, and uploading it back into Glance, it is perfectly fine to boot a VM directly from the snapshot.   Please refer to the VIO documentation for the recommended steps.

Be sure to test this out on your VMware Integrated OpenStack setup today.  If you don’t have VIO yet, try it on our VMware Integrated OpenStack Hands-On-Lab; no installation required.

OpenStack Summit:

We will be at the OpenStack Summit in Boston. If you are attending the conference, swing by the VMware booth or attend one of our many sessions:

OpenStack and VMware – Use the Right Foundation for Containers

Digital Transformation with OpenStack for Modern Service Providers

Is Neutron challenging to you – Learn how VMware NSX is the solution for regular OpenStack Network & Security services and Kubernetes

OpenStack and OVN – What’s New with OVS 2.7 

DefCore to Interop and back again: OpenStack Programs and Certifications Explained

Senlin, an ideal bridge between NFV Orchestrator and OpenStack 

High availability and scalability management of VNF

How an Interop Capability becomes part of the OpenStack Interop Guidelines

OpenStack Interoperability Challenge and Interoperability Workgroup Updates: The Adventure Continues

Lightning Talk:

Openstack and VMware getting the best of both. 

Demos:

Station 1: VMware NSX & VMware Integrated OpenStack

Station 2: NFV & VMware Integrated OpenStack

 

VMware Integrated OpenStack 3.1 GA. What’s New!

VMware announced general availability (GA) of VMware Integrated OpenStack 3.1 on February 21, 2017. We are truly excited about our latest OpenStack distribution, which gives customers enhanced stability on top of the Mitaka release and a streamlined user experience with Single Sign-On support through VMware Identity Manager.   For OpenStack Cloud Admins, the 3.1 release also brings enhanced integrations that let them take further advantage of battle-tested vSphere infrastructure and operations tooling, providing enhanced security, OpenStack API performance monitoring, brownfield workload migration, and seamless upgrade between central and distributed OpenStack management control planes.


VIO 3.1 is available for download here.  New features include:

  • Support for the latest versions of VMware products. VMware Integrated OpenStack 3.1 supports and is fully compatible with VMware vSphere 6.5, VMware NSX for vSphere 6.3, and VMware NSX-T 1.1.   To learn more about vSphere 6.5, visit here; for NSX for vSphere 6.3 and NSX-T, visit here.
  • NSX Policy Support in Neutron. NSX administrators can define security policies that the OpenStack Cloud Admin shares with cloud users. Depending on the policy set by the OpenStack Cloud Admin, users can either create their own rules, bounded by predefined rules that cannot be overridden, or use only the predefined rules.  The NSX provider policy feature allows infrastructure admins to enable enhanced security insertion and assure that all workloads are developed and deployed based on standard IT security policies.
  • New NFV Features. Further expanding on the VIO 3.0 capability to leverage existing workloads in your OpenStack cloud, you can now import vSphere VMs with NSX network backing into VMware Integrated OpenStack.  The ability to import vSphere VM workloads into OpenStack and run critical Day 2 operations against them via OpenStack APIs enables you to quickly move existing development projects or production workloads to the OpenStack framework.  VM import steps can be found here.  In addition, full passthrough using VMware DirectPath I/O is supported.
  • Seamless update from compact mode to HA mode. If you are updating from VMware Integrated OpenStack 3.0 deployed in compact mode to 3.1, you can seamlessly transition to an HA deployment during the update. Upgrade docs can be found here.
  • Single Sign-On integration with VMware Identity Manager. You can now streamline authentication for your OpenStack deployment by integrating it with VMware Identity Manager.  SSO integration steps can be found here.
  • Profiling enhancements.  Instead of writing data into Ceilometer, OpenStack OSProfiler can now leverage vRealize Log Insight to store profile data. This approach provides enhanced scalability for OpenStack API performance monitoring. Detailed steps on enabling OpenStack profiling can be found here.

Try VMware Integrated OpenStack Today


Take Advantage of Nova Flavor Extra-specs and vSphere QoS to Make Delivering SLAs Much Simpler

Resource and over-subscription management are among the most challenging tasks facing a Cloud Admin. To deliver a guaranteed SLA, one method OpenStack Cloud Admins have used is to create separate compute aggregates with different allocation/over-subscription ratios. Production workloads that require guaranteed CPU, memory, or storage are placed into a non-oversubscribed aggregate with a 1:1 ratio, while dev workloads may be placed into a best-effort aggregate with an N:1 ratio. While this simplistic model accomplishes its purpose of an SLA guarantee on paper, it comes with huge CapEx and/or high overhead for capacity management and augmentation. Worse yet, because host-aggregate-level over-subscription in OpenStack is simply static metadata consumed by the nova scheduler during VM placement, not real-time VM state or consumption, huge resource imbalances within a compute aggregate and noisy-neighbor issues within a nova compute host are common occurrences.

New workloads can be placed on a host running close to capacity (in real-time consumption) while the remaining hosts run idle, due to differences in application characteristics and usage patterns. The lack of automated day 2 resource rebalancing further exacerbates the issue. To provide white-glove treatment to critical tenants and workloads, Cloud Admins must deploy additional tooling just to discover basic VM-to-hypervisor mapping based on OpenStack project IDs. This is both expensive and ineffective in meeting SLAs.

Over-subscription works only if resource consumption can be tracked and balanced across a compute cluster, and noisy-neighbor issues can be solved only if the underlying infrastructure supports quality of service (QoS).  By leveraging OpenStack Nova flavor extra-spec extensions along with vSphere’s industry-proven per-VM resource allocation controls (expressed using shares, limits, and reservations), OpenStack Cloud Admins can deliver enhanced QoS while maintaining uniform consumption across a compute cluster.  It is possible to use image metadata to deliver QoS as well; this blog focuses on Nova flavor extra-specs.

The VMware Nova flavor extension to OpenStack was first introduced upstream in Kilo and is officially supported in VIO release 2.0 and above. Additional requirements are outlined below:

  • Requires VMware Integrated OpenStack version 2.0.x or greater
  • Requires vSphere version 6.0 or greater
  • Network Bandwidth Reservation requires NIOC version 3
  • VMware Integrated OpenStack access as a cloud administrator

Resource reservations can be set for the following resource categories:

  • CPU (MHz)
  • Memory (MB)
  • Disk IO (IOPS)
  • Network Bandwidth (Mbps)

Within each resource category, the Cloud Admin has the option to set:

  • Limit – Upper bound; resource utilization is not allowed to exceed this limit
  • Reservation – Guaranteed minimum reservation
  • Share Level – The allocation level. This can be 'custom', 'high', 'normal', or 'low'.
  • Shares – If 'custom' is used, this is the number of shares.

Complete Nova flavor extra-spec details and deployment options can be found here.  vSphere resource management capabilities and configuration guidelines are a great reference as well and can be found here.

Let’s look at an example using Hadoop to demonstrate VM resource management with flavor extra-specs. Data flows from Kafka into HDFS, and every 30 minutes a batch job consumes the newly ingested data.  Exact details of the Hadoop workflow are outside the scope of this blog; if you are not familiar with Hadoop, some details can be found here.  Resources required for this small-scale deployment are outlined below:

Node Type            Core (reserved – max)   Memory (reserved – max)   Disk   Network Limit
Master / Name Node   4                       16 G                      70 G   500 Mbps
Data Node            4                       16 G                      70 G   1000 Mbps
Kafka                0.4 – 2                 2 – 4 G                   25 G   100 Mbps

Based on the above requirements, the Cloud Admin needs to create Nova flavors to match the maximum CPU / memory / disk requirements for each Hadoop component.  Most OpenStack Admins should be very familiar with this process.
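
A minimal sketch of the flavor creation, assuming the sizing in the table above (flavor names are illustrative):

# openstack flavor create --vcpus 4 --ram 16384 --disk 70 hadoop-namenode
# openstack flavor create --vcpus 4 --ram 16384 --disk 70 hadoop-datanode
# openstack flavor create --vcpus 2 --ram 4096 --disk 25 kafka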


Based on the reservation amounts, attach the corresponding Nova extra specs to each flavor:
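
A sketch of what those extra specs might look like for the Data Node flavor, assuming roughly 2 GHz per reserved vCPU.  The quota:* keys are the VMware-driver flavor extra specs (CPU in MHz, memory in MB, network in Mbps); adapt the values to your own hosts:

# openstack flavor set hadoop-datanode \
    --property quota:cpu_reservation=8000 \
    --property quota:memory_reservation=16384 \
    --property quota:vif_limit=1000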


Once the extra specs are mapped, confirm the settings using the standard nova flavor-show command:
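
For example, using the flavor name assumed above, the extra_specs field of the output should list the quota:* properties that were just set:

# nova flavor-show hadoop-datanode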


In just three simple steps, the resource reservation settings are complete.   Any new VM created from the new flavors through OpenStack (API, command line, or the Horizon GUI) will have its resource requirements passed down to vSphere (existing VMs can be moved to the new flavors using the nova rebuild feature).

Instead of best effort, vSphere will guarantee resources based on the Nova flavor extra-spec definitions. Specific to our example, 4 vCPU / 16 G / max 1 Gbps network throughput will be reserved for each Data Node, the Name Node gets 4 vCPU / 16 G / max 500 Mbps throughput, and Kafka nodes will have 20% vCPU / 50% memory reserved. Instances boot into an “Error” state if the requested resources are not available, ensuring that existing workload SLAs are not violated. You can see the resource reservations created by the vSphere Nova driver reflected in the vCenter interface:

(vCenter screen captures: Name Node CPU / memory and network bandwidth reservations, Data Node CPU / memory and network bandwidth reservations, Kafka node CPU / memory and network bandwidth reservations.)

vSphere will enforce strict admission control based on real-time resource allocation and load. New workloads are admitted only if the SLA can be honored for both new and existing applications. Once a workload is deployed, vSphere DRS can automatically rebalance workloads between hypervisors to ensure optimal host utilization (future blog) and avoid noisy-neighbor issues. Both features are available out of the box; no customization is required.

By taking advantage of vSphere VM resource reservation capabilities, OpenStack Cloud Admins can finally enjoy the superior capacity and over-subscription capabilities a cloud environment offers. Instead of deploying excess hardware, Cloud Admins can decide when and where additional hardware is needed based on real-time application consumption and growth. The ability to consolidate, simplify, and control your infrastructure helps reduce power and space requirements and eliminates the need for out-of-box customization in tooling or operational monitoring. I invite you to test out Nova extra-specs in your VIO environment today, or encourage your IT team to try our VMware Integrated OpenStack Hands-On-Lab; no installation is required.

 

VIO Speed Challenge – Can a New Guy Get a Production-Quality OpenStack Set Up and Running in Under Three Hours?

This article describes an OpenStack setup. To put it into context, I recently joined VMware as a Technical Marketing Manager responsible for VMware Integrated OpenStack technical product marketing and field enablement. After a smooth onboarding process (accounts, benefits, etc.), my first task was to get the lab environments I inherited up and running again. I came from a big OpenStack shop where deployment automation was built in-house using both Puppet and Ansible, so my first questions were: Where’s the site hiera data? What are the settings for the Cobbler environment? Which playbook spins up the controller VM nodes? Once the VMs are up, which playbooks deploy and configure keystone, nova, cinder, neutron, and so on? How do we seed the environment once OpenStack is configured and running? It was always a multi-person, multi-week effort, and everyone maintained a cheat sheet loaded with deployment caveats and workarounds (a long list, since increased features = increased complexity = increased caveats = slow time to market). Luckily, with VMware Integrated OpenStack, all of those decision points and complexity are abstracted away from the Cloud Admin. Really, the only decision to make is:

  • Restore from database backup – Restore from backup processes can be found here.
  • Redeploy and Import.

Redeploy seemed more interesting, so I decided to give it a shot (I will save restore for a different blog if there is interest).

Note: Redeploy will not clean up vSphere resources. You can re-import VM resources from vCenter once deployment completes. Refer to the instructions here.

Lab Topology:

The lab environment consists of 3 ESXi clusters:

  • Management (vCenter, VIO, vROps, vRealize Log Insight, vRealize Operations Manager, NSX management, NSX Controller, etc.)
  • Edge (OpenStack neutron tenant resources – NSX Edge)
  • Production (this is where tenant VMs reside)

From a networking perspective, a single NSX-v instance spans all three clusters.


VIO OpenStack Deployment:

VMware Integrated OpenStack can be deployed in two ways: full HA or compact. Compact mode was introduced in 3.0 and requires significantly fewer hardware resources and less memory than full HA mode; all OpenStack components can be deployed in two VMs. Enterprise-grade OpenStack high availability can still be achieved by taking advantage of the HA capabilities of the vSphere infrastructure, which makes compact mode useful for multiple small deployments. Full HA mode provides high availability at the application layer. The HA deployment consists of a dedicated management node (aka the OMS node), 3 DB nodes, 2 HAProxy nodes, and 2 controller nodes. Ceilometer can be enabled after the VIO base stack deployment completes. Enterprise HA mode is what I chose to deploy.

Since this environment was new to me, I wanted to avoid playing detective and spending days reverse engineering the entire setup. Luckily, VIO has a built-in export configuration capability. Simply navigate to the OpenStack deployment (Home -> VMware Integrated OpenStack -> OpenStack Deployments), and all OpenStack settings can be exported with a single click.


In some ways the exported configuration file is similar to traditional site hiera data, except much simpler to consume. The data is in JSON format, so it is easy to read, and only information specific to my OpenStack setup is included. I no longer need to worry about kernel/TCP settings, which interfaces to configure for management vs. data, NIC bonding, driver settings, HAProxy endpoints, and so on. Even the Ansible inventory is automatically created and managed based on the deployment data. This is because VIO is designed to work out of the box with the VMware suite of products, allowing Cloud Admins to focus on delivering SLAs instead of maintaining thousands of hard-to-remember key/value pairs. Advanced users can still look into the inventory and configuration parameter details; the majority of deployment settings are maintained in directories on the OMS node. (Please note: these settings are intended for viewing only and should not be modified without VMware support. The OMS node is primarily used as a starting point for troubleshooting, running viocli commands, and SSH access to the other nodes.)


With the configuration saved, I went ahead and deleted the old deployment and clicked the Deploy OpenStack link to redeploy.


The process to redeploy a VIO OpenStack cluster is extremely simple: you simply select an exported template to pre-fill the configuration settings.


The remaining deployment processes are well documented via other VMware blogs. References can be found here.

The entire full HA mode deployment process took slightly over 50 minutes because of an unexpected NSX disk error that prevented neutron from starting; with a clean environment, the deployment took 30 minutes. Compact mode users should expect deployment to take as little as 15 minutes.


Create VM and Test External Connectivity:

Once deployment is complete, simply create an L2 private network and verify that VMs can boot successfully in the default tenant project. Note that in order for a VM to connect externally, an external network needs to be created, the private and external networks must be attached to a NAT router, and a floating IP must be requested and associated with the test VM. This is all extremely simple, as VIO is 100% based on DefCore-compliant OpenStack APIs on top of VMware’s best-of-breed SDDC infrastructure. Two neutron commands are all that’s needed to create an external network:
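
A minimal sketch of those two commands, assuming an external network named ext-net on the 192.168.100.0/24 range (names and addresses are illustrative):

# neutron net-create ext-net --router:external
# neutron subnet-create ext-net 192.168.100.0/24 --name ext-subnet --disable-dhcp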


Floating IP is a fancy OpenStack word for NAT. In most OpenStack implementations, NAT translation happens in the neutron tenant router. VIO neutron tenant routers can be deployed in three different modes: centralized exclusive, centralized shared, or distributed. Performance aside, the biggest difference between centralized and distributed mode is the number of control VMs deployed to support the logical routing function. Distributed mode requires two NSX control-plane router VMs: one router instance (DLR) for optimized east-west traffic between hypervisors, and a second instance to handle north-south external traffic flow via NAT. A single NSX control VM instance is required for centralized mode, where all routed traffic (N-S and E-W) flows through a central NSX Edge Services Gateway (ESG). Performance and scale requirements determine which mode to choose; centralized shared is the default behavior. Marcos Hernandez has written an excellent blog on NSX networking and VIO, which can be found here.

An enterprise-grade NSX NAT router can be created with three neutron commands: one to create the router and two to attach the corresponding neutron networks.
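
A sketch of those three commands, continuing with the illustrative names from above (a private tenant subnet named private-subnet is assumed to exist):

# neutron router-create tenant-router
# neutron router-gateway-set tenant-router ext-net
# neutron router-interface-add tenant-router private-subnet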


Once the router is created, simply allocate a floating IP and associate it with the test VM instance:
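
A minimal sketch, assuming the test instance is named test-vm and the allocated address comes back as 192.168.100.5 (both are illustrative):

# neutron floatingip-create ext-net
# nova floating-ip-associate test-vm 192.168.100.5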


The entire process, from deployment to external VM reachability, took me less than 3 hours in total; not bad for a new guy!

Deploying and configuring a production-grade OpenStack environment traditionally takes weeks of effort by highly skilled DevOps engineers specializing in deployment automation, with in-depth knowledge of repo and package management and strong Linux system administration skills. Coming up with the right CI/CD process to support new features and stay aligned with upstream within a release is extremely difficult and results in snowflake environments.

The VIO approach changes all that. I’m impressed with what VMware has done to abstract away the traditional complexities involved in deploying, supporting, and maintaining an enterprise-grade OpenStack environment. Leveraging VMware’s suite of products on the backend, an enterprise-grade OpenStack can be deployed and configured in hours rather than weeks. If you haven’t already, make sure to give VMware Integrated OpenStack a try; you will save a tremendous amount of time and meet the most demanding requests of application developers while providing the highest SLA possible. Download now and get started, or dare your IT team to try our VMware Integrated OpenStack Hands-On-Lab; no installation required.