Home > Blogs > OpenStack Blog for VMware > Monthly Archives: January 2017

VIO Speed Challenge – Can a New Guy Get a Production-Quality OpenStack Set Up and Running in Under Three Hours?

This article describes an OpenStack setup. To put it into context: I recently joined VMware as a Technical Marketing Manager responsible for VMware Integrated OpenStack technical product marketing and field enablement. After a smooth onboarding process (accounts, benefits, etc.), my first task was to get the lab environments I inherited up and running again. I came from a big OpenStack shop where deployment automation was built in-house using both Puppet and Ansible, so my first questions were: Where's the site hiera data? What are the settings for the Cobbler environment? Which playbook spins up the controller VM nodes? Once the VMs are up, which playbooks deploy and configure Keystone, Nova, Cinder, Neutron, and the rest? How do I seed the environment once OpenStack is configured and running? It was always a multi-person, multi-week effort, and everyone maintained a cheat sheet loaded with deployment caveats and workarounds (a long list, since more features = more complexity = more caveats = slower time to market). Luckily, with VMware Integrated OpenStack all those decision points and all that complexity are abstracted away from the Cloud Admin. Really, the only decision to make is:

  • Restore from database backup – the restore-from-backup process is documented here.
  • Redeploy and Import.

Redeploy seemed more interesting, so I decided to give it a shot (I'll save restore for a different blog if there is interest).

Note: Redeploy will not clean up vSphere resources. You can re-import VM resources from vCenter once the deployment completes; refer to the instructions here.

Lab Topology:

The lab environment consists of three ESXi clusters:

  • Management (vCenter, VIO, vRealize Operations Manager, vRealize Log Insight, NSX Manager, NSX Controllers, etc.)
  • Edge (OpenStack Neutron tenant resources – NSX Edges)
  • Production (where the tenant VMs reside)

From a networking perspective, a single NSX-v instance spans all three clusters.

[Screenshot: lab topology]

VIO OpenStack Deployment:

VMware Integrated OpenStack can be deployed in two ways: Full HA or Compact. Compact mode, recently introduced in 3.0, requires significantly fewer hardware resources and less memory than Full HA mode; all OpenStack components are deployed in just two VMs. Enterprise-grade OpenStack high availability can still be achieved by taking advantage of the HA capabilities of the underlying vSphere infrastructure, which makes Compact mode useful for multiple small deployments. Full HA mode provides high availability at the application layer: the deployment consists of a dedicated management node (aka the OMS node), three database nodes, two HAProxy nodes, and two controller nodes. Ceilometer can be enabled after the VIO base stack deployment completes. Full HA mode is what I chose to deploy.

Since this environment was new to me, I wanted to avoid playing detective and spending days trying to reverse engineer the entire setup. Luckily, VIO has a built-in configuration export capability. Simply navigate to the OpenStack deployment (Home -> VMware Integrated OpenStack -> OpenStack Deployments), and all OpenStack settings can be exported with a single click:

[Screenshot: exporting the OpenStack deployment configuration]

In some ways the exported configuration file is similar to traditional site hiera data, except much simpler to consume. The data is in JSON format, so it is easy to read, and only information specific to my OpenStack setup is included. I no longer need to worry about kernel/TCP settings, which interfaces to configure for management vs. data, NIC bonding, driver settings, HAProxy endpoints, and so on. Even the Ansible inventory is automatically created and managed from the deployment data. This is because VIO is designed to work out of the box with the VMware suite of products, allowing Cloud Admins to focus on delivering SLAs instead of maintaining thousands of hard-to-remember key/value pairs. Advanced users can still look into the inventory and configuration parameter details; the majority of the deployment settings are maintained on the OMS node in the following directories. (Please note: the settings below are intended for viewing only and should not be modified without VMware support. The OMS node is primarily used as a starting point for troubleshooting, running viocli commands, and SSHing to the other nodes.)

[Screenshot: deployment configuration directories on the OMS node]
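As a sketch of the kind of read-only poking around the OMS node supports, the commands below show viocli subcommands as I understand them from the VIO documentation; exact subcommands and output vary by VIO version, so treat this as illustrative rather than authoritative:

```shell
# On the OMS node: check the overall status of the VIO deployment
sudo viocli deployment status

# Display the VIO-managed inventory of OpenStack nodes
# (the inventory is generated by VIO; do not edit it by hand)
sudo viocli show
```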

With configuration saved, I went ahead and deleted the old deployment and clicked the Deploy OpenStack link to redeploy:

[Screenshot: the Deploy OpenStack link in vCenter]

Re-deploying a VIO OpenStack cluster is extremely simple: just select an exported template to pre-fill the configuration settings.

[Screenshot: selecting an exported template during deployment]

The remaining deployment steps are well documented in other VMware blogs; references can be found here.

My Full HA mode deployment took slightly over 50 minutes, because an unexpected NSX disk error prevented Neutron from starting. In a clean environment the same deployment takes about 30 minutes (see below), and Compact mode users should expect deployment to take as little as 15 minutes.

[Screenshot: completed VIO deployment]

Create a VM and Test External Connectivity:

Once deployment completes, simply create an L2 private network and verify that VMs boot successfully in the default tenant project. Note that for a VM to reach the outside world, you need to create an external network, attach both the private and external networks to a NAT router, request a floating IP, and finally associate the floating IP with the test VM. All of this is extremely simple, because VIO is 100% based on DefCore-compliant OpenStack APIs on top of VMware's best-of-breed SDDC infrastructure. Two neutron commands are all that's needed to create an external network:

[Screenshot: neutron commands creating the external network]
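For reference, the two commands look roughly like this with the 2017-era neutron CLI. The network name, subnet range, gateway, and allocation pool below are made-up illustrative values, not the ones from my lab:

```shell
# Create the external network; --router:external marks it as routable
# to the outside world
neutron net-create ext-net --router:external

# Give it a subnet. DHCP stays disabled because external addresses are
# handed out as floating IPs rather than leased to instances.
neutron subnet-create ext-net 192.168.100.0/24 \
  --name ext-subnet --disable-dhcp \
  --allocation-pool start=192.168.100.10,end=192.168.100.100 \
  --gateway 192.168.100.1
```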

Floating IP is a fancy OpenStack term for NAT. In most OpenStack implementations, NAT translation happens in the Neutron tenant router. A VIO Neutron tenant router can be deployed in three different modes – centralized exclusive, centralized shared, or distributed. Performance aside, the biggest difference between centralized and distributed mode is the number of control VMs deployed to support the logical routing function. Distributed mode requires two NSX control-plane router VMs: one router instance (DLR) for optimized East-West traffic between hypervisors, and a second instance to handle North-South external traffic flow via NAT. Centralized mode requires only a single NSX control VM, and all routed traffic (N-S and E-W) flows through a central NSX Edge Services Gateway (ESG). Performance and scale requirements will determine which mode to choose; centralized shared is the default. Marcos Hernandez has written an excellent blog on NSX networking and VIO, which can be found here.

An enterprise-grade NSX NAT router can be created with three neutron commands: one to create the router and two to attach the corresponding neutron networks.

[Screenshot: neutron commands creating the NAT router]
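In neutron CLI terms, the three commands map to something like the following. The router and network names are placeholders, and the --router_type flag shown for picking the mode is an NSX plugin extension as I understand it from the VIO documentation (omitting it gives the centralized shared default):

```shell
# 1. Create the tenant router; here requesting a centralized exclusive
#    edge instead of the shared default (NSX plugin extension)
neutron router-create demo-router --router_type=exclusive

# 2. Point the router's gateway at the external network (this is what
#    enables NAT for the tenant)
neutron router-gateway-set demo-router ext-net

# 3. Attach the tenant's private subnet to the router
neutron router-interface-add demo-router private-subnet
```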

Once the router is created, simply allocate a floating IP and associate it with the test VM instance:

[Screenshot: allocating and associating a floating IP]
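With the same era's CLI, those two steps look roughly like this; the VM name and the floating address below are hypothetical (the actual address comes from whatever the first command allocates out of the external pool):

```shell
# Allocate a floating IP from the external network's pool;
# the command prints the address that was assigned
neutron floatingip-create ext-net

# Associate the allocated address with the test instance
nova floating-ip-associate test-vm 192.168.100.11
```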

The entire process, from deployment to external VM reachability, took me less than three hours in total. Not bad for a new guy!

Deploying and configuring a production-grade OpenStack environment traditionally takes highly skilled DevOps engineers weeks: it demands expertise in deployment automation, in-depth knowledge of repo and package management, and strong Linux system administration. Coming up with the right CI/CD process to support new features and stay aligned with upstream within a release is extremely difficult, and it results in snowflake environments.

The VIO approach changes all that. I'm impressed with what VMware has done to abstract away the traditional complexities involved in deploying, supporting, and maintaining an enterprise-grade OpenStack environment. By leveraging VMware's suite of products on the backend, an enterprise-grade OpenStack can be deployed and configured in hours rather than weeks. If you haven't already, make sure to give VMware Integrated OpenStack a try: you will save a tremendous amount of time and meet the most demanding requests of application developers, all while providing the highest SLA possible. Download now and get started, or dare your IT team to try our VMware Integrated OpenStack Hands-On-Lab – no installation required.