
Monthly Archives: November 2015

OpenStack Networking with VMware NSX, Part 3

Today’s blog post is the final entry in a series by Marcos Hernandez, one of the OpenStack specialists in VMware’s Networking & Security Business Unit (NSBU). Part 1 discussed the basics of the Neutron integration for VMware NSX. Part 2 discussed foundational integrations for L2 and L3 network services.

Security Groups

Neutron Security Groups have historically been implemented with either Linux iptables or stateless Open vSwitch matches to filter traffic at the hypervisor level. Both approaches have presented challenges to operations teams, and there is serious work underway aimed at improving the experience (VMware is a contributor to these efforts).

When using NSX and vSphere, we deploy a stateful firewall on each and every ESXi host. That means that every hypervisor protects the microcosm of virtual machines it hosts, providing the notion of a distributed data plane. We call this a distributed firewall, or DFW. The NSX DFW runs in the kernel of ESXi and enables granular security controls at the VM vNIC level. When Neutron Security Groups are used, the NSX DFW is configured via the plugin integration. Neutron Security Groups are mapped to instances, meaning the NSX DFW protects each VM individually.
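
From the tenant’s point of view nothing changes: security groups are created and attached with the standard Neutron and Nova CLIs, and the plugin translates them into DFW rules behind the scenes. A minimal sketch (the group, instance, and network names below are made up for illustration):

# Create a security group and allow inbound HTTP and ICMP
neutron security-group-create web-sg --description "Web tier"
neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 80 --port-range-max 80 web-sg
neutron security-group-rule-create --direction ingress --protocol icmp web-sg

# Apply it to an instance at boot time; the NSX DFW enforces the rules at the VM vNIC
nova boot --flavor m1.small --image ubuntu --nic net-id=<web-net-id> --security-groups web-sg web01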

[Figure: the NSX Distributed Firewall enforcing Neutron Security Groups at the VM vNIC level]

 

Running an actual firewall on each hypervisor within your OpenStack cloud has the following benefits:

The NSX firewall scales as your ESXi footprint grows. Simply increasing your compute capacity to keep up with the organic growth of your business automatically adds security and compliance coverage to your virtual infrastructure.

It is important to note that Neutron Security Groups and NSX micro-segmentation can be used standalone, without adopting L2 overlays and L3 virtualization. While not as flexible as a full network virtualization implementation, the micro-segmentation use case is very popular with our customers and provides a great on-ramp to introduce OpenStack and NSX without disrupting whatever VLAN operational model may already be in place.

Load Balancing

The Kilo version of the NSX plugin incorporates support for Neutron LBaaS v1.0. The workflow includes the following capabilities, which the sample CLI commands after the list illustrate:

  • Tenants are able to create application pools (initially empty).
  • Tenants add one or several members to the pool (instance IP addresses).
  • Tenants create one or several health monitors.
  • Tenants associate the health monitors with the pool.
  • Tenants finally create a virtual IP (VIP) fronting the pool.
  • Supported protocols: TCP, HTTP and HTTPS.
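
As a rough sketch of that workflow with the Kilo-era LBaaS v1 CLI (the pool, VIP, and subnet names, member addresses, and IDs are placeholders for illustration):

# Create an empty HTTP pool on the tenant subnet
neutron lb-pool-create --name web-pool --lb-method ROUND_ROBIN --protocol HTTP --subnet-id <web-subnet-id>

# Add members (instance IP addresses) to the pool
neutron lb-member-create --address 192.168.0.40 --protocol-port 80 web-pool
neutron lb-member-create --address 192.168.0.41 --protocol-port 80 web-pool

# Create a health monitor and associate it with the pool
neutron lb-healthmonitor-create --type HTTP --delay 5 --timeout 3 --max-retries 3
neutron lb-healthmonitor-associate <healthmonitor-id> web-pool

# Finally, create the VIP that fronts the pool
neutron lb-vip-create --name web-vip --protocol HTTP --protocol-port 80 --subnet-id <web-subnet-id> web-pool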

[Figure: the Neutron LBaaS v1 workflow realized on the NSX Edge load balancer]

As with other network services in our implementation, we leverage the NSX Edge Services Gateway (ESG) as an inline load balancer as well as a Neutron router. The NSX load balancer is very feature-rich, and it is ready to support the Neutron LBaaS 2.0 API spec in a future version of the plugin.

Summary – Supported Topologies

The list below summarizes the topologies supported by the NSX-Neutron plugin:

  • VLAN-backed L2, no L3 services (micro-segmentation only): no overlays; Security Groups leverage Distributed Firewall policies.
  • VLAN-backed L2 with L3 services, LBaaS optional: VLANs for L2, NSX Edge for L3; no overlays; no distributed routing support; static routes only.
  • L2/L3 overlays, no NAT, LBaaS optional: for enterprise customers that don’t need overlapping IP addresses; can use the distributed router and/or NSX Edge; no overlapping IPs allowed; static routes only; very efficient; preferred enterprise model.
  • L2/L3 overlays with NAT, LBaaS optional: for customers that need overlapping IPs; can use the distributed router and/or NSX Edge; static routes only; very efficient; preferred cloud provider/service provider model.

Conclusion – Why NSX-v with OpenStack Neutron?

The benefits of NSX align with the requirements of a robust OpenStack private cloud implementation, which are:

  • Agility – Networking at the speed of apps.
  • Mobility – Provision anywhere, move anywhere.
  • Security – Micro-segment, detect anywhere, detect early.
  • Multi-tenancy – Share hardware across multiple tenants.
  • Simplified operations – Centrally manage, monitor everywhere.

By leveraging the NSX Neutron Plugin for vSphere developed by VMware, cloud administrators can introduce NSX into their OpenStack environment and offer their users and developers the open APIs they require, all without compromising uptime, stability and scalability.

This concludes the series discussing OpenStack Neutron integrations with VMware NSX. You can get some hands-on experience with VMware Integrated OpenStack through our revamped VMworld Hands-on Lab (HOL-1620) featuring VMware Integrated OpenStack and NSX Optimized for vSphere. You can also check out Part 1 and Part 2 of this blog series to read more about NSX integrations with OpenStack.

Marcos Hernandez is a Staff Systems Engineer in the Network and Security Business Unit (NSBU). He is responsible for supporting large Global Enterprise accounts and providing technical guidance around VMware’s suite of networking and cloud solutions, including NSX and OpenStack. Marcos has a background in datacenter networking design and expert knowledge in routing and switching technologies. Marcos holds the CCIE (#8283) and VCIX certifications, and he has a master’s degree in Telecommunications from Universidad Politécnica de Madrid.

OpenStack Summit Tokyo 2015 Session Videos

[Photo: the VMware Integrated OpenStack booth at the OpenStack Summit Tokyo]

The VMware Integrated OpenStack team had a great time speaking with customers, partners, and contributors at the Tokyo Summit. We also presented talks focusing on our community contributions as well as on how VMware technologies can help OpenStack deployments to be successful.

You can find video replays of our talks collected in a YouTube playlist for your convenience.

Want to learn more about VMware Integrated OpenStack? Check out the VMware Product Walkthrough and the VMware Integrated OpenStack product page.

 

 

OpenStack Networking with VMware NSX, Part 2

Today’s blog post is the second entry in a series by Marcos Hernandez, one of the OpenStack specialists in VMware’s Networking & Security Business Unit (NSBU). Part 1 discussed the basics of the Neutron integration for VMware NSX. Part 3 will be published in the upcoming weeks. So, check back for more on this topic!

L2 Services

As we discussed in our previous article, when a tenant creates a Neutron network (or networks), the plugin signals NSX Manager to provision a logical switch (or switches), which are overlay constructs that utilize VXLAN to create L2 segments over L3 physical networks. VXLAN is an industry standard, co-developed by VMware and others and supported across the board. Over the past couple of years, VXLAN overlays have been largely demystified and the initial objections (like performance, lack of visibility, etc.) have given way to more practical concerns (like changes in operational processes, automation, etc.). Customers and vendors are getting more educated about each other’s vision and together are making VXLAN, and Software Defined Networking for that matter, a reality in their environments.

These L2 segments, overlays as they may be, are just that: L2 segments. Without a router to connect them together or to other networks, they are completely isolated from each other. An OpenStack Cloud administrator can control, via quotas, the number of Neutron networks allowed per tenant.
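
For reference, the Web and App segments used throughout this series could be created by a tenant along the following lines (names and CIDRs are illustrative), and an admin could cap the number of networks with a quota update:

neutron net-create web-net
neutron subnet-create web-net 10.10.10.0/24 --name web-subnet
neutron net-create app-net
neutron subnet-create app-net 10.10.20.0/24 --name app-subnet

# Admin: limit this tenant to five Neutron networks
neutron quota-update --tenant-id <tenant-id> --network 5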

L3 Services

Tenants in OpenStack are allowed, by default, to create their own IP subnets and routers. We will cover some of the NSX capabilities available with the Neutron plugin. Before we do that, though, a quick aside about self-service in general: as OpenStack gains more traction in the enterprise, we are learning that these self-service capabilities may not always be desirable. Admins may want to remain in control of IP subnetting, for example, especially if the use case calls for routable IP address space everywhere. OpenStack lacks the necessary controls to enforce this type of restriction, so short of forbidding API access to specific functions or simply relying on the good old honor system, customers have little to no choice when it comes to built-in OpenStack governance. Projects like OpenStack Congress are attempting to bridge this gap, and some commercial products already provide the controls that IT requires. vRealize Automation (vRA) is a VMware platform that offers comprehensive, scalable governance and could potentially leverage extension packages to drive provisioning workflows in OpenStack.

 

Back to the L3 services discussion: we stated that a tenant can create Neutron routers. The NSX-Neutron plugin will translate this provisioning request and signal NSX Manager to create an NSX Edge Services Gateway, or ESG. The ESG is a network appliance that supports a vast number of network features (not all of which are visible to OpenStack, by the way) and that is broadly used in our integration.

[Figure: a Neutron router realized as an NSX Edge Services Gateway]

Once the Neutron router is created, our previously provisioned Web and App Neutron networks (L2 segments) can be connected to it and routing between them will be available.

The uplink of a Neutron router can be connected to an External network. This is also known as setting the gateway. This External network must sit on routable IP address space within the organization and is also the network where floating IPs reside. If the tenant networks sit on RFC1918 space, then the Neutron router must do Network Address Translation, or NAT (source NAT for internal to external access and DNAT for floating IPs). If the tenant networks sit on routable subnets, then the router does not have to do NAT.
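
A sketch of that flow with the standard Neutron CLI (the external network name, router name, and IDs are placeholders); each call is translated by the plugin into ESG configuration:

neutron router-create tenant-router
neutron router-interface-add tenant-router web-subnet
neutron router-interface-add tenant-router app-subnet

# Attach the uplink to the External network; source NAT is enabled by default
neutron router-gateway-set tenant-router ext-net
# For fully routable tenant subnets, NAT can be skipped:
#   neutron router-gateway-set tenant-router ext-net --disable-snat

# Allocate a floating IP from the External network and map it to an instance port (DNAT)
neutron floatingip-create ext-net
neutron floatingip-associate <floatingip-id> <instance-port-id>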

The tenant networks can also be VLAN-backed, instead of VXLAN-backed. If the tenant wants to or has to use VLANs instead of VXLANs, then the admin must create these networks on behalf of the tenant.
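
As a hypothetical example, an admin could create a VLAN-backed network on a tenant’s behalf using the provider extension (the physical network label and VLAN ID depend entirely on your environment):

neutron net-create prod-web-net --tenant-id <tenant-id> \
  --provider:network_type vlan \
  --provider:physical_network physnet1 \
  --provider:segmentation_id 150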

Tenant routers can be exclusive (defined at provisioning time using an API extension) or shared (default behavior). Depending on your performance and scalability expectations, you will choose one or the other.
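
The choice is expressed at creation time through an NSX-specific attribute on the router; as a sketch (the exact attribute name shown here may vary between plugin versions, so treat it as illustrative):

neutron router-create shared-router                             # default: may share an ESG with other tenant routers
neutron router-create dedicated-router --router_type exclusive  # dedicated ESG for this router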

When using NSX, the Neutron L3 services may include a distributed router, a very powerful capability in NSX that optimizes East-West traffic in routed topologies. This is a good example of an enterprise-grade NSX capability that differentiates it from the reference implementation, and it can be leveraged without compromising a basic tenet of OpenStack: keeping the API open. A distributed router sends traffic from the source hypervisor to the destination hypervisor without hairpinning the packets through an NSX ESG or a physical router SVI. This increases performance significantly and streamlines traffic engineering within the data center.

[Figure: distributed routing of East-West traffic with NSX]
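
Provisioning one from Neutron is simply a matter of setting the distributed flag at creation time, for example (assuming the web and app subnets created earlier):

neutron router-create east-west-router --distributed True
neutron router-interface-add east-west-router web-subnet
neutron router-interface-add east-west-router app-subnet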

Finally, Neutron only supports static routing, which means that when using NSX with your OpenStack implementation, dynamic routing is not an option. NSX supports both OSPF and BGP, but until Neutron supports either one, tenants won’t be able to use dynamic routing. Efforts to implement a BGP speaker in OpenStack began during the Juno cycle and are still ongoing. When that work is complete, the NSX platform, thanks to its native BGP support, will be ready to offer dynamic routing as soon as the Neutron plugin is updated to expose it.

The picture below shows the basic topologies supported by the NSX-Neutron plugin:

[Figure: basic topologies supported by the NSX-Neutron plugin]

DHCP Services

In our implementation of DHCP, we replace the dnsmasq process that is used by the reference implementation with an NSX Edge Services Gateway configured with static DHCP bindings. This approach has proven to be very reliable at scale (thousands of VMs).

There is logic in the NSX-Neutron plugin that automatically determines how to use an Edge Services Gateway for DHCP services. Depending on the use case (overlapping IPs vs. non-overlapping IPs), the same ESG may be reused for multiple tenant networks, as the picture below shows:

[Figure: reusing an NSX Edge Services Gateway for DHCP services across tenant networks]
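
On the consumption side nothing changes: DHCP behavior is driven entirely by the Neutron subnet definition, and the plugin programs the corresponding static bindings on the ESG. A small illustrative example (network name, pool range, and DNS server are placeholders):

# DHCP is enabled by default on new subnets; the plugin creates static bindings per port
neutron net-create db-net
neutron subnet-create db-net 10.10.30.0/24 --name db-subnet \
  --allocation-pool start=10.10.30.10,end=10.10.30.200 \
  --dns-nameserver 10.0.0.2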

 

In Part 3 of this article series, we will discuss the implementation of critical Neutron services such as security groups and Load-Balancing-as-a-Service. In the meantime, check out our revamped VMworld Hands-on Lab (HOL-1620) featuring VMware Integrated OpenStack and NSX Optimized for vSphere.

Marcos Hernandez is a Staff Systems Engineer in the Network and Security Business Unit (NSBU). He is responsible for supporting large Global Enterprise accounts and providing technical guidance around VMware’s suite of networking and cloud solutions, including NSX and OpenStack. Marcos has a background in datacenter networking design and expert knowledge in routing and switching technologies. Marcos holds the CCIE (#8283) and VCIX certifications, and he has a master’s degree in Telecommunications from Universidad Politécnica de Madrid.

Vagrant Up with VMware Integrated OpenStack, Part 2

In our previous installment, we covered the basics of using Vagrant with VMware Integrated OpenStack. In today’s article, we will take a look at a multi-instance deployment from a single Vagrantfile. Today’s sample Vagrantfile looks almost identical to our previous example, with a few changes that we discuss below.

Let’s go back to the same directory where you created your Vagrantfile from our previous walkthrough. Make sure that your old Vagrant instance is deleted using the following command:

vagrant destroy -f

Now back up your original Vagrantfile if you would like to preserve it. Then, replace your Vagrantfile content with the code that follows.


puts "\nHave you sourced your OpenStack creds today???\n"

nodes = ['master','node1']

Vagrant.configure("2") do |config|
  config.vm.box = "openstack"
  config.ssh.private_key_path = "~/.ssh/id_rsa"
  nodes.each do |server|
    config.vm.define "#{server}" do |box|
      config.vm.provider :openstack do |os|
        os.endpoint     = "#{ENV['OS_AUTH_URL']}/tokens"
        os.username     = "#{ENV['OS_USERNAME']}"
        os.tenant_name  = "#{ENV['OS_TENANT_NAME']}"
        os.api_key      = "#{ENV['OS_PASSWORD']}"
        os.flavor       = /m1.small/                # Regex or String
        os.image        = /ubuntu/                 # Regex or String
        os.keypair_name = "demo-keypair"      # as stored in Nova
        os.ssh_username = "ubuntu"           # login for the VM
        os.networks = ["demo-network"]
        os.floating_ip = :auto
        os.floating_ip_pool = "EXTNET"
      end
    end
  end
end

Vagrant is written in Ruby, and this allows us to use Ruby constructs to control how instances are deployed. For example, we use a Ruby list of strings (nodes = ['master','node1']) near the top of the file to declare the names of multiple servers that we want Vagrant to create for us in the OpenStack cloud.

We leverage a Ruby iterator (each), which runs the code that follows it for each element of the list. Since we have two entries in the node list, the Vagrant instructions will run twice. If you would like to create more than two nodes, you are free to add more names to that list, up to the instance limit specified by your OpenStack project’s quota.

When Vagrant is done provisioning your environment, you will see messages on the commandline similar to the following:


==> node1: The server is ready!
==> node1: Configuring and enabling network interfaces...
==> master: The server is ready!
==> master: Configuring and enabling network interfaces...
==> node1: Rsyncing folder: /Users/trobertsjr/Development/vagrant-on-openstack/ => /vagrant
==> master: Rsyncing folder: /Users/trobertsjr/Development/vagrant-on-openstack/ => /vagrant

You can then use the following command to verify that your OpenStack instances were created successfully and are available for use:


vagrant status

This command’s output should show an active state for your instances:


Current machine states:

master                    active (openstack)
node1                     active (openstack)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

The nova CLI will also show your instances as active and available:


(openstack)vagrant-on-openstack $ nova list
+--------------------------------------+--------+--------+------------+-------------+-----------------------------------------+
| ID                                   | Name   | Status | Task State | Power State | Networks                                |
+--------------------------------------+--------+--------+------------+-------------+-----------------------------------------+
| 38702893-4b33-4f77-ba9a-07c99ab16318 | master | ACTIVE | -          | Running     | demo-network=192.168.0.41, 10.115.96.32 |
| 45f9d279-d6fb-48b7-a191-985eed9452fc | node1  | ACTIVE | -          | Running     | demo-network=192.168.0.40, 10.115.96.31 |
+--------------------------------------+--------+--------+------------+-------------+-----------------------------------------+

Finally, you can ssh into the instances using Vagrant as you did in our previous post, but this time you will need to specify the name of the instance you are connecting to.


vagrant ssh master

or


vagrant ssh node1

Try Vagrant with VMware Integrated OpenStack today, and let us know how it worked out for you!

You can learn more about VMware Integrated OpenStack on the VMware Product Walkthrough site and on the VMware Integrated OpenStack product page.

OpenStack Networking with VMware NSX, Part 1

Today’s blog post is the start of a series by Marcos Hernandez, one of the OpenStack specialists in VMware’s Networking & Security Business Unit (NSBU). Part 2 and Part 3 will be published in the upcoming weeks. So, check back for more on this topic!

As OpenStack becomes more ubiquitous in the industry, cloud architects are looking for ways to provide enterprise-grade network and security services to their consumers without compromising the primary objectives of an OpenStack-based private cloud, which include:

  • Vendor-neutral APIs.
  • Infrastructure choice and flexibility.
  • A public cloud experience with on-premises infrastructure.

Neutron, the networking project in OpenStack, has come a long way in the last few years, adding powerful capabilities at a very fast pace while enabling rich network workflows and a variety of use cases. According to the latest user survey data, Neutron is favored over Nova networking in ~60% of production OpenStack deployments, which suggests an increased interest in moving off flat (VLAN-based) topologies. There is general consensus in the OpenStack community that a cloud that lacks rich network functionality is a mediocre cloud.

As more features are added to Neutron, its architecture becomes more complex (a universal perception amongst OpenStack users). As of the Kilo release, the Neutron community has wisely decided to “decompose” Neutron. The general idea is that Neutron will remain focused on core L2 and L3 services, while L4-L7 services will be “pluggable” and abide by a well-known extensible data model. For vendors like VMware, this is great news. We now have a reference architecture to develop against, promote our value-add, and expose the unique capabilities of NSX.  We can do this and still honor a consumption model that prioritizes the desirable northbound OpenStack APIs.

With all that said, it is important to note (and know) that Neutron core is developed using a reference implementation based on open source components, including:

  • Open vSwitch – hypervisor-level networking.
  • dnsmasq – DHCP/DNS services.
  • Linux iptables – for security groups.
  • L3-agent – routing services.
  • HAProxy – load balancing.

In a scaled-out production deployment, the reference implementation typically encounters challenges. As a result, OpenStack operators will either scale back the features they deploy in an attempt to gain stability, or they will utilize a software-defined network provider. This is widely recognized and comes as no surprise once you understand how Neutron core is developed and tested.

The community is doing a great job at defining the core APIs and the extensibility model, while many vendors have also taken it upon themselves to test the scalability, reliability and upgradeability of an OpenStack-based solution using their own “productized” distributions. Often, as is the case with VMware NSX, vendors will replace the reference open source components with their own technology. OpenStack consumers don’t notice the difference; they still interact with the northbound Neutron APIs by means of plugins and drivers, which “translate” the Neutron API calls into private, southbound calls targeted at the vendor’s infrastructure. In the case of VMware NSX Optimized for vSphere, such interactions look like this:

[Figure: interaction between OpenStack Neutron and VMware NSX Optimized for vSphere]

Neutron services leverage a VMware-developed plugin that makes API calls to NSX Manager, which is the API provider and management plane of NSX. This Neutron plugin is itself an open source project and can be used with ANY OpenStack implementation (DIY and/or off-the-shelf). VMware offers its own OpenStack distribution, VMware Integrated OpenStack, which natively integrates the NSX-Neutron plugin, in addition to other plugins and drivers that connect OpenStack compute and storage services to vSphere. By combining enterprise-grade server virtualization with vSphere and enterprise-grade networking with NSX, customers get an enterprise-grade OpenStack infrastructure layer while continuing to use the vSphere skill set and tools (vMotion, host maintenance mode, DRS, etc.) that are typically already available in IT groups today.

In this post, we will double-click on the Neutron-NSX interactions and will describe how NSX ultimately brings stability to Neutron. You can also see a recorded version of this content that was presented at the OpenStack Summit in Vancouver.

Basic Neutron Workflows and NSX Equivalents

Neutron provides a number of basic network workflows that are considered “table stakes”. These include:

  • L2 services: Ability for tenants to create and consume their own L2 networks.
  • L3 services: Ability for tenants to create and consume their own IP subnets and routers. These routers can connect intra-tenant application tiers and can also connect to the external world via NATed and non-NATed topologies.
  • Floating IPs: A “Floating IP” is nothing more than a DNAT rule, living on the Neutron router, that maps a routable IP sitting on the external side of that router (External network) to a private IP on the internal side (Tenant network). This floating IP forwards all ports and protocols to the corresponding private IP of the “instance” (VM) and is typically used in cases where there is IP overlap in tenant space.
  • DHCP Services: Ability for tenants to create their own DHCP address scopes.
  • Security Groups: Ability for tenants to create their own firewall policies (L3/L4) and apply them directly to an instance or a group of instances.
  • Load Balancing as-a-Service (LBaaS): Ability for tenants to create their own load balancing rules, virtual IPs and load balancing pools.

The picture below shows these basic workflows, where each sits relative to the application, and the corresponding NSX element that is leveraged in each case. For more information about NSX, please visit the official VMware NSX product page.

[Figure: basic Neutron workflows and their corresponding NSX elements]

In Part 2 of this article series, we will dig deeper into the inner workings of the Neutron plugin implementation for NSX. In the meantime, check out our revamped VMworld Hands-on Lab (HOL-1620) featuring VMware Integrated OpenStack and NSX Optimized for vSphere.

Marcos Hernandez is a Staff Systems Engineer in the Network and Security Business Unit (NSBU). He is responsible for supporting large Global Enterprise accounts and providing technical guidance around VMware’s suite of networking and cloud solutions, including NSX and OpenStack. Marcos has a background in datacenter networking design and expert knowledge in routing and switching technologies. Marcos holds the CCIE (#8283) and VCIX certifications, and he has a master’s degree in Telecommunications from Universidad Politécnica de Madrid.

Upgrade OpenStack with VMware

The OpenStack distribution upgrade is a marquee feature in VMware Integrated OpenStack version 2.0, as we discussed in a previous blog post. Juan Manuel Rey wrote up a detailed walkthrough for this great feature and shared his insights in the post that follows:

In a previous article I showed the process to patch an existing VIO 1.0 installation, which, as you were able to see, is a clean and easy process. VMware announced VMware Integrated OpenStack 2.0 during VMworld US and it became GA shortly after the show.

This new version of VIO has all OpenStack code updated to the latest Kilo release and comes packaged with many interesting features like Load-Balancing-as-a-Service (LBaaS) and auto-scaling capabilities based on Heat and Ceilometer.

With a new VIO version hot off the press, you can upgrade your VIO 1.0.x environment to 2.0 and take advantage of all those great new goodies. The upgrade process is pretty straightforward and consists of three main stages:

  • Upgrade the VIO Management Server
  • Deploy a new VIO 2.0 environment
  • Perform the data migration

Keep in mind that you will need enough hardware resources in your management cluster to temporarily host two full-fledged VIO installations at the same time during the migration process. For the sake of transparency, the lab environment where I tested the upgrade is based on vSphere 5.5 Update 2, NSX for vSphere 6.1.4 and VIO 1.0.2.

Step 1 – Upgrade VIO Management Server

From the VMware website, download the .deb upgrade package and upload it to the VIO Management Server using SCP.

VIO 2.0 Download
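
To copy the package from your workstation to the appliance, an scp command along these lines works (the hostname is whatever you use to reach your VIO Management Server):

scp vio-1.0-upgrade_2.0.0.3037964_all.deb viouser@vio-oms:~/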

Stage the upgrade package.

viouser@vio-oms:~$ sudo viopatch add -l vio-1.0-upgrade_2.0.0.3037964_all.deb
[sudo] password for viouser:
vio-1.0-upgrade_2.0.0.3037964_all.deb patch has been added.
viouser@vio-oms:~$ viopatch list
Name            Version       Type   Installed
--------------- ------------- ------ -----------
vio-1.0-upgrade 2.0.0.3037964 infra  No
vio-patch-2     1.0.2.2813500 infra  Yes
viouser@vio-oms:~$

Upgrade the management server.

viouser@vio-oms:~$ sudo viopatch install -p vio-1.0-upgrade -v 2.0.0.3037964
Installing patch vio-1.0-upgrade version 2.0.0.3037964
done
Installation complete for patch vio-1.0-upgrade version 2.0.0.3037964
viouser@vio-oms:~$

Go to the vSphere Web Client, log out, and log back in to verify that the new version is correct.

VIO Management Server upgrade is complete!

Step 2 – Deploy a new VIO 2.0 environment

With the VIO Management Server upgraded, it is now time to deploy a fresh 2.0 environment. In the VIO plugin, go to the Manage section in the right pane, and a new Upgrades tab will be available there.

VMware vSphere Web Client VIO Plugin Upgrades Tab

Before starting with the deployment, check in the Networks tab that there are enough free IP addresses (18) for the new deployment. If there aren’t, then add a new IP address range in the same subnet.

VIO management network IP range extension

Click the Upgrade icon. Indicate whether you want to participate in the customer experience improvement program (my recommendation here is to say yes, to help our engineering team improve the VIO upgrade experience even more) and enter a name for the new deployment.

Updated VIO Deployment Name

Enter the temporary public and private load-balanced virtual IP addresses. Keep in mind that these addresses must belong to the existing VIO 1.0 installation’s API and Management subnets, respectively.

Temporary Load Balancer Virtual IP Address

In the next and final screen, review the configured values and click Finish. The new environment will be deployed and you will be able to monitor the progress from the Upgrades tab.

New Deployment Based On Kilo Launches

Step 3 – Migrate the data

With the new environment up and ready, we can start the OpenStack database migration process. From the Upgrades tab, right-click your existing VIO 1.0 installation and select Migrate Data.

Migrate existing OpenStack data to the database in the new deployment

The migration wizard will ask for confirmation; click OK. During the data migration, all OpenStack services will be unavailable, which allows the migration process to maintain database consistency during the data transfer.

VIO database migration proceeds

When the migration process is finished, the status of the new VIO 2.0 environment will appear as Migrated and the existing VIO 1.0 installation will appear as Stopped.

Database migration complete

Open a browser and enter the VIO 2.0 Public Virtual IP to access the OpenStack Horizon interface. Log in and verify that all your workloads, networks, images, etc. have been properly migrated. Log out of Horizon and go back to the VMware vSphere Web Client. Now that the data has been migrated, we need to migrate the original Public Virtual IP to the new environment.

Right-click on the VIO 1.0 deployment and select Switch To New Deployment.

Production Virtual IP Address Configured on the New Deployment

A new pop-up will appear asking for confirmation since the OpenStack services will be unavailable during the IP reconfiguration.

After the reconfiguration, the new VIO 2.0 deployment will be in Running status and the Public Virtual IP will be the same as the former 1.0 deployment.

OpenStack Upgrade is Complete with a Functional Kilo Cloud!

The upgrade procedure is finished. You can now access Horizon using the existing DNS name for your cloud. Verify that everything is still working as expected, and enjoy your new OpenStack Kilo environment!
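
If you prefer to double-check from the CLI as well, a quick spot check against the upgraded endpoint (assuming you have sourced a credentials file for the cloud) could look like this:

source ~/openrc          # your OpenStack credentials file (name is illustrative)
nova list                # instances should be in the same state as before the upgrade
neutron net-list         # tenant networks should be intact
glance image-list        # images should still be available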

The Kilo Version of the Horizon Dashboard

With VIO, upgrading your OpenStack cloud does not have to be a painful experience; VIO provides the best OpenStack experience in a vSphere environment. Kudos to our Team OpenStack @ VMware.

Have fun and happy stacking!

Juanma.

Juan Manuel Rey is a Senior Consultant in the Professional Services Organization and a CTO Ambassador at VMware. He specializes in NSX and cloud architectures. Juan Manuel is a highly experienced Unix and VMware professional and an OpenStack advocate both inside and outside VMware. In his spare time he is a Python developer, tries to contribute in some form to the broader OpenStack and VMware communities, and blogs about Unix, NSX, OpenStack and VMware technical subjects.