
Tag Archives: Neutron

OpenStack Networking with VMware NSX, Part 1

Today’s blog post is the start of a series by Marcos Hernandez, one of the OpenStack specialists in VMware’s Networking & Security Business Unit (NSBU). Part 2 and Part 3 will be published in the upcoming weeks. So, check back for more on this topic!

As OpenStack becomes more ubiquitous in the industry, cloud architects are looking for ways to provide enterprise-grade network and security services to their consumers without compromising the primary objectives of an OpenStack-based private cloud, which include:

  • Vendor-neutral APIs.
  • Infrastructure choice and flexibility.
  • A public cloud experience with on-premises infrastructure.

Neutron, the networking project in OpenStack, has come a long way in the last few years, adding powerful capabilities at a very fast pace while enabling rich network workflows and a variety of use cases. According to the latest user survey data, Neutron is favored over Nova networking in ~60% of production OpenStack deployments, which suggests growing interest in moving off flat (VLAN-based) topologies. There is general consensus in the OpenStack community that a cloud lacking rich network functionality is a mediocre cloud.

As more features are added to Neutron, its architecture becomes more complex (a universal perception amongst OpenStack users). As of the Kilo release, the Neutron community has wisely decided to “decompose” Neutron. The general idea is that Neutron will remain focused on core L2 and L3 services, while L4-L7 services will be “pluggable” and abide by a well-known extensible data model. For vendors like VMware, this is great news: we now have a reference architecture to develop against, a way to promote our value-add, and a way to expose the unique capabilities of NSX. We can do this and still honor a consumption model that prioritizes the desirable northbound OpenStack APIs.

With all that said, it is important to note (and know) that Neutron core is developed using a reference implementation based on open source components, including:

  • Open vSwitch – hypervisor-level networking.
  • dnsmasq – DHCP/DNS services.
  • Linux iptables – for security groups.
  • L3-agent – routing services.
  • HAProxy – load balancing.

In a scaled-out production deployment, the reference implementation typically runs into scale and stability challenges. As a result, OpenStack operators either scale back the features they deploy in an attempt to regain stability, or they turn to a software-defined networking provider. This is widely recognized in the community and comes as no surprise once you understand how Neutron core is developed and tested.

The community is doing a great job at defining the core APIs and the extensibility model, while many vendors have also taken it upon themselves to test the scalability, reliability and upgradeability of an OpenStack-based solution using their own “productized” distributions. Often, as is the case with VMware NSX, vendors will replace the reference open source components with their own technology. OpenStack consumers don’t notice the difference; they still interact with the northbound Neutron APIs by means of plugins and drivers, which “translate” the Neutron API calls into private, southbound calls targeted at the vendor’s infrastructure. In the case of VMware NSX Optimized for vSphere, such interactions look like this:


Neutron services leverage a VMware-developed plugin that makes API calls to NSX Manager, which is the API provider and management plane of NSX. This Neutron plugin is itself an open source project and can be used with any OpenStack implementation (DIY or off-the-shelf). VMware offers its own OpenStack distribution, VMware Integrated OpenStack, which natively integrates the NSX-Neutron plugin, in addition to other plugins and drivers that connect OpenStack compute and storage services to vSphere. By combining enterprise-grade server virtualization with vSphere and enterprise-grade networking with NSX, customers get an enterprise-grade OpenStack infrastructure layer while reusing the vSphere skillset and tools (vMotion, host maintenance mode, DRS, etc.) that are typically already available in IT groups today.
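Conceptually, the plugin is a translation layer between the northbound Neutron API and the southbound management plane. The sketch below illustrates the idea only; the endpoint path and payload fields are hypothetical and do not reflect the real NSX Manager API:

```python
import json

# Hypothetical translation of a northbound Neutron create-network request body
# into a southbound management-plane call. The endpoint path and payload field
# names are illustrative only; the real NSX Manager API differs.
def translate_create_network(neutron_body):
    net = neutron_body["network"]
    return {
        "method": "POST",
        "path": "/api/logical-switches",            # hypothetical endpoint
        "payload": {"display_name": net["name"]},   # hypothetical field name
    }

call = translate_create_network({"network": {"name": "web-net"}})
print(json.dumps(call))
```

The point is that the tenant only ever sees the Neutron side of this translation; the southbound call is an implementation detail of the plugin.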

In this post, we will take a closer look at the Neutron-NSX interactions and describe how NSX ultimately brings stability to Neutron. You can also see a recorded version of this content that was presented at the OpenStack Summit in Vancouver.

Basic Neutron Workflows and NSX Equivalents
Neutron supports a number of basic network workflows that are considered “table stakes”. These include:

  • L2 services: Ability for tenants to create and consume their own L2 networks.
  • L3 services: Ability for tenants to create and consume their own IP subnets and routers. These routers can connect intra-tenant application tiers and can also connect to the external world via NATed and non-NATed topologies.
  • Floating IPs: A “Floating IP” is nothing more than a DNAT rule, living on the Neutron router, that maps a routable IP sitting on the external side of that router (External network) to a private IP on the internal side (Tenant network). This floating IP forwards all ports and protocols to the corresponding private IP of the “instance” (VM) and is typically used in cases where there is IP overlap in tenant space.
  • DHCP Services: Ability for tenants to create their own DHCP address scopes.
  • Security Groups: Ability for tenants to create their own firewall policies (L3/L4) and apply them directly to an instance or a group of instances.
  • Load Balancing as-a-Service (LBaaS): Ability for tenants to create their own load balancing rules, virtual IPs and load balancing pools.
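Each of these workflows maps to a small set of Neutron v2.0 REST calls, and it is these calls that a vendor plugin ultimately translates southbound. A minimal sketch of the request bodies is below; the names, CIDRs, and uppercase UUID placeholders are illustrative only:

```python
import json

# Illustrative Neutron v2.0 API request bodies for the basic tenant workflows.
# Names, CIDRs, and the uppercase UUID strings are placeholders.
create_network = {"network": {"name": "web-tier"}}

create_subnet = {"subnet": {"network_id": "NET_UUID",
                            "cidr": "10.0.1.0/24",
                            "ip_version": 4}}

create_router = {"router": {"name": "tenant-router"}}

# A floating IP is allocated from the external network and associated
# with an instance's Neutron port.
create_floating_ip = {"floatingip": {"floating_network_id": "EXT_NET_UUID",
                                     "port_id": "INSTANCE_PORT_UUID"}}

# A security group rule allowing inbound SSH.
create_secgroup_rule = {"security_group_rule": {
    "security_group_id": "SG_UUID",
    "direction": "ingress",
    "protocol": "tcp",
    "port_range_min": 22,
    "port_range_max": 22}}

for body in (create_network, create_subnet, create_router,
             create_floating_ip, create_secgroup_rule):
    print(json.dumps(body))
```

Whether the backend is the reference implementation or NSX, the tenant-facing request bodies stay the same; only the southbound translation changes.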

The picture below shows these basic workflows, where each sits relative to the application, and the corresponding NSX element that is leveraged in each case. For more information about NSX, please visit the official VMware NSX product page.


In Part 2 of this article series, we will dig deeper into the inner workings of the Neutron plugin implementation for NSX. In the meantime, check out our revamped VMworld Hands-on Lab (HOL-1620) featuring VMware Integrated OpenStack and NSX Optimized for vSphere.

Marcos Hernandez is a Staff Systems Engineer in the Network and Security Business Unit (NSBU). He is responsible for supporting large Global Enterprise accounts and providing technical guidance around VMware’s suite of networking and cloud solutions, including NSX and OpenStack. Marcos has a background in datacenter networking design and expert knowledge in routing and switching technologies. Marcos holds the CCIE (#8283) and VCIX certifications, and he has a Masters Degree in Telecommunications from Universidad Politécnica de Madrid.

VMware Integrated OpenStack Video Series: Working with Networks

In our previous post, we discussed how developers can quickly provision application infrastructure using instances and images in OpenStack. Today, we’ll discuss an important topic: Networking! How do we configure the networks that OpenStack instances use to communicate with each other and with the outside world?

VMware Integrated OpenStack provides two networking options for your infrastructure:

  1. VMware NSX networking
  2. VMware vSphere Distributed Switch (VDS) networking

The VDS option is appropriate for simple networking use cases: your instances only need to communicate on a few VLANs, with no need for advanced functionality like overlapping IP addresses, Neutron-provided Layer 3 routing, etc. The NSX option allows for advanced networking use cases, including private networks for tenants, attaching floating IPs to your instances, and more.

For the purposes of this article, we will focus on the VMware NSX option. Configuring VDS networking is a fairly simple process, and we’ll point out the difference in the configuration process where applicable.

The first step in setting up your OpenStack network service is configuring your external, or provider, network. This is the VLAN provisioned for your instances to have access to the outside world. The external network is configured by a user with administrator permissions using either the Horizon GUI or the neutron API/CLI.

When configuring the external network for the VMware NSX networking option, the provider network type is “Port Group”. The physical network is the port group ID (dvportgroup-50110 in my example) for the external network you defined in vSphere. See Figure 1 for a configuration example.




Figure 1: External network configuration for VMware NSX Networking

If you are working with the VDS networking option instead of the VMware NSX option, you specify the provider network type as “VLAN” with the physical network labeled simply “dvs”. The VLAN ID is specified in the Segmentation ID textbox. The “Shared” option must be selected so that your tenants can use this network when booting instances (see Figure 2 for a configuration example). VMware Integrated OpenStack will use this information to automatically create a port group on the VDS that you specified when you deployed the OpenStack control plane.




Figure 2: External network configuration for VMware VDS Networking
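Expressed as Neutron API request bodies, the two external-network configurations in Figures 1 and 2 look roughly like the sketch below. The "portgroup" type string for the NSX case is an assumption (check your plugin version's documentation), and the VLAN ID is a placeholder:

```python
import json

# Sketch of the provider-network attributes behind Figures 1 and 2.
# The "portgroup" type string is an assumption; the VLAN ID is a placeholder.
nsx_external_net = {"network": {
    "name": "ext-net",
    "router:external": True,
    "provider:network_type": "portgroup",               # assumed type string
    "provider:physical_network": "dvportgroup-50110"}}  # port group ID from vSphere

vds_external_net = {"network": {
    "name": "ext-net",
    "shared": True,                       # required so tenants can boot on it
    "provider:network_type": "vlan",
    "provider:physical_network": "dvs",
    "provider:segmentation_id": 100}}     # VLAN ID (placeholder)

print(json.dumps(nsx_external_net))
print(json.dumps(vds_external_net))
```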

Once your external network is defined, you define a subnet on the external OpenStack network. Make sure to uncheck the “Enable DHCP” option, to specify the network address and gateway, and to specify the IP allocation range in case the entire subnet isn’t available for use.
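A sketch of the corresponding subnet request body, with DHCP disabled and an explicit allocation pool; all addresses are example values from the RFC 5737 documentation range, and the ipaddress checks simply confirm the pool sits inside the subnet:

```python
import ipaddress
import json

# Example external-subnet body: DHCP off, gateway set, allocation range limited.
# All addresses are illustrative (RFC 5737 documentation range); the network_id
# placeholder stands in for the external network's UUID.
subnet_body = {"subnet": {
    "network_id": "EXT_NET_UUID",
    "ip_version": 4,
    "cidr": "192.0.2.0/24",
    "gateway_ip": "192.0.2.1",
    "enable_dhcp": False,
    "allocation_pools": [{"start": "192.0.2.100", "end": "192.0.2.200"}]}}

# Sanity check: the allocation pool must fall inside the subnet's CIDR.
net = ipaddress.ip_network(subnet_body["subnet"]["cidr"])
pool = subnet_body["subnet"]["allocation_pools"][0]
assert ipaddress.ip_address(pool["start"]) in net
assert ipaddress.ip_address(pool["end"]) in net

print(json.dumps(subnet_body))
```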

Now that your external network configuration is complete, your tenants can allocate IP addresses for their instances using the OpenStack GUI, APIs, or CLIs as seen in our previous blog post.

The following video provides a detailed walkthrough of configuring OpenStack networks.


Stay tuned for next week’s blog post when we discuss the OpenStack storage service! In the meantime, you can learn more on the VMware Product Walkthrough site and on the VMware Integrated OpenStack product page.

VMware Integrated OpenStack Video Series: Working with Instances

In our last installment, we discussed the simplicity of the VMware Integrated OpenStack deployment process. Today, we will discuss how VMware Integrated OpenStack users can provision virtual machines. First, we need to get familiar with some OpenStack terminology:

  • Instance – a running virtual machine in your environment. The OpenStack Nova service provides users with the ability to manage hypervisors and deploy virtual machines.
  • Image – similar in concept to a VM template. The OpenStack Glance service maintains a collection of images from which users will deploy their instances.
  • Volume – this is an additional virtual disk (VMDK) that is attached to a running instance. Volumes can be added to instances ad hoc via the OpenStack Cinder service.
  • Flavor – the allocation of resources for an instance (number of vCPUs, storage, RAM).
  • Security Group – rules governing network access to your deployed instance (ex: this instance may be accessed via TCP port 22 from a certain IP range).
  • Network – the VMware vSphere port group that your instance will be attached to. Your port groups are automatically created by the OpenStack Neutron service.

OpenStack emphasizes the capability for users to manage their infrastructure programmatically through REST APIs, and this is exhibited in the multiple ways a user can deploy an instance. The Horizon GUI provides a point-and-click interface for launching instances. The Nova CLI provides simple commands to deploy instances, and these commands can be combined in shell scripts.

For users who want even more control and flexibility over instance deployment, the REST APIs can be leveraged. The important thing to note is that regardless of the interface the user selects, the REST API is utilized behind the scenes. For example, if I use the nova boot CLI command, it translates my simple inputs into an HTTP request that the Nova service will understand.
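For example, the JSON body that `nova boot` assembles can be rebuilt in a few lines of Python; the image, flavor, and network identifiers below are the same placeholders used in the HTTP request shown below:

```python
import json

# Rebuilding the request body that `nova boot` sends to POST /servers.
# The image, flavor, and network identifiers are placeholder values.
body = {"server": {
    "name": "apitest",
    "imageRef": "0723d0ac-9a08-49f5-9160-97efe05aa6ca",
    "flavorRef": "2",
    "min_count": 1,
    "max_count": 1,
    "networks": [{"uuid": "a722cb2b-f041-40b1-ad6a-74a27d30539a"}],
    "security_groups": [{"name": "default"}]}}

print(json.dumps(body))
```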

If you would like to see the API code being generated by your CLI commands, you can use the "--debug" option with CLI tools (ex: nova --debug boot…). An example HTTP request generated by the nova boot CLI command is included below:

curl -g -i -X POST https://vio-dashboard.eng.vmware.com:8774/v2/b228bcefad9f487fb6ae4821bfb90130/servers \
  -H "User-Agent: python-novaclient" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -H "X-Auth-Token: {SHA1}c1ef2534845b985dc4c52b803e357c08daea265b" \
  -d '{
    "server": {
      "name": "apitest",
      "imageRef": "0723d0ac-9a08-49f5-9160-97efe05aa6ca",
      "flavorRef": "2",
      "max_count": 1,
      "min_count": 1,
      "networks": [{"uuid": "a722cb2b-f041-40b1-ad6a-74a27d30539a"}],
      "security_groups": [{"name": "default"}]
    }
  }'
My instance name (“apitest”) may seem too generic, and it’s possible that another user may use the same name. Not to worry, instance names do not need to be unique: OpenStack identifies all resources, including instances, by unique identifiers. In the sample code above, my source image, flavor, and network are all identified by their unique identifiers. Well, what about vCenter?  In vCenter, my virtual machine’s name includes its OpenStack identifier:



How vCenter Displays an OpenStack Instance

As we saw in the code above, the user specifies the source image, flavor, network, and security group during instance deployment. In the background, the user’s credentials and the interactions between the various OpenStack components are authenticated by the OpenStack Identity service (Keystone). The following graphic provides an illustration of these interactions:



OpenStack Component Interaction

Check out the following video to see instance deployment in action with the Horizon GUI and the Nova CLI:


Stay tuned for next week’s blog post when we discuss working with OpenStack networks! In the meantime, you can learn more on the VMware Product Walkthrough site and on the VMware Integrated OpenStack product page.