Today’s blog post is the second entry in a series by Marcos Hernandez, one of the OpenStack specialists in VMware’s Networking & Security Business Unit (NSBU). Part 1 discussed the basics of the Neutron integration for VMware NSX. Part 3 will be published in the coming weeks. So, check back for more on this topic!
As we discussed in our previous article, when a tenant creates a Neutron network (or networks), the plugin signals NSX Manager to provision a logical switch (or switches), which are overlay constructs that utilize VXLAN to create L2 segments over L3 physical networks. VXLAN is an industry standard, co-developed by VMware and others and supported across the board. Over the past couple of years, VXLAN overlays have been largely demystified and the initial objections (like performance, lack of visibility, etc.) have given way to more practical concerns (like changes in operational processes, automation, etc.). Customers and vendors are getting more educated about each other’s vision and together are making VXLAN, and Software Defined Networking for that matter, a reality in their environments.
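To make the overlay idea concrete, here is a back-of-the-envelope sketch of what VXLAN encapsulation (RFC 7348) adds to each frame, and why its 24-bit segment ID matters compared to classic VLANs. The byte counts assume an untagged IPv4 outer frame:

```python
# VXLAN wraps the original (inner) L2 frame in outer Ethernet, IPv4,
# and UDP headers plus the 8-byte VXLAN header itself.
OUTER_ETHERNET = 14  # bytes, untagged outer frame
OUTER_IPV4 = 20
OUTER_UDP = 8
VXLAN_HEADER = 8     # flags + 24-bit VNI + reserved fields

overhead = OUTER_ETHERNET + OUTER_IPV4 + OUTER_UDP + VXLAN_HEADER
print(f"Encapsulation overhead: {overhead} bytes per frame")  # 50

# The 24-bit VXLAN Network Identifier (VNI) allows far more L2
# segments than the 12-bit VLAN ID ever could.
print(f"VXLAN segments: {2**24}")      # 16777216
print(f"Usable VLAN IDs: {2**12 - 2}") # 4094
```

This 50-byte overhead is also why an MTU increase on the physical transport network is a standard part of any VXLAN deployment.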
These L2 segments, overlays as they may be, are just that: L2 segments. Without a router to connect them together or to other networks, they are completely isolated from each other. An OpenStack Cloud administrator can control, via quotas, the number of Neutron networks allowed per tenant.
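Those per-tenant limits live in the `[quotas]` section of `neutron.conf`. The values below are purely illustrative; tune them to your environment (and note that -1 means unlimited):

```ini
# neutron.conf — illustrative per-tenant quota values
[quotas]
quota_network = 10
quota_subnet = 10
quota_router = 5
quota_floatingip = 20
```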
Tenants in OpenStack are allowed, by default, to create their own IP subnets and routers. We will cover some of the NSX capabilities available with the Neutron plugin, but first, a quick aside about self-service in general: as OpenStack gains more traction in the Enterprise, we are learning that these self-service capabilities may not always be desirable. Admins may want to remain in control of IP subnetting, for example, especially if the use case calls for routable IP address space everywhere. OpenStack lacks the necessary controls to enforce this type of restriction, so short of forbidding API access to specific functions or simply relying on the good old honor system, customers have little to no choice when it comes to built-in OpenStack governance. Projects like OpenStack Congress are attempting to bridge this gap, and some commercial products already provide the controls that IT requires. vRealize Automation (vRA) is a VMware platform that offers comprehensive, scalable governance and could potentially leverage extension packages to drive provisioning workflows in OpenStack.
Back to the L3 services discussion: we stated that a tenant can create Neutron routers. The NSX-Neutron plugin translates this provisioning request and signals NSX Manager to create an NSX Edge Services Gateway, or ESG. The ESG is a network appliance that supports a vast number of network features (not all of which are visible to OpenStack, by the way) and is used extensively in our integration.
Once the Neutron router is created, our previously provisioned Web and App Neutron networks (L2 segments) can be connected to it and routing between them will be available.
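As a sketch of what happens at the API level, the workflow boils down to a router-create call followed by one add-router-interface call per tenant subnet. The payloads below follow the Neutron API request format; the names and subnet IDs are hypothetical placeholders:

```python
import json

# Create the tenant router (hypothetical name).
create_router = {
    "router": {
        "name": "tenant-router",
        "admin_state_up": True,
    }
}

# Attach each tenant subnet (Web and App) to the router. Neutron's
# add_router_interface operation takes a subnet ID, and the router
# picks up that subnet's gateway IP as its interface address.
attach_web = {"subnet_id": "WEB-SUBNET-ID"}   # placeholder ID
attach_app = {"subnet_id": "APP-SUBNET-ID"}   # placeholder ID

print(json.dumps(create_router, indent=2))
```

Once both interfaces are attached, routing between the Web and App segments is handled by the ESG that backs this Neutron router.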
The uplink of a Neutron router can be connected to an External network. This is also known as setting the gateway. This External network must sit on routable IP address space within the organization and is also the network where floating IPs reside. If the tenant networks sit on RFC1918 space, then the Neutron router must do Network Address Translation, or NAT (source NAT for internal to external access and DNAT for floating IPs). If the tenant networks sit on routable subnets, then the router does not have to do NAT.
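The NAT decision described above can be expressed as a simple check against the three RFC 1918 blocks. This is a minimal sketch of the rule, not plugin code; the helper name is ours:

```python
import ipaddress

# The three private address blocks defined by RFC 1918.
RFC1918 = [ipaddress.ip_network(c) for c in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def router_needs_nat(tenant_cidr: str) -> bool:
    """If the tenant subnet falls in RFC 1918 space, the Neutron
    router must SNAT outbound traffic (and DNAT floating IPs)."""
    net = ipaddress.ip_network(tenant_cidr)
    return any(net.subnet_of(block) for block in RFC1918)

print(router_needs_nat("10.0.1.0/24"))     # True  -> SNAT/DNAT required
print(router_needs_nat("203.0.113.0/24"))  # False -> no NAT needed
```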
The tenant networks can also be VLAN-backed, instead of VXLAN-backed. If the tenant wants to or has to use VLANs instead of VXLANs, then the admin must create these networks on behalf of the tenant.
Tenant routers can be exclusive (defined at provisioning time using an API extension) or shared (default behavior). Depending on your performance and scalability expectations, you will choose one or the other.
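A hedged sketch of what that API extension looks like in practice: with the VMware NSX plugin, the router-create request body can carry a router-type attribute, where "shared" is the default and "exclusive" dedicates an ESG to the tenant router. The exact attribute name shown here reflects the NSX plugin extension, not core Neutron:

```python
# Router-create payload requesting a dedicated ESG (NSX extension).
exclusive_router = {
    "router": {
        "name": "tenant-router",          # hypothetical name
        "router_type": "exclusive",       # NSX plugin extension attribute
    }
}
```

Exclusive routers give predictable, dedicated throughput at the cost of more ESG appliances; shared routers conserve resources at scale.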
When using NSX, the Neutron L3 services may include a distributed router, a very powerful NSX capability that optimizes East-West traffic in routed topologies. This is a good example of an enterprise-grade NSX capability that differentiates it from the reference implementation, and it can be leveraged without compromising a basic tenet of OpenStack: keeping the API open. A distributed router forwards traffic from the source hypervisor to the destination hypervisor without hairpinning packets through an NSX ESG or a physical router SVI. This increases performance significantly and streamlines traffic engineering within the data center.
Finally, Neutron today supports only static routing, which means that dynamic routing is not an option when using NSX with your OpenStack implementation. NSX itself supports both OSPF and BGP, but until Neutron exposes either one, tenants won’t be able to use dynamic routing. Efforts to implement a BGP speaker in OpenStack began during the Juno cycle and are still ongoing. When this work is complete, NSX, thanks to its native BGP support, will be ready to offer dynamic routing once the Neutron plugin has been updated.
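In the meantime, static routes are configured through Neutron's extraroute extension, which lets a router-update call carry a list of destination/nexthop pairs. A hedged example payload (addresses are illustrative), with a simple sanity check before submission:

```python
import ipaddress

# Router-update payload using Neutron's extraroute extension.
static_routes = {
    "router": {
        "routes": [
            # Illustrative route: reach 172.24.0.0/16 via 10.0.0.254.
            {"destination": "172.24.0.0/16", "nexthop": "10.0.0.254"},
        ]
    }
}

# Validate each entry before handing it to the API: destinations must
# parse as networks and nexthops as host addresses.
for route in static_routes["router"]["routes"]:
    ipaddress.ip_network(route["destination"])
    ipaddress.ip_address(route["nexthop"])
```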
The picture below shows the basic topologies supported by the NSX-Neutron plugin:
In our implementation of DHCP, we replace the dnsmasq process that is used by the reference implementation with an NSX Edge Services Gateway configured with static DHCP bindings. This approach has proven to be very reliable at scale (thousands of VMs).
There is logic in the NSX-Neutron plugin that will automatically determine how to use an Edge Services Gateway for DHCP services. Depending on the use case (overlapping IPs vs. non-overlapping IPs) the same ESG may be reused for multiple tenant networks, as the picture below shows:
In Part 3 of this article series, we will discuss the implementation of critical Neutron services such as security groups and Load-Balancing-as-a-Service. In the meantime, check out our revamped VMworld Hands-on Lab (HOL-1620) featuring VMware Integrated OpenStack and NSX Optimized for vSphere.
Marcos Hernandez is a Staff Systems Engineer in the Network and Security Business Unit (NSBU). He is responsible for supporting large Global Enterprise accounts and providing technical guidance around VMware’s suite of networking and cloud solutions, including NSX and OpenStack. Marcos has a background in datacenter networking design and expert knowledge in routing and switching technologies. Marcos holds the CCIE (#8283) and VCIX certifications, and he has a master’s degree in Telecommunications from Universidad Politécnica de Madrid.