

Subnet Pools with VMware NSX

Today’s blog post discusses how VMware NSX supports Neutron Subnet Pools. This article was written by Marcos Hernandez, one of the OpenStack specialists in VMware’s Networking & Security Business Unit (NSBU).

Neutron, the OpenStack networking project, continues to evolve to support use cases that are relevant for the Enterprise. Early on, OpenStack networking was focused on delivering overlapping IP support for tenant subnets. Over time, more complex topologies have been added to Neutron. In some cases, network administrators may want to be in charge of the IP addressing scheme used by the consumers of an OpenStack private cloud. These and other options are discussed in a recent SuperUser article published by Wells Fargo, as well as in the Neutron-NSX integration documentation.

In Kilo, a new feature called Neutron Subnet Pools was added to the OpenStack networking workflows (feature documentation). A subnet pool lets an administrator define a large Classless Inter-Domain Routing (CIDR) block from which tenants can carve out subnets without specifying a CIDR themselves. Subnet pools are especially useful when valid, routable IPs are used: tenants only need to supply minimal configuration parameters when creating a subnet, without worrying about which IP range their VMs/instances will land on. Although subnet pools are not supported in Horizon (the OpenStack dashboard), they can be created via the CLI or API.
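
For the API route, a subnet pool is created by POSTing to the Neutron subnetpools resource. Below is a minimal curl sketch; the endpoint URL, token variable, pool name, and prefix are placeholders for illustration only:

~$ curl -s -X POST http://controller:9696/v2.0/subnetpools \
     -H "X-Auth-Token: $OS_TOKEN" \
     -H "Content-Type: application/json" \
     -d '{"subnetpool": {"name": "DemoPool", "prefixes": ["172.16.0.0/16"], "default_prefixlen": 24}}'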

Here is an example of how to use subnet pools:

1. Let’s first create a Neutron network called TestNet. Please note that this network can be Shared, but it cannot be External, because Neutron subnet pools only apply to tenant networks, not to external networks (where Floating IPs reside):

~$ neutron net-create TestNet
Created a new network:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| id                    | 04e9d906-6fad-4079-90b2-8bba34dedb1e |
| name                  | TestNet                              |
| port_security_enabled | True                                 |
| router:external       | False                                |
| shared                | False                                |
| status                | ACTIVE                               |
| subnets               |                                      |
| tenant_id             | 4b36a201448b4fc19b91439d8e883b36     |
+-----------------------+--------------------------------------+
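
At this point the network has no subnets attached. Optionally, this can be confirmed at any time (the -F flag restricts the output to a single field):

~$ neutron net-show TestNet -F subnets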

2. Next, we create a subnet pool called TestSubnetPool with a default prefix length of /24, carved out of the larger 10.10.0.0/16 CIDR.

~$ neutron subnetpool-create --default-prefixlen 24 --pool-prefix 10.10.0.0/16 TestSubnetPool
Created a new subnetpool:
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| default_prefixlen | 24                                   |
| default_quota     |                                      |
| id                | d963acd5-1afc-477c-a434-cef46f017b17 |
| ip_version        | 4                                    |
| max_prefixlen     | 32                                   |
| min_prefixlen     | 8                                    |
| name              | TestSubnetPool                       |
| prefixes          | 10.10.0.0/16                         |
| shared            | False                                |
| tenant_id         | 4b36a201448b4fc19b91439d8e883b36     |
+-------------------+--------------------------------------+
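
The new pool is now available to tenants. If you want to verify it before handing it out, the subnetpool-list and subnetpool-show commands introduced alongside this feature in the neutron CLI can be used:

~$ neutron subnetpool-list
~$ neutron subnetpool-show TestSubnetPool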

3. In the next step, either the project administrator or a project user creates a subnet called TestSubnet without specifying the CIDR. The subnet gets the first /24 from the larger CIDR specified by the admin:

~$ neutron subnet-create --name TestSubnet --subnetpool TestSubnetPool TestNet
Created a new subnet:
+-------------------+----------------------------------------------+
| Field             | Value                                        |
+-------------------+----------------------------------------------+
| allocation_pools  | {"start": "10.10.0.2", "end": "10.10.0.254"} |
| cidr              | 10.10.0.0/24                                 |
| dns_nameservers   |                                              |
| enable_dhcp       | True                                         |
| gateway_ip        | 10.10.0.1                                    |
| host_routes       |                                              |
| id                | 102de585-94ee-45c5-97cb-a535c1665a28         |
| ip_version        | 4                                            |
| ipv6_address_mode |                                              |
| ipv6_ra_mode      |                                              |
| name              | TestSubnet                                   |
| network_id        | 04e9d906-6fad-4079-90b2-8bba34dedb1e         |
| subnetpool_id     | d963acd5-1afc-477c-a434-cef46f017b17         |
| tenant_id         | 4b36a201448b4fc19b91439d8e883b36             |
+-------------------+----------------------------------------------+
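
The /24 default can also be overridden per subnet. For example, a tenant could request a /26 instead, assuming the --prefixlen option of neutron subnet-create (this command is not run in this walkthrough, so it does not appear in the listings below):

~$ neutron subnet-create --name TestSubnetSmall --subnetpool TestSubnetPool --prefixlen 26 TestNet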

4. Subsequent subnet creation commands will continue to pull from the larger CIDR in /24 allocations. In this example, a new subnet called TestSubnet2 is created:

~$ neutron subnet-create --name TestSubnet2 --subnetpool TestSubnetPool TestNet
Created a new subnet:
+-------------------+----------------------------------------------+
| Field             | Value                                        |
+-------------------+----------------------------------------------+
| allocation_pools  | {"start": "10.10.1.2", "end": "10.10.1.254"} |
| cidr              | 10.10.1.0/24                                 |
| dns_nameservers   |                                              |
| enable_dhcp       | True                                         |
| gateway_ip        | 10.10.1.1                                    |
| host_routes       |                                              |
| id                | e93408fe-839b-40b4-bb96-ebfe182860e3         |
| ip_version        | 4                                            |
| ipv6_address_mode |                                              |
| ipv6_ra_mode      |                                              |
| name              | TestSubnet2                                  |
| network_id        | 04e9d906-6fad-4079-90b2-8bba34dedb1e         |
| subnetpool_id     | d963acd5-1afc-477c-a434-cef46f017b17         |
| tenant_id         | 4b36a201448b4fc19b91439d8e883b36             |
+-------------------+----------------------------------------------+
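
If the 10.10.0.0/16 range is ever exhausted, additional prefixes can be added to the pool (prefixes can only be added, never removed, and deleting a subnet returns its allocation to the pool). A sketch, assuming subnetpool-update accepts the same --pool-prefix option as subnetpool-create and expects the full prefix list:

~$ neutron subnetpool-update --pool-prefix 10.10.0.0/16 --pool-prefix 10.20.0.0/16 TestSubnetPool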

5. In Horizon, the result looks like the screenshot in Figure 1:


Figure 1: Subnet Pool in Horizon

6. In VMware NSX, the DHCP backend is updated accordingly, which confirms that subnet pools are supported by the NSX Neutron plugin (see Figure 2):


Figure 2: Subnet Pools in the VMware NSX Manager GUI

7. Let’s now list the subnets. The two subnets allocated from the pool (TestSubnet and TestSubnet2) appear alongside pre-existing subnets in the environment:

~$ neutron subnet-list
+--------------------------------------+-------------------+------------------+--------------------------------------------------------+
| id                                   | name              | cidr             | allocation_pools                                       |
+--------------------------------------+-------------------+------------------+--------------------------------------------------------+
| 1885f7bd-9a52-417c-8a16-f47e468b4bf2 | inter-edge-subnet | 169.254.128.0/17 | {"start": "169.254.128.2", "end": "169.254.255.254"}   |
| 6c2a5009-007a-4c3e-8186-15d1fc25a529 | ext-subnet        | 192.168.100.0/24 | {"start": "192.168.100.100", "end": "192.168.100.120"} |
| 102de585-94ee-45c5-97cb-a535c1665a28 | TestSubnet        | 10.10.0.0/24     | {"start": "10.10.0.2", "end": "10.10.0.254"}           |
| e93408fe-839b-40b4-bb96-ebfe182860e3 | TestSubnet2       | 10.10.1.0/24     | {"start": "10.10.1.2", "end": "10.10.1.254"}           |
+--------------------------------------+-------------------+------------------+--------------------------------------------------------+

8. Next, we need to create a Neutron port on the network and subnet of our choosing (TestSubnet2 in this example) using the syntax below:

~$ neutron port-create TestNet --fixed-ip subnet_id=e93408fe-839b-40b4-bb96-ebfe182860e3
Created a new port:
+-----------------------+---------------------------------------------------------------------------------+
| Field                 | Value                                                                           |
+-----------------------+---------------------------------------------------------------------------------+
| admin_state_up        | True                                                                            |
| allowed_address_pairs |                                                                                 |
| binding:host_id       |                                                                                 |
| binding:vif_details   | {"port_filter": true}                                                           |
| binding:vif_type      | dvs                                                                             |
| binding:vnic_type     | normal                                                                          |
| device_id             |                                                                                 |
| device_owner          |                                                                                 |
| fixed_ips             | {"subnet_id":"e93408fe-839b-40b4-bb96-ebfe182860e3", "ip_address": "10.10.1.3"} |
| id                    | 44589bfa-82d7-4c45-85cc-c33f2845187a                                            |
| mac_address           | fa:16:3e:a6:a1:4a                                                               |
| name                  |                                                                                 |
| network_id            | 04e9d906-6fad-4079-90b2-8bba34dedb1e                                            |
| port_security_enabled | True                                                                            |
| security_groups       | 4a583b18-139b-41e8-90c5-3d3b0316ce1d                                            |
| status                | ACTIVE                                                                          |
| tenant_id             | 4b36a201448b4fc19b91439d8e883b36                                                |
| vnic_index            |                                                                                 |
+-----------------------+---------------------------------------------------------------------------------+
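
A specific address from the allocation can also be requested by adding ip_address to the --fixed-ip argument. For example (10.10.1.50 is just an illustrative address within TestSubnet2):

~$ neutron port-create TestNet --fixed-ip subnet_id=e93408fe-839b-40b4-bb96-ebfe182860e3,ip_address=10.10.1.50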

9. We can now boot an instance called TestVM that uses the specified Neutron port:

~$ nova boot --flavor m1.small --image ubuntu-14.04-server-amd64 --nic port-id=44589bfa-82d7-4c45-85cc-c33f2845187a TestVM
+--------------------------------------+------------------------------------------------------------------+
| Property                             | Value                                                            |
+--------------------------------------+------------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                           |
| OS-EXT-AZ:availability_zone          | nova                                                             |
| OS-EXT-SRV-ATTR:host                 | -                                                                |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                                |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000008                                                |
| OS-EXT-STS:power_state               | 0                                                                |
| OS-EXT-STS:task_state                | scheduling                                                       |
| OS-EXT-STS:vm_state                  | building                                                         |
| OS-SRV-USG:launched_at               | -                                                                |
| OS-SRV-USG:terminated_at             | -                                                                |
| accessIPv4                           |                                                                  |
| accessIPv6                           |                                                                  |
| adminPass                            | PfnpuXAn2BAH                                                     |
| config_drive                         |                                                                  |
| created                              | 2016-03-08T23:54:26Z                                             |
| flavor                               | m1.small (2)                                                     |
| hostId                               |                                                                  |
| id                                   | 0e2672e3-27a4-48a8-ae3f-a4de18c93af2                             |
| image                                | ubuntu-14.04-server-amd64 (0a6bd8dd-7190-4cef-96be-874b00bb8c55) |
| key_name                             | -                                                                |
| metadata                             | {}                                                               |
| name                                 | TestVM                                                           |
| os-extended-volumes:volumes_attached | []                                                               |
| progress                             | 0                                                                |
| security_groups                      | default                                                          |
| status                               | BUILD                                                            |
| tenant_id                            | 4b36a201448b4fc19b91439d8e883b36                                 |
| updated                              | 2016-03-08T23:54:26Z                                             |
| user_id                              | 4b6dbc35bade4b8ab38b42098a9d8648                                 |
+--------------------------------------+------------------------------------------------------------------+
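
Once the instance finishes building, its address can be verified from the CLI; it should match the fixed IP assigned to the pre-created port (10.10.1.3 in this example):

~$ nova show TestVM | grep TestNet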

10. Finally, the NSX static DHCP binding is updated for the instance (see Figure 3):


Figure 3: DHCP Static Binding for Neutron Subnet Pool Allocation to Instance

As you can see, subnet pools make subnet creation easier for tenants, who no longer have to worry about defining their own IP address space. Join us at the OpenStack Summit in Austin, where we will be talking about this and other Neutron services for the Enterprise, delivered by the VMware NSX platform.

Marcos Hernandez is a Staff Systems Engineer in the Networking & Security Business Unit (NSBU). He is responsible for supporting large global enterprise accounts and providing technical guidance on VMware’s suite of networking and cloud solutions, including NSX and OpenStack. Marcos has a background in datacenter networking design and expert knowledge of routing and switching technologies. He holds the CCIE (#8283) and VCIX certifications and has a master’s degree in Telecommunications from Universidad Politécnica de Madrid.

