In the previous post we took a look at the simplicity of deploying VMware NSX into a new or existing VMware environment. This post builds on that infrastructure to create a three-tier application with Web, App and Database tiers.
Application Topology
The application displayed above is what this post builds out. Note that there are four logical network segments – Web, App and DB plus an Uplink (Transport) – with routing functionality provided by the Logical Distributed Router and an NSX Edge Services Gateway that connects the logical network topology to the physical infrastructure.
Logical Switching
Traditionally we have built VLANs across our environments to create L2 broadcast domains. This meant logging into every required network device and issuing commands such as vlan 10, vlan 20, vlan 30. In VMware NSX this is done with Logical Switches, and it is very easy to build out Logical Switches that span your entire NSX-enabled domain.
Within the Network and Security inventory item, select Logical Switches then the green plus icon.
It is possible to have many Transport Zones within your environment that can be allocated to different tenants to further carve out parts of the infrastructure or to provide an additional level of control plane isolation. In this example there is only one Transport Zone. Click OK to continue. As simple as that, you have configured a logical switch across your entire environment. Repeat this three times for the Web, App and DB tiers.
Notice the Segment ID increment. We set the range as 5000-5999 earlier, and these were the first logical switches we made. Now that all the switches have been created it is time to attach workloads. The workloads that we attach, our guest VMs, can be chosen by right clicking and selecting “Add VM” or the little blue squared icon with a green plus.
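Behind the wizard, NSX Manager exposes the same operation through its REST API: a logical switch is created by POSTing a virtualWireCreateSpec to a transport zone (scope), and the manager allocates the next free Segment ID from the configured pool. The sketch below only builds the request body with the standard library; the manager host name and the scope ID vdnscope-1 mentioned in the usage note are assumptions for this lab, not values from the post.

```python
# Build the XML body NSX Manager expects when creating a logical switch.
# Only the request body is constructed here; no call is made.
import xml.etree.ElementTree as ET

def build_virtualwire_spec(name, tenant="default",
                           control_plane_mode="UNICAST_MODE"):
    """Body for POST /api/2.0/vdn/scopes/{scope}/virtualwires (NSX-v)."""
    spec = ET.Element("virtualWireCreateSpec")
    ET.SubElement(spec, "name").text = name
    ET.SubElement(spec, "tenantId").text = tenant
    ET.SubElement(spec, "controlPlaneMode").text = control_plane_mode
    return ET.tostring(spec, encoding="unicode")

# One spec per tier, mirroring the three wizard runs above.
specs = {tier: build_virtualwire_spec(tier) for tier in ("Web", "App", "DB")}
print(specs["Web"])
```

Sending each body with an authenticated POST to https://nsx-manager/api/2.0/vdn/scopes/vdnscope-1/virtualwires would create the switch and return its new ID – one call per tier, just as the wizard was run three times.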
The first is the Web Tier. Select the Web virtual machines in this case. Select Next.
We can select which of a guest virtual machine's vNICs to connect to the logical switch. This allows an administrator to connect multiple vNICs to different network segments, which works well when a guest uses multiple NICs for different functions on different segments.
Click Next and finish the attachment of virtual machines. Now for an infamous ping test to highlight some connectivity!
Here we have two Web virtual machines, web-sv-01a (172.16.10.11) and web-sv-02a (172.16.10.12). These two guests reside on different physical hosts yet have Layer 2 adjacency; they are unaware of any encapsulation occurring between the hypervisors. The workloads could be anywhere in the data center and the guests would still have L2 connectivity.
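Why the ping succeeds without a router can be sanity-checked with nothing more than the standard library: both addresses fall inside the Web tier's 172.16.10.0/24, so traffic between the two guests stays on the one logical switch.

```python
# Confirm web-sv-01a and web-sv-02a share one L2 segment: both addresses
# sit inside the Web tier subnet, so traffic between them never crosses
# a router, only the VXLAN-backed logical switch.
import ipaddress

web_tier = ipaddress.ip_network("172.16.10.0/24")
vms = {"web-sv-01a": "172.16.10.11", "web-sv-02a": "172.16.10.12"}

same_segment = all(ipaddress.ip_address(ip) in web_tier for ip in vms.values())
print(same_segment)  # True: the ping stays inside the logical switch
```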
Logical Routing
VMware NSX gives administrators the ability to use Layer 3 routing functions within the hypervisor. This eliminates the traffic hairpinning that plagues many environments where the Layer 3 gateway lives on physical infrastructure. The Logical Distributed Router provides "in hypervisor" L3 routing built for east-west communication: traffic between application tiers is routed within the hypervisor kernel, as close as possible to the source.
Within the Network and Security inventory item select NSX Edges.
When selecting the green plus you will be presented with two deployment options – Logical Distributed Router and Edge Services Gateway. Logical Distributed Router delivers in kernel routing and the Edge Services Gateway is a virtual appliance.
The radio button has Logical (Distributed) Router selected, which enables in-kernel routing. Fill out the name of the Logical Router and the optional fields.
Configure a username and password accordingly and enable access to the logical router if you desire.
Configure the Cluster and Datastore you wish to assign the Control VM to. The Control VM is a small virtual appliance that hosts the control plane of the logical router: routing protocols, logical interfaces and related functions are managed there. The data path, where all actual routing occurs, lives in the hypervisor kernel. If the Control VM goes down (and there is an HA feature you can enable), only the control plane is disrupted – routing updates cannot be processed until a Control VM is back online – but there is no interruption to the data path or packet forwarding.
Once the data store is selected it is time to configure interfaces for the logical router to connect to. This interface is analogous to a Switched Virtual Interface (SVI) or a Routed Virtual Interface (RVI).
Select Add Interface, mark it as an internal interface, click the green plus, select the Web Tier logical segment. Next assign the default gateway for the segment. Populate the address and subnet mask.
Note that southbound interfaces that connect to logical switches with workloads on them are generally internal. Northbound interfaces are where connectivity to an upstream subnet is made and this is an uplink.
Segment | Subnet | Interface
Web | 172.16.10.0/24 | Internal |
App | 172.16.20.0/24 | Internal |
DB | 172.16.30.0/24 | Internal |
Transport | 192.168.10.0/29 | Uplink |
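The default gateway for each internal interface can be derived from the table above. A short stdlib sketch, assuming the common convention of using the first usable host address of each subnet (the post leaves the exact gateway IPs to the administrator):

```python
# Derive each segment's gateway as the first usable address of its
# subnet – an assumed convention, not a value stated in the post.
import ipaddress

segments = {
    "Web": "172.16.10.0/24",
    "App": "172.16.20.0/24",
    "DB": "172.16.30.0/24",
    "Transport": "192.168.10.0/29",
}

gateways = {name: str(next(ipaddress.ip_network(cidr).hosts()))
            for name, cidr in segments.items()}
print(gateways)
```

Under that convention the Web tier LIF would be 172.16.10.1/24, the App tier 172.16.20.1/24, and so on.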
Repeat this task for the addressing structure of your three-tier application. Once you have assigned your addressing and completed it for all tiers click Finish.
This will start the deployment process, and you will see the status change to Busy while the control virtual appliance is deployed to the previously defined datastore. Now it is time to test routing. Let's have a look at the environment, first from the perspective of the logical router Control VM.
It shows connected Layer 3 interfaces. To ensure everything is correct verify the results against the hypervisor kernel.
The first command, net-vdr -I -l, displays the instances of the logical distributed router. Here we can see the edge, the associated controller IP and the control plane device, along with the number of Logical Interfaces (LIFs) and routes in the table.
The second command, net-vdr -l --route default+edge-1, outputs the routes installed in the kernel for that instance. As LIFs are instantiated you can confirm their routes with this command, which also forms the basis of advanced troubleshooting in operations.
Now it is time for a simple test to ping between guests in the environment. As displayed, routing now occurs between the Web tier and the App tier on a logical interface within the kernel.
NSX Edge Services Gateway
So far we have configured logical routing and logical switching, which provides connectivity between our application tiers. Now we've come to the point where the administrator determines how the application will be accessed. One method of providing connectivity to the logical application network is deploying an Edge Services Gateway.
An Edge Services Gateway (ESG) is a virtual appliance that can provide routing, firewall, load balancer, VPN, Layer 2 bridging services and more. To deploy an ESG click on NSX Edges then the green plus.
Ensure the Edge Services Gateway radio button is selected and populate the relevant hostname and subsequent details. Click Next.
Populate the administrator credentials and select Next.
Here you can select the size of the appliance. The sizing determines the resources the appliance consumes when active, giving the administrator a choice of what is appropriate for a specific application. This example uses a Large instance.
Size | CPU | Memory
Compact | 1 vCPU | 512 MB
Large | 2 vCPU | 1024 MB
Quad Large | 4 vCPU | 2048 MB
X-Large | 6 vCPU | 8192 MB
This table highlights the resources required for each deployment of a NSX ESG appliance. The Large instance consumes 2 vCPU and 1024 MB of RAM.
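The sizing decision amounts to picking the smallest appliance that meets a resource floor. A sketch using VMware's documented NSX-v footprints (appliance names vary slightly between releases); the helper function is illustrative, not an NSX API:

```python
# ESG appliance footprints, smallest first (per VMware's NSX-v sizing).
SIZES = [
    ("Compact", 1, 512),       # (name, vCPU, memory in MB)
    ("Large", 2, 1024),
    ("Quad Large", 4, 2048),
    ("X-Large", 6, 8192),
]

def smallest_size(min_vcpu, min_mem_mb):
    """Return the smallest ESG size meeting both resource floors."""
    for name, vcpu, mem in SIZES:
        if vcpu >= min_vcpu and mem >= min_mem_mb:
            return name
    raise ValueError("no ESG size satisfies the request")

print(smallest_size(2, 1024))  # Large, the size chosen in this example
```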
After selecting the size of the ESG appliance, select a Resource Pool and Datastore. Click the green plus and allocate the virtual appliance to the datastore.
There are three connections required of this Edge Services Gateway (see the reference diagram at the start of the post): an uplink into the external network, an internal interface to the Transit network that connects to our application's Logical Router, and an internal interface to a VLAN-backed port group that our management host connects to. The uplink into the external network is a routed link to an IP address within the physical infrastructure.
After selecting and creating these interfaces with their subsequent addressing, select Next.
To specify a default gateway, select the relevant vNIC and assign a gateway IP. This will allow a default route and a next hop IP address to be installed into the routing table. Click Next.
In this example the radio button for Default Traffic Policy is set to Accept. If HA has been configured, you can specify the keep-alive link and relevant configuration. Click Next.
Confirm the details that you have entered into the NSX Edge. This will allow the administrator to review the configuration before committing to the deployment. Select Finish.
With that, the NSX Edge Services Gateway will deploy and be ready for configuration. With very little information we have deployed a virtual appliance that delivers load balancing, routing, VXLAN/VLAN termination, firewall functions, VPN services, L2 bridging and more.
Dynamic Routing
So far, we’ve deployed a three-tier logical application with this topology, plus an Edge Services Gateway connected to the uplink of the logical router, with an uplink of its own to the physical infrastructure. The next step is informing the Edge Services Gateway about the Logical Interfaces (LIFs) connected to the logical router. This can be done with a dynamic routing protocol such as OSPF, IS-IS or BGP, or with traditional static routing.
This example uses an Interior Gateway Protocol (IGP) known as Open Shortest Path First (OSPF). The first configuration point is the Logical Distributed Router. Select NSX Edges and double-click the Logical Distributed Router that was deployed previously.
Under the Manage tab select Routing, Global Configuration and select Edit on Dynamic Router Configuration.
Set the Router ID. In this example it is the address of the uplink interface that connects to the Transit logical switch facing the Edge Services Gateway.
Accept the changes and click Publish Changes. Select the OSPF tab on the left side.
Note the default configuration of OSPF. The Area to Interface mapping, Area Definition and OSPF Configuration all need to be completed. Click the Edit button for OSPF Configuration.
Tick the Enable OSPF box. The Protocol Address is that of the Logical Router's Control VM, which is responsible for the OSPF control plane – maintaining OSPF state, neighbour relationships and route propagation. The Forwarding Address is the uplink interface IP address. Click OK to finish.
Next click the Green Plus under Area Definitions. OSPF neighbors need to peer with routers with the same area ID. We defined Area 10 earlier and therefore we need to use this again.
Select the Uplink interface. This is the interface you want to present to OSPF to be included in the routing protocol.
Review the changes and now click Publish Changes. This will enable OSPF on your Logical Router.
Click the Route Redistribution menu along the left side. Notice how there is already a redistribution rule for any Connected interface into OSPF. Remember these? All these L3 interfaces are directly connected interfaces.
Redistributing connected routes into OSPF allows the LIF subnets that live in the kernel of every hypervisor to be advertised, presenting the LIFs as OSPF routes to the NSX Edge Services Gateway.
Now it is time to enable OSPF on the Edge Services Gateway.
Double click the Edge Services Gateway. This will open an advanced preference pane. Select the Manage tab and it will display settings about the Virtual Appliance. Select Routing.
Notice the Default Gateway is already populated from the deployment window.
Select the Edit button next to Dynamic Routing Configuration.
The Router-ID needs to be configured. Use the interface address of the Uplink interface. Do not enable OSPF from this window. Click Save.
Publish the changes by clicking the Publish Changes banner across the top. This allows administrators to configure various elements and Publish when ready. Along the left side select OSPF.
Network Engineers will note familiar terminology here in regards to OSPF. Click the Green Plus under the Area Definitions section.
Next create an area for OSPF. The area in this example is 10. If required, change the Authentication, and then click OK. Next, select the Green Plus under the Area to Interface Mapping section.
Configure the interface that is required in the OSPF routing process and the area it should be residing in. Area 10 is the example used here.
Notice the vNIC in Area to Interface Mapping is now in Area 10 with the default timers. Click Enable at the top to enable the OSPF protocol. Next, confirm that OSPF is enabled and that routes are being received from the Logical Router.
Here I have used SSH to log into the NSX Edge Services Gateway. The command show ip route shows that the networks redistributed on the logical router are being advertised via OSPF to the Edge Services Gateway, and the default route is in place. The output of show ip ospf statistics shows that the Shortest Path First algorithm has been run, and show ip ospf neighbors outputs the neighbour relationship between the Logical Router and the Edge Services Gateway.
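The neighbour relationship seen in show ip ospf neighbors only forms when key interface parameters agree on both ends. A simplified sketch of that check, using this lab's values (area 10, default 10s/40s timers); real OSPF also compares subnet mask, authentication, area type and MTU:

```python
# Simplified model of OSPF adjacency requirements: area ID and
# hello/dead intervals must match on both interfaces.
def can_form_adjacency(a, b):
    """True if two OSPF interfaces agree on area and timers."""
    return (a["area"] == b["area"]
            and a["hello"] == b["hello"]
            and a["dead"] == b["dead"])

ldr_uplink = {"area": 10, "hello": 10, "dead": 40}    # Logical Router side
esg_internal = {"area": 10, "hello": 10, "dead": 40}  # ESG side

print(can_form_adjacency(ldr_uplink, esg_internal))  # True
```

This is why the same area (10) was configured on both the Logical Router and the Edge Services Gateway: a mismatch in any of these parameters and the neighbour would never appear.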
This post has seen the administrator configure dynamic routing on the NSX Edge Services Gateway and the Logical Router. Now that the logical application network has connectivity to the physical world, the application can be accessed and consumed.