
Mastering NSX Load Balancers using vRealize Automation

Unlock the Greater Value of vRealize Automation

Many vRealize Automation (vRA) customers start their IT automation journey with basic Infrastructure-as-a-Service. Once you have achieved fast provisioning and delivery of infrastructure services to end users, the next logical step is to connect vRealize Automation with your other existing tools, deliver custom services and make true end-to-end automation happen in your IT environment. vRealize Automation, with its unique extensibility framework and out-of-the-box integration solutions, can help enterprise customers unlock the real value of an automated cloud ecosystem.

With the goal of helping customers who are looking to move beyond the basics, we have started a new technical blog series providing answers and examples for “Better Together” use cases between vRealize Automation and other prevailing IT tools, including service desk, storage, configuration management, load balancer, IPAM and many more. The writers of these blogs are solution engineers, technical account managers, cloud management specialists and members of many other teams who work with customers day in and day out. They are eager to share their knowledge, tips and tricks, and insights into making your job easier with vRealize Automation.

In the first post, we will walk through the process of defining and managing NSX Load Balancers using vRealize Automation. Load balancing is an increasingly popular topic of discussion. Why, you may ask? Simple: load balancing is a critical component of most enterprise applications, providing both availability and scalability to the system. Using vRealize Automation with NSX, customers have the ability to define and manage on-demand NSX Load Balancers as part of an application’s design.

This post details how to define and use on-demand NSX Load Balancers within an application blueprint design, including the day-2 functions available once the application is provisioned. 

For the purpose of this post, we assume an NSX Endpoint has been configured in vRA, a prerequisite for on-demand services. I will utilise an often-used blueprint for demonstration purposes: the WordPress Blueprint consists of a Web/Application tier and a Database tier, and the Web/Application tier will be load balanced using an NSX On-Demand Load Balancer defined within the Blueprint.

 

Defining NSX Load Balancers within a Blueprint 

Within the Blueprint Canvas and under the Network & Security Category, drag and drop the On-Demand Load Balancer component onto the design canvas.

 

 

  • NSX Load Balancers are provisioned and configured on-demand and bound to the application itself, meaning they are deployed and life-cycled with the application.
  • Multiple load balancers can be placed onto the canvas, but only one NSX Edge appliance will be deployed to provide load balancing services.

The following load balancer policy settings are available directly within the design canvas; the sketch after this list summarises the values used in this example:

  • ID – Provide a name to identify the component; in this example, wp-nsxlb was used.
  • Member – Sets the vSphere machines to be load balanced. In this example the value is set to wp-web-server to load balance the web tier of the WordPress application.
  • Member network – Used to select the member network to load balance on. In this example, the wp-web-server is only connected to one network, so only corp192168110024 can be selected.
  • VIP Network – The network on which the Virtual IP address should be defined.
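
For reference, the example values above can be captured as a simple data structure. This is illustrative only; the field names below mirror the canvas labels rather than any actual vRA or NSX schema, and the VIP network value is a placeholder:

# Illustrative sketch of the on-demand load balancer component as configured on the canvas.
# Field names mirror the canvas labels; they are not a vRA API schema.
on_demand_load_balancer = {
    "id": "wp-nsxlb",                       # component name used in this example
    "member": "wp-web-server",              # machine component whose instances are load balanced
    "member_network": "corp192168110024",   # network the members are connected to
    "vip_network": "<vip-network>",         # network on which the virtual IP is defined (placeholder)
}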

 

 

You will also need to configure the virtual servers on the load balancer; these are the network services – ports and protocols – that need to be load balanced. In this example, HTTP traffic on port 80 will be load balanced and used for health check monitoring. Multiple virtual servers can be configured for load balancing if required.

 

 

Additional advanced load balancer configuration options are available when editing or creating virtual servers, by selecting the Customize radio button and entering the information on the additional tabs. This allows you to tailor the load balancer configuration to the needs of the application. The settings allow for configuration of the load balancing algorithm, persistence settings, ports, health monitors, transparent mode and connection limits.
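
To make the Customize options more concrete, the sketch below shows a virtual server definition with those advanced settings filled in. The field names and values are illustrative placeholders rather than the exact vRA/NSX property names:

# Illustrative sketch of a customised virtual server; values are examples only.
virtual_server = {
    "protocol": "HTTP",
    "port": 80,                        # service port to load balance
    "algorithm": "ROUND_ROBIN",        # load balancing algorithm, e.g. round robin or least connections
    "persistence": "SOURCE_IP",        # session persistence method
    "health_monitor": {"protocol": "HTTP", "port": 80, "url": "/"},
    "transparent_mode": False,         # when enabled, member servers see the original client IP
    "connection_limit": 0,             # maximum concurrent connections; 0 means unlimited
}

# Multiple virtual servers can be defined for the same load balancer if required.
virtual_servers = [virtual_server]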

At provisioning time, vRA will automatically provision an NSX Edge Services Gateway (ESG), enable load balancing and configure the load balancing policy based on the details in the blueprint design. One ESG is deployed per application and can be shared with other NSX services, such as on-demand NAT networks.
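
Requests are normally submitted from the catalog UI, but the same published blueprint can also be requested programmatically. Below is a minimal sketch against the vRA 7.x catalog-service consumer API; the appliance FQDN, credentials and catalog item ID are placeholders, and the request payload should always be taken from the template returned by your own environment:

import requests

VRA = "https://vra.example.com"          # placeholder vRA appliance FQDN
TENANT = "vsphere.local"                 # placeholder tenant

# Authenticate and obtain a bearer token from the identity service
token = requests.post(
    f"{VRA}/identity/api/tokens",
    json={"username": "user@example.com", "password": "********", "tenant": TENANT},
    verify=False).json()["id"]
headers = {"Authorization": f"Bearer {token}", "Accept": "application/json"}

catalog_item_id = "<wordpress-catalog-item-id>"   # placeholder catalog item ID

# Fetch the request template for the published WordPress blueprint; the template
# already reflects the blueprint design, including the on-demand load balancer.
template = requests.get(
    f"{VRA}/catalog-service/api/consumer/entitledCatalogItems/{catalog_item_id}/requests/template",
    headers=headers, verify=False).json()

# Submit the request; vRA then provisions the machines, the NSX Edge and the LB policy.
requests.post(
    f"{VRA}/catalog-service/api/consumer/entitledCatalogItems/{catalog_item_id}/requests",
    headers=headers, json=template, verify=False)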

 

NSX Edge Configuration

Some application designs may require additional configuration settings. For example, customers may need high availability or additional CPU and memory resources applied to the NSX Edge Services Gateway. Using custom properties set at the blueprint level, it is possible to configure the size and high availability of the NSX Edge appliance.

The deployment size of the NSX Edge appliance can be controlled by applying the custom property NSX.Edge.ApplianceSize with a value of either compact, large, quadlarge or xlarge. The following table provides further information on the resources utilised:

 

Value Sizing
compact For small deployments, POCs, and single service use.

  • CPU = 1
  • RAM = 512 MB
  • Disk = 512 MB
large For small to medium or multi-tenant deployments.

  • CPU = 2
  • RAM = 1 GB
  • Disk = 512 MB
quadlarge For high throughput, equal-cost multi-path routing (ECMP) or high-performance firewall deployments.

  • CPU = 4
  • RAM = 1 GB
  • Disk = 512 MB
xlarge For L7 load balancing and dedicated core deployments.

  • CPU = 6
  • RAM = 8 GB
  • Disk = 4.5 GB (4 GB swap)

 

Setting the custom property NSX.Edge.HighAvailability to true will deploy a highly available NSX Edge (an active/standby pair of appliances).
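
For quick reference, the two custom properties and their accepted values are summarised below. They are applied on the blueprint in the designer; the snippet is simply a convenient way to document and validate the values you intend to use:

# Blueprint-level custom properties controlling the on-demand NSX Edge appliance.
# These are entered in the blueprint designer; this dictionary is documentation only.
NSX_EDGE_PROPERTIES = {
    "NSX.Edge.ApplianceSize": "large",       # one of: compact, large, quadlarge, xlarge
    "NSX.Edge.HighAvailability": "true",     # deploy the NSX Edge as a highly available pair
}

VALID_SIZES = {"compact", "large", "quadlarge", "xlarge"}
assert NSX_EDGE_PROPERTIES["NSX.Edge.ApplianceSize"] in VALID_SIZES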

Managing Deployments

NSX Load Balancers can be managed and modified by the deployment owner after the application is provisioned, provided that they are entitled to the ‘Reconfigure Load Balancer’ resource action.

By selecting the Load Balancer within the deployment and choosing ‘Reconfigure’ from the Actions menu, the user can add, edit or delete the virtual servers defined within the Load Balancer, or configure additional virtual server settings. In the WordPress deployment below, the virtual server settings were changed to remove HTTP on port 80 and add HTTPS on port 443.

 

 

After the changes are made, simply hit Submit to process the request and vRA will make the necessary updates to the NSX Load Balancer settings. As with all actions, this change can be governed to require additional approvals prior to being executed.
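
When this kind of change needs to be scripted rather than driven through the UI, the same ‘Reconfigure Load Balancer’ action can be invoked through the vRA 7.x catalog-service API. The sketch below is illustrative: the token, resource ID and the edits made to the request template are placeholders, and the exact action name and payload structure should be confirmed against your own environment:

import requests

VRA = "https://vra.example.com"                    # placeholder vRA appliance FQDN
headers = {"Authorization": "Bearer <token>",      # token obtained from /identity/api/tokens
           "Accept": "application/json"}
resource_id = "<load-balancer-resource-id>"        # placeholder ID of the deployed load balancer

# List the day-2 actions the user is entitled to on this resource
actions = requests.get(
    f"{VRA}/catalog-service/api/consumer/resources/{resource_id}/actions",
    headers=headers, verify=False).json()["content"]
# The display name may differ slightly; inspect the list returned above if it does
reconfigure = next(a for a in actions if a["name"] == "Reconfigure Load Balancer")

# Fetch the action's request template, adjust the virtual servers in it
# (for example removing HTTP on 80 and adding HTTPS on 443), then submit it
template = requests.get(
    f"{VRA}/catalog-service/api/consumer/resources/{resource_id}"
    f"/actions/{reconfigure['id']}/requests/template",
    headers=headers, verify=False).json()
# ... edit template["data"] here to change the virtual server definitions ...

requests.post(
    f"{VRA}/catalog-service/api/consumer/resources/{resource_id}"
    f"/actions/{reconfigure['id']}/requests",
    headers=headers, json=template, verify=False)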

Additional Resources

By enabling on-demand, application-centric networking and security services, vRA and NSX can significantly reduce application development time while providing the needed flexibility of dynamic networks throughout an application’s lifecycle. To learn about more use cases, please read the additional blogs below.

https://blogs.vmware.com/management/2017/09/vra-nsx-intro-app-centric-networking-security.html

https://blogs.vmware.com/management/2017/10/vra-nsx-application-services-design.html

 

About the Author

Gavin Lees is a Senior Technical Account Manager based in the UK. As a vExpert with a specialist interest in Cloud Management, he works with a small number of large enterprise accounts who use vRealize Automation. He uses his experience to provide technical guidance and roadmap content, and acts as a single point of contact for everything VMware related.

He has also developed and written content for VMware’s Hands-on Labs, covering advanced vRealize Automation topics.

Gavin can be found @GavOnCloud and is the author of http://GavOnCloud.com

 

Learn More About vRealize Automation