
Overcoming Application Delivery Challenges for Kubernetes

Kubernetes offers an excellent automated application deployment platform for container-based workloads. Application services such as traffic management, load balancing within and across clusters, regions, and availability zones, service discovery, monitoring/analytics, and application security are critical for modern application infrastructure. However, deploying Kubernetes presents a wide range of challenges. Some are unique to Kubernetes itself, while others, such as choosing a container orchestration solution and overcoming the major factors inhibiting adoption, are the typical growing pains that accompany any new technology. Enterprises need scalable, real-world-tested, and robust application networking services to deploy microservices applications in Kubernetes clusters in production environments.

Challenges with Application Delivery for Kubernetes

Common application services, such as load balancing / traffic management, network performance monitoring, and application security, that are available for traditional applications often need to be implemented or approached differently for container-based applications. Here are some of the challenges in deploying container-based applications.

Multiple discrete solutions

Modern application architectures based on microservices have made appliance-based load balancing solutions obsolete. Traditional hardware/virtual load balancers and open source tools are not equipped to support north-south ingress services, do not support application autoscaling, and lack native integration with peripheral services such as DNS, IPAM, and web application firewall (WAF).

Complex operations

With disparate solutions, IT faces more complex operations in managing and troubleshooting multiple independent components from different vendors.

Lack of observability

End-to-end visibility is especially important with container-based applications. Application developers and operations teams alike need to be able to view the interactions between the peripheral services and the container services to identify erroneous interactions, security violations, and potential latencies.

Partial automation

Application and networking services need to be API-driven and programmable, without the constraints of hardware appliances. Multi-vendor solutions limit flexibility and portability across environments. They also require in-depth scripting knowledge for each product, yet deliver only partial automation, if any at all, forcing compromises between features, automation, and scale.

Therefore, it is necessary to have consolidated services for Kubernetes from a single platform that simplifies the delivery of application services and matches the cloud-native automation expected for modern applications.

Consolidated Services Fabric For Kubernetes

The VMware NSX Advanced Load Balancer (formerly Avi Platform) integrates with container orchestration platforms such as VMware Tanzu, Kubernetes, or Red Hat OpenShift on virtual machines and bare metal servers across on-prem, multi-cloud, multi-cluster, and multi-region environments. To deliver comprehensive container services for both traditional and cloud-native applications, NSX ALB Kubernetes Ingress Services is optimized for north-south (ingress controller) traffic management, local and global server load balancing (GSLB), performance monitoring, dynamic service discovery, application security such as web application firewall (WAF), and DNS/IPAM management. Combining L4 through L7 load balancing, GSLB, DNS/IPAM management, and security functionalities in a single solution, NSX ALB Kubernetes Ingress Services provides operational consistency regardless of which on-prem, private-cloud, or public-cloud environment the Kubernetes cluster runs in.

Kubernetes Cluster Integration with Avi Kubernetes Operator (AKO)

NSX ALB Kubernetes Ingress Services can be used for integration with multiple Kubernetes clusters, with each cluster running its own instance of Avi Kubernetes Operator (AKO).

AKO is a pod running in each Kubernetes cluster that communicates with the Kubernetes control plane to obtain configuration. AKO synchronizes the required Kubernetes objects and calls the Avi Controller APIs to deploy and configure the ingress services via the Avi Service Engines (SEs). Clusters are kept separate on the SEs, which are deployed outside the cluster in the data plane, by using VRF contexts. Automated IPAM and DNS functionality is handled by the Avi Controller.
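The VRF-context separation described above can be illustrated with a small sketch. All of the names here (the `ServiceEngine` class, the VRF labels) are illustrative stand-ins, not the actual Avi object model; the point is only that lookups scoped to a per-cluster routing context keep overlapping VIP ranges from colliding.

```python
# Toy illustration of VRF-context isolation on a shared Service Engine:
# each cluster's virtual services land in a separate routing context, so
# identical VIPs used by different clusters never collide.
# (Illustrative only -- not the real Avi data-plane API.)

class ServiceEngine:
    def __init__(self):
        self.vrfs = {}  # vrf name -> {vip: backend}

    def add_route(self, vrf, vip, backend):
        self.vrfs.setdefault(vrf, {})[vip] = backend

    def lookup(self, vrf, vip):
        # Lookups are scoped to a single VRF context.
        return self.vrfs[vrf].get(vip)

se = ServiceEngine()
se.add_route("cluster-a-vrf", "10.0.0.10", "svc-a")
se.add_route("cluster-b-vrf", "10.0.0.10", "svc-b")  # same VIP, different cluster

print(se.lookup("cluster-a-vrf", "10.0.0.10"))  # -> svc-a
print(se.lookup("cluster-b-vrf", "10.0.0.10"))  # -> svc-b
```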

Once a new ingress service is created in a cluster, AKO automatically synchronizes with the Avi Controller, which creates a Virtual Service, allocates a VIP from IPAM, publishes the FQDN to DNS, and designates Avi Service Engines to host the newly created Virtual Service for Ingresses and Routes. AKO then updates this VIP and hostname in the ingress object's status field.
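That sequence can be sketched in plain Python. Everything here (the `IpamPool` and `AviController` classes, the IP range, the host name) is a hypothetical simplification for illustration, not the actual AKO or Avi Controller API:

```python
# Hedged sketch of the ingress-creation flow described above:
# new ingress -> virtual service -> VIP from IPAM -> FQDN in DNS -> status update.

class IpamPool:
    """Toy IPAM: hands out VIPs from a static range."""
    def __init__(self, addresses):
        self._free = list(addresses)

    def allocate(self):
        return self._free.pop(0)

class AviController:
    """Stand-in for the Avi Controller, which owns IPAM, DNS, and virtual services."""
    def __init__(self, ipam):
        self.ipam = ipam
        self.dns = {}               # fqdn -> vip
        self.virtual_services = {}  # name -> vip

    def create_virtual_service(self, name, fqdn):
        vip = self.ipam.allocate()          # automated VIP allocation
        self.dns[fqdn] = vip                # automated FQDN publication
        self.virtual_services[name] = vip   # SEs would be designated to host this VS
        return vip

def reconcile_ingress(controller, ingress):
    """What AKO conceptually does for each new ingress object."""
    vip = controller.create_virtual_service(ingress["name"], ingress["host"])
    # AKO then writes the VIP and hostname back into the ingress status field:
    ingress["status"] = {"loadBalancer": {"ingress": [
        {"ip": vip, "hostname": ingress["host"]}]}}
    return ingress

controller = AviController(IpamPool(["10.0.0.10", "10.0.0.11"]))
ing = reconcile_ingress(controller, {"name": "shop-ingress", "host": "shop.example.com"})
print(ing["status"]["loadBalancer"]["ingress"][0])  # -> {'ip': '10.0.0.10', 'hostname': 'shop.example.com'}
```

The status field populated in the last step is the same structure a standard Kubernetes ingress controller fills in, which is what allows other components to discover the VIP and hostname.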

Kubernetes GSLB Integration with Avi Multi-Cluster Kubernetes Operator (AMKO)

To extend applications across multi-region and multi-availability-zone deployments, the Avi Multi-Cluster Kubernetes Operator (AMKO) is required.

AMKO is an Avi pod running in the Kubernetes/OpenShift GSLB leader cluster. In conjunction with AKO, it facilitates multi-cluster application deployment by mapping the same application deployed on multiple clusters to a single GSLB service, extending application ingresses across multi-region and multi-availability-zone deployments.

Since AKO runs in every Kubernetes cluster as the ingress controller, handling the creation and management of Virtual Services, VIPs, FQDNs, and DNS records, AMKO picks up the new VIPs and hostnames from the status field of each ingress object. AMKO then calls the Avi Controller APIs on the leader cluster to create a new GSLB service with the new VIP and to configure the GSLB services and DNS/IPAM settings, which are synchronized to all the follower clusters automatically.
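The aggregation step can be sketched as follows. The function name, cluster names, and data shapes are illustrative assumptions, not the real AMKO API; the sketch only shows how ingress status fields from several clusters fold into one GSLB service per hostname:

```python
# Hedged sketch of the AMKO flow described above: read each cluster's ingress
# status (VIP + hostname, as populated by AKO) and group members with the same
# hostname into a single GSLB service on the leader cluster.

def build_gslb_services(cluster_ingresses):
    """cluster_ingresses: {cluster_name: [ingress objects with populated status]}"""
    gslb = {}  # fqdn -> list of (cluster, vip) members answered by GSLB DNS
    for cluster, ingresses in cluster_ingresses.items():
        for ing in ingresses:
            for entry in ing["status"]["loadBalancer"]["ingress"]:
                gslb.setdefault(entry["hostname"], []).append((cluster, entry["ip"]))
    return gslb

# The same application deployed in two regions, each with its own AKO-assigned VIP:
clusters = {
    "us-east": [{"status": {"loadBalancer": {"ingress": [
        {"hostname": "shop.example.com", "ip": "10.1.0.10"}]}}}],
    "eu-west": [{"status": {"loadBalancer": {"ingress": [
        {"hostname": "shop.example.com", "ip": "10.2.0.10"}]}}}],
}

services = build_gslb_services(clusters)
print(services["shop.example.com"])
# -> [('us-east', '10.1.0.10'), ('eu-west', '10.2.0.10')]
```

One hostname ends up with one GSLB service whose members span both clusters, which is the mapping the GSLB DNS then uses to steer clients to a region.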


To overcome the lack of observability, partial automation, and complex operations that arise from the multiple discrete solutions used in container-based implementations, NSX ALB Kubernetes Ingress Services consolidates these services in a single platform. The NSX ALB Platform delivers on this promise, empowering organizations to deliver applications and services with confidence and at reduced TCO.

To learn more about NSX ALB's elastic, consolidated Kubernetes ingress services, please download the Deliver Elastic Kubernetes Ingress Controller and Services white paper.