VMware Marketplace

Tanzu Kubernetes Grid Multi-Cloud and F5 BIG-IP Series Load Balancer Integration

In this blog, we cover how customers can leverage the integration between Tanzu Kubernetes Grid and F5 BIG-IP Container Ingress Services (CIS).

Introduction

F5 BIG-IP is a widely deployed system for application delivery, security, monitoring, and load balancing. Based on customer demand to integrate F5 BIG-IP with Tanzu Kubernetes Grid, the VMware and F5 teams worked together to validate the F5 BIG-IP Container Ingress Services (CIS) and Tanzu Kubernetes Grid integration and verify common use cases.

F5 BIG-IP Container Ingress Services (CIS) for Tanzu is certified Partner Ready for VMware Tanzu. VMware Tanzu Kubernetes Grid supports replacing or augmenting the built-in ingress solution with a third-party option. This makes it possible for vendors such as F5 to create a seamless integration experience with VMware Tanzu Kubernetes Grid.

The F5 BIG-IP Local Traffic Manager (LTM) series load balancers are programmable, cloud-ready appliances with Layer 4 and Layer 7 throughput. Due to their widespread adoption, IT departments can leverage existing appliances, reusing product licenses and efficient throughput for the Tanzu Kubernetes platform.

VMware Tanzu Kubernetes Grid Multi-Cloud integration with F5 BIG-IP CIS enables the L4 and L7 features from F5 on the Tanzu Kubernetes Grid platform and the applications running on it. F5 BIG-IP CIS lets you manage your F5 BIG-IP devices from Kubernetes using either environment’s native CLI or API. For more information, see Overview of F5 BIG-IP Container Ingress Services.

Configuration:

  1. Deploy F5 BIG-IP VE on vSphere and configure the Application Services 3 (AS3) package, BIG-IP partition, and VIP networking.
  2. Deploy the Tanzu Kubernetes cluster with Antrea CNI and Kube-VIP as the control plane endpoint provider. Also, enable the Antrea NodePortLocal feature on the workload cluster by adding the parameter ANTREA_NODEPORTLOCAL: "true" to the cluster configuration file.
  3. Create a Kubernetes secret with the BIG-IP credentials so that CIS can communicate with the BIG-IP VM (see Table 1).
  4. Deploy the F5 custom resource definitions and cluster role required for the CIS controller by applying the RBAC:
    https://raw.githubusercontent.com/F5Networks/k8s-bigip-ctlr/master/docs/config_examples/customResourceDefinitions/customresourcedefinitions.yml
    https://raw.githubusercontent.com/F5Networks/k8s-bigip-ctlr/master/docs/config_examples/rbac/clusterrole.yaml
  5. Install the F5 CIS controller on the Tanzu Kubernetes cluster. You can deploy the CIS controller in Nodeport or Nodeportlocal mode, with custom-resource-mode set to true or false depending on the L4 or L7 requirement (see Table 2).
# kubectl create secret generic bigip-login -n kube-system --from-literal=username=admin --from-literal=password='VMware123!'

# kubectl create serviceaccount bigip-ctlr -n kube-system
Table 1
Table 2
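Step 5 above can be sketched as a Deployment manifest for the CIS controller. This is a minimal example, not the definitive configuration: the BIG-IP management address, partition name, and image tag are placeholders you must replace, and the credentials are pulled from the bigip-login secret created in step 3.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-bigip-ctlr
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: k8s-bigip-ctlr
  template:
    metadata:
      labels:
        app: k8s-bigip-ctlr
    spec:
      serviceAccountName: bigip-ctlr        # service account created earlier
      containers:
        - name: k8s-bigip-ctlr
          image: f5networks/k8s-bigip-ctlr:latest   # pin a tested tag in production
          env:
            - name: BIGIP_USERNAME
              valueFrom:
                secretKeyRef:
                  name: bigip-login          # secret from step 3
                  key: username
            - name: BIGIP_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: bigip-login
                  key: password
          args:
            - --bigip-username=$(BIGIP_USERNAME)
            - --bigip-password=$(BIGIP_PASSWORD)
            - --bigip-url=10.0.0.10            # BIG-IP management address (placeholder)
            - --bigip-partition=tanzu          # BIG-IP partition from step 1 (placeholder)
            - --pool-member-type=nodeportlocal # or "nodeport", per Table 2
            - --custom-resource-mode=true      # set false for classic Ingress support
            - --insecure=true                  # skip BIG-IP cert validation (lab use only)
```

The --pool-member-type and --custom-resource-mode arguments together determine which of the supported CIS combinations listed later in this post the deployment falls into.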

For more information on configuring F5 CIS controller, see F5 documentation.
6. Create an F5 IngressClass by applying the following configuration:
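A minimal IngressClass manifest for this step could look like the following sketch; the controller string f5.com/cntr-ingress-svcs is taken from the F5 CIS examples, and the default-class annotation is optional.

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: f5
  annotations:
    # Optional: make this the default class for Ingress objects with no class set
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: f5.com/cntr-ingress-svcs
```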

7. Once you install the F5 CIS controller and the F5 IngressClass, you can proceed with deploying a sample ingress application. Be sure to add the annotation nodeportlocal.antrea.io/enabled: "true" to the service so that its pods are selected for NodePortLocal.
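As a hedged sketch of step 7, the manifests below pair a ClusterIP service carrying the NodePortLocal annotation with an Ingress that references the f5 IngressClass. The application name nginx and the hostname are placeholders for illustration.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx                  # sample backend (placeholder name)
  annotations:
    # Tells Antrea to enable NodePortLocal for the pods this service selects
    nodeportlocal.antrea.io/enabled: "true"
spec:
  type: ClusterIP              # Nodeportlocal mode expects ClusterIP backends
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  ingressClassName: f5         # IngressClass created in step 6
  rules:
    - host: nginx.example.com  # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx
                port:
                  number: 80
```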

8. If you would like to use F5 IPAM to assign IP addresses to LoadBalancer/VirtualServer resources, configure the required RBAC and install the IPAM controller. For more information, see Overview of F5 IPAM Controller.
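Once the IPAM controller is running, a service of type LoadBalancer can request an address by label. The sketch below assumes the cis.f5.com/ipamLabel annotation used in the F5 CIS examples; the label value Prod must match an ip-range label defined in your IPAM controller configuration, and the service name is a placeholder.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: tcp-app                     # placeholder name
  annotations:
    # Must match an ip-range label configured in the F5 IPAM controller
    cis.f5.com/ipamLabel: Prod
spec:
  type: LoadBalancer
  selector:
    app: tcp-app
  ports:
    - port: 8080
      targetPort: 8080
```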

9. F5 supports the custom resource definitions VirtualServer (L7) and LoadBalancer (L4) for load balancing K8s applications. For more information on configuring VirtualServer or LoadBalancer resources, see the F5 GitHub documentation.
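For illustration, a minimal VirtualServer custom resource might look like the following; the hostname, VIP address, and backing service name are placeholders, and the schema shown (host, virtualServerAddress, pools) follows the F5 CIS examples.

```yaml
apiVersion: cis.f5.com/v1
kind: VirtualServer
metadata:
  name: sample-vs
  namespace: default
spec:
  host: app.example.com             # placeholder hostname
  virtualServerAddress: 10.0.1.50   # VIP (placeholder); with IPAM, omit this and set an ipamLabel instead
  pools:
    - path: /
      service: nginx                # backing ClusterIP service (placeholder)
      servicePort: 80
```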

Supported CIS combinations:

  • F5 CIS controller with Custom Resource Mode (CRD):
    • CRD=TRUE supports LoadBalancer (L4), VirtualServer (L7), IPAM, and some other F5 CRDs. For more information, see F5 GitHub.
    • CRD=FALSE supports Ingress.
  • F5 CIS controller deployment Pool-member-type:
    • Nodeport mode: a CIS controller deployed in Nodeport mode with CRD mode TRUE supports the K8s native service type LoadBalancer (L4) and the F5 CRD VirtualServer (L7).
    • Nodeportlocal mode: CIS deployed in Nodeportlocal mode with CRD mode TRUE supports only the F5 CRD VirtualServer (L7), and
      CIS deployed in Nodeportlocal mode with CRD mode FALSE supports only K8s Ingress.

Note: If you would like to run both L4 and Ingress together in a single K8s cluster, you need to deploy two CIS instances, one in Nodeport mode and another in Nodeportlocal mode. You can run multiple instances of CIS in the same K8s cluster by adding the parameter --share-nodes=true to the CIS configuration, but the instances must use different BIG-IP partitions and different VIP networks.

Design Considerations:

  • Because F5 IPAM requires CRD mode to be TRUE, it supports only LoadBalancer (L4) and VirtualServer (L7) resources. IP addresses for K8s Ingress objects must be assigned and maintained manually.
  • F5 CIS supports IP assignment for LoadBalancer (L4) services only through IPAM; manual IP assignment is not supported.
  • Nodeportlocal mode requires the backend K8s application to be deployed as service type “ClusterIP”, whereas CIS in Nodeport mode requires service type “NodePort”.

SaaS Integration: 

VMware Tanzu SaaS components for Tanzu for Kubernetes Operations provide additional Kubernetes lifecycle management features through Tanzu Mission Control (TMC) and observability features through Tanzu Observability (TO). For more information on SaaS Integration, see Configure Tanzu SaaS Components for Tanzu for Kubernetes Operations.

Once you deploy the Tanzu Kubernetes Grid management cluster, you can register it with VMware Tanzu Mission Control to enable lifecycle management of its workload clusters. You can also integrate Tanzu Kubernetes Grid clusters with Tanzu Observability using Tanzu Mission Control integrations.

  1. Log in to Tanzu Mission Control from VMware Cloud Services and register the TKG management cluster by navigating to Administration > Management clusters > Register Management Cluster.
  2. Create a workload cluster by navigating to Clusters > Create Cluster, select the management cluster in which to create this workload cluster, and then click Continue to Create Cluster. Provide the required cluster configuration options and click Create Cluster. For more information, see Provision a Workload Cluster in vSphere from TMC.
  3. Connect to the workload cluster, configure the F5 CIS controller and IPAM on it, and provision L4 and L7 services.
  4. Set up Tanzu Observability to monitor Tanzu Kubernetes Grid clusters from Tanzu Mission Control by navigating to the workload cluster’s Integrations tab.

Reference: