In Part I, we discussed running SQL Server Big Data Clusters (SQL BDC) on a cluster managed by Tanzu Kubernetes Grid (TKGm) on VMware Cloud on AWS. In this blog post, we’ll continue exploring how to run SQL BDC on the Tanzu Kubernetes Grid Service (TKGs), which is managed by Workload Management (WCP) on VMware Cloud on AWS (Tanzu Kubernetes Grid with VMware Cloud on AWS).
How to Activate Tanzu Kubernetes Grid with VMware Cloud on AWS
The activation of Tanzu Kubernetes Grid in VMware Cloud on AWS is a per-cluster workflow. You can find the activation/deactivation option on the SDDC card.
You may refer to the Activate Tanzu Kubernetes Grid in an SDDC Cluster documentation for detailed requirements and deployment steps. Several CIDR blocks for the Tanzu workload control plane need to be defined during this process, as shown in the following picture. The “Validate and Proceed” button verifies whether the CIDR blocks are valid before the Supervisor Cluster is created as the workload control plane.
Click the “Activate Tanzu Kubernetes Grid” button, and a workflow will be initiated in the background. The process may take 10 to 30 minutes to complete.
Create and Configure vSphere namespaces
Before deploying the SQL BDC workloads, you need to create a vSphere namespace and define the resources required to run the container workloads.
1. First, let’s create a namespace called “sqlbdc”.
2. Assign namespace permissions to the “[email protected]” account.
3. Assign a storage policy to the namespace to allow dynamic provisioning of persistent volumes required by SQL BDC. We created a storage policy called “sqlbdc-storage-policy”.
4. Associate the namespace with a content library that holds the OVA images of the Tanzu Kubernetes releases.
5. Add predefined or custom VM classes to the namespace.
6. Finally, the “sqlbdc” namespace should look like this:
Prepare a TKGs jumpbox as a kubectl client to manage the namespace and workloads
Workloads in vSphere namespaces are managed through the kubectl command-line interface. To manage them in VMware Cloud on AWS, we recommend preparing a jumpbox VM as a kubectl client.
We deployed an Ubuntu 20.04 VM as a jumpbox and enabled public Internet access. You may refer to Configure the Bootstrap machine in VMC on AWS in the Part I blog post.
SSH to the jumpbox and download the kubectl vsphere tool.
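If the jumpbox can reach the Supervisor Cluster, a minimal sketch of the download looks like the following. The <supervisor-cluster-ip> placeholder stands for the Supervisor Cluster endpoint shown under “Link to CLI Tools” in the vSphere namespace view; it is not a value taken from this environment.

# Download the vSphere CLI plugin bundle from the Supervisor Cluster endpoint (placeholder address)
curl -k -o vsphere-plugin.zip https://<supervisor-cluster-ip>/wcp/plugin/linux-amd64/vsphere-plugin.zip
unzip vsphere-plugin.zip
# The archive extracts a bin/ directory containing kubectl and the kubectl-vsphere plugin
sudo install bin/kubectl bin/kubectl-vsphere /usr/local/bin/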
Now let’s log in to the “sqlbdc” namespace.
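The login command takes this general shape (a sketch; <supervisor-cluster-ip> and <vsphere-user> are placeholders for your Supervisor Cluster endpoint and vCenter account):

kubectl vsphere login --server <supervisor-cluster-ip> \
  --vsphere-username <vsphere-user> \
  --insecure-skip-tls-verify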
Make sure we are under the “sqlbdc” namespace context:
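For example, list the available contexts and switch to the namespace context:

kubectl config get-contexts
kubectl config use-context sqlbdc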
Deploy a Tanzu Kubernetes Cluster using YAML file
Now we can proceed to deploy a Tanzu Kubernetes cluster as a workload cluster for SQL BDC. Verify the storage class:
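A quick check from the namespace context; the storage class name mirrors the storage policy assigned earlier:

kubectl get storageclass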
And the available Tanzu Kubernetes releases:
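For example:

kubectl get tanzukubernetesreleases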
Create a sqlbdc-tkc.yaml file to provision the Tanzu Kubernetes cluster.
sqlbdc-tkc.yaml
apiVersion: run.tanzu.vmware.com/v1alpha2
kind: TanzuKubernetesCluster
metadata:
  name: sqlbdc-tkc
  namespace: sqlbdc
spec:
  topology:
    controlPlane:
      replicas: 3
      vmClass: guaranteed-medium
      storageClass: sqlbdc-storage-policy
      volumes:
        - name: etcd
          mountPath: /var/lib/etcd
          capacity:
            storage: 40Gi
      tkr:
        reference:
          name: v1.21.6---vmware.1-tkg.1.b3d708a
    nodePools:
      - name: sqlbdc-worker-node
        replicas: 3
        vmClass: guaranteed-2xlarge
        storageClass: sqlbdc-storage-policy
        volumes:
          - name: containerd
            mountPath: /var/lib/containerd
            capacity:
              storage: 1024Gi
        tkr:
          reference:
            name: v1.21.6---vmware.1-tkg.1.b3d708a
  settings:
    storage:
      defaultClass: sqlbdc-storage-policy
    network:
      cni:
        name: antrea
      services:
        cidrBlocks: ["198.53.100.0/16"]
      pods:
        cidrBlocks: ["192.0.5.0/16"]
Create the TKC and check the cluster status using the following commands. The cluster should be up and running within several minutes.
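For example, from the “sqlbdc” namespace context:

kubectl apply -f sqlbdc-tkc.yaml
kubectl get tanzukubernetescluster sqlbdc-tkc -n sqlbdc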
Configure the Tanzu Kubernetes Cluster
To deploy SQL BDC workloads in the Tanzu Kubernetes cluster, first log in to the cluster using the kubectl vsphere command.
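The login targets the workload cluster by passing its name and namespace (a sketch; <supervisor-cluster-ip> and <vsphere-user> are placeholders):

kubectl vsphere login --server <supervisor-cluster-ip> \
  --vsphere-username <vsphere-user> \
  --tanzu-kubernetes-cluster-namespace sqlbdc \
  --tanzu-kubernetes-cluster-name sqlbdc-tkc \
  --insecure-skip-tls-verify
kubectl config use-context sqlbdc-tkc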
Before deploying any workload in the TKC, create role bindings for the pod security policy. For more details, refer to Example Role Bindings for Pod Security Policy.
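A commonly used, permissive example from the VMware documentation grants all authenticated users the built-in privileged pod security policy; tighten this for production use:

kubectl create clusterrolebinding default-tkg-admin-privileged-binding \
  --clusterrole=psp:vmware-system-privileged \
  --group=system:authenticated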
Install the following tools that are required by SQL BDC:
- Azure Data CLI (azdata) – used to create and manage the SQL BDC cluster
- Azure Data Studio – used to generate the deployment profile of the SQL BDC cluster
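After installing both tools on the jumpbox (or your workstation), a quick sanity check confirms the CLI is available:

azdata --version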
The SQL BDC deployment process requires network access to the Tanzu Kubernetes cluster nodes to check the service endpoint status, e.g. the SQL BDC control plane endpoint at port 30080. By default, the Tanzu Kubernetes cluster nodes have firewall rules enabled that block the BDC endpoint port, so the deployment will probably get stuck halfway, as shown below:
Cluster controller endpoint is available at 10.244.xx.xx:30080.
Waiting for control plane to be ready after 5 minutes.
Waiting for control plane to be ready after 10 minutes.
Waiting for control plane to be ready after 15 minutes.
Waiting for control plane to be ready after 20 minutes.
Waiting for control plane to be ready after 25 minutes.
If you log in to the cluster using azdata, it reports the following error message:
$ azdata login
Namespace: mssql-cluster
Username: admin
Password:
Cannot connect with endpoint. Verify that you defined the endpoint correctly by running "azdata login --endpoint <endpoint>" again. Also verify that the endpoint is reachable. (60)
Reason: MaxRetryError HTTPSConnectionPool(host='10.244.xx.xx', port=30080): Max retries exceeded with url: /api/v1/token (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000016B823D0B00>: Failed to establish a new connection: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond',))
Here’s a workaround for a successful deployment of SQL BDC on VMware Cloud on AWS.
1. Add a network adapter to the TKGs jumpbox and connect it to the same port group as the TKC cluster, e.g., “vnet-domain-c55:dff3902e-a4df-4a59-a496-5c93d27b9187-sqlbdc-sqlbdc-tkc-vnet-0”.
2. Get the network subnets of the TKC segment from the NSX Manager.
3. Assign a static IP address that is within the same network range as the TKC nodes.
$ sudo ifconfig ens192 10.244.10.30 netmask 255.255.255.240 broadcast 10.244.10.31 up
4. Follow the steps described in SSH to Tanzu Kubernetes Cluster Nodes as the System User Using a Password. SSH to each of the TKC nodes and add a firewall rule to allow traffic on port 30080.
root@sqlbdc-tkc-sqlbdc-worker-node-l8xwx-6675984b87-hhbzp [ ~ ]# iptables -I INPUT 4 -p tcp --dport 30080 -j ACCEPT
root@sqlbdc-tkc-sqlbdc-worker-node-l8xwx-6675984b87-hhbzp [ ~ ]# iptables-save > /etc/systemd/scripts/ip4save
Deploy SQL Server Big Data Clusters in Tanzu Kubernetes cluster
Now you can deploy SQL BDC in the Tanzu Kubernetes cluster on VMware Cloud on AWS. The BDC cluster is deployed with the Azure Data CLI (azdata), and the deployment profile can be generated in Azure Data Studio.
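As an alternative to Azure Data Studio, a deployment profile can also be initialized from the command line. This is only a sketch, assuming the built-in kubeadm-dev-test source profile and a target folder named “sqlbdc”; review the generated bdc.json and control.json (storage class, endpoint ports, replica counts) before deploying:

azdata bdc config init --source kubeadm-dev-test --target sqlbdc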
azdata bdc create --accept-eula yes --config-profile sqlbdc
Get the running pods of SQL BDC.
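For example, the BDC pods run in the “mssql-cluster” namespace used above:

kubectl get pods -n mssql-cluster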
Get the SQL BDC endpoints as shown in the following picture. Besides the 30080 port we enabled previously to allow a successful deployment of SQL BDC on the VMware Cloud on AWS environment, you also need to enable the rest of the endpoint ports (e.g., 30043 for Spark) on each of the TKC nodes to allow access.
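The endpoints can also be listed from the command line, for example:

azdata bdc endpoint list -o table
kubectl get svc -n mssql-cluster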