Leveraging VMware PKS in VMware Cloud on AWS:

 

Since the announcement of VMware Tanzu—a portfolio of products and services to help enterprises build, run and manage applications on Kubernetes—at VMworld US, our customers have been expressing their excitement. They are eager to drive adoption of Kubernetes and are looking to VMware to simplify the effort. That’s why we want to highlight VMware PKS as an effective way to run cloud native applications in VMware Cloud on AWS right now.

 

Kubernetes Installation:

The setup documentation provided for PKS was used to install the required packages and software for Kubernetes. The Kubernetes and Docker components were deployed successfully on a CentOS 7.6 virtual machine.
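The exact packages and versions come from the PKS setup documentation; as a rough sketch, preparing a CentOS 7.6 node typically looks something like the following (the repository URL and package names here are illustrative assumptions, not taken from the original setup):

```shell
# Add the upstream Kubernetes yum repository (illustrative; the PKS
# documentation specifies the exact repository and versions to use)
cat <<'EOF' > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
EOF

# The kubelet requires swap to be disabled
swapoff -a

# Install Docker and the Kubernetes components, then start the services
yum install -y docker kubelet kubeadm kubectl
systemctl enable --now docker kubelet
```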

 

Deployed Solution:

 

Templates with the required software and components were created on CentOS Linux 7.6 for both the master and worker nodes in the environment. Docker, with the required images, was installed in all the templates. A list of all the Docker images is shown in Appendix B. The solution was designed as a proof of concept with one master node and four worker nodes. A profile of a typical node used to build out the environment is shown below.

Figure 1: Virtual machine specifications used for the PKS master and worker nodes

 

 

A resource pool was created in a VMware Cloud on AWS instance, and the templates for the master (or1kubm01) and worker nodes (or1kubw01 through or1kubw04) were deployed as shown below.

 

 

Figure 2: Resource pool showing PKS master and worker node components

 

Once the virtual machines are deployed, the master server must be configured to deploy the Kubernetes control plane. The configuration file used is shown below.
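The original configuration file is not reproduced here; a minimal kubeadm configuration for a single-master cluster might look like the sketch below. The file name, Kubernetes version, and pod subnet are illustrative assumptions, not the values used in the original environment.

```shell
# Write a hypothetical kubeadm cluster configuration for the master node
# (version and pod CIDR are placeholder values for illustration)
cat <<'EOF' > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.1
networking:
  podSubnet: 10.244.0.0/16
EOF
```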

 

Initializing the master control node

The master node is initialized using the configuration file, and the control plane components are installed as shown below.
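The initialization step is a single kubeadm command run as root on the master; the configuration file name below is an assumed placeholder:

```shell
# Initialize the Kubernetes control plane from the configuration file.
# On success, kubeadm prints a 'kubeadm join ...' command (with a bootstrap
# token and CA certificate hash) for adding worker nodes to the cluster.
kubeadm init --config kubeadm-config.yaml
```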

 

Adding worker nodes to the Kubernetes Cluster

Worker nodes join the cluster by copying the certificate authorities and service account keys to each node and then running the join command as root.
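The join command is the one printed by kubeadm init on the master; the endpoint, token, and hash below are placeholders for the values from that output:

```shell
# Run as root on each worker node (or1kubw01 through or1kubw04).
# <master-ip>, <token>, and <hash> come from the 'kubeadm init' output.
kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```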

 

 

Example Web Server deployment:

 

A simple cloud native web server, NGINX, was then deployed on the Kubernetes cluster. The application components and deployment aspects are defined as code in YAML. The YAML file used for the deployment is shown in Appendix A.

 

kubectl is the command-line tool used to create and manage pods in Kubernetes. The NGINX server is created using the kubectl command shown below.
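Applying a manifest and checking the resulting pods takes two commands; the manifest file name here is an assumed placeholder for the file in Appendix A:

```shell
# Create the NGINX deployment from the YAML manifest
kubectl apply -f nginx-deployment.yaml

# Confirm the pods are running and see which worker nodes host them
kubectl get pods -o wide
```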

 

NGINX listens on network port 80, which is mapped to port 30500 on the node. We then access port 30500 on node or1kubw02 (192.168.7.23) via a web browser to verify the application is up. As shown below, NGINX has been successfully deployed and is running.

Figure 3: Accessing the NGINX application running on the PKS cluster
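The same check can be scripted instead of using a browser, for example with curl against the NodePort (node IP taken from the environment described above):

```shell
# Request only the response headers; NGINX returns "HTTP/1.1 200 OK"
# and a "Server: nginx" header when the deployment is healthy
curl -I http://192.168.7.23:30500/
```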

 

Conclusion:

In this solution, we leveraged VMware Cloud on AWS infrastructure to deploy VMware PKS. The solution provides a fully featured Kubernetes environment that can be used for hosting cloud native applications. An example application was deployed and validated in the Kubernetes environment, and VMware customers can leverage VMware PKS in their cloud environments to deploy their Kubernetes-based applications.

 

Appendix A: Example NGINX deployment.yaml
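The original manifest is not reproduced here; a minimal illustrative equivalent is sketched below. The resource names, image tag, and replica count are assumptions, while the NodePort value 30500 matches the port mapping described in the article.

```shell
# Write a hypothetical NGINX Deployment plus a NodePort Service exposing
# container port 80 on node port 30500
cat <<'EOF' > nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30500
EOF
```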

 

Appendix B: Docker Images used in the solution
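The original image list is not reproduced here; on a node built as described above, the equivalent inventory can be generated with the commands below (kubeadm reports the control plane images its version pulls, and Docker lists what is present locally):

```shell
# List the control plane images required by the installed kubeadm version
kubeadm config images list

# List the Docker images present on the node
docker images
```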