In Part 1 of this blog series, we looked at the challenges of running Kubernetes and how VMware Enterprise PKS can help alleviate them. In this part, we look at deploying the solution with the MongoDB Enterprise Operator on VMware Enterprise PKS and validating it.
The virtual infrastructure used to build the solution is shown below:
Table 1: Hardware components of the solution
The VMware SDDC and other software components used in the solution are shown below:
Table 2: Software components of the solution
PKS provides the framework to create Kubernetes clusters that work seamlessly with the VMware SDDC components. A logical schematic of the Kubernetes cluster and the MongoDB Enterprise components is shown below:
Figure 5: Source: Getting Started with MongoDB Enterprise Operator for Kubernetes
The deployment of the solution involved individually deploying the different components and integrating them. The critical steps unique to this solution's deployment are shown below.
First, log in to PKS as an admin user.
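A typical login with the PKS CLI looks like the following; the API endpoint, username, and password are placeholders for your environment:

```shell
# Log in to the PKS API endpoint as an admin user
# (endpoint address and credentials are placeholders)
pks login -a api.pks.example.local -u admin -p 'your-password' -k
```

The -k flag skips SSL certificate validation; in production, pass the CA certificate with --ca-cert instead.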
Then create the Kubernetes cluster using the create-cluster command. Cluster creation takes a few minutes; once it completes, the cluster status shows "succeeded".
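A sketch of the cluster creation and status check; the cluster name, external hostname, and plan name are assumptions for this environment:

```shell
# Create a Kubernetes cluster; the plan determines node sizing
pks create-cluster mongodb-cluster \
    --external-hostname mongodb-cluster.example.local \
    --plan small

# Re-run until "Last Action State" shows "succeeded"
pks cluster mongodb-cluster
```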
Once the cluster is created, it can be operated on and queried with kubectl like any other Kubernetes cluster. Now we are ready to deploy a MongoDB replica set with persistent storage using the MongoDB Enterprise Operator for Kubernetes.
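For example, fetching the cluster credentials and listing its nodes (cluster name as assumed above):

```shell
# Merge the cluster's credentials into ~/.kube/config
pks get-credentials mongodb-cluster

# Query the cluster like any other Kubernetes cluster
kubectl get nodes
```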
The MongoDB Enterprise Kubernetes Operator documentation was followed to install the operator.
There are two things to watch out for during installation of the operator.
- During creation of the ConfigMap, let the Kubernetes Operator create the Ops Manager project. The Operator adds additional internal information to projects that it creates. Omit the orgId so that Ops Manager creates an Organization named after projectName that contains a Project of the same name.
- Before creating the operator, you need to allow vSphere to manage the security context for the Kubernetes Operator.
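The ConfigMap from the first point might look like the following sketch; the names, namespace, and Ops Manager URL are placeholders, and orgId is deliberately left out:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-project
  namespace: mongodb
data:
  # orgId is omitted so Ops Manager creates an Organization and
  # a Project, both named after projectName
  projectName: mongodb-project
  baseUrl: https://opsmanager.example.local:8443
```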
After this, proceed with the installation of the operator as specified in the documentation.
You can see the status of MongoDB Operator pod using standard kubectl commands:
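For example, assuming the operator was installed into the mongodb namespace, as in the operator documentation:

```shell
# List the operator pod and check that it is Running
kubectl get pods -n mongodb

# Inspect events if the pod is not coming up
kubectl describe pods -n mongodb
```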
Since we want the data to be persistent, we first have to create a storage class and a persistent volume claim (PVC).
Define a storage class by saving the following in a file storage-class-vsphere.yml using your preferred editor:
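A minimal sketch of such a storage class, using the vSphere volume provisioner; the class name and disk format are assumptions:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: mongodb-storage-class
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin   # thin-provisioned VMDKs on the vSphere datastore
```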
Now define a PVC by saving the following in a file persistent-volume-claim.yml using your preferred editor:
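A matching PVC sketch, sized to the 30 GB consumed later by the replica set; the object names are assumptions:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-pvc
spec:
  storageClassName: mongodb-storage-class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
```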
Now deploy these objects to the cluster using kubectl:
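Assuming the file names above:

```shell
kubectl apply -f storage-class-vsphere.yml
kubectl apply -f persistent-volume-claim.yml

# Verify that both objects exist
kubectl get storageclass
kubectl get pvc
```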
This storage will be used to persist MongoDB data across pod restarts.
Now prepare the definition for the replica set. Save the following in a file mperm.yaml using your preferred editor:
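A sketch of the replica set definition for the Enterprise Operator; the resource name, namespace, MongoDB version, credentials secret, and per-volume sizes are assumptions, chosen so that each member consumes 10 GB in total:

```yaml
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: mperm
  namespace: mongodb
spec:
  type: ReplicaSet
  members: 3
  version: 4.0.9
  opsManager:
    configMapRef:
      name: mongodb-project         # ConfigMap created earlier
  credentials: mongodb-credentials  # Secret holding the Ops Manager API keys
  persistent: true
  podSpec:
    persistence:
      multiple:                     # separate volumes per member
        data:
          storage: 8Gi
          storageClass: mongodb-storage-class
        journal:
          storage: 1Gi
          storageClass: mongodb-storage-class
        logs:
          storage: 1Gi
          storageClass: mongodb-storage-class
```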
This replica set has three members: one serves as the primary MongoDB server and the other two as secondaries. The definition also configures storage for the member servers, with each member getting separate volumes for data, journal, and log files. Note that each member consumes 10 GB of storage, so the entire 30 GB created earlier is used by this deployment.
Deploy this replica set.
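For example:

```shell
kubectl apply -f mperm.yaml
```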
Check the status of the replica set pods using regular kubectl commands:
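For example, with the resource and namespace names assumed above:

```shell
# The operator creates one pod per replica set member
kubectl get pods -n mongodb

# The custom resource itself also reports a phase
kubectl get mdb mperm -n mongodb
```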
You can now view the status of the replica set from MongoDB Ops Manager. The orgId and project name are highlighted at the top left, and the three members and their status (primary or secondary) are shown.
The MongoDB instances deployed with the MongoDB Enterprise Operator for Kubernetes can be viewed and monitored in MongoDB Ops Manager, as shown in the picture:
Figure 6: View of the MongoDB replicas from Ops Manager
In order to connect to this MongoDB replica set, you need to use a connection string that is specially crafted to make sure that it always connects to the primary MongoDB server:
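With the names assumed above, the string follows the operator's podname.servicename.namespace.svc.cluster.local naming pattern and lists all three members:

```
mongodb://mperm-0.mperm-svc.mongodb.svc.cluster.local:27017,mperm-1.mperm-svc.mongodb.svc.cluster.local:27017,mperm-2.mperm-svc.mongodb.svc.cluster.local:27017/?replicaSet=mperm
```

Listing every member together with the replicaSet option lets the driver discover and follow the current primary.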
Note that the node names are valid only inside the Kubernetes cluster, and the string refers to the default port (27017). This connection string is therefore usable only from pods running within the same cluster.
A sample application called MERN-CRUD, which performs CRUD operations on a simple table with minimal user information, is used to showcase the failover capability of the MongoDB replica set. The app uses the replica set as its backend database and runs inside the same cluster.
Figure 7: Console of the MERN CRUD Application
Figure 8: The Kubernetes console is used to delete a MongoDB replica pod
MERN CRUD has a single-page UI from which one can create, read, update, or delete users. The data is stored in the MongoDB replica set created in the earlier step. While the application is running, delete the pod that runs the primary from the Kubernetes dashboard:
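The equivalent of the dashboard action from the command line, assuming mperm-0 currently holds the primary role in this example:

```shell
# Delete the pod backing the current primary
kubectl delete pod mperm-0 -n mongodb

# Watch the StatefulSet controller recreate it
kubectl get pods -n mongodb -w
```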
While the deleted pod is being restarted, one of the secondary servers assumes the role of primary MongoDB server, as seen from MongoDB Ops Manager.
Figure 9: MongoDB replicas after failure recovery
This change of primary server is seamless to the app, which continues with its operations.
We have shown that VMware Enterprise PKS and the MongoDB Enterprise components can be successfully integrated to build a robust, production-ready enterprise MongoDB solution on Kubernetes. A two-tier application was deployed with a MongoDB back-end database, and seamless management and resiliency of the database infrastructure were demonstrated. Combining the best-in-class infrastructure of the VMware Enterprise PKS platform with the enterprise-class management capabilities of the MongoDB Enterprise platform produces a robust and compelling solution. A video demo of this solution is available.