In previous blog posts, we’ve talked about the process of setting up vSphere with Tanzu (see our quick start guide) and creating your first Tanzu Kubernetes Cluster (TKC). As a vSphere Administrator, you might be saying to yourself, “This is cool and all, but what’s next? What’s an easy application to deploy?” The easiest target is the standard NGINX Kubernetes deployment, but that’s very basic. Today we expect an app store experience, one that gives us the ability to simply install (and manage) an application in a nice UI. That’s what Kubeapps is all about.
Kubeapps bills itself as “Your Application Dashboard for Kubernetes.” Through an intuitive UI, you can deploy 80 different applications packaged as Helm charts or Operators, as well as enable secure, role-based access control. All of the complicated nuances of deployment and management are abstracted away behind a few clicks.
If this is your first time using Kubernetes, Kubeapps is a great way to see what’s possible. So let’s take a look at the deployment using vSphere with Tanzu.
One requirement of Kubeapps is a default storage class. A Tanzu Kubernetes Cluster does not set one out of the box, but this can be accomplished in the cluster specification using spec.settings.storage.defaultClass, as seen in the documentation:
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-kubeapps
  namespace: tanzu01
spec:
  distribution:
    version: v1.18
  topology:
    controlPlane:
      count: 1
      class: best-effort-xsmall
      storageClass: k8s-storage
    workers:
      count: 2
      class: best-effort-xsmall
      storageClass: k8s-storage
  settings:
    storage:
      defaultClass: k8s-storage
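Assuming the spec above is saved as tkg-kubeapps.yaml (the filename is up to you) and your kubectl context is pointed at the Supervisor namespace, applying it is a single command:

kubectl apply -f tkg-kubeapps.yaml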
After the cluster is deployed (or this setting is applied to an existing TKC), we can see the storage class set as default with kubectl get sc:
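The output will look something like the following (the class name and details are illustrative for a vSphere CSI setup; the “(default)” marker is what we’re after):

NAME                    PROVISIONER              RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
k8s-storage (default)   csi.vsphere.vmware.com   Delete          Immediate           true                   10m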
A TKC has pod security policies enabled by default, so a ClusterRoleBinding needs to be applied that allows the current user to create pods. A simple workaround is to bind everyone to the built-in privileged pod security policy, which will allow us to continue with the deployment.
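A minimal sketch of that binding, using the psp:vmware-system-privileged cluster role documented for vSphere with Tanzu (the binding name itself is arbitrary), looks like this. It is fine for a lab, but far too permissive for production:

kubectl create clusterrolebinding default-tkg-admin-privileged-binding \
  --clusterrole=psp:vmware-system-privileged \
  --group=system:authenticated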
Kubeapps is deployed using Helm. If you haven’t done so yet, make sure you have the Helm client installed on your machine. Next we will need to add the Bitnami repo, which is where the Kubeapps Helm Chart resides:
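Assuming the standard Bitnami chart repository URL, adding the repo and refreshing your local chart index looks like this:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update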
If you belong to a large organization where many container images are being pulled from Docker Hub, you may run into Docker Hub’s pull rate limiting. If you’ve exceeded the anonymous limit of 100 container image pulls per six hours, you will be met with an error message. To help mitigate this, we will create a Kubernetes secret from your Docker Hub credentials, which bumps you to the authenticated limit of 200 pulls every six hours. Replace the variables with your credentials:
kubectl create secret docker-registry $SECRET \
  --docker-username=$DHUN \
  --docker-password=$PW \
  --docker-email=$EMAIL
After the secret has been created, we need to patch the default service account so it uses those credentials to install Kubeapps:
kubectl patch serviceaccount default -p "{\"imagePullSecrets\": [{\"name\": \"$SECRET\"}]}"
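To confirm the patch took effect, you can inspect the service account; the secret listed should match whatever name you used for $SECRET:

kubectl get serviceaccount default -o jsonpath='{.imagePullSecrets}'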
Alright, now we’re almost ready for installation. To keep things clean, we will install Kubeapps into its own namespace:
kubectl create ns kubeapps
We also have to give Kubeapps access to your Docker Hub credentials so it can pull down images and deploy applications on its own. All we need to do is create another secret using the same Docker Hub credentials in the kubeapps namespace. To keep things simple, I even kept the same $SECRET name:
kubectl create secret docker-registry $SECRET \
  --docker-username=$DHUN \
  --docker-password=$PW \
  --docker-email=$EMAIL \
  --namespace=kubeapps
NOTE: If you need to troubleshoot, examine the decoded contents of your Kubernetes secret; remember that special characters in your password need to be escaped:
kubectl get secret $SECRET -o go-template='{{range $k,$v := .data}}{{printf "%s: " $k}}{{if not $v}}{{$v}}{{else}}{{$v | base64decode}}{{end}}{{"\n"}}{{end}}'
Now let's install Kubeapps using Helm! For Tanzu Kubernetes Clusters, we are going to set two flags:
- --set frontend.service.type=LoadBalancer is used to automatically get an IP from our HAProxy virtual appliance so we can access Kubeapps from outside the cluster.
- --set global.imagePullSecrets={$SECRET} is used to apply our Docker Hub credentials. The curly braces are necessary because the value is passed as an array.
helm install kubeapps --namespace kubeapps bitnami/kubeapps --set frontend.service.type=LoadBalancer --set global.imagePullSecrets={$SECRET}
After a few minutes, the pods will be up and you can get the IP address to access Kubeapps using:
kubectl get svc -n kubeapps
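Look for the frontend service of type LoadBalancer; the EXTERNAL-IP column is the address handed out by HAProxy. The output below is purely illustrative:

NAME                          TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
kubeapps                      LoadBalancer   10.96.120.14    192.168.10.34   80:31642/TCP   4m
kubeapps-internal-dashboard   ClusterIP      10.96.140.22    <none>          8080/TCP       4m
...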
Once you navigate to the IP address, you will have to get a token to access the page.
We recommend following the Kubeapps documentation on securing access control. As a way to quickly get started, create a serviceaccount and clusterrolebinding:
kubectl create serviceaccount kubeapps-operator
kubectl create clusterrolebinding kubeapps-operator --clusterrole=cluster-admin --serviceaccount=default:kubeapps-operator
Get the secret token for this service account using the command below, then copy/paste the token into the Kubeapps page and log in:
kubectl get secret $(kubectl get serviceaccount kubeapps-operator -o jsonpath='{range .secrets[*]}{.name}{"\n"}{end}' | grep kubeapps-operator-token) -o jsonpath='{.data.token}' -o go-template='{{.data.token | base64decode}}' && echo
Now you can browse the catalog to see all the applications available!
To put this to the test, let’s check out WordPress. In the top-right corner, click “Deploy,” which will take you to a simple form. Edit the application name, username, password, and email combo, then scroll to the bottom and click “Deploy.” After a few minutes, the pods should be up and another IP address from our HAProxy load balancer will have been used.
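If you'd rather grab that address from the command line than from the Kubeapps UI, you can look up the WordPress service. This assumes the release was named wordpress and deployed to the default namespace:

kubectl get svc wordpress -n default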
Navigate to the IP address and as you’ll see, WordPress is now up and running!
But don’t just stop there! There are 80 applications available in Kubeapps. Take this opportunity to deploy some of them and then dig around inside your cluster to see how they are connected:
- kubectl get namespaces
- kubectl get deployments -n kubeapps
- kubectl get pods -n kubeapps
- kubectl describe pod <pod name> -n kubeapps
- kubectl get pvc -A
To see it all in action, watch this video: