With the release of vRealize Log Insight 8.4, we added Log Sources, which provides in-product documentation on how to configure container log sources to send logs to Log Insight. Some people have reached out with questions about configuring logging for Tanzu Kubernetes Grid (TKG) clusters, so I wanted to write a quick blog to run through the steps and expected results. There are already a few blogs out there on this topic, but in this post I will focus on the setup for the on-prem version of Log Insight.
Installing the CLI Tools
First, you will need to install the CLI tools on your Linux or Windows workstation. The download is available from the clusters view in the vCenter where you have TKG deployed (Menu > Workload Management > Summary tab).
Follow the instructions to install the CLI tools on the appropriate OS.
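As a rough sketch (assuming a Linux workstation and that 10.0.0.54 is your Supervisor Cluster endpoint, as in the login command below), the download and install looks something like this:
wget https://10.0.0.54/wcp/plugin/linux-amd64/vsphere-plugin.zip --no-check-certificate
unzip vsphere-plugin.zip -d vsphere-plugin
sudo cp vsphere-plugin/bin/kubectl vsphere-plugin/bin/kubectl-vsphere /usr/local/bin/
The exact download URL and archive layout can vary between vSphere versions, so follow the in-product instructions if they differ.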
Connect to the Cluster
Since I’m working in a lab environment without certificates, I’m adding a flag to bypass TLS verification.
kubectl-vsphere login --insecure-skip-tls-verify --server https://10.0.0.54 --vsphere-username [email protected] --tanzu-kubernetes-cluster-name fd-cluster-cluster --tanzu-kubernetes-cluster-namespace field-demo-clusters
You will be prompted for the password. Once you enter the password, you will get a list of contexts that your account has permission to access.
Password:
Logged in successfully.
You have access to the following contexts:
10.176.192.50
10.176.192.52
10.176.193.1
auctionw1
fd-cluster-dev
field-demo-clusters
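Depending on your setup, you may need to switch kubectl to the workload cluster context before applying anything. For example, using one of the contexts from the output above:
kubectl config get-contexts
kubectl config use-context fd-cluster-dev
Pick whichever context matches the workload cluster you want to configure.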
Navigate to Log Sources Documentation
Log Sources has the information needed to create the configuration files required to forward logs. Log in to your Log Insight deployment and go to Log Sources > Containers > Tanzu Kubernetes Grid. If you are using TKG on vSphere 7.0 or above, you don’t need to install Fluentd; it’s already there. We need to create the fluent.conf file that will become the ConfigMap. Simply copy the text in Step 1 to a new file and name it fluent.conf.
If you are running vSphere 6.7, you can install Fluentd using these instructions.
Update / Create the ConfigMap
Check the following code block in the fluent.conf file and update it accordingly. Since I’m not using SSL, I’m going to change the scheme from https to http, add the FQDN or IP of the Log Insight host, and change the port from 9543 to 9000.
@type vmware_loginsight
scheme http
ssl_verify false
host li-host.domain.com
port 9000
http_method post
serializer json
rate_limit_msec 0
raise_on_error true
include_tag_key true
tag_key tag
http_conn_debug false
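Before applying the ConfigMap, it can be worth confirming that the Log Insight ingestion API is reachable on the host and port you just configured. A minimal check from a machine that can reach Log Insight (a sketch, assuming curl is available and using an arbitrary agent ID in the URL) might look like:
curl -s -o /dev/null -w "%{http_code}\n" -X POST "http://li-host.domain.com:9000/api/v1/events/ingest/0" -H "Content-Type: application/json" -d '{"events":[{"text":"connectivity test"}]}'
A 200 response means the ingestion endpoint is reachable, and the test event should show up in Interactive Analytics.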
Once I’ve updated the required parameters, I can apply the config file to this cluster. Note that I’m creating the ConfigMap as li-fluentd-config rather than loginsight-fluentd-config as shown in the Log Sources documentation, because li-fluentd-config is the name referenced in loginsight-fluent.yml. (You could instead keep the documented name and update the ConfigMap name in loginsight-fluent.yml.) The instructions will be updated in the 8.4.1 release.
kubectl -n kube-system create configmap li-fluentd-config --from-file=fluent.conf
You should see the following output:
configmap/li-fluentd-config created
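If you want to double-check what was stored, dump the ConfigMap back out and confirm your scheme, host, and port changes made it in:
kubectl -n kube-system get configmap li-fluentd-config -o yaml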
Configure Access to Logs
Now I can copy the contents from Step 3 into a file named loginsight-fluent.yml. This creates the service account and grants access to the log data. I don’t need to make any modifications to this file unless I want to change the name of the config map, which is currently set to li-fluentd-config. Next, I run the command below.
kubectl apply -f loginsight-fluent.yml
You should see the following output:
clusterrole.rbac.authorization.k8s.io/fluentd created
clusterrolebinding.rbac.authorization.k8s.io/fluentd created
daemonset.apps/log-collector created
Verify the Log-Collector Container is Created
Run the following:
kubectl get pods --all-namespaces | grep log
You should see that the log-collector containers are running.
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system log-collector-kqlv5 1/1 Running 0 13s
kube-system log-collector-lffwz 1/1 Running 0 13s
kube-system log-collector-xrvv6 1/1 Running 0 13s
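You can also check that the DaemonSet has rolled out one collector per node:
kubectl -n kube-system rollout status daemonset/log-collector
kubectl -n kube-system get daemonset log-collector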
Create the ConfigMap (fluent.conf) and apply the access control (loginsight-fluent.yml) for each cluster that you want to configure log forwarding on.
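If you have several clusters to configure, you can repeat both steps against each context in one pass. A rough sketch, assuming the context names from the login output above and that fluent.conf and loginsight-fluent.yml are in the current directory:
for ctx in fd-cluster-dev auctionw1; do
  kubectl --context "$ctx" -n kube-system create configmap li-fluentd-config --from-file=fluent.conf
  kubectl --context "$ctx" apply -f loginsight-fluent.yml
done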
Verify Log Flow
The fluent.conf file tags the logs so we can easily filter through the data:
environment tanzu_k8s_grid
log_type kubernetes
In Log Insight, we’ll search for log_type starts with kube and then view the logs grouped by namespace.
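On the cluster side, you can also make sure the collectors aren’t reporting connection errors before checking Log Insight (kubectl logs against the DaemonSet only shows one of the collector pods, but it’s a quick sanity check):
kubectl -n kube-system logs daemonset/log-collector --tail=50 | grep -i error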
Troubleshooting
I’ve documented a few things in case you run into issues or need to make configuration changes as you go.
Review or Remove Configuration
kubectl get configmap -n kube-system
kubectl -n kube-system delete configmap li-fluentd-config
kubectl delete -f loginsight-fluent.yml
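If you need to change fluent.conf after the initial setup, a quick way to push the update (a sketch, assuming a reasonably recent kubectl; older versions use --dry-run instead of --dry-run=client) is to regenerate the ConfigMap in place and restart the collectors so they pick up the new file:
kubectl -n kube-system create configmap li-fluentd-config --from-file=fluent.conf --dry-run=client -o yaml | kubectl apply -f -
kubectl -n kube-system rollout restart daemonset/log-collector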
Change or Get Context
kubectl config get-contexts
kubectl config use-context <context name>
View Container Logs
kubectl logs log-collector-7xtkz -n kube-system
View Events
If you see any status other than Running after you run kubectl get pods (e.g. Terminating, CrashLoopBackOff, ErrImagePull), run:
kubectl get events --all-namespaces --sort-by='.metadata.creationTimestamp'
This should give you more information as to why the container could not deploy correctly.
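Describing the pod is another quick way to see why a specific collector isn’t starting, using one of the pod names from the earlier output as an example:
kubectl -n kube-system describe pod log-collector-kqlv5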
That should cover everything you need to know to set up logging for your TKG clusters in a few minutes!