Last week was an eventful one for cloud security – a very high-profile vulnerability was discovered in the Open Management Infrastructure (OMI) extension that affected a significant percentage of Azure Linux VMs.
This is probably the reason why another interesting vulnerability disclosed last week received very little attention – CVE-2021-25741 – a kubelet vulnerability that allows access to the host filesystem by abusing symlinks.
While this vulnerability requires much more specific circumstances to exploit than the OMI one, it has a Common Vulnerability Scoring System (CVSS) rating of 8.8 (considered a “high” level of severity on a scale of 0.0 to 10.0) and provides a convenient way for attackers to escalate privileges after gaining some initial access in a Kubernetes cluster. One would be forgiven for feeling déjà vu, as there is a three-year-old blog post documenting a vulnerability that is essentially the same.
Quoting the GitHub issue, which is as close to an official security advisory as Kubernetes can get: “Environments where cluster administrators have restricted the ability to create hostPath mounts are the most seriously affected. Exploitation allows hostPath-like access without use of the hostPath feature, thus bypassing the restriction. In a default Kubernetes environment, exploitation could be used to obscure misuse of already-granted privileges.”
That doesn’t really sound that scary, until we go and read the official documentation about hostPath: “HostPath volumes present many security risks, and it’s a best practice to avoid the use of HostPaths when possible.” Indeed, there are plenty of attack tactics that can be applied with access to the host filesystem, such as abusing the /proc filesystem, finding improperly managed setuid binaries, reading secrets stored in plain-text files, and so on.
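As a harmless illustration of the last two tactics, here is a sketch that stages a scratch directory and runs the same sweeps an attacker would point at a node’s real root filesystem (the file names and contents are made up for the demo):

```shell
#!/bin/sh
# Illustration only: the sweeps an attacker with host filesystem access might
# run, pointed at a scratch directory instead of the node's actual root.
root=$(mktemp -d)
cp /bin/sh "$root/suid-sh" && chmod 4755 "$root/suid-sh"   # stand-in for a forgotten setuid binary
printf 'db_password=hunter2\n' > "$root/creds.txt"          # stand-in for a secret left in a text file

find "$root" -perm -4000 -type f        # locate setuid binaries
grep -l 'password' "$root"/*.txt        # locate text files that mention credentials

rm -rf "$root"
```

Against a real node the attacker would simply substitute `/` for the scratch directory, which is exactly why the documentation warns against hostPath-style access.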
OK, it sounds like this is a big deal after all – let’s see how we can detect and remediate it across any clusters we manage.
At first glance, detecting the vulnerability seems trivial – there is a complete list of all non-vulnerable versions on the GitHub page. For now, the fix is present only in these versions:
So if our cluster runs any of those versions, we’re good, right?
Actually, not at all. First of all, no major cloud provider supports any of those versions right now. They’re all less than a week old at this point – it’s reasonable to expect some delay in adopting the new versions in order to test and validate the changes.
Second, the vulnerability is present in the kubelet, the Kubernetes component that runs on each node in the cluster. This might seem like a small detail, but under the Kubernetes Version Skew Policy, nodes in the cluster are allowed to run kubelet versions up to two minor versions older than the kube-apiserver version (or the control plane version, if your cluster is managed by a cloud provider’s Kubernetes SaaS offering). There are really good reasons for this policy – it enables live upgrades, high-availability clusters, and more – but it means that we cannot detect this entirely from the cloud provider’s infrastructure, as that gives us the control plane API version rather than the node kubelet versions. Another solution that will not work is running `kubectl version --short`: this again reports the control plane API version, along with our local kubectl version.
Instead, we need to run `kubectl get nodes` on each cluster and ensure the versions are not vulnerable to this CVE… sounds like a lot of work.
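Since “not vulnerable” ultimately means “at or above the patched release for that minor line,” a small helper around `sort -V` can do the comparison once `kubectl get nodes` has reported each node’s kubelet version. This is a sketch: the patched version is passed in explicitly (the values below are hypothetical), because the authoritative list lives in the GitHub issue:

```shell
#!/bin/sh
# Sketch: decide whether a node's kubelet version predates the patched release
# for its minor line. Take the real patched versions from the CVE's GitHub
# issue; the values below are hypothetical placeholders.
is_older_than() {
  # succeeds when $1 sorts strictly before $2 under version ordering
  [ "$1" != "$2" ] &&
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

node_version="v1.20.9"    # as reported by `kubectl get nodes`
patched="v1.20.11"        # hypothetical patched release for the v1.20 line
if is_older_than "$node_version" "$patched"; then
  echo "$node_version predates $patched: check this node"
fi
```

Note that cloud providers append their own build suffixes (for example `-gke.NNN`), which `sort -V` orders only approximately, so compare against your provider’s exact patched build strings where possible.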
Enter CloudHealth Secure State Kubernetes Beta collector
The CloudHealth Secure State engineering team has been hard at work to bring first-class Kubernetes support to our product and, as of today, our Kubernetes collector is in customer-only beta. This means that if you are a CloudHealth Secure State customer, all you need to do is reach out to your TAM to get support enabled.
Currently, all major cloud provider managed services including Amazon Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE) clusters are supported, and we’re working hard on extending that support to on-premises and self-managed clusters as well. In order to onboard your cluster to Secure State, you need to follow the simple steps outlined in the Kubernetes Getting Started guide.
We’ve prepared an SSQL query for that purpose (for simplicity’s sake, this query covers only GCP/GKE):
Kubernetes.Cluster.Node HAS NodeInfoKubeletVersion != v1.21.3-gke.2001 AND NodeInfoKubeletVersion != v1.20.9-gke.701 AND NodeInfoKubeletVersion != v1.20.9-gke.1001 AND NodeInfoKubeletVersion != v1.20.8-gke.2101 AND NodeInfoKubeletVersion != v1.19.14-gke.301 AND NodeInfoKubeletVersion != v1.19.13-gke.701 AND NodeInfoKubeletVersion != v1.19.12-gke.2101 AND NodeInfoKubeletVersion != v1.18.20-gke.4501 AND NodeInfoKubeletVersion != v1.18.20-gke.3001
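If you want a quick local sanity check before (or after) running the query, a rough shell equivalent can filter `kubectl get nodes` output against the same GKE build list. This is a sketch – the pattern is unanchored, so anchor it before relying on it, and the sample input below stands in for real cluster output:

```shell
#!/bin/sh
# Sketch: keep only node/kubelet-version pairs that are NOT on a patched GKE
# build, mirroring the SSQL query above. Input shape matches
#   kubectl get nodes -o custom-columns=NODE:.metadata.name,KUBELET:.status.nodeInfo.kubeletVersion
flag_unpatched() {
  grep -Ev 'v1\.21\.3-gke\.2001|v1\.20\.9-gke\.(701|1001)|v1\.20\.8-gke\.2101|v1\.19\.14-gke\.301|v1\.19\.13-gke\.701|v1\.19\.12-gke\.2101|v1\.18\.20-gke\.(4501|3001)'
}

# sample input: the second node is on an unpatched build and should be flagged
printf '%s\n' \
  'node-a  v1.21.3-gke.2001' \
  'node-b  v1.20.8-gke.900' | flag_unpatched
```

Anything the filter prints deserves a closer look; anything it swallows matches one of the patched builds from the query.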
In order to remediate this vulnerability, you’ll need to update the nodes of your clusters to a kubelet version that contains the security fix. In practice, this means updating the Linux distribution (usually one of AKS Ubuntu, Container-Optimized OS, or EKS Distro) of the nodes to one that contains the fix. You can consult your cloud provider’s documentation for specific instructions.
Don’t forget to validate the successful upgrades using the queries – upgrades are a bit like backups in that you can’t be certain they worked unless you’ve tested them.
It’s clear that Kubernetes, as a cornerstone of modern deployments and operations, is here to stay. And even though it provides an immensely powerful set of abstractions and tools to manage various use cases, it’s also clear that such a complex piece of software cannot be bug-free (or even vulnerability-free). As such, it isn’t surprising to learn that Kubernetes security is one of the ‘hot topics’ – and both defenders and attackers are taking note.
For now, there haven’t been that many serious Kubernetes vulnerabilities, but it’s evident that this area is under intense scrutiny by virtually everyone in the cybersecurity space. We at CloudHealth Secure State remain committed to providing the toolkit for securing the next generation of cloud workloads and, as of today, that toolkit puts Kubernetes front and center, bridging and even erasing differences between cloud vendors.
Thus, we’ll continue to monitor the Kubernetes security landscape and work hard to help our customers improve their cloud security posture. In the meantime, if Kubernetes security is getting in the way of your good night’s sleep, check out our Kubernetes-specific rules (accessible for customers only) and hopefully we can alleviate some of that (regrettably justified) stress.
And if you’re looking for even more information, please don’t hesitate to get in touch with us directly. Our team would be happy to answer any questions you may have and walk you through the capabilities of the CloudHealth Secure State platform.