All software has bugs. Even Kubernetes. What really matters is how quickly you can eliminate them once they’re discovered, especially when they expose a security vulnerability.
A recent Kubernetes CVE offers a useful opportunity to benchmark how well your organization handles rapid patching. Let’s take a look at the timeline for the issue, and how Pivotal and our Pivotal Container Service (PKS) customers were able to respond.
We started the year helping customers protect against Meltdown. It’s fitting we close out the year patching a Kubernetes CVE.
Here’s what happened.
On Monday, December 3rd, the Kubernetes Product Security Team announced that CVE-2018-1002105 had been identified, a flaw that let any user escalate their privileges through the Kubernetes API server and gain full control of the node running their pod. A bad actor could exploit this vulnerability to steal sensitive customer data, inject malicious code, or even crash production applications and services.
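The fix shipped in Kubernetes 1.10.11, 1.11.5, and 1.12.3, so a quick first check is whether each of your clusters reports one of those patch releases or newer. Here’s a minimal sketch of that check using the official Kubernetes Python client; the patch table and version parsing are our own illustration, not an official tool:

```python
# Minimal sketch: does a cluster report a Kubernetes version patched
# against CVE-2018-1002105? Assumes the official Python client
# (pip install kubernetes) and a working kubeconfig.
import re

from kubernetes import client, config

# First patch release containing the fix, per minor line.
PATCHED = {(1, 10): 11, (1, 11): 5, (1, 12): 3}

def is_patched(git_version: str) -> bool:
    """Return True if the reported version includes the CVE fix."""
    match = re.match(r"v(\d+)\.(\d+)\.(\d+)", git_version)
    if not match:
        raise ValueError(f"unrecognized version string: {git_version}")
    major, minor, patch = (int(g) for g in match.groups())
    if (major, minor) not in PATCHED:
        # 1.13 and later shipped with the fix; older lines never got a patch.
        return (major, minor) > (1, 12)
    return patch >= PATCHED[(major, minor)]

config.load_kube_config()  # reads ~/.kube/config by default
version = client.VersionApi().get_code().git_version
print(version, "- patched" if is_patched(version) else "- VULNERABLE")
```

Run a check like this against every cluster you manage; anything reporting an older patch release needs an upgrade.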
By lunchtime on the same Monday, the tech press was all over the story, and your boss likely wanted to know when all your Kubernetes instances would be patched. How reassuring (and cool) would it be to say that all your Kubernetes environments had already been patched before the CVE was announced? Sound too good to be true? It’s business as usual if you’re running PKS.
Let’s break down how the PKS magic worked. Rewind to a week before the story broke to the public.
As a Certified Kubernetes Distributor, Pivotal received advance notice of the security exposure. From there, the PKS R&D team immediately prioritized the inclusion of the fix.
When the upstream Kubernetes fix was posted (Kubernetes 1.11.5 in this case), Pivotal engineers merged it into the latest version of PKS. From there, we built and tested the image.
By the time the world at large learned about the CVE, PKS customers had already been patched for a few days!
Want to see this in action? Here’s a video of the patch being applied to a PKS cluster running NGINX, Cassandra, Redis, and Istio. The whole process took little more than an hour, but we’ve condensed it into 2 minutes of highlights:
How does this process compare to your homegrown Kubernetes environments? If you manage your own environments with custom automation, your first notice of the CVE was probably the public announcement on Monday, December 3rd. You’d likely have to endure several days operating with a publicly disclosed vulnerability. You’d probably take a hit to availability while clusters were upgraded. What’s more, you’d also run the risk that the upgraded clusters could cause workloads to fail.

Jump aboard the upgrade train

With PKS, patching Kubernetes becomes a routine, automated task, without any drama. We heard from one Pivotal customer that their automated pipeline had applied the patch before they had even heard of the critical CVE. They were pleasantly surprised to find their systems and data were already protected. A recent Forrester report, "Reduce Risk And Improve Security Through Infrastructure Automation," captures the urgency of the problem, and the solution.
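The heart of any such pipeline is a simple comparison: which version is the cluster actually running, and what’s the newest patch release upstream for that minor line? Here’s a minimal sketch of that check, again using the official Kubernetes Python client; the upstream release endpoint and the final alert stand in for whatever your pipeline would actually trigger:

```python
# Sketch of the check at the heart of an automated upgrade pipeline:
# compare what the cluster reports against the newest upstream patch
# release for the same minor line.
import urllib.request

from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config by default
running = client.VersionApi().get_code().git_version  # e.g. "v1.11.4"
minor_line = ".".join(running.lstrip("v").split(".")[:2])  # e.g. "1.11"

# Upstream publishes the latest patch release for each minor line here.
url = f"https://dl.k8s.io/release/stable-{minor_line}.txt"
latest = urllib.request.urlopen(url).read().decode().strip()  # e.g. "v1.11.5"

if running != latest:
    # A real pipeline would trigger the rolling cluster upgrade here;
    # this sketch just surfaces the version drift.
    print(f"Cluster is on {running}; upstream patch release is {latest}. Upgrade needed.")
else:
    print(f"Cluster is current at {running}.")
```

A Concourse pipeline runs a check like this on a schedule and feeds the result into the upgrade job, so new patch releases roll out without anyone watching the news.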
You can read more about setting up automated upgrade pipelines with Concourse in this blog. Learn more about the other benefits of running Kubernetes with PKS here.
Fixing a vulnerability before it’s announced. Your boss may think that’s magic, but you know it’s PKS.
Any sufficiently advanced technology is indistinguishable from magic.
– Arthur C. Clarke