Introduction
With vSphere 6.7 U3 we introduced the Cloud Native Storage (CNS) control plane into vSphere, built to cater for container-based workloads that require on-demand provisioning of volumes on vSphere.
The initial release catered for the most common workloads requiring Persistent Volumes, namely those of the ReadWriteOnce type (essentially, a single container mounting a single volume). While this is the most common use case, there were other use cases that we were well aware of, and we were motivated to cater to their storage needs as well.
A great many applications can take advantage of ReadWriteMany volumes (multiple containers mounting a single volume in a read/write fashion). Think of applications like Nginx, Apache, Tomcat, or the Harbor registry, where it makes sense, or in some cases is necessary, to have multiple containers mounting a single volume: to realise a storage efficiency, to make updates to frontends easier, or otherwise.
And with vSphere and vSAN 7.0, coupled with CNS, that's what we are delivering: full on-demand provisioning of Kubernetes Persistent Volumes in RWM mode.
Why do I need it?
Glad you asked. Some applications, in particular web servers, app servers and container registries, can realise massive storage efficiencies and ease of scaling by using a Kubernetes concept known as ReadWriteMany (RWM) volumes.
If you think about how these applications scale, they do so elastically based on load, which is very changeable. But they all serve the same content; what good is having ten web or app servers behind your application if they're all serving different content?
So if they all serve the same content, why put the content into the container image itself when you can make it external and, as such, realise a very considerable storage efficiency? The amount of storage you save is directly proportional to the number of containers that mount the volume: if you've got ten web servers all mounting the same volume where the actual website data lives, you're in essence making a 10:1 saving on that storage.
That is not to mention that when content is updated through one container (say a website update is made), that content is immediately served by all the other containers, as they're backed by the same volume. So there are some nice operational efficiencies to be had too.
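As a sketch of how an application asks for such a volume, a ReadWriteMany request in Kubernetes is simply a PersistentVolumeClaim with the RWM access mode. The storage class and claim names below are hypothetical examples, not names CNS creates for you:

```yaml
# PersistentVolumeClaim requesting a shared, read-write-many volume.
# "vsan-file-sc" is a hypothetical StorageClass name; substitute your own.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-web-content
spec:
  accessModes:
    - ReadWriteMany        # multiple nodes may mount this volume read/write
  resources:
    requests:
      storage: 10Gi
  storageClassName: vsan-file-sc
```

Any Pod that mounts this claim then sees the same backing file share, which is where the storage and operational savings come from.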
What does it look like?
With the “why” out of the way, let’s get on to the “what”. This feature is best demonstrated by seeing it work, so I encourage you to check out the short video demo of the feature below:
When a user requests a RWM volume in Kubernetes, CNS will ask vSAN File Services to create an NFS-based file share of the requested size, with the appropriate SPBM policy, and mount that share into the Kubernetes worker node hosting the Pod that requested the volume. Additionally, if multiple nodes request access to the RWM volume, or the application is scaled out, CNS will note that a RWM volume already exists for that particular deployment and will mount the existing volume into those nodes as well.
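To make the scale-out behaviour concrete, here is a hedged sketch of a Deployment whose replicas all mount the same claim. The claim name is a hypothetical example of an RWM-mode PVC; because every replica references the same claim, each Pod is backed by the same file share regardless of which node it lands on:

```yaml
# Hypothetical Deployment: three nginx replicas all mounting the same
# RWM-backed PersistentVolumeClaim, so every Pod serves identical content.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.17
          volumeMounts:
            - name: content
              mountPath: /usr/share/nginx/html   # website data lives on the shared volume
      volumes:
        - name: content
          persistentVolumeClaim:
            claimName: shared-web-content        # hypothetical RWM-mode PVC name
```

Scaling the Deployment up (`kubectl scale deployment web --replicas=10`) does not create new volumes; the new Pods simply mount the existing share.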
Note above, multiple Pods mounting a single Volume, backed by vSAN File Services.
Note that File Shares provisioned by CNS are managed right alongside file shares created by users, offering the same operational experience whether you are dealing with VM or container volumes.
What are the requirements?
CNS support for RWM volumes requires a few things: vSphere and vSAN 7.0, with vSAN File Services enabled. vSAN File Services is mandatory because we control the share lifecycle with vSAN, which we cannot do with other platforms. It also requires Kubernetes v1.14 or above and the latest version of the CSI driver, which will be published in the near future.
For more info or any questions, ask @mylesagray on Twitter!