Note: The terms TKG Service, TKGS and vSphere Kubernetes Service (VKS) are used interchangeably in this blog post. We are undergoing a bit of a rebrand here and you’ll start to see the updated branding appear alongside our new releases!
With the release of TKG Service 3.2.0 in October 2024, we deprecated the TanzuKubernetesCluster (TKC) API, shifting customers toward Cluster API's Cluster resource as the supported method for bootstrapping, configuring, and managing Kubernetes clusters. This blog post walks through the process required to “retire” your usage of the TKC API in favor of the more modern Cluster API approach.
The TanzuKubernetesCluster API has been a valuable tool for managing Kubernetes clusters, but as the ecosystem has evolved, Cluster API has emerged as a mature, feature-rich, and versionable approach to handling cluster lifecycle operations. Transitioning ensures customers benefit from standardization, greater flexibility, and continued support. However, we recognize that customers have many existing Kubernetes clusters created using the TanzuKubernetesCluster API. This presents a challenge: customers need a way to cleanly retire their TKC resources without disrupting their existing workflows. Our overall strategy consists of three key steps:
Deprecation Notices
As of TKG Service 3.2.0, customers interacting with TKC resources receive deprecation warnings and are encouraged to transition to Cluster API.
Retirement Process
TKG Service 3.3.0 introduced a streamlined method for retiring TKC resources on existing clusters while continuing to manage them via Cluster API. This process is explained in detail below.
Full Removal
In an upcoming release, support for the TKC API will be fully removed. At that point, customers will be required to retire any TKC resources before upgrading to newer versions of TKG Service/VKS.
Retirement process overview
The retirement process allows for the clean removal of TKC resources while ensuring clusters remain fully operational and manageable via Cluster API’s Cluster resource.
When you apply the kubernetes.vmware.com/retire-tkc label to a TKC cluster:
- If the cluster meets the retirement prerequisites, the TKC resource is deleted, and the cluster is managed entirely through its Cluster resource.
- If prerequisites are not met, validation will block the retirement process, and you can use the detailed error conditions to resolve issues.

Note: Retiring the TKC resource has no impact on the underlying cluster or its nodes and does not trigger a rolling update.
Getting started
To begin, ensure your cluster is not running a legacy Kubernetes release. Legacy releases are only compatible with vSphere 7.x (and 8.x for upgrade purposes) and must be upgraded to a non-legacy Kubernetes release before retirement. Use the cluster compatibility verification steps to identify and upgrade legacy releases.
Ensure there are no active upgrades happening and that the cluster you are targeting reports a healthy state.
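As a quick sanity check before proceeding, you can list the available Tanzu Kubernetes releases and confirm the target cluster reports a healthy state. Exact output columns vary by release, so treat this as an illustrative sketch:
# List Tanzu Kubernetes releases; legacy releases must be upgraded
# to a compatible release before retirement.
kubectl get tanzukubernetesreleases
# Confirm the target cluster is healthy and note its current version.
kubectl get tanzukubernetescluster/<cluster-name> -n <namespace>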
Note about TMC Management: Clusters managed by Tanzu Mission Control (TMC) cannot currently be retired, as this functionality is not yet supported.
Steps
Detailed steps for the retirement process can be found in the official documentation on Retiring TanzuKubernetesCluster resources.
Apply the kubernetes.vmware.com/retire-tkc Label
Apply the label to the TKC resource for the cluster you want to retire.
kubectl label tanzukubernetescluster/<cluster-name> kubernetes.vmware.com/retire-tkc=""
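You can confirm the label was applied successfully before looking for validation results:
# Verify the retirement label is present on the TKC resource.
kubectl get tanzukubernetescluster/<cluster-name> --show-labels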
Validation Checks
When the label is applied, a webhook runs through a series of automated validation checks. Specifically, the following conditions are checked:
- Non-legacy Kubernetes release: The cluster must be running a supported version of Kubernetes. (Cannot be overridden.)
- Cluster resource exists: A corresponding Cluster resource must exist and report the Ready condition. (Can be overridden – see the documentation for advice)
- No ongoing upgrades: There must not be an active upgrade process. (Can be overridden if Kubernetes versions match.)
- No other migration in progress: Clusters already in the process of being migrated cannot be retired. (Cannot be overridden.)
- Cluster not managed by TMC: TMC-managed clusters cannot be retired at this time. (Cannot be overridden.)
If all these checks pass, the retirement process proceeds. If any fail, the process stops, and the system updates the TKCRetired condition with the reason for failure. Issues can then be resolved, after which the process will retry. You can use the following command to monitor the retirement process:
kubectl describe tanzukubernetescluster/<cluster-name>
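If you want a more targeted view than the full describe output, a jsonpath query can surface just the TKCRetired condition. The exact condition structure may differ slightly between releases:
# Print only the TKCRetired condition from the TKC status.
kubectl get tanzukubernetescluster/<cluster-name> \
  -o jsonpath='{.status.conditions[?(@.type=="TKCRetired")]}'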
After initiating the retirement, you’ll see updates through events:
- A RetirementTriggered event is published when the process starts.
- A RetirementDone event is published when the TKC resource is deleted.
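To follow these events live, you can filter the namespace’s event stream by the object name, for example:
# Stream events for the TKC object, including RetirementTriggered
# and RetirementDone.
kubectl get events -n <namespace> \
  --field-selector involvedObject.name=<cluster-name> --watch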
Halt the Retirement if Necessary
If you need to pause or stop the retirement process, you can set the label’s value to false to prevent further actions:
kubectl label tanzukubernetescluster/<cluster-name> kubernetes.vmware.com/retire-tkc="false"
Outcome
All being well, once the process has completed you’ll see the following:
- The old TKC API resource has been deleted.
- The cluster continues to operate without interruption and is now managed exclusively via the Cluster resource.
- All lifecycle operations, including upgrades and scaling, are now performed using the Cluster resource.
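As a quick post-retirement check, you can confirm the Cluster resource is now the single source of truth and exercise a day-2 operation through it. The machine deployment path below assumes a hypothetical topology with a single worker node pool; adjust names and indices to match your cluster:
# The TKC resource should no longer exist.
kubectl get tanzukubernetescluster/<cluster-name>
# The Cluster resource now reports the cluster's state.
kubectl get cluster/<cluster-name>
# Example day-2 operation: scale the first worker node pool to 4 replicas
# by patching the Cluster topology directly.
kubectl patch cluster/<cluster-name> --type=json \
  -p '[{"op":"replace","path":"/spec/topology/workers/machineDeployments/0/replicas","value":4}]'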
Note: If you have previously customized the TkgServiceConfiguration specification, those settings now live on the transitioned Cluster resource, and future changes will require you to patch or edit the Cluster object directly. New clusters created with the Cluster API will need these settings added to their specification. A mapping of common TkgServiceConfiguration settings to their new locations in the Cluster API is provided below:
| Setting in TkgServiceConfiguration | How to update this setting in a Cluster (v1beta1 API) | Documentation reference |
| --- | --- | --- |
| defaultCNI | There is no default network setting for the Cluster type. Specify the cluster network in spec.clusterNetwork. | v1beta1 Example: Default Cluster |
| proxy.httpProxy | Update the proxy object variable, setting the equivalent fields to identical values. | Cluster v1beta1 API |
| trust | Create a secret to store the additional trusted CAs and update the trust object variable, mapping to the data from the secret. | v1beta1 Example: Cluster with Additional Trusted CA Certificates for SSL/TLS |
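To tie the mapping together, here is a minimal sketch of a new v1beta1 Cluster that declares the cluster network, proxy, and trust settings in one place. The names, TKR version string, and variable values are illustrative assumptions; consult the referenced documentation for the schemas and values valid in your environment:
kubectl apply -f - <<EOF
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: my-new-cluster        # hypothetical cluster name
  namespace: my-namespace     # hypothetical vSphere Namespace
spec:
  # Replaces the defaultCNI-era service default: the network is now
  # declared per cluster in spec.clusterNetwork.
  clusterNetwork:
    services:
      cidrBlocks: ["10.96.0.0/12"]
    pods:
      cidrBlocks: ["192.168.0.0/16"]
    serviceDomain: cluster.local
  topology:
    class: tanzukubernetescluster
    version: v1.28.8---vmware.1-fips.1-tkg.2   # substitute a TKR available in your environment
    controlPlane:
      replicas: 1
    workers:
      machineDeployments:
        - class: node-pool
          name: node-pool-1
          replicas: 1
    variables:
      - name: vmClass
        value: best-effort-small
      - name: storageClass
        value: my-storage-class                # hypothetical storage class
      # Replaces TkgServiceConfiguration proxy.* settings.
      - name: proxy
        value:
          httpProxy: http://proxy.example.com:3128
          httpsProxy: http://proxy.example.com:3128
          noProxy: ["10.96.0.0/12", "192.168.0.0/16"]
      # Replaces TkgServiceConfiguration trust; the named entry must map
      # to data in the cluster's trusted-CA secret per the linked example.
      - name: trust
        value:
          additionalTrustedCAs:
            - name: my-ca-cert
EOF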
This process provides a clean, low-risk path to transition your existing TKC-managed Kubernetes clusters to full Cluster API management. Happy deploying!