Migrating applications to a modern platform like VMware vSphere Kubernetes Service (VKS) is rarely a “one-size-fits-all” operation. Whether you are moving from another Kubernetes platform (such as OpenShift, EKS, or GKE), migrating from virtual machines, or repatriating cloud services, the strategy you choose depends heavily on your current automation maturity and workload requirements.
In this post, we discuss the primary migration patterns: re-platforming (“lift and shift”) versus re-deploying (pipeline-driven). The following guide can help you choose the right path for your organization.
Strategy 1: The “Lift and Shift” (Re-platforming)
Best for: Organizations with little to no automation or deployment pipelines
If your team is hand-deploying applications or lacks mature GitOps practices, the most viable path is often a “backup and restore” approach using tools like Velero.
How it works
- Backup source state: Velero queries the source Kubernetes API to capture the application’s objects (deployments, secrets, config maps, services) and writes them to an S3-compatible object storage bucket.
- Restore to VKS: You install Velero on the destination VKS cluster and restore from that backup. Since VKS is CNCF-conformant, Velero replays those API objects into the new cluster.
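As a minimal sketch, here is what that selective backup and restore could look like expressed as Velero’s Backup and Restore custom resources (the namespace names, label values, and backup name below are illustrative; the equivalent `velero backup create` / `velero restore create` CLI commands work the same way):

```yaml
# Back up only specific application namespaces, filtered by label --
# never kube-system or other system namespaces.
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: app-migration-backup
  namespace: velero
spec:
  includedNamespaces:
    - web-frontend          # illustrative application namespaces
    - orders-api
  labelSelector:
    matchLabels:
      migrate: "true"       # tag only the workloads you intend to move
  storageLocation: default  # an S3-compatible BackupStorageLocation
---
# On the destination VKS cluster, restore from that backup.
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: app-migration-restore
  namespace: velero
spec:
  backupName: app-migration-backup
```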
The caveats
- Be selective: You cannot simply back up everything. You must avoid system namespaces (like kube-system) and target only specific application workloads using labels and selectors.
- Manual cleanup: This is a 1:1 state copy. If the destination environment requires different configurations (e.g., load balancer IPs or storage classes), you will likely need to perform manual edits post-restore.
- Technical debt: While functional, this approach effectively moves “technical debt” from one cluster to another. Once shifted, the running configuration immediately becomes the “source of truth,” potentially drifting from any source code you might have had.
Strategy 2: The “Pipeline Retargeting” (Re-deploy)
Best for: Organizations with existing CI/CD pipelines (Jenkins, Flux, ArgoCD, Harness, etc.)
If you are already storing configuration in Git and deploying via a pipeline, do not use Velero for application migration. Doing so breaks the link between your Git repository and your running cluster. Instead, treat VKS as just another standard Kubernetes endpoint.
How it works
- Retarget the pipeline: Simply point your existing deployment pipeline to the new VKS API endpoint.
- Validate configs: Inspect your Git repositories for deprecated APIs or Custom Resource Definitions (CRDs) that might not exist in the new VKS environment (e.g., specific ingress controllers or security tools).
- Deploy: Trigger the pipeline to deploy a fresh instance of the application onto VKS.
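Before triggering that deploy, the “validate configs” step can be as simple as grepping your rendered manifests for API versions that no longer exist on the target cluster. The sketch below creates a sample manifest and flags it; the version list is a partial, illustrative sample (purpose-built tools such as pluto or kubent do this exhaustively):

```shell
# Illustrative pre-flight check before retargeting a pipeline:
# scan rendered manifests for apiVersions removed in recent
# Kubernetes releases.
workdir=$(mktemp -d)
cat > "$workdir/legacy-ingress.yaml" <<'EOF'
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: legacy-ingress
EOF

# In practice, point this at your Git repository checkout instead.
grep -rnE 'apiVersion: *(extensions/v1beta1|networking\.k8s\.io/v1beta1|policy/v1beta1)' "$workdir"
```

Any hit is a manifest that must be updated before the pipeline can deploy cleanly to the new cluster.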
The “VCF native” evolution
This migration is also the perfect time to mature your infrastructure operations. Instead of just moving apps, you can adopt the following VMware Cloud Foundation (VCF) principles.
- Self-service tenancy: Use VCF automation (via Terraform) to create Projects and Tenants, allowing platform teams to self-service their own VKS clusters.
- GitOps adoption: Transition infrastructure tooling (Ingress, Cert Manager) to be managed by ArgoCD or Flux within the new VKS clusters.
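As a sketch of that GitOps pattern, an Argo CD Application managing an ingress controller on the new cluster might look like the following (the repo URL, project name, and API endpoint are placeholders for your own environment):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: ingress-nginx
  namespace: argocd
spec:
  project: platform-tooling   # illustrative AppProject name
  source:
    repoURL: https://github.com/example-org/platform-config.git  # placeholder repo
    targetRevision: main
    path: ingress-nginx
  destination:
    server: https://vks-cluster.example.com:6443  # the new VKS API endpoint
    namespace: ingress-nginx
  syncPolicy:
    automated:
      prune: true      # remove resources deleted from Git
      selfHeal: true   # revert manual drift on the cluster
```

Retargeting later becomes a one-line change to `spec.destination.server` in Git rather than a manual redeployment.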
The Stateful Dilemma: Handling Persistent Data
Migrating stateless applications is relatively easy. Stateful applications (databases, queues) utilizing Persistent Volume Claims (PVCs) present a much harder challenge.
You cannot simply “move” a disk object from one cluster to another, because the PersistentVolume behind the PVC is tied to a specific volume ID and CSI driver in the source infrastructure.
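To see why, look at a typical PersistentVolume object: its spec embeds the CSI driver name and a driver-specific volume handle that only the source infrastructure can resolve (the values below are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-1b2c3d4e-example
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: source-cluster-sc     # illustrative storage class
  csi:
    driver: csi.vsphere.vmware.com        # CSI driver on the *source* side
    volumeHandle: 1b2c3d4e-aaaa-bbbb-cccc-1234567890ab  # opaque ID, meaningless to another cluster
```

Restoring this object verbatim on a different cluster produces a PV that points at a disk the destination cannot see, which is why the data itself must travel via snapshots.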
The solution
- For External Storage (NFS/Object): This is trivial. Just point the new app instance to the existing NFS share or object store.
- For Block Storage (CSI): You must use a backup tool like Velero that supports CSI Snapshotting. This backs up the data (the disk snapshot), not just the configuration. During restore, Velero triggers the creation of a new disk on VKS and rehydrates it with the snapshot data.
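A minimal sketch of such a Velero Backup, assuming the cluster has the Velero CSI support and a VolumeSnapshotClass configured (the backup and namespace names are illustrative):

```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: postgres-backup       # illustrative stateful workload backup
  namespace: velero
spec:
  includedNamespaces:
    - postgres                # illustrative namespace holding the PVCs
  snapshotVolumes: true       # snapshot the bound volumes, not just the object specs
  storageLocation: default
```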
Best Practice: Isolate your stateful workloads. Running stateful apps side-by-side with stateless ones makes cluster lifecycle management significantly harder. Use dedicated clusters for stateful workloads where possible.
Handling “The Big Version Jump” (Upgrades)
Migration strategies also apply to upgrades. If you are running an older version of Kubernetes (e.g., v1.30) and need to jump to a modern version (e.g., v1.35), sequential in-place upgrades can be painful and time-consuming.
If you have a mature deployment pipeline and have designed your VKS infrastructure with adequate network and spare capacity support, the “Blue/Green” redeploy strategy is often superior:
- Build new: Deploy a fresh VKS cluster on the target version (v1.35).
- Deploy apps: Let the pipeline deploy applications to the new cluster.
- Flip the switch: Change DNS/Load Balancing to point to the new cluster.
This method is faster, cleaner, and avoids the risk of multi-step sequential upgrades on live production clusters.
For a detailed comparison of these two methods, please refer to my blog Navigating VKS Upgrades: Balancing Infrastructure Constraints and Application Reality.
Conclusion
While VKS provides the infrastructure, the migration path is defined by your application’s architecture and your operational maturity.
- Low automation? Lift and shift with Velero.
- High automation? Retarget your pipelines.
- Stateful apps? Plan carefully around CSI snapshots.
By understanding these patterns, you can turn a daunting migration into a structured, manageable transformation.
If you’re looking to take advantage of Kubernetes on VCF and want to leverage the experience and expertise of our Professional Services team, reach out to your Broadcom Account Director. We can discuss the specific technical requirements of your environment and how our team can support your objectives.