News & Highlights

VMware Cloud on AWS: Get your basics right Part 3: Extend on-premises infrastructure to the cloud

By Sonali Desai, Product Marketing Manager

Data center extension enables VMware Cloud on AWS users to extend on-premises infrastructure to the cloud to meet fluctuating capacity requirements. Learn about common challenges and how to overcome them in the final instalment of our three-part series.

In the first blog of this series, we introduced VMware Cloud on AWS and the key customer-driven use cases. In the second blog, we took a deep dive into the cloud migration use cases. In this third part, I would like to dig deeper into the second use case: data center extension, that is, extending your on-premises infrastructure to the cloud with VMware Cloud on AWS.

There are multiple scenarios in which customers want to extend to the cloud. For example, customers:

  • Have geographic capacity needs (such as data sovereignty rules or the need to be closer to their end users) and do not want to invest in building out a new data center
  • Have capacity constraints on-premises to handle seasonal spikes in demand
  • Want to handle unplanned temporary capacity needs, or need capacity for new projects, and do not want to invest in overprovisioning or in building new capacity on-premises
  • Want to easily add and extend on-premises desktop services without buying additional hardware
  • Need to develop new applications that must integrate with on-premises applications or access native cloud services
  • Need to perform test and development activities in a cloud environment that is operationally similar to on-premises environments

But when customers try to extend their existing on-premises environment to the cloud, they face many challenges including those below.

Typical challenges when extending an on-premises environment to the cloud:

  • Interoperability between the environments: Applications may need re-architecting or refactoring, machine format conversion, and similar work when migrating to the cloud
  • Incompatible skills, tools and processes: Infrastructure and operations teams must learn new skills, acquire different tools, and change existing processes to maximize the benefits of public cloud integration.
  • Management of disparate infrastructures: Inconsistent management tools that operate in isolation across on-premises and cloud environments
  • Bi-directional application mobility is complex and costly: Once applications move to the public cloud, it is virtually impossible to move them back on-premises without significant reverse rework, which makes the migration very costly and time-consuming
  • Inconsistent security and governance: The differences between on-premises and public cloud infrastructure limit the reuse of established security and governance procedures and tools.

How VMware Cloud on AWS overcomes these challenges:

  • VMware Cloud on AWS extends your on-premises infrastructure to the cloud with the same vSphere hypervisor that runs tens of millions of workloads. It is available on dedicated, bare-metal Amazon EC2 (Elastic Compute Cloud) instances in the AWS Cloud, so no redesign is required to migrate applications. This saves migration cost, time, and complexity, and makes bi-directional migration simple and easy
  • With VMware Cloud on AWS, customers can leverage familiar and proven VMware skills, tools and processes in the cloud, so there is no need to learn new skills or acquire new tools
  • VMware vCenter, a widely-used and proven management tool used by infrastructure administrators across the world to operate their on-premises vSphere infrastructure, is the management tool for VMware Cloud on AWS and provides consistent operations for vCenter administrators
  • Applications require no redesign to migrate to VMware Cloud on AWS, saving migration cost, time, and complexity and allowing for seamless bi-directional migration
  • With VMware Cloud on AWS, customers can extend their current on-premises security and governance policies to the cloud

Now, let’s look at the different features and capabilities the service offers to help you seamlessly extend from on-premises to the cloud:

  1. Hybrid Linked Mode:
  • It provides a single pane of glass to view and manage on-premises and cloud resources from the VMware Cloud on AWS SDDC vCenter. This provides operational consistency and visibility across both environments
  • Hybrid Linked Mode allows you to link your VMware Cloud on AWS vCenter Server instance with an on-premises vCenter Server instance
  • If you link your cloud vCenter Server to a domain that contains multiple vCenter Server instances linked using Enhanced Linked Mode, all of those instances are linked to your cloud SDDC
  • Using Hybrid Linked Mode, you can:
    • Log in to the vCenter Server instance in your SDDC using your on-premises credentials
    • View and manage the inventories of both your on-premises and cloud SDDC from a single vSphere Client interface (a small scripted illustration follows this list)
    • Cold migrate and vMotion workloads between your on-premises data center and cloud SDDC directly from the UI
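Hybrid Linked Mode itself is configured through the vSphere Client (or the vCenter Cloud Gateway described next), but as a rough, illustrative sketch, the snippet below uses pyVmomi to connect to both the on-premises and cloud SDDC vCenter Server instances and list their VM inventories. The hostnames and credentials are placeholders, and certificate checking is disabled purely for brevity.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def list_vms(host, user, pwd):
    """Connect to a vCenter Server and return the names of all VMs in its inventory."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE          # lab-only: skip certificate validation
    si = SmartConnect(host=host, user=user, pwd=pwd, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        return [vm.name for vm in view.view]
    finally:
        Disconnect(si)

# Placeholder hostnames and credentials -- replace with your own.
onprem_vms = list_vms("vcenter.onprem.example.com", "administrator@vsphere.local", "***")
cloud_vms = list_vms("vcenter.sddc-x-x-x-x.vmwarevmc.com", "cloudadmin@vmc.local", "***")
print("On-premises VMs:", onprem_vms)
print("Cloud SDDC VMs:", cloud_vms)
```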
  2. vCenter Cloud Gateway:
  • The vCenter Cloud Gateway is an appliance that you can download and install on-premises to connect your on-premises and Cloud vCenters
  • It joins the on-premises Single Sign On (SSO) domain and allows you to configure Hybrid Linked Mode to manage the hybrid resources from the on-premises data center
  • The vCenter Cloud Gateway includes the vSphere Client interface, and customers can use that UI to manage both their on-premises vCenter Server and the VMware Cloud on AWS vCenter Server. The vCenter Cloud Gateway experience is exactly the same as the Hybrid Linked Mode experience in VMware Cloud on AWS, except that it runs locally in your on-premises environment
  • Resources: Documentation Link, Related Blog
  3. vCenter Content Library:
  • vCenter Content Library is the perfect feature for keeping templates, OVAs, ISO images, and scripts in sync between on-premises and in-cloud SDDC deployments
  • You can deploy from, clone to, and sync VMTX and OVF templates, mount ISOs, and even perform guest customization
  • By adopting the Content Library, you are ready to use VMware Cloud on AWS to its full potential from day one (a sketch of one way to set this up via the API follows this list)
  • Resources: Documentation Link, Operations Guide: Use content library
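One common pattern, for illustration, is to publish a Content Library on-premises and subscribe to it from the cloud SDDC. The sketch below uses the vSphere Automation REST API from Python to create such a subscribed library; the vCenter URL, datastore ID, and subscription URL are placeholders, and the exact request-body fields may vary by vSphere version, so treat this as a starting point rather than the definitive procedure.

```python
import requests

VC = "https://vcenter.sddc-x-x-x-x.vmwarevmc.com"   # cloud SDDC vCenter (placeholder)
AUTH = ("cloudadmin@vmc.local", "***")

s = requests.Session()
s.verify = False                                    # lab-only: skip certificate validation

# 1. Authenticate and obtain a REST session token.
token = s.post(f"{VC}/rest/com/vmware/cis/session", auth=AUTH).json()["value"]
s.headers.update({"vmware-api-session-id": token})

# 2. Create a subscribed library that pulls templates/ISOs from the
#    on-premises published library (assumed URL, datastore ID, and field names).
spec = {
    "create_spec": {
        "name": "onprem-sync",
        "type": "SUBSCRIBED",
        "storage_backings": [{"type": "DATASTORE", "datastore_id": "datastore-10"}],
        "subscription_info": {
            "subscription_url": "https://vcenter.onprem.example.com/cls/vcsp/lib/xxxx/lib.json",
            "authentication_method": "NONE",
            "automatic_sync_enabled": True,
            "on_demand": True,
        },
    }
}
r = s.post(f"{VC}/rest/com/vmware/content/subscribed-library", json=spec)
print("Subscribed library ID:", r.json().get("value"))
```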
  4. Stretched Clusters:
  • In VMware Cloud on AWS, standard clusters are contained within a single AWS Region and Availability Zone (AZ), which means that if that AZ were ever to fail, the cluster would lose availability
  • For workloads that cannot tolerate the potential of an AZ failure, customers can choose to deploy a stretched cluster
  • With stretched clusters, VMware Cloud on AWS infrastructure delivers protection against failures of AWS AZs at an infrastructure level. Stretching an SDDC cluster across two AWS AZs within a region means if an AZ goes down, it is simply treated as a vSphere HA event and the virtual machine is restarted in the other AZ, thus providing 99.99% infrastructure availability
  • Now, applications can span multiple AWS availability zones within a VMware Cloud on AWS cluster
  • Main advantages of stretched clusters are:
  • Zero-RPO high availability for enterprise applications virtualized on vSphere across AWS Availability Zones (AZs), leveraging multi-AZ stretched clustering
  • The ability for developers to focus on core application requirements and capabilities, instead of infrastructure availability
  • Significantly improved application availability without the need to architect it into your application
  5. Elastic DRS:
  • Elastic DRS allows you to set policies to automatically scale your cloud SDDC by adding or removing hosts in response to demand
  • It uses an algorithm to maintain an optimal number of provisioned hosts to keep cluster utilization high while maintaining desired CPU, memory, and storage performance
  • It makes recommendations to either scale in or scale out the cluster. A decision engine responds to a scale-out recommendation by provisioning a new host into the cluster, and to a scale-in recommendation by removing the least-utilized host from the cluster
  • This feature is enabled at the cluster level, and the monitoring interval is 5 minutes
  • It is enabled by default for storage scale-out only, and it can be manually configured to optimize for best performance and/or for lowest cost
  • You can enable eDRS via policy or use RESTful APIs to automate the configuration of this policy (see the sketch after this list)
  • Resources: Documentation Link, Related blog, Elastic DRS using RESTful APIs blog
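As noted above, eDRS policies can also be managed through RESTful APIs. The sketch below shows the general pattern in Python: exchange a VMware Cloud Services API token for an access token, then read the cluster's current eDRS policy. The edrs-policy path shown here is an assumption; confirm the exact route in the API documentation linked above before relying on it.

```python
import requests

CSP_AUTH = "https://console.cloud.vmware.com/csp/gateway/am/api/auth/api-tokens/authorize"
VMC = "https://vmc.vmware.com"
ORG, SDDC, CLUSTER = "org-id", "sddc-id", "cluster-id"   # placeholders

# 1. Exchange a long-lived CSP API (refresh) token for a short-lived access token.
resp = requests.post(CSP_AUTH, params={"refresh_token": "YOUR-API-TOKEN"})
headers = {"csp-auth-token": resp.json()["access_token"]}

# 2. Read the current Elastic DRS policy for a cluster.
#    NOTE: this path is an assumption based on the eDRS/autoscaler API --
#    verify the exact route in the VMware Cloud on AWS API reference.
url = f"{VMC}/api/orgs/{ORG}/sddcs/{SDDC}/clusters/{CLUSTER}/edrs-policy"
policy = requests.get(url, headers=headers).json()
print(policy)   # typically includes the policy type and min/max host counts
```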
  6. Expand/Contract the Cloud SDDC automatically as needed:

One of the great benefits of using VMware Cloud on AWS is the ability to quickly and easily add and remove hosts and clusters in your SDDC, either from the VMC Console or through the API (a sketch of the API call follows this list).

  • Add hosts:
    • You can add hosts to your SDDC to increase the amount of compute and storage capacity available in your SDDC. You can add a maximum of 16 hosts per cluster and up to 20 clusters per SDDC
    • Hosts are pulled from AWS’s pool of servers. ESXi is booted and fully configured, including every VMkernel and logical network, and the host is then added to your vCenter/SDDC. This whole process takes about 10-15 minutes
    • After the host is connected to the network and added to the cluster, the vSAN datastore is automatically expanded, allowing the cluster to consume the new storage capacity and begin to sync the vSAN objects
  • Remove hosts:
    • You can remove hosts from your SDDC as long as the number of hosts in your SDDC cluster does not fall below the minimum of 3 hosts
  • Add clusters:
    • You can add clusters to a cloud SDDC up to the maximum configured for your account. Additional clusters are created in the same availability zone as the initial SDDC
    • Logical networks you have created for your SDDC are automatically shared across all clusters. Compute and storage resources are configured similarly for all clusters
  • Remove clusters:
    • You can remove any cluster in an SDDC except for the initial cluster, Cluster-1
    • When you delete a cluster, all workload VMs in the cluster are immediately terminated and all data and configuration information is deleted. You lose API and UI access to the cluster. Public IP addresses associated with VMs in the cluster are released
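As mentioned above, these add/remove operations are also exposed through the VMware Cloud on AWS API. The minimal Python sketch below submits a request to add one host to an SDDC; the /esxs route and request body are assumptions based on the public VMC API, so verify them against the API reference (authentication works as in the eDRS sketch earlier).

```python
import requests

VMC = "https://vmc.vmware.com"
ORG, SDDC = "org-id", "sddc-id"                 # placeholders
headers = {"csp-auth-token": "ACCESS-TOKEN"}    # obtained as in the eDRS sketch above

# Ask the service to add one ESXi host to the SDDC (scale out).
# NOTE: the /esxs route and body fields are assumptions -- check the VMC API reference.
# Use action=remove to scale back in.
resp = requests.post(
    f"{VMC}/vmc/api/orgs/{ORG}/sddcs/{SDDC}/esxs",
    params={"action": "add"},
    json={"num_hosts": 1},
    headers=headers,
)
print("Task submitted:", resp.json().get("id"))  # the call returns a task you can poll
```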
  7. Policy management:
    • In VMware Cloud on AWS, you can define compute policies that specify how the vSphere Distributed Resource Scheduler (DRS) should place VMs on hosts in a resource pool.
    • There are currently five types of policies you can specify (a sketch of the underlying tag setup follows this list):
    • VM-Host Affinity Policy:
      • A VM-Host affinity policy establishes an affinity relationship between a category of virtual machines and a category of hosts
      • Such policies can be useful when host-based licensing requires VMs that are running certain applications to be placed on hosts that are licensed to run those applications
      • They can also be useful when virtual machines with workload-specific configurations require placement on hosts that have certain characteristics
    • VM-Host Anti-Affinity Policy:
      • A VM-Host anti-affinity policy allows the user to specify anti-affinity relations between a group of VMs and a group of hosts
      • This can be useful for keeping general-purpose workloads off hosts that are running resource-intensive applications, so as to avoid resource contention
    • VM-VM Affinity Policy:
      • A VM-VM affinity policy allows the user to specify affinity relations between VMs
      • This policy can be useful when two or more VMs can benefit from placement on the same host to keep latency to a minimum
    • VM-VM Anti-Affinity policy:
      • A VM-VM anti-affinity policy describes a relationship among a category of VMs
      • A VM-VM anti-affinity policy discourages placement of virtual machines in the same category on the same host
      • This policy can be useful when you want to place virtual machines running critical workloads on separate hosts, so that the failure of one host does not affect other VMs in the category
    • Disable DRS vMotion Policy:
      • A Disable DRS vMotion policy applied to a VM prevents DRS from migrating the VM to a different host unless the current host fails or is put into maintenance mode.
      • This type of policy can be useful for a VM running an application that creates resources on the local host and expects those resources to remain local. If DRS moves the VM to another host for load-balancing or to meet reservation requirements, resources created by the application are left behind and performance can be degraded when locality of reference is compromised.
      • This policy takes effect after a tagged VM is powered on and is intended to keep the VM on its current host as long as the host remains available. The policy does not affect the choice of the host where a VM is powered on.
    • Resources: Documentation Link, Related blog
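Compute policies are applied to VMs and hosts through vSphere tags and categories. As an illustrative sketch (not the full policy-creation workflow, which you can complete in the vSphere Client), the Python snippet below uses the vCenter REST tagging API to create a category and tag that a VM-Host affinity policy could then reference; the vCenter URL, credentials, and names are placeholders.

```python
import requests

VC = "https://vcenter.sddc-x-x-x-x.vmwarevmc.com"   # cloud SDDC vCenter (placeholder)
s = requests.Session()
s.verify = False                                    # lab-only: skip certificate validation

# Authenticate to the vCenter REST API.
token = s.post(f"{VC}/rest/com/vmware/cis/session",
               auth=("cloudadmin@vmc.local", "***")).json()["value"]
s.headers.update({"vmware-api-session-id": token})

# Create a tag category and a tag for the VMs the policy should cover.
cat_spec = {"create_spec": {"name": "licensed-apps",
                            "description": "VMs tied to host-based licenses",
                            "cardinality": "MULTIPLE",
                            "associable_types": ["VirtualMachine"]}}
cat_id = s.post(f"{VC}/rest/com/vmware/cis/tagging/category", json=cat_spec).json()["value"]

tag_spec = {"create_spec": {"name": "oracle-vms",
                            "description": "Place on licensed hosts",
                            "category_id": cat_id}}
tag_id = s.post(f"{VC}/rest/com/vmware/cis/tagging/tag", json=tag_spec).json()["value"]
print("Tag ready for the compute policy:", tag_id)
```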
  8. Multiple Interconnectivity Options:

  • VPN:
    • Configure an IPsec VPN to provide a secure connection to your SDDC. Route-based and policy-based VPNs are supported. Either type of VPN can connect to the SDDC over the Internet. A route-based VPN can also connect to the SDDC over AWS Direct Connect
    • The VMware Cloud on AWS L2VPN feature supports extending VLAN networks. The L2VPN connection to the NSX-T server uses an IPsec tunnel. The L2VPN extended network is used to extend virtual machine networks and carries only workload traffic. It is independent of the VMkernel networks used for migration traffic (ESXi management or vMotion), which use either a separate IPsec VPN or a Direct Connect connection
    • An L2VPN on the Compute Gateway can extend up to 100 of your on-premises networks. VMware Cloud on AWS uses NSX-T to provide the L2VPN server in your cloud SDDC. L2VPN client functions can be provided by a standalone NSX Edge that you download and deploy into your on-premises data center
  • VMware HCX:
    • Provides high-performance, multi-site interconnectivity capabilities by abstracting your infrastructure and allowing you to interoperate across different versions of vSphere and different network types
    • Provides the ability to set up a WAN-optimized, multi-site IPsec VPN mesh for secure site-to-site connectivity
    • Stretches Layer 2 networks and extends data centers between sites
    • Performs bulk workload migrations and live bi-directional vMotion, with the ability to retain MAC and IP addresses
  • AWS Direct Connect:
    • AWS Direct Connect is a service provided by AWS that allows you to create a high-speed, low-latency connection between your on-premises data center and AWS services
    • Direct Connect traffic travels over one or more virtual interfaces that you create in your customer AWS account. For SDDCs in which networking is supplied by NSX-T, all Direct Connect traffic, including vMotion, management traffic, and compute gateway traffic, uses a private virtual interface. This establishes a private connection between your on-premises data center and a single Amazon VPC (a quick way to inspect these virtual interfaces with the AWS SDK follows this list)
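Because the Direct Connect virtual interfaces live in your own AWS account, you can inspect them with standard AWS tooling. The short boto3 sketch below (assuming AWS credentials and region are already configured) lists the virtual interfaces so you can confirm the private VIF that carries your SDDC traffic.

```python
import boto3

# Assumes AWS credentials and region are configured (e.g. via environment or ~/.aws/config).
dx = boto3.client("directconnect", region_name="us-west-2")

# List Direct Connect virtual interfaces in the customer account and show their type/state.
for vif in dx.describe_virtual_interfaces()["virtualInterfaces"]:
    print(vif["virtualInterfaceId"],
          vif["virtualInterfaceType"],      # 'private' VIFs carry SDDC vMotion/management/compute traffic
          vif["virtualInterfaceState"])
```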

Resources:

  • Data Center Extension Deep Dive Webinar: This webinar covers the deep-dive technical features and capabilities of VMware Cloud on AWS that help you extend your on-premises environment to the cloud.
  • Data Center Extension Solution Brief (TBD)

For other information related to VMware Cloud on AWS, here are some more learning resources for you: