With contributions from: Massimo Re Ferre, Eric Fulton, Tomas Fojta, Ray Budavari, Jesse Schachter, Kyle Smith, Francois Misiak, Benham Chia, Ranga Maddipudi, Trevor Gerdes and Ben Byer
We hope you enjoy this month’s vCloud Suite Digest. This is where we take some questions that we get and disseminate the answers in the hopes that it will help someone else who might have a similar question. This month, we have some great tidbits on guest OS clustering, elastic VDCs, and networking among other things. Enjoy!
Guest OS Clustering
For some time vSphere has supported clustering technologies within the Guest Operating System, of which Microsoft Clustering Service (MSCS) is perhaps the most well known. In the early days of ESX 2.x we used to get students to set up a NodeA/NodeB cluster-in-a-box configuration. The recommendation since the rise of VMware Distributed Resource Scheduler (DRS) is to use “anti-affinity” rules to ensure that NodeA and NodeB never reside on the same physical vSphere Host.
Q. Does vCloud Director (vCD) 5.1 support clustering within a guest OS?
A. It is OS-dependent. For example, you can create MS SQL Server failover cluster databases with vCD VMs running Windows 2008 R2. That said, note that there is no built-in method to ensure the VMs run on different hosts other than deploying them into different Provider vDCs. As each Provider vDC generally points to a different cluster, this should be enough to guarantee separation. Alternatively, you could use vCO or a similar tool to apply anti-affinity rules once the VMs are deployed.
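To make the separation guarantee concrete, here is a minimal toy model of what a DRS-style anti-affinity rule promises: each clustered node gets a distinct host, and the rule cannot be satisfied with fewer hosts than nodes. All names (VMs, hosts) are illustrative, not real API calls.

```python
def place_with_anti_affinity(vms, hosts):
    """Assign each anti-affine VM to a distinct host.

    Raises if there are more VMs than hosts, mirroring DRS being
    unable to honor the rule.
    """
    if len(vms) > len(hosts):
        raise ValueError("not enough hosts to separate the cluster nodes")
    # One host per VM, never shared between rule members.
    return dict(zip(vms, hosts))

# NodeA/NodeB guest-OS cluster spread across a three-host vSphere cluster.
placement = place_with_anti_affinity(["NodeA", "NodeB"],
                                     ["esx01", "esx02", "esx03"])
```

In a real environment you would express the same constraint as a DRS anti-affinity rule (e.g. via vCO or PowerCLI) rather than computing placement yourself.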
vApp with VMs Spanning Clusters
Since vCloud Director 5.1 it has been possible to add multiple VMware HA/DRS clusters to the same Provider vDC. Such a configuration is often referred to as an "Elastic vDC", as the compute resources of a single cluster no longer limit it. It's recommended (although not required) to use the VXLAN feature with an elastic vDC, as this allows the administrator to configure networks that span clusters and even Layer 3 domains.
Q. Can we define a vApp that has VMs that span clusters? With vCD, can a vApp’s network span multiple clusters?
A. Deploying a vApp in an elastic VDC may deploy VMs belonging to the same vApp in multiple clusters. This is not user-controlled, however. A vApp Network can span multiple clusters and even Layer 3 domains if a VXLAN-backed network pool is used.
An admin can define elastic VDCs that span clusters, and vApp networks can span clusters, but both of these are back-end constructs: transparent, inaccessible, and not even visible to an Organization user defining a vApp. These rules apply to every vApp in the Org vDC.
Whether what one creates actually spans is a happy accident of settings and the deploy-time distribution of resources, rather than a purposeful action on the vApp itself. Even if the admin has defined everything such that it CAN span, nothing guarantees that it WILL. The one exception is an approach that uses visibility to the storage to control where the VMs are placed. In this case you would create two storage tiers, cluster1 and cluster2, and assign each cluster its own datastores (not shared between clusters). The user then controls which VM within the vApp uses which tier, and the placement engine takes care of the rest.
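The storage-tier trick described above can be sketched as a tiny model: because each tier's datastores are visible to only one cluster, choosing a storage profile for a VM effectively pins its cluster. Tier and cluster names here are hypothetical.

```python
# Each storage tier maps to the only cluster that can see its datastores.
# (Illustrative names; in vCD these would be storage profiles and clusters.)
TIER_TO_CLUSTER = {
    "cluster1-tier": "Cluster1",  # datastores attached only to Cluster1
    "cluster2-tier": "Cluster2",  # datastores attached only to Cluster2
}

def place_vm(storage_profile):
    """The placement engine must pick the cluster that can reach the tier."""
    return TIER_TO_CLUSTER[storage_profile]

# Two VMs in one vApp, deliberately forced onto different clusters.
vapp_placement = {
    "db":  place_vm("cluster1-tier"),
    "web": place_vm("cluster2-tier"),
}
```

The point of the sketch: the user never chooses a cluster directly, yet the per-VM storage profile choice makes the spanning deterministic instead of accidental.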
IP Masquerade in vCD 5.x
Q. In vCD 1.x there was an IP masquerade setting, but this seems to have disappeared in vCD 5.1. How do I achieve the equivalent functionality in vCD 5.1?
A. The behavior was changed in vCD 5.1; see KB article 2036040. Essentially, IP masquerade has been superseded by a new approach that improves the capabilities of vCloud Director. Now, whenever a VM is created, its "internal" IP address is supplemented by an "external" IP address allocated from a sub-allocation IP address range. You can see this mapping on the "Virtual Machines" tab of a vApp.
A combination of Source and Destination NAT rules (together with a firewall rule) allows you to grant VMs within the vApp access to the outside world, or to allow inbound access from the outside world.
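The internal/external pairing and the derived NAT rules can be modeled in a few lines. This is a toy sketch, not the vCD API: the addresses and the sub-allocation range are made up, and a real Edge Gateway would also need matching firewall rules.

```python
import ipaddress

# Hypothetical sub-allocated external range on the Edge Gateway.
sub_allocation = iter(ipaddress.ip_network("192.0.2.0/29").hosts())

def map_vm(internal_ip):
    """Pair a VM's internal IP with the next free external IP and derive
    the SNAT (outbound) and DNAT (inbound) rules from that pairing."""
    external_ip = str(next(sub_allocation))
    return {
        "snat": {"original": internal_ip, "translated": external_ip},
        "dnat": {"original": external_ip, "translated": internal_ip},
    }

rules = map_vm("10.0.0.10")  # internal address of a vApp VM
```

Note the symmetry: the SNAT's translated address is the DNAT's original address, which is exactly the mapping shown on the vApp's "Virtual Machines" tab.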
Changes in Networking in an Upgrade from vCD 1.5 to 5.1
Q. When upgrading from vCD 1.5 to 5.1, what happens to an org network used in vCD 1.5?
A. Isolated and direct org networks get converted into an org VDC network. Routed org networks get converted into a gateway with two interfaces and an org VDC network.
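The conversions in the answer above, written out as a simple lookup (illustrative only; the key and value names are shorthand, not vCD object types):

```python
# vCD 1.5 org network type -> objects it becomes after upgrade to 5.1
UPGRADE_15_TO_51 = {
    "isolated org network": ["org VDC network"],
    "direct org network":   ["org VDC network"],
    "routed org network":   ["edge gateway (two interfaces)", "org VDC network"],
}
```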
Increasing vCD Cell Performance
Q. I am running vCloud Director 5.1 with 12GB of system RAM and have increased the JVM heap size to 3GB per the best practices guide, and I have found the vCD cell response very good. Will increasing the heap size to 8GB help even more?
A. More memory does not necessarily mean better performance. You should profile your vCD cells to determine what bottlenecks, if any, exist. If you really want to optimize the memory and garbage collection options, see our white papers on Enterprise Java Applications on vSphere.
vCloud Director Licensing: Partially Powered-on vApps
A partially powered-on vApp is one where the vApp contains some VMs which are powered on and others which are powered off. The vApp in vCloud Director has a power state of its own, just like a VM, so it is possible to have a vApp that is "powered on" when none of the VMs within it are actually powered on. If this happens, you would power off the vApp as normal.
Q. How is a partially powered-on vApp counted for licensing?
A. vCD is licensed at the VM level, counting the number of powered-on VMs. Note, however, that vCD itself does not enforce licensing based on the number of powered-on VMs; ensuring compliance is a manual process.
vCloud Network and Security (vCNS)
Q. Do we have a list of supported/non-supported third-party VPN products for vCNS?
A. VMware has tested our IPsec site-to-site VPN feature with Juniper and Cisco products, and these should work without any issues. Since IPsec is an open specification/protocol suite (an IETF standard), we should be able to interoperate with any IPsec solution (of course there are limitations), but typical deployments will work just fine.
The limitation of IPsec, which is also one of its core strengths, is its extensibility. Although nearly all products/solutions support the same base set of authentication and encryption algorithms, third-party vendors are free to add new algorithms as they come along.
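The interop claim above boils down to proposal negotiation: each peer advertises the (cipher, hash) combinations it supports, and the tunnel comes up on a mutually supported pair. The following sketch models that; the algorithm names and proposal lists are examples, not what vCNS actually advertises.

```python
# Hypothetical proposal lists for an Edge and a third-party peer.
edge_proposals = [("aes-128", "sha1"), ("3des", "sha1")]
peer_proposals = [("aes-256", "sha2"), ("aes-128", "sha1")]

def negotiate(ours, theirs):
    """Return the first mutually supported (cipher, hash) pair, or None.

    None models the interop failure case: a vendor-specific algorithm
    on one side with no common base-set fallback on the other.
    """
    for proposal in ours:
        if proposal in theirs:
            return proposal
    return None

agreed = negotiate(edge_proposals, peer_proposals)
```

Because nearly every implementation carries the same base set of algorithms, the intersection is rarely empty, which is why "typical deployments will work just fine" even against third-party gear.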
vCNS Edge Gateway High Availability
vCNS introduced a new high-availability option for the Edge Gateway that can be enabled when it is being created, or enabled afterwards. This option can be enabled in vCloud Director on the properties of any "Edge Gateway" under the General tab; alternatively, if you want to use this feature without vCloud Director, consult the blogpost (see below) for a step-by-step guide to configuring it with vCNS Manager.
vCNS Edge Storage Placement
When you create a new Organization Network or vApp Network, it is likely that an Edge Gateway will be deployed by vCloud Director. The new Edge Gateway appliance will be deployed using the default "Storage Profile" configured for the Organization. This is set when the Organization Virtual Datacenter is defined and is referred to as the "Default Instantiation Profile".
Q. Can I choose which datastore the Edge appliance is placed on?
A. The Edge will be placed on any valid datastore for the VDC, just like a regular VM, but there is no way to choose which datastore it will be placed on. If you are using storage profiles, you can enable/disable storage profiles at the org VDC level to help control placement.
Q. How can I move an already-deployed Edge?
A. Reset the network to force the Edge to re-deploy.
Edge Gateway and Physical Servers
The Edge Gateway is a NAT, VPN, load balancer, DHCP server and firewall all in one, and primarily acts as a gateway device for VMs. As you would expect, much of the automation is geared toward virtual machines, but it is possible to configure it for physical devices, too.
Q. Is there a licensing model for vCNS to protect physical servers with an Edge firewall?
A. VMware does not license for physical machines, only for protected virtual machines, so any physical devices are protected for free.
Q. How would this be set up?
A. The portgroup being protected by the Edge device would simply be backed by a VLAN in the physical world. The traffic patterns (in terms of tracing packets up and down through the switching fabric) are essentially the same as in a typical firewall-on-a-stick deployment, where the firewall is attached to a distribution-layer multilayer switch. The only difference is that the firewall-on-a-stick is now the Edge device; the number of hops through the physical network stays the same.