
Tag Archives: vCloud Air Network

Dedicated Hosted Cloud with vCloud Director for VMware Cloud Providers

When looking for service providers for hosted infrastructure, some customers require dedicated infrastructure for their workloads. Whether the customer is looking for additional separation for security or more predictable performance of hosted workloads, service providers will need tools that enable them to provide dedicated hardware service for customers while reducing their operational overhead. In some scenarios, providers will implement managed vSphere environments for customers to satisfy this type of request and then manage the individual vSphere environments manually or with custom automation and orchestration tools. However, it is also possible to leverage vCloud Director to provide dedicated hardware per customer while also providing a central management platform for service providers to manage multiple tenants. In this post, we will explore how this can be accomplished with ‘out of the box’ functionality in vCloud Director.


Deploying Cassandra for vCloud Availability Part 2

In the previous post, we reviewed the preparation steps necessary for the installation of Cassandra for use with vCloud Availability. In this post, we will complete the deployment by installing Cassandra, configuring it for secure communication, and clustering the three nodes. This post assumes basic proficiency with the ‘vi’ text editor.

Installing & Configuring Cassandra

For this example, the DataStax distribution of Cassandra will be deployed. To prepare the server for Cassandra, create the datastax.repo file in the /etc/yum.repos.d directory with the following command:

vi /etc/yum.repos.d/datastax.repo

Then enter the DataStax repo details into the file.

 [datastax]
 name = DataStax Repo for Apache Cassandra
 baseurl = https://rpm.datastax.com/community
 enabled = 1
 gpgcheck = 0

Once the repo details have been entered correctly, press the ESC key, then type :wq! to write the file and exit.
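With the repository defined, the installation itself is a standard yum transaction. The following is a sketch that assumes the DataStax Community 3.0 packages; confirm the package names against the Cassandra version required by your vCloud Availability release:

 # Install Cassandra and its tools from the DataStax repo (assumed package names)
 yum install -y dsc30 cassandra30-tools

 # Start the service and enable it at boot
 service cassandra start
 chkconfig cassandra on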


Deploying Cassandra for vCloud Availability Part 1

With the recent release of vCloud Availability for vCloud Director 2.0, it seems like a good opportunity to review the steps for one of the key components required for its installation, the Cassandra database cluster. While the vCloud Availability installation provides a container-based deployment of Cassandra, this container instance is only intended for ‘proof of concept’ deployments.

To support a production implementation of vCloud Availability, a fully clustered instance of Cassandra must be deployed, with a recommended minimum of three nodes. This post outlines the steps for preparing the nodes for the installation of Cassandra. These preparation steps consist of:

  • Installation of Java JDK 8
  • Installation of Python 2.7

This post assumes basic proficiency with the ‘vi’ text editor.
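As a rough sketch, on a CentOS/RHEL node these prerequisites can be installed and verified as follows. OpenJDK is shown as an assumption; the Oracle JDK is an alternative, and many distributions already ship Python 2.7 natively:

 # Install Java JDK 8 (OpenJDK shown; Oracle JDK is an alternative)
 yum install -y java-1.8.0-openjdk

 # Verify the versions Cassandra expects
 java -version
 python --version   # should report 2.7.x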

Infrastructure Considerations

Before deploying the Cassandra nodes for vCloud Availability, ensure that:

  • All nodes can communicate with the vSphere Replication Cloud Service over ports 9160 and 9042.
  • DNS is properly configured so that each node resolves successfully by its FQDN (a quick verification sketch follows this list).
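A quick sanity check of both requirements might look like the following; the hostname is a placeholder for your node FQDNs:

 # Each node should resolve by its FQDN
 nslookup cassandra01.provider.local

 # The Cassandra ports should be reachable from the replication service host
 nc -zv cassandra01.provider.local 9160
 nc -zv cassandra01.provider.local 9042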

It is also worth mentioning that for this implementation, Cassandra does not require a load balancer, because the vSphere Replication Cloud Service automatically selects an available node from the Cassandra cluster for database communications.


Virtual Machine Performance Metrics in VMware vCloud Director 9.0

Starting with VMware vCloud Director® 5.6, service providers have been able to configure vCloud Director to store the metrics that it collects on virtual machine performance and resource consumption. Data for historic metrics is stored in a Cassandra cluster with KairosDB.

VMware Cloud Providers™ can set up a database schema to store basic VM historical performance and resource consumption metrics (CPU, memory, and storage), which are collected every 5 minutes (with 20-second granularity) by a StatsFeeder process running on the vCloud Director cells. These metrics are then pushed to a Cassandra NoSQL database cluster with KairosDB persistent storage.

However, this implementation has several limitations, including the following:

• Uses KairosDB on top of Cassandra, an extra layer to maintain
• Supports only outdated versions (KairosDB 0.9.1 and Cassandra 1.2.x/2.0.x)
• VMware vCenter Server® does not provide metrics for NFS-based storage
• Makes the size of the performance data difficult to manage because there is no TTL setting
• Lacks SSL support

With vCloud Director 9.0, VMware has made the following enhancements:

• Provides a hybrid mode (you can still choose to use KairosDB)
• Uses a native Cassandra schema and supports Cassandra 3.x
• Uses SSL
• Uses vCloud Director entity IDs to tag data in Cassandra instead of MoRef/VC-id
• Adds a cell management tool (CMT) command to configure a Cassandra cluster


After the service provider has successfully implemented this VM performance metrics collection mechanism, vCloud Director tenant users can view their VMs’ performance charts directly in the vCloud Director 9.0 tenant HTML5 user interface. Service providers are no longer required to use API calls for this purpose, enabling them to offer this benefit to their customers in a much simpler way.
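For comparison, the API-based approach looks roughly like the following sketch. The endpoint path, API version header, and identifiers are assumptions for illustration; consult the vCloud API documentation for your release:

 # Hypothetical example: fetch historic metrics for one VM via the vCloud API
 curl -k \
   -H "Accept: application/*+xml;version=29.0" \
   -H "x-vcloud-authorization: $TOKEN" \
   "https://vcloud.example.com/api/vApp/vm-<id>/metrics/historic"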

To configure basic VM metrics for vCloud Director 9.0, follow the steps in “Install and Configure Optional Database Software to Store and Retrieve Historic Virtual Machine Performance Metrics” in the vCloud Director 9.0 Installation and Upgrade Guide. In this version, the configuration file does not need to be generated first. Simply follow the documented steps and the remaining configuration is done for you automatically.

If you issue the cell-management-tool configure-metrics --metrics-config /tmp/metrics.groovy command described in that guide, you might encounter a problem adding the schema, where vCloud Director 9.0 cannot start up normally and stops at the com.vmware.vcloud.metrices-core process.

You must perform the following steps before running the cell-management-tool cassandra command, because otherwise it will try to add the same schema again, which causes the error:

1. Remove the keyspace on Cassandra:
# cqlsh -u cassandra -p cassandra    (or another superuser account)
cqlsh> DROP KEYSPACE vcloud_metrics;

2. Edit the content of the /tmp/metrics.groovy file to:

configuration {
}

3. Run the following command:
# cell-management-tool configure-metrics --metrics-config /tmp/metrics.groovy

4. Run the following command (replace with your Cassandra user and IPs):
# cell-management-tool cassandra --configure --create-schema --cluster-nodes ip1,ip2,ip3,ip4 --username cassandra --password 'cassandra' --ttl 15 --port 9042
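To confirm that the schema is in place before restarting the cells, a quick check from cqlsh can be used (a sketch; the exact tables vary by version):

 cqlsh> DESCRIBE KEYSPACE vcloud_metrics;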

Notes:

• See the latest vCloud Director 9.0 release notes for the supported vCloud Director Cassandra versions:
– Cassandra 2.2.6 (deprecated for new installations; supported for legacy upgrades still using KairosDB)
– Cassandra 3.x (3.9 recommended)

• See the vCAT blog at https://blogs.vmware.com/vcat/2015/08/vmware-vcloud-director-virtual-machine-metric-database.html for detailed VM metrics explanations.

• The service provider can implement a more advanced tenant-facing performance monitoring solution by using the VMware vRealize® Operations Manager™ Tenant App for vCloud Director, which gives a tenant administrator visibility into their vCloud Director environment. For more information, go to https://marketplace.vmware.com/vsx/solutions/management-pack-for-vcloud-director.

• There is no need to set up an additional load balancer in front of a Cassandra cluster; Cassandra’s Java driver load balances requests across the Cassandra nodes.

Hybrid Container Management for vCloud Director with VMware Harbor™

Running VMware Harbor™ in a vCloud Air Network Environment

Continuing the series of posts related to running containers on vCloud Air Network (vCAN), this post covers VMware Harbor™. VMware Harbor™ is VMware’s enterprise-class registry server for Docker images. Private registry servers such as VMware Harbor™ allow Docker images to be stored without publishing them publicly on the internet, and they add a layer of control that is often desired in enterprise environments.

This post shows how to deploy VMware Harbor™, add the new registry to VMware Admiral™, and then deploy and push images to the registry. Because VMware Harbor™ has no special infrastructure requirements, this post applies to both providers and tenants wishing to deploy their own container service. If you have not already, refer to https://blogs.vmware.com/vcat/2017/01/hybrid-container-management-vcloud-director-photon-os-admiral.html to deploy the VMware Admiral™ and VMware Photon OS™ components needed in this post.
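Deployment itself is straightforward. The sketch below reflects the installer workflow of Harbor releases from this period; the download URL and version are placeholders, so check the Harbor releases page for the correct artifact:

 # Download and unpack the Harbor online installer (placeholder version)
 wget https://github.com/vmware/harbor/releases/download/<version>/harbor-online-installer.tgz
 tar xvf harbor-online-installer.tgz
 cd harbor

 # Set at minimum the hostname clients will use to reach the registry
 vi harbor.cfg

 # Run the installer, which brings Harbor up with Docker Compose
 ./install.sh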

The diagram below shows a high-level view of VMware Harbor™ added to the container management platform within a vCloud Director vApp.


NSX Revenue Planning Calculator

The NSX revenue planning calculator is designed to show a service provider how to generate additional revenue by up-selling services derived from NSX components. Many service providers I speak to ask VMware the age-old question, ‘How can I make money from your bundles?’ Equally, we hear that the bundles are expensive. My response to this is: are you realizing the value and selling the functionality of the bundles, or just operationalizing them internally?

Most end consumers are after vCAN managed services but also desire ‘cloud-like’ self-service from a cloud catalogue. This has been compounded by vendors bringing cloud portals into the private cloud and by the realization among consumers that this is now a reality. Rolling all services into a robust ‘managed service’ may or may not be ideal for your customers; they may desire a mix of both, and certainly, to minimise operational spend, a provider could hand over as much as possible to self-service.

In the upcoming vCloud Director 8.2 release, as in the previous 8.1 release, VMware has included NSX functionality in the vCD self-service portal. This means that, for the first time, a service provider can offer self-service NSX services (whilst maintaining multi-tenancy and security) to end customers who are permitted access. This presents the ideal combination of managed services and self-service controls for customers who want them, and it allows providers to become much more granular about their charging and service definitions.

The calculator focuses on the vCAN 7-, 9-, and 12-point bundles (Advanced, Advanced with Networking, and Advanced with Networking & Management). Of course, we would like our providers to use the 12-point bundle, and this is what the calculator attempts to show: the additional margin with each vCAN bundle where NSX exposes capabilities and services.

Hybrid Container Management for vCloud Director with Photon OS and Admiral

Running Photon OS and Admiral in a vCloud Air Network Environment

VMware’s container story is growing and maturing every day. Many vCloud Air Network (vCAN) customers are looking to see how VMware’s container strategy maps to vCAN providers.  This is the first in a series of blog posts to help illustrate how VMware technologies can be leveraged to provide a robust and flexible environment for containers.  This first step is focused on creating a solid foundation for running containers using VMware Photon OS™ and VMware Admiral™.

Photon OS™ is a minimal open source Linux distribution optimized for VMware’s virtualization platform.  The main site for documentation and downloads for Photon OS™ is on the GitHub site https://vmware.github.io/photon/.

Admiral™ is VMware’s container management platform, a lightweight and scalable application. Like Photon OS™, Admiral™ is also open source. The main site for Admiral™ is its GitHub site at https://vmware.github.io/admiral/.
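For a quick test instance, Admiral™ can be started as a single container on a Docker host such as Photon OS. This is a sketch based on the Admiral project’s quick-start instructions; confirm the current image name and tag on the GitHub site:

 # Start Admiral and expose its UI/API on port 8282
 docker run -d -p 8282:8282 --name admiral vmware/admiral

Once the container is running, the Admiral™ UI is available on port 8282 of the Docker host.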

The diagram below gives a high-level view of what will be demonstrated with Admiral™ and some Photon OS™ VMs contained within a vCloud Director vApp.


Automated Deployments of vRealize Automation for vCloud Air Network

In the previous blog post, “Leveraging vRealize CloudClient with vRealize Automation deployments for vCAN”, we explored the use of VMware vRealize® CloudClient for the automated configuration of VMware vRealize Automation™ on a per-tenant basis to speed up the deployment of per-tenant instances in a service provider environment. That method relied on a manual installation of the vRealize Automation infrastructure components. However, the release of vRealize Automation 7.1 provides built-in silent installation capabilities for faster time-to-value deployments of vRealize Automation.


Overview of vRealize Automation for SPs

While vRealize Automation is typically implemented in enterprise private cloud environments, service providers also have an interest in providing services based on vRealize Automation to customers on a per-tenant basis, as well as using it to manage their internal infrastructure. Customers benefit from an expedited time to value while offloading the maintenance and management overhead of the private cloud infrastructure to a trusted VMware vCloud® Air™ Network service provider of their choice. Some of the common deployment models that service providers use for vRealize Automation are:

  • Internal Operations – Single tenant deployment of vRealize Automation by the service provider for internal operations users.
  • Dedicated Customer Private Cloud – Single tenant deployment of vRealize Automation with the optional use of multiple business groups. Customer manages user access and catalog content.
  • Fully Managed Service Offering – Service offering that leverages multiple business groups and/or tenants and is managed fully by the vCloud Air Network service provider on behalf of the customer.

At a platform level, each of these models enables the consumption of single and multiple data centers provided by the service provider, while the Dedicated Private Cloud and the Managed Service offering provide customers the capability to consume on-premises compute resources.


vRealize Automation Configuration with CloudClient for vCloud Air Network

As a number of vCloud Air Network service providers start to enhance their existing hosting offerings, VMware is seeing demand from service providers to offer a dedicated vRealize Automation implementation to their end customers, enabling them to offer application services, heterogeneous cloud management, and provisioning in a self-managed model.

This blog post details an implementation option where the vCloud Air Network service provider can offer “vRealize Automation as a Service” hosted in a vCloud Director vApp, with some additional automated configuration. This allows the service provider to offer vRealize Automation to customers from their existing multi-tenant IaaS platforms and achieve high levels of efficiency and economies of scale.

“vRealize Automation as a Service”

During a recent proof of concept demonstrating such a configuration, a vCloud Director organizational vDC was configured for tenant consumption. Within this org vDC, a vApp containing a simple installation of vRealize Automation was deployed, consisting of a vRealize Automation appliance and one Windows Server hosting the IaaS components and an instance of Microsoft SQL. With vRealize Automation successfully deployed, the instance was customized by leveraging vRealize CloudClient via Microsoft PowerShell scripts. Using this method for configuring the tenant within vRealize Automation reduced the deployment time for vRealize Automation instances while ensuring that the tenant configuration was consistent and conformed to the predetermined naming standards and conventions required by the provider.
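As an illustrative sketch only, the scripted portion can take the form of a prepared CloudClient command file driven from a wrapper script. Every name below is hypothetical, and the batch invocation style is an assumption; check the CloudClient documentation for the exact login and tenant commands in your version:

 # tenant-setup.txt - hypothetical CloudClient commands, one per line:
 #   vra login userpass --user administrator@vsphere.local --tenant vsphere.local
 #   vra tenant add --name customer-a

 # Feed the prepared commands to CloudClient non-interactively (invocation
 # style is an assumption; a .bat launcher is also shipped for Windows)
 ./cloudclient.sh < tenant-setup.txt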

[Figure: vRaaS vCAN operations]

Streamlining VMware vCloud Air Network Customer Onboarding with VMware NSX Edge Services

When migrating private cloud workloads to a public or hosted cloud provider, the methods used to facilitate customer onboarding can present some of the most critical challenges. The cloud service provider requires a method for onboarding tenants that reduces the need for additional equipment or contracts, which often create barriers for customers moving enterprise workloads onto a hosting or public cloud offering.

Customer Onboarding Scenarios

When a service provider is preparing for customer onboarding, there are a few options that can be considered. Some of the typical onboarding scenarios are:

  • Migration of live workloads
  • Offline data transfer of workloads
  • Stretching on-premises L2 networks
  • Remote site and user access to workloads

One of the most common scenarios is workload migration. For some implementations, this means migrating private cloud workloads to a public cloud or hosted service provider’s infrastructure. One path to migration leverages VMware vSphere® vMotion® to move live VMs from the private cloud to the designated CSP environment. In situations where this is not feasible, service providers can offer offline migration of on-premises workloads, where workloads marked for migration are copied to physical media, shipped to the service provider, and then deployed within the public cloud or hosted infrastructure. In some cases, migration can also mean the ability to move workloads between private cloud and CSP infrastructure on demand.
