When looking for service providers for hosted infrastructure, some customers require dedicated infrastructure for their workloads. Whether the customer is looking for additional separation for security or more predictable performance of hosted workloads, service providers will need tools that enable them to provide dedicated hardware service for customers while reducing their operational overhead. In some scenarios, providers will implement managed vSphere environments for customers to satisfy this type of request and then manage the individual vSphere environments manually or with custom automation and orchestration tools. However, it is also possible to leverage vCloud Director to provide dedicated hardware per customer while also providing a central management platform for service providers to manage multiple tenants. In this post, we will explore how this can be accomplished with ‘out of the box’ functionality in vCloud Director.
In the previous post, we reviewed the preparation steps necessary for the installation of Cassandra for use with vCloud Availability. In this post, we will complete the deployment by installing Cassandra, configuring it for secure communication, and clustering the 3 nodes. This post assumes basic proficiency with the ‘vi’ text editor.
Installing & Configuring Cassandra
For this example, the DataStax version of Cassandra will be deployed. To prepare the server for Cassandra, create the datastax.repo file in the /etc/yum.repos.d directory (for example, by opening it in vi with vi /etc/yum.repos.d/datastax.repo). Then input the DataStax repo details into the file.
[datastax]
name = DataStax Repo for Apache Cassandra
baseurl = https://rpm.datastax.com/community
enabled = 1
gpgcheck = 0
Once the repo details have been correctly entered, press the ESC key and type :wq! to write the file and exit.
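As a non-interactive alternative to editing the file in vi, the same repo definition can be written with a heredoc. This is an illustrative sketch: the target path is shown as a variable so it can be tried in a scratch directory, but on a real node it should be /etc/yum.repos.d/datastax.repo, written as root.

```shell
# Write the DataStax repo definition without opening an editor.
# REPO_FILE defaults to the current directory for illustration; on a real
# node, set REPO_FILE=/etc/yum.repos.d/datastax.repo and run as root.
REPO_FILE="${REPO_FILE:-./datastax.repo}"
cat > "$REPO_FILE" <<'EOF'
[datastax]
name = DataStax Repo for Apache Cassandra
baseurl = https://rpm.datastax.com/community
enabled = 1
gpgcheck = 0
EOF
echo "wrote $REPO_FILE"
```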
With the recent release of vCloud Availability for vCloud Director 2.0, it seems like a good opportunity to review the steps for one of the key components required for its installation, the Cassandra database cluster. While the vCloud Availability installation provides a container-based deployment of Cassandra, this container instance is intended only for ‘proof of concept’ deployments.
To support a production implementation of vCloud Availability, a fully clustered instance of Cassandra must be deployed, with a recommended minimum of 3 nodes. This post will outline the steps for preparing the nodes for the installation of Cassandra. These preparation steps consist of:
- Installation of Java JDK 8
- Installation of Python 2.7
This post assumes basic proficiency with the ‘vi’ text editor.
Before deploying the Cassandra nodes for vCloud Availability, ensure that:
- All nodes can communicate with the vSphere Replication Cloud Service over ports 9160 and 9042.
- DNS is properly configured so that each node can successfully be resolved by the respective FQDN.
It is also worth mentioning that for this implementation, Cassandra does not require a load balancer, as the vSphere Replication Cloud Service automatically selects an available node from the Cassandra cluster for database communications.
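The prerequisites above can be spot-checked with a small shell function. This is an illustrative sketch, not part of the product: the FQDNs in the commented example are placeholders for your actual Cassandra nodes, and the function only verifies DNS resolution (reachability of ports 9160 and 9042 can be checked separately with a tool such as nc).

```shell
# Verify that each Cassandra node FQDN resolves in DNS.
# Ports 9160 and 9042 must also be reachable from the replication service;
# check those with e.g. "nc -z <fqdn> 9042" once name resolution succeeds.
check_nodes() {
  local rc=0 node
  for node in "$@"; do
    if getent hosts "$node" > /dev/null; then
      echo "ok: $node resolves"
    else
      echo "FAIL: $node does not resolve"
      rc=1
    fi
  done
  return $rc
}

# Example usage (placeholder FQDNs -- substitute your own nodes):
# check_nodes cassandra01.example.com cassandra02.example.com cassandra03.example.com
```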
Starting with VMware vCloud Director® 5.6, service providers have been able to configure vCloud Director to store metrics that it collects on virtual machine performance and resource consumption. Data for historic metrics is stored in a Cassandra database with KairosDB layered on top.
VMware Cloud Providers™ can set up a database schema to store basic historical VM performance and resource consumption metrics (CPU, memory, and storage), which are collected every 5 minutes (with 20-second granularity) by a StatsFeeder process running on the vCloud Director cells. These metrics are then pushed to a Cassandra NoSQL database cluster with KairosDB persistent storage.
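To put those numbers in perspective, a 20-second granularity within a 5-minute push interval works out to 15 data points per metric per push, and a few thousand points per metric per day. A quick arithmetic sketch (not vCloud Director code):

```python
# Data-point arithmetic for the StatsFeeder collection cadence described above.
PUSH_INTERVAL_S = 5 * 60    # metrics pushed every 5 minutes
GRANULARITY_S = 20          # one sample every 20 seconds

samples_per_push = PUSH_INTERVAL_S // GRANULARITY_S
pushes_per_day = 24 * 60 * 60 // PUSH_INTERVAL_S
samples_per_metric_per_day = samples_per_push * pushes_per_day

print(samples_per_push)            # 15
print(samples_per_metric_per_day)  # 4320
```

This is why the lack of a TTL setting in the older KairosDB-based implementation (noted below) made data growth hard to manage.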
However, this implementation has several limitations, including the following:
• Uses KairosDB on top of Cassandra, adding an extra layer to maintain
• Supports only outdated versions: KairosDB 0.9.1 and Cassandra 1.2.x/2.0.x
• VMware vCenter Server® does not provide metrics for NFS-based storage
• Makes it difficult to control the size of the performance data, because there is no TTL setting
• Lacks SSL support
With vCloud Director 9.0, VMware has made the following enhancements:
• Provides hybrid mode (you can still choose to use KairosDB)
• Uses a native Cassandra schema and supports Cassandra 3.x
• Uses SSL
• Uses vCloud Director entity IDs to tag data in Cassandra instead of Moref/VC-id
• Adds the CMT command to configure a Cassandra cluster
After the service provider has successfully implemented this VM performance metrics collection mechanism, vCloud Director tenant users can view their VMs’ performance charts directly from within the vCloud Director 9.0 tenant HTML5 user interface. Service providers are no longer required to use API calls for this purpose, enabling them to offer this benefit to their customers in a much simpler way.
To configure basic VM metrics for vCloud Director 9.0, follow the steps in “Install and Configure Optional Database Software to Store and Retrieve Historic Virtual Machine Performance Metrics” in the vCloud Director 9.0 Installation and Upgrade Guide here. In this version, the configuration file does not need to be generated first. Simply follow the documented steps and everything will automatically be done for you.
If you issue the cell-management-tool configure-metrics --metrics-config /tmp/metrics.groovy command described here, you might have a problem adding the schema (as shown in the following screen capture), where vCloud Director 9.0 cannot start up normally and is stopped at the com.vmware.vcloud.metrices-core process.
You must perform the following steps before running the cell-management-tool cassandra command, because that command will try to add the same schema again, which causes the error:
1. Remove the keyspace on Cassandra:
# cqlsh -u cassandra -p cassandra // or another superuser account
# drop keyspace vcloud_metrics;
2. Edit the content of the /tmp/metrics.groovy file to:
3. Run the following command:
# cell-management-tool configure-metrics --metrics-config /tmp/metrics.groovy
4. Run the following command (replace with your Cassandra user and IPs):
# cell-management-tool cassandra --configure --create-schema --cluster-nodes ip1,ip2,ip3,ip4 --username cassandra --password 'cassandra' --ttl 15 --port 9042
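Once the command completes and the cell starts cleanly, the recreated schema can be spot-checked from cqlsh. (The keyspace name vcloud_metrics comes from the cleanup step above; the exact table list depends on your vCloud Director version.)

```
# cqlsh -u cassandra -p cassandra
cqlsh> DESCRIBE KEYSPACES;
cqlsh> DESCRIBE KEYSPACE vcloud_metrics;
```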
• See the latest vCloud Director 9.0 release notes here for supported vCloud Director Cassandra versions:
– Cassandra 2.2.6 (deprecated for new installations. Supported for legacy upgrades still using KairosDB)
– Cassandra 3.x (3.9 recommended)
• See the vCAT blog at https://blogs.vmware.com/vcat/2015/08/vmware-vcloud-director-virtual-machine-metric-database.html for detailed VM metrics explanations.
• The service provider can implement a more advanced tenant-facing performance monitoring solution for their tenants by using the VMware vRealize® Operations Manager™ Tenant App for vCloud Director, which gives a tenant administrator visibility into their vCloud Director environment. For more information, go to https://marketplace.vmware.com/vsx/solutions/management-pack-for-vcloud-director.
• There is no need to set up an additional load balancer in front of a Cassandra cluster; Cassandra’s Java driver is smart enough to load balance requests across the Cassandra nodes.
We’re just days away from another VMworld in Las Vegas, and it’s going to be another amazing year, with a packed agenda crammed with sessions on our SDDC stack, including vSAN, NSX and vSphere, in addition to VMware on AWS and Cloud Foundation, all of which are favorite topics of mine at the moment. You’ll also find me discussing Cross-Cloud Architecture along with Adrian Roberts and Victor Sandoval in the Ask the vCloud Air Network Cloud Experts [LHC1566PU] session, which is on Monday at 12.30, so feel free to bring something to eat and drink for an hour of technical discussion!
I was also fortunate enough to be invited to the Virtustream Global Developer conference in Florida last week, and one of the topics I presented was titled ‘Cloud Momentum: Cross-Cloud Services and Architecture’. I must say that the team at Virtustream have some amazing talent so be sure to check them out at VMworld!
While I’m on the subject of Cross-Cloud architecture, there is a real challenge that I think customers are trying to solve. Firstly, cloud consumers have choice, but with that choice it’s inevitable that things don’t always turn out to be clear-cut. For example, let’s say we have a customer that wants to migrate their workloads to the cloud. Most of their applications today have a traditional deployment with a database back-end, a reliance on certain versions of Microsoft SQL Server, and legacy dependencies that make scaling difficult. These traditional applications are not going to suit Azure, AWS or Google Cloud, but with VMware on AWS the customer can expand the vSphere infrastructure they already run on-premises into an AWS data center.
As customers then introduce cloud native applications to their organization, they can take advantage of AWS services such as S3 and DynamoDB. What makes this relationship so unique is that their traditional workloads can be placed side-by-side in the same AWS region and availability zone (AZ). This avoids network traffic having to traverse a VPN or Direct Connect, keeping the traffic internal to the AWS network. Taking things one step further, workloads can easily be moved using vMotion from the on-premises data center to AWS and vice versa.
There will be much more to reveal at VMworld where you’ll hear the latest news on Cross-Cloud services and architecture.
See you in Las Vegas!
Running VMware Harbor™ in a vCloud Air Network Environment
Continuing with the series of posts related to running containers on vCloud Air Network (vCAN), this post covers VMware Harbor™, VMware’s enterprise-class registry server for Docker images. Private registry servers like VMware Harbor™ allow storage of Docker images without publishing them publicly on the internet, and add an additional layer of control that’s often desired in enterprise environments.
This post will show how to deploy VMware Harbor™, add the new registry to VMware Admiral™, then deploy and push images to the registry. Since VMware Harbor™ has no special infrastructure requirements, this post applies both to providers and to tenants wishing to deploy their own container service. If you have not already, refer to https://blogs.vmware.com/vcat/2017/01/hybrid-container-management-vcloud-director-photon-os-admiral.html to deploy the VMware Admiral™ and VMware Photon OS™ components needed in this post.
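As a preview of the deployment flow, the classic Harbor online-installer sequence looks roughly like the following. The release URL is a placeholder (check the Harbor GitHub releases page for the current version); harbor.cfg and install.sh are the installer's own configuration file and setup script.

```
# Download and unpack the Harbor online installer (<version> is a placeholder).
wget https://github.com/vmware/harbor/releases/download/<version>/harbor-online-installer-<version>.tgz
tar xzf harbor-online-installer-<version>.tgz
cd harbor
# Set the hostname (and optionally auth and TLS settings) in harbor.cfg, then install.
vi harbor.cfg
./install.sh
```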
The diagram below shows a high-level view of VMware Harbor™ added to the container management platform within a vCloud Director vApp.
The NSX revenue planning calculator is designed to show a service provider how to make additional revenue by up-selling individual NSX-derived services. Many service providers I speak to ask VMware the age-old question, ‘How can I make money from your bundles?’ Equally, we also hear that the bundles are expensive. My response to this is: are you realizing the value and selling the functionality of the bundles, or just operationalizing them internally?
Most end consumers are after vCAN managed services, but they also desire ‘cloud like’ self-service from a cloud catalogue. This has been compounded by vendors bringing cloud portals into the private cloud and by the realization among consumers that this is now a reality. Hence, rolling all services into a robust ‘managed service’ may or may not be ideal for your customers; they may desire a mix of both, and certainly, to minimise operational spend, a provider could hand over as much as possible to self-service.
In the upcoming vCloud Director release 8.2, as in the previous release 8.1, VMware has included NSX functionality in the vCD self-service portal. This means that, for the first time, a service provider can offer self-service NSX services (whilst maintaining multi-tenancy and security) to end customers who are permitted access. This presents the ideal combination of managed services and self-service controls for the customers who want them, and allows providers to become much more granular about their charging and service definitions.
The calculator focuses on the vCAN 7, 9 and 12 point bundles (Advanced, Advanced with Networking, and Advanced with Networking & Management). Of course we would like our providers to use the 12-point bundle, and this is what the calculator attempts to show: the additional margin with each vCAN bundle where NSX exposes capabilities and services.
I recently published a white paper aimed at service providers offering VMware Horizon 7 for tenants adopting the digital workspace. Horizon 7 is a single-tenanted VDI and application platform, allowing IT administrators to manage not only desktop pools, but application delivery to their end-users.
The ‘digital workspace’ provides a “consumer simple” digital platform for end-users accessing their day-to-day and most critical applications. Under the hood is a VDI architecture that has evolved far beyond the days of the traditional desktop broker.
This white paper breaks down the digital workspace into five distinct layers, which have a direct correlation to tenant-facing functionality, service provider boundaries (for instance, firewall ports, user portal integration), core and management infrastructure.
Running Photon OS and Admiral in a vCloud Air Network Environment
VMware’s container story is growing and maturing every day. Many vCloud Air Network (vCAN) customers are looking to see how VMware’s container strategy maps to vCAN providers. This is the first in a series of blog posts to help illustrate how VMware technologies can be leveraged to provide a robust and flexible environment for containers. This first step is focused on creating a solid foundation for running containers using VMware Photon OS™ and VMware Admiral™.
Photon OS™ is a minimal open source Linux distribution optimized for VMware’s virtualization platform. The main site for documentation and downloads for Photon OS™ is on the GitHub site https://vmware.github.io/photon/.
Admiral™ is VMware’s container management platform, a very lightweight and scalable application. Like Photon OS™, Admiral™ is also open source. The main site for Admiral™ is its GitHub site at https://vmware.github.io/admiral/.
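For a quick look at Admiral™ itself, the project’s documented quick start runs it as a single container on any Docker host (the image name and port below come from the Admiral GitHub README):

```
# Pull and start the Admiral container; the UI is then served on port 8282.
docker run -d -p 8282:8282 --name admiral vmware/admiral
# Then open http://<docker-host>:8282 in a browser.
```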
The diagram below gives a high-level view of what will be demonstrated with Admiral™ and some Photon OS™ VMs contained within a vCloud Director vApp.
An interesting topic that came to our attention is how to migrate VMware vCloud Director® vApps from one distributed virtual switch to another. Recently, from the experience of one of our field consultants, Aleksander Bukowinski, we received a detailed procedure to overcome the possible service disruptions due to such a move. Aleksander has also authored a whitepaper on this topic that will soon be available for our audience in VMware Partner Central. The paper also covers in detail an additional use case with Cisco Nexus 1000V and provides PowerShell and API call samples.
Depending on connectivity mode, we can have five different types of vApps in vCD: directly connected, routed, connected to routed vApp networks, isolated, and fenced. The migration process does not require shutting down the vApps while the migration happens; rather, it could generate brief network outages if the VMs are connected to a vCloud Director Edge Gateway, or no outage at all if the VMs use isolated networks with no dependency on the Edge.