Home > Blogs > vCloud Architecture Toolkit (vCAT) Blog

Publishing vCloud Director User Interface Extensions

vCloud Director has been designed with extensibility in mind.  For many years now, developers have been able to extend the standard vCloud Director API, enabling Service Providers to offer a single point of integration to their customers.  The vCloud Director 9.1 release enhances this extensibility by allowing you to also extend the user interface with custom extensions, which lets you add your own screens and workflows directly inside the vCloud Director HTML5 client.  For example, you could create a simple informational page displaying all of your service offerings, so that your customers can easily learn more about them.  Or with a little more effort, you could fully integrate your in-house ticketing system, allowing customers to create, view, edit and delete tickets without ever leaving the vCloud Director user interface.

To learn more about how to create your own user interface extensions, read the following two white papers: Extending VMware vCloud Director® User Interface Using Portal Extensibility and Extending VMware vCloud Director® User Interface Using Portal Extensibility – Ticketing Example.

The rest of this post assumes that you have already developed the desired code and are now ready to publish your extension into a vCloud Director environment.  Let’s walk through the process.

Continue reading

VMware Hybrid Cloud Extension (HCX) Network Ports

I’d like to share a high-resolution network ports diagram of the VMware Hybrid Cloud Extension service (previously known as HCX) that I have been working on this week. If you are considering using Hybrid Cloud Extension to solve your hybrid cloud challenges, then this PDF would look great on any 4K monitor, or perhaps printed for the office wall!

Download the diagram here: VMware HCX Network Ports 1.1

The first thing to notice is that HCX abstracts the underlying vSphere architecture, which means there is no direct communication (e.g., vMotion) between vSphere ESXi hosts in the source and destination data centers.

VMware Hybrid Cloud Extension allows enterprises to overcome some of the challenges with moving to the cloud. Cloud Providers with the VMware Hybrid Cloud Extension (HCX) service can provide a true hybrid-cloud experience for workload portability, agility, disaster recovery and business continuity. This allows cloud providers to take the lead with hybrid cloud solutions, abstracting customer on-premises and cloud resources as one seamless cloud. No changes are required on the source network infrastructure, eliminating complexity for tenants of the cloud platform.

Don’t think of Hybrid Cloud Extension as just a workload migration tool. One of its fundamental components is the ability to provide a layer 2 extension of the customer data center to the cloud. This provides the basis for cross-cloud mobility, allowing any-to-any vSphere zero-downtime migrations, seamless disaster recovery, and hybrid architectures.

While migration is probably the most common use case, Hybrid Cloud Extension solves many other challenges. Firstly, unlike professional-services-led migrations that often require costly and time-consuming workload assessments, cloud providers can deliver an end-to-end service without much of the complexity involved in the past. Another use case is migration from legacy to next-generation environments, which is otherwise complex due to differing versions of the underlying infrastructure (vSphere).

To learn more about Hybrid Cloud Extension, visit https://cloud.vmware.com or contact your VMware partner business manager.

Also available in the VMware HCX User Manual.

 

VMware Cloud on AWS Base Reference Architecture for Managed Service Providers

Reference Architecture

VMware Cloud on AWS is an on-demand cloud service that enables you to run applications consistently in VMware vSphere-based cloud environments across AWS’s global infrastructure, with additional access to a broad range of native AWS services. Powered by VMware Cloud Foundation, this service integrates vSphere, vSAN and NSX along with VMware vCenter management, and is optimized to run on dedicated, elastic, bare-metal AWS infrastructure. With this service, IT teams can manage their cloud-based resources with familiar VMware tools and processes, wherever they are running.

With the recent release of VMware Cloud on AWS through the Managed Service Provider program, MSPs can now add the cloud service to their portfolio and offer it to their end-customers with advanced consulting and managed services to help accelerate successful adoption of the platform.

The reference architecture represents a base on which to offer VMware Cloud on AWS alongside your broader VMware Cloud Provider Platform services and your end-customers’ on-premises datacenter environments. The solution leverages IPsec VPN connectivity for both the management and compute layers, but could easily be adapted to leverage L2 VPN connectivity or Direct Connect where required.

Offering VMware Cloud on AWS through the MSP program gives both the cloud provider and their end-customers additional choice, flexibility, geographical coverage, and elastic scalability in a pay-as-you-go model. This, coupled with the advanced services available from native AWS, makes VMware Cloud on AWS a fantastic choice for expanding your managed services portfolio.

Integrating into cloud provider platform hosted vDCs

The Cloud Provider Platform is the core cloud services platform that the cloud provider offers to their end-customers for consuming compute, containers, networking, security, storage, applications, disaster recovery, backup and recovery, and so on.

The Cloud Provider Platform is based on the same technologies as VMware Cloud on AWS (vSphere, NSX and vSAN), with the addition of vCloud Director for multi-tenancy. With vCloud Director, each Virtual Datacenter is connected to an edge gateway for north/south network routing and advanced networking services. The cloud provider can connect the end-customer’s edge services gateway to the VMware Cloud on AWS compute gateway over either layer 2 (L2 VPN) or layer 3 (IPsec VPN). In the reference architecture we have leveraged layer 3 (IPsec VPN) for simplicity.

Integrating into the managed on-premises environment

The on-premises environment simply needs to be running VMware vSphere and have access to a VPN termination point. The VPN termination point can either be something that already exists in the end-customer’s environment, or the NSX standalone edge appliance can be leveraged to provide VPN services.

To support advanced features such as hybrid linked mode, the on-premises environment needs to be running vSphere 6.0 Update 3 patch c or later.

Building a professional services portfolio

Once you have a reference topology for how you are going to connect your customers into their newly provisioned VMware Cloud on AWS SDDCs, you can start to think about what professional services you would like to deliver to accelerate your customers’ on-boarding and their success with the cloud service.

Here are a few examples:

  • Connectivity and readiness – helping your customers connect their networking into the target environment, leveraging their existing investments.
  • Architecture and design – supporting your customers in architecting their cloud deployments to maximize business impact.
  • Develop, deploy and build – supporting your customers in enhancing their development lifecycles, environment management, build processes, application modernization, and so on.
  • Plan and migrate – supporting your customers in on-boarding workloads to the new VMware Cloud on AWS SDDC environments.

Building a managed services portfolio

A key differentiator of working with a cloud provider is being able to take advantage of their advanced managed services portfolio, which can now be extended across VMware Cloud on AWS.

Here are a few examples:

  • Application support – as well as providing support for the VMware Cloud on AWS environment, the MSP can offer advanced support and SLAs across their customers’ applications.
  • Patching and lifecycle – supporting customers with application patching and lifecycle management.
  • Proactive reporting – plugging the service into existing OSS and BSS systems to offer advanced capacity and performance reports.
  • Operate and optimize – operating the whole environment on the customer’s behalf and optimizing it for cost and performance.

Architecting for the core MSP use-cases

VMware Cloud on AWS is a unique cloud service that enables many use cases addressing your customers’ business drivers, from existing applications through to new cloud-native applications.

Here are a few example use-cases that you can help your customers architect as part of their cloud adoption business drivers:

  • Application migrations
  • Geographic expansion
  • Vertical extension
  • Disaster recovery
  • Elastic scalability
  • Application development
  • Application modernization

Call to action

To get started with VMware Cloud on AWS please visit https://cloud.vmware.com or contact your VMware partner business manager to discuss how you could add this managed service to your portfolio.

VMware Cloud on AWS – Managed Service Provider (MSP) Program

Without doubt, one of the most significant announcements from VMware during 2017 was the launch of VMware Cloud on AWS. While many enterprises and organizations are still deliberating their specific use cases for this service, it is absolutely clear that giving VMware customers and partners a vSphere cloud platform running on AWS hardware, with a low-latency, high-bandwidth interconnect into AWS’s native services, is highly appealing. This is because, as we all know, the public cloud is often not the most appropriate location for every workload type.

During 2018 we will see significant growth of this service across the multiple global regions in which it will be made available, in addition to gaining visibility into the wide-ranging customer use cases that will become key drivers for customer adoption.

Also in 2018, we will see VMware Cloud on AWS being made available through the VMware Cloud Provider ‘Managed Service Provider’ (MSP) program, allowing VMware’s cloud provider partners to deliver this service to their end consumers as part of a fully managed service offering.

For those of you who are unfamiliar with the concept of Managed Services, this is the practice of outsourcing IT services based on the proactive management offered through pre-defined service-level agreements. With this model, a cloud provider takes responsibility for IT functions, and also in many cases, acts as a trusted advisor to the consumer, offering strategic solutions for improving IT operations and reducing costs.

In the VMC on AWS managed service provider model, the cloud provider has direct oversight of the VMC on AWS organization, and the systems being managed. This allows the cloud provider to deliver the solution, with the consumer being provided with a service-level agreement that defines the performance and quality metrics based on the overall service provider offering, which might include multiple different components. The key differentiator of this solution is that the cloud provider maintains the relationship with the end consumer at all times, while being backed by VMware support services.

 

One of the key advantages to the end consumer is that this is an efficient way to stay up to date with technology trends, and to have access to all of the necessary skills to manage and maintain this truly hybrid solution, which in turn, minimizes risk. A recent survey [2017 State of Cloud Adoption and Security] identified that it is a lack of knowledge and expertise in cloud computing, rather than reluctance, which appears to be the main obstacle to cloud adoption for many corporate organizations. Therefore, as a value-added managed service provider, VMware Cloud Providers can evolve to offer a higher level of service and adopt service models that are tailored to meet the needs of these organizations. In addition, managing day-to-day IT processes and reducing related business costs can provide a significant advantage for consumer organizations, and also provides efficiency to cloud providers through the centralization of technical expertise.

As a result, VMware Cloud Providers can be instrumental as the IT infrastructure components of some corporations are migrated to the cloud, making it easier than ever for them to capture these workloads. Also, for cloud providers who have been providing in-house cloud services or acting as brokers for cloud service providers, the VMC on AWS solution takes this approach to a whole new level of integration, opening the door to integrated cross-cloud services that can meet the needs of the most demanding, complex or diverse applications. For instance, a VMC on AWS managed service provider might stretch applications across the boundaries of the hybrid solution, allowing tenants to build solutions that consume the best of both worlds, such as EC2/ECS applications querying an Oracle RAC database or SAP modules running on the VMC SDDC platform. There are unlimited use cases for customers to leverage solutions between the two environments, all of which can be provided as a fully managed infrastructure by a VMware Cloud Provider.

In all likelihood, the most common use cases and managed services offered on a VMC on AWS solution will revolve around the low-latency, high-bandwidth connectivity with AWS native services, and the disaster recovery solutions being made available through this offering. This takes application topologies and service development options beyond the capabilities of traditional VMware infrastructure. As a result, cloud provider managed services can be extended significantly, and might include a wide range of new offerings, such as:

  • Software – application production support and maintenance
  • Authentication solutions
  • Systems management
  • Secure mobile device management
  • Data backup and managed recovery services
  • Data storage, data warehouse and management
  • Network monitoring, overall operational management
  • End-to-end security services
  • Communications services (mail, phone, VoIP)
  • Managed video services

In addition, we also expect to see VMware cloud providers deploy VMC on AWS as a means of rapid deployment into new regions, versus building new co-locations, providing a significantly faster route to local markets. This use case will see VMware cloud providers deploying new infrastructure, while avoiding complex, expensive and time-consuming processes. Also, cloud providers who wish to provision one-off or multiple resources into a new global region, where AWS is present, can now do so in a matter of days, as opposed to months or years.

Also, managed cloud providers who wish to reduce their data center footprint and consolidate customer workloads, in what might be smaller regions, can employ VMC – reducing the need for some or all of their own facilities. Likewise, expanding resources for both short and long periods, based on the end consumer’s needs, delivers a new level of flexibility that cloud providers can offer. From the cloud provider’s perspective, this service delivers what you need, when you need it, with no upfront capital outlay – in effect, creating a cloud bursting model.

Managed disaster recovery services are also highly likely to be one of the key use cases for the managed cloud providers who offer this solution as part of their portfolio. Disaster Recovery-as-a-Service can, in a simplified architecture, deliver business continuity through an on-demand service solution, optimized by VMware Cloud on AWS. This solution allows VMware cloud providers to offer services that can provide the operationally consistent experience of a VMware data center, while also:

  • Accelerating time-to-protection
  • Simplifying disaster recovery operations
  • Reducing secondary site costs with cloud economics

This Disaster Recovery-as-a-Service is built, as you would expect, on established VMware solutions, including Site Recovery Manager, vSphere Replication, and optionally VMware vRealize Orchestrator, which together provide the application-centric runbook and remove the need for service consumers to maintain a dedicated disaster recovery data center.

Sold as an add-on service to VMware Cloud on AWS, the Disaster Recovery-as-a-Service solution offers multiple failover topologies, providing flexibility to both the end consumer and the cloud provider, as illustrated below:

 

In summary, the VMware Cloud on AWS solution provides VMware cloud service providers the means to offer a whole new range of service offerings based on the combined benefits of the VMware and AWS platforms, including:

  • Maintain your teams, tools & skills investments
  • Consumption-based economics
  • Unique service architecture options
  • Scale and elasticity with on-demand capacity and flexible consumption

It is important to recognise that VMware Cloud Providers are uniquely placed to seamlessly merge VMware SDDC platforms and native AWS solutions through the power of managed services, transforming entire IT service realities through a powerful combination of service offerings. However, to maximize the benefits of VMware Cloud on AWS, cloud service providers need a holistic cloud strategy, and a way to make it real. To get there, cloud providers need to be ready to act. For this reason, over the coming months, I will be working with many of VMware’s key cloud providers to develop new service offerings based on VMware Cloud on AWS architectures. For more information as these services become a reality, watch this space…

Martin Hosken | Principal Architect | Office of the CTO, Global Field
VCDX-DCV & VCDX-CMA | VCIX-DCV | vExpert
AWS Certified Solutions Architect – Professional

Virtualizing perimeter security in the Service Provider world

Perimeter security is one area of the Service Provider world that has not seen the same adoption of virtualization as servers, then networking, and latterly storage have. In this first post in a series, we’ll look at some of the hidden challenges, as well as the benefits, of bringing virtualization to the datacenter perimeter.

It should be no surprise that we’re big fans of virtualization here at VMware. Over our twenty-year history the question has changed from “what can we virtualize?” to “is there anything we can’t virtualize?”. During that time there have been big changes in other parts of our industry too. Gone are the custom hardware appliances, replaced by generic x86-based platforms. Take the trusty firewall or load balancer: once a collection of custom components and code, now a powerful x86-based “server” with generic but still highly performant interface cards, running a custom operating system or specialized Linux-based distribution.

In the Service Provider world, a physical appliance is a necessary evil. Necessary because it performs a crucial role, but evil because its physical nature means inventory, space, power and environmental challenges. Should the Service Provider carry stocks of different-sized devices, or order from their supply chain against each customer order? How many units are sufficient, and how should they be kept up to date with changing code releases while they sit on a shelf waiting to be deployed?

Since these appliances are now x86 devices with standard interfaces, they can be delivered as virtual appliances, deployable in much the same way as any other virtual machine. Like other virtual machines, that means they can be deployed as needed, and typically the latest version can be downloaded from the vendor’s website whenever it’s needed.

So, that’s great, problem solved! Deploy virtual perimeter firewalls, proxies or load-balancers whenever you need one. That was easy…

Except it’s not quite that simple. Let’s look at the traditional, physical, perimeter security model.

Over on the left we have the untrusted Internet connected to our perimeter firewall appliance. In this simple illustration, we won’t worry about dual vendor or defense in depth; we’ll treat the single firewall as an assured boundary device. That means that once the traffic leaves the inside of the firewall, we trust it enough to connect it to the compute platform where our virtualized workloads run. There’s a clear demarcation or boundary here. The Internet is on the outside, and our workloads are on the inside. We can see the appeal of virtualizing devices like the firewall for Service Providers though, so let’s look at the same illustration if we simply virtualize the firewall.

Although nothing much changes, the firewall is still an assured boundary between the untrusted traffic outside on the Internet and the trusted traffic inside. There is one subtle difference though. Now, the untrusted Internet traffic is connected, unfiltered, to the virtualization platform.

That little bit of red line is either an acceptable risk or a big deal, depending upon your point of view. I’ve presented this scenario to Service Providers for several years now, and it’s interesting to see how their responses have differed over that time, and in different countries. At first I would present the option as a “journey”, where different customers would become more comfortable with the idea over time. The challenge for the Provider was therefore how soon they could realize the benefits of virtualizing devices like this without their customers thinking that the security of their solution was somehow being compromised.

About a year or so into my presenting this scenario, I started on my “journey” explanation, when the Product owner at the Service Provider where I was presenting said, “our customers have reached the end of that journey already!” That Service Provider was already using NSX to virtualize their customer networks and had been explaining the benefits and capabilities of micro-segmentation and the Edge Service Gateway. Up until that point, their policy was to deploy a physical perimeter firewall, just like the one in the first illustration, and use the Edge Gateway as a “services” device, only providing load balancing and dynamic routing to their customer’s WAN. They offered the NSX Distributed Firewall as a second security layer in combination with the physical firewall. Their service offering looked like this.

Or at least it had, until their customers started to ask why they were being asked to pay for a physical firewall when the next device behind it was already a capable firewall. Those customers, happy with the idea of virtualizing anything that ran on x86 hardware, saw the service they were being offered as over-engineered, with three firewalls rather than the two the Service Provider described. Is there a way, then, to mitigate the risk of customers’ concerns over virtualizing network and security appliances? To a degree it depends upon the type of hardware platform a Service Provider is running, which will make any proposed solution more, or less, costly or complex. It also depends upon whether the Service Provider feels they need to demonstrate risk mitigation, or whether their customers will accept the new solution without complex re-engineering being necessary.

In our NSX design guides we recommend separate racks/clusters to run Edge Service Gateways, as this constrains the external VLAN backed networks to those “Edge” racks, and simplifies the remaining “compute” racks which only need to be configured with standard management and NSX VXLAN transport networks. If we look at the last solution with separate Edge compute, it looks like this.

While it would be possible to argue, as that Service Provider’s customers did, that there is no need for three firewalls, and simply remove the physical firewall, instead relying on the Edge Service Gateway, what if the security perimeter requirements were more complex? What if the customer required Internet facing Load balancers with features only present in third party products, or if they wanted to implement in-line proxies or other protocol analysis services such as data-loss prevention only possible through third party devices? Well, if we extend the scope of the Edge Cluster and make it a network and security services cluster our solution stack could look like this.

Now, there’s no untrusted traffic reaching our compute clusters, and although the network and security cluster does have an unfiltered Internet connection, all the virtualized workloads in that cluster are appliances specifically designed to operate in that kind of “DMZ” environment. A solution like this is straightforward to implement in a modular rack server / top-of-rack switched leaf-and-spine datacenter design. Some consideration may be necessary in a hyper-converged infrastructure (HCI) environment, where the balance of compute and storage requirements could be quite different between network-and-security and compute workloads, but otherwise it shouldn’t require major design changes.

In a datacenter based on chassis and blades, the challenge may be in creating a virtualization environment to run network and security workloads which is sufficiently “separate” to mitigate the perceived risk. Solutions which are limited to individual chassis may only require the provision of separate chassis, whereas those whose network fabric spans multiple chassis may require a different approach, possibly using separate rack-mounted servers to create network and security clusters outside of the compute workload chassis environment.

How much effort is necessary depends upon several factors. But in most cases, the benefits to the Service Provider, commercially, operationally, and most importantly in customer satisfaction and time-to-value, should provide a compelling argument for virtualizing those last few physical appliances without necessarily having to change vendors or compromise the services offered.

Service Providers running vCloud Director have a few different options for the deployment, management and operation of third party network and security appliances in their environments, and we’ll look at these in more detail in a follow-up post.

Introducing vCD CLI: Easy command line administration for vCloud Director

Easy consumption and developer friendliness are hallmarks of the cloud computing revolution.  On the vCloud Director team we know the market demands tools that make it easy for partners to manage clouds based on vSphere and for their customers to consume them.  With this in mind, it is our pleasure to introduce vCD CLI, a Python CLI for administering vCloud Director using short, easy-to-remember commands.

vCD CLI derives from Python code developed for vCloud Air, which was based on vCloud Director.  Starting in 2017, our colleague Paco Gomez began to reinvigorate the CLI code to support new vCloud Director versions up through version 9.1, the latest GA release.  The CLI code is now divided into two GitHub projects: vCD CLI and pyvcloud, a Python library for vCloud Director administration. More significantly, Paco enlisted the vCD engineering team to help. Thanks to work by a number of engineers led by Aashima Goel, vCD CLI now covers a substantial chunk of basic administrative operations.  In the process we dropped vCloud Air support and standardized on Python 3.  Our goal is quick and easy-to-understand administration on a code base that can evolve rapidly to support new features.

vCD CLI is fully open source and licensed under the Apache 2.0 license. You can install it with just a couple of commands on most platforms.  For the gory details, look at INSTALL.md, which has detailed installation instructions for Mac OS X, Linux, and Windows.  Meanwhile, here’s a typical example of deployment on Ubuntu.

# Install on Ubuntu 16.04 LTS.
sudo apt-get install python3-pip gcc -y
pip3 install --user vcd-cli

Once you have the code installed, it’s time to log in and start looking around.  vCD CLI has a wealth of commands for showing organizations, VDCs, vApps, catalogs, and the like, which makes it very helpful for navigating vCloud Director installations.  The following example logs in and gets a list of organizations.

$ vcd login vcd-91.test.vmware.com System administrator -i -w
Password: 
administrator logged in, org: 'System', vdc: ''
$ vcd org list
in_use   logged_in   name
-------- ----------- ------
True     True        System
False    False       Test1

As a side note, the preceding example used ‘vcd login’ with -i and -w options.  These suppress errors from self-signed certificates.  You don’t need them if your vCloud Director installation certificate is signed by a public CA.

Once logged in, we can select a particular organization with ‘vcd org use’ and dig down into its resources.  The following example shows commands to list VDCs and vApps.

$ vcd org use Test1
now using org: 'Test1', vdc: 'VDC-A', vApp: ''.
$ vcd vdc list
in_use   name   org
-------- ------ -----
True     VDC-A  Test1
$ vcd vapp list
isDeployed   isEnabled   memoryAllocationMB   name            numberOfCpus   numberOfVMs   ownerName   status      storageKB   vdcName
------------ ----------- -------------------- --------------- -------------- ------------- ----------- ----------- ----------- ---------
true         true                          48 vApp-Tiny-Linux              1             1 system      POWERED_OFF     1048576 VDC-A

Scrolling over a bit we see that our vApp is powered off.  Let’s fix that right away by issuing a power-on command, which is ‘vcd vapp power-on.’  As you can see, vCD CLI commands are hierarchical with the form ‘vcd <entity> [ <subentity> … ] <operation> <arguments>.’  In the case of ‘vcd vapp’ alone there are over 20 commands, so you have a wide range of management operations available.

$ vcd vapp power-on vApp-Tiny-Linux
vappDeploy: Starting Virtual Application vApp-Tiny-Linux(66d7f94f-4bbc-4597-a5fe-70f35b05acfb)
...
vappDeploy: Running Virtual Application vApp-Tiny-Linux(66d7f94f-4bbc-4597-a5fe-task: e88f9ed8-67fe-4d8d-af20-8edb510051c7, 
Running Virtual Application vApp-Tiny-Linux(66d7f94f-4bbc-4597-a5fe-70f35b05acfb), result: success
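Because every listing command emits a plain-text table, batch operations are easy to script.  Here is a minimal sketch of that idea (a hypothetical helper, not part of vCD CLI itself); the sample table is hard-coded from the listing above so the snippet runs without a live vCloud Director, but in practice you would pipe the live output of ‘vcd vapp list’ straight into awk:

```shell
# Sample 'vcd vapp list' output, captured above; in practice pipe the live
# command output into awk instead of echoing a saved copy.
sample_output='isDeployed isEnabled memoryAllocationMB name numberOfCpus numberOfVMs ownerName status storageKB vdcName
true true 48 vApp-Tiny-Linux 1 1 system POWERED_OFF 1048576 VDC-A'

# Field 8 is the status column and field 4 the vApp name; skip the header row.
echo "$sample_output" | awk 'NR > 1 && $8 == "POWERED_OFF" {print "vcd vapp power-on " $4}'
```

Piping the generated commands through ‘bash’ (or wrapping them in a loop) would then power on every stopped vApp in the organization.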

Speaking of management, being able to set permissions easily on resources like vApps or catalog items is a long-standing request from vCloud Director users.  vCD CLI delivers a solution.  Here’s a simple example of sharing a catalog with the rest of the organization.

$ vcd catalog acl list My-Catalog
subject_name       subject_type   access_level
------------------ -------------- --------------
Test1 (org_in_use) org            None
$ vcd catalog acl share My-Catalog
Catalog shared to all members of the org 'Test1'.
$ vcd catalog acl list My-Catalog
subject_name       subject_type   access_level
------------------ -------------- --------------
Test1 (org_in_use) org            ReadOnly

vCD CLI has even more fine-grained control over ACLs than this example shows.  Run ‘vcd catalog acl -h’ or ‘vcd vapp acl -h’ to see the richness of available commands.  You can also manage rights and roles using ‘vcd right’ and ‘vcd role’.  There’s a lot of power here to do operations that would take far longer going through the vCloud Director GUI.

Speaking of powerful commands, it would be remiss to omit my favorite vCD CLI operation, namely uploading OVA files directly into vCloud Director catalogs. ‘vcd catalog upload’ allows you to skip installation of ovftool and upload using intuitive options. Here’s an example of loading an OVA and starting it as a vApp.

$ vcd catalog upload My-Catalog photon-custom-hw11-2.0-304b817.ova 
upload 113,169,920 of 113,169,920 bytes, 100%
property   value
---------- ----------------------------------
file       photon-custom-hw11-2.0-304b817.ova
size       113207424
$ vcd catalog list My-Catalog
catalogName   entityType   isPublished   name                               ownerName   status   storageKB   vdcName
------------- ------------ ------------- ---------------------------------- ----------- -------- ----------- ---------
My-Catalog    vapptemplate false         photon-custom-hw11-2.0-304b817.ova system      RESOLVED       16384 VDC-A
My-Catalog    vapptemplate false         Tiny-Linux                         system      RESOLVED        1024 VDC-A
$ vcd vapp create Photon-2.0-Vapp \
  --description 'Test vApp' --catalog My-Catalog \
  --template photon-custom-hw11-2.0-304b817.ova \
  --network isolated-network-1 --ip-allocation-mode pool \
  --accept-all-eulas

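The upload and create steps combine naturally into a small reusable function.  The sketch below assumes an active ‘vcd login’ session and simply reuses the network name and IP allocation options from the example above; ‘deploy_ova’ is a name invented for this example.

```shell
# Upload an OVA, instantiate it as a vApp, then power it on in one call.
# The network name and IP allocation mode are placeholders taken from the
# example above; adjust them for your org VDC.
deploy_ova() {
  local catalog="$1" ova="$2" vapp="$3"
  vcd catalog upload "$catalog" "$ova" || return 1
  vcd vapp create "$vapp" \
    --description "Deployed from $ova" \
    --catalog "$catalog" --template "$(basename "$ova")" \
    --network isolated-network-1 --ip-allocation-mode pool \
    --accept-all-eulas || return 1
  vcd vapp power-on "$vapp"
}

# Usage: deploy_ova My-Catalog photon-custom-hw11-2.0-304b817.ova Photon-2.0-Vapp
```
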
Finally, a quick word about scripting.  vCD CLI commands return standard Unix-style exit codes: 0 for success and non-zero for failure.  You can embed commands in shell scripts and use techniques like the Bash ‘set -e’ option to terminate automatically on failure.  For example, the following script will exit at the ‘vcd org use’ command if the organization does not exist.

#!/bin/bash
ORG=$1
set -e
vcd login vcd-91.test.vmware.com System administrator -i -w --password='my-pass'
vcd org use ${ORG}
vcd user list

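If ‘set -e’ is too blunt, for instance when you want to print a friendlier message first, you can test the exit code yourself.  A sketch using the same commands, with ‘use_org_or_die’ being a helper name invented for this example:

```shell
# Check the exit code explicitly instead of relying on 'set -e'.
use_org_or_die() {
  if ! vcd org use "$1"; then
    echo "Organization '$1' not found; aborting." >&2
    return 1
  fi
  vcd user list
}

# Usage: use_org_or_die Test1
```
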
There are so many commands available in vCD CLI that it is not possible to do them justice in a brief article like this one. Instead, have a look at the following documentation sources.

  • CLI help, which is available on all vcd commands.  ‘vcd -h’ shows all commands, ‘vcd vapp -h’ shows all vApp commands, etc.
  • The vCD CLI Site, which has abundant documentation for all commands as well as procedures like installation.
  • The vCD CLI Github project.  The Python3 sources are quite readable.

We are actively working on vCD CLI as well as the underlying pyvcloud library. You can expect to see new features, especially around networking and edge router management.  You may also see a bug or two, as they like to live in new code.  If you do hit a problem just log an issue on GitHub or–even better–fix it yourself in the code and send us a pull request.  The details for both are in CONTRIBUTING.md.

We hope you enjoy using vCD CLI.  Send us feedback and fixes–we look forward to hearing from you!

VMware Horizon 7.4 Network Ports for Cloud Pod Architecture


Earlier this month (January 2018) VMware released Horizon 7.4, and with that I wanted to share some updates regarding the network port requirements. My good colleagues over in the EUC Technical Marketing team are doing a fine job of maintaining the diagram and have recently published a white paper PDF, which you’ll find here. It’s a beast of a document and highly recommended if you are deploying a VMware Horizon architecture in your environment.

An important consideration when using this network ports diagram is that it doesn’t necessarily contain all non-VMware-related ports such as Active Directory, DNS, NTP, SMB and so on. In fact, one of my colleagues in the Office of the CTO mentioned this, since one of his customers ran into an issue where TCP port 135 was blocked, but this port is required when joining a Pod to a federation (Cloud Pod Architecture). I thought this would be a good opportunity to describe what Cloud Pod Architecture is doing behind the scenes and provide some updates. Continue reading

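As a quick preflight before joining a Pod to a federation, a connectivity check like the following can confirm TCP 135 is open.  This is a sketch assuming a Linux host with Bash and coreutils; the host name is a placeholder for one of your Connection Servers.

```shell
# Succeeds when a TCP connection to host $1, port $2 opens within 3 seconds.
# Uses Bash's built-in /dev/tcp redirection, so no extra tools are needed.
check_port() {
  timeout 3 bash -c ">/dev/tcp/$1/$2" 2>/dev/null
}

check_port cs01.example.com 135 \
  && echo "TCP 135 reachable" \
  || echo "TCP 135 blocked or host unreachable"
```
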
Dedicated Hosted Cloud with vCloud Director for VMware Cloud Providers

When looking for service providers for hosted infrastructure, some customers require dedicated infrastructure for their workloads. Whether the customer is looking for additional separation for security or more predictable performance of hosted workloads, service providers need tools that let them offer dedicated hardware to customers while reducing their operational overhead. In some scenarios, providers will implement managed vSphere environments for customers to satisfy this type of request and then manage the individual vSphere environments manually or with custom automation and orchestration tools. However, it is also possible to leverage vCloud Director to provide dedicated hardware per customer while also giving service providers a central platform for managing multiple tenants. In this post, we will explore how this can be accomplished with ‘out of the box’ functionality in vCloud Director.

Continue reading

Deploying Cassandra for vCloud Availability Part 2

In the previous post, we reviewed the preparation steps necessary for the installation of Cassandra for use with vCloud Availability. In this post we will complete the deployment by showing the steps necessary to install Cassandra and then configure Cassandra for secure communication as well as clustering the 3 nodes. This post assumes basic proficiency with the ‘vi’ text editor.

Installing & Configuring Cassandra

For this example, the DataStax version of Cassandra will be deployed. To prepare the server for Cassandra, create the datastax.repo file in the /etc/yum.repos.d directory with the following command:

vi /etc/yum.repos.d/datastax.repo

Then input the DataStax repo details into the file.

[datastax]
name = DataStax Repo for Apache Cassandra
baseurl = https://rpm.datastax.com/community
enabled = 1
gpgcheck = 0

Once the repo details have been correctly entered, press the ESC key, then type :wq! to write the file and exit.

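Equivalently, the repo file can be created non-interactively with a heredoc, which is handy when scripting node builds.  A sketch that writes the file to the working directory first; copy it into /etc/yum.repos.d as root afterwards.

```shell
# Write the repo definition in one step instead of editing with vi.
cat > datastax.repo <<'EOF'
[datastax]
name = DataStax Repo for Apache Cassandra
baseurl = https://rpm.datastax.com/community
enabled = 1
gpgcheck = 0
EOF

# Then move it into place (requires root):
#   cp datastax.repo /etc/yum.repos.d/datastax.repo
grep -q 'rpm.datastax.com' datastax.repo && echo "repo file written"
```
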
Continue reading

Deploying Cassandra for vCloud Availability Part 1

With the recent release of vCloud Availability for vCloud Director 2.0, it seems like a good opportunity to review the steps for one of the key components required for its installation, the Cassandra database cluster.  While the vCloud Availability installation provides a container-based deployment of Cassandra, this container instance is only meant for ‘proof of concept’ deployments.

To support a production implementation of vCloud Availability, a fully clustered instance of Cassandra must be deployed with a recommended minimum of 3 nodes. This post will outline the steps for prepping the nodes for the installation of Cassandra. These preparation steps consist of:

  • Installation of Java JDK 8
  • Installation of Python 2.7

This post assumes basic proficiency with the ‘vi’ text editor.

Infrastructure Considerations

Before deploying the Cassandra nodes for vCloud Availability, ensure that:

  • All nodes have access to communicate with the vSphere Cloud Replication Service over ports 9160 and 9042.
  • DNS is properly configured so that each node can successfully be resolved by the respective FQDN.

It is also worth mentioning that for this implementation, Cassandra does not require a load balancer, as the vSphere Cloud Replication Service will automatically select an available node from the Cassandra cluster for database communications.

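The DNS requirement above can be verified up front with a short loop on each node.  A sketch in which the FQDNs are placeholders for your three Cassandra nodes; ‘check_dns’ is a helper name invented for this example.

```shell
# Confirm every cluster node resolves before installing Cassandra.
# getent consults the same resolver order (hosts file, then DNS)
# that the node itself will use.
check_dns() {
  getent hosts "$1" > /dev/null
}

for node in cass01.example.com cass02.example.com cass03.example.com; do
  check_dns "$node" && echo "$node resolves" || echo "$node does NOT resolve"
done
```
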
Continue reading