Monthly Archives: March 2018

Virtualizing perimeter security in the Service Provider world

Perimeter security is one area of the Service Provider world that has not seen the same adoption of virtualization as first servers, then networking and, more recently, storage. In this first post in a series we'll look at some of the hidden challenges, as well as the benefits, of bringing virtualization to the datacenter perimeter.

It should be no surprise that we're big fans of virtualization here at VMware. Over our twenty-year history the question has changed from "what can we virtualize?" to "is there anything we can't virtualize?". During that time there have been big changes in other parts of our industry too. Gone are the custom hardware appliances, replaced by generic x86-based platforms. Take the trusty firewall or load balancer: once a collection of custom components and code, it is now a powerful x86-based "server" with generic but still highly performant interface cards, running a custom operating system or a specialized Linux-based distribution.

In the Service Provider world, a physical appliance is a necessary evil. Necessary because it performs a crucial role, but evil because its physical nature brings inventory, space, power and environmental challenges. Should the Service Provider carry stocks of different-sized devices, or order from their supply chain against each customer order? How many units are sufficient, and how should they be kept up to date with changing code releases while they sit on a shelf waiting to be deployed?

Since these appliances are now x86 devices with standard interfaces, they can be delivered as virtual appliances which can be deployed in much the same way as any other virtual machine. Like other virtual machines, that means they can be deployed as needed, and typically the latest version can be downloaded from the vendor's website whenever it's needed.

So, that’s great, problem solved! Deploy virtual perimeter firewalls, proxies or load-balancers whenever you need one. That was easy…

Except it’s not quite that simple. Let’s look at the traditional, physical, perimeter security model.

Over on the left we have the untrusted Internet connected to our perimeter firewall appliance. In this simple illustration we won't worry about dual vendor or defense in depth; we'll treat the single firewall as an assured boundary device. That means that once traffic leaves the inside of the firewall, we trust it enough to connect it to the compute platform where our virtualized workloads run. There's a clear demarcation or boundary here: the Internet is on the outside, and our workloads are on the inside. We can see the appeal of virtualizing devices like the firewall for Service Providers, though, so let's look at the same illustration if we simply virtualize the firewall.

Not much changes: the firewall is still an assured boundary between the untrusted traffic outside on the Internet and the trusted traffic inside. There is one subtle difference, though. Now the untrusted Internet traffic is connected, unfiltered, to the virtualization platform.

That little bit of red line is either an acceptable risk or a big deal, depending upon your point of view. I've presented this scenario to Service Providers for several years now, and it's interesting to see how their responses have differed over that time and in different countries. At first I would present the option as a "journey", where different customers would become more comfortable with the idea over time. The challenge for the Provider, therefore, was how soon they could realize the benefits of virtualizing devices like this without their customers thinking that the security of their solution was somehow being compromised.

About a year into presenting this scenario, I had started on my "journey" explanation when the product owner at the Service Provider where I was presenting said, "our customers have reached the end of that journey already!" That Service Provider was already using NSX to virtualize their customer networks and had been explaining the benefits and capabilities of micro-segmentation and the Edge Service Gateway. Up until that point, their policy was to deploy a physical perimeter firewall, just like the one in the first illustration, and use the Edge Gateway as a "services" device, providing only load balancing and dynamic routing to their customers' WAN. They offered the NSX Distributed Firewall as a second security layer in combination with the physical firewall. Their service offering looked like this.

Or at least it had done, until their customers started to ask why they were being asked to pay for a physical firewall when the next device behind it was already a capable firewall. Those customers, happy with the idea of virtualizing anything that ran on x86 hardware, saw the service they were being offered as over-engineered, with three firewalls rather than the two which the Service Provider described. Is there a way, then, to mitigate the risk of customers' concerns over virtualizing network and security appliances? To a degree it depends upon the type of hardware platform a Service Provider is running, which will make any proposed solution more, or less, costly or complex. It also depends upon whether the Service Provider feels they need to demonstrate risk mitigation, or whether their customers will accept the new solution without complex re-engineering being necessary.

In our NSX design guides we recommend separate racks/clusters to run Edge Service Gateways, as this constrains the external VLAN-backed networks to those "Edge" racks and simplifies the remaining "compute" racks, which only need to be configured with standard management and NSX VXLAN transport networks. If we look at the last solution with separate Edge compute, it looks like this.

It would be possible to argue, as that Service Provider's customers did, that there is no need for three firewalls, and simply remove the physical firewall, relying instead on the Edge Service Gateway. But what if the security perimeter requirements were more complex? What if the customer required Internet-facing load balancers with features only present in third-party products, or wanted to implement in-line proxies or other protocol analysis services, such as data-loss prevention, only possible through third-party devices? Well, if we extend the scope of the Edge cluster and make it a network and security services cluster, our solution stack could look like this.

Now there's no untrusted traffic reaching our compute clusters, and although the network and security cluster does have an unfiltered Internet connection, all the virtualized workloads in that cluster are appliances specifically designed to operate in that kind of "DMZ" environment. A solution like this is straightforward to implement in a datacenter built on a modular rack-server, top-of-rack-switched leaf-and-spine design. Some consideration may be necessary in a hyper-converged infrastructure (HCI) environment, where the balance of compute and storage requirements could be quite different between the network and security workloads and the compute workloads, but otherwise it shouldn't require major design changes.

In a datacenter based on chassis and blades, the challenge may be in creating a virtualization environment to run network and security workloads which is sufficiently “separate” to mitigate the perceived risk. Solutions which are limited to individual chassis may only require the provision of separate chassis, whereas those whose network fabric spans multiple chassis may require a different approach, possibly using separate rack-mounted servers to create network and security clusters outside of the compute workload chassis environment.

How much effort is necessary depends upon several factors. But in most cases the benefits to the Service Provider, commercially, operationally and, most importantly, in customer satisfaction and time-to-value, should provide a compelling argument for virtualizing those last few physical appliances without necessarily having to change vendors or compromise the services offered.

Service Providers running vCloud Director have a few different options for the deployment, management and operation of third-party network and security appliances in their environments, and we'll look at these in more detail in a follow-up post.

Introducing vCD CLI: Easy command line administration for vCloud Director

Easy consumption and developer friendliness are hallmarks of the cloud computing revolution.  On the vCloud Director team we know the market demands tools to make it easy for partners to manage clouds based on vSphere and for their customers to consume them.  With this in mind it is our pleasure to introduce vCD CLI, a Python CLI to administer vCloud Director using short, easy-to-remember commands.

vCD CLI derives from Python code developed for vCloud Air, which was based on vCloud Director.  Starting in 2017 our colleague Paco Gomez began to reinvigorate the CLI code to support new vCloud Director versions up through version 9.1, the latest GA release.  The CLI code is now divided into two GitHub projects: vCD CLI and pyvcloud, a Python library for vCloud Director administration. More significantly, Paco enlisted the vCD engineering team to help. Thanks to work by a number of engineers led by Aashima Goel, vCD CLI covers a substantial chunk of basic administrative operations.  In the process we dropped vCloud Air support and standardized on Python3.  Our goal is quick and easy-to-understand administration on a code base that can evolve rapidly to support new features.

vCD CLI is fully open source and licensed under the Apache 2.0 license. You can install it with just a couple of commands on most platforms.  For the gory details, see INSTALL.md, which has detailed installation instructions for Mac OS X, Linux, and Windows.  Meanwhile, here's a typical example of deployment on Ubuntu.

# Install on Ubuntu 16.04 LTS.
sudo apt-get install python3-pip gcc -y
pip3 install --user vcd-cli
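
One note on the --user install: on Linux, pip places the vcd executable under ~/.local/bin, which may not be on your PATH by default.  A quick sanity check, assuming that default location, looks like this.

# pip3 --user installs land in ~/.local/bin; add it to PATH if 'vcd' isn't found.
export PATH="$HOME/.local/bin:$PATH"
vcd -h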

Once you have the code installed it’s time to login and start looking around.  vCD CLI has a wealth of commands for showing organizations, VDCs, vApps, catalogs, and the like, which makes it very helpful for navigating vCloud Director installations.  The following example logs in and gets a list of organizations.

$ vcd login vcd-91.test.vmware.com System administrator -i -w
Password: 
administrator logged in, org: 'System', vdc: ''
$ vcd org list
in_use   logged_in   name
-------- ----------- ------
True     True        System
False    False       Test1

As a side note, the preceding example used ‘vcd login’ with -i and -w options.  These suppress errors from self-signed certificates.  You don’t need them if your vCloud Director installation certificate is signed by a public CA.

Once logged in, we can select a particular organization with ‘vcd org use’ and dig down into its resources.  The following example shows commands to list VDCs and vApps.

$ vcd org use Test1
now using org: 'Test1', vdc: 'VDC-A', vApp: ''.
$ vcd vdc list
in_use   name   org
-------- ------ -----
True     VDC-A  Test1
$ vcd vapp list
isDeployed   isEnabled   memoryAllocationMB   name            numberOfCpus   numberOfVMs   ownerName   status      storageKB   vdcName
------------ ----------- -------------------- --------------- -------------- ------------- ----------- ----------- ----------- ---------
true         true                          48 vApp-Tiny-Linux              1             1 system      POWERED_OFF     1048576 VDC-A

Scrolling over a bit we see that our vApp is powered off.  Let’s fix that right away by issuing a power-on command, which is ‘vcd vapp power-on.’  As you can see, vCD CLI commands are hierarchical with the form ‘vcd <entity> [ <subentity> … ] <operation> <arguments>.’  In the case of ‘vcd vapp’ alone there are over 20 commands, so you have a wide range of management operations available.

$ vcd vapp power-on vApp-Tiny-Linux
vappDeploy: Starting Virtual Application vApp-Tiny-Linux(66d7f94f-4bbc-4597-a5fe-70f35b05acfb)
...
vappDeploy: Running Virtual Application vApp-Tiny-Linux(66d7f94f-4bbc-4597-a5fe-70f35b05acfb)
task: e88f9ed8-67fe-4d8d-af20-8edb510051c7, Running Virtual Application vApp-Tiny-Linux(66d7f94f-4bbc-4597-a5fe-70f35b05acfb), result: success
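
To double-check the result, re-running the listing from earlier should now show the vApp with a POWERED_ON status (output omitted here).

$ vcd vapp list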

Speaking of management, being able to set permissions easily on resources like vApps or catalog items is a long-standing request from vCloud Director users.  vCD CLI delivers a solution.  Here’s a simple example of sharing a catalog with the rest of the organization.

$ vcd catalog acl list My-Catalog
subject_name       subject_type   access_level
------------------ -------------- --------------
Test1 (org_in_use) org            None
$ vcd catalog acl share My-Catalog
Catalog shared to all members of the org 'Test1'.
$ vcd catalog acl list My-Catalog
subject_name       subject_type   access_level
------------------ -------------- --------------
Test1 (org_in_use) org            ReadOnly

vCD CLI has even more fine-grained control over ACLs than this example shows.  Run ‘vcd catalog acl -h’ or ‘vcd vapp acl -h’ to see the richness of available commands.  You can also manage rights and roles using ‘vcd right’ and ‘vcd role’.  There’s a lot of power here to do operations that would take far longer going through the vCloud Director GUI.
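
A quick way to survey these command groups is to lean on the built-in help, which, as noted in the documentation list below, is available on every vcd command; for example:

$ vcd catalog acl -h
$ vcd vapp acl -h
$ vcd right -h
$ vcd role -h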

Speaking of powerful commands, it would be remiss not to mention my favorite vCD CLI operation, namely uploading OVA files directly into vCloud Director catalogs. 'vcd catalog upload' allows you to skip installation of ovftool and upload using intuitive options. Here's an example of loading an OVA and starting it as a vApp.

$ vcd catalog upload My-Catalog photon-custom-hw11-2.0-304b817.ova 
upload 113,169,920 of 113,169,920 bytes, 100%
property   value
---------- ----------------------------------
file       photon-custom-hw11-2.0-304b817.ova
size       113207424
$ vcd catalog list My-Catalog
catalogName   entityType   isPublished   name                               ownerName   status   storageKB   vdcName
------------- ------------ ------------- ---------------------------------- ----------- -------- ----------- ---------
My-Catalog    vapptemplate false         photon-custom-hw11-2.0-304b817.ova system      RESOLVED       16384 VDC-A
My-Catalog    vapptemplate false         Tiny-Linux                         system      RESOLVED        1024 VDC-A
$ vcd vapp create Photon-2.0-Vapp \
  --description 'Test vApp' --catalog My-Catalog \
  --template photon-custom-hw11-2.0-304b817.ova \
  --network isolated-network-1 --ip-allocation-mode pool \
  --accept-all-eulas
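
Once the vApp has been created, it can be started with the same power-on command we used earlier, for example:

$ vcd vapp power-on Photon-2.0-Vapp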

Finally, a quick word about scripting.  vCD CLI commands return standard Unix-style return codes, with 0 for success and non-zero for failures. You can embed commands in shell scripts and use techniques like the Bash 'set -e' command to terminate automatically on failure.  For example, the following script will exit at the 'vcd org use' command if the organization does not exist.

#!/bin/bash
# List the users in the organization passed as the first argument.
ORG=$1
# Abort immediately if any subsequent command returns a non-zero exit code.
set -e
vcd login vcd-91.test.vmware.com System administrator -i -w --password='my-pass'
vcd org use ${ORG}
vcd user list
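
Saved as, say, list-users.sh (the filename here is just for illustration), the script can then be run against any organization.

# Hypothetical invocation of the script above, saved as list-users.sh.
chmod +x list-users.sh
./list-users.sh Test1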

There are so many commands available in vCD CLI that it is not possible to do them justice in a brief article like this one. Instead, have a look at the following documentation sources.

  • CLI help, which is available on all vcd commands.  ‘vcd -h’ shows all commands, ‘vcd vapp -h’ shows all vApp commands, etc.
  • The vCD CLI Site, which has abundant documentation for all commands as well as procedures like installation.
  • The vCD CLI Github project.  The Python3 sources are quite readable.

We are actively working on vCD CLI as well as the underlying pyvcloud library. You can expect to see new features, especially around networking and edge router management.  You may also see a bug or two, as they like to live in new code.  If you do hit a problem, just log an issue on GitHub or, even better, fix it yourself in the code and send us a pull request.  The details for both are in CONTRIBUTING.md.

We hope you enjoy using vCD CLI.  Send us feedback and fixes; we look forward to hearing from you!