The Best of Docker and the Best of VMware

vSphere Integrated Containers (VIC) is changing the game of containers for VMware customers. The ability to consume VMware ESXi resources directly, to leverage VMware’s best-in-class virtual machine (VM) management tools like vRealize Operations, and to create segmented, multi-tenant ‘Docker enabled’ API endpoints are all compelling reasons to pull down the platform and take a serious look.

But can you take VIC for a test drive if you can’t easily deploy the main VIC appliance? If you just have the VIC engine binaries, can you start there? What if you don’t have a vCenter readily available, but you do have a single host? Is there an easy path forward?

In this post, I’m going to take you on a textual adventure as we pull down the VIC engine binaries, install and configure VIC on an individual host, and explore a few of the commands to see how quickly we can get started consuming vSphere Integrated Containers in a simple environment. We’ll be working with the actual VIC-Machine binaries in this post, downloading them directly from Bintray.

Where Do We Start?

First, we will head over to Bintray (https://bintray.com/vmware/vic), where VMware drops the builds for the VIC engine.

Click through and download the binaries (v1.1.1-rc2 at the time of this writing).

Note: In a typical vCenter deployment we would head over to the official VIC download page, http://www.vmware.com/go/download-vic, and download the OVA to install the VIC appliance.

This appliance holds the most recent binaries for VIC; the same ones we are downloading from Bintray. It also has the benefit of hosting the VMware-driven open source projects Admiral and Harbor, which are a Docker management tool and an image repository platform, respectively. Unfortunately, deploying the appliance to a single host with no vCenter has some challenges, so for simplicity, and to avoid going too far into the weeds, we are going to work directly with the VIC binary downloads.

Once we download the tar.gz file, we’ll extract it. We’re working with Windows, so we will use something like WinRAR or 7-Zip to extract the file. On Linux we’d use the “tar -xzvf” command to extract the contents.
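For reference, assuming the archive downloaded with a name like vic_v1.1.1-rc2.tar.gz (the exact filename may differ depending on the build you grab), the Linux extraction would look something like this:

tar -xzvf vic_v1.1.1-rc2.tar.gz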

Once extracted, we’re going to jump into the command line to do our actual VIC install. Go ahead and open a command prompt and browse to the directory you extracted to. Within this directory, you should see our VIC files. We’re most interested in “VIC-Machine-Windows.exe”. There are executables for Linux and Mac as well, but since we’re working with Windows we won’t need them here.

When we run VIC-Machine-Windows.exe we’re presented with a pretty simple response showing the possible switches we can use.

For the purposes of this article we’re going to focus on create, delete and update.

The first command we need to run will open the ESXi firewall for management traffic over port 2377. We’ll use vic-machine-windows.exe update firewall to accomplish this. If we run the same command with --help on the end, we can see the switches we will need to use to automatically update the firewall. For our environment, we’ll run the following:

vic-machine-windows update firewall --target 192.168.1.10 --user root --password VMware123! --allow --thumbprint=8C:E8:6B:36:8F:03:E0:C4:C5:AB:B7:A5:F2:94:EA:82:D1:69:AC:56

Note: You likely won’t have the thumbprint ID the first time. You’ll need to run the command WITHOUT the thumbprint first; it will produce an error and provide you the ID, and then you’ll run it again with the flag set. Once it succeeds, you should see the response shown below.
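In practice, that first run is just the same command with the thumbprint flag left off, something like the line below; the resulting error message includes the host’s thumbprint, which you then paste into --thumbprint on the second run.

vic-machine-windows update firewall --target 192.168.1.10 --user root --password VMware123! --allow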

Excellent, our rules are in place, and we can move forward!

Let’s run a “VIC-Machine-Windows.exe create --help”.

I’ve pre-created the install command we’re going to use to deploy VIC in this case:

vic-machine-windows.exe create --target 192.168.1.10 --user root --name hlvchost02 --tls-cname hlvchost02.humblelab.com --image-store Local-DS01/image-store --volume-store Local-DS01:vols --bridge-network bridge-pg --public-network "VM Network" --public-network-gateway 192.168.1.1 --public-network-ip 192.168.1.62/24 --dns-server 192.168.1.5 --no-tlsverify --thumbprint=8C:E8:6B:36:8F:03:E0:C4:C5:AB:B7:A5:F2:94:EA:82:D1:69:AC:56

Once this command runs, if all goes well, you’ll be presented with a screen similar to the following:

Let’s step through this command a bit so we can understand what’s happening:

--target: The ESXi host (or vCenter) we are deploying to
--user: The account we authenticate with
--name: The name of our virtual container host (VCH) VM in ESXi
--tls-cname: The common name for our self-signed certificate
--image-store: The datastore location for our container images
--volume-store: Our volume store location, so we can leverage Docker volumes for persistent storage
--bridge-network: The network used for container-to-container communication. We created this port group with no attached NICs, and it lives on its own
--public-network: The network where our primary traffic will live; our VCH’s address sits here
--public-network-ip: The IP for our VCH
--public-network-gateway: The gateway for our LAN network
--dns-server: Our DNS server
--no-tlsverify: Disables client certificate verification (the endpoint still serves TLS, it just doesn’t require client certificates)
--thumbprint: The certificate thumbprint for our ESXi host
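Before we jump over to Docker, it’s worth noting that the same binary can list the VCHs it knows about. Assuming your build includes the ls subcommand (the no-argument output we looked at earlier shows exactly which subcommands you have), a quick sanity check looks something like this:

vic-machine-windows.exe ls --target 192.168.1.10 --user root --password VMware123! --thumbprint=8C:E8:6B:36:8F:03:E0:C4:C5:AB:B7:A5:F2:94:EA:82:D1:69:AC:56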

With our VCH deployed, we can use “docker -H 192.168.1.62:2376 --tls info” from any system with Docker installed on it to check the status of the API endpoint. If all is well, you should see an output similar to the following:

Also, if we inspect our ESXi host via the HTML5 interface we will see a new VM created that represents our Docker endpoint.
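As a small convenience (not required for anything in this walkthrough), the Docker client also honors the standard DOCKER_HOST environment variable, so you can skip typing -H on every command. On Windows that would look something like this:

set DOCKER_HOST=tcp://192.168.1.62:2376
docker --tls info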

Now, we will test our deployment by running a docker run command. Let’s go ahead and spin up a busybox container, since these are quick and easy to deploy. We’ll use the following command:

docker -H ouripaddress:2376 --tls run -it busybox

Quick summary of what this command does:

Run docker against a remote host; our endpoint
Use TLS
Run in interactive mode so we can hit a command prompt
Run the busybox image from Docker Hub

If all goes well, you should end with the following result:

Also if we check our ESXi host, we’ll see a new VM object was created for this image with the randomly generated name of “laughing_pasteur”.

Now, when we exit the interactive container session by typing “exit” at the shell prompt, we will see the container VM power itself off within our ESXi host.
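If you want to bring that same container back up later rather than creating a new one, a standard docker start against the endpoint should do the trick (this assumes your VIC build supports start, and uses the generated name we saw above):

docker -H ouripaddress:2376 --tls start laughing_pasteur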

Taking it one step further, when we clean up after ourselves with the “docker -H ouripaddress:2376 --tls rm laughing_pasteur” command, we’ll see the container removed from our ESXi host entirely!
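To confirm the cleanup from the Docker side as well, listing all containers (including stopped ones) against the endpoint should come back empty:

docker -H ouripaddress:2376 --tls ps -a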

And there we have it!
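One last housekeeping note: if you ever want to remove the VCH itself, the delete subcommand we called out earlier handles it. A rough sketch for our environment (double-check delete --help for the exact flags in your build) would be:

vic-machine-windows.exe delete --target 192.168.1.10 --user root --password VMware123! --thumbprint=8C:E8:6B:36:8F:03:E0:C4:C5:AB:B7:A5:F2:94:EA:82:D1:69:AC:56 --name hlvchost02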

Conclusion
Now you’ve seen us deploy a vSphere Integrated Containers API endpoint and successfully run containers against it. This post just scratches the surface, and there is plenty still left to explore, for example:

Creating/Managing/Consuming persistent Volumes
Creating/Managing/Consuming Docker Networks
Container Networks, and giving your containers actual dedicated IPs on the network
Implementing Admiral for Container Management
Implementing Harbor as an Image Repository
Integrating with VMware tooling (vRealize Operations, vRealize Network Insight, vRealize Automation, NSX)
Thanks for taking the time to read through this quick guide on getting started, and I look forward to tackling some more content around Cloud-Native Applications at VMware soon!

Follow us on Twitter @CloudNativeApps and here on our blog for more updates, tutorials and resources from our team. Additionally, you can follow the author, Cody, on Twitter: @codydearkland.