By Cody De Arkland, Senior Systems Engineer, SLED West
Personal Blog: https://www.thehumblelab.com
In my last blog, I talked about the ability to leverage vSphere Integrated Containers on a standalone host. We highlighted how this creates a really low barrier to entry, since users can start to consume containers leveraging their existing vSphere infrastructure without ever having to set up a vCenter.
In this post, we’re going to put a spotlight on management of the platform and talk about leveraging VMware’s Admiral product to handle container lifecycle.
Admiral gives users an easy-to-use interface to deploy and manage Docker container infrastructure. It’s currently integrated into vRealize Automation 7.2 and later, but it also functions nicely as a standalone product. VMware is making incredible investments in the container landscape, from vSphere Integrated Containers to Admiral, Harbor, PhotonOS, and many more products that I can’t go into yet. Stay tuned, as this is going to be a growing topic for us! With that said, let’s jump in!
Getting Started – Provisioning a vSphere Container Host (VCH)
To start, let’s go ahead and deploy a new vSphere Integrated Container Host into our infrastructure. If you read my previous post, this command should look familiar.
vic-machine-windows create --target hlcorevc01.humblelab.com/Core ^
--user email@example.com ^
--password VMware123! ^
--name hlvchost01 ^
--tls-cname hlvchost01.humblelab.com ^
--compute-resource Management ^
--image-store hl-block-ds01/images ^
--volume-store hl-block-ds01:default ^
--bridge-network vic-bridge ^
--bridge-network-range 126.96.36.199/16 ^
--public-network VM-Network ^
--public-network-gateway 192.168.1.1 ^
--public-network-ip 192.168.1.61/24 ^
--container-network lan-vds ^
--dns-server 192.168.1.5
There are a couple of additions to this command compared to the one we leveraged before. In order for us to take advantage of some of the fun management features Admiral has to offer, we need to include them in the actual build. Specifically, I’m talking about the --container-network flag, which will let us give our containers a dedicated IP address on our network.
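As a quick sanity check once the VCH is up, the container network should appear as a regular Docker network on the endpoint. A sketch, using this lab's VCH address:

```shell
# List the networks exposed by the VCH's Docker API; the lan-vds
# container network should appear alongside the default bridge.
docker -H 192.168.1.61:2376 --tls network ls
```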
When that command completes successfully, we should be left with a successfully deployed Docker API endpoint as seen in the screenshot below:
Success! Our Docker API is available to use, and we can move along to deploying Admiral!
Deploying Admiral Container on VCH
So now that we have a container host, let’s pull down Admiral. We can take a look at Admiral on Docker Hub at https://hub.docker.com/r/vmware/admiral/ which will give us some information about the container’s requirements.
Based on the documentation, to run Admiral we’ll need to pull it down and run it with the “docker run -d -p 8282:8282 --name admiral vmware/admiral” command. Since we’re using a VIC endpoint, we’ll need to run the command against our VIC host; this changes the command slightly to “docker -H 192.168.1.61:2376 --tls run -d -p 8282:8282 --name admiral vmware/admiral”.
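Written out in full (using this lab's VCH address), with a quick check that the container actually came up:

```shell
# Run Admiral as a container on the VCH, publishing its UI on port 8282
docker -H 192.168.1.61:2376 --tls run -d -p 8282:8282 --name admiral vmware/admiral

# Confirm the Admiral container is running
docker -H 192.168.1.61:2376 --tls ps --filter name=admiral
```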
Docker will do its magic and reach out to Docker Hub to pull down the container. This may take a few moments to complete, but once it does the container will start up successfully on port 8282, and a NAT will be applied from the VIC Docker API address. If all goes well, you should see something similar to the screenshot below:
If we head to http://192.168.1.61:8282 we should see our Admiral interface! Good start!
Let’s select “Add a Host” and add our VCH endpoint. A little inception, anyone? A container, running a container management platform, managing the container it’s running on? I’m confused. When we add in the details for our VCH and hit verify, we will likely be presented with a “You’re using an invalid cert” error message. Go ahead and accept and move forward; verification should complete successfully. Select Add and we should be in business!
After we hit add, we’ll be sent back to a screen showing all of our registered container hosts. You’ll see the one we just added, with its memory and CPU usage, as well as how many containers (1 in this case) are running on the system. If we select the “1”, you’ll see our Admiral container running within. Screenshots below!
Excellent! Let’s take a look at our Networks tab to see our externally available networks!
Here, we can see the network we created earlier – available for consumption from the UI Admiral provides.
Deploying our First Container with Admiral
Admiral by default is set up to pull containers directly from Docker Hub. In a future blog post I’ll cover integrating Harbor, VMware’s enterprise Docker registry, with Admiral. For now, we’ll continue on using Docker Hub. When we hit the “Templates” tab, we’re presented with the most popular templates available.
We’re going to use Nginx for this demonstration, so type Nginx into the search bar and watch the magic happen!
We find a number of Nginx containers. We’ll stick with the standard “library” version of Nginx, which is the “official” build. We could hit provision here and it would build successfully, but it wouldn’t be accessible. Why is that? Because we still need to map our connectivity into the host. We can do this in two ways: either by exposing the explicit ports used (80 in this case), or by exposing all ports.
Instead of hitting “Provision”, select the drop down and select “Enter Additional Info”.
Once we’re in this screen, we have quite a few customization options. The tabs are broken up into logical configuration sections (Basic, Network, Storage, Policy, Environment, Health Config, Log Config). We’ll step through a few of the more relevant ones for a simple demo now.
Taking a look at the “Basic” tab…
Image: Refers to the Docker Hub or internal container repository image we are pulling from
Name: Friendly name that will show up in Docker as well as in vCenter
Command: Additional commands you want to run when the container initializes
Links: Setting up linking between Docker containers to enable connectivity
You’ll also notice in the bottom right corner there are options to Provision and “Save as Template”. We’re going to be using the template option in a bit, but for now we’ll continue on to the “Network” tab…
Port Bindings: Traditional docker port exposure options; enter in the ports you want to use and what they relate to inside your container, or check the box to publish all ports
Hostname: Sets what the system’s hostname should be within the Docker container
Network Mode: Specific to Docker networking options; these are well documented in hundreds of places on the internet. For details on docker networking options – take a look at the official Docker documentation here https://docs.docker.com/engine/userguide/networking/ or review the vSphere Integrated Containers networking documentation here – https://vmware.github.io/vic-product/assets/files/html/1.1/vic_app_dev/network_use_cases.html
For our purposes, let’s expose port 80 to container port 80, set nginx01 as the hostname, and use bridge networking. Moving on…
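For reference, the equivalent docker run against the VCH would look roughly like this (a sketch using this lab's endpoint; Admiral fills all of this in for us through the UI):

```shell
# Run Nginx with the same settings chosen in the Admiral form:
# port 80 published, hostname nginx01, default bridge networking.
docker -H 192.168.1.61:2376 --tls run -d \
  -p 80:80 \
  --hostname nginx01 \
  --name nginx01 \
  nginx
```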
Volumes: This is what our --volume-store from earlier maps back to. Here, we can tell our VCH to leverage the Docker volume store we created earlier, or, if we are using a traditional Docker host, map mount points back to the host file system.
Volumes From: Inherits the same volumes from another container; unused in our example.
Working Directory: The directory the container should be “working” out of when running commands. Since we’re going to let the Nginx container start up using its normal process, this isn’t important for our example. Leave it alone.
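As a hedged example of what the Volumes setting maps to on the CLI (the volume name and mount path here are illustrative, not from the walkthrough):

```shell
# Create a named volume on the VCH (backed by the volume store we
# registered at VCH creation), then mount it into an Nginx container.
docker -H 192.168.1.61:2376 --tls volume create --name web-data
docker -H 192.168.1.61:2376 --tls run -d -p 80:80 \
  -v web-data:/usr/share/nginx/html \
  --name nginx-data nginx
```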
The remaining tabs (Policy, Environment, Health Config, Log Config) aren’t going to be covered in this example specifically; but at a high level…
Policy: Settings related to overall policy-driven (imagine that…) control of a container cluster. Settings like size, limits, shares, and affinity constraints are covered here.
Environment: For setting environment variables within a container. For example, if you had a container which expected an environment variable identifying the vCenter you were connecting to, it would be configured here.
Health Config: Relates to health monitoring for a container. Yes, Admiral can monitor the health of your container with some specific checks. Pretty cool huh!
Log Config: Configures the log driver used to direct container log output to VIC or another logging driver.
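To make the Environment tab concrete, here's a hedged sketch of the CLI equivalent (the variable name and image are hypothetical, purely for illustration):

```shell
# Pass an environment variable into a container at run time; an app
# expecting a vCenter address might consume something like this.
docker -H 192.168.1.61:2376 --tls run -d \
  -e VCENTER_URL=hlcorevc01.humblelab.com \
  --name example-app my-app
```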
To summarize, we’ve configured the following options:
- Basic Tab
- Image – Nginx
- Name – Nginx
- Network Tab
- Port Bindings – 80:80
- Hostname – nginx01.humblelab.com
- Network Mode – Bridge
Press “Provision” and we’ll be off to the races.
If you provisioned an Nginx container earlier, you’ll notice it’s substantially faster to build this time. This is because the image is already cached locally in our image store; we don’t need to pull down the layers again. It should only take a few moments to spin up.
We can monitor in two places; first, in the “Provision Requests” status screen in Admiral:
Or by looking directly in our vCenter:
Now, if we open our browser to http://192.168.1.61 we’ll see the Nginx screen! This is because http:// uses port 80 by default, and that’s the port we exposed.
If we compare within Admiral we’ll see this system there as well!
Now, let’s dive into something more interesting! Templates, which will allow us to use dedicated container networks!
Building our First Template with a Container Network
Let’s destroy our existing Nginx container. We don’t need it anymore for now. Highlight the container from the right context menu and hit “Remove”.
Now, head back to the Template search area, search for Nginx, and go to the “Enter Additional Info” screen again. Go ahead and press the “Save as Template” button on the bottom right that we mentioned earlier. This will take us into the template section, with a new Nginx template in place. Go ahead and hit edit from the side context menu.
In this template screen, we’re presented with some new options. One of which is to add a network. We know we already have a DHCP network setup that we used during the VCH build. Let’s go ahead and add it!
For our example, we’re going to use an existing network, but what pops up if we use Advanced?
This allows us to create a new network, on a specific subnet, with its own IPAM configuration. Very nifty! For now, uncheck Advanced and check existing. We’ll use the only result that pops up; “lan-vds”. Select it, and press save:
We’ll be presented with our lan-vds network below, and the ability to drag a connection to our container. Go ahead and drag that down. To connect, like the screenshot below:
If you explore the “Edit” screen on the container, you’ll notice a series of screens very similar to the edit screen we used earlier when provisioning an individual container.
You’ll notice on the Network tab that no networks are configured, but “Publish All Ports” has been checked. We’re going to be utilizing a --container-network in this example, which will leverage DHCP on our network to provide this container a dedicated IP address. Purists will say this isn’t best practice for 12-factor application methodology; but when you couple the idea of container networks with the network and security virtualization that NSX provides, you can start to expose some pretty interesting possibilities for containers. That’s a blog for another time; for this example, we’re going to stick with getting an IP assigned.
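On the Docker CLI, attaching a container directly to the external container network would look roughly like this (a sketch using this lab's network name and endpoint):

```shell
# Run Nginx attached to the lan-vds container network; VIC connects the
# container to the port group and it picks up an IP via DHCP.
docker -H 192.168.1.61:2376 --tls run -d --net lan-vds --name nginx01 nginx
```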
When you are satisfied with the configurations, go ahead and hit save. And then select provision on the upper right corner.
Again, we can watch our container being provisioned from either Admiral or vCenter. In this case, the Admiral screenshot is below:
If we inspect the container inside vCenter, we can see it’s been given its own IP address on our network.
And if we browse to http://192.168.1.164 we’re able to once again see our Nginx page:
Go ahead and destroy this container. We’re done with it for now.
Creating Templates with Multiple Containers
It’s not uncommon to want to deploy multiple containers and have them communicate with each other. Let’s show a simple example of taking two vanilla PhotonOS containers and placing them on the same network to demonstrate being able to ping between them.
We will select our “Templates” tab and utilize the library/photon container that’s first on the list. We press the drop-down arrow and select “Enter Additional Info”. Immediately upon entering that screen, we will hit “Save as Template” to start working on our template build. Let’s add a second Photon container and begin our customizations. Select continue and save on the second Photon deployment.
At this point, your template should look like the below:
Select edit on the first and change the name to “photon01”, on network, uncheck Publish all Ports and select “Bridge” network mode. Select Save.
On the second photon container, change the name to “photon02”, and make the same network changes (uncheck Publish All Ports, select “Bridge” network mode, save).
This should leave your template configured as indicated in the screenshot below:
We will press “Provision” and wait for the two containers to complete. Photon is extremely lightweight and should deploy very quickly.
We can check the status of our deployment by selecting “Resources” in Admiral, and selecting “Applications”. When provisioning completes, we should see the below:
We will head into the command line, and “Attach” to one of our containers.
From our command line, we’ll first list the provisioned containers using “docker -H 192.168.1.61:2376 --tls ps”. We see (in the screenshot below) our Admiral container as well as the two Photon containers. We will attach to the first one.
Run “docker -H 192.168.1.61:2376 --tls attach 4a9c” and press enter; you may need to hit enter a second time to get the prompt to display. See below:
The PhotonOS container is extremely lightweight, so ping isn’t installed by default. We’ll install it using tdnf. From the container’s command line, run “/usr/bin/tdnf install iputils -y”, which will automatically install the iputils package. Upon completion, ping will be available.
If we look in vCenter, we can see the IP addresses that were assigned on our bridge network for both of these containers, 188.8.131.52 and 184.108.40.206 respectively.
When we ping across the bridge network, we can see we are able to successfully hit the other container. Winning!
We’ve now shown how quick it is to build a simple template that leverages multiple containers and enables them to communicate with each other. Thinking this through, you can see how easy it would be to build a template that deployed an Nginx server and a MySQL server, and enabled communication between the two to deploy a simple application.
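As a hedged sketch of that idea on the CLI (container names and password are hypothetical; in Admiral you would model this as a two-container template instead):

```shell
# A simple two-tier application: MySQL and Nginx on the same bridge
# network, with a link so the web tier can resolve the database by name.
docker -H 192.168.1.61:2376 --tls run -d --name db \
  -e MYSQL_ROOT_PASSWORD=VMware123! mysql:5.7
docker -H 192.168.1.61:2376 --tls run -d --name web \
  --link db:db -p 80:80 nginx
```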
It would be simple from here to swap the bridge network for our earlier --container-network to ensure both of these containers got IP addresses on our network, if we wanted to.
Admiral has come a long way from its early beginnings, and a ton of features are being added in each release. In this example, we’ve only shown the standalone Admiral interface. There’s much more to see as we explore integration with other products and services, namely vRealize Automation and how it consumes containers. Stay tuned!
In summary, on our tour of Admiral, we have…
- Provisioned a new vSphere Container Host (VCH) with the --container-network option to enable an external network
- Spun up the Admiral service as a container in our infrastructure, using vSphere Integrated Containers
- Added our VCH to Admiral to be managed
- Viewed Statistics for our VCH
- Inspected Network configurations for our VCH
- Added an Nginx container to our VCH using standard bridged network mode (NAT)
- Deployed an Nginx container leveraging the --container-network to give it a dedicated IP address from our DHCP server
- Created a simple template that would deploy 2 PhotonOS builds on the same bridge network, and tested ping between them
In our next installment, we will configure Harbor, VMware’s enterprise Docker image registry, and show how we can push and pull images from both the API-driven command line and the Admiral interface!